DeepSeek Development Cost Was $1.6 Billion, With 50,000 Nvidia GPUs

By The Financial District

Chinese startup DeepSeek recently made headlines in the tech world for its surprisingly low reported usage of computing resources in developing its advanced AI model, R1.


DeepSeek's computing infrastructure reportedly includes approximately 50,000 Hopper GPUs, comprising 10,000 H800s and 10,000 H100s, along with additional purchases of H20 units. | Photo: Nvidia



The model is believed to be competitive with OpenAI’s o1, despite DeepSeek’s claim that training it cost only $6 million and required just 2,048 GPUs, Anton Shilov reported for Tom’s Hardware.


However, industry analyst firm SemiAnalysis reports that the company behind DeepSeek actually incurred $1.6 billion in hardware costs and operates a fleet of 50,000 Hopper GPUs.



This finding challenges the notion that DeepSeek has revolutionized AI training and inference with significantly lower investments than leading AI companies.


According to SemiAnalysis, DeepSeek's computing infrastructure includes approximately 50,000 Hopper GPUs, comprising 10,000 H800s and 10,000 H100s, along with additional purchases of H20 units.



These resources are distributed across multiple locations and are used for AI training, research, and financial modeling. The company’s total capital investment in servers is estimated at $1.6 billion, with approximately $944 million spent on operating costs.





The Financial District®  2023
