DeepSeek might not be as disruptive as claimed, firm reportedly has 50,000 Nvidia GPUs and spent $1.6 billion on buildouts

Source: Tom's Hardware, added 2 Feb 2025

(Image credit: Nvidia)

Chinese startup DeepSeek recently took center stage in the tech world with the startlingly low compute resources reportedly used for its advanced AI model, R1, which is believed to be competitive with OpenAI's o1. However, SemiAnalysis reports that DeepSeek spent $1.6 billion on hardware and uses 50,000 Hopper GPUs, a finding that undermines the idea that DeepSeek reinvented AI training and inference on a shoestring budget.

DeepSeek operates an extensive computing infrastructure with approximately 50,000 Hopper GPUs, the report claims. This includes 10,000 H800s and 10,000 H100s, with additional purchases of H20 units, according to SemiAnalysis. These resources are distributed across multiple locations and serve purposes such as AI training, research, and financial modeling. The company’s total capital investment in servers is around $1.6 billion, with an estimated $944 million spent on operating costs, according to SemiAnalysis.

DeepSeek took the AI world by storm when it disclosed the minuscule hardware requirements of its DeepSeek-V3 Mixture-of-Experts (MoE) model, which are vastly lower than those of comparable U.S.-based models. It then shook the high-tech world again with R1, an OpenAI-competitive model. But the reputable market intelligence company SemiAnalysis has since published findings indicating that DeepSeek used some $1.6 billion worth of hardware for R1.

DeepSeek originates from High-Flyer, a Chinese hedge fund that adopted AI early and heavily invested in GPUs. In 2023, High-Flyer launched DeepSeek as a separate venture solely focused on AI. Unlike many competitors, DeepSeek remains self-funded, giving it flexibility and speed in decision-making. Despite claims that it is a minor offshoot, the company has invested over $500 million into its technology, according to SemiAnalysis.

A major differentiator for DeepSeek is its ability to run its own datacenters, unlike most other AI startups that rely on external cloud providers. This independence allows for full control over experiments and AI model optimizations. In addition, it enables rapid iteration without external bottlenecks, making DeepSeek highly efficient compared to traditional players in the industry.

DeepSeek's talent strategy is also distinctive: it recruits exclusively from mainland China, with no poaching from Taiwan or the U.S., and focuses on skills and problem-solving abilities rather than formal credentials, according to SemiAnalysis. Recruitment efforts target institutions like Peking University and Zhejiang University, offering highly competitive salaries. Some AI researchers at DeepSeek earn over $1.3 million, exceeding compensation at other leading Chinese AI firms such as Moonshot, according to the research.

Thanks to this talent inflow, DeepSeek has pioneered innovations like Multi-Head Latent Attention (MLA), which required months of development and substantial GPU usage, SemiAnalysis reports. DeepSeek emphasizes efficiency and algorithmic improvements over brute-force scaling, reshaping expectations around AI model development. This approach has led some to believe that rapid advancements of this kind may reduce the demand for high-end GPUs, impacting companies like Nvidia.


A recent claim that DeepSeek trained its latest model for just $6 million has fueled much of the hype. However, this figure refers only to a portion of the total training cost: specifically, the GPU time required for pre-training. It does not account for research, model refinement, data processing, or overall infrastructure expenses. In reality, DeepSeek has spent well over $500 million on AI development since its inception. Unlike larger firms burdened by bureaucracy, DeepSeek's lean structure enables it to push forward aggressively in AI innovation, SemiAnalysis believes.

DeepSeek's rise underscores how a well-funded, independent AI company can challenge industry leaders. However, much of the public discourse has been driven by hype. The reality is more complex: DeepSeek's success is built on strategic investments of billions of dollars, technical breakthroughs, and a competitive workforce. In other words, there are no miracles here. As Elon Musk noted about a year ago, staying competitive in AI requires spending billions per year, and that is apparently what DeepSeek has done.

Anton Shilov is a contributing writer at Tom's Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

Read the full article at Tom's Hardware

