
Nvidia to consume 77% of wafers used for AI processors in 2025: Report



Over the past few years, we have seen plenty of AI-market metrics, starting from petaflops of performance and going all the way up to gigawatts of power consumption. A rather unexpected metric comes from Morgan Stanley, which counts the wafer consumption of AI processors. There are no surprises, though: Nvidia controls the lion's share of wafers designated for AI and is set to extend its dominance in 2025 as it chews through up to 77% of the world's supply of wafers destined for AI applications.

While Nvidia is operating at an unprecedented scale and continues to ramp up production dramatically, AMD's share of AI wafer usage will actually decline next year. The figures also cover other industry heavyweights such as AWS, Google, Tesla, Microsoft, and Chinese vendors.

Morgan Stanley's analysis is the best in the industry. It's data you won't find anywhere else…

If you expand the tweet above, you can see that Morgan Stanley predicts that Nvidia will dominate AI semiconductor wafer consumption in 2025, increasing its share from 51% in 2024 to 77% in 2025 while consuming 535,000 300-mm wafers.

AI-specific processors, such as Google's TPU v6 and AWS Trainium, are gaining traction but remain far behind Nvidia's GPUs. As such, AWS's share is set to fall from 10% to 7%, while Google's share is projected to fall from 19% to 10%. Google allocates 85,000 wafers to TPU v6, while AWS dedicates 30,000 to Trainium 2 and 16,000 to Trainium 3, according to Morgan Stanley's projections.

As for AMD, its share is expected to drop from 9% to 3%, as its MI300, MI325, and MI355 GPUs (the company's main offerings) have wafer allocations ranging from 5,000 to 25,000 wafers. Notably, this doesn't mean that AMD will consume fewer wafers next year, only that its share of the total will decline.
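The distinction between relative and absolute decline is easy to see with a toy calculation (the numbers below are purely illustrative, not Morgan Stanley's): a share falling from 9% to 3% still implies more wafers whenever the total market more than triples.

```python
def wafers(share: float, total: int) -> float:
    """Absolute wafer count implied by a market share."""
    return share * total

# Illustrative only: share drops from 9% to 3% while the market grows 3.5x.
before = wafers(0.09, 200_000)   # about 18,000 wafers
after = wafers(0.03, 700_000)    # about 21,000 wafers
print(f"{before:,.0f} -> {after:,.0f}; absolute growth: {after > before}")
```

Absolute consumption grows as long as market growth outpaces the share decline.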

Intel’s Gaudi 3 processors (named Habana in the graph) will remain around 1%. 

Tesla, Microsoft, and Chinese vendors hold minimal shares. This may not be a problem, though. Tesla's Dojo and FSD processors have limited wafer demand, which reflects their niche role in AI computing. Microsoft's Maia 200 and its enhanced version have similarly small wafer allocations because these chips remain secondary for the company, which continues to rely on Nvidia's GPUs for training and inference.


What the published graph does not indicate is whether Nvidia's dominance stems from the massive demand expected in 2025 or from the fact that the company booked more TSMC logic and CoWoS packaging capacity than everyone else.

The total AI market is projected to reach 688,000 wafers, with an estimated value of $14.57 billion. This projection could be an underestimation, though. A substantial portion of TSMC's 2024 revenue came from segments that the foundry calls high-performance computing (HPC). Technically, HPC includes everything from AI GPUs to processors for client PCs (smartphones are a separate category, which accounted for 35% of TSMC's revenue in 2024) to game consoles. However, AI GPUs and data center CPUs account for the lion's share of that HPC revenue of $32 billion.

The largest contributor to the growth of wafers consumed by AI processors is Nvidia's B200 GPU, which is expected to require 220,000 wafers, generating $5.84 billion in revenue, according to Morgan Stanley's projections. Other Nvidia GPUs for AI, including the H100, H200, and B300, add to its dominance. All of these products use TSMC's 4nm-class process technologies, and their compute die sizes range from 814 mm² to 850 mm², which explains the vast wafer demand.
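Those die sizes make the wafer demand easy to see. A standard dies-per-wafer approximation (a generic back-of-the-envelope formula, not anything from the report, and ignoring defect yield) puts only about 60 candidate dies on a 300 mm wafer at roughly 850 mm² each:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic dies-per-wafer approximation: gross wafer-area count
    minus an edge-loss term for partial dies at the wafer rim.
    Ignores scribe lines and defect yield."""
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius * radius / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

for area in (814, 850):
    print(f"{area} mm² die: ~{dies_per_wafer(area)} dies per 300 mm wafer")
```

At about 60 gross dies per wafer, the projected 220,000 B200 wafers would correspond to on the order of 13 million dies before yield losses.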




