Regular readers here know that Nvidia remains the lead supplier feeding the white-hot demand for AI GPU chips, systems, and data center infrastructure. It holds over 70% of the global market for AI GPU chips for both LLM training and inference, and is likely to remain in that position for years, despite swathes of competitors ramping up, including its best customers. But it’s important to remember that there are two Kings of two distinct Hills: Nvidia in AI chips, and TSMC for all chips worldwide.
As I’ve pointed out before, and as both the US and China are keenly focused on, Nvidia, along with Google, Apple, and most of the major tech companies worldwide, ultimately depends on Taiwan Semiconductor, aka TSMC, which makes the chips the world needs in ‘Fabs’ around the world that each cost tens of billions of dollars and take years to build and ramp up.
TSMC is the world’s leading chip manufacturer in its fabs, with nearly 60% of the global foundry market, followed by Samsung at around 13%, and a host of other companies behind them, including Intel in the US, which is just ramping up its Fab ‘Foundry’ business with billions from the US government.
What brings this all to the forefront is TSMC getting ready to “produce ultra-advanced 1.6nm (nanometer) chips by 2026”, as reported by Nikkei Asia:
“Taiwan Semiconductor Manufacturing Co. says it will start production of ultra-advanced 1.6-nanometer chips by 2026 as the world's top chipmaker races to secure its leadership over the next decade.”
“TSMC unveiled its A16 technology at the North America Technology Symposium in Santa Clara, California, on Wednesday, saying the introduction of 1.6-nm chipmaking technology can "greatly improve logic [chip] density and performance."
“At TSMC, we are offering our customers the most comprehensive set of technologies to realize their visions for AI [using] the world’s most advanced silicon," chief executive C.C. Wei said at the event.”
Note that Apple reportedly bought up all of TSMC’s 3nm capacity last year for the iPhone 15:
“In general, a smaller nanometer size indicates a more advanced and powerful chip. Traditionally, the number referred to the distance between transistors on a chip. Smaller distances allow for more transistors to be packed into the same space, leading to significant performance gains. However, modern chipmaking has become increasingly complex. Pushing the boundaries of computing power now requires not only shrinking the size of transistors, but also a complete overhaul of their structure. Starting from 2-nm tech, TSMC and Intel will adopt the so-called gate-all-around, or nanosheet transistor, structure. Samsung began trialing such technology with its 3-nm node.”
“Apple's latest premium iPhone Pro uses TSMC's 3-nm technology. Many industry executives expect that generative AI will eventually need even more advanced chips.”
“Currently, only TSMC, Intel and Samsung are able to continue investing to make ever-smaller transistors and push chip production to new frontiers.”
“The U.S. government has awarded these three companies a total of $21.5 billion under its CHIPS Act to bring the most advanced chip production to American soil.”
“TSMC is the leader in foundry services, the business of making chips for others, with a market share of nearly 60%, according to the technology market research firm Counterpoint. Samsung follows in second place with a share of around 13%, trailed by Taiwan's UMC at 6%. Intel previously mainly produced chips for in-house use but has entered the arena, pledging to become the No. 2 player by 2030.”
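As a rough back-of-the-envelope on why those ever-smaller node numbers matter: modern node names like “3nm” and “1.6nm” are marketing labels rather than literal transistor dimensions, but the classic intuition still holds that a linear shrink by a factor k yields roughly k² more transistors per unit area. Here is a minimal illustrative sketch of that square law, with purely hypothetical numbers rather than TSMC’s actual density figures:

```python
# Illustrative only: classic area scaling, where a linear feature shrink
# of factor k yields ~k^2 more transistors per unit area. Modern node
# names (e.g. "3nm", "1.6nm") are marketing labels, so treat the output
# as intuition, not TSMC's actual density numbers for A16.

def relative_density(old_node_nm: float, new_node_nm: float) -> float:
    """Idealized transistor-density gain going from old node to new node."""
    linear_shrink = old_node_nm / new_node_nm
    return linear_shrink ** 2

# e.g. the move from "3nm" to "1.6nm", if the labels were literal:
print(f"{relative_density(3.0, 1.6):.2f}x more transistors per area")
# -> ~3.52x in this idealized model; real-world gains are smaller.
```

Real-world density gains per node are considerably smaller than this idealized square law, which is exactly why TSMC, Intel, and Samsung are also overhauling transistor structure (the gate-all-around, or nanosheet, designs mentioned above) rather than just shrinking dimensions.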
And it’s a global race, especially against China. As The Information reports today in “Huawei leads Chinese effort to compete with Nvidia’s AI chips”:
“Huawei Technologies is leading a group of Chinese semiconductor companies seeking memory chip breakthroughs that could help China develop home-grown alternatives to Nvidia’s cutting-edge artificial intelligence chips, which can’t be sold to the country.”
“The Huawei-led consortium, backed by funding from the Chinese government, aims to produce high-bandwidth memory chips—a crucial component in advanced graphic processing units—by 2026, according to two people close to the company.”
“HBM is a type of memory that offers faster data transfer while using less power by stacking memory chips and connecting them via miniature wires. It is popular for training large language models for AI, but that popularity has led to shortages.”
“If Huawei and its partners can successfully mass-produce HBMs, China’s high-performance data center chip developers, especially Huawei itself, will have a better and more stable supply of HBMs for developing domestic alternatives to Nvidia’s cutting-edge chips. The amount of government funding for and total spending on the project are not known.”
“The HBM chips themselves are not directly subject to U.S. export controls. But only three companies make HBMs—SK Hynix and Samsung Semiconductor in South Korea, and Micron Technology in the U.S. All three use U.S.-made equipment to produce HBM chips, and since 2020 the U.S. government has barred Huawei from purchasing computer chips manufactured using American technology.”
“Huawei’s Ascend chips are considered an alternative to Nvidia’s GPUs. But demand for Ascend chips far outstrips supply; one reason for the shortfall is a shortage of HBMs, according to one of the people close to Huawei.”
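To see why HBM shortages bite so hard, note that generating each token from an LLM requires streaming roughly all of the model’s weights through the processor, so generation speed is frequently capped by memory bandwidth rather than raw compute. A minimal sketch of that ceiling, using illustrative assumptions (the model size and bandwidth below are not the specs of any particular chip):

```python
# Back-of-envelope: why LLM inference is often memory-bandwidth-bound.
# Each generated token requires reading (roughly) all model weights once,
# so tokens/sec is capped near bandwidth / model_bytes. All numbers below
# are illustrative assumptions, not specs of any actual GPU or model.

model_params = 70e9          # hypothetical 70B-parameter model
bytes_per_param = 2          # 16-bit weights
hbm_bandwidth = 3.35e12      # ~3.35 TB/s, in the range of high-end HBM3

model_bytes = model_params * bytes_per_param
max_tokens_per_sec = hbm_bandwidth / model_bytes

print(f"Upper bound: ~{max_tokens_per_sec:.0f} tokens/sec per chip")
# -> ~24 tokens/sec: memory bandwidth, not FLOPs, sets the ceiling.
```

In other words, a GPU starved of HBM is a GPU starved of tokens, no matter how many FLOPs it has on paper, which is why Huawei’s Ascend supply hinges on it.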
The US/China geopolitical tussles, which I’ve written about extensively in terms of ‘threading the needle’, currently come down to AI chips, data centers, and their ramping demand for power. Not to mention AI talent in both the US and China, which I’ve also written about extensively.
I’m in the tech weeds of chips because the AI Tech Wave is utterly dependent on the ramp-up of AI chips, power, and talent here and abroad over the next few years. It’s all about Boxes 1 and 2 that make Boxes 4 through 6 possible in the AI Tech Stack above.
Companies like Meta, whose AI bread is buttered in Box 6 at the application layer, go out of their way to boast about the 600,000-plus H100-equivalents of Nvidia AI compute they have booked, including the 350,000 actual H100 chips Meta has already secured as the second-largest buyer of Nvidia’s top-of-the-line AI chips. Those chips go for around $30,000 apiece, at a reported cost to Nvidia of $3,300+ per chip, which explains why Nvidia currently enjoys the relative ‘pole position’ in AI. It starts at the top with the close ‘bromance’ between founder/CEOs Mark Zuckerberg of Meta and Jensen Huang of Nvidia.
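For a quick sense of the scale, here is the arithmetic implied by those figures alone (a sketch; the per-chip price and cost are reported estimates, not numbers confirmed by Nvidia or Meta):

```python
# Rough order-of-magnitude math on Meta's reported H100 buy, using only
# the figures cited above. Price and cost are reported estimates, not
# confirmed numbers from Nvidia or Meta.

h100_chips = 350_000
price_per_chip = 30_000      # reported street price, USD
cost_per_chip = 3_300        # reported estimated cost to Nvidia, USD

implied_spend = h100_chips * price_per_chip
implied_gross_margin = 1 - cost_per_chip / price_per_chip

print(f"Implied spend: ~${implied_spend / 1e9:.1f}B")         # ~$10.5B
print(f"Implied gross margin: ~{implied_gross_margin:.0%}")   # ~89%
```

Roughly $10 billion from a single customer, at close to 90% implied gross margin, goes a long way toward explaining that ‘pole position’.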
The amounts to be invested by the AI industry around the world run into the hundreds of billions of dollars, with timelines measured in years. Without adequate and timely supply of the above items, as I’ve said before, “‘AI’ just represents two non-adjacent letters in the English alphabet”.
But given enough time and capital, as I’ve also said before, ‘semiconductor chips are like potato chips’. They’ll make more. Both the current Kings of the two Hills and many others are on it. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)