As I outlined in this Sunday’s piece on AI Chips, Sam Altman, in his inimitable style, is once again conducting how the world sees and frames the global AI innovation and investment currently underway. The WSJ had a follow-up to its piece from a few days ago on OpenAI Founder/CEO Sam Altman’s projected plans to raise as much as $7 trillion to power the AI Tech Wave, in terms of currently scarce AI GPU Chips and semiconductor factories (aka ‘Fabs’). Titled “Raising Trillions of Dollars Might Be the Easy Part of Altman’s Chip Plan”, the piece goes on to outline:
“OpenAI CEO’s obstacles include staffing, a cyclical market and a lack of viable chip makers.”
Not to mention completing the Board restructuring prompted by his near-ouster from OpenAI last November. The WSJ continued:
“OpenAI Chief Executive Sam Altman’s plan to reshape the global semiconductor industry envisions pouring vast sums into a challenge that is far more complicated than money.
“Manufacturing chips is enormously capital intensive. It is also one of the most intricately complex industries in the world with a history of sharp cyclical swings that have made companies wary of radical expansion.”
“It took decades for the world’s most advanced chip makers to reach their current heights. Some chip companies faltered during one of the industry’s notorious downturns, like in the early and middle 2010s. Others stopped developing cutting-edge chips along the way, wary of high costs and the high risk of failure.”
“There are now only three companies in the world capable of making the most cutting-edge chips—including the processors used to power AI systems—in large volumes:”
“Taiwan Semiconductor Manufacturing Co., Samsung Electronics, and Intel.”
“Altman has held discussions with chip makers about joining with them and using trillions of dollars to build and operate new factories, along with investments in energy and other AI infrastructure. Many of the world’s largest chip companies, including Nvidia, design their chips but outsource their production to companies such as TSMC.”
“Building a cutting-edge chip factory typically costs at least $10 billion. But even with that, the scale Altman is discussing is extreme: Stacy Rasgon, an analyst at Bernstein Research, estimates that a little more than $1 trillion has been spent on chip-manufacturing equipment in the entire history of the industry.”
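To make the scale concrete, here is a rough back-of-envelope sketch in Python using only the figures quoted above (the $10 billion-per-fab floor and Bernstein’s roughly $1 trillion lifetime equipment estimate); this is illustrative arithmetic, not a forecast of what the $7 trillion would actually cover:

ALTMAN_ASK = 7_000_000_000_000                   # the reported ~$7 trillion figure
FAB_COST_FLOOR = 10_000_000_000                  # "at least $10 billion" per cutting-edge fab (WSJ)
ALL_TIME_EQUIPMENT_SPEND = 1_000_000_000_000     # Bernstein's ~$1 trillion historical estimate

fabs_at_floor = ALTMAN_ASK / FAB_COST_FLOOR
multiple_of_history = ALTMAN_ASK / ALL_TIME_EQUIPMENT_SPEND

print(f"Fabs fundable at the $10B floor: ~{fabs_at_floor:,.0f}")
print(f"Multiple of all chip-equipment spending ever: ~{multiple_of_history:.0f}x")

Even at that unrealistically low per-fab cost, the ask would fund roughly 700 leading-edge fabs and is about seven times everything ever spent on chip-making equipment, which is the point Rasgon is making.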
Even the founder/CEO of Nvidia, Jensen Huang, downplayed the $7 trillion investment number, as Bloomberg noted in “Nvidia CEO Says Tech Advances Will Keep AI Cost in Check”:
“Jensen Huang downplays reported $7 trillion fundraising for AI”
“He sees AI data centers doubling in scale over five years.”
I would imagine the senior executives at TSMC in Taiwan, the leading global chip company that actually has the ‘Fabs’ where these chips are made, would also provide more rational context around these multi-trillion dollar numbers.
The Information provided some additional context around Sam Altman’s multi-trillion dollar chip ambitions:
“OpenAI CEO Sam Altman stole the show last week with a report that he is trying to raise trillions of dollars in capital to develop and manufacture AI chips. But let’s set aside our collective incredulity for a moment (we’ll come back to it!) and consider the implication of Altman’s fundraising: that unlimited computing power will lead to all-powerful AI.”
“In other words, he believes the only thing standing in the way of artificial superintelligence—software that could help us colonize Mars or solve global warming on Earth—is a shortage of servers.”
“Altman is not alone in holding this view, but it’s far from universally accepted. Four years ago, OpenAI published a paper on “scaling laws” in the field of large language models. Those laws, or principles, state that training LLMs on more computing power and data improves their capabilities in predictable ways. Following those principles pushed LLM developers like OpenAI to spend over $100 million on training a single model.”
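For readers who want the intuition behind those scaling laws: they describe model loss falling as a smooth power law as training compute (along with data and parameters) grows. Here is a minimal, illustrative sketch in Python; the constant and exponent are placeholders chosen for shape, not OpenAI’s published fits:

# Toy power-law curve echoing the *shape* of LLM scaling laws (placeholder numbers).
def illustrative_loss(compute_flops, scale=1e3, exponent=0.05):
    return scale * compute_flops ** -exponent

for flops in (1e21, 1e23, 1e25):   # each step is 100x more training compute
    print(f"{flops:.0e} FLOPs -> loss ~ {illustrative_loss(flops):.1f}")

The takeaway the scaling-law camp draws is that each large, predictable increase in compute buys a smaller but still predictable improvement, which is why ‘more chips’ reads to them as ‘more capability’.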
The piece went on to put the trillions in better historical context for the chip industry to date:
“How much money does Altman actually need to reach the computing promised land? Probably not the $7 trillion figure he has reportedly floated with oil-rich sheikhs in the United Arab Emirates who he hopes would fund new chips and servers. Building the data centers, power plants and chip foundries to generate 10 times the amount of computing power that Microsoft already has would cost between $100 billion and $200 billion, a high-profile AI chip CEO told us on Friday.”
“The CEO and others in the field said there’s only so much that money can do to speed up the creation of chip fabs, data centers, and power plants because of labor and supply chain constraints. (Even Nvidia CEO Jensen Huang expressed skepticism about the $7 trillion number at the World Government Summit in Dubai today.) And if Altman’s plan involves OpenAI creating its own server chip, that would take years and isn’t guaranteed to work. (That raises the question of why he’s looking for trillions in the first place.)”
“However, many AI practitioners believe that throwing more chips and data at today’s AI models isn’t the path to achieving superhuman AI. As we run out of high-quality, human-generated data to use to train AI models, it may be easier to develop software that can learn and reason the way humans do, based on relatively little data. After all, OpenAI’s GPT-4 and Google’s Gemini have already been trained on most of the world’s public text information and they aren’t approaching artificial superintelligence levels.”
That last bit is particularly true, as recent AI research shows. The FT, in a piece titled “Babies vs AI – it’s no contest”, explains:
”Faced with a baby screaming the house down and throwing food on the floor, frazzled parents may be surprised to hear that their beloved offspring is probably the smartest learner in the known universe. But some computer scientists have long recognised that reality and are now trying to mimic babies’ extraordinary processing powers to develop artificial intelligence models. In some respects, our latest technological creations appear near-magical in their capabilities, but when it comes to autonomous learning, they are as dumb as a diaper.”
“Can they be trained to learn as baby processing units do by exploring partial, messy, real-world data? A team of researchers at New York University has been trying to do just that and this month published their findings in the journal Science. Their experiment drew data from a lightweight camera attached to the head of a baby in Adelaide called Sam, recording 61 hours of his life from the age of 6 to 25 months.”
The researchers then used that data to train a multimodal AI model, with some encouraging results:
”Up until now, attempts to build multimodal AI models that can combine text, images, audio and video have mostly relied on the application of massive computing power to vast amounts of curated data. But the NYU researchers found their model could successfully associate images and sounds with substantially less data from one baby’s video feed. Their model had an accuracy rate of 61.6 per cent when it came to classifying 22 “visual concepts”. “We were very surprised that the model could exhibit a pretty remarkable degree of learning given the limited data it had,” Wai Keen Vong, the lead author of the NYU paper, told me in a video interview.”
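For the mechanics-minded: associating frames with the words heard at the same moment is commonly done with a contrastive objective that pulls matching frame/word embeddings together and pushes mismatched ones apart. Below is a minimal, generic sketch of that idea on random placeholder embeddings; it illustrates the general technique, not the NYU team’s actual model or code:

import numpy as np

rng = np.random.default_rng(0)
batch, dim = 8, 64                                  # 8 frame/word pairs, 64-dim embeddings (placeholders)
frame_emb = rng.standard_normal((batch, dim))       # stand-in for image-encoder outputs
word_emb = rng.standard_normal((batch, dim))        # stand-in for word-encoder outputs

# Normalize, then score every frame against every word.
frame_emb /= np.linalg.norm(frame_emb, axis=1, keepdims=True)
word_emb /= np.linalg.norm(word_emb, axis=1, keepdims=True)
logits = frame_emb @ word_emb.T                     # similarity matrix

# Contrastive (InfoNCE-style) loss: each frame should best match its own co-occurring word (the diagonal).
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))
print(f"contrastive loss on random pairs: {loss:.3f}")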
But there are steep climbs ahead:
”These findings are an encouraging prompt for the development of future AI models. But, as Vong notes, they also underscore the phenomenal learning abilities of babies themselves, who can respond to visual signals and develop their own learning hypotheses. Part of the reason for their precociousness is that human babies spend an uncommonly long time actively exploring the world before they have to fend for themselves. “Children are the R&D department of the human species — the blue-sky guys, the brainstormers. Adults are production and marketing,” as Alison Gopnik memorably wrote in her book The Philosophical Baby.”
“According to Gopnik, a psychology professor at University of California, Berkeley, babies have three core skills that AI systems lack. First, babies excel at imaginative model building, creating a conceptual framework to explain the world. They are also curious, adventure loving and embodied learners, actively exploring new environments, rather than being passively encased in lines of code. And babies are social animals, learning from all those they interact with, helping develop empathy, altruism and a moral sensibility.”
Yes, we’re going to need a lot more AI research and innovation, and yes, an almost incomprehensible amount of next-generation AI Chips and infrastructure, to get from today’s AI technologies and chips to anything even close to what babies do. But we’re a long way away, with time scales likely much longer than currently imagined.
Then there’s the issue of Investor Appetite for risks and rewards in a sustainable, long-term context. As I outlined in my Sunday piece “AI Chips, Going Fast & Furious”:
“But trillions in AI Chips alone is likely a reach, especially in the context of investor return expectations in a realistic investment horizon. Even measured in terms of a decade or more. We ‘only’ saw tens and hundreds of billions invested in each of the previous tech booms in areas ranging from self-driving electric cars, to 5G, to of course crypto and other aspirations in the ‘Metaverse’. Not trillions in any one area of tech over a compressed period of time. Even physical transportation infrastructure is measured in trillions only in the context of entire countries and regions of the globe.”
I believe Sam understands the incredulity of the industry at large at his $7 trillion ‘Bid’, but made it anyway to mark the intensity of his message here. He remains the world’s leading AI conductor, and in many ways, its Pied Piper.
That message being: the global chip capacity needed for his aspirational goal of Artificial General Intelligence (aka ‘AGI’ and ‘Super-Intelligence’), in the way OpenAI eventually expects it, needs to be scaled up WAY MORE than anyone thinks possible in rational and historical terms.
It’s kind of like what he did with AI Safety fears in the early years after OpenAI’s founding, with its unique governance structure, and how he then tempered them down in recent months to rational reality.
This is likely ‘Tech Performance Art’ on a grand scale, with rational goals underneath. We certainly need a lot of chips and chip infrastructure ahead, just paced at the right tempo and rational scale. Stay tuned.
(Update, 2/15/24: The Information reported that Sam Altman “Isn’t Raising Trillions of Dollars” for chips only, clarifying that:
“Altman privately has told people that figure represents the sum total of investments that participants in such a venture would need to make, in everything from real estate and power for data centers to the manufacturing of the chips, over some period of years.”
Even with that clarification, the $7 trillion number remains on the high side of estimates of the AI infrastructure needed over the coming years. As The Information also noted in the updated piece:
“Just this week, Nvidia CEO Jensen Huang said at an event in the United Arab Emirates that $1 trillion worth of new data centers would need to be built in the next four to five years to handle all of the AI computing that’s coming. Huang, whose firm pioneered the graphics processing units that OpenAI uses to develop its technology, joked that $7 trillion would be enough to buy “all the GPUs, apparently.”
“But, Huang also hinted that Altman’s $7 trillion figure may not be taking into account the fact that hardware advances will continue, which will help improve computing efficiency. “If you just assume computers aren't going to get any faster, you might come to the conclusion that we need 14 planets, three galaxies and four more suns to fuel all this, but computer architecture continues to advance,” he said.”)
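Huang’s efficiency point can be made concrete with one more piece of hypothetical arithmetic: if compute-per-dollar keeps improving at a steady rate, the bill for a fixed amount of AI computing falls quickly. The 30 percent annual improvement below is purely an assumed placeholder, not a figure from Huang or Nvidia:

cost_at_todays_efficiency = 7e12     # dollars, if all the compute were bought at today's efficiency
assumed_annual_gain = 0.30           # placeholder compute-per-dollar improvement per year

for years in (0, 3, 5, 10):
    cost = cost_at_todays_efficiency / (1 + assumed_annual_gain) ** years
    print(f"after {years:>2} years: ~${cost / 1e12:.1f} trillion for the same compute")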
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)