AI: OpenAI's variable costs at AI Scale. RTZ #428
...front-end loaded now, more manageable over time in cloud and local devices
Doing foundation LLM AI models at this early stage of the AI Tech Wave is capex investment intensive at a scale not seen in previous tech waves. That is a topic I’ve covered many times in these pages. Including the broader question of syncing the costs with revenues (and eventually profits), for the AI industry at large.
It applies to all the big tech ‘Magnificent 7’ companies, and to the AI ‘startups’ focused on it at Scale. Particularly Boxes 1 through 4 below at least, all on the way to Box 6, where the cool AI Apps, Products and Services live.
And as The Information explains in “Why OpenAI Could Lose $5 Billion This Year”, it applies to the flagship AI company as well:
“OpenAI has built one of the fastest-growing businesses in history. It may also be one of the costliest to run.”
“The ChatGPT maker could lose as much as $5 billion this year, according to an analysis by The Information, based on previously undisclosed internal financial data and people involved in the business. If we’re right, OpenAI, most recently valued at $80 billion, will need to raise more cash in the next 12 months or so.”
“We’ve based our analysis on our informed estimates of what OpenAI spends to run its ChatGPT chatbot and train future large language models, plus guesstimates of what OpenAI’s staffing would cost, based on its prior projections and what we know about its hiring. Our conclusion pinpoints why so many investors worry about the profit prospects of conversational artificial intelligence—including technology introduced by Google, Meta Platforms and Anthropic, a startup whose founders previously worked at OpenAI.”
“Our results also underline the question of whether those companies will eventually need to charge higher prices for their technology if they can’t find a way to reduce the cost of developing and running the AI.”
“All that said, here are the details:”
“On the cost side, OpenAI as of March was on track to spend nearly $4 billion this year on renting Microsoft’s servers to power ChatGPT and its underlying LLMs (otherwise known as inference costs), said a person with direct knowledge of the spending.”
“In addition to running ChatGPT, OpenAI’s training costs—including paying for data—could balloon to as much as $3 billion this year. Last year, OpenAI ramped up the training of new AI faster than it had originally planned, said a person with direct knowledge of the decision. So while the company earlier planned to spend about $800 million on such costs, it ended up spending considerably more, this person said. We’re estimating that such costs will double this year as it has trained new versions of its flagship LLM and embarked on training a new flagship model.”
“On top of those costs, OpenAI employs about 1,500 people, a workforce that is quickly growing and that could cost about $1.5 billion, in part due to the fierce battle for technical talent with the likes of Google.”
“That’s a guesstimate, to be sure. OpenAI had projected workforce costs of $500 million for 2023 while doubling head count to around 800 by the end of that year, according to the person with direct knowledge of the spending. It has nearly doubled the workforce again since then and is likely to add even more personnel in the second half of 2024, if the nearly 200 open jobs it has listed on its website are any indication.”
“All together, OpenAI’s operating costs this year could be as high as $8.5 billion.”
“In terms of revenue, ChatGPT recently was on pace to generate around $2 billion annually. (One of OpenAI’s issues is that millions of people use a free version, raising its computing costs without generating any additional revenue. That cost could go up as Apple later this year begins routing iPhone customers’ queries to ChatGPT as part of a new arrangement.)”
“But OpenAI has other revenue. It charges developers for accessing its LLMs so they can develop their own conversational AI apps or coding assistants. That business, which is known as an application programming interface, was generating more than $80 million in revenue per month as of March.”
“OpenAI was recently generating $283 million in total revenue per month, implying that its full-year revenue could end up between $3.5 billion and $4.5 billion, depending on its sales in the second half of the year.”
“Deduct the potential costs of $8.5 billion from revenue of up to $4.5 billion, and you get losses of $4 billion to $5 billion.”
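The Information’s back-of-the-envelope math above can be captured in a few lines. This is just a sketch of their stated estimates (all figures in billions of dollars are theirs, not OpenAI disclosures):

```python
# The Information's estimated 2024 cost lines for OpenAI, in $ billions.
inference = 4.0   # renting Microsoft servers to run ChatGPT and its LLMs
training = 3.0    # model training costs, including paying for data
workforce = 1.5   # ~1,500 employees amid the battle for technical talent

total_costs = inference + training + workforce  # "as high as $8.5 billion"

# Recent run rate: $283 million in total revenue per month.
monthly_revenue = 0.283
annualized = monthly_revenue * 12  # roughly $3.4B before second-half growth

# The Information projects full-year revenue of $3.5B to $4.5B.
revenue_low, revenue_high = 3.5, 4.5
loss_low = total_costs - revenue_high
loss_high = total_costs - revenue_low

print(f"Estimated 2024 loss: ${loss_low:.1f}B to ${loss_high:.1f}B")
```

Which reproduces the article’s headline range of $4 billion to $5 billion in potential losses.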
“The analysis explains why OpenAI CEO Sam Altman has described the company as ‘the most capital-intensive startup in Silicon Valley history.’”
“It also means OpenAI will need to raise money soon. Microsoft invested $10 billion in OpenAI at the start of last year, and OpenAI may have lost more than $2 billion for the year, even as it generated around $1 billion in revenue.”
The piece goes on to explore OpenAI’s business model vs Anthropic and other LLM AI companies. And provides further context for how costs could be managed down and revenues up.
But the key underlying thread to take away from the piece is that unlike prior tech waves, AI Tech models for now are saddled with VARIABLE COSTS that rise with usage by consumers and businesses. This is of course due to the currently high variable costs of AI Inference Compute, as users enter their text-based queries into chatbots today. And multimodal image, photo, voice and other input-based queries tomorrow.
Every query in every modality, for now, means massive amounts of AI GPU calculations per second, done mostly in massively expensive AI data center cloud facilities. Over time, some of these inference calculations will shift to AI-enabled PCs, smartphones and other devices closer to consumer and business users. Not to mention the ongoing costs of Data to feed the ever-Scaling AI models, which are a unique ingredient in this Tech wave vs others.
This is a key point I’ve made in favor of Apple’s emerging ‘Apple Intelligence’ strategy, which will bring massive amounts of bottoms-up AI products and applications to over two billion devices in user hands. That will go a long way to mitigating some of the highly variable inference costs of AI compute.
But at this point of the AI Tech Wave, it means that the leading companies like OpenAI have lop-sided, front-end loaded cost and revenue models. And thus the large financial numbers above, to go with the large computation numbers in AI today. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)