AI: Discovering market prices for AI. RTZ #474
...OpenAI testing AI price elasticities for key markets
OpenAI is not just leading in top LLM AI platforms, product iterations and innovations. It’s apparently also leading in figuring out the price elasticity of its many products and services.
Determining what the market will bear is tough in AI since the end benefits are still in a fog. Additionally, as I’ve outlined, AI products and services are uniquely variable in terms of their underlying Compute costs to deliver to end customers, be they businesses or consumers. So costs go up as users adopt and use your AI products at scale.
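For intuition on the mechanics, here’s a minimal sketch of the elasticity arithmetic behind these tests. All numbers are illustrative assumptions, not OpenAI’s actual figures:

```python
# Illustrative sketch of price elasticity of demand (arc elasticity).
# All numbers below are hypothetical, not OpenAI's actual figures.

def arc_elasticity(q_old: float, q_new: float, p_old: float, p_new: float) -> float:
    """Percent change in quantity divided by percent change in price,
    using midpoints so the result is symmetric in direction."""
    pct_quantity = (q_new - q_old) / ((q_new + q_old) / 2)
    pct_price = (p_new - p_old) / ((p_new + p_old) / 2)
    return pct_quantity / pct_price

# Hypothetical test: raise a $20/month plan to $50/month and lose 40% of subscribers.
q_old, q_new = 1_000_000, 600_000   # subscribers before and after (assumed)
p_old, p_new = 20, 50               # monthly price before and after

e = arc_elasticity(q_old, q_new, p_old, p_new)
print(f"arc elasticity: {e:.2f}")                      # ~ -0.58 (inelastic)
print(f"monthly revenue before: ${q_old * p_old:,}")   # $20,000,000
print(f"monthly revenue after:  ${q_new * p_new:,}")   # $30,000,000
```

If demand turns out to be inelastic (elasticity magnitude below 1), revenue rises even after the churn, which is exactly the kind of trade-off these pricing tests are designed to surface against rising Compute costs.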
The direction of these price elasticity tests is described in the Information’s “OpenAI Considers Higher Priced Subscriptions to its Chatbot AI”:
“How much would you be willing to pay for ChatGPT every month? $50? $75? How about $200 or $2,000?”
“That’s the question facing OpenAI, whose executives we hear have discussed high-priced subscriptions for upcoming large language models, such as OpenAI’s reasoning-focused Strawberry and a new flagship LLM, dubbed Orion. (More on those models here and here.) How much customers are willing to pay for the AI chatbot matters not only to OpenAI but rivals offering similar products, including Google, Anthropic and others.”
“In early internal discussions at OpenAI, subscription prices ranging up to $2,000 per month were on the table, said one person with direct knowledge of the numbers, though nothing is final. We have strong doubts the final price would be that high.”
“Still, it’s a notable detail because it suggests that the paid version of ChatGPT, which was recently on pace to generate $2 billion in revenue annually, largely from $20-per-month subscriptions, may not be growing fast enough to cover the outsize costs of running the service. Those costs include the expenses of a free tier used by hundreds of millions of people per month.”
“And more-advanced models such as Strawberry and Orion may be more expensive to train and run than prior models. For instance, we’ve reported that, when given additional time to think, the Strawberry model can answer more complicated questions or puzzles than OpenAI's current models can. That additional thinking, or processing time, could mean more computing power—and, therefore, more costs. If that’s the case, OpenAI may want to pass along some of the costs to customers.”
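A rough back-of-envelope on the figures quoted above (my arithmetic on the reported numbers; the $200 premium tier below is just one of the price points floated, and the comparison is purely illustrative):

```python
# Back-of-envelope using the figures in the report: ~$2B annualized ChatGPT
# revenue, largely from $20/month subscriptions. Illustrative only.

annual_revenue = 2_000_000_000                      # ~$2B annualized (reported)
base_price = 20                                     # current ChatGPT Plus price, $/month

implied_subscribers = annual_revenue / (base_price * 12)
print(f"implied paid subscribers at $20/mo: ~{implied_subscribers / 1e6:.1f}M")  # ~8.3M

# One of the price points floated for a premium reasoning tier is $200/month.
# Matching the same revenue at that price needs roughly a tenth of the audience.
premium_price = 200
subs_needed = annual_revenue / (premium_price * 12)
print(f"subscribers needed at ${premium_price}/mo: ~{subs_needed / 1e3:.0f}K")   # ~833K
```

At the higher price point the economics flip from a mass-market consumer subscription to a much smaller, presumably enterprise-heavy audience, which is why the willingness-to-pay question matters so much.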
I’ve discussed at length OpenAI’s roadmap to AGI, with the next two steps (Levels 2 and 3) being AI ‘Reasoning’ (aka ‘Strawberry’) and ‘Agents’, followed by the higher-level capabilities of Innovation and Organization.
So all these levels and innovations have to find their ‘product-market fit’ AND pricing fit in the market.
It’s the same process Microsoft is undergoing with its AI Copilot services, and one that Google, Meta and others are trying to figure out as well. For OpenAI, figuring out these price points is a priority:
“Of course, a high price would also mean OpenAI believes its existing white-collar customers of ChatGPT will find these upcoming models a lot more valuable to their coding, analytics or engineering work.”
“The pricing discussions come as the ChatGPT maker looks to raise billions of dollars in capital to make up for the billions of dollars it’s losing per year. In line with other reports, we’ve heard existing investor Microsoft is in the mix to put more cash into the startup, perhaps along with Apple and Nvidia, and that Josh Kushner’s Thrive Capital is looking to lead the new round, as the Wall Street Journal earlier reported.”
It’s “Generative AI’s money game”, as the Information described early last year, barely three months after OpenAI’s ‘ChatGPT moment’:
“The generative AI revolution will be monetized, but nobody can yet say exactly how.”
“The big picture: It seems logical that if AI can conduct conversations and produce images, companies will figure out how to use it to build revenue and profits — but there's no guarantee, and the technology could also become a money sink for early adopters.”
“What's happening: In the short to medium term, the winners will likely be the owners of the foundational AI models — like Microsoft and OpenAI — who can charge others to use them for experiments and new applications.”
“If those experiments in applying generative AI's capabilities to specific commercial problems and consumer desires succeed, then we might see broader new businesses emerge built on tech's two time-tested business models, advertising and fees or subscriptions.”
“In the meantime, the news around projects like ChatGPT and Bing AI may quickly shift from the gee-whiz world of mindblowing sci-fi breakthroughs to very mundane nuances of B2B pricing.”
These questions still abound for all the big tech LLM AI players, with OpenAI in pole position as it figures out its Board Governance transition from non-profit to for-profit after last November’s drama around founder/CEO Sam Altman.
I’ve talked about OpenAI’s rich product pipeline for 2024 and beyond, not the least of which is GPT-5 (apparently code-named ‘Orion’). That follows GPT-4o (the ‘o’ for Omni), which has impressive Voice AI capabilities. Those of course go up against Google’s Gemini Live, which in turn goes up against Apple’s AI-enabled Siri, expected to be showcased at Apple’s ‘Glowtime’ event on Monday, September 9th, where the Fall iPhone 16 and other devices will be launched. And OpenAI is a price influencer given its leadership position in LLM AIs, as the Information highlights:
“Whatever OpenAI decides to charge for upcoming AI service will influence what its rivals do with their products as well. So far, most model developers have settled on subscription business models that charge their chatbot users double-digit amounts per month. A dramatic increase in OpenAI’s pricing—and reactions from consumers—could change that.”
Pricing of all these services is of course one of the key questions on investors’ minds as they digest the hundreds of billions in preemptive AI Capex being expended by the largest tech companies around the world.
This is a unique dynamic in this AI Tech Wave vs prior tech waves, given the unprecedented AI Capex and the variable costs of serving demand.
And discovering the right price at the right time is as important as developing the AI at Scale in the first place. We’re going to have a lot more information on price elasticities sooner rather than later, as the AI ‘Table Stakes’ go up and to the right. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)