One thing is clear about the two remaining Founder/CEOs of trillion-dollar-plus big tech ‘Magnificent 7’ companies. Neither Nvidia’s Jensen Huang nor Meta’s Mark Zuckerberg does things by half measures. They like to blanket the market as soon as possible. When they decide on a course of action, they execute to the fullest, and assume customers will generally like it and want it.
For Nvidia, it’s been CEO Jensen Huang’s long focus on an AI ‘Accelerated Computing’ strategy in this AI Tech Wave.
For Mark Zuckerberg, it’s been actions like decisively ordering industry-scarce Nvidia AI H100 GPUs in recent months in the hundreds of thousands, implying financial capex commitments in the tens of billions, and using them to roll out the next-gen open source Llama 3 models. ‘Pedal to the Metal’ indeed. The other action along that path was of course rolling out the company’s new Meta AI chat search applications immediately to Meta’s three billion plus users across Meta properties worldwide.
As Axios notes in “Meta's AI is everywhere all at once”, regardless of whether customers figure out why they might want it:
“Meta is pushing generative AI into every nook and cranny of its giant platforms — Facebook, Instagram and WhatsApp — frustrating some longtime users and threatening to worsen existing problems with spam and misinformation.”
“The big picture: Meta's fast-and-furious deployment of new AI features aims to make the technology's benefits accessible — but also risks degrading the experience for its billions of users.”
“Context: The company, which has been investing in AI for years, took a hit from investors last week after CEO Mark Zuckerberg admitted that he is doubling down on spending on AI infrastructure even though any bottom-line payoff is a long way off.”
“Unlike rivals Google, Microsoft and OpenAI, Meta has no clear path to charging consumers for their usage of expensive-to-run AI tools.”
“Driving the news: Meta is putting its AI assistant in lots of different places across desktop, mobile and web versions of its apps — sometimes to the delight of customers, but as often in ways that are frustrating.”
“In Meta's various messaging products, people can choose to have a separate conversation with Meta AI or summon the assistant to help out in new and existing group chats, whether that's to share a funny image created from a text prompt or to ask for advice on where to eat out together.”
This move to put Meta AI ‘everywhere’ across Meta’s major services means that the Meta AI chatbot, with integrated Google and Microsoft Bing search, has one of the broadest and deepest distributions of any AI service in 2024. Far surpassing OpenAI’s class-leading ChatGPT, which still officially has 100 million plus monthly users, and undoubtedly tens of millions more via APIs at Microsoft and elsewhere.
The only other ‘Magnificent 7’ company with that kind of potential reach and distribution is of course Apple, whose 2 billion plus device users interact with Apple’s core services every day. As I’ve outlined before, Apple is reportedly in negotiations with both Google and OpenAI to potentially use their Gemini and/or GPT LLM AI models as the basis for future Apple AI apps and services.
In the meantime, the company has been aggressively ramping up its own LLM AI efforts, as illustrated by its recent publication of AI research papers on a family of open source LLM AI models for small devices, dubbed ‘OpenELM’.
The point is that the race is on to get AI services in front of billions as fast as possible, regardless of whether those users are ready to pay for them directly, or have figured out what these AI services can do and how they fit into their daily habits.
That will come later, it’s presumed. For now, to paraphrase ‘Field of Dreams’: ‘Blanket it and they will come’. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)