In my post a couple of days ago, “AI: Also to the Edge”, I highlighted Nvidia and Apple as two large potential Tech beneficiaries of bringing AI computational capabilities ‘close to the edge’: where people go about their daily tasks on their personal devices, with their own personal data. Their usage of those AI services provides valuable reinforcement feedback loops that make the core data sets better trained and more reliable. That’s how the core value gets built in AI.
I left out one other large company that also has a big role to play at the AI Edge, that of course being Tesla. The reason to discuss them separately is threefold.
Their core application for AI is making their millions of electric vehicles (EVs) capable of “Full Self-Driving”, or FSD, for customers paying the $15,000 for the FSD option or buying it on a subscription plan. They collect massive amounts of training data from all of their cars, whether or not the owner buys FSD, and that gives them a key advantage on the Data front vs competitors, given the millions of Tesla vehicles on the road. Unlike competitors such as Google’s Waymo, GM’s Cruise, et al, they don’t use LiDAR (or radar), relying mostly on cameras for cost reasons. It’s a strong Elon Musk conviction.
Theirs is what would be called a ‘Narrow AI’ application (‘ANI’), vs more general AI (AGI) aspiring systems like OpenAI’s ChatGPT and Google’s Bard/Search Generative Experience (SGE). Those are true large language model (LLM) AI applications, compared to the highly image-focused, convolutional neural network work that Tesla cars do: AI tech optimized for processing images rather than text. They’re quite different types of AI computations with differing hardware requirements (in-car inference is far less GPU-intensive than LLM training). Tesla has their own custom chips in their cars (HW3 and soon HW4), and their in-house ‘Dojo’ training supercomputer, to process the reinforcement feedback loops from the eight or so cameras on each Tesla vehicle on the road.
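The computational contrast can be sketched in a few lines: a convolution slides one small, reused weight kernel over local patches of an image, while an LLM’s self-attention compares every token against every other token. A minimal NumPy illustration (all shapes and sizes here are arbitrary assumptions for demonstration, not Tesla’s or OpenAI’s actual architectures):

```python
import numpy as np

rng = np.random.default_rng(0)

# Vision-style workload: slide a 3x3 kernel over a 64x64 "image".
# The same small weight kernel is reused at every spatial position.
image = rng.standard_normal((64, 64))
kernel = rng.standard_normal((3, 3))
out = np.zeros((62, 62))
for i in range(62):
    for j in range(62):
        out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)

# LLM-style workload: self-attention compares every token with every
# other token, so cost grows with the square of sequence length.
tokens = rng.standard_normal((16, 8))            # 16 tokens, 8-dim embeddings
scores = tokens @ tokens.T / np.sqrt(8)          # 16x16 pairwise similarities
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
attended = weights @ tokens                      # each token mixes all others

print(out.shape, attended.shape)                 # (62, 62) (16, 8)
```

The convolution touches only local pixels with shared weights, which is why it maps well to modest in-car silicon; the attention step is a dense all-pairs matrix product, which is what drives the appetite for big GPU clusters.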
That’s where the distinction between Tesla and other companies would stop, BUT for Elon Musk’s recent announcement of a competitor to OpenAI/Microsoft, called X.AI. Remember, Elon Musk was a co-founder of OpenAI all the way back in 2015, and broke with OpenAI in 2018, stepping off the board. Now he has thrown his hat in the ring to create an alternative LLM AI company with different values than OpenAI. That means hewing to a different technical path than core Tesla’s, hiring highly in-demand, world-class AI talent, and focusing on a different kind of AI than what he has been doing for his cars.
In particular, it means he will need as many thousands of Nvidia’s GPU chips as he can get in a globally supply-constrained environment, and in the quantities needed to build Foundation LLM AI models, they easily run into the hundreds of millions to billions of dollars. Remember, OpenAI is apparently looking to raise another $100 billion after raising over $12 billion from Microsoft, with most of that going back into Microsoft Azure cloud infrastructure. (Will have more on that in future posts.)
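A quick back-of-envelope shows how the numbers reach that scale. Every figure below is an illustrative assumption (not actual Nvidia pricing or any announced X.AI cluster size):

```python
# Back-of-envelope cost of a foundation-model GPU cluster.
# All figures are illustrative assumptions, not real quotes.

GPU_UNIT_COST = 30_000      # assumed cost per data-center-class GPU, USD
GPUS_PER_CLUSTER = 10_000   # assumed cluster size for LLM training

hardware_cost = GPU_UNIT_COST * GPUS_PER_CLUSTER
print(f"GPUs alone: ${hardware_cost:,}")          # GPUs alone: $300,000,000

# Networking, power, cooling, and facilities add a large premium on top
# of the raw GPU bill; assume a rough 2x all-in multiplier.
ALL_IN_MULTIPLIER = 2
all_in = hardware_cost * ALL_IN_MULTIPLIER
print(f"All-in estimate: ${all_in:,}")            # All-in estimate: $600,000,000
```

Even with these deliberately round, hypothetical numbers, a single training cluster lands in the hundreds of millions of dollars, which is why LLM ambitions need a balance sheet behind them.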
So, to do X.AI, even Elon Musk needs resources, and he will likely lean on Tesla’s balance sheet for his Nvidia GPU and X.AI data center purchases. Tesla shareholders will have to sort all that out. But at least car deliveries are on track after recent discounts, so that’s a tailwind.
So it’s AI at the Edge indeed for Elon and Tesla. It’s the other Cage Match. Stay tuned.