Readers of AI: Reset to Zero know that I’ve long made the case that Apple has one of the best AI opportunities among the Big Tech mega caps, with the potential to provide the most compelling LLM AI and Generative AI services to billions of users.
Part of the reason is that I believe the current focus on investing tens to hundreds of billions of dollars in Foundation LLM AI models, and on ‘Training’ ever-bigger LLM AIs using ever more powerful GPU clusters in AI Compute data centers, is but one side of the huge AI Tech Wave opportunity. As I’ve outlined, more than a dozen major companies worldwide, both public and private, are investing in this direction, with bigger Foundation models due out over the next 18-24 months.
To me, the bigger opportunity over the next five-plus years is the deployment of fast LLM AI models that work where the users are, personalized to their ‘Data Exhausts’, lives, interests, and daily concerns. Those models and services, paired with “Smart Agents” and queried in ‘Inference’ Reinforcement Learning loops via ChatGPT-like interfaces, or better yet via ‘Voice Assistants’, do not yet exist at scale. But they’re coming.
Apple’s Siri and HomePod voice assistants, along with Amazon’s Alexa and Echo, and Google’s Google Assistant and Nest devices, are all being revamped by their respective organizations to bring LLM AI technologies to those services. Each of these companies invested billions in this area BEFORE the current LLM AI and generative AI revolution, and each is now going back to the drawing board to deploy the next generation of ‘Smart Assistant’ services.
A report this week by The Information has more details on Apple’s effort here, and on other bottom-up services where LLM AI and machine learning can make transformative differences at the Edge: in the hands of users, with their own Data, secured with the appropriate technologies for user Privacy. Here’s a snapshot of that effort:
“Apple has been expanding its computing budget for building artificial intelligence to millions of dollars a day. One of its goals is to develop features such as one that allows iPhone customers to use simple voice commands to automate tasks involving multiple steps.”
“The technology, for instance, could allow someone to tell the Siri voice assistant on their phone to create a GIF using the last five photos they’ve taken and text it to a friend. Today, an iPhone user has to manually program the individual actions.”
“The moves come four years after Apple’s head of AI, John Giannandrea (aka ‘JG’), authorized the formation of a team to develop conversational AI, known as large-language models, before the technology became a focus of the software industry. That move now seems prescient following the launch last fall of OpenAI’s ChatGPT, a chatbot that catalyzed a boom in language models.”
As I’ve outlined, Apple already has a head start over a range of US companies, from Elon Musk’s X and upcoming xAI to Meta, Uber, and others, in developing an “Everything App” akin to Tencent’s ‘WeChat’ in China. With over two billion devices running Apple operating system software of various types, and with Apple Silicon years ahead of competitors in GPUs and neural processors on local devices, improving in power and power efficiency every year, Apple has a head start on delivering AI services at the Edge. Apple Silicon alone, with GPUs in local devices, is ahead of most other companies and platforms, and that capability was potentially under-used until LLM and Generative AI came along.
Voice services at the Edge, via Siri on everything from iPhones to Apple Watches on the ‘Wearables’ platform, and on Apple’s upcoming game-changing Vision Pro platform next year at a ‘bargain’ price, are the next critical step.
Apple is more than tinkering with AI at the Edge and everywhere else. They just release stuff when they’re good and ready. It’s early days. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here.)