AI: Apple's 'ACDC' AI Server Chips. RTZ #352
...Apple's 'Electric' Cloud to Local 'Big & Small' AI Strategy
Apple is reportedly working on AI chips for cloud servers, code-named ‘ACDC’, or ‘Apple Chips in the Data Center’. Let me unpack the significance.
With this possible move, Apple continues to move towards revealing its long-awaited LLM/SLM AI strategy at its WWDC developer conference next month.
I’ve long asserted that Apple, while viewed as an ‘AI laggard’ amongst the ‘Magnificent 7’ Big Tech companies, is actually a long-term AI winner in this AI Tech Wave. Especially with its unique base of over 2 billion users across a relatively unified ecosystem of applications, hardware platforms, operating systems, software tools, and services. Not to mention its latest platform efforts with the Vision Pro.
It now appears that the company is bolstering its capabilities in AI chips for cloud servers. As Bloomberg notes in “Apple to Power AI Tools With In-House Server Chips This Year”:
“Company puts Mac-grade chips in data centers for fall launch”
“Apple to offer features using on-device and cloud approaches”
“Apple Inc. will deliver some of its upcoming artificial intelligence features this year via data centers equipped with its own in-house processors, part of a sweeping effort to infuse its devices with AI capabilities.”
“The company is placing high-end chips — similar to ones it designed for the Mac — in cloud-computing servers designed to process the most advanced AI tasks coming to Apple devices, according to people familiar with the matter. Simpler AI-related features will be processed directly on iPhones, iPads and Macs, said the people, who asked not to be identified because the plan is still under wraps.”
“The move is part of Apple’s much-anticipated push into generative artificial intelligence — the technology behind ChatGPT and other popular tools. The company is playing catch-up with Big Tech rivals in the area but is poised to lay out an ambitious AI strategy at its Worldwide Developers Conference on June 10.”
“Apple’s plan to use its own chips and process AI tasks in the cloud was hatched about three years ago, but the company accelerated the timeline after the AI craze — fueled by OpenAI’s ChatGPT and Google’s Gemini — forced it to move more quickly.”
“The first AI server chips will be the M2 Ultra, which was launched last year as part of the Mac Pro and Mac Studio computers, though the company is already eyeing future versions based on the M4 chip.”
In my view, Apple continues to have a pole position in AI, especially by uniquely blending a plethora of what I’ve called ‘Small AI’ applications and services served locally off billions of ‘AI Edge’ devices.
This leverages its on-device AI chips to privately blend user data with ‘Small Language Model’, or SLM, technologies. We know Apple is doing work along those lines with its recently open-sourced OpenELM models, which demonstrate these capabilities. By blending both local and data-center-based ‘server chips’, Apple has a unique set of possibilities in AI applications and services. Again, as Bloomberg highlights:
“The move, coming as part of Apple’s iOS 18 rollout in the fall, represents a shift for the company. For years, Apple prioritized on-device processing, touting it as a better way to ensure security and privacy. But people involved in the creation of the Apple server project — code-named ACDC, or Apple Chips in Data Centers — say that components already inside of its processors can safeguard user privacy. The company uses an approach called Secure Enclave that can isolate data from a security breach.”
“For now, Apple is planning to use its own data centers to operate the cloud features, but it will eventually rely on outside facilities — as it does with iCloud and other services. The Wall Street Journal reported earlier on some aspects of the server plan.”
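The split Bloomberg describes above (simpler AI features processed on-device, the most advanced tasks sent to server chips) can be sketched as a simple routing policy. To be clear, this is purely an illustrative sketch of the general idea: the names (`Task`, `route_task`), the token threshold, and the decision criteria are my own hypothetical assumptions, not anything Apple has disclosed.

```python
# Hypothetical sketch of a "Big & Small" AI routing policy: simple requests
# run on an on-device SLM, heavier ones go to cloud server chips.
# All names and thresholds here are illustrative assumptions, not Apple APIs.
from dataclasses import dataclass

ON_DEVICE = "on-device SLM"
CLOUD = "cloud server (ACDC)"

@dataclass
class Task:
    prompt: str
    needs_long_context: bool = False  # e.g. summarizing a large document

def route_task(task: Task, max_on_device_tokens: int = 512) -> str:
    """Decide where to run an AI task under this illustrative policy."""
    # Rough token estimate for the sketch: ~1 token per word.
    est_tokens = len(task.prompt.split())
    if task.needs_long_context or est_tokens > max_on_device_tokens:
        return CLOUD      # advanced tasks: server-grade chips
    return ON_DEVICE      # simple, private tasks stay local

print(route_task(Task("Summarize this note")))                       # on-device SLM
print(route_task(Task("Draft a report", needs_long_context=True)))   # cloud server (ACDC)
```

The appeal of a scheme like this is that the private, latency-sensitive path never leaves the device, while only the heaviest work touches the data center.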
All this may be combined with possible LLM AI deals between Apple and OpenAI and/or Google, with its Gemini LLM AI capabilities.
An Apple AI server chip becomes particularly potent at scale, given Apple’s direct ‘first customer’ status with the world’s leading chip ‘fab’ foundry company, TSMC of Taiwan, which accounts for over 60% of the world’s foundry chip-making market. Access to AI chips at scale, as I’ve outlined, is of course critical for AI market leadership for now.
All this means that there are a lot of interesting and ‘Electric’ AI possibilities ahead for Apple in this AI Tech Wave, very soon. Not an AI laggard at all. Stay tuned.
NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here.
Everyone in the world agrees with you, but I still don’t get it. It was a genius move to design their own chips for their devices, so they could have tight integration and design, and control their destiny. But cloud cycles are just a commodity, granted that there’s a bubble at the moment and a supply constraint. So it’s great Apple can maybe get to the front of the line producing these chips for their servers, but it’s nothing to get massively excited about. They’ll save on giving NVIDIA and others some margin. Maybe they’ll get some economies of scale, since they already have a chip business for their on-device chips. Maybe their chips will be a little better or worse than other people’s cloud chips, but it’s a foot race with lots of big players racing to put out better chips and sell them to both proprietary and generic cloud server farms.

Ultimately, I don’t see Apple having different cloud chips as a differentiator that will make any difference to what my watch and phone do when I talk to them. Personally, I think this is just an announcement of something they had in the works for years, to keep us happy for another six months while they prep their on-board AI releases to catch up, and maybe lead again in what really matters: the device I hold in my hand, on my wrist, on my head, and maybe someday in my head.