AI: Apple's focus on AI & Privacy. RTZ #372
...solving the key problem of scaling Trust in AI, while scaling AI itself
A couple of days ago I discussed “Apple’s emerging AI Gameplan”. Some fuzzy details have emerged, but the broader Apple AI strategy remains to be seen. We’ll likely get more details at the Apple WWDC developer conference on June 10.
But a key issue in these early days of the AI Tech Wave, ESPECIALLY for Apple given its foundational commitment to user privacy for the 2.2 billion users of Apple devices, is HOW to deliver state of the art AI services to mainstream users while keeping user privacy as intact as possible.
Especially since AI at its most useful will have to navigate data both personal and at large across our health, wealth, and ever-changing aspirations for happiness.
It’s a thorny problem, and often cited as a reason Apple may be unable to deliver AI products and services at scale, unlike its ‘Magnificent 7’ peers like Google and Meta, whose business models in many core cases depend on the user being the product.
Apple has been working hard at user privacy for more than a decade, with hardware/software innovations at the device level like the ‘Secure Enclave’, which powers mainstream Apple services like ‘Apple Pay’ and others. The company is presumably hard at work thinking about how to deliver AI at scale with a focus on Privacy and user trust.
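For flavor, here’s roughly what that device-level foundation looks like to a developer today: a minimal Swift sketch using Apple’s CryptoKit, where a signing key is generated inside the Secure Enclave and never leaves the chip. The payload and flow are illustrative assumptions on my part, not Apple Pay’s actual protocol.

```swift
import Foundation
import CryptoKit

do {
    // Generate a P-256 signing key *inside* the Secure Enclave. The
    // private key material never leaves the chip; the app only holds
    // an opaque reference to it. (Requires Secure Enclave hardware.)
    let enclaveKey = try SecureEnclave.P256.Signing.PrivateKey()

    // Sign a payload -- the signing operation happens in the enclave.
    let payload = Data("authorize payment".utf8)
    let signature = try enclaveKey.signature(for: payload)

    // Anyone can verify with the public key; no secret ever surfaced
    // to the OS, the app, or Apple.
    let isValid = enclaveKey.publicKey.isValidSignature(signature, for: payload)
    print("signature valid:", isValid)
} catch {
    print("Secure Enclave unavailable on this machine:", error)
}
```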
The Information, in “Apple’s Plan to Protect Privacy With AI: Putting Cloud Data in a Black Box”, points to some ways Apple may be making progress here:
“When Apple executives appear at its annual developer conference in mid-June, they are expected to unveil details of how it will integrate AI in its Siri virtual assistant and other products. One of the big questions is how it will do so while still honoring its promises to protect the personal data of its users.”
“The answer? Apple plans to process data from AI applications in a virtual black box, making it impossible for its employees to access it, according to four former Apple employees who worked on the project. Over the past three years, the company has been working on a secret project, known internally as Apple Chips in Data Centers or ACDC, that would allow for such black box processing. Its approach is similar in concept to confidential computing, an industry term that means the data is kept private even while it’s being processed.”
“While it’s common practice for tech companies to encrypt data that is stored or shuttled from one device to another, it’s far less common and more challenging to keep that data confidential while it’s being accessed in computer memory for processing.”
“Apple’s confidential computing techniques utilize the high-end custom chips it originally designed for Macs, which offer better security than competing chips made by Intel and AMD, the people said. By using its own chips, Apple controls both the hardware and software on its servers, giving it the unique advantage of being able to design more secure systems over its competitors. In doing so, Apple could make the claim that processing personal data on its servers is just as private as on the iPhone, they said.”
“If this approach works, it will allow Apple to integrate AI into its products without threatening its longstanding promise to keep its user data private. That privacy stance has put Apple at a disadvantage to competitors such as Google and OpenAI. Those companies operate chatbots and other generative AI products on cloud servers that aren’t limited by the processing power, computer memory and battery life of a smartphone.”
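To make the distinction in that third quote concrete, here’s a minimal Swift/CryptoKit sketch of the routine part (encryption at rest and in transit) versus the hard part the black-box approach targets (data in use). The key and payload are illustrative assumptions; this is the general shape of the problem, not Apple’s ACDC design.

```swift
import Foundation
import CryptoKit

do {
    // The routine part: encrypting data at rest or in transit.
    // Seal the payload and it's opaque to anyone without the key.
    let key = SymmetricKey(size: .bits256)
    let request = Data("summarize my last 50 health readings".utf8)
    let sealedForTransit = try AES.GCM.seal(request, using: key)

    // The hard part: to actually *process* the request (say, run AI
    // inference on it), a conventional server must open the box,
    // leaving plaintext in ordinary memory where the cloud operator
    // could, in principle, read it.
    let plaintextInServerMemory = try AES.GCM.open(sealedForTransit, using: key)

    // Confidential computing aims to close that window: the decrypted
    // working set lives only inside hardware-isolated memory that even
    // the operator cannot inspect -- the "black box" in the article.
    print("processing \(plaintextInServerMemory.count) bytes in the clear -- the gap to close")
} catch {
    print("crypto error:", error)
}
```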
There are a LOT of ‘ifs, ands and buts’ with this possible approach, and a lot of technical and operational questions about whether it could be made to work at scale. Overall, the article does a good job discussing the issues. The whole piece is worth reading.
But the mere fact that Apple is exploring this direction means the industry is likely to heighten its focus on this important issue of AI and Privacy as well. Especially as the AI cloud and data centers need to be leveraged alongside AI wielded in the privacy of on-premises, local devices. Big AI and Small AI, both able to thread the privacy needle.
And it’s an important threshold not just for Apple, but for the industry at large. Especially since AI, to be truly useful to billions, has to REALLY get personal in what it knows about us, through endless inference loops on floods of barely tapped data.
In many ways, scaling our Trust in AI is as critical as scaling AI itself. And while the latter may cost us many hundreds of billions of dollars, it’s the former that will be needed for hundreds of billions of AI agents to do their thing for mainstream users in the billions.
Trust is as important to scale as AI itself. Hence the focus on Privacy as a start. Apple seems well-focused on this challenge and opportunity. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)