Apple is doing a lot with AI and machine learning (ML), as I’ve talked about for a while. Not the least of it is a cool little new feature called ‘double tap’ on the latest Apple Watches, which uses a surprising amount of AI/ML and on-device ‘compute’ to make it happen. Let me explain.
Earlier this month, in “Feeling our way to AI UI/UX” (user interface/user experience), I noted:
“In the ten months since OpenAI’s ‘ChatGPT moment’, and after $200 billion plus in AI infrastructure investments, the world is still trying to figure out how AI will be used at scale in all its many possible incarnations. And so many ways for users to interact and experiment, ‘UI/UX’ in Silicon Valley speak.”
I added,
“Apple is hard at work as outlined here and here, finding bottom up ways to integrate AI into its many devices, apps and services, including its next generation Vision Pro platform in early 2024, even at its ‘historical bargain’ price of $3499. Recent reports have their AI head ‘JG’ looking hard at everything from Siri, to search delivered by Apple products using both its own technologies and Google.”
The Vision Pro doesn’t get here until next year. The signature way to interact with objects via the Vision Pro glasses is the ‘double tap’ gesture with the user’s fingers, which is registered by a crazily complex array of cameras and sensors, backed by a deep stack of custom software and hardware.
We got a hint of just how complex from a revelation by one of the designers, Sterling Crispin, who worked at Apple for years perfecting some of the tech. Here is some of what he had to say about what went into this next generation of UI/UX:
“I spent 10% of my life contributing to the development of the #VisionPro while I worked at Apple as a Neurotechnology Prototyping Researcher in the Technology Development Group. It’s the longest I’ve ever worked on a single effort.”
“I’ve been working on AR and VR for ten years, and in many ways, this is a culmination of the whole industry into a single product.”
“The work I did supported the foundational development of Vision Pro, the mindfulness experiences, and also more ambitious moonshot research with neurotechnology. Like, predicting you’ll click on something before you do, basically mind reading. I was there for 3.5 years and left at the end of 2021.”
The reason I’m going into some of this now is that Apple has released a very small version of this UI/UX capability, the ‘double tap’ feature, into its multi-billion-dollar ‘Wearables’ business, via the latest Apple Watch Series 9 and Ultra 2. As The Verge explains in “The Apple Watch’s double tap gesture points at a new way to use wearables”:
“The coolest Apple Watch Series 9 and Ultra 2 feature wasn’t actually available when the watches launched last month. Double tap, which finally arrives today via the watchOS 10.1 update, lets you interact with the watch without ever needing to use the touchscreen. With a quick pinching motion, you can use it to scroll through the new smart stack of widgets in watchOS 10, pause or end timers, skip music tracks, and answer phone calls. It’s the sort of feature that you might read about and scoff at — until you’re unloading groceries from your car, hands full, and an important call comes through on your watch.”
The Verge reviewer Victoria Song then goes on to add:
“In talking with Clark and Charles, it’s clear Apple went through such a tedious process because this is supposed to be one of those magical features that ‘just works.’
And, when double tap performs as intended, it does feel a bit like the watch can read my mind. It’s genuinely cool to see double tap work with not just my index finger but the rest of them as well. To my surprise, it feels less gimmicky than I expected.”
Yes, it’s being described as special, similar to the experience hundreds of reviewers had trying out early versions of Apple’s Vision Pro headset, with its focus on ‘magical’ hand gestures that ‘just worked’.
But here’s why I’m really talking about this almost esoteric ‘double tap’ feature today: the sheer amount of AI and machine learning technology, along with many of the Vision Pro technologies discussed above, that went into making this one feature happen. As David Clark, senior director of Apple Watch software engineering, explains (bolding mine below):
“The short answer is that the Apple Watch Series 9 and Ultra 2 have a more powerful chip. Specifically, the new S9 features four neural engines for machine learning, which is what powers double tap.”
“Because we’re on a purpose-built part of the processor, we’re not contending with all the other things the CPU is doing at any given time.”
“There is an absurd amount of data that needs to be processed for double tap to work. At the most basic level, the algorithm that detects the double tap gesture is trained on data from the accelerometer, gyroscope, and optical heart rate sensor collected from the wrist.”
That sentence in bold above is the reason I’m going through this whole discussion: ‘an absurd amount of data that needs to be processed… for double tap to work’.
Then the AI/ML algorithms have to work out which majority of the data to ignore and which few key ‘signals’ to keep, and the whole series of calculations on the user’s finger movements has to be rerun an immense number of times, in real time. All without draining the battery too fast.
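Apple hasn’t published the details of this pipeline, but the general shape of an always-on gesture detector is well understood: slide a window over the sensor streams, discard the overwhelming majority of windows with a cheap check, and only run the learned model on the few that might matter. Here is a minimal Python sketch along those lines; the sampling rate, threshold, features, and the stand-in ‘classifier’ are all hypothetical, purely to illustrate the ignore-most, keep-few loop Clark describes:

```python
import numpy as np

SAMPLE_HZ = 100          # assumed sensor sampling rate (hypothetical)
WINDOW = SAMPLE_HZ // 2  # half-second sliding window of samples

def energy_gate(accel_window, threshold=0.15):
    """Cheap first pass: skip the expensive model unless the
    accelerometer shows enough motion energy to plausibly be a gesture."""
    return np.var(accel_window) > threshold

def features(accel, gyro, ppg):
    """Collapse raw sensor windows into a small feature vector
    (variance and peak-to-peak per channel). A real pipeline would be
    far richer; this is illustrative only."""
    channels = [accel, gyro, ppg]
    return np.array([f(c) for c in channels for f in (np.var, np.ptp)])

def tiny_classifier(feat, w, b):
    """Stand-in for the small learned model running on the Neural Engine:
    a single logistic unit over the feature vector."""
    return 1.0 / (1.0 + np.exp(-(feat @ w + b)))

# Simulated always-on loop over streaming sensor data. Random stand-in
# data, simplified to one channel per sensor (real accel/gyro are 3-axis).
rng = np.random.default_rng(0)
w, b = rng.normal(size=6), -1.0                 # pretend these were learned offline
stream = rng.normal(size=(10 * SAMPLE_HZ, 3))   # columns: accel, gyro, ppg

for start in range(0, len(stream) - WINDOW, WINDOW // 2):
    win = stream[start:start + WINDOW]
    accel, gyro, ppg = win[:, 0], win[:, 1], win[:, 2]
    if not energy_gate(accel):
        continue                                # most windows ignored cheaply
    p = tiny_classifier(features(accel, gyro, ppg), w, b)
    if p > 0.9:
        print(f"double tap detected near sample {start} (p={p:.2f})")
```

The design choice worth noticing is the gating: the expensive model runs only when the cheap variance check fires, which is how an always-on detector can keep looping continuously without flattening the battery.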
This is the kind of crazy computation at the heart of AI that I’ve been discussing, along with the critical ‘reinforcement learning loops’ and inference calculations on LLM AI models.
It highlights several key elements of AI that I’ve been discussing for a while in different pieces, all coming together in this ‘simple’ double tap by Apple:
New data used: The ‘extractive’ data needed to do cool new things with AI is almost unbounded, and not yet collected at scale by anyone. Mostly because, until now, we didn’t have the LLM AI and GPU chips to run that data through intense computations and figure out cool new ways to use it, in this case the ‘double tap’.
On-device ‘Small AI’: A lot of the coolest applications are not going to run in ‘Big AI’ cloud compute data centers, but as ‘Small AI’ running locally on devices, in this case literally at one’s fingertips (see the sketch after this list).
Unimagined applications: And lastly, the range of applications for these AI technologies is also likely to be unbounded.
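To make the on-device ‘Small AI’ point concrete: a standard way to squeeze a model onto watch-class hardware is to shrink it, for example by quantizing its weights from 32-bit floats to 8-bit integers. A toy Python sketch follows; the layer size and weights are made up, and this is in no way Apple’s model, just an illustration of the size-versus-accuracy trade:

```python
import numpy as np

# A toy 'Small AI' model: one dense layer, small enough to live on-device.
rng = np.random.default_rng(1)
W = rng.normal(size=(16, 6)).astype(np.float32)  # hypothetical learned weights
x = rng.normal(size=6).astype(np.float32)        # one incoming feature vector

# 8-bit quantization: store weights as int8 plus a single scale factor,
# shrinking the model ~4x versus float32 storage.
scale = np.abs(W).max() / 127.0
W_q = np.round(W / scale).astype(np.int8)

y_full = W @ x                                   # full-precision result
y_small = (W_q.astype(np.float32) * scale) @ x   # dequantized on the fly

print("max error from quantization:", np.abs(y_full - y_small).max())
print("bytes: float32 =", W.nbytes, "| int8 =", W_q.nbytes)
```

Four times less memory for a tiny, bounded loss of precision is exactly the kind of trade that makes local, battery-friendly inference practical at one’s fingertips.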
We’re just scratching the surface with the AI at our fingertips. Or in this case, pinching our fingers to do some ‘magical’ things. Read the whole Verge review for the specifics of what one can do with ‘double tap’ on the latest Apple Watch.
But recognize that this is just the beginning of what’s going to be at our fingertips soon in this AI Tech Wave. It’s all barely started: AI software doing a whole host of new things alongside traditional software. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here).