The Bigger Picture, Sunday October 29, 2023
In last week’s ‘Bigger Picture’, I argued that we need to build fast on near-term AI opportunities, and not let far-term goals distract us from what’s possible now. This week I’d like to argue that the quest for self-driving cars over the last dozen-plus years, in which dozens of companies worldwide have invested over $200 billion, has a lot to teach us about how much time, money and talent the broader promises of AI may truly take. Self-driving cars may be the canaries in the AI coal mine, despite the hundred-fold-plus improvements expected in LLM AI models and GPU hardware for general AI (aka AGI and ‘super intelligence’, per OpenAI et al.) in just the next three-plus years. Let me explain.
In that same post last week, I highlighted how we got safer ‘Drive Assist’ in our cars while still waiting for fully self-driving cars. The link in the preceding sentence explains in detail how we’re still at ‘Level 2’ of self-driving, on a scale that goes to Level 5.
That’s described as:
“Level 2 autonomy is mostly where we’re at today: computers take over multiple functions from the driver – and are intelligent enough to weave speed and steering systems together using multiple data sources. Mercedes says it’s been doing this for six years. The latest Mercedes S-Class is Level 2-point-something. It takes over directional, throttle and brake functions for one of the most advanced cruise control systems yet seen – using detailed sat-nav data to brake automatically for corners ahead, keeping a set distance from the car in front and setting off again when jams clear, with the driver idle.”
I’d like to make this description even more vivid by asking you to watch at least the first few minutes of this 25-minute review by Edmunds. It’s a sometimes heart-stopping take on their real-world testing of the current four top self-driving systems. They feature “BMW’s Driving Assistant Plus, GM’s Super Cruise, Ford’s BlueCruise and, of course, Tesla’s Full Self-Driving Beta.” This is the state of the art in what AI can do with cars today.
In particular, focus on the four-minute mark in the video, where one of the reviewers is startled by his car’s unexpected behavior on the infamous I-405 in LA. Watch how it takes him a while to recover from the shock of the car’s ‘errant, utterly unexpected behavior’. Here’s the transcript:
“This is a good test for the car because we're needing to get over soon—” [4:03]
“Whoa, whoa!” [4:05]
“Jesus.” [4:11]
“OK, let my heart rate come down for a second.” [4:15]
In fact, if one watches the entire review, one gets the feeling that self-driving cars today are an exercise in truly stress-filled driving, like forever watching over the shoulder of a novice teen driver prepping for their driver’s license exam. This despite spending $50,000 to $100,000-plus for the cars in question, then spending thousands more for the ‘self-driving’ capability, along with subscription fees in some cases. At this stage of AI in self-driving cars, it may feel like the car companies should be paying the drivers, Tom Sawyer style, not the other way around.
The ‘whoa’ moment above is an example of what’s called an ‘edge case’ in AI: a low-probability situation that can arise for a whole host of reasons unanticipated by the billions of hours of prior driving data, real and synthetic, used to train the models. Worse, because of how the underlying AI math works, such behavior is in most cases unexplainable, which makes it hard to interpret why it occurred, let alone fix it.
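To make the edge-case problem concrete, here is a minimal back-of-the-envelope sketch (not from the article; the probability and corpus size are hypothetical numbers chosen only to show the arithmetic) of why a rare driving scenario can be entirely absent from even a very large training set:

```python
import math

# Hypothetical figures, for illustration only:
p_edge = 1e-7           # chance any single driving sample contains a given rare scenario
n_samples = 10_000_000  # number of samples in the training corpus

# Probability the scenario never appears anywhere in the training data
p_missed = (1 - p_edge) ** n_samples

print(f"Chance the edge case is never seen in training: {p_missed:.1%}")
# With these numbers, roughly e^(-1), i.e. about a one-in-three chance the
# model first encounters the scenario on a real road rather than in training.
```

The point of the sketch is only that rare events scale badly: no matter how large the corpus, there is always a tail of scenarios too infrequent to have been sampled, which is exactly where the ‘whoa’ moments live.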
Bloomberg highlighted some more of these ‘Edge cases’ in this piece last year, capping it with the industry’s ‘Left-turn’ problem:
“It all sounds great until you encounter an actual robo-taxi in the wild. Which is rare: Six years after companies started offering rides in what they’ve called autonomous cars and almost 20 years after the first self-driving demos, there are vanishingly few such vehicles on the road. And they tend to be confined to a handful of places in the Sun Belt, because they still can’t handle weather patterns trickier than Partly Cloudy. State-of-the-art robot cars also struggle with construction, animals, traffic cones, crossing guards, and what the industry calls “unprotected left turns,” which most of us would call “left turns.”
“The industry says its Derek Zoolander problem applies only to lefts that require navigating oncoming traffic. (Great.) It’s devoted enormous resources to figuring out left turns, but the work continues. Earlier this year, Cruise LLC—majority-owned by General Motors Co.—recalled all of its self-driving vehicles after one car’s inability to turn left contributed to a crash in San Francisco that injured two people.”
And this despite at least $200 billion spent on the endeavor over a dozen years now, according to McKinsey. In its just-reported quarter, Google outlined how it continues to spend over a billion dollars a quarter on its Waymo subsidiary. GM’s Cruise just had its ongoing tests banned in California after a grisly accident, and then suspended its driverless operations nationwide.
This is not to say the effort in time and money is not worthwhile. Indeed, as long-time AI luminary, critic, and realist Gary Marcus puts it:
“I—truly a techno-optimist, popular perceptions to the contrary—do think driverless will come, eventually. 30,000 people dying on US roads a year, a million worldwide, is no joke; there is still room for improvement. I do still think that reliable self-driving cars will emerge, and eventually be safer than humans, who can get distracted, become tired and so forth.”
“To get there, though, what we need is a much smarter AI, a kind that can reason, and not hallucinate, grounded in reality and not just corpus statistics— which require fundamental research of a sort that is currently getting starved out in the LLM euphoria.”
“The same by the way probably holds for generative search and personalized agents. We will eventually see search engines that return neatly written paragraphs, and eventually see AI agents that can organize lives.”
I agree with Gary and others cited in the piece. What’s happening with self-driving cars even today is an example of an issue that’s likely to be with AI applications at large for some time.
I’ve also argued from the beginning that AI technology takes far longer to do what we think it can do, and that we need to focus on the more prosaic, practical applications of AI ahead of the long-term BIGGER possibilities. This isn’t to say we should slow down or stop trying the big things. But recognize that, just like the AI in self-driving cars, the bigger goals may take more time and money to get there. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here.)