AI: Sam Altman: "deep learning worked". RTZ #489
...Steve Jobs' 'bicycle for the mind' scales up with AI
OpenAI co-founder/CEO Sam Altman is hinting at the AI road ahead again, only a few days after his teasers on GPT-5 ‘Orion’ coming this winter. This time it’s on a core OpenAI subject: the road to AGI, aka ‘SuperIntelligence’.
In a new essay titled “The Intelligence Age” (worth reading in full), he simplifies this whole AI Tech Wave I’ve been writing about for hundreds of days now as follows:
“This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.”
“How did we get to the doorstep of the next leap in prosperity?”
“In three words: deep learning worked.”
“In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.”
That’s it.
Three words, “deep learning worked”, extended to fifteen.
Those fifteen words can be folded back into the first three and extended into these five, taking inspiration from the three iconic opening words of the US Constitution:
“We the Compute Powered People”.
That’s the next step.
In five words.
Roughly the geometric mean of the 3 and 15 above (for the math nerds).
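Strictly speaking, for the nerds keeping score, that mean works out a bit above five:

```latex
\sqrt{3 \times 15} = \sqrt{45} \approx 6.7
```

Five is still much closer to the geometric mean than to the arithmetic one (which would be 9), so the joke holds up.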
I choose to use the word ‘Compute’ instead of AI, because massively scaled compute in the form of AI infrastructure, from chips to data centers with ample power and more, will be needed to make Sam’s essay achievable.
And to avoid anthropomorphizing AI.
Steve Jobs famously said in 1981 that the computer was the bicycle for the mind.
He went on to say,
“That’s nothing compared with what’s coming in the next hundred years”.
That was in 1981, when Jobs was 26 years old, recalling a Scientific American article he had read at age 12, and only five years after he co-founded Apple Computer in 1976.
So by his reckoning, we have only 57 more years to go until his ‘next hundred years’ are up.
Ample time for the ‘few thousand days’ that Sam Altman thinks will get us to ‘superintelligence’. To be fair, he does add “it may take longer”, but emphasizes his optimism about getting there.
AI promises to be at least a ‘battery powered bicycle’ for our minds.
Perhaps over time morphing into a golf cart, then a car, a plane, a rocket ship ‘for the mind’, and beyond.
There are A LOT of hard challenges and problems between here and there, many documented in these pages daily, sometimes in painstaking detail.
But as Sam says in his essay:
“There are a lot of details we still have to figure out, but it’s a mistake to get distracted by any particular challenge.”
Countless resources and time will be devoted to ‘figuring out the details’. Most of the daily posts here are about those details.
And of course executing on them, with more failures than successes before we get even to the ‘golf cart for the mind’ with AI.
Today’s AI, even as we go from GPT-4 to GPT-5 soon, via OpenAI o1, Sora and more, is but a ‘battery-powered bicycle’ for the mind, as I said above. For now.
But for someone who’s been a technology analyst for over three decades, it’s gratifying to see the challenge and opportunity boiled down to three words.
I prefer to look at the optimistic possibilities (what I call OTSOG) while having eyes wide open for the pessimistic curves. They’re bound to come, both fast and furious.
All because we’re kinda here:
Tim Urban described the chart above, and much more on AI, in a two-part series back in 2015. It’s still worth reading. That was two years before Google’s famous “Attention Is All You Need” paper that kicked it all off for this version of AI. And OpenAI, to its credit, ran with it hard.
“deep learning worked” indeed.
Took decades. And we’re just getting started.
As I said when I started writing about AI: Reset to Zero 489 days ago today:
“AI is coming. Ready or not.”
“This explores the glass half full. By a seasoned tech explorer.”
Sam’s more-than-glass-half-full words are helpful context for the road ahead.
Perhaps just zippier bicycles for now.
“we the compute powered people”.
And all because “deep learning worked”.
In all its reinforcement-learning-loop-driven ‘matrix math’ glory.
Our tools are really getting better.
Slowly to some, but surely. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)