As the Wall Street Journal highlights today, as far as AI is concerned, it may indeed be the best of times:
“The Nasdaq has risen 32% this year—the Dow Jones Industrial Average is up 3.4%—while Microsoft shares have climbed 41% and Nvidia shares have almost tripled on the back of optimism that AI will bolster their businesses.”
And as I outlined just a few days ago, the prospects for AI, both in the second half of 2023 and over the next three years, are quite encouraging.
And so it was a sharp contrast to see OpenAI yesterday describe their plans for the next four years. The company that kicked off the global AI gold rush is now using distinctly different language to describe their AI ambitions:
“Introducing Superalignment: We need scientific and technical breakthroughs to steer and control AI systems much smarter than us. To solve this problem within four years, we’re starting a new team, co-led by Ilya Sutskever and Jan Leike, and dedicating 20% of the compute we’ve secured to date to this effort.”
They’re talking about ‘superalignment’ for the ‘superintelligence’ that they think is within reach for their systems in the next few years. Note they’ve transitioned from calling it ‘Artificial General Intelligence’, or AGI, to ‘superintelligence’.
“Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.
While superintelligence seems far off now, we believe it could arrive this decade.”
Yes, they’re discussing AI existential risks again. They then go on to describe the crux of the issue:
“Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs.”
They’re talking above about reinforcement learning from human feedback (RLHF), which, as I’ve explained before, is at the heart of how AI systems can be meaningfully improved going forward.
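To make the quoted point concrete, here is a minimal, self-contained Python sketch of the preference-learning step at the heart of RLHF, with toy numpy vectors standing in for a real model’s responses. Everything in it, from the linear reward model to the simulated labeler, is an illustrative assumption rather than OpenAI’s implementation; the thing to notice is that the only training signal is a human judging which of two outputs is better.

```python
# Toy sketch of RLHF's preference-learning step (illustrative only).
# A "reward model" is fit from pairwise human judgments: the labeler
# picks the better of two responses, and the model learns to agree.
import numpy as np

rng = np.random.default_rng(0)

dim = 8
w = np.zeros(dim)                   # reward model parameters (to be learned)
hidden_pref = rng.normal(size=dim)  # stand-in for what the human labeler values

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(2000):
    # Two candidate "responses"; the human picks the one they prefer.
    a, b = rng.normal(size=dim), rng.normal(size=dim)
    human_prefers_a = (hidden_pref @ a) > (hidden_pref @ b)
    chosen, rejected = (a, b) if human_prefers_a else (b, a)

    # Bradley-Terry objective: raise P(chosen beats rejected)
    # = sigmoid(reward(chosen) - reward(rejected)), via gradient ascent.
    p = sigmoid(w @ chosen - w @ rejected)
    w += lr * (1.0 - p) * (chosen - rejected)

# The learned reward model now stands in for the human on new pairs;
# a policy (e.g. via PPO) would then be tuned to maximize its score.
test_a = rng.normal(size=(1000, dim))
test_b = rng.normal(size=(1000, dim))
model_pick = (test_a @ w) > (test_b @ w)
human_pick = (test_a @ hidden_pref) > (test_b @ hidden_pref)
print(f"reward model matches the labeler on {np.mean(model_pick == human_pick):.0%} of held-out pairs")
```

The limit OpenAI is pointing at falls straight out of this structure: if the human labeler can no longer reliably tell which output is better, the reward model has nothing trustworthy to learn from, and everything downstream of it inherits the problem.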
So it’s a commitment of meaningful resources by this 500-person company that is now eight years old, has raised over $13 billion, and is said to be raising another $100 billion soon.
“We are dedicating 20% of the compute we’ve secured to date over the next four years to solving the problem of superintelligence alignment. Our chief basic research bet is our new Superalignment team, but getting this right is critical to achieve our mission and we expect many teams to contribute, from developing new methods to scaling them up to deployment.”
They’re talking about tens of thousands of GPUs, worth hundreds of millions of dollars and more, in a GPU-constrained AI gold rush.
And remember, this is the Chief Scientist of OpenAI and his team talking. He has two decades of experience working with AI at the deepest technical levels, and his team is the one that got Microsoft to see the AI light in 2019 and commit billions in investment and compute to build today’s GPT-4 and ChatGPT systems, with their current ‘Sparks’ of AGI, according to Microsoft researchers.
So if anyone understands the promise and perils ahead, it’s these folks. And what they’re describing is a respect for truly geometric growth in the capabilities of AI. Humans have a particularly hard time internalizing these curves. Ours is a linear world most of the time.
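A toy calculation shows how quickly linear intuition fails. The doubling rate below is purely an assumed illustration, not a capability forecast:

```python
# Linear vs. geometric growth over a decade (illustrative numbers only).
linear, geometric = 1.0, 1.0
for year in range(1, 11):
    linear += 1.0      # linear: gains a fixed unit each year
    geometric *= 2.0   # geometric: doubles each year
    print(f"year {year:2d}: linear {linear:4.0f} vs geometric {geometric:5.0f}")
# After 10 years the linear curve sits at 11; the geometric one at 1,024.
```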
To put it differently: IQ tests are designed to have an average score of 100, with a standard deviation of 15. About 68% of people score between 85 and 115, roughly 98% score below 130, and only about 2% score above 130. Psychologists revise the tests every few years to maintain that average of 100.
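For the curious, those percentages fall straight out of the bell curve the tests are normed to. Here is a quick check, assuming only the standard design parameters of a mean of 100 and a standard deviation of 15:

```python
# Verifying the IQ percentages from the normal distribution (mean 100, SD 15).
from math import erf, sqrt

MEAN, SD = 100.0, 15.0

def cdf(x):
    """Fraction of the population scoring below IQ x on a normed test."""
    return 0.5 * (1.0 + erf((x - MEAN) / (SD * sqrt(2.0))))

within_one_sd = cdf(115) - cdf(85)   # about 68.3%
below_130 = cdf(130)                 # about 97.7%
print(f"85-115: {within_one_sd:.1%}  below 130: {below_130:.1%}  above 130: {1 - below_130:.1%}")
```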
Current measurements of AI systems on these tests show that:
“Bing AI and GPT4 Has an IQ of 114 and is Smarter Than the Average Human.
GPT4 and Google’s unreleased Palm system are getting IQ scores above the average human in IQ tests and tests that correlate to IQ.”
What OpenAI is preparing for above is a world where GPT-5 or GPT-6 starts to consistently score above human genius IQ levels; Einstein and other geniuses would score at 160 and above.
Remember, these are apples-and-oranges comparisons, and intelligence is not fungible across machines and humans. IQ tests are but one of many possible measures, and a lot more goes into judging intelligence. And it’s VERY important not to anthropomorphize the AI systems of today and tomorrow. Oh, and they’re not sentient.
But some of the smartest people in AI are preparing technologies and approaches that better enable us to ‘steer and control’ AI going forward, and are expending meaningful resources on the task, all while being mindful of the exciting opportunities ahead for AI to do societal good. Stay tuned.