AI: How AI 'Stands to Reason' (part 1 of 2). RTZ #419
...how OpenAI, Google et al. are starting to define their levels on the path to AGI
OpenAI, as I outlined on Sunday, is reframing their path to ‘AGI’ (Artificial General Intelligence). And they’re also doing it, indirectly, for the AI industry as a whole. The timing to AGI is a hotly debated topic for the industry, as I discussed recently.
OpenAI is doing their reframing with an iterative process progressing through five Levels: Level 1 (Chatbots, today), Level 2 (Reasoning), Level 3 (Agents), Level 4 (Innovators), and Level 5 (Organizations).
In the near term, as I highlight in this video clip, the holy grails of AI are Agentic ‘Reasoning’ and ‘Agents’. So let’s see how it ‘all stands to reason’ for now.
As I explained Sunday, Levels 2 and 3 are areas on which the industry has already been well focused, developing and honing the ‘Reasoning’ of their LLM and SLM AI models (Large and Small Language Models).
These span both what I call the ‘small r’ and ‘Big R’ AI Reasoning, which I describe in this video clip from a podcast with Ram Ahluwalia of Lumida Wealth.
OpenAI is not alone in this approach to AGI.
Google is on a similar five-level path towards a reframed AGI, as Axios highlights today in ‘OpenAI nears ‘Reasoning’-capable AI’:
“OpenAI's five-level AI ladder resembles similar schemes for understanding progress toward fully autonomous vehicles.”
“A paper by researchers at Google's DeepMind, first posted last year, outlined another five-level breakdown for measuring AI advances.”
Here’s how Google describes their approach to AI Reasoning Levels via Gemini and other AI technologies:
“We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors. This framework introduces levels of AGI performance, generality, and autonomy, providing a common language to compare models, assess risks, and measure progress along the path to AGI.”
“To develop our framework, we analyze existing definitions of AGI, and distill six principles that a useful ontology for AGI should satisfy. With these principles in mind, we propose "Levels of AGI" based on depth (performance) and breadth (generality) of capabilities, and reflect on how current systems fit into this ontology. We discuss the challenging requirements for future benchmarks that quantify the behavior and capabilities of AGI models against these levels.”
“Finally, we discuss how these levels of AGI interact with deployment considerations such as autonomy and risk, and emphasize the importance of carefully selecting Human-AI Interaction paradigms for responsible and safe deployment of highly capable AI systems.”
Google’s Levels, laid out in a detailed ‘Table 1’ in the paper, go from L1 ‘Emerging AI’ (today), to L2 ‘Competent AI’ soon, to L3 ‘Expert AI’, to L4 ‘Virtuoso AI’, and eventually to L5 ‘Superhuman AI’, aka ‘ASI’ (Artificial Superintelligence). That last one is the one so many are fearful about, with AI super-luminaries like ex-OpenAI co-founder & AI research guru Ilya Sutskever recently founding Safe Superintelligence Inc. (SSI).
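For readers who like to see taxonomies as data, the ladder above can be sketched as a small Python structure. This is purely an illustrative encoding of the level names described in the paper, not anything DeepMind publishes as code:

```python
from enum import IntEnum
from typing import Optional

class AGILevel(IntEnum):
    """DeepMind's proposed performance levels on the path to AGI (Table 1)."""
    EMERGING = 1    # 'Emerging AI' -- today's frontier systems
    COMPETENT = 2   # 'Competent AI'
    EXPERT = 3      # 'Expert AI'
    VIRTUOSO = 4    # 'Virtuoso AI'
    SUPERHUMAN = 5  # 'Superhuman AI', aka ASI

def next_level(level: AGILevel) -> Optional[AGILevel]:
    """Return the next rung on the ladder, or None at the top."""
    return AGILevel(level + 1) if level < AGILevel.SUPERHUMAN else None

print(next_level(AGILevel.EMERGING))   # AGILevel.COMPETENT
print(next_level(AGILevel.SUPERHUMAN)) # None
```

The point of the exercise: the framework is an ordered scale, so each level is only meaningful relative to the ones below and above it, which is exactly what makes benchmarking against it so hard.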
To that point, Axios continues:
“Fun fact: Neither OpenAI's nor Google's five levels make reference to the possibility that AI systems might become autonomous — able to act on their own initiative without human direction.”
“Also, neither addresses the even farther-out prospect that AIs might become sentient or self-aware.”
Google’s paper also outlines, in Table 2, a further set of Levels classifying AI autonomy as capabilities evolve. These go from Level 0 ‘No AI’, to L1 ‘AI as a Tool’, L2 ‘AI as a Consultant’, L3 ‘AI as a Collaborator’, L4 ‘AI as an Expert’, and L5 ‘AI as an Agent’. Interestingly, L5 on this autonomy table lines up most closely with L5 ‘Superhuman AI’ in Table 1 of the paper. With a lot of detailed nuance in between.
Again, all these levels in BOTH TABLES have detailed specifications, qualitative and some quantitative. Timelines of course stretch into the years to come.
The summary to all this for now is pithily stated by Axios again:
“Our thought bubble: OpenAI, along with Google, Meta, Anthropic and other key competitors, is spending billions to "get to" AGI, so it's good that the company is devoting thought and care to defining what it means by that vague term.”
“The catch is that these companies might just be transferring their hand-waving from one set of words to another.”
“"Human-level problem solving" and "reasoning" could be as tough to specify as "artificial general intelligence."”
I agree that some of this may ‘just be transferring from one set of words to another’ in the early days of this AI Tech Wave.
Which is fine, as the deep fundamentals of the ‘Reasoning’ and ‘Agent’ technologies, far less the levels beyond, are just being invented, much less actually deployed to mainstream audiences. Remember, Apple doesn’t use ‘Levels’, but they’re reframing ‘AI’ as ‘Apple Intelligence’, as I’ve discussed in depth. Meta has its own open source approach to ‘Meta AI’, and so on.
It’s complex, in-the-weeds stuff, barely understood by its inventors, and being invested in to run at Scale. More on that in ‘part 2’ of ‘AI Stands to Reason’ tomorrow. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)