Yes, it’s a spin on the ‘please don’t feed’ signs we’ve all learned to ignore over the years. The quest to figure out if ‘AI is conscious’ continues, even at this earliest stage of LLM AI technologies and the resultant AI Tech Wave. A new paper titled “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness”, by a group of philosophers, neuroscientists and computer scientists, abstracts as follows:
“Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness.”
“We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive "indicator properties" of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them.”
And then the punchline:
“Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.”
So a repeat of what we know: that we really don’t know deep down how this AI tech works, but we’re going to continue to study it and prepare for ‘SuperAlignment’ of human interests with whatever interests these Foundation LLM AI technologies end up having. Oh, and we’re going to continue to invest tens of billions in the effort to make Foundation LLM AI many times more powerful than what exists today. Until we’re there. Fingers crossed.
The New York Times did a piece on the above paper titled “How to Tell if Your A.I. Is Conscious”, and no, they don’t have an answer on ‘how to tell’. But they do go on:
“The fuzziness of consciousness, its imprecision, has made its study anathema in the natural sciences. At least until recently, the project was largely left to philosophers, who often were only marginally better than others at clarifying their object of study. Hod Lipson, a roboticist at Columbia University, said that some people in his field referred to consciousness as “the C-word.” Grace Lindsay, a neuroscientist at New York University, said, “There was this idea that you can’t study consciousness until you have tenure.”
Nonetheless, a few weeks ago, a group of philosophers, neuroscientists and computer scientists, Dr. Lindsay among them, proposed a rubric with which to determine whether an A.I. system like ChatGPT could be considered conscious. The report, which surveys what Dr. Lindsay calls the “brand-new” science of consciousness, pulls together elements from a half-dozen nascent empirical theories and proposes a list of measurable qualities that might suggest the presence of some presence in a machine.”
The whole piece is worth reading, if only to see how many big human brains across so many fields are preparing society to brace for what may come with AI, super-intelligent or not.
The issue has come up before, ever since LLM AI and ChatGPT started showing us the possibilities of ‘emergent’ behaviors in AI, as the NYTimes reminded us:
“Advances in A.I. and machine learning are coming faster than our ability to explain what’s going on. In 2022, Blake Lemoine, an engineer at Google, argued that the company’s LaMDA chatbot was conscious (although most experts disagreed); the further integration of generative A.I. into our lives means the topic may become more contentious.”
As we see smarter ‘Smart Agents’, ‘AI Companions’, and ‘Digital Twins’ of everyone we can imagine, and I’m sure some we can’t, this question of consciousness is likely to persist. Human brains are wired that way.
Remember Tom Hanks’ memorable performance with ‘Wilson’ the volleyball in ‘Cast Away’, one of the highest-grossing films of 2000? It earned Hanks (not ‘Wilson’) a nomination for Best Actor at the 73rd Oscars.
Our brains anthropomorphize everything from our pets to our volleyballs. Let’s not go in the same direction with AI just yet.
Let the researchers from all the various disciplines continue their work. And let the AI companies keep working hard on making their AI systems safe, reliable, and super-aligned to us humans. Oh, and on making us humans better at everyday life, work and play using AI. And of course on growing the economy a bit using AI. While holding off on further anthropomorphizing the AIs in the meantime. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)