AI: Making AI look like it's 'thinking'. RTZ #630
...making anthropomorphizing AI a feature, not a bug
Imagine if, in an alternate universe, ‘Artificial Intelligence’ (aka AI today) was known as ‘Computer Informatics’, or ‘CI’.
In that world, billions of mainstream users would just think of ‘CI’ as a super-powerful computing tool to help them think, reason, and do things in a computer-automated way far more efficiently. Like riding a bike, driving a car, or flying, instead of walking. On human legs. Just a tool. To augment our capabilities.
Not an alternative to human thinking. Not something that could take over from humans. In that universe, with their ‘CI Tech Wave’ instead of our AI Tech Wave, there would be no existential, fear-intensified regulatory and societal debates about the civilization-extinguishing ‘doomer’ possibilities of ‘CI’. Computer Informatics. Why would that be deemed scary?
No fear of the ‘AGI’ (Artificial General Intelligence) that we’re marching towards today. They would have been smart enough to call it ‘GCI’ (General Computer Informatics). Just a more powerful tool.
No Biggie.
Without attributing ‘Intelligence’ to these probabilistic computation systems, there would be no need to add terms like ‘Reasoning’ and ‘Agents’. They’d just be computers and software doing things far more complex computationally than spreadsheets, word processors and automatic text suggestions. Even while still using gobs of Nvidia AI GPUs and OpenAI’s LLMs and reasoning software. But still just computers.
Like the computers and software doing the ‘AI’ things in our universe today, on the way to ‘AGI’.
Alas, in our universe, we’ve just really gone down the ‘AI’ emulating humans road. We’ve infused human capabilities into our computing systems long ago. First in our scifi, and now in the very way we research, innovate, develop, and deploy ‘CI’, umm, ‘AI’, in our world. We’ve anthropomorphized our nascent probabilistic computers in the same way we’ve done with our dogs and cats.
All this comes to mind as our leading AI companies are racing to give ‘human thinking’ characteristics to our latest AI creations.
The Washington Post explains pithily in “The hottest new idea in AI? Chatbots that look like they think”:
“How Chinese start-up DeepSeek triggered a rush to launch chatbots that “reason.””
“Chinese start-up DeepSeek recently displaced ChatGPT as the top-ranking artificial intelligence app, in part by dazzling the public with a free version of the hottest idea in AI — a chatbot that “thinks” before answering a user’s question.”
“The app’s “DeepThink” mode responds to every query with the text, “Thinking…,” followed by a string of updates that read like the chatbot talking to itself as it figures out its final answer. The monologue unspools with folksy flourishes like, “Wait,” “Hmm,” or “Aha.””
Add that innovation to the range of other DeepSeek innovations and implications I’ve charted and discussed here and here in recent days.
Indeed, Perplexity, a US AI search engine I’ve discussed before, uses a similar UI (user interface) technique, with a spinning icon as it seems to ‘reason’ its way through user queries. Most of the time, the animations are clever visual ways to buy time, to hide the ‘latency’ of their AI compute as the systems process the user queries.
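The latency-hiding trick itself is simple. As a minimal Python sketch, with a hypothetical stand-in for a slow model API call, an app can animate a ‘Thinking…’ indicator on a background thread while the real work finishes:

```python
import itertools
import sys
import threading
import time


def slow_model_call() -> str:
    """Hypothetical stand-in for a slow LLM API call (~0.5s of latency)."""
    time.sleep(0.5)
    return "The answer is 42."


def answer_with_spinner() -> str:
    """Show a 'Thinking...' spinner while the model call runs.

    The spinner conveys nothing about the computation; it just masks
    latency, the way chatbot UIs do.
    """
    done = threading.Event()

    def spin():
        # Cycle spinner frames until the main thread signals completion.
        for frame in itertools.cycle("|/-\\"):
            if done.is_set():
                break
            sys.stderr.write(f"\rThinking... {frame}")
            sys.stderr.flush()
            time.sleep(0.1)
        sys.stderr.write("\r" + " " * 20 + "\r")  # clear the line

    t = threading.Thread(target=spin)
    t.start()
    try:
        result = slow_model_call()
    finally:
        done.set()
        t.join()
    return result
```

The user experiences ‘thought’; the code is just a progress indicator running concurrently with an opaque network call.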
The Washington Post continues:
“Chatbots that yap to themselves before answering are now spreading as American rivals race to outdo DeepSeek’s viral moment. This style of AI assistant can be more accurate on some tasks, but also mimics humans in ways that can hide its limitations.”
“The self-talk technique, sometimes dubbed “reasoning,” became trendy in top artificial intelligence labs late last year, after OpenAI and Google released AI tools that scored higher on math and coding tests by monologuing through problems, step by step.”
“At first, this new type of assistant wasn’t available to the masses: OpenAI released a system called o1 in December that cost $200 a month and kept its inner workings secret. When DeepSeek launched its “thinking” app free, and also shared the R1 reasoning model behind it, a developer frenzy ensued.”
“People are excited to throw this new approach at every possible thing,” said Nathan Lambert, an AI researcher for the nonprofit Allen Institute for AI.”
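Mechanically, that monologue is just more model output. As a sketch, assuming DeepSeek-R1-style `<think>...</think>` delimiters in the raw text (other models use different, often hidden, formats), an app can split the ‘thinking’ from the answer like this:

```python
import re


def split_reasoning(raw: str) -> tuple[str, str]:
    """Separate a model's 'thinking' monologue from its final answer.

    Assumes the monologue is wrapped in <think>...</think> tags, as in
    DeepSeek-R1's raw output; this is an illustrative parser, not any
    vendor's actual implementation.
    """
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if match is None:
        return "", raw.strip()          # no monologue to show
    thought = match.group(1).strip()    # the part UIs stream as "Thinking..."
    answer = raw[match.end():].strip()  # the final answer shown to the user
    return thought, answer


raw = "<think>Wait, 17 * 3... Hmm. 17 * 3 = 51. Aha.</think>The answer is 51."
thought, answer = split_reasoning(raw)
# thought -> "Wait, 17 * 3... Hmm. 17 * 3 = 51. Aha."
# answer  -> "The answer is 51."
```

The folksy “Wait” and “Hmm” flourishes the Post describes live in that first span of text; the UI choice to stream it verbatim is what makes the model look like it’s thinking.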
That triggered a rapid response by US LLM AI companies to DeepSeek’s approach:
“In the two weeks since DeepSeek’s rise tanked U.S. tech stocks, OpenAI made some of its reasoning technology free inside ChatGPT and launched a new tool built on it called Deep Research that searches the web to compile reports.”
“Google on Wednesday made its competing product, Gemini 2.0 Flash Thinking Experimental, available to consumers for the first time, free, via its AI app Gemini.”
“The same day, Amazon’s cloud computing division said it was betting on “automated reasoning” to build trust with users. The next day, OpenAI’s ChatGPT started showing users polished translations of its raw “chains of thought” in a similar way to DeepSeek. (Amazon founder Jeff Bezos owns The Washington Post.)”
And now AI Reasoning is on the menu of accelerating AI investments here:
“U.S. companies will soon spend “hundreds of millions to billions” of dollars trying to supercharge this approach to AI reasoning, Dario Amodei, chief executive of Anthropic, the maker of the chatbot Claude, predicted in an essay on the implications of DeepSeek’s debut on U.S.-China competition.”
“The flood of investment and activity has boosted the tech industry’s hopes of building software as capable and adaptable as humans, from a tactic first proven on math and coding problems. “We are now confident we know how to build AGI,” or artificial general intelligence, OpenAI CEO Sam Altman wrote in a blog post last month.”
“Google’s vice president for its Gemini app, Sissie Hsiao, said in a statement that reasoning models represent a paradigm shift. “They demystify how generative AI works — making it more understandable and trustworthy by showing their ‘thoughts,’” while also helping with more complex tasks, she said.”
““As we introduce reasoning models to more people, we want to build a deeper understanding of their capabilities and how they work” to create better products, OpenAI spokesperson Niko Felix said in a statement. “Users have told us that understanding how the model reasons through a response not only supports more informed decision-making but also helps build trust in its answers.””
And that means emulating human-like voice behaviors, including in the multimodal Voice AIs that are going mainstream. Complete with artificial, human-like inflections, ‘umms’ and ‘ahhs’. Coming to hundreds of millions of devices around us.
We are racing to do the same with AI Reasoning and Agents:
“The AI industry appears to be headed in a different direction. Reasoning models widely released since Silicon Valley’s DeepSeek shock include design features that, like those in the Chinese app, encourage consumers to believe that software’s “thoughts” show it reasoning like a human.”
“On the ChatGPT homepage, a “Reason” mode button features prominently in the chat box. In a post on X, Altman called “chain of thought” a feature where the AI “shows its thinking.””
“To an everyday user it feels like gaining insight into how an algorithm works,” said Sara Hooker, head of the research lab Cohere for AI. But it’s a way to boost performance, not peek under the hood, she said.”
“Ethan Mollick, a professor who studies AI at the Wharton School of the University of Pennsylvania, said seeing a chatbot’s supposed inner monologue can trigger empathy.”
Precisely, using computers and software:
“Compared with ChatGPT’s flatter tone, responses from DeepSeek’s R1 seemed “neurotically friendly and desperate to please you,” he said.”
“We’re kind of seeing this weird world where the hardcore computer science is aligning with marketing — it’s not clear if even the creators know which is which.”
And it’s just the beginning.
In a post on September 21, 2023, titled “Don’t Anthropomorphize the AIs”, I argued for the need to be wary of giving human-like characteristics to our computers:
“As we see smarter ‘Smart Agents’, ‘AI Companions’, and ‘Digital Twins’ of everyone we can imagine, and am sure some we can’t imagine, this question of consciousness is likely to persist. Human brains are wired that way.”
“Remember Tom Hanks’ memorable performance with ‘Wilson’ in ‘Cast Away’, the highest-grossing film of 2000. It won Hanks (not ‘Wilson’) a nomination for Best Actor at the 73rd Oscars.”
“Our brains anthropomorphize everything from our pets to our volleyballs. Let’s not go the same direction for AI just yet.”
Too late.
We’re turning that ‘bug’ into an active ‘feature’ in this next AI reasoning and agentic phase of the AI Tech Wave.
And leaning into the human characterizations of our probabilistic computing systems. Not ‘Computer Informatics’, but ‘Artificial Intelligence’. We’re even calling their inputs and outputs ‘Intelligence Tokens’.
Language matters.
We’re going all in. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)