Google has now gone through several phases of ‘Code Red’ reactions to OpenAI’s ‘ChatGPT moment’ last November, from nudges to shoves to the Empire now trying hard to bring its formidable LLM AI assets, Google Cloud datacenters, and Tensor TPU chips together as a unified whole. Remember, it was Google researchers who wrote the Transformer paper, “Attention Is All You Need”, that kicked off the current LLM AI boom in 2017.
The big Google AI reorganization occurred a couple of months ago, with Google Brain in California merged with DeepMind in London to form Google DeepMind, under the leadership of new Google AI czar Demis Hassabis, who now runs Google’s AI efforts globally out of London. The Verge’s editor-in-chief Nilay Patel published a detailed interview this week that offers some new glimpses into where the company is focusing its AI efforts in the coming months. Demis started out reassuring Nilay that they’re on their way to being organized as one team rather than the disparate teams Google has been famous for to date:
“No, no, for sure it’s one unified team. I like to call it a “super unit,” and I’m very excited about that. But obviously, we’re still combining that and forming the new culture and forming the new grouping, including the organizational structures. It’s a complex thing — putting two big research groups together like this. But I think, by the end of the summer, we’ll be a single unified entity, and I think that’ll be very exciting.”
“And we’re already feeling, even a couple of months in, the benefits and the strengths of that with projects like Gemini that you may have heard of, which is our next-generation multimodal large models — very, very exciting work going on there, combining all the best ideas from across both world-class research groups. It’s pretty impressive to see.”
Which leads to the question of product directions going forward:
“Nilay: Let’s put that next to some products. You said there’s a lot of DeepMind technology and a lot of Google products. The ones that we can all look at are Bard and then your Search Generative Experience. There’s AI in Google Photos and all this stuff, but focused on the LLM moment, it’s Bard and the Search Generative Experience.”
To which Demis responds:
“But you’re right, the current products are not the end state; they’re actually just waypoints. And in the case of chatbots and those kinds of systems, ultimately, they will become these incredible universal personal assistants that you use multiple times during the day for really useful and helpful things across your daily lives.”
“And I think we’re still far away from that with the current chatbots, and I think we know what’s missing: things like planning and reasoning and memory, and we are working really hard on those things. And I think what you’ll see in maybe a couple of years’ time is today’s chatbots will look trivial by comparison to I think what’s coming in the next few years.”
So smart agents are at least a couple of years away. Which leads to the inevitable question around AGI, or artificial general intelligence, which OpenAI has been reframing as ‘superintelligence’:
“I think there’s a lot of uncertainty over how many more breakthroughs are required to get to AGI, big, big breakthroughs — innovative breakthroughs — versus just scaling up existing solutions. And I think it very much depends on that in terms of timeframe.”
“Obviously, if there are a lot of breakthroughs still required, those are a lot harder to do and take a lot longer. But right now, I would not be surprised if we approached something like AGI or AGI-like in the next decade.”
So a decade away is what the Google AI head thinks is a reasonable time frame for AGI, as compared to the four-year imperative OpenAI has set for itself with ‘superaligning superintelligence’. It’s a safe Google statement with ample wiggle room, despite the immense amount of global open source and commercial AI research innovation underway in the space, from the hardware and chip tech stack on up.
Overall, one gets the sense Google is buttoning down its disparate AI efforts around the world, and slowly but surely productizing the bits that are ready for mainstream use, as safely and reliably as possible. If anything, they’re likely to err on the side of caution, which, given the stakes involved, is more likely a feature than a bug. Stay tuned.