AI: Ilya's back, focused on AI 'Safe Superintelligence'. RTZ #393
...ex-OpenAI co-founder digs in on even safer AI
The AI drama continues in this young AI Tech Wave. We now know what OpenAI co-founder and LLM AI luminary Ilya Sutskever is going to be doing next. Since casting the catalyzing vote to oust co-founder and CEO Sam Altman, then promptly reversing himself to bring him back, Sutskever has kept his next steps shrouded in mystery.
Now, after recently leaving OpenAI once and for all, he’s announcing his next step: founding ‘Safe Superintelligence’, a new AI research shop based in the US and Israel. In an ‘exclusive interview’ with Bloomberg, in the piece “Ilya Sutskever Has a New Plan for Safe Superintelligence”, we learn:
“OpenAI’s co-founder discloses his plans to continue his work at a new research lab focused on artificial general intelligence.”
“For the past several months, the question “Where’s Ilya?” has become a common refrain within the world of artificial intelligence. Ilya Sutskever, the famed researcher who co-founded OpenAI, took part in the 2023 board ouster of Sam Altman as chief executive officer, before changing course and helping engineer Altman’s return. From that point on, Sutskever went quiet and left his future at OpenAI shrouded in uncertainty. Then, in mid-May, Sutskever announced his departure, saying only that he’d disclose his next project “in due time.””
“Now Sutskever is introducing that project, a venture called Safe Superintelligence Inc. aiming to create a safe, powerful artificial intelligence system within a pure research organization that has no near-term intention of selling AI products or services. In other words, he’s attempting to continue his work without many of the distractions that rivals such as OpenAI, Google and Anthropic face. “This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever says in an exclusive interview about his plans. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”
“Sutskever declines to name Safe Superintelligence’s financial backers or disclose how much he’s raised.”
“As the company’s name emphasizes, Sutskever has made AI safety the top priority. The trick, of course, is identifying what exactly makes one AI system safer than another or, really, safe at all. Sutskever is vague about this at the moment, though he does suggest that the new venture will try to achieve safety with engineering breakthroughs baked into the AI system, as opposed to relying on guardrails applied to the technology on the fly. “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” he says.”
“Sutskever has two co-founders. One is investor and former Apple Inc. AI lead Daniel Gross, who’s gained attention by backing a number of high-profile AI startups, including Keen Technologies. (Started by John Carmack, the famed coder, video game pioneer and recent virtual-reality guru at Meta Platforms Inc., Keen is trying to develop an artificial general intelligence based on unconventional programming techniques.) The other co-founder is Daniel Levy, who built a strong reputation for training large AI models working alongside Sutskever at OpenAI. Safe Superintelligence will have offices in Palo Alto, California, and Tel Aviv. Both Sutskever and Gross grew up in Israel.”
A lot more in the piece worth absorbing, but the core is a return to the pursuit of AGI (artificial general intelligence), the holy grail of the AI industry. It’s been at the heart of the AI doomer debate that’s been running since the beginning of this AI Tech Wave.
All the way back to 2015, when OpenAI was but a gleam in the eye of Tesla/xAI co-founder Elon Musk, spurred in part by his AI debates with Google co-founder Larry Page. Musk, by his own account, was instrumental in bringing Ilya over to join OpenAI.
The AI industry has been whipsawed between AI Fear and Greed since then, with the amplitude of the swings getting more pronounced. And OpenAI has been at center stage for it all, with companies like Anthropic spun out by former OpenAI leaders who also wanted to make AI ‘safer’. So now the spectrum of AI companies founded by OpenAI alumni seems to run from OpenAI to Anthropic to Safe Superintelligence.
The media and regulators will know exactly who to ping for the right quote at the right time as the AI Tech Wave continues to do its thing. It’s early days, so there’s a lot of innovation, building, and drama to come. Especially as OpenAI moves on to GPT-5 and beyond.
As I’ve said before, while Scaling AI is important to get to the true promise of these technologies, Scaling AI Trust is Job #1.
The drama around OpenAI co-founders and others in the AI industry is but an internal debate over how to get there. And as I’ve also outlined, other companies like Apple have their own, more prosaic ways of hopefully getting to the same place: safer, but really useful versions of AI products and services.
In the meantime, AI Fear will continue to ebb and flow, ultimately giving way to AI sunlight in the morning.
Especially as we either trudge or race towards AGI, and billions of mainstream folk really get to see what AI is all about. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)