These last few days, AI existentialism seems to be at near-peak levels, following OpenAI’s recent four-year plan to ‘Superalign Superintelligence’. Now we have a new piece by well-known tech reporter Kevin Roose, discussing the pre-launch mood at Anthropic, the hottest ‘AI Native’ startup after OpenAI, backed by Google no less:
“At Anthropic, the doom factor is turned up to 11.”
“A few months ago, after I had a scary run-in with an A.I. chatbot, the company invited me to embed inside its headquarters as it geared up to release the new version of Claude, Claude 2.”
Remember, Anthropic was founded by ex-OpenAI scientists who left because they thought OpenAI was moving too fast to commercialize next-generation Foundation LLM AI models:
“Worrying about A.I. is, in some sense, why Anthropic exists.”
“It was started in 2021 by a group of employees of OpenAI who grew concerned that the company had gotten too commercial. They announced they were splitting off and forming their own A.I. venture, branding it an ‘A.I. safety lab.’”
And fear about AI seems deeply embedded in the company’s DNA, despite its having raised over a billion dollars from Google and Salesforce, with the potential to raise billions more to build bigger LLM AI models. Kevin continues:
“I spent weeks interviewing Anthropic executives, talking to engineers and researchers, and sitting in on meetings with product teams ahead of Claude 2’s launch. And while I initially thought I might be shown a sunny, optimistic vision of A.I.’s potential — a world where polite chatbots tutor students, make office workers more productive and help scientists cure diseases — I soon learned that rose-colored glasses weren’t Anthropic’s thing.”
“They were more interested in scaring me.”
“In a series of long, candid conversations, Anthropic employees told me about the harms they worried future A.I. systems could unleash, and some compared themselves to modern-day Robert Oppenheimers, weighing moral choices about powerful new technology that could profoundly alter the course of history. (“The Making of the Atomic Bomb,” a 1986 history of the Manhattan Project, is a popular book among the company’s employees.)”
The whole piece is worth a read, if only to highlight how unusual this tech wave is relative to prior ones in its concentration of fear versus opportunity ahead.
The irony, of course, is that the leading industry players are rushing ahead to spend billions on ever-larger models and ‘Compute’. OpenAI is still focused on potentially raising another $100 billion beyond the $12-plus billion already raised from Microsoft. They’re in the lead with their GPT-4 model, reportedly with over a trillion parameters, and the race is on to bigger models from them and others.
There is no shortage of mega tech companies and well-funded ‘AI Native’ startups investing tens of billions in Foundation LLM AI models and GPU Compute over the rest of this year and next. As the chip research folks at Semianalysis note:
“OpenAI is keeping the architecture of GPT-4 closed not because of some existential risk to humanity but because what they’ve built is replicable. In fact, we expect Google, Meta, Anthropic, Inflection, Character, Alibaba, Tencent, ByteDance/(TikTok), Baidu, and more to all have models as capable as GPT-4 if not more capable in the near term.”
What they highlight is that the secret sauce for building new models is out there, the gold rush has started, and the capital is abundant, even though OpenAI retains distinct advantages from its current lead over others:
“Don’t get us wrong, OpenAI has amazing engineering, and what they built is incredible, but the solution they arrived at is not magic. It is an elegant solution with many complex tradeoffs. Going big is only a portion of the battle. OpenAI’s most durable moat is that they have the most real-world usage, leading engineering talent, and can continue to race ahead of others with future models.”
“They, other firms, and society in general can and will spend over one hundred billion on creating supercomputers that can train single massive model. These massive models can then be productized in a variety of ways. That effort will be duplicated in multiple countries and companies. It’s the new space race.”
“With AI there is tangible value that will likely come in the short term from human assistants and autonomous agents.”
So the toothpaste is definitely out of the tube. To repeat:
“Over the next few years, multiple companies such as Google, Meta, and OpenAI/Microsoft will train models on supercomputers worth over one hundred billion dollars.”
Nvidia alone will sell millions of its GPU chips, worth tens of billions of dollars, to companies all over the world, even with curbs on trade with China. Industry experts estimate over a million of Nvidia’s top-of-the-line H100 GPU chips (>$30,000 per chip) will ship by the end of 2023 alone, and that’s despite current GPU shortages. Chips are like potato chips: they will always make more.
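As a quick back-of-the-envelope check on that claim, here is a minimal sketch (in Python purely for illustration) using only the two figures cited above, roughly one million H100s and $30,000 per chip, both of which are estimates from the reporting rather than confirmed Nvidia numbers:

```python
# Back-of-the-envelope: implied H100 revenue from the figures cited above.
# Both inputs are the estimates quoted in the text, not confirmed Nvidia numbers.
h100_units_2023 = 1_000_000      # "over a million" H100s estimated to ship by end of 2023
price_per_chip_usd = 30_000      # ">$30,000 per chip" as a rough price floor

implied_revenue_usd = h100_units_2023 * price_per_chip_usd
print(f"Implied H100 revenue: over ${implied_revenue_usd / 1e9:.0f} billion")
# -> Implied H100 revenue: over $30 billion
```

Even with those rough inputs, the arithmetic lands north of $30 billion for a single product line, which is why “worth tens of billions of dollars” is, if anything, conservative.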
Meta, Microsoft, Google, Amazon, and others are already leaning in here, while companies like Apple have yet to show their cards. But the race is on toward larger LLM AI Foundation models with reinforcement learning loops powered by feedback from billions of users. And we haven’t even started in earnest on AI at the edge, on local devices.
So it’s OK to focus on safety and long-term AI risks, but the race is well underway, led by the same companies at the head of the AI doomerism line. Stay tuned.