This post is about fears sometimes getting in the way of seeing good things happening in front of us.
Stanford University's Institute for Human-Centered Artificial Intelligence (HAI) continues its hard work to make sense of the flood of open and ‘closed’ LLM AI models so far in 2024, and to outline the emerging benefits vs. the risks. Following up on last year’s Foundation Model Transparency Index, which I discussed, Stanford this week published its 7th annual AI Index Report for 2024.
Of the top ten takeaways summarized, the one I’d like to highlight is #7:
“The data is in: AI makes workers more productive and leads to higher quality work.”
“In 2023, several studies assessed AI’s impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI’s potential to bridge the skill gap between low- and high-skilled workers. Still other studies caution that using AI without proper oversight can lead to diminished performance.”
I focus on that one because, this early on, the benefits of AI models are not yet broadly understood by users, businesses, and governments.
Indeed, most of the ten takeaways are positive, but #9 highlights the current mood around AI:
“9. The number of AI regulations in the United States sharply increases.
The number of AI-related regulations in the U.S. has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.”
Unlike every previous tech wave, the AI wave has been prematurely burdened with fears of its ‘existential’ risks. And most governmental efforts here and abroad are focused on trying to act on those fears: ‘Fire, Aim, Ready’ for the most part, rather than the other way around.
This is happening even before the AI technologies are fully invented and deployed at scale with the appropriate safeguards. Indeed, as we saw last November, the company leading the ‘ChatGPT’ AI revolution went through an unprecedented ‘Governance Revolution’, whose repercussions are still playing out, and the broader issue still dominates AI and technology circles here and abroad.
Just note last week’s speech by Margrethe Vestager, the EU’s top tech regulator, at the Institute for Advanced Study in Princeton, New Jersey (yes, that one, once headed by J. Robert Oppenheimer, THAT Oppenheimer). As Axios summarized in “AI is repeating A-bomb’s story, says EU tech overseer”:
“Artificial intelligence is ushering in a "new world" as swiftly and disruptively as atomic weapons did 80 years ago, Margrethe Vestager, the EU's top tech regulator, told a crowd Tuesday at the Institute for Advanced Study in Princeton, New Jersey.”
“Threat level: In an exclusive interview with Axios afterward, Vestager said that while both the A-bomb and AI have posed broad dangers to humanity, AI comes with additional "individual existential risks" as we empower it to make decisions about our job and college applications, our loans and mortgages, and our medical treatments.”
"If we deal with individual existential threats first, we'll have a much better go at dealing with existential threats towards humanity," Vestager told Axios.”
“Humans have never before been "confronted with a technology with so much power and no defined purpose," she said.”
“The Institute for Advanced Study was famously led by J. Robert Oppenheimer from 1947 to 1966, after his Manhattan Project developed the world's first nuclear weapons.”
Whew! Talk about spot-on staging for the message to be planted.
But even those most concerned about nuclear energy risks over the last few decades are now coming around to the view that, with the benefit of historical hindsight, a careful, continued development of SAFE nuclear power, even the FISSION kind, might have helped alleviate some of the larger-scale issues the world is wrestling with collectively today.
Coming back to AI, we’re debating the perceived existential risks while the technology is barely developed and useful, especially when compared to previous technology waves at a similar stage. And that’s even with the exponential progress expected in software and hardware ahead.
As usual, I’ll add the caveat that despite the ongoing flood of AI models from companies large and small, it’s really early in the AI Tech Wave in 2024.
It’s kind of like trying to evaluate and assess the flood of operating systems and personal computing platforms in the late 1970s and early 1980s, before the market solidified around the ‘IBM PC’ in 1982, powered by Microsoft’s swiftly licensed and re-licensed MS-DOS. Or the flood of commercial browsers in 1994-1995, before the market coalesced, for the most part, around the Netscape browser.
Indeed, as this Axios summary of Stanford’s AI Index Report points out, the flood of LLM AIs continues in 2024:
“By the numbers: Other key findings in the Stanford AI index include that U.S. organizations built a greater number of significant AI models (61) than those in the EU (21) or China (15).”
“The U.S. led on private investment in AI, with $67.2 billion invested, but China led on patents, with 61% of all AI patents.”
“In 2023, 108 newly released foundation models came from industry, but only 28 originated from academia.”
That of course continues this year, with tech companies ramping up LLM AI investment into the hundreds of billions, if not a trillion-plus, over the next few years, after factoring in new AI data centers, power, chips, and talent. The latest to follow Microsoft and OpenAI’s massive AI investments is of course Google. Its AI head Demis Hassabis confirmed the company’s AI investments around its Gemini models and related efforts will likely exceed $100 billion in the near term. Meta, Amazon, and others aren’t far behind.
And yesterday, I highlighted how, even this early in the AI Tech Wave, a whole host of AI startups are zigzagging across different technical approaches to improve AI model scaling.
So let’s allow the technologies to get baked and built. And allow the entrepreneurs to breathe and dream. And the investors to follow their investment instincts, right and wrong. And markets to do their ‘invisible hand’ thing.
And keep the fears in check as much as possible. The AI benefits are barely getting started, both individually and collectively. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)
Art imitates life, hysteria imitates art. Your write-up today made me think about how fears are harnessed.
In the modern imagination, over the last few decades, there have been seminal films that have used AI as the antagonist in their plots to highlight how a higher-than-human intelligence outsmarts humans or outright destroys mankind. Some prominent films that come to mind are 2001: A Space Odyssey, Blade Runner, Ex Machina, and of course the highly popular Terminator and Matrix series. These masterful stories about existential threats, with their amazing displays of post-apocalyptic worlds, are part of collective human memory.
Politicians included, these are the references that most of us have to work with when we hear about AI, helping us shout down tech advancements with rules and control.
Anyone who watched the unbelievable Tom Cruise caper, Mission: Impossible - Dead Reckoning Part One, released on 19th June 2023, could be forgiven for thinking that wicked AI could be harnessed in a submarine and controlled with a key. Wait, there is a Part Two coming out soon.
Hysteria sells. What better way to express the fear of the unknown and FOMO.
Sci-fi, in both books and movies, has been a key driver of the current premature concerns around AI. But as you said, they’re so masterful in their storytelling when done well!