Since OpenAI’s ChatGPT unleashed Generative AI in November 2022 and kicked off this AI Tech Wave, worries over AI’s existential risks have of course skyrocketed. And as I noted a few days ago, some of those Fears have receded from 2023 into 2024 thus far, displaced by a new Fear of Missing Out (FOMO) among commercial and sovereign interests worldwide.
But as Axios reports today, some parts of the US Military may be pumping “the brakes on generative AI”:
"Some branches of the U.S. military are hitting the brakes on generative AI after decades of Department of Defense experiments with broader AI technology.”
“Why it matters: As businesses race to put generative AI in front of customers everywhere, military experts say its strengths and limitations need further testing and evaluation in order to deploy it responsibly.”
“Driving the news: In a new essay in Foreign Affairs, Jacquelyn Schneider, the Hoover Institution's director of the wargaming and crisis simulation initiative, and AI researcher Max Lamparth write that large language models have a potentially dangerous tendency to go nuclear.”
‘WarGames’ movie vibes indeed. Again, sci-fi-driven fears come to the fore. Not quite Terminator’s Skynet, but a vector pointing in that direction.
The piece continues:
“When they tested LLMs from OpenAI, Anthropic and Meta in situations like simulated war games, the pair found the AIs suggested escalation, arms races, conflict — and even use of nuclear weapons — over alternatives.”
"It is practically impossible for an LLM to be taught solely on vetted high-quality data," Schneider and Lamparth write.”
“Zoom out: Older forms of machine learning-based AI are already deeply woven into the U.S. military, which uses it in everything from supply-chain analysis to interpreting satellite data.”
“But the emergence of generative AI has happened on a lightning time-scale that has confounded the Pentagon.”
The gap in risk appetite between the two coasts, and their cultures, seems wide indeed:
“"The risk-taking appetite in Washington is not very great. And the risk-taking appetite out here [in Silicon Valley] is unparalleled," former Secretary of State Condoleezza Rice told Axios at a Hoover Institution media roundtable at Stanford University this week.”
The various branches of the Military have a whole host of issues with AI for now, and the piece does a good job of summarizing them.
This despite some branches moving ahead, with the US Air Force recently picking “Anduril and General Atomics to Develop New Collaborative Combat Aircraft for Air Force”.
Anduril is a VC-backed company at the forefront of AI-driven drones and other military technologies, with many of its products seeing live deployment in Ukraine and elsewhere.
Despite these commissions and deployments, AI in the Military has another institutional bottleneck: DATA. Data sources for AI are a major issue I’ve highlighted for the broader AI Tech Wave as well:
“The military's approach to data ownership makes it hard for anyone in the Pentagon to make the case for generative AI, even the new DoD chief digital and AI officer, the experts said.”
"The services own their own data, they own their own acquisition of technologies," Schneider said.
"You have a team of lawyers that sit on those decisions. So we make it extremely complicated to be able to share data, acquire data, and to put all that data together" for generative AI implementation, per Schneider.”
This dynamic is worth pondering for industries beyond the Military. It is an institutional culture issue that pervades and permeates entire industries, from Healthcare to Education to Financial Services and so many others. It’s a human institutional bottleneck, not a technical one, often exacerbated by ‘Regulatory Capture’ by the incumbent ecosystem in the industry in question.
And AI will have to face it just as every tech wave before it has, from the Internet on back.
The Military’s ‘pause’ for now is worth noting for these broader institutional and industry reasons alone. But ultimately, given the ongoing exponential improvements in AI technologies, and the competitive pressures from other countries, China especially, the Military will find a way to navigate these issues. So will other industries. They will all want to ‘play the Game’, with safer parameter controls and AI ‘Agentic Workflows’ of course. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)