Every technology wave since the early days of computing has faced the perennial issue of open source vs closed, proprietary software. It’s coming up again in the context of LLM and Generative AI over the familiar issues of money and control, this time fueled by regulators who can be influenced by one camp vs the other.
Historically, the issues have centered on how each camp makes money from software, and how much control is traded for customer flexibility. What makes it different this time is that the debate is being used to sway regulators toward one camp or the other, taking advantage of the unique set of fears associated with AI. We’ve discussed many of these issues around open vs closed, and how the regulatory fears need to be balanced against the net good that AI technologies can ultimately do for society and the economy.
What’s brought it up again this week are the regulatory appeals being made by each camp as everyone comes back from the summer. As The Information notes:
“Artificial intelligence leaders and policymakers are divided on a key question: Are cutting-edge AI models too powerful to hand to just anyone?
That question has pitted companies such as Meta Platforms, which recently made the code for its conversational AI freely available on the internet, against rivals such as OpenAI that sell proprietary AI like ChatGPT but don’t share the code. OpenAI, Anthropic and other proprietary software makers have said governments should regulate the most capable AI models so bad actors can’t easily use them. That could hamstring Meta and startups that increasingly rely on open-source models.”
“Advances in open-source AI have thrust the question into discussions between tech executives and policymakers, people who have been involved in those meetings say. The topic also surfaced at the Senate’s AI Insight Forum last month. There, CEOs including Meta’s Mark Zuckerberg argued that open-source models were necessary for the U.S. to stay competitive, and that the tech industry could resolve concerns about their safety.”
“‘It’s better that the standard is set by American companies that can work with our government to shape these models,’ Zuckerberg said, rather than allowing other countries to lead on open-source code.”
I think Zuck has it right there on the key difference in the open vs closed software debate this time.
As this VentureBeat piece reminds us, “An open-source debate is as old as software”:
“The open-source software movement of the late ‘90s and early ‘00s produced iconic innovations like Mozilla’s Firefox web browser, Apache server software and the Linux operating system, which was the foundation of the Android OS that powers the majority of the world’s smartphones.”
The debate is not binary. There is a spectrum from open to closed, with the gradations determined by features, capabilities, and the permissiveness of the licenses on what can be done with the software. Even Meta’s Llama 2 LLM, which is categorized as open source, has gradations in its licensing from academic to commercial use, along with special provisions requiring companies with very large user bases (over 700 million monthly active users) to negotiate separate usage licenses with Meta before using the software. So, for example, Apple can’t just use Llama 2 without cutting a specific licensing deal with Meta.
Axios has a good summary of the debate and the various camps on either side:
“Open-source AI models, which let anyone view and manipulate the code, are growing in popularity as startups and giants alike race to compete with market leader ChatGPT.”
“Why it matters: The White House and some experts fear open-source models could aid risky uses — especially synthetic biology, which could create the next pandemic. But those sounding the alarms may be too late.”
“What's happening: The wide-open code helps little guys take on tech giants. But it also could help dictators and terrorists.”
“The startups are joined by Meta, which hopes to undercut Microsoft (OpenAI’s key backer), and by foreign governments looking to out-innovate the U.S.”
“Top government officials are freaked out by the national security implications of having large open-source AI models in the hands of anyone who can code.”
The industry has always figured out commercial terms between open and closed software across the tech waves, and this AI Tech Wave is going to be no different. What’s different this time is how either camp manages to sway regulators to its cause. The Information piece also has a useful chart of the various actors, placing OpenAI, Anthropic, and others in the closed camp; Meta, Databricks, and others in the open camp; and companies like Microsoft and Google in the middle.
I’d also put companies like Amazon, Nvidia, CoreWeave, and Hugging Face in the middle-to-open side of the spectrum, since, like Microsoft and Google, they cater to businesses large and small. And businesses of all stripes generally want the flexibility to choose between open and closed depending on their needs. The AI cloud compute business is worth hundreds of billions of dollars at least over the next few years, so the country that strikes the best regulatory balance between open and closed likely wins.
In a recent piece on US vs EU regulatory approaches to AI, I emphasized that the EU AI Act deliberations risked throwing the AI baby out with the bathwater, and applauded the US regulatory approach to date, which allows more flexibility for tech and AI companies large and small to experiment and innovate with open and closed software models, while staying vigilant on safety and national security. This so far stands in contrast with the EU approach, which may end up adding headwinds for European companies trying to innovate and compete in the global AI marketplace.
Finally, in another piece, ‘Tech Speed Limits’, I made the point that “a higher speed with AI is likely better for society than not.” It’s important for regulators to continue to balance the risks of any new technology against its benefits. The thing with AI is that the current early state of the technologies may magnify the risks in our minds, particularly given the way AI has been typecast in our media culture over the decades as a dystopian technology. It doesn’t have to lead to Skynet; it can lead to Star Trek. Like any good wine, let the AI technologies breathe. And let the businesses duke it out on the open vs closed spectrum, driven by customer interest and demand.
And regulators should stand by as referees for safety and security, without tipping the scales toward one or the other, and certainly NOT requiring all AI technologies, open and closed, to be registered with the government before use. There is so much to be invented, innovated, and figured out with AI by developers large and small. And for businesses to figure out AI’s eventual ‘product-market fit’.
Any delay is tantamount to defeat, especially vs China and others, who will find a way. The AI toothpaste is out of the tube globally, open and closed, and the ultra-marathon is on.
We’re likely to be positively surprised if we go that route. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)