So the EU is first to close the gate on possible AI risks and downsides. Every tech wave over the last three-plus decades has had its share of early regulatory temptation to strike the right balance between a new technology’s feared perils and its potential.
We saw it in earnest with the Internet tech wave in the mid-1990s, when the US provided ‘relief valves’ for the technology to grow businesses online, on issues of taxation, content moderation, and de-regulation of the telecom industry, unleashing the extraordinary amounts of capital needed for vast networks built on ‘TCP/IP’ internet protocols. The obvious comparison today is the hundreds of billions being deployed for the ‘AI Compute’ infrastructure needed at scale.
With the AI Tech Wave, these fears of peril vs potential come in ‘Big Gulp’ size. This time the world has the benefit of learnings from past tech cycles: how they fared, AND the unintended consequences of both regulating too early and regulating the wrong areas. The missed regulatory opportunities are only evident in hindsight (e.g., social media’s impact on politics and culture, and its ‘weaponization’ by motivated actors). All of this is different in the early AI cycle, which also has to deal with the additional layers of geopolitical and re-globalization issues around US vs China.
So as I’ve discussed in earlier posts, the race is on among regulators to figure out and apply the right amount of regulation at the right time. More often than not, that effort has looked like ‘Fire, Ready, Aim’ rather than ‘Ready, Aim, Fire’.
Nowhere is this clearer this time than in the sharp contrast between the AI regulatory approach of Europe via the EU and that of the US. As the New York Times outlines it:
“The agreement over the A.I. Act solidifies one of the world’s first comprehensive attempts to limit the use of artificial intelligence.”
“European Union policymakers agreed on Friday to a sweeping new law to regulate artificial intelligence, one of the world’s first comprehensive attempts to limit the use of a rapidly evolving technology that has wide-ranging societal and economic implications.”
“The law, called the A.I. Act, sets a new global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security. The law still needs to go through a few final steps for approval, but the political agreement means its key outlines have been set.”
“European policymakers focused on A.I.’s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy. Makers of the largest general-purpose A.I. systems, like those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that creates manipulated images such as “deepfakes” would have to make clear that what people were seeing was generated by A.I., according to E.U. officials and earlier drafts of the law.”
“Use of facial recognition software by police and governments would be restricted outside of certain safety and national security exemptions. Companies that violated the regulations could face fines of up to 7 percent of global sales.”
Although Europe is first out of the gate with AI regulations on paper, as Axios points out, actual enforcement of the rules doesn’t kick in for a while:
“The European Union's comprehensive AI regulations, finalized Friday after a 36-hour negotiating marathon, come with a catch: The EU is stuck in a legal void until 2025, when the rules come into force.
Why it matters: As the first global power to pass comprehensive AI legislation, the EU is once again setting what could become worldwide regulatory standards — much as it did on digital privacy rules — but the transition could be bumpy.”
Some details of the Act thus far:
“The big picture: European policymakers began work on their AI Act before ChatGPT's Nov. 2022 arrival and the explosion in the generative AI market during 2023.”
“The EU approach categorizes AI uses according to four risk levels, with increasingly stringent restrictions matched to greater potential risks.”
“Details: The EU law bans several uses of AI, including bulk scraping of facial images and most emotion recognition systems in workplace and educational settings. There are safety exceptions — such as using AI to detect a driver falling asleep.”
“The new law also bans controversial "social scoring" systems — efforts to evaluate the compliance or trustworthiness of citizens.”
“It restricts facial recognition technology in law enforcement to a handful of acceptable uses, including identification of victims of terrorism, human trafficking and kidnapping.”
“Foundation model providers will need to submit detailed summaries of the training data they used to build their models.”
“Companies violating the rules could face fines ranging from 1.5% to 7% of global sales.”
“Operators of systems creating manipulated media will have to disclose that to users.”
As I said back in September in “EU potentially missing the AI Boat”:
“In my view, the US approach, though potentially far less ‘efficient’ and ‘satisfying’ from a legislative perspective, may be the better approach.”
“The subtle point is that Europe may be missing the AI boat in a rush to regulate ahead of the technologies even being built.”
As Axios points out above, work on the European AI Act began before OpenAI’s ChatGPT launched in November 2022. European AI companies like Mistral and others are just getting their latest Foundation LLM AI models out the door, while raising fresh capital in the hundreds of millions from US and European investors.
Reaction from US leaders has been cautionary, as Axios again highlights:
“Senate Majority Leader Chuck Schumer has expressed concern that EU-style laws enacted by the U.S. would put American firms at a disadvantage competing with China.”
Nevertheless, EU legislators feel some self-satisfaction with their AI Act:
“What they're saying: EU officials are in a self-congratulatory mode, framing the law as "a launchpad for EU startups and researchers," per European industry commissioner Thierry Breton.”
Much remains to be figured out and worked through, especially in many of the specifics of regulation and enforcement. But for now, the EU is off and running on AI regulation, trying to close the gate early on possible perils. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here.)