AI: In my View, September 24
In this Sunday’s ‘AI: In my View’, I’d like to focus briefly on the BIG ISSUE of how regulators in the US and Europe are tackling the risks and opportunities around AI. And I’d like to underline a subtle point: Europe may be missing the AI boat in its rush to regulate before the technologies are even built. Let me explain.
But first, let me set the table for the discussion. The Information ran a piece titled “Europe has figured out how to tame Big Tech. Can the US learn its tricks?”. It recounts how Paul Tang, a Dutch politician and member of the European Parliament, answered a question posed by visiting senior US Congressional representatives:
“How had the EU managed to bring big tech to heel?”
His answer:
“I always say the Americans go to court; the Europeans make legislation. That’s how we do things.”
The piece goes on to lay out the differences between the US and EU systems for regulation, and the political realities behind them. All worth reading and understanding. Because of its distinct governance structures, the European system can establish ‘consensus’ on regulating potential harms from technology in general, and AI in particular, more efficiently than the US system can.
In particular, the EU has the leverage of a market of close to half a billion souls, larger than the US’s 350 million. And they’re not afraid to use that leverage:
“While U.S. lawmakers have done little to check the power of tech companies over the past few years, the EU has demonstrably taken the lead. Its chief weapons: “market power and enforcement mechanisms that actually bite,” said Andres Guadamuz, a law and technology academic at the University of Sussex. Many call this combination the ‘Brussels effect.’”
But here, in my view, is where Europe is potentially ‘missing the boat’, depriving its citizens of the net long-term benefits of these technologies, and AI in particular. See if you can spot the relevant passages (hint: I’ve bolded them for convenience for each of the companies mentioned):
“The results of these efforts are beginning to show up in companies’ altered products and services.”
“TikTok has now offered users the chance to use an algorithm-free version of the app that recommends videos based on what’s popular in a geographical area rather than the personal habits of an individual user. A TikTok spokesperson declined to say what proportion of users have chosen the algorithm-free flavor in Europe, but for lawmakers, the results were clear. “We wanted to have a nonpersonalized timeline,” said Parliament member Tang. “Now everyone has that. That was the easy bit.””
“Similarly, Meta hired an additional 1,000 employees to handle changes required by the DSA. The changes include public explanations of how the Facebook and Instagram algorithms work, as well as a planned algorithm-free feed. The company is also considering rolling out a paid-for, ad-free version of Facebook and Instagram, according to The New York Times.”
“Google also bowed to the new laws by introducing new “transparency centers” (essentially public-facing webpages) that reveal more detail about its algorithms, advertisers and access to data for researchers. The company said it had “devoted significant resources into tailoring our programs to meet [the DSA’s] specific requirements.”
In each of the instances above, and in the many, many more potentially to come, the net good that could come from the immense personalization potential of AI in our lives is being diluted, or made unavailable altogether. This is not just ‘missing the boat’ on the benefits of AI. To add another metaphor, it’s throwing ‘the baby out with the bath water’.
I’ve talked about the opportunities ahead for the probabilistic math capabilities of LLMs and Generative AI to help individuals CREATE and REASON far better than the deterministic hardware and software of traditional computing, which has helped propel our economies for over fifty years.
To take away the massive personalization capabilities of these algorithms for fear of the ‘harms’ that may lie ahead may be premature. In that context, the US system, guided by laws and litigation, may be better. At least each issue can be properly examined in detail and figured out, whether by duking it out in the courts, or by settling on compromises and win/win possibilities.
So in my view, the US approach, though potentially far less ‘efficient’ and ‘satisfying’ from a legislative perspective, may be the better one. I’ve talked more about this ‘accelerated’ approach, with potentially higher benefits relative to the risks, here.
Let’s hold off on giving Europe the award for the better approach for now. It may be truly premature this early in the AI Tech Wave. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)