Two of the three ‘Godfathers of AI’ have again sounded the alarm over what AI could become when it grows up. As VentureBeat reports:
“Yoshua Bengio and Geoffrey Hinton, two of the so-called AI godfathers, have joined with 22 other leading AI academics and experts to propose a framework for policy and governance that aims to address the growing risks associated with artificial intelligence.”
“The proposals are significant because they come in the run-up to next week’s AI safety summit meeting at Bletchley Park in the UK, where international politicians, tech leaders, academics and others will gather to discuss how to regulate AI amid growing concerns around its power and risks.”
“The paper said companies and governments should devote a third of their AI research and development budgets to AI safety, and also stressed urgency in pursuing specific research breakthroughs to bolster AI safety efforts.”
A third is a lot, especially for younger tech companies trying to build on AI’s possibilities. Yesterday, in “AI: Booms that Rhyme”, I highlighted that the Internet Tech wave of the 1990s alone saw internet telecom infrastructure capex approaching a trillion dollars over almost a decade. That laid the groundwork for narrowband networks growing into broadband networks, both wired and wireless, which gave us Netflix, YouTube, 4G/5G smartphones, and so much more over the internet’s fiber backbone capacity.
Much of that internet telecom infrastructure was unleashed, for better or for worse, by timely telecom legislation and regulation in 1996. As Fabricated Knowledge reminds us:
“The Telecommunications Act of 1996 opened the gates to new entrants—the act’s goal was to promote competition in the telecommunication industry, especially in long-haul markets. Any communication business could compete in any market against each other, eliminating the natural monopoly status of long-distance telephone companies.”
“In one pen stroke, local telephone companies, long-distance telephone companies, cable companies, and emerging ISPs could compete. What’s more, they had to share their communication services and could buy and sell competitors’ network capacity.”
This created a frenzy of entrants. New companies and established players alike threw their hats into the ring, and players like Global Crossing, WorldCom, Enron, Qwest, Lucent, Level 3, and so many others had stratospheric rises and falls. Some of these companies were formed specifically in the aftermath of the deregulation. A key provision in the Act was the right of competitors to purchase services from the incumbents at fair rates and thus flesh out their own networks.
What makes this relevant today is that the AI Tech Wave, and the multi-hundred-billion-dollar-plus AI infrastructure gold rush underway today, is a historical echo of those times past. Especially in the need for new LLM AI models, next-generation GPUs, and data center infrastructure at massive scale for the AI industry to do its thing over the next decade and more. Again, just as the internet telecom infrastructure boom (and bust, then boom again) was necessary for the internet to set us up, decades later, for AI technologies to deliver on the massive promises ahead.
Regulation is a key variable now, just as it was then. But a key difference this time around, as I’ve stated, is the immense societal, and thus regulatory, fear of AI ahead, fueled in part by sci-fi and movies like The Terminator.
But also in no small part by the ‘AI Doomerism’ in the air, fueled by both economic and safety concerns. Many of the key technologists and academics who’ve laid the groundwork for what LLM AI and Generative AI are poised to do have also been highlighting these concerns, despite the fact that there’s A LOT MORE research, innovation, and infrastructure capex ahead.
And just the right amount of regulatory oversight would be helpful, ESPECIALLY in the early, formative years for the AI industry. Just as it helped in the early days of the Internet Wave, which led us to the point where US tech companies, large and small, make up almost 30% of the S&P 500, with many smaller, private companies yet to go public.
It’s notable that the third ‘AI Godfather’, Yann LeCun, Meta’s chief AI scientist, is decidedly more balanced in his approach to what AI has in store for us ahead. As this FT piece highlights:
“Premature regulation of artificial intelligence will only serve to reinforce the dominance of the big technology companies and stifle competition, Yann LeCun, Meta’s chief AI scientist, has said.”
“Regulating research and development in AI is incredibly counterproductive,” LeCun, one of the world’s leading AI researchers, told the Financial Times ahead of next month’s Bletchley Park conference on AI safety hosted by the British government. “They want regulatory capture under the guise of AI safety.”
Mr. LeCun pithily adds:
“AI is still dumber than cats, so worries over existential risks are ‘premature’”.
So there is a healthy debate, even amongst the AI gurus, over the risks and opportunities of AI. That’s a good thing. And it’s also good that regulators, especially in the US relative to the EU, are taking their time to study and understand these pluses and minuses of AI, and eventually do what’s right for the ‘net good’ of society over time.
In these early days especially, it’s important for all stakeholders, open and closed, deploying AI that’s big and small, narrow and/or wide, AND for regulators everywhere, to make sure they have the ‘ready, aim, fire’ sequence in the right order. Including on geopolitical considerations. It’s for the net good of us all. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here.)