Yesterday we discussed the White House blessing thousands of hackers at a prominent hacker conference in Vegas to ‘Red Team’ their way into the best LLM AI technologies from the Who’s Who of the AI industry. Washington regulators have for months now been going to school on AI to better understand the risks and opportunities of these technologies, and of course to proactively address the widespread fears of existential risks from AI.
Just last week, a senior group of Congressional representatives got a multi-day introduction to AI at Stanford University, the seat of much of the AI research driving the industry. And these congressional initiatives will continue into the Fall, with Senator Schumer leading a bipartisan effort on the Senate side to drive AI governance and regulation in the coming months.
And of course for months now, regulators have been focused on the deeper existential risks that may lie ahead with ‘Superintelligent’ AI, aka Artificial General Intelligence or AGI. As we’ve recounted, AI ‘doomerism’ is in the air where AI is concerned. And it’s reflected in general public perception, where the AI fear quotient is far higher than it was for other major tech waves of decades past, like the PC or the Internet.
So the next area of DC AI concern, beyond our lives of course, has to be our money. And that’s what we’re seeing from the head of the SEC, Gary Gensler, in a series of public pronouncements on potential AI risks ahead for the financial markets.
As Axios summarizes:
“AI will be at the center of future financial crises — and regulators are not going to be able to stay ahead of it. That's the message being sent by SEC chair Gary Gensler, arguably the most important and powerful regulator in the U.S. at the moment, Axios' Felix Salmon reports.”
“Why it matters: A paper Gensler wrote in 2020, while a professor at MIT, is an invaluable resource for understanding those risks — and how little regulators can do to try to address them.”
“The big picture: The most obvious risk from AI in financial markets is that AI-powered "black box" trading algorithms run amok, and all end up selling the same thing at the same time, causing a market crash.”
"There simply are not that many people trained to build and manage these models, and they tend to have fairly similar backgrounds," Gensler wrote.”
“Model homogeneity risk could also be created by regulations themselves. If regulators exert control over what AIs can and can't do, that increases the risk that they'll all end up doing the same thing at the same time.”
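To make the homogeneity concern concrete, here is a toy, purely illustrative Python sketch (not any real trading model, and all the numbers are made-up assumptions): many simulated traders share the same stop-loss-style signal, so one piece of bad news pushes them all to sell simultaneously and the price collapses, while traders with diverse thresholds stagger their selling and the market absorbs the shock far better.

```python
import random

def simulate(thresholds, shock=-0.05, impact=0.01, steps=10):
    """Toy market: each trader sells once when the drawdown from the
    peak exceeds its threshold; each sale pushes the price down by
    `impact` (1%) of the current price. All parameters are illustrative."""
    price = peak = 100.0
    sold = [False] * len(thresholds)
    price += price * shock  # an initial bad-news shock of 5%
    for _ in range(steps):
        drawdown = (peak - price) / peak
        sellers = [i for i, t in enumerate(thresholds)
                   if not sold[i] and drawdown >= t]
        for i in sellers:
            sold[i] = True
        price -= price * impact * len(sellers)
    return price

random.seed(0)
n = 50
# Homogeneous: every model reacts at the same 4% drawdown threshold,
# so the 5% shock triggers all 50 sales in the same instant.
homogeneous = simulate([0.04] * n)
# Heterogeneous: thresholds spread between 4% and 40% drawdown,
# so selling is staggered and the cascade is damped.
heterogeneous = simulate([random.uniform(0.04, 0.40) for _ in range(n)])
print(f"homogeneous end price:   {homogeneous:.2f}")
print(f"heterogeneous end price: {heterogeneous:.2f}")
```

The staggered case always ends higher than the simultaneous one, since spreading the same sales across steps compounds less damage per step, which is the crude intuition behind the regulator’s worry about everyone’s models doing the same thing at the same time.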
Gary Gensler may or may not be on the right track in getting ahead of the potential AI risks to the financial markets in the years ahead. But the financial experts on Wall Street are still trying to figure out how to use LLM AI to make money in the markets. As the WSJ puts it in a recent piece titled “AI Can Write a Song, but It Can’t Beat the Market”:
“Where is Wall Street’s AI revolution?”
“Almost every industry, from architecture to entertainment, is testing generative artificial intelligence, hoping to profit from a technology that can produce writing, images and art much like humans.”
“Wall Street has long used automated algorithms for tasks such as placing trades and managing risk. But investors haven’t made much progress relying on AI to tackle their biggest challenge: beating the market. While some see ChatGPT as a way to boost sales and research efforts, the investing results using AI haven’t been especially impressive.”
“‘Progress in applying AI to investing has been limited, though innovations in language modeling could change that in the years ahead,’ says Jonathan Larkin, a managing director with Columbia Investment Management Co., which manages the $13 billion endowment for Columbia University.”
I’m sure that as LLM AI technologies evolve, Wall Street will figure out how to ‘beat the market’, as the WSJ asks now. And we’ve already seen how algorithmic trading, itself an earlier form of AI in machine-learning clothing, can create market upheavals from time to time.
But we may have a ways to go before AI comes for our money, while it figures out what to do with our lives. We will need to be braced for it all, while remembering that tech generally does more good than bad over time. And both the good and the bad take longer to happen than anticipated or feared. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here).