The much-awaited, 800-page-long AI executive order from President Biden was unveiled today, and overall, it's a good first non-legislative stab at addressing some of the potential safety issues and opportunities of LLM AI and generative AI technologies over the long run. As The Information explains:
“The White House unveiled this morning a far-reaching executive order on artificial intelligence.”
“The order requires that companies developing foundation models that pose a “serious risk to national security, national economic security, or national public health and safety” share their safety test results with the U.S. government. (Notably, it doesn’t specify what makes a model dangerous or not.) It also assigns regulatory responsibilities to different federal agencies: The Commerce Department, for instance, will be responsible for developing watermarking methods for AI-generated content.”
“Additionally, the executive order will create guidelines to allow companies to train their models while preserving the privacy of the underlying data. And to assuage fears around biases in AI models, the order will also ensure that this technology doesn’t contribute to discrimination in healthcare, housing or the justice system. Biden’s directive also hopes to help workers whose jobs are replaced by AI, while funding AI research at academic institutions and other underfunded labs.”
“Not everything in the order is about limiting the capabilities of AI, though: For instance, it encourages AI-powered drug development and educational tools.”
Axios summarizes the document as follows:
“The Biden administration's long-awaited executive order on artificial intelligence will require developers of the most powerful AI systems to share critical testing information with the government.”
“What's happening: President Biden is expected to sign the AI executive order Monday afternoon at a White House event.”
“Why it matters: A White House fact sheet on the order outlines the steps companies and the government will be directed to take to foster responsible AI, partly by requiring developers to share the kind of internal testing information that's usually kept private.”
“It's a significant transparency-boosting step that may not be welcomed by the secretive industry.”
“The order comes as Congress holds insight forums with AI experts to help form legislation and White House officials are about to take part in a U.K. AI summit this week.”
Much remains to be coordinated with Congress, of course, on AI regulations, and there's a lot of activity there as well.
And, as they say, the devil is in the details of the White House executive order; here's more on those:
“Details: Companies developing models that pose serious risks to public health and safety, the economy or national security will have to notify the federal government when training the model and share results of red-team safety tests before making models public.”
“The provision would apply to future models that go beyond a specific compute power threshold and would not lead to any restrictions or removal of existing AI tools in the marketplace, a senior administration official said.”
“The provision goes beyond voluntary commitments that the White House garnered from AI companies and requires notification in accordance with the Defense Production Act.”
“A senior administration official said the DPA enforcement mechanism was not previewed for industry and it remains to be seen how companies will react.”
“Ahead of the 2024 elections, the administration is also tackling deepfakes by instructing the Commerce Department to develop guidance for content authentication and watermarking.”
Also to be applauded is the White House's focus on the human capital needed for the US to continue its global lead in AI technologies:
“Visa criteria, interviews and reviews will be modernized and streamlined to encourage high-skilled immigrants and nonimmigrants with expertise in critical areas to study, stay and work in the U.S.”
An important detail, given the white-hot need for AI-trained developers and professionals at every level, if AI is to augment people to its fullest potential.
Overall, the EO, while increasing requirements for next-generation LLM AI technologies on a number of fronts going forward, leaves the door open for AI innovation. Many of the implementation details will have to wait until federal agencies finish translating the 800 pages into specific policy requirements and actions. A lot can change from here to there in terms of overall impact, including unintended consequences, good and bad.
My takeaway on the executive order at first read is that it's a balanced approach focused on the long-term NET GOOD for AI users across society:
Future AI model focus: The order leans toward regulation of FUTURE FOUNDATION models, the next generation of LLM AIs, open or closed, and not current ones. Among the big tech ‘magnificent seven’, Meta has been leading the charge on open source, with Amazon, Microsoft, Google, and Nvidia leaning into open-source AI, particularly for business and enterprise customers in their AI cloud and data center businesses. Encouraging overall for the multi-hundreds of billions of dollars in AI investments underway.
Disclosures limited in scope: The testing disclosures and red-teaming requirements SEEM to be focused on LLM AIs touching defense and national security, not general applications, for now. That remains to be seen as implementation plans are put together by the relevant federal agencies.
Neutral on open-source: At first read, there does not seem to be much that is anti-open-source. I continue to view open source as a net positive for the AI industry, as it has been for the technology and software industries over at least the past three decades.
Actions across Government: It’s a plus that the order spreads executive actions ACROSS federal agencies rather than concentrating them in one new or existing agency or department. AI is not a discrete technology by itself; it will impact every human activity over time. As such, it’s important for all parts of government to be responsible for AI governance in their areas of focus.
Overall, it's a good step forward by Executive Branch regulators on AI issues in the US, especially relative to the EU's efforts in Europe. And it's consistent with US efforts to thread the needle with China on the AI issues affected by geopolitics. That's not to say the EO is laudable in its entirety, in either process or substance, as ex-Microsoft software luminary Steven Sinofsky makes clear in his well-argued piece here (recommended read).
But regulators will try to do their bit whenever new technologies with possible harms emerge. And at this early stage of the process, this executive order is an attempt to balance both sides. Overall, for now at least, there is a clear effort to thread the needle in the near term between AI safety and AI opportunity in this AI Tech Wave. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)