White House Issues Sweeping Executive Order to Secure AI


Biden Administration Demands to See Red-Teaming Safety Tests of Foundation Models

U.S. President Joe Biden is set to sign an executive order on artificial intelligence on Oct. 30, 2023. (Image: Shutterstock)

U.S. President Joe Biden says he’s invoking Cold War-era executive powers over private industry in an order directing developers of advanced artificial intelligence models to notify the government and share the results of safety tests.

The mandate is part of a much-anticipated executive order on the fast-developing technology, which Biden is set to sign Monday.


It invokes the Defense Production Act – a 1950 statute enacted at the onset of the Korean War – to require developers of generative AI foundation models that could pose a “serious risk” to national security, national economic security or national public health to notify the government when they’re training such a model. Developers must also share the results of all red-team safety tests, a White House fact sheet states.


The executive order directs the National Institute of Standards and Technology to set new standards for extensive red-team testing prior to the public release of a new foundation model. Foundation models are AI systems that can be adapted to a wide range of applications. The Department of Health and Human Services will establish a safety program to act on reports of unsafe healthcare practices involving AI.


Bruce Reed, White House deputy chief of staff, described the executive order as “the strongest set of actions any government in the world has ever taken on AI safety, security and trust.”


The order also directs the Department of Homeland Security to apply the NIST standards to critical infrastructure sectors and to establish an AI Safety and Security Board. A new government cybersecurity program will also be created to develop AI security tools that identify and address vulnerabilities in AI technologies.


Federal agencies will be required to follow new standards when using AI systems and will be encouraged to acquire AI products and services. In addition, DHS and the Department of Energy are set to work together to counter threats that AI systems pose to critical infrastructure. The goal is to create a comprehensive set of standards addressing chemical, biological, radiological, nuclear and cybersecurity risks.


The executive order seeks to position the United States as a leader in AI research, in part by expanding grants across such key sectors as healthcare and climate change, backed by a new pilot program called the National AI Research Resource. AI researchers and students across the country will be able to use the new tool to gain access to a range of resources and data.


The departments of State and Commerce will lead an effort to establish comprehensive international frameworks for managing AI risks and ensuring safety. The order comes as Vice President Kamala Harris is set to attend this week’s AI Safety Summit hosted by U.K. Prime Minister Rishi Sunak.


The order is the latest in a series of actions the Biden administration has taken to address risks associated with the emerging technology. Last year, the administration published the Blueprint for an AI Bill of Rights, which provides a framework for the safe and effective deployment of AI and other automated systems.


The administration also previously issued an executive order instructing agencies to combat AI algorithmic discrimination, and it recently secured voluntary commitments from 15 AI companies – including Amazon, Google, Microsoft, Meta and OpenAI – to abide by a set of standards and requirements for developing new AI tools and technologies. The commitments include conducting extensive internal and external security testing, expanding information-sharing initiatives, investing in cybersecurity and insider threat safeguards, and publicly reporting their AI systems’ capabilities (see: IBM, Nvidia, Others Commit to Develop ‘Trustworthy’ AI).