Biden Issues Nation’s 1st National Security Memorandum on AI

‘The fundamental premise is that AI will have significant implications for national security,’ White House economic adviser says.
President Joe Biden signs a proclamation in the Oval Office of the White House on Aug. 16, 2024. Anna Moneymaker/Getty Images
Andrew Thornebrooke

President Joe Biden has issued the nation’s first national security memorandum on the use of artificial intelligence (AI).

The memorandum, released on Oct. 24, directs the U.S. government to lead in developing “safe, secure, and trustworthy AI,” according to a fact sheet provided by the White House.

“AI has emerged as an era-defining technology and has demonstrated significant and growing relevance to national security,” the memorandum reads. “The United States must lead the world in the responsible application of AI to appropriate national security functions.”

The memorandum also directs the U.S. government to harness AI to advance national security priorities and promote international consensus on rules and norms for its use.

“The fundamental premise is that AI will have significant implications for national security,” White House national economic adviser Lael Brainard said in a prepared statement.

“The AI National Security Memorandum establishes that retaining US leadership in the most advanced AI models will be vital for our national security in coming years.”

To that end, the memorandum directs the government to create resilient semiconductor supply chains.

“Sustaining U.S. preeminence in frontier AI into the future will require strong domestic foundations in semiconductors, infrastructure, and clean energy—including the large data centers that provide computing resources,” Brainard said.

The directive also calls for a framework for Washington to work with allies to ensure that AI “is developed and used in ways that adhere to international law while protecting human rights and fundamental freedoms.”

Speaking at the Brookings Institution think tank on Oct. 23, White House national security adviser Jake Sullivan said the administration is already leading global technological development alongside its allies, coordinating closely with them on shared standards.

“We’re building a network of AI safety institutes around the world, from Canada to Singapore to Japan, to harness the power of AI responsibly,” Sullivan said.

The memorandum also directs U.S. intelligence agencies to inform technology companies of incidents in which foreign entities have been detected attempting to steal their intellectual property.

The administration believes that ensuring government leadership over the private development of AI is a key means of maintaining a competitive edge against nations like communist China, which are implementing their own whole-of-government approaches to AI development.

“A failure to take advantage of this leadership and adopt this technology we worry could put us at risk of a strategic surprise by our rivals, such as China,” a senior administration official told reporters during an Oct. 23 press call.

“Because countries like China recognize similar opportunities to modernize and revolutionize their own military and intelligence capabilities using artificial intelligence, it’s particularly imperative that we accelerate our national security community’s adoption and use of cutting-edge AI capabilities to maintain our competitive edge.”

The official also said that the memorandum directs U.S. agencies to “gain access to the most powerful AI systems and put them to use.”

Such an initiative will likely raise the hackles of many groups already concerned about government overreach and a general lack of transparency in intelligence collection.

As such, the memorandum is accompanied by a sister document that provides a policy framework prohibiting certain uses of AI, including applications that would violate constitutionally protected civil rights.

However, it remains unclear how the government will adequately vet data used by large AI models, which are often trained on unreliable, publicly available data.

When asked about the issue, the administration official said that the White House is creating a “process of accrediting systems” and would ensure “existing law is complied with.”

To that end, the memorandum states, “Government must also protect human rights, civil rights, civil liberties, privacy, and safety, and lay the groundwork for a stable and responsible international AI governance landscape.”

The directive is the latest move by the Biden administration to address AI as Congress’s own efforts to regulate the emerging technology have stalled. The administration will follow up on its efforts next month when it convenes a global safety summit in San Francisco.

AI is already reshaping much of U.S. policy and presents numerous threats to the security and domestic life of Americans.

China-based hackers, for example, have used AI to impersonate American voters. U.S. military leadership envisions shifting to a predominantly robotic Army in the coming decade.

Lawmakers have also expressed fears that the proliferation of AI could lead to mass white-collar unemployment, which, in turn, could trigger societal unrest.

Speaking in September at the U.N. General Assembly in New York City, Biden said the United States will remain committed to developing AI responsibly, with humanity’s well-being in mind.

“As AI grows more powerful, it ... must grow more responsive to our collective needs and values,” the president said.

“We must make certain that the awesome capabilities of AI will be used to uplift and empower everyday people, not to give dictators more powerful shackles on the human spirit.”

Andrew Thornebrooke
National Security Correspondent
Andrew Thornebrooke is a national security correspondent for The Epoch Times covering China-related issues with a focus on defense, military affairs, and national security. He holds a master's in military history from Norwich University.