Apple Agrees to Abide by Biden Admin’s AI Safety Guidelines

Apple CEO Tim Cook delivers remarks at the start of the Apple Worldwide Developers Conference (WWDC) on June 10, 2024 in Cupertino, California. (Justin Sullivan/Getty Images)
Tom Ozimek

Apple on Friday joined more than a dozen major tech companies pledging to abide by the Biden administration’s guidelines for the development of artificial intelligence (AI), according to the White House, which is looking to mitigate AI-related risks.

“Today, the administration announced that Apple has signed onto the voluntary commitments, further cementing these commitments as cornerstones of responsible AI innovation,” the White House said in a July 26 press release, which also gave an update on federal agency actions at the 270-day mark after President Joe Biden’s executive order establishing new standards for AI safety and security.

With the move, Apple joins 15 other companies, including Amazon, Google, Meta, Microsoft, and OpenAI, in pledging to develop AI responsibly, which includes granting the government access to the test results of the companies’ AI models so that biases and security risks can be assessed.

Apple made its commitment to the voluntary AI pact as the tech giant goes all-in on generative AI. In June, it announced the launch of its “Apple Intelligence” system, which the company says unlocks novel ways to leverage the technology by combining generative AI with “personal context.”

Under the guidelines, AI developers such as Apple promise to follow rigorous new standards and tests for their AI models. This includes subjecting the models to “red-team” tests, which simulate adversarial attacks to probe the robustness of the models’ safety measures. A key aim of these stress tests is to mitigate the potential threat that AI systems pose to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.

Companies that have signed onto the pledge also commit to developing their AI models in a way that incorporates privacy-preserving features for users while also abiding by guidelines that will be developed by the Department of Commerce to protect Americans from AI-enabled fraud and deception.

President Biden’s executive order also tasked federal agencies with developing various AI-related standards and guidelines.

The White House said that various agencies—including the Commerce Department, the Department of Energy, and the Department of Defense—have released new guidelines for preventing misuse of AI, expanded AI testbeds, and addressed AI-related vulnerabilities in government networks.

For instance, the Commerce Department announced on Friday that its National Institute of Standards and Technology (NIST) has released three final guidance documents: the first on managing the risks of generative AI, the second on addressing concerns about malicious training data negatively affecting generative AI systems, and the third on promoting transparency around the origin and detection of “synthetic” content that has been created or altered by AI.
Tom Ozimek is a senior reporter for The Epoch Times. He has a broad background in journalism, deposit insurance, marketing and communications, and adult education.