Senators from both major parties have united to introduce an artificial intelligence (AI) bill directing federal agencies to create standards providing transparency and accountability for AI tools.
Sens. Amy Klobuchar (D-Minn.) and John Thune (R-S.D.) introduced the legislation, and four of their colleagues on the Senate Committee on Commerce, Science, and Transportation co-sponsored it.
“It will put in place common-sense safeguards for the highest-risk applications of AI—like in our critical infrastructure—and improve transparency for policymakers and consumers,” Sen. Klobuchar said.
Specifically, the Artificial Intelligence Research, Innovation, and Accountability Act directs the Department of Commerce to issue enforceable standards for testing and evaluating the highest-risk AI systems.
The Commerce Department would be tasked with submitting a five-year plan for testing and certifying critical-impact AI and with updating that plan regularly. Companies would have to submit transparency and risk assessment reports to the department before deploying critical-impact AI systems.
The National Institute of Standards and Technology (NIST) would also be directed to develop standards for the authenticity of online content, giving consumers clearer distinctions between human- and AI-generated material. Among NIST's other tasks would be developing recommendations for technical, risk-based guardrails on AI systems. The bill also sets out new definitions for terms such as "generative" and "high-impact" AI systems and draws a clear distinction between developers and deployers of AI systems.
Deepfakes a Growing Concern as AI Use Increases
Ms. Klobuchar has been spearheading the effort to address the threat of misleading AI-generated content for some time. In early October, she wrote to Meta founder Mark Zuckerberg asking what was being done to protect political figures from "deepfakes" and the ramifications that could follow.

The stakes were illustrated in 2022, when a deepfake video appeared to show Ukrainian President Volodymyr Zelenskyy telling his troops to surrender. The video was debunked, but if it hadn't been, the consequences could have been devastating for the Ukrainian war effort. Even if the video had been considered legitimate for only a few hours, Russian military forces could have gained an enormous advantage that could have changed the course of the war.
Tech Companies Already Taking Steps Around AI Use
Meta, the company that owns Facebook, has already taken some steps to rein in the use of AI during the election. Earlier this month, the tech giant barred political campaigns and advertisers in regulated industries from using its new generative AI advertising products.

The new policy was publicly disclosed through updates on the company's help center and is aimed at curbing the spread of election misinformation in the run-up to the presidential election.
YouTube has also announced plans to introduce updates in the coming months that will inform viewers when the content they're seeing was synthetically created using AI. In a Nov. 14 blog post, YouTube Vice Presidents of Product Management Jennifer Flannery O'Connor and Emily Moxley said AI has great potential for creativity on the video platform. However, they also believe AI will "introduce new risks and will require new approaches."
As a result, and in the interest of maintaining a "healthy ecosystem of information on YouTube," creators will need to disclose when they've created realistic altered or synthetic content, including content made with AI tools.
“When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material,” the blog post says.
“For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.”
Ms. Flannery O'Connor and Ms. Moxley stressed that these changes will be especially important for videos discussing sensitive topics such as elections, ongoing conflicts, and public health crises.
Content creators who consistently choose not to disclose whether their videos are AI-generated could be subject to content removal, suspension from the YouTube Partner Program, or other penalties.