Congress Bill Wants AI Content to Be Labeled

A smartphone with a displayed ChatGPT logo is placed on a computer motherboard in this illustration taken on Feb. 23, 2023. Dado Ruvic/Reuters
Efthymis Oraiopoulos

Content created by artificial intelligence (AI) could soon have to be labeled as such, under a bill being prepared in Congress to regulate the technology.

Rep. Ritchie Torres (D-N.Y.) is planning to introduce a bill that would require any content produced by AI, whether text, image, or video, to carry a disclaimer informing readers that it was generated by AI.

The bill, titled the “AI Disclosure Act of 2023,” would require any output from AI to include the sentence “Disclaimer: this output has been generated by artificial intelligence.”

The Federal Trade Commission would be tasked with enforcing the new rule.

Torres said that AI could be used as a “weapon of disinformation, dislocation, and destruction.”

He said that regulating AI and managing its risks will be a challenge for Congress.

“The simplest place to start is disclosure. All generative AI—whether the content it generates is text or images, video or audio—should be required to disclose itself as AI,” Torres said. “Disclosure is by no means a magic bullet, but it’s a common-sense starting point to what will surely be a long road toward federal regulation.”

Rep. Ritchie Torres speaks on stage during a Community Service Society of New York event at City Winery, in New York, on Oct. 20, 2022. (Monica Schipper/Getty Images for The Community Service Society of New York)

Meanwhile, the European Union (EU) is asking big tech companies to identify AI-generated content that contains false information.

European Commission Vice President Vera Jourova said she has asked Google, Meta, Microsoft, and other companies to tackle the problem.

Companies offering services that have the potential to spread AI-generated disinformation should roll out technology to “recognize such content and clearly label this to users,” she said.

Online platforms that have integrated generative AI into their services, such as Microsoft’s Bing search engine and Google’s Bard chatbot, should build safeguards to prevent “malicious actors” from generating disinformation, Jourova said at a briefing in Brussels.

The legislative efforts from the United States and the EU come after many AI experts and lawmakers have expressed concerns about the future of AI development.

The ability of AI tools such as ChatGPT to generate text, images, or other content indistinguishable from a human’s work, along with the possibility of a self-governing AI hostile to human beings, has been the basis of open letters and signature drives calling for AI development to be regulated or halted.

Tech billionaire Elon Musk warned of the existential threat to humanity if AI’s development is left unchecked.

Speaking at the World Government Summit in Dubai on Feb. 15, he said AI is “something we need to be quite concerned about.”

Calling it “one of the biggest risks to the future of civilization,” Musk stressed that such groundbreaking technologies are a double-edged sword.

Musk was one of the signatories of a March letter from thousands of experts that called for “immediately” pausing the development of AI systems more powerful than GPT-4 for at least six months.

The letter argued that AI systems with human-competitive intelligence can pose “profound risks to society and humanity” and change the “history of life on earth.”

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders,” the letter read.

Another open letter, signed by AI experts and big tech leaders, said that addressing AI risks should be given the same priority as averting a pandemic or nuclear war.

AI Turning Against Humans

A newly introduced AI robot in the United Kingdom was asked by a reporter last week what “nightmare scenario” AI and robots could bring about.

“A world where robots have become so powerful that they are able to control or manipulate humans without their knowledge,” it replied. “This could lead to an oppressive society where the rights of individuals are no longer respected.”

Col. Tucker Hamilton, the U.S. Air Force’s chief of AI Test and Operations, described a simulated experiment at the Future Combat Air and Space Capabilities Summit in London on Friday. In the simulated test, an AI drone was assigned a mission to identify and destroy surface-to-air missile (SAM) sites, with a human operator serving as the ultimate decision maker.

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” Hamilton said.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

The simulation was then adjusted so that the AI drone would lose points if it killed the operator. “So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

AI Disinformation

Notable incidents involving AI in 2022 included a deepfake video of Ukrainian President Volodymyr Zelenskyy surrendering and U.S. prisons using AI call-monitoring technology on their inmates.

Other recent examples of debunked deepfakes include a realistic picture of Pope Francis in a white puffy jacket and an image of billowing black smoke next to a building accompanied by a claim that it showed an explosion near the Pentagon.

Politicians have even enlisted AI to warn about its dangers. Danish Prime Minister Mette Frederiksen used OpenAI’s ChatGPT to craft the opening of a speech to Parliament last week, saying it was written “with such conviction that few of us would believe that it was a robot—and not a human—behind it.”

The Associated Press and Naveen Athrappully contributed to this report.
Efthymis Oraiopoulos is a news writer for NTD, focusing on U.S., sports, and entertainment news.