US Exploring Rules to Regulate AI Tools Like ChatGPT

Alan Davidson, then director of U.S. public policy for the Americas at Google Inc., testifies during a hearing of the Congressional-Executive Commission on China on Capitol Hill in Washington on March 24, 2010. Win McNamee/Getty Images
Eva Fu

The Biden administration is soliciting public input on measures to regulate artificial intelligence tools such as ChatGPT as questions mount about the fast-moving technology’s impact on national security and education.

The National Telecommunications and Information Administration (NTIA), a Commerce Department agency that advises the White House on telecommunications and information policy, said it will spend the next 60 days examining options such as audits, risk assessments, and a potential certification process to ease public anxiety around AI models.

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms,” said Alan Davidson, NTIA administrator and Assistant Secretary of Commerce for Communications and Information, on April 11. “For these systems to reach their full potential, companies and consumers need to be able to trust them.”

ChatGPT, the interactive AI chatbot developed by OpenAI, has gripped public attention for its ability to generate human-like conversations by processing vast amounts of data and to answer complex questions in a matter of seconds. Since its rollout in late November, it has become one of the fastest-growing apps in history, drawing 1 billion visits in February alone.

A keyboard is seen reflected on a computer screen displaying the website of ChatGPT, an AI chatbot from OpenAI, in this illustration picture taken on Feb. 8, 2023. Florence Lo/Reuters
While its speed in answering complex queries has wowed some users, the program has drawn scrutiny over privacy and partisan bias. Researchers have also warned that it could open the floodgates to plagiarism in schools.

The NTIA wants to hear public feedback on policies to “shape the AI accountability ecosystem,” such as the types of trust and safety testing AI developers should conduct and the different approaches that may be needed across industry sectors.

“Just as food and cars are not released into the market without proper assurance of safety, so too AI systems should provide assurance to the public, government, and businesses that they are fit for purpose,” the agency said in a statement.

President Joe Biden last week declined to say whether he believes AI is dangerous, but said technology companies must ensure their products are safe before making them public.

But industry analysts are already sounding the alarm.

Late last month, tech ethics group the Center for Artificial Intelligence and Digital Policy asked the Federal Trade Commission to suspend the commercial release of GPT-4, the latest model behind ChatGPT, calling it “biased, deceptive, and a risk to privacy and public safety.”

Elon Musk, one of the co-founders of OpenAI, said in February that he believes AI is one of the “biggest risks to the future of civilization.” He is among the nearly 21,000 signatories of an open letter calling for a six-month pause on the training of AI systems more powerful than GPT-4.

Eva Fu
Reporter
Eva Fu is a New York-based writer for The Epoch Times focusing on U.S. politics, U.S.-China relations, religious freedom, and human rights. Contact Eva at [email protected]