The Biden administration is soliciting public input on measures to regulate artificial intelligence tools such as ChatGPT as questions mount about the fast-moving technology’s impact on national security and education.
The National Telecommunications and Information Administration (NTIA), a Commerce Department agency that advises the White House on telecommunications and information policy, said it will spend the next 60 days examining options such as audits, risk assessments, and a potential certification process to ease public anxiety around the AI models.
“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms,” said Alan Davidson, NTIA administrator and Assistant Secretary of Commerce for Communications and Information, on April 11. “For these systems to reach their full potential, companies and consumers need to be able to trust them.”
ChatGPT, the interactive AI chatbot developed by OpenAI, has gripped public attention for its ability to generate human-like conversations by processing vast amounts of data, answering complex questions in a matter of seconds. It has become one of the fastest-growing apps in history since its rollout in late November, drawing 1 billion visits in February alone.
The NTIA wants to hear public feedback on policies to “shape the AI accountability ecosystem,” such as the types of trust and safety testing AI developers should conduct and different approaches necessary in different industry sectors.
But industry analysts are already sounding alarms.
Late last month, tech ethics group the Center for Artificial Intelligence and Digital Policy asked the Federal Trade Commission to suspend the commercial release of ChatGPT’s latest version, GPT-4, calling it “biased, deceptive, and a risk to privacy and public safety.”