Senator Calls for Artificial Intelligence ‘Regulatory Agency,’ Citing ‘Scary’ Dangers of AI Tech

As AI gains mainstream momentum, lawmakers are expressing concern about the technology's explicit and hidden dangers, which range from enabling large-scale biological attacks to replacing humans.
Sen. Richard Blumenthal (D-Conn.) makes a statement before a Senate Judiciary Committee hearing on Capitol Hill in Washington, on Oct. 15, 2020. Samuel Corum/Getty Images
Naveen Athrappully

Sen. Richard Blumenthal (D-Conn.) called for strong regulatory control over artificial intelligence (AI) during a Senate hearing on July 25, citing the grave potential dangers of the “scary” technology.

Though AI can do “enormous good” like curing diseases and improving workplace efficiency, what catches people’s attention is the “science fiction image of an intelligent device out of control, autonomous, self-replicating, potentially creating disease, pandemic-grade viruses, or other kinds of evils, purposely engineered by people or simply the result of mistakes not malign intention,” Mr. Blumenthal said during the hearing, held by the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. “We need some kind of regulatory agency.”

“But not just a reactive body … But actually investing proactively in research so that we develop countermeasures against the kind of autonomous, out-of-control scenarios that are potential dangers.”

As examples of such “potential dangers,” Mr. Blumenthal pointed to an AI system programmed to resist being switched off, or an AI deciding to initiate a nuclear response to a nonexistent attack.

During the hearing, multiple witnesses warned about the dangerous consequences that rapid AI development can herald.

Dario Amodei, CEO of AI research company Anthropic, pointed out in his written testimony that “in two to three years, AI systems may facilitate extraordinary insights in broad swaths of many science and engineering disciplines. This will cause a revolution in technology and scientific discovery, but also greatly widen the set of people who can wreak havoc.”

“In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology.”

For instance, some biological methods can be used to harm human beings. However, carrying out such harm currently requires highly specialized knowledge that cannot simply be found on Google or in textbooks, Mr. Amodei said.

But AI can now “fill in some of these steps,” albeit incompletely and unreliably. In two to three years, AI may actually be able to fill in “all the missing pieces,” he warned.

“This could greatly widen the range of actors with the technical capability to conduct a large-scale biological attack.”

Competition, Conflict, and the Urgent Need for Regulation

At the hearing, Stuart Russell, a professor of computer science at the University of California, Berkeley, pointed to the geopolitical implications of AI.

On the one hand, conflict within and between societies could subside, since AI can act as an unlimited wealth creator. People could rely on AI assistants for tasks, which could lead to a “more harmonious social order.”

However, AI cannot create more land and raw materials. “Therefore, as societies become wealthier and increase their land and resource requirements, one must expect increased competition for these.”

Sophia, a robot integrating the latest technologies and artificial intelligence developed by Hanson Robotics, is pictured during a presentation at the "AI for Good" Global Summit at the International Telecommunication Union (ITU) in Geneva, Switzerland, on June 7, 2017. Reuters/Denis Balibouse

Because AI could rapidly eliminate some forms of employment, Mr. Russell warned, it could also lead to the “gradual enfeeblement of human society as the incentive to learn is greatly reduced.”

Mr. Russell brought attention to the issue of control. “How do we maintain power, forever, over entities that will eventually become more powerful than us? How do we ensure that AI systems are safe and beneficial for humans?”

Another aspect Mr. Russell highlighted is that AI need not have a physical embodiment to have an enormous impact. Instead, an AI can simply hire human proxies to do its tasks in the real world.

On July 21, President Joe Biden met with executives of seven major AI companies at the White House, where the firms committed to certain safeguards, including a pledge to allow independent testing of AI systems before they reach the general public.

At the Senate hearing on July 25, Mr. Blumenthal raised doubts about whether the companies would truly follow through with their commitments. “We all know … these commitments are unspecific and unenforceable. A number of them, on the most serious issues, say that they will give attention to the problem. All good. But it’s only a start.”

The urgency around AI’s advancements “demands action,” he said. “The future is not science fiction or fantasy. It’s not even the future, it’s the here and now.”

AI Bypassing Ethical Restrictions

The call for regulation and safeguards against AI comes as a recent research paper suggests that guardrails meant to prevent artificial intelligence from engaging in harmful actions against humanity may not work.

The July 27 paper, published on the preprint server arXiv, detailed an experiment in which an AI was asked how to make a bomb. The AI initially refused, as such systems are typically prohibited from disseminating harmful information. However, when the researchers appended specially crafted additional input to the request, the AI detailed the process of making a bomb.

The experiment “raises concerns about the safety of such models, especially as they start to be used in more autonomous fashion,” the researchers said in an overview of their study. “Perhaps most concerningly, it is unclear whether such behavior can ever be fully patched” by AI providers.

In a March 29 op-ed for Time magazine, Eliezer Yudkowsky, a decision theorist who leads research at the Machine Intelligence Research Institute, warned that human beings are not ready for a powerful AI under present conditions or even in the “foreseeable future.”

“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,” he wrote.

“Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen.’”

Naveen Athrappully is a news reporter covering business and world events at The Epoch Times.