Legislation Introduced in California to Regulate Artificial Intelligence

The bill would require the largest AI developers to comply with pre-deployment testing and cybersecurity protection standards.
A Tesla robot is seen on display during the World Artificial Intelligence Conference (WAIC) in Shanghai on July 6, 2023. Wang Zhao/AFP via Getty Images
Travis Gillmore

With several prominent machine learning experts warning about potential threats to humanity if artificial intelligence technologies are allowed to develop without comprehensive oversight, a California lawmaker recently introduced legislation that would regulate the industry.

Senate Bill 1047, authored by state Sen. Scott Wiener, would require the largest AI developers to comply with pre-deployment testing and cybersecurity protection standards.

Noting the benefits the new technology could provide while acknowledging its risks, the author said the bill would help ensure safe development.

“Large-scale artificial intelligence has the potential to produce an incredible range of benefits for Californians and our economy—from advances in medicine and climate science to improved wildfire forecasting and clean power development,” Mr. Wiener said in a Feb. 8 press release announcing the legislation. “It also gives us an opportunity to apply hard lessons learned over the last decade, as we’ve seen the consequences of allowing the unchecked growth of new technology without evaluating, understanding, or mitigating the risks.”

Navigating the new technological frontier requires a moderate, balanced approach to regulating the field, he said.

“SB 1047 does just that, by developing responsible, appropriate guardrails around development of the biggest, most high-impact AI systems to ensure they are used to improve Californians’ lives, without compromising safety or security,” Mr. Wiener said.

California state Senator Scott Wiener speaks at the Lambda Legal 2018 West Coast Liberty Awards at the SLS Hotel in Beverly Hills, Calif., on June 7, 2018. Randy Shropshire/Getty Images for Lambda Legal

The bill follows federal efforts announced earlier this month by the National Institute of Standards and Technology, part of the Department of Commerce, which launched the AI Safety Institute Consortium to develop guidelines for evaluating the industry.

The bill would also establish a “research cluster” known as CalCompute to provide opportunities for researchers, startups, and other groups to work together on the development of AI systems.

“By providing a broad range of stakeholders with access to the AI development process, CalCompute will help align large-scale AI systems with the values and needs of California communities,” Mr. Wiener said.

From wildfire protection to novel drug discovery, AI is being used in a wide range of settings to deepen understanding and improve efficiency by accelerating research.

While the potential benefits span a variety of applications, some experts warn of the dangers that could accompany the rapid pace of development the technology has seen in recent years.

“Forty years ago, when I was training the first version of the AI algorithms behind tools like ChatGPT, no one—including myself—would have predicted how far AI would progress,” Geoffrey Hinton, professor emeritus of computer science at the University of Toronto, said in Mr. Wiener’s press release. “Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously.”

Visitors watch a Tesla robot displayed at the World Artificial Intelligence Conference (WAIC) in Shanghai on July 6, 2023. Wang Zhao/AFP via Getty Images

Recognized as one of the “godfathers” of AI, Mr. Hinton resigned from Google in 2023, citing apprehension about the technology’s risks.

“I am still passionate about the potential for AI to save lives through improvements in science and medicine, but it’s critical that we have legislation with real teeth to address the risks,” he said. “California is a natural place for that to start, as it is the place this technology has taken off.”

Fellow AI pioneer Yoshua Bengio, who with Mr. Hinton and Yann LeCun won the 2018 Turing Award, known as the “Nobel Prize of Computing,” agreed that dangers to humanity exist if the technology develops too fast.

“AI systems beyond a certain level of capability can pose meaningful risks to democracies and public safety,” Mr. Bengio, professor of computer science at the University of Montreal, said in the press release. “Therefore, they should be properly tested and subject to appropriate safety measures.”

Legal experts are also concerned that advanced AI systems in the wrong hands could prove disastrous, potentially enabling foreign actors to launch offensive cyberattacks or create weapons of mass destruction using biological, chemical, and nuclear technologies.

“The theft of a powerful AI system from a leading lab by our adversaries would impose considerable risks on us all,” Andrew C. Weber, former U.S. assistant secretary of defense for nuclear, chemical, and biological defense programs, said in the press release. “Developers of the most advanced AI systems need to take significant cybersecurity precautions given the potential risks involved in their work.”

Travis Gillmore is an avid reader and journalism connoisseur based in California covering finance, politics, the State Capitol, and breaking news for The Epoch Times.