California’s legislature passed a bill on Aug. 29 that seeks to regulate artificial intelligence (AI) models and establish guidelines for developing the most powerful systems. The measure now awaits Gov. Gavin Newsom’s signature.
The proposed measure is supported by dozens of tech companies, labor groups, and other organizations, including the Los Angeles Area Chamber of Commerce; the Center for AI Safety Action Fund, a San Francisco-based research-oriented nonprofit; and the Economic Security Project Action, a nationwide nonprofit that advocates for economic power for all Americans.
However, the bill is hotly contested by more than 100 organizations, including numerous technology firms, the California Chamber of Commerce, and the California Manufacturers and Technology Association, which represents 400 businesses.
Opponents generally object to what they believe are overly stringent regulations that would limit the industry’s ability to innovate.
The bill’s author, state Sen. Scott Wiener, said that while AI can help advance medicine, wildfire forecasting, and other emerging fields, the measure is needed to protect Californians from unintended consequences and potential malicious use of powerful computing models.
“[AI] also gives us an opportunity to apply hard lessons learned over the last decade, as we’ve seen the consequences of allowing the unchecked growth of new technology without evaluating, understanding, or mitigating the risks,” Wiener said in a legislative analysis. “SB 1047 does just that, by developing responsible, appropriate guardrails around development of the largest, most powerful AI systems, to ensure they are used to improve Californians’ lives, without compromising safety or security.”
Billionaire tech entrepreneur Elon Musk has voiced his support for the measure in recent days.
Uncertainty abounds, according to consultants for the Assembly’s Judiciary Committee, who said in an analysis published in July that the industry presents “unprecedented opportunities and significant challenges,” as highly capable AI models can behave unpredictably and can attract malicious actors seeking to misuse the technology.
“This unpredictability, coupled with the high stakes involved, underscores the necessity for stringent oversight and safety protocols,” committee staff wrote in the analysis.
Supporters of the bill suggest that criminals and terrorists could use the technology to coordinate attacks, and they highlight concerns about cyberattacks, espionage, and misinformation campaigns as just a few examples of why regulations are needed.
Some lawmakers in both chambers, representing both sides of the aisle, have spoken in support of the bill as it has made its way through the legislature this year.
“Artificial intelligence has an enormous potential to benefit our state, our nation, and the world,” Democratic Assemblyman Steve Bennett said during an Assembly hearing on Aug. 28. “Artificial intelligence also has an enormous potential to be misused and cause serious problems that are beyond our ability to even imagine.”
He said the proposal is a requisite first step toward safeguarding the industry.
“This bill is, after my examination of it, I believe, a light touch, the lightest touch you could possibly come up with, which is the companies themselves need to do their own due diligence to make sure this is safe,” Bennett said. “That’s all the bill does: require them to ... do their own due diligence.”
While critics of the bill said it could stifle innovation, another Assembly member pushed back on that notion.
“It’s time that big tech plays by some kind of a rule. And I’m kind of frankly sick of hearing all this different stuff of, ‘oh, we’re going to stop the growth of tech,’” Republican Assemblyman Devon Mathis told fellow assembly members before voting on the bill. “No, we’re not. But you have to put guide rails. We have to make sure that they’re going to be responsible players.”
One co-sponsor of the bill said the regulations for the most powerful systems—which will cost at least $100 million to develop—are a significant matter of national security and have public safety implications.
“SB 1047 introduces essential safeguards for the creation of highly capable AI models,” Encode Justice, a California-based advocacy group seeking AI regulations, said in a legislative analysis.
Opponents said the bill’s requirements would unnecessarily hinder AI developers and the companies that utilize the technology, forcing them to account for all types of potential harms, even those that are largely speculative.
“Unfortunately, SB 1047 forces model developers to engage in speculative fiction about imagined threats of machines run amok, computer models spun out of control, and other nightmare scenarios for which there is no basis in reality,” the Chamber of Progress, a tech industry coalition headquartered in Virginia, said in legislative analyses.
Another critic said the bill is too vague and could create obstacles for open-source AI development and for companies that lack the extensive legal teams of larger, well-financed tech firms.
If the bill is ultimately signed into law, the regulations would cost the state between $5 million and $10 million annually in government operations, plus between $4 million and $6 million to implement and operate the CalCompute system each year, according to estimates from the legislature’s appropriations committees.
The state’s costs to manage violations could also amount to millions of dollars, depending on their number and the workload needed to address them, according to the appropriations committees.
The governor has until Sept. 30 to sign or veto the bill.