
WHAT THE BILL WOULD DO
Senate Bill 53 by Sen. Scott Wiener, a Democrat from San Francisco, requires AI makers to test their models for the ability to enable harms such as cybersecurity crimes or biological weapons attacks, and would give developers and the public a way to report AI that poses a catastrophic risk. Gov. Gavin Newsom vetoed a previous version of the bill last year. A report Newsom commissioned, released in June, calls for whistleblower protections and transparency in the model development process to balance innovation with guardrails.
The bill defines catastrophic risk as an AI model's demonstrated ability to cause the death or injury of 50 or more people, or more than $1 billion in damage to or loss of property. Only companies with $500 million or more in annual revenue must comply with the law.
WHO SUPPORTS IT
The bill is supported by organizations including Tech Oversight California, the Little Hoover Commission, an independent state oversight group, the Service Employees International Union and several groups concerned that AI could go rogue and harm people.
“While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington,” said Anthropic, an AI company founded by former OpenAI employees.
It’s unclear if the final version will win the governor’s support, but co-sponsor Encode told CalMatters that the governor’s office helped shape final amendments to the bill.
WHO IS OPPOSED
Business groups such as the Chamber of Commerce and TechNet oppose the legislation, as does OpenAI; all argue instead for a national policy set by lawmakers in Washington, D.C.
WHY IT MATTERS
This bill gives the public and people inside companies developing AI an easy way to alert authorities if the tech shows an ability to cause great harm. The legislation states that “timely reporting of critical safety incidents to the government is essential to ensure that public authorities are promptly informed of ongoing and emerging risks to public safety.”
AI researchers have called for whistleblower protections before, citing concerns about how AI can amplify existing inequality and about hypothetical ways the technology may harm people in the future.
The legislation would also require companies to publish on their websites details about how they evaluate models for catastrophic risk, and would establish CalCompute, a public cloud that must focus on developing AI for the public good.