This story was originally published by CalMatters.
Tech companies that create large, advanced artificial intelligence models will soon have to share more information about how the models can impact society and give their employees ways to warn the rest of us if things go wrong.
Starting January 1, a law signed by Gov. Gavin Newsom gives whistleblower protections to employees at companies like Google and OpenAI whose work involves assessing the risk of critical safety incidents. It also requires large AI model developers to publish frameworks on their websites describing how the company responds to critical safety incidents and how it assesses and manages catastrophic risk. Fines for violating those frameworks can reach $1 million per violation. Under the law, companies must report critical safety incidents to the state within 15 days, or within 24 hours if they believe a risk poses an imminent threat of death or injury.
The law began as Senate Bill 53, authored by state Sen. Scott Wiener, a Democrat from San Francisco, to address catastrophic risk posed by advanced AI models, sometimes called frontier models. The law defines catastrophic risk as an instance in which the technology kills more than 50 people through a cyberattack or harms people with a chemical, biological, radiological, or nuclear weapon, or in which AI use results in more than $1 billion in theft or damage. It addresses those risks in the context of an operator losing control of an AI system, for example because the AI deceived them or took independent action, scenarios that remain largely hypothetical.
The law increases the information that AI makers must share with the public, including a transparency report that must cover a model's intended uses, restrictions or conditions on its use, how the company assesses and addresses catastrophic risk, and whether those efforts were reviewed by an independent third party.
The law will bring much-needed disclosure to the AI industry, said Rishi Bommasani, part of a Stanford University group that tracks transparency around AI. Only three of the 13 companies his group recently studied regularly carry out incident reporting, and the transparency scores his group issues to such companies fell on average over the last year, according to a newly issued report.
Bommasani is also a lead author of a report ordered by Newsom that heavily influenced SB 53 and calls transparency a key to public trust in AI. He thinks the effectiveness of SB 53 depends heavily on the government agencies tasked with enforcing it and the resources they are allocated to do so.
“You can write whatever law in theory, but the practical impact of it is heavily shaped by how you implement it, how you enforce it, and how the company is engaged with it,” Bommasani said.
The law was influential even before it went into effect. The governor of New York, Kathy Hochul, credited it as the basis for the AI transparency and safety law she signed Dec. 19. The similarity will grow, City & State New York reported, as the New York law will be “substantially rewritten next year largely to align with California’s language.”
Limitations and implementation
The new law falls short no matter how well it is enforced, critics say. Its definition of catastrophic risk excludes issues like the impact of AI systems on the environment, their ability to spread disinformation, and their potential to perpetuate historical systems of oppression like sexism or racism. The law also does not apply to AI systems used by governments to profile people or assign them scores that can lead to denials of government services or fraud accusations, and it targets only companies that make more than $500 million in annual revenue.
Its transparency measures also stop short of full public visibility. In addition to publishing transparency reports, AI developers must send incident reports to the California Office of Emergency Services (OES) when things go wrong, and members of the public can contact that office to report catastrophic risk incidents.
But the contents of incident reports submitted to OES by companies or their employees cannot be obtained by the public through records requests; they will instead be shared with members of the California Legislature and Newsom. Even then, they may be redacted to hide information that companies characterize as trade secrets, a common way companies avoid sharing information about their AI models.
Bommasani hopes additional transparency will come from Assembly Bill 2013, which became law in 2024 and also takes effect Jan. 1. It requires companies to disclose details about the data they use to train AI models.
Some elements of SB 53 don’t kick in until next year. Starting in 2027, the Office of Emergency Services will produce a report about critical safety incidents it receives from the public and from large frontier model makers. That report may offer more clarity about the extent to which AI can mount attacks on infrastructure or act without human direction, but it will be anonymized, so the public won’t know which AI models pose those threats.