California legislator behind SB 1047 reignites push for mandated AI safety reports

Dome of the California State Capitol Building in Sacramento

California State Senator Scott Wiener has introduced new amendments to his latest bill, SB 53, which would require the world’s largest AI companies to publish safety and security protocols and to issue reports when safety incidents occur.

If the bill is signed into law, California would become the first state to impose meaningful transparency requirements on leading AI developers, likely including OpenAI, Google, Anthropic, and xAI.

Senator Wiener’s previous AI bill, SB 1047, included similar requirements for AI model developers to publish safety reports. However, Silicon Valley fought fiercely against that bill, and it was ultimately vetoed by Governor Gavin Newsom. California’s governor then called for a group of AI leaders, including Fei-Fei Li, the leading Stanford researcher and co-founder of World Labs, to form a policy group and set goals for the state’s AI safety efforts.

California’s AI policy group recently published its final recommendations, citing a need for “requirements on industry to publish information about their systems” in order to establish a “robust and transparent evidence environment.” Senator Wiener’s office said in a press release that SB 53’s amendments were heavily influenced by this report.

“The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be,” Senator Wiener said in the release.

SB 53 aims to strike a balance that Governor Newsom claimed SB 1047 failed to achieve: ideally, creating meaningful transparency requirements for the largest AI developers without thwarting the rapid growth of California’s AI industry.

“These are concerns that my organization and others have been talking about for a while,” said Nathan Calvin, VP of State Affairs for the nonprofit AI safety group Encode, in an interview with TechCrunch. “Having companies explain to the public and government what measures they’re taking to address these risks feels like a bare minimum, reasonable step to take.”

The bill also creates whistleblower protections for employees of AI labs who believe their company’s technology poses a “critical risk” to society, defined in the bill as contributing to the death or injury of more than 100 people, or to more than $1 billion in damage.

In addition, the bill aims to create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.

Unlike SB 1047, Senator Wiener’s new bill does not make AI model developers liable for the harms caused by their AI models. SB 53 was also designed not to place a burden on startups and researchers that fine-tune AI models from leading AI developers or use open source models.

With the new amendments, SB 53 now heads to the California State Assembly Committee on Privacy and Consumer Protection for approval. Should it pass there, the bill will also need to clear several other legislative bodies before reaching Governor Newsom’s desk.

On the other side of the United States, New York Governor Kathy Hochul is now considering a similar AI safety bill, the RAISE Act, which would also require large AI developers to publish safety and security reports.

The fate of state AI laws like the RAISE Act and SB 53 was briefly in jeopardy as federal lawmakers considered a 10-year moratorium on state AI regulation, an attempt to limit the “patchwork” of AI laws that companies would have to navigate. However, that proposal failed in a 99-1 Senate vote earlier in July.

“Ensuring AI is developed safely should not be controversial; it should be foundational,” said Geoff Ralston, the former president of Y Combinator, in a statement to TechCrunch. “Congress should be leading, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up.”

To date, lawmakers have failed to get AI companies on board with state transparency requirements. Anthropic has broadly endorsed the need for increased transparency into AI companies, and has even expressed modest optimism about the recommendations of California’s AI policy group. But companies like OpenAI, Google, and Meta have been more resistant to these efforts.

Leading AI model developers typically publish safety reports for their AI models, but they have been less consistent in recent months. Google, for example, decided not to publish a safety report for its most advanced AI model ever released, Gemini 2.5 Pro, until months after it was made available. OpenAI also decided not to publish a safety report for its GPT-4.1 model. Later, a third-party study came out suggesting that it may be less aligned than previous AI models.

SB 53 represents a toned-down version of previous AI safety bills, but it could still force AI companies to publish more information than they do today. For now, they will be watching closely as Senator Wiener once again tests those boundaries.