On September 29, 2025, Governor Gavin Newsom signed SB 53, the Transparency in Frontier Artificial Intelligence Act (“the Act”), into law, establishing a regulatory framework for developers of advanced artificial intelligence (AI) systems. The law imposes new transparency, reporting, and risk management requirements on entities developing high-capacity AI models, and it is the first of its kind in the United States. Although several states, including California, Colorado, Texas, and Utah, have passed consumer AI laws, SB 53 focuses on the safety of the development and use of large AI platforms. According to Newsom in his signing message to the California State Senate, the Act “will establish state-level oversight of the use, assessment, and governance of advanced artificial intelligence (AI) systems…[to] strengthen California’s ability to monitor, evaluate, and respond to critical safety incidents associated with these advanced systems, empowering the state to act quickly to protect public safety, cybersecurity, and national security.”

Newsom highlighted that “California is the birthplace of modern technology and innovation” and “is home to many of the world’s top AI researchers and developers.” This, he wrote, creates “a unique opportunity to provide a blueprint for well-balanced AI policies beyond our borders—especially in the absence of a comprehensive federal AI policy framework and national AI safety standards.”

Although the Biden administration issued an Executive Order in October 2023 designed to start the discussion and development of guardrails around the use of AI in the United States, President Trump rescinded that Order on his first day in office in January 2025 without providing any meaningful replacement. Since then, there has been little from the White House beyond encouragement for AI developers to move fast. As a result, states are recognizing the risks AI poses to consumers, cybersecurity, and national security, and, as usual, California is leading the way in addressing those risks.

Newsom noted in his message to the California State Senate that, in the event “the federal government or Congress adopt national AI standards that maintain or exceed the protections in this bill, subsequent action will be necessary to provide alignment between policy frameworks—ensuring businesses are not subject to duplicative or conflicting requirements across jurisdictions.” The substance of the bill is summarized below.

Who Is Covered?

The Act is meant to cover only certain powerful artificial intelligence models. It defines AI models generally as computer systems that can make decisions or generate responses based on the information they receive. Such systems can operate with varying levels of independence and are designed to affect real-world or digital environments, such as by controlling devices, answering questions, or creating content. The Act then defines several specific types of AI models and AI developers:

  1. Frontier model – a foundation model trained using an extremely large quantity of computing power (the Act sets the threshold at more than 10^26 integer or floating-point operations).
  2. Frontier developer – a person or entity that has trained, or initiated the training of, a frontier model.
  3. Large frontier developer – a frontier developer that, together with its affiliates, had annual gross revenues exceeding $500 million in the preceding calendar year.

The Act applies to frontier developers, with the most demanding obligations reserved for large frontier developers. The law is designed to target developers with significant resources and influence over high-capacity AI systems; it is not meant to cover smaller or less computationally intensive projects.

Key Compliance Requirements

  1. Frontier AI Framework – Large frontier developers must publish and maintain a documented framework outlining how they assess and mitigate catastrophic risks associated with their models. The framework may include risk thresholds and mitigation strategies, cybersecurity practices, and internal governance and third-party evaluations. A catastrophic risk is defined as a foreseeable and material risk that a frontier model could materially contribute to the death of, or serious injury to, more than 50 people, or cause over $1 billion in property damage, through misuse or malfunction.

  2. Transparency Reports – Frontier developers must publish a transparency report at or before the time a new or substantially modified frontier model is deployed, describing the model and its intended uses and restrictions.

  3. Critical Safety Incident Reporting – Frontier developers must report critical safety incidents to the California Office of Emergency Services within 15 days of discovery, and within 24 hours where an incident poses an imminent risk of death or serious physical injury.

Whistleblower Protections

The law prohibits retaliation against employees who report safety concerns or violations. Large frontier developers must notify employees of their whistleblower rights, maintain an anonymous internal reporting mechanism, and provide regular updates to reporting employees on the status of their disclosures.

Enforcement and Penalties

Noncompliance may result in civil penalties of up to $1 million per violation, enforceable by the California Attorney General. This high ceiling is likely to incentivize proactive compliance and documentation. Penalties may be imposed for failure to publish required documents, materially false statements about catastrophic risk, or noncompliance with the developer’s own framework.

CalCompute Initiative

The Act also establishes a consortium to develop CalCompute, a public cloud computing cluster intended to support safe and equitable AI research. A report outlining its framework is due to the California Legislature by January 1, 2027. CalCompute could become a strategic resource for academic and nonprofit developers who seek access to high-performance computing but lack the necessary commercial infrastructure.

Takeaways

The Act introduces a structured compliance regime for high-capacity AI systems. Organizations subject to the Act should begin reviewing their AI development practices, internal governance structures, and incident response protocols.
