California continues to lead the nation in artificial intelligence (“AI”) regulation with the recent enactment of Senate Bill (“SB”) 53, the Transparency in Frontier Artificial Intelligence Act (“TFAIA” or the “Act”).[1] Signed by Governor Gavin Newsom earlier this fall, the TFAIA takes effect January 1, 2026, and establishes significant oversight, accountability, and reporting requirements for developers at the cutting edge of AI. The law sets a framework for transparency and public safety and is expected to serve as a precedent for AI legislation nationwide.

Background

Prior to the TFAIA, Governor Newsom vetoed a similar California AI bill, SB 1047, in 2024. SB 1047 was more aggressive than SB 53: it would have imposed far more stringent AI safety requirements and, as a result, drew heavy pushback from tech industry groups, who argued the bill would stifle AI innovation.

After vetoing SB 1047, Governor Newsom convened the Joint California Policy Working Group on AI Frontier Models, which produced a report of recommendations for AI safety and regulation that helped shape the TFAIA. Authored by California Senator Scott Wiener (D-SF), the TFAIA is significantly narrower, transparency-focused legislation that establishes state-level oversight of AI use, assessment, governance, and reporting mechanisms to mitigate serious risks and harms from AI systems created by certain advanced developers.

Who Is Covered?

The Act primarily regulates the largest and most advanced AI developers, targeting those “frontier” developers with the capability or resources to create highly sophisticated models. Developers fall into two categories: (1) “frontier developers,” which train frontier models (foundation models trained above a statutory compute threshold), and (2) “large frontier developers,” frontier developers whose annual gross revenues exceed $500 million. Large frontier developers are subject to the Act’s most extensive obligations.

Key Takeaways

Below is an overview of the TFAIA’s major provisions:

  1. Publication of Frontier AI Framework: Large frontier developers must publish a “Frontier AI Framework” on their websites describing, among other things: (1) how they incorporate national standards, international standards, and industry best practices; (2) governance structures for managing AI safety; and (3) documented procedures for identifying and mitigating “catastrophic risk,” defined as a foreseeable, material risk that a frontier model will cause the death of, or serious injury to, more than 50 people, or cause over $1 billion in property damage, in a single incident. The TFAIA specifies that model conduct that could lead to these outcomes includes providing expert-level assistance in creating or releasing weapons (chemical, biological, radiological, or nuclear), engaging in autonomous criminal acts, evading control, and conducting large-scale cyberattacks without significant human oversight. The framework must be reviewed and updated at least annually, with all material changes and their justifications published within 30 days.
  2. Publication of Transparency Report: Before or concurrently with deploying a frontier model (or a substantially modified version of one), a frontier developer must publish a transparency report on its website. Reports must include details such as the model’s release date, intended uses, conditions and restrictions on use, and supported languages. Large frontier developers’ transparency reports must also summarize their catastrophic risk assessments of the frontier model, the results of those assessments, and the role of any third-party evaluators. In addition, large frontier developers must submit a summary of their catastrophic risk assessments to the California Governor’s Office of Emergency Services (“OES”) every three months or on another reasonable schedule.
  3. Reporting of Critical Safety Incidents: Additionally, the Act creates a new mechanism for frontier developers and the public to report critical safety incidents to OES. A “critical safety incident” covers specified events related to a frontier model, such as a loss of control or unauthorized access, that result in harm. All frontier developers must disclose critical safety incidents to OES within 15 days of discovering the incident. If an incident “poses an imminent risk of death or serious physical injury,” the frontier developer must disclose it to an appropriate governmental authority (e.g., a law enforcement or public safety agency) within 24 hours. Frontier developers may amend their reports if new information comes to light after an incident has been reported.
  4. Whistleblower Protections: Building on this emphasis on disclosure, the TFAIA also protects employees who report potential catastrophic risks to authorities: frontier developers are barred from retaliating against employees who report risks they uncover in the course of their work. Employees may report frontier developer activities that pose a specific and substantial danger to public health or safety, or that violate the TFAIA, if they have reasonable cause to believe such misconduct has occurred. The law also establishes a hotline to make it easier for employees to disclose potentially dangerous workplace activities.
  5. Enforcement: To ensure safe and sustainable AI use, the TFAIA grants the California Attorney General authority to enforce the Act, with civil penalties of up to $1 million per violation depending on the severity of the offense. Notably, penalties may be imposed on large frontier developers that fail to report critical safety incidents or to submit required compliance documents to regulatory agencies.
  6. Other Key Provisions:
    • CalCompute – The Act also establishes CalCompute, a public computing cluster dedicated to advancing safe, ethical, and sustainable AI development for the public sector. Its primary goal is to foster research and innovation that directly benefits California’s residents, prioritizing the public good over corporate interests.
    • Updates to the Law – Acknowledging that AI is rapidly advancing, the TFAIA directs the California Department of Technology to regularly review developments in the field and recommend updates to keep the law current with the industry. Beginning January 1, 2027, the Department must make annual recommendations for updates to California’s AI regulations.

Next Steps

To ensure readiness and compliance with the TFAIA, healthcare organizations should consider proactive steps such as determining whether they or their AI vendors qualify as frontier developers under the Act, reviewing AI governance, transparency, and incident-response procedures against the Act’s requirements, and monitoring forthcoming guidance from OES and the California Attorney General.

Looking Ahead

With the TFAIA, California continues to regulate and promote responsible AI use in the absence of comprehensive federal regulations. We will continue to monitor how the implementation of the TFAIA unfolds and will provide updates accordingly.

FOOTNOTES

[1] California Senate Bill 53: Transparency in Frontier Artificial Intelligence Act.
