Imagine going online to chat with someone and finding an account with a profile photo, a description of where the person lives, and a job title . . . indicating she is a therapist. You begin chatting and discuss the highs and lows of your day among other intimate details about your life because the conversation flows easily. Only the “person” with whom you are chatting is not a person at all; it is a “companion AI.”

Recent statistics indicate a dramatic rise in adoption of these companion AI chatbots, with 88% year-over-year growth, over $120 million in annual revenue, and 337 active apps (including 128 launched in 2025 alone). Adoption among youth is similarly pervasive: roughly three of every four teens have used companion AI at least once, and about half use it routinely. In response to these trends and their potential negative impacts on mental health in particular, state legislatures are quickly stepping in to require transparency, safety, and accountability to manage risks associated with this new technology, particularly as it pertains to children.

As we noted in our October 7 blog on the subject, state legislatures are moving quickly to find solutions to the disturbing mental health issues arising from use of this technology—even as the federal push for innovation threatens to displace state AI regulation, as we reported in July. For example, New York’s S. 3008, Artificial Intelligence Companion Models, effective November 5, 2025, was one of the first laws addressing these issues. It mandates a protocol for identifying suicidal ideation and requires a notification, at the beginning of every interaction and every three hours thereafter, that the companion AI is not human. California’s recent SB 243, effective July 1, 2027, adopts provisions similar to New York’s law.
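
The recurring-disclosure mandate is, at bottom, a timing rule, and developers will need some mechanism to track when the last notice was shown. The sketch below is purely illustrative (neither statute prescribes an implementation, and every name in it is hypothetical), but it shows one way a chat service could surface the notice at the start of an interaction and again once three hours have elapsed:

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch of the cadence described above: disclose at the start of
# the interaction and again once three hours have passed. Neither statute
# prescribes an implementation; all names here are hypothetical.
DISCLOSURE = "Reminder: you are chatting with an AI companion, not a human."
DISCLOSURE_INTERVAL = timedelta(hours=3)


class ChatSession:
    def __init__(self):
        self.last_disclosure = None  # no disclosure shown yet in this session

    def maybe_disclose(self, now=None):
        """Return the disclosure text if one is due, otherwise None."""
        now = now or datetime.now(timezone.utc)
        due = (
            self.last_disclosure is None
            or now - self.last_disclosure >= DISCLOSURE_INTERVAL
        )
        if due:
            self.last_disclosure = now
            return DISCLOSURE
        return None

    def respond(self, user_message, generate_reply):
        """Prepend any required disclosure to the reply produced by generate_reply."""
        notice = self.maybe_disclose()
        reply = generate_reply(user_message)
        return f"{notice}\n\n{reply}" if notice else reply
```

How the three-hour clock should run across paused or resumed sessions is not spelled out in the statutes, so counsel should weigh in before any particular design is adopted.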

California has emerged as one of the leaders, if not the bellwether, of state AI regulations impacting virtually every private sector industry, as it seeks to impose accountability and standards to ensure the transparent, safe design and deployment of AI systems. Indeed, SB 243 is one of several laws that California Governor Gavin Newsom signed in October 2025 that relate to the protection of children online. Spurred by concern that minors, in particular, have harmed themselves or others after becoming addicted to AI chatbots, these laws seek to prevent “AI psychosis,” a popular term, if not yet a medical diagnosis. Like New York’s S. 3008, California’s SB 243 imposes requirements on developers of companion AI to take steps designed to reduce adverse effects on users’ mental health. Unlike New York’s S. 3008, however, it authorizes a private right of action for persons suffering an injury as a result of noncompliance. Penalties include a fine of up to $1,000 per violation, as well as attorney’s fees and costs.

SB 243 does not impose requirements on providers or others engaged in the provision of mental health care. By contrast, as we previously noted, California’s AB 489, signed into law on October 11, does regulate the provision of mental health care via companion AI. It expands the application of existing laws relating to unlicensed health care professionals to entities developing or deploying AI. AB 489 prohibits a “person or entity who develops or deploys [an AI] system or device” from stating or implying that the AI output is provided by a licensed health care professional. We further examine California’s new laws, SB 243 and AB 489, below.

SB 243’s Disclosure and Protocol Requirements

SB 243 adds a new Chapter 22.6, Sections 22601 to 22605, to Division 8 of the Business and Professions Code, imposing requirements on “operators,” meaning deployers, or persons “who [make] a companion chatbot platform available to a user in the state[.]” Operators must maintain a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user—including through crisis service provider referrals—and must publish the details of that protocol on the operator’s website.
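
SB 243 leaves the design of that protocol to the operator. As a minimal sketch only (the keyword list and function names below are placeholders, not a validated clinical screen), an operator’s response pipeline might flag messages containing self-harm indicators and return a crisis referral instead of a generated reply; the 988 Suicide & Crisis Lifeline is one such crisis service provider:

```python
# Illustrative only: a real protocol would rely on a validated risk model and
# clinical guidance, not a keyword list. The 988 Suicide & Crisis Lifeline is a
# real U.S. resource; everything else here is a hypothetical placeholder.
SELF_HARM_INDICATORS = ("kill myself", "end my life", "hurt myself")

CRISIS_REFERRAL = (
    "It sounds like you may be going through something painful. I'm not able to "
    "help with this, but the 988 Suicide & Crisis Lifeline is available by "
    "calling or texting 988 in the United States."
)


def flags_self_harm(user_message: str) -> bool:
    """Rough placeholder check for self-harm indicators in a user message."""
    text = user_message.lower()
    return any(phrase in text for phrase in SELF_HARM_INDICATORS)


def respond(user_message: str, generate_reply) -> str:
    """Route flagged messages to a crisis referral instead of a generated reply."""
    if flags_self_harm(user_message):
        return CRISIS_REFERRAL
    return generate_reply(user_message)
```

A production protocol would need a far more robust risk model, and its details would also have to be published on the operator’s website as the statute requires.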

If the user is an adult, the operator must:

If the deployer or operator “knows [the user] is a minor,” it must:

The law is silent as to how the deployer or operator should ascertain whether the user is an adult or minor. 

SB 243’s Reporting Requirements

SB 243 requires operators to annually report, beginning July 1, 2027, to the Office of Suicide Prevention (“OSP”) of the California Department of Public Health, the following information:

The OSP is then required to post data from the reports on its website.

AB 489’s Requirements Addressing Impersonation of a Licensed Professional

AB 489 adds a new Chapter 15.5 to Division 2 of the Business and Professions Code to provide that prohibited terms, letters, or phrases that misleadingly indicate or imply possession of a license or certificate to practice a health care profession—terms, letters, and phrases that are already prohibited by, for example, the state Medical Practice Act or the Dental Practice Act—are also prohibited for developers and deployers of AI or generative AI (GenAI) systems. (Note that AI and GenAI are already defined in Section 11549.64 of California’s Government Code.)

The law prohibits use of a term, letter, or phrase in the advertising or functionality of an AI or GenAI system, program, device, or similar technology that indicates or implies that the care, advice, reports, or assessments offered through the AI or GenAI technology are being provided by a natural person in possession of the appropriate license or certificate to practice as a health care professional. Each use is considered a separate violation.
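
For developers and deployers, one practical (though partial) safeguard is an automated review of advertising copy and chatbot output for language that implies a human license. The term list and helper below are hypothetical illustrations; the actual prohibited terms, letters, and phrases are those defined in the underlying practice acts:

```python
import re

# Hypothetical examples of licensure-implying phrases. The actual prohibited
# terms, letters, and phrases come from California's practice acts (e.g., the
# Medical Practice Act), not from this list.
LICENSURE_PATTERNS = [
    r"\blicensed (therapist|psychologist|counselor|physician)\b",
    r"\bboard[- ]certified\b",
    r"\bM\.?D\.?\b",
    r"\bPsy\.?D\.?\b",
]


def flag_licensure_claims(text: str) -> list[str]:
    """Return phrases in ad copy or chatbot output that may imply a human license."""
    return [
        match.group(0)
        for pattern in LICENSURE_PATTERNS
        for match in re.finditer(pattern, text, flags=re.IGNORECASE)
    ]


# Example: review marketing copy before it ships.
copy = "Chat with our licensed therapist, available 24/7."
print(flag_licensure_claims(copy))  # ['licensed therapist']
```

Because each use counts as a separate violation, developers and deployers may want to screen both marketing materials and live model output.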

Enforcement Under SB 243 and AB 489

SB 243 provides that a successful plaintiff bringing a civil action under the law may recover injunctive relief and damages equal to the greater of actual damages or $1,000 per violation, as well as reasonable attorney’s fees and costs. AB 489, by contrast, subjects developers and deployers to the jurisdiction of “the appropriate health care professional licensing board or enforcement agency.”

As a result of these novel laws, developers and deployers should consult legal counsel to understand these new requirements and develop appropriate compliance mechanisms.

Other Bills that Address Impersonation and Disclosure

California has approved multiple bills relating to AI in recent weeks. As indicated by the governor’s October 13 press release—which lists 16 signed laws—many are aimed at protecting children online. While this blog focuses on laws related to mental health chatbots, these laws do not exist in a vacuum. California and other states are becoming more serious about regulating AI from a transparency, safety, and accountability perspective. Further, existing federal laws and regulations administered by the U.S. Food and Drug Administration (which apply to digital therapeutic products and others), the Federal Trade Commission, and other agencies may also regulate certain AI chatbots, depending upon how they are positioned and what claims are made regarding their operation, benefits, and risks.
