Joint Commission and Coalition for Health AI Release First-of-Its-Kind Guidance on Responsible AI Use in Healthcare

Key Takeaways

The Joint Commission and the Coalition for Health AI released a new framework, the Responsible Use of AI in Healthcare (RUAIH), to guide safe, ethical adoption of AI tools in clinical and operational settings.
As AI tools rapidly enter health care workflows, they bring both promise and risk. Without clear oversight, organizations may face challenges like algorithmic bias, data misuse and erosion of clinician trust.
Health care organizations should review the RUAIH framework’s seven core principles and take steps to build governance structures, training protocols and safeguards that align AI use with patient-centered care.

On September 17, 2025, the Joint Commission (TJC), in collaboration with the Coalition for Health AI (CHAI), released its Guidance on the Responsible Use of Artificial Intelligence in Healthcare (RUAIH). This marks the first formal framework from a U.S. accrediting body aimed at helping healthcare organizations safely, effectively and ethically integrate AI technologies into clinical and operational practice.
The Joint Commission’s partnership with CHAI reflects the growing recognition that AI will play a transformative role in care delivery. While currently voluntary, the guidance is intended to serve as a foundation for internal governance, risk management and quality oversight, and will likely inform future accreditation and certification pathways related to AI use.
The guidance outlines seven foundational elements that health care organizations should adopt to ensure AI tools are deployed responsibly and integrated into existing governance and compliance systems. This article summarizes those recommendations and explores what they mean for health systems, patients and regulators.
The Promise and Perils of AI in Health Care
AI is poised to transform how health care is delivered. AI tools can identify subtle patterns in imaging scans, forecast disease progression, optimize treatment plans and automate time-consuming administrative tasks, like collecting or connecting patient information, scheduling and coding patient visits and managing health insurance claims and billing. When used effectively, AI tools can enhance diagnostic accuracy, reduce clinician workload and improve operational efficiency, and potentially even save lives.
But the risks are also significant. The RUAIH framework flagged several critical concerns:

Algorithmic bias and error: AI relies on massive datasets, creating risks that inaccurate or unrepresentative data can lead to misdiagnosis or inappropriate treatment recommendations.
Transparency challenges: The “black box” nature of many AI systems undermines clinician trust and patient understanding.
Data privacy and security: AI tools will receive, process and disclose patient data — creating the risk of unauthorized use, breaches and commercial exploitation.
Workflow disruption: Integrating AI tools into established clinical environments can strain operations and generate resistance among staff.
Overreliance: Dependence on AI tools could erode clinician judgment and depersonalize patient care.

While the risks are real, they aren’t, on their own, a reason to rule out AI tools in patient care. The RUAIH framework emphasizes the need to manage those risks responsibly, through structured policies, safeguards and oversight.
Seven Elements of Responsible Use of AI in Health Care
The RUAIH framework outlines seven key elements that health care organizations should consider when deploying or managing AI systems.

AI Policies and Governance Structures

Health care organizations should establish a formal governance framework integrated with existing compliance, risk and patient safety structures. Multidisciplinary oversight (including clinical, technical, legal and patient perspectives) is strongly recommended.

Establish a multi-disciplinary governance body to lead and monitor the selection, implementation and use of AI tools;
Develop policies covering selection, implementation, lifecycle management and compliance for both in-house and third-party AI tools;
Align the use of AI tools with external regulatory compliance and ethical frameworks; and
Provide regular updates to the hospital’s senior leadership and governing board regarding the use, outcomes and potential adverse events of AI tools.

Patient Privacy and Transparency

Health care organizations should build on existing HIPAA and other privacy initiatives to address AI-specific issues such as secondary data use, model transparency and vendor accountability.

Develop policies governing the access, use and protection of patient data, consistent with federal and state data privacy laws and regulations;
Disclose to patients when AI tools are being used and educate patients about the way that AI tools are used in their care; and
Incorporate mechanisms to obtain informed consent when AI tools are used to assist or directly influence clinicians’ treatment decisions.

Data Security and Data Use Protections

Health care organizations must ensure that their data security standards and protocols are applied consistently to AI tools, using both technical and contractual safeguards.

Ensure that all patient data is encrypted in transit and at rest and establish permission-based access controls with audit logs to limit exposure;
Conduct regular vulnerability assessments and penetration testing;
Develop a comprehensive incident response plan and identify appropriate incident response resources to prepare for a potential data breach; and
Enter into appropriate data use agreements with vendors that define permissible uses, govern the use of identified and de-identified data and establish audit rights.

Ongoing Quality Monitoring

AI tools should be treated as dynamic systems requiring continuous oversight rather than static technologies.

Validate each proposed AI tool prior to deployment by reviewing each vendor’s testing, validation and bias assessments;
Establish feedback channels that encourage routine reporting of errors, performance issues and concerns to the AI governance team, organization leadership and vendors; and
Develop systems to track the performance, outcomes and adverse events from using AI tools.

Voluntary Reporting of AI Safety-Related Events

The Joint Commission encourages health care organizations to adopt confidential reporting structures for AI tool-related safety issues in the same manner as other patient safety incidents.

Use existing channels such as Patient Safety Organizations (PSOs) or Joint Commission sentinel-event processes;
Submit de-identified reports of AI near-misses, unsafe recommendations or performance failures; and
Contribute to initiatives like CHAI’s Health AI Registry, which aggregates blinded safety reports for cross-industry learning.

Risk and Bias Assessment

AI tools can inadvertently amplify inequities if not carefully managed. Health care organizations should evaluate AI tools for potential risks and biases before and after implementation.

Require vendors to disclose known risks, limitations and bias in their AI tools;
Validate the AI tools using accurate and representative patient data; and
Use standardized tools to capture and track risk information and monitor patient outcomes when using AI tools to continually identify any disparities.

Education and Training

The guidance highlights workforce training and user education as a core component of responsible AI adoption.

Conduct role-specific training for clinicians and staff on the functionality, limitations and safe use of AI tools;
Deploy AI literacy initiatives to build a baseline understanding of AI concepts, risks and terminology; and
Train on AI policies and procedures used within the organization.

Looking Ahead
Although the RUAIH framework is voluntary at present, it signals the Joint Commission’s likely approach to incorporating AI governance into future accreditation surveys. Providers should begin documenting AI oversight policies, governance structures, validation procedures and staff training. Surveyors may soon request evidence that AI systems are governed with the same rigor as other safety-critical technologies. Establishing these practices now will position organizations for both compliance and leadership as accreditation standards are established and evolve.
The Joint Commission and CHAI have announced that a series of follow-on products will launch later this year and into 2026. The next release will include AI governance playbooks, developed after a national series of workshops designed to gather input from hospitals and health systems of all sizes and regions. These playbooks will expand upon the original guidance and provide practical, operational details to help organizations implement AI governance at scale.
Following the release of the playbooks, the Joint Commission plans to introduce a voluntary AI certification program, available to its more than 22,000 accredited and certified healthcare organizations nationwide. This certification will build on the RUAIH principles and serve as a benchmark for demonstrating responsible and trustworthy AI deployment in clinical and operational settings.
Clinicians will adopt new technology, including AI tools, with or without institutional support. That reality makes it essential for health care organizations to be proactive. By adopting governance structures, safeguarding data, ensuring transparency, monitoring performance, addressing bias and educating staff, health care organizations can unlock the transformative power of AI while minimizing its risk.
AI tools won’t replace clinicians or human judgment, nor should they. But as adoption accelerates, the responsibility falls to health care leaders to ensure that AI is deployed with intention, aligned with core principles and backed by the safeguards patients and clinicians deserve.

Trademark Risks in the AI Age – Navigating Infringement, Dilution and Genericness

Key Takeaways

Artificial intelligence (AI) tools are accelerating risks of trademark infringement, dilution and genericness by enabling the rapid creation of brand-like content—including logos, names and imagery—often without users realizing they’re mimicking protected marks.
Even unintentional AI-generated branding can expose companies to liability under federal and common law, especially where there’s a likelihood of consumer confusion or dilution of a famous mark’s distinctiveness.
Brand owners and startups should actively monitor AI use, enforce IP rights and conduct proper clearance before adopting AI-generated branding to avoid exposure and preserve trademark value.

AI tools are having a direct impact on brand owners and those looking to create their brands. Like any powerful tool, AI offers advancements that are often offset by certain trade-offs. While AI unlocks new opportunities for companies, it also amplifies existing risks of trademark infringement, dilution and genericness. As they adapt to AI’s ever-evolving influence, companies and practitioners alike should consider these risks and how best to mitigate them.
Trademark Infringement Risk: AI Prompts Can Lead to Liability
AI tools operate with massive amounts of data, and generative AI can assist individuals or companies in creating logos and brand names upon command. With these quick and efficient tools, companies can create graphics without the expense of a traditional graphic designer. However, AI may unintentionally mimic or replicate pre-existing trademarks, leading to a heightened risk of trademark infringement.
This risk affects both established companies and startups. Brand owners may face dilution or direct infringement issues, while startups risk unknowingly adopting marks that infringe existing rights. Existing brands need to be more diligent in their monitoring and enforcement, and startups need to ensure that they understand the inherent risks of adopting trademarks and do their own proper diligence. 
The standard for trademark infringement under the Lanham Act is tripartite. It requires 1) ownership of a valid trademark, 2) unauthorized use in commerce and 3) a likelihood of consumer confusion.1 Trademark rights can also be asserted under common law, in which case registration of the mark with the United States Patent and Trademark Office is not necessary.
Regardless of whether trademark rights are being asserted for a registered or unregistered mark, liability does not require access to or knowledge of the trademark that is being infringed. This means a company can infringe another’s mark even where it develops its own mark independently. Importantly, infringement hinges on whether the accused infringer’s use of its mark in commerce is likely to confuse consumers about the product or service’s source, sponsorship or affiliation. So, in a situation where a company adopts an AI-generated design as its logo, and the logo is confusingly similar to an existing logo, the company may face liability for trademark infringement. An improper prompt could lead to the creation of branding that mimics the more popular brands in a category, heightening the risk of infringement. The company’s lack of knowledge of the existing logo, or its good faith intent to avoid copying another’s logo, is immaterial so long as there is a likelihood of confusion between the marks.
Dilution Risk: AI Content Can Weaken Famous Marks
Beyond trademark infringement, AI-generated logos and designs that resemble existing marks pose another concern: they can erode the distinctiveness of these well-known, existing marks. The Lanham Act protects famous brands even when there is no likelihood of confusion, under the principle of “dilution.”
Dilution occurs when a third party uses a famous mark in an unauthorized way that weakens its distinctiveness or reputation, even without causing confusion among consumers. It primarily takes two forms: blurring, which weakens the mark’s singular ability to identify a product, and tarnishment, which harms the mark’s reputation by associating it with inferior or offensive goods or services. AI tools make it easier than ever to mimic the look and feel of famous brands in creating new designs, logos, branding and personal, non-commercial styles. To succeed in a dilution case, the brand owner must show that its mark is 1) famous, 2) used in commerce and 3) used in a way that blurs its distinctiveness or tarnishes the brand’s reputation, regardless of the presence or absence of actual or likely confusion, of competition, or of actual economic injury.2
As these tools make it easier to mimic famous logos, imagery and brand aesthetics, the dilution risk is no longer theoretical. Well-known brands are already seeing viral trends and AI-generated content that blur their distinctiveness, prompting increased enforcement efforts.
One common example is utilizing the “style of” a famous brand. For example, a viral trend in the last few years saw AI image generators creating “Pixar-style” memes that resembled fictional Disney characters, misappropriating Disney and Pixar logos.3 In an effort to comply with Disney’s requests, Microsoft blocked the word “Disney” from use in its Bing AI imaging tool4, although this did not completely mitigate the harm of infringement and dilution. In another instance, Getty Images filed a complaint5 against Stability AI, Inc. for, inter alia, trademark dilution under both federal and Delaware trademark law and trademark infringement relating to the unauthorized use of Getty Images’ trademarks, where generative AI outputs allegedly replicated Getty Images’ watermarks. These examples show how quickly brands must act to prevent reputational damage and dilution, especially when misuse spreads virally. Unauthorized use of a company’s intellectual property, even in seemingly innocuous contexts, can weaken its marks and mislead consumers.  
In another recent example, Hermès v. Rothschild, an artist created “MetaBirkin NFTs” depicting digital images of fuzzy Hermès Birkin bags, leading consumers to believe that Hermès authorized or was affiliated with the NFTs. Hermès sued for trademark infringement, dilution and cybersquatting and was awarded $133,000 in damages after the jury found that Rothschild’s use was misleading.6 An appeal is underway, as Rothschild argues that his work falls within protected artistic expression.7 This case highlights that artificially generated images or goods are not necessarily shielded from liability where consumers are misled or where the distinctiveness of famous brands is diluted.
As the convenience of AI tools explodes, we expect to see more and more examples of dilution concerns where logos or other imagery are created in the style of a famous brand, blurring the distinctiveness of the brand’s trademark rights or tarnishing the brand’s reputation.8 Brand enforcement efforts to preserve trademark distinctiveness are critical, as the prevalence of look-alike AI-generated imagery can erode the strength of famous marks and impede the success of future dilution claims.
Genericness Risk: AI Can Turn Brand Names Into Common Terms
Brands must also consider the risk of AI accelerating genericness of a brand name. For example, if AI chatbots begin using a brand’s trademark as the generic word for a product category, over time, the trademark may lose its ability to signify the brand as the source of the goods or services and instead come to signify the goods or services generally. The very purpose of trademark rights is to allow marks to serve as source identifiers, thereby protecting both brands and consumers. However, when trademarks become generic, they no longer serve as source identifiers, and trademark rights are lost.
Practical Steps for Brand Protection in the AI Era
As generative AI becomes a common tool for creating look-alike marks and logos, the risk of infringement, dilution and genericness will only grow. Brand owners can’t afford to take a passive approach. Below are proactive steps companies should take to mitigate exposure and preserve the strength of their marks:

Monitor user-generated content and AI platforms. Brand owners should monitor user-generated content as well as AI platforms—including internal AI programs or AI interfaces designed for customer inquiries or interactions—for misuse and misappropriation of protected marks.
Submit takedown notices promptly. When misuse occurs, brands should act quickly to prevent consumer confusion or dilution of their marks.
Vet AI-generated branding before adoption. New entrants to a market should be careful when using AI to create branding. AI often draws from dominant market players, increasing the risk of mimicking a competitor’s branding. Utilizing a trademark clearance service or consulting a trademark professional will help mitigate this risk.
Review contracts with vendors and platforms. Brands should ensure their agreements prohibit unauthorized use of logos in AI-generated content.
Collaborate with developers on guardrails. Particularly where training data may include proprietary brand assets, brands should work with AI developers to restrict AI-generated outputs that pose risks of infringement, dilution or genericness.
Educate internal marketing teams. Marketing, creative and product teams should understand the legal risks of using AI tools for branding or content development to help avoid liability.

1 15 U.S.C. § 1114.
2 15 U.S.C. § 1125(c).
3 See, e.g., Jmfries, Disney Has Taken Notice of AI-Generated Pixar Posters, The University of British Columbia (Nov. 27, 2023), https://iplaw.allard.ubc.ca/2023/11/27/disney-has-taken-notice-of-ai-generated-pixar-posters/; Amid Amidi, Report: Disney Asked Microsoft To Prevent AI Users From Infringing Its Trademarks, Cartoon Brew (Nov. 18, 2023), https://www.cartoonbrew.com/tech/report-disney-asks-microsoft-to-prevent-ai-users-from-infringing-its-trademarks-235039.html.
4 Id.
5 Getty Images first filed a complaint in the District of Delaware. See Getty Images (US), Inc. v. Stability AI, Inc., No. 1:23-cv-00135-JLH (D. Del. 2023). It has since filed a notice of voluntary dismissal without prejudice after jurisdictional issues were raised in the case, and the United States District Court for the District of Delaware closed the case on August 18, 2025. On August 14, 2025, Getty Images filed another complaint in the Northern District of California. See Getty Images (US), Inc. v. Stability AI, Ltd. et al, No. 3:25-cv-06891-TLT (N.D. Cal. 2025).
6 Hermès v. Rothschild, No. 1:2022-cv-00384 (S.D.N.Y. 2023).
7 Hermès International v. Rothschild, No. 23-1081 (2d Cir. 2023).
8 This may become analogous to the copyright issues faced by photographers in the early days of the internet, but the effect is even more detrimental, considering the harm to the reputation caused by the dilutive activities.

California’s October State Law Updates – What Employers Need to Know

Throughout October 2025, California Governor Gavin Newsom signed multiple employment-related bills into law. These new measures address a wide range of workplace-related matters, including regulations aimed at the use of artificial intelligence, updates on paid leave, and amendments to mediation procedures. While some of these bills will be subject to legal challenges that delay or block their application, many took effect immediately or will become effective on January 1, 2026. Accordingly, California employers are encouraged to begin updating policies, training programs, and internal templates to ensure compliance with these requirements.

AB 692 prohibits contractual provisions purporting to require a departing employee to repay any debt to the employer.

AB 692, signed into law on October 13, 2025, makes it unlawful for California employers to include specified provisions in any employment contract entered into on or after January 1, 2026. Specifically, AB 692 prohibits any term that would require the worker to pay an employer, training provider, or debt collector for a debt if the worker’s employment or work relationship terminates. Exceptions exist for certain agreements, including those involving discretionary bonuses or relocation payments, provided they meet set criteria, including: (1) repayment terms must be in a separate agreement from the primary employment contract; (2) the worker must be advised of the right to consult an attorney and given at least 5 business days to do so before signing; (3) any repayment obligation for early separation must be prorated based on the remaining retention period (up to 2 years) and cannot accrue interest; (4) the worker must have the option to defer receipt of the payment until the end of the retention period without repayment obligation; and (5) repayment may only apply if the employee leaves voluntarily or is terminated for misconduct. The bill provides for penalties equal to the greater of a worker’s actual damages or up to $5,000 per worker, as well as injunctive relief and attorneys’ fees and costs.

AB 406 expands qualifying reasons for use of paid and unpaid leaves under state law.

AB 406 expands the permitted uses of California Paid Sick Leave under the Healthy Workplaces, Healthy Families Act of 2014 (“HWHFA”), effective October 1, 2025. AB 406 also updates the state’s unpaid, job-protected leave provisions to align with these expanded uses starting January 1, 2026. Under the new amendments, employees now may use paid sick leave, and certain unpaid leave, if the employee or a covered family member is a victim of certain crimes and needs to attend related judicial proceedings. Covered proceedings include, but are not limited to, delinquency hearings, bail or release determinations, plea or sentencing hearings, postconviction proceedings, and any hearing where the victim’s rights are at stake. For this purpose, a “victim” includes individuals harmed physically, psychologically, or financially as a result of violent felonies, serious felonies, or felony theft and embezzlement, whether actual or attempted.

SB 513 broadens definition of “personnel records.”

On October 11, 2025, SB 513 was signed into law, amending California’s definition of personnel records to include “education or training records.” The law requires employers who maintain education or training records to ensure the records include the following information: the name of the employee, the name of the training provider, the duration and date of the training, the core competencies of a training course, and the resulting certification or qualification.

SB 590 extends eligibility for state paid family leave benefits.

SB 590, which also was signed into law on October 13, 2025, expands eligibility for benefits under the paid family leave program to include individuals who take time off work to care for a seriously ill designated person. The law’s definition of a “designated person” will include any care recipient related by blood or whose association with the individual is the equivalent of a family relationship. SB 590 will take effect July 1, 2028.

SB 617 adds to the notice requirements under CalWARN.

SB 617 was signed into law on October 1, 2025, expanding the information covered employers must include in their written notice required under the state’s Worker Adjustment and Retraining Notification Act (“CalWARN”). The amendments to CalWARN require employers to notify employees whether the company plans to coordinate services through the local workforce development board, another entity, or not at all. Covered employers must also include in the notice the local workforce development board’s contact information and a description of its services, regardless of whether the company plans to coordinate services with the board, along with information about the statewide assistance program known as CalFresh, the CalFresh benefits helpline, and a link to the CalFresh internet website. CalWARN covers employers with 75 or more employees, including part-time employees, and requires 60 days’ advance notice for plant closures, layoffs of 50 or more employees, and relocations of at least 100 miles affecting any number of employees. SB 617 is slated to take effect on January 1, 2026.

California introduces new employer notice and training requirements related to law enforcement interactions at the workplace.

Under California’s new Workplace Know Your Rights Act, effective February 1, 2026, employers must provide employees a stand-alone written notice of workers’ rights when interacting with law enforcement at the workplace, in addition to providing notice to new hires thereafter. This notice builds on existing requirements to provide workers with notice of employee rights related to workers’ compensation, immigration agency inspections, immigration-related practices, and labor-related rights. The act also requires employers to notify an employee’s designated emergency contact if the employee is arrested or detained on their worksite and provide employees the opportunity to designate an emergency contact on or before March 30, 2026. The Labor Commissioner will develop and issue a template notice and videos for employers and employees related to the new law by July 1, 2026.

SB 303 introduces protections for an employee’s good faith participation in bias-mitigation training.

On October 1, 2025, Governor Newsom signed SB 303 into law, establishing that an employee’s assessment, testing, admission, or acknowledgment of their own personal bias, when made in good faith and solicited or required as part of a bias mitigation training, does not, by itself, constitute unlawful discrimination. Effective January 1, 2026, this law amends the California Fair Employment and Housing Act (“FEHA”), which requires employers to prevent workplace discrimination and harassment.

SB 19 criminalizes threats of mass violence against California workplaces.

On October 11, 2025, SB 19 was also signed into law, criminalizing threats of mass violence made against workplaces, as well as schools, houses of worship, and medical facilities. In addition to verbal threats of violence, the law covers images or threats posted online. While SB 19 is not aimed directly at employers, the new law provides a resource for California employers to protect employees from threats of violence.

SB 464 expands pay-data reporting requirements.

On October 13, 2025, Governor Newsom signed SB 464 into law, expanding the state’s pay-data reporting requirements for employers. Effective immediately, the law mandates that private employers with 100 or more employees collect and store demographic information used for pay-data reporting separately from employees’ personnel files and requires courts to impose civil penalties against employers who fail to file the required report upon request from the California Civil Rights Department (“CRD”). Previously, courts had discretion in imposing civil penalties. Effective January 1, 2027, SB 464 also increases the number of job categories in which pay bands must be reported from 10 to 23.

SB 642 amends equal pay requirements to clarify certain definitions and extend the statute of limitations for wage-related actions.

SB 642, which was enacted on October 8, 2025 and took effect immediately, introduced several amendments to California’s equal pay requirements. The amendments revise the definition of “pay scale” to mean the employer’s good faith estimate of the salary or hourly wage range that the employer reasonably expects to pay for the position. SB 642 also defines “sex,” “wages,” and “wage rates” for purposes of the equal pay requirements. The amendments also extend the time to bring a civil action to recover wages from two years to three years and provide that employees are entitled to obtain relief for the entire period of time in which a violation of its provisions exists, not to exceed six years.

SB 261 increases enforcement authority for wage judgments and mandates recovery of attorney fees.

SB 261, signed into law on October 13, 2025, expands the authority of the Division of Labor Standards Enforcement in wage‑claim matters, including actions to recover wages, penalties, and other demands for compensation. The law imposes new civil penalties, up to three times the outstanding judgment amount, on employers that fail to satisfy wage judgments within 180 days. Additionally, it mandates that prevailing employees (or the entity acting on their behalf) recover attorney fees and costs when enforcing such judgments.

SB 53 establishes broad AI regulations mandating standardized disclosure and transparency practices.

SB 53, also known as the Transparency in Frontier Artificial Intelligence Act, establishes several new requirements for certain “frontier” developers of artificial intelligence (“AI”) models. Effective January 1, 2026, SB 53 introduces comprehensive state-level requirements for developers of “frontier” AI models, meaning large-scale systems trained using massive computational resources. SB 53 applies to “large frontier developers,” defined as companies with annual revenues over $500 million that train foundation models using a quantity of computing power greater than 10^26 integer or floating-point operations. Covered developers are required to publish risk-mitigation frameworks, complete transparency reports before deploying models, and submit regular assessments to the California Office of Emergency Services. The law also mandates prompt reporting of critical safety incidents, including cybersecurity breaches or loss of model control with catastrophic potential. In addition, the bill introduces new whistleblower protections for AI safety professionals and authorizes the creation of a public cloud computing initiative, CalCompute, aimed at promoting equitable access to computing resources for safe and ethical AI development. While the law’s requirements are narrowly tailored to the most powerful AI systems, companies developing or deploying large-scale AI in California should closely monitor forthcoming guidance and begin evaluating their compliance readiness.

AB 250 extends statute of limitations for certain employment-related sexual assault claims.

On October 13, 2025, AB 250 became law, extending the timeframe during which certain sexual assault claims may be revived. Under the amended law, individuals may bring sexual assault claims, including derivative claims for wrongful termination and sexual harassment, among others, that would otherwise be barred prior to January 1, 2026 because the applicable statute of limitations has or had expired, by demonstrating that one or more entities legally responsible for damages engaged in a cover up. AB 250 defines a “cover up” as a “concerted effort to hide evidence relating to a sexual assault that incentivizes individuals to remain silent.” The bill permits any such claim to proceed if already pending in court on October 13, 2025, or, if not filed by that date, to be commenced between January 1, 2026, and December 31, 2027.

SB 477 amends the California Fair Employment and Housing Act to expand circumstances for tolling the statute of limitations and clarifies definition of a “group or class complaint.”

On October 3, 2025, SB 477 was enacted. Effective January 1, 2026, the law will expand the circumstances for tolling the statute of limitations for California individuals to file a civil lawsuit under the California Fair Employment and Housing Act when the individual appeals a decision from, or enters into an agreement with, the CRD. The law also clarifies the definition of a “group or class complaint” and requires the CRD to fully resolve all related proceedings before issuing a right-to-sue notice for group or class matters.

AB 1523 revises rules for court-ordered mediation and increases maximum amount in controversy for mandatory mediation.

Effective January 1, 2027, AB 1523 amends California’s court-ordered mediation rules to increase the maximum amount in controversy for mandatory mediation from $50,000 to $75,000. Under AB 1523, a court may only order mediation when the case is set for trial, at least one party has expressed interest in mediation, no ongoing discovery disputes exist, and parties are notified of their option to choose a mutually agreeable mediator. If parties cannot agree on a mediator within 15 days, the court will select one at no cost. The mediation must be completed at least 120 days before the trial date and can be conducted remotely if all parties agree. The mediation must conclude with either a mutually acceptable agreement or a statement of non-agreement, and the determination of the case’s value will be made without prejudice.

AB 1514 extends independent contractor exemptions for manicurists and commercial fishers.

On October 3, 2025, Governor Newsom signed AB 1514 into law, extending the application of the temporary exemptions from the “ABC” test for employment status for licensed manicurists and commercial fishers. California law requires a 3-part test, commonly known as the “ABC” test, to determine if a worker is an employee or independent contractor and exempts specified occupations and business relationships from the application of the test. These exemptions include licensed manicurists and commercial fishers working on an American vessel. AB 1514 deletes the inoperative date for the manicurist exemption, which had expired on January 1, 2025, and reapplies the exemption to certain licensed manicurists until January 1, 2029. AB 1514 also extends the inoperative date of the exemption for certain commercial fishers, which makes such workers eligible for unemployment insurance benefits subject to certain conditions, from January 1, 2026 to January 1, 2031.

SB 809 establishes the Construction Trucking Employer Amnesty Program relating to classification of construction drivers as independent contractors.

SB 809, the third bill signed into law by Governor Newsom on October 11, 2025, establishes the Construction Trucking Employer Amnesty Program (“CTEA Program”) and clarifies worker classification for certain construction trucking workers. Similar to California’s Motor Carrier Employer Amnesty Program, the CTEA Program allows eligible construction contractors to resolve misclassification claims involving construction drivers by entering into an agreement with the Labor Commissioner prior to January 1, 2029. These agreements must include certain elements, including, but not limited to, an agreement by the construction contractor to classify construction drivers as employees and to pay all wages, benefits, and taxes owed, if any. Separately, SB 809 also establishes that it is declarative of existing law that mere ownership of a vehicle used by a worker providing labor or services for remuneration does not render the individual an independent contractor. Finally, SB 809 establishes that it is declarative of existing law that Labor Code Section 2802, which requires reimbursement of necessary business-related expenses, applies to an employee’s use of a vehicle, including a personal or commercial vehicle, which the employee owns and uses to perform their duties.

SB 20 introduces worker protections related to high-exposure trigger tasks on artificial stone.

SB 20, known as the Silicosis Training, Outreach, and Prevention (STOP) Act, was signed into law by Governor Newsom on October 13, 2025. The STOP Act aims to enhance worker safety in the stone fabrication industry and address rising cases of silicosis among workers exposed to crystalline silica dust. The new law prohibits the use of dry cutting methods on artificial stone and requires employers to implement effective “wet” methods to suppress dust. Covered employers will also be required to train workers who perform high-exposure trigger tasks, as defined by the law, by July 1, 2026 and to submit a written attestation to the California Division of Occupational Safety & Health each year confirming that workers have received the mandatory training. The law also requires employers to report cases of silicosis to the state and provides that violations may result in fines or shutdown orders.

App Store Age Verification Laws – Your Questions, Answered.

The last several weeks have been eventful for online safety and age assurance, particularly with respect to U.S. app store age verification laws: Apple and Google unveiled some of their plans for addressing these laws on Oct. 8; Governor Newsom signed the Digital Age Assurance Act into law on Oct. 13; and on Oct. 16, an industry organization lodged a constitutional challenge against Texas’ law (SB 2420). Below, we provide a handy FAQ with questions and answers on issues that many likely have regarding these laws, the app stores’ guidance, and the legal challenge to the Texas law.
Mobile app operators: take note. Regardless of your company’s target audience, you will be required to take technical and operational steps to comply with these laws.

Which states have passed app store age verification legislation?

Four states – Texas, Utah, Louisiana, and California – have passed app store age verification laws.
The effective dates are:

Jan. 1, 2026 (Texas)
May 7, 2026 (Utah)
July 1, 2026 (Louisiana)
Jan. 1, 2027 (California)

What types of organizations are covered?

App stores (TX, LA, UT) and operating system providers (CA) include Google, Apple, and other app store operators.
A developer, as defined in the California law, refers to a person that owns, operates, or maintains a mobile app. Developer is used but not defined in the other states’ laws.

What are the app stores’ age verification obligations?

Texas, Utah, and Louisiana’s laws all require app stores to “use a commercially reasonable method” to verify an individual’s age and assign the individual to one of the following age categories:

Under 13 (“child”)
At least 13 and under 16 (“younger teenager”)
At least 16 and under 18 (“older teenager”)
At least 18 (“adult”)

Those laws therefore open up the possibility of methods beyond self-declared age (e.g., an age gate).
California’s law requires app stores to provide an accessible interface at account setup that requires an accountholder to indicate the birth date, age, or both, of the user of that device, and to categorize the user into age categories that are identical to the above categories (though all users under 18 are referred to as a “child”). California’s law, therefore, effectively only requires an age gate.
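For readers who want to see how these buckets map onto concrete account data, the following is a minimal sketch in TypeScript. The function name and date arithmetic are our own illustrative choices (nothing in the laws or the app stores’ guidance prescribes a particular computation), and note that California labels every user under 18 a “child” rather than using the three minor buckets shown here.

```typescript
// Illustrative only: the laws define the age categories but not any particular
// computation or API; the function name and date handling here are assumptions.
type AgeCategory = "child" | "younger teenager" | "older teenager" | "adult";

function ageCategoryFromBirthDate(birthDate: Date, asOf: Date = new Date()): AgeCategory {
  // Compute age in whole years as of the reference date.
  let age = asOf.getFullYear() - birthDate.getFullYear();
  const hadBirthdayThisYear =
    asOf.getMonth() > birthDate.getMonth() ||
    (asOf.getMonth() === birthDate.getMonth() && asOf.getDate() >= birthDate.getDate());
  if (!hadBirthdayThisYear) {
    age -= 1;
  }

  if (age < 13) return "child";            // under 13
  if (age < 16) return "younger teenager"; // at least 13 and under 16
  if (age < 18) return "older teenager";   // at least 16 and under 18
  return "adult";                          // 18 and over
}

// Example: a user born June 1, 2011 falls in the "younger teenager" bucket as of January 1, 2026.
console.log(ageCategoryFromBirthDate(new Date(2011, 5, 1), new Date(2026, 0, 1)));
```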

Who do the laws contemplate will be verifying a minor user’s age to the app stores?

Texas, Utah, Louisiana: The individual who creates the app store account, which may be the minors themselves, or potentially parents. Apple’s guidance confirms this approach. 
California: The parent. The law requires the app stores to provide an interface to the “account holder,” which is an individual over 18 or the parent or guardian of an individual under 18. It seems that the app stores will need to take a different approach than the one currently contemplated for Texas’ law in order to comply with California’s law.

What are the app stores’ obligations regarding parent accounts?

The non-California laws require app stores to associate each minor account with a parent account.
There is not an explicit requirement to do so in California. However, it does, in effect, require association of a minor account with an adult account. “Account holder” means “an individual who is at least 18 years of age or a parent or legal guardian of a user who is under 18 years of age in the state,” and age verification must be carried out by an “account holder.”

What are the app stores’ parental consent obligations under the Texas, Louisiana, and Utah laws?

For minor accounts, Texas, Louisiana, and Utah will require app stores to obtain parental consent for each and every (1) app download, (2) app purchase, and (3) in-app purchase*. One-time and other bundled consents are not permitted.
App stores will also have consent requirements when an app developer notifies the app store of a “significant change” (see discussion below), i.e., app stores must obtain fresh parental consent for each affected minor account.
*As to the scope of in-app purchases that would be impacted, Apple has clarified that the consent requirement applies only to purchases made using Apple’s In-App Purchase system—such as subscriptions or digital content. Purchases of physical goods (e.g., ordering food through a delivery app) are not covered. Google has not yet provided similar clarification.

What are app stores’ parental consent obligations under the California law?

None.

What are developers’ age assurance obligations under the Texas, Louisiana, and Utah laws?

Developers must verify, using the app stores’ data sharing methods (e.g., APIs, as discussed in the app stores’ guidance), (i) the age category of users and (ii) for minor accounts, whether parental consent has been obtained.
Louisiana also requires developers to obtain parental consent for app downloads, purchases, and in-app purchases. It is unclear how this would work in practice, such as if developers will have to build their own consent interface or whether the app store-provided consent flow will suffice.
The Texas law will require app developers to assign each app and each in-app purchase an age rating pursuant to the age categories discussed above.

What are developers’ age assurance obligations under the California law?

Developers must:

Request an age range signal with respect to a particular user when an app is downloaded and launched.
Apply the age range received “across all platforms of the [app] and points of access to the [app].”
Use the age range signal to comply with applicable law.

Is actual knowledge of age imputed to a developer through receipt of age information from app stores?

Texas, Louisiana, and Utah: Yes, implicitly.
California: Yes, explicitly.
With actual knowledge of users’ age being thrust upon developers, developers – in particular, those that do not independently carry out age assurance – will be forced to address obligations and restrictions under the Children’s Online Privacy Protection Act (COPPA), state consumer privacy laws that regulate children’s and teens’ personal data, and online safety laws that impose obligations and restrictions based on users’ ages.
By way of example, many developers that obtain actual knowledge of users under 13 from the app store will need to restrict ongoing access to their service by such users and delete such users’ personal information (if they process personal information for more than the narrow permitted internal operational purposes) in order to remain compliant with COPPA. Of course, there may be developers in this situation that have already otherwise obtained verifiable parental consent or are in the small minority of services (such as social media and gaming platforms) in which they are able to transition users to an age-appropriate experience (though, the COPPA deletion requirement would still apply). By way of another example, developers that obtain actual knowledge of users at least 13 but younger than 16 in California would have to apply age-related restrictions from the CCPA to such users, such as needing the users to opt in to sale and sharing, rather than only offering an opt-out right.
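To make that branching concrete, here is a small, purely illustrative TypeScript sketch of how a developer might route users once an age category is treated as actual knowledge. The signal shape and helper functions are hypothetical stand-ins, not the app stores’ actual APIs, and the right compliance steps in practice depend on counsel’s analysis of COPPA, the CCPA and other applicable law.

```typescript
// Illustrative sketch only. The "AgeSignal" shape and the helpers below are hypothetical;
// the actual signal format depends on the app stores' forthcoming APIs.
interface AgeSignal {
  category: "child" | "younger teenager" | "older teenager" | "adult";
  parentalConsentObtained: boolean; // relevant under the Texas, Louisiana and Utah laws
}

function applyAgeBasedControls(userId: string, signal: AgeSignal): void {
  switch (signal.category) {
    case "child": // under 13: COPPA obligations
      if (!signal.parentalConsentObtained) {
        restrictAccess(userId);            // e.g., block or limit the under-13 experience
        deletePersonalInformation(userId); // e.g., honor COPPA deletion expectations
      }
      break;
    case "younger teenager": // 13-15: e.g., CCPA opt-in to sale/sharing in California
      disableSaleAndSharing(userId); // default off until the teen affirmatively opts in
      break;
    default:
      break; // older teenagers and adults: apply the service's standard settings
  }
}

// Hypothetical stand-ins for the developer's own systems.
function restrictAccess(userId: string): void {
  console.log(`Restricting access for user ${userId}`);
}
function deletePersonalInformation(userId: string): void {
  console.log(`Deleting personal information for user ${userId}`);
}
function disableSaleAndSharing(userId: string): void {
  console.log(`Disabling sale/sharing of personal information for user ${userId}`);
}
```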

How do the laws address conflicts in age information possessed by developers and received from app stores?

Texas, Louisiana, and Utah: Each law provides a safe harbor based on “good faith” reliance on age and consent information from app stores.
California: The law provides that “a developer shall treat a signal received pursuant to this title as the primary indicator of a user’s age range for purposes of determining the user’s age.” However, it further provides that a “developer must not willfully disregard internal clear and convincing information otherwise available to the developer that indicates that a user’s age is different than age range received from app store.”

If we do not want to have minors download or purchase our app, can we prevent them from doing so?

It is not clear, though it seems unlikely, that developers will be able to prevent minors from downloading their apps if a parent has provided consent. This is because the age verification and consent requirements extend to all apps. App developers will, therefore, likely be unable to prevent the app stores from requesting such consent (except perhaps in the event that the content rating of the app is more mature than the child user’s age range).

Can parental consent be revoked?

Yes, it can be revoked. Under the Texas, Louisiana, and Utah laws, app stores must notify each developer upon revocation of parental consent. The Google guidance seems to contemplate that revocation of consent will be possible on a per-app basis.

How will app developers address revoked consent?

Certainly, restricting in-app purchases when a parent refuses consent will easily be accomplished by the app stores.
However, there are no details in the laws regarding what steps the app stores and developers must take with respect to a minor’s use of already downloaded apps, i.e., there is no obligation in these laws to prevent the use of the app by a minor whose parent revoked consent. To our knowledge, neither app stores nor developers have the ability to remove downloaded apps from a device (and that is not required of them by these laws).
The app stores are working on mechanisms to notify developers when a parent revokes consent for a minor’s ongoing use of an app. The app stores’ guidance provides some details in this regard. Google has stated that developers will “get a report in Play Console showing when a parent revokes approval for your app.” Apple’s press release states that “parents will be able to revoke consent for a minor continuing to use an app.” Both have alluded to further details in technical documentation later this year. Developers will need to monitor any guidance provided by regulators as well as the app stores on this issue and will need to utilize existing and potentially new features provided by the app stores to disable use of their app by minors whose parents have revoked consent.

How do the laws restrict developers from enforcing contracts against minors?

Under the non-California laws, a developer may not enforce a contract or terms of service agreement against a minor unless the developer has obtained verifiable parental consent. In Utah and Louisiana, the developer must verify through the app store that verifiable parental consent has been obtained.

Is it true that re-consent will be required if an app makes a “significant change”?

Yes, as mentioned above, the non-California laws require app stores, upon being notified of a significant change by an app developer, to obtain fresh parental consent for all applicable minor accounts.
Under the non-California laws, developers must provide notice to the app stores before making any “significant change” to an app. A change is “significant” if it:

(1) changes the type or category of personal data collected, stored, or shared by the developer;
(2) affects or changes the rating assigned to the app or content elements that led to that rating;
(3) adds new monetization features to the app, including new opportunities to make a purchase in or using the app; or new ads in the app; or
(4) materially changes the functionality or user experience of the app.

There is no equivalent requirement under the California law.

Do the laws impose obligations only as to new app store accountholders/ users?

Texas, Utah, and Louisiana: Yes. The laws only apply to new app store accounts.
California: Initially, yes; the law provides a six-month grace period for both app stores and developers to comply with the law as to existing accountholders and users.

How do the laws restrict a developer’s use of personal data received from an app store?

Under the Texas and Utah laws, a developer may only use personal data provided by app stores to:

(1) enforce age-related restrictions on the app;
(2) ensure compliance with applicable laws and regulations; and
(3) implement safety-related features and default settings on the app.

The Texas law requires developers to delete personal data provided by app stores upon performing the required age verification.
All four states prohibit sharing such personal data for a purpose not required by these laws. Utah and Louisiana explicitly prohibit sharing age category data with any person.

Which app stores have released guidance addressing these laws?

Both Apple and Google have released guidance. Apple’s guidance mentions only the Texas law, while Google’s mentions Texas, Louisiana, and Utah. The app stores are developing the aforementioned technical features to enable their and app developers’ compliance, namely APIs that enable developers to receive users’ age information and consent status, as well as to report significant changes to an app, and permit parents to revoke consent for a minor’s use of an app. As we understand it, these tools and features are currently under development and subject to change. The app stores’ documentation and press releases should be consulted often to ensure that you and your technical teams are relying on the most up-to-date information.

What happens if my company does not take the actions required by the app stores?

If a developer fails to integrate with the app stores’ provided technical measures, it is likely that app store accountholders who are verified minors (in the states where the laws are in place) will not be able to download the developer’s app(s), and in-app purchase flows will be blocked for under-18 accounts.
In addition, developers that do not implement the app stores’ technical measures will likely be out of compliance with these state laws.

How will these laws be enforced, and what are the penalties for non-compliance?

Violations of the Texas and Utah laws (in the case of Utah, a specific sub-section) are considered deceptive trade practices under their respective UDAAP laws.
Texas’ law is enforced by the consumer protection division of the attorney general’s office; injunctive relief and up to $10,000 per violation in penalties are available.
In addition, Utah’s law provides for multiple avenues of a private right of action with statutory damages:

First, a violation of Subsection 13-75-202(4)(b) (restricting developers from knowingly misrepresenting any information in the parental consent disclosure) constitutes a deceptive trade practice under Subsection 13-11a-3 of Utah’s UDAAP law. Pursuant to Subsection 13-11a-4, “any person or the state may bring an action” for injunctive relief and, if injured, damages in the amount of the actual damages or $2,000, whichever is greater.
Second, a harmed minor (or parent) may bring a civil action against an app store or developer for a violation of the law for actual damages or $1,000 per violation, whichever is greater, along with reasonable attorneys’ fees and litigation costs. The private right of action has limited application; in the case of developers, it only applies to violation of Subsection 13-75-202(4), which provides that:

A developer may not: (a) enforce a contract or terms of service against a minor unless the developer has verified through the app store provider that verifiable parental consent has been obtained; (b) knowingly misrepresent any information in the parental consent disclosure; or (c) share age category data with any person.

In Louisiana and California, the attorney general may bring a civil action to enforce violations of the law.

Louisiana: Covered app stores or developers found to violate the law may be subject to injunctive relief and/or a fine of up to $10,000 per violation following a 45-day curing period.
California: Violations are subject to an injunction or civil penalties of up to $2,500 per affected child for each negligent violation, and up to $7,500 per affected child for each intentional violation.

Are any of these laws being challenged?

Yes. As of Oct. 16, the Texas law is being challenged by the Computer and Communications Industry Association on constitutional grounds. It is unclear whether enforcement of the law will be stayed pending resolution of the challenge. In the event of a stay, it is unclear whether app stores will still require companies to implement the age verification and consent measures, though it seems unlikely. Developers should prepare to integrate with the app stores’ technical measures by Jan. 1, 2026, but should also continue monitoring the status of the legal challenge and the app stores’ plans in the absence of a stay of enforcement.

Anti-Circumvention – Reddit’s Case Against Perplexity

On October 22, 2025, Reddit, Inc. filed a federal lawsuit in the Southern District of New York against Perplexity AI, Inc. and associated data-scraping firms, alleging violations of the Digital Millennium Copyright Act’s anti-circumvention provisions (17 U.S.C. §1201), along with related claims for unjust enrichment and unfair competition. Building on the strategy advanced in its earlier Anthropic suit, Reddit frames the dispute not as a garden-variety copyright infringement case but as a §1201 anti-circumvention suit targeting what it calls “industrial-scale” evasion of technical controls used to harvest Reddit content through Google’s search results. According to the complaint, the defendants allegedly masked identities, rotated IP addresses, and bypassed access controls to scrape billions of Google search-engine results pages (“SERPs”) containing Reddit URLs, text, images, and videos—data that Reddit claims Perplexity subsequently incorporated into its “answer engine.”
There are two notable factual allegations that distinguish this case. First, Reddit alleges that it created a post that was indexable by Google’s search engine but not otherwise accessible, and that Perplexity’s answer engine surfaced material portions of that post within hours, which Reddit contends demonstrates that Perplexity (directly or through the co-defendants) scraped Google’s results and ingested the data. Second, after Reddit issued a May 2024 cease-and-desist, Perplexity’s citations to Reddit purportedly increased forty-fold, despite public statements that it respects robots.txt directives.
Rather than challenging how Perplexity ultimately used the copyrighted materials, Reddit’s §1201(a)(1) claim shifts the focus upstream to the act of bypassing technological measures that control access to copyrighted works, whether or not the ultimate use might otherwise be defensible. The complaint also highlights the allegedly unlawful conduct of the data-scraping co-defendants, asserting that Perplexity collaborated with them to facilitate “industrial-scale” circumvention of Reddit’s and Google’s access controls. Reddit pleads contributory and assisting circumvention theories together with unjust-enrichment and unfair-competition claims, seeking both injunctive relief and damages for business harm.
Reddit situates Perplexity within a broader ecosystem of data brokers and scrapers and contrasts Perplexity’s approach with Reddit’s paid license partnerships with OpenAI and Google, positioning this suit as a defense of a licensing model Reddit asserts Perplexity’s competitors are honoring. Reddit alleges that Perplexity’s practices not only diminish the value of existing licensing arrangements, but also divert user engagement away from Reddit, removing the need to access Reddit content directly via its website, mobile app, or services and thereby impairing the platform’s commercial utility. The complaint further asserts that by bypassing Reddit’s technical controls, the scraping captured deleted, private, or otherwise restricted posts, preventing Reddit from honoring users’ deletion requests and privacy preferences. This, Reddit alleges, jeopardizes users’ ability to protect their privacy interests and control access to their content, ultimately undermining Reddit’s ability to maintain user trust and engagement.
We expect threshold disputes in this case, the resolution of which will turn on (i) whether Google’s and Reddit’s measures qualify as §1201 “technological measures” that effectively control access, (ii) whether scraping SERPs to obtain Reddit content constitutes access to the underlying copyrighted works, and (iii) Perplexity’s knowledge and intent and the role of the intermediary data-scraping firms. Perplexity may emphasize the public availability of SERP snippets and argue that any limits regulate automated volume rather than access to protected expression.
Reddit v. Perplexity illustrates how platforms are increasingly turning to access-control and contract-based theories where direct copyright claims are unavailable or uncertain. For content owners and rightsholders, the case highlights the importance of pairing technical barriers (e.g., API gating, rate-limiting, robots.txt) with contractual enforcement and auditable logs to promptly detect and substantiate circumvention claims. For AI developers, it reinforces that publicly available content is not necessarily “free for training” if obtained through methods that evade access restrictions, an increasingly risky strategy under §1201(a)(1) of the DMCA.
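For crawler operators, one baseline (though by no means sufficient) compliance step is to check robots.txt before fetching a page. The minimal Python sketch below uses the standard library’s urllib.robotparser to perform that check; the crawler user-agent name and URLs are placeholders, and honoring robots.txt says nothing about rate limits, API gating, or contractual restrictions.

```python
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleAIBot"  # illustrative crawler name, not a real product

def is_fetch_allowed(robots_url: str, target_url: str) -> bool:
    """Check whether the site's robots.txt permits this user agent to fetch target_url."""
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # downloads and parses the site's robots.txt
    return parser.can_fetch(USER_AGENT, target_url)

if __name__ == "__main__":
    allowed = is_fetch_allowed(
        "https://www.example.com/robots.txt",
        "https://www.example.com/some/page",
    )
    print("fetch allowed:", allowed)
```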

Novel AI Laws Target Companion AI and Mental Health

Imagine going online to chat with someone and finding an account with a profile photo, a description of where the person lives, and a job title . . . indicating she is a therapist. You begin chatting and discuss the highs and lows of your day among other intimate details about your life because the conversation flows easily. Only the “person” with whom you are chatting is not a person at all; it is a “companion AI.”
Recent statistics indicate a dramatic rise in adoption of these companion AI chatbots, with 88% year-over-year growth, over $120 million in annual revenue, and 337 active apps (including 128 launched in 2025 alone). Adoption among youth is pervasive: statistics indicate that three of every four teens have used companion AI at least once, and roughly half use it routinely. In response to these trends, and to the potential negative impacts on mental health in particular, state legislatures are quickly stepping in to require transparency, safety and accountability to manage the risks associated with this new technology, particularly as it pertains to children.
As we noted in our October 7 blog on the subject, state legislatures are moving quickly to find solutions to the disturbing mental health issues arising from use of this technology—even as the federal push for innovation threatens to displace state AI regulation, as we reported in July. For example, New York’s S. 3008, Artificial Intelligence Companion Models, effective November 5, 2025, was one of the first laws addressing these issues. It mandates a protocol for identifying suicidal ideation, and requires notifications at the beginning of every interaction, and every three hours thereafter, that the companion AI is not human. California’s recent SB 243, effective July 1, 2027, adopts provisions similar to New York’s law.
California has emerged as one of the leaders, if not the bellwether, of state AI regulations impacting virtually every private sector industry, as it seeks to impose accountability and standards to ensure the transparent, safe design and deployment of AI systems. Indeed, SB 243 is one of several laws that California Governor Gavin Newsom signed in October 2025 that relate to the protection of children online. Spurred by concern that minors, in particular, have harmed themselves or others after becoming addicted to AI chatbots, these laws seek to prevent “AI psychosis,” a popular term, if not yet a medical diagnosis. Like New York’s S. 3008, California’s SB 243 imposes requirements on developers of companion AI to take steps designed to reduce adverse effects on users’ mental health. Unlike New York’s S. 3008, however, it authorizes a private right of action for persons suffering an injury as a result of noncompliance. Penalties include a fine of up to $1,000 per violation, as well as attorney’s fees and costs.
SB 243 does not impose requirements on providers or others engaged in the provision of mental health care. By contrast, as we previously noted, California’s AB 489, signed into law on October 11, does regulate the provision of mental health care via companion AI. It expands the application of existing laws relating to unlicensed health care professionals to entities developing or deploying AI. AB 489 prohibits a “person or entity who develops or deploys [an AI] system or device” from stating or implying that the AI output is provided by a licensed health care professional. We further examine California’s new laws, SB 243 and AB 489, below.
SB 243’s Disclosure and Protocol Requirements
SB 243 adds a new Chapter 22.6, Sections 22601 to 22605, to Division 8 of the Business and Professions Code, imposing requirements on “operators,” meaning deployers, or persons “who [make] a companion chatbot platform available to a user in the state[.]” Operators must maintain a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, including through crisis service provider referrals, and must publish the details of that protocol on their websites.
If the user is an adult, the operator must:

issue a clear and conspicuous notification indicating that the chatbot is AI and not human, if a reasonable person would be misled into believing that they are interacting with a human.

If the deployer or operator “knows [the user] is a minor,” it must:

disclose that users are interacting with AI;
provide a clear and conspicuous notification every three hours that the user should take a break and that the chatbot is AI and not human; and
institute reasonable measures to prevent the chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.

The law is silent as to how the deployer or operator should ascertain whether the user is an adult or minor. 
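To make the statute’s cadence concrete, here is a minimal sketch, assuming a hypothetical per-session state object, of how an operator might track when the three-hour break reminder for a known minor is due; the class and method names are illustrative, and the statute does not specify any particular implementation.

```python
from datetime import datetime, timedelta
from typing import Optional

DISCLOSURE_INTERVAL = timedelta(hours=3)
BREAK_REMINDER = (
    "Reminder: you are chatting with an AI companion, not a human. "
    "Please consider taking a break."
)

class MinorSession:
    """Hypothetical session state for a user the operator knows is a minor."""

    def __init__(self) -> None:
        self.last_disclosure: Optional[datetime] = None

    def maybe_disclose(self, now: datetime) -> Optional[str]:
        """Return the break reminder if three hours have elapsed, else None."""
        due = (
            self.last_disclosure is None
            or now - self.last_disclosure >= DISCLOSURE_INTERVAL
        )
        if due:
            self.last_disclosure = now
            return BREAK_REMINDER
        return None

# Example: the first message triggers the disclosure; a message an hour later does not.
session = MinorSession()
assert session.maybe_disclose(datetime(2026, 1, 1, 9, 0)) == BREAK_REMINDER
assert session.maybe_disclose(datetime(2026, 1, 1, 10, 0)) is None
```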
SB 243’s Reporting Requirements
SB 243 requires operators to annually report, beginning July 1, 2027, to the Office of Suicide Prevention (“OSP”) of the California Department of Public Health, the following information:

the number of times the operator issued a crisis service provider referral notification (described above) in the preceding calendar year;
protocols put in place to detect, remove, and respond to instances of suicidal ideation by users; and
protocols put into place to prohibit a companion chatbot response about suicidal ideation or actions with the user (using evidence-based methods to measure suicidal ideation).

The OSP is then required to post data from the reports on its website.
AB 489’s Requirements Addressing Impersonation of a Licensed Professional
AB 489 adds a new Chapter 15.5 to Division 2 of the Business and Professions Code to provide that prohibited terms, letters, or phrases that misleadingly indicate or imply possession of a license or certificate to practice a health care profession—terms, letters, and phrases that are already prohibited by, for example, the state Medical Practice Act or the Dental Practice Act—are also prohibited for developers and deployers of AI or generative AI (GenAI) systems. (Note that AI and GenAI are already defined in Section 11549.64 of California’s Government Code.)
The law prohibits use of a term, letter, or phrase in the advertising or functionality of an AI or GenAI system, program, device, or similar technology that indicates or implies that the care, advice, reports, or assessments offered through the AI or GenAI technology are being provided by a natural person in possession of the appropriate license or certificate to practice as a health care professional. Each use is considered a separate violation.
Enforcement Under SB 243 and AB 489
SB 243 provides that a successful plaintiff bringing a civil action under the law may recover injunctive relief and damages equal to the greater of actual damages or $1,000 per violation, as well as reasonable attorney’s fees and costs. AB 489, by contrast, subjects developers and deployers to the jurisdiction of “the appropriate health care professional licensing board or enforcement agency.”
As a result of these novel laws, developers and deployers should consult legal counsel to understand these new requirements and develop appropriate compliance mechanisms.
Other Bills that Address Impersonation and Disclosure
California has approved multiple bills relating to AI in recent weeks. As indicated by the governor’s October 13 press release—which lists 16 signed laws—many are aimed at protecting children online. While this blog focuses on laws related to mental health chatbots, these laws do not exist in a vacuum. California and other states are becoming more serious about regulating AI from a transparency, safety, and accountability perspective. Further, existing federal laws and regulations administered by the U.S. Food and Drug Administration (which apply to digital therapeutic products and others), the Federal Trade Commission, and other agencies may also regulate certain AI chatbots, depending upon how they are positioned and what claims are made regarding their operation, benefits, and risks.

Inside the Exclusive: Sorting Out Multistate Compliance Amid Shifting Federal Priorities [Podcast]

In this podcast recorded at our recent Corporate Labor and Employment Counsel Exclusive® seminar, Dee Anna Hays (shareholder, Tampa) and Sarah Kuehnel (shareholder, Tampa/St. Louis) discuss the increasingly complex challenge of complying with a multitude of varying state laws in an era of significant changes in federal policies. Sarah and Dee Anna (who is co-chair of the firm’s Multistate Advice and Counseling Practice Group) explore the implications of key federal changes on state-level regulations and the heightened need for employers to adapt to various state laws on issues like wage and hour requirements, mandatory leave programs, noncompete agreements, workplace safety issues, and anti-discrimination protections. They also discuss time-saving methods in-house counsel can employ to maintain and monitor compliance, including leveraging technology and automation to promote consistency across multistate operations.

Two Notable Tech Law Decisions That Closed Out the Summer: CDA Immunity Protections for a Software Platform, CFAA “Authorized Access” Issues, Passwords as Trade Secrets

In the closing days of August, two federal appeals courts issued noteworthy decisions at the intersection of workplace conduct, computer law and online platforms. The two opinions were released this past summer amid the continuing flurry of AI-related case developments and perhaps did not receive wide media attention, but they may prove to be important cases in the future.

Second Circuit – CDA Section 230. The court ruled that a software platform was not entitled to CDA Section 230 immunity, at least at this early stage in the case, based on allegations that it actively contributed to the unlawful software content at issue by manufacturing and distributing an emissions-control “defeat device.” (U.S. v. EZ Lynk, SEZC, No. 24-2386 (2d Cir. Aug. 20, 2025)). The opinion’s discussion of what it means to be a “developer” of content has implications for future litigation involving generative AI, app stores, marketplaces, and IoT ecosystems, where certain fact patterns could blur the line between passive hosting and active co-development.
Third Circuit – CFAA and Trade Secrets: Days later, the Third Circuit issued an important decision (subsequently amended, with minor changes that did not alter the holding) that further develops CFAA case law post-Van Buren. The court held that the CFAA, an anti-hacking statute, does not extend liability to workplace computer use violations. (NRA Group, LLC v. Durenleau, No. 24-1123 (3d Cir. Aug. 26, 2025) (vacated by Oct. 7, 2025 amended opinion), reh’g en banc denied (Oct. 7, 2025)). The court also addressed and rejected a novel claim of trade secret misappropriation based on access to account passwords.

Together, the cases show how courts continue to interpret the reach of technology-related statutes in contexts never contemplated when those laws were first enacted.
Second Circuit – CDA Section 230 Immunity Denied for Software Platform
The Second Circuit EZ Lynk case centered on whether a platform that connects vehicles to cloud-based diagnostic and customization software could be held liable under Section 203 of the Clean Air Act, 42 U.S.C. § 7522(a)(3)(B), which prohibits the manufacture and sale of devices used to defeat vehicle emissions controls. The government argued that the EZ Lynk System, which consists of an electronic device, a mobile app and third-party software (or “delete tunes”), was an illegal “defeat device” because it enabled car owners to download and install delete tunes that disable manufacturer-installed emissions controls. EZ Lynk countered that its system was a neutral tool that, by itself, has no effect on emissions controls, and that it should therefore be shielded from liability by CDA Section 230 because it merely hosted the third-party software at issue.
In March 2024, the lower court dismissed the main count of the government’s case on CDA grounds, reasoning that even if the EZ Lynk System was a defeat device, EZ Lynk was only acting as a publisher of third-party content. The lower court concluded that EZ Lynk’s alleged collaboration with delete tune creators, and its employees’ social media interactions with users to assist in installation and use, did not amount to “material contributions” that would defeat Section 230 immunity.
The Second Circuit reversed. It found the complaint adequately alleged that EZ Lynk “directly and materially contributed to” the creation of delete tunes and may not have acted as a neutral intermediary. Among other things, the court pointed to allegations that EZ Lynk worked closely with major “delete tune” creators (e.g., previewing devices with them before launch and ensuring compatibility) and administered a social media forum where its employees and partners advised customers on using delete tunes. At this early stage, the court held such allegations were sufficient to defeat EZ Lynk’s CDA Section 230 defense as it may have been an “information content provider” in part.[1]
The decision reaffirms that Section 230 immunity may not apply where a platform “directly and materially contributed to the underlying illegal conduct.” Although the context of this government enforcement was a novel one for interpreting CDA immunity, the reasoning may resonate in other settings, including software platforms that promote and directly assist app developers with unlawful functions or modifications (e.g., for IoT devices) and marketplaces that facilitate illegal product use, raising the risk of being treated as a co-developer of unlawful content.
Third Circuit – CFAA and Trade Secret Claims Against Employees
In NRA Group, the company argued that two employees violated the CFAA when one of them, while home sick, asked a colleague to log into her work computer to retrieve a spreadsheet of system passwords to help her remotely access a work document, all in violation of workplace computer policies. 
CFAA Issue
The Third Circuit held that the employees’ conduct did not violate the CFAA because (1) the statute targets “hacking,” or code-based unauthorized access, not workplace policy violations by current employees, and (2) both employees were authorized users of the employer’s computer systems; even though the employees may have violated computer use policies (e.g., sharing credentials, emailing passwords), the court found they acted within their granted access rights. The Third Circuit affirmed dismissal of the company’s action against the employees. [Note: This holding is reminiscent of a prior Ninth Circuit decision rejecting CFAA liability against an employee who emailed internal documents to himself after being given credentials to do so by a colleague.]
Applying the Supreme Court’s Van Buren decision, the Third Circuit held that the CFAA’s “exceeds authorized access” provision covers those who obtain information from computer networks or databases to which their computer access does not extend. As such, the court stated that “absent evidence of code-based hacking, the CFAA does not countenance claims premised on a breach of workplace computer-use policies by current employees.” In the Van Buren decision’s most cited metaphor, the Supreme Court characterized the CFAA “authorization” scheme as a “gates-up-or-down” approach where the CFAA prohibits accessing data one is not authorized to access. Under this understanding, one either can or cannot access a computer system, and one either can or cannot access certain areas within the system, as some areas are fully “off limits.” Following this rationale, the Third Circuit held: “Under Van Buren, the ‘gates’ of access were ‘up’ for both women—neither hacked into NRA’s systems. […] No one hacked anything by deploying code to enter a part of NRA’s systems to which they had no access.” 
The Van Buren decision continues to shape CFAA litigation beyond the employment context. Its reasoning has featured prominently in disputes over web scraping (e.g., in this closely-watched litigation) where courts must decide whether a website’s “authorization gates” are open or closed to scrapers and whether technical measures suffice to close those gates. 
Trade Secret – Passwords Issue
In an issue we don’t ever recall seeing in recent years – even the court found caselaw on this point was “thin and undeveloped” – the Third Circuit also considered the company’s trade secret claim based on the allegation that the creation and emailing of the password spreadsheet at issue constituted trade secret misappropriation. The court rejected the claim, finding that the passwords themselves are “letters and numbers” and are not protectable trade secrets because they lack independent economic value apart from what they protect. Under general law, trade secrets must have independent economic value, and while the passwords were a compilation of data, they were not bundled with other, presumably protectable information like raw customer information or pricing strategies. Unlike a proprietary formula or customer list, the value of a password lies only in its role as a barrier, one that can be eliminated simply by changing it.

[1] In pertinent part, Section 230(c) states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 47 U.S.C. § 230(c)(1). The Complaint alleges the EZ Lynk Cloud is a platform on which people exchange information in the form of software. As a side note, the appeals court noted that it was not ruling on whether software is “information” under Section 230 – in most cases, “information” typically pertains to content, in many forms. Though, it did cite other decisions that found that software could be “information provided by another content provider,” including one decision where an app store was protected by CDA immunity for losses from a fraudulent crypto wallet app (a ruling that was later affirmed by the Second Circuit).

ROBOCALL RISE: Americans Endure Over 4BB Robocalls in September as Washington’s Efforts Fall Flat

The 2025 robocall spike continues, with Americans enduring over 4BB monthly robocalls in September, a trend that has now run for 20 consecutive months.
Per YouMail’s robocall index, year to date Americans have received over 40,765,000,000 robocalls. That’s nearly 2 billion MORE robocalls than they received through the same period last year.
This, even though the Czar handed Washington the blueprint to prevent robocalls two years ago!
Some good progress has been made recently. The FCC’s new NPRM focusing on foreign-originated calls is a real bright spot, as is the new Foreign Robocall Elimination Act (I wonder where they got that idea), hahah.
But more is needed.
TCPAWorld readers know I am running for Congress and my NUMBER ONE PRIORITY is STOPPING UNWANTED ROBOCALLS.
Nobody knows more about it than I do.
And I will go get it done.
Chat soon!

UK Government Unveils New AI Sandbox to Accelerate AI Innovation

On October 21, 2025, at the Times Tech Summit, the UK Technology Secretary announced plans to introduce an AI Growth Lab in the UK, consisting of artificial intelligence (“AI”) sandboxes where companies and innovators can test new AI products in real-world conditions. The sandboxes will initially target key sectors such as healthcare, professional services, transport, and advanced manufacturing. The goal is to accelerate the responsible development and deployment of AI in the UK, enabling faster and safer innovation without compromising public safety or regulatory standards. The AI Growth Lab marks one of the UK’s first ventures into regulatory sandboxes specifically designed for AI, building on the success of previous testing grounds in other sectors.
The AI Growth Lab is intended to play a pivotal role in delivering measures set out in the UK Regulation Action Plan, which includes plans to save UK businesses nearly £6 billion a year by 2029 by eliminating unnecessary administrative tasks. The UK government believes AI tools could make direct improvements to public services, such as the UK National Health Service and housing, referring to existing collaborations between businesses and regulators, such as the UK Information Commissioner’s Office, that have already successfully delivered public benefits.
The UK government emphasized that the AI Growth Lab will operate under carefully defined parameters. Strict, time-limited restrictions will specify which regulatory requirements may be relaxed or adjusted, with all activities closely supervised by technology and regulatory experts. A robust licensing framework, reinforced by strong safeguards, will ensure that any breaches or unacceptable risks will immediately halt testing and may result in fines for those found in violation.
The UK government has launched a public call for views on the AI Growth Lab proposals, in particular considering whether the program should be run in-house or overseen by regulators.
Read the press release here.

Layoffs and Rightsizing for Unionized or Unionizing Workforces

As economic shifts and advancements in artificial intelligence reshape workforce needs, executive teams and boards are reevaluating their strategies. Unionized workforces – or those in the process of unionizing – present unique challenges, particularly in light of National Labor Relations Board developments. Careful planning is essential to navigating these uncharted waters.
To help address these challenges, McDermott Will & Schulte’s labor practice recently shared key insights on the legal and strategic considerations at play. View their analysis and presentation here.

Emerging Trends in Government Contracts Law — What Contractors Need to Know

The federal procurement landscape is shifting rapidly. Agencies, Congress, and the courts have pushed major changes recently that affect how contracts are written, awarded, challenged, and performed. Below is a concise survey of the most important emerging trends — and practical steps contractors should take now.
FAR Overhaul — Plain Language, Statutory Focus, Faster Acquisition
The Federal Acquisition Regulation (FAR) is undergoing the most comprehensive rewrite in decades. Agencies and the FAR Council are working to return the FAR to its statutory roots, rewrite provisions in plain language, and move many non-regulatory “how-to” matters into nonbinding buying guides. The stated goal is to speed up acquisitions, reduce unnecessary regulatory detail, and improve clarity for contracting officers and industry.

Why it matters – Solicitation language, flow-down provisions, and compliance expectations will change. Contractors should expect new standard clauses and should be prepared to interpret and adhere to those clauses. 
Action – Contractors should review existing compliance templates and master agreements for clauses that assume the “old” FAR structure. Contractors should also coordinate with contracting officers early when solicitations cite newly rewritten FAR text.

Bid Protest Pleading Standards and Evolving GAO Practice
The Government Accountability Office (GAO) has moved to tighten the pleading standard for bid protests following FY2025 NDAA direction. Specifically, the GAO has clarified that “protesters must provide, at a minimum, credible allegations that are supported by evidence and are sufficient, if uncontradicted, to establish the likelihood of the protester’s claim of improper agency action.” At the same time, the GAO’s statistics show that protest filings at the GAO have dropped in recent years, while protest filings at the U.S. Court of Federal Claims have increased. 

Why it matters – Heightened pleading standards can affect protest strategy (when to protest, what evidence to include, etc.).
Action – Contractors should tighten proposal documentation (documenting compliance with evaluation criteria), preserve contemporaneous procurement records, and prepare stronger factual bases before filing protests.

Cybersecurity and Software Supply-Chain Requirements (CMMC, SBOMs)
Cyber requirements are now mission-critical. DoD’s Cybersecurity Maturity Model Certification (CMMC) implementation and the associated Defense Federal Acquisition Regulation Supplement (DFARS)/CMMC rules are being rolled out in phases, with material consequences for award (including certification requirements at contract award for many solicitations). Parallel to CMMC, federal cybersecurity authorities (CISA, NSA and partners) have expanded Software Bill of Materials (SBOM) guidance and issued updated minimum elements and draft guidance to increase the transparency of software supply chains. Contractors should expect procurements to require SBOMs, assessment evidence, and flow-downs to subcontractors; a rough sketch of an SBOM-style component record appears after the bullets below.

Why it matters – Noncompliance can make a vendor ineligible for awards, trigger contract terminations, result in negative CPARS, and increase audit risk.
Action – Contractors should inventory software components, adopt SBOM tooling and practices, confirm whether solicitations require CMMC level or specific SBOM deliverables, and build subcontract flow-down clauses and assessment timelines into proposals and schedules.
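
By way of illustration only, the Python sketch below assembles a minimal, SBOM-like record for a single software component. It loosely approximates commonly cited minimum elements (supplier, component name, version, unique identifier, dependency relationships, SBOM author, and timestamp); actual deliverables should follow whatever format the solicitation specifies (e.g., SPDX or CycloneDX) and be produced with SBOM tooling rather than by hand. The vendor and package names here are made up.

```python
import json
from datetime import datetime, timezone

def minimal_sbom_record() -> dict:
    """Assemble an illustrative, SBOM-like inventory for a single component.

    This approximates commonly cited minimum elements; real deliverables
    should follow the format the solicitation specifies (e.g., SPDX or
    CycloneDX) and be generated by SBOM tooling, not by hand.
    """
    return {
        "author": "Example Contractor, Inc.",           # who produced the SBOM data
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "components": [
            {
                "supplier": "Upstream Vendor LLC",
                "name": "example-logging-lib",
                "version": "2.4.1",
                "identifier": "pkg:pypi/example-logging-lib@2.4.1",  # illustrative package URL
                "dependencies": ["example-core-lib"],
            }
        ],
    }

if __name__ == "__main__":
    print(json.dumps(minimal_sbom_record(), indent=2))
```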

AI Procurement, Governance and Related Rules
Federal AI procurement activity has accelerated, and agencies are being given explicit guidance for responsible AI acquisition. OMB and White House guidance (and agency AI plans) are directing agencies to adopt AI procurement controls and risk assessment processes, and the GSA and other agencies are standing up AI acquisition resources and vehicle options. Contractors offering AI products/services should expect special contract terms addressing explainability, bias mitigation, data provenance, and security.

Why it matters – AI-specific clauses may impose new compliance burdens, testing, and warranty/indemnity regimes — and procuring agencies will favor vendors who can demonstrate governance and secure deployment.
Action – Contractors should build an AI governance package (risk assessment, testing protocols, incident response, data handling policies). Contractors should also highlight AI components in proposals when appropriate and be prepared to negotiate responsible-use contract language.

Market and Budget Trends, Threshold Inflation and Consolidation
Federal spending priorities and contracting thresholds are changing. Inflation adjustments to acquisition thresholds, shifting agency budgets, and industry consolidation (fewer prime contractors in certain sectors) are reshaping competition dynamics. Agencies are also emphasizing commercial-item contracting and other streamlined approaches.

Why it matters – Competition pools and subcontracting opportunities are shifting; small businesses and mid-market firms may need new strategies to remain competitive.
Action – Contractors should reassess capture plans to reflect consolidation and threshold changes. Contractors should also consider teaming, mergers, or JV options. Further, contractors should consider targeting commercial-item pathways or set-aside opportunities.

Closing
The pace of regulatory and policy change in government contracts is unusually brisk — driven by technology (AI and software supply chain concerns), cybersecurity imperatives, statutory reforms, shifting political agendas, and evolving protest practice. For contractors, the key is proactive adaptation: update compliance playbooks, invest in cybersecurity/SBOM and AI governance, tighten proposal documentation, and revisit teaming and capture strategies. Staying ahead will not only reduce risk of noncompliance but also create competitive advantage in an environment that increasingly prizes secure, explainable, and well-documented solutions.