China’s Supreme People’s Court Designates Generative AI Case as Typical
On May 26, 2025, China’s Supreme People’s Court (SPC) released the “Typical Cases on the Fifth Anniversary of the Promulgation of the Civil Code” (民法典颁布五周年典型案例), including one generative AI case in which the Beijing Internet Court held that an AI-generated voice infringed a dubber’s personality rights. Note that while China is not a common law country, designating a case as a Guiding Case or Typical Case is somewhat analogous to a U.S. court marking a case as precedential: the SPC is signaling to lower courts that future cases should be adjudicated in accordance with this decision.
As explained by the SPC:
IV. Protecting voice rights in accordance with law and promoting the development of artificial intelligence for good – Yin v. a Beijing intelligent technology company and others, an infringement of personality rights case
(I) Basic Facts of the Case
The plaintiff, Yin XX, is a dubbing artist. He discovered that works produced by others using his voice were circulating widely on many well-known apps. Tracing them to their source, the voice in those works came from a text-to-speech product offered on a platform operated by the first defendant, a Beijing intelligent technology company, where users can convert text into speech by entering text and adjusting parameters. The plaintiff had been commissioned by the second defendant, a Beijing cultural media company, to record a sound recording, and that company owned the copyright in the recording. The cultural media company later provided the audio of the plaintiff’s recording to the third defendant, a software company, and authorized it to use, copy, and modify the data for commercial or non-commercial purposes in its products and services. The software company used the plaintiff’s recording only as material for AI processing, generated the text-to-speech product at issue, and sold it on a cloud service platform operated by the fourth defendant, a Shanghai network technology company. The first defendant signed an online service sales contract with the fifth defendant, a Beijing technology development company, which placed an order with the cloud platform operator that included the text-to-speech product at issue. The first defendant then used an application program interface to retrieve the text-to-speech product and generate speech on its own platform without further technical processing. Yin sued, requesting that the first defendant and the third defendant immediately stop the infringement and apologize, and that all five defendants compensate him for his economic losses and emotional distress.
(II) Judgment Result
The effective judgment holds that the right to one’s voice is a personal interest that concerns the personal dignity of natural persons. For a voice processed by artificial intelligence technology, as long as the general public, or the public within a certain range, can identify a specific natural person from the timbre, intonation, and pronunciation style, the natural person’s voice rights extend to the AI-generated voice. All five defendants used the plaintiff’s voice without his permission and thereby infringed his voice rights. Because the infringing products involved in the case had already been removed, the court did not order the five defendants to bear tort liability in the form of ceasing the infringement. Instead, based on the plaintiff’s requests, each defendant’s degree of subjective fault, and other factors, the court ordered the first defendant, a Beijing intelligent technology company, and the third defendant, a software company, to apologize to the plaintiff, and ordered the second defendant, a Beijing cultural media company, and the third defendant to compensate the plaintiff for his losses.
(III) Typical Significance
General Secretary Xi Jinping emphasized: “We must strengthen the research and prevention of potential risks in the development of artificial intelligence, safeguard the interests of the people and national security, and ensure that artificial intelligence is safe, reliable and controllable.” With the rapid development of artificial intelligence technology, voice forgery and imitation are becoming increasingly common, and disputes over infringement of personality rights arising from these technologies are also increasing. China has written the protection of “voice” into the Book on Personality Rights of the Civil Code, reflecting respect for natural persons’ rights and interests in their voices as well as a positive response to technological development and social needs. In this case, the People’s Court determined in accordance with the law that the voice, as a type of personality interest, is specific to the person. Using, or licensing others to use, the voice in a sound recording without the right holder’s permission constitutes infringement. The decision sets boundaries of behavior for the application of new business forms and new technologies, and helps regulate and guide the development of artificial intelligence technology toward serving the people and doing good.
(IV) Guidance on the provisions of the Civil Code
Article 1018
A natural person enjoys the right to likeness and is entitled to make, use, publicize, or authorize others to use his image in accordance with law.
The likeness is an external image of a specific natural person reflected in video recordings, sculptures, drawings, or on other media by which the person can be identified.
Article 1019
No organization or individual may infringe upon another person’s right to likeness by vilifying or defacing the image thereof, or through other means such as falsifying another person’s image by utilizing information technology. Unless otherwise provided by law, no one may make, use, or publicize the image of the right holder without his consent.
Without the consent of the person holding the right to likeness, a person holding a right in the works of the image of the former person may not use or publicize the said image by ways such as publishing, duplicating, distributing, leasing, or exhibiting it.
Article 1023
For an authorized use of another person’s name or the like, the relevant provisions on the authorized use of others’ images shall be applied mutatis mutandis.
For the protection of a natural person’s voice, the relevant provisions on the protection of the right to likeness shall be applied mutatis mutandis.
The original text, including five other Civil Code Typical Cases, can be found here (Chinese only).
50% of Professional Services Users Have Used AI Tools Not Authorized by Company
A new survey from Intapp, titled “2025 Tech Perceptions Survey Report,” finds a “surge in AI usage” among fee-earners in the accounting, consulting, finance, and legal sectors. Findings include that “AI usage among professionals has grown substantially, with 72% using AI at work versus 48% in 2024.” AI adoption among firms increased to 56%, with firms using it for data summarization, document generation, research, error-checking, quality control, voice queries, data entry, consultation (decision-making support), and recommendations. Adoption is highest in finance, where 89% of professionals use AI at work, followed by accounting at 73%, consulting at 68%, and legal at 55%.
A significant conclusion is that when firms do not provide AI tools, professionals often find their own: over 50% of professionals have used unauthorized AI tools in the workplace, which increases risk for companies. Professionals are reallocating the time saved with AI tools toward improving work-life balance, higher-level client work, strategic initiatives and planning, cultivating client relationships, and increasing billable hours.
The survey found that professionals want and need technology to assist with tasks, yet only 32% of professionals believe they have the optimal technology to complete their jobs effectively. The conclusion is that professionals who are given optimal technology to perform their jobs are more satisfied and more likely to stay at the firm, and that optimal tech “powers professional- and firm-success, and AI is becoming non-negotiable for future firm leaders.”
AI tools are developing rapidly and are being adopted across all industries, including the professional services sector. As the Intapp survey notes, if firms do not provide AI tools for workers to use in their jobs, workers will use them anyway. The survey reiterates how important it is to have an AI Governance Program in place that provides sanctioned tools and reduces the risks associated with unauthorized AI use. Developing and implementing an AI Governance Program and acceptable use policies should be high on the priority list for all industries, including professional services.
Four New Executive Orders Aim to Unleash U.S. Nuclear Energy
On May 23, 2025, President Trump signed four new executive orders (the Orders) to “usher in a nuclear energy renaissance.” In an article, the White House explained that the Orders provide “a path forward for nuclear innovation” as they “allow for reactor design testing at [Department of Energy (DOE)] labs, clear the way for construction on federal lands to protect national and economic security, and remove regulatory barriers by requiring the Nuclear Regulatory Commission [(NRC)] to issue timely licensing decisions.” Characterizing the Orders as “the most significant nuclear regulatory reform actions taken in decades,” the White House declared that it is “restoring a strong American nuclear industrial base, rebuilding a secure and sovereign domestic nuclear fuel supply chain, and leading the world towards a future fueled by American nuclear energy.” Below is a summary of some of the significant aspects of the Orders.
Reforming Nuclear Reactor Testing at the Department of Energy
Finds that the design, construction, and operation of certain DOE-controlled advanced reactors fall within DOE’s jurisdiction.
Directs the Secretary of Energy to take actions to reform and streamline National Laboratory processes for reactor testing at DOE, including but not limited to, revising regulations to expedite the approval of reactors under DOE’s jurisdiction to enable test reactors to be safely operational within 2 years following submission of a substantially complete application.
Directs the Secretary of Energy to create a pilot program for reactor construction and operation outside the National Laboratories, and to approve at least three reactors under this program with the goal of achieving criticality in each of the three reactors by July 4, 2026.
Directs the Secretary of Energy to eliminate or expedite internal environmental reviews for authorizations, permits, approvals, and other activities related to reactor testing.
Deploying Nuclear Reactors for National Security
Directs the Secretary of Defense, acting through the Secretary of the Army, to create a program for building and deploying a nuclear reactor at a domestic military installation by September 30, 2028.
Directs the Secretary of Energy to take actions to deploy a privately funded advanced reactor to power artificial intelligence (AI) infrastructure and meet other national security objectives at a DOE site within 30 months.
Directs the Secretary of Energy to designate certain AI data centers that are located at or operated in coordination with DOE facilities as critical defense facilities, where appropriate, and the electrical infrastructure that powers them as defense critical electric infrastructure.
Directs the Secretary of Energy to make available at least 20 metric tons of high-assay low-enriched uranium for private sector nuclear projects powering AI infrastructure at DOE sites.
Directs the Secretaries of Energy and Defense to enable the construction and operation of privately funded nuclear fuel facilities at DOE and/or Department of Defense (DOD) controlled sites for use in national security reactors, commercial power reactors, and non-power research reactors.
Directs the Secretary of State to take certain actions to promote the U.S. nuclear industry in the development of commercial civil nuclear projects globally.
Ordering the Reform of the Nuclear Regulatory Commission
Establishes a goal of quadrupling American nuclear energy capacity from 100 gigawatts (GW) to 400 GW by 2050.
Directs the reorganization of the NRC and a reduction in force in consultation with the Department of Government Efficiency.
Directs the NRC to undertake a wholesale review and revision of its regulations and guidance within 18 months, including but not limited to, establishing:
Fixed deadlines to evaluate and approve new reactor license applications within 18 months and applications for the continued operation of existing reactors within one year;
Science-based radiation limits, instead of relying on the linear no-threshold model for radiation exposure;
An expedited approval process for reactor designs that have been safely tested by the DOD or DOE; and
A process for high-volume licensing of microreactors and modular reactors.
Reinvigorating the Nuclear Industrial Base
Directs the Secretary of Energy to recommend a national policy regarding management of spent nuclear fuel and the development and deployment of advanced fuel cycle capabilities, evaluate policies concerning commercial recycling and reprocessing of nuclear fuels, and make recommendations for the efficient use of nuclear waste materials.
Directs the Secretary of Energy to develop a plan to expand domestic uranium processing and enrichment capabilities to meet projected civilian and defense reactor needs.
Halts the surplus plutonium disposition program, with certain exceptions, and directs the Secretary of Energy to process and make surplus plutonium available for advanced reactor fuel fabrication.
Leverages the authority in the Defense Production Act to seek voluntary agreements with domestic nuclear energy companies for the cooperative procurement of enriched uranium and for consultation regarding the management of spent nuclear fuel.
Directs DOE to prioritize the facilitation of 5 GW of power uprates to existing reactors and construction of 10 new large reactors by 2030.
Directs DOE’s Loan Programs Office and U.S. Small Business Administration to prioritize funding to support the nuclear energy industry.
Seeks to expand the American nuclear workforce by directing the Secretaries of Labor and Education to increase participation in nuclear energy-related training and apprenticeship programs and ordering the Secretary of Energy to increase access to DOE’s National Laboratories for nuclear engineering students.
Overall, the Orders signal a renewed commitment to revitalize the U.S. nuclear energy industry and build upon a well-established bipartisan consensus in favor of nuclear innovation, accelerating nuclear deployment, and strengthening domestic uranium supply chains. Nonetheless, efforts to reduce federal staffing and weaken NRC’s regulatory independence could compromise the viability of the Trump administration’s goal to “unleash nuclear energy in the U.S.,” placing greater importance on sound regulatory execution and legally durable policymaking.
Take it Down Act Signed into Law, Offering Tools to Fight Non-Consensual Intimate Images and Creating a New Image Takedown Mechanism
Law establishes national prohibition against nonconsensual online publication of intimate images of individuals, both authentic and computer-generated.
First federal law regulating AI-generated content.
Creates requirement that covered platforms promptly remove depictions upon receiving notice of their existence and a valid takedown request.
For many online service providers, complying with the Take It Down Act’s notice-and-takedown requirement may warrant revising their existing DMCA takedown notice provisions and processes.
Another carve-out to CDA immunity? More like a dichotomy of sorts….
On May 19, 2025, President Trump signed the bipartisan-supported Take it Down Act into law. The law prohibits any person from using an “interactive computer service” to publish, or threaten to publish, nonconsensual intimate imagery (NCII), including AI-generated NCII (colloquially known as revenge pornography or deepfake revenge pornography). Additionally, the law requires that, within one year of enactment, social media companies and other covered platforms implement a notice-and-takedown mechanism that allows victims to report NCII. Platforms must then remove properly reported imagery (and any known identical copies) within 48 hours of receiving a compliant request.
Support for the Act and Concerns
The Take it Down Act attempts to fill a void in the policymaking space, as many states had not enacted legislation regulating sexual deepfakes when it was signed into law. The Act has been described as the first major federal law that addresses harm caused by AI. It passed the Senate in February of this year by unanimous consent and passed the House of Representatives in April by a vote of 409-2. It also drew the support of many leading technology companies.
Despite the Act’s near-unanimous support in Congress, some digital privacy advocates have expressed concern that the new notice-and-takedown mechanism could have unintended consequences for digital privacy in general. For example, some commentators have suggested that the statute’s takedown provision is written too broadly and lacks sufficient safeguards against frivolous requests, potentially leading to the removal of lawful content, especially given the short 48-hour window to act on a takedown request. [Note: In 2023, we similarly wrote about abuses of the takedown provision of the Digital Millennium Copyright Act]. In addition, some have argued that the law could undermine end-to-end encryption by possibly forcing providers of encrypted services to “break” encryption to comply with the removal process. Supporters of the law have countered that private encrypted messages would likely not be considered “published” under the text of the statute (which uses the term “publish” as opposed to “distribute”).
Criminalization of NCII Publication for Individuals
The Act makes it unlawful for any person “to use an interactive computer service to knowingly publish an intimate visual depiction of an identifiable individual” under certain circumstances.[1] It also prohibits threats involving the publishing of NCII and establishes various criminal penalties. Notably, the Act does not distinguish between authentic and AI-generated NCII in its penalties section if the content has been published. Furthermore, the Act expressly states that a victim’s prior consent to the creation of the original image or its disclosure to another individual does not constitute consent for its publication.
New Notice-and-Takedown Requirement for “Covered Platforms”
Along with punishing individuals who publish NCII, the Take it Down Act requires covered platforms to create a notice-and-takedown process for NCII within one year of the law’s passage. Below are the main points for platforms to consider:
Covered Platforms. The Act defines a “covered platform” as a “website, online service, online application, or mobile application” that serves the public and either provides a forum for user-generated content (including messages, videos, images, games, and audio files) or regularly deals with NCII as part of its business.
Notice-and-Takedown Process. Covered platforms must create a process through which victims of NCII (or someone authorized to act on their behalf) can send notice to them about the existence of such material (including a statement indicating a “good faith belief” that the intimate visual depiction of the individual is nonconsensual, along with information to assist in locating the unlawful image) and can request its removal.
Notice to Users. Adding an additional compliance item to the checklist, the Act requires covered platforms to provide a “clear and conspicuous” notice of the Act’s notice and removal process, such as through a conspicuous link to another web page or disclosure.
Removal of NCII. Within 48 hours of receiving a valid removal request, covered platforms must remove the NCII and “make reasonable efforts to identify and remove any known identical copies.”
Enforcement. Compliance under this provision will be enforced by the Federal Trade Commission (FTC).
Safe Harbor. Under the law, covered platforms will not be held liable for “good faith” removal of content that is claimed to be NCII “based on facts or circumstances from which the unlawful publishing of an intimate visual depiction is apparent,” even if it is later determined that the removed content was lawfully published.
Compliance Note: For many online service providers, complying with the Take It Down Act’s notice-and-takedown requirement may warrant revising their existing DMCA takedown notice provisions and processes, especially if those processes have not been reviewed or updated for some time. Many “covered platforms” may rely on automated processes (or a combination of automated tools and targeted human oversight) to fulfill Take It Down Act requests and to meet the related obligation to make “reasonable efforts” to identify and remove known identical copies. This may involve using tools for processing notices, removing content, and detecting duplicates (a minimal sketch of one such approach appears below). As a result, providers should consider whether their existing takedown provisions should be amended to address these new requirements and how they will implement the new compliance items on the backend using the infrastructure already in place for the DMCA.
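For illustration only, the sketch below shows one way a platform might fingerprint material removed under a takedown request and screen later uploads against it to catch byte-identical copies. This is a minimal example built on our own assumptions, not a description of any provider’s actual system or a compliance requirement of the Act; the class and method names are hypothetical.

```python
# A minimal sketch of tracking removed NCII and flagging identical re-uploads
# with an exact cryptographic hash match. Names and storage are illustrative
# assumptions; production systems typically add perceptual hashing and human
# review to catch edited or re-encoded versions, which exact hashing misses.

import hashlib


class TakedownRegistry:
    """Stores fingerprints of content removed under a valid takedown request."""

    def __init__(self) -> None:
        self._removed_hashes: set[str] = set()

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # SHA-256 of the raw bytes: identical files always produce the same hash.
        return hashlib.sha256(content).hexdigest()

    def record_removal(self, content: bytes) -> None:
        """Call when content is removed in response to a valid request."""
        self._removed_hashes.add(self.fingerprint(content))

    def is_known_copy(self, content: bytes) -> bool:
        """Check a new upload against previously removed material."""
        return self.fingerprint(content) in self._removed_hashes


if __name__ == "__main__":
    registry = TakedownRegistry()
    removed_image = b"...binary image data..."
    registry.record_removal(removed_image)
    print(registry.is_known_copy(removed_image))      # True: identical copy
    print(registry.is_known_copy(b"another upload"))  # False: not a known copy
```

Exact hashing maps naturally to the Act’s “known identical copies” language, while catching altered versions would require perceptual hashing or similar techniques that the statute does not expressly mandate.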
What about CDA Section 230?
Section 230 of the Communications Decency Act (“CDA”), 47 U.S.C § 230, prohibits a “provider or user of an interactive computer service” from being held responsible “as the publisher or speaker of any information provided by another information content provider.” Courts have construed the immunity provisions in Section 230 broadly in a variety of cases arising from the publication of user-generated content.
Following enactment of the Take It Down Act, some important questions for platforms are: (1) whether Section 230 still protects platforms from actions related to the hosting or removal of NCII; and (2) whether FTC enforcement of the Take It Down Act’s platform notice-and-takedown process is blocked or limited by CDA immunity.
At first blush, it might seem that the CDA would restrict enforcement against online providers in this area, since claims arising from decisions regarding the hosting and removal of third-party content would necessarily treat a covered platform as a “publisher or speaker” of that content. However, a deeper examination of the text of the CDA suggests the answer is more nuanced.
It should be noted that the Good Samaritan provision of the CDA (47 U.S.C § 230(c)(2)) could be used by online providers as a shield from liability for actions taken to proactively filter or remove third party NCII content or remove NCII at the direction of a user’s notice under the Take It Down Act, as CDA immunity extends to good faith actions to restrict access to or availability of material that the provider or user considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Moreover, the Take It Down Act adds its own safe harbor for online providers for “good faith disabling of access to, or removal of, material claimed to be a nonconsensual intimate visual depiction based on facts or circumstances from which the unlawful publishing of an intimate visual depiction is apparent, regardless of whether the intimate visual depiction is ultimately determined to be unlawful or not.”
Still, further questions about the reach of the CDA prove more intriguing. The Take It Down Act appears to create a dichotomy of sorts regarding CDA immunity in the context of NCII removal claims. Under the text of the CDA, it appears that immunity would not limit FTC enforcement of the Take It Down Act’s notice-and-takedown provision affecting “covered platforms.” To explore this issue, it’s important to examine the CDA’s exceptions, specifically 47 U.S.C § 230(e)(1).
Effect on other laws
(1) No effect on criminal law
Nothing in this section shall be construed to impair the enforcement of section 223 or 231 of this title [i.e., the Communications Act], chapter 71 (relating to obscenity) or 110 (relating to sexual exploitation of children) of title 18, or any other Federal criminal statute.
Under the text of the CDA’s exception, Congress carved out Section 223 and 231 of the Communications Act from the CDA’s scope of immunity. Since the Take It Down Act states that it will be codified at Section 223 of the Communications Act of 1934 (i.e., 47 U.S.C. 223(h)), it appears that platforms would not enjoy CDA protection from FTC civil enforcement actions based on the agency’s authority to enforce the Act’s requirements that covered platforms “reasonably comply” with the new Take It Down Act notice-and-takedown obligations.
However, that is not the end of the analysis for platforms. Interestingly, it would appear that platforms would generally still retain CDA protection (subject to any exceptions) from claims related to the hosting or publishing of third-party NCII that has not been the subject of a Take It Down Act notice, since the Act’s removal requirements are not implicated without a valid removal request.[2] Similarly, a platform could make a strong argument that it retains CDA immunity from any claims brought by an individual (rather than the FTC) for failing to reasonably comply with a Take It Down Act notice. That said, it is conceivable that litigants – or even state attorneys general – might attempt to frame such legal actions under consumer protection statutes, as the Take It Down Act states that a failure to reasonably comply with an NCII takedown request is an unfair or deceptive trade practice under the FTC Act. Even in such a case, platforms would likely contend that such claims by non-FTC parties are merely claims based on a platform’s role as publisher of third-party content and are therefore barred by the CDA.
Ultimately, most, if not all, platforms will likely make best efforts to reasonably comply with the Take It Down Act, thus avoiding the above contingencies. Yet for platforms using automated systems to process takedown requests, unintended errors may occur, and it is important to understand how and when the CDA would still protect platforms against any related claims.
Looking Ahead
It will be up to a year before the notice-and-takedown requirements become effective, so we will have to wait and see how well the process works in eradicating revenge pornography material and intimate AI deepfakes from platforms, how the Act potentially affects messaging platforms, how aggressively the Department of Justice will prosecute offenders, and how closely the FTC will be monitoring online platforms’ compliance with the new takedown requirements.
It also remains to be seen whether Congress has an appetite to pass more AI legislation. Less than two weeks before the Take it Down Act was signed into law, the Senate Committee on Commerce, Science, and Transportation held a hearing on “Winning the AI Race” that featured the CEOs of many well-known AI companies. During the hearing, there was bipartisan agreement on the importance of sustaining America’s leadership in AI, expanding the AI supply chain and not burdening AI developers with a regulatory framework as strict as the EU AI Act. The senators listened to testimony from tech executives calling for enhanced educational initiatives and the improvement of infrastructure needed for advancing AI innovation, alongside discussing proposed bills regulating the industry, but it was not clear whether any of these potential policy solutions would receive enough support to be signed into law.
The authors would like to thank Aniket C. Mukherji, a Proskauer legal assistant, for his contributions to this post.
[1] The Act provides that the publication of the NCII of an adult is unlawful if (for authentic content) “the intimate visual depiction was obtained or created under circumstances in which the person knew or reasonably should have known the identifiable individual had a reasonable expectation of privacy,” if (for AI-generated content) “the digital forgery was published without the consent of the identifiable individual,” and if (for both authentic and AI-generated content) what is depicted “was not voluntarily exposed by the identifiable individual in a public or commercial setting,” “is not a matter of public concern,” and is intended to cause harm or does cause harm to the identifiable individual. The publication of NCII (whether authentic or AI-generated) of a minor is unlawful if it is published with intent to “abuse, humiliate, harass, or degrade the minor” or “arouse or gratify the sexual desire of any person.” The Act also lists some basic exceptions, such as publications of covered imagery for law enforcement investigations, legal proceedings, or educational purposes, among other things.
[2] Under the Act, “Upon receiving a valid removal request from an identifiable individual (or an authorized person acting on behalf of such individual) using the process described in paragraph (1)(A)(ii), a covered platform shall, as soon as possible, but not later than 48 hours after receiving such request—
(A) remove the intimate visual depiction; and
(B) make reasonable efforts to identify and remove any known identical copies of such depiction.”
2025 Review of AI and Employment Law in California
California started 2025 with significant activity around artificial intelligence (AI) in the workplace. Legislators and state agencies introduced new bills and regulations to regulate AI-driven hiring and management tools, and a high-profile lawsuit is testing the boundaries of liability for AI vendors.
Legislative Developments in 2025
State lawmakers unveiled proposals to address the use of AI in employment decisions. Notable bills introduced in early 2025 include:
SB 7 – “No Robo Bosses Act”
Senate Bill (SB) 7 aims to strictly regulate employers’ use of “automated decision systems” (ADS) in hiring, promotions, discipline, or termination. Key provisions of SB 7 would:
Require employers to give at least 30 days’ prior written notice to employees, applicants, and contractors before using an ADS and disclose all such tools in use.
Mandate human oversight by prohibiting employers from relying primarily on AI for employment decisions such as hiring or firing; employers would need to involve a human in final decisions.
Ban certain AI practices, including tools that infer protected characteristics, perform predictive behavioral analysis on employees, retaliate against workers for exercising legal rights, or set pay based on individualized data in a discriminatory way.
Give workers rights to access and correct data used by an ADS and to appeal AI-driven decisions to a human reviewer. SB 7 also includes anti-retaliation clauses and enforcement provisions.
AB 1018 – Automated Decisions Safety Act
Assembly Bill (AB) 1018 would broadly regulate development and deployment of AI/ADS in “consequential” decisions, including employment, and possibly allow employees to opt out of the use of a covered ADS. This bill places comprehensive compliance obligations on both employers and AI vendors—requiring bias audits, data retention policies, and detailed impact assessments before using AI-driven hiring tools. It aims to prevent algorithmic bias across all business sectors.
AB 1221 and AB 1331 – Workplace Surveillance Limits
Both AB 1221 and AB 1331 target electronic monitoring and surveillance technologies in the workplace. AB 1221 would obligate employers to provide 30 days’ notice to employees who will be monitored by workplace surveillance tools, including facial, gait, or emotion recognition technology, all of which typically rely on AI algorithms. AB 1221 also sets procedures and requirements for how any vendor that analyzes data collected by such a tool may store and use that data. AB 1331 more broadly restricts employers’ use of tracking tools, from video/audio recording and keystroke monitoring to GPS and biometric trackers, particularly during off-duty hours or in private areas.
Agency and Regulatory Guidance
CRD – Final Regulations on Automated Decision Systems
On 21 March 2025, California’s Civil Rights Council (part of the Civil Rights Department (CRD)) adopted final regulations titled “Employment Regulations Regarding Automated-Decision Systems.” These rules, which could take effect as early as 1 July 2025 once approved by the Office of Administrative Law, explicitly apply existing anti-discrimination law (the Fair Employment and Housing Act (FEHA)) to AI tools.
Key requirements in the new CRD regulations include:
Bias Testing and Record-Keeping
Employers using automated tools may bear a higher burden to demonstrate that they have tested for and mitigated bias, and a lack of evidence of such efforts can be held against the employer. Employers must also retain records of their AI-driven decisions and data (e.g., job applications, ADS data) for at least four years. (An illustrative example of the kind of adverse-impact testing such audits often start with appears after this list of requirements.)
Third-Party Liability
The definition of “employer’s agent” under FEHA now explicitly encompasses third-party AI vendors or software providers if they perform functions on behalf of the employer. This means an AI vendor’s actions (screening or ranking applicants, for example) can legally be attributed to the employer—a critical point aligning with recent caselaw (see Mobley lawsuit below).
Job-Related Criteria
If an employer uses AI to screen candidates, the criteria must be job-related and consistent with business necessity, and no less-discriminatory alternative can exist. This mirrors disparate-impact legal tests, applied now to algorithms.
Broad Coverage of Tools
The regulations define “Automated-Decision System” expansively to include any computational process that assists or replaces human decision-making about employment benefits, which covers resume-scanning software, video interview analytics, predictive performance tools, etc.
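To make the bias-testing obligation more concrete, the sketch below computes the classic four-fifths-rule adverse-impact ratio for a hypothetical screening tool. It is only an illustration under assumed numbers and groupings; the CRD regulations do not prescribe this or any particular statistical test, and the 0.8 threshold is a rule of thumb drawn from longstanding federal selection guidelines rather than from the new rules.

```python
# Illustrative four-fifths-rule adverse-impact check. Group labels, counts,
# and the 0.8 threshold are assumptions for demonstration purposes only.

from typing import Dict, Tuple


def selection_rates(outcomes: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """Compute the selection rate (selected / applicants) for each group."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}


def adverse_impact_report(outcomes: Dict[str, Tuple[int, int]],
                          threshold: float = 0.8) -> Dict[str, dict]:
    """Compare each group's selection rate against the highest-rate group."""
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())
    return {
        group: {
            "selection_rate": round(rate, 3),
            "impact_ratio": round(rate / benchmark, 3),
            "flag_for_review": rate / benchmark < threshold,
        }
        for group, rate in rates.items()
    }


if __name__ == "__main__":
    # Hypothetical screening outcomes: (candidates advanced, total applicants)
    report = adverse_impact_report({
        "age_under_40": (120, 400),  # 30% advance
        "age_40_plus": (45, 300),    # 15% advance
    })
    for group, stats in report.items():
        print(group, stats)
```

In this hypothetical, the over-40 group advances at half the rate of the under-40 group (an impact ratio of 0.5), which would flag the tool for closer review, documentation of job-relatedness, and consideration of less discriminatory alternatives.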
Once in effect, California will be among the first jurisdictions with detailed rules governing AI in hiring and employment. The CRD’s move signals that using AI is not a legal shield and that employers remain responsible for outcomes and must ensure their AI tools are fair and compliant.
AI Litigation
Mobley v. Workday, Inc., currently pending in the US District Court for the Northern District of California, illustrates the litigation risks of using AI in hiring. In Mobley, a job applicant alleged that Workday’s AI-driven recruitment screening tools disproportionately rejected older, Black, and disabled applicants, including himself, in violation of anti-discrimination laws. In late 2024, Judge Rita Lin allowed the lawsuit to proceed, finding the plaintiff stated a plausible disparate impact claim and that Workday could potentially be held liable as an “agent” of its client employers. This ruling suggests that an AI vendor might be directly liable for discrimination if its algorithm, acting as a delegated hiring function, unlawfully screens out protected groups.
On 6 February 2025, the plaintiff moved to expand the lawsuit into a nationwide class action on behalf of millions of job seekers over age 40 who applied through Workday’s systems since 2020 and were never hired. The amended complaint added several additional named plaintiffs (all over 40) who claim that, after collectively submitting thousands of applications via Workday-powered hiring portals, they were rejected—sometimes within minutes and at odd hours, suggestive of automated processing. They argue that a class of older applicants was uniformly impacted by the same algorithmic practices. On 16 May 2025, Judge Lin preliminarily certified a nationwide class of over-40 applicants under the Age Discrimination in Employment Act, a ruling that highlights the expansive exposure these tools could create if applied unlawfully. Mobley marks one of the first major legal tests of algorithmic bias in employment and remains the nation’s most high-profile challenge to AI-driven employment decisions.
Conclusion
California is moving toward a comprehensive framework where automated hiring and management tools are held to the same standards as human decision-makers. Employers in California should closely track these developments: pending bills could soon impose new duties (notice, audits, bias mitigation) if enacted, and the CRD’s regulations will make algorithmic bias expressly unlawful under FEHA. Meanwhile, real-world litigation is already underway, warning that both employers and AI vendors can be held accountable when technology produces discriminatory outcomes.
The tone of the regulatory guidance is clear: embracing innovation must not come at the expense of fairness and compliance. Legal professionals, human resources leaders, and in-house counsel should proactively assess any AI tools used in recruitment or workforce management. This includes consulting the new CRD rules, conducting bias audits, and ensuring there is a “human in the loop” for important decisions. California’s 2025 developments signal that the intersection of AI and employment law will only grow in importance, with the state continuing to refine how long-standing workplace protections apply to cutting-edge technology.
FDA Shifts Inspections to States and AI to Help Boost Efficiency
After a slowdown in FDA inspections during the first quarter of 2025 and the mass departure of over 3,500 employees, FDA is pivoting to state authority and generative AI technology to help the agency “do more with less.” FDA currently has contracts with forty-three states and plans to expand this program to have more states conduct FDA inspections of food and beverage facilities.
Following the shift in responsibility, food and drug manufacturers have recently seen an uptick in inspection notices. Foreign facilities are experiencing more unannounced inspections as FDA seeks to treat foreign firms like domestic firms with less lead time before inspections, as we previously blogged.
FDA Commissioner Marty Makary has also directed FDA centers to begin using AI for premarket scientific reviews, giving the agency an “aggressive timeline” of full integration by June 30, 2025. Makary claims that the use of AI will reduce the amount of “non-productive busywork” that consumes a large part of the scientific review process.
FDA has announced that it will release additional details about its use of AI later in June. Keller and Heckman will continue to report on these developments.
US House of Representatives Advances Unprecedented 10-Year Moratorium on State AI Laws
The US House of Representatives has advanced a proposal that would prohibit states from enforcing any AI-related laws for a decade. While the measure faces significant procedural hurdles in the Senate, it represents the most aggressive federal attempt yet to preempt state-level AI regulation.
Proposed Moratorium Framework
On May 22, 2025, the House of Representatives narrowly passed a budget reconciliation bill containing a provision that would ban states from enforcing AI laws for ten years. This followed the May 14, 2025, House Energy and Commerce Committee vote of 29-24 along party lines to advance the budget reconciliation bill containing sweeping AI preemption language. The provision, titled “Moratorium,” states, with limited exceptions, that “no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.”
Scope of Impact
The moratorium would effectively prohibit state enforcement of regulations addressing:
Algorithmic bias in employment or housing decisions;
AI-generated deepfakes in political campaigns;
Automated decision-making in healthcare or insurance;
AI surveillance systems;
Transparency requirements for AI use in consumer applications; and
Data protection measures specific to AI systems.
Limited Exceptions
The proposal includes narrow exceptions for state laws that:
Remove legal impediments to AI deployment;
Streamline licensing, permitting, or zoning procedures;
Impose requirements mandated by federal law;
Apply generally applicable standards equally to AI and non-AI systems; and
Impose reasonable, cost-based fees treating AI systems comparably to other technologies.
State and Industry Response
A bipartisan group of 40 state attorneys general, including Republicans from Ohio, Tennessee, Arkansas, Utah, and Virginia, has opposed the measure, calling it “sweeping and wholly destructive of reasonable state efforts to prevent known harms associated with AI.”
Affected State Legislation
The moratorium would also impact numerous existing state AI laws such as:
Illinois Artificial Intelligence Video Interview Act requiring disclosure of AI use in hiring;
California SB-1001 requiring disclosure of chatbots in political campaigns;
New York City Local Law 144 addressing algorithmic accountability in employment; and
Maryland House Bill 1202 establishing AI bias auditing requirements.
Industry Perspective
Major technology companies have advocated for unified federal approaches, with OpenAI CEO Sam Altman testifying that a “patchwork” of AI regulations “would be quite burdensome and significantly impair our ability to do what we need to do.”
Procedural and Constitutional Challenges
Senate Hurdles
The moratorium faces significant obstacles in the Senate, where the “Byrd Rule” requires that budget reconciliation provisions focus primarily on budgetary rather than policy matters.
Also, some Senate Republicans are expressing skepticism about a proposed moratorium on state AI governance, arguing that states need to maintain their ability to protect consumers until Congress passes comprehensive federal legislation. Senator Marsha Blackburn (R-Tenn.) gave voice to this position by pointing to Tennessee’s recent law protecting artists from unauthorized AI use of their voices and images, asserting that states cannot halt such protective measures while awaiting federal action that would preempt state laws.
Constitutional Considerations
The proposal also raises several constitutional questions regarding Commerce Clause authority, Tenth Amendment state police powers, and due process requirements for AI systems affecting citizens.
Bipartisan House Task Force
Relatedly, the Bipartisan House Task Force Report on AI published during the 118th Congress previewed challenges with preemption: “Preemption of state AI laws by federal legislation is a tool that Congress could use to accomplish various objectives. However, federal preemption presents complex legal and policy issues that should be considered.”
Business Implications
Regulatory Uncertainty
Organizations should expect continued tension between federal and state authorities over AI governance, regardless of the ultimate outcome of this specific proposal. Even without state AI laws, AI-driven decisions remain subject to existing anti-discrimination laws, including the ADA and Title VII.
Governance Recommendations
Organizations should implement comprehensive AI governance practices including:
Bias audits across AI systems;
Robust human oversight mechanisms;
Documentation of AI decision-making processes; and
Appropriate vendor contract provisions.
International Considerations
The moratorium could also affect US competitiveness by preventing regulatory innovation that might inform international standards and reducing the democratic legitimacy of US AI governance approaches.
Looking Ahead
The proposal represents fundamental tensions between federal coordination and state-level regulatory innovation during a critical period of AI development. Whether this specific measure succeeds or fails, organizations must prepare for ongoing regulatory uncertainty while implementing strong AI governance practices.
The intersection of federal preemption efforts and rapidly evolving AI capabilities suggests continued policy volatility in this area, requiring flexible compliance frameworks that can adapt to changing requirements. Organizations should monitor both federal and state AI legislative developments while maintaining robust internal governance frameworks regardless of regulatory outcomes.
“In an ideal world, Congress would be driving the conversation forward on artificial intelligence, and their failure to lead on AI and other critical technology policy issues—like data privacy and oversight of social media—is forcing states to act,” said Colorado Attorney General Phil Weiser.
The Intersection of AI, Digital Health, and the TCPA: What You Need to Know
Artificial intelligence (AI) is widely transforming digital health, including by automating certain patient communications. However, as health care companies consider deploying AI-driven chatbots, texting platforms, and virtual assistants, they should not forget about the highly consequential, and highly litigated, Telephone Consumer Protection Act (TCPA).
Many digital health companies mistakenly assume that they only need to consider the Health Insurance Portability and Accountability Act (HIPAA) when considering whether to text or otherwise communicate with patients via various means. HIPAA governs the privacy and security of protected health information. The TCPA, by contrast, protects consumer rights around how and why patients are contacted.
The TCPA has become a key regulatory consideration for any digital health company that uses technology to communicate with patients by telephone or text message. As AI enables more scalable and automated outreach, understanding the TCPA’s boundaries is key to ensuring regulatory compliance and avoiding costly litigation.
Why the TCPA Matters in an AI-Enabled Health Environment
The TCPA restricts certain calls and texts made using an “automatic telephone dialing system” (ATDS), as well as prerecorded or artificial voice messages, without prior express consent. When such communications are made for marketing purposes, prior express written consent may be required. Even health care companies that use AI-powered systems to send appointment reminders, refill prompts, or wellness check-ins by telephone or text — as opposed to marketing, user engagement, or upselling services — may fall within the TCPA’s scope, especially if those communications are automated. Note that although the TCPA includes exemptions for certain health care messages, there are numerous parameters for meeting this exception and we urge caution in relying on it.
Even though the Supreme Court’s 2021 decision in Facebook v. Duguid narrowed the definition of an ATDS, TCPA compliance remains a moving target. Further, some states have their own version of the TCPA that may define ATDS or similar technology in a different way. This creates real legal risk even for digital health companies with no robocall or telemarketing intent.
AI Chatbots and Virtual Assistants: Are They “Artificial Voices”?
One of the most pressing legal questions, and a focus of plaintiffs’ attorneys, is whether AI-powered voicebots or chatbots qualify as “artificial or prerecorded voice” communications under the TCPA. Although the Federal Communications Commission’s (FCC) 2024 ruling clarified that AI-generated voices fall into this definition, reaffirming that these types of communications are subject to the TCPA’s consent requirements, the legal landscape remains unsettled.
Courts continue to wrestle with how this interpretation applies to emerging technologies like chatbots, especially text-based systems that do not emit sound but still automate patient communication. Some plaintiffs argue that such AI technology, even if it responds dynamically to user input, meets the statutory definition of “artificial voice” because it lacks a live human on the line. If courts agree, this could impose significant restrictions on AI-driven patient engagement tools unless proper consent is obtained.
The FCC’s authority, although influential, does not fully preempt judicial interpretation, and differing court decisions may shape how the TCPA applies to various forms of AI-powered communication. As a result, companies must stay alert to both regulatory guidance and case law developments.
What Digital Health Companies Should Do Now
Below are four practical steps to stay on the right side of TCPA compliance in the AI era:
1. Conduct a TCPA Risk Assessment
Review all patient outreach channels (SMS, voice, chat, etc.) and determine which systems are AI-driven or automated. Flag any that fall within the TCPA’s scope. Consider any differing requirements under state versions of the TCPA applicable to your business.
2. Audit Your Consent Flows
Ensure that your consent language clearly distinguishes between HIPAA and TCPA compliance. For marketing communications, confirm you have prior express written consent. Consider “marketing” to be broadly defined.
3. Consent is King
When in doubt, obtain prior express written consent for communications in your user flow, and keep a record of what the consumer agreed to and when (an illustrative consent record appears after these steps).
4. Monitor Litigation Trends
Stay current on case law developments regarding AI, chatbots, and “artificial voice” interpretations. Legal interpretations are evolving quickly.
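As one concrete illustration of what tracking consent might look like, the sketch below defines a simple consent record and a rough permissibility check. Everything here is an assumption for illustration: the field names, the ConsentType categories, and the permits logic are not drawn from the TCPA, FCC rules, or any vendor’s product, and none of it substitutes for legal review of your actual consent language.

```python
# Illustrative consent record for telephone/text outreach. All field names and
# logic are hypothetical; they sketch the kind of evidence (who consented, to
# what, when, and under which disclosure) a consent-flow audit looks for.

from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class ConsentType(Enum):
    PRIOR_EXPRESS = "prior_express"                  # e.g., informational messages
    PRIOR_EXPRESS_WRITTEN = "prior_express_written"  # generally needed for marketing


@dataclass(frozen=True)
class ConsentRecord:
    phone_number: str              # number the consumer authorized
    channel: str                   # "sms", "voice", "ai_voicebot", etc.
    purpose: str                   # "appointment_reminder", "marketing", ...
    consent_type: ConsentType
    disclosure_version: str        # version of the consent language shown
    obtained_at: datetime          # when the consumer agreed
    revoked_at: Optional[datetime] = None

    def permits(self, purpose: str, now: datetime) -> bool:
        """Rough check: consent covers this purpose and has not been revoked."""
        if self.revoked_at is not None and self.revoked_at <= now:
            return False
        if purpose == "marketing":
            return self.consent_type is ConsentType.PRIOR_EXPRESS_WRITTEN
        return self.purpose == purpose


if __name__ == "__main__":
    record = ConsentRecord(
        phone_number="+15555550100",
        channel="sms",
        purpose="appointment_reminder",
        consent_type=ConsentType.PRIOR_EXPRESS,
        disclosure_version="2025-05-v1",
        obtained_at=datetime(2025, 5, 1, tzinfo=timezone.utc),
    )
    now = datetime.now(timezone.utc)
    print(record.permits("appointment_reminder", now))  # True
    print(record.permits("marketing", now))             # False: needs written consent
```

A real system would typically also retain the underlying signature or click-through evidence behind any written consent and propagate revocations across every outreach channel.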
Final Thoughts
AI is revolutionizing patient communication, but it can also amplify regulatory exposure. The TCPA remains a favorite tool for class-action lawsuits, and digital health companies should treat it with the same seriousness as they treat their HIPAA compliance.
As AI capabilities grow, the gap between innovation and regulation is widening. Thoughtful contracting, consent design, and legal review can help digital health companies lead with compliance, while still delivering smarter, scalable care.
California Regulator Releases Updated Draft Regulations, Scales Back Proposed AI Privacy Rules
California appears to be changing its approach to regulating artificial intelligence, likely in reaction to challenges recently seen in other states. Namely, the California Privacy Protection Agency recently released an update to its draft regulations that changes how the Agency plans to regulate Automated Decisionmaking Technology, or ADMT. This comes after the Agency’s original proposal faced intense opposition from industry groups, state lawmakers, and Governor Newsom.
The public has until June 2, 2025 to submit comments. As now proposed, some of the key changes include:
Narrowed scope of ADMT rules: The definition of automatic decisionmaking technologies would now only cover technologies that “replace or substantially replace human decision-making.” Technologies that just help or support human decisions would not be covered. The update also makes clear that the ADMT rules would only apply to decisions that result in a “significant decision” about a consumer—like those involving housing, employment, credit, or access to essential goods and services. Advertising to a consumer is specifically excluded from what counts as a “significant decision.”
Eased risk assessment burden: The new rules would lighten businesses’ risk assessment obligations. For example, profiling a consumer for behavioral advertising would no longer require a risk assessment. Similarly, using personal data to train ADMT would not trigger a risk assessment unless the business does so intentionally for certain specific purposes.
Cybersecurity audits: As revised, businesses would have more time to complete initial audits, with deadlines tiered by revenue. Some of the tougher requirements have also been relaxed; for example, businesses can rely on existing audits and report the results to executive management instead of the board of directors.
Putting it into Practice: While we await the final regulations, this is nonetheless a reminder for businesses to review their uses of automatic decisionmaking technologies.
The Continued Proliferation of AI Exclusions
Risk professionals and insurers alike continue to monitor the rapid evolution and deployment of artificial intelligence (AI). With increased understanding comes increased efforts to manage and limit exposure. Exclusions to coverage offer insurers potentially broad protection against evolving AI risk. Most recently, one insurer, Berkley, has introduced the first so-called “Absolute” AI exclusion in several specialty lines of liability coverage, signaling an even broader effort to compartmentalize AI risk.
The good news for policyholders is that AI exclusions have led to introduction of new AI-specific coverages to fill potential gaps. As discussed in a recent blog post, start-up insurer Armilla, in partnership with Lloyd’s, introduced an affirmative AI insurance product that offers dedicated protections for certain AI exposures. Other insurers, like Munich Re, have likewise introduced focused AI insurance products. Dedicated AI coverages may soon become the norm, especially if other insurers follow Berkley’s lead to broadly exclude AI risk from existing or “legacy” lines of coverage.
Berkley’s “Absolute” AI Exclusion
Berkley’s new exclusion, intended for use in the company’s D&O, E&O, and Fiduciary Liability insurance products, purports to broadly exclude coverage for “any actual or alleged use, deployment, or development of Artificial Intelligence.” The full endorsement states:
The Insurer shall not be liable to make payment under this Coverage Part for Loss on account of any Claim made against any Insured based upon, arising out of, or attributable to:
(1) any actual or alleged use, deployment, or development of Artificial Intelligence by any person or entity, including but not limited to:
(a) the generation, creation, or dissemination of any content or communications using Artificial Intelligence;
(b) any Insured’s actual or alleged failure to identify or detect content or communications created through a third party’s use of Artificial Intelligence;
(c) any Insured’s inadequate or deficient policies, practices, procedures, or training relating to Artificial Intelligence or failure to develop or implement any such policies, practices, procedures, or training;
(d) any Insured’s actual or alleged breach of any duty or legal obligation with respect to the creation, use, development, deployment, detection, identification, or containment of Artificial Intelligence;
(e) any product or service sold, distributed, performed, or utilized by an Insured incorporating Artificial Intelligence; or
(f) any alleged representations, warranties, promises, or agreements actually or allegedly made by a chatbot or virtual customer service agent;
(2) any Insured’s actual or alleged statements, disclosures, or representations concerning or relating to Artificial Intelligence, including but not limited to:
(a) the use, deployment, development, or integration of Artificial Intelligence in the Company’s business or operations;
(b) any assessment or evaluation of threats, risks, or vulnerabilities to the Company’s business or operations arising from Artificial Intelligence, whether from customers, suppliers, competitors, regulators, or any other source; or
(c) the Company’s current or anticipated business plans, capabilities, or opportunities involving Artificial Intelligence;
(3) any actual or alleged violation of any federal, state, provincial, local, foreign, or international law, statute, regulations, or rule regulating the use or development of Artificial Intelligence or disclosures relating to Artificial Intelligence; or
(4) any demand, request, or order by any person or entity or any statutory or regulatory requirement that the Company investigate, study, assess, monitor, address, contain, or respond to the risks, effects, or impacts of Artificial Intelligence.
The potential breadth of this exclusion cannot be overstated. And the exclusion’s title suggests that Berkley intends to apply it to virtually any claim with a connection to AI.
Given the current landscape of AI-related liabilities giving rise to insurance claims, the exclusion’s likely first deployment may be in the context of shareholder litigation alleging AI-related misrepresentations. Those securities claims, which have come to be known as “AI Washing” lawsuits, may be targeted as involving “actual or alleged statements, disclosures, or representations concerning or relating to Artificial Intelligence.” While the targeted wrongful acts (“statements, disclosures, or representations”) seem straightforward, one overarching question remains: what exactly constitutes “Artificial Intelligence”?
What is “Artificial Intelligence”: A Definitional Dilemma
The exclusion applies to claims concerning or relating to “Artificial Intelligence.” But what exactly does that include (or not include)? On its face, one might argue that the exclusion does indeed afford “absolute” protection against AI-related risk. In practice, an insurer may seek to reduce the analysis to a single question: does the claim reference AI? If so, no coverage.
But as with most insurance language, the devil is in the details, and the exclusion’s purported reach is far less certain. Much of the exclusion’s effect turns on its definition of “Artificial Intelligence.” Read closely, that definition is subject to myriad interpretations and perhaps comprehensible only to the most sophisticated AI engineers. The supplied definition states:
“Artificial Intelligence” means any machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments, including, without limitation, any system that can emulate the structure and characteristics of input data in order to generate derived synthetic content, including images, videos, audio, text, and other digital content.
Insurance policies are sold by brokers and bought by risk managers; claims are handled by claims professionals, and coverage disputes are typically decided by judges. Nowhere in that list do we find AI engineers, computer programmers, mathematicians, or other technical professionals capable of understanding what actually occurs within the “black box” of a particular AI system. That lack of front-line understanding will invariably lead to differing interpretations and coverage disputes.
Takeaways
Berkley’s introduction of a so-called “Absolute” AI exclusion marks an important development in how the insurance industry is navigating the complexities associated with AI. However, the purported breadth of the exclusion highlights the imprecision that stands to frustrate the insurance industry’s ability to manage AI-related risks.
For now, policyholders must remain vigilant about the addition of any AI-related provisions into their existing, new, or renewing policies. Policyholders likewise should be on the lookout for questions in insurance applications concerning how the company may be using AI. Answers to these questions, like all other application questions, must be carefully considered, especially given the rapid evolution and deployment of AI, which stands to make even the most diligent responses obsolete before the next policy renewal.
Federal Take It Down Act Targeting Revenge-Porn Becomes Law
On May 19, 2025, President Donald Trump signed into law the Take It Down Act (S.146). The federal legislation criminalizes the non-consensual publication of intimate imagery, including AI-generated pornography depicting real individuals. It follows similar legislation targeting online abuse already enacted in approximately forty states.
What are the Take It Down Act’s Requirements?
The federal Take It Down Act creates civil and criminal penalties for knowingly publishing or threatening to share non-consensual intimate imagery and computer-generated intimate images that depict real, identifiable individuals. If the victim is an adult, violators face up to two years in prison. If a minor, up to three years.
Social media platforms, online forums, hosting services and other tech companies that facilitate user-generated content are required to remove covered content within forty-eight hours of request and implement reasonable measures to ensure that the unlawful content cannot be posted again.
Consent to create an image will not be a defense.
Exempt from prosecution are good faith disclosures and those made for lawful purposes, such as legal proceedings, reporting unlawful conduct, law enforcement investigations, and medical treatment.
What Online Platforms are Covered Under the Take It Down Act?
Covered Platforms include any website, online service, application, or mobile app that serves the public and either: (i) provides a forum for user-generated content (e.g., videos, images, messages, games, or audio), or (ii) in the ordinary course of business, regularly publishes, curates, hosts, or makes available non-consensual intimate visual depictions.
Covered Platforms do not include broadband Internet access providers, email services, or online services or websites consisting primarily of preselected content that is curated by the provider rather than user-generated, where any interactive features are merely incidental to, or directly related to, the preselected content.
What are the Legal Obligations for Covered Online Platforms?
The Take It Down Act requires covered platforms to ensure compliance by, among other things: (i) providing a clear and accessible complaint and removal process; (ii) providing a secure method of identity verification; and (iii) removing unlawful content, and copies thereof, within forty-eight hours of receipt of a verified complaint.
The new law also contains recordkeeping and reporting requirements.
While not expressly required to do so, platforms are well-advised to address content moderation and filtration policies, because reasonable efforts are required to identify and remove any known identical copies of non-consensual intimate imagery.
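By way of illustration only, the sketch below (in Python, with all function names hypothetical and not drawn from the Act or any vendor product) shows one rudimentary way a platform might flag byte-identical copies of previously removed content using cryptographic hashes. Real-world deployments typically rely on perceptual hashing or industry hash-sharing services to catch re-encoded or altered copies, and the Act does not prescribe any particular technology.

```python
import hashlib
from pathlib import Path

# Hypothetical in-memory registry of SHA-256 fingerprints for content already
# removed pursuant to verified takedown requests. A production system would
# use a database or a dedicated hash-matching service instead.
removed_content_hashes: set[str] = set()


def fingerprint(image_bytes: bytes) -> str:
    """Return the SHA-256 fingerprint of raw image bytes.

    Exact hashing only catches byte-identical copies; detecting re-encoded or
    resized uploads would require perceptual hashing or a third-party service.
    """
    return hashlib.sha256(image_bytes).hexdigest()


def register_takedown(image_path: str) -> None:
    """Record the fingerprint of content removed after a verified complaint."""
    removed_content_hashes.add(fingerprint(Path(image_path).read_bytes()))


def should_block_upload(image_bytes: bytes) -> bool:
    """Return True if an upload is a byte-identical copy of removed content."""
    return fingerprint(image_bytes) in removed_content_hashes
```

A platform would, in this simplified picture, call register_takedown when honoring a verified complaint and should_block_upload when new content is uploaded; this is an illustrative sketch of the concept, not guidance on what the Act technically requires.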
Website agreements, as well as reporting and removal processes, are among the legal, regulatory, and operational compliance areas that warrant consideration and attention.
Who is Empowered to Enforce the Take It Down Act?
The Federal Trade Commission is authorized to enforce the Take It Down Act’s notice-and-takedown requirements against technology platforms that fail to comply. Violations are treated as unfair or deceptive acts or practices.
Good faith, prompt compliance efforts may serve as a safe harbor and a mitigating factor for platforms in the context of regulatory enforcement. Internal processes that document those efforts, including documentation of all takedown actions, should be implemented so that a platform can avail itself of the safe harbor.
Removal and appeals processes must be implemented on or before May 19, 2026.
Takeaway: Covered online platforms, including but not limited to those that host images, videos, or other user-generated content, should consult with FTC and State Attorneys General defense and investigations counsel to discuss compliance with the Act’s strict takedown obligations, and should do so in advance of the effective date in order to minimize potential liability exposure.
AI Service Provider Faces Class Actions Over Catholic Health Data Breach
AI service provider Serviceaide Inc. faces two proposed class action lawsuits stemming from a data breach tied to Catholic Health System Inc., a nonprofit hospital network in Buffalo, New York. The breach reportedly exposed the personal information of over 480,000 individuals, including patients and employees.
Filed in the U.S. District Court for the Northern District of California, the lawsuits allege that Serviceaide acted negligently and failed to protect sensitive data stored in an Elasticsearch database that was allegedly left publicly accessible for months before the breach was disclosed.
Serviceaide, which provides AI-driven chatbots and IT support solutions, was contracted by Catholic Health and entrusted with managing protected health information and employment records. Plaintiffs allege that the company waited seven months after the incident to notify affected individuals. The compromised data included patient records and other personal information.
The lawsuits allege claims of negligence, breach of implied contract, unjust enrichment, invasion of privacy, and violations of California’s Unfair Competition Law.
Both plaintiffs seek to represent a nationwide class of individuals whose data was compromised and are seeking injunctive relief, damages, and attorneys’ fees.
These lawsuits highlight growing legal exposure for tech firms that handle protected health information, especially as more hospitals and healthcare systems outsource services to AI and cloud vendors. The healthcare sector remains one of the most targeted industries for cyber threats, and breaches involving third-party vendors are drawing increasing legal scrutiny.