Beyond Bias: California Sets a New Standard for Regulating AI in the Workplace

As employers around the globe are increasingly looking to leverage AI and AI-adjacent automation in their recruiting and personnel processes, California has stepped onto the scene. On June 27, 2025, the California Civil Rights Council approved several changes to the regulations issued under the California Fair Employment and Housing Act (FEHA)1, expressly targeting employment discrimination risks arising from AI and other automated decision systems (FEHA regulations). The new rules take effect Oct. 1, 2025, and have the potential to significantly impact hiring and personnel processes across the state.
This GT Alert provides a general overview of key requirements and practical impacts that California employers should consider before the regulations take effect later this year. 
Key Takeaways
The FEHA prohibits discrimination, harassment, and retaliation in hiring and personnel decisions based on protected categories such as race, national origin, religion, age, disability, sex, gender, and sexual orientation. In amending the FEHA regulations, the Civil Rights Council sought to “increase[] clarity on how existing antidiscrimination laws apply to the use of artificial intelligence in employment decisions.”
These changes fall into four general categories: (1) expanded definitions, (2) substantive provisions concerning pre-employment practices, (3) substantive provisions concerning discrimination, and (4) record-keeping requirements.  

1. Expanded Definitions

The updated FEHA regulations include new definitions and modify existing ones.
Most prominently, the changes add a broad definition of the term “automated-decision system” (ADS), along with several sub-definitions.2 An ADS is defined as a computational process making decisions or facilitating human decision-making regarding employment benefits.3 The FEHA regulations provide several examples of the types of tasks an ADS can perform, including tools that4:

Screen resumes or applications; 
Rank or score candidates; 
Analyze facial expressions, voice, or behavior in interviews; 
Make predictive assessments about an applicant or employee; 
Measure skills, reaction-time, and/or other abilities or characteristics; 
Target job ads or other recruiting materials to specific groups; and/or 
Use games, puzzles, or assessments to evaluate traits like personality or aptitude.

The FEHA regulations also adopt another new definition—“agent”—which includes “any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity,” such as recruiting, screening, hiring, promoting or making other employment-related decisions, including when activities and decisions are conducted in whole or in part through the use of an ADS.5 Any agent of an employer is also an “employer” for purposes of FEHA.6
The term “proxy” was also added and refers to a characteristic or category closely correlated with a FEHA-protected characteristic. The use of this term is intended to clarify how automated systems may indirectly discriminate by relying on variables that serve as stand-ins or substitutes for protected characteristics (e.g., using zip codes as a proxy for race or national origin).  

2. Pre-Employment and Hiring Practices

The FEHA regulations already included various provisions relating to pre-employment practices, including non-discrimination in recruitment, pre-employment inquiries, applications, and interview or other screening methods. All pre-employment practices now expressly cover those conducted using an ADS.7 As an example, employers must ensure that an ADS does not disadvantage individuals with disabilities or religious needs. This may require, for example:

Modifying assessments where an ADS assesses skills or abilities, and an applicant has a disability; 
Providing alternative formats; and 
Ensuring accessibility in interviews and testing.

In addition, the FEHA regulations reinforce California’s Fair Chance Act by making it clear that employers cannot use an ADS to inquire into or obtain information about an applicant’s criminal history prior to extending a conditional offer of employment.8 

3. Substantive Provisions on Discrimination

Employers may not use an ADS or selection criteria that result in discrimination based on any protected characteristic (e.g., race, gender, age, disability, religion, national origin).9 This includes indirect discrimination through proxies (e.g., ZIP code, speech patterns, facial analysis). Notably, employers are also now directly responsible for the actions of their agents, including, for example, recruiters, staffing firms, or other third-party vendors when those vendors use AI tools on the employer’s behalf—even if the vendors are independent from the organization. This means that liability for discrimination can attach wherever an agent misuses (or misapplies) AI in hiring, promotions, compensation, or other personnel decisions on behalf of the employer.
The FEHA regulations make clear that employer efforts to ensure non-discrimination in the context of AI-assisted hiring and personnel processes may serve as a defense to any legal claims if the employer takes certain steps. Specifically, “anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the result of such testing or other effort, and the response to the results,” are delineated as relevant to any claim or defense concerning discrimination from the use of an ADS.10 Of course, the corollary to this potential defense is that the lack of such proactive efforts may work against an employer that is challenged for the use, either directly or indirectly, of an ADS. 

4. Recordkeeping and Preservation Requirements

In line with its other changes, employer recordkeeping requirements now include any ADS data or other personnel data generated using AI, such as screening algorithms, input data, scoring outputs, and related documentation.11 
All ADS and related data must be preserved for at least four years—doubling the previous two-year requirement.
Practical Considerations for California Employers
While the true impact of the new FEHA regulations remains to be seen, employers would be well-advised to catalog where and how any ADS and other AI tools are used in hiring, screening, and employment decisions to help ensure that those practices conform with the new FEHA regulations. Because employers may be at greater risk if they fail to take proactive steps to “avoid unlawful discrimination” through well-documented anti-bias testing or similar proactive efforts, employers should consider implementing and documenting: (1) regular audits of AI and algorithmic processes for disparate impact (including reviewing third-party vendor tools for compliance); (2) responsive updates based on test results; and (3) detailed records of anti-bias efforts.
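To make the first of these steps more concrete, the following is a minimal, hypothetical sketch of a disparate-impact check on ADS screening outcomes. It uses the familiar four-fifths (80%) rule of thumb drawn from federal adverse-impact guidance; that threshold, the group labels, and the data are illustrative assumptions only and are not a methodology prescribed by the FEHA regulations.

```python
# Illustrative only: a minimal disparate-impact check on ADS screening outcomes.
# The four-fifths (80%) rule used here is a common rule of thumb from federal
# adverse-impact guidance, not a requirement of the new FEHA regulations.
from collections import Counter

def selection_rates(records):
    """records: iterable of (group, selected) tuples, e.g. ("40_plus", True)."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_flags(rates):
    """Flag groups whose selection rate is below 80% of the highest group's rate."""
    benchmark = max(rates.values())
    return {g: r / benchmark < 0.8 for g, r in rates.items()}

# Hypothetical screening results produced by an ADS
records = [("under_40", True)] * 45 + [("under_40", False)] * 55 \
        + [("40_plus", True)] * 25 + [("40_plus", False)] * 75
rates = selection_rates(records)
print(rates)                    # {'under_40': 0.45, '40_plus': 0.25}
print(four_fifths_flags(rates)) # {'under_40': False, '40_plus': True}
```

A real audit program would, of course, need to account for intersectional groups, sample sizes, statistical significance, and the specific selection procedure at issue, with the methodology and results reviewed by counsel.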
Employers should also consider requiring transparency and disclosure from vendors regarding their AI use, contractually delineating responsibility for such practices and compliance with the new regulations (including anti-bias testing), ensuring that the vendors are complying with record-keeping requirements, and, if possible, having the vendor indemnify the employer for the vendor’s use of an ADS or AI. 
Companies should educate their own employees responsible for personnel management and related decisions on the new FEHA regulations and risks of ADS misuse, emphasize the importance of individualized assessments and documentation, and require human oversight and decision-making with respect to all personnel actions.
Finally, employers should also ensure that their document retention and preservation policies comply with the expanded recordkeeping requirements, both in terms of the data preserved and the length of time it is retained.

1 The full final text of the Civil Rights Council’s changes to the Code of Regulations is available on its website. Citations to the Code of Regulations below are to the approved regulations effective Oct. 1, 2025.
2 Cal. Code Regs., tit. 2, § 11008.1.
3 Cal. Code Regs., tit. 2, § 11008.1(a).
4 Id.
5 Cal. Code Regs., tit. 2, § 11008(a).
6 Id.
7 See, e.g., Cal. Code Regs., tit. 2, § 11016(a)(2); see also Cal. Code Regs., tit. 2, § 11016(b)(1), (c)(5), (d)(1).
8 Cal. Code Regs., tit. 2, § 11017.1(a)(1).
9 Cal. Code Regs., tit. 2, §§ 11028(b), (m) [national origin and ancestry]; 11032(b)(4), 11033(f), 11038(b) [sex, pregnancy, and childbirth]; 11056(a) [marital status]; 11063(b) [religious discrimination]; 11070(a), 11072(b) [disability discrimination]; 11076(a), 11079(b) [age discrimination]. 
10 See, e.g., Cal. Code Regs., tit. 2, § 11009(f).
11 Cal. Code Regs., tit. 2, § 11013(c).

GT Newsletter | Competition Currents | July 2025

United States
A. Federal Trade Commission (FTC)
1. FTC agrees to horizontal Omnicom/IPG merger with behavioral ‘free speech’ commitment.
The FTC accepted a proposed consent order on June 23, 2025, that clears the way for Omnicom Group’s $13.5 billion acquisition of The Interpublic Group of Companies (IPG). The merger would combine the third and fourth largest competitors to create a new market leader in media buying services for advertisers. The FTC characterized the merger as moving from six competitors to five but entered into the consent order only to prohibit the combined firm from engaging in collusion or coordination to direct advertising to certain media publishers based on the publisher’s political or ideological viewpoints. It did not require any divestiture. Chairman Ferguson’s statement on the deal cites a Congressional report discussing such potential collusion through an advertising trade group as reason for being “particularly vigilant” in the present transaction.
2. FTC clears Alimentation Couche-Tard’s acquisition of Giant Eagle GetGo stores with divestitures.
On June 26, 2025, the FTC announced a consent decree allowing Circle-K parent Alimentation Couche-Tard Inc. (ACT) to acquire over 200 retail fuel outlets from Giant Eagle with the divestiture of 34 Circle K locations and one Giant Eagle GetGo branded property to an FTC-approved buyer—an operator that would be a new entrant into the relevant local markets. According to the FTC, the merger would result in five or fewer market participants in each local market where a divestiture was required. Among other things, the consent decree requires ACT to provide prior notice before acquiring certain retail fuel stations identified in a confidential list; that notice must include information on all retail fuel outlets within five driving miles, which outlets ACT monitors for pricing in the area, and ACT’s pricing strategy for those monitored stations.
B. Department of Justice (DOJ) Civil Antitrust Division
1. DOJ settles HPE-Juniper Networks merger challenge with divestiture and AI source code licensing.
On June 28, 2025, the Justice Department reached a settlement allowing Hewlett Packard Enterprise (HPE)’s $14 billion acquisition of Juniper Networks to proceed, after initially suing to block the transaction in January. The settlement resolves concerns regarding enterprise-grade WLAN solutions by requiring HPE to divest its “Instant On” campus and branch WLAN business to a DOJ-approved buyer within 180 days. Additionally, the combined firm must conduct an auction and enter into up to two worldwide, perpetual, non-exclusive licenses of the source code for Juniper’s AI Ops for Mist, which is used in the Juniper WLAN products for automated network optimization. According to the DOJ, while the parties’ combined share would be less than 30%, the merged firm would be a stronger second-place competitor that, together with the leading competitor, would hold over 70% of the market, meaning the market is highly concentrated and, in the Department’s view, would still be harmed.
2. DOJ requires divestiture in Safran-Raytheon deal to preserve aerospace competition.
The DOJ announced on June 17, 2025, that Safran S.A. must divest its North American actuation business to resolve antitrust concerns arising from its proposed $1.8 billion acquisition of Collins Aerospace’s actuation and flight control business from Raytheon Corp. The settlement with DOJ requires Safran to divest assets previously acquired under a 2018 consent decree to another aerospace supplier. DOJ alleges the acquisition would reduce competition in the market for trimmable horizontal stabilizer actuators, which are critical aircraft components affecting safety and performance. The proposed remedy is intended to address concerns that the transaction would recombine divested assets and degrade price, quality, and innovation.
C. U.S. Litigation
1. In re: Tecfidera Antitrust Litig., 2025 WL 1755725, Case No. 1:24-cv-07387 (N.D. Ill. June 25, 2025).
On June 26, 2025, U.S. District Judge April Perry dismissed a class action lawsuit filed against biotechnology company Biogen, Inc. alleging that Biogen violated antitrust laws by paying pharmacy benefit managers “to not promote or advantage” generic drug options over Biogen’s brand drug products. According to the plaintiffs, these payments foreclosed competition and disrupted state drug substitution laws because, as a result of Biogen’s payments, the generic drug option was placed in a disadvantageous price tier on the PBMs’ formularies compared to the name-brand product. In analyzing the complaint, the court focused on whether the allegations plausibly established that the payments had actually disrupted drug substitution laws, as well as whether the allegations plausibly pleaded that the disruption was “substantial” and actually prevented generic manufacturers from competing effectively with Biogen.
Ultimately, the plaintiffs’ allegations were insufficient because there were no allegations about how the drug substitution laws at issue actually work and no allegations explaining how plans select formularies from among any given PBM’s offerings. Significantly, “speculation will not do to bridge the gap from possible to plausible foreclosure.” The court granted Biogen’s motion to dismiss without prejudice, granting the plaintiffs leave to amend, as is Seventh Circuit practice on a first dismissal.
2. Coleman v. RealPage Inc. et al., Case No. 2:25-cv-00093 (E.D. Ky. July 3, 2025).
Kentucky filed a lawsuit against property management software company RealPage Inc. and various landlords on July 3, 2025, accusing each of engaging in a rent price-fixing scheme that “distorts” competition. The claims—which include unjust enrichment, violations of the Sherman Act, and violations of the Kentucky Consumer Protection Act—arise out of the defendants’ use of RealPage’s Revenue Management Solutions software. According to Kentucky, that software allows landlords to “sidestep vigorous competition to win renters’ business” because landlords share their private rental data with RealPage using the software, and then RealPage combines the data into an algorithm to recommend certain rent prices to the landlords, while continuing to monitor the landlords’ compliance with the recommendations. This follows similar lawsuits the federal government, the District of Columbia, New Jersey, and Washington have filed against RealPage.
Netherlands
Dutch Court Decision
Dutch Supreme Court rules on follow-on claims from a single, continuous breach of EU competition law and referral of preliminary questions to the CJEU.
The Dutch Supreme Court issued a landmark ruling on June 20, 2025, addressing the law applicable to follow-on damages claims arising from a single and continuous infringement of the EU cartel prohibition under Article 101(1) of the Treaty on the Functioning of the European Union. The case pertains to the international cartels of truck manufacturers operating from 1997 to 2011 and airlines operating from 1999 to 2006 that coordinated prices, including fuel and security surcharges. The European Commission previously issued substantial fines to the entities involved, and various claims vehicles and direct purchasers now seek compensation for damages incurred.
The Dutch Supreme Court has referred preliminary questions to the Court of Justice of the European Union (CJEU) on whether follow-on damages claims should be treated as a single continuous tort, allowing claimants to choose the applicable law under Article 6(3)(b) of Regulation Rome II, or if the applicable law should be determined separately for each individual transaction under Article 4(1) of the former Dutch Conflict of Laws Act. Additionally, the Supreme Court seeks guidance from the CJEU on whether Member States may alternatively classify each transaction or harm event separately, and how the temporal scope and application of Rome II should be interpreted in these complex cross-border cartel damage claims.
These questions aim to clarify critical issues, including whether EU law, particularly the principle of effectiveness, requires that a “single continuous infringement” be treated as one unified wrongful act giving rise to one consolidated damages claim per injured party.
Poland
A. Bus Operators Accused of Bid-Rigging in a Public Tender
On June 16, 2025, the President of the Office of Competition and Consumer Protection (UOKiK President) brought antitrust charges against 11 bus operators allegedly conspiring to rig the largest public-transport tender ever held in the Upper Silesia–Zagłębie metropolitan area in Poland. The value of the tender was PLN 1.3 billion (approx. EUR 305.5 million / USD 357.5 million) and it covered passenger services for 2022–2029.
According to evidence gathered during dawn raids, the operators may have formed consortia for the sole purpose of pre-allocating routes and ensuring that each party retained its existing service area, rather than competing on price and service quality. Internal correspondence and consortium agreements indicate that the defendants allegedly agreed in advance which consortium would win particular lots, evading competitive pressure and inflating costs for the contracting authority.
Such conduct, if confirmed, would constitute a typical bid-rigging cartel, exposing the companies that infringed the competition law to fines up to 10% of their annual turnover.
B. UOKiK Brings Charges Over Suspected Collusion in Agricultural Machinery Distribution
On June 23, 2025, the UOKiK President pressed charges against AGCO Polska (the exclusive Polish distributor of Valtra, Fendt, and Massey Ferguson agricultural machinery); 10 authorized dealers; and five managers for suspected market-sharing and price-fixing practices. According to UOKiK, dawn-raid evidence suggests that AGCO and the dealers divided Poland into exclusive territories and agreed not to serve customers from the others’ areas. Farmers seeking a better deal from an out-of-area dealer were allegedly redirected to their “local” seller or quoted artificially high prices. According to the UOKiK President, dealers may have shared sensitive pricing information and warned AGCO whenever a competitor considered breaking the pact and selling vehicles outside its territory.
The scheme allegedly denied farmers meaningful choices and competitive pricing for tractors, combines, and spare parts. The companies involved may be fined up to 10% of their annual turnover, while the five individuals charged—including three current or former AGCO managers—may face personal penalties up to PLN 2 million each.
Italy
Italian Competition Authority (ICA)
1. Unfair commercial practice: ICA launches investigation against DeepSeek.
On June 16, 2025, ICA initiated a formal investigation against Hangzhou DeepSeek Artificial Intelligence Co., Ltd. and Beijing DeepSeek Artificial Intelligence Co., Ltd., (collectively, DeepSeek). ICA alleged that DeepSeek failed to adequately inform users about the risk of so-called “hallucinations”— instances where their AI models generate inaccurate or misleading information. According to the ICA, the lack of a clear and immediate disclaimer may constitute a misleading commercial practice under Articles 20, 21, and 22 of the Italian Consumer Code.
ICA contended that the only warning provided (“AI-generated, for reference only”) is overly generic, insufficiently visible, and exclusively in English, even when users interact in Italian. Furthermore, relevant warnings are buried in the terms of use, which are not accessible from the main pages and are likewise available only in English. ICA argued this omission might mislead consumers into placing unwarranted trust in the outputs, potentially influencing critical decisions in areas such as health, finance, or law, and undermining consumers’ ability to make informed commercial decisions.
Given these concerns, ICA has launched proceedings to verify the alleged violations and requested DeepSeek to provide detailed information on their services, user base in Italy, and corporate structure within 30 days. ICA has also reminded the companies of the legal consequences of failing to respond, including possible administrative fines. The procedure is set to conclude within 270 days and a final decision will follow based on the information received and further assessments.
2. Unfair commercial practice: ICA imposes fine of almost EUR 3 million on Virgin Active Italy.
On June 18, 2025, ICA closed an investigation into Virgin Active Italia—the national branch of the international personal fitness and exercise company—imposing a fine of EUR 3 million. The investigation started in December 2024, following numerous reports from consumers who alleged that Virgin Active provided consumers with inadequate, unclear, and insufficient information regarding the terms and conditions of subscription, automatic renewal, cancellation, and early termination of their agreements. Furthermore, the company allegedly failed to provide timely notification regarding the automatic renewal of subscriptions, the deadline for consumers to exercise formal cancellation rights, and did not offer adequate information concerning price increases implemented throughout 2024. Therefore, ICA considered the conduct an unfair commercial practice in violation of Articles 20, 21, 22, 24, 25, 26 (letter f), and 65-bis of the Italian Consumer Code.
3. Unfair commercial practice: influencer marketing.
On June 11, 2025, ICA closed investigations against six influencers (Luca Marani, Alessandro Berton, Hamza Mourai, Davide Caiazzo, Luca De Stefani, and Michele Leka). For the first four individuals, the proceedings were closed with commitments, while De Stefani and Leka were fined a total of EUR 65,000.
ICA initiated the investigations in July 2024 when it found that these influencers posted photos and/or videos on social media platforms and websites offering paid advice on “easy and secure high earnings” based on the “winning model” they themselves embodied, without indicating any form of advertisement to inform consumers of the promotional nature of the content. Furthermore, relevant information for purchasing decisions, such as the cost of goods and/or services offered, was not sufficiently highlighted.
In particular, the violations ICA sanctioned concerned: (a) promoting online earnings through exaggerated and unsubstantiated claims, including endorsements from brands and media without proper advertising disclaimers or essential consumer information; (b) artificially inflating online credibility by using fake Instagram followers and publishing only unverifiable positive reviews and testimonials; (c) promoting misleading financial advice on TikTok, suggesting that significant economic results could be easily achieved. Commitments included removing misleading expressions related to easy or risk-free earnings from their online platforms, adding clear advertising disclaimers, eliminating non-authentic followers, and monitoring their online activities to enhance compliance with consumer protection regulations.
European Union
A. European Commission
The European Commission opens in-depth investigation into Mars’ proposed acquisition of Kellanova.
The European Commission has opened an in-depth investigation (Phase II) into Mars’ proposed acquisition of Kellanova. Mars is an international supplier of popular snack-food brands (including chocolates and chewing gums), while Kellanova produces various snack products such as Pringles chips and cereals under the Kellogg brand. The European Commission expressed preliminary concerns that the merger would significantly reduce competition in the EEA and increase Mars’ bargaining power with retailers. According to the European Commission’s preliminary findings, the transaction would merge multiple “must-have” snack and breakfast brands under Mars’ already extensive portfolio, giving retailers little choice but to accept less favorable commercial terms. The European Commission noted that supermarkets may be particularly vulnerable, as consumers typically prefer one-stop shopping, potentially limiting retailers’ flexibility to reject Mars’ terms without losing essential products.
The European Commission has until Oct. 31, 2025, to issue a final decision.
B. CJEU Decision
The CJEU confirms the Commission’s approval of RWE’s acquisition of certain E.ON assets.
The CJEU has confirmed the European Commission’s approval of RWE’s acquisition of certain electricity generation assets from E.ON, upholding the General Court’s previous ruling. This decision follows a complex asset swap deal the two German energy companies announced in March 2018, involving three separate concentration operations.
The European Commission initially approved the acquisition, concluding there were no manifest errors in assessing its compatibility with EU competition law. However, several German municipal authorities challenged this approval, arguing that the deal significantly impacted competition in local energy markets. The General Court rejected these challenges in judgments issued on May 17, 2023.
On appeal, the CJEU dismissed most of these challenges, upholding the General Court’s findings that the transaction did not constitute a “single concentration” and confirming the absence of manifest errors by the European Commission. In parallel, the CJEU overturned procedural dismissals regarding some municipalities’ claims due to insufficient reasoning but ultimately dismissed their actions on substantive grounds, ruling they failed to demonstrate significant harm to their market positions.
Japan
JFTC Releases Report on Generative AI.
On June 6, 2025, the Japan Fair Trade Commission (JFTC) compiled the Report Regarding Generative AI, Version 1.0.
While acknowledging generative AI’s potential for increased productivity and innovation, the report noted risks such as intellectual property infringement and the spread of misinformation. This report aims to encourage a fair competitive environment and to understand the current state of the generative AI market.
Specifically, the report underscores the following:

Access Restrictions and Exclusion of Competitors: In the AI market, as is generally true in all markets, when an enterprise that has established a strong position in resources—such as computing hardware, data, and specialized human resources—restricts access to them through anticompetitive acts, new entrants and existing competitors face difficulty securing alternative suppliers, which raises the costs of business activities and undermines the incentive to enter the market or develop new products. When there is a risk that new entrants or existing competitors will be excluded (i.e., a market foreclosure effect), this may become an issue under the Antimonopoly Act (private monopolization, unfair trade practice(s) (General Designation 14 (Interference with a competitor’s transactions))).
Tying: If a generative AI model provider has a strong position in a specific digital service market, integrating generative AI models into that digital service and offering it to users as a new digital service may make it difficult for other generative AI model providers or new entrants seeking to offer similar generative AI models to secure customers, thereby raising the costs of business activities and discouraging new entrants and the development of new models. When there is a risk that existing competitors or new entrants will be excluded from the digital-service market (market foreclosure effect), this may become an issue under the Antimonopoly Act (private monopolization, unfair trade practice(s) (General Designation 10 (tie-in sales, etc.))).

Going forward, the JFTC plans to collaborate with relevant ministries and agencies to promote international exchanges of opinions.

1 Due to the terms of GT’s retention by certain of its clients, these summaries may not include developments relating to matters involving those clients.

Navigating FDA’s Proposed Guidance on AI and Non-Animal Models: Safeguarding Innovation in Drug Development

In April 2025, the U.S. Food and Drug Administration (FDA) released a landmark guidance titled “Roadmap to Reducing Animal Testing in Preclinical Safety Studies,” outlining its commitment to advancing New Approach Methodologies (NAMs) — including in silico models, organoids, and other non-animal alternatives. This guidance encourages sponsors of Investigational New Drug (IND) applications to adopt scientifically credible alternatives to animal studies, marking a shift toward more human-relevant, Artificial Intelligence (AI)-integrated platforms for regulatory submissions.
Earlier, in January 2025, the FDA released a companion piece of draft guidance titled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products.” We discussed this FDA guidance in detail here: AI Drug Development: FDA Releases Draft Guidance.
Briefly, this guidance focuses specifically on the use of AI across the drug development lifecycle. It introduces a risk-based framework based on the model’s context of use and outlines the information that must be disclosed about model architecture, data governance, life cycle maintenance, and potential clinical impact — particularly where the model influences patient safety or drug quality.
A growing number of real-world examples confirm that AI-powered NAMs are becoming reality and demonstrate successful integration of AI and life science. Notably, the UVA/Padova Type 1 Diabetes Simulator, a physiologically detailed in silico platform, has already been accepted by the FDA and used to support regulatory clearance of continuous glucose monitoring (CGM) devices. This is a tangible signal that virtual physiological systems can now form part of the evidentiary foundation for FDA approval.
FDA’s internal embrace of AI reinforces its regulatory messaging and signals long-term institutional commitment to digital tools. In particular, the FDA publicized its new uses of AI tools for internal operations, including a generative AI tool, “Elsa.” The FDA indicated that it would incorporate these tools widely throughout the agency by the end of June 2025.
Together, these FDA efforts mark a significant evolution: AI and other in silico platforms are now not only permissible but increasingly central to preclinical and clinical decision-making, as well as regulatory review. With that recognition comes a dual imperative for sponsors and developers: to demonstrate credibility to regulators and to safeguard innovation with a robust IP strategy. NAMs also significantly increase firms’ risk exposure with respect to both privacy and confidentiality concerns.
The sections that follow explore how companies and their legal teams can respond strategically — by leveraging emerging FDA frameworks, protecting innovations in AI modeling, and ensuring data governance practices are aligned with both regulatory and IP objectives.
What Are NAMs?
NAMs refer to non-animal strategies for evaluating drug safety and efficacy. These include in silico models, microphysiological systems, organoids, computational toxicology platforms, and AI-driven clinical simulations. NAMs offer benefits in speed, cost, and ethical soundness, while aligning more closely with human biology.
Despite rapid progress, FDA’s formal acceptance of AI-based NAMs remains limited. The UVA/Padova Type 1 Diabetes Simulator remains the only widely cited example of an in silico model that has been used successfully in FDA regulatory submissions — specifically in evaluating CGM devices. Its acceptance underscores the possibility of regulatory pathways for complex physiological simulators.
However, the ecosystem of NAMs is rapidly expanding. While not all have yet achieved full FDA qualification, several promising AI- and data-driven platforms are under consideration:

Organ-on-a-Chip and Organoid Models: Patient-derived organoids and organ-on-a-chip systems are being validated to simulate tissue-specific responses. For example, intestinal organoids have been shown to predict off-target toxicity for T-cell therapies.
AI-Based Computational Toxicology: AI models trained on large-scale toxicology databases are being developed to predict adverse outcomes and are supported by FDA through collaborative validation projects.
In Silico Clinical Trials (ISCTs): Computational patient models are being piloted to simulate clinical trial outcomes, particularly for device testing, with defined workflows for model validation and uncertainty analysis.
Physiologically Based Pharmacokinetic (PBPK) Models: These simulate drug distribution and metabolism and are under consideration as partial replacements for animal testing in pharmacokinetic profiling.
Synthetic Control Arms: AI-derived virtual patient cohorts are gaining traction as replacements for placebo groups in trials, helping reduce the number of real patient participants.
Wearable-Integrated AI: AI models analyzing real-time data from digital health technologies (e.g., wearables) are being reviewed for roles in patient monitoring, endpoint adjudication, and trial management.

NAMs and the FDA Risk Framework
Under the January 2025 guidance, the FDA uses a two-dimensional risk framework:
1. Model Influence Risk: How much the AI model’s output influences decisions.
2. Decision Consequence Risk: The potential impact of those decisions on patient safety or data integrity.
This determines the extent of documentation and disclosure required. High-risk AI models must include detailed submissions on training data, model performance, and governance — creating tension with trade secret protection and increasing the strategic need for patent coverage. The FDA’s evolving stance on AI-based NAMs signals that AI-enabled platforms will soon be standard components of regulatory filings. As such, developers must plan early for the IP and data governance issues discussed in the sections that follow.
Implications for IP Strategy
The FDA’s guidance indicates that when NAMs and AI models are employed in clinical or manufacturing decision-making, stakeholders may be required to provide disclosures about data sources, model training procedures, evaluation metrics, and maintenance protocols. As these transparency requirements expand, relying solely on trade secrets becomes less practical, and patent protection or hybrid IP strategies become increasingly important. Mapping the specific innovations associated with NAMs and AI models allows stakeholders to clearly identify patentable inventions. The following framework offers a practical approach for mapping key technological advancements implied by or associated with NAMs and AI models beyond the model itself.
Does the model generate or enable clinically actionable information in new ways?
Discovery of new clinical information can imply new method steps (e.g., specific administration routes), formulations, dosages, and treatments of different indications or patient populations. As discussed in our prior analysis of GLP-1 receptor agonists, while newly discovered mechanisms may not themselves be patentable, they often enable claims around new methods of treatment, dosage regimens, or formulations. See GLP-1 Receptor Agonists and Patent Strategy: Securing Patent Protection for New Use of Old Drugs. Accordingly, the use of a NAM could imply patentable claims directed to methods of treatment based on new patient populations, dosages, or formulations. Alternatively, the use of NAMs could imply patentable claims directed to workflows for predicting patient outcomes or monitoring treatment responses.
Does the model shift how treatment regimens or trial protocols are designed?
The AI model may, for example, change the controls and design needed for the clinical trial, such as inclusion and exclusion criteria, or novel ways of stratifying patients by molecular subtypes, genomic, or epigenomic signature. 
In addition, dynamic AI systems could change the intervals or timing of administering therapeutic agents.
Does the model impose new requirements on data input or integration?
The AI model may require novel multimodal or longitudinal integration by, for example, combining imaging, omics, and wearable data. The AI model may also incorporate epidemiological data to determine patient clusters, or combine complex biomarker signatures drawn from large-scale molecular, genomic, epigenomic, or multi-omics data obtained from patients. The use of such AI models may therefore imply patentable workflows directed to information flows that allow prediction of patient outcomes, stratification of patient populations, and detection or monitoring of disease development.
Does the AI model or its data structure change how upstream samples are collected or processed?
The AI model may alter patient sample workflows, potentially leading to patentable methods. Changes might include new biospecimen preservation protocols, modified sample quantity or type requirements, or added pre-processing steps. These workflow adjustments can support claims for sample preparation methods, automated systems integrated with AI, or compositions involving specially processed samples.
In sum, as AI models take on roles once reserved for clinical trials or animal studies, their influence on medical decision-making, data collection, and regulatory outcomes demands a more nuanced and forward-thinking IP strategy. The framework above offers a structured way to identify innovations tied not only to the model itself but to its impact on upstream workflows, treatment paradigms, and lifecycle management. For innovation leaders and IP teams, the above framework offers a systematic way to future-proof your AI models and ensure strategic protection.
This strategic lens also highlights the need for equally rigorous data governance approaches, which we explore in the next section.
Data Governance and Compartmentalization in the NAM Era
As in silico NAMs and AI-enabled platforms become central to FDA submissions, data governance emerges as a critical strategic pillar — both for regulatory compliance and IP protection. Sponsors must now plan not only for data quality and model performance, but also for how data disclosures intersect with competitive advantage.
FDA guidance emphasizes lifecycle transparency: data inputs, training datasets, test cohorts, model validation strategies, and even future updates may need to be disclosed and monitored. While these disclosures promote regulatory trust, they also pose risks to trade secrets and competitive differentiation — particularly where model performance depends heavily on proprietary datasets or data preparation pipelines. Many life science firms are familiar with robust data governance in the privacy context, but privacy considerations are distinct from managing confidentiality and disclosure when balancing regulatory compliance with IP protection. To address these tensions, developers should consider a tiered data governance strategy, emphasizing:
1. Modularization and Compartmentalization of Model Components

By isolating elements of model design (e.g., preprocessing pipeline, model architecture, deployment environment), firms can disclose only the portions relevant to a given regulatory context. For example, designing “Virtual Labs” of AI models working together could help modularize different functions and data sets, facilitating a tiered data governance system and limiting the necessary data disclosure. See, for example, our previous discussion of the emerging use of “Virtual Labs” formed by a group of AI models with distinct functions: The Virtual Lab of AI Agents: A New Frontier in Innovation.

2. Decoupling Proprietary Data from Submission Sets

Rationale: Training on large internal datasets while validating on publicly shared or FDA-approved sets may allow compliance without disclosing sensitive raw data.
Example: Model is trained on proprietary multi-omics data but validated against FDA-endorsed challenge datasets for regulatory review.

3. Governance-by-Design for Versioning and Traceability

Rationale: Lifecycle maintenance of AI models — including data drift, re-training, and re-deployment — must be documented and justified to FDA. A governance architecture that logs changes, justifies updates, and auto-generates audit trails is increasingly indispensable.
Example: Automated logs showing when model weights were updated due to new population-level data trends, with reproducibility guaranteed (a minimal illustrative sketch follows this list).

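As a rough illustration of the versioning-and-traceability point above, the sketch below shows one way to keep an append-only, tamper-evident log of model updates. The class, field names, and example identifiers are hypothetical assumptions; they do not reflect any FDA-mandated schema, and a production system would add storage, access controls, and review workflows.

```python
# Hypothetical sketch of "governance-by-design" version logging for an AI model.
# Field names and the hash-chaining approach are illustrative assumptions for
# building a tamper-evident audit trail, not a required regulatory format.
import hashlib
import json
from datetime import datetime, timezone

class ModelAuditLog:
    def __init__(self):
        self.entries = []

    def record_update(self, model_version, trigger, datasets, justification):
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else ""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,      # e.g., weights checkpoint tag
            "trigger": trigger,                  # e.g., "data drift detected"
            "datasets": datasets,                # training/validation set identifiers
            "justification": justification,      # human-readable rationale
            "prev_hash": prev_hash,
        }
        # Chain each entry to the previous one so retroactive edits are detectable.
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

log = ModelAuditLog()
log.record_update(
    model_version="glucose-sim-v2.3",    # hypothetical checkpoint name
    trigger="re-training on new population-level data",
    datasets=["internal-omics-2025Q2", "fda-challenge-set-v1"],  # placeholder IDs
    justification="Data drift observed in adolescent cohort; see change request CR-142.",
)
```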
These strategies not only align with the FDA’s evolving expectations, but also support future IP assertions — e.g., filing patents around data partitioning, model maintenance tools, and regulatory integration pipelines. Similarly, as the FDA’s own use of AI tools increases, firms should maintain an open dialogue on how these tools are being used and on what data, so that they can refine their data governance accordingly.
Conclusion: Aligning Innovation, Transparency, and Strategy
AI-based drug development and non-animal methods have been brought to the regulatory forefront by FDA’s release of the January 2025 AI guidance, April 2025 NAM roadmap, and further milestones in internal AI use. In silico models have been accepted for FDA regulatory submissions as exemplified by the UVA/Padova simulator. With these advances come both opportunities and obligations. As outlined above, developers should carefully review FDA’s guidance to determine strategies for building scalable, protectable, and clinically impactful AI platforms. Strategic thinking about IP and data governance should begin on day one. By thinking early and systematically about IP and data governance, stakeholders can position themselves at the frontline of the AI-powered transformation in drug development.

AI-Driven Employment Litigation Post-Trump AI EOs

In an era in which President Trump has revoked existing federal AI policies and directives and federal agencies have followed suit, several state legislatures and courts are weighing in to address potential AI-enabled bias in employment decisions, ranging from hiring and recruiting to separation and termination, including by regulating the use of automated decision systems (ADS) in the workplace. This post provides a brief overview of some recent AI employment-related proposed legislation and legal challenges.
Proposed State Legislation and Guidance
In January 2025, the New Jersey Division of Civil Rights published a Guidance on Algorithmic Discrimination and the New Jersey Law Against Discrimination, which applies to businesses generally, including employers. Meanwhile, New York introduced a bill during its last legislative cycle (the “New York AI Act,” S.B. 1169) that sought to impose additional requirements on employers to audit and evaluate their systems for algorithmic bias in their decision-making processes. We expect that bill to be reintroduced in some form during the next legislative session in January 2026. California introduced similar legislation (the “No Robo Bosses Act,” S.B. 7), which would limit the use of ADS in employment decisions and require human oversight of, and notice to employees regarding, the use of ADS. Both the New York and California bills, if passed, would allow a private right of action against technology companies as well as the employers utilizing ADS. This newly proposed legislation adds to existing state and local laws (e.g., Colorado, Illinois, New York City) addressing AI in the workplace.
What is “algorithmic discrimination”? 
A key underlying goal of the proposed legislation is to target “algorithmic discrimination” arising from the use of ADS. Automated decision-making tools are technological tools, systems, or processes used to automate all or part of the human decision-making process. These tools accomplish their aims by using algorithms, or sets of instructions that can be followed, typically by a computer, to achieve a desired outcome. The algorithms can analyze data, draw correlations, and make predictions and recommendations. In doing so, however, ADS may create classes of individuals who are either advantaged or disadvantaged based on their protected characteristics. Current and proposed laws aim to prohibit algorithmic discrimination no differently than discrimination resulting from human decision-making. Employers are not shielded from liability simply because a third party developed the tool or because the employer does not understand the tool’s inner workings.
The Workday, Inc. Case Study
One area to watch closely is the rise of algorithmic bias class actions against AI tools that disproportionately screen out applicants from protected groups. A key case pending in the U.S. District Court for the Northern District of California, Mobley v. Workday, Inc., was brought by an older job applicant alleging that Workday’s AI-driven applicant screening tools systematically disadvantaged him and other older job seekers, in violation of the Age Discrimination in Employment Act. (He also alleges race and disability discrimination under Title VII and the ADA.) Just last month, the court allowed the lawsuit to proceed as a nationwide collective action, concluding that the central issue of whether Workday’s AI system disproportionately affects job applicants over 40 could be addressed collectively. While the fight continues, third-party research suggests that underlying scoring algorithms without proper controls and training can penalize members of a single group or people with certain combinations of characteristics.
Takeaways for Employers
The Workday litigation may mark a pivotal moment in the evolving legal landscape surrounding the use of AI in employment decisions. Employers must proactively assess their use of and policies regarding algorithmic tools for potential bias to maximize the benefits of their use while eliminating potential exposure. In an ever-changing legal landscape and constant emergence of new technologies, businesses should stay informed to ensure compliance and prevent significant legal exposure and reputational harm.

The Art (and Legality) of Imitation: Navigating the Murky Waters of Fair Use in AI Training

As generative AI technology advances, the legal battles over the use of copyrighted materials for training these models are heating up. In the first wave of lawsuits, courts have diverged in their approach to fair use as a defense to claims of copyright infringement. Other legal theories of protection—including the right of publicity and unfair competition under state and federal law—remain largely untested.
Fair use is a legal doctrine that allows limited use of copyrighted material without requiring permission from the rights holder. 17 U.S.C. § 107. Courts consider four main factors to determine whether a use qualifies as fair use: (1) purpose and character of the use (commercial or noncommercial); (2) nature of the copyrighted work (factual or creative); (3) amount and substantiality of the portion used; and (4) effect of the use on the market.
In May 2025, the Copyright Office released the pre-publication version of the third part of its report on copyright and artificial intelligence, which provided ample support for both sides on whether training AI with copyrighted works constitutes copyright infringement or fair use.
So far in 2025, four cases have directly analyzed the applicability of the fair use defense to AI model training. For the most part, each case put its own spin on the issue, confirming that the debate over AI and intellectual property is just beginning.
Lehrman v. Lovo, Inc., No. 1:24-cv-03770 (S.D.N.Y. July 10, 2025)
A class action filed in May 2024 by two voice actors in the Southern District of New York was one of the first to raise the issue. The plaintiffs claim that Lovo, Inc., used AI-generated replicas of their voices without permission. Their 313-paragraph amended complaint brings 17 counts ranging from violations of New York statutes protecting the right of publicity to copyright infringement and various unfair competition and false advertising claims under the Lanham Act. Lovo moved to dismiss, arguing that the plaintiffs failed to state any actionable claim against it, including because the use of AI-generated voices is not copyright infringement and is not covered by relevant state laws. The court held that the claims for copyright infringement and violation of state laws were sufficiently stated, but dismissed the claims based on the Lanham Act. The court also permitted the actors to amend their claim that Lovo’s AI training infringed their copyrights. Most critically for the plaintiffs, the class action claims in Lovo remain alive.
Bartz v. Anthropic PBC, No. 24-cv-05417 (N.D. Cal. June 23, 2025)
Anthropic used millions of digital books to train its AI model, Claude. It scanned many printed books, but also downloaded others from pirate libraries. A group of authors sued Anthropic for copyright infringement, claiming that copying the books and using them to train Claude is unlawful. Anthropic has argued that its actions are fair use and asked for an early summary judgment ruling. The judge looked at three main activities separately and decided whether each was fair use:

Training: The judge held that training an AI model on lawfully obtained, copyrighted books constituted fair use, finding the use exceedingly transformative.
Library copies: The judge also agreed that buying and digitizing books is transformative because “every purchased print copy was copied in order to save storage space and to enable searchability as a digital copy.” But the judge denied summary judgment on “copies made from central library copies but not used for training.”
Pirated library copies: The judge denied summary judgment because while the use of these books for training was transformative, the creation and maintenance of a permanent, general-purpose digital library of pirated works was not protected by fair use.

Kadrey v. Meta Platforms, Inc., No. 23-cv-03417 (N.D. Cal. June 25, 2025)
A few days later, a different Northern District of California judge granted Meta a significant victory in a copyright lawsuit brought by 13 authors who alleged that their books were improperly used to train Meta’s Llama AI model. The judge ruled that the use of the plaintiffs’ works in training the AI model qualified as fair use under copyright law—particularly because the training was found to be highly transformative, serving purposes like summarization and content generation that differ fundamentally from the original works. The judge added that the plaintiffs had not provided sufficient evidence of harm. The judge, however, emphasized that the decision was a narrow one—focused on the plaintiffs’ inadequate arguments—and does not establish blanket legality for AI training practices. He cautioned that more compelling evidence in future cases could lead to different outcomes, and that plaintiffs presenting stronger evidence of infringement or economic harm could succeed.
Thomson Reuters Enter. Ctr. GMBH v. Ross Intel. Inc., 765 F. Supp. 3d 382 (D. Del. 2025) (on appeal before the 3rd Circuit)
Contrary to the preceding cases is Thomson Reuters Enter. Ctr. GMBH v. Ross Intel. Inc., where the District of Delaware held that fair use was an inapplicable defense to training AI models. Thomson Reuters alleged that Ross infringed its copyrights when Ross used Thomson Reuters’s Westlaw headnotes to train Ross’s new AI legal-research search engine. The court granted partial summary judgment for Thomson Reuters on its claims of direct infringement, while denying Ross’s defenses, including fair use. When evaluating Ross’s fair use defense, the court found that Ross’s use did not qualify as fair use in part because the use was “commercial” and not “transformative.” Ross is currently appealing the decision.
So What Now?
The legal landscape for artificial intelligence is still developing, and no outcome can yet be predicted with any sort of accuracy. While some courts appear poised to accept AI model training as transformative, other courts do not. As AI technology continues to advance, the legal system must adapt to address the unique challenges it presents. Meanwhile, businesses and creators navigating this uncertain terrain should stay informed about legal developments and consider proactive measures to mitigate risks. As we await further rulings and potential legislative action, one thing is clear: the conversation around AI and existing intellectual property protection is just beginning.

EU AI Act: Key Compliance Considerations Ahead of August 2025

The European Commission has made it clear: The timetable for implementing the Artificial Intelligence Act (EU AI Act) remains unchanged. There are no plans for transition periods or postponements. The first regulations have been in force since Feb. 2, 2025, while further key obligations will become binding on Aug. 2. AI Act violations may be punished with significant penalties, including fines of up to EUR 35 million or 7% of global annual turnover.
The AI Act marks the world’s first comprehensive legal framework for using and developing AI. It follows a risk-based approach that links regulatory requirements to the specific risk an AI system entails. Implementation may pose structural, technical, and governance-related challenges for companies, particularly in the area of general-purpose AI (GPAI).
Key Requirements and Compliance Obligations Under the EU AI Act
The AI Act focuses on risky and prohibited AI practices. Certain applications have been expressly prohibited since Feb. 2, 2025. These include, among others:

biometric categorization based on sensitive characteristics; 
emotion recognition systems in the workplace; 
manipulative systems that influence human behavior without being noticed; and 
social scoring.

These prohibitions apply comprehensively – both to the development and to the mere use of such systems.
On Aug. 2, 2025, comprehensive due diligence, transparency, and documentation requirements will also take effect for various actors along the AI value chain.
The German legislature is expected to entrust the Federal Network Agency (Bundesnetzagentur) with regulatory oversight. The agency has already set up the “AI Service Desk” to serve as a first point of contact for small and medium-sized enterprises, particularly for questions relating to the AI Act’s practical implementation. In addition, companies should closely monitor regulatory developments, for example regarding the final Code of Practice for GPAI models, which was published on July 10, and the harmonization of technical standards, which may become the “best practice” benchmark for compliance.
Which Companies and Stakeholders Are Impacted by the EU AI Act?
General-Purpose AI (GPAI) Providers
Providers of GPAI models – such as large language or multimodal models – will be subject to a specific regulatory regime beginning August 2025. They will be required to maintain technical documentation that makes the model’s development, training, and evaluation traceable. In addition, transparency reports must be prepared that describe the capabilities, limitations, potential risks, and guidance for integrators.
A summary of the training data used must also be published. This must include data types, sources, and preprocessing methods. The use of copyright-protected content must be documented and legally permissible. At the same time, providers must ensure the protection of confidential information.
GPAIs with Systemic Risk
Extended obligations apply to particularly powerful GPAI models that are classified as “systemic.” Classification is based on technical criteria such as computing power, range, or potential impact. Providers of such models must report the system to the European Commission, undergo structured evaluation and testing procedures, and permanently document security incidents. In addition, increased requirements apply in the area of cybersecurity and monitoring.
Downstream Providers and Modifiers
Companies that substantially modify existing GPAI models will themselves become providers for regulatory purposes. A modification is considered substantial if the existing GPAI model is changed through retraining, fine-tuning, or other technical adjustments in such a way that the functionality, performance, or risks of the model change significantly, and the modification does not merely amount to integration or use. This means that all obligations that apply to original GPAI developers also apply to providers of modified models. In practice, fine-tuning in the context of business applications must therefore be carefully reviewed from a legal perspective and, if necessary, supported by appropriate compliance measures.
AI System Users
Companies that merely use AI systems – especially in applications with potentially high risks, such as in recruitment, medicine, or critical infrastructure – are also required to maintain a complete inventory of the systems they use. In addition, they must ensure that prohibited applications are not used. Additional obligations will apply to high-risk AI systems beginning August 2026, such as data protection impact assessments and internal monitoring. The more extensive transparency obligations for AI system users – such as AI-generated content labeling – will not become binding until Aug. 2, 2026.
Technical and Organizational Requirements
The AI Act’s implementation requires not only legal measures but structural ones as well. Companies should consider the following to enhance compliance:

Establishing a complete AI inventory with risk classification; 
Clarifying the company’s role (provider, modifier, or deployer); 
Preparing the necessary technical and transparency documentation; 
Implementing copyright and data protection requirements; 
Training and verifying AI competence among employees (including external staff); and 
Adapting internal governance structures, including the appointment of responsible persons.

The Commission and national supervisory authorities have announced that they will closely monitor implementation. Companies should regularly review and adapt their compliance strategies, particularly with regard to the Codes of Practice and future technical standards.
Early Preparation for EU AI Act Compliance and Risk Mitigation
Aug. 2, 2025, is a binding deadline. Taking stock, clarifying roles, and evaluating systems may help create a solid foundation for regulatory certainty. GPAI providers and modifiers in particular should prepare for a higher level of accountability. But traditional deployers are also required to ensure transparency and control of their AI applications.
Early action may mitigate legal and financial risks, as well as underscore responsibility and future viability in dealing with artificial intelligence.

Copyright at a Crossroads, Continued: How the Bartz v. Anthropic Ruling Reshapes the AI Training Landscape

Abstract 
This article analyzes Judge William Alsup’s decision in Bartz v. Anthropic in the U.S. District Court for the Northern District of California, the first major ruling to address whether training AI models on copyrighted materials constitutes fair use. Building on my prior commentary about the copyright challenges posed by generative AI, this piece explores the court’s reasoning, its implications for AI developers and creators, and the broader policy questions the ruling leaves unresolved. 
Background 
In my last article, Copyright at a Crossroads: How Generative AI Challenges Traditional Doctrine, I explored the unsettled legal terrain surrounding generative AI and copyright law. As courts, policymakers, and creators grapple with how to apply long-standing principles to rapidly evolving technologies, the question of whether using copyrighted material to train AI models constitutes infringement remains a legal gray area. That is—until recently. 
The U.S. District Court’s recent ruling in Bartz v. Anthropic marks the first meaningful judicial opinion to address the legality of training large language models (LLMs) on copyrighted materials. Although the decision is widely regarded as a significant victory for AI developers, it establishes a nuanced legal precedent with far-reaching implications that extend well beyond the courtroom. 
The Core Dispute: AI Training vs. Copyright Protection 
At the heart of Bartz v. Anthropic was the claim by Andrea Bartz, a novelist, that Anthropic’s Claude language model was trained on copies of her copyrighted works without permission. Bartz argued that this use constituted both direct copyright infringement and unlawful circumvention under the Digital Millennium Copyright Act (DMCA). Anthropic responded that its use of publicly available internet data, which may include copyrighted works, was transformative and protected by fair use—a defense that has long played a pivotal role in technological innovation. 
This dispute mirrored many of the concerns I highlighted in my earlier piece: whether existing copyright law adequately protects creators when machine learning models copy or reference their work, and whether imposing liability on developers for training data inputs would stifle innovation in AI research. 
The Court’s Decision: A Contextual Fair Use Analysis 
The court ultimately sided with Anthropic, holding that the ingestion of copyrighted material for the purpose of model training was a fair use under 17 U.S.C. § 107. More specifically, while acknowledging that copyrighted works were used, the court emphasized several key factors: 

Purpose and Character of the Use: The court agreed that Anthropic’s use was highly transformative. The works were not reproduced or distributed in any recognizable way, but rather abstracted into statistical representations that enabled language prediction capabilities. 
Nature of the Copyrighted Work: Though the works were creative, this factor was outweighed by the transformative nature of the use. 
Amount and Substantiality: The court recognized that entire works may have been ingested but emphasized that no output from the model replicated those works verbatim. 
Market Effect: The plaintiff failed to show any market harm, either to the original work or any derivative AI training licensing market (which remains largely speculative at this stage). 

By grounding its ruling in a fact-specific fair use analysis, the court avoided setting a sweeping rule, while nevertheless signaling a tolerance for AI training practices that rely on publicly available content. 
Why This Is a Win—for Now—for AI Developers 
For the AI development community, the decision provides much-needed clarity—at least in the short term. By validating the use of internet-scale data for training under fair use, the court preserves the status quo that has underpinned virtually all major foundation models to date. The alternative—a regime where developers must clear rights for millions of individual data points—would be commercially and technologically unworkable for many startups. 
Moreover, the ruling recognizes the fundamentally different nature of LLM training as compared to traditional copying or redistribution. This distinction is critical in reinforcing the idea that copyright law must evolve alongside the technologies it seeks to regulate. 
A Measured Victory: Risks and Ambiguities Remain 
Despite the victory lap being taken by some within the AI development community, the ruling does not close the book on the legal challenges ahead. The court was careful to note that its opinion was grounded in the specific facts of the case—and left the door open for different outcomes in future cases involving more direct reproduction, clearer market substitution, or stronger evidence of economic harm. 
For creatives, the decision certainly feels like a setback. It reinforces a growing sense that copyright law is ill-equipped to offer meaningful protection in the age of AI. The underlying concern—that creative work is being absorbed into opaque systems without attribution, consent, or compensation—remains unresolved. 
And for policymakers, the ruling underscores the need for legislative clarity. Courts can interpret doctrine, but they cannot create the kind of systematic licensing infrastructure or statutory exceptions that may ultimately be needed to balance innovation with creator rights. I discussed emerging policy recommendations in more detail in my previous article. 
Looking Ahead: A Shifting Legal Landscape 
Bartz v. Anthropic may be the first, but it certainly won’t be the last major ruling to address AI and copyright. Similar cases—against OpenAI, Meta, and Stability AI—are winding through the courts and may yield diverging results depending on jurisdiction and factual nuance. Meanwhile, agencies like the U.S. Copyright Office continue to reevaluate the boundaries of authorship, infringement, and originality in the context of generative systems. 
Copyright law truly stands at an inflection point. Bartz offers an initial blueprint for how courts might apply fair use to AI training, but it also invites more questions than it answers. What obligations do developers have to audit their training data? Can AI models be trained ethically on copyrighted materials if those materials are not reproduced? Should Congress intervene to create a statutory framework for AI training rights and licensing? 
These are not questions the judiciary alone can resolve. But in Bartz, we see a legal system beginning to engage—cautiously, contextually, and with an eye toward both enabling innovation and protecting creator rights.

Texas Legislature Passes New Laws Impacting Health Care Industry

Several state bills affecting the health care industry arose during the 89th Regular Session of the Texas Legislature, which ended June 2, 2025. This GT Alert provides summaries and effective dates of these bills, some of which passed and were signed into law by Gov. Abbott, while others did not.
HB 2254 – Allows Insurers and Health Care Providers to Enter Health Care Services Contract Arrangements
HB 2254 authorizes health insurance plans to enter into contracts for primary care services using fee-for-service arrangements, risk-sharing arrangements, capitation arrangements, or any combination of these models. However, primary care physicians and physician groups are not required to enter into such arrangements, and insurers may not discriminate against those that choose not to participate. The legislation also prohibits global capitation arrangements.
Additionally, payor-provider contracts under these arrangements must not disincentivize the provision of medically necessary health care services or interfere with a physician’s independent medical judgment regarding which services are medically appropriate or necessary. The legislation took effect immediately upon its signing on June 20, 2025.
SB 31 – Amends Treatment Guidance for Pregnant Women with Life-Threatening Emergency Conditions
This legislation, also known as the Life of the Mother Act, provides that, if a pregnant woman has a life-threatening physical condition aggravated by or arising from her pregnancy, a physician is not required to wait until the risk is imminent or until the pregnant woman suffers physical harm before acting. In treating the life-threatening condition, a physician must use the method that offers the best opportunity for the fetus to survive while balancing the pregnant woman’s risk of death or substantial impairment of a major bodily function. This legislation is an attempt to address certain criticisms of Texas’ abortion laws, which resulted in uncertainty among physicians about when the law’s emergency exception would be triggered. SB 31 went into effect immediately upon its signing on June 20, 2025.
SB 815 – Restricts Use of AI to Determine Medical Necessity or Appropriateness
This bill imposes restrictions on the use of artificial intelligence algorithms by utilization review agents to determine whether health services are medically necessary or appropriate. Under the legislation, a utilization review agent may not use an AI algorithm as the sole basis for wholly or partially denying, delaying, or modifying health care services. The bill clarifies that only a physician or licensed health care provider may determine medical necessity or appropriateness. SB 815 will take effect Sept. 1, 2025.
SB 1236 – Governs Relationship Between Pharmacists or Pharmacies and Health Benefit Plan Issuers or Pharmacy Benefit Managers
SB 1236 provides that a health benefit plan issuer or pharmacy benefit manager (PBM) may not, as the result of an audit, deny or reduce any claim payment made to a pharmacist or pharmacy after adjudication of the claim, except in cases of fraud, duplicate payments, erroneous prescriptions, or clerical errors. Additional provisions detail when and how a health benefit plan issuer or PBM may make changes to contracts with pharmacies or pharmacists, including enhanced requirements for changes that would decrease compensation, move a pharmacist or pharmacy to a less-preferred tier, or alter administrative procedures to increase expenses or decrease compensation. The legislation limits certain potentially coercive tactics by PBMs, such as penalizing non-network participants or charging pharmacies or pharmacists a fee to participate in a network. SB 1236 will take effect Sept. 1, 2025.
HB 2038 – Regulates Issuance of Provisional Medical Licenses to Foreign License Holders
This legislation establishes the circumstances under which a foreign license holder may receive a provisional medical license, including specific restrictions on individuals from countries deemed to pose a national security risk, the settings in which a provisional licensee may practice, and the conditions under which the board may issue a full license. For physician graduates, the legislation provides criteria for receiving a limited license, prohibits graduates from practicing without entering into a supervising practice agreement with a sponsoring physician, sets practice limitations such as a physician graduate only being able to practice in a county with a population of less than 100,000, and lays out the circumstances in which a graduate’s limited license may be renewed, denied, suspended, and revoked. HB 2038 also sets rules on when a physician is eligible to enter into a supervising practice agreement as a sponsoring physician. The legislation will take effect Sept. 1, 2025.
HB 2747 – Notification Requirements for Health Care Entities’ Material Change Transactions
In February 2025, Texas legislators introduced HB 2747, which would have required certain health care entities to notify the state 90 days in advance of specified “material change transactions” involving “a material change to ownership, operations, or governance structure.” The notice requirement would have applied to a broad range of transactions, including but not limited to: mergers involving a health care entity; acquisitions of health care entities or a material amount of the assets or operations of health care entities; formation of joint ventures, partnerships, accountable care organizations, or management services organizations resulting in a person acquiring direct or indirect control over all or a substantial part of a health care entity; and real estate sales involving a material amount of health care entity assets. While similar to legislation in other states, this bill did not require attorney general or agency approval for such transactions. The bill ultimately died in committee.
SB 1595 & HB 4408 – Annual Ownership and Control Reporting Requirements for Health Care Entities
Between February and April 2025, the Texas Senate and House considered identical bills that would have required certain health care entities to annually report information regarding their ownership and control. The bills targeted specified types of “material change transactions” involving a health care entity with total assets and annual revenue of at least $10 million. Covered transactions included but were not limited to: mergers; acquisitions; changes in control; formation of joint ventures or management organizations; significant asset transfers; real estate sales involving a material amount of health care entity assets; closure of a health care facility; and significant reduction of any essential health care service provided by a health care facility. In addition to reporting material change transactions, the bills would have required health care entities to submit annual reports to the state listing each person with an ownership, investment, or controlling interest; any management services organization affiliated with the entity; and any significant equity investor. Similar to legislation in other states, these bills aimed to address concerns about increasing private equity ownership of or affiliation with medical practices. Neither bill progressed to a full vote in either chamber.

European Commission Publishes General-Purpose AI Code of Practice

On July 10, 2025, the European Commission published the final version of the General-Purpose AI Code of Practice (the “AI Code”), three weeks before the obligations relating to general-purpose AI models under the EU AI Act are due to come into effect.
Compliance with the AI Code is voluntary but is intended to help demonstrate compliance with certain provisions of the EU AI Act. According to the European Commission, organizations that agree to follow the AI Code will enjoy a “reduced administrative burden” and greater legal certainty compared to others that seek to demonstrate compliance in an alternative manner.
The AI Code is set to be complemented by forthcoming European Commission guidelines expected later this month, which will clarify key concepts related to general-purpose AI models and aim to ensure their consistent interpretation and application.
The AI Code is divided into three separately authored chapters: transparency obligations, copyright, and safety and security. Each chapter addresses specific aspects of compliance under the EU AI Act:

Transparency: This chapter provides a framework for providers of general-purpose AI models to demonstrate compliance with their obligations under Articles 53(1)(a) and (b) of the EU AI Act. It outlines the documentation and practices required to meet transparency standards. In particular, signatories to the AI Code can comply with the EU AI Act’s transparency requirements by maintaining information in a model documentation form (included in the chapter), which may be requested by the AI Office or a national competent authority.
Copyright: This chapter details how to demonstrate compliance with Article 53(1)(c) of the EU AI Act, which requires a provider to put in place a policy to comply with EU copyright law and to identify and comply with expressed reservations of rights. The AI Code provides several measures to demonstrate compliance with Article 53(1)(c), such as implementing a copyright policy that incorporates the other measures of the chapter and designating a point of contact for complaints concerning copyright.
Safety and security: This chapter applies only to providers responsible for general-purpose AI models with systemic risk and relates to the obligations under Article 55 of the EU AI Act. The chapter details the measures needed to assess and mitigate risks associated with these advanced models, such as creating and adopting a framework detailing the processes and measures for systemic risk assessment and mitigation, implementing appropriate safety and security mitigations, and developing a model report containing details about the AI model and its systemic risk assessment and mitigation processes, which may be shared with the AI Office.

Deepfakes Face Deep Trouble: Revenge Porn in the Workplace

The recently enacted TAKE IT DOWN Act makes it a federal offense to share nonconsensual, explicit images online, regardless of whether the images are real or computer generated. The law is intended to protect victims from online abuse, set clear guidelines for social media users, and deter “revenge porn” by targeting the distribution of real and digitally altered exploitative content involving children. While the 2019 SHIELD Act criminalized the sharing of intimate images with the intent to harm, the TAKE IT DOWN Act provides additional protections by establishing a removal system that allows victims to request removal of harmful and intrusive images, and requires tech platforms to remove such images within 48 hours of receiving a takedown request from an identifiable person or an authority acting on behalf of such individual. The law aims to respond to the recent era of artificial intelligence (“AI”) “deepfakes,” or realistic, digitally generated or altered videos of a person, often created and distributed with malicious intent. Additionally, the law attempts to combat the rise of “nudification technology,” which is used to create highly realistic and sexually explicit images and videos by digitally removing clothing from images of clothed people. Users of this technology can disseminate the images rapidly, broadly, and anonymously.
States have also enacted measures to combat revenge porn. As of 2025, nearly all 50 states have enacted laws criminalizing “revenge porn” and/or providing civil recourse for victims. For example, Massachusetts updated its Criminal Harassment Statute to prohibit distribution of nude or partially nude images of individuals, or of those engaged in sexual acts. The New York Civil Rights Act provides a private right of action against those who disseminate intimate images without consent. Washington and Illinois each provide civil remedies for deepfake content. Minnesota announced it is considering legislation that targets companies that run websites or apps that create, house, or disseminate explicit images or photos; and San Francisco filed a first-of-its-kind lawsuit seeking to shut down an app used to create AI-generated images of high school-aged girls across the globe. Legislative efforts in this area reflect the urgency to rein in the creation and distribution of harmful AI images. However, some of the more sweeping efforts to regulate AI may collide with First Amendment rights. AI experts warn that lawmakers’ failure to narrowly tailor legislation targeting AI that creates deepfakes, for example, will result in free speech legal challenges. Accordingly, lawmakers may strategically target conduct instead of speech, in support of arguing for a lower standard of judicial scrutiny when faced with a First Amendment challenge. 
What do these developments mean for employers? Most employers are ill-prepared to quickly and effectively respond to the fast-moving threat of AI deepfakes and nudification. Failure to respond appropriately, however, may result in liability for employers. For example, employers can be held liable under Title VII of the Civil Rights Act of 1964 if deepfakes and nudification are used to create a hostile work environment—regardless of whether the images were created outside of work hours. Accordingly, employers should review their insurance policies to confirm whether the coverage includes cyber-related matters, such as deepfakes. There are also several proactive measures employers can take to mitigate liability by guiding decision-makers’ and employees’ conduct. 
For example, employers should audit and update existing social media and harassment policies to include descriptions of deepfakes and nudification technology and the threat they pose to the workplace. Incorporating reporting and investigation protocols concerning digital impersonation or AI-powered cyberbullying into harassment policies is another way to equip personnel to respond to the threat. Specifically, employers should create a detailed response plan with timelines, documentation requirements, and points of contact for when they receive inappropriate photos or videos of an employee who appears to be engaging in sexual or pornographic activity. Employers will want to take measures to avoid assumptions or hasty decisions that may result in punishing the victim of deepfake or nudification technology. Likewise, employers should create clear guidelines and a response plan in the event that an employee is alleged to have distributed revenge porn or malicious deepfakes. Training on all of the above will go a long way as well.

President Trump Signs Law with Over $1 Billion of AI Funding, and US Rescinds Chip Export Restrictions to China — AI: The Washington Report

On July 4, 2025, President Trump signed into law the One Big Beautiful Bill Act, which allocates over $1 billion toward advancing the federal government’s use of AI.
The funding package reinforces the administration’s broader AI strategy, directing federal investments toward federal government operations, while signaling increased support for private-sector AI innovation.
On July 2, the Department of Commerce informed three of the world’s largest electronic design automation (EDA) software developers that the export controls on advanced chip-design software imposed against China in May had been lifted.
This rollback allows companies to resume exporting previously restricted chip-design software and technologies to China.  

President Trump Signs Law with Over $1 Billion of AI Funding
On July 4, 2025, President Trump signed into law the One Big Beautiful Bill Act, a legislative package that allocates over $1 billion in investments in the federal government’s use of AI. The package will earmark funding over the next five years for the following efforts:

$450,000,000 for the application of AI for naval shipbuilding
$124,000,000 for improvements to AI at the Test Resource Management Center
$250,000,000 for the expansion of Cyber Command AI lines of effort
$200,000,000 for the deployment of AI “to accelerate the audits of the financial statements of the Department of Defense”

The new funding marks a continuation of the Trump administration’s efforts to promote and integrate the use of AI into federal government operations. The White House, in April, rescinded requirements that could limit or delay federal agencies’ adoption of AI. During his first week in office, President Trump also issued an Executive Order directing his top AI officials to develop a comprehensive AI Action Plan for the federal government, which is due by July 22, 2025.
US Rescinds Chip Design Software Export Restrictions to China
On July 2, the US Department of Commerce notified three of the world’s largest electronic design automation (EDA) software developers that “the export restrictions related to China, pursuant to a letter [issued] on May 29, 2025, have now been rescinded, effective immediately.” The original restrictions, announced in late May, limited the export of advanced chip-design software, aiming to curb China’s access to technology critical to designing the chips that power AI and advanced computing. EDA software is vital for designing the semiconductors that power a wide range of technologies — from smartphones and computers to automobiles and the data centers that train and operate AI models.
Following the reversal of export restrictions on chip-design software to China, the affected EDA software developers have begun restoring access to previously restricted software and tools in China.
The rollback of chip-design software export restrictions is the latest in a series of actions by the Trump administration that reflect its evolving approach to AI policies. These steps highlight a broader strategy focused on maintaining US competitiveness in AI and semiconductor development, while reinforcing the administration’s efforts to align national security measures with domestic AI development goals and private sector growth.

Voices on Trial: Voice Actors, AI Cloning, and the Fight for Identity Rights

A New York court just decided some important preliminary motions (which I previously covered in a prior post) involving allegedly unauthorized AI cloning of voice actors. The court reached a split decision, concluding “that, for the most part, Plaintiffs have not stated cognizable claims under federal trademark and copyright law. However, that does not mean they are without a remedy. Rather, claims for misappropriation of a voice, like the ones here, may be properly asserted under Sections 50 and 51 of the New York Civil Rights Law [which protect name, image and likeness], which, unlike copyright and trademark law, are tailored to balance the unique interests at stake. Plaintiffs also adequately state claims under state consumer protection law and for ordinary breach of contract.”
The court commented on the uniqueness and significance of this case, stating: “The case involves a number of difficult questions, some of first impression. It also carries potentially weighty consequences not only for voice actors, but also for the burgeoning AI industry, other holders and users of intellectual property, and ordinary citizens who may fear the loss of dominion over their own identities.”
This ruling portends potential challenges that may arise for others whose voices may be AI-cloned. The problem is that there is no federal right of publicity, which covers a person’s name, image and likeness (NIL). This court’s dismissal of the federal law claims under trademark and copyright law, if followed by other courts, will limit plaintiffs’ NIL claims to those under state law, such as those here under Sections 50 and 51 of the New York Civil Rights Law. However, not every state has a right of publicity law.