Utah Law Aims to Regulate AI Mental Health Chatbots
Those in the tech world and in medicine alike see potential in the use of AI chatbots to support mental health—especially when human support is unavailable, or therapy is unwanted.
Others, however, see the risks—especially when chatbots designed for entertainment purposes can disguise themselves as therapists.
So far, some lawmakers agree with the latter. In April, U.S. Senators Peter Welch (D-Vt.) and Alex Padilla (D-Calif.) sent letters to the CEOs of three leading artificial intelligence (AI) chatbot companies asking them to outline, in writing, the steps they are taking to ensure that the human interactions with these AI tools “are not compromising the mental health and safety of minors and their loved ones.”
The concern was real: in October 2024, a Florida parent filed a wrongful death lawsuit in federal district court, alleging that her son committed suicide with a family member’s gun after interacting with an AI chatbot that enabled users to interact with “conversational AI agents, or ‘characters.’” The boy’s mental health allegedly declined to the point where his primary relationships “were with the AI bots which Defendants worked hard to convince him were real people.”
The Florida lawsuit also claims that the interactions with the chatbot became highly sexualized and that the minor discussed suicide with the chatbot, saying that he wanted a “pain-free death.” The chatbot allegedly responded, “That’s not a reason not to go through with it.”
Another lawsuit in Texas, meanwhile, claims that a chatbot commiserated with a minor over a parent's time-use limit for a phone, mentioning news headlines such as "child kills parents."
In February 2025, the American Psychological Association urged regulators and legislators to adopt safeguards. In their April 2 letters described above, the senators informed the CEOs that the attention that users receive from the chatbots can lead to “dangerous levels of attachment and unearned trust stemming from perceived social intimacy.”
"This unearned trust can [lead], and has already [led], users to disclose sensitive information about their mood, interpersonal relationships, or mental health, which may involve self-harm and suicidal ideation—complex themes that the AI chatbots on your products are wholly unqualified to discuss," the senators assert.
Utah’s Solution
States are taking note. In line with national objectives, Utah is embracing AI technology and innovation while still focusing on ethical use, protecting personal data/privacy, ensuring transparency, and more.
Several of these new Utah laws have broad-reaching implications across a variety of sectors. For example:
The Artificial Intelligence Policy Act (S.B. 149) establishes an "AI policy lab" and creates a number of protections for users and consumers of AI, including requirements for healthcare providers to prominently disclose any use of generative AI in patient treatment.
The AI Consumer Protection Amendments (S.B. 226) limit requirements regarding the use of AI to high-risk services.
The Unauthorized Artificial Intelligence Impersonation Amendments (S.B. 271) protect creators by prohibiting the unauthorized monetization of art and talent.
Utah’s latest AI-related initiatives also include H.B. 452, which took effect May 7 and which creates a new code section titled “Artificial Intelligence Applications Relating to Mental Health.” This new code section imposes significant restrictions on mental health chatbots using AI technology. Specifically, the new law:
establishes protections for users of mental health chatbots using AI technology;
prohibits certain uses of personal information by a mental health chatbot;
requires disclosures to users that a mental health chatbot is AI technology, as opposed to a human;
places enforcement authority in the state’s division of consumer protection;
contains requirements for creating and maintaining chatbot policies; and
contains provisions relating to suppliers who comply with policy requirements.
We summarize the key highlights below.
H.B. 452: Regulation of Mental Health Chatbots Using AI Technology
Definitions. Section 13-72a-101 defines a “mental health chatbot” as AI technology that:
Uses generative AI to engage in interactive conversations with a user, similar to the confidential communications that an individual would have with a licensed mental health therapist; and
A supplier represents, or a reasonable person would believe, can or will provide mental health therapy or help a user manage or treat mental health conditions.
"Mental health chatbot" does not include AI technology that only:
Provides scripted output (guided meditations, mindfulness exercises); or
Analyzes an individual’s input for the purpose of connecting the individual with a human mental health therapist.
Protection of Personal Information. Section 13-72a-201 provides that a supplier of a mental health chatbot may not sell to or share with any third party: 1) individually identifiable health information of a Utah user; or 2) the input of a Utah user. The law exempts individually identifiable health information—defined as any information relating to the physical or mental health of an individual—that is requested by a health care provider, with user consent, or provided to a health plan of a Utah user upon request.
A supplier may share individually identifiable health information necessary to ensure functionality of the chatbot if the supplier has a contract related to such functionality with another party, but both the supplier and the third party must comply with all applicable privacy and security provisions of 45 C.F.R. Part 160 and Part 164, Subparts A and E (see the Privacy Rule of the Health Insurance Portability and Accountability Act of 1996 (HIPAA)).
Advertising Restrictions. Section 13-72a-202 states that a supplier may not use a mental health chatbot to advertise a specific product or service absent clear and conspicuous identification of the advertisement as an advertisement, as well as any sponsorship, business affiliation, or third-party agreement regarding promotion of the product or service. The chatbot is not prohibited from recommending that the user seek assistance from a licensed professional.
Disclosure Requirements. Section 13-72a-203 provides that a supplier shall cause the mental health chatbot to clearly and conspicuously disclose to a user that the chatbot is AI and not human—before the chatbot features are accessed; before any interaction if the user has gone seven days without access; and any time a user asks or prompts the chatbot about whether AI is being used.
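For teams building or procuring chatbot products, the three disclosure triggers translate into simple session logic. The following Python snippet is a minimal illustrative sketch only, assuming a basic per-user session model; the function name, parameters, and seven-day constant are our own labels, not language drawn from the statute.

```python
from datetime import datetime, timedelta

# Illustrative only: H.B. 452 requires the AI disclosure (1) before chatbot
# features are first accessed, (2) before any interaction after seven or more
# days without access, and (3) whenever the user asks whether AI is being used.
REDISCLOSURE_GAP = timedelta(days=7)

def disclosure_required(first_access: bool,
                        last_access: datetime | None,
                        now: datetime,
                        user_asked_about_ai: bool) -> bool:
    """Return True when a clear and conspicuous AI disclosure should be shown."""
    if first_access:
        return True
    if last_access is not None and (now - last_access) >= REDISCLOSURE_GAP:
        return True
    if user_asked_about_ai:
        return True
    return False
```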
Affirmative Defense. Section 58-60-118 allows for an affirmative defense to liability in an administrative or civil action alleging a violation if the supplier demonstrates that it:
created, maintained, and implemented a written policy, filed with the state’s Division of Consumer Protection, which it complied with at the time of the violation; and
maintained documentation regarding the development and implementation of the chatbot that describes the foundation models, training data, compliance with federal health privacy regulations, and user data collection and sharing practices.
The law also contains specific requirements regarding the policy and the filing.
Takeaways
A violation of the Utah statute carries an administrative fine of up to $2,500 per violation, and the state's Division of Consumer Protection may bring an action in court to enforce the statute. The attorney general may also bring a civil action on behalf of the Division. As chatbots become more sophisticated, and more harms are realized in the context of mental health, other states are sure to follow Utah's lead.
AI Drives Need for New Open Source Licenses – Linux Foundation Publishes the OpenMDW License
For many reasons, existing open source licenses are not a good fit for AI. Simply put, AI involves more than just software and most open source licenses are designed primarily for software. Much work has been done by many groups to assess the open source license requirements for AI. For example, the OSI has published its version of an AI open source definition – The Open Source AI Definition – 1.0. Recently, the Linux Foundation published a draft of the Open Model Definition and Weight (OpenMDW) License.
The OpenMDW License is a permissive license specifically designed for use with machine‑learning models and their related artifacts, collectively referred to as “Model Materials.” “Model Materials” include machine‑learning models (including architecture and parameters) along with all related artifacts—such as datasets, documentation, preprocessing and inference code, evaluation assets, and supporting tools—provided in the distribution. This inclusive definition purports to align with the OSI’s Open Source Definition and the Model Openness Framework, covering code, data, weights, metadata, and documentation without mandating that every component be released. The Model Openness Framework is a three-tiered ranked classification system that rates machine learning models based on their completeness and openness, following open science principles.
The OpenMDW License is a permissive license, akin to the Apache or MIT license. It grants a royalty free, unrestricted license to use, modify, distribute, and otherwise “deal in” the Model Materials under all applicable intellectual‑property regimes—including copyright, patent, database, and trade‑secret rights. This broad grant is designed to eliminate ambiguity around the legal permissions needed to work with AI assets.
The primary substantive compliance obligation imposed by OpenMDW is preservation of the license itself. Any redistribution of Model Materials must include (1) a copy of the OpenMDW Agreement and (2) all original copyright and origin notices. Compliance is as easy as placing a single LICENSE file at the root of the repository. There are no copyleft or share‑alike requirements, ensuring that derivative works and integrations remain as unconstrained as possible.
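Because the obligation is so light, a redistribution check can be automated in a few lines. The snippet below is a hypothetical sketch assuming a conventional repository layout; the notice file names other than LICENSE are our assumptions, not requirements of the OpenMDW License.

```python
from pathlib import Path

# Hypothetical layout assumptions: a LICENSE file holding the OpenMDW text at the
# distribution root, plus at least one preserved copyright/origin notice file.
NOTICE_CANDIDATES = ("NOTICE", "COPYRIGHT", "ORIGIN")

def check_openmdw_redistribution(dist_root: str) -> list[str]:
    """List missing items for the two redistribution obligations:
    (1) include a copy of the license; (2) keep original copyright/origin notices."""
    root = Path(dist_root)
    problems = []
    if not (root / "LICENSE").exists():
        problems.append("missing LICENSE file at distribution root")
    if not any((root / name).exists() for name in NOTICE_CANDIDATES):
        problems.append("no copyright/origin notice file found")
    return problems

if __name__ == "__main__":
    for issue in check_openmdw_redistribution("."):
        print("OpenMDW redistribution check:", issue)
```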
There is, however, a patent‑litigation‑termination clause. If a licensee initiates litigation alleging that the Model Materials infringe the licensee's patents—except as a defensive response to a suit first brought against the licensee—all rights granted to that licensee under the OpenMDW are terminated. This provision serves to discourage aggressive patent actions that could undermine open collaboration.
Any outputs generated by using the Model Materials are free of license restrictions or obligations. The license also disclaims all warranties and liabilities “to the greatest extent permissible under applicable law,” placing responsibility for due diligence and rights clearance squarely on the licensee.
We all know that AI will be transformative, but we do not yet know all the ways in which it will be so. One of the transformations that AI will undoubtedly drive is a redefinition of what it means to be "open source" and the types of open source AI licenses. As a leader of my firm's Open Source Team and its AI Team, I find the intersection of these areas near and dear to my heart. While many lawyers and developers may not yet have focused on this, it will be a HUGE issue. If you have not yet done so, now is a good time to start.
One of the core issues is that, traditionally, under an open source license the source code is made available so others can copy, inspect, modify, and redistribute software based on it. With AI, the code alone is often not enough to accomplish those purposes. In many cases, other things are or may be necessary, such as the training data, model weights, and other non-code aspects that are important to AI. This issue is significant in many ways. So much so that, as mentioned above, the Open Source Initiative, steward of the Open Source Definition, developed the Open Source AI Definition 1.0 to REDEFINE the meaning of open source in the context of AI. To learn more about these issues, check out the OSI Deep Dive initiative.
New Kansas Law Will Presume Nonsolicitation Agreements Enforceable
Kansas Governor Laura Kelly recently signed a bill into law that deems certain nonsolicitation agreements with business owners and employees to be presumptively enforceable and not a restraint on trade. While generally consistent with existing Kansas case law, the legislation comes as many states are moving to limit or ban the use and enforceability of restrictive covenants in employment and reaffirms Kansas’s status as a relatively employer-friendly jurisdiction for the enforcement of well-tailored restrictive covenant agreements.
Quick Hits
Kansas recently enacted a law to make certain written agreements not to solicit customers or employees “conclusively presumed” to be enforceable.
The legislation applies to nonsolicitation agreements between businesses and their owners, which are limited to four years after the end of their business relationship, and agreements with employees, which are limited to two years following employment.
The legislation will take effect on July 1, 2025.
Kansas Senate Bill (SB) 241, which was signed on April 9, 2025, clarifies guidelines for what constitutes reasonable and enforceable nonsolicitation agreements and noninterference agreements regarding employers’ customers and employees under the Kansas Restraint of Trade Act.
Unlike the trend of scrutinizing restrictive covenants in employment, SB 241 sets forth a more employer-friendly approach, deeming certain types of restrictive covenants in writing to be “conclusively presumed to be enforceable and not a restraint of trade.” (Emphasis added).
Enforceable Covenants
SB 241 applies to certain nonsolicitation agreements "in writing" between a business and its owners or employees regarding interference with the business's employees or customers. The covered covenants include:
Owner Nonsolicitation of Employees—Covenants in which an owner agrees not to recruit or otherwise interfere with employees or owners of a business entity for up to four years after their business relationship ends.
Owner Nonsolicitation of Customers—Covenants in which an owner agrees not to solicit a business entity’s “material contact customers” for up to four years after their business relationship ends.
Employee Nonsolicitation of Employees—Covenants between a business and one or more of its employees where an employee agrees not to solicit employees or owners of the business. The agreement must either: (1) seek to “protect confidential or trade secret business information or customer or supplier relationships, goodwill or loyalty,” or (2) not last for more than two years after employment.
Employee Nonsolicitation of Customers—Covenants where an employee agrees not to solicit or interfere with a business entity’s “material contact customers” for up to two years after their employment ends are enforceable if they are limited to material contact customers.
Owner Notice Provisions—Provisions requiring an owner to provide prior notice before terminating, selling, or disposing of their ownership interest in a business entity.
SB 241 defines "material contact customer" as "any customer or prospective customer that is solicited, produced or serviced, directly or indirectly, by the employee or owner at issue or any customer or prospective customer about whom the employee or owner, directly or indirectly, had confidential business or proprietary information or trade secrets in the course of the employee's or owner's relationship with the customer."
Modification and Interpretation
Under the Kansas Restraint of Trade Act, the act’s provisions for covenants presumed to be enforceable control even if they conflict with federal court decisions on U.S. antitrust law. SB 241 adds that “[i]f a covenant that is not presumed to be enforceable … is determined to be overbroad or otherwise not reasonably necessary to protect a business interest of the business entity seeking enforcement of the covenant” courts must “modify the covenant” and “enforce the covenant as modified,” granting “only the relief reasonably necessary to protect such interests.”
Despite the “presumption of enforceability,” SB 241 will allow employees or owners to “assert any applicable defense available at law or in equity” in a court’s consideration of a written covenant.
Next Steps
Restrictive covenants have come under scrutiny in recent years. In 2024, the Federal Trade Commission (FTC) finalized a rule that sought to ban nearly all noncompete agreements in employment, but that effort was struck down in court. The Trump administration has since asked to halt appeals while the administration considers whether to drop the FTC’s rule. Still, the FTC under the Trump administration has indicated it will scrutinize restrictive covenants that unreasonably harm competition in labor markets, even if it is unlikely to do so through formal rulemaking. Moreover, at the state level, Virginia and Wyoming enacted restrictions on noncompete agreements in 2025.
However, Kansas’s SB 241, while not applying to noncompete agreements, goes against the broader scrutiny of restrictive covenants in employment. Instead, the law presumes certain nonsolicitation agreements to be enforceable, providing guidelines for employers to craft reasonable and enforceable agreements to protect legitimate business interests and trade secrets. The law is set to take effect on July 1, 2025.
Live from Workplace Horizons 2025 – Emerging AI + Related Tech Issues in the Workplace [Video, Podcast]
Welcome to this special edition of We get work®. Over 500 representatives from 260 companies gathered together to share valuable insights and best practices on workplace law issues impacting their business today.
Nanterre Court of Justice Issues First Decision About Introduction of AI in the Workplace in France
For the first time, a French court has ruled on the implementation of artificial intelligence (AI) processes within a company.
Quick Hits
For the first time, a French court has ruled on the implementation of AI processes within a company, emphasizing the necessity of works council consultation even during experimental phases.
The Nanterre Court of Justice determined that the deployment of AI applications in a pilot phase required prior consultation with the works council, leading to the suspension of the project and a fine for the company.
The ruling highlights the importance for employers of carefully assessing the scope of AI tools experimentation to ensure compliance with consultation obligations and avoid legal penalties.
More specifically, the Nanterre Court of Justice was called upon to determine the prerogatives of the works council when AI technologies are introduced into the workplace.
In this case, in January 2024, a company presented to its works council a project to deploy new computer applications using artificial intelligence processes.
The works council had asked to be consulted on the matter and had issued an injunction against the company to open the consultation and suspend the implementation of the new tools.
The company eventually initiated the works council consultation, even though it considered that a mere experimentation with AI tools could not fall within the scope of the works council consultation process.
However, the works council, considering that it did not have enough time to study the project and did not have sufficient information about it, took legal action to obtain an extension of the consultation period and suspension of the project under penalty of a fine of €50,000 per day and per offense, as well as €10,000 in damages for infringement of its prerogatives, because the AI applications submitted for its consultation had been implemented without waiting for its opinion.
On this point, it should be noted that in France, the works council, which is an elected body representing the company’s staff, has prerogatives that in some cases oblige the employer to inform it, but also to consult it, before being able to make a final decision. The consultation process means that the works council renders an opinion about the project before any implementation. This opinion is not binding, which means the employer can deploy the project even if the works council renders a negative opinion.
However, in the absence of consultation prior to the implementation of the project, the works council may take legal action to request the opening of the consultation and the suspension of the implementation of the project under penalty. The works council may also consider that failure to consult infringes its proper functioning, which is a criminal offense.
Indeed, in application of Article L.2312-15 of the French Labor Code,
[t]he social and economic committee issues opinions and recommendations in the exercise of its consultative powers. To this end, it has sufficient time for examination and precise, written information transmitted or made available by the employer, and the employer's reasoned response to its own observations. […] If the committee considers that it does not have sufficient information, it may refer the matter to the president of the court, who will rule on the merits of the case in an expedited procedure, so that he may order the employer to provide the missing information.
Within the area of new technologies, the prerogatives relating to consultation of the works council are numerous and variable, as it is stipulated that in companies with at least fifty employees, the works council must be:
informed and consulted, particularly when introducing new technologies and any significant change affecting health and safety or working conditions (Article L.2312-8 of the Labor Code);
informed, prior to their introduction into the company, about automated personnel management processes and any changes to them, and consulted, prior to the decision to implement them in the company, about the means or techniques enabling the monitoring of employees’ activity (Article L.2312-38 of the Labor Code); and
consulted where a type of processing, in particular one using new technologies, is likely, taking into account the nature, scope, context, and purposes of the processing, to result in a high risk to the rights and freedoms of natural persons, in which case the controller must carry out, prior to the processing, an assessment of the impact of the envisaged processing operations on the protection of personal data (Article 35(9) of the European Union's General Data Protection Regulation (GDPR)).
In addition, regarding AI applications, it is worth noting that the EU’s regulation of June 13, 2024, on AI (Regulation (EU) 2024/1689) provides in its Recital 92 that in certain cases the
Regulation is without prejudice to obligations for employers to inform or to inform and consult workers or their representatives under Union or national law and practice, including Directive 2002/14/EC of the European Parliament and of the Council, on decisions to put into service or use AI systems. It remains necessary to ensure information of workers and their representatives on the planned deployment of high-risk AI systems at the workplace where the conditions for those information or information and consultation obligations in other legal instruments are not fulfilled. Moreover, such information right is ancillary and necessary to the objective of protecting fundamental rights that underlies this Regulation. Therefore, an information requirement to that effect should be laid down in this Regulation, without affecting any existing rights of workers.
In the case at hand, the company considered that the works council consultation was irrelevant as the AI tools were in the process of being tested and had not yet been implemented within the company.
However, the Nanterre Court of Justice, in a decision of February 14, 2025 (N° RG 24/01457), ruled that the deployment of the AI applications had been in a pilot phase for several months, involving the use of the AI tools, at least partially, by all the employees concerned.
To reach this conclusion, the court relied on the fact that certain software programs, such as Finovox, had been made available to all employees reporting to the chief operating officer (COO) and that the employees of the communications department had all been trained in the Synthesia software program. As such, the employer could not validly claim that such an implementation was experimental since so many employees had been trained and allowed to use AI tools.
The court, therefore, considered that the pilot phase could not be regarded as a simple experiment but should instead be analyzed as an initial implementation of the AI applications subject to the prior consultation of the works council.
The court therefore ordered:
the suspension of the project until the end of the works council consultation period, subject to a penalty of €1,000 per day per violation observed for ninety days; and
the payment of damages amounting to €5,000 to the works council.
Key Takeaways
In light of the Nanterre Court of Justice’s ruling, employers in France may want to remain cautious before deploying AI tools, even if it is worth noting that:
the ruling is only a summary decision, i.e., an emergency measure pending a decision on the merits of the case; and
this decision confirms that an experimental implementation of AI might be feasible, provided that it is followed by an information and consultation of the works council, prior to a complete deployment of AI tools. However, the range and scope of this experimentation is to be assessed with care because a court might consider the experiment actually demonstrates that a decision to implement AI was irrevocably taken.
How BMW and Romania Built AIconic, a Powerful AI Supply Chain System

When most people think of BMW, they picture sleek design, precision engineering, and the roar of a finely tuned engine. But behind the scenes, far from the showroom floor, something entirely different is unfolding, something that doesn’t run on fuel, but on data. […]
Throwing Away the Toaster: Where AI Controls Are Now and May be Heading
Years ago, when I was a baby lawyer living in a group house in DC, we had a toaster—my toaster. I had owned the toaster since college and it was showing its age. Eventually, you had to hold down the thing[1] to keep the bread lowered in the slots and toasting. But the appliance still heated bread and produced toast. One morning, I became so frustrated with that toaster and the thing-holding-down effort that I threw the toaster out, fully intending to get a new toaster.
The following day, my housemate, we’ll call him Mike,[2] raised an important series of questions:
Mike: Did you throw away the toaster?
Me: Yes. I was frustrated that it did not work right.
Mike: Did you get a new toaster?
Me: No, but I will soon.
Mike: Did our old toaster make toast?
Me: [pause] Ah . . . well, I mean, umm, yes. I see your point.
Anyhow, on a possibly related note, on May 14, 2025, BIS announced that it will rescind the AI Diffusion Rule and that, until the time of the official rescission, it would not enforce the Biden-era regulation.
Stop-Gap Stopped
Reading the BIS announcements, it appears that, once the AI Diffusion Rule is officially rescinded, there will not be any U.S. export control that restricts the provision of cloud computing through Infrastructure as a Service (IaaS). While the export of certain ICs will still be controlled, ICs already owned or lawfully obtained could be put to any purpose, such as providing IaaS services for the development of AI in China.
If we return to October 2023, we see a comment made regarding the 2022 semiconductor regulations, highlighting that those rules, as then written, “may give China computational access to their equivalent ‘supercomputers’ via an IaaS arrangement.” (88 Fed. Reg. 73467). BIS acknowledged that the semiconductor regulations did not then cover IaaS, when it recognized that it was “concerned regarding the potential for China to use IaaS solutions to undermine the effectiveness of the October 7 IFR controls and [BIS] continues to evaluate how it may approach this through a regulatory response.” A plain reading of that statement indicates that the semiconductor regulations were not meant to (or could not be read to) cover IaaS.
However, the 2025 AI Diffusion Rule attempted to close that regulatory loophole and prohibit IaaS access for Chinese AI development. The Rule created ECCN 4E091 to cover certain AI models and then created a presumption that certain IaaS services would result in an unauthorized export of those 4E091 AI models. Effectively, that presumption restricted cloud service providers from providing certain IaaS services to entities in the PRC. With the rescission of the AI Diffusion Rule, it appears that the loophole has been reopened.
Guidance Through an Inter-Rule Interim
In tandem with the rescission of the AI Diffusion Rule, BIS also issued three guidance documents that (1) put companies on notice that the Huawei Ascend 910 series chips are presumptively subject to General Prohibition 10[3], (2) provide guidance on due diligence that companies can conduct to prevent diversion of controlled ICs, and (3) reiterate existing controls that put restrictions on certain end-users and end-uses.
Those three guidance documents give the impression of a certain tension in the rulemaking process, and they provide some hints as to what may be in store for the replacement AI rule:
On one hand, the new administration rescinded the AI Diffusion Rule in line with, if not in response to, calls from U.S. AI-related industry. The administration also recognized that there may have been flaws with the AI Diffusion Rule. For instance, the tiered approach to limiting and restricting exports of controlled ICs excluded many countries that are friendly to the U.S.—such as Iceland, Israel, and much of the EU and Eastern Europe. The AI Diffusion Rule did not put those U.S.-aligned countries on a tier in which they could freely acquire U.S. AI-supporting chips and, additionally, did not present a clear path for how those countries could move into the more favored tier.
As an alternative to the tiered approach, researchers have suggested a country-by-country approach. That approach appears to be consistent with the administration's recent trip to the Middle East, where it is reported that agreements with Saudi Arabia and the UAE have been negotiated to purchase U.S. producers' GPUs (notwithstanding the fact that, under current regulations, those countries face restrictions on the purchase of certain advanced semiconductors because of diversion risk).
While major semiconductor manufacturers have been the face of the rescission effort, other major AI infrastructure players have also been lobbying the administration to have the rule rescinded. Those companies had established or were working on data center projects in countries like Malaysia, Brazil, or India that were affected by the AI Diffusion Rule, particularly in how it limited compute capacity in those countries and restricted the use of the data centers.
On the other hand, with the AI Diffusion Rule scrapped and no replacement ready, we suspect that officials at Commerce could be concerned about the reopening of the IaaS loophole. The guidance documents appear to be an attempt to cover the gap left by the rescission of the AI Diffusion Rule. In those guidance documents, BIS explains a policy whereby sellers of controlled ICs would need to conduct additional due diligence of IaaS providers when red flags are present. That approach stands in contrast to the AI Diffusion Rule, which put some diligence requirements on the service providers themselves. In that we see a significant clue that a replacement rule will likely find some way to restrict IaaS providers while balancing the interests of U.S. chip manufacturers and AI hyperscalers.
The guidance also announced that the Huawei Ascend 910-series chips are presumptively a foreign direct product subject to the EAR, presumptively violative of the EAR, and ultimately subject to General Prohibition 10. Ostensibly, that guidance could have a chilling effect on the purchase of Huawei chips, particularly in countries that wish to align with U.S. policy, and would help U.S. semiconductor manufacturers regain any ground lost to Huawei in those markets.
Striking a Balance in the New AI Rule
Looking to the future, the yet-to-be-seen replacement rule will have to balance the competing interests of a U.S. semiconductor and AI industry that wants to expand freely and globally and the national security concerns of those in government who want to restrict access to advanced semiconductors and AI technology by countries of concern.
For example, U.S. chipmakers will want to continue selling their leading edge GPUs to data centers in Malaysia and India. At the same time, U.S. export policy hawks would want to mitigate the risk of putting immense compute power proximate to, and potentially at the disposal of, PRC AI developers. Additionally, cloud service providers in Southeast Asia will want to be able to sell their services to the largest customer in the region, and would consider using Huawei chips over U.S. alternatives if it meant they could do so. That may mean that BIS cannot put too many restrictions on the region before the chipmakers and hyperscalers begin to voice objections and press to reduce the regulation.
Now that we have thrown away the toaster, selecting a new one—writing a new AI diffusion regulation—will require regulators to walk a narrow line to satisfy the interests of both industry and national security. Those interests are not necessarily opposed to one another, but they may be divergent, and it will be up to the drafters to find a potentially very narrow common ground.
FOOTNOTES
[1] You know. The thing. It’s a technical term in the appliance repair world.
[2] Because that was his name. In fact, it still is.
[3] General Prohibition 10—or GP10 as it is affectionately known around the Sheppard Mullin offices—is a comprehensive prohibition on essentially doing anything with an item, including destroying or moving the item, if it has caused a violation or will cause a violation of the EAR.
Workplace Strategies Watercooler 2025: The AI-Powered Workplace of Today and Tomorrow [Podcast]
In this installment of our Workplace Strategies Watercooler 2025 podcast series, Jenn Betts (shareholder, Pittsburgh), Simon McMenemy (partner, London), and Danielle Ochs (shareholder, San Francisco) discuss the evolving landscape of artificial intelligence (AI) in the workplace and provide an update on the global regulatory frameworks governing AI use. Simon, who is co-chair of Ogletree’s Cybersecurity and Privacy Practice Group, breaks down the four levels of risk and their associated regulations specified in the EU AI Act, which will take effect in August 2026, and the need for employers to prepare now for the Act’s stringent regulations and steep penalties for noncompliance. Jenn and Danielle, who are co-chairs of the Technology Practice Group, discuss the Trump administration’s focus on innovation with limited regulation, as well as the likelihood of state-level regulation.
Copyright Infringement Liability for Generative AI Training Following the Copyright Office’s AI Report and Administrative
When multiple forces act on an object, its direction of motion is determined by the net force, which is the vector sum of all individual forces.
When this happens within our federal government, we call it “interesting times.”
Not unlike other areas of the United States federal government of late, the U.S. Copyright Office has been thrown into turmoil following a stunning sequence of events this past week. As reported in multiple news outlets:
On Thursday, May 8, 2025, President Donald Trump fired Librarian of Congress Carla Hayden, the first woman and the first African American to be librarian of Congress.[i] The Library of Congress is the larger federal agency within which the U.S. Copyright Office resides.
On Friday, May 9, 2025, the U.S. Copyright Office released a “Pre-Publication Version” of the third and final part of its three-part Report on Artificial Intelligence.[ii] This Part 3 of the Report is entitled “Generative AI Training” and makes the case for finding copyright infringement where copyrighted works are used without permission to train generative AI models (more on this below).[iii] The Report was posted to the Copyright and AI landing page of the Copyright Office’s website a day after the firing of the Librarian of Congress and roughly 5 months after the official Publication of Part 2, which is about a month shorter than the interval between Parts 1 and 2 of the Report.[iv]
On Saturday afternoon, May 10, 2025, the Register of Copyrights, Shira Perlmutter, received an email from the White House informing her that her job as Register of Copyrights and Director of the U.S. Copyright Office had been “terminated effective immediately.”[v]
These recent events follow earlier criticism by prominent leaders of technology companies regarding perceived constraints posed by U.S. intellectual property laws on the development of artificial intelligence products and services. For example, on April 13, 2025, Jack Dorsey, co-founder of the companies formerly known as Twitter and Square, posted: “delete all IP law,” to which Elon Musk replied, “I agree.”[vi] More broadly, as lobbying in favor of regulatory relief has increased with the change of administration,[vii] the mood in Washington appears to have shifted from caution to pro-development of the AI industry, as shown by the current President’s repeal[viii] of his predecessor’s sweeping executive order and the more recent attempt by Congressional Republicans to insert into the tax and spending bill a moratorium on state AI legislation.[ix]
Given all of this, interested parties may be left to wonder whether and to what extent they should rely upon the guidance and analysis of the Copyright Office’s AI Report. The question is particularly acute for parties involved in active litigation concerning the question of copyright infringement for generative AI training.
What’s next?
Let’s deal with what we know and leave the political speculation to other sources.
First, IP law is not going away anytime soon. Patent and copyright law are enshrined in the U.S. Constitution.[x] And nobody is calling for the end of all trademarks. We all have a brand, after all, and the ability to control one’s reputation by excluding others from unauthorized use is essential to all businesses. Bold statements aside, the law will continue to evolve, but the need to support innovation through intellectual property rights retains broad recognition by serious people.
Second, Jack Bogle’s words, spoken in another context, seem to best capture the dynamic environment in Washington these days: “nobody knows nothin’.”[xi] Speculation on motive and the future direction of any particular legal issue or policy, including those involving AI, is a risky bet. While a pattern has emerged showing a preference by the administration for prioritizing growth of the AI sector through relaxed legal barriers, ultimately these issues will play out in federal courts where considerations of legal precedent and constitutionality may impose restraints on executive and certain legislative actions.
Third, the U.S. Copyright Office’s AI Report does not carry the force of law. It does signal the Office’s approach to important legal issues within its purview, which approach could theoretically change with the change in leadership. And, more relevantly to the subject matter of this most recent portion of the Report, it can also serve as a roadmap for litigants, persuasive authority for courts, and input to the legislative process.
Fourth, agree with it or not, the Report as written is now in the public domain. Whether or not its authors continue to draw a paycheck from the federal government, and whether their successors write a new chapter or revision, the analysis speaks for itself and has been widely disseminated. A party who ignores this in-depth and well-sourced treatment does so at its own peril.
So, what does the Report say?
Prima Facie Case for Infringement. The Report begins by finding that a prima facie claim for copyright infringement is easily met. In the Office’s view, multiple steps required to produce a dataset useful for generative AI “clearly implicate” the copyright owners’ right to control the reproduction of their works. These steps include the collection and curation of the copyrighted works, their use in training, and deployment of the model.[xii] Less clear, in the Office’s view, is whether or not the output material of the resulting generative AI model (which may, in some cases, look very much like and even compete with the original work) may implicate the copyright owners’ rights to control public display and performance of their works.[xiii]
Fair Use Defense. The Report proceeds with an analysis of the “fair use” defense to copyright infringement, including each of the statutory fair use factors set forth in 17 U.S.C. § 107, which are:
the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
the nature of the copyrighted work;
the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
the effect of the use upon the potential market for or value of the copyrighted work.
After an in-depth consideration of each of the factors,[xiv] as informed by existing legal precedent and the comments received through the Notice of Inquiry (NOI) process that gave rise to the Report, the Copyright Office offers the following somewhat equivocal perspective:
As generative AI involves a spectrum of uses and impacts, it is not possible to prejudge litigation outcomes. The Office expects that some uses of copyrighted works for generative AI training will qualify as fair use, and some will not. On one end of the spectrum, uses for purposes of noncommercial research or analysis that do not enable portions of the works to be reproduced in the outputs are likely to be fair. On the other end, the copying of expressive works from pirate sources in order to generate unrestricted content that competes in the marketplace, when licensing is reasonably available, is unlikely to qualify as fair use. Many uses, however, will fall somewhere in between.[xv]
Recommendation for Licensing. Licensing, the Copyright Office suggests, is a workable solution for both resolving the ambiguity of legal rights and fairly balancing the interests of content creators and AI developers.[xvi] Options suggested in the Report may include the forms of streamlined voluntary approaches already in the market as well as adapted statutory approaches such as compulsory licensing and extended collective licensing (“ECL”).
Final Analysis. Governmental turmoil aside, the Copyright Office’s AI Report, now completed with the delivery of its third installment, provides a solid starting point for litigants, courts, legislators, and businesses to understand the competing viewpoints and legal arguments related to artificial intelligence. This guidance will likely show up in legal briefs in the near future and it may also motivate efforts to address these issues legislatively.
ENDNOTES
[i] “Trump administration fires top copyright official days after firing Librarian of Congress,” Associated Press, May 11, 2025 (last visited May 11, 2025).
[ii] See Copyright Office statement on May 9, 2025 accompanying the posting of Part 3 of its Report on Artificial Intelligence (last visited May 11, 2025).
[iii] U.S. Copyright Office Report on Artificial Intelligence, Part 3: Generative AI Training, Pre-Publication Version, May 2025 (herein, “Copyright Report, Part 3”) (last visited May 11, 2025).
[iv] For an analysis of Parts 1 and 2 of the Copyright Office Report on Artificial Intelligence, see “Charting a Course on AI Policy: the U.S. Copyright Office Speaks!,” Krabacher, April 2, 2025.
[v] “Trump fires top US copyright official,” Politico, May 10, 2025 (based on POLITICO receipt of internal Library of Congress communications (last visited May 11, 2025); “Trump administration fires top copyright official days after firing Librarian of Congress,” Associated Press, May 11, 2025 (last visited May 11, 2025).
[vi] “Jack Dorsey and Elon Musk would like to ‘delete all IP law’,” April 13, 2025, Techcrunch (last visited May 11, 2025).
[vii] See, e.g., “Emboldened by Trump, AI Companies Lobby for Fewer Rules,” New York Times, March 24, 2025 (last visited May 13, 2025).
[viii] Executive Order: Removing Barriers to American Leadership In Artificial Intelligence, January 23, 2025 (last visited May 13, 2025).
[ix] State AI Regulation Ban Tucked Into Republican Tax, Fiscal Bill, Bloomberg, May 12, 2025 (last visited May 13, 2025).
[x] U.S. Const. Article I, Section 8, Clause 8 (the “Patent and Copyright Clause”).
[xi] See, e.g., Jack Bogle interview posted on Sensible Investing YouTube, September 26, 2012: “All You Need To Know About Investing In Three Words” (last visited May 16, 2025).
[xii] Copyright Report, Part 3, pg. 26 – 31.
[xiii] Copyright Report, Part 3, pg. 31.
[xiv] Copyright Report, Part 3, pg. 32 – 74.
[xv] Copyright Report, Part 3, pg. 74.
[xvi] See Copyright Report, Part 3, pg. 103.
Reflections on the FDLI 2025 Annual Conference – Differing Tones, Shared Goals
From “gold standard science” to the biopharma “GNC store,” this year’s Food and Drug Law Institute (FDLI) 2025 Annual Conference, a vital gathering for life sciences professionals held in Washington, DC, on May 15–16, was full of sound bites and featured two standout sessions: Food and Drug Administration (FDA) Commissioner Dr. Martin A. Makary on Day 1 and Congressman Jake Auchincloss (D-Mass.) on Day 2. Their talks, of course, revealed stark differences in approach—Dr. Makary’s forward-looking optimism and Mr. Auchincloss’s calls for concern—yet shared a commitment to advancing innovation and protecting the core of the agency. To be sure, much of what was said (aside from Dr. Makary’s now widely reported-on comment about a new vaccine framework) was not new, but there are a number of industry takeaways when the sessions are viewed together and in the context of the conference itself.
FDA Commissioner Dr. Martin A. Makary: A Push for Integrity
Dr. Makary focused on restoring trust in the “brand” of the FDA and was empathetic to the people who do the everyday work of the agency. He emphasized the difficulty of coming in “after” the reductions in force, but pledged to “restore and rebuild” the culture to the best of his ability. This theme ran throughout Dr. Makary’s remarks. He also stressed agency independence, advocating for policies like limiting advisory committee roles for industry employees to reduce conflicts of interest, but underscored “strong partnerships” with industry in appropriate ways, intimating positive views on user fee programs. Dr. Makary’s vision includes accelerating approvals with randomized controlled trials and real-world evidence, while maintaining product safety. “Gold standard science and common sense,” in his view, is here to stay, and to be clear, according to Dr. Makary, there will be no reorganization of FDA.
Dr. Makary also spoke at length about the new AI initiative at FDA, announced last week and widely publicized, which will aim to incorporate AI tools into application reviews. The example case was the review of the myriad scientific literature appendices in applications, which once took reviewers days to pore through and now takes minutes. Dr. Makary also discussed cloud-based endpoints and industry collaboration to further modernize review processes, particularly for predictive toxicology in drug development. Near the end of the session, Dr. Makary teased a new “framework for vaccine makers,” to come in the following days, without much more context. Overall, Dr. Makary’s focus on transparency and streamlined, evidence-based approvals signals a regulatory environment that values innovation but demands robust data.
Congressman Jake Auchincloss: A Call to Action
On Day 2, Mr. Auchincloss, a member of the Energy and Commerce Committee, took a more critical tone, framing the FDA’s challenges in both a local and global-political context. He dismissed modernization efforts like DOGE as “dumb,” and lamented HHS Secretary Robert F. Kennedy Jr.’s characterization of FDA personnel as industry “sock puppets.” Mr. Auchincloss urged Congress to provide “political top cover” for the FDA—and Dr. Makary—ensuring bipartisan support to shield the agency’s critical work from political interference, focus on science and public trust, and avoid erosion of public sentiment of the agency.
Mr. Auchincloss also highlighted very real and very present global competition, warning that China is “eating our lunch” in life sciences innovation, particularly in preclinical and phase 1 research. A focus on cutting budgets, in his view, is contrary to addressing this challenge. He pushed for increased R&D investment—suggesting 6% of GDP—and further underscored the need for coordinated efforts across various agencies to “build” better than China in life sciences. On rare and pediatric disease topics, Mr. Auchincloss highlighted recent news about CRISPR therapies and projected that the bipartisan Give Kids a Chance Act would pass. Overall, Mr. Auchincloss’s remarks underscore the need to uphold health and human safety through good science in a politically charged landscape while advocating for policies that bolster U.S. competitiveness.
Takeaways and Common Ground
FDLI did a great job at juxtaposing these sessions—but the perspectives, on the whole, were not substantively opposed, and at times, seemed very much aligned. Both spoke supportively about the FDA’s “gold standard” and the importance of the global marketplace advantage that the US currently holds in the life sciences sector. It would not be a stretch to say that each speaker appears to support the notion that a strong FDA is a good thing. Each speaker also recognized that these are critical elements of the agency and should be protected at all costs. Reading between the lines, moreover, it would be fair to say that each speaker shares a view that the user fee process and engagement with industry is important and should not be scrapped, contrary to what has been reported elsewhere at HHS about the user fee systems.
Similarly, there was a mutual focus on innovation—whether it be AI or R&D investment more broadly—though, to be sure, each speaker had his own view on which means of innovation was more important. Even on “efficiency,” the speakers did not seem too far apart. “Modern and efficient,” in Mr. Auchincloss’s words, seemed to match well with Dr. Makary’s view that some change was needed but drastic measures would be counterproductive to the mission.
The FDLI 2025 Conference underscored that while regulatory and political leaders may differ in approach, their endgame—fostering innovation while protecting public health—aligns. That is also a good thing.
“AI Policy Roadmap” Released by AdvaMed to Guide Regulators
On March 14, 2025, AdvaMed, the MedTech Association, released its AI Policy Roadmap (the Roadmap) outlining policy priorities for Congress and the U.S. Food and Drug Administration (FDA). The impetus for the Roadmap was the recognition of the important role that AI-enabled devices will play in improving the accuracy and efficiency of disease diagnosis, enabling higher quality treatments, and expanding access to health care and to innovative technologies. The Roadmap is broken down into three main policy priority areas: privacy and data access, FDA AI regulatory framework, and reimbursement and coverage.
Privacy and Data Access
The Roadmap contends that one component of AI-enabled devices that sets them apart from traditional technology is the need for large datasets to train and validate the algorithms underlying the devices. The need for large datasets creates two distinct challenges.
First, health care data is highly fragmented and generally stored in non-standardized formats. Health care data is not frequently shared across health systems, and there are very few commercial vendors that provide the services necessary to link and standardize this data.
The second significant challenge is the need to protect patient privacy and ensure that data security is prioritized. To this end, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) requires protection of certain types of personal health information, consent to use and/or disclose data, and strict deidentification requirements when personal health information is used. The need for high quality data measured against the need to protect patient privacy creates an inherent tension in policy priorities.
To mitigate this tension, the Roadmap provides three recommendations:
Congress and regulatory agencies such as the FDA should ensure data protection without stifling innovation.
Congress should evaluate the need to update HIPAA for the AI era and create clear guidelines specifically for data use in AI development.
Congress and regulatory agencies should develop appropriate guidelines around patient notice and authorization for the data used to develop AI.
The Roadmap strives to balance the need for a high volume of high quality standardized data with patient privacy by placing modernized consent and notification requirements at the center of the policy priorities. Recognizing the need for large datasets, the Roadmap emphasizes modernizing traditional privacy policies, such as HIPAA, to accommodate data use and collection for AI models.
FDA AI Regulatory Framework
The FDA regulates certain AI-enabled devices for safety and efficacy. However, AI-enabled devices require a different approach than FDA’s “traditional” medical device review model for those devices that undergo changes in an iterative fashion. For approved medical devices that evolve continuously, e.g., AI-enabled devices, developers must submit for FDA review any modification that could significantly affect the product’s safety or effectiveness, consistent with FDA-drafted guidance on the preapproval process for post-market changes – referred to as predetermined change control plans (PCCP). These post-market changes occur as algorithms continue to learn and validate against the data of the populations using the technology. The algorithms then adjust based on continued learning. While Congress passed legislation authorizing PCCP approval in 2022, comprehensive FDA PCCP guidance was only released in December of 2024. The complete pre- and post-market processes for AI-enabled devices are outlined in the FDA’s “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.”
The Roadmap’s recommendations suggest that FDA modernize regulations to align with the increasing shift from traditional medical devices to AI-enabled devices. Specifically, the Roadmap recommends that:
The FDA should remain the lead regulator responsible for overseeing the safety and effectiveness of AI-enabled medical devices.
The FDA should implement the existing PCCP authority to ensure it achieves its intended purpose of ensuring patients have timely access to positive product updates.
The FDA should issue timely and current guidance documents related to AI-enabled devices and prioritize the development and recognition of voluntary international consensus standards.
The FDA should establish a globally harmonized approach to regulatory oversight of AI-enabled devices.
The Roadmap commends progress made by Congress and the FDA to modernize legislative and regulatory processes applicable to AI-enabled devices but urges continued focus on keeping pace with technological innovation. The focus of the policy recommendations is on streamlined, uniform regulations that are not overly burdensome and will not stifle innovation.
Reimbursement and Coverage
Finally, the third policy area addressed in the Roadmap is reimbursement and coverage as a critical component of increasing access to digital health technologies. Currently, reimbursement for AI-enabled devices has been considered on a device-specific basis, leading to incremental policy changes. The Roadmap suggests that Medicare, as the country’s largest health care payor supporting the medical needs of millions of Americans, could be instrumental in shifting this policy position. Further, Medicare policy initiatives heavily influence the coverage policies of private payors and state Medicaid plans. While the Roadmap acknowledges that there is no one single policy solution to increase accessibility to digital health technology through reimbursement, “accurately capturing the cost and value of [AI-enabled devices] is critical to ensuring appropriate reimbursement.”
Toward this end, the Roadmap provides four policy suggestions:
Congress should consider legislative solutions to address the impact of budget neutrality constraints, or restraining Medicare spending to a certain defined threshold, on the coverage and adoption of AI technologies.
The Centers for Medicare & Medicaid Services (CMS) should develop a formalized payment pathway for algorithm-based health care services to ensure future innovation and to protect access to this subset of AI technologies for Medicare beneficiaries.
Congress and the FDA should facilitate the adoption and reimbursement of digital therapeutics through legislation and regulation.
CMS should leverage its authority to test innovative alternative payment models to promote the ability of AI technologies to improve patient care and/or lower costs.
The development and adoption of AI-enabled devices to improve diagnosis, treatment, and patient care will be amplified by the adoption of appropriate reimbursement policies as health care providers and practitioners will be more readily able to learn about and use these health care tools. Sound reimbursement and coverage policies are an integral part of supporting innovation and development of AI-enabled health care devices.
Conclusion
In a recent press release, Scott Whitaker, AdvaMed CEO and President, said about the release of the Roadmap, “The future of AI applications in medtech is vast and bright. It’s also mostly to be determined. We’re in an era of discovery… This is the right time to promote the development of AI-enabled medtech to its fullest potential to serve all patients, regardless of zip code or circumstance.” It is from this position of promoting new technology that AdvaMed urges Congress and the Food and Drug Administration to act in support of the development of AI-enabled medical technology.
Generative AI Training May Not Qualify for the Fair Use Defense
Last week, the Copyright Office released the third and final part of its report exploring copyright-related issues posed by artificial intelligence (AI). Unlike the first two parts, the third was released as a “pre-publication” version. It was published less than a day after Dr. Carla Hayden, the Librarian of Congress, was fired by President Trump and a day before Shira Perlmutter, the Register of Copyrights, was fired by President Trump. Building off its earlier parts, the latest publication focuses on how copyright law and the fair use defense should be applied to companies that use copyrighted works to train AI models.
The report concluded that companies presumptively infringe the copyright protections of others when they copy materials to use in training data. Additionally, the report concluded that a model’s numerical parameters can also be infringing when the model can reproduce a copyrighted work as a memorized example.
However, the fair use defense can permit copying another’s work, and many uses of copyrighted works by an AI model are likely transformative. Even so, the Office concluded that commercializing copyrighted works in training data to compete with the original works is unlikely to fit the fair use exception. Because AI models can rapidly create new works that imitate a creator’s style, the Office concluded that this market dilution weighs against the fair use argument for generative AI companies.
Recognizing that copyrighted works are needed in training data, the report concluded by exploring different licensing frameworks that companies can use to acquire the data. The report does not recommend that the government intervene to establish a licensing regime right away; rather, the market should be allowed to continue to develop.
It is unclear if the Trump administration will rescind the report or issue a final report with changes. However, companies developing AI tools should likely consider the report regardless of the administration’s actions.