Voices on Trial: Voice Actors, AI Cloning, and the Fight for Identity Rights

A New York court just decided some important preliminary motions (which I previously covered here) involving allegedly unauthorized AI cloning of voice actors’ voices. The court reached a split decision, concluding “that, for the most part, Plaintiffs have not stated cognizable claims under federal trademark and copyright law. However, that does not mean they are without a remedy. Rather, claims for misappropriation of a voice, like the ones here, may be properly asserted under Sections 50 and 51 of the New York Civil Rights Law [which protect name, image and likeness], which, unlike copyright and trademark law, are tailored to balance the unique interests at stake. Plaintiffs also adequately state claims under state consumer protection law and for ordinary breach of contract.”
The court commented on the uniqueness and significance of this case, stating: “The case involves a number of difficult questions, some of first impression. It also carries potentially weighty consequences not only for voice actors, but also for the burgeoning AI industry, other holders and users of intellectual property, and ordinary citizens who may fear the loss of dominion over their own identities.”
This ruling signals the challenges that may arise for others whose voices are AI-cloned. The problem is that there is no federal right of publicity protecting a person’s name, image and likeness (NIL). This court’s dismissal of the federal trademark and copyright claims, if followed by other courts, will limit plaintiffs’ NIL claims to those under state law, such as the claims here under Sections 50 and 51 of the New York Civil Rights Law. However, not every state has a right of publicity law.

AI vs. Authors: Two California Judges, Two Directions and More Uncertainty on Fair Use and Copyright

Key Takeaways

Courts Lean Toward Fair Use for AI Training: Two California rulings suggest that using copyrighted works to train artificial intelligence (AI) may be considered fair use if outputs are transformative and do not replicate the original content.
Pirated Libraries Raise Legal Risks: While courts accepted some use of pirated works for transformative AI purposes, they strongly criticized maintaining central libraries and training AI with pirated content for non-transformative purposes, signaling potential legal vulnerability.
Legal Uncertainty Remains: With no clear precedent or updated legislation, both authors and tech companies face ongoing uncertainty; future guidance will likely need to come from Congress or the Supreme Court.

The rapid advancement of AI large language models (LLMs) depends heavily on ingesting vast amounts of textual data, often scraped from books, websites and online libraries. Two recent rulings from the United States District Court for the Northern District of California delivered diverging views on one of the most disputed issues in AI technology: can tech companies use copyrighted books to train AI models without permission or a license from the authors of those books?
The two cases, Bartz v. Anthropic PBC and Kadrey v. Meta Platforms, Inc., involved authors suing AI tech companies that trained their LLMs on high volumes of copyrighted works sourced from pirated or shadow libraries. At the center of both courts’ analyses of the defendants’ fair use defense was the “transformative use” test. That test considers whether a secondary work significantly alters the original copyrighted work by adding something new, such as a new meaning, expression, or message, as opposed to merely copying the work. Essentially, if a copyrighted work is repurposed in a way that adds new meaning or function without affecting the market for the original work, the use may be considered fair use and not a violation of the Copyright Act.
In regard to ingesting and using a high volume of copyrighted works (in the millions) to train AI models, both courts appeared more forgiving of such training when the results included (1) the generation of textual outputs that did not infringe or substitute for the original copyrighted works and (2) the development of innovative tools such as language translators, email editors and creative assistants. Both courts found such use — even if the copyrighted works were sourced from pirated or shadow libraries — to be transformative, constituting fair use.
However, both courts were also concerned about the creation of central libraries and the training of AI on pirated works. The Bartz court criticized Anthropic for retaining a subset of pirated books in its central library despite having concluded that those works would never be used to train LLMs, and found that such conduct did not constitute fair use. The Kadrey court strongly suggested in dicta that, where training LLMs does not constitute fair use, AI developers will still find a way to license copyrighted works – especially given how beneficial such training is, as emphasized by Meta.
At a minimum, these two decisions show that courts are willing to consider LLM training fair use under certain conditions. Both decisions focused on the transformative nature of LLM training, which involves extensive steps such as statistical analysis, text mapping and other computational processing rather than mere reproduction of the works. Both decisions also noted the limited ability of the LLMs to produce more than snippets of the works in their outputs. However, both also focused on the pirated nature of the libraries and AI training and held (or indicated) that such conduct, when done for non-transformative purposes, would not be fair use. The lack of unified precedent or legislative guidance means future decisions in other jurisdictions may not be uniform until the issue is resolved at the appellate level, likely by the Supreme Court. Congress will likely face renewed pressure from both creators and AI tech companies to update copyright law to address AI, as the current Copyright Act is not equipped to do so.
Until there is guiding precedent or legislative action, tech companies and copyright holders alike are watching closely. Authors should ensure their works have copyright protection (timely registration is generally a prerequisite for statutory damages). For tech companies, the consequences of infringement are high, as the statutory damages provided by the Copyright Act can be significant. To avoid this exposure, costly litigation and reputational risk, AI companies should focus both on the manner in which they create their libraries and on their LLMs’ outputs. To lessen scrutiny, LLM libraries should not be built from unlicensed or pirated works, as both decisions sharply criticized creating large libraries from pirated works for non-transformative uses. By contrast, outputs based on text mapping, statistical analysis and other computational functions that do not reproduce significant portions of a work appear more likely to constitute fair use. Tech companies should consider whether any licenses for library creation will also cover LLM outputs, though given the vast amount of information needed to create an LLM library, this may be impractical or cost prohibitive. To cut down on logistics and licensing costs, AI companies should work with already established databases and libraries when sourcing materials to train their LLMs.
In the end, these decisions highlight the old adage that the only certainty about litigation is uncertainty. Until we have guiding precedent or legislative action, both authors and AI companies remain uncertain as to the scope of their rights regarding the use of works in the AI community.

Court Sanctions Attorneys for Submitting Brief with AI-Generated False Citations

Highlights

Federal Rule of Civil Procedure 11 requires attorneys to verify the accuracy of court filings — including those prepared or supplemented with AI tools. 
All outputs from AI used in legal research must be closely scrutinized to ensure their accuracy. 
Failure to verify AI outputs used in court filings can lead to monetary or other sanctions. 

The U.S. District Court for the District of Colorado has sanctioned two attorneys for submitting a brief containing “nearly thirty defective citations” that were generated by artificial intelligence.
According to the court, the brief:

Misquoted cited cases
Misstated the holdings of cited cases
Cited cases for legal principles not discussed in those cases
Misidentified the court that issued the cited case
Cited non-existent cases

Following submission of the offending brief, the court issued an order to show cause as to why the attorneys should not be sanctioned. In their response, the attorneys advanced several arguments in an attempt to demonstrate their diligence in preparing the offending brief.
They claimed that while the brief was initially drafted without the use of AI, they later employed a legal research AI tool to identify additional or stronger authority to support their arguments. The attorneys argued that they went through the final brief, which included the outputs from the AI tool, to conduct a thorough citation check.
Yet the version of the brief that was filed still contained numerous issues, apparently caused by the use of the AI tool. The attorneys blamed human error for these issues: they had inadvertently filed a prior version of the brief rather than the final version that was fully cite-checked. The court found, however, that the “correct” version identified by counsel — including references to non-existent cases and misidentified courts — was “replete with the same errors” as the filed version.
The court took issue with the apparent contradictions in the attorneys’ arguments. On the one hand, the attorneys stated that they did not “rel[y] on AI legal research and had prepared a thoroughly cite-checked final document.” On the other hand, the attorneys stated that they had used an AI tool to supplement their legal research.
The court also took judicial notice that the attorneys had submitted briefs with similar false citations in a different case around the same time that the filings at issue in this case were submitted. The court held that the repetition of these errors in multiple cases demonstrates counsel’s practice of utilizing AI to conduct legal research without verifying the outputs of the AI model.
In its opinion, the court emphasized that the use of AI is governed by Federal Rule of Civil Procedure 11, which requires certification that any filed materials are “warranted by existing law or by a nonfrivolous argument for extending…existing law.” It is the obligation of the attorney submitting any filing to ensure the accuracy of the submission.
In considering an appropriate sanction, the court weighed the purposes of sanctions under Rule 11:

Deterring future litigation abuse
Punishing present litigation abuse
Compensating victims of litigation abuse
Streamlining court dockets and facilitating case management

Under these principles, the court imposed a $3,000 sanction against both attorneys who signed the offending brief, payable to the district court.
Conclusion 
While AI tools can assist in conducting legal research, their outputs must be thoroughly verified. To comply with the requirements of candor and accuracy imposed by Rule 11, any court submission should be reviewed for accuracy. Attorneys who fail to do so run the risk of incurring personal sanctions or sanctions against their clients.

Liability of AI Platforms for Copyright Infringement: What Every Business Should Know Before Using Generative AI

A recent decision from a New York federal district court provides important guidance on AI platforms’ (and your) potential liability when content generated by these platforms infringes third-party copyrights. This decision should inform every business’s use of AI, particularly generative AI, in creating marketing materials and other content.

This post summarizes the decision and concludes with important takeaways and action items for every business using or planning to use AI to create content.
The New York District Court Decision
In The New York Times v. Microsoft Corporation, a New York federal district court found that OpenAI (in which Microsoft is a major investor) could be liable for “contributory” infringement as a result of third parties generating “outputs” from OpenAI’s tools that allegedly infringed copyrighted content owned by The New York Times (the “Times”). OpenAI’s tools are built on “large language models” (“LLMs”) which, as the court noted in its decision, “can receive text prompts as inputs by users and generate natural language responses as outputs, which result from the LLM’s prediction of the most likely string of text to follow the inputted string of text based on its training on billions of written works.”
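To make the court’s description concrete, here is a minimal, purely illustrative sketch of “predicting the most likely string of text to follow the inputted string,” using simple word-pair (bigram) counts over a made-up corpus. It is not how OpenAI’s models actually work (real LLMs use neural networks trained on billions of written works), and every name and value in it is an invented assumption.

```python
# Toy "language model": predicts the most likely next word from bigram counts.
# Purely illustrative; real LLMs use neural networks trained on billions of works.
from collections import Counter, defaultdict

corpus = "the court found that the court denied the motion".split()  # invented text

# Count which word follows each word in the training text.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most likely to follow `word`, based on the counts."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("the"))    # -> "court" (the word seen most often after "the")
print(predict_next("judge"))  # -> "<unknown>" (never appears in the toy corpus)
```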

The Daily News, The Center for Investigative Reporting, and the Times (“Plaintiffs”) argued that when OpenAI users input prompts into the platform, they generate text that is substantially similar to (and therefore infringes) the Plaintiffs’ copyrighted material. That would make those users potentially liable for “direct” infringement. Plaintiffs also claimed that OpenAI could be held liable for “contributory” infringement because it allegedly “materially contributed to and directly assisted with the direct infringement by [its] end users” by building its AI models and training them on copyrighted content owned by the Plaintiffs; deciding what content the models output through specific training techniques; and developing AI models capable of distributing the copyrighted content to end users without the permission of the Plaintiffs, who own the copyrights.

The defendants, who comprise Microsoft and multiple OpenAI entities, claimed that:

there was no direct infringement by users (a predicate to contributory infringement); and
defendants did not contributorily infringe because they did not know of third-party infringement (by OpenAI users).

Acknowledging a split among the circuit courts, the court said actual knowledge was not necessary to find OpenAI contributorily liable for its users’ copyright infringement. Instead, it determined that in the Second Circuit, where it sits, the standard is whether the defendant investigated or had reason to investigate the infringement. The court then found that defendants might be found to have knowledge based on “widely publicized” instances of copyright infringement following the release of their LLM-based products, including ChatGPT, Browse with Bing, and Bing Chat. Additionally, Plaintiffs provided multiple examples of infringing outputs in their Complaint. The Court therefore found that additional instances of third-party infringement could come to light during the fact-finding portion of the case.
Accordingly, the Court concluded that third-party infringement had been adequately alleged. The Court next found that defendants could be found to have had “constructive, if not actual, knowledge” of this end-user infringement. In addition to the widely publicized infringements, the Court looked to statements made by OpenAI representatives about internal company disagreements regarding copyright issues. The Times also informed defendants that “their [defendants’] tools infringed its copyrighted works.” Accordingly, defendants “at a minimum had reason to investigate and uncover end-user infringement.” Finally, the Court found that the defendants’ LLMs could be found to have facilitated the third-party infringement, and the fact that the LLMs were capable of substantial non-infringing uses did not relieve defendants from liability.
Important Takeaways and Action Items
Unlike contributory infringement, which requires actual or constructive knowledge, direct infringement does not: a user who generates infringing AI outputs need not be aware of the copyright status of third-party content, or even that an AI output has copied copyrighted content, to face liability. Because of the risk that a business may inadvertently commit copyright infringement by generating outputs from AI in the course of advertising or promoting its goods and services, we recommend engaging counsel to take, at a minimum, the following actions:

draft an AI Policy applicable to all employees, and incorporate it into employee manuals;
advise how to minimize legal risk to employers and employees when using AI to generate content;
draft contracts commissioning the creation of AI tools for proprietary use, which, if due diligence is conducted, can minimize exposure to the business for copyright infringement;
draft agreements with marketing and advertising agencies including the use of AI tools by the agency;
counsel as to how to conduct due diligence in selecting an AI tool to minimize legal risk, including close attention to the terms of use for the tool;
counsel as to how to protect work generated with AI; and
for companies that are located in or operate in the EU, engage US counsel experienced in guiding EU agents on the special considerations and laws that apply to the use of AI there.

Lawsuits such as The New York Times v. Microsoft Corporation bring to the forefront the need to thoughtfully deploy AI. Before producing any materials with generative AI, it is important now more than ever to consult an intellectual property attorney fluent in the legal implications of AI to ensure that your content is not inadvertently infringing.
¹The Court did not reach the question of whether the training of the AI platform was an unlawful reproduction under the copyright law.

Lone Star State: How Texas Is Pioneering President Trump’s AI Agenda

On June 22, 2025, Texas Governor Greg Abbott signed into law the Texas Responsible Artificial Intelligence Governance Act (TRAIGA or the Act).
The Act, which goes into effect January 1, 2026, “seeks to protect public safety, individual rights, and privacy while encouraging the safe advancement of AI technology in Texas.”
Formerly known as HB 149, the Act requires a government agency to disclose to consumers that they are interacting with AI—no matter how obvious this might appear—through plain language, clear and conspicuous wording requirements, and more. The same disclosure requirement also applies to providers of health care services or treatment, when the service or treatment is first provided or, in cases of emergency, as soon as reasonably possible.
The Act further prohibits the development or deployment of AI systems intended for behavioral manipulation, including AI intended to encourage people to harm themselves, harm others, or engage in criminal activity (see a post by our colleagues on Utah’s regulation of mental health chatbots).
TRAIGA forbids, under certain conditions, the governmental use and deployment of AI to evaluate natural persons based on social behavior or personal characteristics (social scoring), as well as the governmental development or deployment of AI systems for the purpose of uniquely identifying individuals using biometric data. Notably and broadly, the law prohibits the development or deployment of AI systems by “a person”

with the sole intent of producing or distributing child pornography, unlawful deepfake videos or images, certain sexually explicit content, etc.;
with the intent to unlawfully discriminate against a protected class in violation of state or federal law; and
with the sole intent of infringing on constitutional rights.

This broad coverage would perforce include employers and other organizations using AI tools or systems in both the public and private sectors.
Legislative History of TRAIGA
The original draft of TRAIGA (Original Bill), introduced in December 2024 by State Representative Giovanni Capriglione, was on track to be the nation’s most comprehensive piece of AI legislation. The Original Bill was modeled after the Colorado AI Act and the EU AI Act, focusing on “high-risk” AI systems (see our colleagues’ blog post on Colorado’s historic law). Texas would have imposed significant requirements on developers and deployers of AI systems, including duties to protect consumers from foreseeable harm, conduct impact assessments, and disclose details of high-risk AI to consumers.
In response to feedback and the impact of the Trump administration’s push for innovation—along with a loosening of regulation— Representative Capriglione and the Texas legislature introduced a significantly pared back version of TRAIGA, known as HB 149, in March 2025. HB 149 was passed by the Texas House of Representatives in April and by the Texas State Senate in May, before Governor Abbott signed it into law in June 2025.
Current Version
The Act no longer mentions high-risk AI systems. It focuses primarily on AI systems developed or deployed by government entities, though, as noted above, some disclosure requirements apply to health care entities and some prohibitions remain as to developers and deployers.
Unlike the Original Bill, the Act does not require private entities to conduct impact assessments, implement risk management policies, or disclose to consumers when they are interacting with AI. The Act also restricts its prohibition of social scoring to government entities. The Act explicitly states that disparate impact is not enough to impose liability for unlawful discrimination against individuals in state or federal protected classes. The latter provision clearly stems from Trump policy goals discouraging, if not prohibiting, disparate impact as an indicator of illicit discrimination (see our April Insight on this topic).
The Act establishes an AI Advisory Council, composed of seven members appointed by the Governor, Lieutenant Governor, and Speaker of the House. The Council will assist the state legislature and state agencies by identifying and recommending AI policy and legal reforms. It will also conduct AI training programs for state agencies and local governments. The Council is explicitly prohibited, however, from promulgating binding rules and regulations itself.
The Act vests sole enforcement authority with the Texas Attorney General (AG), except to the extent that state agencies may impose sanctions under certain conditions if recommended by the AG. The Act explicitly provides no private right of action for individuals. Under the Act, the AG is required to develop a reporting mechanism for consumer complaints of potential violations. The AG may then issue a civil investigative demand to request information, including requesting a detailed description of the AI system.
After receiving notice of the violation from the AG, a party has 60 days to cure, after which the AG may bring legal action and seek civil penalties for uncured violations. Curable violations are subject to a fine of $10,000 to $12,000 per violation. Uncurable violations are subject to a fine of $80,000 to $200,000 per violation. Continuing violations are subject to a fine of $40,000 per day. The Act also gives state agencies the authority to sanction parties licensed by that agency by revoking or suspending their licenses, or by imposing monetary penalties of up to $100,000.
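For a rough sense of how these figures can stack up, the back-of-the-envelope sketch below applies the Act’s stated penalty ranges to hypothetical violation counts; the counts are assumptions made up for illustration, not drawn from the Act or any enforcement action.

```python
# Hypothetical exposure math using TRAIGA's stated penalty ranges.
# The violation counts below are illustrative assumptions, not real figures.
CURABLE_RANGE = (10_000, 12_000)      # per uncured curable violation
UNCURABLE_RANGE = (80_000, 200_000)   # per uncurable violation
CONTINUING_PER_DAY = 40_000           # per day of a continuing violation

curable, uncurable, continuing_days = 5, 2, 30  # assumed counts

low = (curable * CURABLE_RANGE[0]
       + uncurable * UNCURABLE_RANGE[0]
       + continuing_days * CONTINUING_PER_DAY)
high = (curable * CURABLE_RANGE[1]
        + uncurable * UNCURABLE_RANGE[1]
        + continuing_days * CONTINUING_PER_DAY)

print(f"Estimated exposure: ${low:,} to ${high:,}")
# -> Estimated exposure: $1,410,000 to $1,660,000
```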
AI Regulatory Sandbox Program Under TRAIGA
Perhaps most notably, the final version of TRAIGA establishes a “regulatory sandbox” exception program (the “Program”) to encourage AI innovation. The Program will be administered by the Texas Department of Information Resources (DIR) and is designed to support the testing and development of AI systems under relaxed regulatory constraints.
Program applicants must provide a detailed description of the AI system, including

the benefits and impacts the AI system will have on consumers, privacy, and public safety;
mitigation plans in case of adverse consequences during testing; and
proof of compliance with federal AI laws and regulations.

Participants must submit quarterly reports to DIR, which DIR will use to submit annual reports to the Texas legislature with recommendations for future legislation. Quarterly reports will include performance metrics, updates on how the AI system mitigates risk, and feedback from consumers and stakeholders. Participants will have 36 months to test and develop their AI systems, during which time the Texas AG cannot file charges and state agencies cannot pursue punitive action for violating the state laws and regulations waived under TRAIGA.
TRAIGA is neither the first nor only AI legislation to establish a regulatory sandbox program—described by a 2023 report of the Organisation for Economic Co-operation and Development (OECD) as where “authorities engage firms to test innovated products or services that challenge existing legal frameworks” and where “participating firms obtain a waiver from specific legal provisions or compliance processes to innovate.” Regulatory sandboxes in fact existed before the widespread application of AI systems; the term is widely credited to the UK Financial Conduct Authority (FCA), which introduced the concept as part of its “Project Innovate” in 2014 to encourage innovation in the fintech sector. Project Innovate’s regulatory sandbox launched in 2016 to create a controlled environment for businesses to test new financial products and services.
Regarding AI, Article 57 of the European Union’s AI Act mandates that member states must establish at least one AI regulatory sandbox at the national level, which must be operational by August 2, 2026. This Article also explains the purpose and goal for regulatory sandboxes: to provide a controlled environment to foster innovation and facilitate the development, training, testing, and validation of AI systems, before they are put on the market or into service.
Pending AI bills in several other US states would, if enacted, establish their own AI regulatory sandboxes. Connecticut has a bill (CTSB 2) that, if enacted, would establish various requirements concerning AI systems, including establishing an AI regulatory sandbox program. The Bill passed the State Senate on May 14, 2025, and is currently with the House.
Delaware’s House Joint Resolution 7 would, if enacted, direct an Artificial Intelligence Commission to work in collaboration with the Secretary of State to create a regulatory sandbox framework. The bill recognizes that “other states and nations are using regulatory sandboxes, frameworks set up by regulators in which companies are exempt from the legal risk of certain regulations under the supervision of regulators, to test innovative and novel products, services, and technologies.” HJR 7 passed both the House and Senate and awaits action by the Governor.
Oklahoma’s bill (HB 1916) was introduced on February 3, 2025. The bill calls for a new law to be codified, the Responsible Deployment of AI Systems Act. The Act would establish an AI Council to, among other things, oversee the newly established AI Regulatory Sandbox Program, which will “provide a controlled environment for deployers to test innovative AI systems when ensuring compliance with ethical and safety standards.”
Future Developments
Texas enacted TRAIGA against the backdrop of a proposed 10-year federal moratorium on state governments’ ability to enact and enforce legislation regulating some applications of AI systems or automated decision systems. The proposed moratorium was part of President Trump’s comprehensive domestic policy bill, referred to as the “big, beautiful bill.” However, on July 1, 2025, the U.S. Senate voted nearly unanimously—99 to 1—in favor of removing the moratorium from the bill before it passed later that day.
Some predict its return, at least in some form. For now, the White House’s AI Action Plan, slated for release in July 2025, should put federal-level AI right back in the headlines. Executive Order 14179 of January 23, 2025, “Removing Barriers to American Leadership in Artificial Intelligence,” called for the submission of such a plan within 180 days—to be developed by the assistant to the president for Science and Technology (APST), the special advisor for AI and Crypto, the assistant to the president for National Security Affairs (APNSA), and more. In February, the White House issued a Request for Information (RFI) seeking public comment on policy ideas for the AI Action Plan, designed to “define priority policy actions to enhance America’s position as an AI powerhouse and prevent unnecessarily burdensome requirements from hindering private sector innovation.” By late April, the Office of Science and Technology Policy (OSTP) reported that more than 10,000 public comments had been received from interested parties including academia, industry groups, private sector organizations, and state, local, and tribal governments.

Epstein Becker Green Staff Attorney Ann W. Parks contributed to the preparation of this post.

The Patent Dilemma: Navigating AI Invention Rights in Australia’s Regulatory Framework

Artificial intelligence (AI) is transforming industries, from streamlining logistics to revolutionising healthcare and beyond. Today, AI can single-handedly manage your entire inbox or run your entire customer service operation. With AI dominating the race for superior technology, the world can’t help but regard it as the “Fourth Industrial Revolution” and embrace it in all its glory. As […]

The BR Privacy & Security Download: July 2025

STATE & LOCAL LAWS & REGULATIONS
Vermont Governor Signs Vermont Kids Code Into Law: Vermont Governor Phil Scott signed SB 69, the Vermont Kids Code, into law. The Vermont Kids Code imposes privacy and safety requirements on businesses offering online services likely to be accessed by minors under the age of 18. Covered businesses must use age-assurance methods to verify users’ ages and configure default privacy settings to the highest level for minors. The law prohibits displaying minors’ accounts or content to adults without explicit consent, restricts adult interactions with minors on social media, and bans push notifications to minors, especially overnight. It also limits the collection, use, and retention of minors’ personal data to what is strictly necessary, mandates clear privacy disclosures, and requires a mechanism for prompt account deletion. The Vermont Kids Code will become effective on January 1, 2027. The Vermont Kids Code will likely face legal challenges, as similar laws passed in Maryland and California have been successfully challenged on First Amendment grounds.
New Jersey’s Attorney General Publishes Regulations for the New Jersey Data Privacy Act: The New Jersey Attorney General published proposed rules implementing the New Jersey Data Privacy Act (“NJDPA”). The proposed rules, among other things, provide content requirements for privacy notices and requirements for obtaining and documenting consent, including for processing sensitive data and the personal data of children between the ages of 13 and 17 for purposes of selling the data, targeted advertising, and/or profiling in furtherance of decisions that produce legal or similarly significant effects. The proposed rules also require controllers to provide user-friendly mechanisms for exercising data rights and prohibit the use of dark patterns. Additionally, the proposed rules include data security, data minimization, and recordkeeping requirements, and specify what must be included in data protection assessments for high-risk processing activities. The proposed rules further provide the framework for universal opt-out mechanisms and special rules around loyalty programs and profiling. The public comment period for the proposed rules will end on August 1, 2025.
Texas Passes AI Law: Texas Governor Greg Abbott signed HB 149, the Texas Responsible Artificial Intelligence Governance Act (“TRAIGA” or “the Act”), into law. TRAIGA, effective January 1, 2026, imposes certain restrictions regarding artificial intelligence (“AI”) system development and deployment. TRAIGA categorically prohibits AI systems intended for behavioral manipulation, unlawful discrimination, infringement of constitutional rights, and the creation or distribution of child pornography or unlawful deepfakes. The Texas Attorney General is responsible for enforcing TRAIGA and can impose substantial penalties for violations, with fines ranging from $10,000 to $200,000 per violation and up to $40,000 per day for ongoing violations. The Act provides a 60-day cure period for violators and offers affirmative defenses for self-identified and remediated violations, especially if compliant with frameworks like the National Institute of Standards and Technology’s (“NIST”) AI Risk Management Framework. TRAIGA also establishes a regulatory sandbox for AI innovation and an advisory council to guide state AI policy, though the council cannot issue binding regulations.
Connecticut Passes Amendment to the Connecticut Data Privacy Act: The Connecticut Legislature has passed amendments to the Connecticut Data Privacy Act (“CTDPA”). The amendments lower the applicability thresholds, making the CTDPA applicable to entities that control or process personal data of at least 35,000 Connecticut consumers, offer personal data for sale, or control or process sensitive data unless solely for payment transactions. The definition of sensitive data now includes disability, nonbinary or transgender status, neural data, and certain financial and government ID numbers. The amendments also remove the entity-level exemption for GLBA-regulated entities, replacing it with specific exemptions for financial and insurance institutions, and add a political activities exemption. Consumer rights are broadened to include explicit access to inferences and profiling information, and the right to obtain a list of third parties to whom data was sold. Requirements regarding data minimization, privacy notice, processing minors’ personal data, and secondary processing have been strengthened, and profiling opt-out rights have been expanded.
California Senate Passes Amendments to California Invasion of Privacy Act (“CIPA”): The California Senate has passed SB 690, which would amend the California Invasion of Privacy Act (“CIPA”) to significantly limit lawsuits under CIPA against businesses using standard online technologies. Over the last several years, CIPA, originally enacted to address wiretapping and eavesdropping, has been repurposed by plaintiffs’ attorneys to target businesses for their use of tracking technologies such as cookies, pixels, chatbots, and session replay tools on their websites. SB 690 proposes to address this problem by introducing exemptions for activities conducted for “commercial business purposes” from several core CIPA provisions. Perhaps most significantly, SB 690 would bar private lawsuits for the processing of personal information for a commercial business purpose, effectively eliminating the private right of action for a wide range of CIPA claims related to online business activities. For additional information on SB 690, please see Blank Rome’s Client Alert on this bill here.
Utah Enacts Three AI Laws: Utah Governor Spencer J. Cox signed three AI bills into law. SB 226 amends Utah’s AI disclosure law, which requires businesses to inform users that they are interacting with AI. While the law previously applied broadly to entities doing business in Utah, SB 226 amends the law to only apply when users engage in “high-risk artificial intelligence interactions,” which involve the collection of sensitive personal information or give recommendations or advice that users may rely on for significant decisions. HB 452 requires similar disclosures to those required in SB 226, but for providers of mental health chatbots that use generative AI. HB 452 also sets forth data protection requirements and restrictions on advertisements. SB 271 amends Utah’s Abuse of Personal Identity Act, which prohibits using an individual’s identity to imply endorsement or approval of an advertisement without consent, to apply to the imitation of an individual’s identity through generative AI, and other technological means.
Colorado Attorney General Announces Colorado Privacy Act Rulemaking for Children’s Data: The Colorado Attorney General has announced rulemaking to implement the Colorado Privacy Act (“CPA”) with respect to the personal data of minors under the age of 18. The CPA was amended by S.B. 24-041 to add enhanced protections when processing the personal data of minors, including requiring consent to: (1) process minors’ personal data for the purpose of targeted advertising, sale, or profiling; (2) use any feature to significantly increase, sustain, or extend a minor’s use of the covered controller’s online service; or (3) collect minors’ precise geolocation, except in certain instances. The Colorado Attorney General is considering amendments to the CPA’s implementing rules to clarify and enact these amendments. As part of that process, the Colorado Attorney General is accepting public input on targeted pre-rulemaking questions.
Texas Passes Changes to Telemarketing Law: Texas Governor Greg Abbott signed SB 140 into law, which will dramatically expand telemarketing regulations in the state. SB 140 broadens the definition of “telephone call” and “telephone solicitation” to include text messages, image messages, virtually any other transmission intended to sell goods or services, and traditional voice calls. SB 140 subjects Short Message Service (“SMS”), Multimedia Messaging Service (“MMS”), or similar marketing campaigns to the same strict standards as voice calls. SB 140 also introduces a private right of action under the Texas Deceptive Trade Practices and Consumer Protection Act. Statutory damages range from $500 to $5,000 per violation. The new telemarketing requirements and expanded enforcement provisions will take effect on September 1, 2025. For more information on SB 140, please see Blank Rome’s Client Alert on the bill here.

FEDERAL LAWS & REGULATIONS
Senate Removes AI Law Moratorium from Consideration: The Senate voted 99-1 to remove from the federal budget bill the proposed moratorium on state and local government AI legislation. The moratorium, originally proposed as a complete 10-year ban on AI law enactment and enforcement, had been reduced to five years with exceptions for children’s online safety. The proposal also tied compliance with the ban to the ability to receive federal broadband funding. The moratorium had faced growing opposition from a bipartisan group of state regulators and legislators, with 40 state attorneys general writing in opposition of the proposal in May, and a group of 260 state lawmakers urging Congress to drop the AI preemption proposal in June.
Trump Issues Cybersecurity Executive Order Revoking Parts of Biden and Obama-era Executive Orders: The Trump administration issued an Executive Order that revokes “problematic elements” of Obama and Biden-era Executive Orders, including portions of a Biden administration executive order promoting federal digital identity initiatives by encouraging the use of digital ID documents. The Trump Order also revokes requirements that software vendors must attest to secure development guidelines created by the National Institute of Standards and Technology. The Trump Order now emphasizes collaboration by directing NIST to work with the software industry to develop practical guidance on secure software development and to update relevant frameworks. Additionally, the Trump Order directs department and agency-level actions on post-quantum cryptography to ensure protection against threats that may leverage next-generation compute architectures.
NIST Releases Zero Trust Architecture Guidance: NIST released newly finalized guidance entitled “Implementing a Zero Trust Architecture” (NIST Special Publication (SP) 1800-35) that provides 19 example implementations of zero trust architectures using commercial, off-the-shelf technologies. The guidance uses these examples to show organizations how to implement zero trust architecture. The new guidance augments NIST’s 2020 publication Zero Trust Architecture (NIST SP 800-207), a high-level document that describes zero trust at the conceptual level. While the earlier publication discussed how to deploy zero trust architecture and offered models, the new publication is intended to provide users with more guidance in addressing their own needs.
FTC Issues FAQ on Safeguards Rule to Automobile Dealers: The Federal Trade Commission released an FAQ to help automobile dealers comply with the Gramm-Leach-Bliley Act and the FTC’s Safeguards Rule. The FAQ provides answers to both high-level questions about the general scope and requirements of the Safeguards Rule, as well as questions intended to address automobile dealer-specific situations, such as whether an automobile dealer may send vehicle Original Equipment Manufacturer (“OEM”) customer lists and retail delivery reports.

U.S. LITIGATION
Supreme Court Empowers District Courts to Challenge FCC TCPA Interpretations: The Supreme Court issued a decision in McLaughlin Chiropractic Associates, Inc. v. McKesson Corporation, which may fundamentally alter the landscape for businesses subject to the Telephone Consumer Protection Act (“TCPA”). In a 6–3 ruling, the Court held that district courts are not required to follow the Federal Communications Commission’s (“FCC”) interpretations of the TCPA in enforcement proceedings, unless Congress has expressly stated otherwise. This marks a significant departure from the longstanding practice in many jurisdictions, where district courts treated FCC orders as binding in TCPA litigation. See Blank Rome’s Client Alert here for an in-depth analysis of this decision.
Court Rules Use of Copyrighted Works to Train AI Is Fair Use: Judge William Alsup of the Northern District of California ruled that Anthropic’s use of books to train its large language model for the purpose of creating new text outputs is fair use of those works. Anthropic used millions of copyrighted books to train its Claude large language models. As part of that training, Anthropic compiled a collection of millions of books in a “central library.” The Anthropic library contained both purchased and pirated content. Judge Alsup concluded that use of the books at issue to train Anthropic’s AI was “exceedingly transformative” and a fair use under Section 107 of the U.S. Copyright Act. Specifically, the Court noted that authors cannot exclude others from using their works to learn, noting that, for centuries, people have read and re-read books. The Court also stated that the training was for the purpose of creating something different, not to supplant the work. The Court also held that the digitization of purchased books in the library was also fair use, but that the use of pirated copies was not. The case marks a significant win for AI developers. 
Supreme Court Upholds Texas Law Requiring Age Verification for Websites with Sexually Explicit Content: In Free Speech Coalition et al. v. Ken Paxton, the U.S. Supreme Court upheld a Texas law requiring pornographic websites to conduct age checks on visitors. The Texas law requires entities that publish or distribute material on a website, more than one-third of which is “sexual material harmful to minors,” to verify that visitors are 18 years of age or older. The Court held that the law is subject to intermediate scrutiny under the First Amendment and determined that the law survived that level of scrutiny. Justice Thomas wrote for the majority that “Adults have no First Amendment right to avoid age verification, and the statute can readily be understood as an effort to restrict minors’ access,” and that “Any burden experienced by adults is therefore only incidental to the statute’s regulation of activity that is not protected by the First Amendment.” The Texas law requires users of such covered websites to verify their age by either (1) providing digital identification (i.e., information stored on a digital network that serves as proof of the individual’s identity) or (2) complying with a commercial age verification system that verifies age using either government-issued identification or transactional data.
Florida Court Blocks Enforcement of Florida Law Restricting Children’s Access to Social Media: U.S. District Court Judge Mark E. Walker blocked enforcement of a Florida law that would ban children 13 and under and restrict 14- and 15-year-olds from social media. The challenge was brought by technology industry associations, NetChoice and the Computer and Communications Industry Association, on the basis that the law violated the First Amendment. The Court found that the groups were substantially likely to succeed on their First Amendment challenge. The court stated, “Each application of the law burdens substantially more protected speech than necessary because it imposes the same sweeping burden on the rights of youth under 16 despite the availability, in each case, of substantially less burdensome alternatives.” NetChoice has successfully challenged similar laws in Utah, Arkansas, California, Mississippi, Ohio, and Texas. The Florida Attorney General filed notice of its intention to appeal the ruling to the United States Court of Appeals for the Eleventh Circuit. 
23andMe Founder’s Bid Beats Out Regeneron for Bankrupt Company’s Assets: TTAM Research Institute (“TTAM”), a nonprofit controlled by 23andMe founder Anne Wojcicki, prevailed over Regeneron in the final round of bidding in the bankruptcy sale of 23andMe. TTAM’s bid of $305 million for substantially all of the assets of 23andMe, including the DNA testing and research services portions of the business, topped the $256 million in a previously announced agreement with Regeneron to acquire the company. TTAM’s deal must still be approved by the bankruptcy court. The privacy ombudsman in the bankruptcy case has recommended that users’ consent be obtained before the sale of 23andMe’s genetic data is approved. 23andMe and the bankruptcy case have garnered significant regulatory attention. Twenty-eight state attorneys general have filed a lawsuit on behalf of consumers objecting to the proposed sale of genetic information by 23andMe. The lawsuit aims to stop 23andMe from auctioning off the private genetic data of consumers without the consumers’ knowledge or consent.
Texas Court Invalidates HHS Abortion Privacy Rule: The U.S. District Court for the Northern District of Texas vacated a Biden-era U.S. Department of Health and Human Services (“HHS”) rule designed to protect the privacy of patients seeking abortion and gender affirming care. U.S. District Judge Matthew J. Kacsmaryk held that the rule exceeds the authority of the HHS under the Health Insurance Portability and Accountability Act (“HIPAA”). The rule specifically prohibited HIPAA regulated entities from disclosing PHI for purposes of conducting “a criminal, civil, or administrative investigation into or impose criminal, civil, or administrative liability on any person for the mere act of seeking, obtaining, providing, or facilitating reproductive health care, where such health care is lawful under the circumstances in which it is provided” and “The identification of any person for the purpose of conducting such investigation or imposing such liability.” HHS stated following Dobbs v. Jackson Women’s Health Organization in 2022 that HIPAA permits, but does not require, organizations to disclose PHI related to reproductive health for law enforcement and in response to a subpoena. Healthcare providers must also comply with healthcare laws in the states in which they operate.

U.S. ENFORCEMENT
California Attorney General Announces Largest CCPA Enforcement Penalty to Date: California Attorney General Rob Bonta announced his office entered into a settlement with Healthline Media (“Healthline”) to resolve allegations that Healthline violated the California Consumer Privacy Act (“CCPA”). As part of the settlement, Healthline will pay a civil penalty of $1.55 million. The Attorney General alleged that Healthline failed to opt consumers out of the sharing of their personal information for targeted advertising, violated the CCPA’s purpose limitation principle by sharing titles of articles available on the Healthline website that suggested a consumer may have been diagnosed with a medical condition to advertise to that consumer, and failed to maintain contracts with its advertising partners that contain privacy protections required by the CCPA. The enforcement action is the first to address the CCPA’s purpose limitation principle, signaling to companies that they should carefully review the business purposes for which data is being collected and used to determine whether those purposes have been properly disclosed and meet the reasonable expectations of consumers. Companies should also take note of the Attorney General’s emphasis on the oversight of third-party advertising partners. CCPA is the only state comprehensive privacy law that has specific requirements for contractual terms in contracts with all third parties, including those that are processing personal information on behalf of the company that disclosed the personal information. 
Nebraska Attorney General Files Lawsuit Against Chinese E-Commerce Company Alleging Unlawful Data Practices: Nebraska Attorney General Mike Hilgers announced he had filed a lawsuit against Chinese e-commerce company Temu and its affiliates, alleging the companies engaged in unlawful data practices and other consumer protection violations. The Nebraska attorney general stated that Temu unlawfully harvests data, including from children; utilizes multiple deceptive practices to encourage purchases; allows infringement and counterfeits to thrive; and engages in deceptive marketing to greenwash its image. Examples of alleged unlawful data practices cited in the announcement include employing malware that bypasses device security and grants the app unrestricted access to everything on users’ phones, allowing Temu to secretly collect user data, and sharing Nebraskan data with the Chinese Communist Party.
North Carolina Attorney General Issues Civil Investigative Demand to EdTech Provider: North Carolina Attorney General Jeff Jackson announced his office has issued a civil investigative demand (“CID”) to educational technology provider, PowerSchool, relating to a 2024 data breach experienced by the company. The data breach impacted more than 62 million people nationwide. The Attorney General stated in its announcement that, despite PowerSchool paying ransom to the hacker to delete the stolen information, North Carolina school districts were contacted by the hacker after the payment, who attempted to extort more money from the districts. The CID seeks information on the exact number of North Carolinians impacted by the 2024 data breach, details about cybersecurity measures in place at the time of the breach, what security flaws may have contributed to the breach, and information about PowerSchool’s response to the breach.

INTERNATIONAL LAWS & REGULATIONS
Statutory Tort for Privacy Harms Now Available in Australia: Changes to the Australian Privacy Act that provide for a right of action in tort for serious invasions of privacy commenced on June 10, 2025. As further detailed in an announcement by the Office of the Australian Information Commissioner, under the new statutory tort afforded by Schedule 2 of the Australian Privacy Act, an individual may have a cause of action against another person or organization who has invaded their privacy by (1) intruding upon the individual’s seclusion – for example, by physically intruding into their private space or (2) misusing information that relates to the plaintiff, in instances where the plaintiff would have had a reasonable expectation of privacy.
New Mandatory Ransomware Payment Disclosures in Australia in Effect: As of May 30, 2025, organizations qualifying as reporting business entities as defined under Part 3 of the Australian Cyber Security Act of 2024 are required to report ransomware and cyber extortion payments. Reporting business entities are organizations with an annual revenue of AUD $3 million (USD $1.957 million) or more within the last financial year of the organization. Reporting businesses have 72 hours to report the payment. Information required to be reported includes details of the cybersecurity incident to which the payment relates, the impact of the incident on the reporting business, the amount of ransom demanded and paid, and the nature and timing of any communications with the threat actor, among other information.
Japan Passes AI Law to Promote Research and Development: Japan’s National Diet has passed a bill aimed at fostering AI innovation while requiring compliance with existing laws to prevent potential harms relating to the use of AI. The new legislation establishes a framework for the government to support AI development and research. It also mandates that AI operators adhere to current laws and regulations to mitigate risks associated with AI technologies. This move is part of Japan’s broader strategy to balance technological advancement with safety and ethical considerations. The bill also indicates that further guidance and detailed regulations will be developed to support the implementation of the law.
U.S. International Trade Administration Launches Two International Privacy Certifications: The U.S. International Trade Administration (“ITA”) announced the launch of two international privacy certifications: the Global Cross-Border Privacy Rules (“CBPR”) and the Global Privacy Recognition for Processors (“PRP”) Systems. The CBPR certification is designed for organizations that handle personal data across borders. It is designed to provide assurance that these organizations adhere to a set of privacy principles that protect personal data during international transfers. The PRP certification is targeted at data processors, which are entities that process personal data on behalf of other organizations. This PRP certification is intended to ensure that data processors comply with rigorous privacy standards and practices, providing assurance to their clients that their data is handled securely and responsibly. Organizations can obtain these certifications after completing assessments conducted by designated accountability agents. 
Daniel R. Saeedi, Rachel L. Schaller, Gabrielle N. Ganze, Ana Tagvoryan, P. Gavin Eastgate, Timothy W. Dickens, Jason C. Hirsch, Adam J. Landy, Amanda M. Noonan, and Karen H. Shin also contributed to this article. 

Defining Artificial Intelligence for Cyber and Data Privacy Insurance

A small but growing number of cyber insurers are incorporating language into their policies that specifically addresses risks from artificial intelligence (AI). The June 2025 issue of The Betterley Report’s Cyber/Privacy Market Survey identifies at least three insurers that are incorporating specific definitions or terms for AI. This raises an important question for policyholders: Does including specific language for AI in a coverage grant (or exclusion) change the terms of coverage offered?
To be sure, at present few cyber policies expressly address AI. Most insurers appear to be maintaining a “wait and see” approach; they are monitoring the risks posed by AI, but they have not revised their policies. Nevertheless, a few insurers have sought to reassure customers that coverage is available for AI-related events. One insurer has gone so far as to state that its policy “provides affirmative cover for cyber attacks that utilize AI, ensuring that the business is covered for any of the losses associated with such attacks.” To the extent that AI is simply one vector for a data breach or other cyber incident that would otherwise be an insured event, however, it is unclear whether adding AI-specific language expands coverage. On the other side of the coin, some insurers have sought to limit exposure by incorporating exclusions for certain AI events.   
To assess the impact of these changes, it is critical to ask: What does artificial intelligence even mean?
This is a difficult question to answer. The field of AI is vast and constantly evolving. AI can curate social media feeds, recommend shows and products to consumers, generate email auto-responses, and more. Banks use AI to detect fraud. Driving apps use it to predict traffic. Search engines use it to rank and recommend search results. AI pervades daily life and extends far beyond the chatbots and other generative AI tools that have been the focus of recent news and popular attention.
At a more technical level, AI also encompasses numerous nested and overlapping subfields. One major subfield, machine learning, encompasses techniques ranging from linear regression to decision trees. It also includes neural networks, which, when layered together, power the subfield of deep learning. Deep learning, in turn, underpins the subfield of generative AI. And generative AI itself can take different forms, such as large language models, diffusion models, generative adversarial networks, and neural radiance fields.
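One way to picture that nesting is a simple tree. The sketch below is a simplified taxonomy assembled for discussion, not an authoritative classification, and any coverage definition that names only one branch of it would miss the rest.

```python
# A simplified, illustrative map of the nested AI subfields described above.
AI_TAXONOMY = {
    "artificial intelligence": {
        "machine learning": {
            "linear regression": {},
            "decision trees": {},
            "neural networks": {
                "deep learning": {
                    "generative AI": {
                        "large language models": {},
                        "diffusion models": {},
                        "generative adversarial networks": {},
                        "neural radiance fields": {},
                    },
                },
            },
        },
    },
}

def contains(tree: dict, term: str) -> bool:
    """Return True if `term` appears anywhere in the nested taxonomy."""
    return any(term == key or contains(sub, term) for key, sub in tree.items())

# A policy that defines "AI" only as "generative AI" would still leave most
# of this tree unaddressed.
print(contains(AI_TAXONOMY, "diffusion models"))  # True
```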
That may be why most insurers have been reluctant to define artificial intelligence. A policy could name certain concrete examples of AI applications, but it would likely miss many others and would risk falling behind as AI is adapted for new uses. The policy could instead offer a technical definition, but that could be similarly underinclusive and inflexible. Even referring to subsets such as “generative AI” could run into the same issues, given the complex techniques and applications of the technology.
The risk, of course, is that by not clearly defining artificial intelligence, a policy that grants or excludes coverage for AI could have coverage consequences that neither the insurer nor the insured expected. Policyholders should pay particular attention to provisions purporting to exclude loss or liability arising from AI risks, and should consider which technologies currently in use could give an insurer a basis to deny coverage for a loss. We will watch cyber insurers’ approach to AI with interest: will most continue to omit references to AI, or will more insurers expressly address AI in their policies?
Listen to this article
This article was co-authored by Anna Hamel

New York Passes the Responsible AI Safety and Education Act

The New York legislature recently passed the Responsible AI Safety and Education Act (SB6953B) (“RAISE Act”). The bill awaits signature by New York Governor Kathy Hochul.
Applicability and Relevant Definitions
The RAISE Act applies to “large developers,” which is defined as a person that has trained at least one frontier model and has spent over $100 million in compute costs in aggregate in training frontier models. 

“Frontier model” means either (1) an artificial intelligence (AI) model trained using greater than 10^26 computational operations (e.g., integer or floating-point operations), the compute cost of which exceeds $100 million; or (2) an AI model produced by applying knowledge distillation to a frontier model, provided that the compute cost for such model produced by applying knowledge distillation exceeds $5 million.
“Knowledge distillation” is defined as any supervised learning technique that uses a larger AI model or the output of a larger AI model to train a smaller AI model with similar or equivalent capabilities as the larger AI model.
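For readers less familiar with the technique, here is a minimal sketch of what knowledge distillation typically looks like in practice, using PyTorch. The toy model sizes, temperature, and loss weighting are common illustrative choices on our part, not anything specified by the RAISE Act:

```python
# Illustrative sketch of knowledge distillation: a smaller "student" model is
# trained to match the output distribution of a larger, already-trained "teacher".
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))  # larger model
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))    # smaller model
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

TEMPERATURE = 2.0  # softens the teacher's output distribution (illustrative value)

def distillation_step(batch: torch.Tensor) -> float:
    """One training step in which the student learns from the teacher's soft predictions."""
    with torch.no_grad():                      # the larger model is not updated
        teacher_logits = teacher(batch)
    student_logits = student(batch)
    # KL divergence between the softened teacher and student distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / TEMPERATURE, dim=-1),
        F.softmax(teacher_logits / TEMPERATURE, dim=-1),
        reduction="batchmean",
    ) * (TEMPERATURE ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

loss_value = distillation_step(torch.randn(32, 128))  # one step on a random batch
```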

The RAISE Act imposes the following obligations and restrictions on large developers:

Prohibition on Frontier Models that Create Unreasonable Risk of Critical Harm: The RAISE Act prohibits large developers from deploying a frontier model if doing so would create an unreasonable risk of “critical harm.”

“Critical harm” is defined as the death or serious injury of 100 or more people, or at least $1 billion in damage to rights in money or property, caused or materially enabled by a large developer’s use, storage, or release of a frontier model through (1) the creation or use of a chemical, biological, radiological or nuclear weapon; or (2) an AI model engaging in conduct that (i) acts with no meaningful human intervention and (ii) would, if committed by a human, constitute a crime under the New York Penal Code that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.

Pre-Deployment Documentation and Disclosures: Before deploying a frontier model, large developers must:

(1) implement a written safety and security protocol;
(2) retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, for as long as the frontier model is deployed plus five years;
(3) conspicuously publish a redacted copy of the safety and security protocol and provide a copy of such redacted protocol to the New York Attorney General (“AG”) and the Division of Homeland Security and Emergency Services (“DHS”) (as well as grant the AG access to the unredacted protocol upon request);
(4) record and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure; and
(5) implement appropriate safeguards to prevent unreasonable risk of critical harm posed by the frontier model.

Safety and Security Protocol Annual Review: A large developer must conduct an annual review of its safety and security protocol to account for any changes to the capabilities of its frontier models and industry best practices and make any necessary modifications to the protocol. For material modifications, the large developer must conspicuously publish a copy of such protocol with appropriate redactions (as described above).
Reporting Safety Incidents: A large developer must disclose each safety incident affecting a frontier model to the AG and DHS within 72 hours of the large developer learning of the safety incident or facts sufficient to establish a reasonable belief that a safety incident occurred.

“Safety incident” is defined as a known incidence of critical harm or one of the following incidents that provides demonstrable evidence of an increased risk of critical harm: (1) a frontier model autonomously engaging in behavior other than at the request of a user; (2) theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a frontier model; (3) the critical failure of any technical or administrative controls, including controls limiting the ability to modify a frontier model; or (4) unauthorized use of a frontier model. The disclosure must include (1) the date of the safety incident; (2) the reasons the incident qualifies as a safety incident; and (3) a short and plain statement describing the safety incident.

If enacted, the RAISE Act would take effect 90 days after being signed into law.

Lone Star AI: How Texas Is Pioneering President Trump’s Tech Agenda

On June 22, 2025, Texas Governor Greg Abbott signed into law the Texas Responsible Artificial Intelligence Governance Act (TRAIGA or the Act).
The Act, which goes into effect January 1, 2026, “seeks to protect public safety, individual rights, and privacy while encouraging the safe advancement of AI technology in Texas.”
Formerly known as HB 149, the Act requires a government agency to disclose to consumers that they are interacting with AI—no matter how obvious this might appear—through plain language, clear and conspicuous wording requirements, and more. The same disclosure requirement also applies to providers of health care services or treatment, when the service or treatment is first provided or, in cases of emergency, as soon as reasonably possible.
The Act further prohibits the development or deployment of AI systems intended for behavioral manipulation, including AI intended to encourage people to harm themselves, harm others, or engage in criminal activity (see a post by our colleagues on Utah’s regulation of mental health chatbots).
TRAIGA forbids, under certain conditions, the governmental use and deployment of AI to evaluate natural persons based on social behavior or personal characteristics (social scoring), as well as the governmental development or deployment of AI systems for the purpose of uniquely identifying individuals using biometric data. Notably and broadly, the law prohibits the development or deployment of AI systems by “a person”

with the sole intent of producing or distributing child pornography, unlawful deepfake videos or images, certain sexually explicit content, etc.;
with the intent to unlawfully discriminate against a protected class in violation of state or federal law; and
with the sole intent of infringing on constitutional rights.

This broad coverage would perforce include employers and other organizations using AI tools or systems in both the public and private sectors.
Legislative History of TRAIGA
The original draft of TRAIGA (Original Bill), introduced in December 2024 by State Representative Giovanni Capriglione, was on track to be the nation’s most comprehensive piece of AI legislation. The Original Bill was modeled after the Colorado AI Act and the EU AI Act, focusing on “high-risk” AI systems (see our colleagues’ blog post on Colorado’s historic law). Texas would have imposed significant requirements on developers and deployers of AI systems, including duties to protect consumers from foreseeable harm, conduct impact assessments, and disclose details of high-risk AI to consumers.
In response to feedback and the impact of the Trump administration’s push for innovation—along with a loosening of regulation— Representative Capriglione and the Texas legislature introduced a significantly pared back version of TRAIGA, known as HB 149, in March 2025. HB 149 was passed by the Texas House of Representatives in April and by the Texas State Senate in May, before Governor Abbott signed it into law in June 2025.
Current Version
The Act no longer mentions high-risk AI systems. The Act focuses primarily on AI systems developed or deployed by government entities, though, as noted above, some disclosure requirements apply to health care entities and some prohibitions remain as to developers and deployers.
Unlike the Original Bill, the Act does not require private entities to conduct impact assessments, implement risk management policies, or disclose to consumers when they are interacting with AI. The Act also restricts its prohibition of social scoring to government entities. The Act explicitly states that disparate impact is not enough to impose liability for unlawful discrimination against individuals in state or federal protected classes. The latter provision clearly stems from Trump policy goals discouraging, if not prohibiting, disparate impact as an indicator of illicit discrimination (see our April Insight on this topic).
The Act establishes an AI Advisory Council, composed of seven members appointed by the Governor, Lieutenant Governor, and Speaker of the House. The Council will assist the state legislature and state agencies by identifying and recommending AI policy and legal reforms. It will also conduct AI training programs for state agencies and local governments. The Council is explicitly prohibited, however, from promulgating binding rules and regulations itself.
The Act vests sole enforcement authority with the Texas Attorney General (AG), except to the extent that state agencies may impose sanctions under certain conditions if recommended by the AG. The Act explicitly provides no private right of action for individuals. Under the Act, the AG is required to develop a reporting mechanism for consumer complaints of potential violations. The AG may then issue a civil investigative demand to request information, including requesting a detailed description of the AI system.
After receiving notice of the violation from the AG, a party has 60 days to cure, after which the AG may bring legal action and seek civil penalties for uncured violations. Curable violations are subject to a fine of $10,000 to $12,000 per violation. Uncurable violations are subject to a fine of $80,000 to $200,000 per violation. Continuing violations are subject to a fine of $40,000 per day. The Act also gives state agencies the authority to sanction parties licensed by that agency by revoking or suspending their licenses, or by imposing monetary penalties of up to $100,000.
AI Regulatory Sandbox Program Under TRAIGA
Perhaps most notably, the final version of TRAIGA establishes a “regulatory sandbox” exception program (the “Program”) to encourage AI innovation. The Program will be administered by the Texas Department of Information Resources (DIR) and is designed to support the testing and development of AI systems under relaxed regulatory constraints.
Program applicants must provide a detailed description of the AI system, including

the benefits and impacts the AI system will have on consumers, privacy, and public safety;
mitigation plans in case of adverse consequences during testing; and
proof of compliance with federal AI laws and regulations.

Participants must submit quarterly reports to DIR, which DIR will use to submit annual reports to the Texas legislature with recommendations for future legislation. Quarterly reports will include performance metrics, updates on how the AI system mitigates risk, and feedback from consumers and stakeholders. Participants will have 36 months to test and develop their AI systems, during which time the Texas AG cannot file charges and state agencies cannot pursue punitive action for violating the state laws and regulations waived under TRAIGA.
TRAIGA is neither the first nor the only AI legislation to establish a regulatory sandbox program, described in a 2023 report of the Organisation for Economic Co-operation and Development (OECD) as a framework in which “authorities engage firms to test innovative products or services that challenge existing legal frameworks” and in which “participating firms obtain a waiver from specific legal provisions or compliance processes to innovate.” Regulatory sandboxes in fact existed before the widespread application of AI systems; the term is widely credited to the UK Financial Conduct Authority (FCA), which introduced the concept as part of its “Project Innovate” in 2014 to encourage innovation in the fintech sector. Project Innovate’s regulatory sandbox launched in 2016 to create a controlled environment for businesses to test new financial products and services.
Regarding AI, Article 57 of the European Union’s AI Act mandates that member states must establish at least one AI regulatory sandbox at the national level, which must be operational by August 2, 2026. This Article also explains the purpose and goal for regulatory sandboxes: to provide a controlled environment to foster innovation and facilitate the development, training, testing, and validation of AI systems, before they are put on the market or into service.
Pending AI bills in several other US states would, if enacted, establish their own AI regulatory sandboxes. Connecticut has a bill (CTSB 2) that, if enacted, would establish various requirements concerning AI systems, including establishing an AI regulatory sandbox program. The Bill passed the State Senate on May 14, 2025, and is currently with the House.
Delaware’s House Joint Resolution 7 would, if enacted, direct an Artificial Intelligence Commission to work in collaboration with the Secretary of State to create a regulatory sandbox framework. The bill recognizes that “other states and nations are using regulatory sandboxes, frameworks set up by regulators in which companies are exempt from the legal risk of certain regulations under the supervision of regulators, to test innovative and novel products, services, and technologies.” HJR 7 passed both the House and Senate and awaits action by the Governor.
Oklahoma’s bill (HB 1916) was introduced on February 3, 2025. The bill calls for a new law to be codified, the Responsible Deployment of AI Systems Act. The Act would establish an AI Council to, among other things, oversee the newly established AI Regulatory Sandbox Program, which would “provide a controlled environment for deployers to test innovative AI systems while ensuring compliance with ethical and safety standards.”
Future Developments
Texas enacted TRAIGA against the backdrop of a proposed 10-year federal moratorium on state governments’ ability to enact and enforce legislation regulating some applications of AI systems or automated decision systems. The proposed moratorium was part of President Trump’s comprehensive domestic policy bill, referred to as the “big, beautiful bill.” However, on July 1, 2025, the U.S. Senate voted nearly unanimously (99 to 1) in favor of removing the moratorium from the bill before it passed later that day.
Some predict its return, at least in some form. For now, the White House’s AI Action Plan, slated for release in July 2025, should put federal-level AI right back in the headlines. Executive Order 14179 of January 23, 2025, “Removing Barriers to American Leadership in Artificial Intelligence,” called for the submission of such a plan within 180 days—to be developed by the assistant to the president for Science and Technology (APST), the special advisor for AI and Crypto, the assistant to the president for National Security Affairs (APNSA), and more. In February, the White House issued a Request for Information (RFI) seeking public comment on policy ideas for the AI Action Plan, designed to “define priority policy actions to enhance America’s position as an AI powerhouse and prevent unnecessarily burdensome requirements from hindering private sector innovation.” By late April, the Office of Science and Technology Policy (OSTP) reported that more than 10,000 public comments had been received from interested parties including academia, industry groups, private sector organizations, and state, local, and tribal governments.
We expect to have lots on the AI front to report for our readers during the second half of 2025.
Epstein Becker Green Staff Attorney Ann W. Parks contributed to the preparation of this post.

Regulation Round Up July 2025

Welcome to the Regulation Round Up, a regular bulletin highlighting the latest developments in UK and EU financial services regulation.
Key developments in June 2025:
30 June
Investment Advice: The Financial Conduct Authority (“FCA”) published a consultation paper (CP25/17) on proposals for a new form of support for consumers’ pensions and retail investment decisions.
27 June
UK Listing Rules: The FCA published the UK Listing Rules (Amendment) Instrument 2025, which makes amendments to the UK Listing Rules (UKLR) regarding related party transactions by closed ended investment funds.
FCA Handbook: The FCA published Handbook Notice 131, which sets out changes to the FCA Handbook.
26 June
Payments / E Money: The FCA published a new webpage containing the findings of its multi-firm review of risk management and wind-down planning at e-money and payment firms.
UK Economic Growth: The Prudential Regulation Authority (“PRA”) published a report on its approach to its secondary objectives of competitiveness and growth.
25 June
ESG: The Department for Energy Security and Net Zero published a consultation on climate-related transition plan requirements.
24 June
Payments: The Bank of England, PRA, FCA and Payment Systems Regulator have reviewed and revised their memorandum of understanding on their roles in the regulation of payment systems in the UK.
23 June
UK Economic Growth: The UK Government published the UK’s Modern Industrial Strategy 2025, a ten-year plan intended to increase business investment and grow the industries of the future, including the financial services sector.
ESG: The Council of the European Union published a press release noting that it had agreed its negotiating mandate on the European Commission Omnibus proposal for a directive reducing the scope of the Corporate Sustainability Due Diligence Directive ((EU) 2024/1760) and the Corporate Sustainability Reporting Directive ((EU) 2022/2464). For more information, please refer to our dedicated article on the topic here.
13 June
UK Economic Growth: The FCA published a statement in response to the second report of the House of Lords Financial Services Regulation Committee on the FCA’s and PRA’s secondary objective of facilitating the UK economy’s growth and international competitiveness.
MiFIR: The European Commission has adopted four Delegated Regulations containing technical standards that will enable the creation of the consolidated tape under the Markets in Financial Instruments Regulation (600/2014) (“MiFIR”).
11 June
Investment Advice: The FCA published a webpage relating to its new Investment Advice Assessment Tool, which aims to help personal investment firms understand how the FCA assesses the suitability of their investment advice and disclosures to consumers.
10 June
Cryptoassets / Payment Services: The European Banking Authority (“EBA”) published an opinion (EBA/Op/2025/08) on the interplay between the revised Payment Services Directive ((EU) 2015/2366) (“PSD2”) and the Regulation on markets in cryptoassets ((EU) 2023/1114) (“MiCA”).
PISCES: The FCA published a policy statement setting out the final rules for the Private Intermittent Securities and Capital Exchange System (“PISCES”) and the responses to its consultation on the same.
Artificial Intelligence: The FCA published a speech given by Jessica Rusu, FCA Chief Data, Information and Intelligence Officer, on the FCA’s collaboration with Nvidia to accelerate AI innovation.
6 June
FCA Quarterly Consultation: The FCA published its 48th quarterly consultation paper (CP25/16).
5 June
Motor Finance: The FCA published a statement on its key considerations in implementing a possible motor finance consumer redress scheme. For more information, please refer to our dedicated article on the topic here.
3 June
FCA Enforcement: The FCA published a policy statement (PS25/5) on its updated Enforcement Guide, providing greater transparency on enforcement investigations relating to regulated and listed firms.
UK Stewardship Code: The Financial Reporting Council published its new UK Stewardship Code 2026 which applies to asset managers, asset owners and service providers. For more information, please refer to our dedicated article on the topic here.
EU Regulatory Framework: The EBA published a speech given by José Manuel Campa, EBA Chair, at a high-level meeting for Europe on banking supervision.
Sulaiman Malik & Michael Singh also contributed to this article. 

“Listen Up” if Your AI Policy Does Not Cover AI Recording Issues – Another Class Action Lawsuit Filed Over Third-Party AI Recording Service

The use of AI recording tools has become prevalent. Company policies addressing the legal issues raised by these tools are not yet as prevalent. If your company’s AI policy does not address these issues, it needs to be updated. A recently filed class action illustrates one fact scenario in which legal issues may arise. It is not the first suit over AI recording, and it will not be the last. The lawsuit claims violations of the Federal Wiretap Act, 18 U.S.C. § 2510 et seq., based on the use of a third-party service that records and performs AI analysis on calls between a dental company and its patients. Details of the lawsuit are provided below. However, it is important to understand that if your company or your employees use AI recording tools or notetakers, you need to ensure that your AI policy covers all of the necessary issues, which can include at least:

managing and documenting notice and consent;
dealing with nonconsenting parties participating in a recorded call;
inaccuracies in AI-generated transcripts and summaries;
AI-generated sentiment analysis and emotion detection;
confidentiality and privilege issues;
retention and/or deletion of recordings;
vendor diligence on these tools and an approval process for specific tools; and
knowing which technical features of some tools can help mitigate risk and which can create more risk.

Click here to read the full article.
Listen to this post