Lessons from the FTC: The Cleo AI Settlement

The FTC’s settlement with Cleo AI gives some indication of what we might see from the agency in the coming months. The FTC alleged, among other things, that Cleo AI’s actions violated Section 5 of the FTC Act. In particular, as reported in our sister blog, Cleo AI required people to enroll in a paid subscription plan even though it marketed its services as free. It also made it difficult for people to cancel their subscriptions and stop recurring charges, and it failed to disclose material terms.
Cleo AI agreed to settle by paying $10 million in consumer redress and a $7 million civil penalty. The company has also agreed not to misrepresent people’s ability to cancel negative option charges and must obtain people’s “express informed consent” before collecting money (or other consideration) from consumers. It has also agreed to simplify its subscription cancellation mechanism.
Putting it Into Practice: This case suggests that the FTC will continue its practice of scrutinizing businesses whose user interfaces make it difficult for users to exercise choices, especially those that result in fees being charged. The decision follows the FTC’s November 2024 update to the Negative Option Rule.
James O’Reilly also contributed to this article. 

Another Legal Challenge to an AI Interviewing Tool

In the latest lawsuit of its kind, the American Civil Liberties Union recently filed a complaint with the Colorado Civil Rights Division and the Equal Employment Opportunity Commission (“EEOC”) alleging an AI interviewing tool discriminated against a deaf and Indigenous employee at Intuit seeking a promotion. 
According to the complaint, when the employee applied for a promotion, Intuit used a HireVue video interviewing platform that scored each candidate on their performance. The complaint alleges that some audible portions of HireVue’s platform lacked subtitles, and the employee’s request for human-generated captioning as an accommodation was denied. The employee was rejected for the promotion and allegedly received AI-generated feedback from HireVue recommending, among other things, that she “practice active listening.” The ACLU alleges this evidence shows the applicant’s hearing disability disadvantaged her in the process. The complaint also alleges “upon information and belief” that the interviewing platform had a disparate impact based on race.
The ACLU’s complaint against Intuit is not the first lawsuit to allege that AI interviewing software ran afoul of statutes intended to protect employees or job applicants. In 2023, an applicant for a job with CVS alleged the company’s use of a video interviewing platform that scored applicants on various competencies including “reliability, honesty, and integrity” violated a Massachusetts statute prohibiting employers from subjecting applicants to lie detector tests as a condition of employment. Baker v. CVS Health Corp., 717 F. Supp. 3d 188 (D. Mass. 2024). After the plaintiff survived a motion to dismiss, the parties reached an individual settlement. In recent months, several other plaintiffs have filed lawsuits bringing similar claims under the Massachusetts statute.
As similar AI solutions gain traction among employers, it is possible that lawsuits such as these will continue to proliferate.

Privacy and Security in AI Note-Taking and Recording Tools, Part 2: Risk Mitigation and ADMT Regulations [Podcast]

In the second part of this two-part series, Ben Perry (shareholder, Nashville) and Lauren Watson (associate, Raleigh) discuss the use of artificial intelligence (AI)-powered note-taking and recording tools in the workplace. Ben (who is co-chair of the firm’s Cybersecurity and Privacy Practice Group) and Lauren examine the various risks and considerations companies may need to address when using AI tools, focusing in particular on data security, employee training, and compliance with evolving legal regulations. They emphasize the importance of conducting due diligence, implementing strong security measures, and providing proper employee training to mitigate the potential risks associated with these AI tools.

Find part one of this series here.

Could This Be the AI-nswer? A Collective Copyright Licence for Generative AI Training

The Copyright Licensing Agency (CLA), a United Kingdom (UK) not-for-profit, has announced that it is developing a Generative AI (GenAI) Training Licence, and is hoping to publish the licence in the third quarter of 2025.
Why does this matter?
One of the most hotly debated issues surrounding GenAI is how AI models should be trained, and how to balance the very real concerns of creators against the flexibility developers need to encourage true innovation.
Promoted as “groundbreaking” and a “milestone initiative”, the licence aims to provide a scalable collective licensing solution similar to those run by other copyright collection societies. It looks to strike that difficult balance by guaranteeing compensation and remuneration for publishers and authors, and by providing developers with legal certainty and material to train models.
Who does it involve?
Alongside the CLA, the Publishers’ Licensing Services (PLS) and the Authors’ Licensing and Collecting Society (ALCS) are also involved in this ambitious development. PLS is a non-profit collective management organisation, owned by the four main UK publishing trade associations and representing 4,500 publishers in the UK, and its aim is to maximise the value and returns of published content through collective licensing. ALCS is a not-for-profit organisation with 125,000 members that works for the benefit of all types of writers, and collects money for secondary uses of writers’ work.
Could this be the AI-nswer?
It might just be. Collective licensing has been proven to work well in a number of industries. By simplifying access to the massive amounts of data necessary to train GenAI models, and by streamlining the process to ensure that all creators receive fair compensation for the use of their works, the licence provides legal certainty and protection for both parties.
It also allows smaller creators, who may have missed out because of their limited bargaining power, to benefit from language model training. It could also slow the growing number of cases bubbling away around the world on this very topic.
The availability of these types of licences will also likely alter the risk profile for transactions involving AI systems in the UK and internationally, which to date have had to proceed without adequate measures to address third-party copyright litigation risk.

Can AI Replace Lawyers? The UPL Challenge

Introduction
A popular refrain echoes through legal technology conferences and webinars: “Lawyers won’t be replaced by AI, but lawyers with AI will replace lawyers without AI.” This statement offers a degree of comfort to legal professionals navigating rapid technological advancement, suggesting AI is primarily an augmentation tool rather than a replacement. While many practitioners hope this holds true, a fundamental question remains: Is it legally possible for AI, operating independently, to replace lawyers under the current regulatory frameworks governing the legal profession? As it stands, the rules surrounding the unauthorized practice of law (UPL) in most jurisdictions present a significant hurdle.
The UPL Barrier: Protecting the Public, Impacting Access
All jurisdictions in the United States have established rules prohibiting the unauthorized practice of law. These regulations typically mandate that individuals providing legal services must hold an active license issued by the state bar association. The primary stated goal is laudable: to protect the public from unqualified practitioners who could cause significant harm through erroneous advice or representation.
However, these well-intentioned rules have downstream consequences, notably impacting efforts to broaden access to justice. By strictly defining what constitutes legal practice and who can perform it, UPL rules can limit the scope of services offered by non-lawyers and technology platforms, even for relatively straightforward matters. For instance, the State Bar of California explicitly notes on its website that immigration consultants, while permitted to perform certain tasks, “cannot provide you with legal advice or tell you what form to use” – functions often essential for navigating complex immigration procedures.1
Legal Tech’s Current Role vs. Direct-to-Consumer AI
Much of the legal technology currently deployed operates comfortably within UPL boundaries because it serves as a tool for lawyers. AI-powered research platforms, document review software, and case management systems enhance a lawyer’s efficiency and effectiveness. Crucially, the licensed attorney remains the ultimate provider of legal advice and services to the client, vetting and utilizing the technology’s output.
The UPL issue arises dramatically when the lawyer is removed from this equation. If a software platform or AI system interacts directly with a consumer, analyzes their specific situation, and provides tailored guidance or generates legal documents, regulators may argue that the technology provider itself is engaging in the unauthorized practice of law.
Historical Precedents: Technology Pushing Boundaries
This tension is not new. Technology companies have long tested the limits of UPL regulations. The experiences of LegalZoom offer a prominent example. The company faced numerous disputes with state bar associations regarding whether its automated document preparation services constituted UPL. In North Carolina, for instance, LegalZoom entered into a consent judgment allowing continued operation under specific conditions, including oversight by a local attorney and preserving consumers’ rights to seek damages.2
DoNotPay, once marketed as the “world’s first Robot Lawyer,” also faced and settled UPL lawsuits. Its potential as a UPL test case is complicated by recent regulatory action; DoNotPay agreed to a Federal Trade Commission (FTC) order to stop claiming its product could adequately replace human lawyers. The FTC complaint underpinning this order alleged critical failures, including a lack of testing to compare the AI’s output to human legal standards and the fact that DoNotPay itself employed no attorneys.3
The Patchwork Problem: State-by-State Variation
The LegalZoom saga underscores a critical challenge: UPL rules are determined at the state level. While general principles are similar, specific definitions and exemptions vary significantly, creating a complex regulatory patchwork for technology companies seeking national reach.
Texas, for example, offers a statutory exemption. Its definition of the “practice of law” explicitly excludes “computer software… [that] clearly and conspicuously states that the products are not a substitute for the advice of an attorney.”4 This suggests a pathway for sophisticated software, provided the appropriate disclaimers are prominently displayed.
A Proactive Model: Ontario’s Access to Innovation Sandbox
In contrast to reactive enforcement or broad statutory exemptions, some jurisdictions are exploring proactive, structured approaches. The Law Society of Ontario’s Access to Innovation (A2I) program provides an interesting example.5 A2I creates a regulatory “safe space” or sandbox, allowing approved providers of “innovative technological legal services” to operate under specific conditions and oversight.
Applicants undergo review by the A2I team and an independent advisory council. Approved participants enter agreements outlining operational requirements, such as maintaining insurance, establishing complaint procedures, and ensuring robust data privacy and security. During their participation period, providers serve the public while reporting data and experiences back to the Law Society. This process allows for real-world testing and informs future regulatory policy. Successful participants may eventually receive a permit for ongoing operation. Currently, 13 diverse technology providers, covering areas from Wills and Estates to Family Law, operate within this framework.
The AI Chatbot Conundrum and the Path Forward
Modern AI chatbots often exhibit behaviour that sits uneasily with UPL rules. Frequently, they preface interactions with disclaimers stating they are not providing legal advice, only then to proceed with analysis and suggestions that closely resemble legal counsel. While this might satisfy the Texas exemption, regulators in many other jurisdictions could view it as impermissible UPL, regardless of the disclaimer.
Ontario’s A2I model offers an appealing framework for fostering innovation while maintaining oversight. However, the core strength of many technology ventures lies in scalability. Requiring separate approvals and adherence to distinct regulatory frameworks in every jurisdiction presents a formidable barrier to entry and growth for AI-driven legal solutions intended for direct consumer use.
Conclusion
While AI is undeniably transforming the practice of law for existing attorneys, the notion of AI replacing lawyers faces a steep legal climb due to UPL regulations. The historical friction between technology providers and regulators persists. While some jurisdictions like Texas provide explicit carve-outs, and others like Ontario are experimenting with regulatory sandboxes, the lack of uniformity across jurisdictions remains the most significant obstacle.
For AI to move beyond being merely a lawyer’s tool and become a direct provider of legal guidance to the public at scale, a significant evolution in the regulatory landscape is required. Whether this takes the form of model rules, interstate compacts, or broader adoption of supervised innovation programs like Ontario’s A2I, addressing the UPL challenge will be critical to balancing public protection, access to justice, and the transformative potential of artificial intelligence in the legal sphere.

1 https://www.calbar.ca.gov/Public/Free-Legal-Information/Unauthorized-Practice-of-Law
2 Caroline Shipman, Unauthorized Practice of Law Claims Against LegalZoom—Who Do These Lawsuits Protect, and is the Rule Outdated?, 32 Geo. J. Legal Ethics 939 (2019).
3 https://www.ftc.gov/news-events/news/press-releases/2025/02/ftc-finalizes-order-donotpay-prohibits-deceptive-ai-lawyer-claims-imposes-monetary-relief-requires
4 Tex. Gov’t Code Ann. § 81.101 (West current through 2023)
5 https://lso.ca/about-lso/access-to-innovation

Court Slams Lawyers for AI-Generated Fake Citations

A federal judge in Colorado has issued a scathing order that should serve as a wake-up call for attorneys who use frontier generative artificial intelligence (Gen AI) models in legal research. On April 23, U.S. District Judge Nina Y. Wang of the District of Colorado issued an Order to Show Cause in Coomer v. Lindell that exposes the dangers of unverified AI use in litigation.
The Case and the Famous Defendant
The defamation lawsuit involves plaintiff Eric Coomer, a former Dominion Voting Systems executive, and defendant Mike Lindell, the well-known CEO of MyPillow. The case has gained significant attention not only for the high-profile parties involved but also for becoming a neon-red-blinking cautionary tale of consequential Gen AI misuse.
“Cases That Simply Do Not Exist”
Judge Wang identified “nearly thirty defective citations” in a brief submitted by Lindell’s attorneys, and they weren’t mere minor errors. The court found:

Citations to cases that “do not exist”
Legal principles attributed to decisions that contain no such language
Cases from one jurisdiction falsely labeled as being from another
Misquotes of actual legal authorities

One particularly egregious example involved a citation to “Perkins v. Fed. Fruit & Produce Co., 945 F.3d 1242, 1251 (10th Cir. 2019)”—a completely fabricated case. The court noted that while a similarly named case exists in a different form, the Gen AI tool had essentially cobbled together a fictional citation by merging elements from entirely different cases.
The Reluctant Admission
When confronted about these errors at a hearing, the defense attorney, Mr. Kachouroff, initially deflected, suggesting the filing was a “draft pleading” or blaming a colleague for failing to perform citation checking. Only when directly asked by Judge Wang whether AI had generated the content did he admit to using Gen AI.
Even more damning, Kachouroff “admitted that he failed to cite check the authority in the Opposition after such use before filing it with the Court—despite understanding his obligations under Rule 11.” The court expressed open skepticism about his claim that he had personally drafted the brief before using AI, noting “the pervasiveness of the errors.”
Lessons for Legal Practice
AI verification isn’t optional—it’s a professional obligation. While AI tools can enhance efficiency, they require human oversight. The ethical foundations of legal practice remain unchanged: attorneys must verify information presented to the court, regardless of its source.
As Gen AI continues its rapid integration into legal practice, the Coomer case serves as a stark reminder that technology cannot replace professional judgment. Case citations must be verified, quotations must be confirmed, and legal principles must be substantiated against primary sources. While the legal profession has always adapted to new technologies, core professional responsibilities have not changed significantly. In the AI era, these obligations require more vigilance than ever.
As Judge Wang’s order puts it: “Not until this Court asked Mr. Kachouroff directly whether the Opposition was the product of generative artificial intelligence did Mr. Kachouroff admit that he did, in fact, use generative artificial intelligence.”

Artificial Intelligence and Our Continuing Journeys in Alice’s Wonderland: Practice Points from Recentive Analytics, Inc. v. Fox Corp.

If you’re a patent practitioner who works with innovation related to artificial intelligence, you’ll want to consider the Federal Circuit’s recent decision in Recentive Analytics, Inc. v. Fox Corp. This decision is the first to explicitly consider patent eligibility in the context of the use of artificial intelligence.
The Federal Circuit affirmed the district court’s dismissal of Recentive’s complaint, holding that the claims were not eligible under Section 101. “This case presents a question of first impression: whether claims that do no more than apply established methods of machine learning to a new data environment are patent eligible. We hold that they are not.”
This case involved patents that addressed the scheduling of live events and optimizing network maps—in particular, what programs or content are displayed by a broadcaster’s channels in different geographic markets at a particular time. The district court dismissed the patent owner’s complaint for infringement because it found the patent claims were directed to ineligible subject matter under 35 U.S.C. § 101.
In analyzing the patent claims directed to event-scheduling, the Federal Circuit noted that the claims involved collecting data, an iterative training step, an output step in which an optimized schedule is produced, and an updating step in which the schedule is updated based on new data inputs.
The specification’s discussion of machine learning was fairly general: a model can be trained using a set of training data that includes historical data from previous live events, the machine learning model can be instructed to optimize a target feature such as event attendance, revenue, or ticket sales, and “any suitable machine learning technique” can be used.
In analyzing the patent claims directed to network map optimization, the Federal Circuit noted that the claims involved collecting data, analysis of the data to create a network map, an updating step, and a using step. The specification included some information about the training data used to train the models and also noted that any suitable machine learning technique could be used.
Notably, under Step 2A, Prong 1 of the Alice test, the Federal Circuit found that the use of generic machine learning technology in carrying out the claimed methods was a conventional technique that does not represent a technical improvement. In discussing Prong 1, the Federal Circuit noted that neither the claims nor the specification described any improvement to technology; instead, they disclosed only that machine learning is used in a new environment. In addition, nothing in the claims transformed the machine learning techniques, as applied to network scheduling, into something significantly more than generating event schedules and network maps.
This case established two important new guideposts: (1) applying machine learning techniques to a task, in and of itself, is likely not eligible under Section 101, and (2) detail in the specification and claims about a technical problem, and about the improvement the invention provides to address that problem, is important. More generally, this case suggests that machine learning techniques are just another tool to perform a task, and that using machine learning is (now) a bit like using a computer to perform a task—that is, not enough to get over the Section 101 hurdle.
In many respects, this case follows the intellectual underpinnings of Alice Corp. v. CLS Bank Int’l, in which the recitation of a generic computer was insufficient to transform a patent-ineligible abstract idea into eligible subject matter, particularly in view of the ubiquity of computer technology in our lives. With artificial intelligence becoming more ubiquitous, it is perhaps unsurprising to see the courts suggesting that the mere use of artificial intelligence is also insufficient to transform a patent-ineligible abstract idea into eligible subject matter.
In view of Recentive Analytics, innovators and practitioners who are pursuing patents that involve the use of machine learning models to perform a task may want to focus on the underlying technical problem under the existing techniques—whether it’s computational efficiency, resource utilization, poor results, some combination of all of that, or something else—and how the invention addresses these technical problems. It may also be helpful to avoid stating in the patent application that “any suitable machine learning technique can be used.”
For applications that relate to fundamental changes to the way machine learning models work, this decision should not impact strategies for obtaining patent protection.
Michael Lew also contributed to this article. 

New Federal Agency Policies and Protocols for Artificial Intelligence Utilization and Procurement Can Provide Useful Guidance for Private Entities

On April 3, 2025, the Office of Management and Budget (“OMB”) issued two Memoranda (Memos) regarding the use and procurement of artificial intelligence (AI) by executive federal agencies.
The Memos—M-25-21 on “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust” and M-25-22 on “Driving Efficient Acquisition of Artificial Intelligence in Government”—build on President Trump’s Executive Order 14179 of January 23, 2025, “Removing Barriers to American Leadership in Artificial Intelligence.”
The stated goal of the Memos is to promote a “forward-leaning, pro-innovation, and pro-competition mindset rather than pursuing the risk-averse approach of the previous administration.” They aim to lift “unnecessary bureaucratic restrictions” while rendering agencies “more agile, cost-effective, and efficient.” Further, they will, presumably, “deliver improvements to the lives of the American public while enhancing America’s global dominance in AI innovation.” The Memos rescind and replace the corresponding M-24-10 and M-24-18 memos on use and procurement from the Biden era.
Although these Memos relate exclusively to the activities of U.S. federal agencies with regard to AI, they contain information and guidance on the acquisition and use of AI systems that is transferable to entities other than agencies and their AI contractors and subcontractors as they develop and deploy AI assets. In this connection, the Memos underscore the importance of responsible AI governance and management and, interestingly, in large measure mirror protocols and prohibitions found in current state AI legislation governing the use of AI by private companies.
Outlined below are the salient points of each Memo that will be operationalized by the relevant federal agencies throughout the year.
Memorandum M-25-21 (The “AI Use Memo”)
The new AI Use Memo is designed to encourage agency innovation with respect to AI while removing risk-averse barriers to innovation that the present administration views as burdensome. Thus, the policies appear to frame AI less as a regulatory risk and more as an engine of national competitiveness, efficiency, and strategic dominance. Nonetheless, a number of important points from the former Biden-era AI directives have been retained and further developed. The AI Use Memo retains the concept of an Agency Chief AI Officer, yet in the words of the White House, these roles “are redefined to serve as change agents and AI advocates, rather than overseeing layers of bureaucracy.” It continues a focus on privacy, civil rights, and civil liberties, yet as STATNews points out, the Memos omit some references to bias. Other key points include a strong focus on American AI and a track for AI that the administration views as “high-impact.”
Scope
The AI Use Memo applies to “new and existing AI that is developed, used, or acquired by or on behalf of covered agencies”—exclusive of, for example, regulatory actions prescribing law and policy; regulatory or law enforcement; testing and research. It does not apply to the national security community, components of a national security system, or national security actions.
Covered Agencies
The AI Use Memo applies to all agencies defined in 44 U.S.C. § 3502(1), meaning executive and military departments, government corporations, government-controlled corporations, or other establishments in the executive branch, with some exceptions.
Innovation
The AI Use Memo focuses on three key areas of 1) Innovation, 2) Governance, and 3) Public Trust, and contains detailed guidance on:

AI Strategy: Within 180 days, agencies must develop an AI Strategy “for identifying and removing barriers to their responsible use of AI and for achieving enterprise-wide improvements in the maturity of their applications.” Strategy should include:

Current and planned AI use cases;
An assessment of the agency’s current state of AI maturity and a plan to achieve the agency’s AI maturity goals;

Sharing of agency data and AI assets (to save taxpayer dollars);
Leveraging the use of AI products and services;
Ensuring Responsible Federal Procurement: In Executive Order 14275 of April 15, 2025, President Trump announced plans to reform the Federal Acquisition Regulation (FAR), which establishes uniform procedures for acquisitions across executive departments and agencies. E.O. 14275 directs the Administrator of the Office of Federal Procurement Policy, in coordination with the FAR Council, agency heads, and others, to amend the FAR. This will impact how federal government contractors interface with agencies on AI and general procurement undertakings and obligations. With regard to effective federal procurement, the AI Use Memo instructs agencies to:

Treat relevant data and improvements as critical assets for AI maturity;
Evaluate performance of procured AI;
Promote competition in federal procurement of AI.

Building an AI-ready federal workforce (training, resources, talent, accountability).

Governance
The AI Use Memo strives to improve AI governance with various roles and responsibilities, including:

Chief AI Officers: Appoint in each agency within 60 days, with specified duties;
Agency AI Governance Board: Convene in each agency within 90 days;
Chief AI Officer Council: Convene within 90 days;
Agency Strategy (described above): Develop within 180 days;
Compliance Plans: Develop within 180 days, and every two years thereafter until 2036;
Internal Agency Policies: Update within 270 days;
Generative AI Policy: Develop within 270 days;
AI Use Case Inventories: Update annually.

Public Trust: High-Impact AI Categories and Minimum Risk Management Practices
A large portion of the AI Use Memo is devoted to fostering risk management policies that ensure the minimum number of requirements necessary to enable the trustworthy and responsible use of AI and to ensure these are “understandable and implementable.”
Agencies are required to implement minimum risk-management practices to manage risks from high-impact AI use cases by:

Determining “High-Impact” Agency Use of AI: The AI Use Memo sets out on pp. 21-22 a list of categories for which AI is presumed to be high-impact. In the definition section, such use is high-impact “when its output serves as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety.” This includes AI that has a significant effect on:

Civil rights, civil liberties or privacy;
Access to education, housing, insurance, credit, employment, and other programs;
Access to critical infrastructure or public safety; or
Strategic assets or resources.

Implementing Minimum Risk Management Practices for High-Impact AI: Agencies must document implementation within 365 days, unless an exemption or waiver applies. The guidelines closely follow the National Institute of Standards and Technology (NIST) risk management framework (RMF), as well as some state AI laws, notwithstanding that the AI Use Memo omits specific references to the RMF as particular guidance.
With respect to high-impact AI, agencies must:

Conduct pre-deployment testing;
Complete an AI impact assessment before deploying, documenting:

Intended purpose and expected benefit;
Quality and appropriateness of relevant data and model capability;
Potential impacts of using AI;
Reassessment scheduling and procedures;
Related costs analysis; and
Results of independent review.

Conduct ongoing monitoring for performance and potential adverse impacts;
Ensure adequate human training and assessment;
Provide additional human oversight, intervention, and accountability;
Offer consistent remedies or appeals; and
Consult and incorporate feedback from end users and the public.

Memorandum M-25-22 (The “AI Procurement Memo”)
Memorandum M-25-22, entitled “Driving Efficient Acquisition of Artificial Intelligence in Government” (the “AI Procurement Memo”) applies to AI systems or services acquired by or on behalf of covered agencies and is meant to be considered with related federal policies. It shares the same applicability as the AI Use Memo, adding that it does not apply to AI used incidentally by a contractor during the performance of a contract.
Covered AI
The AI Procurement Memo applies to “data systems, software, applications, tools, or utilities” that are “established primarily for the purpose of researching, developing, or implementing [AI] technology” or “where an AI capability ‘is integrated into another system or agency business process, operational activity, or technology system.’” It excludes “any common commercial product within which [AI] is embedded, such as a word processor or map navigation system.”
Requirements
Under the AI Procurement Memo, agencies are required to:

Update agency policies;
Maximize use of American AI;
Privacy: Establish policies and processes to ensure compliance with privacy requirements in law and policy;
IP Rights and Use of Government Data: Establish processes for use of government data and IP rights in procurements for AI systems or services, with standardization across contracts where possible. Address:

Scope: Scoping licensing and IP rights based on the intended use of AI, to avoid vendor lock-in (discussed below);
Timeline: Ensuring that “components necessary to operate and monitor the AI system or service remain available for the acquiring agency to access and use for as long as it may be necessary”;
Data Handling: Providing clear guidance on handling, access, and use of agency data or information to ensure that the information is only “collected and retained by a vendor when reasonably necessary to serve the intended purposes of the contract”;
Use of Government Data: Ensure that contracts permanently prohibit the use of non-public inputs and outputs to further train publicly or commercially available AI algorithms absent explicit agency consent.
Documentation, Transparency, Accessibility: Obtain documentation from vendors that “facilitates transparency and explainability, and that ensures an adequate means of tracking performance and effectiveness for procured AI.”

Determine Necessary Disclosures of AI Use in the Fulfillment of Government Contracts: Agencies should be cognizant of risks posed by unsolicited use of AI systems by vendors.

AI Acquisition Practices Throughout the Acquisition Lifecycle
Agencies should identify requirements involved in the procurement, including convening a cross-functional team and determining whether high-impact AI is involved; conduct market research and planning; and engage in solicitation development, which includes AI use transparency requirements regarding high-impact use cases, provisions in the solicitation to reduce vendor lock-in, and appropriate terms relating to IP rights and lawful use of government data.
Selection and Award
When evaluating proposals, agencies must test proposed solutions to understand the capabilities and limitations of any offered AI system or service; assess proposals for potential new AI risks; and review proposals for any challenges. Contract terms must address a number of items, including IP rights and government data, privacy, vendor lock-in protection, and compliance with the risk management practices described in M-25-21, above.
Vendor Lock-In; Contract Administration and Closeout
Many provisions in the memo, including those in the “closeout” section, guard against dependency on a specific vendor. For example, if a decision is made not to extend a contract for an AI system or service, agencies “should work with the vendor to implement any contractual terms related to ongoing rights and access to any data or derived products resulting from services performed under the contract.”
M-25-22 notes that OMB will publish playbooks focused on the procurement of certain types of AI, including generative AI and AI-based biometrics. Additionally, this Memo directs the General Services Administration (“GSA”) to release AI procurement guides for the federal acquisition workforce that will address “acquisition authorities, approaches, and vehicles,” and to establish an online repository for agencies to share AI acquisition information and best practices, including language for standard AI contract clauses and negotiated costs.
Conclusion
These Memos clearly recognize the importance of an AI governance framework that will operate to ensure AI competitiveness while balancing the risks of AI systems engaged to effect agency efficiencies and drive government effectiveness—a familiar balance for private companies that use or are considering using AI. As the mandates within the Memos are operationalized over the coming months, EBG will keep our readers posted with up-to-date information.
Epstein Becker Green Staff Attorney Ann W. Parks contributed to the preparation of this post.

Threat Actors Use AI to Launch Identity Theft Scams

Identity theft will continue to rise in 2025. The Better Business Bureau of Missouri (BBB) reports that it received over 16,000 identity theft complaints in the past three years, and that scammers are “increasingly using advanced tactics such as artificial intelligence to exploit victims.”
The BBB notes that threat actors are taking over social media accounts to solicit money and “impersonating individuals to rent apartments or open credit cards.”
According to Which?, fraud prevention service Cifas reports the continuing rise of identity theft and fraud, and artificial intelligence (AI) is “fuelling [the] identity fraud increase.” Cases of account takeover “drastically increased by 76% in 2024.” Over half of these cases involved threat actors hijacking mobile telephone accounts, and SIM swap fraud increased by a whopping 1,055%. Threat actors use AI more frequently in cases of false applications, where it assists “with the speed, sophistication and scale of false documentation, as well as aiding the ability to pass verification checks.”
Identity theft will continue to rise, so taking preventative measures, such as those outlined by the BBB, Identitytheft.org, and the FTC, can help avoid victimization. If you do become a victim, the FTC offers free, helpful resources.

Recalibrating Regulation: EPA, Energy, and the Unfolding Consequences of Deregulatory Momentum

The U.S. Environmental Protection Agency (EPA) has long navigated the complex intersection of science, law, policy, and public trust. Under the Trump Administration, EPA faces renewed scrutiny. The Administration seeks regulatory rollbacks and is pursuing a broader deregulatory strategy that many believe risks sacrificing hard-won environmental protections in the name of economic growth.
While early promises to reduce bureaucratic red tape struck a chord with a number in industry, implementation has so far appeared blunt rather than measured. Deregulatory actions have sometimes resembled sweeping cuts “with a machete instead of a scalpel,” hitting the intended target of outdated or burdensome rules but also causing collateral damage, including to critical administrative safeguards and scientific functions. Although EPA has avoided some of the steepest cuts levied on other federal agencies, many worry that this trajectory will fundamentally impair the Agency’s mission.
EPA Administrator Lee Zeldin has attempted to ease concerns, stating that he can “absolutely” assure the public that deregulation will not harm the environment. “We have to both protect the environment and grow the economy,” he stated when questioned by CBS News’s “Face the Nation” about whether he could ensure that deregulation would not have an adverse impact. Still, the juxtaposition of that reassurance against ongoing efforts to slash regulations leaves many stakeholders uneasy.
At the heart of the Administration’s argument is a broader political philosophy — an intent to upend what it views as “entrenched bureaucracy.” White House spokesman Harrison Fields emphasized in a Statement that the Administration is “prioritizing efficiency; eliminating waste, fraud, and abuse; and fulfilling every campaign promise.” Critics, however, view these efforts as retributive, undermining institutional expertise, and marginalizing science-driven decision-making. Some demand a clearer upside — what fraud, waste, and abuse has been uncovered and eliminated?
One of the most visible fronts in this deregulatory push is energy policy. A recent Executive Order directs the federal government to expedite coal leasing on public lands and aims to designate coal as a “critical mineral.” This pivot is being positioned as part of a strategy to meet the rising energy demands of generative artificial intelligence (AI) and data centers, which are expected to significantly increase electricity consumption in the coming decade.
Despite this coal-forward rhetoric, more coal-fired power capacity was retired during Trump’s first term than under either of President Obama’s terms. Analysts note that even with reduced climate regulations, coal’s economic competitiveness remains constrained by market forces and state-level clean energy mandates. “You can run all these coal plants without environmental regulations…I’m sure that will save industry money,” energy data analyst Seth Feaster of the Institute for Energy Economics and Financial Analysis recently told Wired. “Whether or not the communities around those places really want that is another issue.”
Meanwhile, the federal freeze on electric vehicle (EV) charging infrastructure funding has disrupted planned rollouts in several states. “It puts some players in a bad spot where they’ve already invested,” states Jeremy Michalek, an engineering and public policy professor at Carnegie Mellon University, in a recent article on the topic. Similar concerns are emerging in the aviation and alternative fuels sectors, where projects relying on incentives from the Inflation Reduction Act (IRA) and Infrastructure Investment and Jobs Act (IIJA) now face sudden funding uncertainty.
Last week, Judge Mary McElroy of the U.S. District Court for the District of Rhode Island ordered the Trump Administration to reinstate previously awarded funds under both the IRA and the IIJA, underscoring the legal and financial turbulence surrounding the current regulatory landscape. This new normal is unwelcome to most stakeholders. In a March 2025 press release about another lawsuit specifically challenging the Administration’s freeze on funding from the IRA and IIJA, Skye Perryman, President and CEO of Democracy Forward, states that “The decision to freeze funds that Congress appropriated is yet another attempt to roll back progress and undermine communities. These actions are not only unlawful, but are already having an impact on local economies.”
Meanwhile, in a recent post on TruthSocial, President Trump invited companies to relocate operations to the United States, promising “ZERO TARIFFS, and almost immediate Electrical/Energy hook ups and approvals. No Environmental Delays.” But for regulated entities, states, and federal partners navigating a rapidly shifting policy environment overseen by a new Administration that has diminished and fractured its workforce and shown a propensity to backpedal from bold claims, the promise of speed may not be worth the cost of lost clarity, stability, and long-term sustainability.

Navigating the Evolving Pharmacy Landscape in 2025: Challenges, Opportunities and Innovations

As we stride further into 2025, the pharmacy industry faces a landscape teeming with challenges and opportunities. From tackling drug price transparency to juggling implementation of artificial intelligence, the industry is being transformed before our eyes. The journey ahead is anything but straightforward, with solutions ranging from bold, large-scale changes to more nuanced, focused innovations. Let’s delve into the high-level, dynamic trends shaping the pharmacy world today.
Medication Accessibility Challenges
Imagine living in a community where accessing essential medications has become a Herculean task. Increasingly, this is the reality for patients who live in areas hit hard by the closure of pharmacies. A study in Health Affairs found that more than 29% of the nearly 89,000 retail U.S. pharmacies that operated between 2010 and 2020 had closed by 2021, with the rate of closures increasing between 2018 and 2021 (during which time the number of pharmacies declined in 41 states).[1] That amounts to more than 26,000 store closures.[2] According to one of the study’s authors, Dima Qato, a University of Southern California pharmacy professor, closures are occurring at a higher rate at pharmacies that serve a greater percentage of Medicaid and Medicare patients.[3]
Closures have impacted stores owned by large chains as well as independent pharmacies. Store closures make it more difficult for patients to access medications and to adhere to medication regimens, which puts patients at greater risk and deepens health disparities. But there is hope. There are opportunities to enhance the reliability of mail-order pharmacy models and the user-friendliness of remote pharmacy services. These have the potential to bridge existing gaps in healthcare access, ensuring patients receive medications and expert guidance from pharmacists conveniently and in a timely manner.
Shoring Up Supply Chains
At the same time as the number of pharmacies has decreased, pharmacies have faced challenges accessing product through the pharmaceutical supply chain. Drug shortages in the U.S. healthcare system are driven by several systemic and operational factors, including a vulnerable supply chain that relies heavily on foreign manufacturing of active pharmaceutical ingredients from countries like China and India.[4] This reliance exposes the supply chain to disruptions from geopolitical tensions, pandemics, and natural disasters.[5]
However, these challenges have highlighted opportunities for improvement and growth. Solving drug shortages will require policy innovation, strategic investments, and professional advocacy. Strengthening domestic production of active pharmaceutical ingredients can reduce reliance on foreign suppliers and enhance resilience. Legislative efforts, such as California’s CalRx initiative, focus on drug affordability by producing and distributing generic medications at low costs with transparent pricing, targeting markets lacking competition,[6] while federal proposals like the Affordable Drug Manufacturing Act advocate for government-backed production of essential generics.[7] Pharmacist advocacy is crucial, as they can engage with policymakers to highlight operational challenges and push for reforms like improved communication and streamlined FDA processes during shortages, creating a sustainable framework for drug manufacturing and encouraging fair pricing and access.[8]
Drug Price Transparency Reform
Drug price transparency is under renewed focus, with reforms aiming to increase clarity and reduce the control exerted by pharmacy benefit managers. President Trump’s Executive Order, issued on February 25, 2025, directs agencies to enhance enforcement of existing health plan transparency regulations and to propose new guidelines for further standardizing and comparing pricing data.[9] The order provides a 90-day timeline for agencies to publish new policies, leaving the extent of change uncertain.[10] Despite this uncertainty, pharmacies have a significant role to play in driving these reforms. By emphasizing transparency in negotiated drug prices, pharmacies can foster more competitive dynamics and potentially improve rebate terms.[11]
Artificial Intelligence Caution
Artificial intelligence (AI) has the potential to revolutionize pharmacy by enhancing medication management, patient care, and healthcare efficiency. It can assist pharmacists in selecting drugs and dosages, identifying interactions, and reducing errors, while allowing personalized treatment plans based on patient data, which can improve outcomes and minimize adverse events.[12] AI also has the potential to streamline workflows by automating tasks like dispensing and inventory management, allowing pharmacists to focus on patient care, and can enhance communication through pharmacy applications offering 24/7 support.[13]
Despite its benefits, AI integration in pharmacy faces challenges such as high implementation costs, potential lack of empathy and personal touch that human pharmacists provide, and dependence on the quality of data inputs; incorrect or biased data can lead to flawed outcomes.[14] Ethical concerns like data privacy and informed consent are significant, as AI systems handle sensitive patient information.[15] Moreover, the need for substantial computing resources and technical expertise poses hurdles, particularly for smaller pharmacies.[16]
The pharmacy industry has opportunities to address these risks by investing in training to build trust and proficiency, collaborating with developers to ensure accuracy and to build error-protection measures, and prioritizing data privacy and ethical considerations.[17] By using AI to augment human expertise rather than replace it, pharmacies can maintain personal patient interactions while leveraging AI’s capabilities to enhance care.[18] Through collaborative efforts and innovative solutions, the pharmacy industry has opportunities to enhance health outcomes and access to care for all communities, paving the way for a healthier future.

FOOTNOTES
[1] Jenny S. Guadamuz et al., More US Pharmacies Closed Than Opened In 2018–21; Independent Pharmacies, Those In Black, Latinx Communities Most At Risk, 43 Health Affairs 1703 (2024); see Tom Murphy, Nearly 30% of US Drugstores Closed in One Decade, Study Shows, The Associated Press (Dec. 3, 2024, at 5:10 PM CDT), https://apnews.com/article/drugstore-closings-cvs-walgreens-independent-pharmacies-6b54d4bd1564b2bff7a55a624da61c19.
[2] See Murphy, supra note 1.
[3] See Guadamuz et al., supra note 1; Decline in Number of Pharmacies in Most States Since 2018, U.S. Pharmacist (Dec. 5, 2024), https://www.uspharmacist.com/article/decline-in-number-of-pharmacies-in-most-states-since-2018.
[4] See Joseph L. Fink & Kelli A. Boyden, Addressing Drug Shortages: A Call to Action for Pharmacists and Policymakers, 91 Pharmacy Times 38 (2025).
[5] Id.
[6] Fact Sheet: Making Prescription Drugs More Affordable For Californians, The State of California (updated Mar. 17, 2023), https://calrx.ca.gov/uploads/2023/03/CalRx-Fact-Sheet.pdf.
[7] Fink & Boyden, supra note 4.
[8] Id.
[9] Donald J. Trump, Making America Healthy Again by Empowering Patients with Clear, Accurate, and Actionable Healthcare Pricing Information, The White House (Feb. 25, 2025), https://www.whitehouse.gov/presidential-actions/2025/02/making-america-healthy-again-by-empowering-patients-with-clear-accurate-and-actionable-healthcare-pricing-information/. 
[10] Id.
[11] See Ed Schoonveld, US Drug Price Negotiations and Transparency, Pharmaceutical Commerce (Apr. 9, 2025), https://www.pharmaceuticalcommerce.com/view/us-drug-price-negotiations-and-transparency.
[12] See Rayn Oswalt, The Role of Artificial Intelligence in Pharmacy Practice, Pharmacy Times (Sept. 5, 2023), https://www.pharmacytimes.com/view/the-role-of-artificial-intelligence-in-pharmacy-practice; Osama Khan et al., The Future of Pharmacy: How AI is Revolutionizing the Industry, 1 Intelligent Pharmacy 32-40 (2023).
[13] Id.
[14] Id.
[15] Id.
[16] Id.
[17] Id.
[18] Id.

European Commission Publishes the AI Continent Action Plan

On April 9, 2025, the European Commission published the AI Continent Action Plan (the “Action Plan”). The objective of the Action Plan is to strengthen artificial intelligence (“AI”) development and uptake in the EU, making the EU a global leader in AI. The Action Plan builds upon the InvestAI initiative that aims to mobilize €200 billion for investment in AI in the EU.
The Action Plan is divided into five strategic areas where the EU intends to intervene to foster its AI ambitions:

Computing infrastructure. Measures envisioned include setting up 13 AI Factories across the EU and five AI Gigafactories (powered by over 100,000 advanced AI processors), for which the Commission will mobilize €20 billion from the InvestAI initiative, and proposing a Cloud and AI Development Act to boost private investment in cloud and data centers in the EU.
Data. The European Commission aims to fully realize the single market for data through the upcoming Data Union Strategy. This strategy intends to respond to the scarcity of robust and high-quality data for the training and validation of AI models. The European Commission will also implement data labs within AI factories to gather and organize high-quality data from diverse sources and continue supporting the deployment of Common European Data Spaces.
Foster innovation and accelerate AI adoption in strategic EU sectors. Measures to be implemented include adapting scientific research programs to boost development and deployment of AI/generative AI, and through the Apply AI Strategy, integrating AI in strategic sectors and boosting the use of this technology by the European industry.
Strengthen AI skills and talent. Measures to be implemented include facilitating international recruitment, supporting the increase in provision of EU bachelor’s and master’s degrees as well as PhDs focusing on key technologies, including AI, and promoting AI literacy in the current workforce.
Fostering regulatory compliance and simplification. Measures to be implemented in this context include creating an AI Act Service Desk through which organizations may request clarifications and obtain practical advice regarding their AI Act compliance. The European Commission will also continue its efforts to provide AI Act guidance and will launch a process to identify stakeholders’ regulatory challenges and inform possible further measures to facilitate compliance and possible simplification of the AI Act.

Read the AI Continent Action Plan.