OMB Issues Revised Policies on AI Use and Procurement by Federal Agencies

On April 3, 2025, the White House’s Office of Management and Budget (“OMB”) issued two revised policies on federal agencies’ use and procurement of artificial intelligence (“AI”), M-25-21 (“Accelerating Federal Use of AI through Innovation, Governance, and Public Trust”) and M-25-22 (“Driving Efficient Acquisition of Artificial Intelligence in Government”). These memos are designed to support the implementation of Executive Order 14179 (“Removing Barriers to American Leadership in Artificial Intelligence”), which was signed on January 23, 2025, and largely focuses on removing existing policies on AI technologies to facilitate rapid, responsible adoption across the federal government and improve public services.
The revised memos essentially replace the OMB memos published during the Biden Administration, including M-24-10 (“Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence”), which was issued on March 28, 2024. Key differences in the revised memos include:

a “forward-leaning and pro-innovation” approach to AI that encourages accelerated adoption and acquisition of AI by reducing bureaucratic burdens and maximizing U.S. competitiveness;
empowerment of agency leadership to implement AI governance efforts, risk management and interagency coordination;
transparency measures for the public that demonstrate AI risk mitigation, use, value and efficiency;
allowance of waivers for “high-impact” AI use cases and transparency requirements when justified; and
a strong preference for American-made AI tools and services, as well as for developing and retaining American AI talent.

OMB Memorandum M-25-21: Accelerating Federal Use of AI through Innovation, Governance, and Public Trust
OMB Memo M-25-21 outlines a new framework for the acceleration of federal agencies’ adoption and use of innovative AI technologies by focusing on three key priorities: innovation, governance and public trust. The memo seeks to lessen potential bureaucratic burdens and restrictions that the Administration contends have hindered timely uptake of AI across federal agencies, with the goal of ensuring that the American public receives the maximum benefit from AI adoption.
Scope

The memo applies to “new and existing AI that is developed, used, or acquired by or on behalf of covered agencies” and to “system functionality that implements or is reliant on AI, rather than to the entirety of an information system that incorporates AI.” The memo does not cover AI being used as a component of a National Security System.

Key Provisions

Removing bureaucratic barriers: Agencies are directed to streamline AI adoption by reducing unnecessary requirements, increasing transparency, and maximizing existing resources and investments. CFO Act agencies must, within 180 days, publish agency-wide strategies for removing barriers to AI use.
Mandating Chief AI Officers: Agencies must, within 60 days, designate Chief AI Officers (“CAIOs”) to lead AI governance implementation, risk management and strategic AI adoption efforts. The CAIO will serve as the senior advisor on AI to the head of the agency and support interagency coordination on AI (e.g., AI-related councils, standard-setting bodies, international bodies). To further support agencies’ efforts, OMB will convene an interagency council to coordinate federal AI development and use.
Establishing agency AI Governance Boards: Within 90 days, CFO Act agencies must convene their own governance boards to coordinate cross-functional oversight, with representation from key stakeholders across the agency, including IT, cybersecurity, data, and budget officials.
Enabling workforce readiness: The memo encourages agencies to leverage AI training programs and resources to upskill the federal workforce on AI technology. Agencies also are encouraged to set clear expectations for their workforce on appropriate AI use and to establish designated channels for accountability for AI risk.
Implementing oversight over high-impact AI: Agencies must implement risk management practices for “high-impact” AI use cases. AI is considered “high-impact” if “its output serves as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety.” For these high-impact use cases, agencies must:

conduct pre-deployment testing to identify both potential risks and benefits of the AI use case;
complete AI Impact Assessments before and throughout deployment that evaluate the intended purpose and expected benefit, performance of the model, and ongoing impacts of its use;
ensure adequate human oversight by providing AI training and implementing appropriate safeguards for human intervention;
offer remedies or appeals for individuals affected by AI-enabled decisions; and
cease or pause use of high-impact AI that does not comply with the minimum requirements set forth in the memo.

Mandating transparency measures for the public: Agencies must inventory their AI use cases at least annually and publish the inventory publicly. Agencies also must publicly report risk determinations and waivers from minimum practices for high-impact AI, along with a justification.

OMB Memorandum M-25-22: Driving Efficient Acquisition of Artificial Intelligence in Government
OMB Memo M-25-22 complements Memo M-25-21 by instructing federal agencies how to acquire AI responsibly. The memo focuses on three overarching themes: fostering a competitive American marketplace for AI to ensure high-quality, cost-effective solutions for the public; safeguarding taxpayer dollars by tracking AI performance and managing risks; and promoting effective AI acquisition through cross-functional engagement.
Scope

The memo applies to “AI systems or services that are acquired by or on behalf of covered agencies,” and exempts AI acquired for use as a component of a National Security System, among other exemptions.

Key Provisions

Investing in the American AI marketplace: The memo encourages agencies to maximize investments by purchasing U.S.-developed AI solutions where possible. Agencies also are encouraged to develop and retain AI talent with relevant technical expertise who can contribute to ongoing efforts to scale and govern AI.
Protecting American privacy and IP rights: Agencies must ensure that any acquired AI system complies with existing privacy and IP legal requirements. Agencies also must have appropriate processes in place that cover the use of government data. For example, procurement contracts should include terms that prevent vendors from processing such data for the purpose of training, fine-tuning or developing an AI system without explicit consent from the agency.
Ensuring competitive, cost-effective procurement: Procurement contracts should protect against vendor lock-in through requirements, including vendor knowledge transfers, data and model portability, and transparency. Agencies also may incentivize competition by leveraging performance-based contracting to ensure satisfactory model performance.
Assessing AI risks across the lifecycle: Agencies must ensure that contracts include the ability to regularly monitor and evaluate the performance, risks, and effectiveness of an AI system or service. Agencies also are encouraged to require vendors to perform regular assessments, mitigate new risks, and address changes in AI model performance. Contracts also must comply with the minimum risk management practices for high-impact AI use cases (outlined in OMB Memo M-25-21).
Contributing to a shared repository of best practices: Within 200 days, GSA, in coordination with OMB, will develop an online repository of tools and resources to enable responsible AI procurement. Agencies should contribute to this repository where possible to foster knowledge-sharing and interagency cooperation.
Requiring disclosure of unanticipated vendor AI use: Agencies should consider including solicitation provisions in their contracts that require disclosure of unanticipated vendor use of AI.

Q1 2025 New York Artificial Intelligence Developments: What Employers Should Know About Proposed and Passed Artificial Intelligence Legislation

In the first part of 2025, New York joined other states, such as Colorado, Connecticut, New Jersey, and Texas,1 in seeking to regulate artificial intelligence (AI) at the state level. Specifically, on 8 January 2025, bills focused on the use of AI decision-making tools were introduced in both the New York Senate and State Assembly. As discussed further below, the New York AI Act, Bill S01169 (the NY AI Act),2 focuses on addressing algorithmic discrimination by regulating and restricting the use of certain AI systems, including in employment. The NY AI Act would allow for a private right of action, empowering citizens to bring lawsuits against technology companies. Additionally, the New York AI Consumer Protection Act, Bill A00768 (the Protection Act),3 would amend the general business law to prevent the use of AI algorithms to discriminate against protected classes, including in employment.
This alert discusses these two pieces of legislation and provides recommendations for employers as they navigate the patchwork of proposed and enacted AI legislation and federal guidance.
Senate Bill 1169
On 8 January 2025, New York State Senator Kristen Gonzalez introduced the NY AI Act because “[a] growing body of research shows that AI systems that are deployed without adequate testing, sufficient oversight, and robust guardrails can harm consumers and deny historically disadvantaged groups the full measure of their civil rights and liberties, thereby further entrenching inequalities.” The NY AI Act would cover all “consumers,” defined as any New York state resident, including residents who are employees and employers.4 The NY AI Act states that “[t]he legislature must act to ensure that all uses of AI, especially those that affect important life chances, are free from harmful biases, protect our privacy, and work for the public good.”5 It further asserts that, as the “home to thousands of technology start-ups,” including those that experiment with AI, New York must prioritize safe innovation in the AI sector by providing clear guidance for AI development, testing, and validation both before a product is launched and throughout the product’s life.6 
Setting it apart from other proposed and enacted state AI laws,7 the NY AI Act includes a private right of action allowing New York state residents to file claims against technology companies for violations. The NY AI Act also provides for enforcement by the state’s attorney general. In addition, under the proposed law, consumers would have the right to opt out of automated decision-making or appeal its results.
The NY AI Act defines “algorithmic discrimination” as any condition in which the use of an AI system contributes to unjustified differential treatment or impacts, disfavoring people based on their actual or perceived age, race, ethnicity, creed, religion, color, national origin, citizenship or immigration status, sexual orientation, gender identity, gender expression, military status, sex, disability, predisposing genetic characteristics, familial status, marital status, pregnancy, pregnancy outcomes, height, weight, reproductive health care or autonomy, status as a victim of domestic violence, or other classification protected under state or federal laws.8
The NY AI Act requires “deployers” using a high-risk AI system9 for a consequential decision10 to comply with certain obligations. “Deployers” is defined as “any person doing business in [New York] state that deploys a high-risk artificial intelligence decision system.”11 This includes New York employers. For instance, deployers must disclose to the end user, in clear, conspicuous, and consumer-friendly terms, that they are using an AI system that makes consequential decisions at least five business days before the use of such a system. The deployer must give the consumer sufficient time and opportunity, in a clear, conspicuous, and consumer-friendly manner, to opt out of the automated process and have a human representative make the decision. A consumer may not be punished or face any other adverse action for opting out of a decision by an AI system, and the deployer must render a decision to the consumer within 45 days.12
Further, any deployer that employs a high-risk AI system for a consequential decision must inform the end user, within five days and in a clear, conspicuous, and consumer-friendly manner, if a consequential decision has been made entirely by or with the assistance of an automated system. The deployer must then provide and explain a process for the end user to appeal the decision, which must, at a minimum, allow the end user to (a) formally contest the decision, (b) provide information to support their position, and (c) obtain meaningful human review of the decision.13
Additionally, deployers must complete an audit before using a high-risk AI system, again six months after deployment, and at least every 18 months thereafter for each calendar year the system remains in use. Regardless of the findings, deployers must deliver all completed audits to the attorney general.
As mentioned above, the NY AI Act may be enforced by the attorney general or through a private right of action brought by consumers. If a violation occurs, the attorney general may seek an injunction to enjoin and restrain its continuance.14 If the court determines that a violation occurred, it may impose a civil penalty of not more than US$20,000 for each violation. The NY AI Act also provides a private right of action for any person harmed by a violation, and the court shall award compensatory damages and legal fees to the prevailing party.15
The NY AI Act also offers whistleblower protections, prohibits social scoring AI systems, and prohibits waiving legal rights.16 
Assembly Bill 768
Also on 8 January 2025, New York State Assembly Member Alex Bores introduced the Protection Act. Like the NY AI Act, the Protection Act seeks to prevent the use of AI algorithms to discriminate against protected classes. 
The Protection Act defines “algorithmic discrimination” as any condition in which the use of an AI decision system results in any unlawful differential treatment or impact that disfavors any individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, English language proficiency, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected pursuant to state or federal law.17 
The Protection Act requires a “bias and governance audit” consisting of an impartial evaluation by an independent auditor, which shall include, at a minimum, the testing of an AI decision system to assess such system’s disparate impact on employees because of such employee’s age, race, creed, color, ethnicity, national origin, disability, citizenship or immigration status, marital or familial status, military status, religion, or sex, including sexual orientation, gender identity, gender expression, pregnancy, pregnancy outcomes, and reproductive healthcare choices.18 
If enacted, beginning 1 January 2027, the Protection Act would require each deployer of a high-risk AI decision system to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.19 Specifically, deployers would be required to implement and maintain a risk management policy and program that is regularly reviewed and updated. The Protection Act references external sources employers can look to for guidance and compliance, such as the “AI Risk Management Framework” published by the National Institute of Standards and Technology and the International Organization for Standardization’s ISO/IEC 42001 standard.20
Also beginning 1 January 2027, employers deploying a high-risk AI decision system that makes, or is a substantial factor in making, a consequential decision concerning a consumer would have to:

Notify the consumer that the deployer has used a high-risk AI decision system to make, or be a substantial factor in making, a consequential decision.
Provide to the consumer a statement disclosing: (I) the purpose of the high-risk AI decision system; and (II) the nature of the consequential decision.21 
Make available a statement summarizing the types of high-risk AI decision systems that are currently used by the deployer.
Explain how the deployer manages any known or reasonably foreseeable risks of algorithmic discrimination. 
Notify the consumer of the nature, source, and extent of the information collected and used by the deployer.22

New York City Council Local Law Int. No. 1894-A
While the NY AI Act and Protection Act are not yet enacted, New York City employers should ensure they are following Local Law Int. No. 1894-A (the NYC AI Law), which became effective on 5 July 2023. The NYC AI Law aims to protect job candidates and employees from unlawful discriminatory bias based on race, ethnicity, or sex when employers and employment agencies use automated employment decision-making tools (AEDTs) as part of employment decisions.
Compared to the proposed state laws, the NYC AI Law applies narrowly to employers and employment agencies in New York City that use AEDTs to screen candidates or employees for positions located in the city. As under the proposed state legislation, bias audits and notice are required whenever an AEDT is used. Candidates and employees must be notified of the use of an AEDT at least 10 business days in advance. Under the NYC AI Law, an AEDT is:
[A]ny computational process, derived from machine learning, statistical modeling, data analytics, or [AI], that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.

The NYC AI Law requires that audits be completed by an independent auditor who details the sources of data (testing or historical) used in the audit. The results of the bias audit must be published on the website of the employer or employment agency, or an active hyperlink to a website with this information must be provided, for at least six months after the latest use of the AEDT for an employment decision. The summary of results must include (i) the date of the most recent bias audit of the AEDT; (ii) the source and explanation of the data; (iii) the number of individuals the AEDT assessed that fall within an unknown category; and (iv) the number of applicants or candidates, the selection or scoring rates, as applicable, and the impact ratios for all categories.23 Noncompliance with the NYC AI Law carries civil penalties of US$500 to US$1,500 per violation, with no cap on total civil penalties. Further, the NYC AI Law authorizes a private right of action, in court or through administrative agencies, for aggrieved candidates and employees.
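For the impact ratios in item (iv), the published DCWP guidance generally computes each category’s selection rate and divides it by the rate of the most-selected category. The sketch below illustrates that arithmetic; the category labels and counts are hypothetical and are not drawn from any actual audit or from the separate formulas that apply to score-based tools.

```python
# Minimal sketch: selection rates and impact ratios for a hypothetical
# selection-based AEDT bias audit. All counts are illustrative only.

def impact_ratios(applicants: dict, selected: dict) -> dict:
    """Return each category's impact ratio: its selection rate divided by
    the selection rate of the most-selected category."""
    selection_rates = {
        category: selected[category] / applicants[category]
        for category in applicants
    }
    best_rate = max(selection_rates.values())
    return {category: rate / best_rate for category, rate in selection_rates.items()}


if __name__ == "__main__":
    # Hypothetical applicant and selection counts by category.
    applicants = {"Category A": 400, "Category B": 250, "Category C": 120}
    selected = {"Category A": 120, "Category B": 60, "Category C": 24}
    for category, ratio in impact_ratios(applicants, selected).items():
        print(f"{category}: impact ratio = {ratio:.2f}")
    # Prints ratios of 1.00, 0.80, and 0.67 for Categories A, B, and C.
```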
Takeaways for Employers 
Employers should ensure compliance with the existing NYC AI Law and prepare for future state legislation.24
Employers should: 

Assess AI Systems: Identify any AI systems your company develops or deploys, particularly those used in consequential decisions related to employment.
Review Data Management Policies: Ensure your data management policies comply with data security protection standards.
Prepare for Audits: Familiarize yourself with the audit requirements and begin preparing for potential audits of high-risk AI systems.
Develop Internal Processes: Establish internal processes for employee disclosures related to AI system violations.
Monitor Legislation: Stay informed about proposed bills, such as AB326525 and AB3356,26 and continually review guidance from federal agencies. 

Our Labor, Employment, and Workplace Safety lawyers regularly counsel clients on a wide variety of concerns related to emerging issues in labor, employment, and workplace safety law and are well-positioned to provide guidance and assistance to clients on AI developments.
Footnotes

1 Please see the following alert for more information on the proposed Texas legislation: Kathleen D. Parker, et al., The Texas Responsible AI Governance Act and Its Potential Impact on Employers, K&L GATES HUB (Jan. 13, 2025), https://www.klgates.com/The-Texas-Responsible-AI-Governance-Act-and-Its-Potential-Impact-on-Employers-1-13-2025.
2 S. 1169, 2025-2026 Gen. Assemb., Reg. Sess., § 85 (N.Y. 2025), https://www.nysenate.gov/legislation/bills/2025/S1169.
3 A.B. 768, 2025-2026 Gen. Assemb., Reg. Sess., § 1550 (N.Y. 2025), https://www.nysenate.gov/legislation/bills/2025/A768.
4 S. 1169, supra note 2, § 85.
5 Id. § 2(b).
6 Id. § 2(c).
7 Please see the following alert for more information on state AI laws: Michael J. Stortz, et al., Litigation Minute: State Generative AI Statutes and the Private Right of Action, K&L GATES HUB (Jun. 17, 2024), https://www.klgates.com/Litigation-Minute-State-Statutes-and-the-Private-Right-of-Action-6-17-2024
8 S. 1169, supra note 2, § 85(1).
9 Id. § 85(12) “High-Risk AI System” means any AI system that, when deployed: (A) is a substantial factor in making a consequential decision; or (B) will have a material impact on the statutory or constitutional rights, civil liberties, safety, or welfare of an individual in the state.
10 Id. § 85(4) “Consequential Decision” means a decision or judgment that has a material, legal or similarly significant effect on an individual’s life relating to the impact of, access to, or the cost, terms, or availability of, any of the following: (A) employment, workers’ management, or self-employment, including, but not limited to, all of the following: (i) pay or promotion; (ii) hiring or termination; and (iii) automated task allocation. (B) education and vocational training, including, but not limited to, all of the following: (i) assessment or grading, including, but not limited to, detecting student cheating or plagiarism; (ii) accreditation; (iii) certification; (iv) admissions; and (v) financial aid or scholarships. (C) housing or lodging, including rental or short-term housing or lodging. (D) essential utilities, including electricity, heat, water, internet or telecommunications access, or transportation. (E) family planning, including adoption services or reproductive services, as well as assessments related to child protective services. (F) health care or health insurance, including mental health care, dental, or vision. (G) financial services, including a financial service provided by a mortgage company, mortgage broker, or creditor. (H) law enforcement activities, including the allocation of law enforcement personnel or assets, the enforcement of laws, maintaining public order or managing public safety. (I) government services. (J) legal services.
11 A.B. 768, supra note 3, § 1550(7).
12 S. 1169, supra note 2, § 86(a).
13 Id. § 86(2).
14 Id. § 89(b)(1).
15 Id. § 89(b)(2).
16 Id. §§ 86(b), 89(a), 86(4).
17 A.B. 768, supra note 3, § 1550(1).
18 Id. § 1550(3).
19 Id. § 1552(1)(a).
20 Id. § 1552(2)(a).
21 Id. § 1552(5)(a).
22 Id. § 1552(6)(a).
23 N.Y.C. Dep’t of Consumer & Worker Prot., Automated Employment Decision Tools (AEDT) – Frequently Asked Questions, https://www.nyc.gov/assets/dca/downloads/pdf/about/DCWP-AEDT-FAQ.pdf. 
24 Please see the following alert for more information: Maria Caceres-Boneau, et al., New York Proposal to Protect Workers Displaced by Artificial Intelligence, K&L GATES HUB (Feb. 20, 2025), https://www.klgates.com/New-York-Proposal-to-Protect-Workers-Displaced-by-Artificial-Intelligence-2-18-2025
25 A.B. 3265, 2025-2026 Gen. Assemb., Reg. Sess., (N.Y. 2025), https://www.nysenate.gov/legislation/bills/2025/A3265
26 A.B. 3356, 2025-2026 Gen. Assemb., Reg. Sess., (N.Y. 2025), https://www.nysenate.gov/legislation/bills/2025/A3356

AI Updates: Committees on Capitol Hill Continue Debate on Future of Emerging Technologies

“And if you study the history of the world, the nations that are the most military and economically domineer [sic] are the nations that are the most innovative,” Sen. Jon Husted (R-OH) remarked at a recent congressional hearing. This sentiment is shared by many of his colleagues on both sides of the aisle on Capitol Hill who recognize the need for America to stay at the forefront of development and deployment of artificial intelligence (AI). Lawmakers continue to be caught in a debate between promoting innovation and allowing for the creation of new technologies domestically while also safeguarding the American people from the risks they pose. 
Last year, both the Bipartisan House AI Task Force and the Senate AI Working Group released recommendations on next steps relating to AI (see here for our summary of the House’s report). Over the past few weeks, there has been renewed momentum in this new session of Congress, with numerous committees, covering a wide range of jurisdictions, holding hearings to discuss AI. The four hearings below range in jurisdiction and continue to show that AI touches nearly every industry:

Harnessing Artificial Intelligence Cyber Capabilities, Senate Armed Services Committee
America’s AI Moonshot: The Economics of AI, Data Centers, and Power Consumption, House Oversight Committee 
Converting Energy into Intelligence: the Future of AI Technology, Human Discovery, and American Global Competitiveness, House Energy and Commerce Committee 
Examining Trends in Innovation and Competition, House Judiciary Committee

Below we offer high level recaps for these hearings. Our team continues to track how Congress is grappling with AI and its impacts on numerous industries, with the expectation that we will continue to see a high level of interest from Capitol Hill and the Trump Administration on how to best regulate this critical technology.
Harnessing Artificial Intelligence Cyber Capabilities
On 25 March 2025, the Senate Armed Services Cybersecurity Subcommittee held a hearing titled Harnessing Artificial Intelligence Cyber Capabilities. Led by Chairman Mike Rounds (R-SD) and Ranking Member Jacky Rosen (D-NV), the hearing gathered testimony from cyber-industry leaders and experts, focusing on the implications of integrating AI into the cyber defense and offense strategies of the Department of Defense (DOD). It also contemplated the role of human oversight in AI and the energy demands needed to support AI development.
Witnesses warned that the pursuit of artificial general intelligence (AGI) could create international tensions akin to those of the nuclear arms race. They argued that DOD will not be exempt from these dynamics. Drawing on the statements of the experts and the Senators, the message was clear: innovate or face an existential threat. As National Defense Authorization Act (NDAA) negotiations get underway for fiscal year (FY) 2026, these considerations are sure to be top of mind for many lawmakers, as they have been in previous iterations of the bill. See here for our previous publication on AI in the FY 2025 NDAA.
America’s AI Moonshot: The Economics of AI, Data Centers, and Power Consumption
On 1 April 2025, the House Oversight Committee’s Economic Growth, Energy Policy, and Regulatory Affairs Subcommittee held a hearing titled America’s AI Moonshot: The Economics of AI, Data Centers, and Power Consumption. Like their Senate counterparts on the Armed Services Cybersecurity Subcommittee, the members of the House Oversight Committee warned of the consequences of falling behind in the AI arms race to foreign adversaries. There was not a clear consensus amongst the members, however, on how to meet the energy demand required by data centers used for AI. Natural gas, wind, solar, coal, and nuclear power were all floated as possible sources for energy. The members debated the tradeoffs between environmental impacts and sufficiency of the sources, especially as this relates to local communities where the data centers are or would be located. 
Converting Energy Into Intelligence: The Future of AI Technology, Human Discovery, and American Global Competitiveness
On 9 April 2025, the House Energy and Commerce Committee held a hearing titled Converting Energy into Intelligence: The Future of AI Technology, Human Discovery, and American Global Competitiveness, at which members echoed many of the points made during the House Oversight Committee’s hearing, especially in the debate over whether to use renewable or non-renewable energy sources. This, along with efforts from members like Rep. Julie Fedorchak (R-ND), who has started a new AI and Energy Working Group, shows the continued focus on how the US will power AI going forward. Rep. Fedorchak released a request for information in March and is working with stakeholders to develop a legislative framework for powering the future of AI.
Examining Trends in Innovation and Competition
On 2 April 2025, the House Judiciary Subcommittee on the Administrative State, Regulatory Reform, and Antitrust held a hearing titled Examining Trends in Innovation and Competition. This hearing approached AI from a slightly different angle. The Subcommittee narrowed its discussion primarily to what a regulatory framework should look like. During the hearing, there was concern from witnesses that an overreaching framework could have a chilling effect on innovation. Witnesses alluded to the European model, and the GDPR and Digital Markets Act (DMA) in particular. Subcommittee Chair Scott Fitzgerald (R-WI) advocated instead for a framework more reflective of the method that the US has traditionally followed, saying “we need to stay true to what works, and that is free enterprise, open competition and light-touch regulatory approach that allows innovation to flourish.”
Momentum Continues in 2025 
Although these hearings do not represent formal legislative momentum on AI, the bipartisan interest in AI is clear. With the expectation that Congress will continue to address AI writ large with a focus on energy and defense, we are expecting continued movement and robust policy efforts throughout the rest of 2025. This is a critical time for stakeholders to engage in this area, and our team is ready and available to assist.

“Delete All IP law”? Why the Tech Titans Want to Pull Up the Ladder Behind Them

Recently, tech titans Jack Dorsey and Elon Musk made headlines by calling for the radical dismantling of the intellectual property (IP) system—urging us to “delete all IP.” Their stance may seem superficially appealing at a time when artificial intelligence (AI) is destabilizing traditional IP boundaries and intensifying legal uncertainty. But this perspective is not just short-sighted—it betrays the foundational American principle that innovation is best nurtured through enforceable rights. 
The Constitutional Foundation of IP Law
Intellectual property is not a vestigial tool of corporate control or government overreach—it is embedded in the DNA of our constitutional system. Our Constitution grants Congress the power “[t]o promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”1
Our Founders deliberately enshrined IP rights not to stifle creativity, but to spur it. Temporary monopolies promote disclosure, encourage commercialization, and ultimately fuel progress. 
Why IP Matters Even More in the AI Age
The rise of AI magnifies challenges to IP regimes. Because AI replicates core human faculties like creativity and problem-solving, it complicates how we assign exclusive rights in inventions and creative works.2 Yet these complexities are not reasons to discard IP law—they are reasons to adapt and strengthen it. If Dorsey envisions “better ways to pay creators,” even those must rest on the foundational principle that creators own what they produce.
The Patchwork of IP: Each Branch Serves a Distinct Purpose
The Dorsey-Musk critique flattens IP into a single, monolithic enemy. But IP law is a mosaic—each piece crafted to serve specific economic and societal purposes:

Patent law incentivizes disclosure and the advancement of science by awarding exclusive rights to inventors for novel and non-obvious inventions;
Copyright law incentivizes original expression by awarding exclusive rights to creative works;
Trademark law is about trust and consumer protection, not innovation per se. It ensures brand integrity and helps consumers make informed decisions; and
Trade secret law protects business-critical information that gives firms a competitive edge, especially when patenting is not feasible or desired.

Copyright enforcement typically requires proof of actual copying, making it a more direct and fact-specific inquiry. Patent enforcement, by contrast, can extend to independently developed inventions that merely practice a claimed method or apparatus—often ensnaring innovators unaware of the patent’s existence. Conflating the two overlooks the fundamental differences in enforcement and scope. Using the excesses of patent litigation to justify dismantling copyright law only further erodes the already fragile rights of creators in the digital age.
Climbing then Pulling up the IP Ladder Behind Them…
For startups, patent law is scaffolding—supporting their climb toward market viability. But for tech giants like Musk and Dorsey, who’ve already reached the summit, IP law may now feel like a constraint. Their proposal to “delete all IP” reads less like a call for open innovation and more like an effort to pull up the ladder behind them.
LLM and AI providers don’t depend on patents to protect their dominance. Their competitive edge lies in trade secrets—like model weights (the internal parameters of AI systems) and the massive, proprietary datasets used in training—which are nearly impossible to reverse engineer. Add to that their unmatchable first-mover advantages, and it’s clear why they no longer need IP—but smaller competitors still do. 
A Responsible Path Forward: Reform, Not Repeal
Without intellectual property and laws to enforce it, companies would have no recourse against copycats or bad actors.
We do need to rethink IP in the age of AI, including the following reforms:

clearer standards for “sufficiency of human contribution” in AI-assisted inventions;
better rules to address copyright liability in training datasets; and
guidelines for AI’s role in generating patentable subject matter, especially under §101 eligibility rules that have grown increasingly narrow post-Alice.3

But these reforms depend on having a system to reform. We cannot foster an AI ecosystem of startup and small and medium-sized enterprise (SME) AI developers without providing enforceable rights so that they can protect their innovations, including against infringement and misappropriation by the AI behemoths, Musk’s among them.4 Nor can we effectively regulate AI’s misuse, combat algorithmic bias, or hold dominant players accountable without the transparency that IP law—and especially patent law—can help facilitate. The Constitution doesn’t promise innovation despite IP; it promises innovation through IP.

1 U.S. Const., Art. I, Sec. 8, Cl. 8.
2 See Jim W. Ko & Paul R. Michel, Testing the Limits of the IP Legal Regimes: The Unique Challenges of Artificial Intelligence, 25 Sedona Conf. J. 389–541, at 433–82 (2024).
3 See id. at 540–41.
4 xAI, Musk’s artificial intelligence firm, acquired X in March 2025 and leverages the vast user data of the former Twitter to develop and enhance its AI models.

Health-e Law Episode 17: Navigating AI: Governance and Innovation at UCSD Health With Ron Skillens of UCSD Health [Podcast]

Welcome to Health-e Law, Sheppard Mullin’s podcast exploring the fascinating health tech topics and trends of the day. In this episode, Ron Skillens, Chief Compliance and Privacy Officer at UC San Diego Health, joins host Michael Orlando to discuss the transformative potential of AI in healthcare and the importance of balancing innovation with compliance.
What We Discussed in This Episode:

How could AI transform patient care and hospital operations in the next five years?
With health data being as sensitive and valuable as it is, why is an AI governance structure crucial for the creative and compliant use of AI?
How can AI usage be effectively managed and coordinated between stakeholders to strike the right balance of innovation and risk?
What have been some of the biggest challenges and lessons learned when establishing an AI governance structure?
In what ways does patient interest shape the evaluation of AI applications in healthcare?
What is the best way to keep staff and stakeholders updated on the latest AI advancements, emerging trends and best practices?

Federal Circuit: Machine Learning Patents Ineligible in Recentive Analytics, Inc. v. Fox Corp.

Go-To Guide:

The Federal Circuit ruled, in a case of first impression, that the machine learning patents at issue were ineligible under 35 U.S.C. § 101. 
In its precedential decision, the court held that claims that merely apply “established methods of machine learning” to “new data environments” are ineligible for protection. 
The court emphasized that iterative training and dynamic adjustments are inherent to the nature of machine learning and do not constitute an inventive concept. 
The decision underscores the importance of disclosing specific implementations or improvements to machine learning processes in patent applications.

In a precedential decision addressing the intersection of machine learning and patent law, the Federal Circuit affirmed the district court’s dismissal of Recentive Analytics, Inc.’s patent infringement claims against Fox Corp. and its affiliates. The court held that Recentive’s patents merely applied generic machine learning techniques to the fields of event scheduling and network map creation, and thus were directed to abstract ideas that lacked an inventive concept sufficient to satisfy the requirements of 35 U.S.C. § 101. This decision underscores the challenges of securing patent protection for new applications of established machine learning techniques in various fields.
In a succinct statement of the Federal Circuit’s decision, Judge Dyk, writing for the panel that included Judge Prost and Chief District Judge Goldberg (sitting by designation), stated “[t]his case presents a question of first impression: whether claims that do no more than apply established methods of machine learning to a new data environment are patent eligible. We hold that they are not.”
Recentive Analytics, Inc. is the owner of four patents: U.S. Patent Nos. 10,911,811 (‘811 patent), 10,958,957 (‘957 patent), 11,386,367 (‘367 patent), and 11,537,960 (‘960 patent). These patents fall into two categories: the “Machine Learning Training” patents (the ‘367 and ‘960 patents) and the “Network Map” patents (the ‘811 and ‘957 patents). The Machine Learning Training patents claim methods for dynamically generating optimized schedules for live events using machine learning models, while the Network Map patents claim methods for creating optimized network maps for television broadcasters using similar techniques.
Recentive sued Fox Corp., Fox Broadcasting Company, LLC, and Fox Sports Productions, LLC (collectively, Fox) for infringement of these patents. Fox moved to dismiss the complaint, arguing that the patents were directed to ineligible subject matter under § 101. The district court granted the motion, finding that the patents were directed to abstract ideas and lacked an inventive concept. Recentive appealed to the Federal Circuit.
The Machine Learning Training patents focus on optimizing event schedules using machine learning models. Claim 1 of the ‘367 patent is representative and describes a method involving:

1. Collecting Data: Receiving event parameters (e.g., venue availability, ticket prices) and target features (e.g., event attendance, revenue).
2. Training the Model: Iteratively training a machine learning model to identify relationships between the event parameters and target features using historical data.
3. Generating Output: Producing an optimized schedule for future events based on user-specific inputs.
4. Updating the Schedule: Dynamically adjusting the schedule in response to real-time changes in data.

The specification emphasizes that the machine learning model can employ “any suitable machine learning technique,” such as neural networks, decision trees, or support vector machines. It also highlights the use of generic computing equipment to implement the claimed methods.
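To illustrate what the court meant by applying “established methods of machine learning” to a new data environment, the sketch below fits a stock, off-the-shelf model to hypothetical historical event data and ranks candidate schedule slots by predicted attendance. It is not the patented method; the feature names, data, and the choice of scikit-learn are illustrative assumptions only.

```python
# Illustrative sketch only: a generic machine learning workflow applied to
# event scheduling, the sort of "conventional technique on new data" the
# court found insufficient for patent eligibility. All data is hypothetical.
from sklearn.ensemble import RandomForestRegressor

# Hypothetical historical event parameters: [day_of_week, start_hour, ticket_price]
X_train = [
    [5, 19, 45.0],  # Friday, 7 pm
    [6, 14, 30.0],  # Saturday, 2 pm
    [2, 20, 55.0],  # Tuesday, 8 pm
    [6, 19, 40.0],  # Saturday, 7 pm
]
y_train = [4200, 3100, 1800, 5000]  # observed attendance (the "target feature")

# "Training the model" here is simply fitting a stock regressor; nothing
# about the machine learning technique itself is modified or improved.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# "Generating output": rank candidate schedule slots by predicted attendance.
candidates = [[5, 20, 45.0], [6, 13, 35.0], [3, 18, 25.0]]
predictions = model.predict(candidates)
best_slot, best_prediction = max(zip(candidates, predictions), key=lambda pair: pair[1])
print("Best candidate slot:", best_slot, "predicted attendance:", round(best_prediction))

# "Updating the schedule" would amount to re-running fit/predict as new data
# arrives, which the court treated as inherent to machine learning itself.
```

The point of the sketch is that each claimed step maps onto generic library calls on conventional hardware, which is why the court saw no technological improvement or inventive concept in claims of this kind.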
The Network Map patents address creating network maps for broadcasters, which determine the programming displayed on television stations in various geographic markets. Claim 1 of the ‘811 patent is representative and describes a method involving:

1. Collecting Data: Receiving broadcasting schedules for live events.
2. Analyzing Data: Generating a network map that optimizes television ratings across multiple events using machine learning techniques.
3. Updating the Map: Dynamically adjusting the network map in real time based on changes to schedules or criteria.
4. Using the Map: Determining program broadcasts based on the optimized network map.

Like the Machine Learning Training patents, the Network Map patents discuss the use of generic machine learning techniques and computing equipment.
The lower court applied the two-step framework established in Alice Corp. v. CLS Bank International, 573 U.S. 208 (2014), to assess patent eligibility. At step one, the court found that the claims were directed to abstract ideas—producing event schedules and network maps using known mathematical techniques. At step two, the court concluded that the claims lacked an inventive concept, as they merely applied generic machine learning techniques and relied on conventional computing devices. The court also denied Recentive’s request for leave to amend, finding that any amendment would be futile.
The Federal Circuit affirmed the district court’s decision, holding that Recentive’s patents were ineligible under § 101. The court’s analysis focused on both steps of the Alice framework.
At step one, the Federal Circuit examined whether the claims were directed to patent-ineligible abstract ideas. The court emphasized that the focus of the claimed advance over the prior art was the application of generic machine learning techniques to the fields of event scheduling and network map creation. It noted that Recentive had repeatedly conceded that its patents did not claim improvements to machine learning itself but merely applied existing machine learning methods to new environments.
The court observed that the Machine Learning Training patents relied on conventional machine learning techniques, such as neural networks and decision trees, and generic computing equipment. Similarly, the Network Map patents employed generic machine learning methods to optimize television ratings. The court concluded that the claims were directed to abstract ideas because they did not disclose any technological improvement or specific implementation of machine learning.
The Federal Circuit rejected Recentive’s argument that its patents were eligible because they applied machine learning to a new field of use. Citing precedent, the court reiterated that “[a]n abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment.” The court also noted that applying existing technology to a novel database or data environment does not create patent eligibility.
At step two, the Federal Circuit considered whether the claims contained an “inventive concept” sufficient to transform the abstract idea into a patent-eligible application. Recentive argued that its patents introduced an inventive concept by using machine learning to dynamically generate optimized maps and schedules based on real-time data. The court rejected this argument, finding that the claimed methods merely described the abstract idea itself. The court emphasized that iterative training and dynamic adjustments are inherent to the nature of machine learning and do not constitute an inventive concept. It noted that the patents did not disclose any specific method for improving machine learning algorithms or achieving technological advancements. Instead, the claims relied on generic machine learning techniques and computing devices, which are insufficient to satisfy step two of the Alice inquiry.
The Federal Circuit also rejected Recentive’s argument that its patents were eligible because they performed tasks previously undertaken by humans with greater speed and efficiency. The court explained that increased speed and efficiency resulting from the use of computers do not render claims patent eligible unless they involve improved computer techniques.
Takeaways
The Federal Circuit’s decision highlights the court’s thinking on patent eligibility limits as applied to machine learning-based inventions. The court has now clarified that applying generic machine learning techniques to new data environments does not create patent eligibility unless the claims disclose specific technological improvements or inventive concepts. This holding is consistent with the Federal Circuit’s broader § 101 jurisprudence, which has repeatedly emphasized that abstract ideas do not become patent eligible simply by limiting them to a particular field of use or implementing them on generic computing devices. The decision also underscores the importance of disclosing specific implementations or improvements to machine learning processes in patent applications.
At its conclusion, the decision recognizes that “[m]achine learning is a burgeoning and increasingly important field and may lead to patent-eligible improvements in technology” and provides some hope that its holding will be cabined to specific facts by stating “[t]oday, we hold only that patents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101.” It remains to be seen how lower courts and the U.S. Patent and Trademark Office will apply this new decision.
For practitioners, the Recentive decision highlights the need to carefully draft claims that go beyond the mere application of existing machine learning techniques. Patent applicants should focus on demonstrating how their inventions improve machine learning models or achieve technological advancements. Without such disclosures, machine learning-based patents may face significant hurdles under § 101. As machine learning continues to play an increasingly important role in technological innovation, this decision serves as a reminder of the challenges of securing patent protection in this evolving field.

Russia’s AI Manipulation Playbook: How Chatbots Are Being Tricked by Propaganda

In today’s digital landscape, AI chatbots have become go-to sources for information. However, a disturbing trend is emerging where bad actors—particularly Russia—are systematically manipulating these systems to spread false narratives.
The Washington Post reports that Russia has developed sophisticated methods to influence AI chatbot responses, creating a blueprint for others to follow. Russia’s efforts particularly focus on Ukraine-related topics, with debunked stories about “French mercenaries” and staged videos appearing in responses from major chatbots.
How the Manipulation Works
Rather than traditional social media campaigns, Russia now uses what experts call “information laundering.” Stories originate on state-controlled outlets like Tass (banned in the EU), then spread to seemingly independent websites in the “Pravda network” (named after the Russian word for “truth”).
What makes this strategy unique is that these sites aren’t designed for human visitors—they’re targeting web crawlers that collect content for search engines and AI language models. AI systems that search the current web are particularly vulnerable to picking up false information, especially when numerous websites repeat the same narratives.
According to McKenzie Sadeghi from NewsGuard, “Operators have an incentive to create alternative outlets that obscure the origin of these narratives. And this is exactly what the Pravda network appears to be doing.”
The Amplification Strategy
The operation has even managed to insert links to these propaganda stories into Wikipedia and Facebook groups, sources that many AI companies give special weight to as reliable information providers.
These AI-driven campaigns are significantly cheaper than traditional influence operations. Ksenia Iliuk from LetsData explains, “A lot of information is getting out there without any moderation, and I think that’s where the malign actors are putting most of their effort.”
Why This Matters
Giada Pistilli, principal ethicist at Hugging Face, notes that most chatbots have “basic safeguards against harmful content but can’t reliably spot sophisticated propaganda,” adding that “the problem gets worse with search-augmented systems that prioritize recent information.”
Louis Têtu, CEO of AI software provider Coveo, warns: “If the technologies and tools become biased—and they are already—and then malevolent forces control the bias, we’re in a much worse situation than we were with social media.”
As more people rely on chatbots for information while social media companies reduce content moderation, this problem is likely to worsen. The fundamental weakness is clear: chatbot answers depend on the data they’re fed, and when that data is systematically polluted with false information, the answers reflect those falsehoods.
While Russia currently focuses on Ukraine-related narratives, the same techniques could be used by anyone targeting specific topics—from political candidates attacking opponents to businesses undermining competitors.
The AI industry must address this vulnerability quickly, or risk becoming yet another battlefield for information warfare where truth is the first casualty.

Privacy and Security in AI Note-Taking and Recording Tools, Part I: Risks and Considerations [Podcast]

In the first part of this two-part series, Ben Perry (shareholder, Nashville) and Lauren Watson (associate, Raleigh) discuss the use of artificial intelligence (AI)-powered note-taking and recording tools in the workplace. Lauren and Ben (who co-chairs the firm’s Cybersecurity and Privacy Practice Group) explore the benefits of these tools, such as automated transcription and meeting summaries, and address the legal risks and compliance issues they raise, including wiretapping laws, consent requirements, and the potential for data breaches, emphasizing the importance of robust internal policies. The conversation also touches on the need for proper employee training and the implications of using AI tools in compliance with state-specific regulations.

Beijing Intellectual Property Court: Artificial Intelligence Models Can Be Protected with the Anti-Unfair Competition Law, Not the Copyright Law

In what is believed to be a case of first impression in China, on March 31, 2025, the Beijing IP Court, on appeal, ruled that Douyin (TikTok) was entitled to protection of its artificial intelligence (AI) transformation model under Article 2 of the Anti-Unfair Competition Law but not under the Copyright Law. Specifically, the Beijing IP Court upheld the original judgment against the defendant/appellant Yiruike Information Technology (Beijing) Co., Ltd. (亿睿科信息技术(北京)有限公司) for violating Douyin’s competitive interest in its transformation model with the B612 app.

Example transformations. Column 1: selfie; Column 2: Baidu; Column 3: Douyin; Column 4: Yiruike.

The transformation special effects model (including its architecture and parameters) was trained by Douyin using animated character data hand-drawn by artists and corresponding real-life data, with the model architecture and parameters continuously adjusted. The model powers the transformation special effects function in the Douyin application, which converts photos and videos taken by users in real time into animated character styles. The B612 application operated by Yiruike later launched an animated girl character special effects function that likewise converts images into animated character styles in real time. Douyin alleged that Yiruike’s animated girl character special effects model and Douyin’s transformation special effects model are highly similar in architecture, parameters, and other respects, constituting infringement, and requested damages and an injunction. After comparing the models, the Beijing IP Court found that the two parties’ models are highly identical in architecture, convolutional layer data, and other respects, and that Yiruike failed to submit evidence of substantial differences.
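The court’s finding that the two models were nearly identical in architecture and convolutional layer data turns on the kind of comparison sketched below: checking whether two networks share the same layer structure and near-identical parameters. This is a generic illustration assuming PyTorch-style models; it is not the methodology actually used by the parties or the court.

```python
# Generic illustration (assuming PyTorch-style models) of comparing two
# networks' architectures and parameters; not the method used in the case.
import torch

def compare_models(model_a: torch.nn.Module, model_b: torch.nn.Module,
                   tolerance: float = 1e-6) -> None:
    state_a, state_b = model_a.state_dict(), model_b.state_dict()

    # Architecture check: same layer names and same tensor shapes.
    same_structure = (
        state_a.keys() == state_b.keys()
        and all(state_a[key].shape == state_b[key].shape for key in state_a)
    )
    print("Identical layer structure:", same_structure)

    if same_structure:
        # Parameter check: fraction of values (e.g., convolutional layer
        # weights and biases) that match within a small tolerance.
        total = matching = 0
        for key in state_a:
            close = torch.isclose(state_a[key], state_b[key], atol=tolerance)
            matching += close.sum().item()
            total += close.numel()
        print(f"Matching parameters: {matching / total:.1%}")
```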
The Beijing IP Court explained that the competitive interest claimed by Douyin in this case, which includes the transformation animated character special effects model (the architecture and parameters claimed by the plaintiff), is protected by Article 2 of the Anti-Unfair Competition Law. Based on the evidence in the case, the court determined that Douyin invested substantial resources in the research and development of the transformation special effects model, and that the model’s architecture and parameters have generated innovative advantages, operating income, and market benefits for Douyin, which constitute a competitive interest protected by the Anti-Unfair Competition Law. The court further found that Yiruike directly used the architecture and parameters of Douyin’s transformation special effects model without permission. This conduct violated recognized business ethics in the field of AI models, infringed Douyin’s legitimate rights and interests, disrupted the market competition order, damaged the long-term interests of consumers, and constituted unfair competition under Article 2 of the Anti-Unfair Competition Law.
Accordingly, the Beijing IP Court upheld the lower court’s decision.
The case numbers are (2023)京73民终3802号 and (2023)京73民终3803号. A redacted copy of the decision can be found here (Chinese only) courtesy of 知产宝. 

Is Insurtech a High-Risk Application of AI?

While there are many AI regulations that may apply to a company operating in the Insurtech space, these laws are not uniform in their obligations. Many of these regulations concentrate on different regulatory constructs, and the company’s focus will drive which obligations apply to it. For example, certain jurisdictions, such as Colorado and the European Union, have enacted AI laws that specifically address “high-risk AI systems” that place heightened burdens on companies deploying AI models that would fit into this categorization.
What is a “High-Risk AI System”?
Although many deployments that are considered a “high-risk AI system” in one jurisdiction may also meet that categorization in another jurisdiction, each regulation technically defines the term quite differently.
Europe’s Artificial Intelligence Act (EU AI Act) takes a gradual, risk-based approach to compliance obligations for in-scope companies. In other words, the higher the risk associated with AI deployment, the more stringent the requirements for the company’s AI use. Under Article 6 of the EU AI Act, an AI system is considered “high risk” if it meets both conditions of subsection (1) [1] of the provision or if it falls within the list of AI systems considered high risk included as Annex III of the EU AI Act,[2] which includes AI systems that deal with biometric data, are used to evaluate the eligibility of natural persons for benefits and services, evaluate creditworthiness, or are used for risk assessment and pricing in relation to life or health insurance.
The Colorado Artificial Intelligence Act (CAIA), which takes effect on February 1, 2026, adopts a risk-based approach to AI regulation. The CAIA focuses on the deployment of “high-risk” AI systems that could potentially create “algorithmic discrimination.” Under the CAIA, a “high-risk” AI system is defined as any system that, when deployed, makes, or is a substantial factor in making, a “consequential decision,” namely, a decision that has a material legal or similarly significant effect on, among other things, the provision or cost of insurance.
Notably, even proposed AI bills that have not been enacted have treated insurance-related activity as falling within their intended regulatory scope. For instance, on March 24, 2025, Virginia’s Governor Glenn Youngkin vetoed the state’s proposed High-Risk Artificial Intelligence Developer and Deployer Act (also known as the Virginia AI Bill), which would have applied to developers and deployers of “high-risk” AI systems doing business in Virginia. Compared to the CAIA, the Virginia AI Bill defined “high-risk AI” more narrowly, focusing only on systems that operate without meaningful human oversight and serve as the principal basis for consequential decisions. Even under that failed bill, however, an AI system would have been considered “high-risk” if it was intended to autonomously make, or be a substantial factor in making, a “consequential decision,” meaning a decision that has “a material legal, or similarly significant, effect” on the provision or denial to any consumer of, among other things, insurance.
Is Insurtech Considered High Risk?
Both the CAIA and the failed Virginia AI Bill expressly treat an AI system making a consequential decision regarding insurance as “high-risk,” which certainly suggests a trend toward regulating AI use in the Insurtech space as high-risk. However, the inclusion of insurance on the “consequential decision” list of these laws does not mean that all Insurtech leveraging AI will necessarily be considered high-risk under these or future laws. For instance, under the CAIA, an AI system is only high-risk if, when deployed, it “makes or is a substantial factor in making” a consequential decision. Under the failed Virginia AI Bill, the AI system had to be “specifically intended to autonomously make, or be a substantial factor in making, a consequential decision.”
Thus, the scope of regulated AI use, which varies from one applicable law to another, must be considered together with the business’s proposed application to determine the appropriate AI governance in a given case. While various insurance use cases that leverage AI, such as underwriting, fraud detection, and pricing, could result in consequential decisions that affect an insured, other internal uses of AI may not be considered high-risk under a given threshold. For example, leveraging AI to shape a marketing strategy for insurance products, or to make new-client onboarding or claims processing more efficient, likely does not trigger the consequential-decision threshold required to be considered high-risk under the CAIA or the failed Virginia AI Bill. Further, even if an AI system is involved in a consequential decision, that alone may not render it high-risk; the CAIA, for instance, requires that the AI system make the consequential decision or be a substantial factor in making it.
Although the EU AI Act does not expressly label Insurtech as high-risk, a similar analysis applies because Annex III of the EU AI Act lists AI uses that may be implicated by an AI system deployed in the Insurtech space. For example, an AI system that assesses creditworthiness as part of developing a pricing model in the EU likely triggers the law’s high-risk threshold, and AI modeling used to assess whether an applicant is eligible for coverage may do the same. That said, under Article 6(3) of the EU AI Act, an AI system that falls within a category listed in Annex III is not considered high-risk if it does not pose a significant risk of harm to individuals’ health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making; the provider must document that assessment before placing the system on the market.
What To Do If You Are Developing or Deploying a “High-Risk AI System”?
Under the CAIA, a high-risk AI system triggers a range of obligations, which differ for developers[3] and deployers[4] of the AI system. Developers must publish a disclosure on their website identifying the high-risk AI systems they have developed or substantially modified and made available, and explaining how they manage known or reasonably foreseeable risks of algorithmic discrimination. Developers must also notify the Colorado AG and all known deployers of the AI system within 90 days of discovering that the AI system has caused or is reasonably likely to cause algorithmic discrimination, and must make significant additional documentation about the high-risk AI system available to deployers.
Under the CAIA, deployers have different obligations when leveraging a high-risk AI system. First, they must notify consumers when the high-risk AI system will make, or will be a substantial factor in making, a consequential decision about the consumer. This notice must include (i) a description of the high-risk AI system and its purpose, (ii) the nature of the consequential decision, (iii) contact information for the deployer, (iv) instructions on how to access the required website disclosures, and (v) information regarding the consumer’s right to opt out of the processing of the consumer’s personal data for profiling. Additionally, when use of the high-risk AI system results in a decision adverse to the consumer, the deployer must disclose to the consumer (i) the reason for the consequential decision, (ii) the degree to which the AI system was involved in the adverse decision, and (iii) the type of data used to reach that decision and where that data was obtained, and must give the consumer the opportunity to correct any inaccurate personal data that was used as well as to appeal the adverse decision through human review. Deployers must also make additional disclosures regarding information and risks associated with the AI system. Given that the failed Virginia AI Bill proposed similar obligations, it is reasonable to treat the CAIA as a roadmap for high-risk AI governance considerations in the United States.
Under Article 8 of the EU AI Act, high-risk AI systems must meet several requirements that tend to be more systemic. These include the implementation, documentation, and maintenance of a risk management system that identifies and analyzes reasonably foreseeable risks the system may pose to health, safety, or fundamental rights, as well as the adoption of appropriate and targeted risk management measures designed to address these identified risks. High-risk AI governance under this law must also include:

Validation and testing of the data sets used to develop the AI models in a high-risk AI system, to ensure they are sufficiently representative, free of errors, and complete in view of the system’s intended purpose;
Technical documentation demonstrating that the high-risk AI system complies with the requirements of the EU AI Act, drawn up before the system is placed on the market and kept up to date;
Automatic recording of events (logs) over the lifetime of the system;
Design and development that provides sufficient transparency, so that deployers can properly interpret the system’s output, together with instructions for use describing the system’s intended purpose and the level of accuracy against which it has been tested;
Design and development that allows the high-risk AI system to be effectively overseen by natural persons while in use; and
Appropriate levels of accuracy, robustness, and cybersecurity, maintained consistently throughout the system’s lifecycle.

When deploying high-risk AI systems, in-scope companies must allocate the resources needed not only to assess whether they fall within this categorization, but also to ensure that the various requirements are adequately considered and implemented before the AI system is deployed.
The Insurtech space is growing in parallel with the expanding patchwork of U.S. AI regulations. Prudent growth in the industry requires awareness of the associated legal dynamics, including emerging regulatory concepts nationwide.

[1] Subsection (1) states that an AI system is high-risk if it is “intended to be used as a safety component of a product (or is itself a product) covered by specific EU harmonization legislation listed in Annex I of the AI Act, and that same harmonization legislation requires the product incorporating the AI system as a safety component, or the AI system itself as a stand-alone product, to undergo a third-party conformity assessment before being placed on the EU market.”
[2] Annex III of the EU AI Act can be found at https://artificialintelligenceact.eu/annex/3/.
[3] Under the CAIA, a “Developer” is a person doing business in Colorado that develops or intentionally and substantially modifies an AI system.
[4] Under the CAIA, a “Deployer” is a person doing business in Colorado that deploys a High-Risk AI System.

Bridging the Gap: Applying Anti-Money Laundering Techniques and AI to Combat Tariff Evasion

Introduction
In today’s global economy, characterized by complex supply chains and escalating trade tensions, tariff evasion has emerged as a significant threat to economic stability, fair competition, and government revenue. Traditional detection methods increasingly fall short against sophisticated evasion schemes that adapt quickly to regulatory changes. This article presents a compelling case for integrating advanced anti-money laundering (AML) methodologies with cutting-edge artificial intelligence to revolutionize tariff evasion detection. We also examine how established legal frameworks like the False Claims Act and transfer pricing principles from tax law can be weaponized against tariff fraud, and explore the far-reaching implications for commercial enterprises’ compliance programs — including how these tools can level the playing field for businesses facing unfair competition.
The Convergence of TBML and Tariff Evasion: An Untapped Opportunity
Trade-based money laundering (TBML) and tariff evasion operate through remarkably similar mechanisms, creating a natural synergy for detection strategies. Both practices manipulate legitimate trade channels for illicit purposes:

Mis-invoicing: Deliberate falsification of price, quantity, or product descriptions
False Classification: Strategic misclassification of goods under favorable Harmonized System (HS) codes
Value Manipulation: Artificial inflation or deflation of goods’ values
Phantom Shipments: Creation of entirely fictitious trade transactions

This striking overlap presents customs authorities with a valuable opportunity: leverage the sophisticated detection infrastructure already developed for AML compliance to identify and prevent tariff evasion.
TBML Detection Techniques: A Ready Arsenal for Customs Authorities
The AML compliance ecosystem has developed sophisticated techniques that can be immediately deployed to combat tariff evasion:

Advanced Price Anomaly Detection: Statistical modeling to identify transactions that deviate significantly from market norms, historical patterns, and comparable trade flows (a simplified illustration follows this list)
Comprehensive Quantity Analysis: Algorithmic comparison of declared quantities against shipping documentation, customs records, and production capacity data
Systematic HS Code Scrutiny: Pattern recognition to flag suspicious classification practices, such as strategic code-switching or exploitation of classification ambiguities
Geographic Risk Mapping: Targeted scrutiny of transactions involving high-risk jurisdictions known for corruption, weak regulatory oversight, or prevalent smuggling
Related Party Transaction Surveillance: Enhanced monitoring of intra-company trades where pricing manipulation is more feasible
Integrated Data Analytics: Cross-referencing multiple data sources to identify inconsistencies that may indicate fraudulent intent
Network Analysis: Sophisticated mapping of business relationships to uncover hidden connections and coordinated evasion schemes
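
To make the first of these techniques concrete, the following is a minimal sketch, in Python, of price anomaly screening over import declarations. The input file and the column names (hs_code, declared_unit_price) are illustrative assumptions rather than features of any actual customs system, and the robust z-score threshold is arbitrary.

```python
# A minimal sketch of robust price anomaly screening. The file name and
# column names are illustrative assumptions, not a real customs schema.
import pandas as pd

def flag_price_anomalies(df: pd.DataFrame, z_threshold: float = 3.0) -> pd.DataFrame:
    """Flag declarations whose unit price deviates sharply from the median
    price for the same HS code, using a median/MAD-based robust z-score."""
    median = df.groupby("hs_code")["declared_unit_price"].transform("median")
    mad = (df["declared_unit_price"] - median).abs().groupby(df["hs_code"]).transform("median")
    # 1.4826 rescales the MAD so it approximates a standard deviation;
    # the small constant avoids division by zero for uniform price groups.
    robust_z = (df["declared_unit_price"] - median) / (1.4826 * mad + 1e-9)
    return df.assign(robust_z=robust_z).loc[robust_z.abs() > z_threshold]

if __name__ == "__main__":
    declarations = pd.read_csv("declarations.csv")  # hypothetical input file
    print(flag_price_anomalies(declarations))
```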

Artificial Intelligence: The Game-Changer in Tariff Evasion Detection
AI dramatically enhances detection capabilities through its ability to process vast datasets, identify subtle patterns, and continuously improve accuracy:
Deterministic AI and Machine Learning

Advanced Anomaly Detection: Supervised and unsupervised learning models that identify subtle deviations from established trade patterns by simultaneously analyzing multiple variables (see the sketch after this list)
Multi-factor Risk Classification: Algorithms that dynamically assess transaction risk based on importer history, commodity characteristics, trade routes, and pricing patterns
Predictive Regression Modeling: Statistical techniques that establish expected transaction values and flag significant deviations for investigation
Adaptive Learning Systems: Models that continuously refine detection parameters based on investigation outcomes, ensuring responsiveness to evolving evasion tactics
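
As an illustration of the unsupervised approach described above, the sketch below scores trade declarations with scikit-learn’s IsolationForest. The feature names and the contamination rate are hypothetical placeholders; an agency or company would derive them from its own data.

```python
# A minimal sketch of unsupervised anomaly scoring over trade declarations.
# Feature names and the contamination rate are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import IsolationForest

FEATURES = ["declared_unit_price", "quantity", "declared_weight_kg", "duty_rate"]

def score_declarations(df: pd.DataFrame) -> pd.DataFrame:
    """Label each declaration as an inlier (1) or outlier (-1), considering
    several variables jointly rather than one at a time."""
    model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
    labels = model.fit_predict(df[FEATURES])
    scores = model.decision_function(df[FEATURES])  # lower = more anomalous
    return df.assign(outlier=labels, anomaly_score=scores).sort_values("anomaly_score")
```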

Large Language Models (LLMs)

Comprehensive Document Analysis: Automated extraction and verification of critical information across diverse trade documentation, identifying inconsistencies that human reviewers might miss (see the illustrative sketch after this list)
Natural Language Risk Assessment: Analysis of unstructured data sources including news reports, regulatory filings, and industry communications to develop comprehensive risk profiles
Behavioral Pattern Recognition: Identification of suspicious trade patterns that may indicate coordinated evasion strategies
Contextual Trade Analysis: Advanced semantic understanding that can detect mismatches between declared product uses and actual characteristics 
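
The document-analysis idea can be sketched, assuming access to the OpenAI Python client, as a simple consistency check between an invoice and a packing list. The prompt wording, model name, and document text are illustrative only, not a description of any deployed system.

```python
# A minimal sketch of an LLM-assisted consistency check between two trade
# documents. Model name, prompt, and inputs are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def check_consistency(invoice_text: str, packing_list_text: str) -> str:
    """Ask the model to list inconsistencies between an invoice and a packing list."""
    prompt = (
        "Compare the following commercial invoice and packing list. "
        "List any inconsistencies in product description, quantity, declared "
        "value, or HS code, and rate each one as low, medium, or high risk.\n\n"
        f"INVOICE:\n{invoice_text}\n\nPACKING LIST:\n{packing_list_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```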

Legal Frameworks: Powerful Tools for Enforcement and Competitive Equity
Effective enforcement requires robust legal mechanisms to prosecute and penalize violations:
The False Claims Act: A Powerful but Underutilized Weapon
The False Claims Act (FCA) represents a particularly potent tool in the anti-evasion arsenal, with key advantages that make it especially effective:

Broad Scope of Liability: Importantly, the FCA does not require proof of specific intent to defraud. Its “knowing” standard reaches deliberate ignorance and reckless disregard of the truth as well as actual knowledge, covering a spectrum of non-compliant behavior well short of deliberate fraud and significantly expanding the universe of actionable violations
Whistleblower Incentives: Qui tam provisions that allow individuals with insider knowledge to report violations and share in financial recoveries, creating powerful incentives for disclosure
Treble Damages: Provisions for triple damages that significantly raise the stakes for would-be evaders
Reduced Burden of Proof: Civil rather than criminal standards of evidence, making successful prosecution more achievable
Extended Statute of Limitations: Longer timeframes for investigation and prosecution, allowing authorities to address complex schemes

A Competitive Equity Tool for Businesses
The FCA serves not only as a government enforcement mechanism but as a powerful resource for companies facing unfair competition:

Leveling the Playing Field: Companies that suspect competitors are gaining unfair advantages through tariff evasion can leverage the FCA to prompt investigation and enforcement
Industry Self-Regulation: The qui tam provisions enable industry insiders to report violations, effectively allowing sectors to police themselves
Competitive Intelligence Application: Information gathered through compliance monitoring can help identify and address unfair competitive practices
Market Access Protection: By ensuring all market participants play by the same rules, legitimate businesses are protected from being undercut by non-compliant competitors

Transfer Pricing Principles: Adapting Section 482 to Tariff Contexts
Transfer pricing principles offer a sophisticated framework for addressing value manipulation:

Arm’s Length Standard: Application of market-based valuation standards to related-party transactions
Comparable Transaction Analysis: Methodologies for establishing appropriate pricing benchmarks (a simplified benchmark calculation follows this list)
Documentation Requirements: Structured approaches to establishing and documenting fair market value
Burden-Shifting Frameworks: Legal mechanisms that require importers to justify significant pricing discrepancies
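
By way of illustration only, and not as a statement of transfer pricing law, a comparable-transaction benchmark can be reduced to a simple interquartile-range test: a declared related-party price falling outside the range of hypothetical comparable uncontrolled prices warrants further scrutiny. All prices below are invented for the example.

```python
# An illustrative sketch, not a legal methodology: test whether a declared
# related-party price falls outside the interquartile range of hypothetical
# comparable uncontrolled prices.
from statistics import quantiles

def arms_length_range(comparable_prices: list[float]) -> tuple[float, float]:
    """Return the first and third quartiles of the comparable prices."""
    q1, _, q3 = quantiles(comparable_prices, n=4)
    return q1, q3

def outside_range(declared_price: float, comparable_prices: list[float]) -> bool:
    low, high = arms_length_range(comparable_prices)
    return not (low <= declared_price <= high)

# Example: a declared unit price of 4.10 against comparables clustered near 9-11
print(outside_range(4.10, [9.5, 10.2, 10.8, 9.9, 11.1, 10.4]))  # True -> warrants review
```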

Impact on Commercial Enterprise Compliance Programs
The government’s adoption of these advanced detection techniques has profound implications for corporate compliance strategies:
Transformative Effects on Corporate Compliance

Elevated Risk Profiles: Companies face significantly increased detection risk as governments deploy AI-enhanced monitoring, necessitating more robust internal controls
Expanded Documentation Requirements: Enterprises must maintain comprehensive transaction records that can withstand sophisticated algorithmic scrutiny
Proactive Compliance Monitoring: Organizations need to implement their own advanced analytics to identify and address potential issues before they trigger regulatory attention
Cross-functional Compliance Integration: Tariff compliance can no longer operate in isolation but must coordinate with AML, anti-corruption, and tax compliance functions

Strategic Compliance Responses

AI-Enhanced Self-Assessment: Forward-thinking enterprises are deploying their own AI systems to continuously monitor trade activities against regulatory benchmarks
Predictive Risk Modeling: Companies are developing sophisticated models to identify high-risk transactions before filing customs declarations
Transaction Testing Programs: Implementation of statistical sampling and testing protocols to verify compliance across high volumes of transactions
Enhanced Training Programs: Development of specialized training for procurement, logistics, and finance personnel on evasion risk indicators
Third-Party Due Diligence: More rigorous vetting of suppliers, customs brokers, and other trade partners 

Competitive Advantages of Robust Compliance

Reduced Penalty Exposure: Companies with sophisticated compliance programs face lower penalties when violations occur
Expedited Customs Clearance: Trusted trader programs offer streamlined processing for companies with demonstrated compliance excellence
Supply Chain Stability: Reduced risk of shipment delays and seizures due to compliance concerns
Reputational Protection: Avoidance of negative publicity associated with customs violations
Strategic Data Utilization: Compliance data becomes a valuable asset for business intelligence and operational optimization 

Competitive Intelligence and Market Protection
For businesses concerned about competitors gaining unfair advantages through tariff evasion, these tools offer strategic options:

Market Analysis: Advanced analytics can help identify pricing anomalies that may indicate competitors are benefiting from tariff evasion
Evidence Building: Systematic collection and analysis of market data can help build compelling cases for authorities to investigate
Whistleblower Protection: Companies can establish secure channels for employees or industry insiders to report suspected violations
Regulatory Engagement: Proactive sharing of competitive intelligence with customs authorities can trigger enforcement actions
Industry Collaboration: Formation of industry working groups to establish compliance benchmarks and identify suspicious practices

Challenges and Considerations
Implementing these advanced approaches presents several challenges:

Data Quality and Accessibility: Effective analysis requires comprehensive, accurate data, often from disparate sources
Supply Chain Complexity: Modern trade flows involve numerous intermediaries, complicating transaction monitoring
Cross-Border Cooperation: Effective enforcement requires unprecedented levels of international information sharing
Adversarial Adaptation: Evasion techniques evolve rapidly in response to detection methods
Algorithmic Fairness: AI systems must be designed and monitored to avoid discriminatory impacts on specific countries or industries
Cost-Benefit Balance: Compliance costs must be proportionate to risk and competitive realities
False Positive Management: Systems must be calibrated to distinguish between intentional evasion, negligence, and legitimate mistakes

Conclusion
The integration of AML techniques, artificial intelligence, and established legal frameworks represents a paradigm shift in the fight against tariff evasion. By leveraging these complementary approaches, customs authorities can dramatically enhance detection capabilities while creating powerful deterrents through robust enforcement.
For commercial enterprises, this evolving landscape creates both obligations and opportunities. The breadth of FCA liability, which reaches reckless disregard and deliberate ignorance as well as intentional fraud, demands heightened vigilance in compliance programs. Yet these same tools also offer legitimate businesses powerful mechanisms to combat unfair competition from less scrupulous rivals. Companies facing market distortions from competitors’ tariff evasion now have sophisticated means to identify suspicious patterns and trigger enforcement actions.
As global trade continues to evolve, this multi-faceted approach will be essential to preserving the integrity of international trade systems and ensuring a level playing field for legitimate businesses. Organizations that proactively embrace these changes will not only mitigate regulatory risk but may discover competitive advantages through superior compliance capabilities and the strategic use of enforcement mechanisms to ensure market fairness.

AI Powered Bot Targeted 400,000 Websites

SentinelOne researchers have discovered AkiraBot, a framework that uses generative AI to target small- to medium-sized company websites, drafting outreach messages for website chats, comments, and contact forms. SentinelOne estimates that over 400,000 websites have been targeted and that the bot has successfully spammed “at least 80,000 websites since September 2024.”
The bot generated custom outreach messages using OpenAI’s large language models (LLMs), tailoring each message to the purpose of the target website, and bypassed spam filters and CAPTCHA barriers to reach those sites. OpenAI has since disabled the API key and other assets used in the campaign.
The SentinelOne researchers posited that “AkiraBot’s use of LLM-generated spam message content demonstrates the emerging challenges that AI poses to defending websites against spam attacks.”
As threat actors continue to evade detection, their generative AI usage will pose an ever-increasing challenge for protecting websites and filtering spam from email accounts.