CMS Issues CY 2026 Medicare Advantage and Part D Final Rule
On April 4, 2025, the Centers for Medicare & Medicaid Services (“CMS”) released the contract year (“CY”) 2026 final rule for the Medicare Advantage (“MA”) program, Medicare Prescription Drug Benefit Program (“Part D”), Medicare Cost Plan Program, and Programs of All-Inclusive Care for the Elderly (the “Final Rule”). While CMS finalized several proposals from its Proposed Rule, it declined to finalize many of its key proposals, including those addressing anti-obesity medication (“AOM”) coverage, enhanced guardrails for artificial intelligence (“AI”), and various health equity-related initiatives in MA and Part D.
Summarized below are some of the key provisions of the Final Rule.
MA and Part D Proposals Not Finalized
Perhaps most notable from the CY 2026 Final Rule are those proposals that CMS did not finalize. These include the following:
Part D Coverage of Anti-Obesity Medications (AOMs) and Application to the Medicaid Program — CMS declined to finalize a proposal to “reinterpret” the statutory definition of a covered Part D drug at section 1860D–2(e)(2) of the Social Security Act (SSA), which excludes coverage for certain drugs and uses, including those that may be excluded by Medicaid under SSA § 1927(d)(2) as “agents when used for . . . weight loss.” The proposal would have applied to both Medicare and Medicaid to allow coverage for AOMs when used for the treatment of obesity, with a hefty estimated price tag of $25 billion in Medicare spending and $15 billion in Medicaid spending over the course of a decade. Because the proposal was not finalized, the current policy remains in place—the Medicare and Medicaid programs will cover AOMs only when used to treat another medically accepted condition (e.g., type 2 diabetes or cardiovascular risk).
Enhancing Health Equity Analyses: Annual Health Equity Analysis of Utilization Management Policies and Procedures — CMS did not finalize its proposal to require Medicare Advantage organizations to conduct annual health equity analyses of utilization management policies. CMS stated that this proposal remains under review for potential future rulemaking in line with Executive Order 14192’s directive to ensure consistency and avoid unnecessary burden.
Guardrails for Artificial Intelligence (AI) / Ensuring Equitable Access to Medicare Advantage Services — CMS opted not to finalize proposals related to the use of AI and algorithmic decision-making in MA, including proposals requiring plans to utilize AI in a manner that preserves equitable access, to adhere to existing Medicare regulations prohibiting discrimination, and requiring disclosure of use of AI tools. In declining to finalize these proposals, CMS acknowledged strong stakeholder interest and stated that the agency would “consider the extent to which it may be appropriate to engage in future rulemaking in this area.”
Behavioral Health Parity — Although CMS acknowledged significant stakeholder concern regarding access to behavioral health care in MA plans, it did not finalize proposals to establish stricter parity protections or expand network adequacy standards in the Final Rule. The proposed behavioral health parity provisions would have applied new requirements to ensure equitable access to mental health and substance use disorder services in Medicare Advantage plans. CMS acknowledged ongoing concerns, especially in dual-eligible special needs plans, but stated that the proposed changes are still under review. Future rulemaking may revisit these policies in coordination with broader parity and access initiatives.
Prior Authorization — While CMS finalized prior authorization requirements applicable to inpatient admissions (discussed below), CMS did not finalize proposals to establish guardrails on the use of AI in prior authorization processes.
Agent and Broker Oversight — Despite recent scrutiny of agent and broker practices, CMS did not finalize key proposed marketing reforms. Among other things, these included broadening the definition of “marketing” to enhance agency oversight of materials submitted to CMS as well as promoting informed choice by requiring agents and brokers to provide more comprehensive information to potential enrollees, such as low-income assistance options and implications of switching to traditional Medicare.
Promoting Transparency for Pharmacies — CMS did not finalize or address a proposal to require Part D sponsors (or their FDRs) to allow pharmacies the right to terminate their network contracts without cause following the same notice period that Part D sponsors have for terminating contracts without cause. Had this proposal been finalized, it would have likely faced legal challenges for violating the Part D statute’s noninterference requirement.
Formulary Placement of Generics and Biosimilars — CMS did not finalize a proposal to include an additional step in the formulary review process to check that Part D sponsors provide broad access to generics, biosimilars, and other lower-cost drugs. However, CMS noted that it “may consider codifying additional requirements regarding formularies in future rulemaking if necessary.”
Administration of Supplemental Benefits through Debit Cards — CMS did not finalize its proposal to impose new requirements on the use of debit cards to administer plan-covered benefits, including new guardrails to ensure that beneficiaries are fully aware of covered supplemental benefits and how to access those benefits.
Community-Based Services and In-Home Service Contractors — CMS did not finalize or directly address proposals related to improving transparency and beneficiary protections through expanded provider directory requirements. These proposals included codifying definitions for community-based organizations and in-home supplemental benefit providers, and requiring their inclusion in provider directories.
Part D Medication Therapy Management (“MTM”) Program — CMS deferred to subsequent rulemaking a proposal to expand the regulatory list of core chronic diseases used to identify Part D enrollees with multiple chronic diseases for purposes of MTM eligibility, which would have added other causes of dementia in addition to Alzheimer’s disease.
Moreover, CMS indicated that various currently effective regulations and policies are under review by the Trump Administration “to ensure consistency with the Executive Order 14192, Unleashing Prosperity Through Deregulation.” According to CMS, policies currently under review include the following:
Health Equity Index Reward for the Parts C and D Star Ratings
Annual health equity analysis of utilization management policies and procedures
Requirements for MA plans to provide culturally and linguistically appropriate services
Quality improvement and health risk assessments (“HRAs”) focused on equity and social determinants of health (“SDOH”)
Finalized MA and Part D Proposals
Covered Insulin Products and Vaccines
CMS finalized a proposal to codify a relatively modest expansion of the definition of a “covered insulin product” to include Part D coverage for drug products that are a combination of more than one type of insulin or both insulin and non-insulin drugs, which is consistent with existing CMS guidance. CMS also finalized proposals to eliminate cost sharing for both covered insulin products and for adult vaccines recommended by the Advisory Committee on Immunization Practices (ACIP) covered under Part D.
Medicare Prescription Payment Plan
CMS finalized regulatory requirements for the Medicare Prescription Payment Plan for 2026 and subsequent years, codifying provisions previously established in two-part guidance for 2025. The program, created under section 11202 of the Inflation Reduction Act, requires all Medicare Part D and MA-PD plan sponsors to offer enrollees the option to pay capped monthly installments on their out-of-pocket Part D drug costs, rather than paying the full amount at the point of sale. The goal is to ease financial pressure—especially for beneficiaries who incur high drug costs early in the year.
Most provisions from prior guidance were finalized without modification, including operational processes, election procedures, and outreach requirements. CMS also finalized several new provisions:
Automatic Renewal: Beginning in 2026, enrollees who participate in the program will be automatically re-enrolled the following year unless they opt out. A separate renewal notice must be sent after the end of the annual election period and include the plan’s upcoming terms and conditions.
Voluntary Termination: CMS adjusted its original proposal and will now require plan sponsors to process opt-out requests within 3 calendar days, rather than the initially proposed 24-hour timeframe, to reduce administrative burden.
Standardized Communications: New requirements were finalized for model and standardized materials, including the “likely to benefit” notice, voluntary and involuntary termination notices, and renewal notices. Part D sponsor websites must also display information about the program.
Waiver for LI NET: CMS confirmed that the Medicare Prescription Payment Plan requirements will not apply to the Limited Income Newly Eligible Transition (LI NET) program, consistent with prior guidance.
Election Processing and Real-Time Requirements: While CMS finalized the 24-hour processing requirement for election requests received during the plan year, it did not finalize a proposed real-time processing requirement for phone or web-based requests, citing stakeholder concerns about operational feasibility. CMS may revisit this in future rulemaking.
CMS stated that its approach was intended to limit disruption, reduce burden on plans, and give stakeholders time to gain experience with the program. The agency will continue to evaluate program implementation and consider refinements in future years.
Timely Submission Requirements for Prescription Drug Event (PDE) Records
CMS has finalized new regulatory requirements under § 423.325 to codify timely submission of Prescription Drug Event (PDE) records by Medicare Part D sponsors. These records are essential for payment accuracy and program integrity, especially for programs like the Coverage Gap Discount Program, the Manufacturer Discount Program, and the Medicare Drug Price Negotiation Program.
These requirements were previously addressed only in subregulatory guidance; the Final Rule now codifies specific submission timelines:
General PDEs: Within 30 days of claim receipt.
Adjustments/deletions: Within 90 days of issue discovery.
Rejected PDEs: Resubmitted within 90 days of rejection notice.
Selected drugs (Negotiation Program): Initial PDEs due within 7 days to support timely Manufacturer Fair Price refunds.
Despite concerns about the 7-day timeline, CMS finalized it without changes, citing that most PDEs are already submitted within this window. The 90-day deadlines for adjustments and rejections remain unchanged. These timelines are now enforceable, and noncompliance may trigger CMS actions.
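As a purely illustrative sketch (the record types and logic below are assumptions for illustration, not CMS’s actual submission system), the finalized windows can be expressed as simple deadline checks:

```python
from datetime import date, timedelta

# Submission windows mirroring the finalized timelines summarized above.
WINDOWS = {
    "general": timedelta(days=30),                 # from claim receipt
    "adjustment_or_deletion": timedelta(days=90),  # from issue discovery
    "rejected_resubmission": timedelta(days=90),   # from rejection notice
    "selected_drug_initial": timedelta(days=7),    # from claim receipt
}

def is_timely(record_type: str, trigger_date: date, submitted: date) -> bool:
    """Return True if the PDE was submitted within the applicable window."""
    return submitted <= trigger_date + WINDOWS[record_type]

print(is_timely("general", date(2026, 1, 2), date(2026, 1, 30)))                # True
print(is_timely("selected_drug_initial", date(2026, 1, 2), date(2026, 1, 12)))  # False
```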
Medicare Transaction Facilitator Requirements for Network Pharmacy Agreements
CMS finalized the proposal requiring that Part D sponsors’ network participation agreements with contracting pharmacies, including any FDR contracts, require network pharmacies to be enrolled in the Medicare Drug Price Negotiation Program’s (‘‘Negotiation Program’’) Medicare Transaction Facilitator Data Module (‘‘MTF DM’’) and that such pharmacies certify the accuracy and completeness of their enrollment information in the MTF DM. According to CMS, the MTF DM will contain several key functionalities that are necessary and appropriate for administration of the Negotiation Program and the Part D program. Through each of these functionalities, the dispensing pharmacy’s enrollment in the MTF DM would help ensure continued access to selected drugs that are covered under Part D for beneficiaries and pharmacies and help maintain the accuracy of Part D claims information and payment. These functionalities are:
The MTF DM will provide pharmacies enrolled in the MTF DM with remittances or electronic remittance advices (“ERAs”) to reconcile Maximum Fair Price (“MFP”) refund payments when a Primary Manufacturer of a drug selected by CMS for price negotiation chooses to pass payment to the pharmacy through the MTF Payment Module (“MTF PM”) rather than prospectively ensuring that the price paid by the pharmacy entity when acquiring the drug is no greater than the MFP.
There will be streamlined access for pharmacies that are enrolled in the MTF DM to submit complaints and disputes within the MTF DM to help identify issues with timely MFP refund payment, supporting pharmacies to continue efficient operations and prevent undue financial hardship, while maintaining accuracy of Part D claims information and payment.
The MTF DM will serve as a central repository for information about pharmacies enrolled in the MTF DM that self-report that they anticipate material cashflow concerns due to the reliance on retrospective MFP refunds within the 14-day prompt MFP payment window.
CMS intends that pharmacies will be able to view the status of MFP refunds from Primary Manufacturers through the MTF DM.
The MTF DM will collect and share financial information belonging to pharmacies enrolled in the MTF DM with Primary Manufacturers that pay MFP refunds to pharmacies outside the MTF PM.
CMS published new guidance on its webpage on Tuesday, April 8, 2025, to provide pharmacies and other dispensing entities with resources for engaging with the new MTF system. Enrollment in the MTF is expected to begin in June 2025.
Clarifying MA Organization Determinations to Enhance Enrollee Protections in Inpatient Settings
In the Final Rule, CMS clarifies and expands the definition of “organization determinations” under § 422.566 to explicitly include decisions made while a beneficiary is receiving care, particularly inpatient services. The key reforms include the following:
Whether a decision is made before, during, or after a service is provided, it must be treated as a formal organization determination. This change is intended to prevent MA plans from circumventing appeal rights by reclassifying care decisions as claims reviews.
MA organizations may not retroactively deny or downgrade previously authorized inpatient admissions, even based on clinical data collected after admission. The only exceptions are fraud or qualifying good cause.
The Final Rule also clarifies that a beneficiary’s financial liability does not attach until an MA plan has made a formal claim determination, aligning liability with appeal rights.
These finalized requirements are intended to eliminate surprise denials, ensure transparency for providers and beneficiaries, and create a consistent standard across MA plans for inpatient decision-making. The Final Rule also introduces certain limited protections for beneficiaries and providers navigating MA plans’ prior authorization (“PA”) processes, including several provisions that restrict a plan’s ability to retroactively deny care after initial approval. Beginning in 2026:
Approved services, including inpatient admissions, cannot be retroactively denied unless there is evidence of fraud or a valid reason under CMS’s “good cause” standard as defined in 42 CFR § 405.986.
All coverage decisions made during or after an inpatient stay must be treated as formal determinations, granting enrollees full appeal rights.
Plans must notify both providers and enrollees of all coverage decisions, and beneficiaries cannot be held financially responsible until a claims payment determination is made.
Non-Allowable Special Supplemental Benefits for the Chronically Ill (SSBCI)
In the Final Rule, CMS adopts new regulatory restrictions for SSBCI. With some modifications from the Proposed Rule, CMS finalized a non-exhaustive list of non-allowable SSBCI benefits, codified at 42 C.F.R. § 422.102(f)(1)(iii).
Under existing regulations, SSBCI are not required to be primarily health related but must have a reasonable expectation of improving or maintaining the health or overall function of the enrollee, as established by the MA plan based on a bibliography of relevant acceptable evidence. In the Final Rule, CMS adopts a non-exhaustive list of non-primarily health related items or services that do not meet the standard of having a reasonable expectation of improving or maintaining the health or overall function of the enrollee. As finalized at 42 C.F.R. § 422.102(f)(1)(iii), examples of items or services that may not be offered as SSBCI include all of the following:
Procedures that are solely cosmetic in nature and do not extend upon Traditional Medicare coverage (for example, cosmetic surgery, such as facelifts, or cosmetic treatments for facial lines, atrophy of collagen and fat, and bone loss due to aging)
Hospital indemnity insurance
Funeral planning and expenses
Life insurance
Alcohol
Tobacco
Cannabis products
Broad membership programs inclusive of multiple unrelated services and discounts
Non-healthy food
Modifications from the Proposed Rule include the addition of “non-healthy food” to the non-allowable SSBCI list. According to CMS, the addition of non-healthy food addresses comments requesting clarification on how plans may provide “Food is Medicine” (an initiative of HHS’ Office of Disease Prevention and Health Promotion) within the parameters of supplemental benefit requirements. In addition, CMS did not finalize proposals to expressly incorporate as non-allowable SSBCI “cash and monetary rebates” (which are prohibited by SSA § 1851(h)(4)(A)) or “gambling items (e.g., online casino games, lottery tickets), firearms and ammunition.”
Improving Experiences for Dually Eligible Enrollees
CMS finalized its proposed requirements for certain dual-eligible Special Needs Plans (“D-SNPs”) to further streamline and integrate care delivery for dual-eligible beneficiaries. Specifically, finalized proposals include:
Requiring integrated member ID cards for both Medicare and Medicaid plans. The proposal is limited to Applicable Integrated Plans (“AIPs”);
Requiring AIPs to conduct a single, integrated Health Risk Assessment (“HRA”) for both Medicare and Medicaid, replacing the separate HRAs currently utilized for each. However, CMS delayed the implementation date of this provision to January 1, 2027.
Codifying timeframes for all SNPs to conduct HRAs and develop Individualized Care Plans (“ICPs”), emphasizing active participation by enrollees or their representatives in the ICP development process. Specifically, the Final Rule requires that SNPs conduct the initial HRA within 90 days of the effective date of enrollment.
Establishing new requirements for all SNPs related to outreach to enrollees regarding completion of the HRA. Specifically, SNPs must make at least three non-automated phone call attempts, on different days and at different times, unless the enrollee agrees or declines to participate in the HRA before three attempts are made. If the enrollee has not responded, the SNP must send a follow-up letter. The SNP must document attempts to contact the enrollee and, if applicable, the enrollee’s choice not to participate.
Requiring that SNPs update ICPs as warranted when there are changes in an enrollee’s health status or the enrollee has a healthcare transition.
Risk Adjustment Data
CMS finalized as proposed various technical changes to the definitions related to risk adjustment data, including a technical change to the definition of Hierarchical Condition Categories (HCCs) at § 422.2 to remove the reference to a specific version of the ICD to keep the HCC definition current as newer versions of the ICD become available and are adopted by CMS, as well as substituting the terms “disease codes” with “diagnosis codes” and “disease groupings” with “diagnosis groupings” to be consistent with ICD terminology. CMS also finalized its proposal to codify existing practice of requiring mandatory submission of risk adjustment data by PACE organizations and Section 1876 Cost plans, consistent with the risk adjustment data requirements applicable to MA plans.
Medical Loss Ratio (MLR) Reporting
In the Proposed Rule, CMS proposed a number of regulatory changes intended to improve the meaningfulness and comparability of the MLR across plan contracts, as well as align the MA and Part D MLR regulations with the regulations in the commercial and Medicaid MLR programs. However, in the Final Rule, CMS adopted only one MLR-related proposal — to exclude Medicare Prescription Payment Plan unsettled balances from the MLR numerator.
MLR related proposals that were not finalized include the following:
Requiring that provider incentive and bonus arrangements be tied to clinical or quality improvement standards in order to be included in the MA MLR numerator;
Requiring administrative costs to be excluded from quality-improving activities in the MA and Part D MLR numerators; and
Codifying the current practice by which MA and Part D MLR reports include a description of how expenses are allocated across lines of business.
To AI or Not to AI? The Use of AI in Employment Decisions
Even just a few years ago, the concept of using artificial intelligence (AI) in everyday life was a novel, if somewhat intimidating, concept. But from Google’s AI overview to Microsoft’s Copilot, many of us use AI daily to help increase efficiency and streamline certain processes. If you are an employer using AI to sort through job applications and resumes, to make decisions based on background check information, or to sort through criteria for promotion or termination decisions, you need to consider the legal ramifications, which increasingly involve federal and state laws.
The State and Local Legal Landscape
Some state legislatures and local governments, in attempting to get ahead of any issues, have started considering or issuing guidance or legislation aimed at preventing employment discrimination resulting from the use of AI tools. For example, New Jersey has issued guidance indicating that the use of AI in employment decisions will be subject to the same antidiscrimination laws as non-AI decisions and that employers will be liable for discrimination caused by AI tools they did not design. Both Colorado and Illinois have passed laws, effective in 2026, prohibiting employers from using AI in a discriminatory manner and requiring certain disclosures when using AI in certain employment decisions. New York City passed a local law, effective July 2023, that regulates the use of AI in employment decisions. Maryland and California have proposed but have not yet passed AI legislation, and even more states are in the early stages of considering laws regulating employer use of AI in employment decisions.
Where Is the Federal Government on This Issue?
It is currently unlikely that federal legislation is forthcoming, although that could change in the years to come. In 2023 and 2024, the Equal Employment Opportunity Commission and the Department of Labor issued guidance on the use of AI in employment decisions. That guidance was rescinded following President Trump’s January 2025 executive order revoking policies and directives acting as “barriers” to “AI innovation.”
Now What?
While this is an evolving area, employers, especially those with remote employees across the United States, must keep up to date on state or local laws on the use of AI in employment decisions. As a general rule, make sure that any AI you are using complies with federal anti-discrimination laws. Other best practices include:
Have a policy on if and how you are going to use AI;
Vet your AI vendors and make sure they have considered the potential adverse impact of their products;
Notify employees or prospective employees that you are using AI in employment decision-making;
Regularly audit AI results to determine whether protected groups are being disproportionately impacted (a minimal illustration of one such check follows this list);
Ensure employees responsible for implementing AI tools have the proper training and are using such tools appropriately; and
Consult with subject matter experts and legal counsel as necessary.
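By way of illustration only, the following minimal Python sketch shows one common screening check: the “four-fifths” (80%) rule used in adverse-impact analysis. The applicant data, group labels, and threshold here are hypothetical placeholders; an actual audit should be designed with counsel and statistical experts and tailored to the specific AI tool.

```python
from collections import Counter

# Hypothetical applicant records: (group_label, was_selected).
# The labels and outcomes are illustrative placeholders, not real data.
applicants = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, selected = Counter(), Counter()
for group, was_selected in applicants:
    totals[group] += 1
    if was_selected:
        selected[group] += 1

# Selection rate for each group, and the highest rate as the benchmark.
rates = {group: selected[group] / totals[group] for group in totals}
highest = max(rates.values())

# Four-fifths (80%) rule of thumb: flag any group whose selection rate
# falls below 80% of the highest group's rate for closer review.
for group, rate in rates.items():
    ratio = rate / highest if highest else 0.0
    status = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {status}")
```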
Digital Policy: Highlights of the German Coalition Agreement 2025
The newly published German Coalition Agreement 2025 (CA 2025), German language version available here, outlines a digital agenda of the new German government, aimed at strengthening Germany’s position as a leader in digital innovation, data protection, and technological sovereignty. This GT Alert provides an overview of key digital policy areas that the CA 2025 addresses, highlighting the new government’s priorities and potential implications for businesses operating in Germany.
1. Data Protection
The coalition emphasizes the importance of harmonizing and simplifying data protection standards while promoting innovation and economic growth. Key measures include:
Simplification for SMEs and Non-Commercial Activities: The new government plans to leverage the GDPR’s flexibility to simplify compliance for small and medium-sized enterprises (SMEs). On an EU level, the coalition wants to exclude SMEs, non-commercial organizations, and “low risk activities” from the GDPR’s scope (lines 2103 et seqq.).
Centralized Oversight: The Federal Data Protection Commissioner would be empowered (and renamed) to oversee data protection, data usage, and information freedom, consolidating responsibilities for greater efficiency (lines 2248 et seqq.).
Opt-out Instead of Consent: Burdensome consent requirements would be replaced by opt-out solutions “in accordance” with EU laws (lines 2096 et seqq.).
2. Data Sharing
The CA 2025 promotes a culture of data sharing to foster innovation while safeguarding individual rights. Highlights include:
Public Money, Public Data: Commitment to making data from publicly funded institutions openly accessible, with robust data trustee mechanisms to foster trust and quality (lines 2243 et seqq.).
Comprehensive Data Framework: Aim to develop modern regulations on data access and data economy for promoting data ecosystems in a comprehensive framework (lines 2238 et seqq.).
3. Online Platforms and Social Networks
The coalition underscores the need for fair competition and user protection, particularly from disinformation, in the digital space.
Platform Regulation: General commitment to supporting the EU’s Digital Services Act and Digital Markets Act to ensure platforms address systemic risks like disinformation and remove illegal content (line 2285).
Transparency and Accountability: Online platforms would be required to comply with existing obligations on transparency and content moderation. Even stricter liability for user content is being considered (lines 3926 et seqq.).
Possible Bot Identification Measures: The introduction of mandatory bot identification provisions for digital players is “being considered” (lines 2290 et seqq.).
4. Digital Infrastructure
The coalition prioritizes expanding Germany’s digital infrastructure to support economic growth and digital transformation.
Data Center Hub: The coalition aims to make Germany Europe’s leading data center hub, with a focus on energy-efficient operations and integration into district heating systems (lines 2192 et seqq.).
Nationwide Fiber Optic Rollout: The new government commits to accelerating the deployment of fiber-optic networks and ensuring high-speed internet access for all households (lines 2201 et seqq.).
Mobile Coverage and Satellite Technology: Efforts would be made to enhance mobile network coverage and explore satellite technology for underserved areas (lines 2201 et seqq., 2279 et seqq.).
5. Public Sector Digitalization
The coalition envisions a user-centric, fully digital public administration.
Restructuring Government Bureaucracy: The new government promises to reduce administrative staff in general and, in particular, wants to reduce the total number of federal authorities (lines 1811 et seqq.). At the same time, a new federal ministry for digitization and state modernization would be created (line 4564), which underscores the coalition’s focus on digitization topics.
Simplifying Administrative Processes: The new government intends to eliminate unnecessary formalities to simplify administrative processes for businesses (lines 339 et seqq., 1798 et seqq., 2171 et seqq.). Particularly, with the adoption of a new general clause, the written form requirement is to be abolished “wherever possible” (lines 2177 et seqq.). Administrative processes would be streamlined and automated, with a focus on eliminating the need for physical paperwork (lines 2155 et seqq.).
“One Stop Shop” for Administrative Services: The coalition aims to enable straightforward digital administrative services via a central platform (one-stop shop). A centralized platform would enable German citizens to access government services digitally, with mandatory digital identities for all citizens (lines 1802 et seqq.).
“Once Only” Approach for Citizens: Intergovernmental data sharing commitments would ensure that citizens have to provide their data only once to the government (lines 2080 et seqq.).
Public Procurement: Consolidated procurement platforms would standardize public procurement (especially of IT services) and help reduce dependence on “monopolistic” suppliers (lines 2075 et seqq.).
6. Digital Sovereignty
The coalition aims to reduce Germany’s dependencies on non-European technologies and to strengthen its digital autonomy.
Open Source and Open Standards: The new government aims to promote open-source solutions and define open interfaces to enhance interoperability and security, without providing many details (lines 2139 et seqq., 2172 et seqq.).
Strategic Investments: Funding would be directed towards key technologies such as cloud computing, artificial intelligence (AI), and cybersecurity (lines 108 et seqq.).
7. Artificial Intelligence (AI)
AI is positioned as a cornerstone of Germany’s digital strategy.
Investments in AI and Cloud Technology: The coalition promised “massive” investments in AI and cloud technologies, without going into further detail (line 108).
“AI Gigafactory” in Germany: The coalition aims to establish at least one European “AI gigafactory” in Germany (lines 2193 et seqq., 2509 et seqq.).
Regulatory Framework: The new government wants the EU AI Act implemented in a way that fosters innovation while addressing ethical and safety concerns (lines 2256 et seqq.). Particularly, burdens on the economy resulting from the technical and legal specifications of the AI Act would be removed (lines 2268 et seqq.).
Copyright Balance: The coalition plans to ensure fair remuneration for creators in generative AI development, mandate fair revenue sharing on streaming platforms, and enhance transparency in content usage (lines 2824 et seqq.).
Conclusion
The German CA 2025 sets a vision for digital transformation, emphasizing the streamlining of regulatory and administrative hurdles, infrastructure development, and technological sovereignty. While many details remain unclear, businesses should prepare for regulatory changes and explore opportunities arising from the new government’s focus on innovation and digitization. As these policies take shape, staying informed and proactive will be key to navigating the evolving digital landscape in Germany.
Congress Reintroduces the NO FAKES Act with Broader Industry Support
Congress has reintroduced the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act — a bipartisan bill designed to establish a federal framework to protect individuals’ right of publicity. As previously reported, the NO FAKES Act was introduced in 2024 to create a private right of action addressing the rise of unauthorized deepfakes and digital replicas—especially those misusing voice and likeness without consent. While the original bill failed to gain traction in a crowded legislative calendar, growing concerns over generative AI misuse and newfound support from key tech and entertainment stakeholders have revitalized the bill’s momentum.
What’s New in the Expanded Bill?
The revised bill reflects months of industry negotiations. Key updates include:
Subpoena Power for Rights Holders: The revised bill includes a new right to compel online services, via court-issued subpoenas, to disclose identifying information of alleged infringers, potentially streamlining enforcement efforts and unmasking anonymous violators.
Clarified Safe Harbors: Both versions of the bill include safe harbor protections for online services that proactively comply with notice and take-down procedures, a framework analogous to the protections afforded to online service providers under the Digital Millennium Copyright Act (DMCA). The revised bill introduces new eligibility requirements for these protections, including the implementation of policies for terminating accounts of repeat violators.
Digital Fingerprinting Requirement: In addition to removing offending digital replicas following takedown requests, the revised bill requires that online services use digital fingerprinting technologies (e.g., a cryptographic hash or equivalent identifier) to prevent future uploads of the same unauthorized material (a rough illustration of hash-based matching appears after this list).
Broader Definition of “Online Service”: The revised bill broadens the scope of the definition to explicitly include search engines, advertising services/networks, e-commerce platforms, and cloud storage providers, provided they register a designated agent with the Copyright Office. This expansion further ensures that liability extends beyond just the creators of deepfake technologies to also include platforms that host or disseminate unauthorized digital replicas.
Tiered Penalties for Non-compliance: The revised bill introduces a tiered structure for civil penalties, establishing enhanced fines for online services that fail to undertake good faith efforts to comply, ranging from $5,000 per violation up to $750,000 per work.
No Duty to Monitor: Unlike the prior version, the revised bill explicitly states that online services are not required to proactively monitor for infringing content, acknowledging the practical limitations and resource constraints of such monitoring. Instead, the responsibility is triggered upon receipt of a valid takedown notice, after which the online service must act promptly to remove or disable access to the unauthorized material to maintain safe harbor protections. This approach mirrors the notice-and-takedown framework established under the DMCA.
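The fingerprinting requirement noted above refers to “a cryptographic hash or equivalent identifier.” As a rough, hypothetical sketch of how hash-based matching works (not a description of any mechanism the bill itself prescribes), a service could hash uploaded files and compare them against a blocklist of hashes collected from prior takedowns. In practice, matching re-encoded or edited media would require perceptual fingerprinting rather than an exact cryptographic hash, which only catches byte-for-byte re-uploads.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of an uploaded file (SHA-256 hex digest)."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist assembled from files removed after prior takedown notices.
blocked_fingerprints = {
    fingerprint(b"previously removed deepfake clip"),
}

def should_block(upload: bytes) -> bool:
    """Reject an upload whose fingerprint matches known unauthorized material."""
    return fingerprint(upload) in blocked_fingerprints

print(should_block(b"previously removed deepfake clip"))  # True: exact re-upload
print(should_block(b"some other file"))                   # False: no match
```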
If enacted, the NO FAKES Act would establish nationwide protections for artists, public figures, and private individuals against unauthorized use of their likenesses or voices in deepfakes and other synthetic media. Notably, the revised bill has garnered broad consensus among stakeholders, including the major record labels, SAG-AFTRA, Google, and OpenAI.
While the bill seeks to create clearer legal boundaries in an era of rapidly evolving technology, stakeholders remain engaged in ongoing discussions about how best to balance the protection of individual rights with the imperative to foster technological innovation and safeguard First Amendment-protected expression. As the legislative process unfolds, debate will likely center on whether the bill’s framework can effectively address the complex legal and operational challenges posed by generative AI, while offering enforceable and practical guidance to the platforms that host and disseminate such content.
Importantly, the NO FAKES Act aims to resolve the challenges posed by the current patchwork of state right of publicity laws, which vary widely in scope and enforcement. This fragmented approach has often proven inefficient and ineffective in addressing inherently borderless digital issues like deepfakes and synthetic content. By establishing a consistent federal standard, the NO FAKES Act could provide greater legal clarity, streamline compliance for online platforms, and enhance protections for individuals across jurisdictions.
White House Unveils Government-Wide Plan to Streamline AI Integration
On April 7, the White House issued a fact sheet outlining new steps to support the responsible use and procurement of AI across federal agencies. The initiative builds on the Biden Administration’s 2023 Executive Order on AI and is intended to reduce administrative hurdles, improve interagency coordination, and expand access to commercially available AI tools.
The announcement requires the Office of Management and Budget, the Office of Federal Procurement Policy, and the General Services Administration to issue updated guidance and provide centralized tools to support implementation. Key measures of the guidance include:
Appointing Chief AI Officers. Each agency must designate a senior official responsible for overseeing AI governance and compliance.
Developing AI Strategies. Agencies are required to submit AI implementation plans within 180 days, identifying operational uses, risk mitigation strategies, and workforce needs.
Removing Procurement Barriers. Agencies must streamline acquisition processes that may hinder the timely adoption of AI systems, including by adopting performance-based procurement approaches.
Standardizing Commercial AI Guidance. OMB will release uniform guidance to support the responsible procurement and deployment of off-the-shelf AI tools, with a focus on privacy, equity, and safety.
Expanding Shared Tools and Expertise. The Administration will centralize technical resources to help agencies evaluate AI systems and manage associated risks.
Increasing Access for Small Businesses. The initiative aims to ensure that small and disadvantaged businesses can compete for AI-related government contracts.
Putting It Into Practice: The directive highlights the federal government’s commitment to institutionalizing responsible AI use across sectors while promoting innovation (previously discussed here). Similar momentum is building at the state level, where we expect to see continued parallel developments (previously discussed here and here).
Stall on Automated Decision-Making Technology Rules from the California Privacy Protection Agency
This week, the California Privacy Protection Agency (CPPA) board held its April meeting to discuss the latest set of proposed regulations, including automated decision-making technology (ADMT) regulations. Instead of finalizing these rules, the board continued its debate and considered further amendments to the draft regulations. Notably, some members proposed changing the definition of ADMT and removing behavioral advertising from the ADMT and risk assessment requirements. The board also directed CPPA staff to remove certain categories from the scope of the provisions covering significant decisions. The board conditionally approved these changes, but the final (we think) vote will occur at the next meeting.
These continued discussions likely mean that the final rules related to ADMT, risk assessments, and cybersecurity audits are still a long way away. The CPPA raised six topics on which it wants additional feedback before presenting the final set of amendments next month:
1. The definition of “ADMT;”
2. The definition of “significant decision;”
3. The “behavioral advertising” threshold;
4. The “work or educational profiling” and “public profiling” thresholds;
5. The “training” threshold; and
6. Risk assessment submissions to the CPPA.
If the changes are substantial enough, the CPPA would open up another 45-day comment period. During the last comment period, CPPA staff reported that over 1,600 pages of comments were received, and hours of testimony were given during the public hearing. The board has until November 2025 to submit the final regulatory package to the California Office of Administrative Law.
Board member Alastair Mactaggart argued that the draft regulations go beyond the scope of the CPPA’s authority to regulate privacy by also attempting to regulate artificial intelligence. He said, “We are now on notice that if we pass these regulations, we will be sued repeatedly and by many parties.” We will continue to monitor these discussions and proposed regulations.
Yahoo ConnectID Faces Class Action Over Email Address Tracking as Alleged Wiretap Violation
Yahoo’s ConnectID is a cookieless identity solution that allows advertisers and publishers to personalize, measure, and execute ad campaigns by leveraging first-party data and 1-to-1 consumer relationships. ConnectID uses consumer email addresses (instead of third-party tracking cookies) to produce and monetize consumer data. A lawsuit filed in the U.S. District Court for the Southern District of New York says that this use and monetization is occurring without consumer consent. The complaint alleges that ConnectID allows user-level tracking across websites by utilizing the individual’s email address—i.e., ConnectID tracks users via their email addresses without consent. The complaint further alleges that this tracking allows Yahoo to create consumer profiles with its “existing analytics, advertising, and AI products” and to collect user information even if a user isn’t a subscriber to a Yahoo product.
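For context, email-based identity solutions in the advertising industry commonly derive a pseudonymous identifier from a normalized, hashed email address so that the same person can be matched across sites and devices. The sketch below is a generic illustration of that general pattern under those assumptions; it is not Yahoo’s actual ConnectID implementation.

```python
import hashlib

def email_to_pseudonymous_id(email: str) -> str:
    """Generic illustration: normalize an email address and hash it to
    form a stable, pseudonymous identifier (not Yahoo's actual method)."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same person signing in on two different sites yields the same
# identifier, which is what enables cross-site, user-level matching.
print(email_to_pseudonymous_id("Jane.Doe@example.com"))
print(email_to_pseudonymous_id("  jane.doe@EXAMPLE.com "))  # identical output
```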
The complaint states, “Yahoo openly tells publishers that they need not concern themselves with obtaining user consent because it already provides ‘multiple mechanisms’ for users to manage their privacy choices. This is misleading at best.” Further, the complaint alleges that Yahoo’s Privacy Policy “makes no mention of sharing directly identifiable email addresses and, in fact, represents that email addresses will not be shared.”
The named plaintiff seeks to certify a nationwide class of all individuals with a ConnectID and whose web communications have been intercepted by Yahoo. The plaintiff asserts this class will be “well over a million individuals.” The complaint seeks relief under the New York unfair and deceptive business practices law, the California Invasion of Privacy Act, and the Federal Computer Data Access and Fraud Act.
These “wiretap” violation lawsuits are popping up all across the country. The lawsuits allege violations of state and federal wiretap statutes, often focusing on website technologies like session replay, chatbots, and pixel tracking, arguing that these trackers (and here, the tracking of email addresses) allow for unauthorized interception of communications. For more information on these predatory lawsuits, check out our recent blog post, here.
The lawsuit seeks statutory, actual, compensatory, punitive, nominal, and other damages, as well as restitution, disgorgement, injunctive relief, and attorneys’ fees. Now is the time to assess your website and the tracking technologies it uses to avoid these types of claims.
Blockchain+ Bi-Weekly; Highlights of the Last Two Weeks in Web3 Law: April 10, 2025
The past two weeks have been relatively quiet, with stablecoins being the most prominent focus on the regulatory front. Stablecoin legislation now appears likely this year, with a bill to regulate stablecoins advancing out of the key House Financial Services Committee — a step toward what would be the first crypto-specific federal legislation enacted in the United States. The SEC also issued guidance clarifying that certain “covered stablecoins” are not securities under existing law. Unresolved are several key questions — including whether regulatory authority over stablecoins will lie solely with the federal government or continue to be shared with the states and whether interest-bearing stablecoins should be treated as stablecoins at all.
These developments and a few other brief notes are discussed below.
SEC Clarifies That Certain Stablecoins Are Not Securities: April 4, 2025
Background: While Congress moves toward a legislative framework for stablecoins (discussed below), the SEC has issued limited guidance addressing how existing securities laws apply to certain types of stablecoins. The SEC’s Division of Corporation Finance’s Statement on Stablecoins provides that the offer and sale of certain “covered stablecoins” do not consist of the offer and sale of securities and issuers of the same “do not need to register . . . with the Commission under the Securities Act or fall within one of the Securities Act’s exemptions from registration.”
Analysis: Under the guidance, “covered stablecoins” are defined as stablecoins that are marketed for the purposes of making payments, are exchangeable for the reference currency on a one-for-one basis and are backed by a reserve of low-risk, liquid assets. There are at least two big takeaways from this guidance. First, interest-bearing stablecoins could turn a consumer product into an investment product under the Howey and Reves tests. Second, while not explicitly addressed, the statement implies that stablecoin issuers might not need to register as investment companies under the Investment Company Act of 1940 as long as the assets backing the stablecoin are USD and other assets that are “considered low risk and readily liquid.” This view is consistent with pending legislation that would prohibit interest payments on stablecoins to distinguish them from investment products. Notably, the guidance does not address stablecoins pegged to anything other than the U.S. dollar.
Stablecoin Bill Passes House Financial Services Committee: April 2, 2025
Background: The House Financial Services Committee included H.R. 2392, the Stablecoin Transparency and Accountability for a Better Ledger Economy (STABLE) Act of 2025, in a recent markup session. Committee Chair French Hill stated he expected “our discussion today will be passionate,” and his expectations were met during a marathon 10-hour debate, particularly regarding various proposed amendments to prohibit federal officials from “sponsoring, issuing, promoting or licensing” stablecoins in response to World Liberty Financial stating its intent to issue a stablecoin for its platform. The bill ultimately passed through committee on a 32-17 vote, reflecting fairly strong bipartisan support, though further changes can be expected before the bill reaches the House floor.
Analysis: Stablecoin legislation in 2025 now appears likely, but two major questions remain: whether authority will be split between state and federal regulators, and whether stablecoins should be permitted to bear interest. Some argue that allowing interest-bearing stablecoins would enhance their utility, while others argue that it could undermine the existing banking system. An anti-central bank digital currency (CBDC) bill also advanced through the Committee along party lines, though that bill is of limited practical importance, as any CBDC would likely require express Congressional approval.
SEC v. Ripple Settlement Progresses: March 25, 2025
Background: In our last Bi-Weekly update, we noted the then-available details regarding developments in the SEC v. Ripple case. Since then, further news was released that Ripple will also not be appealing the decision in its case against the SEC. The SEC will also ask the district court to lift the standard SEC injunction, but there is no guarantee that it will be approved.
Analysis: The settlement was finalized as both parties agreed to drop their respective appeals in the case, which dates back to 2020. Ripple agreed to pay a fine of $50 million, reduced from the original $125 million, in exchange for the SEC requesting the lifting of the injunction requiring Ripple to register any future securities. The settlement signals the conclusion of one of the most anticipated crypto litigations. As discussed in the previous update, the settlement aligns with the general outlook of the SEC dropping non-fraud-related crypto cases. On the other hand, Ripple remaining liable for a $50 million fine related to its institutional token sales leaves a door open for the SEC to argue that sales of tokens for the purpose of raising capital might still be treated as securities offerings. While the settlement is a welcome resolution, the absence of a final judicial opinion leaves no precedent or legal guidance for future token offerings. With this litigation soon behind us, the industry can now focus on securing clearer regulatory guidance on digital assets.
Briefly Noted:
Digital Chamber Conference: Remarks by Commissioner Peirce: The Digital Chamber of Commerce held its annual Blockchain Summit on March 26th, with the Polsinelli BitBlog team actively participating. We were encouraged by the strong demonstration of bipartisan support for the industry — even in these highly partisan times — due in no small part to the efforts of the Chamber under Perianne Boring and now under the energetic new leadership of Coby Carbone, whom we had the pleasure of congratulating in person. Of particular note at the conference was SEC Commissioner Hester Peirce’s important address on the path ahead for building common-sense digital asset regulations.
SEC Chair Confirmation Hearing: Paul Atkins had his Senate confirmation hearing last week, but there wasn’t anything unexpected discussed. He has a lot of work ahead of him and will get plenty of help from the industry in the various upcoming roundtables. That said, it appears he may have already gotten a head start, with two of the three remaining SEC commissioners (Uyeda and Peirce) being former staffers of his.
Securities Clarity Act Reintroduced: House Majority Whip Tom Emmer has reintroduced his Securities Clarity Act, which specifies that any asset sold as the object of an investment contract is distinct from the securities offering it was originally a part of. This definition is technology-neutral and applies to all assets sold or offered that would only be considered a “security” because of their inclusion in an investment contract. With the unclear status of the market structure bill, this would be a solid alternative along with SEC rulemaking and no-action letters.
FDIC Removes Crypto Limits: The FDIC has released a statement that it will no longer require supervised institutions that “engage in permissible crypto-related activities” to receive prior agency approval. Another big win for getting digital asset companies access to traditional banking.
Kentucky Self-Custody Law: Kentucky recently enacted a law that passed unanimously on a bipartisan vote and guarantees individuals the right to hold and manage their crypto in self-hosted wallets. Hopefully, we see similar protections at the federal level soon.
State Staking-as-a-Service Lawsuits Dropped: Fresh off the SEC clarifying its view that pooled PoW mining operations are not generally securities offerings, South Carolina, Kentucky and Vermont have all dropped their lawsuits against Coinbase alleging that its staking services qualified as securities.
Circle Files to Go Public: USDC stablecoin issuer Circle has filed its S-1 to go public, aiming for a $5 billion valuation. Considering the company reported $1.68 billion in revenue and reserve income in 2024, that seems reasonable, even in less-than-optimal market conditions. Interestingly, the IPO filings also revealed Coinbase’s acquisition of a stake in Circle. This is just the first of the crypto companies going public in the upcoming months and years, if tariffs don’t derail those plans.
Defending the Fourth Amendment: It is worth reading this amicus brief from the DeFi Education Fund in a case regarding the Constitution’s Fourth Amendment protection against illegal search and seizure, specifically challenging the government’s subpoena powers over digital asset transaction records held by centralized exchanges.
Acting SEC Chair Asks for Guidance Assessment: Acting SEC Chair Uyeda has asked the staff to reassess certain guidance, including the Framework for “Investment Contract” Analysis of Digital Assets. That document was based on a 2018 speech by former SEC Division of Corporation Finance Director Bill Hinman. The goal appears to be to clean the slate of past guidance muddying the waters in areas the current administration wants to change, including the prior approach to regulating digital assets.
Conclusion:
With the SEC announcing that certain “covered stablecoins” are not securities and a stablecoin bill advancing through the House Financial Services Committee, stablecoins were the most active area of regulatory development over the past two weeks. Ripple’s settlement with the SEC marks the close of one of the most closely watched crypto litigations to date — though it leaves much work ahead in the pursuit of clearer legal frameworks for digital assets. Other notable updates include the SEC Chair’s confirmation hearing, the reintroduction of the Securities Clarity Act, the FDIC’s removal of prior approval requirements for crypto-related activities, Kentucky’s new self-custody law, and Circle going public.
California’s Wait Is Nearly Over: New AI Employment Discrimination Regulations Move Toward Final Publication
The California Civil Rights Council has advanced new regulations regarding employers’ use of artificial intelligence (AI) and automated decision-making systems, clearing the way for them to take effect later this year. The new regulations will make the state one of the first to adopt comprehensive regulations regarding the growing use of such technologies to make employment decisions.
Quick Hits
The California Civil Rights Department finalized modified regulations for employers’ use of AI and automated decision-making systems.
The regulations confirm that the use of such technology to make employment decisions may violate the state’s anti-discrimination laws and clarify limits on such technology, including in conducting criminal background checks and medical/psychological inquiries.
On March 21, 2025, the Civil Rights Council, a branch of the California Civil Rights Department (CRD), voted to approve the final and modified text of California’s new “Employment Regulations Regarding Automated-Decision Systems.” The regulations were filed with the Office of Administrative Law, which must approve them. At this time, it is not clear when the finalized regulations will take effect, although they are likely to become effective this year.
The CRD has been considering automated-decision system regulations for years amid concerns over employers’ increasing use of AI and other automated decision-making systems, or “Automated-Decision Systems,” to make or facilitate employment decisions, such as recruitment, hiring, and promotions.
While the final regulations have some key differences from the proposed regulations released in May 2024, they clarify that it is unlawful to use AI and automated decision-making systems to make employment decisions that discriminate against applicants or employees in a way prohibited by the California Fair Employment and Housing Act (FEHA) or other California antidiscrimination laws.
Here are some key aspects of the final regulations.
Automated-Decision Systems
The final regulations define “automated-decision system[s]” as “[a] computational process that makes a decision or facilitates human decision making regarding an employment benefit,” including processes that “may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.” This definition is narrower than the proposed regulations, which would have included any computational process that “screens, evaluates, categorizes, or otherwise makes a decision….”
Covered systems include a range of technological processes, such as tests, games, or puzzles used to assess applicants or employees, processes for targeting job advertisements or screening resumes, processes to analyze “facial expression, word choice, and/or voice in online interviews,” and processes to “analyz[e] employee or applicant data acquired from third parties.”
Automated-decision systems do not include typical software or programs such as word processors, spreadsheets, map navigation systems, web hosting, firewalls, and common security software, “provided that these technologies do not make a decision regarding an employment benefit.” Notably, the final regulations do not include language from the proposed rule’s excluded technology provision that would have excluded systems used to “facilitate human decision making regarding” an employment benefit.
Other Key Terms
“Agent”—The final regulations would consider an employer’s “agent” to be an “employer” under the FEHA regulations. An “agent” would be defined as “any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity … including when such activities and decisions are conducted in whole or in part through the use of an automated decision system.” (Emphasis added.)
“Automated-Decision System Data”—The regulations define such data as “[a]ny data used to develop or customize an automated-decision system for use by a particular employer or other covered entity.” However, the final regulations narrow what is included as “automated-decision system data,” removing language from the proposed regulations that would have included “[a]ny data used in the process of developing and/or applying machine learning, algorithms, and/or artificial intelligence” used in an automated-decision system, including “data used to train a machine learning algorithm.” (Emphasis added.)
“Artificial Intelligence”—The regulations define AI as “[a] machine-based system that infers, from the input it receives, how to generate outputs,” which can include “predictions, content, recommendations, or decisions.” By contrast, the proposed regulations’ definition had included “machine learning system[s] that can, for a given set of human defined objectives, make predictions, recommendations, or decisions.”
“Machine Learning”—The term is defined as the “ability for a computer to use and learn from its own analysis of data or experience and apply this learning automatically in future calculations or tasks.”
Unlawful Selection Criteria
Potentially discriminatory hiring tools have long been unlawful in California, but the final regulations confirm that those antidiscrimination laws apply to potential discrimination on the basis of protected class or disability that is carried out by AI or automated decision-making systems. Specifically, the regulations state that it is “unlawful for an employer or other covered entity to use an automated-decision system or selection criteria (including a qualification standard, employment test, or proxy) that discriminates against an applicant or employee or a class of applicants or employees on a basis protected” by FEHA.
Removal of Disparate Impact
However, the final regulations do not include the proposed definition of “adverse impact” caused by an automated-decision system under the FEHA regulations. The prior proposed regulations had specified that an adverse impact includes “disparate impact” theories and may be the result of a “facially neutral practice that negatively limits, screens out, tends to limit or screen out, ranks, or prioritizes applicants or employees on a basis protected by” FEHA. Further, the final regulations do not include similar language defining automated-decision systems to include systems that screen out or make decisions related to employment benefits.
Pre-Employment Practices
The final regulations further clarify that the use of online application technology that “screens out, ranks, or prioritizes applicants based on” scheduling restrictions “may discriminate against applicants based on their religious creed, disability, or medical condition,” unless it is job-related and required by business necessity and there is a mechanism for the applicant to request an accommodation.
The regulations specify that this would apply to automated-decision systems. They state that use of such a system “that, for example, measures an applicant’s skill, dexterity, reaction time, and/or other abilities or characteristics may discriminate against individuals with certain disabilities or other characteristics protected under the Act” if no reasonable accommodation is provided. Similarly, a system that “analyzes an applicant’s tone of voice, facial expressions or other physical characteristics or behavior may discriminate against individuals based on race, national origin, gender, disability, or other” protected characteristics.
Criminal Records
California law provides that before employers deny applicants based on a criminal record, the employer “must first make an individualized assessment of whether the applicant’s conviction history has a direct and adverse relationship with the specific duties of the job” that would justify denying the applicant. The final regulations state that “prohibited consideration” of criminal records “includes, but is not limited to, inquiring about criminal history through an employment application, background check, or the use of an automated-decision system.” (Emphasis added.)
However, the final regulations do not include the proposed language that would have clarified that the use of an automated-decision system alone, “in the absence of additional processes or actions,” is not a sufficient individualized assessment. The final regulations also do not include the proposed language that would have required employers to provide “a copy or description” of any report generated and used to withdraw a conditional job offer.
Unlawful Medical or Psychological Inquiries
The final regulations state that the prohibitions on asking job applicants about their medical or psychological histories extend to inquiries made “through the use of an automated-decision system.” The regulations state that such an inquiry “includes any such examination or inquiry administered through the use of an automated-decision system,” including puzzles or games that are “likely to elicit information about a disability.”
Third-Party Liability
The final regulations clarify that the prohibitions on aiding and abetting unlawful employment practices apply to the use of automated decision-making systems, potentially implicating third parties that design or implement such systems. The regulations also specify that “evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results” is relevant to a claim of unlawful discrimination. However, the final regulations do not include the proposed language that would have created third-party liability for the design, development, advertising, promotion, or sale of such systems.
Next Steps
Once effective, the final regulations will make California one of the first jurisdictions, along with Colorado, Illinois, and New York City, to promulgate comprehensive regulations concerning AI and automated decision-making technologies. The regulations also come as President Donald Trump seeks to reshape federal AI policy, focusing on removing barriers to U.S. leadership in developing the technology. The new policy shifts away from the Biden administration’s focus on safeguarding employees and consumers from the potential negative impacts of such technology, particularly unlawful employment discrimination and harassment. States and localities are expected to continue regulating AI to fill the gap.
Lay of the Land: Challenges to Data Center Construction—Past, Present and Future [Podcast]
In this episode of Lay of the Land, we are joined by Paul Manzer, principal and data center market leader with Navix Engineering, to explore the evolving landscape of data center construction. We dive into the unique civil engineering challenges—from site selection to due diligence—and trace the evolution of these challenges from past limitations to present-day complexities like supply chain issues and legal hurdles.
Looking ahead, we discuss future trends driven by AI and emerging technologies, examining how legal strategies and engineering innovation can address these challenges. We provide key takeaways for developers and investors, emphasizing the critical collaboration between legal and engineering teams.
Employment Law This Week Episode 385 – Artificial Intelligence Regulations for Employers [Video, Podcast]
This week, we’re discussing the state-level, employment-related artificial intelligence (AI) laws and regulations sweeping the nation.
AI Regulations for Employers
State laws are rapidly stepping in to regulate AI in the absence of federal legislation, with at least 45 states introducing AI-related bills this year. Hear from Epstein Becker Green attorney Frances M. Green as she outlines how employers can navigate this evolving landscape by developing governance policies and providing clear training and guidelines to ensure the safe, transparent, and accountable use of AI tools.
Contract Law in the Age of Agentic AI: Who’s Really Clicking “Accept”?
In May 2024, we released Part I of this series, in which we discussed agentic AI as an emerging technology enabling a new generation of AI-based hardware devices and software tools that can take actions on behalf of users. It turned out we were early – very early – to the discussion, with several months elapsing before agentic AI became as widely known and discussed as it is today. In this Part II, we return to the topic to explore legal issues concerning user liability for agentic AI-assisted transactions and open questions about existing legal frameworks’ applicability to the new generation of AI-assisted transactions.
Background: Snapshot of the Current State of “Agents”[1]
“Intelligent” electronic assistants are not new—the original generation, such as Amazon’s Alexa, has been offering narrow capabilities for specific tasks for more than a decade. However, as OpenAI CEO Sam Altman commented in May 2024, an advanced AI assistant or “super-competent colleague” could be the killer app of the future. Later, Altman noted during a Reddit AMA session: “We will have better and better models. But I think the thing that will feel like the next giant breakthrough will be agents.” A McKinsey report on AI agents echoes this sentiment: “The technology is moving from thought to action.” Agentic AI represents not only a technological evolution, but also a potential means to further spread (and monetize) AI technology beyond its current uses by consumers and businesses. Major AI developers and others have already embraced this shift, announcing initiatives in the agentic AI space. For example:
Anthropic announced an updated frontier AI model in public beta capable of interacting with and using computers like human users;
Google unveiled Gemini 2.0, its new AI model for the agentic era, alongside Project Mariner, a prototype leveraging Gemini 2.0 to perform tasks via an experimental Chrome browser extension (while keeping a “human in the loop”);
OpenAI launched a “research preview” of Operator, an AI tool that can interface with computers on users’ behalf, and launched beta feature “Tasks” in ChatGPT to facilitate ongoing or future task management beyond merely responding to real time prompts;
LexisNexis announced the availability of “Protégé,” a personalized AI assistant with agentic AI capabilities;
Perplexity recently rolled out “Shop Like a Pro,” an AI-powered shopping recommendation and buying feature that allows Perplexity Pro users to research products and, for those merchants whose sites are integrated with the tool, purchase items directly on Perplexity; and
Amazon announced Alexa+, a new generation of Alexa that has agentic capabilities, including enabling Alexa to navigate the internet and execute tasks, as well as Amazon Nova Act, an AI model designed to perform actions within a web browser.
Beyond these examples, other startups and established tech companies are also developing AI “agents” in this country and overseas (including the invite-only release of Manus AI by Butterfly Effect, an AI developer in China). As a recent Microsoft piece speculates, the generative AI future may involve a “new ecosystem or marketplace of agents,” akin to the current smartphone app ecosystem. Although early agentic AI device releases have received mixed reviews and seem to still have much unrealized potential, they demonstrate the capability of such devices to execute multistep actions in response to natural language instructions.
Like prior technological revolutions—personal computers in the 1980s, e-commerce in the 1990s and smartphones in the 2000s—the emergence of agentic AI technology challenges existing legal frameworks. Let’s take a look at some of those issues – starting with basic questions about contract law.
Note: This discussion addresses general legal issues with respect to hypothetical agentic AI devices or software tools/apps that have significant autonomy. The examples provided are illustrative and do not reflect any specific AI tool’s capabilities.
Automated Transactions and Electronic Agents
Electronic Signatures Statutory Law Overview
A foundational legal question is whether transactions initiated and executed by an AI tool on behalf of a user are enforceable. Despite the newness of agentic AI, the legal underpinnings of electronic transactions are well established. The Uniform Electronic Transactions Act (“UETA”), which has been adopted by every state except New York (as noted below) and by the District of Columbia, the federal E-SIGN Act, and the Uniform Commercial Code (“UCC”) serve as the legal framework for the use of electronic signatures and records, ensuring their validity and enforceability in interstate commerce. The fundamental provisions of UETA are Sections 7(a)-(b), which provide: “(a) A record or signature may not be denied legal effect or enforceability solely because it is in electronic form; (b) A contract may not be denied legal effect or enforceability solely because an electronic record was used in its formation.”
UETA is technology-neutral and “applies only to transactions between parties each of which has agreed to conduct transactions by electronic means” (allowing the parties to choose the technology they desire). In the typical e-commerce transaction, a human user selects products or services for purchase and proceeds to checkout, which culminates in the user clicking “I Agree” or “Purchase.” This click—while not a “signature” in the traditional sense of the word—may be effective as an electronic signature, affirming the user’s agreement to the transaction and to any accompanying terms, assuming the requisite contractual principles of notice and assent have been met.
At the federal level, the E-SIGN Act (15 U.S.C. §§ 7001-7031) (“E-SIGN”) establishes the same basic tenets regarding electronic signatures in interstate commerce and contains a reverse preemption provision that generally allows UETA to take precedence over E-SIGN in states that have adopted it. If a state does not adopt UETA but enacts another law regarding electronic signatures, that alternative law will supersede E-SIGN only if, among other things, it specifies procedures or requirements consistent with E-SIGN.
However, while UETA has been adopted by 49 states and the District of Columbia, it has not been enacted in New York. Instead, New York has its own electronic signature law, the Electronic Signatures and Records Act (“ESRA”) (N.Y. State Tech. Law § 301 et seq.). ESRA generally provides that “[a]n electronic record shall have the same force and effect as those records not produced by electronic means.” According to New York’s Office of Information Technology Services, which oversees ESRA, “the definition of ‘electronic signature’ in ESRA § 302(3) conforms to the definition found in the E-SIGN Act.” Thus, as one New York state appellate court stated, “E-SIGN’s requirement that an electronically memorialized and subscribed contract be given the same legal effect as a contract memorialized and subscribed on paper…is part of New York law, whether or not the transaction at issue is a matter ‘in or affecting interstate or foreign commerce.’”[2]
Given the wide adoption of the UETA model statute among US states, with minor variations, this post will principally rely on UETA’s provisions in analyzing certain contractual questions with respect to AI agents. E-SIGN and UETA work toward similar aims in establishing the legal validity of electronic signatures and records, and E-SIGN expressly permits states to supersede the federal act by enacting UETA. As for New York’s ESRA, courts have already noted that the New York legislature incorporated the substantive terms of E-SIGN into New York law, suggesting that ESRA is generally harmonious with the other laws’ purpose of ensuring that electronic signatures and records have the same force and effect as traditional signatures.
Electronic “Agents” under the Law
Beyond affirming the enforceability of electronic signatures and transactions where the parties have agreed to transact with one another electronically, Section 2(2) of UETA also contemplates “automated transactions,” defined as those “conducted or performed, in whole or in part, by electronic means or electronic records, in which the acts or records of one or both parties are not reviewed by an individual.” Central to such a transaction is an “electronic agent,” which Section 2(6) of UETA defines as “a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part, without review or action by an individual.” Under UETA, in an automated transaction, a contract may be formed by the interaction of “electronic agents” of the parties or by an “electronic agent” and an individual. E-SIGN similarly contemplates “electronic agents,” and states: “A contract or other record relating to a transaction in or affecting interstate or foreign commerce may not be denied legal effect, validity, or enforceability solely because its formation, creation, or delivery involved the action of one or more electronic agents so long as the action of any such electronic agent is legally attributable to the person to be bound.”[3] Under both of these definitions, agentic AI tools—which are increasingly able to initiate actions and respond to records and performances on behalf of users—arguably qualify as “electronic agents” and thus can form enforceable contracts under existing law.[4]
AI Tools and E-Commerce Transactions
Given this existing body of statutory law enabling electronic signatures, from a practical perspective this may be the end of the analysis for most e-commerce transactions. If I tell an AI tool to buy me a certain product and it does so, then the product’s vendor, the tool’s provider, and I might assume—with the support of UETA, E-SIGN, the UCC, and New York’s ESRA—that the vendor and I (via the tool) have formed a binding agreement for the sale and purchase of the good. That will likely be the end of it unless a dispute arises about the good or the payment (e.g., the product is damaged or defective, or my credit card is declined), in which case the AI tool isn’t really relevant.
But what if the transaction does not go as planned for reasons related to the AI tool? Consider the following scenarios:
Misunderstood Prompts: The tool misinterprets a prompt that would be clear to a human but is confusing to its model (e.g., the user’s prompt states, “Buy two boxes of 101 Dalmatians Premium dog food,” and the AI tool orders 101 two-packs of dog food marketed for Dalmatians).
AI Hallucinations: The user asks for something the tool cannot provide or does not understand, triggering a hallucination in the model with unintended consequences (e.g., the user asks the model to buy stock in a company that is not public, so the model hallucinates a ticker symbol and buys stock in whatever real company that symbol corresponds to).
Violation of Limits: The tool exceeds a pre-determined budget or financial parameter set by the user (e.g., the user’s prompt states, “Buy a pair of running shoes under $100” and the AI tool purchases shoes from the UK for £250, exceeding the user’s limit).
Misinterpretation of User Preference: The tool misinterprets a prompt due to lack of context or misunderstanding of user preferences (e.g., the user’s prompt states, “Book a hotel room in New York City for my conference,” intending to stay near the event location in lower Manhattan, and the AI tool books a room in Queens because it prioritizes price over proximity without clarifying the user’s preference).
Disputes like these begin with a conflict between the user and a vendor—the AI tool may have effectively created a contract between the user and the vendor, leaving the user with legal responsibility for that contract. The user may then seek indemnity or similar rights against the developer of the AI tool.
Of course, most developers will try to avoid these situations by requiring user approvals before purchases are finalized (i.e., keeping a “human in the loop”). But as the desire for efficiency and speed increases (and AI tools become more autonomous and familiar with their users), these inbuilt protections could start to wither away, and users who grow accustomed to their tools might find themselves approving transactions without vetting them carefully. This could lead to scenarios like those above, where the user might seek to void a transaction or, if that fails, try to avoid liability for it by shifting responsibility to the AI tool’s developer.[5] Could this ever work? Who is responsible for unintended liabilities related to transactions completed by an agentic AI tool?
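For readers who want a concrete picture of what such developer-side guardrails might look like, the following is a minimal, purely illustrative Python sketch of a human-in-the-loop checkpoint combined with a user-set spending limit. It is not drawn from any actual product; the class, function, and parameter names (PurchaseRequest, within_budget, human_confirms, budget_usd) are hypothetical and included only to make the concept tangible.

```python
# Illustrative sketch only -- hypothetical names, not any real vendor's API.
from dataclasses import dataclass


@dataclass
class PurchaseRequest:
    """A purchase the AI agent proposes to make on the user's behalf."""
    item: str
    quantity: int
    unit_price_usd: float

    @property
    def total_usd(self) -> float:
        return self.quantity * self.unit_price_usd


def within_budget(request: PurchaseRequest, budget_usd: float) -> bool:
    """Guardrail: reject any proposed purchase that exceeds the user's preset limit."""
    return request.total_usd <= budget_usd


def human_confirms(request: PurchaseRequest) -> bool:
    """'Human in the loop': require an explicit yes from the user before executing."""
    answer = input(
        f"Agent proposes buying {request.quantity} x {request.item} "
        f"for ${request.total_usd:.2f}. Approve? [y/N] "
    )
    return answer.strip().lower() == "y"


def execute_if_approved(request: PurchaseRequest, budget_usd: float) -> str:
    if not within_budget(request, budget_usd):
        return "Blocked: purchase exceeds the user's spending limit."
    if not human_confirms(request):
        return "Cancelled: user declined to approve the transaction."
    # In a real system, the order would be placed here (e.g., via a vendor API).
    return f"Order placed for {request.quantity} x {request.item}."


if __name__ == "__main__":
    proposal = PurchaseRequest(item="running shoes", quantity=1, unit_price_usd=250.0)
    print(execute_if_approved(proposal, budget_usd=100.0))  # blocked: over the $100 limit
```

Whether a developer builds in anything like this, and whether a user can bypass or disable it, may matter legally as well as practically: as discussed below, UETA Section 10’s treatment of errors turns in part on whether the parties have agreed to security procedures designed to catch them.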
Sources of Law Governing AI Transactions
AI Developer Terms of Service
As stated in UETA’s Prefatory Note, the purpose of UETA is “to remove barriers to electronic commerce by validating and effectuating electronic records and signatures.” Yet, the Note cautions, “It is NOT a general contracting statute – the substantive rules of contracts remain unaffected by UETA.” E-SIGN contains a similar disclaimer, limiting its reach to statutes that require contracts or other records to be written, signed, or in non-electronic form (15 U.S.C. § 7001(b)(2)). In short, UETA, E-SIGN, and the similar UCC provisions do not supply contract law rules on how an agreement is formed or on the enforceability of the terms of any agreement that has been formed.
Thus, in the event of a dispute, terms of service governing agentic AI tools will likely be the primary source to which courts will look to assess how liability might be allocated. As we noted in Part I of this post, early-generation agentic AI hardware devices generally include terms that not only disclaim responsibility for the actions of their products or the accuracy of their outputs, but also seek indemnification against claims arising from their use. Thus, absent any express customer-favorable indemnities, warranties or other contractual provisions, users might generally bear the legal risk, barring specific legal doctrines or consumer protection laws prohibiting disclaimers or restrictions of certain claims.[6]
But what if the terms of service are nonexistent, don’t cover the scenario, or—more likely—are unenforceable? Unenforceable terms for online products and services are not uncommon, for reasons ranging from “browsewrap” terms being too hidden to specific provisions being unconscionable. What legal doctrines would control in such a scenario?
The Backstop: User Liability under UETA and E-SIGN
Where would the parties stand without the developer’s terms? E-SIGN allows for the effectiveness of actions by “electronic agents” “so long as the action of any such electronic agent is legally attributable to the person to be bound.” This provision seems to bring the issue back to the terms of service governing a transaction or general principles of contract law. But again, what if the terms of service are nonexistent or don’t cover a particular scenario, such as those listed above? As it did with the threshold question of whether AI tools could form contracts in the first place, UETA appears to offer a position here that could be an attractive starting place for a court. Moreover, in the absence of express language under New York’s ESRA, a New York court might apply E-SIGN (which contains an “electronic agent” provision) or look to UETA, its commentary, and its body of precedent if the court cannot find on-point binding authority. That would not be surprising, given that we are talking about technology-driven scenarios that have only recently become possible.
UETA generally attributes responsibility to users of “electronic agents”, with the prefatory note explicitly stating that the actions of electronic agents “programmed and used by people will bind the user of the machine.” Section 14 of UETA (titled “Automated Transaction”) reinforces this principle, noting that a contract can be formed through the interaction of “electronic agents” “even if no individual was aware of or reviewed the electronic agents’ actions or the resulting terms and agreements.” Accordingly, when automated tools such as agentic AI systems facilitate transactions between parties who knowingly consent to conduct business electronically, UETA seems to suggest that responsibility defaults to the users—the persons who most immediately directed or initiated their AI tool’s actions. This reasoning treats the AI as a user’s tool, consistent with the other UETA Comments (e.g., “contracts can be formed by machines functioning as electronic agents for parties to a transaction”).
However, different facts or technologies could lead to alternative interpretations, and ambiguities remain. For example, Comment 1 to UETA Section 14 asserts that the lack of human intent at the time of contract formation does not negate enforceability in contracts “formed by machines functioning as electronic agents for parties to a transaction” and that “when machines are involved, the requisite intention flows from the programming and use of the machine” (emphasis added).
This explanatory text has a couple of issues. First, it is unclear what constitutes “programming,” and it seems to presume that the human intention at the programming step (whatever that may be) is more or less the same as the human intention at the use step[7], but this may not always be the case with AI tools. For example, it is conceivable that an AI tool could be programmed by its developer to put the developer’s interests above the users’, for example by making purchases from a particular preferred e-commerce partner even if that vendor’s offerings are not the best value for the end user. This concept may not be so far-fetched, as existing GenAI developers have entered into content licensing deals with online publishers to obtain the right for their chatbots to generate outputs or feature licensed content, with links to such sources. Of course, there is a difference between a chatbot offering links to relevant, accurate licensed news sources (while not displaying appropriate content from other publishers) and an agentic chatbot entering into unintended transactions or spending the user’s funds in unwanted ways. This discrepancy in intention alignment might not be enough to allow the user to shift liability for a transaction to the developer, but it is not hard to see how larger misalignments might lead to thornier questions, particularly in litigation, when a court might scrutinize the enforceability of an AI vendor’s terms (under the unconscionability doctrine, for example).
Second, UETA does not contemplate the possibility that an AI tool might have enough autonomy and capability that some of its actions could properly be characterized as the result of its own intent. Looking at UETA’s definition of “electronic agent,” the commentary notes that “As a general rule, the employer of a tool is responsible for the results obtained by the use of that tool since the tool has no independent volition of its own.” But technology has advanced considerably in the last few decades, and depending on the tool, an autonomous AI tool might one day have substantial independent volition (and further UETA commentary admits the possibility of a future with more autonomous electronic agents). Indeed, modern AI researchers were contemplating this possibility even before the rapid technological progress that began with ChatGPT.
Still, Section 10 of UETA may be relevant to some of the AI tool mishaps in the bulleted list above, including misunderstood prompts and AI hallucinations. UETA Section 10 (titled “Effect of Change or Error”) outlines the possible actions a party may take upon discovering human or machine errors, or when “a change or error in an electronic record occurs in a transmission between parties to a transaction.” The remedies outlined in UETA depend on the circumstances of the transaction and on whether the parties have agreed to certain security procedures to catch errors (e.g., a “human in the loop” confirming an AI-completed transaction) or whether the transaction involves an individual and a machine.[8] In this way, the guardrails integrated into a particular AI tool, or adopted by the parties themselves, play a role in the liability calculus. The section concludes by stating that if none of UETA’s error provisions apply, then other applicable law governs, which might include the terms of the parties’ contract and the law of mistake, unconscionability, and good faith and fair dealing.
* * *
Thus, along an uncertain path we circle back to where we started: the terms of the transaction and general contract law principles and protections. However, not all roads lead to contract law. In our next installment in this series, we will explore the next logical source of potential guidance on AI tool liability questions: agency law. Decades of established law may now be challenged by a new sort of “agent” in the form of agentic AI…and a new AI-related lawsuit foreshadows the issues to come.
[1] In keeping with common practice in the artificial intelligence industry, this article refers to AI tools that are capable of taking actions on behalf of users as “agents” (in contrast to more traditional AI tools that can produce content but not take actions). However, note that the use of this term is not intended to imply that these tools are “agents” under agency law.
[2] In addition, the UCC has provisions consistent with UETA and E-SIGN providing for the use of electronic records and electronic signatures for transactions subject to the UCC. The UCC does not require the agreement of the parties to use electronic records and electronic signatures, as UETA and E-SIGN do.
[3] Under E-SIGN, “electronic agent” means “a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part without review or action by an individual at the time of the action or response.”
[4] It should be noted that New York’s ESRA does not expressly provide for the use of “electronic agents,” yet it does not prohibit them either. Reading ESRA and the ESRA regulation together, the spirit of the law could be construed as forward-looking and supportive of the use of automated systems and electronic means to create legally binding agreements between willing parties. Looking to New York precedent, one could also argue that E-SIGN, which contains provisions about the use of “electronic agents,” might apply in certain circumstances to fill the “electronic agent” gap in ESRA. For example, the ESRA regulations (9 CRR-NY § 540.1) state: “New technologies are frequently being introduced. The intent of this Part is to be flexible enough to embrace future technologies that comply with ESRA and all other applicable statutes and regulations.” On the other hand, one could argue that certain issues surrounding “electronic agents” remain more unsettled in New York. Still, New York courts have found ESRA consistent with E-SIGN.
[5] Since AI tools are not legal persons, they could not be liable themselves (unlike, for example, a rogue human agent could be in some situations). We will explore agency law questions in Part III.
[6] Once agentic AI technology matures, it is possible that certain user-friendly contractual standards might emerge as market participants compete in the space. For example, as we wrote about in a prior post, in 2023 major GenAI providers rolled out indemnifications to protect their users from third-party claims of intellectual property infringement arising from GenAI outputs, subject to certain carve-outs.
[7] The electronic “agents” in place at the time of UETA’s passage might have included basic e-commerce tools or EDI (Electronic Data Interchange), which is used by businesses to exchange standardized documents, such as purchase orders, electronically between trading partners, replacing traditional methods like paper, fax, mail or telephone. Electronic tools are generally designed to perform explicitly according to the user’s intentions (e.g., clicking on an icon will add this item to a website shopping cart or send this invoice to the customer), and UETA Section 10 contains provisions governing when an inadvertent or electronic error occurs (as opposed to an abrogation of the user’s wishes).
[8] For example, UETA Section 10 states that if a change or error occurs in an electronic record during transmission between parties to a transaction, the party who followed an agreed-upon security procedure to detect such changes can avoid the effect of the error, if the other party who didn’t follow the procedure would have detected the change had they complied with the security measure; this essentially places responsibility on the party who failed to use the agreed-upon security protocol to verify the electronic record’s integrity.
Comments to UETA Section 10 further explain the context of this section: “The section covers both changes and errors. For example, if Buyer sends a message to Seller ordering 100 widgets, but Buyer’s information processing system changes the order to 1000 widgets, a “change” has occurred between what Buyer transmitted and what Seller received. If on the other hand, Buyer typed in 1000 intending to order only 100, but sent the message before noting the mistake, an error would have occurred which would also be covered by this section.” In the situation where a human makes a mistake when dealing with an electronic agent, the commentary explains that “when an individual makes an error while dealing with the electronic agent of the other party, it may not be possible to correct the error before the other party has shipped or taken other action in reliance on the erroneous record.”