The CCPA and Automated Decision-Making Technologies (ADMT)

As artificial intelligence (AI), particularly generative AI, becomes increasingly woven into our professional and personal lives—from building personalized travel itineraries to reviewing resumes to summarizing investigation notes and reports—questions about who or what controls our data and how it’s used are ever-present. AI systems survive and thrive on information, and that intersection of AI and privacy elevates the need for data protection.
Recent regulations issued by the California Privacy Protection Agency (CPPA) under the California Consumer Privacy Act (CCPA) begin to erect those protections. Among its various provisions, the CCPA now specifically addresses automated decision-making technologies (ADMT), bringing transparency to their use and giving consumers rights to, among other things, push back on algorithms making significant decisions about them.
As a starting point, it is important to define ADMT. Under the CCPA, ADMT means any technology that processes personal information and uses computation to replace, or substantially replace, human decision-making. For this purpose, “replace” means to make a decision without human involvement. For human involvement to be present, a human must:

know how to interpret and use the technology’s output to make the decision;
review and analyze the output of the technology, and any other information that is relevant to make or change the decision; and
have the authority to make or change the decision based on that review and analysis.

CCPA-covered businesses that use ADMT to make “significant decisions” about consumers have several new compliance obligations to navigate. A “significant decision” is defined as a decision that has important consequences for a consumer’s life, opportunities, or access to essential services. CCPA regulations define these decisions as those that result in the provision or denial of:

Financial or lending services (e.g., credit approval, loan eligibility)
Housing (e.g., rental applications, mortgage decisions)
Education enrollment or opportunities (e.g., admissions decisions)
Employment or independent contracting opportunities or compensation (e.g., hiring, promotions, work assignments)
Healthcare services (e.g., treatment eligibility, insurance coverage)

These decisions are considered “significant” because they directly affect a consumer’s economic, health, or personal well-being.
When such businesses use ADMT to make significant decisions, they generally must do the following:

Provide an opt-out right for consumers.
Provide a pre-use notice that clearly explains the business’s use of ADMT, in plain language.
Provide consumers with the ability to request information about the business’s use of ADMT.

Businesses using ADMT for significant decisions before January 1, 2027, must comply by January 1, 2027. Businesses that begin using ADMT after January 1, 2027, must comply immediately when the use begins.
Businesses will need to examine these new requirements carefully, including how they fit into the existing CCPA compliance framework, along with exceptions that may apply. For example, in some cases a business may not be required to make the right to opt out of ADMT available.
If a business provides consumers with a method to appeal the ADMT decision to a human reviewer who has the authority to overturn the decision, the opt-out is not required. Additionally, the right to opt out of ADMT in connection with certain admission, acceptance, or hiring decisions is not required if the following are satisfied:

the business uses ADMT solely for the business’s assessment of the consumer’s ability to perform at work or in an educational program to determine whether to admit, accept, or hire them; and
the ADMT works as intended for the business’s proposed use and does not unlawfully discriminate based upon protected characteristics.

Likewise, the right to opt out of ADMT is not required for certain allocation/assignment of work and compensation decisions, if:

the business uses the ADMT solely for the allocation/assignment of work or compensation; and
the ADMT works for the business’s purpose and does not unlawfully discriminate based upon protected characteristics.

As many businesses are realizing, successfully deploying AI requires a coordinated approach that goes beyond getting the desired output; it also means understanding a complex regulatory environment in which data privacy and security play a significant part.

North Carolina + Utah Attorneys General Launch Bipartisan AI Task Force

North Carolina Attorney General Jeff Jackson and Utah Attorney General Derek Brown recently co-launched a bipartisan task force “to help monitor artificial intelligence.” According to Jackson, the task force will focus on:

Identifying emerging AI issues with the help of law enforcement, experts, and stakeholders to better equip attorneys general to protect the public;
Developing basic safeguards for AI developers in order to protect the public and reduce harm, especially towards children; and
Creating a standing forum that will track AI developments and coordinate responses to new challenges.

The task force includes representatives from OpenAI, Microsoft, and the Attorney General Alliance. According to Jackson, “Congress hasn’t put basic protections in place and we can’t wait. As attorneys general, our job is to keep people safe. AI is becoming part of everyday life for families and kids. Taking thoughtful steps now will help prevent harm as this technology becomes more powerful and more present in our daily lives.” In announcing the task force, Brown stated, “This task force is committed to defending our freedoms and our privacy, while also building a safer digital world for our families and our children. By working together with other attorneys general, we will protect our society from potential abuses of AI before they ever happen.”

NYDFS Cybersecurity Crackdown: New Requirements Now in Force, and “Covered Entities” Include HMOs, CCRCs—Are You Compliant?

As cybersecurity breaches grow more complex and frequent, regulators are increasingly focused on organizational compliance.
Organizations such as CrowdStrike report that in 2025, cyberattacks are increasing in speed, volume, and sophistication—and that cybercrime has evolved into a “highly efficient business.” The escalating threat landscape demands robust security frameworks that can withstand evolving risks.
Enter the amendments announced in November 2023 to the New York Department of Financial Services (NYDFS) Cybersecurity Regulation, 23 NYCRR Part 500 (“Amended Regulation”), which became effective on November 1. This post explores the breadth of the Amended Regulation and the steps that covered entities need to take now.
The Amended Regulation applies to “covered entities,” i.e., DFS-regulated entities including partnerships, corporations, branches, agencies, and associations—indeed, “any person”—operating under, or required to operate under, a license, registration, charter, certificate, permit, accreditation, or similar authorization under New York’s Banking Law, Insurance Law, or Financial Services Laws.
Notably, health maintenance organizations (HMOs) and continuing care retirement communities (CCRCs) are considered covered entities. NYDFS-authorized New York branches, agencies, and representative offices of out-of-country foreign banks are also covered entities subject to the requirements of Part 500.
While some requirements took effect almost immediately in late 2023, others were delayed to 2024 and 2025. The final set of cybersecurity requirements that became effective November 1 requires covered entities to:

expand multifactor authentication (MFA) to include all individuals accessing information systems; and
implement written policies and procedures designed to produce and maintain a complete, accurate, and documented asset inventory of information systems.

Multi-Factor Authentication (MFA)
The amended Section 500.12 requires covered entities to use multi-factor authentication (MFA) for any individual accessing any information system of a covered entity—regardless of location, type of user, and type of information contained on the Information System being accessed (FAQ 18). Internal networks that would require the use of MFA include email, document hosting, and related services, whether on-premises or in the cloud, such as Office 365 and G-Suite (FAQ 19).
Definition
MFA is defined in the regulation as authentication through verification of at least two of the following types of authentication factors:

knowledge factors, such as a password, passphrase, or personal identification number (PIN);
possession factors, such as a hardware token, authentication app, or smartcard; or
inherence factors, such as a biometric characteristic (fingerprints, facial recognition, or other biometric markers).
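For teams translating this definition into access-control logic, the requirement reduces to a simple check: the factors presented must span at least two of the three categories above. The minimal Python sketch below illustrates that check; the category labels, names, and example factors are illustrative assumptions, not text drawn from the regulation.

# Minimal sketch: verify that an authentication flow uses factors from at
# least two of the three categories in the amended Section 500.12 definition.
# Category labels and examples are illustrative assumptions, not regulatory text.
from enum import Enum

class FactorType(Enum):
    KNOWLEDGE = "knowledge"    # password, passphrase, PIN
    POSSESSION = "possession"  # hardware token, authenticator app, smartcard
    INHERENCE = "inherence"    # fingerprint, facial recognition, other biometrics

def satisfies_mfa(presented_factors: list[FactorType]) -> bool:
    """Return True if the presented factors span at least two distinct types."""
    return len(set(presented_factors)) >= 2

# A password plus an authenticator-app code qualifies; two passwords do not.
assert satisfies_mfa([FactorType.KNOWLEDGE, FactorType.POSSESSION])
assert not satisfies_mfa([FactorType.KNOWLEDGE, FactorType.KNOWLEDGE])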

Artificial Intelligence and Other Risks
Note that while the definition includes passwords and biometric characteristics as verifiers, caution is warranted, as AI deepfakes may now pose a risk to biometric-based systems. Indeed, NYDFS issued a related letter regarding AI cybersecurity risks in October 2024. The October 2024 letter does not impose new requirements with respect to the Amended Regulation, but states:
While Covered Entities have the flexibility to decide, based on their Risk Assessments, which authentication factors to use, not all forms of authentication are equally effective. Given the risks…Covered Entities should consider using authentication factors that can withstand AI-manipulated deepfakes and other AI-enhanced attacks by avoiding authentication via SMS text, voice, or video, and using forms of authentication that AI deepfakes cannot impersonate, such as digital-based certificates and physical security keys. Similarly, instead of using a traditional fingerprint or other biometric authentication system, Covered Entities should consider using an authentication factor that employs technology with liveness detection or texture analysis to verify that a print or other biometric factor comes from a live person. Another option is to use authentication via more than one biometric modality at the same time, such as a fingerprint in combination with iris recognition, or fingerprint in combination with user keystrokes and navigational patterns. [Footnotes omitted].
The NYDFS July 2025 Guidance on the MFA requirements stresses the need “for organizations to understand the trade-offs associated with each method in order to make informed, risk-based decisions.” The July 2025 Guidance discusses the tradeoffs with respect to SMS Authentication, App-based Authentication (with and without number matching), and Token-based Authentication. Note that a covered entity’s Chief Information Security Officer (CISO) may approve in writing the use of reasonably equivalent or more secure controls, to be reviewed at least annually.
Limited Exemptions
A covered entity may qualify for a limited exemption pursuant to Section 500.19(a), which provides limited exemptions for covered entities with:

fewer than 20 employees;
less than $7,500,000 in gross annual revenue in each of the last three years; or
less than $15,000,000 in year-end total assets.

Where one of the limited exemptions applies, MFA should nevertheless be used for:

remote access to the covered entity’s information system;
remote access to third-party applications, including but not limited to those that are cloud-based, from which nonpublic information is accessible; and
all privileged accounts other than service accounts that prohibit interactive login.

Asset Inventory of Information Systems
Section 500.13(a) requires covered entities—as part of their cybersecurity programs—to implement written policies and procedures designed to produce and maintain a complete, accurate, and documented asset inventory of their information systems. At a minimum, policies and procedures must include:

a method to track specified key information for each asset, including, as applicable:

the owner;
the location;
classification or sensitivity;
support expiration date;
recovery time objectives; and

the frequency required to update and validate the covered entity’s asset inventory.
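To make the inventory requirement concrete, the Python sketch below models a single inventory record with the minimum fields listed above, plus a simple check for when revalidation is due. The field names, types, and default interval are illustrative assumptions; the regulation leaves the tracking method and update frequency to each covered entity’s written policies and procedures.

# Illustrative sketch of an asset-inventory record capturing the minimum
# fields called out in Section 500.13(a). Field names, the default validation
# interval, and the is_validation_due helper are assumptions for illustration.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class AssetRecord:
    asset_id: str
    owner: str                           # accountable individual or team
    location: str                        # physical or cloud location
    classification: str                  # sensitivity / data classification
    support_expiration: Optional[date]   # end of vendor support, if applicable
    recovery_time_objective_hours: Optional[int]
    last_validated: date                 # when the entry was last confirmed accurate

def is_validation_due(asset: AssetRecord, interval_days: int = 365) -> bool:
    """Flag records not revalidated within the frequency set by policy."""
    return date.today() - asset.last_validated > timedelta(days=interval_days)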

Section 500.13(b) also requires covered entities to include policies and procedures for the secure disposal on a periodic basis of any nonpublic information (identified in section 500.1(k)(2)-(3)) that is no longer necessary for business operations or for other legitimate business purposes of the covered entity, except where such information is otherwise required to be retained by law or regulation, or where targeted disposal is not reasonably feasible due to the manner in which the information is maintained.
Enforcement
The regulation is to be enforced by the superintendent. Section 500.20 states that the failure to act to satisfy an obligation shall constitute a violation, although the superintendent is directed, when assessing penalties, to consider elements including cooperation, good faith, history of prior violations, the number of violations, and the extent of harm to consumers. In a recent example, in August, NYDFS secured a $2 million settlement with a health insurance provider for violations of Part 500.
Takeaways
Implementation
Covered entities must:

implement MFA for any individual accessing any information systems of a covered entity or meet the requirements of a limited exemption (fewer than 20 employees, less than $7,500,000 in gross annual revenue in each of the last three years; or less than $15,000,000 in year-end total assets). Covered entities should understand the various methods of MFA in order to make informed, risk-based decisions regarding their use; and
implement written policies and procedures designed to produce and maintain a complete, accurate, and documented asset inventory of their information systems, with 1) a method to track key information and 2) the frequency needed to update and validate the asset inventory.
The CISO may approve alternative controls in writing if they are reasonably equivalent or more secure and are reviewed at least annually.

Compliance Filing
Covered entities must:

submit to NYDFS an annual notice regarding compliance with Part 500—through a Certification of Material Compliance or an Acknowledgment of Noncompliance—by April 15 (covers compliance during the previous calendar year), unless fully exempt and a Notice of Exemption is submitted (FAQ 29);
file separate annual notifications, if holding more than one license;
keep all data and documentation supporting their annual notifications for 5 years and provide that information to the Department upon request;
notify NYDFS of a cybersecurity incident no later than 72 hours after determining that one has occurred (FAQ 20); notification may be required even if the attack is unsuccessful (FAQ 21) or occurs at a third-party service provider (FAQ 23).

Third Parties
Covered entities should ensure compliance with regulations pertaining to third-party service providers, including:

Implementing policies with respect to third-party service providers (Section 500.11).
Undertaking a thorough due diligence process in evaluating the cybersecurity practices of third-party providers; the FAQs state that relying on a provider’s certification of material compliance is insufficient.
Cybersecurity governance: If the CISO is employed by a third-party service provider, the covered entity shall retain responsibility and provide direction and oversight (Section 500.4).
Making a risk assessment regarding appropriate controls for third-party service providers (Section 500.11(b)).

Note that in October 2025 NYDFS issued “Guidance on Managing Risks Related to Third-Party Service Providers,” along with a Part 500 checklist, an exemption flowchart, and more. Developments are fast-paced in the cybersecurity world, and companies have a lot to lose if they pay insufficient attention to these new legal requirements, as they set a new floor. While meeting all of these (and other) cyber requirements may not be easy, this remains a space in which an ounce of prevention may well be worth a pound of cure.
EBG will continue to monitor developments in this area. If you have questions or need assistance in implementation of the Amended Regulations within your organization, please reach out to the authors or the EBG attorney with whom you work.
Epstein Becker Green Staff Attorney Ann W. Parks assisted with the preparation of this post.

District of Massachusetts Has Personal Jurisdiction Over Out-of-State Adtech Defendant in Geolocation Case

Earlier this fall, the District of Massachusetts issued another notable decision in the growing wave of privacy litigation that, as discussed, raises difficult questions concerning standing, jurisdiction, and statutory interpretation. In Lionetta v. InMarket Media, LLC, Judge Kobick denied a motion to dismiss a putative class action alleging that InMarket Media, LLC collected and sold Massachusetts users’ precise geolocation data through software development kits (SDKs) embedded in third-party mobile applications. The ruling highlights the court’s willingness to exercise personal jurisdiction over out-of-state defendants whose business models rely on the collection and sale of Massachusetts-based location data.
Plaintiffs Allege InMarket Collected and Sold Their Precise Location Data Without Consent
InMarket aggregates consumer data through its own mobile apps and through an SDK integrated into over 300 third-party apps, installed on more than 390 million devices. The SDK allegedly collects detailed behavioral and geolocation information, which InMarket then sells to retailers and brands for targeted advertising and location-based push notifications.
The plaintiffs, both Massachusetts residents, used apps such as CVS, Stop & Shop, and Dunkin’, which they allege contained InMarket’s SDK. They claim they never provided informed consent for InMarket to collect, use, or monetize their location data. They assert claims for unjust enrichment and violation of Chapter 93A, seeking to represent a class of Massachusetts residents whose personal information was collected without informed consent.
InMarket moved to dismiss for lack of personal jurisdiction and failure to state a claim.
Why Lionetta is Different from Rosenthal: Forum Targeting and Data Use
Judge Kobick found specific jurisdiction under the First Circuit’s three-part due process test: relatedness, purposeful availment, and reasonableness.
On relatedness, the plaintiffs’ claims arose directly from InMarket’s allegedly Massachusetts-focused conduct. InMarket did not merely make its software accessible in Massachusetts. The SDK collected data from Massachusetts devices, and InMarket used that data to deliver targeted ads and push notifications to those same consumers, sometimes triggered by their proximity to local stores. This was sufficient to create a nexus between the forum, the defendant, and the alleged injury.
Purposeful availment was also satisfied. InMarket allegedly earned substantial revenue from Massachusetts-derived data and used that data to target Massachusetts consumers. The company is registered to do business in Massachusetts and employs remote workers here. These contacts made it foreseeable that InMarket could be haled into court in the Commonwealth.
The court addressed how this case differed from Rosenthal v. Bloomingdales.com, LLC, in which the First Circuit held that a retailer’s use of session-replay code on a national website did not create specific jurisdiction in Massachusetts. In Rosenthal, the website was not targeted to the Commonwealth, and the alleged tracking occurred uniformly across all visitors, regardless of location. The only Massachusetts connection in Rosenthal was the user’s action.
By contrast, InMarket’s conduct allegedly targeted the physical locations of Massachusetts residents and generated revenue from retailers’ interest in reaching those consumers in Massachusetts stores. The court emphasized that InMarket monetized the data precisely because it enabled Massachusetts-specific, real-world targeting. InMarket did not merely place its SDK in the stream of commerce but had an intent to serve Massachusetts customers. That forum-directed commercial activity, absent in Rosenthal, was critical to establishing specific jurisdiction.
Finally, the court found jurisdiction reasonable given Massachusetts’ strong interest in protecting residents’ privacy and InMarket’s failure to identify any unusual burden associated with litigating in the forum.
Unjust Enrichment Based on Monetization of Geolocation Data Does Not Require a Direct Transfer
The court held that plaintiffs plausibly stated an unjust enrichment claim. Even though the data flowed indirectly through third-party apps, Massachusetts law does not require a direct transfer. Plaintiffs alleged that their geolocation data was valuable, that InMarket profited from it, and that they reasonably expected but received no compensation. Relying on Tyler v. Michaels Stores, the court noted that misappropriation and monetization of personal data can constitute a cognizable benefit subject to restitution, and that disgorgement of a company’s profits can be an appropriate means of calculating damages for the unauthorized collection of private data.
Geolocation-Data Collection Without Informed Consent May Be Unfair or Deceptive
The Chapter 93A claim also survived. The court accepted that collecting and selling consumers’ geolocation data without informed consent could be unfair or deceptive. The court was guided again by Tyler, in which the Supreme Judicial Court held that a merchant’s unlawful and deceptive collection of personal information was an invasion of the consumer’s personal privacy that violated Chapter 93A. The Lionetta plaintiffs likewise adequately alleged injury by asserting loss of privacy and the sale of their data on the open market. Causation was satisfied because InMarket’s alleged conduct—the unauthorized collection and sale—directly produced the injury.
Key Takeaways: Geolocation Collection and Monetization Create Unique Forum Contacts
Lionetta reinforces that Massachusetts courts will assert personal jurisdiction over out-of-state defendants when their business models involve targeted use of Massachusetts residents’ information. The decision also illustrates how plaintiffs might attempt to distinguish Rosenthal by pleading forum-specific targeting, local monetization, and real-time geolocation-driven advertising.
Although the court in Lionetta accepted allegations that plaintiffs ascribed value to their data and expected compensation, defendants should continue to challenge speculative and unspecified allegations, particularly when plaintiffs fail to plead facts about value or how and at what price they would sell their data. Defendants can also target factual allegations that fail to tie enrichment to the plaintiffs’ individual data (not just an aggregated or de-identified cohort). Companies collecting precise location data through SDKs or mobile integrations should expect heightened scrutiny under both unjust enrichment and Chapter 93A theories, and should reassess disclosures, consent practices, and Massachusetts-specific data operations accordingly.

Price Tags and Personal and Competitor Data: States Step Up Algorithmic Pricing Regulation

As price-setting by computer algorithm becomes increasingly prevalent, states are stepping in to address transparency and fairness concerns that federal legislation has yet to comprehensively tackle. Lawmakers argue that clear disclosure and limits on algorithmic practices are essential to protect consumers from opaque pricing methods that may leverage their personal data or result from anti-competitive collaboration among businesses. The growing patchwork of state-level initiatives signals a broader trend toward local oversight of algorithmic decision-making in commerce, but the landscape is shifting quickly as lawmakers attempt to catch up with rapidly changing technology.
California and New York, often at the forefront of these issues, are leading the way with recent legislative and regulatory developments regulating the growing technology. In the meantime, federal courts have issued divided decisions dealing with algorithmic pricing.
New York: Algorithmic Pricing Disclosure Act Survives Legal Challenge
In May 2025, New York passed the Algorithmic Pricing Disclosure Act, requiring businesses to inform customers when prices are set using personalized algorithms. The Act broadly applies to entities domiciled or doing business in New York. The Act requires businesses to disclose when a price is set using an algorithm that incorporates personal consumer data by requiring the following disclosure: “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA.” The New York Act is enforced solely by the New York Attorney General, who must first issue a cease-and-desist notice before pursuing penalties of up to $1,000 per violation.
The passage of the New York law marked a significant milestone, as it recently withstood a legal challenge brought by industry groups who argued that the mandated disclosure infringed on commercial free speech and imposed undue burdens on businesses.[1] On October 8, 2025, the court granted New York State’s motion to dismiss, finding the disclosure was factual and uncontroversial and that it served a valid consumer protection interest.
California: Restrains Use of Competitor Data to Influence Price
On October 6, 2025, California enacted AB 325. The new law prohibits agreements to use or distribute a “common pricing algorithm,” defined as any software or other technology that two or more people use and that ingests competitor data to recommend, align, stabilize, set, or otherwise influence a price or commercial term (including terms related to both upstream vendors and downstream customers). AB 325 also lowers the pleading standard under the Cartwright Act (California’s antitrust statute, Cal. Bus. & Prof. Code § 16720) for certain civil claims, and prohibits coercing another person to set or adopt a recommended price or commercial term generated by such an algorithm for the same or similar products or services in California.
Other Efforts to Regulate Algorithmic Pricing
In 2025 alone, more than 50 bills have been introduced to regulate algorithmic pricing across 24 state legislatures, including the following:

Illinois introduced several bills that, if enacted, would regulate or ban dynamic pricing in selected situations, including ticket sales (HB 3838) or the use of consumer data in pricing (SB 2255).
Texas introduced SB 2567, which would require retailers to disclose algorithmic pricing at the point of sale.
Massachusetts introduced House Bill 99 which, if enacted, would ban dynamic pricing based on customers’ biometric data.
Colorado’s legislature passed, but the governor vetoed, HB25-1004, legislation that would have prohibited the sale or distribution of an algorithmic device sold or distributed with the intent to be used by two or more landlords in the same market to set or recommend the amount of rent, level of occupancy, or other commercial terms.
New Jersey introduced SB 3657, which seeks to make it unlawful for landlords or property managers to use algorithmic systems to influence rental prices or housing supply in New Jersey.
Pennsylvania introduced HB 1779, which seeks to require disclosure of algorithmic pricing and prohibits dynamic pricing based on protected class data (e.g., race, gender, religion).

Last week, U.S. Senators Ron Wyden and Peter Welch introduced The End Rent Fixing Act of 2025. The Act is targeted at algorithms that use competitors’ data to set rental rates. The Act would make it unlawful for rental property owners to contract for the services of a company that coordinates rental housing prices and supply information and would designate such arrangements as a per se violation of the U.S. antitrust laws. It would also prohibit the practice of coordinating price, supply, and other rental housing information among two or more rental property owners. The Act would also allow individual plaintiffs to invalidate any pre-dispute arbitration agreement or pre-dispute joint action waiver that would prevent the plaintiff from bringing suit.
Algorithms Using Competitors’ Data to Set Prices
U.S. antitrust law hasn’t fully caught up with algorithmic price setting, and the legal landscape is changing fast. Some experts think there could be liability in certain situations. For example, the Department of Justice has argued that if competitors use the same pricing algorithm—and that algorithm relies on competitors sharing their data to set prices—it could violate the Sherman Antitrust Act.
In September 2025, the Ninth Circuit issued the first federal appellate decision on algorithmic pricing in Gibson v. Cendyn Group, ruling that competing Las Vegas hotels did not violate Section 1 of the Sherman Act by independently using the same third-party pricing software, where there was no underlying agreement among competitors and the software did not share confidential information among licensees.
In contrast, in December 2023, an Illinois federal court denied motions to dismiss claims in multi-district class action litigation alleging software vendors and rental property owners and managers conspired by sharing property rental pricing and supply data to fix prices for multifamily housing rentals across the country.[2] Last week, the court granted preliminary approval of settlements with 27 defendants for $141.8 million. The litigation continues with the larger defendants whose conduct, the plaintiffs contend, comprised the larger volume of the alleged illegal commerce at issue in the case.
In June 2025, an Illinois federal court denied a motion to dismiss allegations that health insurers unlawfully conspired to underpay out-of-network providers by outsourcing rate-setting to analytics firm MultiPlan. The court applied the per se standard, finding plaintiffs “plausibly alleged a horizontal hub-and-spokes price-fixing agreement.”
Conclusion
The legislative developments and growing litigation over the legality of dynamic pricing tools reflect growing concern among policymakers about the fairness and transparency of algorithmic pricing models. As states continue to debate and refine proposed laws, businesses that rely on dynamic pricing must closely monitor these changes and proactively assess their compliance obligations. Staying informed about both state and federal actions will be essential to avoid potential legal pitfalls and ensure responsible use of pricing algorithms.
[1] National Retail Federation v. James, 1:25-cv-05500-JSR
[2] In Re: RealPage, Inc., Rental Software Antitrust Litigation, Case No. 3:23-md-3071, MDL No. 3071.

EEOC General Counsel Nominee Crow Expected to Bring Employer-Focused Perspective to Commission

Takeaways

President Trump nominated Texas employment defense attorney M. Carter Crow on November 19, 2025, to serve as the EEOC’s general counsel.
The EEOC general counsel is responsible for directing and coordinating enforcement litigation and setting litigation priorities for the Commission.
Crow, the former president of the Houston Bar Association and global head of employment litigation at Norton Rose Fulbright, would likely bring an employer-focused perspective to the Commission.

On November 19, 2025, President Donald Trump nominated M. Carter Crow to be the general counsel of the U.S. Equal Employment Opportunity Commission. Under Crow’s leadership, employers can expect the EEOC to continue the course set by the Trump administration and Chair Lucas.
The general counsel oversees all strategic and procedural aspects of the agency’s enforcement litigation. This includes directing and coordinating litigation before and after it is filed in court, supervising the Office of the General Counsel (OGC), gaining approval from the Commissioners for systemic discrimination and other suits to be pursued, and setting litigation priorities. 
Background 
Shortly after taking office, President Trump fired two Democratic commissioners along with then-EEOC general counsel Karla Gilbride, leaving the position vacant and the EEOC without a quorum. The EEOC has recently regained a quorum with the confirmation of Brittany Bull Panuccio, allowing the EEOC to pursue the priorities of the Trump administration and EEOC Chair Andrea Lucas more fully. 
Crow’s Qualifications 
After earning an undergraduate degree in accounting from Oklahoma State University, Crow obtained his law degree from the University of Oklahoma College of Law in 1991. He spent his career at Norton Rose Fulbright, focusing his practice on wage and hour litigation, contracts, and restrictive covenants. In 2022, the firm appointed him global head of its labor and employment practice. He also served as president of the Houston Bar Association.
What Employers Can Expect
The EEOC’s scrutiny of diversity, equity and inclusion (DEI) programs will continue, but Crow’s background indicates the EEOC’s Office of the General Counsel will be helmed by an experienced litigator who is likely familiar with employers’ perspectives on many employment law issues.
AI will be another area to watch at the EEOC. Crow has written on the risks employers face when using artificial intelligence (AI) for employment decisions. In a recent symposium article, Crow suggested five best practices for employers to mitigate risk, including transparency, accommodations, policies and training, audits and remedial steps, and vendor assessment. [See Crow, M. Carter & Jesika Silva Blanco, “Artificial Intelligence: Is AI the New Decision Maker in the Workplace?,” The Advocate (Winter 2024)].
With full staffing of presidential appointees for the Commission and in the Office of General Counsel, the EEOC may be better poised to bring litigation focused on administration enforcement priorities. It is unclear if Crow’s deep experience in defending companies in employment law matters will affect the EEOC’s litigation posture with employers. 
Next Steps
The White House has submitted its formal nomination of Crow to the U.S. Senate. The Senate Committee on Health, Education, Labor, and Pensions (HELP) will conduct confirmation hearings and then vote on whether to send the nomination to the full Senate for a confirmation vote.
As of this writing, the HELP committee has not announced a hearing date or formal opposition to Crow’s nomination. We will continue to monitor developments.

White House Draft EO Targets State AI Laws as New EO Emphasizes Security

Key Takeaways

White House draft EO proposes overriding state AI laws with a uniform national standard. A leaked executive order targets over 1,000 state-level AI bills, including laws in California and Colorado, and calls for a centralized federal approach to AI governance.
The draft EO signals a potential shift toward federal preemption of state consumer protection laws. If implemented, it could limit states’ ability to regulate AI, disrupt existing compliance strategies and create new litigation exposure for developers and deployers.
Organizations should assess AI governance policies and prepare for evolving federal enforcement. Review internal protocols for alignment with likely federal standards, monitor preemption risks and consider how the Genesis Mission’s security directives may impact partnerships. 

Last Tuesday, a draft executive order (EO) from the White House overriding states’ artificial intelligence (AI) laws was leaked. The draft EO was a surprise, considering that a moratorium on state AI laws was voted down 99-1 by the U.S. Senate on July 1, and the White House then said on July 10 that the federal government would not interfere with states’ rights if they pass “prudent AI laws.”
While the White House stated last week that another AI-related EO was only speculation, the leaked draft EO raises several issues for AI governance — the combination of principles, laws and policies that relate to AI’s development and deployment — and the extent to which states’ rights may eventually be challenged or allowed.  
Yesterday, the White House pivoted from the draft EO to issue a fact sheet to “accelerate AI for scientific discovery” and an EO launching the Genesis Mission, an integrated platform designed to harness datasets for AI-accelerated innovation. The new EO includes directives to combine efforts with the private sector and incorporate security standards.        
Draft EO Claims 1,000+ State AI Bills Threaten Innovation
The draft EO reiterates the AI Action Plan edict that “national security demands that we win this [AI] race” and asserts that state legislatures have introduced over 1,000 AI bills that threaten to undermine the country’s innovative culture. The draft declares that the White House “will act to ensure that there is a minimally burdensome national standard — not 50 discordant state ones.” 
Of the more than “1,000 AI bills” the draft EO threatens to override, it calls out two consumer privacy AI laws — California’s Transparency in Frontier Artificial Intelligence Act (i.e., Senate Bill 53) and the Colorado AI Act (CAIA) — as introducing “catastrophic risk” and “differential treatment or impact” standards that hinder innovation.   
For context, California’s Senate Bill 53 covers large frontier models and developers and includes detailed governance and transparency requirements. Covered entities, for example, must outline how they identify, assess and mitigate catastrophic risks and describe how they will integrate national/international standards and best practices.        
As explained more fully here, CAIA requires that deployers and developers meet certain criteria to ensure they understand what is required to protect consumers from known or foreseeable risks. In addition to a risk management policy and program, covered entities must complete impact assessments at least annually and in some instances within 90 days of a change to an AI system.
Proposed AI Litigation Task Force Would Target State-Specific Rules
The draft EO outlines how the California and Colorado consumer privacy laws would be challenged by an “AI Litigation Task Force” overseen by the U.S. Attorney General, focused on eliminating: 

Subjective safety standards that hinder necessary AI development;
Patchworks of laws that force compliance with the lowest common denominator;  
Restrictive state regimes that dictate AI policy at the expense of America’s innovation and global leadership (i.e., “domination”); and
Initiatives that are not aligned with a uniform national AI policy framework.

The draft EO outlines an evaluation process of state laws to be conducted by the Secretary of Commerce, the Special Advisor for AI and Crypto and others. Reports would then be published that address “onerous” state laws — those requiring AI models to alter truthful outputs or compelling AI developers or deployers to disclose information in violation of free speech rights.      
The draft EO also outlines a process by which the Federal Trade Commission would explain the circumstances under which state laws are preempted by the FTC Act’s prohibition on engaging in deceptive acts or practices affecting commerce.
As with prior executive orders, the draft EO states that federal funding could be withheld for continuing to effectuate or enforce a state law that is deemed noncompliant.
New Executive Order Launches Genesis Mission with Cybersecurity Mandates
Notably, the Genesis Mission EO places a strong emphasis on security standards. The EO directs the Secretary of Energy, in operating the platform, to meet security requirements consistent with its national security and competitiveness mission, including supply chain security and federal cybersecurity standards and best practices.
The EO also sets strict data access, cybersecurity and governance requirements for datasets, models and computing environments used in collaboration with nongovernment and private sector organizations, including measures requiring compliance with classification, privacy and export-control requirements.    
That security baseline matters. The EO aims to train scientific foundation models and AI agents while aligning American scientists and businesses — efforts that demand incident response planning, risk assessments and robust security programs.   
Draft and New EOs Reflect Growing Threat of Agentic AI
Organizations analyzing AI’s impacts have recently noted a surge in AI-enabled incidents. A recent cyberthreat snapshot by the House Committee on Homeland Security reported that one in six data breaches in 2025 involved AI-driven cyberattacks. A database tracking and detailing AI-enabled incidents across multiple industries and jurisdictions is up to incident No. 1275.  
In its annual threat-hunting report, CrowdStrike revealed that AI-powered social engineering attacks through voice phishing are likely to double by year-end. CrowdStrike also reported that 320-plus organizations were infiltrated by a single AI-enabled threat actor this year, doubling last year’s total.    
These figures only tell part of the story about what’s coming. Two weeks ago, the Wall Street Journal reported how hackers from China jailbroke the AI frontier model Claude and were able to automate 80-90% of multiple, multi-stage cyberattacks. As a result, 30 organizations were subjected to a wide variety of attacks.
These examples illustrate the Institute for AI Policy’s recent conclusion that “[a]s AI systems become more powerful and widespread, the probability of crisis scenarios increases while the complexity of required responses grows.” A new era of attacks by Agentic AI is also imminent, including permission escalation, hallucination or memory manipulation attacks and multi-agent attacks.
Action Steps for Aligning with Emerging Federal AI Standards
As the White House signals a more centralized approach to AI oversight, organizations should take stock now — especially those operating in states with active AI laws like California and Colorado. We have previously provided recommendations that organizations should consider to mitigate risks associated with AI, both holistic and specific, emphasizing data collection practices.
Additional steps to consider include:

Review existing AI governance policies for alignment with emerging federal standards;
Prepare for potential preemption challenges to the California and Colorado laws;
Monitor the Genesis Mission security standards as they evolve; and
Consider how your organization, including its insurance coverage, might be required to adapt.

Getty Images (US) Inc (and others) v Stability AI Limited. Input: Getty Images v Stability AI. Output: Continued Uncertainty.

On 4 November 2025 the UK High Court handed down its judgment in the case of Getty Images (US) Inc (and others) v Stability AI Limited [2025] EWHC 2863 (Ch).
The case concerned the training, development and deployment of Stability AI’s “Stable Diffusion” generative AI model. As one of the first and, to date, most high-profile intellectual property (IP) infringement claims against an AI developer to make it all the way to trial in the UK courts, it was originally envisaged as having the potential to provide much-needed, wide-ranging judicial guidance on the application of existing UK IP law in the field of AI. However, as the case progressed and the scope of Getty’s claims gradually reduced to a shadow of the original, it became apparent that this judgment, whilst still of note in respect of a number of key issues, would not be the silver bullet that many had originally anticipated.
At over 200 pages the judgment is long and complex, including detailed discussion of the witness and expert evidence which the Court considered before reaching its findings.
Key takeaways at a high-level are:

Primary Copyright Infringement: by close of evidence at the trial Getty had abandoned its key claims alleging that the training of Stability AI’s “Stable Diffusion” AI model and certain of its outputs had infringed Getty UK copyright works and/or database rights. Key reasons for this decision on the part of Getty would appear to be relatively limited evidence of actual potentially infringing output coupled with an acknowledgement that the development and training of Stable Diffusion had not taken place in the UK despite Getty’s claim being in respect of its UK rights. As a result, the Court declined to rule on these claims – meaning that the current legal uncertainty in this area continues, to the frustration of many.
Secondary Copyright Infringement: for the purposes of establishing a secondary copyright infringement claim, the Court has confirmed that the model weights used in the training of an AI model can be considered “articles” for the purposes of the relevant legislation. However, in this case the Court went on to find that those model weights did not themselves contain any Getty copyright works and so could not be considered an infringing copy. Whilst this is helpful guidance, it has long been accepted that references to an “article” in the relevant legislation cover both tangible and intangible items, hence this was an unsurprising decision for the Court to reach.
Trade Mark Infringement: the Court held that there was some limited evidence that certain earlier versions of Stable Diffusion had produced outputs which included a “Getty Images” UK registered trade mark as a watermark thereby infringing Getty’s registered trade mark. However, the Court emphasised that this finding is both “historic and extremely limited in scope” and that as a result of changes which had been made to later versions of Stable Diffusion, it could not hold that there was likely to be any continuing proliferation of such infringing output from Stable Diffusion.
Exclusive Licensee Claims: the Court has confirmed that when bringing a copyright infringement claim in the capacity of an exclusive licensee (as opposed to copyright owner) the Court will consider in detail whether the licences in question meet the relevant legislative definition of an “exclusive licence”. As such, it must be a completely exclusive licence, excluding even the rights of the copyright owner themselves, granted to only one licensee and it must be signed. That said, the Court will be willing to apply a very broad definition when deciding whether those have been “signed” (e.g. not just wet ink but includes online acceptance). Again, whilst useful guidance, this essentially just confirms the already accepted interpretation of this legislation.
Additional Damages Claim: as a result of the very limited findings of trade mark infringement on the part of Stability AI, alongside the abandonment of the primary copyright infringement claims and the failure of the secondary copyright infringement claim, the Court rejected Getty’s claims for additional/aggravated damages. The Judge noted that she could not hold that there had been any blatant and widespread infringement of UK IP rights by Stability AI, as Getty had claimed, which would have justified an award of such damages.

For a more detailed summary and analysis of the case and each of these issues please see our full client briefing.

STOP Me If You’ve Heard This: FCC Says Any Reasonable SMS Opt-Out Counts

Retailers face a new compliance risk in Short Message Service (“SMS”) marketing due to the Federal Communications Commission’s (“FCC”) revocation of consent rule that took effect in April 2025. Under the FCC’s rules implementing the Telephone Consumer Protection Act (“TCPA”), texts are treated as calls and carry statutory damages of at least $500 per message. The FCC now prohibits businesses from specifying an exclusive means for consumers to opt out of receiving messages and instead states that businesses must honor opt outs communicated through any reasonable channel, which increases the chance of missed revocations and invites lawsuits.
The TCPA Framework
The TCPA is a federal telemarketing statute that regulates consumer outreach across several dimensions—including calling time restrictions, dialing technology, caller identification, internal do-not-call procedures, prohibitions relating to certain called parties, and consent. For years, text messages have been treated as calls under the FCC’s rules implementing the TCPA, which meant marketing texts must satisfy the statute’s consent and do-not-call requirements. The statutory damages framework can be severe. Violations can lead to $500 per message in statutory damages, or $1,500 per message if the violation is willful or knowing. This bounty scheme has encouraged class action filings.
A core feature of the statute and the rules is the right of consumers to revoke consent, which businesses must honor by ceasing further outreach. To make revocation manageable at scale, many programs historically prescribed standardized opt out pathways, such as clearly instructing recipients to reply “STOP” or specifying opt out mechanisms in terms or program disclosures, so consumers have predictable steps and businesses can operationalize suppression reliably.
The New FCC Revocation Rule
Effective April 2025, the FCC adopted a revocation of consent rule that expands how consumers may opt out. Under the rule, a consumer will be deemed to have revoked consent if they reply to a text with “stop,” “quit,” “end,” “revoke,” “opt out,” “cancel,” or “unsubscribe,” or a substantially similar standard response. However, the rule prohibits businesses from designating an exclusive opt out method; instead, the rule requires honoring revocations expressed in any reasonable manner. For reply texts that use alternative or nonstandard language, the reasonableness of the revocation is assessed under a totality of the circumstances test. If a dispute arises, the sender may bear the burden of showing that the consumer’s phrasing was not a reasonable method of revocation.
The FCC also codified a limited ability to send a one-time confirmation or clarification message in response to an opt out. That message must be nonmarketing, sent promptly within five minutes, and must not include persuasion or promotional content. Programs that do not support reply texts face additional disclosure obligations. If a program cannot process two-way texting, each message must clearly disclose that limitation and provide reasonable alternative methods to revoke consent, such as a phone number or a URL.
Operational Challenges and Why Risk Is Rising
The rule’s rejection of exclusive, standardized opt out pathways introduces significant complexity for SMS programs. Retailers typically rely on keyword lists and platform recognition logic to capture STOP style replies and to suppress further sends. Under the new rules, programs must also be prepared to process nonstandard revocations expressed in natural language. Phrases such as “no more texts,” “remove me,” and other formulations that depart from the canonical keywords may still constitute valid revocations depending on the circumstances. The sender bears the burden to prove that an alternative phrase was not a reasonable method of revocation.
The FCC’s allocation of responsibility invites opportunistic litigation. Plaintiffs may ignore explicit reply STOP instructions and instead send deliberately nonconforming messages that some platforms will not recognize, then allege that messaging continued after a valid revocation. Indeed, complaints filed in court already reflect this tactic. In a recent complaint in federal court in Georgia, for example, the plaintiff enrolled in an SMS program and received an immediate welcome text which instructed him to reply STOP to cancel. Less than one minute later, and despite the clear opt-out instructions, the plaintiff responded, “Please cease!” The company’s platform did not process the nonconforming phrasing as an opt out, and the company unknowingly continued to send messages to the plaintiff. The plaintiff allowed a significant volume of messages to accrue and then sued.
Litigation Posture and Areas of Uncertainty
As we previously discussed, the Supreme Court recently made clear that district courts adjudicating private TCPA cases are not required to follow the FCC’s interpretations. Courts should apply ordinary statutory interpretation principles and afford appropriate respect to the agency’s views without treating them as controlling. This posture creates two threshold questions that are likely to drive litigation strategy over the coming months and years. One question is whether the FCC had authority to limit how companies manage and process revocation and to impose a reasonable means standard with per se triggers and shifting burdens. Another question is whether, and to what extent, the FCC may regulate text messages as calls under the TCPA in the first place.
Until these questions are answered, businesses should anticipate that enterprising plaintiffs will test the limits of what kinds of opt out requests are “reasonable.” Companies should plan to defend claims under the uncertainty created by the FCC rule, while preserving arguments regarding the limits of agency authority and the proper construction of the statute.
Practical Implementation for Retail SMS Programs
Retailers should take the opportunity to reduce the number of litigable events and to strengthen their evidentiary position if they are sued. A practical starting point is to expand keyword libraries and routing logic beyond the FCC’s enumerated terms. Programs should add synonyms and natural language variants commonly used by their customers to disengage. Where available, platforms should be tuned to use natural language processing and artificial intelligence to detect negative intent and to escalate ambiguous replies for human review. These models should be tested and calibrated using real message data to minimize false negatives that would allow continued messaging after a valid revocation.
Businesses should also consider regular, manual review of nonstandard replies that are not recognized by automation. Confirmation messaging, when used, should be tightened. The one-time confirmation text should be sent promptly and include no marketing content. If a consumer sends a nonconforming reply, like a “Please cease!” message, the company should send a short message that asks to clarify the consumer’s request. If the consumer does not respond to clarify with a designated keyword within a short time, then the consumer’s non-response should be treated as an opt out.
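As a rough illustration of this layered approach, the Python sketch below checks inbound replies against the FCC’s enumerated keywords, then against a small set of natural-language variants, and escalates anything ambiguous for human review. The phrase lists, patterns, and function names are assumptions for illustration only; real programs should tune them against actual customer message data and platform capabilities.

# Illustrative opt-out handler: exact keywords first, then simple
# negative-intent phrases, then escalation of ambiguous replies for human
# review. Keyword and phrase lists are illustrative, not exhaustive.
import re

FCC_KEYWORDS = {"stop", "quit", "end", "revoke", "opt out", "cancel", "unsubscribe"}

# Natural-language variants commonly used to disengage (illustrative only).
NEGATIVE_INTENT_PATTERNS = [
    r"\bno more (texts|messages)\b",
    r"\bremove me\b",
    r"\b(please )?cease\b",
    r"\bdo not (text|contact|message) me\b",
    r"\bleave me alone\b",
]

def classify_reply(text: str) -> str:
    """Return 'opt_out', 'review', or 'keep' for an inbound reply."""
    normalized = re.sub(r"[^a-z ]", "", text.lower()).strip()
    if normalized in FCC_KEYWORDS:
        return "opt_out"        # per se revocation keywords
    if any(re.search(p, normalized) for p in NEGATIVE_INTENT_PATTERNS):
        return "opt_out"        # nonstandard but clearly negative phrasing
    if any(word in normalized for word in ("stop", "cancel", "unsubscribe", "cease")):
        return "review"         # ambiguous; escalate for human review
    return "keep"

# A nonconforming reply like "Please cease!" is suppressed, not silently ignored.
assert classify_reply("Please cease!") == "opt_out"
assert classify_reply("STOP") == "opt_out"

The design point worth noting is the middle tier: replies that are neither exact keywords nor clearly negative are routed to review rather than dropped, which is where missed revocations tend to occur.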
Conclusion
For retailers, the FCC’s rule makes honoring consumers’ requests to stop receiving messages more difficult and invites opportunistic litigation. Future developments in case law may eventually narrow or recalibrate aspects of the revocation rule, but uncertainty is not a defense to current claims. A pragmatic, well documented compliance posture is the best way to navigate the interim period and to minimize risk.

Auditing Artificial Intelligence Systems for Bias in Employment Decision-Making

Employers are increasingly deploying artificial intelligence (AI) across the talent life cycle, from candidate sourcing and ranking, to pre-hire assessments, to onboarding, performance reviews, development, promotions, and retention.
While these tools promise efficiency and consistency, they also introduce heightened legal risk if they produce biased outcomes. Many states, localities, and international jurisdictions require proactive AI auditing, transparency, and governance to ensure that AI-enabled employment decisions are fair, compliant, and defensible.

Quick Hits

Jurisdictions such as California, New York City, Colorado, Illinois, and the European Union (EU) variously require (or plan to require) and encourage bias testing, notices, transparency, and, in some cases, public summaries.
AI involvement can create substantial legal risk, even when humans make the final decisions; AI-influenced scores, rankings, or screens can still be treated by regulatory authorities as decision-making, triggering validation, bias-testing, notice, and transparency duties—with “cutoff” uses increasing regulatory scrutiny.
Legally privileged bias audits can anchor AI governance efforts by channeling audits through legal counsel, maintaining an inventory and classification of tools, setting clear policies and vendor obligations, conducting ongoing monitoring and remediation, and preserving records supporting job-relatedness, business necessity, and “less-discriminatory alternatives” analyses.

Background
AI and algorithmic tools now permeate modern workforce management, touching everything from applicant recruitment and resume screening to onboarding, performance reviews, development, promotions, scheduling, and retention. The legal environment surrounding these systems is expanding rapidly and unevenly, with jurisdictions such as California, New York City, Colorado, Illinois, and the EU adopting differing approaches that range from binding requirements to soft-law guidance. Importantly, liability can arise even when a person signs off on the outcome: regulators and courts may view AI-generated rankings, scores, or screens as part of the employment decision itself, while the use of rigid thresholds or “cutoffs” can invite heightened scrutiny.
Against this backdrop, the regulatory picture is still taking shape: a patchwork of municipal, state, federal, and global measures that differ in scope, definitions, and timing. Emerging frameworks impose varied governance, transparency, and recordkeeping obligations, while existing antidiscrimination, privacy, and consumer reporting laws continue to supply enforcement hooks. Agencies are issuing guidance and bringing early cases, while private plaintiffs are testing theories that treat algorithmic inputs as part of employment decisions, even when human review is involved. Penalties and remedies range from administrative fines to mandated disclosures and restrictions on use, with some regimes claiming extraterritorial reach and short transition periods, creating real uncertainty for multistate and global employers.
Scope and Focus of Auditing
Anti-bias auditing begins by examining whether the tool’s results differ for protected groups at each stage of the process, for example in resume scores, rankings, who receives interviews, who passes assessments, and who ultimately gets hired. Statistical findings that suggest adverse impact are a warning light, not the finish line. From there, important steps include examining the training and reference data, features that might act as stand-ins or “proxies” for protected traits, how features are built, the score cutoffs applied, any settings that vary by location or role, and how recruiters and managers actually use the tool’s output. If adverse impact is present, the next step is assessing business necessity and whether less discriminatory ways exist to achieve the same goal, considering specific fixes such as adjusting thresholds, swapping or removing features, training to improve use or consistency, or changing when or how the tool is used.
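As a rough illustration of the first step, the sketch below computes stage-level selection rates and compares each group to the highest-rate group, using the four-fifths rule as a screening heuristic. The group labels and counts are hypothetical, and a low ratio is a prompt for further analysis, not a legal conclusion.

```python
def selection_rate(selected: int, total: int) -> float:
    """Share of a group that advances at a given stage."""
    return selected / total if total else 0.0

def impact_ratios(stage_data: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group's rate."""
    rates = {g: selection_rate(sel, tot) for g, (sel, tot) in stage_data.items()}
    benchmark = max(rates.values())
    return {g: round(r / benchmark, 2) for g, r in rates.items()}

# Hypothetical counts of (selected, applicants) at the interview stage.
interviews = {"group_a": (120, 400), "group_b": (60, 300)}
print(impact_ratios(interviews))   # {'group_a': 1.0, 'group_b': 0.67}
# A ratio below roughly 0.80 is a warning light that warrants closer review.
```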
Effectiveness auditing assesses whether an AI tool actually enhances decision-making in your specific context. It tests whether the system performs as advertised, outperforms your current process, and behaves consistently across roles, teams, sites, and time. Practically speaking, that means benchmarking model outputs against structured human evaluations, checking post-decision outcomes (such as performance, retention, quality and error rates, and downstream corrective actions), and validating that the factors driving recommendations are job-related and predictive of success.
Effectiveness is inseparable from fairness. A tool that is fast or efficient but unevenly accurate across protected groups—or that relies on features correlated with protected traits—can create legal and operational risks. Evaluating accuracy, stability, and business impact, together with adverse-impact metrics, ensures that “better” does not simply mean “cheaper or quicker” and helps surface situations where apparent gains are driven by biased error patterns. In short, effectiveness auditing assesses whether a tool works, for whom it works, and whether it works for the right job‑related reasons.
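The sketch below illustrates one such check, comparing the tool’s error rate across groups against later outcomes; the record structure and the tool_recommend and successful_hire field names are hypothetical stand-ins for whatever post-decision data an employer actually tracks.

```python
from collections import defaultdict

def error_rates_by_group(records: list[dict]) -> dict[str, float]:
    """Share of records per group where the tool's recommendation and the outcome disagree."""
    errors, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["tool_recommend"] != r["successful_hire"]:
            errors[r["group"]] += 1
    return {g: round(errors[g] / totals[g], 2) for g in totals}

# Hypothetical post-decision data.
sample = [
    {"group": "group_a", "tool_recommend": True,  "successful_hire": True},
    {"group": "group_a", "tool_recommend": False, "successful_hire": False},
    {"group": "group_b", "tool_recommend": True,  "successful_hire": False},
    {"group": "group_b", "tool_recommend": False, "successful_hire": False},
]
print(error_rates_by_group(sample))   # uneven error rates can signal that apparent gains are biased
```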
Best Practice Considerations
An effective AI governance program brings together the applicable stakeholders for each deployment, with legal, HR, and IT at the core. Legal ensures regulatory alignment, privilege where appropriate, defensible documentation, and coordination across antidiscrimination, privacy, and consumer-reporting regimes. HR anchors job-relatedness, consistent application across roles and locations, and integration with established hiring and performance practices. IT is responsible for system architecture, security, access controls, data quality, monitoring, and change management. Around this core, additional contributors can be included as the use case demands.
Leading With Privilege
A foundational best practice involves starting every significant evaluation, audit, and testing effort with counsel. That means legal scopes the questions, engages the right experts, and directs the work so the analysis is covered by attorney-client privilege and/or work product protections. Following completion of the privileged assessment and agreement on corrective actions, nonprivileged regulatory summaries or disclosures can be prepared as a separate project. This preserves privilege over the analysis while ensuring timely compliance with applicable notice and transparency obligations.
Knowing Your Tools
Most organizations rely on more AI and algorithmic tools than they recognize, so it is sound practice to maintain a living inventory across the talent life cycle—including sourcing databases, resume screens, rankings, assessments, automated interviews, predictive models, and HR analytics—and to keep that inventory current through defined change-management triggers so it can support meaningful oversight within the governance program.
For each tool, consider recording the core facts of use, impact, data, and ownership, and relying on a single inventory as the backbone for audits, governance, vendor oversight, incident response, and regulatory disclosures.
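One possible shape for such an inventory record is sketched below; the fields are illustrative rather than a regulatory checklist, and the AIToolRecord name is invented for this example.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolRecord:
    name: str                      # e.g., "Resume screening model"
    vendor: str
    lifecycle_stage: str           # sourcing, screening, assessment, promotion, etc.
    decision_role: str             # advisory, ranking, cutoff/screen-out
    data_inputs: list[str]         # categories of personal data consumed
    business_owner: str            # accountable HR or business contact
    last_bias_audit: date | None = None
    change_log: list[str] = field(default_factory=list)   # change-management triggers

# A single inventory like this can back audits, governance, vendor oversight,
# incident response, and regulatory disclosures.
inventory = [
    AIToolRecord(
        name="Resume screening model", vendor="ExampleVendor",
        lifecycle_stage="screening", decision_role="ranking",
        data_inputs=["resume text", "application answers"],
        business_owner="Talent Acquisition",
    )
]
```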
Setting Clear Rules
Setting clear rules means writing down and enforcing plain-language policies for the use of AI. That includes providing for notice (and consent where applicable), meaningful human review and appeal, data minimization and retention, and security, and making sure vendor contracts protect the organization against legal risk by addressing audit rights, data access, security parameters, explainability documentation, and remediation obligations if problems are found.
Monitoring and Fixing
Effective systems risk management may require ongoing monitoring at set intervals, with some laws mandating periodic reviews. Consider defining clear thresholds and triggers for when to retest and remediate, and preparing a response playbook covering feature or cutoff changes, deployment adjustments, reviewer retraining, and vendor recalibration. Legal may want to continue managing the process so that analytic iterations and corrective actions remain under privilege.
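By way of example, retest-and-remediate triggers might be captured in a simple policy object like the sketch below; the thresholds, cadence, and trigger list are placeholders that legal, HR, and IT would set for each tool.

```python
MONITORING_POLICY = {
    "review_interval_days": 180,          # periodic review cadence (placeholder)
    "impact_ratio_floor": 0.80,           # retest if any group falls below this
    "drift_alert_threshold": 0.05,        # change in selection rate vs. baseline
    "qualitative_triggers": [
        "model or vendor version change",
        "new job family or location added",
        "material change in applicant pool",
    ],
}

def needs_retest(days_since_review: int, min_impact_ratio: float, drift: float) -> bool:
    """Flag a tool for retesting when any quantitative threshold is crossed."""
    return (
        days_since_review >= MONITORING_POLICY["review_interval_days"]
        or min_impact_ratio < MONITORING_POLICY["impact_ratio_floor"]
        or drift > MONITORING_POLICY["drift_alert_threshold"]
    )

print(needs_retest(days_since_review=90, min_impact_ratio=0.72, drift=0.01))  # -> True
```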
Documenting to Defend
It is critical to keep contemporaneous records that demonstrate job-relatedness, business necessity, and, where adverse impact appears, any evaluation of less discriminatory alternatives, and to preserve the who/what/why of human reviews (including reasons for following or overriding AI outputs) along with clear, plain-language explanations of how each tool works. Robust, contemporaneous documentation can significantly enhance a program’s defensibility in regulatory inquiries and litigation.
Next Steps
Auditing AI for bias in employment decision-making is now a critical part of AI governance and risk management. Employers that implement privileged audits, robust governance, and continuous monitoring paired with transparency, human-in-the-loop controls, and disciplined documentation can harness AI’s benefits while reducing the risk of discriminatory outcomes and regulatory exposure.

AI Meeting Tools: Asset or Exhibit A?

How Legal and Compliance Can Shape Governance, Retention, and Risk Mitigation
Artificial intelligence (AI)-powered meeting tools are being adopted into the workplace at unprecedented speed. Platforms such as Microsoft Teams, Zoom, and Webex now offer features that automatically record, transcribe, and summarize videoconference meetings — often in real time. It’s easy to see the appeal. These capabilities promise greater efficiency, searchable records, and reduced administrative effort. 
For legal, HR and compliance functions, however, these same tools raise fundamental questions about data management, privilege, accuracy, and workplace behavior. Without the right governance, they can undermine litigation strategy, erode confidentiality protections, and alter how employees engage in sensitive discussions. 
The pace of adoption compounds these risks. Rollouts are often driven by IT or business units, with legal brought in only after use has begun. That reactive position is especially problematic when meeting content is highly sensitive and discoverable in litigation. What might seem like a harmless transcript of a performance review, workplace investigation, or union strategy can quickly become a piece of evidence. 
The key to safe deployment is to identify where and how AI meeting tools introduce legal exposure and establish considered, practical controls before they become embedded in day-to-day operations. The sections below outline the primary risk areas and safeguards in-house counsel should address.
Key Risk Areas
Permanent Business Records and Retention Challenges
AI-generated transcripts, summaries, and recordings can be deemed official business records under company policy and applicable law. As such, they may be subject to preservation obligations for litigation holds or regulatory investigations, often for years. This can significantly increase storage costs and, more importantly, keep sensitive conversations alive long past when they should have been deleted. Failing to preserve or mismanaging deletion can trigger spoliation claims or regulatory sanctions.
Privilege and Confidentiality Risk
Recording attorney-client conversations, HR deliberations, or internal audits can inadvertently waive privilege protections, particularly if outputs are shared with or stored by a third party. Many AI vendors store data in vendor-controlled infrastructure, and standard contractual terms may not recognize legal privilege or work-product protections. Further, vendors often reserve rights to use client data to train AI models, increasing the risk of exposing confidential strategy, legal advice, and personnel information to unintended audiences.
Accuracy and Reliability Concerns
Automated transcription and summarizing tools lack human judgment and are subject to error. These tools can misidentify speakers, confuse similar-sounding names, omit acronyms or technical terms, or misinterpret back-and-forth exchanges when multiple people speak at once. They may also capture side comments, background discussion, or incomplete thoughts that were never intended to be part of the record or subject to external scrutiny. In disputes, regulators or opposing parties may treat AI-generated records as authoritative over formal meeting minutes, raising credibility questions and making inaccuracies difficult to correct once discovered.
Chilling Effect on Discussions
Disclosure or awareness of active recording and transcription can alter meeting dynamics. Employees may avoid raising issues, sanitize their remarks, or delay escalation of problems for fear of being “on the record.” This chilling effect can hinder proactive issue resolution, reduce candor in discussions, and ultimately affect governance.
Data Governance and Vendor Control
Outputs from AI meeting tools are commonly stored and processed by vendors, often in jurisdictions with differing privacy laws. Vendor systems may follow alternative security protocols and encryption standards that do not align with organizational requirements. Without robust contractual provisions, companies may be unable to prevent secondary use, including AI model training, or to control disclosure of sensitive content. Attendance in externally hosted meetings with active AI tools further increases exposure, as content may be recorded, stored, and disseminated outside your governance framework — and thus, beyond your control.
Practical Considerations and Safeguards
Define Clear Usage Boundaries
Implement clear guidance for when AI meeting tools may be used. Prohibit recording or transcription in meetings involving counsel, HR investigations, internal audits, or sensitive strategic discussions. Consider including guidelines that require advance disclosure to participants before any AI tool is activated, ensuring consent and awareness.
Require Human Review Before Circulation
Develop procedures to disable automatic circulation of raw AI transcripts or summaries. Establish a human review process to verify accuracy, remove informal comments or sensitive language, and ensure alignment with the organization’s preferred tone. Clearly label reviewed records as “official,” note where AI-generated outputs are being used, and make clear that AI outputs are supplementary, not authoritative.
Update Retention and Legal Hold Processes
Integrate AI-generated outputs into existing data retention schedules, legal hold processes, and deletion protocols. Limit access to recordings and transcripts to authorized personnel only. Consider employing encryption and other security measures to protect stored data.
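A simplified sketch of how such a retention schedule and legal-hold override might be expressed appears below; the record categories, retention periods, and the is_deletable helper are assumptions chosen for illustration, not recommended retention periods.

```python
from datetime import date, timedelta

# Hypothetical retention periods for AI meeting outputs.
RETENTION_SCHEDULE = {
    "meeting_transcript": timedelta(days=90),
    "ai_summary": timedelta(days=90),
    "meeting_recording": timedelta(days=30),
}

def is_deletable(record_type: str, created: date, on_legal_hold: bool, today: date) -> bool:
    """A record is deletable only after its retention period and when no legal hold applies."""
    if on_legal_hold:
        return False
    return today >= created + RETENTION_SCHEDULE[record_type]

print(is_deletable("meeting_recording", date(2025, 1, 1), False, date(2025, 3, 1)))  # -> True
print(is_deletable("meeting_recording", date(2025, 1, 1), True, date(2025, 3, 1)))   # -> False
```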
Strengthen Vendor Contractual Safeguards
Conduct due diligence before adopting or expanding AI meeting solutions. Contracts should confirm data ownership, require secure deletion upon termination, and require notice of any data breach or disclosure request. Validate that vendor security practices meet relevant legal and regulatory standards. Also, consider banning any secondary use for AI training.
Employee Education and Training
Awareness is critical to mitigating misuse and risk. Train employees on proper use of AI tools, the legal implications of recorded conversations, and the importance of professionalism in meetings subject to transcription. Encourage escalation of any concerns about unauthorized recordings. Make AI policies easily accessible to employees and update them as AI technologies evolve.
Pilot Before Wide Rollout
Test AI meeting tools in low-risk environments first, so potential issues can be spotted before the technology is deployed company-wide. Legal, compliance, privacy, and HR should be part of the evaluation team from the outset.
The expansion of AI meeting tools into daily operations demands active oversight. Compliance and legal should set the framework for how AI-generated content is handled, ensuring accuracy, consistency, retention, and privilege are not compromised. Through clear usage policies, integrated retention processes, strong vendor terms, and regular training, companies can embrace AI capabilities and avoid unnecessary risk. 

Judge Leans Away From Breakup During Closing Arguments in Google Ad Tech Case

Earlier this year, Judge Leonie Brinkema of the U.S. District Court for the Eastern District of Virginia ruled that Google had unlawfully monopolized two key ad-tech markets: publisher ad servers and ad exchanges. However, she dismissed claims that Google monopolized advertiser ad networks and upheld the legality of Google’s prior acquisitions of AdMeld (in 2011) and DoubleClick (in 2008). The remedy phase concluded with closing arguments last Friday (November 21). Judge Brinkema is now tasked with evaluating both parties’ proposed remedies to determine the most effective path to restore competition in the affected markets.
The government is proposing a host of behavioral and structural changes, including divesting Google’s ad exchange and open-sourcing the publisher ad server. Google is proposing a package of behavioral remedies, including allowing publishers to contract separately with AdX and DFP, as well as enabling interoperability with Prebid, but stops short of breaking up its ad tech stack. Setting the question of divestiture aside, there is not much daylight between the two sets of behavioral proposals.
The government has already failed once in its bid to break up the search giant, with Judge Amit Mehta of the U.S. District Court for the District of Columbia rejecting its attempt to spin off Chrome. Judge Brinkema is similarly skeptical about the “commercial reality” of imposing structural remedies in the ad tech case. Unlike Judge Mehta, Judge Brinkema only had a couple of questions, all of them circling back to timing. She questioned how the benefits of a complex structural order and monitoring regime could function in light of what she views as an inevitable and lengthy appeal process and pressed the government on its failure to identify a plausible AdX buyer. She noted that any acquirer, particularly a giant like Microsoft, would face its own lengthy antitrust review. As she put it, “This still leaves us at a fairly abstract level, and the order needs to be far more down-to-earth and concrete.”
Department of Justice lawyers cast structural relief as a “cleaner, less risky” option that respects the limits of judicial oversight, warning that behavioral decrees would entangle the court in central planning while Google continues to “test boundaries.” Trying to address the court’s concerns, they downplayed the government’s own fifteen-year implementation timeline, emphasizing that a new AdX owner within roughly fifteen months would yield early competitive gains, and rejected the notion that AI-driven change will soon disrupt the markets.
Google countered that the law requires the court to start with less drastic measures and that divestiture has never been ordered in a tying case or two-sided digital market like this one. Google highlighted the significant commonalities between its own proposed behavioral remedies and those of the government. It contended that the government had not demonstrated the inadequacy of these measures nor established that a structural separation would be technically and commercially viable, beneficial to consumers, or appropriately tailored to the court’s findings of liability. Google pointed to evidence that rival exchanges already win 58% of U.S. open-web display impressions via Prebid, claimed its remedy package could be implemented in about a year, and warned that forced divestiture would create existential risks for small and medium publishers—like WikiHow—that rely on its integrated stack.
Another recurring theme was trust. The government framed Google as an inherently untrustworthy steward of the open web and cast doubt on the court’s ability to police behavioral relief. Google responded that this flips the relevant legal standard on its head: distrust alone is not a lawful basis for broad structural relief, a point Judge Brinkema herself has made in prior remedy proceedings and one that proved a fatal misstep in the Microsoft case.
Judge Brinkema said she will first decide whether structural plus behavioral relief is warranted or whether behavioral remedies alone will suffice, and then bring the parties together to narrow the gap between their proposals. Although the case is on the “rocket docket,” she cautioned that no decision should be expected until next year.