EPA Releases New TSCA and FIFRA Enforcement Policies
On January 17, 2025, days before the end of President Biden’s term, the U.S. Environmental Protection Agency (EPA) released two new enforcement documents: (1) Expedited Settlement Agreement (ESA) Pilot Program Under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) (FIFRA Settlement Pilot Program or Pilot Program); and (2) Interim Consolidated Enforcement Response and Penalty Policy (Interim CERPP) for the Toxic Substances Control Act (TSCA) New and Existing Chemicals Program. Given the timing of their release, their enduring relevance is unclear.
FIFRA Settlement Pilot Program
EPA states that the “purpose of this Pilot Program is to provide an additional enforcement tool that encourages resource prioritization and violation deterrence through expedited resolution of cases involving minor violations that are easily correctible and do not cause significant health or environmental harm.”
Although “easily correctible” is not defined, from a timing perspective it appears that EPA considers a violation easily correctible if FIFRA compliance can be achieved within 30 days. EPA may, however, “at its discretion, grant an extension for corrective action in limited circumstances upon submission of a written extension request detailing why achieving compliance within 30 calendar days of receipt of this letter is infeasible or impracticable.”
EPA also provides the following non-exhaustive “general parameters” that it considers when determining whether a case involves “minor violations” suitable for resolution under this Pilot Program:
The case involves domestically produced or imported pesticides or device products.
The case does not require EPA review and approval of registration changes, including but not limited to labeling changes.
The total proposed penalty should not exceed $24,000, with a penalty matrix provided at Attachment B.
The company is not a “repeat violator” (the Pilot Program discusses when a repeat violator may nonetheless be eligible, depending on the type of violation and when it occurred, and provides a hypothetical timeline for when an ESA may be permissible).
The case does not involve criminal or fraudulent behavior (e.g., intentionally falsifying information).
For additional clarity, EPA lists at Appendix A the violations that are eligible for resolution under this Pilot Program.
If EPA determines that a case qualifies under the Pilot Program, EPA has developed a template cover letter and final order that it can provide to the company as an opportunity to resolve the violations. Upon receipt, if the company does not respond within 30 calendar days, the ESA is automatically withdrawn. EPA states that an “adequate response” from the ESA recipient would include the following:
Returning a signed agreement;
Paying the full penalty per the ESA terms offered; and
Submitting a signed, certified statement that Respondent no longer engages in violative activities, that the violations have been corrected, or that lists the steps Respondent has or will take to prevent recurrence of the violation(s), as applicable.
If an ESA is withdrawn, EPA retains, without prejudice, its ability to file any other enforcement action for the cited violation(s) and to seek up to the statutory maximum penalty for each violation.
Interim CERPP
EPA states that the Interim CERPP guidance is intended to help ensure that enforcement actions and the assessment of civil administrative penalties are “appropriate, nationally consistent and promote compliance among TSCA-regulated entities.” The CERPP contains the following Parts:
CERPP Part One provides cross-cutting background information: Introduction; TSCA Legal Background; Enforcement Response Options; and Regulatory Responses.
CERPP Part Two provides a cross-cutting overview of the process for determining penalties: Introduction; General Principles; Steps in Computing Civil Penalties; Factors as to Violation; and Factors as to Violator.
CERPP Part Three encompasses Modules for computing Gravity-based Penalties for specific Core TSCA programs, including: Module A for the Section 6(a) Rules.
CERPP Part Four presents the cross-cutting Gravity-based Penalty Matrix, which states the “initial” (per violation) Gravity-based Penalty dollar amount applicable in all Core TSCA programs.
CERPP Part Five provides cross-cutting guidance for adjusting (or remitting) the Gravity-based Penalty to derive the final civil administrative penalty in a case: Overview; Preliminary Information; Ability to Pay/Continue in Business; Prior Violation; Culpability; Other Matters Justice May Require; and Penalty Remittance.
Currently, there are several enforcement response policies (ERP) applicable to different statutory violations under TSCA:
Guidelines for Assessment of Civil Penalties Under Section 16 of the Toxic Substances Control Act, 45 Fed. Reg. 59770 (Sept. 10, 1980) (TSCA Penalty Policy), https://www.epa.gov/sites/default/files/documents/tscapen.pdf.
Final TSCA GLP Enforcement Response Policy, https://www.epa.gov/enforcement/final-tsca-glp-enforcement-response-policy.
Enforcement Response Policy for TSCA Section 4 Test Rules, https://www.epa.gov/enforcement/enforcement-response-policy-tsca-section-4-test-rules.
Amended TSCA Section 5 Enforcement Response Policy, https://www.epa.gov/enforcement/amendment-tsca-section-5-enforcement-response-policy-penalty-limit-untimely-noc.
Issuance of Revised Enforcement Response Policy for TSCA Sections 8, 12 & 13, https://www.epa.gov/enforcement/issuance-revised-enforcement-response-policy-tsca-sections-812-13.
EPA intends to consolidate and update these TSCA ERPs with this CERPP, but states that “until a module for a specific Core TSCA program is added to the CERPP, use the current ERP for that Core TSCA program.” When a module is available for a particular TSCA provision, the CERPP will be immediately effective and supersede any prior TSCA ERP.
At this time, the Interim CERPP appears to address only penalties for TSCA Section 6(a) violations, as described under “Module A,” and for which there is no current ERP counterpart.
As with all ERPs, the initial gravity-based penalty is determined based on three factors: nature, circumstances, and extent. Since there is no existing ERP for Section 6, the program-specific Module A provides EPA’s guidance for how it will consider Section 6(a) violations:
Nature: The Nature classification for all Section 6(a) violations other than recordkeeping violations is Chemical Control (CC). The Nature classification for recordkeeping violations under Section 6(a) is Control-associated Data-Gathering (CADG).
Circumstance: The Circumstance Level depends on the type of requirement that was violated, and EPA provides a chart to explain the “high,” “medium,” and “low” range Circumstance Levels.
Extent: The Extent Level Matrix establishes three classifications: Major, Significant, or Minor. For Section 6 violations, EPA describes how a violation will be classified based on two factors relevant to the unreasonable risks from Section 6(a) chemicals: (1) Potential Injury, meaning “the scope of the violation in relation to the potential injury from noncompliance;” and (2) Potentially Impacted Entity (PIE), meaning “the population or environment that could be subject to the potential injury from the violation.” The Interim CERPP provides additional guidance explaining how these classifications will be derived.
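The structure described above — a Circumstance Level crossed with an Extent classification to yield an initial gravity-based penalty — can be illustrated with a minimal sketch. The matrix layout follows the CERPP's described structure, but the dollar amounts below are invented placeholders for illustration only, not figures from the Interim CERPP:

```python
# Hypothetical gravity-based penalty lookup in the style described by the
# Interim CERPP: Circumstance Level ("high"/"medium"/"low") crossed with
# Extent classification ("Major"/"Significant"/"Minor").
# NOTE: dollar amounts are illustrative placeholders, not CERPP figures.
PENALTY_MATRIX = {
    ("high", "Major"): 37_500,   ("high", "Significant"): 28_000,   ("high", "Minor"): 18_500,
    ("medium", "Major"): 28_000, ("medium", "Significant"): 18_500, ("medium", "Minor"): 9_500,
    ("low", "Major"): 18_500,    ("low", "Significant"): 9_500,     ("low", "Minor"): 3_500,
}

def initial_gravity_penalty(circumstance: str, extent: str) -> int:
    """Return the initial per-violation gravity-based penalty."""
    return PENALTY_MATRIX[(circumstance, extent)]

# e.g., a high-Circumstance, Major-Extent violation under this sketch:
print(initial_gravity_penalty("high", "Major"))  # 37500
```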
Commentary
EPA states that in issuing this FIFRA Settlement Pilot Program, its intent is to “decrease transaction costs and achieve speedy compliance.” This would seem to align with the Trump Administration’s recent Executive Order (EO) titled “Ensuring Lawful Governance and Implementing the President’s ‘Department of Government Efficiency’ Deregulatory Initiative,” which instructs agencies to “preserve their limited enforcement resources by generally de-prioritizing actions to enforce regulations that are based on anything other than the best reading of a statute” and “direct the termination of all such enforcement proceedings that do not comply with the Constitution, laws, or Administration policy.” The list of violations that are eligible for ESAs is limited, however. The Pilot Program is expected to be available for three years, so it will be interesting to see whether and how it is implemented.
While the Pilot Program is fairly narrow, the Interim CERPP has the potential for much broader applicability. Although the current Interim CERPP has only a Module focused on Section 6(a), for which there is no current ERP, EPA has stated its interest in developing Modules for other TSCA Sections, which will then supersede the existing ERPs. The existing ERPs have many parallels and similarities, but they also address distinct differences depending on the TSCA sections at issue. It is thus likely that one consolidated CERPP will affect how penalties are calculated for all TSCA violations, and stakeholders should be prepared to review carefully any additional Modules released.
Because both of these enforcement documents were prepared by the prior Administration, their enduring relevance, like so many other issues at EPA, is unclear. As new leadership populates the ranks at EPA program offices, including the Office of Enforcement and Compliance Assurance, we may learn more.
Key Considerations Before Negotiating Healthcare AI Vendor Contracts
The integration of artificial intelligence (AI) tools in healthcare is revolutionizing the industry, bringing efficiencies to the practice of medicine and benefits to patients. However, negotiating contracts for third-party AI tools requires a nuanced understanding of the tool’s application, implementation, risks, and contractual pressure points. Before entering the negotiation room, consider the following key insights:
I. The Expanding Role of AI in Healthcare
AI’s role in healthcare is rapidly expanding, offering a wide range of applications including real-time patient monitoring, streamlined clinical note-taking, evidence-based treatment recommendations, and population health management. Moreover, AI is transforming healthcare operations by automating staff tasks, optimizing operational and administrative processes, and providing guidance in surgical care. These technological advancements can not only improve efficiency but also enhance the quality of care provided. AI-driven customer support tools are also enhancing patient experiences by offering timely responses and personalized interactions. Even in employment recruiting, AI is being leveraged to identify and attract top talent in the healthcare sector.
With such a wide array of applications, it is crucial for stakeholders to understand the specific AI service offering when negotiating a vendor contract and implementing the new technology. This knowledge ensures that the selected AI solution aligns with the organization’s goals and can be effectively integrated into existing systems, while minimizing each party’s risk.
II. Pre-Negotiation Strategies
Healthcare AI arrangements are complex, often involving novel technologies and products, a wide range of possible applications, important data use and privacy considerations and the potential to significantly impact patient care and patient satisfaction. Further, the regulatory landscape is developing and can be expected to evolve significantly in the coming years. Vendors and customers should consider the following when approaching a negotiation:
Vendor Considerations:
Conduct a Comprehensive Assessment: Understand the problem the product is addressing, expected users, scope, proposed solutions, data involved, potential evolution, and risk level.
Engage Stakeholders: Schedule kick-off calls with the customer’s privacy, IT, compliance, and clinical or administrative teams.
Documentation: Maintain summary documentation detailing model overview, value proposition, processing activities, and privacy/security controls.
Collaborate with Sales: Develop strategies with the sales team and consider trial periods or pilot programs. Plan for the progression of these programs. For example, even if a pilot program is free, data usage terms should still apply.
Customer Considerations:
Evaluate Within AI Governance Scope: Don’t treat an AI contract like a normal tech engagement. Instead, approach this arrangement within a larger AI governance scope, including accounting for the introduction of ethical frameworks, data governance practices, monitoring and evaluation systems, and related guardrails to work in tandem with the product’s applications.
Engage Stakeholders: Collaborate with legal, privacy, IT, compliance, and other relevant stakeholders from the outset.
Consider AI-Specific Contracts: Use AI-specific riders or MSAs and review standard vendor forms to streamline negotiations.
Assess Upstream Contract Requirements: Ensure upstream requirements can be appropriately reflected downstream.
Perform Vendor Due Diligence: As with any nascent industry, some vendors will not survive or may significantly change their focus or products, which might impact support or the long-term viability of the service. Learn about your vendor and ask questions about its financial stability and privacy and security posture.
III. AI Governance and Risk Assessment
Evaluating AI-related risk requires understanding risk across the full lifecycle of an AI product, including its model architecture, training methods, data types, model access, and specific application context. In the healthcare space, this includes understanding the impact to operations, the effect on clinical care and any other impact to patients, the amount of sensitive information involved, and the degree of visibility and/or control the organization has over the model.[1] For example, the risk is much greater for AI used to assist clinical decision-making in diagnostics (e.g., assessing static imaging in radiology), whereas technology used for limited administrative purposes carries comparatively smaller risk. Here are three resources that healthcare organizations can use to evaluate and address AI-related risks:
A. HEAT Map
A HEAT map can be a helpful tool for evaluating the severity of risks associated with AI systems. It categorizes risks into different “heat” levels (e.g., informational, low, medium, high, and critical). This high-level visual representation can be particularly helpful when a healthcare organization is initially deciding whether to engage a vendor for a new AI product or platform. It can help the organization identify the risk associated with rolling out a given product and prioritize risk management strategies if it moves forward in negotiating an agreement with that vendor.
For example, both the customer and the vendor might consider (and categorize within the HEAT map) what data the vendor will require to perform its services, why the vendor needs it, who will receive the data, what data rights the vendor is asking for, how that data is categorized, whether any federal, state, or global rules affect the acceptance of that data, and what mitigations are necessary to account for data privacy.
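The categorization step can be sketched as a simple scoring exercise. The two factors below (data sensitivity and clinical impact) and the mapping rule are illustrative assumptions, not drawn from any specific framework; the heat levels are the ones named above:

```python
# A minimal, hypothetical HEAT-map scoring sketch. The two 0-4 factor
# scores and the max() mapping rule are illustrative assumptions; only
# the level names come from the discussion above.
LEVELS = ["informational", "low", "medium", "high", "critical"]

def heat_level(data_sensitivity: int, clinical_impact: int) -> str:
    """Map two 0-4 factor scores to a heat level using the higher score."""
    score = max(data_sensitivity, clinical_impact)
    return LEVELS[min(score, len(LEVELS) - 1)]

# e.g., a diagnostic radiology tool handling PHI:
print(heat_level(data_sensitivity=4, clinical_impact=4))  # critical
# vs. an administrative scheduling assistant:
print(heat_level(data_sensitivity=1, clinical_impact=0))  # low
```

A real assessment would weigh more factors (model visibility, vendor access, regulatory exposure), but even a two-factor grid like this is enough to triage which vendor engagements warrant deeper review before negotiation.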
B. NIST AI Risk Management Framework
The National Institute of Standards and Technology (NIST) has created the NIST AI Risk Management Framework to guide organizations in identifying and managing AI-related risks.[2] This framework offers an example of a risk tiering system that can be used to understand and assess the risk profile of a given AI product, and ultimately guide organizations in the creation of risk policies and protocols, evaluation of ongoing AI rollouts, and resolution of any issues that arise. Whether healthcare organizations choose to adopt this risk tiering approach or apply their own, this framework reminds organizations of the many tools at their disposal to manage risk during the rollout of an AI tool, including data protection and retention policies, education of users, incident response protocols, auditing and assessment practices, changes to management controls, secure software development practices, and stakeholder engagement.
C. Attestations and Certifications
Attestations and certificates (e.g., HITRUST, ISO 27001, SOC-2) can also help your organization ensure compliance with industry standard security and data protection practices. Specifically, HITRUST focuses on compliance with healthcare data protection standards, reducing the risk of breaches and ensuring AI systems that handle health data are secure; ISO 27001 provides a framework for managing information security, helping organizations to safeguard AI data against unauthorized access and breaches; and SOC-2 assesses and verifies a service organization’s controls related to security, availability, processing integrity, confidentiality, and privacy, in order to ensure AI services are trustworthy. By engaging in the process to meet these certification standards, the organization will be better equipped to issue-spot potential problems and implement corrective measures. Also, these certifications can demonstrate to the public that the organization takes AI risks seriously, thereby strengthening trust and credibility amongst its patients and business partners.
IV. Contract Considerations
Once parties have assessed their organizational needs, engaged applicable stakeholders/collaborators, and reviewed their risk exposure from an AI governance perspective, they can move forward in negotiating the specific terms of the agreement. Here’s a high-level checklist of the terms and conditions that each party will want to pay careful attention to in negotiations, along with a deeper dive into the considerations surrounding data use and intellectual property (IP) issues:
A. Key Contracting Provisions:
Third-party terms
Privacy and security
Data rights
Performance and IP warranties
Service level agreements (SLAs)
Regulatory compliance
Indemnification (IP infringement, data breaches, etc.)
Limitations of liability and exclusion of damages
Insurance and audit rights
Termination rights and effects
B. Data Use and Intellectual Property Issues
When negotiating the terms and conditions related to data use, ownership, and other intellectual property (IP) issues, each party will typically aim to achieve the following objectives:
Customer Perspective:
Ensure customer will own all inputs, outputs, and derivatives of its data used in the application of the AI model;
Confirm data usage will be restricted to service-related purposes;
Confirm the customer’s right to access data stored by vendor or third-party as needed. For example, the customer might want to require that the vendor provide any relevant data and algorithms in the event of a DOJ investigation or plaintiff lawsuit;[3]
Aim for broad, protective IP liability and indemnity provisions; and
Where patient health information is involved, ensure that it is being used in compliance with HIPAA. Vendors often want to train their algorithms on protected health information (PHI). Unless the algorithm is being trained solely for the benefit of the HIPAA-regulated entity and fits within a healthcare operations exception, a HIPAA authorization from the data subject will typically be required to train the algorithm for broader purposes.
Vendor Perspective:
Ensure vendor owns all services, products, documentation, and enhancements thereto;
Access customer data sources for training and improving machine learning models; and
Retain ownership over outputs. From the vendor’s perspective, any customer data inputted into the vendor’s model is modified by that model or product, resulting in a blending of information owned by both sides. One potential solution to this shared-ownership issue is for the vendor to grant the customer a long-term license to use that output.
V. Conclusion
In conclusion, negotiating contracts for AI tools in healthcare demands a comprehensive understanding of the technology, data use, risks and liabilities, among other considerations. By preparing effectively and engaging the right stakeholders and collaborators, both vendors and customers can successfully navigate these negotiations.
FOOTNOTES
[1] UC AI Council Risk Assessment Guide.
[2] NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (July 2024).
[3] Paul W. Grimm et al., Artificial Intelligence as Evidence, 19 Northwestern J. of Tech. and Intellectual Prop. 1, 9 (2021).
California Privacy Agency Extracts Civil Penalties in its First Settlement Not Involving Data Brokers

Companies in all industries take note: regulators are scrutinizing how companies offer and manage privacy rights requests and looking into the nature of vendor processing in connection with application of those requests. This includes applying the proper verification standards and how cookies are managed. Last week, the California Privacy Protection Agency (“CPPA” or “Agency”) provided yet another example of this regulatory focus in its Stipulated Final Order (“Order”) with automotive company, American Honda Motor Co., Inc. (“Honda”).
The CPPA alleged that Honda violated the California Consumer Privacy Act (“CCPA”) by:
requiring Californians to verify themselves where verification is not required or permitted (the right to opt-out of sale/sharing and the right to limit) and provide excessive personal information to exercise privacy rights subject to verification (know, delete, correct);
using an online cookie management tool (often known as a CMP) that failed to offer Californians their privacy choices in a symmetrical or equal way and was confusing;
requiring Californians to verify that they gave their agents authority to make opt-out of sale/sharing and right to limit requests on their behalf; and
sharing consumers’ personal information with vendors, including ad tech companies, without having in place contracts that contain the necessary terms to protect privacy in connection with their role as either a service provider, contractor, or third party.
This Order illustrates the potential fines and financial risks associated with non-compliance with state privacy laws. Of the $632,500 administrative fine lodged against the company, the Agency clearly spelled out that $382,500 accounts for 153 violations – $2,500 per violation – alleged to have occurred with respect to Honda’s consumer privacy rights processing between July 1 and September 23, 2023. It is worth emphasizing that the Agency lodged the maximum administrative fine – “up to two thousand five hundred ($2,500)” – available to it for non-intentional violations for each of the incidents where verification standards were wrongly applied to consumer opt-out / limit requests. It is unclear to what the remaining $250,000 in fines was attributed, but it is presumably for the other violations alleged in the Order, such as disclosing PI to third parties without contracts containing the necessary terms, confusing cookie and other consumer privacy request methods, and requiring excessive personal data to make a request. The number of incidents involving those infractions is unclear, but based on likely web traffic and vendor data processing, the fines reflect only a fraction of the personal information processed in a manner alleged to be non-compliant.
The Agency and Office of the Attorney General of California (which enforces the CCPA alongside the Agency) have yet to seek truly jaw-dropping fines in amounts that have become common under the UK/EU General Data Protection Regulation (“GDPR”). However, this Order demonstrates California regulators’ willingness to demand more than remediation. It is also significant that the Agency requires the maximum administrative penalty on a per-consumer basis for the clearest violations that resulted in denial of specific consumers’ rights. This was a relatively modest number of consumers: “119 Consumers who were required to provide more information than necessary to submit their Requests to Opt-out of Sale/Sharing and Requests to Limit, 20 Consumers who had their Requests to Opt-out of Sale/Sharing and Requests to Limit denied because Honda required the Consumer to Verify themselves before processing the request, and 14 Consumers who were required to confirm with Honda directly that they had given their Authorized Agents permission to submit the Request to Opt-out of Sale/Sharing and Request to Limit on their behalf.” The fines would have likely been greater if applied to all Consumers who accessed the cookie CMP, or that made requests to know, delete, or correct. Further, it is worth noting that many companies receive thousands of consumer requests per year (or even per month), and the statute of limitations for the Agency is five years; applying the per-consumer maximum fine could therefore result in astronomical fines for some companies.
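The per-consumer arithmetic in the Order can be checked directly. This sketch uses only figures stated in the Order (the three consumer counts and the $2,500 per-violation maximum for non-intentional violations):

```python
# Fine breakdown per the Order: three categories of affected consumers,
# each incident fined at the $2,500 non-intentional-violation maximum.
consumers = {
    "excess info required for opt-out/limit requests": 119,
    "opt-out/limit requests denied pending verification": 20,
    "direct confirmation of agent authority required": 14,
}
MAX_FINE_PER_VIOLATION = 2_500
TOTAL_FINE = 632_500

total_violations = sum(consumers.values())
per_consumer_fines = total_violations * MAX_FINE_PER_VIOLATION
unattributed = TOTAL_FINE - per_consumer_fines

print(total_violations)   # 153
print(per_consumer_fines) # 382500
print(unattributed)       # 250000 -- the portion the Order does not itemize
```

The three consumer counts sum exactly to the 153 violations the Order cites, confirming that the $382,500 component is a straight per-consumer computation.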
Let us also not forget that regulators have injunctive relief at their disposal. Although the injunctive relief in this Order was effectively limited to fixing alleged deficiencies, it included “fencing in” requirements, such as use of a UX designer to evaluate consumer request “methods – including identifying target user groups and performing testing activities, such as A/B testing, to access user behavior” – and reporting of consumer request metrics for five years. More drastic relief, such as disgorgement or prohibiting certain data or business practices, is also available. For instance, in a recent data broker case brought by the Agency, the business was barred from engaging in business as a data broker in California for three years.
We dive into each of the allegations in the present case further below and provide practical takeaways for in-house legal and privacy teams to consider.
Requiring consumers to provide more info than necessary to exercise verifiable requests and requiring verification of CCPA sale/share opt-out and sensitive PI limitation requests.
The Order alleges two main issues with Honda’s rights request webform:
Honda’s webform required too many data points from consumers (e.g., first name, last name, address, city, state, zip code, email, phone number). The Agency contends that requiring all of this information forces consumers to provide more information than necessary to exercise their verifiable rights, given the Agency’s allegation that “Honda generally needs only two data points from the Consumer to identify the Consumer within its database.” The CCPA and its regulations allow a business to seek additional personal information if necessary to verify to the requisite degree of certainty required under the law (which varies depending on the nature of the request, the sensitivity of the data, and the potential harm of disclosure, deletion, or change), or to reject the request and provide an alternative response requiring lesser verification (e.g., treat a request for a copy of personal information as a right to know categories of personal information). However, the regulations prohibit requiring more personal data than is necessary under the particular circumstances of a specific request. Proposed amendments to Section 7060 of the CCPA regulations also demonstrate the Agency’s concern about requiring more information than is necessary to verify the consumer.
Honda required consumers to verify their Requests to Opt-Out of Sale/Sharing and Requests to Limit, which the CCPA prohibits.
In addition to these two main issues, the Agency also suggested (but did not directly state) that the consumer rights processes amounted to dark patterns (Para. 38). The CPPA cited the policy reasons behind the differential requirements for Opt-Out of Sale/Sharing and Right to Limit requests; i.e., so that consumers can exercise those requests without undue burden, particularly because there is minimal or nonexistent potential harm to consumers if such requests are not verified.
In the Order, the CPPA goes on to require Honda to ensure that its personnel handling CCPA requests are trained on the CCPA’s requirements for rights requests, which is an express obligation under the law, and to confirm to the Agency that it has provided such training within 90 days of the Order’s effective date.
Practical Takeaways
Configure consumer rights processes, such as rights request webforms, to only require a consumer to provide the minimum information needed to initiate and verify (if permitted) the specific type of request. This may be difficult for companies that have developed their own webforms, but most privacy tech vendors that offer webforms and other consumer rights-specific products allow for customizability. If customizability is not possible, companies may have to implement processes to collect minimum information to initiate the request and follow up to seek additional personal information if necessary to meet CCPA verification standards as may be applicable to the specific consumer and the nature of the request.
Do not require verification of do not sell/share and sensitive PI limitation requests (note, there are narrow fraud prevention exceptions here, though, that companies can and should consider in respect of processing Opt-Out of Sale/Sharing and Right to Limit requests).
Train personnel handling CCPA requests (including those responsible for configuring rights request “channels”) to properly intake and respond to them.
Include instructions on how to make the various types of requests that are clear and understandable, and that track what the law permits and requires.
Requiring consumers to directly confirm with Honda that they had given permission to their authorized agent to submit opt-out of sale/sharing and sensitive PI limitation requests
The CPPA’s Order also outlines that Honda allegedly required consumers to directly confirm with Honda that they gave permission to an authorized agent to submit Opt-Out of Sale/Sharing and Right to Limit requests on their behalf. The Agency took issue with this because under the CCPA, such direct confirmation with the consumer regarding authority of an agent is only permitted as to requests to delete, correct, and know.
Practical Takeaways: When processing authorized agent requests to Opt-Out of Sale/Sharing or Right to Limit, avoid directly confirming with the consumer or verifying the identity of the authorized agent (the latter is also permitted in respect of requests to delete, correct, and know). Keep in mind that what agents may request, and agent authorization and verification standards, differ from state-to-state.
Failure to provide “symmetry in choice” in its cookie management tool
The Order alleges that, for a consumer to turn off advertising cookies on Honda’s website (cookies which track consumer activity across different websites for cross-context behavioral advertising and therefore require an Opt-out of Sale/Sharing), consumers must complete two steps: (1) click the toggle button to the right of Advertising Cookies and (2) click the “Confirm My Choices” button, shown below:
The Order compares this opt-out process to that for opting back into advertising cookies following a prior opt-out. There, the Agency alleged that if consumers return to the cookie management tool (also known as a consent management platform or “CMP”) after turning “off” advertising cookies, an “Allow All” choice appears (as shown in the below graphic). This is likely a standard configuration of the OneTrust CMP that can be modified to match the toggle-and-confirm approach used for opt-out. Thus, the CPPA alleged, consumers need take only one step to opt back into advertising cookies when two steps are needed to opt out, in violation of an express requirement of the CCPA that opting in involve no more steps than opting out.
The Agency took issue with this because the CCPA requires businesses to implement request methods that provide symmetry in choice, meaning the more privacy-protective option (e.g., opting-out) cannot be longer, more difficult, or more time consuming than the less privacy protective option (e.g., opting-in).
The Agency also addressed the need for symmetrical choice in the context of “website banners,” also known as cookie banners, pointing to an example from the CCPA regulations of insufficient symmetry in choice – i.e., using “‘Accept All’ and ‘More Information,’ or ‘Accept All’ and ‘Preferences’ . . . is not equal or symmetrical” – because it suggests that the company is seeking and relying on consent to cookies (rather than an opt-out), and where consent is sought, acceptance and denial must be equally easy to choose. The regulations further explain that “[a]n equal or symmetrical choice” in the context of a website banner seeking consent for cookies “could be between ‘Accept All’ and ‘Decline All.’” Of course, under the CCPA, consent is not required even for cookies that involve a Share/Sale, but the Agency is making clear that where consent is sought there must be symmetry in the acceptance and denial of that consent.
The CPPA’s Order also details other methods by which the company should modify its CCPA request procedures, including (i) separating the methods for submitting sale/share opt-out requests and sensitive PI limitation requests from verifiable consumer requests (e.g., requests to know, delete, and correct); (ii) including the link to manage cookie preferences within Honda’s Privacy Policy, Privacy Center, and website footer; and (iii) applying global privacy control (“GPC”) preference signals for opt-outs to known consumers consistent with CCPA requirements.
Practical Takeaways
It is unclear whether the company configured the cookie management tool in this manner deliberately or if the choice of the “Allow All” button in the preference center was simply a matter of using a default configuration of the CMP, a common issue with CMPs that are built off of a (UK/EU) GDPR consent model. Companies should pay close attention to the configuration of their cookie management tools, including in both the cookie banner (or first layer), if used, and the preference center (shown above), and avoid using default settings and configurations provided by providers that are inconsistent with state privacy laws. Doing so will help mitigate the risk of choice asymmetry presented in this case, and the risks discussed in the following three bullets.
State privacy laws like the CCPA are not the only reason to pay close attention and engage in meticulous legal review of cookie banner and preference center language, and proper functionality and configuration of cookie management tools.
Given the onslaught of demands and lawsuits from plaintiffs’ firms under the California Invasion of Privacy Act (CIPA) and similar laws – based on cookies, pixels, and other tracking technologies – many companies turn to cookie banner and preference center language to establish an argument for a consent defense and thereby mitigate litigation risk. In doing so, it is important to bear in mind the symmetry-of-choice requirements of state consumer privacy laws. One approach is to make clear that acceptance is of the site terms and privacy practices, which include the use of tracking by the operator and third parties, subject to the ability to opt-out of some types of cookies. This can help establish consent to the use of cookies through use of the site after notice of cookie practices, without suggesting that cookies are opt-in or creating a lack of symmetry in choice.
In addition, improper wording and configuration of cookie tools – such as suggesting an opt-in approach (“Accept Cookies”) when cookies in fact already fired upon the user’s site visit, or suggesting that “Reject All” opts the user out of all cookies when functional and necessary cookies remain “on” after rejection – present risks under state unfair and deceptive acts and practices (UDAP) and unfair competition laws, and make the cookie banner notice defense to CIPA claims potentially vulnerable because the cookies fire before the notice is given.
Address CCPA requirements for GPC, linking to the business’s cookie preference center, and separating methods for exercising verifiable vs. non-verifiable requests. Where the business can tie a GPC signal to other consumer data (e.g., the account of a logged in user), it must also apply the opt-out to all linkable personal information.
Strive for clear and understandable language that explains what options are available and the limitations of those options, including cross-linking between the CMP for cookie opt-outs and the main privacy rights request intake for non-cookie privacy rights, and explain and link to both in the privacy policy or notice.
Make sure that the “Your Privacy Choices” or “Do Not Sell or Share My Personal Information” link gets the consumer to both methods. Also make sure the opt-out process is designed so that the number of steps required to opt-out is not more than the number required to opt back in. For example, linking first to the CMP, which then links to the consumer rights form or portal, rather than the other way around, is more likely to avoid the issue with additional steps just discussed.
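For technical teams implementing the GPC requirements discussed above: under the Global Privacy Control specification, browsers expose the user’s preference to client-side scripts as navigator.globalPrivacyControl (and send servers a Sec-GPC request header). The sketch below shows one minimal, hedged way to detect the signal, written as a pure function over a navigator-like object so it can run outside a browser; the downstream opt-out handling shown in comments is hypothetical and site-specific.

```javascript
// Sketch: detecting a Global Privacy Control (GPC) opt-out signal.
// Per the GPC specification, browsers expose the user's preference as
// navigator.globalPrivacyControl; servers can also read the Sec-GPC header.
// Accepting a navigator-like object (rather than using the global directly)
// keeps the check testable outside a browser.
function gpcOptOut(nav) {
  // Treat only an explicit `true` as an asserted opt-out preference;
  // undefined/absent means the browser sent no signal.
  return nav != null && nav.globalPrivacyControl === true;
}

// In a browser, a site could apply the opt-out before advertising cookies fire:
// if (gpcOptOut(navigator)) {
//   disableAdvertisingCookies(); // hypothetical site-specific helper
// }
```

Note that per the CCPA regulations, the signal must be treated as a valid opt-out of sale/sharing, and (as the Honda Order highlights) applied to other linkable personal information when the business can associate the signal with a known consumer.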
Failure to produce contracts with advertising technology companies
The Agency’s Order goes on to allege that Honda did not produce contracts with advertising technology companies despite collecting and selling/sharing PI via cookies on its website to/with these third parties. The CPPA took issue with this because the CCPA requires a written contract meeting certain requirements to be in place between a business and PI recipients that are a CCPA service provider, contractor, or third party in relation to the business. We have seen regulators request copies of contracts with all data recipients in other enforcement inquiries.
Practical Takeaways
Vendor and contract management are a growing priority of privacy regulators, in California and beyond, and should be a priority for all companies. Be prepared to show that you have properly categorized all personal data recipients, and have implemented and maintain processes to ensure proper contracting practices with vendors, partners, and other data recipients. This should include a diligence and assessment process to ensure that the proper contractual language is in place based on each recipient’s data processing role. Put another way, it may not be proper as to certain vendors to simply put in place a data processing agreement or addendum with service provider/processor language. For instance, vendors that process for cross-context behavioral advertising cannot qualify as a service provider/contractor. This determination is also necessary to correctly categorize cookie and other vendors as subject to opt-out or not.
Attention to contracting is important under the CCPA in particular because the CCPA requires different contracting terms depending on whether the data recipient constitutes a “third party,” a “service provider,” or a “contractor.” Further, in California, the failure to include all of the required service provider/contractor contract terms will convert the recipient into a third party and the disclosure into a sale.
Conclusion
This case demonstrates the need for businesses to review their privacy policies and notices, and audit their privacy rights methods and procedures, to ensure that they comply with applicable state privacy laws, which have some material differences from state to state. We are aware of enforcement actions in progress not only in California but also in other states, including Oregon, Texas, and Connecticut, and these regulators are looking for clear explanations of what specific rights their residents have and how to exercise them. Further, it can be expected that regulators will start looking beyond obvious notice and rights request program errors to data knowledge and management, risk assessment, minimization, and purpose and retention limitation obligations. Meeting those requirements means going beyond “check the box” compliance as to public-facing privacy program elements and maintaining a mature, comprehensive, and meaningful information governance program.
PLOT TWIST: Legal Lead Generator Sued in TCPA Class Action
In an ironic twist, Intake Desk LLC, a company that recruits plaintiffs for personal injury lawsuits and uses the tagline “File More Mass Tort Cases,” now finds itself on the other side of the courtroom—as a defendant in a TCPA class action.
The Complaint in Emily Teman v. Intake Desk LLC (Mar. 19, 2025, D. Mass.) alleges that Intake Desk violated the TCPA by making telemarketing calls to Plaintiff Teman and other putative class members while their numbers were listed on the National Do Not Call Registry and without their written consent, as well as by calling individuals who had previously requested not to receive such calls.
Plaintiff Teman claims she received more than a dozen calls from Intake Desk over several months in 2023, allegedly attempting to sign her up for personal injury lawsuits related to talcum powder. However, Teman asserts that she never consented to these calls, and that her number has been on the National Do Not Call Registry since 2003.
Interestingly, Teman filed another TCPA class action with similar allegations against Select Justice, LLC, a self-proclaimed advocacy group “committed to helping injured individuals seek justice and compensation.” In Emily Teman v. Select Justice, LLC (Mar. 20, 2025, D. Mass.), Teman alleges that she received more than a dozen calls from Select Justice over several months in 2023, including multiple harassing voicemails from different representatives. According to the Complaint, the voicemails implied that Teman had contacted Select Justice to join class action suits related to talcum powder and rideshare assault. Teman asserts that she had never been a customer of Select Justice and never consented to receive calls from them. The Complaint further alleges that the calls prompted Teman to file a Consumer Complaint with the Massachusetts Attorney General’s Office against Select Justice on January 26, 2024.
While the Complaint against Intake Desk states that Teman “never made an inquiry into” its services, the Complaint against Select Justice alleges that she “was not interested in Select Justice’s services.” Perhaps this is just a trivial semantic difference – but in TCPAWorld, every word carries weight.
Video: Whistleblower Challenges and Employer Responses: One-on-One with Alex Barnard
Addressing whistleblower claims is one of the most sensitive and complex issues employers face. It becomes especially challenging when the claims involve compliance officers, risk officers, or even lawyers tasked with identifying potential problems.
In this one-on-one interview, Epstein Becker Green attorney Alex Barnard sits down with George Whipple to explore the unique challenges whistleblower allegations present within organizations. Alex explains how courts distinguish between performing one’s job duties and raising legitimate whistleblower concerns, particularly when internal experts are involved. He also outlines key strategies for investigating claims fairly, avoiding retaliation, and navigating the fine line between good-faith and bad-faith whistleblowing.
Drawing on more than 25 years of experience in litigation and internal investigations, Alex emphasizes the importance of treating every claim as having merit while acting with caution in cases involving potential bad faith. He provides actionable insights for employers on maintaining professionalism, minimizing risks, and fostering respect for the whistleblower process while upholding organizational integrity.
Learn effective techniques to handle whistleblower claims, mitigate risks, and ensure compliance in an increasingly scrutinized workplace environment.
CPPA Settles Alleged CCPA Violations with Honda
Last week, the California Privacy Protection Agency (CPPA) settled its first non-data broker enforcement action against American Honda Motor Co. for a $632,500 fine and the implementation of certain remedial actions.
The CPPA alleged that Honda violated the California Consumer Privacy Act as amended by the California Privacy Rights Act (collectively the CCPA) by:
Requiring consumers to provide more information than necessary to exercise their rights under the CCPA. When submitting a request, a consumer was required by Honda’s webform to provide their first name, last name, address, city, state, zip code, email, and phone number. The CPPA alleged that this violated the CCPA by requiring a higher level of verification than is permitted for requests to opt-out or to limit the use of certain information.
Requiring consumers to directly confirm that they have permitted another individual to act as their authorized agent to submit a request to opt-out or limit use. While a company may request written documentation indicating that an individual is an authorized agent for a consumer, a company may not require a consumer to directly confirm that they have provided the permission; companies may only contact consumers directly for requests to know, delete, and correct.
Failing to implement a cookie management tool that provides symmetrical choice when a consumer submits requests to opt-out of sale and/or sharing and consents to the use of their personal information. The CPPA alleged that Honda’s website automatically allows cookies by default. To turn off advertising cookies, the user must toggle a button next to “Advertising Cookies” and then click “Confirm my Choices,” but to opt back into advertising cookies, the consumer only needs to press one button. The CPPA alleges that by providing one step to opt-in but two steps to opt-out, Honda did not provide equal or symmetrical choices as the CCPA requires.
Failing to execute written contracts with third-party advertising companies with whom it sold, shared, and/or disclosed consumer personal information. The CPPA alleged that Honda failed to execute CCPA-compliant agreements with the third-party cookie providers it uses on its website and that it sells, shares, and/or discloses personal information for advertising and marketing across different websites.
To resolve the allegations, Honda agreed to:
Implement a new and simpler process for Californians to assert their privacy rights
Update its cookie preference tool to include a “Reject All” button in addition to the “Accept All” button
Separate the methods for submitting requests to opt-out and limit the use of certain information from other rights under the CCPA
Train its employees
Consult a user experience designer to evaluate its methods for submitting privacy requests
The CPPA supported the settlement amount by calling out the number of consumers whose rights were implicated by some of Honda’s practices, emphasizing that the CPPA will determine fines on a per-violation basis. If your company hasn’t done so already, be sure to update your CCPA compliance program and dot the i’s and cross the t’s when it comes to your website’s privacy policy, cookie preferences, and third-party advertisers.
Valenzuela v. The Kroger Co. Chatbot Wiretapping Case Dismissed; Implications and Takeaways for Businesses
A recent noteworthy decision from a federal court in California provides helpful guidance for companies deploying chatbots and other types of tracking technology on their websites, but at the same time highlights the nuances and high wire act of safely collecting consumer information versus stepping over the line.
In Valenzuela v. The Kroger Co., the U.S. District Court for the Central District of California dismissed a proposed class action filed against the grocery chain Kroger, finding that the plaintiff did not have a viable claim under the California Invasion of Privacy Act (CIPA). Because plaintiffs’ attorneys have recently been using CIPA to bring cases against a large number of companies, this is potentially an important decision in privacy jurisprudence. However, the narrowness of the decision leaves open other paths for plaintiffs and demonstrates the need for companies to carefully and thoughtfully assess what online tracking they conduct in order to minimize their risk of class action litigation.
The plaintiff in the case alleged Kroger, through a third-party vendor called Emplifi, unlawfully intercepted and recorded chat-based conversations between customers and Kroger’s website (i.e., communications with a “chatbot” on the website). The central claim was that Kroger “aided and abetted” Emplifi’s allegedly wrongful conduct of allowing Meta Platforms Inc. to mine data collected through Emplifi’s chatbots (including the one deployed on Kroger’s website) to gather information about user interests and target ads to those users on Meta’s social media platforms like Facebook and Instagram.
More specifically, the case was brought under Section 631(a) of CIPA which prohibits, among other things, any person from:
Tapping or making unauthorized connections with a telegraph or telephone line;
Willfully and without consent reading the contents of communications in transit;
Using information obtained via such interception; or
“Aiding, agreeing with, employing or conspiring with” any person to commit these acts.1
After a lengthy procedural back-and-forth, the Court allowed the plaintiff to proceed only under the “aiding and abetting” theory (the fourth prong). In its ruling, the Court emphasized that to hold Kroger liable under the fourth prong, the plaintiff needed to demonstrate that Kroger knew—or plausibly should have known—of Emplifi’s alleged unlawful eavesdropping or otherwise acted with knowledge or intent to facilitate it. The plaintiff pointed to the vendor’s marketing materials and the cost and ease with which the chatbot was installed on the Kroger website, arguing that Kroger must have known Emplifi was intercepting conversations without customers’ consent. The Court rejected this argument, holding “[i]t is not a plausible inference that because Emplifi could ‘quickly and cheaply’ deploy the bot, Kroger should have known Emplifi harvested user data.”
The Court ruled that because there was not a plausible allegation that Kroger had actual or constructive knowledge of the alleged unlawful sharing of chatbot communications with social media companies, Kroger could not be held liable for “aiding and abetting” the third parties’ alleged violation of CIPA.
What the Kroger Ruling Means for Businesses
Although this decision was made at the district court level and does not have precedential effect, the Court’s reasoning provides a roadmap of what companies should be aware of when considering integrating chatbots or other large-language-model-enabled third-party technologies onto their websites. Of note, this case focused on section 631(a) of CIPA only, and did not involve section 638.51(a), which prohibits the installation of “pen registers” or “trap and trace” devices without appropriate approvals and which plaintiffs regularly claim applies to website tracking software. As a result, even if the rationale from the Kroger decision is extended to other cases by other courts, companies will continue to face the risks associated with claims brought under section 638.51(a). Nonetheless, there are valuable lessons to be learned from the Kroger decision and other recent court decisions:
“Knowledge” of a third party’s actions is key to a company’s liability. This includes constructive knowledge that a company could gain from the third party’s documentation and marketing communications about the capabilities of its products.
Courts will require specific, fact-based allegations showing a company’s awareness and intent regarding any purported interception of communications.
Plaintiffs with more robust support for allegations of unauthorized data collection may have more success bringing similar claims.
Additionally, and as always, litigation defense is costly and even a successful defense can be a burden on a company. Even though Kroger won in this case, it took almost three years of litigation expenses to obtain that victory. Taking proactive steps to assess website tracking tool deployment can lower the risk of litigation in the first place and avoid these costs:
Ensure that proper notice is given to, and appropriate consent is obtained from, website visitors.
Conduct thorough due diligence on data collection and sharing practices when onboarding third-party software providers.
Include contractual provisions clarifying that any data recording or sharing must be done in compliance with all applicable laws, and that providers will indemnify the business if violations arise.
Footnotes
[1] The plaintiff in this matter also sought to bring a claim under §632.7 of CIPA (illegal interception of cellular communications for individuals who used the chatbot from their internet-enabled smartphones). That claim was dismissed with prejudice in March of 2024 with the Court finding that section of CIPA only applies to communications between two or more cellular phones and not between a cellular phone and a website.
AI Governance: The Problem of Shadow AI
If you hang out with CISOs like I do, shadow IT has always been a difficult problem. Shadow IT refers to “information technology (IT) systems deployed by departments other than the central IT department, to bypass limitations and restrictions that have been imposed by central information systems. While it can promote innovation and productivity, shadow IT introduces security risks and compliance concerns, especially when such systems are not aligned with corporate governance.”
Shadow IT has been a longstanding problem as IT professionals can’t implement security measures and guidelines when they are unaware of its use.
Now that artificial intelligence (AI) is widely used for work and other purposes, it is imperative that organizations address its governance, just as they have addressed employees’ use of IT assets. Otherwise, employees will use AI tools without the organization’s knowledge and outside its acceptable use policies, exacerbating the problem of shadow AI in the organization.
A recent TechRadar article concluded that “you almost certainly have a shadow AI problem.” The risks of having shadow AI in the organization include: “the leakage of sensitive or proprietary data, which is a common issue when employees upload documents to an AI service such as ChatGPT, for example, and its contents become available to users outside of the company. But it could also lead to serious data quality problems where incorrect information is retrieved from an unapproved AI source which may then lead to bad business decisions.” And don’t forget about the problem of hallucinations.
Implementing an AI Governance Program is one way to address the shadow AI problem. AI Governance programs differ depending on business needs, but all of them address who owns the program, which AI tools are sanctioned, how those tools may be used, guardrails around the risks of data loss, data integrity, and accuracy, and user training and education. Governing the use of AI tools in an organization is similar to governing the use of IT assets. The most important thing is to get started before shadow AI gets out of hand.
Amazon Files Suit against CPSC, Challenging CPSC’s Determination That Amazon Is a Distributor
On March 14, 2025, Amazon filed suit against the Consumer Product Safety Commission (CPSC) in the U.S. District Court for the District of Maryland, challenging CPSC’s July 29, 2024, and January 16, 2025, orders determining that Amazon is “a ‘distributor’ of certain products that are defective or fail to meet federal consumer product safety standards, and therefore bears legal responsibility for their recall.” According to CPSC’s January 17, 2025, announcement, “[m]ore than 400,000 products are subject to this Order: specifically, faulty carbon monoxide (CO) detectors, hairdryers without electrocution protection, and children’s sleepwear that violated federal flammability standards.” CPSC determined that the products, listed on Amazon.com and sold by third-party sellers using the Fulfillment by Amazon (FBA) program, pose a “substantial product hazard” under the Consumer Product Safety Act (CPSA). In its complaint, Amazon argues that CPSC “overstepped” the statutory limits of the CPSA by ordering “a wide-ranging recall of products that were manufactured, owned, and sold by third parties,” not Amazon itself. Amazon states that CPSC’s recall order “relies on an unprecedented legal theory that stretches the [CPSA] beyond the breaking point and fails to discharge” CPSC’s obligations under the Administrative Procedure Act (APA).
Amazon argues that it “falls within the definition of third-party logistics provider with respect to products sold using the FBA service because it does not manufacture, own, or sell those products, but instead stores and ships them on behalf of third-party sellers who retain title throughout the transaction.” Amazon notes that CPSC’s July 2021 administrative complaint was the “first of its kind” in seeking to label an online marketplace as a distributor under the CPSA, holding it responsible for recalling products “because it provided the third-party sellers with logistics services.” Amazon cites a statement by Robert S. Adler, then Acting Chair of CPSC, “admitt[ing] that the ‘statute is not perfectly clear on’ whether the Commission’s authority extends to Amazon’s FBA service.”
Amazon also argues that CPSC violated the APA in requiring a new round of recall notices, despite Amazon “having already twice notified every individual who purchased the products” and that Amazon “issue new refunds to purchasers (despite having already provided a full refund to every customer in 2021 or 2022).” According to Amazon, CPSC’s typical product recall practices require only a single round of notices, and binding precedent holds that CPSC “acknowledge and provide a ‘reasoned explanation for’” departing from its past practice.
According to Amazon, the CPSA vests CPSC Commissioners “with a potent combination of governmental functions, authorizing them to act as judge, jury, and prosecutor in the same proceeding.” Amazon notes that the body that voted to file the complaint against it — the Commissioners — “also has the power to hear the evidence, decide factual disputes, interpret and apply the law to the facts, and fashion the remedy.” Amazon states that this arrangement “contravenes Amazon’s Fifth Amendment rights because it ‘violates the [Supreme] Court’s longstanding teaching that ordinarily ‘no man can be a judge in his own case’ consistent with the Due Process Clause.’”
Amazon asks the court to:
Vacate CPSC’s January 16, 2025, Final Order, as well as all earlier orders, “as arbitrary and capricious, contrary to law, in excess of statutory authority, and contrary to constitutional right”;
Declare that Amazon is a third-party logistics provider, not a distributor, with regards to its FBA logistics service; and
Declare the Commissioners’ statutory removal protections unconstitutional.
More information on CPSC’s July 29, 2024, Decision and Order is available in our August 5, 2024, blog item.
Reminder: New York Cybersecurity Reporting Deadline April 15, 2025; New Regulations Effective May 1, 2025
Covered entities regulated by the New York State Department of Financial Services (NYDFS) must submit cybersecurity compliance forms by April 15, 2025. New sets of requirements for system monitoring and access privileges, enacted as part of 2023 amendments to the NYDFS cybersecurity regulations, will take effect on May 1 and November 1, 2025.
Quick Hits
Covered entities in New York must submit their annual cybersecurity compliance forms to the NYDFS by April 15, 2025, either certifying material compliance or acknowledging material noncompliance.
Starting May 1, 2025, new requirements will be implemented, including enhanced access management protocols, vulnerability management through automated scans, and improved monitoring measures to protect against cybersecurity threats.
In November 2023, NYDFS amended its comprehensive cybersecurity regulations with the changes set to take effect on a rolling basis over the following two years. Several amendments went into effect on November 1, 2024, and several more are set to take effect on May 1 and November 1, 2025.
The regulations apply to NYDFS-regulated entities, which include financial institutions, insurance companies, insurance agents and brokers, banks, trusts, mortgage banks, mortgage brokers and lenders, money transmitters, and check cashers. Certain large companies regulated by NYDFS (Class A companies) have additional requirements, while certain small businesses are exempt from specific regulations.
April 15 Annual Compliance Reporting Deadline
The NYDFS cybersecurity regulations require financial services companies and other covered entities to file annual notices of compliance to the superintendent of NYDFS by April 15, 2025, covering the prior calendar year. Under the amended regulations, covered entities must submit either a certification of material compliance with the cybersecurity requirements or an acknowledgment of noncompliance. In the acknowledgment of noncompliance, covered entities must (1) acknowledge the entity did not materially comply, (2) identify all sections of the regulations with which the entity has not complied, and (3) provide a “remediation timeline or confirmation that remediation has been completed.”
Covered entities must submit the certification or acknowledgment electronically using the NYDFS portal and the form on the NYDFS website.
New Requirements Effective May 1, 2025
Several requirements of the amended NYDFS cybersecurity regulations take effect on May 1, 2025, for nonexempt covered entities. Class A companies are subject to additional requirements that are not addressed below.
Access Privileges and Management
The amended regulations will require covered entities to limit user access privileges based on job function, limit the number and use of privileged accounts, periodically (but at least annually) review user access privileges, disable or securely configure protocols that permit remote control of devices, and “promptly” terminate accounts after a user’s departure. The regulations further require covered entities to implement a written password policy that meets industry standards.
Vulnerability Management
In addition to penetration testing, the amended regulations will require covered entities to perform “automated scans of information systems” and manual review of systems not covered by such scans to determine potential vulnerabilities.
System Monitoring
The amended regulations will require covered entities to implement “risk-based controls designed to protect against malicious code.” This includes monitoring and filtering web traffic and email to block malicious code.
New Requirements Effective November 1, 2025
The final batch of requirements under the amended cybersecurity regulations takes effect on November 1, 2025. Covered entities will be required to implement multifactor authentication for all individuals accessing the entity’s information systems. If the entity has a chief information security officer (CISO), the CISO “may approve in writing the use of reasonably equivalent or more secure compensating controls,” which must be reviewed at least annually.
Additionally, covered entities will be required to “implement written policies and procedures designed to produce and maintain a complete, accurate and documented asset inventory of the covered entity’s information systems.” The policies will be required to include methods to track information for each asset and “the frequency required to update and validate” the entity’s asset inventory.
Next Steps
Covered entities may want to take steps to comply with the April 15 compliance reporting deadline and the next round of cybersecurity requirements, which will take effect on May 1, 2025. Additional requirements for certain written policies and procedures and the implementation of multifactor authentication are set to take effect on November 1, 2025.
President Trump Fires Two Democratic FTC Commissioners
Key Takeaways
Potential Legal Battle Over Presidential Authority. President Trump’s firing of the two Democratic FTC commissioners challenges both longstanding and recent U.S. Supreme Court precedent interpreting the FTC Act, which permits the president to remove FTC commissioners only for “inefficiency, neglect of duty or malfeasance in office.” Advocates of the so-called “unitary executive” theory have reportedly been seeking grounds to challenge the limit on presidential power presented by Humphrey’s Executor and Seila Law, hoping the currently constituted Supreme Court might overrule this longstanding precedent.
Impact on FTC’s Composition and Competition Enforcement Direction. With the removal of the two Democratic commissioners and the anticipated confirmation of Republican nominee Mark Meador, the FTC will have a 3–0 Republican majority. This will give these Republican appointees complete control over approving future enforcement actions and also empower them to abandon existing FTC actions that they may not support, such as defending the FTC’s noncompete rule.
Consumer Protection Focus. Despite the Commission’s composition changes, the FTC’s consumer protection mission is expected to remain largely unchanged under Chairman Ferguson. The agency will likely focus on enforcing existing laws rather than pursuing new rulemaking efforts, particularly in privacy, data security and technology. Children’s privacy will continue to be a priority.
On March 18, President Donald Trump fired the Democratic commissioners, Rebecca Slaughter and Alvaro Bedoya, from the Federal Trade Commission (FTC). This leaves two Republicans, Chairman Andrew Ferguson and Melissa Holyoak, and a Republican nominee, Mark Meador. However, the dismissal of two Democratic commissioners runs contrary to decades of precedent at the FTC and apparently tees up a battle over presidential control over so-called “independent” federal agencies that seems headed to the U.S. Supreme Court.
The FTC is an executive agency, but it is called “independent” because it is not under the oversight of a cabinet secretary. Since its inception in 1915, it has operated independently with five commissioners — by statute, no more than three from one party (i.e., the president’s party) and two from the other party. According to the FTC Act, the president may nominate the commissioners, who are then subject to Senate approval. The commissioners serve staggered terms, so they are not hired or fired simultaneously and often work across administrations. Traditionally, however, upon a change of administration, the current chair resigns to enable the president to appoint a new commissioner, giving the president’s party a majority on the Commission.
The independent agency rubric, under which the FTC has operated for many decades, has more recently come under fire for the insulation it affords from presidential oversight. If the rubric is constitutional, the president may not fire commissioners absent “cause.” And although the president may appoint other agency heads purely from his own party, for the FTC he must maintain a split-party Commission, which functions independently.
White House to Approve Regulations from Independent Agencies
In February, President Trump issued an Executive Order to “rein in” independent agencies. The Order states that Article II of the U.S. Constitution vests all executive power in the president, meaning that all executive branch officials and employees are subject to his supervision. Accordingly, all agencies must: (1) submit draft regulations for White House review — with no carve-out for so-called “independent agencies,” except for the monetary policy functions of the Federal Reserve; and (2) consult with the White House on their priorities and strategic plans, and the White House will set their performance standards. The Office of Management and Budget (OMB) is tasked with budget oversight, and the president and attorney general (subject to the president’s supervision and control) will provide authoritative interpretations of law for the executive branch.
Humphrey’s Executor Precedent and Presidential Oversight
The president’s attempt to fire the two Democratic commissioners appears to contravene the language of the FTC Act itself as well as old and new Supreme Court precedent interpreting its terms. The FTC Act limits the grounds on which the president may fire an FTC commissioner to “inefficiency, neglect of duty or malfeasance in office.” In Humphrey’s Executor v. United States, 295 U.S. 602 (1935), the Supreme Court addressed a challenge to President Franklin D. Roosevelt’s firing of FTC Commissioner William Humphrey for political reasons. The Court distinguished between the president’s authority over ordinary executive officers and what it called “quasi-legislative” or “quasi-judicial” officers. The Court held that FTC commissioners fell into the latter class and could be removed only in accordance with the statutory conditions enacted by Congress.
In 2020, in Seila Law LLC v. Consumer Financial Protection Bureau, the Supreme Court affirmed that this is one of the only remaining narrow restrictions on a president’s authority to remove officers. Humphrey’s Executor, it explained, allowed “Congress to give for-cause removal protections to a multimember body of experts, balanced along partisan lines, that performed legislative and judicial functions and was said not to exercise any executive power,” essentially limiting it to the facts of that case. 591 U.S. 197, 216 (2020).
Advocates of the so-called “unitary executive” theory have reportedly been seeking a basis to challenge the limit on presidential power embodied in Humphrey’s Executor and, more recently, Seila Law, hoping that the currently constituted Supreme Court might overrule this longstanding precedent. Indeed, Humphrey’s Executor only narrowly survived the Court’s 5–4 decision in Seila Law in 2020. Another case already in progress involves the president’s firing of National Labor Relations Board (NLRB) members, which is currently before the D.C. Circuit and seems to have a head start on the objective of putting Humphrey’s Executor before the Supreme Court again. A new Supreme Court ruling on this issue could have profound implications for all independent agencies — not just the FTC.
What Does this Mean for the FTC?
In the near term, the firings seem to portend the following at the FTC:
Commissioners Slaughter and Bedoya have publicly stated that they will challenge their firings in court and that, for now, they consider themselves still part of the FTC. While their immediate status is unclear, it does not seem likely that they would be able to continue in their positions without court intervention. Chairman Ferguson supported President Trump’s authority to remove them and had already referred to Slaughter and Bedoya as “former commissioners.”
Practically speaking, the firings could impact the FTC’s decisions in certain competition enforcement actions. Assuming nominee Mark Meador is confirmed and joins the Commission in the near future, the FTC will have a 3–0 Republican majority. This will give these Republican appointees complete control over approving future enforcement actions. The change may also empower the new Republican majority to abandon existing FTC actions they may not support, such as defending the FTC’s noncompete rule, currently on appeal before the Fifth Circuit and the Eleventh Circuit.
We see little change to the agency’s consumer protection mission and priorities, as articulated by Chairman Ferguson. We anticipate the agency will shift its focus to enforcing existing laws rather than pursuing rulemaking efforts related to privacy, data security and technology. However, the advocacy of the former Democratic commissioners to regulate data brokers, biometric technology and artificial intelligence (AI) may take a different shape now. Ferguson will likely focus on enforcing existing laws against AI without hindering innovation in this area. Ferguson has also been more aggressive in one respect, proposing an inquiry into tech platforms’ alleged “censorship.” Children’s privacy continues to be a priority, and Chairman Ferguson has already indicated that the FTC may make more changes to the recently amended Children’s Online Privacy Protection Rule.
FBI Warns of Hidden Threats in Remote Hiring: Are North Korean Hackers Your Newest Employees?
The Federal Bureau of Investigation (FBI) recently warned employers of increasing security risks from North Korean workers infiltrating U.S. companies by obtaining remote jobs in order to steal proprietary information and extort money to fund activities of the North Korean government. Companies that rely on remote hires face a tricky balancing act: implementing rigorous job applicant vetting procedures while ensuring those new processes comply with state and federal laws governing automated decisionmaking and background checks or consumer reports.
Quick Hits
The FBI issued guidance regarding the growing threat from North Korean IT workers infiltrating U.S. companies to steal sensitive data and extort money, urging employers to enhance their cybersecurity measures and monitoring practices.
The FBI advised U.S. companies to improve their remote hiring procedures by implementing stringent identity verification techniques and educating HR staff on the risks posed by potential malicious actors, including the use of AI to disguise identities.
Imagine discovering your company’s proprietary data posted publicly online, leaked not through a sophisticated hack but through a seemingly legitimate remote employee hired through routine practices. This scenario reflects real threats highlighted in a series of recent FBI alerts: North Korean operatives posing as remote employees at U.S. companies to steal confidential data and disrupt business operations.
On January 23, 2025, the FBI issued another alert updating previous guidance to warn employers of “increasingly malicious activity” from the Democratic People’s Republic of Korea, or North Korea, including “data extortion.” The FBI said North Korean information technology (IT) workers have been “leveraging unlawful access to company networks to exfiltrate proprietary and sensitive data, facilitate cyber-criminal activities, and conduct revenue-generating activity on behalf of the regime.”
Specifically, the FBI warned that “[a]fter being discovered on company networks, North Korean IT workers” have extorted companies, holding their stolen proprietary data and code for ransom and have, in some cases, released such information publicly. Some workers have opened user accounts on code repositories, representing what the FBI described as “a large-scale risk of theft of company code.” Additionally, the FBI warned such workers “could attempt to harvest sensitive company credentials and session cookies to initiate work sessions from non-company devices and for further compromise opportunities.”
The alert came the same day the U.S. Department of Justice (DOJ) announced indictments against two North Korean nationals and two U.S. nationals alleging they engaged in a “fraudulent scheme” to obtain remote work and generate revenue for the North Korean government, including to fund its weapons programs.
“FBI investigation has uncovered a years-long plot to install North Korean IT workers as remote employees to generate revenue for the DPRK regime and evade sanctions,” Assistant Director Bryan Vorndran of the FBI’s Cyber Division said in a statement. “The indictments … should highlight to all American companies the risk posed by the North Korean government.”
Data Monitoring
The FBI recommended that companies take steps to improve their data monitoring, including:
“Practice the Principle of Least Privilege” on company networks.
“Monitor and investigate unusual network traffic,” including remote connections and remote desktops.
“Monitor network logs and browser session activity to identify data exfiltration.”
“Monitor endpoints for the use of software that allows for multiple audio/video calls to take place concurrently.”
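The log-review recommendation above can be approximated by aggregating outbound transfer volumes per internal host and flagging outliers for investigation. A minimal sketch (the 1 GB threshold and host addresses are illustrative assumptions, not FBI-recommended values):

```python
from collections import defaultdict

def flag_exfiltration(log_entries, threshold_bytes: int = 10**9):
    """Sum outbound bytes per source host from (host, bytes_sent) log
    entries and return a sorted list of hosts over the threshold."""
    totals = defaultdict(int)
    for host, sent in log_entries:
        totals[host] += sent
    return sorted(host for host, total in totals.items()
                  if total > threshold_bytes)
```

In practice, this kind of aggregation runs inside a SIEM or network-monitoring platform; the sketch only illustrates the underlying logic of baselining outbound volume per host.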
Remote Hiring Processes
The FBI further recommended that employers strengthen their remote hiring processes to identify and screen potential bad actors. The recommendations come amid reports that North Korean IT workers have used strategies to defraud companies in hiring, including stealing the identities of U.S. individuals, hiring U.S. individuals to stand in for the North Korean IT workers, or using artificial intelligence (AI) or other technologies to disguise their identities. These techniques include “using artificial intelligence and face-swapping technology during video job interviews to obfuscate their true identities.”
The FBI recommended employers:
implement processes to verify identities during interviews, onboarding, and subsequent employment of remote workers;
educate human resources (HR) staff and other hiring managers on the threats of North Korean IT workers;
review job applicants’ email accounts and phone numbers for duplicate contact information among different applicants;
verify third-party staffing firms and those firms’ hiring practices;
ask “soft” interview questions about specific details of applicants’ locations and backgrounds;
watch for typos and unusual nomenclature in resumes; and
complete the hiring and onboarding process in person as much as possible.
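The recommendation to review applicants’ contact details for duplicates can be sketched as a simple cross-applicant index (the applicant records in the test data are hypothetical):

```python
from collections import defaultdict

def duplicate_contacts(applicants):
    """Given (name, email, phone) tuples, return a mapping from each
    email or phone number shared by multiple applicants to the set of
    applicant names that used it."""
    by_contact = defaultdict(set)
    for name, email, phone in applicants:
        by_contact[email.lower()].add(name)
        by_contact[phone].add(name)
    return {contact: names for contact, names in by_contact.items()
            if len(names) > 1}
```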
Legal Considerations
New vendors have entered the marketplace offering tools that purport to solve these remote hiring problems; however, companies may want to consider the legal pitfalls — and associated liability — that these processes may entail. These considerations include, but are not limited to:
Fair Credit Reporting Act (FCRA) Implications: If a third-party vendor evaluates candidates based on personal data (e.g., by scraping public records or credit history), the vendor’s evaluation may be considered a “consumer report,” triggering the FCRA’s disclosure, authorization and adverse action requirements. The Consumer Financial Protection Bureau (CFPB) issued guidance in September 2024 taking that position as well, and to date, that guidance does not appear to have been rolled back.
Antidiscrimination Laws: These processes, especially as they might pertain to increased scrutiny or outright exclusion of specific demographics or countries, could disproportionately screen out protected groups in violation of Title VII of the Civil Rights Act of 1964 (e.g., causing disparate impact based on race, sex, etc.), even if unintentional. This risk exists regardless of whether the processes involve automated or manual decisionmaking; employers may be held liable for biased outcomes from AI just as if human decisions caused them—using a third-party vendor’s tool is not a defense.
Privacy Laws: Depending on the jurisdiction, companies’ vetting processes may implicate transparency requirements under data privacy laws, such as the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR) in the European Economic Area (EEA), when using third-party sources for candidate screening. Both laws require clear disclosure to applicants about the types of personal information collected, including information obtained from external background check providers, and how this information will be used and shared.
Automated Decisionmaking Laws: In the absence of overarching U.S. federal legislation, states are increasingly filling in the gap with laws regarding automated decisionmaking tools, covering everything from bias audits to notice, opt-out rights, and appeal rights. If a candidate is located in a foreign jurisdiction, such as in the EEA, the use of automated decisionmaking tools could trigger requirements under both the GDPR and the recently enacted EU Artificial Intelligence Act.
It is becoming increasingly clear that multinational employers cannot adopt a one-size-fits-all vetting algorithm. Instead, companies may need to calibrate their hiring tools to comply with the strictest applicable laws or implement region-specific processes. For instance, if a candidate is in the EEA, GDPR and EU AI Act requirements (among others) apply to the candidate’s data even if the company is U.S.-based, which may necessitate, at a minimum, turning off purely automated rejection features for EU applicants and maintaining separate workflows and/or consent forms depending on the candidate’s jurisdiction.
Next Steps
The FBI’s warning about North Korean IT workers infiltrating U.S. companies is the latest involving security risks from foreign governments and foreign actors to companies’ confidential data and proprietary information. Earlier this year, the U.S. Department of Homeland Security published new security requirements restricting access to certain transactions by individuals or entities operating in six “countries of concern,” including North Korea.
Employers, particularly those hiring remote IT workers, may want to review their hiring practices, identity-verification processes, and data monitoring, considering the FBI’s warnings and recommendations. Understanding and addressing these risks is increasingly vital, especially as remote hiring continues to expand across industries.