CFPB Seeks to Vacate Open Banking Rule

On May 23, the CFPB notified a Kentucky federal court that it now considers its own 2024 open banking rule “unlawful” and plans to set the rule aside. The Bureau announced its intent to seek summary judgment against the rule, which was issued under Section 1033 of the Dodd-Frank Act to promote consumer-authorized data sharing with third parties.
The original rule (previously discussed here), issued in October 2024 under former Director Rohit Chopra, was designed to implement Section 1033’s mandate by requiring financial institutions to provide consumers and authorized third parties with access to their transaction data in a secure and standardized format. The rule aimed to promote competition and consumer control over financial information by enabling the use of fintech apps and digital tools to manage personal finances.
The lawsuit, filed in the U.S. District Court for the Eastern District of Kentucky, challenged the rule on several grounds, including claims that the CFPB exceeded its statutory authority and imposed obligations not contemplated by Congress. Key points raised in the challenge include:

Alleged lack of CFPB authority. Plaintiffs argue the Bureau overstepped by mandating free, comprehensive data access and imposing new compliance burdens without clear congressional authorization.
Interference with industry-led initiatives. The plaintiffs asserted that the rule would disrupt private-sector open banking frameworks already in place, which they claim serve hundreds of millions of Americans.
Concerns about data security and consumer harm. The rule’s opponents caution that mandating third-party data access could increase risks of misuse or breaches.

Putting It Into Practice: While the litigation had previously been paused to give the agency time to evaluate the regulation, the Bureau’s latest filing confirms that Acting Director Russell Vought no longer supports the rule and now views it as unlawful. This move effectively puts the rule’s validity in the hands of the court, even as compliance deadlines—set to begin April 1, 2026—technically remain in place unless the rule is vacated. Given the rule’s prior bipartisan support and its importance to fintech stakeholders, market participants should continue monitoring this litigation closely for further developments.

SEC Signals Reevaluation of CAT Reporting Amid Broader Transparency and Regulatory Reform Efforts

Securities and Exchange Commission (SEC) Chairman Paul S. Atkins recently directed SEC staff to conduct a review of the Consolidated Audit Trail (CAT), focusing on escalating costs, reporting requirements, and the cybersecurity risks stemming from the collection of sensitive data.[1] This directive aligns with Chairman Atkins’ expressed priorities to return to principled regulation, support market innovation and evolution, and reduce unnecessary compliance burdens. Among other things, Chairman Atkins cited CAT’s “appetite for data and computing power,” noting annual costs approaching $250 million, ultimately borne by investors, as the rationale for this reevaluation.[2] He also supported Commissioner Mark T. Uyeda’s efforts to secure an exemption from the requirement to report certain personally identifiable information (PII) of natural persons to CAT.[3]
This CAT reevaluation is part of a broader market-friendly agenda at the SEC. For example, Chairman Atkins has identified as a goal of his tenure the development of a rational regulatory framework for crypto asset markets, covering the issuance, custody, and trading of crypto assets while discouraging bad actors from violating the law. Chairman Atkins continues to emphasize the importance of regulatory frameworks that are “fit-for-purpose” with “clear rules of the road” for market participants to facilitate capital formation and protect investors.
While no changes to CAT reporting obligations are yet in effect beyond the PII exemption, market participants should prepare for shifts in how the SEC approaches data collection and cost allocation.
Footnotes
[1] Paul S. Atkins, Chairman, SEC, “Prepared Remarks Before SEC Speaks” (May 19, 2025), https://www.sec.gov/newsroom/speeches-statements/atkins-prepared-remarks-sec-speaks-051925.
[2] Id.
[3] Paul S. Atkins, Chairman, SEC, “Testimony Before the United States House Appropriations Subcommittee on Financial Services and General Government” (May 20, 2025), https://www.sec.gov/newsroom/speeches-statements/atkins-testimony-fsgg-052025. See Katten’s Quick Reads post on the exemptive relief for reporting personally identifiable information here.

Chapter 93A and the Limits of Consumer Protection: Lessons from Wells Fargo Bank, N.A. v. Coulsey

In a long-running Massachusetts foreclosure case, Wells Fargo Bank, N.A. v. Coulsey, the Massachusetts Appeals Court weighed in on the applicability and limits of Chapter 93A. The decision provides guidance as to how—and when—Chapter 93A claims may be brought, and when repeated litigation crosses the line into claim preclusion.
The dispute began in 2007 when the plaintiff purchased a home with a loan and mortgage she would soon default on. Over the next 17 years, the plaintiff engaged in a prolonged legal battle with multiple mortgage holders, ultimately culminating in an eviction. The plaintiff repeatedly but unsuccessfully invoked Chapter 93A in an attempt to block foreclosure and eviction. Her claims were first dismissed without prejudice in federal court, her later attempts to revive or amend the 93A claims were rejected, and the claims were dismissed again in a collateral state-court action.
The Appeals Court affirmed the state-court dismissal and issued a clear rebuke to repeatedly raising Chapter 93A claims based on the same factual nucleus. The Appeals Court emphasized the following:

Prior Opportunity. The plaintiff was allowed in 2016 to amend her complaint to better articulate 93A violations. That was the moment to raise all related theories. The court found that “any new basis or theory supporting her c. 93A claim could have been brought at that time.”
Ongoing Harm v. Ongoing Claim. Plaintiff argued that her 93A claim should be revived because defendant’s alleged misconduct was “ongoing.” The court rejected that logic, stating: “[Plaintiff’s] assertion that c. 93A violations are ongoing and therefore could not have been advanced in prior litigation is contrary to the purpose of res judicata…” In short, the passage of time or continued impact did not give the plaintiff the right to relitigate previously dismissed claims.
Chapter 93A Not Exempt from Res Judicata. Importantly, the court reiterated that Chapter 93A claims—like any civil claim—are subject to rules of finality. If a claim is dismissed with prejudice or could have been litigated earlier, it cannot be brought again just by rebranding it or restating the facts.

Implications for Companies
A litigant does not get endless chances to reframe Chapter 93A claims. If the allegations asserted are vague or conclusory, challenging them in a motion to dismiss is appropriate even under the broad reach of Chapter 93A. The decision underscores that consumer rights are balanced against the need for closure. Once courts have ruled on a matter, even the broad protections of Chapter 93A will not open the door to re-litigating the same claims under a new heading.

CFPB Drops Lawsuit Against Lease-to-Own Fintech Following Adverse Credit Ruling

On May 27, the CFPB filed a notice of dismissal with prejudice in its lawsuit against a lease-to-own fintech provider. The lawsuit, filed in July 2023, alleged that the company’s rental-purchase agreements violated several federal consumer financial laws, including the Truth in Lending Act (TILA), the Electronic Fund Transfer Act (EFTA), the Fair Credit Reporting Act (FCRA), and the Consumer Financial Protection Act (CFPA).
In its original complaint, the CFPB alleged that the company targeted consumers with limited access to traditional credit—internally referred to as “ALICE” consumers (Asset-Limited, Income-Constrained, Employed)—with financing agreements that often required consumers to pay more than twice the cash price of the financed merchandise over a 12-month period. The Bureau claimed the agreements were misleadingly presented as short-term lease-purchase options, when they were in fact credit arrangements subject to federal disclosure and servicing requirements.
Specifically, the complaint’s allegations against the fintech provider included:

Failure to provide required TILA disclosures. The Bureau asserted that the company’s agreements qualified as credit sales under TILA and Regulation Z, triggering disclosure obligations the company allegedly failed to meet.
Conditioning credit on preauthorized electronic fund transfers. The CFPB alleged the company violated EFTA and Regulation E by requiring consumers to authorize recurring ACH debits as a condition of receiving financing.
Deceptive and abusive marketing and contracting practices. According to the complaint, the company misled consumers about the cost and structure of the agreements, impeded consumers from terminating repayment obligations, and failed to ensure consumers had an opportunity to review agreement terms before signing.
Unfair collection and credit reporting practices. The Bureau alleged that the company threatened actions it did not take, sent payment demands to consumers who had no outstanding obligations, and lacked reasonable written policies for furnishing consumer data, in violation of FCRA and Regulation V.

Putting It Into Practice: The dismissal of this case is another clear example of the CFPB stepping back from enforcement actions initiated under the previous administration (previously discussed here, here, and here). Although the Bureau has scaled back its use of certain enforcement powers, state regulators and other federal agencies have not slowed their efforts to enforce UDAAP violations (previously discussed here and here). Financial institutions should ensure their compliance programs are up to date to avoid scrutiny from state and federal regulators.

Privacy Tip #445 – Apple Users: Update to iOS 18.5

Never underestimate an operating system update from any mobile phone manufacturer. This week, Apple issued iOS 18.5, which provides enhancements to the user experience but also fixes bugs and security flaws.
This update fixes over 30 security bugs. The sooner you update to the new version, the better from a security standpoint. The flaws the patch addresses include both known vulnerabilities and zero-days, some of which may already be exploited in the wild.
If you haven’t updated to iOS 18.5, plug your phone in now and install the update as soon as possible, not only for the enhancements but, most importantly, for the security fixes. If you don’t have your phone set to install updates automatically, you may wish to enable that feature in your settings, as it is a good way to stay on top of new releases in a timely manner.

U.S. Retailers Bracing for Scattered Spider Attacks

Google sent out a warning that the cybercriminal group Scattered Spider is targeting U.S.-based retailers. Scattered Spider is believed to have been responsible for the recent attack on Marks & Spencer in the U.K. A security researcher at Google has posited that Scattered Spider concentrates attacks on one industry at a time and predicts that it will continue to target the retail sector, warning that “US retailers should take note. These actors are aggressive, creative, and particularly effective at circumventing mature security programs.”
Mandiant issued a threat intelligence report on May 6, 2025, highlighting Scattered Spider’s social engineering methods and “brazen communication with victims.” It has seen Scattered Spider target specific sectors, such as financial services and food services. Recently, Scattered Spider has been seen deploying DragonForce ransomware. The operators of DragonForce have claimed control of RansomHub.
Mandiant has published recommendations on proactive hardening against the tactics used by Scattered Spider, including prioritizing:

Identity
Endpoints
Applications and Resources
Network Infrastructure
Monitoring / Detections

Although retailers should be on high alert with these warnings, all industries would do well to review Mandiant’s recommendations, as they are timely and effective.

Janie & Jack’s Alleged CIPA Violations Consolidated, Thus Avoiding Over 2,000 Individual Arbitration Claims

This week, the U.S. District Court for the Northern District of California ruled in favor of children’s clothing retailer Janie & Jack, which sought to enjoin over 2,400 individual arbitration claims resulting from alleged violations of the California Invasion of Privacy Act (CIPA). Now, Janie & Jack will confront a single privacy class action suit as opposed to the more than 2,400 individual arbitration claims by its website visitors.
The parties notified the court of their agreement not to pursue arbitration but rather to proceed through a consolidated class action, and Janie & Jack voluntarily dismissed its lawsuit, having averted the more than 2,400 individual claims by consumers.
Website visitors accused Janie & Jack of violating CIPA and the federal Wiretap Act through its website’s information gathering and tracking practices (also known as trap and trace claims). Janie & Jack argued that such claims are inadequate because they lack allegations that the consumers created any accounts or conducted any transactions on the website, or that Janie & Jack breached any of its online terms.
Further, although Janie & Jack’s website terms include an arbitration clause, it claimed that the claimants never assented to the contract.
In its response, the retailer emphasized its intent to prevent the growing use of arbitration agreements as “weapons” by plaintiffs’ attorneys, a tactic that thwarts their intended purpose: the efficient, effective, and timely resolution of claims.
This case highlights a common practice: thousands of individuals, all represented by the same counsel, simultaneously file, or threaten to file, arbitration demands with nearly identical claims.
These allegations mark yet another instance of the plaintiffs’ bar’s growing push for “trap and trace” claims. The theory leverages existing wiretap laws (particularly CIPA in California) to argue that common online tracking technologies like cookies, pixels, and website analytics tools essentially function as trap and trace devices, allowing plaintiffs to file complaints against companies for collecting user data without proper consent. Even though these laws were originally designed for traditional phone lines, not the internet, the theory opens up a large pool of potential plaintiffs and potentially significant damages.
If you haven’t heard it enough, here it is again: NOW is the time to assess your website’s online trackers and update your cookie consent management platform, website privacy policy, and consumer data collection processes.
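For teams starting that assessment, the sketch below (illustrative only, in Python) fetches a page and lists the third-party hosts serving scripts, pixels, and iframes, which can then be compared against the vendor list declared in your cookie consent platform. The URL is a placeholder, and a static fetch will miss dynamically injected trackers, so treat this as a first pass rather than a complete audit.

    # Illustrative first-pass tracker inventory (placeholder URL, static HTML only).
    # Lists third-party hosts serving scripts, images/pixels, and iframes so the
    # results can be compared against the site's declared vendor list.
    from html.parser import HTMLParser
    from urllib.parse import urlparse
    from urllib.request import urlopen

    class AssetCollector(HTMLParser):
        """Collects src attributes from script, img, and iframe tags."""
        def __init__(self):
            super().__init__()
            self.sources = []

        def handle_starttag(self, tag, attrs):
            if tag in ("script", "img", "iframe"):
                src = dict(attrs).get("src")
                if src:
                    self.sources.append(src)

    def third_party_hosts(page_url: str) -> set[str]:
        first_party = urlparse(page_url).hostname
        html = urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
        collector = AssetCollector()
        collector.feed(html)
        hosts = set()
        for src in collector.sources:
            host = urlparse(src).hostname
            # Relative URLs (no hostname) are first-party and skipped.
            if host and host != first_party:
                hosts.add(host)
        return hosts

    if __name__ == "__main__":
        for host in sorted(third_party_hosts("https://www.example.com")):  # placeholder
            print(host)

A full audit typically requires a headless browser that records actual network requests after consent choices are exercised, but even a simple inventory like this often surfaces trackers no one remembers installing.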
This article was co-authored by Mark Abou Naoum.

State Data Minimization Laws Spark Compliance Uncertainty

A new wave of state consumer privacy laws focused on limiting data collection is creating anxiety among businesses—and Maryland is leading the charge. The Maryland Online Data Privacy Act (MODPA), set to take effect in October 2025, requires companies to collect only data that is “reasonably necessary and proportionate” to their stated purposes. However, with no official guidance for compliance from the Maryland Attorney General, businesses are left guessing.
Under MODPA’s data minimization requirement, businesses should avoid collecting or processing more data than is necessary to provide a specific product or service to a consumer. In addition to the limited data collection requirement, MODPA also requires:

Stricter Data Collection Practices for Sensitive Data: The data minimization requirements are more stringent for sensitive data, such as health information, religious beliefs, and genetic data.
Ban on the Sale of Sensitive Data: The law prohibits the sale of sensitive data unless it is strictly necessary to provide or maintain a requested product or service. 
Explicit Consent: A business may not process personal information for a purpose other than the purpose(s) disclosed to the consumer at the time of collection unless the consumer provides explicit consent. 
Limited Retention: A business may not retain consumer data for longer than necessary to fulfill the purpose for which it was collected (i.e., now is the time to update or implement your retention program; a minimal sketch follows this list).
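To make the retention point concrete, here is a minimal Python sketch of an automated retention sweep. The table, column, and purpose names are all hypothetical, and the actual retention periods must come from a documented retention schedule tied to the purposes disclosed at collection; this is a sketch of the mechanism, not a compliance program.

    # Minimal retention-sweep sketch. The "consumers" table and its
    # "purpose"/"collected_at" (ISO-8601 UTC) columns are hypothetical
    # names used for illustration only.
    import sqlite3
    from datetime import datetime, timedelta, timezone

    # Hypothetical purpose-to-retention mapping; real periods must come from
    # a documented retention schedule tied to disclosed purposes.
    RETENTION = {
        "order_fulfillment": timedelta(days=365),
        "marketing": timedelta(days=180),
    }

    def sweep(conn: sqlite3.Connection) -> int:
        """Delete records older than the retention period for their purpose."""
        deleted = 0
        now = datetime.now(timezone.utc)
        for purpose, period in RETENTION.items():
            cutoff = (now - period).isoformat()
            cur = conn.execute(
                "DELETE FROM consumers WHERE purpose = ? AND collected_at < ?",
                (purpose, cutoff),
            )
            deleted += cur.rowcount
        conn.commit()
        return deleted

Scheduling a job like this, and logging what it deletes, also creates the documentation trail regulators tend to ask for when they test a data minimization program.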

This shift towards data minimization marks a departure from the more familiar “notice and choice” model, pushing companies to operationalize data minimization in ways that may significantly alter their data practices. While some businesses, particularly those already operating under stricter global standards like the European Union’s General Data Protection Regulation (GDPR), may be better prepared, others are weighing whether to reduce data collection or even scale back operations in certain states.
Companies developing or utilizing generative artificial intelligence are especially concerned, as these laws may limit access to large, diverse datasets required to train their models. Still, some see this as an opportunity to innovate with privacy-first technologies, such as synthetic data.
States like Maine, Massachusetts, Connecticut, and Minnesota are considering similar laws, signaling a growing trend. But as businesses await clearer definitions and enforcement standards, the central question remains: Can regulators strike the right balance between protecting privacy and supporting innovation?

Take it Down Act Signed into Law, Offering Tools to Fight Non-Consensual Intimate Images and Creating a New Image Takedown Mechanism

Law establishes national prohibition against nonconsensual online publication of intimate images of individuals, both authentic and computer-generated.
First federal law regulating AI-generated content.
Creates requirement that covered platforms promptly remove depictions upon receiving notice of their existence and a valid takedown request.
For many online service providers, complying with the Take It Down Act’s notice-and-takedown requirement may warrant revising their existing DMCA takedown notice provisions and processes.
Another carve-out to CDA immunity? More like a dichotomy of sorts…

On May 19, 2025, President Trump signed the bipartisan-supported Take it Down Act into law. The law prohibits any person from using an “interactive computer service” to publish, or threaten to publish, nonconsensual intimate imagery (NCII), including AI-generated NCII (colloquially known as revenge pornography or deepfake revenge pornography). Additionally, the law requires that, within one year of enactment, social media companies and other covered platforms implement a notice-and-takedown mechanism that allows victims to report NCII. Platforms must then remove properly reported imagery (and any known identical copies) within 48 hours of receiving a compliant request.
Support for the Act and Concerns
The Take it Down Act attempts to fill a void in the policymaking space, as many states had not enacted legislation regulating sexual deepfakes when it was signed into law. The Act has been described as the first major federal law that addresses harm caused by AI. It passed the Senate in February of this year by unanimous consent and passed the House of Representatives in April by a vote of 409-2. It also drew the support of many leading technology companies.
Despite receiving almost unanimous support in Congress, some digital privacy advocates have expressed concerns that the new notice-and-takedown mechanism could have unintended consequences for digital privacy in general. For example, some commentators have suggested that the statute’s takedown provision is written too broadly and lacks sufficient safeguards against frivolous requests, potentially leading to the removal of lawful content, especially given the short 48-hour window to act following a takedown request. [Note: In 2023, we similarly wrote about abuses of the takedown provision of the Digital Millennium Copyright Act]. In addition, some have argued that the law could undermine end-to-end encryption by possibly forcing such companies to “break” encryption to comply with the removal process. Supporters of the law have countered that private encrypted messages would likely not be considered “published” under the text of the statute (which uses the term “publish” as opposed to “distribute”).
Criminalization of NCII Publication for Individuals
The Act makes it unlawful for any person “to use an interactive computer service to knowingly publish an intimate visual depiction of an identifiable individual” under certain circumstances.[1] It also prohibits threats involving the publishing of NCII and establishes various criminal penalties. Notably, the Act does not distinguish between authentic and AI-generated NCII in its penalties section if the content has been published. Furthermore, the Act expressly states that a victim’s prior consent to the creation of the original image or its disclosure to another individual does not constitute consent for its publication.
New Notice-and-Takedown Requirement for “Covered Platforms”
Along with punishing individuals who publish NCII, the Take it Down Act requires covered platforms to create a notice-and-takedown process for NCII within one year of the law’s passage. Below are the main points for platforms to consider:

Covered Platforms. The Act defines a “covered platform” as a “website, online service, online application, or mobile application” that serves the public and either provides a forum for user-generated content (including messages, videos, images, games, and audio files) or regularly deals with NCII as part of its business.
Notice-and-Takedown Process. Covered platforms must create a process through which victims of NCII (or someone authorized to act on their behalf) can send notice to them about the existence of such material (including a statement indicating a “good faith belief” that the intimate visual depiction of the individual is nonconsensual, along with information to assist in locating the unlawful image) and can request its removal.
Notice to Users. Adding an additional compliance item to the checklist, the Act requires covered platforms to provide a “clear and conspicuous” notice of the Act’s notice and removal process, such as through a conspicuous link to another web page or disclosure.
Removal of NCII. Within 48 hours of receiving a valid removal request, covered platforms must remove the NCII and “make reasonable efforts to identify and remove any known identical copies.”
Enforcement. Compliance under this provision will be enforced by the Federal Trade Commission (FTC).
Safe Harbor. Under the law, covered platforms will not be held liable for “good faith” removal of content that is claimed to be NCII “based on facts or circumstances from which the unlawful publishing of an intimate visual depiction is apparent,” even if it is later determined that the removed content was lawfully published.

Compliance Note: For many online service providers, complying with the Take It Down Act’s notice-and-takedown requirement may warrant revising their existing DMCA takedown notice provisions and processes, especially if those processes have not been reviewed or updated for some time. Many “covered platforms” may rely on automated processes (or a combination of automated processing and targeted human oversight) to fulfill Take It Down Act requests and meet the related obligation to make “reasonable efforts” to identify and remove known identical copies. This may involve using tools for processing notices, removing content, and detecting duplicates. As a result, some providers should consider whether their existing takedown provisions should be amended to address these new requirements and how they will implement these new compliance items on the backend using the infrastructure already in place for the DMCA.
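As a purely illustrative example of the “identical copies” piece, the Python sketch below records a cryptographic digest of each removed file so byte-identical re-uploads can be flagged automatically. This is an assumption-laden sketch, not a description of any platform’s actual system: exact hashing catches only true identical copies, and re-encoded or cropped variants would require perceptual hashing plus human review.

    # Illustrative "known identical copy" detection. Stores only SHA-256
    # digests of removed files, so byte-identical re-uploads can be flagged;
    # altered variants (re-encoded, cropped) would need perceptual hashing
    # and human review, which this sketch does not attempt.
    import hashlib

    class TakedownRegistry:
        def __init__(self) -> None:
            self._removed: set[str] = set()

        def record_removal(self, content: bytes) -> str:
            """Called after removing content under a valid takedown request."""
            digest = hashlib.sha256(content).hexdigest()
            self._removed.add(digest)
            return digest

        def is_known_copy(self, content: bytes) -> bool:
            """Screen new uploads and existing files against removed digests."""
            return hashlib.sha256(content).hexdigest() in self._removed

One deliberate design choice here: the registry stores only digests, not the imagery itself, which avoids retaining the very content the Act requires platforms to remove.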
What about CDA Section 230?
Section 230 of the Communications Decency Act (“CDA”), 47 U.S.C. § 230, prohibits a “provider or user of an interactive computer service” from being held responsible “as the publisher or speaker of any information provided by another information content provider.” Courts have construed the immunity provisions in Section 230 broadly in a variety of cases arising from the publication of user-generated content.
Following enactment of the Take It Down Act, some important questions for platforms are: (1) whether Section 230 still protects platforms from actions related to the hosting or removal of NCII; and (2) whether FTC enforcement of the Take It Down Act’s platform notice-and-takedown process is blocked or limited by CDA immunity. 
At first blush, it might seem that the CDA would restrict enforcement against online providers in this area, as decisions regarding the hosting and removal of third-party content would necessarily treat a covered platform as a “publisher or speaker” of third-party content. However, a deeper examination of the text of the CDA suggests the answer is more nuanced.
It should be noted that the Good Samaritan provision of the CDA (47 U.S.C. § 230(c)(2)) could be used by online providers as a shield from liability for actions taken to proactively filter or remove third party NCII content or remove NCII at the direction of a user’s notice under the Take It Down Act, as CDA immunity extends to good faith actions to restrict access to or availability of material that the provider or user considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Moreover, the Take It Down Act adds its own safe harbor for online providers for “good faith disabling of access to, or removal of, material claimed to be a nonconsensual intimate visual depiction based on facts or circumstances from which the unlawful publishing of an intimate visual depiction is apparent, regardless of whether the intimate visual depiction is ultimately determined to be unlawful or not.”
Still, further questions about the reach of the CDA prove more intriguing. The Take It Down Act appears to create a dichotomy of sorts regarding CDA immunity in the context of NCII removal claims. Under the text of the CDA, it appears that immunity would not limit FTC enforcement of the Take It Down Act’s notice-and-takedown provision affecting “covered platforms.” To explore this issue, it’s important to examine the CDA’s exceptions, specifically 47 U.S.C. § 230(e)(1).
Effect on other laws
(1) No effect on criminal law
Nothing in this section shall be construed to impair the enforcement of section 223 or 231 of this title [i.e., the Communications Act], chapter 71 (relating to obscenity) or 110 (relating to sexual exploitation of children) of title 18, or any other Federal criminal statute.
Under the text of the CDA’s exception, Congress carved out Sections 223 and 231 of the Communications Act from the CDA’s scope of immunity. Since the Take It Down Act states that it will be codified at Section 223 of the Communications Act of 1934 (i.e., 47 U.S.C. § 223(h)), it appears that platforms would not enjoy CDA protection from FTC civil enforcement actions based on the agency’s authority to enforce the Act’s requirements that covered platforms “reasonably comply” with the new Take It Down Act notice-and-takedown obligations.
However, that is not the end of the analysis for platforms. Interestingly, it would appear that platforms would generally still retain CDA protection (subject to any exceptions) from claims related to the hosting or publishing of third-party NCII that has not been the subject of a Take It Down Act notice, since the Act’s requirements for removal of NCII by platforms would not be implicated without a valid removal request.[2] Similarly, a platform could make a strong argument that it retains CDA immunity from any claims brought by an individual (rather than the FTC) for failing to reasonably comply with a Take It Down Act notice. That said, it is conceivable that litigants – or even state attorneys general – might attempt to frame such legal actions under consumer protection statutes, as the Take It Down Act states that a failure to reasonably comply with an NCII takedown request is an unfair or deceptive trade practice under the FTC Act. Even in such a case, platforms would likely contend that such claims by these non-FTC parties are merely claims based on a platform’s role as publisher of third-party content and are therefore barred by the CDA.
Ultimately, most, if not all, platforms will likely make best efforts to reasonably comply with the Take It Down Act, thus avoiding the above contingencies. Yet, for platforms using automated systems to process takedown requests, unintended errors may occur, and it’s important to understand how and when the CDA would still protect platforms against any related claims.
Looking Ahead
It will be up to a year before the notice-and-takedown requirements become effective, so we will have to wait and see how well the process works in eradicating revenge pornography material and intimate AI deepfakes from platforms, how the Act potentially affects messaging platforms, how aggressively the Department of Justice will prosecute offenders, and how closely the FTC will be monitoring online platforms’ compliance with the new takedown requirements.
It also remains to be seen whether Congress has an appetite to pass more AI legislation. Less than two weeks before the Take it Down Act was signed into law, the Senate Committee on Commerce, Science, and Transportation held a hearing on “Winning the AI Race” that featured the CEOs of many well-known AI companies. During the hearing, there was bipartisan agreement on the importance of sustaining America’s leadership in AI, expanding the AI supply chain and not burdening AI developers with a regulatory framework as strict as the EU AI Act. The senators listened to testimony from tech executives calling for enhanced educational initiatives and the improvement of infrastructure needed for advancing AI innovation, alongside discussing proposed bills regulating the industry, but it was not clear whether any of these potential policy solutions would receive enough support to be signed into law.
The authors would like to thank Aniket C. Mukherji, a Proskauer legal assistant, for his contributions to this post.

[1] The Act provides that the publication of the NCII of an adult is unlawful if (for authentic content) “the intimate visual depiction was obtained or created under circumstances in which the person knew or reasonably should have known the identifiable individual had a reasonable expectation of privacy,” if (for AI-generated content) “the digital forgery was published without the consent of the identifiable individual,” and if (for both authentic and AI-generated content) what is depicted “was not voluntarily exposed by the identifiable individual in a public or commercial setting,” “is not a matter of public concern,” and is intended to cause harm or does cause harm to the identifiable individual. The publication of NCII (whether authentic or AI-generated) of a minor is unlawful if it is published with intent to “abuse, humiliate, harass, or degrade the minor” or “arouse or gratify the sexual desire of any person.” The Act also lists some basic exceptions, such as publications of covered imagery for law enforcement investigations, legal proceedings, or educational purposes, among other things.
[2] Under the Act, “Upon receiving a valid removal request from an identifiable individual (or an authorized person acting on behalf of such individual) using the process described in paragraph (1)(A)(ii), a covered platform shall, as soon as possible, but not later than 48 hours after receiving such request—
(A) remove the intimate visual depiction; and
(B) make reasonable efforts to identify and remove any known identical copies of such depiction.”

IT’S HERE: The First of a Wave of New “Keyword Avoider” SMS Opt Out TCPA Class Actions Has Been Filed and TCPAWorld Will Never Be the Same

An attorney named Jeff Lohman recently narrowly escaped a jury verdict against him on a RICO claim arising out of allegations that he had manufactured TCPA claims by encouraging clients to use vague opt-out language during phone calls with Navient.
With the FCC’s recent revocation rules now in effect – requiring callers and texters to honor free-form opt-out requests – we can expect to see a similar phenomenon. And the first of these cases seems to be rolling in.
The new FCC rules say callers must honor phrases like “stop” and “unsubscribe” but also leave the door open for consumers to opt out by “any reasonable means” that conveys a clear intent for calls or texts to stop. The Commission’s ruling is clear that consumers are NOT limited to using just a few keywords to opt out.
Yet many businesses – who do not follow TCPAWorld.com ;) – have failed to heed the message (ha) and continue to use SMS settings that can detect only keyword opt-out requests.
That’s not going to fly anymore, folks.
For instance, in a new TCPA class action against American First Finance, the consumer responded with the message “Cease and Desist All Communication.”
Notice that this is a pretty clear request for calls and texts to stop when read by a human, but a company’s SMS provider’s software is unlikely to flag this phrase.
And American First allegedly continued to send SMS messages to the consumer, leading to a big fat class action here in California.
Now, one point of interest: the Plaintiff does not appear to be within his own class definition. The class reads:
All persons within the United States who, within the four years prior to the filing of this lawsuit through the date of class certification, received two or more text messages within any 12-month period, from or on behalf of Defendant, regarding Defendant’s goods, services, or properties, to said person’s residential cellular telephone number, after communicating to Defendant that they did not wish to receive text messages by replying to the messages with a “stop” or similar opt-out instruction.
To my eye, “cease and desist all communication” is not “similar” to the elegant “Stop” request we all know and love. But that’s for the court to determine, I suppose.
Pretty clear bottom line here – I expect to see a TON of TCPA class actions rolling in focused on companies that might be heeding perfect stop requests but that are missing free-form communications received via their SMS channel. HUGE mistake.
These requests need to be heeded and honored – and starting next April they need to be treated as complete opt-outs across all channels and all purposes.
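For the engineering folks wondering what honoring free-form requests might look like in code, here is a minimal, hypothetical Python sketch of an inbound-reply classifier that keeps exact keywords as a fast path but also escalates stop-intent phrases. The phrase list is an illustrative assumption, not an exhaustive one, and is no substitute for legal review of your revocation workflow.

    # Illustrative inbound-SMS reply classifier. Exact keywords remain a fast
    # path, but free-form stop-intent phrases are also honored; the phrase
    # list is a non-exhaustive assumption for demonstration.
    import re

    EXACT_KEYWORDS = {"stop", "unsubscribe", "end", "quit", "cancel", "revoke", "optout"}
    INTENT_PATTERNS = [
        re.compile(p, re.IGNORECASE)
        for p in (
            r"\bcease\b", r"\bdesist\b", r"\bdo not (text|call|contact)\b",
            r"\bstop (texting|calling|messaging)\b", r"\bremove me\b",
            r"\bno more (texts|calls|messages)\b", r"\bleave me alone\b",
        )
    ]

    def classify_reply(body: str) -> str:
        """Return 'opt_out', 'review', or 'no_action' for an inbound reply."""
        normalized = body.strip().lower().rstrip(".!")
        if normalized.replace("-", "") in EXACT_KEYWORDS:
            return "opt_out"
        if any(p.search(body) for p in INTENT_PATTERNS):
            return "opt_out"
        # A reply that mentions "stop" but matches nothing above goes to a
        # human instead of being silently ignored.
        if "stop" in normalized:
            return "review"
        return "no_action"

    # The message from the American First complaint would be caught:
    assert classify_reply("Cease and Desist All Communication") == "opt_out"

The key design choice is the “review” bucket: anything that hints at revocation but doesn’t match a pattern gets routed to a human rather than dropped on the floor, which is exactly the failure mode these new class actions are targeting.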
Complaint available here.

Senator Calls for Food Safety Oversight Reform

On May 21, Senator Tom Cotton (R-Arkansas) introduced a bill titled the “Study And Framework for Efficiency in Food Oversight and Organizational Design Act of 2025,” or the “SAFE FOOD Act of 2025,” which would direct the Secretary of Agriculture to conduct a study on the consolidation of federal agencies with a “primary role in ensuring food safety in the United States” into a single agency.
The bill explicitly lists the Food Safety and Inspection Service (FSIS), the Food and Drug Administration (FDA), and the Centers for Disease Control and Prevention (CDC) as agencies targeted for consolidation. (More realistically, the food regulatory components of FDA and the CDC would be considered for consolidation with FSIS.) FDA and CDC are within the Department of Health and Human Services (HHS) while FSIS is within the U.S. Department of Agriculture (USDA). FDA has broad authority to regulate food and food additives under the Federal Food, Drug, and Cosmetic Act, while USDA-FSIS regulates meat, poultry, and egg products under the Federal Meat Inspection Act, the Poultry Products Inspection Act, and the Egg Products Inspection Act. CDC, in collaboration with FDA and other partners, plays a critical role in responding to foodborne illness outbreaks.
Senator Cotton claimed that spreading food safety oversight “across multiple federal, state, and local agencies . . . decreases efficacy, creates gaps, and slows response times to potential public health risks” and that his bill “is a commonsense step to expanding government efficiency and enhancing public health protection by unifying our food safety agencies.” As readers likely know, this is not the first time a single food agency has been considered. It probably will not be the last time either.

Chemical Coalition Withdraws TSCA Section 21 Petition Seeking Revisions to TSCA 8(a)(7) PFAS Reporting Rule

As reported in our May 4, 2025, blog item, on May 2, 2025, a coalition of chemical companies petitioned the U.S. Environmental Protection Agency (EPA) to amend the Toxic Substances Control Act (TSCA) Section 8(a)(7) rule requiring reporting for per- and polyfluoroalkyl substances (PFAS). The petitioners asked that EPA revise the reporting rule to exclude imported articles, research and development (R&D) materials, impurities, byproducts, non-isolated intermediates, and PFAS manufactured in quantities of less than 2,500 pounds (lb.). Petitioners also requested that EPA remove the requirement to submit “‘all existing information concerning the environmental and health effects’ of the chemical substance covered by” the reporting rule and instead allow “robust summaries, similar to the approach adopted by the European Chemicals Agency” (ECHA). According to a May 22, 2025, letter from EPA, on May 16, 2025, the coalition withdrew its petition via email to EPA Administrator Lee Zeldin, and “EPA now considers this petition closed.”

After the coalition submitted its petition, EPA published an interim final rule to postpone the data submission period to April 13, 2026, through October 13, 2026. 90 Fed. Reg. 20236. Small manufacturers reporting exclusively as article importers would have until April 13, 2027, to report. According to the interim final rule, EPA is separately considering reopening certain aspects of the rule to public comment. Comments on the interim final rule are due June 12, 2025. More information on the interim final rule is available in our May 12, 2025, memorandum.