Philadelphia Enacts POWERful New Worker Protection Ordinance

On May 27, 2025, Mayor Cherelle Parker signed the Protect Our Workers, Enforce Rights (“POWER”) Act into law, expanding the Philadelphia Department of Labor’s enforcement options for violations of the City’s growing roster of worker protection laws. Under the new ordinance, which is now in effect, workers in Philadelphia gain expanded protections against labor violations, and employers face a host of new and enhanced compliance requirements.
Key provisions of the new legislation include:

Expanded Definitions and Coverage. The ordinance broadens the definitions of “employee,” “employer,” and “domestic worker,” ensuring coverage for a wider range of workers, including part-time, temporary, and live-in domestic workers.
Paid Sick Leave and Leave Accrual. The legislation specifies that employees, including domestic workers, accrue paid sick time, compensated at their regular rate of pay, with specific calculation methods for tipped employees. Employers are prohibited from counting sick time as an absence leading to discipline and must provide notice of rights in multiple languages. Employers must also maintain contemporaneous records of hours worked, sick time taken, and payments for at least three years.
Wage Theft Protections. The ordinance expands the definition of wage theft to include violations of state and federal wage laws where work is performed in Philadelphia, or the employment contract is made in the City. The Department of Labor will be authorized to investigate complaints, with a three-year statute of limitations for filing. Employers failing to maintain required payroll records face an affirmative presumption that they have violated the Act.
Domestic Worker Protection. Employers must provide written contracts to domestic workers (excluding casual work), detailing job duties, wages, schedules, leave, benefits, and termination terms, in English and in the worker’s preferred language. Domestic workers are entitled to paid rest periods and meal breaks and will accrue paid leave, with a centralized portable benefits system to be developed for aggregation across multiple employers. Employers must also provide minimum notice periods (two weeks for most, four weeks for live-in workers) for termination without cause, with severance pay and, for live-in workers, continued housing or its value if notice is not provided.
Anti-Retaliation Protections. The ordinance prohibits retaliation against workers for exercising rights under any worker protection ordinance, including filing complaints, seeking information, or participating in investigations. It also places a rebuttable presumption of unlawful retaliation on any employer who discharges, suspends, or takes other adverse action against an employee within 90 days of the employee engaging in protected conduct.
Expanded Enforcement and Private Right of Action. The Department of Labor will be empowered to investigate, mediate, and adjudicate complaints, with authority to issue subpoenas and expand investigations to cover pattern or practice violations. Workers may also bring private civil actions without first exhausting administrative remedies, subject to a 15-day notice and cure period. Prevailing workers are entitled to legal and equitable relief, including attorney’s fees and costs.
Notice, Posting, and Outreach Requirements. Employers must provide written notice of rights to employees in relevant languages or post notices conspicuously. Failure to provide notice tolls the statute of limitations for any claims and may subject an employer to a civil penalty of up to $2,000 per violation.
Public Reporting. Employers with repeated or unresolved violations may be listed in a public “Bad Actors Database,” face license revocation, and be deemed ineligible for City contracts.

The POWER Act significantly enhances protections for many workers in Philadelphia. Employers and hiring entities must review and update their policies, contracts, and recordkeeping practices to ensure compliance. The expanded enforcement powers, increased penalties, and public reporting mechanisms underscore the City’s intention to take stronger action to enforce its growing list of worker protection laws and hold noncompliant employers accountable.

Pennsylvania Launches Centralized Consumer Complaint System, Expands State Enforcement Under Dodd-Frank

On May 1, Pennsylvania Governor Josh Shapiro announced a new centralized consumer protection hotline, website, and email address, providing residents with streamlined access to state agencies for reporting scams, financial misconduct, and insurance-related disputes. The rollout is part of a broader push by Pennsylvania to expand state-level enforcement amid a shift in federal priorities.
According to Governor Shapiro, Pennsylvania is also expanding its use of enforcement authority under the Dodd-Frank Act, which permits states to enforce federal consumer financial laws when federal regulators decline to act. This includes coordination across agencies and stepped-up investigations into predatory lending, deceptive practices, and insurance misconduct.
The initiative builds on Pennsylvania’s existing consumer protection framework and is designed to connect residents with the appropriate agency, such as the Department of Banking and Securities or the Pennsylvania Insurance Department, regardless of the nature of the complaint. Governor Shapiro emphasized that the program follows a “no wrong door” model, ensuring that consumers can access support across lending, insurance, student loan servicing, and other financial service issues.
Pennsylvanians can now submit complaints by calling 1-866-PACCOMPLAINT, visiting pa.gov/consumer, or emailing [email protected].
Putting It Into Practice: Governor Shapiro’s launch of a centralized complaint platform highlights Pennsylvania’s intention to fill the enforcement void left recently by federal regulators (previously discussed here and here). As the CFPB continues to scale back enforcement and supervision, states like Pennsylvania are asserting authority to investigate and prosecute violations of both state and federal law, including UDAAP violations. Financial service companies should expect to see other states follow suit as they ramp up their enforcement and supervision priorities to compensate for the federal pullback.

DOJ and CFPB Terminate $9 Million Redlining Consent Order with Southern Regional Bank

On May 21, the U.S. District Court for the Western District of Tennessee granted a joint motion by the CFPB and DOJ to terminate a 2021 redlining settlement with a regional bank, vacating the consent order and dismissing the case with prejudice. The original lawsuit, filed in October 2021, alleged violations of the Fair Housing Act (FHA), Equal Credit Opportunity Act (ECOA), and Consumer Financial Protection Act (CFPA).
The complaint accused the bank of engaging in unlawful redlining from 2014 to 2018 by failing to serve the credit needs of majority-Black and Hispanic neighborhoods in the Memphis Metropolitan Statistical Area.
Specifically, the complaint alleged that the bank:

Located nearly all mortgage loan officers in majority-white neighborhoods. The bank assigned no mortgage loan officers to branches in majority-Black and Hispanic census tracts.
Failed to advertise or conduct outreach in minority neighborhoods. Marketing was concentrated in commercial media outlets and business-focused publications distributed in majority-white areas.
Lacked internal fair lending oversight. The bank allegedly did not conduct a comprehensive internal fair lending assessment until 2018.
Significantly underperformed peer lenders. Only 10% of mortgage applications and 8.3% of originations came from majority-Black and Hispanic neighborhoods—less than half the peer average.

Under the consent order, the bank agreed to pay a $5 million civil penalty, invest $3.85 million in a loan subsidy fund, open a mortgage loan production office in a minority neighborhood, and spend an additional $600,000 on community development and outreach. The consent order was scheduled to last five years, but was terminated early after the agencies found that the bank had disbursed all required relief and was in “substantial compliance” with the order’s terms.
Putting It Into Practice: By ending the redlining settlement early, the CFPB continues to back away from redlining enforcement actions launched under the prior administration (previously discussed here). While institutions should remain focused on fair lending compliance, these moves suggest federal scrutiny of redlining—particularly cases built on statistical evidence or marketing practices—may be easing.

CFPB Seeks to Vacate Open Banking Rule

On May 23, the CFPB notified a Kentucky federal court that it now considers its own open banking rule “unlawful” and plans to set the rule aside. The Bureau announced its intent to seek summary judgment against the rule, which was issued under Section 1033 of the Dodd-Frank Act to promote consumer-authorized data sharing with third parties.
The original rule (previously discussed here), finalized in October 2024 under former Director Rohit Chopra, was designed to implement Section 1033’s mandate by requiring financial institutions to provide consumers and authorized third parties with access to their transaction data in a secure and standardized format. The rule aimed to promote competition and consumer control over financial information by enabling the use of fintech apps and digital tools to manage personal finances.
The lawsuit, filed in the U.S. District Court for the Eastern District of Kentucky, challenged the rule on several grounds, including claims that the CFPB exceeded its statutory authority and imposed obligations not contemplated by Congress. Key points raised in the challenge include:

Alleged lack of CFPB authority. The plaintiffs argued the Bureau overstepped by mandating free, comprehensive data access and imposing new compliance burdens without clear congressional authorization.
Interference with industry-led initiatives. The plaintiffs asserted that the rule would disrupt private-sector open banking frameworks already in place, which they claim serve hundreds of millions of Americans.
Concerns about data security and consumer harm. The rule’s opponents caution that mandating third-party data access could increase risks of misuse or breaches.

Putting It Into Practice: While the litigation had previously been paused to give the agency time to evaluate the regulation, the Bureau’s latest filing confirms that Acting Director Russell Vought no longer supports the rule and now views it as unlawful. This move effectively puts the rule’s validity in the hands of the court, even as compliance deadlines—set to begin April 1, 2026—technically remain in place unless the rule is vacated. Given the rule’s prior bipartisan support and its importance to fintech stakeholders, market participants should continue monitoring this litigation closely for further developments.

SEC Signals Reevaluation of CAT Reporting Amid Broader Transparency and Regulatory Reform Efforts

Securities and Exchange Commission (SEC) Chairman Paul S. Atkins recently directed SEC staff to conduct a review of the Consolidated Audit Trail (CAT), focusing on the escalating costs, reporting requirements, and cybersecurity risks stemming from sensitive data collection.[1] This directive aligns with Chairman Atkins’ expressed priorities to return to principled regulation, support market innovation and evolution, and reduce unnecessary compliance burdens. Among other things, Chairman Atkins cited CAT’s “appetite for data and computing power,” noting annual costs approaching $250 million, ultimately borne by investors, as the rationale for this reevaluation.[2] He also supported Commissioner Mark T. Uyeda’s efforts behind the grant of an exemption from the requirement to report certain personally identifiable information (PII) of natural persons to CAT.[3]
This CAT reevaluation is part of a broader market-friendly agenda at the SEC. For example, Chairman Atkins has identified as a goal of his tenure the development of a rational regulatory framework for crypto asset markets, covering the issuance, custody, and trading of crypto assets, while discouraging bad actors from violating the law. Chairman Atkins continues to emphasize the importance of regulatory frameworks that are “fit-for-purpose” with “clear rules of the road” for market participants to facilitate capital formation and protect investors.
While no immediate changes to CAT reporting obligations are effective beyond the PII exemption, market participants should prepare for shifts in how the SEC approaches data collection and cost allocation.
Footnotes
[1] Paul S. Atkins, Chairman, SEC, “Prepared Remarks Before SEC Speaks” (May 19, 2025), https://www.sec.gov/newsroom/speeches-statements/atkins-prepared-remarks-sec-speaks-051925.
[2] Id.
[3] Paul S. Atkins, Chairman, SEC, “Testimony Before the United States House Appropriations Subcommittee on Financial Services and General Government” (May 20, 2025), https://www.sec.gov/newsroom/speeches-statements/atkins-testimony-fsgg-052025. See Katten’s Quick Reads post on the exemptive relief for reporting personally identifiable information here.

Chapter 93A and the Limits of Consumer Protection: Lessons from Wells Fargo Bank, N.A. v. Coulsey

In a long-running Massachusetts foreclosure case, Wells Fargo Bank, N.A. v. Coulsey, the Massachusetts Appeals Court weighed in on the applicability and limits of Chapter 93A. The decision provides guidance as to how—and when—Chapter 93A claims may be brought, and when repeated litigation crosses the line into claim preclusion.
The dispute began in 2007, when the plaintiff purchased a home with a loan and mortgage on which she would soon default. Over the next 17 years, the plaintiff engaged in a prolonged legal battle with multiple mortgage holders, ultimately culminating in an eviction. The plaintiff repeatedly but unsuccessfully invoked Chapter 93A in an attempt to block foreclosure and eviction. Her claims were first dismissed without prejudice in federal court, and her later attempts to revive or amend the 93A claims were rejected; the claims were dismissed again in a state-court collateral action.
The Appeals Court affirmed the state-court dismissal and issued a clear rebuke to repeatedly raising Chapter 93A claims based on the same factual nucleus. The Appeals Court emphasized the following:

Prior Opportunity. The plaintiff was allowed in 2016 to amend her complaint to better articulate 93A violations. That was the moment to raise all related theories. The court found that “any new basis or theory supporting her c. 93A claim could have been brought at that time.”
Ongoing Harm v. Ongoing Claim. Plaintiff argued that her 93A claim should be revived because defendant’s alleged misconduct was “ongoing.” The court rejected that logic, stating: “[Plaintiff’s] assertion that c. 93A violations are ongoing and therefore could not have been advanced in prior litigation is contrary to the purpose of res judicata…” In short, the passage of time or continued impact did not give the plaintiff the right to relitigate previously dismissed claims.
Chapter 93A Not Exempt from Res Judicata. Importantly, the court reiterated that Chapter 93A claims—like any civil claim—are subject to rules of finality. If a claim is dismissed with prejudice or could have been litigated earlier, it cannot be brought again just by rebranding it or restating the facts.

Implications for Companies
A litigant does not get endless chances to reframe Chapter 93A claims. If the allegations asserted are vague or conclusory, challenging them in a motion to dismiss is appropriate even under the broad reach of Chapter 93A. The decision underscores that consumer rights are balanced against the need for closure. Once courts have ruled on a matter, even the broad protections of Chapter 93A will not open the door to re-litigating the same claims under a new heading.

CFPB Drops Lawsuit Against Lease-to-Own Fintech Following Adverse Credit Ruling

On May 27, the CFPB filed a notice of dismissal with prejudice in its lawsuit against a lease-to-own fintech provider. The lawsuit, filed in July 2023, alleged that the company’s rental-purchase agreements violated several federal consumer financial laws, including the Truth in Lending Act (TILA), the Electronic Fund Transfer Act (EFTA), the Fair Credit Reporting Act (FCRA), and the Consumer Financial Protection Act (CFPA).
In its original complaint, the CFPB alleged that the company targeted consumers with limited access to traditional credit—internally referred to as “ALICE” consumers (Asset-Limited, Income-Constrained, Employed)—with financing agreements that often required consumers to pay more than twice the cash price of the financed merchandise over a 12-month period. The Bureau claimed the agreements were misleadingly presented as short-term lease-purchase options, when they were in fact credit arrangements subject to federal disclosure and servicing requirements.
Specifically, the complaint’s allegations against the fintech provider included:

Failure to provide required TILA disclosures. The Bureau asserted that the company’s agreements qualified as credit sales under TILA and Regulation Z, triggering disclosure obligations the company allegedly failed to meet.
Conditioning credit on preauthorized electronic fund transfers. The CFPB alleged the company violated EFTA and Regulation E by requiring consumers to authorize recurring ACH debits as a condition of receiving financing.
Deceptive and abusive marketing and contracting practices. According to the complaint, the company misled consumers about the cost and structure of the agreements, impeded consumers from terminating repayment obligations, and failed to ensure consumers had an opportunity to review agreement terms before signing.
Unfair collection and credit reporting practices. The Bureau alleged that the company threatened actions it did not take, sent payment demands to consumers who had no outstanding obligations, and lacked reasonable written policies for furnishing consumer data, in violation of FCRA and Regulation V.

Putting It Into Practice: The dismissal of this case is another clear example of the CFPB stepping back from enforcement actions initiated under the previous administration (previously discussed here, here, and here). Although the Bureau has scaled back its use of certain enforcement powers, state regulators and other federal agencies have not slowed their efforts to enforce UDAAP violations (previously discussed here and here). Financial institutions should ensure their compliance programs are up to date to avoid scrutiny from state and federal regulators.

Privacy Tip #445 – Apple Users: Update to iOS 18.5

Never underestimate an operating system update from any mobile phone manufacturer. This week, Apple issued iOS 18.5, which provides enhancements to the user experience and also fixes bugs and flaws.
This update fixes over 30 security bugs, so the sooner you update to the new version, the better from a security standpoint. The security flaws that the patch responds to include known and unknown vulnerabilities, as well as zero-days that may or may not be exploited in the wild.
If you haven’t updated to iOS 18.5, plug your phone in now and install it as soon as possible, not only for the enhancements but, most importantly, for the bug fixes. If you don’t have your phone set to install updates automatically, you may wish to enable that feature in your settings, as it is a good way to stay on top of new releases in a timely manner.

U.S. Retailers Bracing for Scattered Spider Attacks

Google sent out a warning that the cybercriminal group Scattered Spider is targeting U.S.-based retailers. Scattered Spider is believed to have been responsible for the recent attack on Marks & Spencer in the U.K. A security researcher at Google has posited that Scattered Spider concentrates its attacks on one industry at a time and predicts that it will continue to target the retail sector, warning that “US retailers should take note. These actors are aggressive, creative, and particularly effective at circumventing mature security programs.”
Mandiant issued a threat intelligence report on May 6, 2025, highlighting Scattered Spider’s social engineering methods and “brazen communication with victims.” It has seen Scattered Spider target specific sectors, such as financial services and food services. Recently, Scattered Spider has been seen deploying DragonForce ransomware. The operators of DragonForce have claimed control of RansomHub.
Mandiant has published recommendations on proactive hardening against the tactics used by Scattered Spider, including prioritizing:

Identity
Endpoints
Applications and Resources
Network Infrastructure
Monitoring / Detections

Although retailers should be on high alert with these warnings, all industries would do well to review Mandiant’s recommendations, as they are timely and effective.

Janie & Jack’s Alleged CIPA Violations Consolidated, Thus Avoiding Over 2,000 Individual Arbitration Claims

This week, the U.S. District Court for the Northern District of California ruled in favor of children’s clothing retailer Janie & Jack, which sought to enjoin over 2,400 individual arbitration claims resulting from alleged violations of the California Invasion of Privacy Act (CIPA). Now, Janie & Jack will confront a single privacy class action suit as opposed to the more than 2,400 individual arbitration claims by its website visitors.
The parties notified the court of their agreement not to pursue arbitration but rather to proceed through a consolidated class action. Janie & Jack voluntarily dismissed its lawsuit in an effort to avert the numerous individual claims by consumers.
Website visitors accused Janie & Jack of violating CIPA and the federal Wiretap Act through its website’s information-gathering and tracking practices (also known as trap and trace claims). Janie & Jack argued that such claims are inadequate because they lack allegations that the consumers created any accounts or conducted any transactions on the website or that Janie & Jack had breached any of its online terms.
Further, although Janie & Jack’s website terms include an arbitration clause, it claimed that the claimants never assented to the contract.
In its response, the retailer emphasized its intent to prevent the growing use of arbitration agreements as “weapons” by plaintiffs’ attorneys, which thwarts their intended purpose: the efficient, effective, and timely resolution of claims.
This case highlights a common practice: thousands of individuals, all represented by the same counsel, simultaneously file, or threaten to file, arbitration demands with nearly identical claims.
These allegations mark yet another instance of the plaintiffs’ bar’s growing push for “trap and trace” claims. Plaintiffs leverage existing wiretap laws (particularly CIPA in California) to argue that common online tracking technologies such as cookies, pixels, and website analytics tools essentially function as trap and trace devices, allowing them to file complaints against companies for collecting user data without proper consent. Although these laws were originally written for traditional phone lines, not the internet, the theory opens up a large pool of potential plaintiffs and potentially significant damages.
If you haven’t heard it enough, here it is again: NOW is the time to assess your website’s online trackers and update your cookie consent management platform, website privacy policy, and consumer data collection processes.
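For teams starting that assessment, below is a minimal, illustrative Python sketch, not a compliance tool, of a first-pass tracker inventory: it fetches a page, lists third-party script hosts, and prints the cookies set on first load. The URL is a placeholder, the requests and beautifulsoup4 libraries are assumed to be installed, and a real audit would also need to cover pixels, dynamically injected tags, and server-side tracking.

```python
# Minimal first-pass tracker inventory (illustrative only, not a compliance tool).
# Assumes: pip install requests beautifulsoup4; the URL below is a placeholder.
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

SITE = "https://www.example.com/"  # hypothetical site under review
first_party = urlparse(SITE).hostname

resp = requests.get(SITE, timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")

# Hosts serving <script src=...> tags that are not the first-party domain.
third_party_hosts = sorted({
    urlparse(tag["src"]).hostname
    for tag in soup.find_all("script", src=True)
    if urlparse(tag["src"]).hostname
    and urlparse(tag["src"]).hostname != first_party
})

print("Third-party script hosts:", third_party_hosts)
print("Cookies set on first load:", [c.name for c in resp.cookies])
```

Note that this only catches statically declared scripts; trackers injected by tag managers or loaded after JavaScript executes require a headless browser to observe.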
This article was co-authored by Mark Abou Naoum.

State Data Minimization Laws Spark Compliance Uncertainty

A new wave of state consumer privacy laws focused on limiting data collection is creating anxiety among businesses—and Maryland is leading the charge. The Maryland Online Data Privacy Act (MODPA), set to take effect in October 2025, requires companies to collect only data that is “reasonably necessary and proportionate” to their stated purposes. However, with no official guidance for compliance from the Maryland Attorney General, businesses are left guessing.
Under MODPA’s data minimization requirement, businesses should avoid collecting or processing more data than is necessary to provide a specific product or service to a consumer. In addition to the limited data collection requirement, MODPA also requires:

Stricter Data Collection Practices for Sensitive Data: The data minimization requirements are more stringent for sensitive data, such as health information, religious beliefs, and genetic data.
Ban on the Sale of Sensitive Data: The law prohibits the sale of sensitive data unless it is strictly necessary to provide or maintain a requested product or service. 
Explicit Consent: A business may not process personal information for a purpose other than the purpose(s) disclosed to the consumer at the time of collection unless the consumer provides explicit consent. 
Limited Retention: A business may not retain consumer data for longer than necessary to fulfill the purpose for which it was collected (i.e., now is the time to update or implement your retention program; a simple check is sketched just below).
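
As a companion to the retention point above, here is a minimal, hypothetical Python sketch (not legal advice) of a purpose-based retention check: records older than the period assigned to their collection purpose are flagged for deletion or anonymization. The purposes, periods, and record layout are illustrative assumptions, not values drawn from MODPA.

```python
# Hypothetical purpose-based retention check (illustrative only, not legal advice).
from datetime import datetime, timedelta, timezone

# Assumed mapping of collection purpose -> maximum retention period.
RETENTION = {
    "order_fulfillment": timedelta(days=365),
    "marketing": timedelta(days=180),
}

# Stand-ins for rows in a real datastore.
records = [
    {"id": 1, "purpose": "marketing", "collected": datetime(2024, 1, 5, tzinfo=timezone.utc)},
    {"id": 2, "purpose": "order_fulfillment", "collected": datetime(2025, 3, 1, tzinfo=timezone.utc)},
]

now = datetime.now(timezone.utc)
expired = [r for r in records if now - r["collected"] > RETENTION[r["purpose"]]]
print("Records due for deletion or anonymization:", [r["id"] for r in expired])
```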

This shift towards data minimization marks a departure from the more familiar “notice and choice” model, pushing companies to operationalize data minimization in ways that may significantly alter their data practices. While some businesses, particularly those already operating under stricter global standards like the European Union’s General Data Protection Regulation (GDPR), may be better prepared, others are weighing whether to reduce data collection or even scale back operations in certain states.
Companies developing or utilizing generative artificial intelligence are especially concerned, as these laws may limit access to large, diverse datasets required to train their models. Still, some see this as an opportunity to innovate with privacy-first technologies, such as synthetic data.
States like Maine, Massachusetts, Connecticut, and Minnesota are considering similar laws, signaling a growing trend. But as businesses await clearer definitions and enforcement standards, the central question remains: Can regulators strike the right balance between protecting privacy and supporting innovation?

Take It Down Act Signed into Law, Offering Tools to Fight Non-Consensual Intimate Images and Creating a New Image Takedown Mechanism

Law establishes national prohibition against nonconsensual online publication of intimate images of individuals, both authentic and computer-generated.
First federal law regulating AI-generated content.
Creates requirement that covered platforms promptly remove depictions upon receiving notice of their existence and a valid takedown request.
For many online service providers, complying with the Take It Down Act’s notice-and-takedown requirement may warrant revising their existing DMCA takedown notice provisions and processes.
Another carve-out to CDA immunity? More like a dichotomy of sorts…

On May 19, 2025, President Trump signed the bipartisan-supported Take It Down Act into law. The law prohibits any person from using an “interactive computer service” to publish, or threaten to publish, nonconsensual intimate imagery (NCII), including AI-generated NCII (colloquially known as revenge pornography or deepfake revenge pornography). Additionally, the law requires that, within one year of enactment, social media companies and other covered platforms implement a notice-and-takedown mechanism that allows victims to report NCII. Platforms must then remove properly reported imagery (and any known identical copies) within 48 hours of receiving a compliant request.
Support for the Act and Concerns
The Take It Down Act attempts to fill a void in the policymaking space, as many states had not enacted legislation regulating sexual deepfakes when it was signed into law. The Act has been described as the first major federal law that addresses harm caused by AI. It passed the Senate in February of this year by unanimous consent and passed the House of Representatives in April by a vote of 409-2. It also drew the support of many leading technology companies.
Despite receiving nearly unanimous support in Congress, some digital privacy advocates have expressed concerns that the new notice-and-takedown mechanism could have unintended consequences for digital privacy in general. For example, some commentators have suggested that the statute’s takedown provision is written too broadly and lacks sufficient safeguards against frivolous requests, potentially leading to the removal of lawful content, especially given the short 48-hour window to act following a takedown request. [Note: In 2023, we similarly wrote about abuses of the takedown provision of the Digital Millennium Copyright Act]. In addition, some have argued that the law could undermine end-to-end encryption by possibly forcing such companies to “break” encryption to comply with the removal process. Supporters of the law have countered that private encrypted messages would likely not be considered “published” under the text of the statute (which uses the term “publish” as opposed to “distribute”).
Criminalization of NCII Publication for Individuals
The Act makes it unlawful for any person “to use an interactive computer service to knowingly publish an intimate visual depiction of an identifiable individual” under certain circumstances.[1] It also prohibits threats involving the publishing of NCII and establishes various criminal penalties. Notably, the Act does not distinguish between authentic and AI-generated NCII in its penalties section if the content has been published. Furthermore, the Act expressly states that a victim’s prior consent to the creation of the original image or its disclosure to another individual does not constitute consent for its publication.
New Notice-and-Takedown Requirement for “Covered Platforms”
Along with punishing individuals who publish NCII, the Take It Down Act requires covered platforms to create a notice-and-takedown process for NCII within one year of the law’s passage. Below are the main points for platforms to consider:

Covered Platforms. The Act defines a “covered platform” as a “website, online service, online application, or mobile application” that serves the public and either provides a forum for user-generated content (including messages, videos, images, games, and audio files) or regularly deals with NCII as part of its business.
Notice-and-Takedown Process. Covered platforms must create a process through which victims of NCII (or someone authorized to act on their behalf) can send notice to them about the existence of such material (including a statement indicating a “good faith belief” that the intimate visual depiction of the individual is nonconsensual, along with information to assist in locating the unlawful image) and can request its removal. (A minimal intake sketch appears after this list.)
Notice to Users. Adding an additional compliance item to the checklist, the Act requires covered platforms to provide a “clear and conspicuous” notice of the Act’s notice and removal process, such as through a conspicuous link to another web page or disclosure.
Removal of NCII. Within 48 hours of receiving a valid removal request, covered platforms must remove the NCII and “make reasonable efforts to identify and remove any known identical copies.”
Enforcement. Compliance under this provision will be enforced by the Federal Trade Commission (FTC).
Safe Harbor. Under the law, covered platforms will not be held liable for “good faith” removal of content that is claimed to be NCII “based on facts or circumstances from which the unlawful publishing of an intimate visual depiction is apparent,” even if it is later determined that the removed content was lawfully published.
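
As a purely illustrative aid, the following Python sketch models a takedown-request intake record reflecting the elements described above: a good-faith statement, information to locate the material, and the 48-hour response window. The field names and validation logic are our own assumptions for illustration, not statutory requirements.

```python
# Illustrative takedown-request intake record; field names are assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # the Act's 48-hour removal window

@dataclass
class TakedownRequest:
    requester_name: str          # victim or authorized representative
    good_faith_statement: bool   # statement that the depiction is nonconsensual
    content_locator: str         # URL or other information locating the material
    received_at: datetime

    def is_facially_complete(self) -> bool:
        # A real intake process would validate far more than this.
        return self.good_faith_statement and bool(self.content_locator.strip())

    def respond_by(self) -> datetime:
        return self.received_at + REMOVAL_WINDOW

req = TakedownRequest("Jane Doe", True, "https://example.com/post/123",
                      datetime.now(timezone.utc))
if req.is_facially_complete():
    print("Remove by:", req.respond_by().isoformat())
```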

Compliance Note: For many online service providers, complying with the Take It Down Act’s notice-and-takedown requirement may warrant revising their existing DMCA takedown notice provisions and processes, especially if those processes have not been reviewed or updated for some time. Many “covered platforms” may rely on automated processes (or automated efforts combined with targeted human oversight) to fulfill Take It Down Act requests and meet the related obligation to make “reasonable efforts” to identify and remove known identical copies. This may involve using tools for processing notices, removing content, and detecting duplicates. As a result, some providers should consider whether their existing takedown provisions should also be amended to address these new requirements and how they will implement these new compliance items on the back end using the infrastructure already in place for the DMCA.
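On the duplicate-detection point specifically, here is a minimal sketch, assuming the Pillow and imagehash libraries, of how a platform might flag likely identical copies of a reported image using perceptual hashing. The file paths and distance threshold are illustrative assumptions; what qualifies as a “known identical copy” under the Act is ultimately a legal question.

```python
# Perceptual-hash duplicate flagging (illustrative sketch).
# Assumes: pip install Pillow imagehash; file paths are hypothetical.
from PIL import Image
import imagehash

MAX_DISTANCE = 4  # Hamming distance; small values tolerate re-encoding/resizing

reported = imagehash.phash(Image.open("reported_image.jpg"))

candidates = ["upload_001.jpg", "upload_002.png"]  # stand-ins for hosted content
matches = [
    path for path in candidates
    if imagehash.phash(Image.open(path)) - reported <= MAX_DISTANCE
]
print("Likely identical copies to review for removal:", matches)
```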
What about CDA Section 230?
Section 230 of the Communications Decency Act (“CDA”), 47 U.S.C. § 230, prohibits a “provider or user of an interactive computer service” from being held responsible “as the publisher or speaker of any information provided by another information content provider.” Courts have construed the immunity provisions in Section 230 broadly in a variety of cases arising from the publication of user-generated content.
Following enactment of the Take It Down Act, some important questions for platforms are: (1) whether Section 230 still protects platforms from actions related to the hosting or removal of NCII; and (2) whether FTC enforcement of the Take It Down Act’s platform notice-and-takedown process is blocked or limited by CDA immunity. 
At first blush, it might seem that the CDA would restrict enforcement against online providers in this area, as decisions regarding the hosting and removal of third-party content would necessarily treat a covered platform as a “publisher or speaker” of third-party content. However, a deeper examination of the text of the CDA suggests the answer is more nuanced.
It should be noted that the Good Samaritan provision of the CDA (47 U.S.C. § 230(c)(2)) could be used by online providers as a shield from liability for actions taken to proactively filter or remove third-party NCII content or to remove NCII at the direction of a user’s notice under the Take It Down Act, as CDA immunity extends to good faith actions to restrict access to or availability of material that the provider or user considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Moreover, the Take It Down Act adds its own safe harbor for online providers for “good faith disabling of access to, or removal of, material claimed to be a nonconsensual intimate visual depiction based on facts or circumstances from which the unlawful publishing of an intimate visual depiction is apparent, regardless of whether the intimate visual depiction is ultimately determined to be unlawful or not.”
Still, further questions about the reach of the CDA prove more intriguing. The Take It Down Act appears to create a dichotomy of sorts regarding CDA immunity in the context of NCII removal claims. Under the text of the CDA, it appears that immunity would not limit FTC enforcement of the Take It Down Act’s notice-and-takedown provision affecting “covered platforms.” To explore this issue, it’s important to examine the CDA’s exceptions, specifically 47 U.S.C. § 230(e)(1):
Effect on other laws
(1) No effect on criminal law
Nothing in this section shall be construed to impair the enforcement of section 223 or 231 of this title [i.e., the Communications Act], chapter 71 (relating to obscenity) or 110 (relating to sexual exploitation of children) of title 18, or any other Federal criminal statute.
Under the text of the CDA’s exception, Congress carved out Sections 223 and 231 of the Communications Act from the CDA’s scope of immunity. Since the Take It Down Act states that it will be codified at Section 223 of the Communications Act of 1934 (i.e., 47 U.S.C. § 223(h)), it appears that platforms would not enjoy CDA protection from FTC civil enforcement actions based on the agency’s authority to enforce the Act’s requirements that covered platforms “reasonably comply” with the new Take It Down Act notice-and-takedown obligations.
However, that is not the end of the analysis for platforms. Interestingly, it would appear that platforms would generally still retain CDA protection (subject to any exceptions) from claims related to the hosting or publishing of third-party NCII that has not been the subject of a Take It Down Act notice, since the Act’s requirements for removal of NCII by platforms would not be implicated without a valid removal request.[2] Similarly, a platform could make a strong argument that it retains CDA immunity from any claims brought by an individual (rather than the FTC) for failing to reasonably comply with a Take It Down Act notice. That said, it is conceivable that litigants – or even state attorneys general – might attempt to frame such legal actions under consumer protection statutes, as the Take It Down Act states that a failure to reasonably comply with an NCII takedown request is an unfair or deceptive trade practice under the FTC Act. Even in such a case, platforms would likely contend that such claims by these non-FTC parties are merely claims based on a platform’s role as publisher of third-party content and are therefore barred by the CDA.
Ultimately, most, if not all, platforms will likely make best efforts to reasonably comply with the Take It Down Act, thus avoiding the above contingencies. Yet, for platforms using automated systems to process takedown requests, unintended errors may occur, and it is important to understand how and when the CDA would still protect platforms against any related claims.
Looking Ahead
It will be up to a year before the notice-and-takedown requirements become effective, so we will have to wait and see how well the process works in eradicating revenge pornography material and intimate AI deepfakes from platforms, how the Act potentially affects messaging platforms, how aggressively the Department of Justice will prosecute offenders, and how closely the FTC will be monitoring online platforms’ compliance with the new takedown requirements.
It also remains to be seen whether Congress has an appetite to pass more AI legislation. Less than two weeks before the Take It Down Act was signed into law, the Senate Committee on Commerce, Science, and Transportation held a hearing on “Winning the AI Race” that featured the CEOs of many well-known AI companies. During the hearing, there was bipartisan agreement on the importance of sustaining America’s leadership in AI, expanding the AI supply chain, and not burdening AI developers with a regulatory framework as strict as the EU AI Act. The senators heard testimony from tech executives calling for enhanced educational initiatives and improved infrastructure for advancing AI innovation, and discussed proposed bills regulating the industry, but it was not clear whether any of these potential policy solutions would receive enough support to be signed into law.
The authors would like to thank Aniket C. Mukherji, a Proskauer legal assistant, for his contributions to this post.

[1] The Act provides that the publication of the NCII of an adult is unlawful if (for authentic content) “the intimate visual depiction was obtained or created under circumstances in which the person knew or reasonably should have known the identifiable individual had a reasonable expectation of privacy,” if (for AI-generated content) “the digital forgery was published without the consent of the identifiable individual,” and if (for both authentic and AI-generated content) what is depicted “was not voluntarily exposed by the identifiable individual in a public or commercial setting,” “is not a matter of public concern,” and is intended to cause harm or does cause harm to the identifiable individual. The publication of NCII (whether authentic or AI-generated) of a minor is unlawful if it is published with intent to “abuse, humiliate, harass, or degrade the minor” or “arouse or gratify the sexual desire of any person.” The Act also lists some basic exceptions, such as publications of covered imagery for law enforcement investigations, legal proceedings, or educational purposes, among other things.
[2] Under the Act, “Upon receiving a valid removal request from an identifiable individual (or an authorized person acting on behalf of such individual) using the process described in paragraph (1)(A)(ii), a covered platform shall, as soon as possible, but not later than 48 hours after receiving such request—
(A) remove the intimate visual depiction; and
(B) make reasonable efforts to identify and remove any known identical copies of such depiction.