Netflix Content Becomes Federal Evidence: EDNY’s OneTaste Prosecution Faces Scrutiny Amid DOJ Transition

Recent developments in the Eastern District of New York’s prosecution of wellness company OneTaste in U.S. v. Cherwitz have raised novel questions about the intersection of streaming content and criminal evidence.1 Defense motions filed in December 2024 and January 2025 challenge the government’s use of journal entries originally created for a Netflix documentary as key evidence in its forced labor conspiracy case. The dispute unfolds amid a sea change in DOJ priorities at the start of a new presidential administration.
After a five-year investigation, EDNY prosecutors in April 2023 filed a single count of forced labor conspiracy against OneTaste founder Nicole Daedone and former sales leader Rachel Cherwitz. The government alleges the conspiracy unfolded over fourteen years but, in a prosecutorial first, did not charge a substantive crime. Over the course of the prosecution, the defendants filed repeated motions asking the court to order the government to specify the nature of the offense. Most recently, Celia Cohen, newly appointed defense counsel for Rachel Cherwitz, highlighted the case’s unusual nature in a January 18 motion: “The government has charged one count of a forced labor conspiracy…without providing any critical details about the force that occurred and how it specifically induced any labor.”
Recent defense filings have brought mounting scrutiny to the authenticity of journal entries attributed to key government witness Ayries Blanck. Prosecutors had moved in October 2024 for the court to admit the journal entries as evidence at trial in their case-in-chief. In a December 30 motion, Jennifer Bonjean, defense counsel for Nicole Daedone, revealed that civil discovery had exposed that the journal entries presented by the government as contemporaneous accounts from 2015 were actually created and extensively edited for “Orgasm Inc,” Netflix’s 2022 documentary about OneTaste.
“Through metadata and edit histories, we can watch entertainment become evidence,” Bonjean argued in her motion. Technical analysis from a court-ordered expert showed the entries underwent hundreds of revisions by multiple authors, including Netflix production staff, before being finalized in March 2023 – just days before a sealed indictment was filed against defendants Cherwitz and Daedone. The defense has argued that this Netflix content was presented to the grand jury to secure an indictment.
The government’s handling of these journal entries took a dramatic turn during a January 23 meet-and-confer session. After defense counsel challenged the authenticity of handwritten journals matching the Netflix content, prosecutors abruptly withdrew them from their case-in-chief. Though prosecutors maintained the journals’ legitimacy, their retreat from evidence previously characterized as central to the case prompted new defense challenges.
“This prosecution is a house of cards,” argued defense counsel Celia Cohen and Michael Roboti of Ballard Spahr in a January 24 motion to dismiss. Cohen and Roboti, who joined Rachel Cherwitz’s defense team earlier this month, highlighted how the government’s withdrawal of the handwritten journals “exemplifies the serious problems with this prosecution.” Their motion notes that defense witnesses in a parallel civil case have exposed government witnesses as “perjurers” who “have received significant benefits from the government and from telling their ‘stories’ in the media.”
The matter came to a head during a January 24 hearing before Judge Diane Gujarati, who had previously denied prosecutors’ request to grant anonymity to ten potential witnesses. When Cohen attempted to address unresolved issues regarding the journals, she was sharply rebuked by the court, which had indicated it would not address the new filing during the scheduled hearing. Gujarati stated that she did not intend to schedule any further conferences before trial, which is set for May 5, 2025.
The case’s challenges coincide with significant changes at DOJ and EDNY under the new Trump administration. EDNY Long Island Division Criminal Chief John J. Durham was sworn in as Interim U.S. Attorney for EDNY on January 21, following former U.S. Attorney Breon Peace’s January 10 resignation. Peace spearheaded the OneTaste prosecution. Durham will serve until the Senate confirms President Trump’s nominee, Nassau County District Court Judge Joseph Nocella Jr.
The timing is particularly significant given President Trump’s January 20 executive order “Ending The Weaponization of The Federal Government.” The order specifically cites the EDNY prosecution of Douglass Mackey as an example of “third-world weaponization of prosecutorial power.” This reference carries special weight as EDNY deployed similar strategies in both the Mackey and Cherwitz cases – single conspiracy charges without substantive crimes, supported by media narratives rather than traditional evidence.
As Durham takes the helm at EDNY, this case presents an early test of how the office will handle prosecutions that blend entertainment with evidence, and whether novel theories of conspiracy without specified crimes will survive increased scrutiny under new leadership. The transformation of Netflix content into federal evidence may face particular challenges as the Attorney General reviews law enforcement activities of the prior four years under the new executive order’s mandate.
The government’s position faces further scrutiny as mainstream media begins to question its narrative. A January 24 Wall Street Journal profile by veteran legal reporter Corinne Ramey presents Daedone as a complex figure whose supporters call her a “visionary,” while examining the unusual nature of prosecuting wellness education as forced labor. The piece’s headline – “She Made Orgasmic Meditation Her Life. Not Even Prison Will Stop Her” – captures both the prosecution’s gravity and Daedone’s unwavering commitment to her work despite federal charges.

1 U.S. v. Cherwitz, et al., No. 23-cr-146 (DG).

Deadline for Filing Annual Pesticide Production Reports — March 1, 2025

The March 1, 2025, deadline for all establishments, foreign and domestic, that produce pesticides, devices, or active ingredients to file their annual production reports for the 2024 reporting year is fast approaching. Pursuant to Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) Section 7(c)(1) (7 U.S.C. § 136e(c)(1)), “Any producer operating an establishment registered under [Section 7] shall inform the Administrator within 30 days after it is registered of the types and amounts of pesticides and, if applicable, active ingredients used in producing pesticides” and this information “shall be kept current and submitted to the Administrator annually as required.”
Reports must be submitted on or before March 1 annually for the prior reporting year’s production and distribution. The report, filed through the submittal of the U.S. Environmental Protection Agency (EPA) Form 3540-16: Pesticide Report for Pesticide-Producing and Device-Producing Establishments, must include the name and address of the producing establishment, as well as pesticide production information, such as product registration number, product name, and amounts produced and distributed. The annual report is always required, even when no products are produced or distributed.
EPA has created an electronic reporting system for pesticide-producing establishment reports, the Section Seven Tracking System (SSTS). Users can access SSTS within EPA’s Central Data Exchange (CDX) to submit annual pesticide production reports. Electronic reporting saves time and eliminates mailing costs and related logistics. EPA encourages all reporters to submit electronically to ensure proper submission and a timely review of the report.
Links to EPA Form 3540-16, as well as instructions on how to report and how to add and use EPA’s SSTS electronic filing system, are available below:

EPA Form 3540-16: Pesticide Report for Pesticide-Producing and Device-Producing Establishments;
Instructions for Completing EPA Form 3540-16 (available within the ZIP file); and
Electronic Reporting for Pesticide Establishments.

Further information is available on EPA’s website.

Cybersecurity in the Marine Transportation System: What You Need to Know About the Coast Guard’s Final Rule

The U.S. Coast Guard (“USCG”) published a final rule on January 17, 2025, addressing Cybersecurity in the Marine Transportation System (the “Final Rule”), which seeks to minimize cybersecurity-related transportation security incidents (“TSIs”) within the maritime transportation system (“MTS”) by establishing requirements to enhance the detection of, response to, and recovery from cybersecurity risks. Effective July 16, 2025, the Final Rule will apply to U.S.-flagged vessels, as well as Outer Continental Shelf and onshore facilities subject to the Maritime Transportation Security Act of 2002 (“MTSA”). The USCG is also seeking comments on a potential two-to-five-year delay of implementation for U.S.-flagged vessels. Comments are due March 18, 2025.
Background
The need for enhanced cybersecurity protocols within the MTS has long been recognized. MTSA laid the groundwork for addressing various security threats in 2002 and gave the USCG broad authority to take action and set requirements to prevent TSIs. MTSA was amended in 2018 to make clear that cybersecurity-related risks that may cause TSIs fall squarely within MTSA and USCG authority.
Over the years, the USCG and the International Maritime Organization have dedicated resources and published guidelines addressing the growing cybersecurity threats that arise as technology is integrated ever more deeply into all aspects of the MTS. The USCG expanded its efforts to address cybersecurity threats throughout the MTS in its latest rulemaking, publishing the original Notice of Proposed Rulemaking (“NPRM”) on February 22, 2024. The NPRM received significant public feedback, leading to the development of the Final Rule.
Final Rule
In its Final Rule, the USCG addresses the many comments received on the NPRM and sets forth minimum cybersecurity requirements for U.S.-flagged vessels and applicable facilities. 
Training. Within six months of the Final Rule’s effective date, training must be conducted on the recognition and detection of cybersecurity threats and all types of cyber incidents, techniques used to circumvent cybersecurity measures, and reporting procedures, among other topics. Key personnel are required to complete more in-depth training.
Assessment and Plans. The Final Rule requires owners and operators of U.S.-flagged vessels and applicable facilities to conduct a Cybersecurity Assessment, develop a Cybersecurity Plan and Cyber Incident Response Plan, and appoint a Cybersecurity Officer that meets specified requirements within 24 months of the effective date. There are a host of requirements for the Cybersecurity Plan, including, among others: provisions for account security, device protection, data safeguarding, training, drills and exercises, risk management practices, strategies for mitigating supply chain risks, penetration testing, resilience planning, network segmentation, reporting protocols, and physical security measures. Additionally, the Cyber Incident Response Plan must provide instructions for responding to cyber incidents and delineate the key roles, responsibilities, and decision-making authorities among staff.
Plan Approval and Audits. The Final Rule requires that Cybersecurity Plans be submitted to the USCG for review and approval within 24 months of the Final Rule’s effective date, unless a waiver or equivalence is granted. The Rule also gives the USCG the power to perform inspections and audits to verify implementation of the Cybersecurity Plan.
Reporting. The Final Rule requires reporting of “reportable cyber incidents”[1] to the National Response Center without delay. This reporting requirement takes effect immediately on July 16, 2025. Further, the Final Rule revises the definition of “hazardous condition” to expressly include cyber incidents.
Potential Waivers. The Final Rule allows for limited waivers or equivalence determinations. A waiver may be granted if the owner or operator demonstrates that the cybersecurity requirements are unnecessary given the specific nature or operating conditions. An equivalence determination may be granted if the owner or operator demonstrates that the U.S.-flagged vessel or facility complies with international conventions or standards that provide an equivalent level of security. Each waiver or equivalence request will be evaluated on a case-by-case basis.
Potential Delay in Implementation. In response to a number of comments on the ability of U.S.-flagged vessels to meet the implementation schedule, the Final Rule seeks comments on whether a delay of an additional two to five years is appropriate.
Conclusion
As automation and digitalization continue to advance within the maritime sector, it is imperative to develop cybersecurity strategies tailored to the specific management and operational needs of each company, facility, and vessel. Owners and operators of U.S.-flagged vessels and MTSA facilities are advised to review the new regulations closely and begin preparing for the new cybersecurity requirements at the earliest opportunity. Stakeholders are also encouraged to provide comments before March 18, 2025, addressing the potential two-to-five-year delay in implementation for U.S.-flagged vessels.

[1] A reportable cyber incident is defined as an incident that leads to, or, if still under investigation, can reasonably lead to any of the following: (1) substantial loss of confidentiality, integrity, or availability of a covered information system, network, or operational technology system; (2) disruption or significant adverse impact on the reporting entity’s ability to engage in business operations or deliver goods or services, including those that have a potential for significant impact on public health or safety or may cause serious injury or death; (3) disclosure or unauthorized access directly or indirectly of non-public personal information of a significant number of individuals; (4) other potential operational disruption to critical infrastructure systems or assets; or (5) incidents that otherwise may lead to a TSI as defined in 33 C.F.R. 101.105.

Human Trafficking Monitoring for Telehealth Providers

Overview: Telehealth providers are uniquely positioned to monitor for human trafficking when interacting with patients. Survivor records indicate that health services are among the most common points of contact for trafficked persons, and nearly 70% of human trafficking survivors report having had access to health services at some point during their exploitation. While there is limited data regarding trafficked persons’ use of telehealth services specifically, empirical evidence demonstrates that a greater proportion of trafficked persons completed telehealth appointments during the early period of the COVID-19 pandemic than pre-pandemic. To enable telehealth providers to assist trafficked patients, this article discusses the legal landscape surrounding human trafficking and lays out best practices for telehealth providers.
Background: Telehealth providers are subject to a patchwork of legal requirements aimed at reducing human trafficking. If the patient is under the age of 18 or is disabled, many states require telehealth providers to report instances in which they know or reasonably believe the patient has experienced or is experiencing abuse, mistreatment, or neglect. Some states, such as Florida and New Jersey, also require telehealth providers to partake in anti-trafficking education.
Online platforms that offer telehealth services are also subject to federal legislation regarding sex trafficking monitoring. In 2018, the US Congress passed the Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (FOSTA). The law was enacted primarily in response to unsuccessful litigation against Backpage.com, a website accused of permitting and even assisting users in posting advertisements for sex trafficking. Before FOSTA’s enactment, Section 230 of the Communications Decency Act essentially shielded online platforms from liability for such conduct. FOSTA, however, effectively created an exception to Section 230 by establishing criminal penalties for those who promote or facilitate sex trafficking through their control of online platforms. These penalties, generally limited to a fine, imprisonment of up to 10 years, or both, may be heightened for aggravated violations, which are violations involving reckless disregard of sex trafficking or the promotion or facilitation of prostitution of five or more people. State attorneys general and, in cases of aggravated violations, injured persons also may bring civil actions against those who control online platforms in violation of the law.
Since FOSTA’s inception, the US Department of Justice (DOJ) has brought at least one criminal charge under the law. In 2021, after being charged by DOJ, the owner of the online platform CityXGuide.com pleaded guilty to one count of promotion of prostitution and reckless disregard of sex trafficking, a violation of FOSTA’s aggravated violations provision. According to DOJ officials, more charges have not been brought under FOSTA because the law is relatively new and federal prosecutors have had success prosecuting those who control online platforms by bringing racketeering and money laundering charges. Nonetheless, it is possible that prosecutors will pursue FOSTA violations more regularly during the Trump administration, particularly because US President Donald Trump signed it into law during his first term in office, calling it “crucial legislation.”
Best Practices for Telehealth Providers
Telehealth providers and online platforms that offer telehealth services should consider adhering to the following best practices when monitoring for human trafficking:

Complete Anti-Trafficking Training. Telehealth providers should complete an anti-trafficking training or educational program on a regular basis, regardless of whether they are legally required to do so. One such program is the US Department of Health and Human Services’ Stop, Observe, Ask, and Respond (SOAR) to Health and Wellness Training program. Telehealth providers may attend SOAR trainings in person or online and, depending on the program, may receive continuing education credit for their participation.
Implement a Referral Network. Prior to monitoring patients, telehealth providers should prepare a comprehensive referral list with detailed procedures for assisting identified individuals who have been trafficked or are vulnerable to trafficking. Referral lists should help patients access services that meet various immediate, intermediate, and long-term needs. Referral lists also should include information about how to connect with both national and local anti-trafficking resources.
Be Aware of Indicators of Human Trafficking for Adults. The National Human Trafficking Training and Technical Assistance Center has developed indicators of adult human trafficking. Indicators that may arise during a telehealth visit include instances where:

The patient is not in control of personal identification or does not have valid identification as part of the visit
The patient does not know where they live (or their geolocation does not match their stated location)
The patient’s story does not make sense or seems scripted
The patient seems afraid to answer questions
The patient appears to be looking at an unidentified person offscreen after speaking
The patient’s video background appears to be an odd living or work space (may include tinted windows, security cameras, barbed wire, or people sleeping or living at worksite)
The patient exhibits or indicates signs of physical abuse, drug or alcohol misuse, or malnourishment.

Be Aware of Indicators of Human Trafficking for Children. Indicators of human trafficking for children often differ from those for adults. The National Center for Missing & Exploited Children (NCMEC) has issued a list of risk factors useful for identifying possible indicators of child sex trafficking. Although NCMEC cautions that no single indicator can accurately identify a child as a sex trafficking victim, the presence of multiple factors increases the likelihood of identifying victims. Indicators that may arise during a telehealth visit include the following:

The child avoids answering questions or lets others speak for them
The child lies about their age and identity or otherwise responds to the provider in a manner that doesn’t align with their telehealth profile or account information
The child appears to be looking at an unidentified person offscreen after speaking
The child uses prostitution-related terms, such as “daddy,” “the life,” and “the game”
The child has no identification (or their identification is held by another person)
The child displays evidence of travel in their video background (living out of suitcases, at motels, or in a car)
The child references traveling to cities or states that do not match their geolocation
The child has numerous unaddressed medical issues.

UK ICO Sets Out Proposals to Promote Sustainable Economic Growth

On January 24, 2025, the UK Information Commissioner’s Office (“ICO”) published the letter it sent to the UK Prime Minister, Chancellor of the Exchequer, and Secretary of State for Business and Trade, in response to their request for proposals to boost business confidence, improve the investment climate, and foster sustainable economic growth in the UK. In the letter, the ICO sets out its proposals for doing so, including:

New rules for AI: The ICO recognizes that regulatory uncertainty can be a barrier to innovation, so it proposes a single set of rules for those developing or deploying AI products, supporting the UK government in legislating for such rules.
New guidance on other emerging technologies: The ICO will support businesses and “innovators” by publishing innovation-focused guidance in areas such as neurotech, cloud computing, and Internet of Things devices.
Reducing costs for small and medium-sized companies (“SMEs”): Focusing on the administrative burden that SMEs face when complying with a complex regulatory framework, the ICO commits to simplifying existing requirements and easing the burden of compliance, including by launching a Data Essentials training and assurance programme for SMEs during 2025/26.
Sandboxes: The ICO will expand on its previous sandbox services by launching an “experimentation program” where companies will get a “time-limited derogation” from specific legal requirements, under the strict control of the ICO, to test new ideas. The ICO would support legislation from UK government in this area.
Privacy-preserving digital advertising: The ICO recognizes the financial and societal benefits provided by the digital advertising economy but notes there are aspects of the regulatory landscape that businesses find difficult to navigate. The ICO wishes to help reduce the burdens for both businesses and customers related to digital advertising. To do so, the ICO, amongst other things, points to its approach to regulating digital advertising as detailed in its 2025 Online Tracking Strategy (as discussed here).
International transfers: Recognizing the importance of international transfers to the UK economy, the ICO will, amongst other things, publish new guidance to enable quicker and easier transfers of data, and work through international fora, such as the G7, to build international agreement on increasing data transfer mechanisms.
Promote information sharing between regulators: The ICO acknowledges that engaging with multiple regulators can be resource intensive, especially for SMEs. The ICO will work with the Digital Regulation Cooperation Forum to simplify this process, and would encourage legislation to simplify information sharing between regulators.

Read the letter from the ICO.

LOW INTEGRITY?: Integrity Marketing Stuck in TCPA Class Action After Declaration Fails to Move the Court

Always fascinating the arguments TCPA defendants make.
Consider Integrity Marketing’s effort in Newman v. Integrity, 2025 WL 358933 (N.D. Ill. Jan. 31, 2025).
There Integrity argued it could not be liable for TCPA-violating calls made by its network of lead generators because it was just a middleman that was not actually licensed to sell insurance itself.
Huh?
Apparently Integrity believed that if it wasn’t selling anything itself it couldn’t be liable for telephone solicitations it was brokering. (This is similar to the argument Quote Wizard made and, yeah, it didn’t work out so well for them.)
Integrity also argued that it couldn’t be sued directly for calls made by third-parties unless the Plaintiff pierced the corporate veil.
Double huh?
That’s not even close to accurate. Vicarious liability principles apply when dealing with fault for third-party conduct, not corporate formality law. That argument doesn’t even make logical sense.
Regardless, the Court had little trouble determining Integrity was potentially liable for the calls.
On the corporate veil argument the Court quickly rejected the (totally wrong) alter ego approach and properly applied agency law. The Court determined the Complaint’s allegations that Integrity hired a mob of affiliates, insurance agents, and (sub)vendors to telemarket insurance and other products to consumers were sufficient to state a claim since Integrity allegedly “controlled” these entities. In the Court’s view the ability to “dictate which telemarketing or lead-generating vendors may be used” was particularly damning.
On the “we don’t actually sell insurance” argument the Court rejected the Defendant’s declaration – you can’t submit evidence on a 12(b)(6) motion, folks – but found that even if the evidence were accepted Integrity would still lose:
 However, even if the Court were to take the declaration into account, it does not refute the inferences drawn that Defendant has a network of agents encouraging the purchase of insurance, that Defendant controls or facilitates the telemarketing calls Plaintiff received which encouraged Plaintiff to purchase insurance, or that the telemarketing calls were done on behalf of Defendant. In other words, even if it is not selling insurance directly, the complaint plausibly alleges Defendant is using its network of subsidiaries and agents to engage in telemarketing and to facilitate and control the purchase of insurance using impermissible means. 
So, yeah.
Motion to dismiss denied. Integrity stuck in the case.

CJEU Upholds EDPB’s Authority to Order Broader Investigations in Cross-Border Cases

In a landmark judgment delivered on 29 January 2025, the General Court of the European Union affirmed the European Data Protection Board’s (EDPB) authority to require national supervisory authorities to broaden their investigations in cross-border data protection cases.
The case arose from the Irish Data Protection Commission’s (DPC) challenge to three EDPB binding decisions concerning Meta’s data processing practices for Facebook, Instagram, and WhatsApp. The EDPB had instructed the DPC to conduct new investigations into Meta’s processing of special categories of personal data and issue new draft decisions.
The Court’s ruling emphasizes that the EDPB’s power to order broader investigations is subject to specific safeguards. Such orders can only be issued following a “relevant and reasoned objection” from another supervisory authority that demonstrates significant risks to data subjects’ rights. Additionally, these decisions require approval from a two-thirds majority of EDPB members.
Notably, the Court rejected the DPC’s argument that this authority undermines national supervisory authorities’ independence. Instead, it found that the EDPB’s role supports the consistent application of the GDPR across the EU while respecting national authorities’ operational autonomy in conducting investigations.
This decision reinforces the EDPB’s role as a central authority in ensuring comprehensive data protection investigations, particularly in cases involving major tech platforms. It clarifies that while the one-stop-shop mechanism aims for procedural efficiency, this cannot override the fundamental right to data protection.
The ruling sets a significant precedent for future cross-border enforcement actions, emphasizing that national supervisory authorities must be prepared to expand their investigations when legitimate concerns are raised by their counterparts in other EU member states.

EDPB Releases Pseudonymization Guidelines to Enhance GDPR Compliance

On Jan. 16, 2025, the European Data Protection Board (EDPB) published guidelines on the pseudonymization of personal data for public consultation. The Berlin Data Protection Commissioner (BlnBDI) played a leading role in drafting these guidelines (see the German-language BlnBDI press release). The consultation is ongoing, and comments can be submitted until Feb. 28, 2025, via the EDPB form.
Pseudonymization v. Anonymization
The proposed guidelines provide an overview of pseudonymization techniques and their benefits in business. Under the General Data Protection Regulation (GDPR), pseudonymization means processing personal data so that it can’t be attributed to a specific person without the use of additional information. Unlike anonymization, where data can’t be traced back to an individual even with additional information, pseudonymized data is still considered personal and subject to GDPR.
Advantages of Pseudonymization
The guidelines emphasize that the GDPR does not mandate pseudonymization. Nevertheless, using pseudonymization techniques can enhance GDPR compliance and lower data breach risks. It also supports using legitimate interests as a legal basis for data processing and ensures compatibility with original data collection purposes. Accordingly, companies can use pseudonymization to develop privacy-enhancing applications for data use and analysis that appropriately considers the rights of data subjects. This is particularly relevant in data-heavy sectors like finance, human resources, and health care.
Pseudonymization Procedures
According to the guidelines, effective pseudonymization involves three steps: 

1. Transform personal data by removing or replacing identifiers using methods like cryptographic algorithms (e.g., message authentication codes or encryption algorithms) or lookup tables, where pseudonyms are matched with identifiers.
2. Store separately and protect the additional information, such as cryptographic keys or lookup tables, needed for subsequent re-identification (“pseudonymization secrets”). Information beyond the controller’s immediate control, which can reasonably be expected to be available to the controller, should be considered when assessing the effectiveness of pseudonymization.
3. Implement technical and organizational measures (TOMs) to safeguard against unauthorized re-identification. TOMs include access restrictions, decentralized storage of pseudonymization secrets, and random generation of pseudonyms.

These measures enhance data security and reduce data breach risks. The guidelines provide practical scenarios to illustrate these procedures.
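The first two steps can be sketched in code. The following is a minimal, hypothetical illustration only (the names `pseudonymize`, `key`, and `lookup` are our own, not drawn from the guidelines), showing a keyed-hash transformation of an identifier, a separately held pseudonymization secret, and a lookup table that permits authorized re-identification:

```python
import hmac
import hashlib
import secrets

# Hypothetical pseudonymization secret. Per the guidelines' second step,
# this key (and the lookup table below) would be stored separately from
# the pseudonymized data, under strict access controls.
key = secrets.token_bytes(32)

def pseudonymize(identifier: str, key: bytes) -> str:
    """Replace an identifier with a keyed-hash pseudonym (a message
    authentication code); one-way without access to the key."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Lookup table matching pseudonyms to identifiers, kept with the secret,
# never alongside the pseudonymized records themselves.
lookup = {}
for email in ["alice@example.com", "bob@example.com"]:
    lookup[pseudonymize(email, key)] = email

# A pseudonymized record: without `key` or `lookup`, it cannot be
# attributed to a specific person; with them, the controller can
# re-identify the data subject where permitted.
record = {"customer": pseudonymize("alice@example.com", key), "balance": 100}
```

The third step, TOMs against unauthorized re-identification, is organizational rather than algorithmic (access restrictions, decentralized storage of the secret) and is deliberately not represented in the sketch.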
Outlook
Although not legally binding, the EDPB guidelines often influence courts and regulators. They help interpret the GDPR and guide companies in developing compliant processes for data protection. Companies should view these guidelines as important advice for designing their privacy practices, which can minimize legal risks and support arguments during official audits or legal disputes.
The guidelines assist businesses in balancing data protection with operational needs. Pseudonymization can offer competitive advantages by safeguarding customer data and boosting customer trust through privacy-focused practices.

Loper Bright Strikes Again: Eleventh Circuit Hangs Up on FCC’s One-to-One Consent Rule, Calling the Validity of Other TCPA Rules Into Question

The Eleventh Circuit Court of Appeals recently vacated the Federal Communications Commission’s 2023 “one-to-one consent rule” under the Telephone Consumer Protection Act (TCPA). In Insurance Marketing Coalition, Ltd. v. Federal Communications Commission,1 the Court struck down the order that (1) would have limited businesses’ ability to obtain prior express consent from consumers to a single entity at a time, and (2) would have restricted the scope of such calls to subjects logically and topically related to “interaction that prompted the consent.”2 In particular, the Court held that the FCC exceeded its authority under the plain language of the statute.3 In the wake of the IMC decision, other TCPA regulations may well face the chopping block.
The FCC’s order sought to curtail the practice of “lead generation,” which offers consumers a “one-stop means of comparing [for example] options for health insurance, auto loans, home repairs, and other services.”4 In its ruling, the Court looked to the authority Congress had extended to the FCC through the TCPA. In general, the TCPA prohibits calls made “using any automatic telephone dialing system or an artificial or prerecorded voice” without “the prior express consent of the called party.”5 The statute does not define “prior express consent.”6 Congress gave the FCC authority to “prescribe regulations to implement” the TCPA, and to exempt certain calls from the TCPA’s prohibitions.7 In its 2023 order, the FCC sought to restrict telemarketing calls by imposing the one-to-one consent restriction and the logically-and-topically related restriction.8
Applying the Supreme Court’s decision in Loper Bright Enters. v. Raimondo,9 the Eleventh Circuit ruled that in promulgating the 2023 order, the FCC exceeded its statutory authority. The Court found that the plain meaning of the term “prior express consent” nowhere suggests that a consumer can only give consent to one entity at a time and only for calls that are “logically and topically related” to the consent. Rather, the Court ruled, in the absence of a statutory definition, the common law provides that the elements of “prior express consent” are “permission that is clearly and unmistakably granted by actions or words, oral or written,” given before the challenged call occurs.10 Nothing under the common law restricts businesses from obtaining consent from consumers to receive calls from a variety of entities regarding a variety of subjects in which they are interested.
In the wake of the IMC decision, other TCPA regulations may be ripe for challenge, including the FCC’s 2012 determination that calls introducing telemarketing or solicitations require prior express written consent. For example, in IMC, the Court held that the FCC cannot create requirements for obtaining prior express consent beyond what the plain language of that term will support. And the Court delineated the common law elements of prior express consent, which the Court found can be “granted by actions or words, oral or written.”11 Under this reasoning, the Court held that “the TCPA requires only ‘prior express consent’—not ‘prior express consent’ plus.”12 This reasoning may well support a challenge to the prior express written consent rules. After all, nothing in the plain meaning of the term “prior express consent” requires a writing versus oral consent, and the common law does not appear to support such a distinction. Rather, the requirement of written consent clearly adds to the statutory requirement and for that reason, appears to exceed the FCC’s authority.
Notwithstanding the fact that the FCC imposed the prior express written consent rule more than 10 years ago, another recent decision from the Supreme Court suggests that new entrants to the lead-generation industry have standing to file a challenge. In Corner Post, Inc. v. Board of Governors of Federal Reserve System,13 the Supreme Court ruled that new market entrants impacted by federal rules have standing to challenge those rules within the statutory period that runs from the date of market entry. The firm will continue to follow challenges to the FCC’s rulemaking authority, including any challenges to the prior express written consent rule.
Footnotes

1 No. 24-10277, — F. 4th —, 2025 WL 289152 (11th Cir. Jan. 24, 2025) (IMC decision).
2 See Second Report and Order, In the Matter of Targeting and Eliminating Unlawful Text Messages, Rules and Regs. Implementing the Tel. Consumer Prot. Act of 1991, Advanced Methods to Target and Eliminate Unlawful Robocalls, 38 FCC Rcd. 12247 (2023).
3 FCC orders have perennially exceeded the agency’s authority under the TCPA. For instance, beginning in 2003, the FCC took the position that a predictive dialer––a common tool used by business customer service centers––was an “automatic telephone dialing system,” even if the technology in question did not have the characteristics described in the statutory definition, namely “the capacity (A) to store or produce telephone numbers to be called, using a random or sequential number generator; and (B) to dial such numbers.” 47 U.S.C. § 227(a)(1). The United States Supreme Court threw out the FCC’s flawed rulings in Facebook, Inc. v. Duguid, 592 U.S. 395 (2021).
4 IMC, 2025 WL 289152, at *2.
5 47 U.S.C. § 227(b)(1)(A), (B).
6 See id.
7 47 U.S.C. § 227(b)(2)(B) and (C).
8 The Court observed that since 2012, the FCC has distinguished non-telemarketing calls for which the FCC has required prior express consent from calls that introduce telemarketing or solicitations for which the FCC has required prior express written consent. 47 C.F.R. § 64.1200(a)(2), (3); see also In re Rules and Regs. Implementing the Tel. Consumer Prot. Act of 1991, 27 FCC Rcd. 1830, 1838 (2012). The regulations define “prior express written consent” as an agreement, in writing, bearing the signature of the person called that clearly authorizes the seller to deliver or cause to be delivered to the person called advertisements or telemarketing messages using an automatic telephone dialing system or an artificial or prerecorded voice, and the telephone number to which the signatory authorizes such advertisements or telemarketing messages to be delivered. 47 C.F.R. § 64.1200(f)(9). The written agreement must “include a clear and conspicuous disclosure informing” the signing party that he consents to telemarketing or advertising robocalls and robotexts. 47 C.F.R. § 64.1200(f)(9)(i)(A).
9 603 U.S. 369, 391–92 n.4, 413 (2024).
10 IMC, 2025 WL 289152, at *6.
11 Id.
12 Id.
13 603 U.S. 799 (2024).

Federal Court Applies Antitrust Standard of Per Se Illegality to “Algorithmic Pricing” Case

A federal district court in Seattle recently issued an important antitrust decision on “algorithmic pricing.” Algorithmic pricing refers to the practice in which companies use software to help set prices for their products or services. Sometimes this software will incorporate pricing information shared by companies that may compete in some way. In recent years, both private plaintiffs and the government have filed lawsuits against multifamily property owners, hotel operators, and others, claiming their use of such software to set prices for rentals and rooms is an illegal conspiracy under the antitrust laws. The plaintiffs argue that, even without directly communicating with each other, these companies are essentially engaging in price-fixing by sharing pricing information with the algorithm and knowing that others are doing the same, which allegedly has led to higher prices for consumers. So far, these cases have had mixed outcomes, with at least two being dismissed by courts.
Duffy v. Yardi Systems, Inc.
Previously, courts handling these cases have applied, at the pleadings stage, the “rule-of-reason” standard for reviewing the competitive effects of algorithmic pricing. Under the rule-of-reason standard, a court will examine the algorithm’s actual effects before determining whether the use of the algorithm unreasonably restrains competition. In December, however, the U.S. District Court for the Western District of Washington in Duffy v. Yardi Systems, Inc., No. 2:23-cv-01391-RSL (W.D. Wash.) held that antitrust claims premised on algorithmic pricing should be reviewed under the standard of per se illegality, meaning the practice is assumed to harm competition as a matter of law. Under the per se standard, an antitrust plaintiff need only prove an unlawful agreement and the court will presume that the arrangement harmed competition. This ruling is significant because it departs from prior cases and could ease the burden on plaintiffs in future disputes.
In Yardi, the plaintiffs sued several large, multifamily property owners and their management company, Yardi Systems, Inc., claiming these defendants conspired to share sensitive pricing information and adopt the higher rental prices suggested by Yardi’s software. The court refused to dismiss the case, finding the plaintiffs had plausibly shown an agreement based on the defendants’ alleged “acceptance” of Yardi’s “invitation” to trade sensitive information for the ability to charge increased rents. See Yardi, No. 2:23-cv-01391-RSL, 2024 WL 4980771, at *4 (W.D. Wash. Dec. 4, 2024). The court also found the defendants’ parallel conduct in contracting with Yardi, together with certain “plus factors,” were enough to allege a conspiracy. The key “plus factor” was defendants’ alleged exchange of nonpublic information. The court noted the defendants’ behavior — sharing sensitive data with Yardi — was unusual and suggested they were acting together for mutual benefit.
The court decided the stricter per se rule should apply to algorithmic pricing cases, rather than the rule-of-reason. The court emphasized that “[w]hen a conspiracy consists of a horizontal price-fixing agreement, no further testing or study is needed.” Id. at *8. This decision diverged from an earlier case against a different rental-software company, where the court thought more analysis was needed because the use of algorithms is a “novel” business practice and thus not one that could be condemned as per se illegal without more judicial experience about the practice’s competitive effect. The Yardi case also stands apart from others that have been dismissed, like a prior case involving hotel operators, where there was no claim that the companies pooled their confidential information in the dataset the algorithm used to suggest prices. The court in that case decided that simply using pricing software, without sharing confidential data, did not necessarily mean there was illegal collusion. Future cases may thus depend in part on whether the software uses competitors’ confidential data to set or suggest prices.
It is unclear if other courts will adopt the same strict approach as the Yardi case when dealing with claims involving algorithmic pricing. It is clear, however, that more cases are on the horizon, likely spanning a variety of industries using pricing software.
Regulatory Efforts
Beyond private lawsuits, government agencies and lawmakers also are paying close attention to algorithmic pricing. Last year, for example, the U.S. Department of Justice (DOJ) and a number of state attorneys general sued a different rental-software company. The DOJ also has weighed in on several ongoing cases. Meanwhile Congress, along with various states and cities, has introduced laws to regulate algorithmic pricing, with San Francisco and Philadelphia banning the use of algorithms in setting rents. And just last month, the DOJ and Federal Trade Commission raised concerns about algorithmic pricing in a different context — exchanges of information about employee compensation — in the agencies’ new Antitrust Guidelines for Business Activities Affecting Workers. The new guidelines note that “[i]nformation exchanges facilitated by or through a third party (including through an algorithm or other software) that are used to generate wage or other benefit recommendations can be unlawful even if the exchange does not require businesses to strictly adhere to those recommendations.” Expect more legal and legislative action on this front in 2025 and beyond.

New York Governor Signs Privacy and Social Media Bills

On December 21, 2024, New York Governor Kathy Hochul signed a flurry of privacy and social media bills, including:

Senate Bill 895B requires social media platforms that operate in New York to clearly post terms of service (“ToS”), including contact information for users to ask questions about the ToS, the process for flagging content that users believe violates the ToS, and a list of potential actions the social media platform may take against a user or content. The New York Attorney General has authority to enforce the act and may subject violators to penalties of up to $15,000 per day. The act takes effect 180 days after becoming law.
Senate Bill 5703B prohibits the use of social media platforms for debt collection. The act, which took effect immediately upon becoming law, defines a “social media platform” as a “public or semi-public internet-based service or application that has users in New York state” that meets the following criteria:

a substantial function of the service or application is to connect users in order to allow users to interact socially with each other within the service or application. A service or application that provides e-mail or direct messaging services shall not be considered to meet this criterion on the basis of that function alone; and
the service or application allows individuals to: (i) construct a public or semi-public profile for purposes of signing up and using the service or application; (ii) create a list of other users with whom they share a connection within the system; and (iii) create or post content viewable or audible by other users, including, but not limited to, livestreams, on message boards, in chat rooms, or through a landing page or main feed that presents the user with content generated by other users.

Senate Bill 2376B amends relevant laws to add medical and health insurance information to the definitions of identity theft. The act defines “medical information” to mean any information regarding an individual’s medical history, mental or physical condition, or medical treatment or diagnosis by a health care professional. The act defines “health insurance information” to mean an individual’s health insurance policy number or subscriber identification number, any unique identifier used by a health insurer to identify the individual or any information in an individual’s application and claims history, including, but not limited to, appeals history. The act takes effect 90 days after becoming law.
Senate Bill 1759B, which takes effect 60 days after becoming law, requires online dating services to disclose certain information about banned members of the online dating services to New York members of the services who previously received and responded to an on-site message from the banned members. The disclosure must include:

the user name, identification number, or other profile identifier of the banned member;
the fact that the banned member was banned because, in the judgment of the online dating service, the banned member may have been using a false identity or may pose a significant risk of attempting to obtain money from other members through fraudulent means;
that a member should never send money or personal financial information to another member; and
a hyperlink to online information that clearly and conspicuously addresses the subject of how to avoid being defrauded by another member of an online dating service.

Firings at the US Privacy and Civil Liberties Oversight Board and Potential Impact on Transatlantic Data Transfers

President Trump recently fired the three Democratic members of the Privacy and Civil Liberties Oversight Board (PCLOB). Since these firings bring the Board to a sub-quorum level, they have the potential to significantly disrupt transatlantic transfers of employee and other personal data from the EU to the US under the EU-US Data Privacy Framework (DPF).
The PCLOB is an independent board tasked with oversight of the US intelligence community. It is a bipartisan board consisting of five members, three of whom represent the president’s political party and two represent the opposing party. The PCLOB’s oversight role was a significant element in the Trans-Atlantic Data Privacy Framework (TADPF) negotiations, helping the US demonstrate its ability to provide an essentially equivalent level of data protection to data transferred from the EU. Without this key element, it is highly likely there will be challenges in the EU to the legality of the TADPF. If the European Court of Justice invalidates the TADPF or the EU Commission annuls it, organizations that certify to the EU-US Data Privacy Framework will be without a mechanism to facilitate transatlantic transfers of personal data to the US. This could potentially impact transfers from the UK and Switzerland as well.
Organizations that rely on their DPF certification for transatlantic data transfers should consider developing a contingency plan to prevent potential disruption to the transfer of essential personal data. Steps to prepare for this possibility include reviewing existing agreements to identify what essential personal data is subject to ongoing transfers and the purpose(s), determining whether EU Standard Contractual Clauses would be an appropriate alternative and, if so, conducting a transfer impact assessment to ensure the transferred data will be subject to reasonable and appropriate safeguards.