New York Attorney General Secures $450,000 Settlement Over eufy Home Security Camera Security Concerns
New York Attorney General Letitia James announced a $450,000 settlement with three companies distributing eufy home security video cameras—Fantasia Trading LLC, Power Mobile Life LLC and Smart Innovation LLC—following an investigation into the security of their Internet-enabled video products. The settlement follows findings that, in some cases, video streams from eufy cameras were transmitted without end-to-end encryption and active video feeds could be accessed without authentication by individuals with the corresponding URL.
The Office of the Attorney General (OAG) initiated the investigation after a November 2022 disclosure by a security researcher raised concerns about the accuracy of eufy’s marketing claims regarding its security and encryption measures. The researcher’s findings suggested that eufy’s Internet-connected security cameras, video doorbells and smart locks did not fully encrypt video data in transit, despite company assurances that consumer footage would remain private and secure.
The OAG’s investigation confirmed that, in certain circumstances:
video data was not protected by end-to-end encryption, leaving portions of the transmission unencrypted;
active video streams could be accessed without authentication if an individual had the correct URL;
some URLs could be determined without directly obtaining them from a user, increasing the risk of unauthorized access; and
the companies had not implemented sufficient security testing procedures, leading to undetected vulnerabilities.
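The URL-based exposure described above can be sketched in a few lines. This is a hypothetical illustration (the endpoint and scheme are invented, not eufy's actual URLs): a sequential stream ID can be enumerated by an attacker, while a long random token cannot, though even an unguessable URL is still a bearer credential and no substitute for per-request authentication.

```python
import secrets

# Hypothetical illustration: a predictable stream URL scheme vs. an
# unguessable one. Neither substitutes for real authentication.

def predictable_stream_url(camera_id: int) -> str:
    # Sequential IDs can be enumerated by simply counting up.
    return f"https://streams.example.com/live/{camera_id}"

def unguessable_stream_url() -> str:
    # 32 random bytes (~256 bits of entropy) make brute-force enumeration
    # infeasible, but the URL is still a bearer token: anyone holding it
    # gets access, which is why authentication matters regardless.
    token = secrets.token_urlsafe(32)
    return f"https://streams.example.com/live/{token}"

url_a = predictable_stream_url(1042)
url_b = unguessable_stream_url()
```

The settlement's requirement of third-party security testing is aimed at catching exactly this class of design flaw before products ship.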
Under the terms of the settlement, the companies must implement enhanced security measures including:
developing and maintaining a comprehensive information security program to protect consumer data;
implementing secure software development practices, including third-party security testing;
maintaining a vulnerability management program with regular penetration testing; and
enhancing encryption protocols for video storage and transmission.
This resolution highlights the importance of robust security measures for Internet-connected devices that store and transmit sensitive consumer data. Companies offering such products must ensure that their security practices align with industry standards and that their marketing claims accurately reflect their security capabilities.
I SPY!: Court Finds No Standing In Spy Pixels Case
Hi, CIPAWorld!
The District of Massachusetts just issued a huge win for the defendant in a spy pixels class action and dismissed the case altogether for lack of standing!
In Campos v. TJX Companies, Inc., No. 24-cv-11067, 2025 WL 360677 (D. Mass. Jan. 31, 2025), Plaintiff Campos filed a putative class action against Defendant TJX Companies (“TJX”), alleging that TJX embedded a “spy pixel” in its promotional emails which collected certain information about the email and its recipients, including the email address, the subject of the email, when it was opened and read, the recipient’s location, the length of time the recipient spent reading the email, whether it was forwarded or printed, the recipient’s email service, et cetera. Although Plaintiff conceded that she subscribed to TJX’s email list, she said that TJX nevertheless collected this information without her consent or the consent of other class members. Plaintiff claimed that this lack of consent formed the basis of TJX’s violation of the Arizona Telephone, Utility and Communication Service Records Act, which makes it a crime for a person to “[k]nowingly procure, … [a] communication service record of any resident of [Arizona] without the authorization of the customer to whom the record pertains or by fraudulent, deceptive or false means.” Id. at *1 (second and third alterations added).
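The mechanics of a spy pixel are simple enough to sketch. In this hedged illustration (the endpoint and parameter names are invented, not TJX's actual tracker), the sender embeds a 1x1 image whose URL carries a per-recipient identifier; when the mail client fetches the image on open, the server logs the request and everything it reveals:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical sketch of how a tracking pixel works. The endpoint and
# parameter names here are invented for illustration.

def pixel_url(recipient_id: str, campaign: str) -> str:
    # The URL embedded as a 1x1 <img> in the promotional email.
    query = urlencode({"r": recipient_id, "c": campaign})
    return f"https://track.example.com/open.gif?{query}"

def server_side_view(requested_url: str, client_ip: str, user_agent: str) -> dict:
    # What the tracker's server can infer from a single image request.
    params = parse_qs(urlparse(requested_url).query)
    return {
        "recipient": params["r"][0],   # who opened the email
        "campaign": params["c"][0],    # which email it was
        "ip": client_ip,               # approximate location
        "email_client": user_agent,    # recipient's mail software
    }

url = pixel_url("subscriber-123", "feb-promo")
event = server_side_view(url, "203.0.113.7", "Mozilla/5.0 (iPhone)")
```

Timestamps of successive requests supply the "when it was opened" and "how long it was read" data points the complaint describes.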
In response, TJX filed, inter alia, a Rule 12(b)(1) motion to dismiss for lack of standing, arguing that the Plaintiff could not establish an injury-in-fact. To determine whether Plaintiff suffered an injury-in-fact based on a violation of privacy, as claimed here, the Court explained that there must be a “‘close relationship’ between, on one hand, Defendant’s alleged procurement of Plaintiff’s data relating to her opening of promotional emails and, on the other hand, a traditionally actionable harm under common law.” Id. at *3 (internal citation omitted).
The Plaintiff first likened her injuries to the tort of intrusion upon seclusion, which requires an intentional intrusion and one that “would be highly offensive to a reasonable person.” Id. The cause of action is aimed at protecting deeply personal, private, or confidential matters. The Court, however, wasn’t buying it:
Some of this information clearly does not implicate Plaintiff’s privacy or seclusion. For instance, Plaintiff’s email address was certainly not private, given that she provided it to Defendant when she consented to receive the promotional emails. Nor was there anything particularly private about the email’s subject or other content, as Defendant authored the email and therefore would have known the subject and content with or without the pixels and thus without any impact on any privacy interest asserted by Plaintiff.
Id. at *4 (emphasis added). While the Court found that the individualized data about whether, when, where, and for how long Plaintiff read TJX’s emails presented a closer question, the Court still found this distinguishable from the idea of covert surveillance. Specifically, it explained that “a glimpse into Plaintiff’s email inbox is a far cry from peeking into her upstairs window, particularly where she voluntarily subscribed to Defendant’s emails and where there is no allegation that the spy pixels intruded into any other private area of her email inbox or computer.” Id. (emphasis added).
In a footnote, the Court noted that “Plaintiff’s allegation that the spy pixel tracked whether the email was forwarded gives the Court some pause, as it comes the closest to tracking ‘unrelated personal messages.’” Id. at *6, n.3 (emphasis added). However, it dismissed this issue because Plaintiff did not allege that the pixels could track the recipient or the content of the forwarded message. Indeed, “the simple act of forwarding, without more, does not rise to the level of substantial intrusion into Plaintiff’s private affairs.” Id.
Plaintiff also attempted to liken her harm to other privacy statutes which give rise to standing, but to no avail. First, she analogized her harm to cases under the TCPA. The Court rejected this argument on the basis that plaintiffs in TCPA cases received unconsented-to and unsolicited communications, whereas Plaintiff subscribed to TJX’s messages and frequently opened them. Second, it found that Plaintiff’s reliance on the Video Privacy Protection Act, which prohibits disclosure of an individual’s rental and sale records, was misplaced because Plaintiff did not allege any such disclosure. And finally, it found the information protected by the Illinois Biometric Information Privacy Act—a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry—to be decidedly more personal than the information at issue in Plaintiff’s Complaint.
Accordingly, the Court dismissed Plaintiff’s complaint for lack of standing. Plaintiff’s allegations just didn’t cut it—without a concrete injury, there’s no standing, and without standing, there’s no case. This ruling reinforces that not every data collection claim fits within traditional privacy harms, especially when the user voluntarily engages with the service. It’s a significant win for defendants facing similar claims and one to keep an eye on moving forward.
BIG BROTHER OR BIG BUSINESS?: E! Entertainment’s Fashion Police Might Need Troutman Amin Instead
Greetings CIPAWorld! Twice in one day? You know it must be big. This case is an absolute must-read for data privacy professionals and law students looking to break into the field. If you’re studying privacy law or tech policy or just want to see how cutting-edge legal arguments are shaping the future of digital rights, this one’s for you. Brace yourselves because we are about to dive deep into this well-drafted Complaint recently filed. A California resident has filed a sweeping class action lawsuit against E! Entertainment Television, alleging the media company’s website secretly tracks and monetizes visitor data without consent—turning users into unwitting participants in a vast digital advertising machine. See Weiler v. E! Ent. Television, LLC, No. 25STCV02509 (Cal. Super. Ct. filed Jan. 29, 2025). The Complaint, filed in Los Angeles Superior Court by Plaintiff, provides an unprecedented look into the complex web of online tracking, data brokerage, and real-time ad auctions that powers many popular websites. Plaintiff, who regularly visited the website from 2016 through October 2024, represents potentially thousands of California residents who have had their data collected without consent.
When visitors access eonline.com, the website allegedly automatically installs two powerful tracking systems on their browsers—the Bounce Exchange Tracker operated by data broker Wunderkind and the ADNXS Tracker run by Microsoft. These trackers immediately start collecting visitors’ IP addresses, which reveal their approximate physical location, along with detailed device information like browser type, operating system, and other identifying characteristics that create a unique digital fingerprint.
Let me simplify this. Think of it like a digital license plate—once a site tags you, your activity can be traced across the web, even if you think you’re browsing anonymously. Much like a telephone number guides a call to its destination, an IP address routes data packets between devices on the internet. The traditional IPv4 format offers approximately 4.3 billion unique addresses, while the newer IPv6 system provides vastly more combinations to accommodate the growing internet.
I know I’m geeking out on the technical details here, but this is fascinating as technology advances! Public IP addresses assigned by Internet Service Providers are globally unique and can reveal a user’s approximate location, while private IP addresses are used only within local networks. This distinction is essential to address, no pun intended, because private IP addresses don’t reveal geolocation, while public ones do and are extensively used in advertising.
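For the fellow geeks, the public/private distinction described above can be checked directly with Python's standard `ipaddress` module; this small sketch also puts numbers on the IPv4 vs. IPv6 address spaces:

```python
import ipaddress

# Classify a few sample addresses as public or private.
addrs = {
    "8.8.8.8": ipaddress.ip_address("8.8.8.8"),            # public IPv4
    "192.168.1.10": ipaddress.ip_address("192.168.1.10"),  # RFC 1918 private
    "2001:db8::1": ipaddress.ip_address("2001:db8::1"),    # IPv6 documentation range
}
for text, addr in addrs.items():
    print(text, "->", "private" if addr.is_private else "public")

# IPv4 offers 2**32 (~4.3 billion) addresses; IPv6 offers 2**128.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128
```

Only the public address in that dictionary could be used to geolocate a visitor; the private ones never leave the local network.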
The trackers also collect what’s known as “Device Fingerprint Information,” which includes the user-agent string (detailing precise browser and system specifications), device capabilities, supported image formats, compression methods, and persistent identifiers like PUID, GUID, UID, and PSVID. If cookies are the old-school way of tracking you, device fingerprinting is the cutting-edge upgrade—harder to delete, more invasive, and nearly impossible to avoid. Under California’s Invasion of Privacy Act (“CIPA”) § 638.50(b), these trackers qualify as “pen registers” because they capture “routing, addressing, or signaling information” without obtaining required court approval. What’s more, the lawsuit argues that these trackers also function as “trap and trace devices” under CIPA because they don’t just track outbound signals—they monitor inbound data as well, identifying where users connect from, which could further bolster the claim that E! Entertainment is violating privacy laws by monitoring user activity without disclosure.
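At its simplest, a device fingerprint is a stable hash over attributes the browser reveals on every request. This is a deliberately minimal sketch, not how Wunderkind or Microsoft actually compute identifiers; real trackers fold in many more signals (canvas rendering, installed fonts, and so on), and the field values here are invented:

```python
import hashlib

# Minimal illustrative fingerprint: hash a handful of request attributes
# into a short stable identifier. Real trackers use far more signals.
def fingerprint(user_agent: str, accept: str, languages: str, screen: str) -> str:
    material = "|".join([user_agent, accept, languages, screen])
    return hashlib.sha256(material.encode()).hexdigest()[:16]

fp1 = fingerprint("Mozilla/5.0 (Windows NT 10.0)", "image/avif,image/webp",
                  "en-US,en", "1920x1080")
fp2 = fingerprint("Mozilla/5.0 (Windows NT 10.0)", "image/avif,image/webp",
                  "en-US,en", "1920x1080")
# Same attributes -> same fingerprint on every visit, no cookie required.
```

That stability is exactly why fingerprinting is "harder to delete" than cookies: clearing your browser state changes nothing the hash depends on.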
Wunderkind’s technology goes far beyond simple data collection. According to the Complaint, it “analyze[s] everything about visitor behavior, from purchase history to traffic sources to engagement patterns to even the moment a visitor is abandoning a site using its patented exit-intent technology.” In other words, it’s not just watching—it’s waiting for the exact moment you hesitate before clicking away to push you toward engagement. Black Mirror episode, anyone? The company maintains what it calls “the largest first-party data set out of comparable solutions on the market,” using this vast collection of information to track users across multiple devices and platforms. This means that Wunderkind isn’t just tracking user behavior on E! Online—it’s enriching Microsoft’s bidstream data with additional details, allowing advertisers to bid on pre-profiled users rather than just generic ad impressions.
The lawsuit explains how this data collection feeds into a sophisticated advertising ecosystem called “real-time bidding” (“RTB”), where users’ personal data is turned into a commodity in a split-second auction. Imagine a stock market for human attention—except you don’t get a say in who’s buying or selling access to you. At the center of this system is Microsoft’s ADNXS Tracker, which functions as a “demand-side platform” (“DSP”). Its “impression bus” system processes ad requests, applies user data, and manages the entire bidding process. When someone visits E! Online, Microsoft’s ADNXS system doesn’t just load a webpage—it launches a high-speed digital auction. First, a “Supply Side Platform” sends user data to Microsoft’s Advertising Exchange, which includes device identifiers, IP address, zip/postal code, GPS location, browsing history, and other personal information, collectively known as “bidstream data.” Microsoft’s system then overlays segment data from its server-side cookie store, accumulating information through Microsoft Advertising segment pixels and client-uploaded data files.
The Advertising Exchange then broadcasts this data to multiple DSPs, who evaluate it to determine whether to bid for their advertising clients. At this point, your data isn’t just floating around in cyberspace—it’s being assessed, categorized, and priced in milliseconds. The Complaint alleges that Microsoft’s impression bus processes and facilitates RTB transactions but also plays a broader role in Microsoft’s ad-serving infrastructure, meaning data could be stored beyond just a single ad request. The lawsuit raises a major concern: even advertisers who lose the auction still receive and retain the visitor’s data. This means that a single visit to E! Online can result in personal data being shared with countless third parties—many of whom the visitor has never even heard of.
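The auction flow above can be made concrete with a toy bid request. The shape below loosely follows the OpenRTB convention used across the industry, but every value is invented and this is not E!'s or Microsoft's actual payload; the point is the fan-out, which is why even losing bidders end up holding the data:

```python
import json

# Simplified sketch of the "bidstream data" an exchange might broadcast
# to demand-side platforms. Loosely OpenRTB-shaped; all values invented.
bid_request = {
    "id": "auction-7f3a",
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],
    "device": {
        "ua": "Mozilla/5.0 (iPhone)",
        "ip": "203.0.113.7",
        "geo": {"zip": "90210", "lat": 34.09, "lon": -118.41},
        "ifa": "device-identifier-abc",
    },
    "user": {"id": "psvid-123", "data": [{"segment": ["frequent-shopper"]}]},
}

# The exchange serializes the request once, then sends a copy to *every*
# participating DSP -- winner and losers alike receive the full payload.
payload = json.dumps(bid_request)
dsp_copies = [json.loads(payload) for _ in range(5)]  # five DSPs, five copies
```

One page view, five (or fifty) complete copies of the visitor's profile in third-party hands, all inside roughly a hundred milliseconds.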
The Federal Trade Commission (“FTC”) has raised serious concerns about this real-time bidding process. The FTC warns that RTB incentivizes websites to share as much user data as possible to get higher ad valuations, particularly location data and browsing history. The process also enables sensitive data to be transmitted across geographic borders without restriction. The FTC has previously taken enforcement action against real-time bidding companies, such as Xandr (formerly owned by AT&T and later acquired by Microsoft), highlighting the potential legal exposure of E! Entertainment’s practices.
Wunderkind, meanwhile, allegedly uses the tracking data to build highly detailed consumer profiles that follow people long after they leave E! Online. As a registered data broker in California, Wunderkind maintains vast databases of consumer information that it sells to advertisers, brands, and even other data brokers. The complaint alleges that Wunderkind’s code on E! Online captured Weiler’s browser details and transmitted this information to its servers. The lawsuit argues that by allowing this data collection without obtaining explicit consent, E! Entertainment has essentially turned its audience into a product—monetizing their personal information while keeping them in the dark.
The Complaint argues these practices violate CIPA, which prohibits certain tracking technologies without court approval. The Complaint explicitly cites recent court decisions from 2024, including Shah v. Fandom, Inc., No. 24-CV-01062-RFL, 2024 WL 4539577, at *6 (N.D. Cal. Oct. 21, 2024) and Mirmalek v. L.A. Times Commc’ns L.L.C., No. 24-cv-01797-CRB, 2024 WL 5102709, at *3-4 (N.D. Cal. Dec. 12, 2024), which found that similar trackers constituted “pen registers” due to CIPA’s “expansive language.” If the court agrees that Microsoft’s and Wunderkind’s trackers fall under this classification, E! Entertainment could face serious legal and financial consequences—including statutory damages of up to $5,000 per violation.
As digital privacy gains continued importance and online tracking faces scrutiny, this case may significantly impact how media and entertainment companies manage visitor data. While companies like E! Entertainment may argue that data-driven advertising is necessary in today’s economy, privacy advocates see lawsuits like this as long-overdue accountability for an industry that has long operated in the shadows. This case challenges E! Entertainment, and an entire industry focused on tracking, profiling, and monetizing consumers without their knowledge.
Remember, just because you can’t see tracking happening doesn’t mean it isn’t there.
As always,
Keep it legal, keep it smart, and stay ahead of the game.
TOO LATE: 7-Eleven Sued in TCPA Class Action for Allegedly Failing to Comply With Call Time Limitations–And This Is Crazy If It’s True
So the other day TCPAWorld.com reported on Circle K being caught in a massive TCPA class action due to marketing content in its opt-in messages.
Eesh.
Well now competitor convenience store 7-Eleven is caught in a TCPA class action of its own, and it also stems from low-hanging-fruit TCPA compliance issues that should never have happened (if they did).
Background–the TCPA imposes call time limitations on messaging in some contexts. Messages cannot be sent before 8 am or after 9 pm. In some states–such as Florida, where this case is brought–the restrictions are even tighter.
Now interestingly, my read of the TCPA is that it only restricts telephone solicitations to call time hours, which means calls made with consent or an EBR are not subject to those restrictions. That is probably what 7-11 is thinking, but I am not sure. However, these exemptions do not seem to apply to state statutes. So, keep that in mind.
Regardless, in the new case of Alexander Fernandez v. 7-Eleven, Plaintiff seemingly admits signing up to a 7-Eleven text club using a keyword (an always dangerous process, but that’s a topic for another day).
While 7-Eleven does not appear to be using a double opt-in process (also odd) it does seem to be sending messages at off hours. Plaintiff provides screenshots demonstrating messages received at 9:40 and 9:41 pm.
The plaintiff seeks to represent a class of all individuals who received messages out of compliance with call time restrictions based on the called party’s time zone. Will be very interesting to see what data sets exist around such a class.
The plaintiff seemingly intentionally does not allege his phone number, so I am curious whether the area code matches Florida–where Plaintiff apparently lives. This might be a “panhandle special” where someone living in Florida’s central time zone is receiving messages intended for the eastern time zone–resulting in a message sent at 8:41 pm being received at 9:41 pm.
Then again, since Florida’s state restriction is 8 pm that wouldn’t seem to matter anyway.
Really interesting one. We will keep an eye on it.
New DHS Security Requirements Impact Compliance for Employers with Workers in Six “Countries of Concern”
The U.S. Department of Homeland Security (DHS) recently published new security requirements for certain restricted transactions covered by the U.S. Department of Justice’s (DOJ) sensitive data export rules. The security requirements could create compliance issues for employers with workers in certain countries that have been identified as posing national security concerns, a list that currently includes China (including Hong Kong and Macau), Cuba, Iran, North Korea, Russia, and Venezuela.
Quick Hits
The U.S. Department of Homeland Security published new security requirements for restricted transactions to prevent access to covered data and systems by countries of concern and certain persons affiliated with such countries.
The security requirements, which include stricter cybersecurity policies, multifactor authentication (MFA), incident response plans, and robust encryption to prevent unauthorized access to sensitive data, were published in conjunction with a Justice Department rule implementing a Biden administration-era executive order on cybersecurity.
Companies with employees in high-risk countries may face significant challenges in ensuring compliance with the new requirements, particularly regarding access to essential networks needed for business operations.
On January 3, 2025, the DHS’s Cybersecurity and Infrastructure Security Agency (CISA) released finalized security requirements for restricted transactions pursuant to Executive Order (EO) 14117, “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern,” issued in February 2024 by then-President Joe Biden. The requirements were developed in conjunction with a DOJ final rule, which was published in the Federal Register on January 8, 2025, implementing EO 14117.
The CISA security requirements apply to certain restricted transactions identified by the DOJ that involve “bulk sensitive personal data or United States Government-related data” as defined by the DOJ and EO 14117 or that are of a class of transaction determined by the DOJ to pose an unacceptable risk to national security because it may enable certain “countries of concern or covered persons to access bulk sensitive personal data or United States Government-related data.”
The DOJ has identified six “countries of concern”: (1) China, including the special administrative regions of Hong Kong and Macau, (2) Cuba, (3) Iran, (4) North Korea, (5) Russia, and (6) Venezuela. A “covered person” is an individual or entity associated with a country of concern, and the term includes: (1) entities that are controlled or owned by one or more countries of concern, (2) entities that are controlled by “one or more persons” affiliated with a country of concern, (3) individuals who are “employee[s] or contractor[s]” of a country of concern or of an entity controlled by a country of concern, and (4) individuals the attorney general determines may be controlled by or act on behalf of a country of concern or other “covered person.”
Existing laws and regulations surrounding international data transfers, which are often transaction- or sector-specific, did not comprehensively address bulk data transfers to countries of concern. And, with respect to the personal data of U.S. citizens, certain common data processing principles are unequally applied given the existing patchwork of state and sectoral privacy laws. Accordingly, in an effort to fill the gap, the security requirements articulated by the DHS cover (1) organizational and system-level requirements for covered systems and (2) data-level requirements for data that is the subject of a restricted transaction.
Organizational- and System-Level Requirements
The security requirements state that entities must require that “basic organizational cybersecurity policies, practices, and requirements” are implemented with respect to any covered system (i.e., information systems used to interact with covered data in connection with restricted transactions). These steps include:
maintaining an inventory of covered system assets and ensuring the “inventory is updated on a recurring basis”;
designating an organizational level individual, such as a Chief Information Security Officer, who will be “responsible and accountable” for cybersecurity and governance, risk, and compliance (GRC) functions;
remediating any known exploited vulnerabilities (KEVs);
documenting vendor/supplier agreements for covered systems;
developing an “accurate network topology of the covered system”;
adopting policies that require approval of new hardware or software before it is deployed in a covered system; and
developing and maintaining incident response plans.
The requirements further call for entities to implement “logical and physical access controls” to protect access to data by covered persons or countries of concern, including the use of multifactor authentication (MFA) to prevent inappropriate access to data or, in the limited circumstances where MFA is not possible, stringent password requirements. Entities may wish to pay close attention to their processes for evaluating the sufficiency of their security protocols on an ongoing basis, including through the issuance and management of identities and credentials associated with authorized users, services, and hardware, and the prompt revocation of credentials of individuals who leave or change roles.
The requirements likewise mandate the ongoing collection and storage of logs relating to access to covered systems and the security of the same. Additional technical specifications include denying connections by default. Finally, the requirements direct entities to conduct internal data risk assessments and evaluate, on an ongoing basis, whether the entity’s approach to security is sufficient to prevent access to covered data.
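Two of the postures above, default denial of connections and access logging, can be sketched together. This is an illustrative toy, not a reference implementation of the CISA requirements; the rule table and names are invented:

```python
# Illustrative default-deny access check with an audit log. Anything not
# explicitly allowed is refused, and every decision is recorded.
ALLOW_RULES = {
    ("alice", "covered-system-logs"): "read",
    ("bob", "covered-system-logs"): "read",
}

audit_log = []

def access(user: str, resource: str, action: str) -> bool:
    # Default deny: only an exact match against an allow rule passes.
    allowed = ALLOW_RULES.get((user, resource)) == action
    # The requirements' logging mandate: record who asked for what,
    # and whether it was granted, for every request.
    audit_log.append({"user": user, "resource": resource,
                      "action": action, "allowed": allowed})
    return allowed
```

Revoking a departing employee's credentials then reduces to deleting their rows from the allow table, after which the default-deny posture does the rest.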
Data-Level Requirements
The CISA security requirements direct entities to implement data-level measures to “fully and effectively prevent access to covered data that is linkable, identifiable, unencrypted, or decryptable using commonly available technology” by the covered person, employee, or vendor, or the governments of countries of concern. The requirements call for:
applying data minimization and masking strategies, which must include the preparation of and adherence to written data retention and deletion policies, and processing restrictions geared toward transforming the data such that it is no longer considered to be covered data or such that it is unlikely to be linked to an American person;
utilizing compulsory encryption techniques to protect data;
applying “privacy enhancing technologies” or “differential privacy techniques” during the course of any processing activities associated with covered data; and
configuring identity and access management techniques to deny access to covered systems by covered persons or countries of concern.
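One masking strategy of the kind the data-level requirements contemplate can be sketched with a keyed hash: direct identifiers are replaced with an HMAC, so records remain joinable for analytics but are unlinkable to a person by anyone who lacks the key. This is a simplified sketch under stated assumptions, not a compliance recipe, and the record fields are invented:

```python
import hmac
import hashlib
import secrets

# Illustrative pseudonymization: replace direct identifiers with a keyed
# hash. The key must be stored outside the covered system.
MASKING_KEY = secrets.token_bytes(32)

def pseudonymize(identifier: str) -> str:
    # HMAC-SHA256: stable for a given identifier, irreversible without the key.
    return hmac.new(MASKING_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "worker@example.com", "role": "engineer", "ssn": "123-45-6789"}
masked = {
    "subject": pseudonymize(record["email"]),  # stable join key, no identity
    "role": record["role"],                    # keep only what is needed
}
# "ssn" is simply dropped -- data minimization means not retaining it at all.
```

Whether hashed or tokenized data still counts as "covered data" turns on linkability in practice, which is why the requirements pair masking with written retention and deletion policies.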
Next Steps
The CISA security requirements may have major implications for global companies with employees in countries of concern, such as China, and are likely to raise concerns about whether such employees will be able to access networks and information that are critical for them to do their jobs.
Employers with substantial operations in potentially impacted countries may want to take note, however, that while the security requirements discussed above are being implemented pursuant to a Biden administration EO, it remains to be seen whether the Trump administration will roll back the security measures as part of the administration’s ongoing deregulation focus, particularly to the extent the requirements may have the practical impact of restricting work in China. Moreover, President Trump has issued a “Regulatory Freeze Pending Review,” which could delay the April 8, 2025, effective date of the DOJ’s final rule.
In the meantime, employers may want to take steps to prepare for the CISA security requirements and DOJ regulations regarding countries of concern and covered persons. To do so, companies may want to assess the extent to which they employ covered persons in countries of concern or have entered into contracts with vendors who rely upon personnel based in such countries. If they determine this to be the case, they may wish to assess whether they have necessary privacy and security safeguards, both technical and contractual, to prevent improper access to protected personal and U.S. government data.
EU Fines EU?!: Alleged Unlawful Data-Transfer Dust-Up
Following a German case brought against the EU Commission, the EU General Court found that the Commission had made an improper transfer of personal information to the US. The plaintiff, a German citizen, alleged (among other things) that his information was sent through the EU Commission’s website to the US through an automated social media login option when he registered for a Commission event. He further alleged that this violated the government-agency equivalent of GDPR (EUDPR), as it occurred during a period in time when the Privacy Shield had been found inadequate, and the replacement program was not yet in place.
The court noted that the Commission, in making the transfer, relied only on website terms for the US data recipient. It did not enter into a contract that included standard contractual clauses or otherwise have “appropriate safeguard[s].” The court ordered the Commission to pay the individual €400.
Putting It Into Practice: This case (brought against the EU entity that oversees GDPR compliance) is a reminder of EU concerns with data transfers to the US. As we await further developments with the Data Privacy Framework under the new administration, companies may want to re-examine the mechanisms (including standard contractual clauses plus additional safeguards) they rely on for EU-US data transfers.
Netflix Content Becomes Federal Evidence: EDNY’s OneTaste Prosecution Faces Scrutiny Amid DOJ Transition
Recent developments in the Eastern District of New York’s prosecution of wellness company OneTaste in U.S. v. Cherwitz have raised novel questions about the intersection of streaming content and criminal evidence. Defense motions filed in December of 2024 and January 2025 challenge the government’s use of journal entries originally created for a Netflix documentary as key evidence in its forced labor conspiracy case. This occurs during a sea change in DOJ priorities entering a new presidential administration.
After a five-year investigation, EDNY prosecutors in April of 2023 filed a single-count charge of forced labor conspiracy against OneTaste founder Nicole Daedone and former sales leader Rachel Cherwitz. The government alleges the conspiracy unfolded over a fourteen-year span, but in a prosecutorial first did not charge a substantive crime. Over the course of the prosecution, the defendants filed repeated motions with the court asking it to order the government to specify the nature of the offense. Most recently, Celia Cohen, newly appointed defense counsel for Rachel Cherwitz, highlighted in a January 18 motion the case’s unusual nature: “The government has charged one count of a forced labor conspiracy…without providing any critical details about the force that occurred and how it specifically induced any labor.”
In recent defense filings, the prosecution has faced mounting scrutiny over the authenticity of journal entries attributed to key government witness Ayries Blanck. Prosecutors had previously moved in October of 2024 for the court to admit the journal entries as evidence at trial for their case in chief. In a December 30 motion, Jennifer Bonjean, defense counsel for Nicole Daedone, revealed that civil discovery had exposed that the journal entries presented by the government as contemporaneous accounts from 2015 were actually created and extensively edited for Netflix’s 2022 documentary “Orgasm Inc” on OneTaste.
“Through metadata and edit histories, we can watch entertainment become evidence,” Bonjean argued in her motion. Technical analysis from a court-ordered expert showed the entries underwent hundreds of revisions by multiple authors, including Netflix production staff, before being finalized in March 2023 – just days before a sealed indictment was filed against defendants Cherwitz and Daedone. The defense has argued that this Netflix content was presented to the grand jury to secure an indictment.
The government’s handling of these journal entries took a dramatic turn during a January 23 meet-and-confer session. After defense counsel challenged the authenticity of handwritten journals matching the Netflix content, prosecutors abruptly withdrew them from their case-in-chief. While maintaining the journals’ legitimacy, this retreat from evidence previously characterized as central to their case prompted new defense challenges.
“This prosecution is a house of cards,” argued defense counsel Celia Cohen and Michael Roboti of Ballard Spahr in a January 24 motion to dismiss. Cohen and Roboti, who joined Rachel Cherwitz’s defense team earlier this month, highlighted how the government’s withdrawal of the handwritten journals “exemplifies the serious problems with this prosecution.” Their motion notes that defense witnesses in a parallel civil case have exposed government witnesses as “perjurers” who “have received significant benefits from the government and from telling their ‘stories’ in the media.”
The matter came to a head during a January 24 hearing before Judge Diane Gujarati, who had previously denied prosecutors' request to grant anonymity to ten potential witnesses. When Cohen attempted to address unresolved issues regarding the journals, she was sharply rebuked by the court, which had indicated it would not address the new filing during the scheduled hearing. Gujarati stated that she did not intend to schedule any further conferences before trial, which is set for May 5, 2025.
The case’s challenges coincide with significant changes at DOJ and EDNY under the new Trump administration. EDNY Long Island Division Criminal Chief John J. Durham was sworn in as Interim U.S. Attorney for EDNY on January 21, following former U.S. Attorney Breon Peace’s January 10 resignation. Peace spearheaded the OneTaste prosecution. Durham will serve until the Senate confirms President Trump’s nominee, Nassau County District Court Judge Joseph Nocella Jr.
The timing is particularly significant given President Trump’s January 20 executive order “Ending The Weaponization of The Federal Government.” The order specifically cites the EDNY prosecution of Douglass Mackey as an example of “third-world weaponization of prosecutorial power.” This reference carries special weight as EDNY deployed similar strategies in both the Mackey and Cherwitz cases – single conspiracy charges without substantive crimes, supported by media narratives rather than traditional evidence.
As Durham takes the helm at EDNY, this case presents an early test of how the office will handle prosecutions that blend entertainment with evidence, and whether novel theories of conspiracy without specified crimes will survive increased scrutiny under new leadership. The transformation of Netflix content into federal evidence may face particular challenges as the Attorney General reviews law enforcement activities of the prior four years under the new executive order’s mandate.
The government’s position faces further scrutiny as mainstream media begins to question its narrative. A January 24 Wall Street Journal profile by veteran legal reporter Corinne Ramey presents Daedone as a complex figure whose supporters call her a “visionary,” while examining the unusual nature of prosecuting wellness education as forced labor. The piece’s headline – “She Made Orgasmic Meditation Her Life. Not Even Prison Will Stop Her” – captures both the prosecution’s gravity and Daedone’s unwavering commitment to her work despite federal charges.
1 U.S. v. Cherwitz, et al., No. 23-cr-146 (DG).
Deadline for Filing Annual Pesticide Production Reports — March 1, 2025
The March 1, 2025, deadline for all establishments, foreign and domestic, that produce pesticides, devices, or active ingredients to file their annual production reports for the 2024 reporting year is fast approaching. Pursuant to Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) Section 7(c)(1) (7 U.S.C. § 136e(c)(1)), "Any producer operating an establishment registered under [Section 7] shall inform the Administrator within 30 days after it is registered of the types and amounts of pesticides and, if applicable, active ingredients used in producing pesticides" and this information "shall be kept current and submitted to the Administrator annually as required."
Reports must be submitted on or before March 1 annually for the prior reporting year’s production and distribution. The report, filed through the submittal of the U.S. Environmental Protection Agency (EPA) Form 3540-16: Pesticide Report for Pesticide-Producing and Device-Producing Establishments, must include the name and address of the producing establishment, as well as pesticide production information, such as product registration number, product name, and amounts produced and distributed. The annual report is always required, even when no products are produced or distributed.
EPA has created an electronic reporting system for pesticide-producing establishment reports: the Section Seven Tracking System (SSTS). Users can access SSTS within EPA's Central Data Exchange (CDX) to submit annual pesticide production reports. Electronic reporting is efficient, saves time, and eliminates mailing and related logistics costs. EPA encourages all reporters to submit electronically to ensure proper submission and a timely review of the report.
Links to EPA Form 3540-16, as well as instructions on how to report and how to add and use EPA’s SSTS electronic filing system, are available below:
EPA Form 3540-16: Pesticide Report for Pesticide-Producing and Device-Producing Establishments;
Instructions for Completing EPA Form 3540-16 (available within the ZIP file); and
Electronic Reporting for Pesticide Establishments.
Further information is available on EPA’s website.
Cybersecurity in the Marine Transportation System: What You Need to Know About the Coast Guard’s Final Rule
The U.S. Coast Guard ("USCG") published a final rule on January 17, 2025, addressing Cybersecurity in the Marine Transportation System (the "Final Rule"), which seeks to minimize cybersecurity-related transportation security incidents ("TSIs") within the maritime transportation system ("MTS") by establishing requirements to enhance the detection of, response to, and recovery from cybersecurity risks. Effective July 16, 2025, the Final Rule will apply to U.S.-flagged vessels, as well as Outer Continental Shelf and onshore facilities subject to the Maritime Transportation Security Act of 2002 ("MTSA"). The USCG is also seeking comments on a potential two-to-five-year delay of implementation for U.S.-flagged vessels. Comments are due March 18, 2025.
Background
The need for enhanced cybersecurity protocols within the MTS has long been recognized. MTSA laid the groundwork for addressing various security threats in 2002 and provided the USCG with broad authority to take action and set requirements to prevent TSIs. MTSA was amended in 2018 to make clear that cybersecurity-related risks that may cause TSIs fall squarely within MTSA and USCG authority.
Over the years, the USCG, as well as the International Maritime Organization, has dedicated resources and published guidelines addressing the growing cybersecurity threats that arise as technology is integrated ever more deeply into all aspects of the MTS. The USCG expanded its efforts to address cybersecurity threats throughout the MTS in its latest rulemaking, publishing the original Notice of Proposed Rulemaking ("NPRM") on February 22, 2024. The NPRM received significant public feedback, leading to the development of the Final Rule.
Final Rule
In its Final Rule, the USCG addresses the many comments received on the NPRM and sets forth minimum cybersecurity requirements for U.S.-flagged vessels and applicable facilities.
Training. Within six months of the Final Rule’s effective date, training must be conducted on recognition and detection of cybersecurity threats and all types of cyber incidents, techniques used to circumvent cyber security measures, and reporting procedures, among others. Key personnel are required to complete more in-depth training.
Assessment and Plans. The Final Rule requires owners and operators of U.S.-flagged vessels and applicable facilities to conduct a Cybersecurity Assessment, develop a Cybersecurity Plan and Cyber Incident Response Plan, and appoint a Cybersecurity Officer that meets specified requirements within 24 months of the effective date. There are a host of requirements for the Cybersecurity Plan, including, among others: provisions for account security, device protection, data safeguarding, training, drills and exercises, risk management practices, strategies for mitigating supply chain risks, penetration testing, resilience planning, network segmentation, reporting protocols, and physical security measures. Additionally, the Cyber Incident Response Plan must provide instructions for responding to cyber incidents and delineate the key roles, responsibilities, and decision-making authorities among staff.
Plan Approval and Audits. The Final Rule requires Cybersecurity Plans be submitted to the USCG for review and approval within 24 months of the effective date of the Final Rule, unless a waiver or equivalence is granted. The Rule also gives the USCG the power to perform inspections and audits to verify the implementation of the Cybersecurity Plan.
Reporting. The Final Rule requires reporting of "reportable cyber incidents"[1] to the National Response Center without delay. The reporting requirement takes effect immediately on July 16, 2025, the Final Rule's effective date. Further, the Final Rule revises the definition of "hazardous condition" to expressly include cyber incidents.
Potential Waivers. The Final Rule allows for limited waivers or equivalence determinations. A waiver may be granted if the owner or operator demonstrates that the cybersecurity requirements are unnecessary given the specific nature or operating conditions. An equivalence determination may be granted if the owner or operator demonstrates that the U.S.-flagged vessel or facility complies with international conventions or standards that provide an equivalent level of security. Each waiver or equivalence request will be evaluated on a case-by-case basis.
Potential Delay in Implementation. In response to a number of comments concerning the ability of U.S.-flagged vessels to meet the implementation schedule, the Final Rule seeks comments on whether a delay of an additional two to five years is appropriate.
Conclusion
As automation and digitalization continue to advance within the maritime sector, it is imperative to develop cybersecurity strategies tailored to the specific management and operational needs of each company, facility, and vessel. Owners and operators of U.S.-flagged vessels and MTSA facilities are advised to review the new regulations closely and begin preparations for the new cybersecurity requirements at the earliest opportunity. Stakeholders are also encouraged to provide comments before March 18, 2025, addressing the potential two-to-five-year delay in implementation for U.S.-flagged vessels.
[1] A reportable cyber incident is defined as an incident that leads to, or, if still under investigation, can reasonably lead to any of the following: (1) substantial loss of confidentiality, integrity, or availability of a covered information system, network, or operational technology system; (2) disruption or significant adverse impact on the reporting entity’s ability to engage in business operations or deliver goods or services, including those that have a potential for significant impact on public health or safety or may cause serious injury or death; (3) disclosure or unauthorized access directly or indirectly of non-public personal information of a significant number of individuals; (4) other potential operational disruption to critical infrastructure systems or assets; or (5) incidents that otherwise may lead to a TSI as defined in 33 C.F.R. 101.105.
Human Trafficking Monitoring for Telehealth Providers
Overview: Telehealth providers are uniquely positioned to monitor for human trafficking when interacting with patients. Survivor records indicate that health services are among the most common points of access to help trafficked persons, and nearly 70% of human trafficking survivors report having had access to health services at some point during their exploitation. While there’s limited data regarding trafficked persons’ use of telehealth services, empirical evidence demonstrates that a greater proportion of trafficked persons completed telehealth appointments during the early period of the COVID-19 pandemic than pre-pandemic. To enable telehealth providers to assist trafficked patients, this article discusses the legal landscape surrounding human trafficking and lays out best practices for telehealth providers.
Background: Telehealth providers are subject to a patchwork of legal requirements aimed at reducing human trafficking. If the patient is under the age of 18 or is disabled, many states require telehealth providers to report instances in which they know or reasonably believe the patient has experienced or is experiencing abuse, mistreatment, or neglect. Some states, such as Florida and New Jersey, also require telehealth providers to partake in anti-trafficking education.
Online platforms that offer telehealth services are also subject to federal legislation regarding sex trafficking monitoring. In 2018, US Congress passed the Allow States and Victims to Fight Online Trafficking Act of 2017 (FOSTA). The law was enacted primarily in response to unsuccessful litigation against Backpage.com, a website accused of permitting and even assisting users in posting advertisements for sex trafficking. Before FOSTA’s enactment, Section 230 of the Communications Decency Act essentially shielded online platforms from liability for such conduct. FOSTA, however, effectively created an exception to Section 230 by establishing criminal penalties for those who promote or facilitate sex trafficking through their control of online platforms. These penalties, generally limited to a fine, imprisonment of up to 10 years, or both, may be heightened for aggravated violations, which are violations involving reckless disregard of sex trafficking or the promotion or facilitation of prostitution of five or more people. State attorneys general and, in cases of aggravated violations, injured persons also may bring civil actions against those who control online platforms in violation of the law.
Since FOSTA’s inception, the US Department of Justice (DOJ) has brought at least one criminal charge under the law. In 2021, after being charged by DOJ, the owner of the online platform CityXGuide.com pleaded guilty to one count of promotion of prostitution and reckless disregard of sex trafficking, a violation of FOSTA’s aggravated violations provision. According to DOJ officials, more charges have not been brought under FOSTA because the law is relatively new and federal prosecutors have had success prosecuting those who control online platforms by bringing racketeering and money laundering charges. Nonetheless, it is possible that prosecutors will pursue FOSTA violations more regularly during the Trump administration, particularly because US President Donald Trump signed it into law during his first term in office, calling it “crucial legislation.”
Best Practices for Telehealth Providers
Telehealth providers and online platforms that offer telehealth services should consider adhering to the following best practices when monitoring for human trafficking:
Complete Anti-Trafficking Training. Telehealth providers should complete an anti-trafficking training or educational program on a regular basis, regardless of whether they are legally required to do so. One such program is the US Department of Health and Human Services’ Stop, Observe, Ask, and Respond (SOAR) to Health and Wellness Training program. Telehealth providers may attend SOAR trainings in person or online and, depending on the program, may receive continuing education credit for their participation.
Implement a Referral Network. Prior to monitoring patients, telehealth providers should prepare a comprehensive referral list with detailed procedures for assisting identified individuals who have been trafficked or are vulnerable to trafficking. Referral lists should help patients access services that meet various immediate, intermediate, and long-term needs. Referral lists also should include information about how to connect with both national and local anti-trafficking resources.
Be Aware of Indicators of Human Trafficking for Adults. The National Human Trafficking Training and Technical Assistance Center has developed indicators of adult human trafficking. Indicators that may arise during a telehealth visit include instances where:
The patient is not in control of personal identification or does not have valid identification as part of the visit
The patient does not know where they live (or their geolocation does not match their stated location)
The patient’s story does not make sense or seems scripted
The patient seems afraid to answer questions
The patient appears to be looking at an unidentified person offscreen after speaking
The patient’s video background appears to be an odd living or work space (may include tinted windows, security cameras, barbed wire, or people sleeping or living at worksite)
The patient exhibits or indicates signs of physical abuse, drug or alcohol misuse, or malnourishment.
Be Aware of Indicators of Human Trafficking for Children. Indicators of human trafficking for children often differ from those for adults. The National Center for Missing & Exploited Children (NCMEC) has issued a list of risk factors useful for identifying possible indicators of child sex trafficking. Although NCMEC cautions that no single indicator can accurately identify a child as a sex trafficking victim, the presence of multiple factors increases the likelihood of identifying victims. Indicators that may arise during a telehealth visit include the following:
The child avoids answering questions or lets others speak for them
The child lies about their age and identity or otherwise responds to the provider in a manner that doesn’t align with their telehealth profile or account information
The child appears to be looking at an unidentified person offscreen after speaking
The child uses prostitution-related terms, such as “daddy,” “the life,” and “the game”
The child has no identification (or their identification is held by another person)
The child displays evidence of travel in their video background (living out of suitcases, at motels, or in a car)
The child references traveling to cities or states that do not match their geolocation
The child has numerous unaddressed medical issues.
UK ICO Sets Out Proposals to Promote Sustainable Economic Growth
On January 24, 2025, the UK Information Commissioner’s Office (“ICO”) published the letter it sent to the UK Prime Minister, Chancellor of the Exchequer, and Secretary of State for Business and Trade, in response to their request for proposals to boost business confidence, improve the investment climate, and foster sustainable economic growth in the UK. In the letter, the ICO sets out its proposals for doing so, including:
New rules for AI: The ICO recognizes that regulatory uncertainty can be a barrier to innovation, so it proposes a single set of rules for those developing or deploying AI products, supporting the UK government in legislating for such rules.
New guidance on other emerging technologies: The ICO will support businesses and “innovators” by publishing innovation focused guidance in areas such as neurotech, cloud computing and Internet of Things devices.
Reducing costs for small and medium-sized companies (“SMEs”): Focusing on the administrative burden that SMEs face when complying with a complex regulatory framework, the ICO commits to simplifying existing requirements and easing the burden of compliance, including by launching a Data Essentials training and assurance programme for SMEs during 2025/26.
Sandboxes: The ICO will expand on its previous sandbox services by launching an “experimentation program” where companies will get a “time-limited derogation” from specific legal requirements, under the strict control of the ICO, to test new ideas. The ICO would support legislation from UK government in this area.
Privacy-preserving digital advertising: The ICO recognizes the financial and societal benefits provided by the digital advertising economy but notes there are aspects of the regulatory landscape that businesses find difficult to navigate. The ICO wishes to help reduce the burdens for both businesses and customers related to digital advertising. To do so, the ICO, amongst other things, referred to its approach to regulating digital advertising as detailed in the 2025 Online Tracking Strategy (as discussed here).
International transfers: Recognizing the importance of international transfers to the UK economy, the ICO will, amongst other things, publish new guidance to enable quicker and easier transfers of data, and work through international fora, such as G7, to build international agreement on increasing data transfer mechanisms.
Promote information sharing between regulators: The ICO acknowledges that engaging with multiple regulators can be resource intensive, especially for SMEs. The ICO will work with the Digital Regulation Cooperation Forum to simplify this process, and would encourage legislation to simplify information sharing between regulators.
Read the letter from the ICO.
LOW INTEGRITY?: Integrity Marketing Stuck in TCPA Class Action After Declaration Fails to Move the Court
Always fascinating the arguments TCPA defendants make.
Consider Integrity Marketing's effort in Newman v. Integrity, 2025 WL 358933 (N.D. Ill. Jan. 31, 2025).
There Integrity argued it could not be liable for calls violating the TCPA made by its network of lead generators because it was just a middleman that was not actually licensed to sell insurance itself.
Huh?
Apparently Integrity believed that if it wasn’t selling anything itself it couldn’t be liable for telephone solicitations it was brokering. (This is similar to the argument Quote Wizard made and, yeah, it didn’t work out so well for them.)
Integrity also argued that it couldn’t be sued directly for calls made by third-parties unless the Plaintiff pierced the corporate veil.
Double huh?
That’s not even close to accurate. Vicarious liability principles apply when dealing with fault for third-party conduct, not corporate formality law. That argument doesn’t even make logical sense.
Regardless, the Court had little trouble determining Integrity was potentially liable for the calls.
On the corporate veil argument the Court quickly rejected the (totally wrong) alter ego approach and properly applied agency law. The Court determined the Complaint's allegations that Integrity hired a mob of affiliates, insurance agents, and (sub)vendors to telemarket insurance and other products to consumers were sufficient to state a claim since Integrity allegedly "controlled" these entities. In the Court's eyes the ability to "dictate which telemarketing or lead-generating vendors may be used" was particularly damning.
On the "we don't actually sell insurance" argument the Court rejected the Defendant's declaration – you can't submit evidence on a 12(b)(6) motion, folks – but found that even if the evidence were accepted Integrity would still lose:
However, even if the Court were to take the declaration into account, it does not refute the inferences drawn that Defendant has a network of agents encouraging the purchase of insurance, that Defendant controls or facilitates the telemarketing calls Plaintiff received which encouraged Plaintiff to purchase insurance, or that the telemarketing calls were done on behalf of Defendant. In other words, even if it is not selling insurance directly, the complaint plausibly alleges Defendant is using its network of subsidiaries and agents to engage in telemarketing and to facilitate and control the purchase of insurance using impermissible means.
So, yeah.
Motion to dismiss denied. Integrity stuck in the case.