Location Data as Health Data? Precedent-Setting Lawsuit Brought Against Retailer Under Washington My Health My Data Act
An online retailer was recently hit with the first class action under Washington’s consumer health data privacy law, alleging that it used advertising software embedded in certain third-party mobile phone apps to unlawfully harvest the locations and online marketing identifiers of tens of millions of users. The case highlights how seemingly innocuous location data can become sensitive health information through inference and aggregation, potentially setting the stage for a flood of copycat lawsuits.
Quick Hits
An online retailer was hit with the first class action under Washington State’s My Health My Data Act (MHMDA), claiming that the retailer unlawfully harvested sensitive location data from users through advertising software integrated into third-party mobile apps.
The lawsuit alleges that the retailer did not obtain proper consent or provide adequate disclosure regarding the collection and sharing of consumer health data, a term defined exceptionally broadly as personal information that is or could be linked to a specific individual and that can reveal details about an individual’s past, present, or future health status.
This case marks the first significant test of the MHMDA and could provide a roadmap for litigants in Washington and other states.
On February 10, 2025, Washington resident Cassaundra Maxwell filed a class action lawsuit in the U.S. District Court for the Western District of Washington alleging violations of Washington’s MHMDA. The suit alleged that the retailer’s advertising software, known as a “software development kit,” or SDK, is licensed to and “runs in the background of thousands of mobile apps” and “covertly withdraws sensitive location data” that cannot be completely anonymized.
“Mobile users may agree to share their location while using certain apps, such as a weather app, where location data provides the user with the prompt and accurate information they’re seeking,” the suit alleges. “But that user has no idea that [the online retailer] will have equal access to sensitive geolocation data that it can then exfiltrate and monetize.”
The suit brings claims for violations of federal wiretap laws, federal and state consumer protection laws, and the MHMDA, making it a likely test case for consumer privacy claims under the MHMDA. The case evokes parallels to the surge of claims over the past several years under the California Invasion of Privacy Act (CIPA), a criminal wiretap statute. Both involve allegations of unauthorized data collection and sharing facilitated by digital tracking technologies. These technologies, including cookies, pixels, and beacons, are often embedded in websites, apps, or marketing emails, operating in ways that consumers may not fully understand or consent to.
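For readers less familiar with the mechanics, the sketch below shows, in a hypothetical and simplified form, how a tracking “pixel” typically works: the page or email requests a tiny image from a third party, and that request carries marketing identifiers and necessarily exposes the visitor’s IP address to the third party’s server. The host name and parameter names are illustrative placeholders, not any particular vendor’s API.

```typescript
// Hypothetical, simplified tracking pixel. The endpoint and parameter names
// are illustrative placeholders, not a real vendor's API.
function firePixel(eventName: string, visitorId: string): void {
  const params = new URLSearchParams({
    event: eventName,   // e.g., "page_view"
    vid: visitorId,     // a marketing identifier, often read from a cookie
    url: location.href, // the page the visitor is viewing
  });
  // Fetching this 1x1 image is what transmits the data: the third party's
  // server logs the query string plus the requester's IP address and headers.
  const img = new Image(1, 1);
  img.src = `https://tracker.example.invalid/p.gif?${params.toString()}`;
}

// Example: fired on page load by an embedded tag, before the visitor does anything.
firePixel("page_view", "abc-123");
```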
As we previously covered, hundreds if not thousands of lawsuits relating to similar technologies were brought pursuant to CIPA after a California district court denied a motion to dismiss such claims in Greenley v. Kochava, Inc. Given the parallels and the onslaught of litigation that CIPA entailed, the MHMDA case may set important precedents for how consumer health data privacy is interpreted and enforced in the digital age, similar to the impact CIPA litigation has had on broader privacy practices. Like CIPA, the MHMDA also allows for the recovery of attorneys’ fees, but unlike CIPA (which provides for statutory damages even without proof of actual harm), a plaintiff must prove an “injury” to his or her business or property to establish an MHMDA claim.
Consumer Health Data
As many companies working in the retail space likely know, the MHMDA imposes a host of new requirements for companies doing business in Washington or targeting Washington consumers with respect to the collection of “consumer health data.” The law broadly defines “consumer health data” as any personal information that can be linked or reasonably associated with an individual’s past, present, or future physical or mental health status. The MHMDA enumerates an entire list of data points that could constitute “health status,” including information that would not traditionally be thought of as indicative of health, such as:
biometric data;
precise location information that could suggest health-related activities (such as an attempt to obtain health services or supplies);
information about bodily functions, vital signs, and symptoms; and
mere measurements related to any one of the thirteen enumerated data points.
Critically, even inferences can become health status information in the eyes of the MHMDA, including inferences derived from nonhealth data if they can be associated with or used to identify a consumer’s health data.
For instance, Maxwell’s suit alleges the retailer collected her biometric data and precise location information that could reasonably indicate an attempt to acquire or receive health services or supplies. However, the complaint is light on factual support, alleging only that the data harvesting conducted via the retailer’s SDK could reveal (presumably via inference in most cases) “intimate aspects of an individual’s health,” including:
visits to cancer clinics;
“health behaviors” like visiting the gym or fast food habits;
“social determinants of health,” such as where an individual lives or works; and
“social networks that may influence health, such as close contact during the COVID-19 pandemic.”
Notice and Consent
The suit further alleges that the retailer failed to provide appropriate notice of the collection and use of the putative class members’ consumer health data and did not obtain consent before collecting and sharing the data. These allegations serve as a timely reminder of the breadth and depth of the MHMDA’s notice and consent requirements.
Unlike most other state-level privacy laws, which allow different state-mandated disclosures to be combined in a single notice, the Washington attorney general has indicated in (nonbinding) guidance that the MHMDA “Consumer Health Privacy Policy must be a separate and distinct link on the regulated entity’s homepage and may not contain additional information not required under the My Health My Data Act.” Said differently, businesses in Washington cannot rely upon their standard privacy policies, or even their typical geolocation consent pop-up flows with respect to consumer health data.
Additionally, at a high level, the MHMDA contains unusually stringent consent requirements, demanding that the business obtain “freely given, specific, informed, opt-in, voluntary, and unambiguous” consent before consumer health data is collected or shared for any purpose other than the provision of the specific product or service the consumer has requested from the business, or before it is collected, used, or shared for any purpose not identified in the business’s Consumer Health Privacy Policy.
Next Steps
The Maxwell lawsuit is significant as it is the first to be filed under Washington’s MHMDA, a law that has already spawned a copycat law in Nevada, a lookalike amendment to the Connecticut Data Privacy Act, and a whole host of similar bills in state legislatures across the country—most recently in New York, which has its own version of the MHMDA awaiting presentation to the governor for signature. The suit appears to take an expansive interpretation that could treat essentially all location data as consumer health data, inasmuch as conclusions about an individual’s health can be drawn from that data. And, while the MHMDA does use expansive language, the suit appears likely to answer lingering questions about the extent of what should be considered “consumer health data” subject to the rigorous requirements of the MHMDA.
As this suit progresses, companies targeting Washington consumers or otherwise doing any business in Washington may want to review their use of SDKs or similar technologies, geolocation collection, and any other collection or usage of consumer data with an eye toward the possibility that the data could be treated as consumer health data. Also, their processors may wish to do the same (remember, the Washington attorney general has made it clear that out-of-state entities acting as processors for entities subject to MHMDA must also comply). Depending on what they find, those companies may wish to reevaluate the notice-and-consent processes applicable to the location data they collect, as well as their handling of consumer rights applicable to the same.
FTC Chairman Ferguson Appoints Deputy Directors for Bureaus of Competition and Consumer Protection
On February 18, 2025, the Federal Trade Commission announced that Chairman Andrew N. Ferguson appointed David Shaw as Principal Deputy Director and Kelse Moen as Deputy Director of the agency’s Bureau of Competition, and Douglas C. Geho as Deputy Director of the Bureau of Consumer Protection.
Shaw is an experienced antitrust lawyer with expertise in high-stakes litigation and contentious merger review. During the first Trump Administration, Shaw served in the Department of Justice’s Antitrust Division in a variety of roles, from the front lines as a trial attorney to the front office as acting chief of staff. As a trial attorney, Shaw served on multiple trial teams, including the first litigated vertical merger challenge in forty years.
While serving in DOJ’s front office, he held a leadership role in the Big Tech investigations and successfully coordinated a bipartisan coalition of state attorneys general joining the DOJ complaint in the Google search monopolization case.
In addition to his government service, Shaw was a partner in the antitrust practice of a large international law firm.
Moen is an experienced antitrust attorney, with a career in both government service and private practice. Most recently, he served as senior counsel to the U.S. Senate Judiciary Committee for Senator Lindsey Graham, where he focused on antitrust, technology, and intellectual property issues, a position that he held until his appointment to the FTC.
Before joining the Judiciary Committee staff, Moen spent nearly a decade practicing antitrust law at major international law firms, representing businesses and individuals in high-stakes and high-profile government investigations, class actions, civil and criminal litigation, and merger reviews. He clerked for Judge Robert Mariani of the U.S. District Court for the Middle District of Pennsylvania.
Geho possesses extensive enforcement, regulatory, and litigation experience. During the first Trump Administration, Geho served at the Department of Labor as Counsel and Policy Advisor, and then Counselor to the Assistant Secretary for Policy, where he advanced efforts relating to regulatory and enforcement reform, worker safety and training, and additional Administration priorities. He then served as a lead attorney for the House Judiciary Committee and two of its subcommittees. Geho also managed investigations for the Senate Committee on Homeland Security and Governmental Affairs.
Most recently, Geho served as an Attorney Advisor to Commissioner Melissa Holyoak handling consumer protection matters for her office. He clerked for Judge Alice M. Batchelder on the U.S. Court of Appeals for the Sixth Circuit.
READ ALL ABOUT IT: Reuters Faces Privacy Lawsuit But The Court Finds No Story To Tell
Greetings CIPAWorld!
Buckle up because this one’s a big deal. If you’ve been keeping an eye on data privacy litigation, you know courts have been drawing a hard line when it comes to proving harm. The Southern District of New York just handed Reuters a win in Zhizhi Xu v. Reuters News & Media Inc., No. 24 Civ. 2466 (PAE), 2025 U.S. Dist. LEXIS 26013 (S.D.N.Y. Feb. 13, 2025), dismissing a lawsuit accusing the media giant of unlawfully collecting users’ IP addresses through web trackers. The case centered on alleged violations of the California Invasion of Privacy Act (“CIPA”) and ultimately fell apart for lack of standing. The Court ruled that Plaintiff failed to show any concrete harm—essential for a lawsuit to survive in federal court. If there’s one thing federal courts don’t have time for, it’s speculative injury.
So, what’s the news flash? Plaintiff, a California resident, filed a putative class action against Reuters, alleging that the company embedded web trackers—Sharethrough, Oinnitag, and TripleLift—on its news website. According to Plaintiff, these trackers automatically install on users’ browsers, collect their IP addresses, and transmit that information to third parties for advertising and analytics purposes. Think of it like an invisible footprint—Plaintiff asserted that Reuters tracked him without his consent, leaving behind digital breadcrumbs that were quietly collected and shared. Plaintiff claimed this amounted to a violation of CIPA Section 638.51(a), which prohibits the installation of a “pen register or trap and trace device” without a court order. In response, Reuters quickly moved to dismiss the case, arguing that Plaintiff lacked standing because he had not suffered any tangible injury. The company maintained that collecting an IP address alone—without any evidence of targeted ads or misuse—did not meet the threshold for a privacy violation. In other words, if a tree falls in the digital forest and no one hears it, does it really make a sound? Well, it depends. Like any good law school exam answer, context is everything. Are we talking about mere data collection, or has someone actually suffered harm? Courts don’t deal in hypotheticals—they want to see real, measurable impact. Without proof that Reuters’ data collection led to some kind of concrete harm, the Court wasn’t willing to entertain a privacy violation claim based on mere technicalities.
As such, Judge Paul A. Engelmayer sided with Reuters and dismissed the lawsuit under Rule 12(b)(1) for lack of Article III standing. The ruling echoes a growing trend in data privacy cases: collecting an IP address, without more, doesn’t trigger a legally recognizable harm. In TransUnion LLC v. Ramirez, 594 U.S. 413, 424 (2021), the Supreme Court reaffirmed that a plaintiff must demonstrate a concrete injury to establish standing in federal court. Here, the Court emphasized that IP addresses are not inherently sensitive or private information; an IP address functions primarily as routing data rather than revealing the contents of a user’s communication. The Court relied on Heeger v. Facebook, Inc., 509 F. Supp. 3d 1182, 1188 (N.D. Cal. 2020), which held that collecting IP addresses alone does not constitute a privacy invasion. Plaintiff did not allege that he received targeted ads, suffered financial harm, or had his identity compromised as a result of Reuters’ data collection.
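To make the Court’s “routing data” point concrete, here is a minimal sketch (Node.js/TypeScript, purely illustrative) showing that any server a browser connects to necessarily receives the client’s IP address just to respond at all; no tracker is required to “collect” it.

```typescript
import * as http from "http";

// Minimal HTTP server: the client's IP arrives with every request as part of
// the underlying TCP connection, which is why courts often treat it as
// addressing/routing information rather than message content.
const server = http.createServer((req, res) => {
  const clientIp = req.socket.remoteAddress; // routing data, visible to every server
  console.log(`Request for ${req.url} from ${clientIp}`);
  res.end("ok");
});

server.listen(8080);
```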
Conversely, the Court noted cases like McClung v. AddShopper, Inc., No. 23-cv-01996-VC, 2024 WL 189006, at *1 (N.D. Cal. Jan. 17, 2024), where the defendant’s data collection led to unwanted marketing. That’s the key difference—Plaintiff’s data was allegedly collected, but nothing really happened as a result. Compare that to cases where companies have blasted users with personalized ads based on the data they grabbed. The Court found no historical or legal precedent equating collecting an IP address to a recognized harm like defamation, intrusion upon seclusion, or public disclosure of private facts, noting Liau v. Weed Inc., No. 23 Civ. 1177 (S.D.N.Y. Feb. 22, 2024), which found that an IP address does not constitute “personal information” for privacy claims.
This ruling isn’t just a one-off—it’s part of a larger judicial pattern I’m seeing more and more. Courts are sending a message: statutory violations alone won’t cut it in federal court. This aligns with decisions like Lightoller v. JetBlue Airways Corp., No. 23-cv-00361-H-KSC, 2023 WL 3963823, at *3 (S.D. Cal. June 12, 2023), where the Court held that a mere statutory violation under CIPA does not establish standing without an actual, concrete harm. Plaintiff’s attempt to claim a privacy right over his IP address fell flat, as the Court reiterated that voluntarily conveyed addressing information does not trigger constitutional standing concerns. If plaintiffs want to bring CIPA or similar claims in federal court, they must show tangible harm—like unwanted targeted ads, identity theft, or direct financial consequences.
Law school lecture 101: Federal standing isn’t just some procedural hurdle—it’s the gatekeeper to the courtroom, and judges are making it clear that not all claims get past the front door. Just because a statute grants a right doesn’t mean plaintiffs automatically have standing in federal court. That’s the real kicker here. Courts are increasingly skeptical of claims that hinge on technical violations without real-world consequences. If the only harm is theoretical, don’t expect a federal judge to bite. This ruling doubles down on that message: if you want your case to survive, show the court some real, measurable damage. Otherwise, your complaint might as well be a hypothetical from law school.
What is more, this case aligns with other recent dismissals of privacy lawsuits that fail to show real harm. There’s a growing judicial skepticism of privacy claims that rest on bare statutory violations. Courts are signaling that mere technical violations of privacy statutes won’t cut it—plaintiffs must demonstrate how they were harmed. And this makes sense. Privacy is a big deal, but without actual damage, courts don’t want to police every instance of data collection. It’s the legal equivalent of “no harm, no foul.”
So, where do we go from here? The battle over what qualifies as ‘concrete injury’ in data privacy cases isn’t going away anytime soon. Expect more lawsuits, more motions to dismiss, and more courts refining the boundaries of what actually constitutes harm in data privacy.
As always,
Keep it legal, keep it smart, and stay ahead of the game.
Talk soon!
Congress Advances KOSMA Bill Targeting Social Media Use by Minors
Varnum Viewpoints:
KOSMA Restrictions: The Kids Off Social Media Act (KOSMA) aims to ban social media for kids under 13 and limit targeted ads for users under 17.
Bipartisan Support & Opposition: While KOSMA has bipartisan backing, critics argue it could infringe on privacy and First Amendment rights.
Business Impact: KOSMA could affect companies targeting minors, requiring compliance with new privacy regulations alongside existing laws like COPPA.
While COPPA 2.0 and KOSA are discussed more frequently when it comes to protecting the privacy of minors online, the U.S. Senate is advancing new legislation aimed at regulating social media use by those 17 and under. In early February, the Senate Committee on Commerce, Science, and Transportation voted to advance the Kids Off Social Media Act (KOSMA), bringing it closer to a full Senate vote.
KOSMA Restrictions
KOSMA would prohibit children under 13 from accessing social media. Additionally, social media companies would be prohibited from leveraging algorithms to promote targeted advertising or personalized content to users under 17. Further, schools receiving federal funding would be required to limit the use of social media on their networks. The bill would also grant enforcement authority to the Federal Trade Commission and state attorneys general.
Bipartisan Support & Opposition
KOSMA has received bipartisan support, with advocates such as Senator Brian Schatz (D-HI), who introduced the bill in January, citing the growing mental health crisis among minors due to social media use. Supporters argue that while existing laws like COPPA protect children’s data, they do not adequately address the considerations of social media since they predate the platforms. However, much like similar state laws that have come before it, KOSMA has drawn significant opposition as well. Opponents argue that this type of regulation could erode privacy and impose unconstitutional restrictions on young people’s ability to engage online. Instituting a ban, as opposed to mandating appropriate safeguards, opponents argue, infringes on First Amendment rights.
Business Impact
Although KOSMA only applies to “social media platforms,” the definition of this term could be interpreted broadly, potentially bringing many companies that publish user-generated content within the scope of KOSMA’s restrictions. KOSMA identifies specific types of companies that would be exempt from the definition of social media platforms, such as teleconferencing platforms or news outlets. If KOSMA were to go into effect, companies across the country that knowingly collect data from minors or target them with personalized content or advertising would have an additional layer of regulatory consideration when assessing their privacy practices pertaining to the processing of data related to minors—on top of existing federal and state laws.
Privacy Tip #432 – DOGE Sued for Unauthorized Access to Our Personal Information
The Department of Government Efficiency’s (DOGE) staggering, unfettered access to all Americans’ personal information is highly concerning. DOGE employees’ access includes databases at the Office of Personnel Management, the Department of Education, the Department of Health and Human Services, and the U.S. Treasury.
If you want more information about the DOGE employees who have access to this highly sensitive data, Wired and KrebsOnSecurity have provided fascinating but disturbing accounts.
Meanwhile, New York and other states have filed suit against DOGE, alleging that the unfettered access to the federal databases is a privacy violation. On February 14, 2025, a New York federal judge found “good cause to extend a temporary restraining order” stopping DOGE employees from accessing U.S. Treasury Department databases. However, the next day, another federal judge in Washington, D.C., denied a request to stop DOGE from accessing the databases of the Department of Labor, the Department of Health and Human Services, and the Consumer Financial Protection Bureau. That means that DOGE employees now have access to the sensitive health and claims information of Medicare recipients, as well as the identities of individuals who have made workplace health and safety complaints. NBC News has reported that “the Labor Department authorized DOGE employees to use software to remotely transfer large data sets.”
Currently, 11 lawsuits have been filed against DOGE over access to sensitive information in federal databases, alleging that the access violates privacy laws. The databases at issue include student loan applications at the Department of Education, taxpayer information at the Department of the Treasury, the personnel records of all federal employees held by the Office of Personnel Management, and records at the Department of Labor, the Social Security Administration, FEMA, and USAID.
According to a plaintiff, the potential to misuse Americans’ personally identifiable information “is serious and irrevocable…. The risks are staggering: identity theft, fraud, and political targeting. Once your data is exposed, it’s virtually impossible to undo the damage.” We will be closely watching the progress of these suits and their impact on the protection of our personal information.
Texas AG Investigates DeepSeek + List of Banned Countries Expands
Texas Attorney General Ken Paxton announced on February 14, 2025, that his office has opened an investigation into DeepSeek’s privacy practices. DeepSeek, an artificial intelligence company with ties to the People’s Republic of China, has been banned on state-owned devices in Texas, New York, and Virginia. The Pentagon, NASA, and the U.S. Navy have also prohibited employees from using DeepSeek.
According to Paxton’s press release, he has notified DeepSeek “that its platform violates the Texas Data Privacy and Security Act.” He also sent civil investigative demands to tech companies to obtain information about their analysis of the application and any documentation DeepSeek forwarded to them before the application was offered to consumers.
DeepSeek has been banned in Italy, South Korea, Australia, Taiwan, and India.
Is Your Business Trapped? The Rise of “Trap and Trace” Litigation
Almost every business has a website; every website should have a privacy policy, terms of use, and, in some cases, a consumer privacy rights notice—if certain state consumer privacy rights laws apply to your business, such as the California Consumer Privacy Act as amended by the California Privacy Rights Act (collectively CCPA). What about a cookie policy? Or a cookie consent banner? Or a cookie preferences pop-up? If you haven’t looked at what types of ad tech your website uses—i.e., cookies, pixel tags, device IDs, and browser fingerprinting technologies that collect data about user behavior across multiple devices and platforms, which are essential for targeted advertising online—now is the time.
“Trap and trace” litigation and private demands for damages related to online tracking have risen significantly. “Trap and trace” litigation is related to the ad tech used on websites involving online trackers that plaintiffs’ attorneys liken to “pen registers” under state wiretap laws. These technologies allegedly collect website users’ device information and activities without their consent, which plaintiffs’ attorneys argue constitutes unauthorized interception of electronic communications under various wiretap laws. Here are some key considerations to assess your company’s website and ad tech:
Unauthorized Interception: the use of third-party trackers in ad tech is being construed as an intentional interception of electronic communications, similar to how pen registers and trap and trace devices operate by capturing dialing, routing, addressing, or signaling information.
Legal Risks: the use of such technologies without clear consent or transparency can lead to legal and reputational risks for your business, not to mention demands from plaintiffs’ attorneys seeking quick settlement in this unsettled area of the law, as well as class actions seeking millions of dollars in damages.
State Wiretap Laws: state wiretap laws, such as California’s Invasion of Privacy Act and Massachusetts’s Wiretap Act, have been adapted to address online tracking methods. These laws prohibit unauthorized interception of electronic communications, and plaintiffs’ attorneys are alleging that using online trackers may violate these laws.
Privacy Rights: the use of certain ad tech may also constitute a privacy rights violation under state consumer privacy rights laws, like the CCPA.
Impossibility of Obtaining Prior Consent: most ad tech is set up so that website users’ data and activity are tracked instantaneously upon visiting the website, which prevents the business from obtaining prior consent (i.e., acceptance of website cookies) before the tracking begins. Knowing how to program your website’s ad tech properly (see the sketch below) is vital in steering clear of these claims and lawsuits.
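By way of illustration only, here is a minimal sketch of consent-gated tag loading, in which no third-party tracker is injected until the visitor affirmatively opts in. The cookie name, consent values, and vendor URL are assumptions made for this example, not a reference to any particular consent-management product, and real implementations will vary.

```typescript
// Hypothetical consent gate: nothing fires on initial page load.
type ConsentState = "granted" | "denied" | "unknown";

function readConsent(): ConsentState {
  // The cookie name "ad_consent" is an assumption for this sketch.
  const match = document.cookie.match(/(?:^|;\s*)ad_consent=(granted|denied)/);
  return match ? (match[1] as ConsentState) : "unknown";
}

function loadTracker(src: string): void {
  // Inject the vendor tag only after the visitor has opted in.
  const script = document.createElement("script");
  script.src = src;
  script.async = true;
  document.head.appendChild(script);
}

export function initAdTech(): void {
  if (readConsent() === "granted") {
    loadTracker("https://ad-vendor.example.invalid/tag.js"); // placeholder URL
  }
  // If consent is "unknown", show the banner first and call loadTracker only
  // after an affirmative opt-in; if "denied", never load the tag.
}
```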
Overall, the intersection of ad tech and “trap and trace” demands and litigation highlights the importance of understanding and complying with privacy laws and obtaining explicit consent from website users when collecting and using their data. Now is the time to evaluate your website, privacy policy, terms of use, and consumer privacy rights notices to confirm compliance with the ever-changing landscape of state and federal laws, while also finding balance between meeting your marketing team’s needs and your website users’ experience. Take action to avoid this trap.
State Attorneys General Point to Ways DEI Programs Can Stay Within Legal Boundaries
The attorneys general of sixteen states recently released guidance explaining how diversity, equity, and inclusion (DEI) programs in the private sector can remain viable and legal. This guidance came shortly after President Donald Trump issued two executive orders targeting “unlawful DEI” programs in the federal government, federal contractors, and federal fund recipients, and directed the U.S. attorney general to investigate “unlawful DEI” programs in the private sector.
Quick Hits
The attorneys general of sixteen states signaled to private employers that their DEI programs can remain legal if designed and implemented correctly under applicable laws.
The guidance came in response to President Trump’s executive orders to stop DEI “mandates, policies, programs, preferences, and activities” in the federal government and “unlawful DEI” programs by federal contractors and federal money recipients.
The guidance reiterates that racial and sex-based quotas and unlawful preferences in hiring and promotions have been illegal for decades under Title VII of the Civil Rights Act of 1964.
On February 13, 2025, the attorneys general of Arizona, California, Connecticut, Delaware, Hawaii, Illinois, Maine, Maryland, Massachusetts, Minnesota, Nevada, New Jersey, New York, Oregon, Rhode Island, and Vermont issued guidance stating DEI programs are still legal when structured and implemented properly.
State laws prohibiting employment discrimination based on race or sex vary in scope. Some of them go beyond the protections in the federal antidiscrimination laws.
While noting that race- and gender-based preferences in hiring and promotions have been unlawful for decades, the new guidance provides myriad legally compliant strategies for employers to enhance diversity, equity, and inclusion in the workplace, such as:
prioritizing widescale recruitment efforts to attract a larger pool of job candidates from a variety of backgrounds;
using panel interviews to help eliminate bias in the hiring process;
setting standardized criteria for evaluating candidates and employees, focused on skills and experience;
ensuring accessible recruitment and hiring practices, including reasonable accommodations as appropriate;
ensuring equal access to all aspects of professional development, training, and mentorship programs;
maintaining employee resource groups for workers with certain backgrounds or experiences;
providing employee training on unconscious bias, inclusive leadership, and disability awareness; and
maintaining clear protocols for reporting discrimination and harassment in the workplace.
“Properly developed and implemented initiatives aimed at ensuring that diverse perspectives are included in the workplace help prevent unlawful discrimination,” the guidance states. “When companies embed the values of diversity, equity, inclusion, and accessibility within an organization’s culture, they reduce biases, boost workplace morale, foster collaboration, and create opportunities for all employees.”
Next Steps
A group of diversity officers, professors, and restaurant worker advocates has filed suit to challenge President Trump’s executive orders on DEI. Other groups have brought similar lawsuits. It is unclear what impact the challenges to the executive orders will have on enforcement efforts.
With the executive orders and leadership shifts at the U.S. Equal Employment Opportunity Commission, the Trump administration has signaled a change in federal enforcement priorities that could make lawful private-sector DEI efforts riskier from a legal standpoint.
Private employers may wish to review their existing DEI programs and policies to ensure compliance with federal and state antidiscrimination laws. In some cases, employers may be able to keep the legally compliant parts of their DEI programs while adjusting or eliminating certain parts that the Trump administration could consider unlawful.
Ogletree Deakins will continue to monitor developments and will provide updates on the Diversity, Equity, and Inclusion, Employment Law, and State Developments blogs as new information becomes available.
SOUR MORNING?: For Love and Lemons Faces TCPA Lawsuit Over Timing Violations
Hi TCPAWorld! The Baroness here. And we’ve got a new filing. This time, we’re taking a look at a case involving a popular clothing brand: For Love and Lemons.
Let’s start with the allegations.
The plaintiff Michelle Huang alleges that on November 28 and 29, 2024, she received two text messages from For Love and Lemons.
However, this case isn’t about the typical Do Not Call (DNC) Registry violation you might expect.
This case is actually brought under the time restrictions provisions of the TCPA.
Here’s where it gets interesting: Huang asserts that she received the messages at 7:14 a.m. and 7:45 a.m. — times she says are outside the window in which businesses are allowed to send marketing messages. Specifically, she contends she never authorized For Love and Lemons to send texts before 8 a.m. or after 9 p.m. local time.
This is significant because under 64.1200(c)(1), “[n]o person or entity shall initiate any telephone solicitation” to “[a]ny residential telephone subscriber before the hour of 8 a.m. or after 9 p.m. (local time at the called party’s location).” 47 C.F.R. § 64.1200(c)(1).
Based on this alleged violation, Plaintiff sued For Love and Lemons for violations of Section 227(c) of the TCPA and 64.1200(c)(1).
In addition, she seeks to represent a class of individuals who received similar marketing texts outside the permissible hours:
All persons in the United States who from four years prior to the filing of this action through the date of class certification (1) Defendant, or anyone on Defendant’s behalf, (2) placed more than one marketing text message within any 12-month period; (3) where such marketing text messages were initiated before the hour of 8 a.m. or after 9 p.m. (local time at the called party’s location).
It is not often that we see cases being filed pursuant to 64.1200(c)(1). But this is a reminder that the provision exists!
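For senders who want a simple guardrail, a pre-send check of the recipient’s local time goes a long way. The sketch below is illustrative only; it assumes you already know each recipient’s time zone from your own records, and it is not a substitute for a full compliance review.

```typescript
// Illustrative quiet-hours check for 47 C.F.R. § 64.1200(c)(1):
// no telephone solicitations before 8 a.m. or after 9 p.m. local time.
function isWithinCallingWindow(sendTime: Date, recipientTimeZone: string): boolean {
  // Hour of day (0-23) in the recipient's local time zone, e.g. "America/Los_Angeles".
  const hour = Number(
    new Intl.DateTimeFormat("en-US", {
      timeZone: recipientTimeZone,
      hour: "numeric",
      hourCycle: "h23",
    }).format(sendTime)
  );
  // Permitted window: 8:00 a.m. up to (but not including) 9:00 p.m.
  return hour >= 8 && hour < 21;
}

// Example: a text hitting a California phone at 7:14 a.m. Pacific time fails the check.
console.log(isWithinCallingWindow(new Date("2024-11-28T15:14:00Z"), "America/Los_Angeles")); // false
```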
Since this case was just filed, there is not much to report. But we will of course keep you folks updated as the case progresses.
Huang v. Love And Lemons LLC, Case No. 2:25-cv-01391 (C.D. Cal.).
Online Advertisements Found to Monetize Piracy and Child Pornography
“Online Advertising Hits Rock Bottom” screams one recent headline, as reports from ad fraud researchers purportedly have found evidence that online ads for mainstream brands have appeared on websites dedicated to the display and sharing of child pornography. Others have appeared on sites that facilitate the unauthorized sharing of video content. There is little doubt that the who’s who of major brands whose ads may have appeared on such sites were unaware of this and, had they known, would have objected. I have written about this before, and it keeps happening, despite the proliferation of ad tech vendors promising to prevent it.
Moreover, this is not a victimless crime. Placing ads on a website dedicated to sharing child pornography monetizes this horrific activity. Far from merely benefitting the proverbial “two guys in a Romanian basement,” monies generated from misspent digital advertising can be used to fund terrorism, human trafficking and all manner of abhorrent, criminal activity. This should be of keen interest to all advertisers, particularly public companies.
One estimate says that advertisers lost up to $1 billion to ad fraud in 2024 alone. The nature of online advertising, which has surpassed “traditional media,” lends itself to opacity. Simply put, the Internet is infinitely scalable. Billions of “impressions” are generated daily, and more are always available to the unscrupulous. Advertisers often lack the data needed to determine where every advertisement winds up, and even if they had such data, they lack the wherewithal to determine whether an appropriate price was paid, whether they received value, and whether they received rebates to which they were entitled. Indeed, recent news reports suggest that large-scale bribery has infected ad spending in some international markets.
So, one would think that advertisers would dedicate more resources to rooting out this fraud. To be sure, associational efforts have been undertaken and claim to have shown progress. However, the problem persists and is still quite substantial. What other industry would tolerate fraud on the order of 10-40% of spend? Yet, it continues year after year.
What should a responsible advertiser do now?
Review relevant contracts to determine what audit rights exist;
Revise weak contracts;
Exercise relevant audit rights;
Deal with negligent or reckless vendors; and
Pursue recovery of lost funds.
The last item is sometimes tricky to accomplish and depends on the strength of rights embodied in the relevant contracts. However, the proper contracts can give advertisers the power to pursue a refund of misspent or overspent funds, provided that the audits are strong and demonstrate compensable issues exist. This need not always involve filing a lawsuit.
Pursuing recovery can take courage and surely can create tension in some ongoing relationships. However, can your company continue business as usual with the stakes as high as they are?
“NOT MINIMAL”: Court Holds TCPA Defendant Can Be Liable for Illegal RVM Even Though Platform Sent the Message
There’s an interesting tension between platforms and callers that use their services when it comes to the TCPA.
And it all comes down to who is actually “making” the call.
This is so because the TCPA only applies to parties that make or initiate calls, which is why lead gen data brokers always seem to get off easy and the lead buyers are always caught in a snare.
But in the platform context, the caller wants the platform to be viewed as the “initiator,” whereas the platform operator always wants to be very careful to be nothing more than a conduit.
Well, in Saunders v. Dyck-O’Neal, 2025 WL 553292 (W.D. Mich. Feb. 19, 2025), an unbelievably old case I can’t believe is still around, Defendant moved for summary judgment arguing it could not be liable for ringless voicemails left by the (in)famous VoApps.
To my eye this motion was a real long shot. The facts here are pretty clear. Per the order:
Dyck O’Neal provided VoApps with (1) the telephone number to be contacted, (2) the day and time the voicemails were sent, and (3) the caller ID number to be used. Dyck O’Neal also selected the message to be played. For example, one script of the voicemail message provided: “This is Dyck O’Neal with a message. This is an attempt to collect a debt. Please do not erase this message, and will you call us at 1-877-425-8998. Again, that number is 1-877-425-8998.” (ECF No. 294-8 at PageID.4091).
Ok, so the Defendant gave a file of numbers to the platform, told the platform to deliver a specific message at a specific time, and also supplied the DIDs. I mean, as long as the platform faithfully carried out those instructions, I don’t see how you get around a determination that Defendant “initiated” those calls; they were the party instructing the transmission of the calls. So yeah, they initiated the calls.
And that is just what the Court held.
The Court also held Defendant could be liable under vicarious liability principles since it controlled VoApps in the context of sending the messages:
Dyck O’Neal’s involvement was not minimal. It decided what phone numbers would be called. It decided what prerecorded voicemail messages would be played. It uploaded a “campaign” each day, on the day it wanted calls to be made. It had the message it wanted played during calls recorded and designed the prerecorded message and caller ID to conform to its debt collection purpose. It had alleged debtors’ addresses and directed VoApps to send messages only during permissible time of day, depending upon the physical location of the debtor. By the terms of the contract, VoApps acted as a “passive conduit for the distribution of content and information.”
Yeah… this one was pretty obvious.
Indeed, this motion was borderline frivolous–and perhaps not even borderline–and I rarely say that.
What I find really fascinating is that a different RVM platform was found to be exempt from TCPA liability by Section 230 of the Communications Act, so I am not sure why that issue wasn’t raised as part of Defendant’s motion.
C’est la vie.
This is a good data point on a couple of things:
Platforms should always try to position themselves as mere conduits to avoid findings that they are responsible for the conduct of callers using their services;
Callers who wish to treat their platforms as the “makers” of the call need to really place trust in those platforms and also have clear contract terms to that effect; handing off a list of numbers with explicit instructions is going to sink your chances;
Ringless voicemails are covered by the TCPA as a regulated technology (prerecorded calls), which means you need express written consent for marketing purposes and express consent for informational purposes to leverage these systems; and
Folks caught up in RVM cases should keep Section 230 in mind!
LONG GAME: Is One-to-One Coming Back in January 2026? NCLC Wants to Make that Happen– Here’s How It Might
TCPAWorld is an absolutely fascinating place.
So many incredible storylines always intersecting. And the Czar at the center of it all.
Enjoyable beyond words.
So here’s the latest.
As I reported yesterday, NCLC is seeking to intervene before the Eleventh Circuit Court of Appeals in an apparent effort to seek an en banc rehearing of the Court’s determination that the FCC exceeded its authority in fashioning the one-to-one rule. If successful, the NCLC could theoretically resurrect the rule before the one-year stay the FCC put into effect following R.E.A.C.H.’s emergency petition last month expires.
So, in theory, one-to-one could be back in January 2026 after all.
So let’s back up to move forward and make sure everyone is following along.
Way back in December 2022, Public Knowledge, a special interest group with significant sway over the Biden-era FCC, submitted a proposal to shut down lead generation by banning the sale or transfer of leads.
I went to work trying to spread the word, and in April 2023 the FCC issued a public notice that was a real headfake: the notice suggested it was considering only whether to ban leads that were not “topically and logically” related to the website at issue. Most people slept on this, and many lawyers in the industry told folks this was no big deal, but I told everyone PRECISELY what was at stake.
Despite my efforts, industry’s comments were fairly weak, as very few companies came forward to oppose the new rule.
In November 2023, as only the Czar had correctly predicted, the FCC circulated a proposed rule that looked nothing like the original version: THIS version required “one-to-one” consent, just as I said it would.
Working with the SBA, R.E.A.C.H. and others were able to convince the Commission to push the effective date for the rule from 6 months to 12 months to give time for another public notice period to evaluate the rule’s impact on small business.
This additional six months also gave time for another trade organization to challenge the ruling in court (you’re welcome).
Ultimately, with the clock winding down in the final week before the rule was set to go into effect on January 27, 2025, R.E.A.C.H. filed an emergency petition with the FCC to stay the ruling.
On Friday, January 24, 2025, at 4:35 p.m., the FCC issued the desired stay, pushing back the effective date for up to another year. Twenty minutes later, the Eleventh Circuit Court of Appeals issued a ruling striking down the one-to-one rule completely.
Now the NCLC enters and is seeking to reverse the appellate court’s decision and reinstate the rule. To do so it would need to:
Be granted an unusual post hoc intervention; and either
Be granted an unusual en banc re-hearing and then win that re-hearing; or
Be granted an unusual Supreme Court cert and then win that Supreme Court challenge.
As anyone will tell you, every piece of this is a long shot.
Still, however, it is possible.
For instance, the Eleventh Circuit’s standard for en banc review is high but not overwhelmingly so:
“11th Cir. R. 40-6 Extraordinary Nature of Petitions for En Banc Consideration. A petition for en banc consideration, whether upon initial hearing or rehearing, is an extraordinary procedure intended to bring to the attention of the entire court a precedent-setting error of exceptional importance in an appeal or other proceeding, and, with specific reference to a petition for en banc consideration upon rehearing, is intended to bring to the attention of the entire court a panel opinion that is allegedly in direct conflict with precedent of the Supreme Court or of this circuit. Alleged errors in a panel’s determination of state law, or in the facts of the case (including sufficiency of the evidence), or error asserted in the panel’s misapplication of correct precedent to the facts of the case, are matters for rehearing before the panel but not for en banc consideration.”
To be sure, the Eleventh Circuit’s ruling was quite extraordinary. It turned appellate review of agency action more or less on its head, a complete departure from established analytic norms in such cases.
But, as I have said multiple times, we are living in a whole new world right now. So what was weird and inappropriate six months ago may be very much the new paradigm today.
Of course, being granted the rehearing in this environment would be just step one. NCLC would then actually have to win the resulting en banc review, which is by no means guaranteed even if the rehearing is granted.
But from a timing perspective all of this could theoretically happen within one year.
If NCLC is denied a rehearing, it could theoretically seek Supreme Court review, which could theoretically result in a ruling sometime in May or June 2026; in the meantime, the FCC’s stay would likely be extended in light of the Supreme Court taking the case. But the odds of the Supremes taking such an appeal and then reversing the Eleventh Circuit to reinstate the one-to-one rule seem astronomically small given the current makeup of the Court.
Then again, with Mr. Trump seizing control of independent agencies, the rules regarding how courts review regulatory activity by these agencies just became INSANELY important. Again, we have a whole new paradigm, and the Supremes may theoretically look for any vehicle to opine on the subject ahead of potentially catastrophic separation of powers issues set up by Mr. Trump’s executive order this week.
The bottom line is this: one-to-one consent may rise again, and if the NCLC has its way–it will.
We will keep everyone posted on developments, of course, and the R.E.A.C.H. board will be discussing its own potential intervention efforts shortly.
More soon.