$10.00 CAR INSURANCE?: Quote Wizard Draws Complaint Over Advertisement that Does Not Comport With “Basic Common Sense”
Is this real?
So Lending Tree hasn’t apologized yet.
But I am over it.
Unrelated, but I picked up this odd complaint in Michigan that I thought was interesting.
Apparently Quote Wizard was running ads suggesting they could provide full auto insurance coverage for $10.00.
At least that’s the gist of the complaint I was provided.
The consumer says:
QuoteWizard.com, LLC is running at least 29 illegal advertisements to solicit insurance in the State of Michigan in violation of Michigan Compiled Law (MCL) 500.2003, 500.2005, 500.2005a, 500.2007. The Michigan Insurance Code states that unfair methods of competition and unfair and deceptive acts include the making, publishing, disseminating, circulating, etc. of any assertion with respect to the business of insurance or with respect to any person in the conduct of his insurance business, which is untrue, deceptive or misleading. MCL § 500.2007. The Michigan Insurance Code further prohibits the use of marketing that fails to disclose in a conspicuous manner that its purpose is solicitation of insurance and that contact will be made by an insurance agent or insurance company. MCL § 500.2005a. Quotewizard.com, LLC runs a variety of advertisements on Meta’s Facebook platform. These ads, which I have copied links to view in Meta’s Ad Library, are untrue, deceptive, and misleading. Quotewizard.com, LLC advertises a new insurance rate as ” New Rate $10 Full Coverage”. As a licensed insurance agency in the State of Michigan Quotewizard.com, LLC must follow the law. Based on information, belief, and the application of basic common sense, Quotewizard.com, LLC cannot offer an automobile insurance policy with “full coverage (which in common parlance generally means to include both collision and comprehensive coverage) for $10. If Quotewizard.com, LLC is in fact selling $10 auto insurance policies we have an even bigger problem because based on a search of DIFS website QuoteWizard.com, LLC is not appointed by a single insurance carrier to transact business in the state. Quotewizard.com, LLC appears to be preying on Michigan’s financially venerable [editor’s note: probably means vulnerable] population that can barely afford their car insurance and is trying to entice them to click their advertisement in hopes of financial relief. Instead clicking the advertisement will simply forward you information to dozens of insurance agents that will call you over and over trying to sell you insurance at rates that we would customarily expect to receive not $10.
Just because a consumer says this is true doesn’t make it true. But the ads library looks pretty legit. So maybe Quote Wizard was knowingly or unknowingly tricking people into visiting its website. Or maybe somebody is submitting false stuff to a Michigan regulator. *Shrug.*
Regardless, I am sharing this because it does raise a pretty important issue for folks buying leads – you need to understand your entire funnel.
If you are accepting clicks – or even inbound calls – from social media ads that contain false content, you may end up being pursued by a state agency. (That hasn’t happened here, BTW – just a complaint, but one everyone can learn from.)
And I know Musk may have just killed the CFPB, and the feds look unlikely to regulate anyone or anything – at least for a while – but the states can be plenty aggressive. So watch out!
Massachusetts AG Unveils Internal TikTok Documents in Lawsuit Alleging Child Addiction Strategies
On February 3, 2025, the Massachusetts Attorney General revealed information about internal TikTok documents as part of the AG’s lawsuit in Massachusetts state court alleging that TikTok designed its platform to maximize children’s engagement while downplaying associated risks through unfair and deceptive practices prohibited under Massachusetts law. The information, revealed in a less-redacted complaint, highlights internal discussions and strategic choices made by TikTok to increase the time young users spend on the app.
The complaint alleges that TikTok’s internal metrics prioritize children’s engagement, with teenagers offering the highest “Life Time Value” to the company. According to the complaint, internal data showed that in 2020, TikTok had achieved 95% market penetration among U.S. teens aged 13 to 17. A 2019 presentation allegedly stated that the platform’s “ideal user composition” would be for 82% of its users to be under the age of 18.
TikTok executives allegedly were aware of the potential negative effects of its algorithm on children, including sleep disruption and compulsive use. Internal communications cited in the lawsuit include a statement from TikTok’s Head of Child Safety Policy acknowledging that the app’s algorithm keeps children engaged at the expense of other essential activities.
TikTok’s leadership also allegedly blocked proposed changes aimed at reducing compulsive use among minors due to concerns about negative business impacts. One example cited in the complaint involves a proposed “non-personalized feed” that could have mitigated compulsive behaviors but was ultimately rejected.
The complaint also alleges that TikTok misrepresented the effectiveness of its content moderation policies. While the company has publicly claimed high proactive removal rates for harmful content, internal data allegedly shows significant leakage of inappropriate material, including content related to child safety violations.
The Massachusetts case is one of the first to publicly disclose internal TikTok documents related to its user engagement strategies. Its outcome could impact how social media companies design their platforms and address concerns regarding child safety.
Insurtech in 2025: Opportunity and Risk
The explosion in artificial intelligence (AI) capability and applications has increased the potential for industry disruptions. One industry experiencing recent material disruption is about as traditional as it gets: insurance. While some level of disruption in the insurance industry is nothing new, AI has been accelerating more significant changes to industry fundamentals. This is the first advisory in a series exploring the legal risks and strategies surrounding disruptive insurance technologies, particularly those leveraging AI, known as Insurtech.
What is Insurtech?
Insurtech is a broad term that encompasses every stage of the insurance lifecycle. Cutting-edge technology can be instrumental in advertising, lead generation, sales, underwriting, claims processing and fraud detection, among other functions. Generative AI can assist in client management and retention. Insurtech can augment traditional forms of insurance such as car and health insurance, and facilitate less traditional forms of insurance, such as parametric insurance or microinsurance at scale.
Legal and Regulatory Risks of Insurtech
As Insurtech continues to evolve, designers, providers and deployers must be aware of the legal and regulatory risks inherent in the use of Insurtech at all stages. These risks are particularly heightened in the insurance world, where vendors and carriers process an enormous amount of personal information in the course of decision-making that impacts individuals’ rights, from advertising to product pricing to coverage decisions.
The heavily regulated nature of the traditional industry is also heightened in the Insurtech context, given overlapping regulatory interest in new technology applications. These additional layers of oversight – which in traditional applications may not be as much of a primary concern – include the Federal Trade Commission, state attorneys general and, in some jurisdictions, state-level privacy regulators.
Building Compliance for Insurtech Solutions
Designing, providing and deploying Insurtech solutions requires a multifaceted, customized approach to position agents, vendors, carriers and indeed any entity in the insurance stack for compliance. Taking early action to build appropriate governance for your Insurtech product or application is critical to building a defensible regulatory position. For entities that have an eye on raising capital, engaging in mergers or acquisitions, or other collaborative marketplace activity, such governance will minimize friction that can impede success.
Additionally, consumers are increasingly attentive to data privacy and AI governance standards. Incorporating proper data privacy and AI governance regimes from day one is not only a forward-thinking business decision to mitigate risk and facilitate success; it is also a market imperative.
Looking Ahead: Risks and Opportunities in 2025
Over the next few months, we will take a closer look into more discrete risks and opportunities that Insurtech providers and deployers need to keep in mind throughout 2025. Follow along as we explore this exciting area that in recent years has demonstrated enormous potential for continued growth.
The New Legal Synergy: Collaborative Intelligence with Lawyers and Agentic AI
It’s easy to dismiss new technology as impractical for an industry as established as law. But we’re well past the speculation phase. AI isn’t a theoretical disruptor — it’s already here, reshaping legal work in real-time.
The legal industry has witnessed a staggering increase in AI adoption, from a mere 19% in 2023 to an impressive 79% in 2024. In the UK alone, 41% of legal professionals now use AI for work, up from just 11% in July 2023. The dramatic surge in AI adoption is not just the latest “hype cycle”; it marks the beginning of a fundamental shift in how legal work is done.
The traditional image of a lawyer poring over dusty tomes and case files is fading. AI-powered tools are becoming integral to legal practice. But what we’ve seen so far with generative AI is just the beginning. The fundamental transformation will come with agentic AI.
Agentic and reasoning: the next frontier of AI
Agentic AI, the next frontier beyond generative AI, is poised to revolutionize legal work. Unlike its predecessor, agentic AI uses advanced AI systems capable of independently performing complex research or document drafting tasks. These AI systems can accomplish tasks with minimal human oversight and even check their own work before human review.
Large law firms are already experimenting with agentic AI, with experts predicting that AI systems could soon be members of legal teams. This gradual integration is expected to continue, emphasizing training and preparation.
Advanced legal reasoning, powered by AI
One of the most promising applications of agentic AI in the legal field is advanced legal reasoning (ALR), which goes beyond simple document analysis or basic research tasks.
ALR allows lawyers to upload tens of thousands of documents and conduct deep analyses to uncover insights into the strengths, weaknesses, and potential strategies buried in the complexities of the facts and issues — all within minutes. Leveraging the most advanced AI systems, ALR streamlines complex workflows, enabling lawyers to make informed decisions faster than ever.
It can interpret complex legal scenarios, apply relevant case law and statutes, and even suggest strategic approaches to legal problems. Lawyers can ask ALR systems questions like, “What is the weakest part of our claim concerning liability?” By analyzing key documents and referencing leading legal authorities, the ALR platform would provide a detailed, actionable response.
For example, when asked about a spouse’s income for child support calculations, ALR first employs an agent to search for the legal standard, then uses another agent to apply that understanding to case documents and extract the necessary information.
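For readers who want to see the plumbing, below is a minimal sketch of such a two-agent chain, assuming a generic LLM client. The function names, prompts, and types are illustrative stand-ins, not the API of any actual ALR product.

```typescript
// Hedged sketch of a two-agent ALR-style pipeline. `callModel` is a
// hypothetical stand-in for any LLM client (OpenAI, Anthropic, etc.).

type Doc = { name: string; text: string };

// Hypothetical model call -- wire up a real client here.
async function callModel(prompt: string): Promise<string> {
  throw new Error("not implemented: connect an LLM client");
}

// Agent 1: research the governing legal standard for the question.
async function findLegalStandard(question: string): Promise<string> {
  return callModel(
    `Identify the controlling legal standard for: ${question}. ` +
      `Cite the statute or case law relied upon.`
  );
}

// Agent 2: apply that standard to the uploaded case documents.
async function applyStandard(standard: string, docs: Doc[]): Promise<string> {
  const corpus = docs.map((d) => `--- ${d.name} ---\n${d.text}`).join("\n");
  return callModel(
    `Legal standard:\n${standard}\n\nCase documents:\n${corpus}\n\n` +
      `Extract the facts relevant to this standard and state a conclusion.`
  );
}

// Orchestrator: chain the agents, mirroring the child-support example.
async function answer(question: string, docs: Doc[]): Promise<string> {
  const standard = await findLegalStandard(question);
  return applyStandard(standard, docs);
}
```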
The impact of advanced legal reasoning tools is already evident. A staggering 71% of lawyers cite faster delivery as a key benefit of AI, while 54% report improved client service. Unsurprisingly, 78% of large law firms and 74% of corporate in-house teams have implemented AI changes.
Considerations for law firms adopting agentic AI
As agentic AI becomes more integrated into legal practice, firms must navigate ethical considerations and data privacy concerns. Seven in ten firms (70%) prioritize data privacy policies when vetting technology vendors and litigation support providers. This focus on data protection is crucial, as 76% of legal professionals express concern about inaccurate or fabricated information from public AI platforms. To address these privacy and security concerns, a growing pool of legaltech companies is helping law firms adopt self-hosted AI solutions built to run within a firm’s private cloud ecosystem.
The future of agentic AI in law
Looking ahead, the future of law is undeniably intertwined with AI – from established firms to schools teaching the next generation of lawyers.
Three-quarters (75%) of organization leaders expect to change talent strategies within two years due to AI advancement. Law schools are already integrating generative AI training for new junior lawyers, preparing the next generation for an AI-powered workforce.
But let’s be clear: AI is not here to replace lawyers. It’s here to make them better. Those who embrace it — who approach it with curiosity and a willingness to adapt — will gain the most. The legal industry isn’t losing its expertise. It’s gaining new tools to apply that expertise more effectively.
If you take this shift seriously, AI won’t just change how you practice law — it will give you an edge.
Geolocation Takes the Day at Churchill Downs
Like the thoroughbred Rich Strike at the 2022 Kentucky Derby, one category of personal data recently broke from the rear and galloped its way to the forefront of awareness, astonishing the grandstands. You may hold its source in the palm of your hand. It is precise geolocation data1, collected from mobile devices.
The analogy presumes that the grandstands are packed with privacy nerds. For the rest of you, here’s a quick setup: Modern privacy laws2 define personal information very broadly3. Examples are given, including the physical location of an identifiable human being4 (“location” or “geolocation” data). Certain categories of personal info are deemed to be riskier to handle than others5. An increase in the level of risk attributed to precise geolocation data is the topic of this article.
Also presumed is a memory of Rich Strike’s epic victory. Picture a horse making moves like a running back, cutting a path through the field like he’s the only steed with a sense of urgency. Then he’s over the line and like: Whoa, what just happened?
But we’re getting ahead of ourselves.
Upon entering the gates at post time, geolocation data seemed to merit the same odds as Rich Strike (80:1) against what was about to transpire. After all, GDPR6 itself (the OG of privacy laws) deemed it to be nothing special.7
Let’s trace its path as it makes its astonishing run. Then we’ll circle back to GDPR and answer the obvious question: did it really (as it appears) fail to back the right horse? (Spoiler alert: the answer is no.) Finally, we’ll explore whether a silver bullet might exist to address the core concern underlying the discussion. (Spoiler alert: the answer is yes.)
A Word About Geolocation Data
Normally, geolocation data collected from cell phones is used to serve targeted ads to consumers who have consented to the process. The ideal recipient delights in getting a coupon for the precise cup of joe (for example) that happens to be his favorite, just as he happens to pass a store that happens to offer it.8 Yay to that.
But unfortunately, a sketchier use came to light at about the same time that GDPR was published (2016). It seemed like a niche concern at the time, more of a culture-war skirmish than anything broader. The story appeared in the Rewire News Group, a special interest publication with a narrowly focused readership9:
Anti-Choice Groups Use Smartphone Surveillance to Target ‘Abortion-Minded Women’ During Clinic Visits.10
It garnered little attention.11 Following in GDPR’s footsteps, the 1.0 version of CCPA12 (2018) mentions “geolocation data” as one example of personal information, but declines to single it out as anything special.
That changed in 2020 when CCPA 2.0 was adopted.13 Among the amendments, a newly created category of “sensitive personal data” debuted, including a “consumer’s precise geolocation.” However, the added protections were limited.14
The Sprint to Prominence
The day that corresponds (in our analogy) to the sixth furlong at Churchill Downs, and the start of the homestretch, is May 2, 2022.
That’s when the SCOTUS decision in Dobbs v. Jackson15 was leaked to the press. The very next day, Vice Media published a story entitled Data Broker Is Selling Location Data of People Who Visit Abortion Clinics.16 The article warned of “an increase in vigilante activity or forms of surveillance and harassment against those seeking or providing abortions.”17 A cascade of similar reporting ensued.18
Following the lead of the fourth estate, the other three soon got involved.19 A handful of pro-choice states quickly passed laws restricting the use of geolocation data associated with family planning centers.20
Meanwhile, the Federal Trade Commission entered the fray, deeming certain uses of geolocation data to be unfair.21 In 2022, it floated a novel position: that using precise geolocation data associated with sensitive locations is an invasion of privacy22 prohibited by law.23 By 2024, it had firmed up a list of locations it deemed within the scope of the prohibition, including medical facilities, religious organizations, ethnic/religious group social services, etc. (The full list appears in the table below.)
Effectively, the FTC consigned “Sensitive Location Data” to the highest rank of sensitivity: personal data so sensitive that even informed consent can’t sanction its processing. Other rule-makers would go even further, proposing to ban the sale of precise geolocation altogether (sensitive or not)24, which brings us to the present day – and to a present-day head-scratcher:
Are the risks so dire that our hypothetical coffee consumer must be denied the targeted coupon that so delights him?
Circling back to GDPR provides a helpful perspective.
Did GDPR Really Back the Wrong Horse?
GDPR deems certain types of personal data to be sensitive25 including data concerning a person’s health, religion, political affiliation, etc. (The full list appears in the table below.) Location data isn’t included.
Nevertheless, if and when location data reveals or concerns sensitive data, it transforms into sensitive data ipso facto.
For example, data that locates a patient within a hospital is sensitive data, because it concerns their health. But data that locates an attending physician within the same hospital is not sensitive data, because it doesn’t.
That’s one difference between GDPR and the FTC rule: the latter deems all location data associated with a Sensitive Location to be sensitive, whereas GDPR deems location data sensitive only if it actually reveals the sensitive data of a consumer.
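For the technically inclined, the difference can be stated as two different predicates. The sketch below is purely illustrative: the geofence list, coordinates, and the revealsSpecialCategoryData flag are hypothetical placeholders for what is, in reality, a legal analysis rather than a boolean.

```typescript
// Hedged sketch of the two tests contrasted above.

type Ping = {
  lat: number;
  lon: number;
  // e.g., the device owner is a patient (true) vs. an attending physician (false)
  revealsSpecialCategoryData: boolean;
};

type Geofence = { lat: number; lon: number; radiusM: number; category: string };

// Hypothetical geofence for one of the FTC's listed location categories.
const sensitiveLocations: Geofence[] = [
  { lat: 42.33, lon: -83.05, radiusM: 150, category: "medical facility" },
];

// Equirectangular approximation -- adequate at geofence scale.
function distanceM(aLat: number, aLon: number, bLat: number, bLon: number): number {
  const R = 6_371_000;
  const x = ((bLon - aLon) * Math.PI / 180) * Math.cos(((aLat + bLat) / 2) * Math.PI / 180);
  const y = ((bLat - aLat) * Math.PI) / 180;
  return Math.sqrt(x * x + y * y) * R;
}

// FTC-style test: mere association with a listed location is enough.
function sensitiveUnderFtc(p: Ping): boolean {
  return sensitiveLocations.some((g) => distanceM(p.lat, p.lon, g.lat, g.lon) <= g.radiusM);
}

// GDPR-style test: sensitive only if the data actually reveals or concerns
// special-category data about the individual (the patient, not the physician).
function sensitiveUnderGdpr(p: Ping): boolean {
  return p.revealsSpecialCategoryData;
}
```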
Here’s another difference:
Even when GDPR deems personal data to be sensitive, it doesn’t prohibit its use altogether. Rather, sensitive data may be used in accordance with a consumer’s explicit consent.
If that just caused you to raise an eyebrow, you’re probably not alone. GDPR isn’t known for permissive standards. And indeed, there’s a catch. The permissiveness comes at a cost in the form of rigorous duties imposed on businesses wishing to use sensitive data.
A threshold duty is to check local laws. GDPR hedges on its permissiveness by granting member-state lawmakers the right to raise the bar and outlaw particular uses of sensitive data altogether (as the FTC did with Sensitive Location Data).26
Furthermore, it falls to the business to adjudge whether the risks of using the sensitive data outweigh the benefits.27 A formal Data Protection Impact Assessment is required, which is no small feat. Any green light to the use of sensitive data is likely to be closely scrutinized, should it catch the attention of a Supervisory Authority. Businesses must avoid using the rope provided by GDPR to hang themselves with – that’s the takeaway.
Finally, a heightened standard is likely to govern the validity of any consents purported to authorize the use of sensitive data,28 which brings us to the crux of the matter:
A Crisis of Confidence in Consents
Modern privacy laws set a high bar for what constitutes valid consent. In a nutshell, the person providing it must understand – really and truly – what they’re saying “yes” to.29
If the high bar is met, targeted ads may properly be served to consenting consumers, assuming any applicable red lines regarding sensitive data are respected.30 No current privacy framework31 rejects this principle. Rather, what’s been called into question, in particular cases, is the proviso – i.e., whether purported consents are valid in the first place.32
Some rule makers are skeptical to the extreme. They would dispense with consent as a legal basis for using location data in targeted advertising altogether. So flawed is the system, in their view, that consumers – for their own protection – must be denied the agency to proffer consent. Sorry coffee lover, no just-in-time coupon for you!
There are reasons to think that position would go too far.
Why Consent Matters in Principle
Here’s a reality check: the right to privacy is not absolute. Even under GDPR, it must be balanced against other fundamental rights, including the freedom to conduct a business.33 This may be why GDPR stops short of an outright ban on the use of sensitive data, consent notwithstanding. Taken too far, such a ban might infringe on the rights of individuals to determine how their personal data (which they own) may be used, and the rights of businesses to use personal data in accordance with the wishes of consenting adults.
Big Improvements in Managing Consents
A protocol is currently being rolled out by a nonprofit consortium of digital advertising businesses, the IAB Tech Lab.34 Known as the Global Privacy Platform (GPP), it establishes a method for digitally recording a consumer’s consent to the use of their data. The resulting “consent string” attaches to the personal data, accompanying it on its journey through the auction houses of cyberspace. Businesses that receive the data also receive the consent string, so there’s little excuse for exceeding consumer permissions.
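As a rough illustration of the mechanism, here is a hedged sketch of a consent record traveling with an ad request. The field names, section IDs, and the JSON-over-base64 encoding are hypothetical stand-ins; the real GPP consent string uses a compact encoding specified by the IAB Tech Lab.

```typescript
// Hedged sketch of a consent string traveling with personal data.
// Everything below is illustrative, not the GPP wire format.

interface ConsentRecord {
  sectionId: string;           // which privacy regime the signal covers
  purposesConsented: number[]; // purpose IDs the consumer agreed to
  timestamp: string;
}

interface AdRequest {
  deviceId: string;
  preciseGeo?: { lat: number; lon: number };
  consentString: string;       // rides along through the ad auction
}

// Encode the record so it can accompany the data downstream.
function encodeConsent(record: ConsentRecord): string {
  return btoa(JSON.stringify(record));
}

// A receiving business decodes the string before using the data.
function decodeConsent(s: string): ConsentRecord {
  return JSON.parse(atob(s));
}

const request: AdRequest = {
  deviceId: "device-123",
  preciseGeo: { lat: 38.2049, lon: -85.7708 }, // roughly Churchill Downs
  consentString: encodeConsent({
    sectionId: "hypothetical-us-ca",
    purposesConsented: [1, 4], // illustrative purpose IDs
    timestamp: new Date().toISOString(),
  }),
};

// Downstream check: use the data only for consented purposes.
const consent = decodeConsent(request.consentString);
const mayTarget = consent.purposesConsented.includes(1);
```

The point of the design is that the permissions travel with the payload: a downstream recipient decodes the string and checks the consented purposes before touching the data.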
Universal adoption of the GPP would establish the state-of-the-art in consent management for digital advertising businesses. It would be a significant milestone.
Give Consent a Chance
Thereafter, improvements in the granularity of consent, and the effectiveness of consent management processes, might soon blow our minds. Or so we are led to expect, at this point in history, the dawn of the AI era. Consent-management “copilot” bots nestled in our pockets like Tinkerbell – only a Luddite would doubt it. Or so it seems.
This is the promised silver bullet: consents so robust and manageable that even the most privacy-conscious consumer might have the confidence to grant them – present company included.
* * * *
When is Location Data Deemed Sensitive?
| FTC | GDPR |
| --- | --- |
| “Sensitive Location Data” is precise geolocation data associated with35: | “Location Data” becomes Sensitive Data when it reveals or concerns an individual’s: |
| Medical facilities | Health |
| Religious organizations | Religious or philosophical beliefs |
| Correctional facilities | Data relating to criminality is not Special Category data under Art.9, but might be effectively bucketed into this column. See Art.10. |
| Labor union offices | Trade union membership |
| Locations held out to the public as predominantly providing education or childcare services to minors | The personal data of children is not Special Category data under Art.9, but might be effectively bucketed into this column. See Art.8 and Recital 75. |
| Locations held out to the public as predominantly providing services to LGBTQ+ individuals such as service organizations, bars and nightlife | Sex life or orientation |
| Locations held out to the public as predominantly providing services based on racial or ethnic origin | Racial or ethnic origin |
| Locations held out to the public as predominantly providing temporary shelter or social services to homeless, survivors of domestic violence, refugees, or immigrants | No direct corollary. But the ordinary risk assessment required for non-sensitive data may result in adding data about homelessness, etc. to this column. See also the previous row, which may apply to data of refugees and immigrants. |
| Locations of public gatherings of individuals during political or social demonstrations, marches, and protests | Political opinions |
| Military installations, offices, or buildings | No direct corollary. But the ordinary risk assessment required for non-sensitive data may result in adding data about military installations, etc. to this column. |
| Similar protections are accorded to the location of an individual’s private residence | No direct corollary, though the ordinary risk assessment required for non-sensitive data may result in adding domicile data to this column. |
1 Typically defined as latitude & longitude coordinates derived from a device such as a cellphone, which place the device at a physical location with an accuracy of
European Commission Rejects Draft DORA RTS on Sub-contracting
The European Commission (Commission) recently published a letter (Letter) that it sent to the European Supervisory Authorities (ESAs) rejecting certain draft regulatory technical standards (RTS) under the EU Digital Operational Resilience Act (DORA). The draft RTS specified the conditions and criteria to be considered by financial entities when sub-contracting information communication and technology (ICT) services supporting critical or important functions. The Letter, dated 21 January 2025, follows the ESAs’ submission of their final report on the draft RTS in July 2024.
In the Letter, the Commission explained its rejection on the basis that the requirements introduced by Article 5 of the draft RTS on the “Conditions for sub-contracting relating to the chain of ICT sub-contractors providing a service supporting a critical or important function by the financial entity” go beyond the mandate provided to the ESAs under Article 30(5) of DORA. This is because they introduce requirements not specifically linked to the conditions for sub-contracting.
In light of this, the Commission considered that Article 5 of the draft RTS and the related Recital 5 should be removed to ensure the ESAs comply with their mandate as set out in DORA.
The Commission intends to adopt the RTS once its concerns are addressed and the necessary modifications are made by the ESAs.
The Letter is available here.
Bad News & Good News: Ransomware Up, Payments Down in 2024
American blockchain analysis firm Chainalysis reports that ransomware payments declined significantly in 2024, dropping to $813 million from $1.25 billion in 2023 – a 35% decrease. The company’s sleuthing also revealed that only 30% of victims who entered negotiations with ransomware actors ultimately paid a ransom. That’s big. And this downward payment trend occurred despite 2024 being a record year for ransomware attacks overall.
This work reveals a disconnect between attack volume and successful extortion, suggesting organizations are becoming more resilient to ransomware pressure tactics. Some of the possible factors contributing to the decrease in ransomware payments include:
Law Enforcement and International Collaboration: Increased law enforcement actions and improved international collaboration have been effective in disrupting ransomware operations. For example, the takedown of LockBit by the UK’s National Crime Agency (NCA) and the US FBI led to a 79% decrease in payments.
Increased Gap Between Demands and Payments: The difference between ransom demands and actual payments is increasing. Incident response data shows that a majority of clients do not pay at all.
Shift in Ransomware Ecosystem: The collapse of LockBit and BlackCat led to a rise in lone actors and smaller groups that focus on small to mid-size markets with more modest ransom demands.
Illegitimate Victims on Data Leak Sites (more on this below): Some threat actors have been caught overstating or lying about victims, or reposting claims about old victims. After being ostracized by the underground community following law enforcement action, LockBit was found to have padded its data leak site with repeat or fabricated victims, which at one point made up as much as 68% of its posts.
Ransomware Actors Abstaining From Cashing Out: Ransomware operators are increasingly abstaining from cashing out their funds (such that the funds flow isn’t tracked), likely due to uncertainty and caution amid law enforcement actions targeting individuals and services facilitating ransomware laundering.
Victim Refusal to Pay: More victims are choosing not to pay ransoms due to improved cyber hygiene and overall resiliency.
Chainalysis also gives a summary of the data leak trends in 2024:
– unprecedented growth in ransomware data leak sites, with 56 new sites emerging in 2024 – more than twice the number identified in 2023
– researchers note significant concerns about the accuracy of these reported leaks:
– many leaks overstated their impact, claiming entire multinational organizations when only small subsidiaries were affected
– over 100 organizations appeared on multiple leak sites
– ransomware gang LockBit, following law enforcement disruption, artificially inflated its numbers by reposting old victims and fabricating new ones – with up to 68% of its posts being repeat or false claims
This analysis suggests that while data leak sites showed record numbers in 2024, the actual scope of successful ransomware attacks may be significantly lower than the raw numbers indicate.
CIPA SUNDAY: Class Certified! Instant Replay Catches Prudential Offside—It’s 4th & Long, What’s Their Next Move?
Greetings CIPAWorld!
We are bringing back CIPA Sundays! And what better day to do it than Super Bowl Sunday—where the only replay we should be analyzing is on the field. But off the field, a different kind of replay is making headlines—one that allegedly tracks every move you make online, even the ones you think are erased. While millions tune in for the big game, another play-by-play happens behind the scenes. Imagine filling out an online life insurance quote form. You type in your age, financial details, and perhaps even information about your medical history. Then you delete something, perhaps reconsidering how much you want to share. But what if that erased data was never truly gone? What if every keystroke, every backspace, every moment of hesitation was silently recorded? That’s exactly what Prudential Financial allegedly did, and a federal court gave thousands of California residents the green light to challenge the practice together.
In a significant ruling for digital privacy, Judge Charles Breyer of the Northern District of California refused to let Prudential and its partners sidestep liability, certifying a class action that could reshape the boundaries of online data collection. See Torres v. Prudential Fin., Inc., No. 22-CV-7465-CRB, 2024 WL 4894289 (N.D. Cal. Nov. 26, 2024). Does this case sound familiar? It should. The one and only Baroness blogged about it here: The ActiveProspect Saga: Privacy Challenges Continue Post-Javier. This case illustrates how courts deal with modern surveillance technologies, the boundaries of implied consent, and whether companies can justify real-time user tracking under the pretext of “data collection.”
So, let’s get a little technical for a moment for those unfamiliar with this tech. The technology at issue here is TrustedForm, a session replay tool that does more than just log user submissions. It generates a second-by-second reconstruction of a user’s entire interaction with Prudential’s quote form, capturing information even if it is deleted before submission. Here’s how it works: the moment a visitor lands on the form, TrustedForm assigns them a unique tracking ID and begins recording. Think of it like a surveillance camera for your browser—monitoring every keystroke, every backspace, every time you hover over a field but hesitate to fill it out. By the time users hit “Get an instant quote,” Prudential and its partners already have a fully mapped-out replay of their entire thought process. But here’s the twist—users never agreed to this level of tracking. At no point were they explicitly told that their interactions were being recorded in real time.
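To make the mechanics concrete, here is a minimal browser-side sketch of how a session-replay recorder of this general kind could capture keystrokes and deletions before any submission. The DOM APIs are standard; the endpoint, payload shape, and batching interval are hypothetical, and this is not a description of TrustedForm’s actual code.

```typescript
// Hedged sketch of a session-replay recorder; "/replay" is a made-up endpoint.

type ReplayEvent = {
  sessionId: string;
  ts: number;     // milliseconds since recording began
  type: string;   // "keydown", "focus", ...
  field?: string; // which form field the event targeted
  key?: string;   // includes "Backspace" -- deletions are captured too
};

const sessionId = crypto.randomUUID(); // unique tracking ID per visitor
const start = performance.now();
const events: ReplayEvent[] = [];

function record(type: string, target: EventTarget | null, key?: string) {
  const field = target instanceof HTMLInputElement ? target.name : undefined;
  events.push({ sessionId, ts: performance.now() - start, type, field, key });
}

// Every keystroke and backspace on the page is logged as it happens.
document.addEventListener("keydown", (e) => record("keydown", e.target, e.key));
// Hesitation signals: which fields the user focuses but abandons.
document.addEventListener("focusin", (e) => record("focus", e.target));

// Periodically ship the buffer -- nothing waits for a "submit" event.
setInterval(() => {
  if (events.length) {
    navigator.sendBeacon("/replay", JSON.stringify(events.splice(0)));
  }
}, 2000);
```

Note what the sketch makes obvious: the buffer ships as the user types, so deleted input reaches the server whether or not the form is ever submitted.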
With this in mind, let’s now switch gears and break down the Court’s reasoning so we can work through the Court’s analysis to fully understand it. Before deciding whether a class could be certified, the Court tackled standing—a threshold issue in privacy litigation. Relying on Campbell v. Facebook, Inc., 951 F.3d 1106, 1117 (9th Cir. 2020), Judge Breyer reaffirmed that CIPA violations inherently confer standing because they protect substantive privacy rights, not just procedural ones. Unlike some privacy claims that require proof of harm beyond statutory violations, Plaintiff didn’t need to show her data was misused—the unconsented recording itself was enough to constitute concrete injury under TransUnion L.L.C. v. Ramirez, 594 U.S. 413, 423 (2021).
Prudential’s primary defense hinged on implied consent—arguing that website visitors were sufficiently “on notice” of session replay tracking through privacy policies, industry norms, and even news articles discussing online monitoring. However, the Court wasn’t convinced. Relying on Calhoun v. Google, L.L.C., 113 F.4th 1141, 1147 (9th Cir. 2024), the Court emphasized that for consent to be valid, it must be to “the particular conduct, or substantially the same conduct” at issue. Generic disclosures about data collection won’t cut it.
Prudential then pointed to its privacy policy, but the Court found this argument lacking, distinguishing Torres from its own prior decision in Javier v. Assur. IQ L.L.C., 649 F. Supp. 3d 891, 896-97 (N.D. Cal. 2023). While Javier held that a privacy policy might put users on “inquiry notice” for statute of limitations purposes, it didn’t establish actual consent. Here, no reasonable user clicking through Prudential’s quote form would expect that their keystrokes and deleted inputs were being recorded in real time.
Another hurdle Prudential raised was the identification of class members—it argued that individual inquiries would dominate. The Court disagreed. Under Briseno v. ConAgra Foods, Inc., 844 F.3d 1121, 1125 (9th Cir. 2017), the Ninth Circuit doesn’t require plaintiffs to demonstrate administrative feasibility at the certification stage. It found that Prudential’s own database, combined with user affidavits, would be sufficient to identify affected consumers.
Next, one of Prudential’s more technical arguments was that some class members may have used VPNs, making it difficult to verify whether they were in California. However, the Court found this issue insufficient to defeat predominance. The Court suggested that ZIP code cross-referencing and affidavits could establish California residency. See Zaklit v. Nationstar Mortg. L.L.C., No. 5:15-cv-2190-CAS(KKx), 2017 WL 3174901, at *9 (C.D. Cal. July 24, 2017). It also pointed out that CIPA’s protections extend to communications “in transit” through California, meaning that even non-residents could potentially qualify if their data was intercepted while in the state.
With class certification granted, the battle over Prudential’s use of TrustedForm is far from over. The defendants—Prudential, ActiveProspect, and Assurance IQ—aren’t waiting for the trial to try to shut this case down. They’ve filed an early motion for summary judgment, arguing that their use of TrustedForm doesn’t violate California’s wiretapping law, CIPA § 631(a). The motion is set for a hearing on March 28, 2025, with briefing continuing through February and March.
The core of this motion revolves around whether ActiveProspect qualifies as a “third-party eavesdropper” under CIPA or if it was merely a service provider acting on Prudential’s behalf. The defense insists that TrustedForm is just a compliance tool, incapable of independent use, while Plaintiffs argue that recording user interactions without explicit consent is exactly the kind of digital surveillance CIPA was meant to prevent. Defendants might also move to exclude the expert testimony of Plaintiffs’ software expert, adding another layer of complexity. If they do, that motion will be fully briefed by March 13, 2025.
Meanwhile, the Court has scheduled a case management conference for March 28, 2025, immediately following the summary judgment hearing. Depending on how Judge Breyer rules, this case could either be heading toward trial—or be over before it ever gets there.
Bottom line? This fight is far from over, and Torres could still set a significant precedent for online tracking and consumer privacy rights. The next few months will be worth watching, and we’ll be sure to keep you updated.
As always,
Keep it legal, keep it smart, and stay ahead of the game.
Talk soon!
Eleventh Circuit Strikes Down One-to-One Consent Rule
On February 6, 2025, the Eleventh Circuit Court of Appeals struck down the FCC’s one-to-one consent rule (previously discussed here). Applying the Supreme Court’s decision in Loper Bright Enters. v. Raimondo, the Eleventh Circuit ruled that the FCC exceeded its legal authority by enforcing additional consent restrictions not explicitly outlined in the Telephone Consumer Protection Act (TCPA).
The FCC had implemented the one-to-one consent rule as a safeguard against excessive telemarketing calls. By requiring consumers to grant consent to each specific seller, the rule sought to minimize unwanted marketing communications.
By invalidating the rule, the court effectively maintains the status quo, which allows businesses to rely on a single instance of consumer consent for multiple lead generators.
Putting It Into Practice: This ruling likely ends the FCC’s push for the one-to-one consent rule. In the short term, the FCC will need to decide whether to appeal the ruling to a possibly hostile Supreme Court. A Trump-centric FCC may have a different view altogether. We will keep monitoring the space for future developments.
SEC Cybersecurity Disclosure Trends: 2025 Update on Corporate Reporting Practices
Go-To Guide:
Since April 2024, 41 companies disclosed cybersecurity incidents via Form 8-K, with 26 filing under voluntary Item 8.01 and 15 under mandatory Item 1.05, which requires reporting if the incident had a material impact on the company.
Following the SEC’s May 2024 guidance clarifying that Item 1.05 is intended only for mandatory filings, companies appear to be increasingly filing voluntary non-material cybersecurity incidents under Form 8-K Item 8.01 rather than under Item 1.05.
Recent cybersecurity incident disclosures contain more detailed information about affected systems and compromised data, particularly in Item 1.05 filings, than the more general disclosures filed right after the rule became effective.
Some amended Form 8-K filings under both rules focus on operational recovery status and typically conclude no material impact occurred, even under Item 1.05 filings.
Six months after the SEC’s Cybersecurity Incident Disclosure Rule (SEC Rule) came into force, an April 2024 GT Alert summarized disclosure trends. The GT Alert identified that the companies that filed a mandatory Form 8-K disclosing a cybersecurity incident had erred on the side of caution, hedging on whether the materiality threshold had been met or outright stating that it had not, reporting incidents early, and providing only high-level information about the incident.
The SEC’s Division of Corporation Finance (Corp Fin) issued clarifying guidance on May 21, 2024, noting that companies were filing materiality disclosures under new Item 1.05 for incidents that did not rise to the level of a material event. In other words, companies, possibly afraid of being second-guessed, were opting to report under Item 1.05 even when they determined that the cybersecurity incident was not material. The SEC’s guidance clarified that new Item 1.05 was only appropriate for cybersecurity incidents that had a material effect on the company and suggested companies could avail themselves of voluntary disclosure under Item 8.01 instead.
As a potential result of the May guidance, companies are increasingly filing non-material cyber incident disclosures under Item 8.01 of Form 8-K, while material incidents continue to be reported under Item 1.05. Since April 2024, 41 companies have filed a Form 8-K to disclose a new cybersecurity incident; 26 did so under Item 8.01 and 15 under Item 1.05.1 Additionally, companies are providing more detailed disclosures about affected systems and data, but amended filings often lack clarity on when additional information was discovered and primarily confirm the resumption of operations with no material impact.
SEC Rule Disclosure Requirements
As a recap, the SEC Rule requires the following:
1. Disclosure Requirement: Companies must disclose material incidents within four business days of determining their materiality by filing a Form 8-K under Item 1.05.
2. Materiality Determination: The assessment of materiality must happen without unreasonable delay after discovering the incident. A cybersecurity incident is material if there is a “substantial likelihood that a reasonable shareholder would consider it important” in making an investment decision or if it would have “significantly altered the ‘total mix’ of information made available.” There is no bright-line test for assessing materiality. When assessing materiality, the SEC directed public companies to consider both quantitative and qualitative factors, including the immediate consequences and long-term implications for operations, customer relationships, financial performance, brand reputation, and the likelihood of litigation or regulatory action.
3. Delay Exception: The only reason to delay disclosure is a written request from the U.S. Attorney General to protect national security or public safety.
4. Form 8-K Content: The form must include:
– discovery date and status (ongoing or not),
– description of the incident’s nature and scope,
– information about stolen or altered data,
– potential impact on operations, including financial effects, and
– remediation efforts or plans.
5. Amended Form 8-K Filing: Once this information becomes known, the SEC’s Final Rule requires companies to amend a prior Form 8-K to disclose any information called for that was unavailable at the time of the initial Form 8-K filing. Amendments must be filed within four business days after the company, without unreasonable delay, determines such information or within four business days after such information becomes available.
Emerging Cybersecurity Incident Disclosure Trends
Looking at the disclosures companies have made to date, five trends are emerging:
1. Disclosures of non-material incidents are increasingly filed under Item 8.01. The SEC’s guidance was effective in providing a roadmap for public companies to disclose incidents deemed initially immaterial under Item 8.01. Since then, more companies have started using Item 8.01 to disclose non-material cybersecurity incidents in their 8-K filings.
2. Uptick in companies reporting material impact. Since April 2024, there has been an uptick in companies disclosing a material impact of their cyber incidents under Item 1.05. Six out of 15 companies specified the material impact on their financial condition or results of operations in their disclosures under Item 1.05, whereas prior to April 2024, there were none. However, there are still no cases where the company later (in the amended Form 8-K) confirms that there was in fact a material impact. So far, the amended disclosures conclude that there is no material impact or that a material impact is reasonably unlikely.
3. More detail in the disclosures. Companies are starting to include more detail in their 8-K filings than in the first half of 2024. For instance, companies report on the affected systems and, in particular, the impacted data, such as whether it contains sensitive personal information. On the other hand, filings under Item 8.01 have been considerably shorter, generally providing a high-level overview of the incident, as they do not need to meet the content requirements for material incident disclosures under Item 1.05.
4. Amended disclosures do not include the date when additional information was identified. While an amended Form 8-K must be filed within four business days after additional information becomes available, companies do not indicate the date when they became aware of additional information about the incident. Hence, it cannot be determined whether companies have met the timing requirement.
5. Amended disclosures often focus on the resumption of operations and confirm no material impact has been identified. Generally, companies use the amended Form 8-K under both Items 1.05 and 8.01 (i) to indicate that they have resumed their normal business activities and (ii) to confirm that the incident does not or is unlikely to have a material impact.
1 This number excludes amended filings.
Trump Administration Unveils New AI Policy, Reverses Biden’s Regulatory Framework
Early signals from the Trump administration suggest it may move away from the Biden administration’s regulatory focus on the impact of artificial intelligence (AI) and automated decision-making technology on consumers and workers. This federal policy shift could result in an uptick in state-based AI regulation.
Quick Hits
On January 23, 2025, President Trump signed an executive order to develop an action plan to enhance AI technology’s growth while reviewing and potentially rescinding prior policies to regulate its use.
The Trump administration is reversing Biden-era guidance on AI and emphasizing the need for minimal barriers to foster innovation and U.S. leadership in artificial intelligence.
The administration is working closely with tech leaders and has tapped a tech investor and former executive as the newly created White House AI & Crypto Czar to guide policy development in the AI sector.
State legislators may step in to fill the regulatory gap.
As part of a flurry of executive action during President Donald Trump’s first week of his second term in office, the president rescinded a Biden-era executive order (EO) issued on October 30, 2023, titled the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which sought to create safeguards for the “responsible development and use of AI.”
Further, on January 23, 2025, President Trump took action to shape the development of AI technology, signing EO 14179, “Removing Barriers to American Leadership in Artificial Intelligence.” The order states, “It is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”
AI Executive Order
President Trump’s EO 14179 directs that, within 180 days, “relevant” agencies create an “action plan to achieve” the EO’s AI policy. That plan is to be developed by the “Assistant to the President for Science and Technology (APST), the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs (APNSA), in coordination with the Assistant to the President for Economic Policy, the Assistant to the President for Domestic Policy, the Director of the Office of Management and Budget (OMB Director), and the heads of such executive departments and agencies (agencies) as the APST and APNSA deem relevant.”
The order also mandates that these heads of agencies immediately review all policies, directives, regulations, and other actions taken under President Biden’s now-revoked EO 14110 to identify any actions inconsistent with the EO’s policy objectives. The EO states that inconsistent actions will be suspended, revised, or rescinded as appropriate to ensure that federal guidelines and regulations do not impede the nation’s role as an AI leader.
The EO directs the OMB Director, in coordination with the APST, to revise two specific OMB memoranda “as necessary to make them consistent” with the president’s new AI policy:
OMB Memorandum M-24-10, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” issued in March 2024, which directed the Board of Governors of the Federal Reserve System to submit a biennial AI compliance plan to OMB.
OMB Memorandum M-24-18, “Advancing the Responsible Acquisition of Artificial Intelligence in Government,” issued in October 2024.
Shifting AI Policy
The Biden administration sought to create safeguards for the development of AI technology and its impact on labor markets, potential displacement of workers, and the use of AI and automated decision-making tools to make employment decisions and evaluate worker performance.
In November 2024, the U.S. Department of Labor (DOL) issued guidance on AI, detailing principles and best practices for employers in using AI in the workplace. That guidance built on prior guidance published by the DOL’s Wage and Hour Division and Office of Federal Contract Compliance Programs. Also, in 2022 and 2023, the U.S. Equal Employment Opportunity Commission (EEOC) issued guidance on employers’ use of AI tools and the potential for discrimination. As of the date of publication of this article, the EEOC’s former AI guidance has been removed from its website.
However, in a fact sheet published on January 23, 2025, the Trump administration stated that the “Biden AI Executive Order established unnecessarily burdensome requirements for companies developing and deploying AI that would stifle private sector innovation and threaten American technological leadership.” According to the fact sheet, the “development of AI systems must be free from ideological bias or engineered social agendas.”
President Trump is also reportedly working closely with many tech company leaders and AI developers. The president tapped investor and former tech executive David Sacks as the newly created “White House AI & Crypto Czar,” who will help shape policy around emerging technologies.
Next Steps
The Trump administration’s shift in AI policy marks a substantial departure from the previous administration’s focus. By rescinding Biden-era executive orders and implementing new directives to foster innovation, the Trump administration seeks to remove perceived barriers to the development of artificial intelligence technology.
Although the new administration has expressed its intent to deregulate this area, many states and jurisdictions have taken a different position, including California, Colorado, Illinois, and New York City. Other states may also consider filling the gap created by the absence of federal agency action on AI in employment.
In light of this, employers may want to continue to implement policies and procedures that protect the workplace from unintended consequences of AI use, including maintaining an AI governance team, establishing policies and practices for the safe use of AI in the workplace, enhancing cybersecurity practices, auditing results to identify and correct unintended consequences (including bias), and maintaining an appropriate level of human oversight.
Judge Denies Kochava’s Motion to Dismiss FTC’s Suit Over Selling Geolocation Data
On February 3, 2025, U.S. District Judge B. Lynn Winmill of the District of Idaho denied digital marketing data broker Kochava Inc.’s motion to dismiss a suit brought by the Federal Trade Commission. As previously reported, in August 2022, the FTC announced a civil action against Kochava for “selling geolocation data from hundreds of millions of mobile devices that can be used to trace the movements of individuals to and from sensitive locations.”
In the order denying Kochava’s motion to dismiss, Judge Winmill rejected Kochava’s argument that Section 5 of the FTC Act is limited to tangible injuries and wrote that the “FTC has plausibly pled that Kochava’s practices are unfair within the meaning of the FTC Act.”