NOT YOUR PRODUCT, STILL YOUR PROBLEM: Sixth Circuit Finds Faxes Can Be Ads—Even If They Don’t Promote Sender’s Products
Hey TCPAWorld!
The fact that a sender is not promoting its own products when distributing a fax does not insulate it from liability under the TCPA.
In Lyngaas v. United Concordia Cos., Inc., No. 24-1777, 2025 WL 1625517 (6th Cir. June 9, 2025), Plaintiff Brian Lyngaas (“Plaintiff”), a dentist, sued Defendant United Concordia Companies, Inc. (“Defendant”), a dental insurance provider, for sending unsolicited fax advertisements in violation of the TCPA. The district court granted summary judgment for Defendant. The United States Court of Appeals for the Sixth Circuit reversed.
Background
Plaintiff, as an agent of his dental practice, contracted with Defendant to participate in Defendant’s Fee for Service Dental Network, which included a “Value Add Program” (VAP) that provided discounts from third-party vendors. As part of the VAP, Defendant distributed materials via fax.
The case revolves around three faxes sent by Defendant between October 2020 and May 2021. These faxes provided information on:
“(1) discounts on personal protective equipment (PPE) offered by Prophy Magic; (2) discounts on dentist-specific recycling buckets provided by Dental Recycling North America; and (3) promotional services for student loan refinancing by GradFin.”
Lyngaas, 2025 WL 1625517, at 2. Plaintiff filed a class action lawsuit against Defendant under the TCPA, alleging that Defendant sent him at least two unsolicited advertisements via fax. Plaintiff provided evidence of receiving the Prophy Magic and DRNA faxes, but no fax from GradFin.
Defendant argued on summary judgment that the faxes at issue were not advertisements as defined by the statute, and the district court agreed, reasoning that Defendant’s profit incentive was too remote. Plaintiff appealed.
Analysis
The Sixth Circuit held that Defendant’s faxes were advertisements under the TCPA, because Defendant “facially promoted direct sales by its third-party partners” and their profits were directly tied to the promotions as they were “part of negotiated marketing agreements.” Id. at 2. The Court further reasoned that precedent supports this conclusion by “placing liability for third-party sales on the sender of a fax, rather than the seller of the product.” Id.
Under 47 U.S.C. § 227(b)(1)(C), the TCPA imposes statutory damages on any entity that uses “any telephone facsimile machine, computer, or other device to send, to a telephone facsimile machine, an unsolicited advertisement.” The statute defines an “unsolicited advertisement” as “any material advertising the commercial availability or quality of any property, goods, or services” that is transmitted “without that person’s prior express invitation or permission, in writing or otherwise.” Id. § 227(a)(5).
The Sixth Circuit held that the faxes were promotional because they were (1) “facially promotional” and (2) Defendant maintained a “sufficiently direct profit interest by contracting with its marketing partners.” Id. at 3.
Defendant argued that the faxes “did not have profit as an aim,” and that any financial gain for Defendant was “hypothetical.” Id. at 2. Applying Sandusky, the district court agreed.
The Sixth Circuit found Sandusky distinguishable because the faxes in that case were purely informational, listing covered medications to assist healthcare providers without promoting any specific product or soliciting purchases. Here, the faxes sent by Defendant facially promoted branded products, offered discounts, and included marketing language that encouraged recipients to buy from Defendant’s partners. For example, the October 2020 fax read:
“United Concordia recently collaborated with Prophy Magic[ ] to offer … a 10% discount on all PPE products.” Id. at 3.
“Prophy Magic is a direct provider of superior products … [w]ith over 20 years of industry experience.” Id.
“For more than 20 years, DRNA has provided their affordable and efficient recycling solutions to a diverse roster of dental customers.”
Id. (internal markings omitted).
Unlike in Sandusky, where the communications lacked a direct profit motive, Defendant’s faxes directly promoted the products of its partners and were aimed to drive business. Moreover, Defendant negotiated exclusive promotional rates for its network and actively distributed marketing materials to grow its provider and customer base. In contrast to Sandusky, where the defendant merely shared pricing information, Defendant here curated promotional content as part of a broader business strategy. Defendant entered into marketing agreements specifically to promote third-party products to its dental network. These promotions created a clear commercial nexus.
A key takeaway is that a defendant may still be held liable under the TCPA even if it was not directly selling the products advertised in the fax. The district court reasoned that the faxes were a not-for-profit activity, simply promoting discounts on third-party goods and services unrelated to the defendant’s core business. But the Sixth Circuit rejected that framing, holding that this interpretation conflicts with established precedent: (1) “TCPA liability falls on the sender of a fax, rather than the seller of a product,” and (2) “the TCPA covers faxed advertisements when the sender profits indirectly.” Id. at 4-5.
The Sixth Circuit made clear that not every fax sent with a potential indirect benefit triggers TCPA liability. If the sender has a legitimate, non-commercial reason for transmitting the fax—such as sharing discounts without any connection to their own business interests—they may fall into what the court called the “altruistic coupon-clipper” category. Id. at 5. That type of sender is distinguishable from the Defendant here, whose faxes lacked any such explanation and thus violated the TCPA.
The GradFin fax was dismissed from the litigation because Plaintiff could not produce it. As to the remaining two faxes, the Sixth Circuit reversed the district court’s decision and remanded for further proceedings.
Courts Continue to Grapple with VPPA Class Actions
Introduction
Video Privacy Protection Act (VPPA) class actions filed against companies continue to proliferate. Meanwhile, as evidenced by a recent spate of decisions by the Second, Sixth, Seventh, and Ninth circuits, courts remain divided on the application of the VPPA statute to certain businesses and consumers. Although the VPPA was passed in the now bygone era of Blockbuster and video cassette rentals, plaintiffs have seized on this seemingly outdated statute to sue companies that track consumers’ online viewing activities using Meta Pixel and other similar tracking technologies.
VPPA Legislative History
Legislation now known as the Video Privacy Protection Act1 was first introduced when a Washington newspaper published a profile on U.S. Supreme Court nominee Judge Robert Bork, based on the titles of movies he and his family rented from a local video store. At the time, this high-profile episode was dubbed “The Bork Tapes” scandal. A 1988 Senate Judiciary Committee Report denouncing the public disclosure of Bork’s video rental selections astutely observed that “[Privacy] is not a conservative or a liberal or moderate issue. It is an issue that goes to the deepest yearnings of all Americans that we are free and we cherish our freedom and we want our freedom. We want to be left alone.”2 The VPPA was enacted shortly thereafter.
VPPA Overview
The VPPA states, in pertinent part, that a “video tape service provider who knowingly discloses, to any person, personally identifiable information [PII] concerning any consumer shall be liable to the aggrieved person.”
In summary:
The VPPA applies to a “video tape service provider” engaged in the rental, sale, or delivery of pre-recorded video cassette tapes or similar audiovisual content.
A “consumer” means a renter, purchaser, or subscriber of goods and services from a video tape service provider.
The video tape service provider shall not knowingly disclose “personally identifiable information” related to a consumer’s video content materials or viewing habits to third parties without the consumer’s informed written consent.
In the past few years, the plaintiffs’ class action bar has seized upon this statute to bring nationwide class actions against companies that utilize tracking technology – such as Meta Pixel, Google Analytics, and other similar software code embedded on their websites – to collect and analyze consumer habits. This data is commonly shared with third-party tech companies for targeted advertising purposes.
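To make concrete what is at stake in these suits, the sketch below illustrates the kind of data a tracking pixel transmits when a visitor watches a video. It is a hypothetical illustration only — the endpoint and parameter names are invented for this example and are not Meta’s actual Pixel API:

```python
from urllib.parse import urlencode

def build_pixel_request(pixel_id: str, user_id: str, video_title: str) -> str:
    """Assemble the kind of HTTP request a tracking pixel fires when a
    visitor watches a video. All names here are illustrative, not a
    real tracking vendor's API."""
    params = {
        "id": pixel_id,        # site operator's pixel identifier
        "ev": "VideoView",     # event name describing the viewing activity
        "uid": user_id,        # identifier linkable to the visitor's account
        "title": video_title,  # the watched video's title
    }
    return "https://tracker.example.com/collect?" + urlencode(params)

url = build_pixel_request("123456", "fb-user-789", "Classic Episode 12")
print(url)
```

The point of the illustration is that the video title and a user identifier travel together in a single request — exactly the pairing of viewing habits with PII that the VPPA regulates.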
Importantly, the VPPA allows a “private right of action.” In other words, individuals can sue companies for alleged violations of the statute and seek recovery of statutory damages, punitive damages, and attorney’s fees. This provision has incentivized plaintiffs’ attorneys to file hundreds of VPPA class actions in courts nationwide. In 2024, an estimated 250 VPPA class actions were filed against companies – nearly double the number of similar suits filed in 2023.
Recent VPPA Appellate Court Decisions
In a recent spate of decisions, federal appellate courts have addressed whether a named plaintiff is in fact a “consumer” within the meaning of the VPPA statute – reaching contradictory conclusions.
Second Circuit
In Salazar v. National Basketball Association3, plaintiff Salazar signed up for a complimentary online newsletter offered by defendant National Basketball Association (NBA). To subscribe to the newsletter, the plaintiff had to provide his personal information, including his email and IP addresses. The plaintiff also alleged that he watched videos on the NBA’s website, which used Meta Pixel tracking tools. The plaintiff did not pay for the videos, which were accessible to both subscribers and non-subscribers. The plaintiff further alleged that his video-watching history and Facebook ID were disclosed to Meta by the defendant without his permission and in violation of the VPPA.
The defendant NBA argued that the plaintiff was not a “consumer” within the meaning of the VPPA. In particular, it asserted that the plaintiff was not a subscriber of audiovisual goods or services, as the newsletter did not constitute pre-recorded audiovisual content. However, the Second Circuit interpreted the definition of consumer broadly, finding that if the plaintiff subscribed to any goods or services (newsletter) offered by the defendant – which also provided audiovisual content (videos) on its website – the plaintiff was indeed a consumer protected under the VPPA.
According to the Second Circuit, the VPPA “applies equally to a business dealing primarily in audiovisual materials (think Blockbuster) and one dealing primarily in non-audiovisual materials (think a general store that rents out a few movies).” However, in an attempt to install some guardrails around its expansive interpretation, the Second Circuit further observed that, in the example of the general store, it would only be liable under the VPPA for disclosing information pertaining to a customer’s video materials – not their “bread-buying habits.”
Seventh Circuit
The Seventh Circuit Court of Appeals similarly adopted a liberal interpretation of the term “consumer” under the VPPA in Merchant v. Me-TV National Limited Partnership.4 Me-TV operated a website where the public could watch classic television shows without providing any personal information, nor were there any costs associated with accessing video content. Me-TV’s revenue came from ad sales. In that case, the plaintiffs subscribed to Me-TV by providing their email addresses and zip codes. Me-TV subscribers received additional personalized services, including program schedules, reminders, etc. The plaintiffs viewed content on Me-TV while signed into their Facebook accounts. Me-TV allegedly embedded Meta Pixel into its videos, which enabled Facebook to determine the plaintiffs’ viewing habits in order to sell targeted advertising. The plaintiffs allege that Me-TV violated the VPPA by sharing their personal information collected by the Meta Pixel tracking technology with Facebook.
Me-TV argued that the plaintiffs were not “consumers” insofar as they did not specifically subscribe to video content that was available free of charge to everyone on the website. Instead, the plaintiffs merely signed up for ancillary services related to video programming. However, the Seventh Circuit disagreed, finding that the plaintiffs were in fact consumers as contemplated by the VPPA. As the Seventh Circuit observed, the complaint adequately alleged that Me-TV is a video tape service provider. “If plaintiffs had signed up and never watched a video, but had purchased a Flintstones sweatshirt or a Scooby Doo coffee mug” offered on Me-TV’s website, “then they would have purchased ‘goods’ from a ‘video tape service provider.’ Nothing in the Act says that the goods or services must be video tapes or streams.” In other words, as long as the plaintiffs were renters, purchasers, or subscribers of any goods or services offered by a video tape service provider, they qualified as consumers for the purpose of asserting a VPPA claim.
Sixth Circuit
Conversely, in Salazar v. Paramount Global d/b/a 247Sports,5 the Sixth Circuit adopted a narrow interpretation of the term consumer, which required a subscription to audiovisual content – not just any goods or services – from a video tape service provider. In that case, the plaintiff subscribed to an electronic newsletter from 247Sports.com by providing his email and IP address. In addition, the plaintiff viewed content on 247Sports.com, a website owned by Paramount that covers college sports recruiting, while logged into his Facebook account. The plaintiff alleged that he had a digital subscription to the website, which utilized pixel tracking tools. The plaintiff alleged that the pixel enabled Paramount to track and disclose his viewing history, linked to his Facebook ID, to Facebook without his consent.
The Sixth Circuit explicitly rejected the Second and Seventh Circuits’ broad interpretation of a “consumer” under the VPPA. Instead, the Sixth Circuit opined that a person is a consumer under the statute “only when he subscribes to ‘goods or services’ in the nature of ‘video cassette tapes or similar audiovisual materials.’” Here, the plaintiff only subscribed to a newsletter. He did not subscribe to audio-visual content that was available to anyone who visited 247Sports.com’s website. Accordingly, the plaintiff was not a consumer under the VPPA.
Ninth Circuit
In yet another recent VPPA decision, the Ninth Circuit explained that a “video tape service provider” only extended to the rental, sale, or delivery of prerecorded (versus live) content. In Osheske v. Silver Cinemas Acquisition Company d/b/a Landmark Theaters,6 defendant Landmark operated a website where people could watch movie trailers, browse show times, and purchase tickets to view movies in traditional brick-and-mortar theaters. Landmark installed a pixel or web beacon on its website to transmit user information to Facebook whenever someone purchased a movie ticket while logged into their Facebook account. The plaintiff visited Landmark’s website and bought a movie ticket, and alleged that the defendant violated the VPPA by sharing his personal information (including the name of the film, the theater location, and his unique Facebook ID) with Facebook without his consent.
The Ninth Circuit agreed with defendant Landmark that it was not a “video tape service provider” under the VPPA. In particular, the court observed that “Landmark does not deliver any prerecorded ‘audio visual materials’ to the consumer in either its ticket sales or its in-theater experiences,” further noting, “Someone late to a theater showing cannot rewind the movie, someone needing to use the facilities or desiring a soft drink cannot pause it, and someone falling asleep cannot stop and start it again later.” The Ninth Circuit also stated that despite the fact that movie theaters were in “full swing in the late 1980s,” when the VPPA was enacted, “movie theaters were omitted from the Act.” While the Ninth Circuit’s opinion might be limited in its scope, it nonetheless underscores that the VPPA only applies to the disclosure of personal information concerning pre-recorded audiovisual content.
Conclusion
Recent appellate court rulings involving VPPA class actions demonstrate that the interpretation of the statute’s key provisions remains unsettled. The strength of the legal defenses available to companies at the pleading stage, on a motion to dismiss, depends heavily on the court and jurisdiction in which the case is pending. Undoubtedly, plaintiffs will continue to file VPPA class actions against companies that utilize now essentially ubiquitous pixel tracking technology. Here, the best defense is a good offense: conspicuously disclose the use of tracking tools and obtain consumers’ prior consent through a pop-up banner at the outset of any transaction. Moreover, companies should carefully craft their customer agreements, terms, and conditions to include class action waiver and limitation of liability provisions.
1 18 U.S. Code § 2710, et seq.
2 Senate Judiciary Committee Report 100-599 accompanying Senate Bill 2361 on the Video Privacy Protection Act of 1988 (Oct. 21, 1988).
3 Michael Salazar v. National Basketball Assn., Docket No. 23-1147 in the U.S. Court of Appeals for the Second Circuit (decided October 15, 2024).
4 David Vance Gardner and Gary Merchant v. Me-TV National Limited Partnership, Docket No. 24-1290 in the U.S. Court of Appeals for the Seventh Circuit (decided March 28, 2025).
5 Michael Salazar v. Paramount Global d/b/a 247Sports, Docket No. 23-5748 in the U.S. Court of Appeals for the Sixth Circuit (decided April 3, 2025).
6 Paul Osheske v. Silver Cinemas Acquisition Company d/b/a Landmark Theaters, Docket No. 23-3882 in the U.S. Court of Appeals for the Ninth Circuit.
The Intersection of Artificial Intelligence and Employment Law
The use of algorithmic software and automated decision systems (ADS) to make workforce decisions, including the most sophisticated type, artificial intelligence (AI), has surged in recent years. HR technology’s promise of increased productivity and efficiency, data-driven insights, and cost reduction is undeniably appealing to businesses striving to streamline operations such as hiring, promotions, performance evaluations, compensation reviews, or employment terminations. However, as companies increasingly rely on AI, algorithms, and automated decision-making tools (ADTs) to make high-stakes workforce decisions, they may unknowingly expose themselves to serious legal risks, particularly under Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act (ADEA), the Americans with Disabilities Act (ADA), and numerous other federal, state, and local laws.
Quick Hits
Using automated technology to make workforce decisions presents significant legal risks under existing anti-discrimination laws, such as Title VII, the ADEA, and the ADA, because bias in algorithms can lead to allegations of discrimination.
Algorithmic HR software is uniquely risky because, unlike human judgment, it amplifies the scale of potential harm. A single biased algorithm can impact thousands of candidates or employees, exponentially increasing the liability risk compared to biased individual human decisions.
Proactive, privileged software audits are critical for mitigating legal risks and monitoring the effectiveness of AI in making workforce decisions.
What Are Automated Technology Tools and How Does AI Relate?
In the employment context, algorithmic or automated HR tools refer to software systems that utilize predefined rules to run data through algorithms to assist with various human resources functions. These tools can range from simple rule-based formula systems to more advanced generative AI-powered technologies. Unlike traditional algorithms, which operate based on fixed, explicit instructions to process data and make decisions, generative AI systems differ in that they can learn from data, adapt over time, and make autonomous adjustments without being limited to predefined rules.
Employers use these tools in numerous ways to automate and enhance HR functions. A few examples:
Applicant Tracking Systems (ATS) often use algorithms to score applicants compared to the position description or rank resumes by comparing the skills of the applicants to one another.
Skills-based search engines rely on algorithms to match job seekers with open positions based on their qualifications, experience, and keywords in their resumes.
AI-powered interview platforms assess candidate responses in video interviews, evaluating facial expressions, tone, and language to predict things like skills, fit, or likelihood of success.
Automated performance evaluation systems can analyze employee data such as productivity metrics and feedback to provide ratings of individual performance.
AI systems can listen in on phone calls to score employee and customer interactions, a feature often used in the customer service and sales industries.
AI systems can analyze background check information as part of the hiring process.
Automated technology can be incorporated into compensation processes to predict salaries, assess market fairness, or evaluate pay equity.
Automated systems can be utilized by employers or candidates in the hiring process for scheduling, note-taking, or other logistics.
AI models can analyze historical hiring and employee data to predict which candidates are most likely to succeed in a role or which new hires may be at risk of early turnover.
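The simplest tools in the list above are rule-based rather than generative — for instance, an ATS that scores a resume against a position description. The toy scorer below illustrates the idea; the scoring rule (fraction of required skills mentioned) is invented for illustration and far simpler than any commercial product:

```python
def score_resume(resume: str, required_skills: list[str]) -> float:
    """Score a resume as the fraction of required skills it mentions.
    A toy rule-based scorer: fixed, explicit instructions applied
    identically to every candidate, with no learning or adaptation."""
    text = resume.lower()
    hits = sum(1 for skill in required_skills if skill.lower() in text)
    return hits / len(required_skills)

skills = ["python", "sql", "project management"]
print(score_resume("Experienced in Python and SQL reporting.", skills))  # matches 2 of 3 skills
```

Even a transparent rule like this can create disparate-impact risk — for example, if the chosen keywords correlate with how different groups describe the same experience — which is why the auditing steps discussed below matter regardless of how sophisticated the tool is.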
AI Liability Risks Under Current Laws
AI-driven workforce decisions are covered by a variety of employment laws, and employers are facing an increasing number of agency investigations and lawsuits related to their use of AI in employment. Some of the key legal frameworks include:
Title VII: Title VII prohibits discrimination on the basis of race, color, religion, sex, or national origin in employment practices. Under Title VII, employers can be held liable for facially neutral practices that have a disproportionate, adverse impact on members of a protected class. This includes decisions made by AI systems. Even if an AI system is designed to be neutral, if it has a discriminatory effect on a protected class, an employer can be held liable under the disparate impact theory. While the current administration has directed federal agencies to deprioritize disparate impact theory, it is still a viable legal theory under federal, state, and local anti-discrimination laws. Where AI systems are providing an assessment that is utilized as one of many factors by human decision-makers, they can also contribute to disparate treatment discrimination risks.
The ADA: If AI systems screen out individuals with disabilities, they may violate the Americans with Disabilities Act (ADA). It is also critical that AI-based systems are accessible and that employers provide reasonable accommodations as appropriate to avoid discrimination against individuals with disabilities.
The ADEA: The Age Discrimination in Employment Act (ADEA) prohibits discrimination against applicants and employees ages forty or older.
The Equal Pay Act: AI tools that factor in compensation and salary data can be prone to replicating past pay disparities. Employers using AI must ensure that their systems are not creating or perpetuating sex-based pay inequities, or they risk violating the Equal Pay Act.
The EU AI Act: This comprehensive legislation is designed to ensure the safe and ethical use of artificial intelligence across the European Union. It treats employers’ use of AI in the workplace as potentially high-risk and imposes obligations for continued use, as well as potential penalties for violations.
State and Local Laws: There is no federal AI legislation yet, but a number of states and localities have passed or proposed AI legislation and regulations, covering topics like video interviews, facial recognition software, bias audits of automated employment decision-making tools (AEDTs), and robust notice and disclosure requirements. While the Trump administration has reversed Biden-era guidance on AI and is emphasizing the need for minimal barriers to foster AI innovation, states may step in to fill the regulatory gap. In addition, existing state and local anti-discrimination laws also create liability risk for employers.
Data Privacy Laws: AI also implicates a number of other types of laws, including international, state, and local laws governing data privacy, which creates another potential risk area for employers.
The Challenge of Algorithmic Transparency and Accountability
One of the most significant challenges with the use of AI in workforce decisions is the lack of transparency in how algorithms make decisions. Unlike human decision-makers who can explain their reasoning, generative AI systems operate as “black boxes,” making it difficult, if not impossible, for employers to understand—or defend—how decisions are reached.
This opacity creates significant legal risks. Without a clear understanding of how an algorithm reaches its conclusions, it may be difficult to defend against discrimination claims. If a company cannot provide a clear rationale for why an AI system made a particular decision, it could face regulatory action or legal liability.
Algorithmic systems generally apply the same formula to all candidates, creating relative consistency in the comparisons. Generative AI systems add complexity because their judgments and standards change over time as the system absorbs more information. As a result, the decision-making applied to one candidate or employee may differ from decisions made at another point in time.
Mitigating the Legal Risks: AI Audits, Workforce Analytics, and Bias Detection
While the potential legal risks are significant, there are proactive steps employers may want to take to mitigate exposure to algorithmic bias and discrimination claims. These steps include:
Ensuring that there is a robust policy governing AI use and related issues, like transparency, nondiscrimination, and data privacy
Doing due diligence to vet AI vendors, and not utilizing any AI tools without a thorough understanding of their intended purpose and impact
Training HR, talent acquisition, and managers on the appropriate use of AI tools
Continuing to have human oversight over ultimate workforce decisions so that AI is not the decisionmaker
Ensuring compliance with all applicant and employee notice and disclosure requirements, as well as bias audit requirements
Providing reasonable accommodations
Regularly monitoring AI tools through privileged workforce analytics to ensure there is no disparate impact against any protected groups
Creating an ongoing monitoring program to ensure human oversight of impact, privacy, legal risks, etc.
Implementing routine and ongoing audits under legal privilege is one of the most critical steps to ensuring AI is being used in a legally defensible way. These audits may include monitoring algorithms for disparate impacts on protected groups. If a hiring algorithm disproportionately screens out individuals in a protected group, employers may want to take steps to correct these biases before they lead to discrimination charges or lawsuits. Given the risks associated with volume, and to ensure corrective action as quickly as possible, companies may want to undertake these privileged audits on a routine (monthly, quarterly, etc.) basis.
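One common screen used in the kind of disparate-impact monitoring described above is the EEOC’s “four-fifths” rule of thumb: if a group’s selection rate falls below 80% of the highest group’s rate, the tool warrants closer scrutiny. The sketch below is a minimal, hypothetical audit check — the 80% threshold is a screening heuristic, not a legal conclusion, and real audits involve statistical testing and legal review:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group name -> (number selected, total applicants)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag any group whose selection rate is below 80% of the highest
    group's rate (the EEOC 'four-fifths' screening heuristic)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: rate / top < 0.8 for group, rate in rates.items()}

# Hypothetical quarterly audit data: group_b's 30% rate is only 60% of
# group_a's 50% rate, so it falls below the four-fifths threshold.
audit = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_flags(audit))
```

Running a check like this on each audit cycle, under privilege, gives employers an early signal to investigate a tool before disparities accumulate across thousands of decisions.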
The AI landscape is rapidly evolving. Employers may want to continue tracking changing laws and regulations, implement policies and procedures to ensure the safe, compliant, and nondiscriminatory use of AI in their workplaces, and reduce risk by engaging in privileged, proactive analyses to evaluate AI tools for bias.
Navigating Declaratory Judgment: Mitek’s Bid to Head Off USAA’s Patent Claims
This case [1] addresses declaratory judgments of non-infringement in relation to subject-matter jurisdiction and the district court’s refusal to exercise discretionary jurisdiction.
Background
In June 2020, Mitek Systems, Inc. (“Mitek”) filed a declaratory judgment action in the Eastern District of Texas, asking the court to declare that its MiSnap software did not infringe United Services Automobile Association’s (“USAA”) U.S. Patent Nos. 8,699,779; 9,336,517; 9,818,090; and 8,977,571. MiSnap is a software development kit used by banks to capture check images in their mobile apps. Mitek pointed to USAA’s campaign of letters and its suit against Wells Fargo as evidence of a credible threat of infringement or indemnity claims.
The district court dismissed the declaratory judgment case for lack of subject-matter jurisdiction, finding no actual controversy, and declined to exercise its discretion. On first appeal, the Federal Circuit vacated both rulings and remanded for “finer parsing” of the facts—explicitly instructing the district court to categorize any 12(b)(1) challenge as facial or factual and to revisit Mitek’s two bases for standing: threat of direct/indirect infringement and potential indemnity liability.
On remand, after extensive briefing and fact finding, the district court again concluded that Mitek had no reasonable apprehension of suit—MiSnap did not itself practice all claim elements, USAA never pointed to Mitek documentation showing inducement, and MiSnap had substantial non-infringing uses. It also held that no indemnity agreements exposed Mitek to likely liability. Even if jurisdiction existed, the court would decline declaratory relief, urging Mitek instead to intervene in any future USAA suit against a customer. Mitek timely appealed.
Issue(s)
Whether the district court erred in finding no subject-matter jurisdiction over Mitek’s declaratory judgment action and whether it abused its discretion in declining to exercise jurisdiction.
Holding(s)
The Federal Circuit held that Mitek failed to establish a case or controversy, as it lacked a reasonable apprehension of infringement and no real risk of indemnity liability. The Federal Circuit also held that the district court did not abuse its discretion in refusing to hear the case, given better remedies available through intervention.
Reasoning
For infringement, the Federal Circuit noted that MiSnap—a toolkit, not a complete banking app—cannot satisfy every element of USAA’s asserted claims, and USAA never alleged otherwise. On inducement, nothing in USAA’s claim charts or MiSnap documentation showed affirmative steps by Mitek to encourage full claim performance. For contributory infringement, the record confirmed that MiSnap had substantial non-infringing uses and USAA never argued otherwise. After Mitek filed suit, USAA settled its Wells Fargo case, further undermining any “ongoing” controversy.
On indemnity, the Federal Circuit reviewed the actual contracts and found carve-outs shielding Mitek from likely payment obligations. Mitek could not simply point to the existence of indemnity clauses; it had to show a reasonable potential for liability, which it could not.
Finally, even assuming jurisdiction, the Federal Circuit approved the district court’s discretionary decision. Intervention in an actual infringement suit against a bank customer would allow full airing of factual disputes—customer-specific use, customization, and knowledge—that a standalone declaratory judgment action could not resolve.
In conclusion, Mitek Systems v. USAA underscores the high bar for patent declaratory judgment jurisdiction. Suppliers must show a genuine, immediate threat of suit or clear indemnity exposure, not merely fear of customer-targeted enforcement. District courts retain broad discretion to decline declaratory relief when better avenues—such as intervention—exist. Parties facing indirect or contributory claims should ensure their products, documentation, and indemnity provisions are carefully aligned to avoid such jurisdictional roadblocks.
Footnotes
[1] Mitek Systems, Inc. v. United Services Automobile Association, 2023-1687 (Fed. Cir., June 12, 2025)
UK ICO Publishes Draft Guidance on Internet of Things Products and Services
On June 16, 2025, the UK Information Commissioner’s Office (the “ICO”) published its draft guidance on Internet of Things (“IoT”) products and services (the “Guidance”). Through the Guidance, the ICO aims to provide clarity to manufacturers and developers of smart products, such as smart speakers and Wi-Fi fridges, to ensure they create products that comply with data protection law. The Guidance covers key areas such as:
Types of Information: The Guidance explains the different types of personal data which may be processed by IoT products and services, including health, biometric and location data, and how such data may be used and collected.
Accountability: The ICO considers areas of accountability in the context of IoT products and services, such as the controller and processor relationship, privacy by design, and the use of IoT products and services by children.
Lawful Processing: The Guidance considers the application of the lawful bases and the special category conditions of processing under the UK General Data Protection Regulation (the “UK GDPR”) to IoT products and services, and gives examples of how manufacturers and developers can seek to ensure that freely given, specific, informed and unambiguous consent is obtained from consumers.
Fair Processing: The ICO encourages manufacturers and developers to consider how personal data is processed, focusing on key issues such as necessity, proportionality and purpose limitation.
Transparency: The Guidance includes examples for manufacturers and developers on how to inform consumers of how they collect, use and share personal data.
Security: The Guidance stresses the importance of implementing and maintaining appropriate technical and organizational measures, providing examples of such measures, including encryption and multi-factor authentication.
Data Subject Rights: The ICO reminds manufacturers and developers of their responsibility to inform consumers of their data subject rights under the UK GDPR.
The ICO’s intention behind the Guidance is to empower organizations to consider responsible use of information and compliance with data protection laws. However, the ICO has warned manufacturers and developers that it is “closely monitoring compliance” and will be “ready to act” where it believes “corners are being cut or personal information is being collected recklessly.”
The ICO has asked manufacturers, developers and the wider tech industry to share their views on the draft Guidance, which will be open for consultation until September 7, 2025.
Texas SB140: Changes to Telemarketing Law May Reshape Compliance and Litigation Risks
Texas is poised for a significant overhaul of its telemarketing regulations with the anticipated enactment of Senate Bill 140 (“SB140”). Awaiting Governor Abbott’s signature and scheduled to take effect on September 1, 2025, if enacted, SB140 will expand the scope of telemarketing regulation, introduce robust new consumer litigation rights, and impose some of the nation’s harshest penalties for noncompliance. Businesses marketing to Texas consumers—by phone, text, or multimedia message—should prepare for a new era of heightened risk and regulatory scrutiny.
What is in the Law?
Expansion of the Definition of Telemarketing Communications: SB140 broadens the definition of “telephone call” and “telephone solicitation.” The law will explicitly encompass text messages, image messages, and virtually any other transmission intended to sell goods or services, in addition to traditional voice calls. Marketing strategies that include Short Message Service (“SMS”), Multimedia Messaging Service (“MMS”), or similar campaigns will soon be subject to the same strict standards as voice calls. As a result, compliance failures in text or multimedia outreach could trigger the same penalties as non-compliant robocalls.
Private Right of Action Under the DTPA: SB140 introduces a new private right of action under the Texas Deceptive Trade Practices and Consumer Protection Act (“DTPA”) for violations of core telemarketing requirements. This includes failing to comply with call time restrictions, failing to register as a telemarketer, and failing to honor opt-out requests. The use of an automatic dialing announcing device (“ADAD”)—Texas’s term for autodialers and robocall systems—will also be enforceable under the DTPA. Consumers will be able to seek both economic damages and damages for mental anguish, significantly increasing potential exposure in litigation.
Unlimited Consumer Recovery and Serial Litigation Risk: SB140 allows consumers to recover damages for each violation, regardless of whether they have previously recovered for similar violations. This opens the door to serial litigation, enabling consumers to bring repeated actions for ongoing or repeated violations with no cap on recoveries. For businesses, this means that a single consumer could initiate multiple lawsuits, each carrying the potential for significant statutory damages.
Significant Financial Exposure for Noncompliance: The financial risks associated with noncompliance are substantial. Texas already imposes some of the highest penalties in the country for telemarketing violations, with statutory damages ranging from $500 to $5,000 per violation. With the new amendments, even technical missteps or isolated errors could result in considerable liability, especially given the possibility of treble damages for knowing or intentional violations under the DTPA.
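To make those stakes concrete, here is a rough, purely illustrative sketch of the exposure arithmetic, using the $500–$5,000 per-violation statutory range and DTPA treble damages described above. The violation counts and scenarios are hypothetical, not drawn from any actual case.

```python
# Illustrative only: rough statutory-exposure arithmetic under SB140's
# $500-$5,000 per-violation range, with DTPA treble damages for knowing
# or intentional violations. All inputs are hypothetical.

def exposure(violations: int, per_violation: float, treble: bool = False) -> float:
    """Estimate total statutory exposure for a given violation count."""
    base = violations * per_violation
    return base * 3 if treble else base

# e.g. 1,000 non-compliant texts at the statutory minimum...
print(exposure(1_000, 500))                  # 500000
# ...versus the statutory maximum, trebled for a knowing violation
print(exposure(1_000, 5_000, treble=True))   # 15000000
```

The spread between those two figures — a 30x difference on the same facts — is one reason even technical missteps carry real risk under the amended statute.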
Compliance Strategies and Best Practices
Given these sweeping changes, businesses should promptly review and update their telemarketing compliance programs. This includes auditing consent practices to ensure valid, documented consent for all forms of outreach, including texts and image messages. Disclosure procedures should be reviewed and updated, and opt-out mechanisms should be reliable and prompt. Businesses using autodialers or automated messaging should assess their practices for compliance with both the telemarketing statute and the DTPA, given the heightened risk of private litigation and the potential for mental anguish damages.
SB140 reinforces the importance of Texas’s telemarketing registration requirements, which have increasingly been the subject of recent litigation. Businesses should ensure they are properly registered before engaging in any covered telemarketing activities and have robust systems in place to honor opt-out requests and comply with call time restrictions.
Staff training and ongoing compliance monitoring will be important. Documenting all consent, registration, and opt-out activity will be essential for defending against potential claims. With SB140 expected to make Texas a new epicenter for telemarketing litigation, businesses should prepare for increased scrutiny and the real possibility of class action exposure.
Conclusion: A New Era for Telemarketing in Texas
SB140 represents a shift in Texas telemarketing law, with potentially far-reaching implications for any business that markets to Texas consumers. If enacted, the law’s effective date of September 1, 2025, leaves a narrow window for businesses to review and improve their compliance programs. The risks of noncompliance are steep, and the litigation environment is about to become significantly more challenging. Now is the time for businesses to act and ensure robust compliance in anticipation of this new regulatory landscape.
Growing List of States Attempting to Regulate Kids’ Social Media Accounts: Nebraska Husks Up
Nebraska’s governor signed a bill into law that, among other things, creates the Parental Rights in Social Media Act. The provisions of the law will go into effect July 1, 2026, unless challenged. The law is similar to laws in several other states, most of which have been challenged (including those in Arkansas, California, and Utah) and some of which have been struck down.
If the law goes unchallenged, unlike other states it creates a private right of action. Anyone who violates the act may be subject to a lawsuit brought by an injured party. They may be ordered to pay damages, attorney’s fees, and other relief. In addition, the Nebraska Attorney General can enforce the law and seek penalties of up to $2,500 per violation.
Obligations placed on social media companies under the law include:
Age verification: Social media companies (or their vendors) will need to verify the ages of all people who attempt to create an account. The law will restrict anyone under 18 from creating an account. And, the law specifically requires that social media companies delete identifying information they get when checking user ages.
Parental consent: The law requires parental consent before minors can create social media accounts. They must also give parents mechanisms to revoke their consent. If a parent revokes their consent, the social media company must remove the account of that parent’s child and must stop a child from creating a new account unless the parent provides consent.
Parental supervision: Parents will need to be given a way to supervise their children’s social media use. This includes access to their children’s posts and messages, and controls over their privacy and account settings. In addition, parents must be able to monitor and limit the amount of time the minor spends using the social media site.
Putting it Into Practice: Nebraska joins a growing number of states attempting to regulate children’s use of social media. We will continue to monitor the status of this new Nebraska law before mid-2026, and anticipate seeing similar legislation from other states.
James O’Reilly also contributed to this article.
TCPA DEBT CAN’T BE DISCHARGED IN BK?: Court Rejects Diana Mey’s Effort to Deem TCPA Judgment Non-Dischargeable In Bankruptcy–But There’s a Catch
One of the most unfair rules in American jurisprudence is the one holding individuals personally liable under the TCPA for actions they take as part of their employment.
In almost every setting in the law if you do something at work and it turns out to be illegal you cannot be personally sued for that.
But under the TCPA, the person that runs the call center, or makes the phone call, or sets up a campaign, or even a CEO can all be sued PERSONALLY for TCPA violations. And it happens all the time.
Take the case of Judson Phillips and Preston Thompson.
Famous consumer advocate Diana Mey–who I invite onto my podcast but has not yet taken me up on it– sued these guys for alleged marketing calls that violated the TCPA and West Virginia state law and obtained a $1.5MM default judgment when they did not show up in court to defend themselves. (Protip: don’t do that.)
When these guys declared bankruptcy Mey went after them in BK court too and argued the court should not give these guys a fresh start because they acted maliciously in calling her without consent.
Now the Court ended up rejecting this argument– I will explain why shortly– but it held the door open to similar claims being found non-dischargeable, which is just wild.
Mey argued that because these violations were found to be “knowing and willful” by the court they must also be non-dischargeable in bankruptcy.
For a debt to be non-dischargeable, however, it must be one arising out of a “willful and malicious injury”– a bar that is quite high.
Unlike in the TCPA setting– where willfulness is sometimes directed to the act of calling itself– for BK purposes an act is only “willful” if the debtor INTENDED TO INJURE the creditor by the actions it willfully engaged in.
In Mey’s case she could not demonstrate such an intent to injure from the robocalls at issue. The court looked at the volume of calls and facts such as Mey’s inviting some of the calls and concealing her identity in other calls to determine the callers likely had not been intending to harass Ms. Mey– just trying to reach her to discuss debt relief.
However the court noted that SOME of the calls might have been made with the right level of intent to justify a finding of non-dischargeability, but Mey had been too focused on getting ALL of the calls protected from BK:
Throughout these proceedings, Ms. Mey has taken an “all or nothing” approach to the litigation and the “willful and malicious” nature of the Judgment debt. No effort was made to prove that certain amounts of the Judgment should be excepted from discharge even if the full amount is not. That is true on issues such as “conscious disregard” versus “just cause” in the maliciousness analysis.
The Court concludes that it does not have sufficient evidence to determine what portion of the Judgment award for violation of the TCPA and/or the WVCCPA could be attributable to malicious versus non-malicious calls.
Get it?
If Mey had focused on certain calls made under certain circumstances, the court likely WOULD have found damages from some of the calls were not dischargeable!
Wow.
To my knowledge no court has ever found TCPA violations to be non-dischargeable in BK but this court certainly implies it is possible. One more thing for people to keep in mind!
Take aways:
Again, personal liability is always on the table in TCPA suits!;
Regular TCPA violations (i.e. accidents) can probably be wiped away in BK but intentionally violating someone’s TCPA rights might actually lead to non-dischargeable debt!
Diana Mey is NOT to be messed with. She is a proven winner, and even though she got too greedy here she just opened up a whole new avenue of attack in BK court.
European Commission Kick Starts Overhaul of EU Telecom Law
“Look, if you had one shot or one opportunity to seize everything you ever wanted in one moment, would you capture it or just let it slip?” asks Eminem in his song “Lose Yourself”. This year might provide just one of those once-in-a-lifetime opportunities for EU telecom law.
Many stakeholders have been calling for a reform of EU telecom law for some time, venting their frustration at the EU legislature. Earlier this month, finally, the European Commission launched a public call for input on the review of the EU Electronic Communications Code (EECC) and the adoption of a new Digital Networks Act (DNA), which could replace the EECC (and other regulatory instruments) entirely.
The European Commission (and specifically the Directorate‑General for Communications Networks, Content and Technology, Unit B1) believes that: “as laid out in the Letta, Draghi and Niinistö reports, and in the Commission’s Communication A Competitiveness Compass for the EU, a cutting-edge digital network infrastructure is critical for the future competitiveness of Europe’s economy, security and social welfare” and “a modern and simplified legal framework … is key”. Such a regulatory change was first explored in the Commission’s 2024 White Paper “How to master Europe’s digital infrastructure needs?” (see our thoughts on the white paper here: The European Commission Publishes a New Master Plan for Europe’s Digital Infrastructure)
The new public call for input now gives more colour to the Commission’s plans on the proposed DNA, which is likely to focus on the following potential areas for reform:
Simplification – The DNA could aim to:
Reduce existing reporting obligations (by up to 50%), remove unnecessary regulatory burdens (e.g. requirements for providers of business-to-business services and IoT services) and re-focus Universal Service obligations on affordability aspects;
Repeal and recast various related legislative instruments (e.g. EECC, BEREC Regulation, Open Internet Regulation, Radio Spectrum Policy Programme);
Introduce a simplified authorisation regime and a reduced and more harmonised set of common authorisation conditions, so that operators can more easily operate cross-border, and further coordination and common implementation of other applicable requirements for providers (e.g. security and law enforcement);
Harmonise end-user protection rules;
Create a more fertile ground for pan-European / cross-border telecom consolidation.
Spectrum – The DNA could propose:
To strengthen the peer review procedure, ensure timely authorisation of spectrum on the basis of an evolving roadmap and set common procedures and conditions for the national authorisation of spectrum;
To provide for longer license durations and easier renewals, and to gear spectrum auction designs towards spectrum efficiency and network deployment as a basis for the early introduction of 6G;
To allow flexible authorisation, including spectrum sharing (in line with EU competition law principles), and to facilitate requests for spectrum harmonisation;
To reinforce EU sovereignty and solidarity regarding harmonisation of spectrum, and when addressing cross-border interferences from third countries; and
To establish a level playing field for satellite constellations used for accessing the EU market. Separately, the European Commission is also running a parallel call for input on the review of the EU regulatory framework for licensing mobile satellite services (MSS) spectrum.
Level Playing Field – the DNA could include:
Creating effective cooperation among the actors of the broader connectivity ecosystem (e.g. cloud services and infrastructure providers), empowering NRAs/BEREC to facilitate cooperation (e.g. through a dispute resolution mechanism) under certain conditions and in duly justified cases; and
A clarification of the Open Internet rules concerning innovative services (such as e.g. network slicing and in-flight connectivity), e.g. by way of interpretative guidance, while fully preserving the Open Internet principles.
Access Regulation – the DNA could propose:
To apply ex-ante regulation (i.e. access conditions at national level) after the assessment of the application of symmetric measures (e.g. Gigabit Infrastructure Act or other forms of already existing symmetric access) only as a safeguard, following a market review based on the existing three criteria test and a geographic market definition, and subject to the review of the Commission, BEREC and other NRAs, with the Commission retaining veto powers;
To simplify and increase predictability in the access conditions by introducing a pan-EU harmonised access product(s) with pre-defined technical characteristics, which would be a default remedy imposed on operators with significant market power if competition problems were identified; and
To accelerate copper switch-off by providing a toolbox for fibre coverage and national copper switch-off plans, and by setting an EU-wide copper switch-off date as default, along with a derogation mechanism to protect end-users with no adequate alternatives.
Governance – In order to reinforce the Single Market dimension, the DNA could consider an enhanced EU governance with sufficient administrative and regulatory capacity (consultative or decision-making competences), through enhancing the respective roles of BEREC, the BEREC Office and the RSPG to address various pan-European tasks and further the digital single market.
The Commission is consulting widely to gather information and ensure that the public interest is well reflected in the design of the DNA. In addition, multiple consultation activities have already been carried out in Brussels and at national level, including through three studies commissioned to external consultants, covering the following areas:
Regulatory enablers for cross border networks/ completing the single market;
Access Policy including review of the Relevant Markets Recommendation, and review of access provisions of the EECC; and
Financing issues, including the future of the Universal Service.
As an integral part of the studies, further interaction with stakeholders is envisaged through, e.g., interviews, questionnaires and workshops in Brussels. The final report of the studies is planned to be published, once finalised, in Q4 2025, together with the impact assessment of the detailed proposals for the DNA.
However, EU member states remain skeptical about the Commission’s more ambitious plans to overhaul telecom rules, especially when it comes to simplification, governance and management of national resources (such as spectrum and numbering). Therefore, the Commission is keen to engage, as appropriate, also with BEREC and the RSPG, and through ad hoc workshops with national authorities at member state level.
The deadline for responding to the call for input is 11 July. This provides the last official opportunity to try to influence the Commission’s thinking before the publication of the detailed proposals for a new EU telecom law because the EU Commission is not planning another official public consultation before publishing the DNA proposal… would you capture it or just let it slip?
FCA Consults on Proposals for Stablecoin Issuance and Cryptoasset Custody
The UK Financial Conduct Authority (the FCA) recently published two consultations: CP25/14 on stablecoin issuance and cryptoasset custody (CP25/14), and CP25/15 on prudential requirements for cryptoasset firms (CP25/15, and together with CP25/14, the Consultations).
The Consultations are the latest milestone in the FCA’s roadmap for cryptoasset regulation. They build on HM Treasury’s draft legislation published in April 2025, which will bring certain cryptoasset-related activities within the UK regulatory perimeter. Further details on the draft legislation can be found in our previous update (available here).
Scope of the Consultations
The Consultations focus on “qualifying” stablecoins (i.e., cryptoassets that aim to maintain a stable value by referencing at least one fiat currency) and related activities. Issuing such stablecoins and custody of qualifying cryptoassets will become regulated activities requiring FCA authorisation when conducted by way of business in the UK.
CP25/14
In CP25/14, the FCA seeks views on its proposed rules and guidance for the activities of issuing a qualifying stablecoin and safeguarding qualifying cryptoassets. The proposals aim to ensure regulated stablecoins maintain their value and require customers to be provided with clear information on how the assets backing an issuance of qualifying stablecoins are managed.
Among other things, CP25/14 covers the following proposals:
Authorisation. Firms carrying out the new regulated activities must be authorised under Part 4A of the Financial Services and Markets Act 2000;
Full Reserve Backing. Stablecoin issuers must fully back their tokens with high-quality, low risk, liquid assets equal in value to all outstanding stablecoins. These backing assets must be held in a statutory trust and managed by a separate, independent custodian. Reserves are limited to low-risk instruments with only limited use of longer-term public debt or certain money-market funds;
Redemption rights and transparency obligation. Stablecoin holders must have the legal right to redeem qualifying stablecoins at par value on demand directly. The payment order of redeemed funds must be placed by the end of the business day following receipt of a valid redemption request; and
No interest to holders. Firms cannot pass through to stablecoin holders any interest earned on the reserve assets.
CP25/15
In CP25/15, the FCA seeks views on its proposed prudential rules and guidance for firms issuing qualifying stablecoins and safeguarding qualifying cryptoassets, including financial resource requirements.
Parts of the proposed prudential regime will be placed in a new proposed integrated prudential sourcebook (COREPRU), while sector-specific prudential requirements for firms undertaking regulated cryptoassets activities will be set out in a new CRYPTOPRU sourcebook.
Notably, the key proposals in CP25/15 cover the following areas:
Capital requirements. The FCA proposes a minimum own-funds requirement for so-called “CRYPTOPRU firms” that will require them to hold as own funds the higher of:
a permanent minimum requirement (i.e., £350,000 for issuing qualifying stablecoins or £150,000 for safeguarding of qualifying cryptoassets);
a fixed overhead requirement based on annual expenditure; or
a variable activity-based “K-factor” requirement.
Liquidity requirements. Firms must hold a minimum amount of liquid assets. There will be a basic liquid assets requirement for all CRYPTOPRU firms, and an issuer liquid asset requirement for those that issue qualifying stablecoins.
Concentration risk. Firms will be required to monitor and control for concentration risk, to ensure that they are not overly exposed to one or more counterparties or asset types.
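The “higher of” own-funds computation described under the capital requirements above can be sketched as follows. The permanent minimums are the figures CP25/15 quotes (£350,000 for issuance, £150,000 for safeguarding); the fixed overhead and K-factor inputs in the example are hypothetical placeholders, not prescribed values.

```python
# Sketch of the proposed CRYPTOPRU own-funds floor: the highest of the
# permanent minimum, the fixed overhead requirement, and the K-factor
# requirement. Permanent minimums are as quoted in CP25/15; the other
# inputs below are hypothetical.

PERMANENT_MINIMUM = {
    "issuing": 350_000,       # issuing qualifying stablecoins
    "safeguarding": 150_000,  # safeguarding qualifying cryptoassets
}

def own_funds_requirement(activity: str,
                          fixed_overhead_requirement: float,
                          k_factor_requirement: float) -> float:
    """Own funds must equal the highest of the three candidate figures."""
    return max(PERMANENT_MINIMUM[activity],
               fixed_overhead_requirement,
               k_factor_requirement)

# A hypothetical safeguarding firm whose overhead-based figure dominates:
print(own_funds_requirement("safeguarding", 220_000, 180_000))  # 220000
# A hypothetical issuer for whom the permanent minimum is binding:
print(own_funds_requirement("issuing", 100_000, 200_000))       # 350000
```

The point of the max-of-three structure is that the permanent minimum acts as a hard floor: a small firm cannot reduce its requirement below it no matter how low its overheads or activity-based figures are.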
Next Steps
The Consultations close on 31 July 2025. The FCA will consider any feedback before publishing its final rules, which are expected in 2026.
CP25/14 and CP25/15 are available here and here, respectively.
Leander Rodricks, trainee in the Financial Markets and Funds practice, contributed to this article.
NOT “UNREASONABLE”: Three Texts Over Seven Days After Revocation Not Sufficient to Show Lack of Internal DNC Policy
TCPA revocation cases are on the rise, and a closely related type of case– the internal DNC claim– is on the rise along with it.
There is a slight difference between the two types of cases, and one which we don’t talk about much on TCPAWorld.com (and that no one else talks about really.)
In a revocation case, the Plaintiff is either on the national DNC list or received calls using regulated technology (ATDS/prerecorded voice/artificial voice). In that setting when a Plaintiff says “stop” the calls must stop during a “reasonable time” not to exceed 10 business days.
But what if the Plaintiff is receiving telemarketing calls that AREN’T made using regulated technology and the Plaintiff’s number ISN’T on the DNC list? (Also, aren’t is a weird word isn’t it? Aren’t. Just don’t see it in print very much.)
Consumers actually DO NOT have the right to sue directly for unwanted telemarketing calls if they are not on the DNC list. Let me say that again– the TCPA does NOT give consumers the right to sue directly for marketing calls after they say “Stop” if they are not on the DNC and are not called using regulated technology.
However, the DNC rules require ALL telemarketers to have a policy in place to prevent calls to consumers after revocation. And most courts hold a consumer CAN sue a marketer for a lack of such a policy.
So IF a consumer is called after asking for calls to stop they can sue the marketer but ONLY for lacking a policy– not for the phone call itself.
Following along so far?
Great.
So that begs the question– when does a court have to accept a plaintiff’s allegations that a marketer lacks a policy?
That can be a very important question because it determines whether a case makes it past the pleadings stage and into expensive discovery proceedings.
Well in Hulett v. Eyebuydirect, 2025 WL 1677071 (N.D.N.Y. June 13, 2025) the Court determined not every case in which a Plaintiff alleges calls continued after a “stop” notification warrants an internal DNC claim to proceed past the pleadings stage. Instead, in the court’s view a Plaintiff must show something unreasonable took place– and in that case, it had not.
In Hulett the Plaintiff received “at least one” text message from Defendant and responded “stop.” Plaintiff was told: “You’ve opted out and will no longer receive messages from Eyebuydirect.” Great.
Except he received three more messages. Eesh.
The messages were sent over nine days (seven business days).
Now at the time the messages were sent the CFR required businesses to have a policy of stopping texts and calls within a reasonable time not to exceed 30 days. Under the new rules a business has 10 business days to stop calls– so Eyebuydirect’s practices would have been fine.
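The business-day arithmetic here can be sketched as a simple weekday count (ignoring holidays, which the count below does not handle; the dates are hypothetical, chosen only to reproduce a nine-calendar-day, seven-business-day span like the one in Hulett):

```python
# Counting business days between a "stop" request and the last message,
# to compare against a compliance window (e.g. 10 business days under
# the new rules). Weekdays only; holidays are ignored for simplicity.
from datetime import date, timedelta

def business_days_between(start: date, end: date) -> int:
    """Count weekdays strictly after `start`, up to and including `end`."""
    days, d = 0, start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            days += 1
    return days

# A hypothetical Monday "stop" followed by a final text nine calendar
# days later lands at seven business days -- inside a 10-business-day window:
print(business_days_between(date(2025, 6, 2), date(2025, 6, 11)))  # 7
```

Under that math, three trailing texts spread across seven business days sit comfortably inside the current 10-business-day cutoff, which is why the timing alone did little to suggest a missing policy.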
But the Plaintiff alleged Eyebuydirect used an automated system and that calls could have stopped immediately. He claims the failure to stop the messages right away showed EBD lacked a policy.
The Court disagreed. Determining that the rather limited number of texts sent over a relatively short period of time was not consistent with a complete lack of a policy, the Court dismissed the case. In doing so the court was sure to note that not every such case should be tossed, but those where a defendant acted reasonably do not give rise to a claim:
To be clear, the Court’s decision should not be read to universally foreclose future actions under § 64.1200(d) which rely in part on allegations pled on information and belief. Such a broadly applicable rule would violate the Court’s obligation to draw reasonable inferences in favor of plaintiffs, especially considering the lack of information available to Plaintiff discussed herein. Instead, courts must consider the facts which are concretely alleged, alongside those pled on information and belief, and assess whether the allegations as a whole plausibly suggest liability under § 64.1200(d)(3).
So there you go.
Take aways here:
All marketers MUST have an internal DNC policy that requires messages and calls to stop within a reasonable time of having been notified (currently 10 business days);
Allegations of a limited number of texts sent during a limited number of days following a revocation may not be sufficient to state a claim; and
Remember, this is an entirely separate provision from DNC and regulated technology claims that are independently actionable.
HARD ROCK META PIXEL SUIT GETS NO RESERVATION: When Booking a Room Doesn’t Book a CIPA Claim
Greetings, CIPAWorld!
I’m back with the latest, and we’ve got a fresh ruling out of the Eastern District of California that highlights just how tricky it can be for plaintiffs to plead viable CIPA claims based on pixel tracking technology. The decision in King v. Hard Rock Cafe Int’l, Inc., No. 2:24-cv-01119-DC-CKD, 2025 U.S. Dist. LEXIS 109123 (E.D. Cal. June 9, 2025), reinforces the increasingly well-settled view that fundamental web interactions—even if captured by the Meta Pixel—often do not constitute “contents” or “confidential communications” under the California Invasion of Privacy Act (“CIPA”).
If you’ve ever booked a hotel online, you know the drill. You start by picking your destination. Maybe it’s Las Vegas for a weekend getaway or San Francisco for a business trip. You click through dates, select the number of guests, and maybe browse between a standard room and a suite with a view. Each click feels like a private conversation between you and the hotel’s booking system. Right? But what if I told you that conversation is allegedly like a three-way call, with Facebook listening in the entire time?
The facts here are becoming all too familiar, as we repeatedly see. Plaintiff, a Sacramento resident, visited Hard Rock’s hotel website in 2023 to book a stay. Unbeknownst to her, Hard Rock’s site included the Meta Pixel, which allegedly caused her web activity—including button clicks, room selections, and personal information—to be transmitted to Meta’s servers. Here, because Plaintiff had a Facebook account and used the same browser, Meta was allegedly able to link her interactions on Hard Rock’s site to her personal identity. This Meta Pixel allegedly then intercepted what Plaintiff characterized as “guest records” including name, address, telephone number, credit card number, email address, ZIP code, and user-specific values contained in Meta cookies, as well as button clicks selecting travel destinations, desired dates, number of rooms, and hotel preferences. See King, 2025 U.S. Dist. LEXIS 109123, at *3-4.
As a result, Plaintiff filed a class action under CIPA, seeking to represent all California Facebook users who had visited Hard Rock’s site. Specifically, the putative class consisted of “all California residents who have a Facebook account and accessed and navigated the Website while in California.” Id. at *2. The Complaint asserted violations of Section 631(a), which prohibits interception of communications, and Section 632, which bars recording of confidential communications without consent. Notably, Plaintiff pursued the fourth avenue of relief under Section 631(a), alleging that Hard Rock aided Meta’s wrongdoing rather than claiming Hard Rock directly intercepted communications. Think of it like this: Plaintiff wasn’t saying Hard Rock was the one eavesdropping—she was saying Hard Rock handed the wiretap to Facebook and said, basically, here, listen to this.
But as is often the case in these pixel-based lawsuits, the Court found that the pleading failed to meet the legal standards required under CIPA and dismissed the claims, with leave to amend.
The Section 631(a) claim fell first. As Judge Coggins explained, CIPA violations are analyzed under the same standards applied to violations of the federal wiretap statute, the Electronic Communications Privacy Act (“ECPA”), 18 U.S.C. § 2510 et seq. Under the ECPA, “contents” is defined as “any information concerning the substance, purport, or meaning of that communication.” 18 U.S.C. § 2510(8). The Court relied on In re Zynga Privacy Litigation, where the Ninth Circuit held that “‘[c]ontents’ means ‘the intended message conveyed by the communication’ as opposed to ‘record information regarding the characteristics of the message that is generated in the course of the communication.’” In re Zynga Priv. Litig., 750 F.3d 1098, 1106 (9th Cir. 2014). Courts employ a contextual, case-specific analysis hinging on “how much information would be revealed” by the tracking and disclosure of the information at issue. Hammerling v. Google L.L.C., 615 F. Supp. 3d 1069, 1092 (N.D. Cal. 2022).
Here’s where things get legally fascinating. Let me give a hypothetical example to help put this into better context. Imagine you’re writing a text message to a friend about your vacation plans. The actual words “Hey, I’m thinking of going to Vegas next month” would essentially be “contents.” But if someone intercepted just the metadata—that you sent a text at 3 PM to phone number XXX-XXX-XXXX—that would be “record information.” In a hotel booking context, this means your actual travel preferences and personal details might be considered mere “record information” rather than the protected “contents” of a communication.
Applying that standard, the Court found that Plaintiff’s allegations—that her name, address, phone number, email, button clicks, and room selections were transmitted—failed to establish that the contents of her communications had been intercepted. In the Court’s view, Plaintiff’s First Amended Complaint (“FAC”) failed to sufficiently allege that the actual “contents” of communications were intercepted, as opposed to mere “record information.”
Critically, the Court found that Plaintiff’s factual allegations were inadequate. As Judge Coggins observed, “Plaintiff’s FAC does not identify with specificity what information she provided on Defendant’s Website that was intercepted by Meta. Plaintiff’s FAC dedicates only one short paragraph to describing her personal interactions with Defendant’s Website.” King, 2025 U.S. Dist. LEXIS 109123, at *10. It’s like filing a complaint about a burglary by saying “someone took my stuff” without listing what was stolen. The Court likened this case to Cousin v. Sharp Healthcare, where complaints that only provided hypothetical examples and did not specify what information the plaintiffs shared with the defendants through their browsing history were deemed inadequate to support a CIPA claim. See Cousin v. Sharp Healthcare, 681 F. Supp. 3d 1117, 1123 (S.D. Cal. 2023).
The Court further explained that personal identifiers such as name, address, and email were record information under established precedent: “Generally, customer information such as a person’s name, address, and subscriber number or identity is record information, but it may be contents when it is part of the substance of the message conveyed to the recipient.” Hammerling, 615 F. Supp. 3d at 1092-93. Similarly, Plaintiff’s “button clicks” were “more akin to the ‘record’ information that the Ninth Circuit has held not to be contents of a communication.” Mikulsky v. Bloomingdale’s, L.L.C., 713 F. Supp. 3d 833, 845 (S.D. Cal. 2024); Yoon v. Lululemon U.S., 549 F. Supp. 3d 1073, 1082-83 (C.D. Cal. 2021) (holding that “mouse clicks” constitute record information, not “message content,” for purposes of CIPA, unlike the text of an email or message).
While the Court acknowledged that specific descriptive URLs might qualify as content, it found Plaintiff had not alleged that she conducted any such searches or that the URLs captured in her case contained such content. See St. Aubin v. Carbon Health Techs., Inc., No. 24-cv-00667-JST, 2024 U.S. Dist. LEXIS 179067, at *4 (N.D. Cal. Oct. 1, 2024) (“descriptive URLs that reveal specific information about a user’s queries” may reflect the “contents” of communications under CIPA). Although Plaintiff’s Complaint did provide examples of the types of “full-string URLs” allegedly intercepted by Meta, Plaintiff did not allege she conducted the searches or visited the webpages provided in the examples.
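The content-versus-record line the Court drew around URLs can be made concrete. A bare page address reveals little beyond the fact of a visit, while a descriptive “full-string” URL carries query parameters that expose the substance of what the user asked for. The URLs below are hypothetical examples, not ones from the Complaint:

```javascript
// Hypothetical URLs contrasting record-type information with a
// descriptive, full-string URL whose query parameters reveal the
// substance of a user's search.

// A bare URL discloses only that a booking page was visited.
const bareUrl = new URL("https://hotel.example.com/booking");
console.log(bareUrl.pathname); // "/booking"

// A full-string URL discloses what the user actually asked for.
const descriptiveUrl = new URL(
  "https://hotel.example.com/booking/search?destination=Las+Vegas" +
  "&checkin=2023-06-01&checkout=2023-06-03&guests=2"
);
for (const [key, value] of descriptiveUrl.searchParams) {
  console.log(`${key}: ${value}`); // destination, dates, guest count
}
```

Under St. Aubin’s reasoning, intercepting the second kind of URL may capture “contents”; Plaintiff’s problem was that she never alleged she ran such searches, so the distinction did her no good.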
Plaintiff’s Section 632 claim faced a separate but related problem: a failure to establish that her communications were “confidential.” Section 632 requires that the plaintiff show the communication was “carried on in circumstances as may reasonably indicate that any party to the communication desires it to be confined to the parties thereto.” Cal. Penal Code § 632(c). The Court reiterated that “[c]ourts generally find that internet communications do not have an objectively reasonable expectation of confidentiality, especially if those communications can be easily shared by the recipients of the communications.” Yoon, 549 F. Supp. 3d at 1073. A plaintiff must plead “something unique about [] particular internet communications” to demonstrate those communications are confidential. In re Meta Pixel Healthcare Litig., 647 F. Supp. 3d 778, 799 (N.D. Cal. 2022) (acknowledging that communications between patients and medical providers are distinct because they are “protected by federal law and are inherently personal”).
Furthermore, Plaintiff attempted to invoke California Civil Code § 53.5(a), which provides that guest records are confidential, to support her claim. That’s a pretty smart move, if you ask me. You would think that hotel guest information—historically some of the most protected data in the hospitality industry—would get special treatment. After all, there’s a reason hotels have long been required to keep guest records confidential. However, the Court noted that Plaintiff did not allege that her communications with Defendant’s Website qualified as guest records, and her vague allegation that she merely “browsed and booked a Hard Rock hotel” was insufficient. See Cousin, 681 F. Supp. 3d at 1123 (finding CIPA complaints insufficient where they “only provided hypothetical examples and failed to specify what information plaintiffs provided defendants through their browsing history”). The Court explained that “regardless of whether ‘guest records’ under California Civil Code Section 53.5(a) qualify as ‘confidential’ communications under Section 632 of CIPA,” Plaintiff’s lack of specific facts about her website interactions was fatal to the claim. King, 2025 U.S. Dist. LEXIS 109123, at *14-15.
Interestingly, the Court declined to address Hard Rock’s alternative argument that Plaintiff had consented to the data sharing through the website’s terms and policies, noting that “because the court finds Plaintiff has failed to state a claim under Sections 631 and 632, the court does not address Defendant’s arguments regarding consent.” King, 2025 U.S. Dist. LEXIS 109123, at *6 n.2. Accordingly, the Court denied Hard Rock’s request for judicial notice of the relevant terms and conditions as moot.
Ultimately, the Court dismissed both claims but granted leave to amend. Still, the order is yet another clear reminder of the hurdles pixel plaintiffs face in trying to fit web tracking into CIPA’s wiretap and eavesdropping framework.