Texas AG Investigates DeepSeek + List of Banned Countries Expands

Texas Attorney General Ken Paxton announced on February 14, 2025, that his office has opened an investigation into DeepSeek’s privacy practices. DeepSeek, an artificial intelligence company with ties to the People’s Republic of China, has been banned on state-owned devices in Texas, New York, and Virginia. The Pentagon, NASA, and the U.S. Navy have also prohibited employees from using DeepSeek.
According to Paxton’s press release, he has notified DeepSeek “that its platform violates the Texas Data Privacy and Security Act.” He also sent civil investigative demands to tech companies seeking their analysis of the application and any documentation DeepSeek provided to them before the application was offered to consumers.
DeepSeek has been banned in Italy, South Korea, Australia, Taiwan, and India.

Is Your Business Trapped? The Rise of “Trap and Trace” Litigation

Almost every business has a website; every website should have a privacy policy, terms of use, and, in some cases, a consumer privacy rights notice—if certain state consumer privacy rights laws apply to your business, such as the California Consumer Privacy Act as amended by the California Privacy Rights Act (collectively CCPA). What about a cookie policy? Or a cookie consent banner? Or a cookie preferences pop-up? If you haven’t looked at what types of ad tech your website uses—i.e., cookies, pixel tags, device IDs, and browser fingerprinting technologies that collect data about user behavior across multiple devices and platforms, which are essential for targeted advertising online—now is the time.
“Trap and trace” litigation and private demands for damages related to online tracking have risen significantly. This litigation targets the ad tech used on websites, with plaintiffs’ attorneys likening online trackers to “pen registers” under state wiretap laws. These technologies allegedly collect website users’ device information and activities without their consent, which plaintiffs’ attorneys argue constitutes unauthorized interception of electronic communications under various wiretap laws. Here are some key considerations to assess your company’s website and ad tech:

Unauthorized Interception: the use of third-party trackers in ad tech is being construed as an intentional interception of electronic communications, similar to how pen registers and trap and trace devices operate by capturing dialing, routing, addressing, or signaling information.
Legal Risks: the use of such technologies without clear consent or transparency can lead to legal and reputational risks for your business, not to mention demands from plaintiffs’ attorneys seeking quick settlement in this unsettled area of the law, as well as class actions seeking millions of dollars in damages.
State Wiretap Laws: state wiretap laws, such as California’s Invasion of Privacy Act and Massachusetts’s Wiretap Act, have been adapted to address online tracking methods. These laws prohibit unauthorized interception of electronic communications, and plaintiffs’ attorneys are alleging that using online trackers could violate these laws.
Privacy Rights: the use of certain ad tech may also constitute a privacy rights violation under state consumer privacy rights laws, like the CCPA.
Impossibility of Obtaining Prior Consent: the way most ad tech is set up to function means that website users’ data and activity are tracked instantaneously upon visiting the website, which prevents the business from obtaining prior consent (i.e., acceptance of website cookies) before the tracking begins. Knowing how to program your website’s ad tech properly is vital in steering clear of these claims and lawsuits; a sketch of consent-gated tracking follows this list.
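To make the fix concrete, here is a minimal sketch (in TypeScript, for illustration only) of consent-gated tracking: no third-party tag loads until the visitor affirmatively accepts cookies. The storage key, button ID, and vendor URL are hypothetical placeholders, not references to any particular consent-management product, and a production implementation would need to gate every tracker on the page, not just one script.

// Minimal consent-gating sketch (illustrative only): nothing fires on page
// load; third-party tags are injected only after the visitor opts in.
const CONSENT_KEY = "cookie-consent"; // hypothetical localStorage key

function hasMarketingConsent(): boolean {
  return localStorage.getItem(CONSENT_KEY) === "accepted";
}

function loadMarketingTags(): void {
  // Inject the tracker only after consent exists.
  const script = document.createElement("script");
  script.src = "https://adtech.example/pixel.js"; // placeholder vendor URL
  script.async = true;
  document.head.appendChild(script);
}

if (hasMarketingConsent()) {
  // Returning visitor who already consented.
  loadMarketingTags();
} else {
  // New visitor: wait for an affirmative click on the (assumed) banner button.
  document.getElementById("cookie-accept-btn")?.addEventListener("click", () => {
    localStorage.setItem(CONSENT_KEY, "accepted");
    loadMarketingTags();
  });
}

The design point is the default: trackers stay off until consent is recorded, which is the inverse of how most tags are installed and is precisely the gap the “impossibility of obtaining prior consent” theory exploits.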

Overall, the intersection of ad tech and “trap and trace” demands and litigation highlights the importance of understanding and complying with privacy laws and obtaining explicit consent from website users when collecting and using their data. Now is the time to evaluate your website, privacy policy, terms of use, and consumer privacy rights notices to confirm compliance with the ever-changing landscape of state and federal laws, while also finding balance between meeting your marketing team’s needs and your website users’ experience. Take action to avoid this trap.

State Attorneys General Point to Ways DEI Programs Can Stay Within Legal Boundaries

The attorneys general of sixteen states recently released guidance explaining how diversity, equity, and inclusion (DEI) programs in the private sector can remain viable and legal. This guidance came shortly after President Donald Trump issued two executive orders targeting “unlawful DEI” programs in the federal government, federal contractors, and federal fund recipients, and directed the U.S. attorney general to investigate “unlawful DEI” programs in the private sector.
 
Quick Hits

The attorneys general of sixteen states signaled to private employers that their DEI programs can remain legal if designed and implemented correctly under applicable laws.
The guidance came in response to President Trump’s executive orders to stop DEI “mandates, policies, programs, preferences, and activities” in the federal government and “unlawful DEI” programs by federal contractors and federal money recipients.
The guidance reiterates that racial and sex-based quotas and unlawful preferences in hiring and promotions have been illegal for decades under Title VII of the Civil Rights Act of 1964.

On February 13, 2025, the attorneys general of Arizona, California, Connecticut, Delaware, Hawaii, Illinois, Maine, Maryland, Massachusetts, Minnesota, Nevada, New Jersey, New York, Oregon, Rhode Island, and Vermont issued guidance stating that DEI programs are still legal when structured and implemented properly.
State laws prohibiting employment discrimination based on race or sex vary in scope. Some of them go beyond the protections in the federal antidiscrimination laws.
While noting that race- and gender-based preferences in hiring and promotions have been unlawful for decades, the new guidance provides myriad legally compliant strategies for employers to enhance diversity, equity, and inclusion in the workplace, such as:

prioritizing widescale recruitment efforts to attract a larger pool of job candidates from a variety of backgrounds;
using panel interviews to help eliminate bias in the hiring process;
setting standardized criteria for evaluating candidates and employees, focused on skills and experience;
ensuring accessible recruitment and hiring practices, including reasonable accommodations as appropriate;
ensuring equal access to all aspects of professional development, training, and mentorship programs;
maintaining employee resource groups for workers with certain backgrounds or experiences;
providing employee training on unconscious bias, inclusive leadership, and disability awareness; and
maintaining clear protocols for reporting discrimination and harassment in the workplace.

“Properly developed and implemented initiatives aimed at ensuring that diverse perspectives are included in the workplace help prevent unlawful discrimination,” the guidance states. “When companies embed the values of diversity, equity, inclusion, and accessibility within an organization’s culture, they reduce biases, boost workplace morale, foster collaboration, and create opportunities for all employees.”
Next Steps
A group of diversity officers, professors, and restaurant worker advocates has filed suit to challenge President Trump’s executive orders on DEI. Other groups have brought similar lawsuits. It remains unclear how these challenges will affect enforcement of the executive orders.
With the executive orders and leadership shifts at the U.S. Equal Employment Opportunity Commission, the Trump administration has signaled a change in federal enforcement priorities that could increase legal risk even for lawful private-sector DEI efforts.
Private employers may wish to review their existing DEI programs and policies to ensure compliance with federal and state antidiscrimination laws. In some cases, employers may be able to keep the legally compliant parts of their DEI programs while adjusting or eliminating certain parts that the Trump administration could consider unlawful.
Ogletree Deakins will continue to monitor developments and will provide updates on the Diversity, Equity, and Inclusion, Employment Law, and State Developments blogs as new information becomes available.

SOUR MORNING?: For Love and Lemons Faces TCPA Lawsuit Over Timing Violations

Hi TCPAWorld! The Baroness here. And we’ve got a new filing. This time, we’re taking a look at a case involving a popular clothing brand: For Love and Lemons.
Let’s start with the allegations.
The plaintiff, Michelle Huang, alleges that on November 28 and 29, 2024, she received two text messages from For Love and Lemons.
However, this case isn’t about the typical Do Not Call (DNC) Registry violation you might expect.
This case is actually brought under the time restrictions provisions of the TCPA.
Here’s where it gets interesting: Huang asserts that she received the messages at 7:14 a.m. and 7:45 a.m. — times she says are outside the window in which businesses are allowed to send marketing messages. Specifically, she contends she never authorized For Love and Lemons to send texts before 8 a.m. or after 9 p.m. local time.
This is significant because under 64.1200(c)(1), “[n]o person or entity shall initiate any telephone solicitation” to “[a]ny residential telephone subscriber before the hour of 8 a.m. or after 9 p.m. (local time at the called party’s location).” 47 C.F.R. § 64.1200(c)(1).
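For senders, the operational question is simple arithmetic on the recipient’s clock. As a rough illustration, assuming the sender can map each recipient to an IANA time zone (area codes alone do not reliably provide this), a dispatch system could hold messages that fall outside the 8 a.m.–9 p.m. window. The function name and zone below are illustrative assumptions:

// Illustrative quiet-hours gate: is it currently between 8 a.m. and 9 p.m.
// in the recipient's local time zone? Assumes the IANA zone is already known.
function withinPermittedWindow(recipientTimeZone: string, now: Date = new Date()): boolean {
  const hour = Number(
    new Intl.DateTimeFormat("en-US", {
      timeZone: recipientTimeZone,
      hour: "numeric",
      hourCycle: "h23",
    }).format(now)
  );
  return hour >= 8 && hour < 21; // permitted: 8:00 a.m. through 8:59 p.m.
}

// Example: a 7:14 a.m. Pacific send, like the one alleged here, would be held.
if (!withinPermittedWindow("America/Los_Angeles")) {
  console.log("Hold message until 8 a.m. recipient local time.");
}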
Based on this alleged violation, Plaintiff sued For Love and Lemons for violations of Section 227(c) of the TCPA and 47 C.F.R. § 64.1200(c)(1).
In addition, she seeks to represent a class of individuals who received similar marketing texts outside the permissible hours:
All persons in the United States who from four years prior to the filing of this action through the date of class certification (1) Defendant, or anyone on Defendant’s behalf, (2) placed more than one marketing text message within any 12-month period; (3) where such marketing text messages were initiated before the hour of 8 a.m. or after 9 p.m. (local time at the called party’s location). 
It is not often that we see cases being filed pursuant to 64.1200(c)(1). But this is a reminder that this provision exists!
Since this case was just filed, there is not much to report. But we will of course keep you folks updated as the case progresses.
Huang v. Love And Lemons LLC, Case No. 2:25-cv-01391 (C.D. Cal.).

Online Advertisements Found to Monetize Piracy and Child Pornography

“Online Advertising Hits Rock Bottom” screams one recent headline, as reports from ad fraud researchers purportedly have found evidence that online ads for mainstream brands have appeared on websites dedicated to the display and sharing of child pornography. Others have appeared on sites that facilitate the sharing of pirated video content. There is little doubt that the who’s who of major brands whose ads may have appeared on such sites were unaware of this and, had they known, would have objected. I have written about this before, and this keeps happening – despite the proliferation of ad tech vendors promising to prevent it.
Moreover, this is not a victimless crime. Placing ads on a website dedicated to sharing child pornography monetizes this horrific activity. Far from merely benefitting the proverbial “two guys in a Romanian basement,” monies generated from misspent digital advertising can be used to fund terrorism, human trafficking, and all manner of abhorrent criminal activity. This should be of keen interest to all advertisers, particularly public companies.
One estimate says that advertisers lost up to $1 billion to ad fraud in 2024 alone. The nature of online advertising, which has surpassed “traditional media,” lends itself to opacity. Simply put, the Internet is infinitely scalable. Billions of “impressions” are generated daily, and more are always available to the unscrupulous. Advertisers often lack the data needed to determine where every advertisement winds up, and even if they had such data, they lack the wherewithal to determine whether an appropriate price was paid, whether they received value, and whether they received rebates to which they were entitled. Indeed, recent news reports suggest that large-scale bribery has infected ad spending in some international markets.
So, one would think that advertisers would dedicate more resources to rooting out this fraud. To be sure, associational efforts have been undertaken and claim to have shown progress. However, the problem persists and is still quite substantial. What other industry would tolerate fraud on the order of 10–40% of spend? Yet, it continues year after year.
What should a responsible advertiser do now?

Review relevant contracts to determine what audit rights exist;
Revise weak contracts;
Exercise relevant audit rights; 
Deal with negligent or reckless vendors; and
Pursue recovery of lost funds.

The last item is sometimes tricky to accomplish and depends on the strength of rights embodied in the relevant contracts. However, the proper contracts can give advertisers the power to pursue a refund of misspent or overspent funds, provided that the audits are strong and demonstrate compensable issues exist. This need not always involve filing a lawsuit.
Pursuing recovery can take courage and surely can create tension in some ongoing relationships. However, can your company continue business as usual with the stakes as high as they are?

“NOT MINIMAL”: Court Holds TCPA Defendant Can Be Liable for Illegal RVM Even Though Platform Sent the Message

There’s an interesting tension between platforms and callers that use their services when it comes to the TCPA.
And it all comes down to who is actually “making” the call.
This is so because the TCPA only applies to persons that make or initiate calls–which is why lead gen data brokers always seem to get off easy and the lead buyers are always caught in a snare.
But in the platform context, the caller wants the platform to be viewed as the “initiator” whereas the platform operator always wants to be very careful to be nothing more than a conduit.
Well in Saunders v. Dyck-O’Neal, 2025 WL 553292 (W.D. Mich. Feb. 19, 2025)–an unbelievably old case I can’t believe is still around–Defendant moved for summary judgment arguing it could not be liable for ringless voicemails left by the (in)famous VoApps.
To my eye this motion was a real long shot. The facts here are pretty clear. Per the order:
Dyck O’Neal provided VoApps with (1) the telephone number to be contacted, (2) the day and time the voicemails were sent, and (3) the caller ID number to be used. Dyck O’Neal also selected the message to be played. For example, one script of the voicemail message provided: “This is Dyck O’Neal with a message. This is an attempt to collect a debt. Please do not erase this message, and will you call us at 1-877-425-8998. Again, that number is 1-877-425-8998.” (ECF No. 294-8 at PageID.4091).
Ok, so the Defendant gave a file of numbers to the platform, told the platform to deliver a specific message at a specific time, and also supplied the DIDs. I mean, as long as the platform faithfully carried out those instructions I don’t see how you get around a determination that Defendant “initiated” those calls– it was the party instructing the transmission of the calls. So yeah, it initiated the calls.
And that is just what the Court held.
The Court also held Defendant could be liable under vicarious liability principles since it controlled VoApps in the context of sending the messages:
Dyck O’Neal’s involvement was not minimal. It decided what phone numbers would be called. It decided what prerecorded voicemail messages would be played. It uploaded a “campaign” each day, on the day it wanted calls to be made. It had the message it wanted played during calls recorded and designed the prerecorded message and caller ID to conform to its debt collection purpose. It had alleged debtors’ addresses and directed VoApps to send messages only during permissible time of day, depending upon the physical location of the debtor. By the terms of the contract, VoApps acted as a “passive conduit for the distribution of content and information.” 
Yeah… this one was pretty obvious.
Indeed, this motion was borderline frivolous–and perhaps not even borderline–and I rarely say that.
What I find really fascinating is that a different RVM platform was found to be exempt from TCPA liability under Section 230 of the Communications Act, so I am not sure why that issue wasn’t raised as part of Defendant’s motion.
C’est la vie.
This is a good data point on a couple of things:

Platforms should always try to position themselves as mere conduits to avoid findings that they are responsible for the conduct of callers using their services;
Callers who wish to treat their platforms as the “makers” of the call need to really place trust in those platforms and also have clear contract terms to that effect– and handing off a list of numbers with explicit instructions is going to sink your chances;
Ringless voicemails are covered by the TCPA as regulated technology and prerecorded calls–which means you need express written consent for marketing purposes and express consent for informational purposes to leverage these systems; and
Folks caught up in RVM cases should keep Section 230 in mind!

LONG GAME: Is One-to-One Coming Back in January 2026? NCLC Wants to Make that Happen–Here’s How It Might

TCPAWorld is an absolutely fascinating place.
So many incredible storylines always intersecting. And the Czar at the center of it all.
Enjoyable beyond words.
So here’s the latest.
As I reported yesterday, NCLC is seeking to intervene before the Eleventh Circuit Court of Appeals in an apparent effort to seek an en banc re-hearing of the Court’s determination that the FCC exceeded its authority in fashioning the one-to-one rule. If successful, the NCLC could theoretically resurrect the rule before the one-year stay the FCC put into effect following R.E.A.C.H.’s emergency petition last month runs out.
So, in theory, one-to-one could be back in January 2026 after all.
So let’s back up to move forward and make sure everyone is following along.
Way back in December 2022, Public Knowledge–a special interest group with significant influence over the Biden-era FCC–submitted a proposal to shut down lead generation by banning the sale or transfer of leads.
I went to work trying to spread the word, and in April 2023 the FCC issued a public notice that was a real headfake— the notice suggested it was considering only whether to ban leads that were not “topically and logically” related to the website at issue. Most people slept on this–and many lawyers in the industry told folks it was no big deal–but I told everyone PRECISELY what was at stake.
Despite my efforts, industry’s comments were fairly weak, as very few companies came forward to oppose the new rule.
In November 2023–as only the Czar had correctly predicted–the FCC circulated a proposed rule that looked nothing like the original version– THIS version required “one-to-one” consent, just as I said it would.
Working with the SBA, R.E.A.C.H. and others were able to convince the Commission to push the effective date for the rule from 6 months to 12 months to give time for another public notice period to evaluate the rule’s impact on small business.
This additional six months also gave time for another trade organization to challenge the ruling in court (you’re welcome).
Ultimately, with the clock winding down in the final week before the rule was set to go into effect on January 27, 2025, R.E.A.C.H. filed an emergency petition with the FCC to stay the ruling.
On Friday, January 24, 2025, at 4:35 pm the FCC issued the desired stay— pushing back the effective date for up to another year. Twenty minutes later the Eleventh Circuit Court of Appeals issued a ruling striking down the one-to-one rule completely.
Now the NCLC enters and is seeking to reverse the appellate court’s decision and reinstate the rule. To do so it would need to:

Be granted an unusual post hoc intervention; and either
Be granted an unusual en banc re-hearing and then win that re-hearing; or
Be granted an unusual Supreme Court cert and then win that Supreme Court challenge.

As anyone will tell you, every piece of this is a long shot.
Still, however, it is possible.
For instance, the Eleventh Circuit’s standard for en banc review is high but not overwhelmingly so:
“11th Cir. R. 40-6 Extraordinary Nature of Petitions for En Banc Consideration. A petition for en banc consideration, whether upon initial hearing or rehearing, is an extraordinary procedure intended to bring to the attention of the entire court a precedent-setting error of exceptional importance in an appeal or other proceeding, and, with specific reference to a petition for en banc consideration upon rehearing, is intended to bring to the attention of the entire court a panel opinion that is allegedly in direct conflict with precedent of the Supreme Court or of this circuit. Alleged errors in a panel’s determination of state law, or in the facts of the case (including sufficiency of the evidence), or error asserted in the panel’s misapplication of correct precedent to the facts of the case, are matters for rehearing before the panel but not for en banc consideration.”
To be sure, the Eleventh Circuit’s ruling was quite extraordinary. It turned appellate review of agency action more or less on its head–a complete departure from established analytic norms in such cases.
But, as I have said multiple times, we are living in a whole new world right now. So what was weird and inappropriate six months ago may be very much the new paradigm today.
Of course being granted the rehearing in this environment would just be step one. NCLC would then actually have to win the resulting en banc review– which is by no means guaranteed even if the rehearing is granted.
But from a timing perspective all of this could theoretically happen within one year.
If NCLC is denied a rehearing it could theoretically seek Supreme Court review, which could theoretically result in a ruling sometime in May or June 2026– in the meantime the FCC’s stay would likely be extended in light of the Supreme Court taking the case. But the odds of the Supremes taking such an appeal and then reinstating the one-to-one rule seem astronomically small given the current makeup of the Court.
Then again, with Mr. Trump seizing control of independent agencies, the rules regarding how courts review regulatory activity by these agencies just became INSANELY important. Again, we have a whole new paradigm and the Supremes may theoretically look for any vehicle to opine on the subject ahead of potentially catastrophic separation-of-powers issues set up by Mr. Trump’s executive order this week.
The bottom line is this: one-to-one consent may rise again, and if the NCLC has its way–it will.
We will keep everyone posted on developments, of course, and the R.E.A.C.H. board will be discussing its own potential intervention efforts shortly.
More soon.

The ReAIlity of What an AI System Is – Unpacking the Commission’s New Guidelines

The European Commission has recently released its Guidelines on the Definition of an Artificial Intelligence System under the AI Act (Regulation (EU) 2024/1689). The guidelines were adopted in parallel with the Commission’s guidelines on prohibited AI practices (which also entered into application on February 2), with the goal of providing businesses, developers, and regulators with further clarification on the AI Act’s provisions.
Key Takeaways for Businesses and AI Developers
Not all AI systems are subject to strict regulatory scrutiny. Companies developing or using AI-driven solutions should assess their systems against the AI Act’s definition. With these guidelines (and the ones on prohibited practices), the European Commission is delivering on the need to clarify the core element of the act: what is an AI system?
The AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment. The system, for explicit or implicit objectives, infers from input data how to generate outputs – such as predictions, content, recommendations or decisions – that can influence physical or virtual environments.
One of the most significant clarifications in the guidelines is the distinction between AI systems and “traditional software.”

AI systems go beyond rule-based automation and require inferencing capabilities.
Traditional statistical models and basic data processing software, such as spreadsheets, database systems and manually programmed scripts, do not qualify as AI systems.
Simple prediction models that use basic statistical techniques (e.g., forecasting based on historical averages) are also excluded from the definition.

This distinction ensures that compliance obligations under the AI Act apply only to AI-driven technologies, leaving other software solutions outside of its scope.
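A toy contrast, sketched below in TypeScript purely as an illustration, may help: the first function is fully rule-based, with every threshold hand-coded by a developer, while the second derives its parameters from data and infers outputs for inputs it never saw. Note that the guidelines also exclude simple models built on basic statistical techniques, so whether a plain least-squares fit would itself clear the bar is exactly the kind of case-by-case question the guidelines leave open; the sketch only shows where “inference” enters.

// Rule-based logic: behavior is fully specified in advance by the developer.
// Under the guidelines, this is "traditional software," not an AI system.
function ruleBasedCreditDecision(income: number, debt: number): boolean {
  return income > 50_000 && debt / income < 0.4; // fixed, hand-coded thresholds
}

// Learned model: parameters are fitted from data (one-variable least squares),
// and outputs for new inputs are inferred rather than hand-coded.
function fitLinearModel(xs: number[], ys: number[]): (x: number) => number {
  const n = xs.length;
  const meanX = xs.reduce((a, b) => a + b, 0) / n;
  const meanY = ys.reduce((a, b) => a + b, 0) / n;
  let num = 0;
  let den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - meanX) * (ys[i] - meanY);
    den += (xs[i] - meanX) ** 2;
  }
  const slope = num / den;
  const intercept = meanY - slope * meanX;
  return (x) => slope * x + intercept; // predictions come from fitted parameters
}

// Hypothetical usage: the prediction for x = 5 was never explicitly programmed.
const predict = fitLinearModel([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.1]);
console.log(predict(5));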
Below is a breakdown of what the guidelines bring for each of the seven components:

Machine-based systems – AI systems rely on computational processes involving hardware and software components. The term “machine-based” emphasizes that AI systems are developed with and operate on machines, encompassing physical elements such as processing units, memory, storage devices and networking units. These hardware components provide the necessary infrastructure for computation, while software components include computer code, operating systems and applications that direct how the hardware processes data and performs tasks. This combination enables functionalities like model training, data processing, predictive modeling, and large-scale automated decision-making. Even advanced quantum computing systems and biological or organic systems qualify as machine-based if they provide computational capacity.
Varying levels of autonomy – AI systems can function with some degree of independence from human intervention. This autonomy is linked to the system’s capacity to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. The AI Act clarifies that autonomy involves some independence of action, excluding systems that require full manual human involvement. Autonomy also spans a spectrum – from systems needing occasional human input to those operating fully autonomously. This flexibility allows AI systems to interact dynamically with their environment without human intervention at every step. The degree of autonomy is a key consideration for determining if a system qualifies as an AI system, impacting requirements for human oversight and risk-mitigation measures.
Potential adaptiveness – Some AI systems change their behavior after deployment through self-learning mechanisms, though this is not a mandatory criterion. This self-learning capability enables systems to automatically learn, discover new patterns or identify relationships in the data beyond what they were initially trained on.
Explicit or implicit objectives – The system operates with specific goals, whether predefined or emerging from its interactions. Explicit objectives are those directly encoded by developers, such as optimizing a cost function or maximizing cumulative rewards. Implicit objectives, however, emerge from the system’s behavior or underlying assumptions. The AI Act distinguishes between the internal objectives of the AI system (what the system aims to achieve technically) and the intended purpose (the external context and use-case scenario defined by the provider). This differentiation is crucial for regulatory compliance, as the intended purpose influences how the system should be deployed and managed.
Inferencing capability – AI systems must infer how to generate outputs rather than simply executing manually defined rules. Unlike traditional software systems that follow predefined rules, AI systems reason from inputs to produce outputs such as predictions, recommendations or decisions. This inferencing involves deriving models or algorithms from data, either during the building phase or in real-time usage. Techniques that enable inference include machine learning approaches (supervised, unsupervised, self-supervised and reinforcement learning) as well as logic- and knowledge-based approaches.
Types of outputs – AI systems generate predictions, content, recommendations or decisions that shape both their physical and virtual environments. Predictions estimate unknown values based on input data; content generation creates new materials like text or images; recommendations suggest actions or products; and decisions automate processes traditionally managed by human judgement. These outputs differ in the level of human involvement required, ranging from fully autonomous decisions to human-evaluated recommendations. By handling complex relationships and patterns in data, AI systems produce more nuanced and sophisticated outputs compared to traditional software, enhancing their impact across diverse domains.
Environmental influence – Outputs must have a tangible impact on the system’s physical or virtual surroundings, exposing the active role of AI systems in influencing the environment they operate within. This includes interactions with digital ecosystems, data flows and physical objects, such as autonomous robots or virtual assistants.

Why These Guidelines Matter
The AI Act introduces a harmonized regulatory framework for AI developed or used across the EU. Core to its scope of application is the definition of “AI system” (which then spills over onto the scope of regulatory obligations, including restrictions on prohibited AI practices and requirements for high-risk AI systems).
The new guidelines serve as an interpretation tool, helping providers and stakeholders identify whether their systems qualify as AI under the act. Among the key takeaways is the fact that the definition is not to be applied mechanically, but should consider the specific characteristics of each system.
AI systems are a reA(I)lity; if you have not started assessing the nature of the one you develop or the one you procure, now is the time to do so. While the EU AI Act might be considered by many as having missed its objective (a human-centric approach to AI that fosters innovation and sets a level playing field), it is here to stay (and its phased application is on track).

New Data Privacy Working Group Created by US House Committee

On February 12, 2025, Congressman Brett Guthrie (R-KY), Chairman of the House Committee on Energy and Commerce, and Congressman John Joyce, M.D. (R-PA), Vice Chairman of the House Committee on Energy and Commerce, announced the establishment of a comprehensive data privacy working group (the Working Group). The Working Group also includes Representatives Morgan Griffith (R-VA), Troy Balderson (R-OH), Jay Obernolte (R-CA), Russell Fry (R-SC), Nick Langworthy (R-NY), Tom Kean (R-NJ), Craig Goldman (R-TX), and Julie Fedorchak (R-ND).
The House Republicans created the Working Group to develop new federal data privacy standards. The Working Group welcomes input from a broad range of stakeholders. Stakeholders interested in engaging with the Working Group can reach out to [email protected] for more information.
This initiative presents an opportunity for companies to actively engage in shaping emerging federal data privacy standards. Feel free to contact us for guidance. We will monitor the Working Group’s progress and keep clients apprised of key developments as new federal privacy standards take shape.

“We strongly believe that a national data privacy standard is necessary to protect Americans’ rights online and maintain our country’s global leadership in digital technologies, including artificial intelligence. That’s why we are creating this working group, to bring members and stakeholders together to explore a framework for legislation that can get across the finish line,” said Chairman Guthrie and Vice Chairman Joyce. “The need for comprehensive data privacy is greater than ever, and we are hopeful that we can start building a strong coalition to address this important issue.”
 energycommerce.house.gov/..

EDPB Adopts Statement on Age Assurance and Creates a Task Force on AI Enforcement

On February 12, 2025, during its February 2025 plenary meeting, the European Data Protection Board (EDPB) adopted a statement on age assurance, which outlines ten principles concerning the processing of personal data when determining an individual’s age or age range. The EDPB is also cooperating with the European Commission on age verification in the context of the Digital Services Act (DSA) working group.
In addition, the EDPB extended the scope of the ChatGPT task force to artificial intelligence (AI) enforcement. The EDPB members underlined the need to coordinate the actions of the Data Protection Authorities (DPAs) regarding urgent sensitive matters and will set up a quick response team for that purpose.
In the statement, the EDPB outlines ten key principles for implementing a governance framework that complies with the General Data Protection Regulation (GDPR) and protects children when their personal data is processed. The EDPB Chair, Anu Talus, stressed the importance of balancing the responsible use of AI within the GDPR framework. Businesses should ensure compliance with these evolving data protection standards, and our team is available to provide guidance on navigating the GDPR requirements and implementing effective compliance strategies.

“The GDPR is a legal framework that promotes responsible innovation. The GDPR has been designed to maintain high data protection standards while fully leveraging the potential of innovation, such as AI, to benefit our economy. The EDPB’s task force on AI enforcement and the future quick response team will play a crucial role in ensuring this balance, coordinating the DPAs’ actions and supporting them in navigating the complexities of AI while upholding strong data protection principles.” – EDPB Chair Anu Talus
www.edpb.europa.eu/…

The Boundaries of Chapter 93A

The scope of Chapter 93A is not unlimited, as the Appeals Court of Massachusetts recently confirmed in Beaudoin v. Massachusetts School of Law at Andover, Inc. The case involved a law student who was disenrolled from the school for not obtaining a COVID-19 vaccination, contrary to what he alleged were the school’s representations. He brought claims for breach of contract, promissory estoppel, breach of the implied covenant of good faith and fair dealing, negligent misrepresentation, Chapter 93A, and unjust enrichment. The trial court dismissed the complaint under Mass. R. Civ. P. 12(b)(6) for failure to state a claim.
The Appeals Court affirmed the dismissal of the Chapter 93A claim, noting that (i) Chapter 93A, Section 2 prohibits unlawful acts and practices occurring “in the conduct of any trade or commerce” and (ii) although charitable corporations “are not immune” from Chapter 93A’s reach, in most cases a charitable corporation’s activities in furtherance of its core mission will not constitute “trade or commerce” under Section 2. This decision relies on the Supreme Judicial Court’s oft-quoted decision in Linkage Corporation v. Boston Univ. Trustees (1997) and the First Circuit’s Squeri v. Mount Ida Coll. (2020). As the law student’s Chapter 93A claims focused on the alleged unfair and deceptive recruiting of students to enroll at the school, the claims arose from the nonprofit law school’s provision of education to students and, as such, the challenged acts and practices did not fall within “the conduct of any trade or commerce.” The Appeals Court, however, reversed the trial court’s dismissal of various common law claims.
This case demonstrates that plaintiffs and defendants alike must always consider whether challenged conduct under Chapter 93A fits the definitions required to trigger coverage and whether adding a Chapter 93A count is appropriate or will cause initial dispositive motion practice.

Beware Broader Insurance Coverage Exclusions for Biometric Information Privacy Law Claims

It has been nearly two decades since Illinois introduced the first biometric information privacy law in the country in 2008, the Illinois Biometric Information Privacy Act (“BIPA”). Since then, litigation relating to biometric information privacy laws has mushroomed, and the insurance industry has responded with increasingly broad exclusions for claims stemming from the litigation. A recent Illinois Appellate Court decision in Ohio Security Ins. Co. and the Ohio Cas. Ins. Co. v. Wexford Home Corp., 2024 IL App (1st) 232311-U, demonstrates this ongoing evolution.
The plaintiff in a putative class action lawsuit sued Wexford Home Corporation (“Wexford”), alleging that Wexford violated BIPA by collecting, recording, storing, sharing, and disclosing its employees’ biometric information without complying with BIPA’s statutory disclosure limitations. Wexford tendered the putative class action lawsuit to its insurers, Ohio Security Insurance Company and Ohio Casualty Insurance Company, both of which denied coverage and filed a declaratory judgment action seeking a ruling that the insurers had no duty to defend or indemnify Wexford.
The insurers argued that there was no duty to defend or indemnify based on three exclusions: (1) the “Recording And Distribution Of Material Or Information In Violation Of Law” exclusion (“Recording and Distribution Exclusion”), (2) the “Exclusion-Access Or Disclosure Of Confidential And Data-Related Liability-With Limited Bodily Injury Exception,” and (3) the “Employment-Related Practices Exclusion.”
The parties cross-moved for judgment on the pleadings, and the trial court granted judgment for Wexford, finding that the insurers owed a defense. The trial court reasoned that publication of material that violates a person’s right to privacy met the policies’ definition of personal and advertising injury, and therefore no exclusions applied to bar coverage. The insurers appealed. Although the insurers did not challenge the trial court’s ruling that the alleged BIPA claims qualified as personal or advertising injury sufficient to trigger coverage, they maintained that the trial court erred by not applying the three exclusions.
On appeal, the court focused on the Recording and Distribution Exclusion, which purports to bar coverage where the personal or advertising injury arises from the violation of any of three enumerated statutes (TCPA, CAN-SPAM Act, and FCRA) or any other statute that falls within a broad “catch all” provision that expands the exclusion to include violations of “[a]ny federal, state or local statute, ordinance or regulations other than the [three enumerated statutes] that addresses, prohibits, or limits the printing, dissemination, disposal, collecting, recording, sending, transmitting, communicating or distribution of material or information.”
The court relied on its earlier decision, National Fire Ins. Co. of Hartford and Cont’l Ins. Co. v. Visual Park Co., Inc., 2023 IL App (1st) 221160, in which it found an identical Recording and Distribution Exclusion to bar coverage for BIPA claims. That decision, however, represented a departure from earlier decisions finding that similar catchall provisions did not encompass BIPA claims. For example, in W. Bend Mut. Ins. Co. v. Krishna Schaumburg Tan, Inc., 2021 IL 125978, 183 N.E.3d 47 (May 20, 2021), the Illinois Supreme Court explained that the interpretive canon of ejusdem generis (under which general words following an enumeration of specific persons or things apply only to persons or things of the same general kind or class as those enumerated) required that a similar catchall exclusion be given limited reach and not extend to BIPA claims.
In Visual Park, on the other hand, the appellate court concluded that a catchall provision like the one in Wexford was materially different from, and broader than, prior versions of the exclusion. According to the Visual Park court, the exclusion’s reference to “disposal,” “collecting,” or “recording” of material or information sufficiently encompassed BIPA violations, whereas prior versions apparently did not. The appellate court again applied ejusdem generis to assess the exclusion’s intended reach, reasoning that because the specifically enumerated statutes in the Recording and Distribution Exclusion protected personal information and privacy, the general catchall must have been intended to do so as well.
As Wexford, Visual Park, and the pre-Visual Park decisions illustrate, insurers are broadening the scope of exclusions that potentially apply to BIPA-related claims. Policyholders should carefully review their policies annually to identify changes in wording that might have a material impact on the scope of coverage. Experienced brokers and coverage counsel can help to ensure that material changes are identified early and, where appropriate, modified or deleted by endorsement.