Trump Administration Unveils New AI Policy, Reverses Biden’s Regulatory Framework

Early signals from the Trump administration suggest it may move away from the Biden administration’s regulatory focus on the impact of artificial intelligence (AI) and automated decision-making technology on consumers and workers. This federal policy shift could result in an uptick in state-based AI regulation.

Quick Hits

On January 23, 2025, President Trump signed an executive order to develop an action plan to enhance AI technology’s growth while reviewing and potentially rescinding prior policies to regulate its use.
The Trump administration is reversing Biden-era guidance on AI and emphasizing the need for minimal barriers to foster innovation and U.S. leadership in artificial intelligence.
The administration is working closely with tech leaders and has tapped a tech investor and former executive as the newly created White House AI & Crypto Czar to guide policy development in the AI sector.
State legislators may step in to fill the regulatory gap.

As part of a flurry of executive action during the first week of President Donald Trump’s second term, the president rescinded a Biden-era executive order (EO) issued on October 30, 2023, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which sought to create safeguards for the “responsible development and use of AI.”
Further, on January 23, 2025, President Trump took action to shape the development of AI technology, signing EO 14179, “Removing Barriers to American Leadership in Artificial Intelligence.” The order states, “It is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”
AI Executive Order
President Trump’s EO 14179 directs the development, within 180 days, of an “action plan to achieve” the EO’s AI policy. That plan is to be developed by the “Assistant to the President for Science and Technology (APST), the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs (APNSA), in coordination with the Assistant to the President for Economic Policy, the Assistant to the President for Domestic Policy, the Director of the Office of Management and Budget (OMB Director), and the heads of such executive departments and agencies (agencies) as the APST and APNSA deem relevant.”
The order also mandates that the heads of relevant agencies immediately review all policies, directives, regulations, and other actions taken under President Biden’s now-revoked EO 14110 and identify any that are inconsistent with the new EO’s policy objectives. Inconsistent actions are to be suspended, revised, or rescinded as appropriate to ensure that federal guidelines and regulations do not impede the nation’s role as an AI leader.
The EO directs the OMB Director, in coordination with the APST, to revise two specific OMB memoranda “as necessary to make them consistent” with the president’s new AI policy:

OMB Memorandum M-24-10, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence,” issued in March 2024, which established governance and risk management requirements for federal agencies’ use of AI, including biennial AI compliance plans submitted to OMB.
OMB Memorandum M-24-18, “Advancing the Responsible Acquisition of Artificial Intelligence in Government,” issued in October 2024.

Shifting AI Policy
The Biden administration sought to create safeguards addressing the development of AI technology, its impact on labor markets and potential displacement of workers, and the use of AI and automated decision-making tools to make employment decisions and evaluate worker performance.
In November 2024, the U.S. Department of Labor (DOL) issued guidance on AI, detailing principles and best practices for employers using AI in the workplace. That guidance built on prior guidance published by the DOL’s Wage and Hour Division and Office of Federal Contract Compliance Programs. In 2022 and 2023, the U.S. Equal Employment Opportunity Commission (EEOC) issued guidance on employers’ use of AI tools and the potential for discrimination. As of the date of publication of this article, the EEOC’s AI guidance has been removed from its website.
However, in a fact sheet published on January 23, 2025, the Trump administration stated that the “Biden AI Executive Order established unnecessarily burdensome requirements for companies developing and deploying AI that would stifle private sector innovation and threaten American technological leadership.” According to the fact sheet, the “development of AI systems must be free from ideological bias or engineered social agendas.”
President Trump is also reportedly working closely with many tech company leaders and AI developers. The president tapped investor and former tech executive David Sacks as the newly created “White House AI & Crypto Czar,” who will help shape policy around emerging technologies.
Next Steps
The Trump administration’s shift in AI policy marks a substantial departure from the previous administration’s focus. By rescinding Biden-era executive orders and implementing new directives to foster innovation, the Trump administration seeks to remove perceived barriers to the development of artificial intelligence technology.
Although the new administration has expressed its intent to deregulate this area, many states and jurisdictions have taken a different position, including California, Colorado, Illinois, and New York City. Other states may also consider filling the gap created by the absence of federal agency action on AI in employment.
In light of this, employers may want to continue to implement policies and procedures that protect the workplace from unintended consequences of AI use, including maintaining an AI governance team, establishing policies and practices for the safe use of AI in the workplace, enhancing cybersecurity practices, auditing results to identify and correct unintended consequences (including bias), and maintaining an appropriate level of human oversight.
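For employers wondering what “auditing results to identify and correct unintended consequences (including bias)” might look like in practice, the sketch below applies the EEOC’s four-fifths rule of thumb to a set of AI-generated screening outcomes. It is a minimal illustration under stated assumptions; the data format, function names, and 0.8 threshold are choices made for this example, not a legally prescribed methodology.

```python
from collections import Counter

def selection_rates(outcomes):
    """Per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the EEOC four-fifths rule of thumb)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: round(r / top, 3) for g, r in rates.items() if r / top < threshold}

# Hypothetical pass/fail output of an AI screening tool for two groups.
decisions = [("A", True)] * 60 + [("A", False)] * 40 \
          + [("B", True)] * 35 + [("B", False)] * 65
print(four_fifths_flags(decisions))  # {'B': 0.583} -> flag group B for review
```

A flagged ratio is a signal for human review, not a legal conclusion; it tells the governance team where to look first.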

Judge Denies Kochava’s Motion to Dismiss FTC’s Suit Over Selling Geolocation Data

On February 3, 2025, U.S. District Judge B. Lynn Winmill of the District of Idaho denied digital marketing data broker Kochava Inc.’s motion to dismiss a suit brought by the Federal Trade Commission. As previously reported, in August 2022, the FTC announced a civil action against Kochava for “selling geolocation data from hundreds of millions of mobile devices that can be used to trace the movements of individuals to and from sensitive locations.” 
In the order denying Kochava’s motion to dismiss, Judge Winmill rejected Kochava’s argument that Section 5 of the FTC Act is limited to tangible injuries and wrote that the “FTC has plausibly pled that Kochava’s practices are unfair within the meaning of the FTC Act.”

The Colorado AI Act: Implications for Health Care Providers

Artificial intelligence (AI) is increasingly being integrated into health care operations, from administrative functions such as scheduling and billing to clinical decision-making, including diagnosis and treatment recommendations. Although AI offers significant benefits, concerns regarding bias, transparency, and accountability have prompted regulatory responses. Colorado’s Artificial Intelligence Act (the Act), set to take effect on February 1, 2026, imposes governance and disclosure requirements on entities deploying high-risk AI systems, particularly those involved in consequential decisions affecting health care services and other critical areas.
Given the Act’s broad applicability, including its potential extraterritorial reach for entities conducting business in Colorado, health care providers must proactively assess their AI utilization and prepare for compliance with forthcoming regulations. Below, we discuss the intent of the Act, what types of AI it applies to, future regulation, potential impact on providers, statutory compliance requirements, and enforcement mechanisms.
1. What Is the Act Trying to Protect Against?
The Act primarily seeks to mitigate algorithmic discrimination, defined as AI-driven decision-making that results in unlawful differential treatment or disparate impact on individuals based on certain characteristics, such as race, disability, age, or language proficiency. The Act seeks to prevent AI from reinforcing existing biases or making decisions that unfairly disadvantage particular groups.
Examples of Algorithmic Discrimination in Health Care

Access to Care Issues: AI-powered phone scheduling systems may fail to recognize certain accents or accurately process non-English speakers, making it more difficult for non-native English speakers to schedule medical appointments.
Biased Diagnostic Tools and Treatment Recommendations: Some AI diagnostic tools may recommend different treatments for patients of different ethnicities, not because of medical evidence but due to biases in the training data. For instance, an AI model trained primarily on data from white patients might miss early signs of disease that present differently in Black or Hispanic patients, resulting in inaccurate or less effective treatment recommendations for historically marginalized populations.
By targeting these and other AI-driven inequities, the Act aims to ensure automated systems do not reinforce or exacerbate existing disparities in health care access and outcomes.

2. What Types of AI Are Addressed by the Act?
The Act applies broadly to businesses using AI to interact with or make decisions about Colorado residents. Although certain high-risk AI systems — those that make, or are a substantial factor in making, consequential decisions — are subject to more stringent requirements, the Act imposes obligations on most AI systems used in health care.
Key Definitions in the Act

“Artificial Intelligence System” means any machine-based system that generates outputs — such as decisions, predictions, or recommendations — that can influence real-world environments.
“Consequential Decision” means a decision that materially affects a consumer’s access to or cost of health care, insurance, or other essential services.
“High-Risk AI System” means any AI tool that makes or substantially influences a consequential decision.
“Substantial Factor” means a factor that assists in making a consequential decision or is capable of altering the outcome of a consequential decision and is generated by an AI system.
“Developers” means creators of AI systems.
“Deployers” means users of high-risk AI systems.

3. How Can Health Care Providers Ensure Compliance?
Although the Act sets out broad obligations, specific regulations are still forthcoming. The Colorado Attorney General has been tasked with developing rules to clarify compliance requirements. These regulations may address:

Risk management and compliance frameworks for AI systems.
Disclosure requirements for AI usage in consumer-facing applications.
Guidance on evaluating and mitigating algorithmic discrimination.

Health care providers should monitor developments as the regulatory framework evolves to ensure their AI-related practices align with state law.
4. How Could the Act Impact Health Care Operations?
The Act will require health care providers to specifically evaluate how they use AI across various operational areas, as the Act applies broadly to any AI system that influences decision-making. Given AI’s growing role in patient care, administrative functions, and financial operations, health care organizations should anticipate compliance obligations in multiple domains.
Billing and Collections

AI-driven billing and claims processing systems should be reviewed for potential biases that could disproportionately target specific patient demographics for debt collection efforts.
Deployers should ensure that their AI systems do not inadvertently create financial barriers for specific patient groups.

Scheduling and Patient Access

AI-powered scheduling assistants must be designed to accommodate patients with disabilities and limited English proficiency to prevent inadvertent discrimination and delayed access to care.
Providers must evaluate whether their AI tools prioritize certain patients over others in a way that could be deemed discriminatory.

Clinical Decision-Making and Diagnosis

AI diagnostic tools must be validated to ensure they do not produce biased outcomes for different demographic groups.
Health care organizations using AI-assisted triage tools should establish protocols for reviewing AI-generated recommendations to ensure fairness and accuracy.

5. If You Use AI, With What Do You Need to Comply?
The Act establishes different obligations for Developers and Deployers. Health care providers will in most cases be “Deployers” of AI systems as opposed to Developers. Health care providers will want to scrutinize contractual relationships with Developers for appropriate risk allocation and information sharing as providers implement AI tools into their operations.

Obligations of Developers (AI Vendors)

Disclosures to Deployers: Developers must provide transparency about the AI system’s training data, known biases, and intended use cases.
Risk Mitigation: Developers must document efforts to minimize algorithmic discrimination.
Impact Assessments: Developers must evaluate whether the AI system poses risks of discrimination before deploying it.

Obligations of Deployers (e.g., Health Care Providers)

Duty to Avoid Algorithmic Discrimination

Deployers of high-risk AI systems must use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination.

Risk Management Policy & Program

Deployers must implement a risk management policy and program that identifies, documents, and mitigates risks of algorithmic discrimination.
The program must be iterative, regularly updated, and aligned with recognized AI risk management frameworks.
Requirements vary based on the deployer’s size, complexity, AI system scope, and data sensitivity.

Impact Assessments (Regular & Event-Triggered Reviews)

Timing Requirements: Deployers must conduct impact assessments (a scheduling sketch follows this list):

Before deploying any high-risk AI system.
At least annually for each deployed high-risk AI system.
Within 90 days after any intentional and substantial modification to the AI system.

Required Content: Each impact assessment must include the AI system’s purpose, intended use, and benefits, an analysis of risks of algorithmic discrimination and mitigation measures, a description of data processed (inputs, outputs, and any customization data), performance metrics and system limitations, transparency measures (including consumer disclosures), and details on post-deployment monitoring and safeguards.
Special Requirements for Modifications: If an impact assessment is conducted due to a substantial modification, it must also include an explanation of how the AI system’s actual use aligned with or deviated from its originally intended purpose.
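For compliance teams that track these deadlines in software, the minimal sketch below models the Act’s three assessment triggers described above. The class, field names, and date logic are illustrative assumptions; the Act does not prescribe any particular tooling.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class HighRiskSystem:
    name: str
    deployed_on: date | None = None       # None until the system goes live
    last_assessed_on: date | None = None  # most recent impact assessment
    modified_on: date | None = None       # last intentional, substantial modification

    def assessment_due(self, today: date) -> str | None:
        """Return the reason an impact assessment is due, if any."""
        if self.deployed_on is not None and self.last_assessed_on is None:
            return "pre-deployment assessment missing"
        if self.last_assessed_on is None:
            return None  # not yet deployed; the assessment gates the launch itself
        # Within 90 days after an intentional and substantial modification.
        if self.modified_on and self.modified_on > self.last_assessed_on:
            return "post-modification review due (90-day window)"
        # At least annually for each deployed high-risk system.
        if today >= self.last_assessed_on + timedelta(days=365):
            return "annual reassessment due"
        return None

triage_ai = HighRiskSystem("triage-assistant",
                           deployed_on=date(2026, 3, 1),
                           last_assessed_on=date(2026, 2, 15))
print(triage_ai.assessment_due(date(2027, 3, 1)))  # annual reassessment due
```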

Notifications & Transparency

Public Notice: Deployers must publish a statement on their website describing the high-risk AI systems they use and how they manage discrimination risks.
Notices to Patients/Employees: Before an AI system makes a consequential decision, individuals must be notified of its use.
Post-Decision Explanation: If AI contributes to an adverse decision, deployers must explain its role and allow the individual to appeal or correct inaccurate data.
Attorney General Notifications: If AI is found to have caused algorithmic discrimination, deployers must notify the Attorney General within 90 days.

Small deployers (those with fewer than 50 employees) who do not train AI models with their own data are exempt from many of these compliance obligations.
6. How Is the Act Enforced?

Only the Colorado Attorney General has enforcement authority.
A rebuttable presumption of compliance exists if Deployers follow recognized AI risk management frameworks.
There is no private right of action, meaning consumers cannot sue directly under the Act.

Health care providers should take early action to assess their AI usage and implement compliance measures.
Final Thoughts: What Health Care Providers Should Do Now

The Act represents a significant shift in AI regulation, particularly for health care providers who increasingly rely on AI-driven tools for patient care, administrative functions, and financial operations.
Although the Act aims to enhance transparency and mitigate algorithmic discrimination, it also imposes substantial compliance obligations. Health care organizations will have to assess their AI usage, implement risk management protocols, and maintain detailed documentation.
Given the evolving regulatory landscape, health care providers should take a proactive approach by auditing existing AI systems, training staff on compliance requirements, and establishing governance frameworks that align with best practices. As rulemaking by the Colorado Attorney General progresses, staying informed about additional regulatory requirements will be critical to ensuring compliance and avoiding enforcement risks.
Ultimately, the Act reflects a broader trend toward AI regulation that is likely to extend beyond state borders. Health care organizations that invest in AI governance now will not only mitigate legal risks but also maintain patient trust in an increasingly AI-driven industry.
If health care providers plan to integrate AI systems into their operations, conducting a thorough legal analysis is essential to determine whether the Act applies to their specific use cases. This should also include careful review and negotiation of service agreements with AI Developers to ensure that the provider has sufficient information and cooperation from the Developer to comply with the Act and to properly allocate risk between the parties.

Compliance is not a one-size-fits-all process. It requires careful evaluation of AI tools, their functions, and their potential to influence consequential decisions. Organizations should work closely with legal counsel to navigate the Act’s complexities, implement risk management frameworks, and establish protocols for ongoing compliance. As AI regulations evolve, proactive legal assessment will be crucial to ensuring that health care providers not only meet regulatory requirements but also uphold ethical and equitable AI practices that align with broader industry standards.

LIP GLOSS POPPIN’, TEXTS DROPPIN’: Colourpop’s Late-Night Texts May Have Compliance Floppin’

Greetings TCPAWorld!
Lip gloss is poppin’, lip gloss is cool—but late-night marketing texts? Those might land a brand in court. Listen up, beauty lovers and TCPA watchers—Colourpop Cosmetics is facing a serious touch-up in court over its late-night marketing tactics. A new class action lawsuit filed in the U.S. District Court for the Middle District of Florida claims the company violated federal law by blasting promotional text messages well past bedtime. See Trushel v. Colourpop Cosmetics, LLC, No. 8:25-CV-00282 (M.D. Fla. filed Feb. 4, 2025).
We all know the thrill of a midnight flash sale—one second, you’re winding down for the night, and the next, you’re frantically adding items to your cart before the “FINAL HOURS!” timer runs out. Amazon Prime Day flashbacks, anyone? But there’s a fine line between FOMO marketing and federal law violations, and according to this lawsuit, Colourpop might have crossed it.
So here is the deal. Plaintiff alleges she received multiple late-night texts from Colourpop, including a “$2 Lips” deal and other Cyber Sale alerts sent around 10 PM. That might seem harmless, but here’s the problem—the Telephone Consumer Protection Act (“TCPA”) explicitly bans marketing calls and texts before 8 AM or after 9 PM (local time). See 47 C.F.R. § 64.1200(c)(1).
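For anyone running an SMS program, the quiet-hours rule reduces to a send-time gate keyed to the recipient’s local clock. Below is a minimal sketch using Python’s standard zoneinfo module; the 8 AM/9 PM bounds come from the regulation cited above, while the function and variable names are simply illustrative.

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

QUIET_START = time(21, 0)  # no marketing texts at or after 9 PM local time
QUIET_END = time(8, 0)     # ...or before 8 AM local time

def ok_to_send(utc_now: datetime, recipient_tz: str) -> bool:
    """True if a marketing text may be sent now, per the recipient's local clock."""
    local = utc_now.astimezone(ZoneInfo(recipient_tz)).time()
    return QUIET_END <= local < QUIET_START

# A 10 PM Eastern send -- like the texts alleged in Trushel -- fails the gate.
late_night = datetime(2025, 2, 4, 3, 0, tzinfo=ZoneInfo("UTC"))  # 10 PM ET, Feb 3
print(ok_to_send(late_night, "America/New_York"))  # False
```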
And Colourpop didn’t just allegedly text Plaintiff—it may have done this to thousands of customers across the U.S. over the last four years. That’s why this lawsuit isn’t just about one person’s disrupted sleep cycle—it’s a potential nationwide class action covering anyone in the U.S. who received similar late-night texts from Colourpop. If Colourpop loses, the financial impact could be major. The TCPA allows for damages of $500 per text—which already stings—but if Colourpop knowingly ignored the law? That jumps to $1,500 per message.
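A quick, hypothetical back-of-the-envelope shows why that per-text math escalates so fast (the class size below is invented purely for illustration):

```python
PER_TEXT = 500           # TCPA statutory damages per violating text
PER_TEXT_WILLFUL = 1500  # trebled if the violation was willful or knowing

class_texts = 100_000    # hypothetical number of class-wide late-night texts
print(f"${class_texts * PER_TEXT:,}")          # $50,000,000
print(f"${class_texts * PER_TEXT_WILLFUL:,}")  # $150,000,000
```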
The lawsuit alleges this wasn’t just an innocent mistake. The Complaint asserts that Colourpop’s late-night texts were part of a broader telemarketing strategy—meaning these weren’t one-off messages but part of a deliberate campaign. That distinction matters because it could increase the likelihood that the Court finds Colourpop acted willfully, which raises the potential damages. And here’s another issue—Plaintiff never gave consent to receive messages outside of legal hours.
Interestingly, this isn’t Plaintiff’s first TCPA lawsuit. The same day, Plaintiff sued The Children’s Place, Inc. in the same court, alleging nearly identical violations. See Trushel v. The Children’s Place, Inc., No. 8:25-CV-00284 (M.D. Fla. filed Feb. 4, 2025). According to that Complaint, Plaintiff received late-night marketing texts from The Children’s Place around 10:35 PM and 10:36 PM on separate occasions, and the lawsuit similarly seeks damages under the TCPA’s statutory framework. With two lawsuits filed back-to-back, it raises the question—are these brands engaging in widespread non-compliance, or are plaintiffs becoming increasingly aware of TCPA violations and actively monitoring for missteps? Given the financial penalties, could some consumers opt for promotional texts and wait for a company to slip up with an eye toward litigation? One misstep in your SMS marketing could be more than just a blemish—it could stain your brand. No pun intended.
What makes this case particularly interesting is how Colourpop’s Terms of Use come into play. I did some digging into their website, and their terms contain several provisions: 1) a mandatory arbitration clause requiring disputes to be resolved through JAMS arbitration in Los Angeles County, California; 2) a 60-day notice and informal resolution period before any legal action; 3) a class action waiver requiring all claims to be brought individually; and 4) detailed SMS marketing consent provisions that are notably silent on message timing.
But here’s where things get even more complicated for Colourpop—its SMS Terms of Use might work against it. According to its official policy, Colourpop requires users to “affirmatively opt-in” to receive marketing texts and states that “consent is not required to make any purchase.” That’s standard, but the policy doesn’t say anything about notifying users that messages may arrive at prohibited hours. In other words, just because someone opted in doesn’t mean they agreed to get texts at 10 PM.
What’s more, the Terms include a “Class Action Waiver,” stating that customers agree to resolve disputes through individual arbitration rather than class actions. However, TCPA plaintiffs have successfully challenged these waivers, particularly when courts find them unconscionable or in conflict with consumer protection policies. But let’s be clear—each case has its own legal and factual workup, and enforcing arbitration clauses isn’t a one-size-fits-all exercise. Have you ever read Troutman Amin’s motions to compel arbitration? They are top-notch, crafted with precision, and built to withstand scrutiny. Whether enforcing a waiver or strategically defending against class certification, our team knows how to keep businesses out of costly courtroom battles and in control of their legal strategy. You don’t want to be left covering up legal blemishes—you want a flawless finish. (And yes, my pun game is getting better.)
This lawsuit isn’t just about Colourpop—it’s a reminder to every brand using SMS marketing that timing isn’t just a courtesy; it’s the law. Translation? If your brand hits “send” on promotional texts after 9 PM, you might wake up to a class action lawsuit. The old saying goes, “Nothing good happens after midnight,” but for businesses, it’s starting to look like “nothing safe happens after 9 PM.”
As always,
Keep it legal, keep it smart, and stay ahead of the game.
Talk soon!

Key Considerations for the Prospective Blockchain Investor

Prospective purchasers of blockchain assets can now navigate global exchanges (e.g., Coinbase or Kraken) to invest in various forms of tokens. Investments in tokens, however, are only the tip of the iceberg for those interested in taking on financial exposure to blockchain projects. Here, we provide a high-level overview of common forms of securities that blockchain investors may choose to acquire.

Tokens.
As perhaps the most straightforward form of blockchain investment, tokens can be purchased by investors either from the token issuer directly or through a secondary market. Tokens can take various forms, including utility tokens, security tokens, payment tokens, and stablecoins.

Equity.
Rather than acquiring tokens themselves, investors can purchase equity of companies that either have issued tokens or plan to do so in the future. Whereas certain issuers may distribute tokens directly, others may have a business model related to blockchain more generally. These investments may take the form of an issuer’s common or preferred stock. As an alternative to a purchase of shares, investors may instead receive an instrument convertible into equity, such as a Simple Agreement for Future Equity (a “SAFE”) or a convertible note. As a further incentive for equity investment, issuers may offer an option to purchase tokens at a future point in time (such as a “warrant”; more on this below).

Pre-Purchase Agreement for Tokens.
When an issuer has not yet minted tokens but intends to do so in the near future, the issuer may decide to issue pre-purchase agreements. As a play on the SAFE acronym, a pre-purchase agreement for tokens is often referred to as a Simple Agreement for Future Tokens (or a “SAFT”). However, unlike the standard Y Combinator form of SAFE, there is not yet an industry-standard form for a SAFT, although certain forms have gained popularity.
Below are key considerations to be taken into account when evaluating an investment in a SAFT:

Valuation. Whereas a SAFE typically defers the determination of valuation to a future point in time, a SAFT is often drafted as a pre-purchase agreement with a specified price per token. In exchange for an early investment while the token is still in development, an investor is given a price per token more favorable than that which will be offered to future investors upon a token launch. Less often, a SAFT may be drafted more analogously to a SAFE and defer the valuation question to the future. With this formulation, the price per token will be determined based on the rate offered by the issuer upon token launch and may include a percentage discount, as in the worked example following this list.
Deadline. Unlike a typical form of SAFE, a SAFT often includes a maturity date upon which the investor’s purchase price must be returned if a token has not been issued. The inclusion of a maturity date protects the investor against the risk of a company either pivoting its business and deciding not to issue tokens or failing to develop the intended token. Even with a maturity date, a SAFT is not without risk. Issuers often use funds received pursuant to a SAFT in connection with the development of their project and may not have funds available to make a repayment upon maturity.
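To make the two SAFT pricing formulations concrete, consider the hypothetical numbers below; every figure is invented for illustration.

```python
investment = 100_000.00  # dollars invested under the SAFT

# Formulation 1: a fixed pre-purchase price agreed in the SAFT itself.
fixed_price = 0.05                        # dollars per token
tokens_fixed = investment / fixed_price   # 2,000,000 tokens

# Formulation 2: SAFE-style deferral -- launch price less a discount.
launch_price = 0.10                       # set by the issuer at token launch
discount = 0.20                           # 20% early-investor discount
tokens_deferred = investment / (launch_price * (1 - discount))  # 1,250,000 tokens

print(f"fixed: {tokens_fixed:,.0f} tokens; deferred: {tokens_deferred:,.0f} tokens")
```

Under the fixed formulation the investor’s economics are locked in regardless of where the launch price lands; under the deferral formulation they float with it.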

Token Warrant.
A token warrant provides an investor with the option, but not the obligation, to purchase tokens prior to the warrant’s expiration at a set price. A token warrant may be sold on its own, but it is most often issued alongside another security, such as stock or an instrument convertible into shares of stock.
Below are key considerations to be taken into account when evaluating a token warrant:

Expiration Date. Whereas some warrants expire upon a specific date, others may expire sooner upon the achievement of a milestone. A common milestone is the initial launch of the applicable token. Additionally, token warrants often permit multiple exercises; for example, subsequent exercises may be permitted if additional tokens are launched or if the quantity of a token previously minted is increased.
Price. An investor could pay a nominal amount upon issuance of a token warrant for the right to buy tokens at a price offered in the future; for example, at the price that tokens are offered to insiders or at a discount to the price that tokens are offered to the public. Alternatively, a token warrant could be drafted as a “penny warrant,” whereby the investor is granted a right to purchase tokens for a nominal amount (for example, $0.01 total) in the future.
Allocation. The allocation of tokens to be issued may be provided as a predetermined number of tokens or as a percentage of the future tokens issued. Alternatively, as is often the case when warrants are issued in connection with an equity investment, a warrant may grant the investor a right to purchase their “pro rata” percentage of the future token issuance. Although this calculation may appear straightforward on its face, the formula itself is often a point of considerable discussion between the investor and the issuer. An investor-favorable calculation would permit an investor to purchase their pro rata percentage (or perhaps a multiple of their pro rata percentage) of the total number of tokens generated. Issuers may push back, offering a pro rata issuance only of the percentage of tokens allocated to insiders. The definition of “insiders” is itself a negotiated term and may include the issuer’s founders, other investors, and key employees. Given the significant economic impact that this initially innocuous term may have, all elements of the calculation of the investor’s allocation should be carefully considered by all parties involved; the sketch following this list illustrates the stakes.
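The hypothetical numbers below show how wide the gap between the investor-favorable and issuer-favorable pro rata formulations can be; all figures are invented for illustration.

```python
total_supply = 1_000_000_000  # tokens generated at launch (hypothetical)
insider_pool = 0.20           # share of supply allocated to insiders
pro_rata = 0.05               # investor's fully diluted ownership percentage

# Investor-favorable: pro rata share of the *total* token supply.
investor_favorable = pro_rata * total_supply               # 50,000,000 tokens

# Issuer-favorable: pro rata share of the *insider* allocation only.
issuer_favorable = pro_rata * insider_pool * total_supply  # 10,000,000 tokens

print(f"{investor_favorable:,.0f} vs. {issuer_favorable:,.0f} tokens")
```

On these numbers, the same “pro rata” language yields a fivefold difference in the investor’s token allocation.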

Additional Terms for Consideration.

Vesting. In any transaction that involves the future issuance of tokens, there is likely to be a tension between the issuer, who, in order to avoid an immediate sell-off upon grant, would prefer to lock up the tokens from future sales for an extended period of time, and the investor, who would prefer to be permitted to freely transfer their tokens as soon as permitted by law (which in the United States may involve compliance with transfer restrictions under the securities laws). Investors may consider requesting that the tokens vest in accordance with a predetermined schedule, whereas issuers may prefer to retain flexibility to set a vesting schedule upon the token launch. As a middle ground, parties may consider permitting a lock-up of tokens to be determined by the issuer at launch, but in no event more restrictive than the least restrictive vesting schedule applicable to any insiders.
Governing Law. Across jurisdictions, the treatment of tokens continues to evolve at a rapid pace. Investors must consider not only the applicable law in the jurisdiction in which they reside, but also the local laws of the token issuer. Given the potential regulatory disparity between jurisdictions, investors should seek the advice of local counsel.
Regulatory Updates. In the United States, blockchain tokens are likely to be considered “securities.” However, this is a rapidly evolving regulatory landscape under which the classification and treatment of blockchain tokens may be reconsidered. For example, the SEC has recently created a new “Crypto Task Force” and has rescinded certain controversial staff guidance related to the accounting treatment of custodied crypto assets. Investors and issuers alike should engage legal counsel to ensure continued compliance with all applicable laws and regulations.

ONE-TO-ONE IS DEAD – OR IS IT?: Three Reasons Why One-To-One Consent May Still Be a Thing After All

Another big one this Friday.
Everyone is saying one-to-one consent is dead (heck, even I have said it).
But is it really?
Consider:
First, the wireless carriers still require one-to-one opt-in for SMS traffic on their networks. T-Mobile, for instance, outright bans the sharing of collected consents for messaging on its network, essentially treating lead generation the same as scams, gambling, and illegal conduct. So SMS isn’t getting through unless it is one-to-one!
THIS issue is why the R.E.A.C.H. petition asking the FCC to stop content-based call and text blocking is so important – these restrictions are categorically unconstitutional, yet the carriers censor speech in plain sight.
Second, speaking of R.E.A.C.H. – the R.E.A.C.H. Standards still require one-to-one consent. So expect a huge number of buyers to continue to require such consent to comply with those standards. (The board is meeting today and we may see some softening on one-to-one in light of the 11th Circuit ruling – more on that soon.)
Third, many buyers want the option to purchase one-to-one consent regardless. Such consents convert better, and I have heard that in many instances they come with a lower CPA. So this is an attractive product.
Will everyone move to one-to-one? Doesn’t seem likely. But looks like it will be much more durable than expected.

New York State Legislature Passes Amendment to the New York Retail Worker Safety Act

Although later than anticipated, the New York State Legislature has just passed an amendment to the New York Retail Worker Safety Act (S8358C/A8947C, Chapter 308) that would extend the effective date of the act’s workplace violence prevention policy, training, and notice provisions from March 4, 2025, to June 2, 2025.
While this amendment (S740/A1678) still needs to be presented to Governor Kathy Hochul to be signed into law, employers that are preparing for compliance can take note of the changes.
Quick Hits

The New York State Legislature has passed an amendment to the New York Retail Worker Safety Act that extends the effective date for workplace violence prevention policies, training, and notice provisions from March 4, 2025, to June 2, 2025.
The amendment requires employers with 500 or more retail employees statewide to provide “silent response buttons” (SRBs) for internal alerts, adjusts training requirements for smaller employers, and mandates state model templates in multiple languages. (The effective date of the SRB requirement remains January 1, 2027.)

The amendment also modifies the following other provisions of the act:

“Panic Buttons” that would alert law enforcement are now replaced with “silent response buttons” (SRBs) that alert internal staff (security officers, managers, or supervisors).
SRBs are now required for employers with 500 or more retail employees statewide rather than nationwide.
Employers with fewer than fifty retail employees now only need to provide workplace violence training to their retail employees upon hire, and then every other year, rather than annually.
New York State model templates will now be issued in English and the twelve most common non-English languages spoken in New York (as determined by data published by the United States Census Bureau).

With the previous effective date right around the corner, the amendment’s extension of the compliance deadline to June 2, 2025, may come as a breath of fresh air for employers still working on their workplace violence policies and training programs. The amendment does not change the effective date for the SRB requirement, which remains January 1, 2027.
Because Governor Hochul was actively involved in advancing the Retail Worker Safety Act and its amendment, it is anticipated that she will sign the amendment into law.

Colorado’s AI Task Force Proposes Updates to State’s AI Law

Colorado’s Consumer Protections in Interactions with Artificial Intelligence Systems Act (the Act) will impose obligations on developers and deployers of artificial intelligence (AI). The Colorado Artificial Intelligence Impact Task Force recently issued a report outlining potential areas where the Act can be “clarified, refined[,] and otherwise improved.”
The Task Force’s mission is to review issues related to AI and automated decision systems (ADS) affecting consumers and employees. The Task Force met on several occasions and prepared a report summarizing its findings, including recommendations to:

Revise the Act’s definition of the types of decisions that qualify as “consequential decisions,” as well as the definition of “algorithmic discrimination,” “substantial factor,” and “intentional and substantial modification;”
Revamp the list of exemptions to what qualifies as a “covered decision system;”
Change the scope of the information and documentation that developers must provide to deployers;
Update the triggering events and timing for impact assessments as well as changes to the requirements for deployer risk management programs;
Replace or recalibrate the duty of care standard for developers and deployers (i.e., consider whether the standard should be more or less stringent);
Consider whether to narrow or expand the small business exemption (the current exemption under the Act is for businesses with fewer than 50 employees);
Consider whether businesses should be provided a cure period for certain types of non-compliance before Attorney General enforcement under the Act; and,
Revise the trade secret exemptions and provisions related to a consumer’s right to appeal.

As of today, the requirements for AI developers and deployers under the Act go into effect on February 1, 2026. However, the Task Force recommends reconsidering the law’s implementation timing. We will continue to track this first-of-its-kind AI law. 

The BR Privacy & Security Download: February 2025

STATE & LOCAL LAWS & REGULATIONS
New York Legislature Passes Comprehensive Health Privacy Law: The New York state legislature passed SB-929 (the “Bill”), providing for the protection of health information. The Bill broadly defines “regulated health information” as “any information that is reasonably linkable to an individual, or a device, and is collected or processed in connection with the physical or mental health of an individual.” Regulated health information includes location and payment information, as well as inferences derived from an individual’s physical or mental health. The term “individual” is not defined. Accordingly, the Bill contains no terms restricting its application to consumers acting in an individual or household context. The Bill would apply to regulated entities, which are entities that (1) are located in New York and control the processing of regulated health information, or (2) control the processing of regulated health information of New York residents or individuals physically present in New York. Among other things, the Bill would restrict regulated entities to processing regulated health information only with a valid authorization, or when strictly necessary for certain specified activities. The Bill also provides for individual rights and requires the implementation of reasonable administrative, physical, and technical safeguards to protect regulated health information. The Bill would take effect one year after being signed into law and currently awaits New York Governor Kathy Hochul’s signature.
New York Data Breach Notification Law Updated: Two bills, S2659 and S2376, amending the state’s data breach notification law were signed into law by New York Governor Kathy Hochul. The bills change the timing requirement in which notice must be provided to New York residents, add data elements to the definition of “private information,” and add the New York Department of Financial Services to the list of regulators that must be notified. Previously, New York’s data breach notification statute did not have a hard deadline within which notice must be provided. The amendments now require affected individuals to be notified no later than 30 days after discovery of the breach, except for delays arising from the legitimate needs of law enforcement. Additionally, as of March 25, 2025, “private information” subject to the law’s notification requirements will include medical information and health insurance information.
California AG Issues Legal Advisory on Application of California Law to AI: California’s Attorney General has issued legal advisories to clarify that existing state laws apply to AI development and use, emphasizing that California is not an AI “wild west.” These advisories cover consumer protection, civil rights, competition, data privacy, and election misinformation. AI systems, while beneficial, present risks such as bias, discrimination, and the spread of disinformation. Therefore, entities that develop or use AI must comply with all state, federal, and local laws. The advisories highlight key laws, including the Unfair Competition Law and the California Consumer Privacy Act. The advisories also highlight new laws effective on January 1, 2025, which include disclosure requirements for businesses, restrictions on the unauthorized use of likeness, and regulations for AI use in elections and healthcare. These advisories stress the importance of transparency and compliance to prevent harm from AI.
New Jersey AG Publishes Guidance on Algorithmic Discrimination: On January 9, 2025, New Jersey’s Attorney General and Division on Civil Rights announced a new civil rights and technology initiative to address the risks of discrimination and bias-based harassment in AI and other advanced technologies. The initiative includes the publication of a Guidance Document, which addresses the applicability of New Jersey’s Law Against Discrimination (“LAD”) to automated decision-making tools and technologies. It focuses on the threats posed by automated decision-making technologies in the housing, employment, healthcare, and financial services contexts, emphasizing that the LAD applies to discrimination regardless of the technology at issue. Also included in the announcement is the launch of a new Civil Rights Innovation lab, which “will aim to leverage technology responsibly to advance [the Division’s] mission to prevent, address, and remedy discrimination.” The Lab will partner with experts and relevant industry stakeholders to identify and develop technology to enhance the Division’s enforcement, outreach, and public education work, and will develop protocols to facilitate the responsible deployment of AI and related decision-making technology. This initiative, along with the recently effective New Jersey Data Protection Act, shows a significantly increased focus from the New Jersey Attorney General on issues relating to data privacy and automated decision-making technologies.
New Jersey Publishes Comprehensive Privacy Law FAQs: The New Jersey Division of Consumer Affairs Cyber Fraud Unit (“Division”) published FAQs that provide a general summary of the New Jersey Data Privacy Law (“NJDPL”), including its scope, key definitions, consumer rights, and enforcement. The NJDPL took effect on January 15, 2025, and the FAQs state that controllers subject to the NJDPL are expected to comply by such date. However, the FAQs also emphasize that until July 1, 2026, the Division will provide notice and a 30-day cure period for potential violations. The FAQs also suggest that the Division may adopt a stricter approach to minors’ privacy. While the text of the NJDPL requires consent for processing the personal data of consumers between the ages of 13 and 16 for purposes of targeted advertising, sale, and profiling, the FAQs state that when a controller knows or willfully disregards that a consumer is between the ages of 13 and 16, consent is required to process their personal data more generally.
CPPA Extends Formal Comment Period for Automated Decision-Making Technology Regulations: The California Privacy Protection Agency (“CPPA”) extended the public comment period for its proposed regulations on cybersecurity audits, risk assessments, automated decision-making technology (“ADMT”), and insurance companies under the California Privacy Rights Act. The public comment period opened on November 22, 2024, and was set to close on January 14, 2025. However, due to the wildfires in Southern California, the public comment period was extended to February 19, 2025. The CPPA will also be holding a public hearing on that date for interested parties to present oral and written statements or arguments regarding the proposed regulations.
Oregon DOJ Publishes Toolkit for Consumer Privacy Rights: The Oregon Department of Justice announced the release of a new toolkit designed to help Oregonians protect their online information. The toolkit is designed to help families understand their rights under the Oregon Consumer Privacy Act. The Oregon DOJ reminded consumers how to submit complaints when businesses are not responsive to privacy rights requests. The Oregon DOJ also stated it has received 118 complaints since the Oregon Consumer Privacy Act took effect last July and had sent notices of violation to businesses that have been identified as non-compliant.
California, Colorado, and Connecticut AGs Remind Consumers of Opt-Out Rights: California Attorney General Rob Bonta published a press release reminding residents of their right to opt out of the sale and sharing of their personal information. The California Attorney General also cited the robust privacy protections of Colorado and Connecticut laws that provide for similar opt-out protections. The press release urged consumers to familiarize themselves with the Global Privacy Control (“GPC”), a browser setting or extension that automatically signals to businesses that they should not sell or share a consumer’s personal information, including for targeted advertising. The Attorney General also provided instructions for the use of the GPC and for exercising opt-outs by visiting the websites of individual businesses.
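Operationally, honoring the GPC comes down to treating a single request header as an opt-out signal. Under the published GPC specification, that header is Sec-GPC: 1 (browsers also expose the signal to scripts as navigator.globalPrivacyControl). The handler below is a generic, framework-agnostic sketch; the downstream helper it calls is hypothetical.

```python
def gpc_opt_out(headers: dict[str, str]) -> bool:
    """True if the request carries a Global Privacy Control opt-out signal.

    Per the GPC spec, the signal is the Sec-GPC request header set to "1".
    """
    return headers.get("Sec-GPC", "").strip() == "1"

def handle_request(headers: dict[str, str], user_id: str) -> None:
    if gpc_opt_out(headers):
        # Treat the signal as a valid request to opt out of sale/sharing,
        # as the California AG has said businesses must under the CCPA.
        suppress_sale_and_sharing(user_id)

def suppress_sale_and_sharing(user_id: str) -> None:
    """Hypothetical downstream helper that records the opt-out."""
    print(f"opt-out of sale/sharing recorded for {user_id}")

handle_request({"Sec-GPC": "1"}, "consumer-123")
```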

FEDERAL LAWS & REGULATIONS
FTC Finalizes Updates to COPPA Rule: The FTC announced the finalization of updates to the Children’s Online Privacy Protection Rule (the “Rule”). The updated Rule makes a number of changes, including requiring opt-in consent to engage in targeted advertising to children and to disclose children’s personal information to third parties. The Rule also adds biometric identifiers to the definition of personal information and prohibits operators from retaining children’s personal information for longer than necessary for the specific documented business purposes for which it was collected. Operators must maintain a written data retention policy that documents the business purpose for data retention and the retention period for data. The Commission voted 5-0 to adopt the Rule, but new FTC Chair Andrew Ferguson filed a separate statement describing “serious problems” with the rule. Ferguson specifically stated that it was unclear whether an entirely new consent would be required if an operator added a new third party with whom personal information would be shared, potentially creating a significant burden for businesses. The Rule will be effective 60 days after its publication in the Federal Register.
Trump Rescinds Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence: President Donald Trump took action to rescind former President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“AI EO”). According to a Biden administration statement released in October, many action items from the AI EO have already been completed. Recommendations, reports, and opportunities for research that were completed prior to revocation of the AI EO may continue in place unless replaced by additional federal agency action. It remains unclear whether the Trump Administration will issue its own executive orders relating to AI.
U.S. Justice Department Issues Final Rule on Transfer of Sensitive Personal Data to Foreign Adversaries: The U.S. Justice Department issued final regulations to implement a presidential Executive Order regarding access to bulk sensitive personal data of U.S. citizens by foreign adversaries. The regulations restrict transfers involving designated countries of concern – China, Cuba, Iran, North Korea, Russia, and Venezuela. At a high level, transfers are restricted if they could result in bulk sensitive personal data access by a country of concern or a “covered person,” which is an entity that is majority-owned by a country of concern, organized under the laws of a country of concern, or has its principal place of business in a country of concern, or an individual whose primary residence is in a country of concern. Data covered by the regulation includes precise geolocation data, biometric identifiers, genetic data, health data, financial data, government-issued identification numbers, and certain other identifiers, including device or hardware-based identifiers, advertising identifiers, and demographic or contact data.
First Complaint Filed Under Protecting Americans’ Data from Foreign Adversaries Act: The Electronic Privacy Information Center (“EPIC”) and the Irish Council for Civil Liberties (“ICCL”) Enforce Unit filed the first-ever complaint under the Protecting Americans’ Data from Foreign Adversaries Act (“PADFAA”). PADFAA makes it unlawful for a data broker to sell, license, rent, trade, transfer, release, disclose, or otherwise make available specified personally identifiable sensitive data of individuals residing in the United States to North Korea, China, Russia, Iran, or an entity controlled by one of those countries. The complaint alleges that Google’s real-time bidding system data includes personally identifiable sensitive data, that Google executives were aware that data from its real-time bidding system may have been resold, and that Google’s public list of certified companies that receive real-time bidding bid request data includes multiple companies based in foreign adversary countries.
FDA Issues Draft Guidance for AI-Enabled Device Software Functions: The U.S. Food and Drug Administration (“FDA”) published its January 2025 Draft Guidance for Industry and FDA Staff regarding AI-enabled device software functionality. The Draft provides recommendations regarding the contents of marketing submissions for AI-enabled medical devices, including documentation and information that will support the FDA’s evaluation of their safety and effectiveness. The Draft Guidance is designed to reflect a “comprehensive approach” to the management of devices through their total product life cycle and includes recommendations for the design, development, and implementation of AI-enabled devices. The FDA is accepting comments on the Draft Guidance, which may be submitted online until April 7, 2025.
Industry Coalition Pushes for Unified National Data Privacy Law: A coalition of over thirty industry groups, including the U.S. Chamber of Commerce, sent a letter to Congress urging it to enact a comprehensive national data privacy law. The letter highlights the urgent need for a cohesive federal standard to replace the fragmented state laws that complicate compliance and stifle competition. The letter advocates for legislation based on principles to empower startups and small businesses by reducing costs and improving consumer access to services. The letter supports granting consumers the right to understand, correct, and delete their data, and to opt out of targeted advertising, while emphasizing transparency by requiring companies to disclose data practices and secure consent for processing sensitive information. It also focuses on the principles of limiting data collection to essential purposes and implementing robust security measures. While the principles aim to override strong state laws like that in California, the proposal notably excludes data broker regulation, a previous point of contention. The coalition cautions against legislation that could lead to frivolous litigation, advocating for balanced enforcement and collaborative compliance. By adhering to these principles, the industry groups seek to ensure legal certainty and promote responsible data use, benefiting both businesses and consumers.
Cyber Trust Mark Unveiled: The White House launched a labeling scheme for internet-of-things devices designed to inform consumers when devices meet certain government-determined cybersecurity standards. The program has been in development for several months and involves collaboration between the White House, the National Institute of Standards and Technology, and the Federal Communications Commission. UL Solutions, a global safety and testing company headquartered in Illinois, has been selected as the lead administrator of the program along with 10 other firms as deputy administrators. With the main goal of helping consumers make more cyber-secure choices when purchasing products, the White House hopes to have products with the new cyber trust mark hit shelves before the end of 2025.

U.S. LITIGATION
Texas Attorney General Sues Insurance Company for Unlawful Collection and Sharing of Driving Data: Texas Attorney General Ken Paxton filed a lawsuit against Allstate and its data analytics subsidiary, Arity. The lawsuit alleges that Arity paid app developers to incorporate its software development kit that tracked location data from over 45 million consumers in the U.S. According to the lawsuit, Arity then shared that data with Allstate and other insurers, who would use the data to justify increasing car insurance premiums. The sale of precise geolocation data of Texans violated the Texas Data Privacy and Security Act (“TDPSA”) according to the Texas Attorney General. The TDPSA requires the companies to provide notice and obtain informed consent to use the sensitive data of Texas residents, which includes precise geolocation data. The Texas Attorney General sued General Motors in August of 2024, alleging similar practices relating to the collection and sale of driver data. 
Eleventh Circuit Overturns FCC’s One-to-One Consent Rule, Upholds Broader Telemarketing Practices: In Insurance Marketing Coalition, Ltd. v. Federal Communications Commission, No. 24-10277, 2025 WL 289152 (11th Cir. Jan. 24, 2025), the Eleventh Circuit vacated the FCC’s one-to-one consent rule under the Telephone Consumer Protection Act (“TCPA”). The court found that the rule, which would have required separate consent for each seller and limited calls to topics logically and topically related to the consented interaction, exceeded the FCC’s authority and conflicted with the statutory meaning of “prior express consent.” This decision allows businesses to continue using broader consent practices, including shared consent agreements. The ruling emphasizes that consent should align with common-law principles rather than be restricted to a single entity. While the FCC’s next steps remain uncertain, the decision reduces compliance burdens and may invite challenges to other TCPA regulations.
California Judge Blocks Enforcement of Social Media Addiction Law: The California Protecting Our Kids from Social Media Addiction Act (the “Act”) has been temporarily blocked. The Act was set to take effect on January 1, 2025, and aims to prevent social media platforms from using algorithms to serve addictive content to children, including provisions that restrict minors’ access to personalized feeds, limit their ability to view likes and other feedback, and restrict third-party interaction. Judge Edward J. Davila initially declined to block key parts of the law but agreed to pause enforcement until February 1, 2025, to allow the Ninth Circuit to review the case. NetChoice, a tech trade group, is challenging the law on First Amendment grounds, arguing that restricting minors’ access to personalized feeds violates the First Amendment. The group has appealed to the Ninth Circuit and is seeking an injunction to prevent the law from taking effect. Judge Davila’s decision recognized the “novel, difficult, and important” constitutional issues presented by the case.

U.S. ENFORCEMENT
FTC Settles Enforcement Action Against General Motors for Sharing Geolocation and Driving Behavior Data Without Consent: The Federal Trade Commission (“FTC”) announced a proposed order to settle FTC allegations that General Motors collected, used, and sold drivers’ precise geolocation data and driving behavior information from millions of vehicles without adequately notifying consumers and obtaining their affirmative consent. The FTC specifically alleged General Motors used a misleading enrollment process to get consumers to sign up for its OnStar-connected vehicle service and Smart Driver feature without proper notice or consent during that process. The information was then sold to third parties, including consumer reporting agencies, according to the FTC. As part of the settlement, General Motors will be prohibited from disclosing driver data to consumer reporting agencies, required to allow consumers to obtain and delete their data, required to obtain consent prior to collection, and required to allow consumers to limit data collected from their vehicles.
FTC Releases Proposed Order Against GoDaddy for Alleged Data Security Failures: The Federal Trade Commission (“FTC”) announced that it has reached a proposed settlement in its action against GoDaddy Inc. (“GoDaddy”) for failing to implement reasonable and appropriate security measures, a failure that resulted in several major data breaches between 2019 and 2022. According to the FTC’s complaint, GoDaddy misled customers about its data security practices, both through claims on its websites and in email and social media ads, and by representing that it was in compliance with the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks. In fact, the FTC found, GoDaddy failed to inventory and manage assets and software updates, assess risks to its shared hosting services, adequately log and monitor security-related events, and segment its shared hosting from less secure environments. The FTC’s proposed order prohibits GoDaddy from misleading customers about its security practices and requires it to implement a comprehensive information security program. GoDaddy must also hire a third-party assessor to conduct biennial reviews of its information security program.
CPPA Reaches Settlements with Additional Data Brokers: Following its announcement of a public investigative sweep of data broker registration compliance, the CPPA has settled with three additional data brokers, PayDae, Inc. d/b/a Infillion (“Infillion”), The Data Group, LLC (“The Data Group”), and Key Marketing Advantage, LLC (“KMA”), for failing to register as data brokers and pay an annual fee as required by California’s Delete Act, which imposes fines of $200 per day for failing to register by the deadline. Infillion will pay $54,200 for failing to register between February 1, 2024, and November 4, 2024. The Data Group will pay $46,600 for failing to register between February 1, 2024, and September 20, 2024. KMA will pay $55,800 for failing to register between February 1, 2024, and November 5, 2024. In addition to the fines, the companies have agreed to injunctive terms.
Mortgage Company Fined by State Financial Regulators for Cybersecurity Breach: Bayview Asset Management LLC and three affiliates (collectively, “Bayview”) agreed to pay a $20 million fine and improve their cybersecurity programs to settle allegations from 53 state financial regulators. The Conference of State Bank Supervisors (“CSBS”) alleged that the mortgage companies had deficient cybersecurity practices and did not fully cooperate with regulators after a 2021 data breach. The data breach compromised data for 5.8 million customers. The coordinated enforcement action was led by financial regulators in California, Maryland, North Carolina, and Washington State. The regulators said the companies’ information technology and cybersecurity practices did not meet federal or state requirements. The firms also delayed the supervisory process by withholding requested information and providing redacted documents in the initial stages of a post-breach exam. The companies also agreed to undergo independent assessments and provide three years of additional reporting to the state regulators.
SEC Reaches Settlement over Misleading Cybersecurity Disclosures: The SEC announced it has settled charges with Ashford Inc., an asset management firm, over misleading disclosures related to a cybersecurity incident. The enforcement action stemmed from a September 2023 ransomware attack that compromised more than 12 terabytes of sensitive hotel customer data, including driver’s licenses and credit card numbers. Despite the breach, Ashford falsely reported in its November 2023 filings that no customer information was exposed. The SEC alleged negligence in Ashford’s disclosures, citing violations of the Securities Act of 1933 and the Exchange Act of 1934. Without admitting or denying the allegations, Ashford agreed to a $115,231 penalty and an injunction. The case underscores the importance of accurate cybersecurity disclosures and the SEC’s continued focus on transparency and accountability in corporate reporting.
FTC Finalizes Data Breach-Related Settlement with Marriott: The FTC has finalized its order against Marriott International, Inc. (“Marriott”) and its subsidiary Starwood Hotels & Resorts Worldwide LLC (“Starwood”). As previously reported, the FTC entered into a settlement with Marriott and Starwood for three data breaches the companies experienced between 2014 and 2020, which collectively impacted more than 344 million guest records. Under the finalized order, Marriott and Starwood are required to establish a comprehensive information security program, implement a policy to retain personal information only for as long as reasonably necessary, and establish a link on their website for U.S. customers to request deletion of their personal information associated with their email address or loyalty rewards account number. The order also requires Marriott to review loyalty rewards accounts upon customer request and restore stolen loyalty points. The companies are further prohibited from misrepresenting their information collection practices and data security measures.
New York Attorney General Settles with Auto Insurance Company over Data Breach: The New York Attorney General settled with automobile insurance company Noblr over a data breach the company experienced in January 2021. Noblr’s online insurance quoting tool exposed full, plaintext driver’s license numbers, including on the backend of its website and in PDFs generated when a purchase was made. The breach impacted the personal information of more than 80,000 New Yorkers and was part of an industry-wide campaign to steal personal information (e.g., driver’s license numbers and dates of birth) from online automobile insurance quoting applications for use in filing fraudulent unemployment claims during the COVID-19 pandemic. Under the settlement, Noblr must pay the New York Attorney General $500,000 in penalties and strengthen its data security measures, including by enhancing its web application defenses and maintaining a comprehensive information security program, a data inventory, access controls (e.g., authentication procedures), and logging and monitoring systems.
FTC Alleges Video Game Maker Violated COPPA and Engaged in Deceptive Marketing Practices: The Federal Trade Commission (“FTC”) has taken action against Cognosphere Pte. Ltd and its subsidiary Cognosphere LLC, also known as HoYoverse, the developer of the game Genshin Impact (“HoYoverse”). The FTC alleges that HoYoverse violated the Children’s Online Privacy Protection Act (“COPPA”) and engaged in deceptive marketing practices. Specifically, the company is accused of unfairly marketing loot boxes to children and misleading players about the odds of winning prizes and the true cost of in-game transactions. To settle these charges, HoYoverse will pay a $20 million fine and is prohibited from allowing children under 16 to make in-game purchases without parental consent. Additionally, the company must provide an option to purchase loot boxes directly with real money and disclose loot box odds and exchange rates. HoYoverse is also required to delete personal information collected from children under 13 without parental consent. The FTC’s actions aim to protect consumers, especially children and teens, from deceptive practices related to in-game purchases.
OCR Finalizes Several Settlements for HIPAA Violations: Prior to the inauguration of President Trump, the U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) brought enforcement actions against four entities, USR Holdings, LLC (“USR”), Elgon Information Systems (“Elgon”), Solara Medical Supplies, LLC (“Solara”) and Northeast Surgical Group, P.C. (“NESG”), for potential violations of the Health Insurance Portability and Accountability Act’s (“HIPAA”) Security Rule due to the data breaches the entities experienced. USR reported that between August 23, 2018, and December 8, 2018, a database containing the electronic protected health information (“ePHI”) of 2,903 individuals was accessed by an unauthorized third party who was able to delete the ePHI in the database. Elgon and NESG each discovered a ransomware attack in March 2023, which affected the protected health information (“PHI”) of approximately 31,248 individuals and 15,298 individuals, respectively. Solara experienced a phishing attack that allowed an unauthorized third party to gain access to eight of Solara’s employees’ email accounts between April and June 2019, resulting in the compromise of 114,007 individuals’ ePHI. As part of their settlements, each of the entities is required to pay a fine to OCR: USR $337,750, Elgon $80,000, Solara $3,000,000, and NESG $10,000. Additionally, each of the entities is required to implement certain data security measures such as conducting a risk analysis, implementing a risk management plan, maintaining written policies and procedures to comply with HIPAA, and distributing such policies or providing training on such policies to its workforce.  
Virginia Attorney General Sues TikTok for Addictive Features and Allowing Chinese Government to Access Data: Virginia Attorney General Jason Miyares announced his office had filed a lawsuit against TikTok and ByteDance Ltd., TikTok’s China-based parent company. The lawsuit alleges that TikTok was intentionally designed to be addictive for adolescent users and that the company deceived parents about TikTok content, including by claiming the app is appropriate for children over the age of 12, in violation of the Virginia Consumer Protection Act.

INTERNATIONAL LAWS & REGULATIONS
UK ICO Publishes Guidance on Pay or Consent Model: On January 23, the UK’s Information Commissioner’s Office (“ICO”) published its Guidance for Organizations Implementing or Considering Implementing Consent or Pay Models. The guidance is designed to clarify how organizations can deploy ‘consent or pay’ models in a manner that gives users meaningful control over the privacy of their information while still supporting their economic viability. The guidance addresses the requirements of applicable UK laws, including PECR and the UK GDPR, and provides extensive guidance as to how appropriate fees may be calculated and how to address imbalances of power. The guidance includes a set of factors that organizations can use to assess their consent models and includes plans to further engage with online consent management platforms, which are typically used by businesses to manage the use of essential and non-essential online trackers. Businesses with operations in the UK should carefully review their current online tracker consent management tools in light of this new guidance.
EU Commission to Pay Damages for Sending IP Address to Meta: The General Court of the European Union has ordered the European Commission to pay a German citizen, Thomas Bindl, €400 in damages for unlawfully transferring his personal data to the U.S., a decision that sets a new precedent in EU data protection litigation. The court found that the Commission breached data protection rules by operating a website with a “sign in with Facebook” option, which resulted in Bindl’s IP address, along with other data, being transferred to Meta without adequate safeguards in place. The transfer occurred during the transition period between the EU-U.S. Privacy Shield and the EU-U.S. Data Protection Framework, which the court determined left Bindl in a position of uncertainty about how his data was being processed. The ruling is significant because it recognizes “intrinsic harm” and may pave the way for large-scale collective redress actions.
European Data Protection Board Releases AI Bias Assessment and Data Subject Rights Tools: The European Data Protection Board (“EDPB”) released two AI tools as part of its “AI: Complex Algorithms and Effective Data Protection Supervision” project. The EDPB launched the project under its Support Pool of Experts program at the request of the German Federal Data Protection Authority. The program aims to help data protection authorities increase their enforcement capacity by developing common tools and giving them access to a wide pool of experts. The new documents address best practices for bias evaluation and the effective implementation of data subject rights, specifically the rights to rectification and erasure, when AI systems have been developed with personal data.
European Data Protection Board Adopts New Guidelines on Pseudonymization: The EDPB released new guidelines on pseudonymization for public consultation (the “Guidelines”). Although pseudonymized data still constitutes personal data under the GDPR, pseudonymization can reduce the risks to the data subjects by preventing the attribution of personal data to natural persons in the course of the processing of the data, and in the event of unauthorized access or use. In certain circumstances, the risk reduction resulting from pseudonymization may enable controllers to rely on legitimate interests as the legal basis for processing personal data under the GDPR, provided they meet the other requirements, or help guarantee an essentially equivalent level of protection for data they intend to export. The Guidelines provide real-world examples illustrating the use of pseudonymization in various scenarios, such as internal analysis, external analysis, and research.
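To make the concept concrete, the following is a minimal, illustrative sketch of one common pseudonymization approach, keyed hashing of a direct identifier. It is offered only as an example of the general technique the Guidelines discuss, not as a method the Guidelines prescribe; the function name, key, and sample value are hypothetical.

```typescript
import { createHmac } from "crypto";

// Replace a direct identifier with a keyed hash (HMAC-SHA-256).
// Holding the secret key separately from the pseudonymized dataset is what
// prevents attribution of the data to a natural person by anyone who
// obtains the dataset alone -- the risk reduction described above.
function pseudonymize(identifier: string, secretKey: string): string {
  return createHmac("sha256", secretKey).update(identifier).digest("hex");
}

// The same input always yields the same token, so records remain linkable
// for internal or external analysis without exposing the identity itself.
const token = pseudonymize("jane.doe@example.com", "key-held-by-controller");
console.log(token); // a stable pseudonym, not the e-mail address
```

Note that because the controller retains the key and can re-identify the data, the output in a sketch like this remains personal data under the GDPR, consistent with the Guidelines’ caution above.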
CJEU Issues Ruling on Excessive Data Subject Requests: On January 9, the Court of Justice of the European Union (“CJEU”) issued its ruling in the case Österreichische Datenschutzbehörde (C‑416/23). The primary question before the Court was when a European data protection authority may deny consumer requests due to their excessive nature. Rather than specifying an arbitrary numerical threshold of requests received, the CJEU found that authorities must consider the relevant facts to determine whether the individual submitting the request has “an abusive intention.” While the number of requests submitted may be a factor in determining this intention, it is not the only factor. Additionally, the CJEU emphasized that Data Protection Authorities should strongly consider charging a “reasonable fee” for handling requests they suspect may be excessive prior to simply denying them.
Daniel R. Saeedi, Rachel L. Schaller, Gabrielle N. Ganz, Ana Tagvoryan, P. Gavin Eastgate, Timothy W. Dickens, Jason C. Hirsch, Tianmei Ann Huang, Adam J. Landy, Amanda M. Noonan, and Karen H. Shin contributed to this article.

The Pre-Suit Notice Requirement and the Actual Knowledge Exception

     Section 2-607(3)(a) of the Uniform Commercial Code, as incorporated into Illinois law as 810 ILCS 5/2-607(3)(a), provides that once a tender of goods has been accepted by the buyer, and a problem with the goods is discovered, “the buyer must within a reasonable time after he discovers or should have discovered any breach [of any express or implied warranty] notify the seller of the breach or be barred from any remedy. . .”
     This timely pre-suit notice made directly by the buyer to the seller is not only standard commercial practice but is an essential element of the buyer’s cause of action for breach of warranty. Branden v. Gerbie, 62 Ill. App. 3d 138, 140 (1st Dist. 1978).
     The notice requirement of section 2-607(3)(a) “generally requires that the plaintiff contact the seller directly and inform the seller of the problems incurred with a particular product that he purchased.” Maldonado v. Creative Woodworking Concepts, 296 Ill. App. 3d (3d Dist. 1998), citing Connick v. Suzuki Motor Co., 174 Ill. 2d 482 (1996).
     “If the problem relates to an injury, the plaintiff must notify the seller that an injury has occurred.” Id., citing 810 ILCS 5/2-607, Note 5.
The Connick Exceptions.
     In the Connick case, cited above, the Illinois Supreme Court discussed two narrow exceptions to the notice requirement. The court explained that “[d]irect notice is not required when (1) the seller has actual knowledge of the defect of the particular product, (citing Malawy v. Richards Manufacturing Co., 150 Ill. App. 3d 549 (5th Dist. 1986)) or (2) the seller is deemed to have been reasonably notified by the filing of the buyer’s complaint alleging breach of UCC warranty, (citing Perona v. Volkswagen of America, Inc., 276 Ill. App. 3d 609 (1st Dist. 1995))” 174 Ill. 2d at 492.
     Narrowing these two exceptions even further, the court held that the first exception, i.e., actual knowledge of the defect on the part of the seller that would excuse the buyer from providing notice of the defect, is applicable only when the seller “is somehow apprised of the trouble with the particular product purchased by a particular buyer.” 174 Ill. 2d at 494 (emphasis added).
     Regarding the second exception, i.e., notice by lawsuit, the court narrowed the exception to suits filed by “a consumer plaintiff who suffers personal injury.” 174 Ill. 2d at 495. Accord, Goldstein v. G.D. Searle & Co., 62 Ill. App. 3d 344 (1st Dist. 1978).
The Actual Knowledge Exception in the Illinois Federal Courts
     The second exception noted above, i.e., notice by lawsuit, is fairly straightforward in its application: either the buyer’s complaint involves a personal injury, or it does not. The first exception, however – actual knowledge of the defect on the part of the seller – is more nuanced, and the federal courts in Illinois have not agreed on the parameters of this exception.
     For example, Stella v. LVMH Perfumes & Cosmetics USA, Inc., 564 F. Supp. 2d 833 (N.D. Ill. 2008), involved a claim against a lipstick manufacturer whose product allegedly exposed the buyer to dangerous levels of lead. A public report by an industry group revealed that the seller’s lipstick contained many times the amount of lead that the U.S. Food and Drug Administration had established as the maximum safe lead level for food. The buyer claimed that she would not have purchased the lipstick had the seller warned of the lead content in the product. The buyer sustained no present personal injury but sought the cost of medical monitoring for the future. She sued the seller under several theories, including breach of the implied warranty of merchantability. The seller moved to dismiss on the ground that the buyer had failed to provide pre-suit notice of her claim.
     The plaintiff-purchaser invoked the actual knowledge exception to the notice requirement, claiming that the aforementioned industry report was sufficient to put the seller on notice of the defect. At the pleading stage of the case, the court said “[w]hen taken as a whole and in the best light to plaintiff, the complaint sufficiently alleged [that the manufacturer] had actual knowledge of the presence of lead in the lipstick . . . This is enough to fit the claim under the first identified exception to the direct notice requirement.” Stella at 837.
     Regarding the second exception – the personal injury exception – the court added that “plaintiff’s claim for medical monitoring is a form of personal injury claim,” Id., citing cases.
     A similar result was reached several years later in Hedges v. Earth, Inc., 2015 U.S. Dist. LEXIS 52318 (N.D. Ill. April 21, 2015), a case involving a shoe that incorporated a “negative heel” which, according to the seller, provided numerous health benefits, including improved posture, reduced joint stress, and stronger core muscles. The plaintiff bought a pair of the Earth Shoes but subsequently learned from numerous studies and scientific research that the seller’s claims of health benefits were unfounded. The buyer’s claims against the seller included an action for breach of express warranty. No pre-suit notice had been given by the buyer to the seller before suit was filed.
     The district court began its analysis by noting the general rule that “pre-suit notice is an essential element of a breach of warranty claim, and the absence of such notice results in dismissal.” Hedges, 2015 U.S. Dist. LEXIS 52318 at *3, (citing cases).
     To excuse his lack of pre-suit notice, the plaintiff-buyer invoked the “actual knowledge” exception to the notice requirement, citing the various news stories and scientific studies that contradicted the seller’s health claims. The seller countered by arguing that “news stories and studies concerning general complaints related to a product line do not provide the specific knowledge of a particular breach as required under Connick.” Id. at *4.
     The district court stated that “the actual knowledge exception is not satisfied just because a company is aware of third-party reports criticizing a product line [and that] ‘even if a manufacturer is aware of problems with a particular product line, the notice requirement of Section 2-607 is satisfied only where the manufacturer is somehow apprised of the trouble with a particular product purchased by a particular buyer.’” Id. at *5, quoting Connick, 174 Ill. 2d at 494.
     The court continued by saying that “this conclusion comports with common sense; even though there are public records about a general problem with a product line, a seller has no way of knowing whether a particular product actually suffers from the defect until the buyer provides notice of the alleged defect.” Id.
     Despite these correct pronouncements of the law, the court held that the buyer “has alleged a set of facts that, if true, demonstrate that [the seller] had actual knowledge of the particular defect with the particular shoe that [plaintiff] purchased.” Id. at *6-7.
     According to the facts in the complaint, the court concluded, the seller “must have known that the particular pair of shoes it sold to [the plaintiff] was defective.” Id. at *7.
     The court’s rationale for this conclusion was that since the claimed “defect” was allegedly false advertising, rather than something to do with the particular pair of shoes purchased by the plaintiff, the claimed “defect” applied equally to each and every pair of the same model of shoe sold by the defendant. Id. at *8.
     In Flynn v. FCA US LLC, 2016 U.S. Dist. LEXIS 130614 (S.D. Ill. Sept. 23, 2016), the Southern District of Illinois, citing the Stella case, supra, refused to dismiss a breach of warranty claim at the pleading stage for lack of pre-suit notice because the claimed defect impacted the entire product line rather than only the individual product purchased by the plaintiff.
     The broader view of the actual knowledge exception to the notice rule, as illustrated in the above-cited cases, has been criticized and rejected in a number of subsequent district court opinions. For example, in Muir v. NBTY, Inc., 2016 U.S. Dist. LEXIS 129494 (N.D. Ill. Sept. 22, 2016), a case involving a dietary supplement that allegedly bore an inaccurate statement of its contents on its label, the buyer claimed that he was excused from his pre-suit notice obligation because the seller was aware of the misleading label based upon knowledge of test results showing that the product was far less potent than advertised. Since the entire product line allegedly suffered from the same mislabeling, the buyer, citing the Stella and Hedges cases, supra, argued that the seller must have had actual knowledge of the claimed defect. Muir, 2016 U.S. Dist. LEXIS 129494 at *32.
     Rejecting this argument, the district court said that “this kind of generalized knowledge is not sufficient to excuse the pre-suit notice requirement.” Id. In order for the buyer to be excused from providing notice, the seller must be “‘somehow apprised of the trouble with the particular product purchased by a particular buyer.’” Id., quoting Connick, supra, 174 Ill. 2d at 494.
     In Block v. Lifeway Foods, Inc., 2017 U.S. Dist. LEXIS 143828 (N.D. Ill. Sept. 6, 2017), the district court again rejected the “entire product line” rationale that had been used in Stella and similar cases to excuse pre-suit notice, saying that it “is not bound by those rulings and, respectfully, believes that they run contrary to the principles outlined in Connick. . . .” Block, 2017 U.S. Dist. LEXIS 143828 at *18. See also, Anthony v. Country Life Mfg., LLC, 70 F. App’x 379, 384 (7th Cir. 2003).
     Lastly, another court in the Northern District of Illinois rejected the rationale of the Stella and Hedges decisions, supra, saying that “numerous cases have declined to follow these decisions, finding them either inconsistent with or completely contrary to Illinois law.” Rodriguez v. Ford Motor Co., 596 F. Supp. 3d 1050, 1055 (N.D. Ill. 2022), citing Block and Muir, supra.
     The Rodriguez opinion concludes its strict interpretation of the actual knowledge exception by stating that “[t]his Court agrees with the Block and Muir courts – Illinois law requires more than [the buyer’s] general allegation that [the seller] had knowledge of the defect in the. . . product line.” Id. Accord, Bojko v. Pierre Fabre USA, Inc., 2023 U.S. Dist. LEXIS 110443 (N.D. Ill. June 27, 2023) (likewise finding the Stella and Hedges decisions to be “inconsistent with Illinois law”).
The Andrews Case.
     The first exception to the notice rule, i.e., actual knowledge of the defect on the part of the seller, was the subject of a recent decision by the First District of the Appellate Court of Illinois in the case of Andrews v. Carbon on 26th, LLC, 2024 IL App (1st) 231369, leave to appeal granted, Case Nos. 130862, 130863 (consolidated) sub nom. Martin Produce, Inc. v. Jack Tuchten Wholesale Produce, Inc. (In re Andrews), 2024 Ill. LEXIS 587 and 667 (2024).
     As is more fully explained below, the Andrews case was a commercial dispute between a distributor (the buyer in this case), and a group of wholesalers (the sellers), who had sold allegedly contaminated produce (cilantro) to the distributor, who in turn had sold the produce to a group of restaurants which were named defendants in a series of personal injury suits filed by sickened patrons. 
     The somewhat complicated history of the Andrews case arose from an outbreak of E. coli bacteria that, as mentioned, was traced to contaminated cilantro served to customers of the defendant, the operator of two fast-casual Mexican restaurants. The contaminated cilantro had sickened a number of the restaurants’ customers, who brought personal injury claims against the defendant-restaurants along with the distributor and wholesalers of the cilantro. The claims against the restaurants were settled on the eve of trial, and eventually all of the plaintiffs’ claims against the distributor and wholesalers were likewise settled.
     What remained were the contribution claims between the distributor and the wholesalers, seeking recovery of monies paid to the customers in settlement of their personal injury claims. The distributor’s contribution action alleged a breach of the implied warranty of merchantability on the part of the wholesalers for selling the contaminated cilantro. In their defense against the distributor’s warranty claim, the wholesalers alleged that the distributor had failed to provide them with pre-suit notice of its claim as required by section 2-607(3)(a) of the Uniform Commercial Code, incorporated into Illinois law as 810 ILCS 5/2-607(3)(a).
     In response, the distributor contended that it was excused from providing notice because the wholesalers had actual knowledge of the problems with the cilantro: the wholesalers had been parties to the personal injury suits brought by the restaurants’ customers and had participated in discovery in those cases, which made them fully aware of the problems with the contaminated produce. On that basis, the distributor invoked the “actual knowledge” exception to the notice requirement as recognized in Connick.
     The wholesalers moved for summary judgment against the distributor on the warranty count based on lack of pre-suit notice. The circuit court initially denied the motion, finding a factual question as to whether notice was excused by reason of the wholesalers’ involvement in five years of litigation brought by over seventy restaurant patrons, which, the court said, “common sense” would dictate had afforded the wholesalers actual knowledge of the defect in the cilantro, thereby excusing pre-suit notice by the distributor. Andrews at ¶ 41.
     On reconsideration, however, the circuit court reversed its prior ruling and granted summary judgment in the wholesalers’ favor. “The court believed that it had erroneously suggested in its earlier decision that the law would ‘allow a defendant-seller to receive reasonable notice from third-parties via the filing of a lawsuit.’ But Illinois law is clear, [the court concluded], ‘that only consumer plaintiffs that suffer personal injury can satisfy their Section 2-607 notice requirement by filing a lawsuit against the seller.’” Id. at ¶ 15, quoting Connick, 174 Ill. 2d at 495.
     Thus, the circuit court shifted its focus from the “actual knowledge” exception to the notice rule, which it had initially found sufficiently satisfied to deny the wholesalers summary judgment, to the “notice-by-lawsuit” exception, which the court recognized is applicable only to personal injury cases, not contribution actions, and on that basis it granted the summary judgment motion. Id. at ¶ 41.
     The distributor’s motion to reconsider the grant of summary judgment was denied, and the distributor’s appeal followed.
     The appellate court began by recognizing the precedential impact of the Connick case in holding that pre-suit notice by a buyer is excused when the seller has actual knowledge of the defect in a particular product, and that notice by lawsuit is only effective when a consumer plaintiff is claiming personal injury. Because, however, the record demonstrated that the wholesalers “had actual knowledge that the specific shipments of cilantro they supplied to [the buyer-distributor] were alleged to have been contaminated” long before the distributor’s lawsuit, “when the personal injury plaintiffs [the restaurant patrons] first brought claims against them,” the court held that it could not say “on these facts, as a matter of law, that the first Connick exception – actual knowledge of the defective product – was not satisfied.” Id. at ¶ 40.
     While acknowledging that the “personal injury lawsuit exception did not apply here” to excuse notice by the buyer, the appellate court said that “the consumer lawsuits could still be the vehicle by which the wholesalers, in this case, received actual pre-suit knowledge of the defective product.” Id. at ¶ 42.
     “By naming everyone in the supply chain, the personal injury suits filed here necessarily gave each of those entities actual knowledge that the cilantro they sold was alleged to be defective. In our view, this is the sort of actual knowledge that will make it unnecessary for a buyer to separately notify its direct seller that a transaction is considered [problematic].” Id. at ¶ 43 (emphasis in the original).
     “In sum,” the court concluded, “we reject the wholesalers’ argument that a lawsuit filed by a third party, though it cannot constitute pre-suit notice under section 2-607 of the UCC [in the absence of personal injury], can never be what causes a remote seller to have actual knowledge of a defect in the goods at issue.” Andrews at ¶ 45.
     The appellate court thereupon reversed the summary judgment that had been rendered in favor of the wholesalers and remanded the case for further consideration of the distributor’s breach of warranty claim. Id. at ¶ 41.
Petitions for Leave to Appeal Granted.
     Separate petitions for leave to appeal by two of the wholesalers were granted by the Illinois Supreme Court on September 25, 2024. See Martin Produce, Inc. v. Jack Tuchten Wholesale Produce, Inc. (In re Andrews), 2024 Ill. LEXIS 587 and 667 (Ill. Sept. 25, 2024).
The Andrews Case in the Federal District Courts.
     The Andrews case was referenced in two recent opinions from the United States District Court for the Northern District of Illinois, both of which distinguished Andrews from the facts of the cases at bar. In Raya v. Mead Johnson Nutrition Co., No. 24 C 4696, 2024 U.S. Dist. LEXIS 217317 (N.D. Ill. Dec. 2, 2024), a putative class action alleging that certain infant formula sold by the defendant contained undisclosed heavy metals, a claim for breach of implied warranty was asserted along with other claims. The defendant-seller moved to dismiss, arguing that the plaintiff-buyer had failed to provide it with pre-suit notice of the claim. The plaintiff had been involved in a previous lawsuit asserting identical claims against the defendant, which was voluntarily dismissed. The plaintiff claimed that the existence of the prior lawsuit provided the defendant-seller with actual notice of the claimed defect in the formula, thereby excusing the plaintiff from providing separate pre-suit notice prior to the filing of the subsequent suit.
     The court said, however, that since the buyer, as a named plaintiff in the prior suit, was aware of the conduct serving as the basis for a breach of warranty claim, she was obligated to provide pre-suit notice of the breach prior to filing her own suit. It is not clear from the court’s opinion whether the plaintiff herself raised the Andrews case or whether the court distinguished Andrews on its own initiative, stating that “[w]hile the Andrews case currently on appeal held that a third-party lawsuit could establish notice by ‘caus[ing] a seller to have knowledge of a defect in the goods at issue,’ Andrews does not support [plaintiff’s] argument that her own prior lawsuit can establish notice.” 2024 U.S. Dist. LEXIS 217317 at *21, quoting Andrews, 2024 IL App (1st) 231369 at ¶ 46.
     A more direct reference to the Andrews case is found in the case of Calixte v. Walgreen Co., No. 1:22-cv-01855, 2024 U.S. Dist. LEXIS 229675 (N.D. Ill. Dec. 19, 2024), involving a putative class action based on a claim for breach of warranty of merchantability associated with allegedly defective pre-paid gift cards. No pre-suit notice of the claim was provided by the plaintiff-buyers, and the defendant-seller moved for summary judgment on the warranty claim on that basis. The question before the court was whether the case could be decided as a matter of law on the issue of whether the defendant-seller had actual knowledge of the defect. 
     After encountering trouble with the stated value of the gift cards he had purchased, the plaintiff contacted the company that distributed and serviced the cards and informed it of his “submission of [his] situation to their dispute resolution process,” but did not contact the defendant. Calixte at *2 (quoting the record in the case).
     The plaintiff argued that the card servicer and the defendant “were within the same chain of distribution,” likening his case to Andrews, where the wholesalers who had actual knowledge of the defect were in the same chain of distribution as the buyer-distributor.
     The district court readily distinguished Andrews, in which the defendant-wholesalers gained actual knowledge of the defective product prior to the distributor’s lawsuit because the wholesalers had been specifically named as defendants in the suits filed by the injured plaintiffs. Id. at *7-8 (citing Andrews at ¶¶ 42, 43).
     Also, in Calixte, the record was devoid of any evidence of communication between the card servicer and the defendant which would have imputed actual knowledge of the claim to the defendant. Id. at *8.
Conclusion.
     Practitioners of commercial law, especially warranty litigators, should be mindful of the notice requirement of the UCC, and its exceptions, as well as potential changes to the actual knowledge exception as forecast by the Andrews case, now on appeal to the Illinois Supreme Court.
     The court is expected to closely scrutinize the rationale of the Andrews ruling and may definitively decide whether a third party’s lawsuit, while not satisfying the suit-as-notice exception, may nonetheless give a seller actual knowledge of a defect, thereby satisfying the first exception to the notice requirement as stated in Connick, the last case in which the Illinois Supreme Court addressed the notice requirement under the Uniform Commercial Code.

Health-e Law Episode 15: Healthcare Security is Homeland Security with Jonathan Meyer, former DHS GC and Partner at Sheppard Mullin [Podcast]

Welcome to Health-e Law, Sheppard Mullin’s podcast exploring the fascinating health tech topics and trends of the day. In this episode, Jonathan Meyer, former general counsel of the Department of Homeland Security and Leader of Sheppard Mullin’s National Security Team, joins us to discuss cyberthreats and data security from the perspective of national security, including the implications for healthcare.
What We Discussed in This Episode

How do cyberattacks and data privacy impact national security?
How can personal data be weaponized to cause harm to an individual, and why should people care?
Many adults are aware they need to keep their own personal data secure for financial reasons, but what about those who aren’t financially active, such as children?
How is healthcare particularly vulnerable to cyberthreats, even outside the hospital setting?
What can stakeholders do better at the healthcare level?
What can individuals do better to ensure their personal data remains secure?

Class Certification Granted – California Website Tracking Lawsuit Reminds Businesses about Notice Risks

A California federal district court recently granted class certification in a lawsuit against a financial services company. The case involves allegations that the company’s website used third-party technology to track users’ activities without their consent, in violation of the California Invasion of Privacy Act (CIPA). Specifically, the plaintiffs allege that the company, along with its third-party marketing software platform, intercepted and recorded visitors’ interactions with the website, creating “session replays,” which are effectively video recordings of a user’s real-time interactions with the website’s forms. The technology at issue is routinely used by website operators to maintain a record of a user’s interactions with a website, in particular web forms and marketing consents.
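As background on how such tools typically operate, the following is a highly simplified, hypothetical sketch of session-replay-style capture. The event shape, field names, and collector URL are invented for illustration and do not describe the specific technology or vendor at issue in the case.

```typescript
// Hypothetical sketch of session-replay capture; names and endpoint are illustrative only.
type InteractionEvent = { type: string; target: string; value?: string; ts: number };

const buffer: InteractionEvent[] = [];

function record(e: Event): void {
  const el = e.target as HTMLInputElement | null;
  buffer.push({
    type: e.type,
    target: el?.name || el?.id || "unknown",
    // Form keystrokes are captured as they are typed -- before any
    // "submit" click -- which is why notice and consent matter here.
    value: e.type === "input" ? el?.value : undefined,
    ts: Date.now(),
  });
}

// Observe every input, click, and focus event on the page...
["input", "click", "focus"].forEach((t) => document.addEventListener(t, record, true));

// ...and periodically ship the buffered events to a third-party collector,
// from which the visit can later be reconstructed as a "replay."
setInterval(() => {
  if (buffer.length) {
    navigator.sendBeacon("https://collector.example.com/replay", JSON.stringify(buffer.splice(0)));
  }
}, 5000);
```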
The plaintiffs sought certification of a class of individuals who visited the company’s website, provided personal information, and had a certificate associated with their website visit generated within a roughly one-year time frame. The company argued that users’ consent must be determined on an individual, not class-wide, basis. It asserted that implied consent could have come from multiple different sources that provided notice of the data interception, including its privacy policies and third-party materials such as articles on the issue.
The district court found those arguments insufficient and held that common questions of law and fact predominated as to all users. Specifically, the court found that whether any of the sources provided notice of the challenged conduct in the first place was itself a common issue, and that it could later refine the class definition to the extent a user might have viewed a particular source that provided sufficient notice. The court also determined that plaintiffs would be able to identify class members using the company’s database, including by cross-referencing contact and location information provided by users.
While class certification is not a decision on the merits and does not determine whether the company failed to provide notice or otherwise violated CIPA, it is a significant step in the litigation process. If certification is denied, the potential damages and settlement value of a case are significantly lower; if plaintiffs clear the class certification hurdle, both increase substantially.
This case is a reminder to businesses to review their current website practices and implement updates or changes to address issues such as notice (regarding the tracking technologies in use) and consent (whether express or implied) before collecting user data. When using third-party tracking technologies, it is also important to audit whether vendors comply with privacy laws and maintain adequate data protection measures.