The BR Privacy & Security Download: February 2025

STATE & LOCAL LAWS & REGULATIONS
New York Legislature Passes Comprehensive Health Privacy Law: The New York state legislature passed SB-929 (the “Bill”), providing for the protection of health information. The Bill broadly defines “regulated health information” as “any information that is reasonably linkable to an individual, or a device, and is collected or processed in connection with the physical or mental health of an individual.” Regulated health information includes location and payment information, as well as inferences derived from an individual’s physical or mental health. The term “individual” is not defined. Accordingly, the Bill contains no terms restricting its application to consumers acting in an individual or household context. The Bill would apply to regulated entities, which are entities that (1) are located in New York and control the processing of regulated health information, or (2) control the processing of regulated health information of New York residents or individuals physically present in New York. Among other things, the Bill would restrict regulated entities to processing regulated health information only with a valid authorization, or when strictly necessary for certain specified activities. The Bill also provides for individual rights and requires the implementation of reasonable administrative, physical, and technical safeguards to protect regulated health information. The Bill would take effect one year after being signed into law and currently awaits New York Governor Kathy Hochul’s signature.
New York Data Breach Notification Law Updated: Two bills, SO2659 and SO2376, amending the state’s data breach notification law were signed into law by New York Governor Kathy Hochul. The bills change the timeframe within which notice must be provided to New York residents, add data elements to the definition of “private information,” and add the New York Department of Financial Services to the list of regulators that must be notified. Previously, New York’s data breach notification statute did not impose a hard deadline for providing notice. The amendments now require affected individuals to be notified no later than 30 days after discovery of the breach, except for delays arising from the legitimate needs of law enforcement. Additionally, as of March 25, 2025, “private information” subject to the law’s notification requirements will include medical information and health insurance information.
California AG Issues Legal Advisory on Application of California Law to AI: California’s Attorney General has issued legal advisories to clarify that existing state laws apply to AI development and use, emphasizing that California is not an AI “wild west.” These advisories cover consumer protection, civil rights, competition, data privacy, and election misinformation. AI systems, while beneficial, present risks such as bias, discrimination, and the spread of disinformation. Therefore, entities that develop or use AI must comply with all state, federal, and local laws. The advisories highlight key laws, including the Unfair Competition Law and the California Consumer Privacy Act. The advisories also highlight new laws effective on January 1, 2025, which include disclosure requirements for businesses, restrictions on the unauthorized use of likeness, and regulations for AI use in elections and healthcare. These advisories stress the importance of transparency and compliance to prevent harm from AI.
New Jersey AG Publishes Guidance on Algorithmic Discrimination: On January 9, 2025, New Jersey’s Attorney General and Division on Civil Rights announced a new civil rights and technology initiative to address the risks of discrimination and bias-based harassment in AI and other advanced technologies. The initiative includes the publication of a Guidance Document, which addresses the applicability of New Jersey’s Law Against Discrimination (“LAD”) to automated decision-making tools and technologies. It focuses on the threats posed by automated decision-making technologies in the housing, employment, healthcare, and financial services contexts, emphasizing that the LAD applies to discrimination regardless of the technology at issue. Also included in the announcement is the launch of a new Civil Rights Innovation lab, which “will aim to leverage technology responsibly to advance [the Division’s] mission to prevent, address, and remedy discrimination.” The Lab will partner with experts and relevant industry stakeholders to identify and develop technology to enhance the Division’s enforcement, outreach, and public education work, and will develop protocols to facilitate the responsible deployment of AI and related decision-making technology. This initiative, along with the recently effective New Jersey Data Protection Act, shows a significantly increased focus from the New Jersey Attorney General on issues relating to data privacy and automated decision-making technologies.
New Jersey Publishes Comprehensive Privacy Law FAQs: The New Jersey Division of Consumer Affairs Cyber Fraud Unit (“Division”) published FAQs that provide a general summary of the New Jersey Data Privacy Law (“NJDPL”), including its scope, key definitions, consumer rights, and enforcement. The NJDPL took effect on January 15, 2025, and the FAQs state that controllers subject to the NJDPL are expected to comply by such date. However, the FAQs also emphasize that until July 1, 2026, the Division will provide notice and a 30-day cure period for potential violations. The FAQs also suggest that the Division may adopt a stricter approach to minors’ privacy. While the text of the NJDPL requires consent for processing the personal data of consumers between the ages of 13 and 16 for purposes of targeted advertising, sale, and profiling, the FAQs state that when a controller knows or willfully disregards that a consumer is between the ages of 13 and 16, consent is required to process their personal data more generally.
CPPA Extends Formal Comment Period for Automated Decision-Making Technology Regulations: The California Privacy Protection Agency (“CPPA”) extended the public comment period for its proposed regulations on cybersecurity audits, risk assessments, automated decision-making technology (“ADMT”), and insurance companies under the California Privacy Rights Act. The public comment period opened on November 22, 2024, and was set to close on January 14, 2025. However, due to the wildfires in Southern California, the public comment period was extended to February 19, 2025. The CPPA will also be holding a public hearing on that date for interested parties to present oral and written statements or arguments regarding the proposed regulations.
Oregon DOJ Publishes Toolkit for Consumer Privacy Rights: The Oregon Department of Justice announced the release of a new toolkit designed to help Oregonians protect their online information. The toolkit is designed to help families understand their rights under the Oregon Consumer Privacy Act. The Oregon DOJ reminded consumers how to submit complaints when businesses are not responsive to privacy rights requests. The Oregon DOJ also stated it has received 118 complaints since the Oregon Consumer Privacy Act took effect last July and had sent notices of violation to businesses that have been identified as non-compliant.
California, Colorado, and Connecticut AGs Remind Consumers of Opt-Out Rights: California Attorney General Rob Bonta published a press release reminding residents of their right to opt out of the sale and sharing of their personal information. The California Attorney General also cited the robust privacy protections of Colorado and Connecticut laws that provide for similar opt-out protections. The press release urged consumers to familiarize themselves with the Global Privacy Control (“GPC”), a browser setting or extension that automatically signals to businesses that they should not sell or share a consumer’s personal information, including for targeted advertising. The Attorney General also provided instructions for using the GPC and for exercising opt-outs by visiting the websites of individual businesses.
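For businesses receiving the signal, honoring the GPC generally means detecting the `Sec-GPC: 1` HTTP request header (or the `navigator.globalPrivacyControl` property in client-side code) and treating it as a valid opt-out of sale and sharing. A minimal server-side sketch in Python illustrates the idea; the helper name and framework-agnostic header dictionary are illustrative assumptions, not drawn from any statute or enforcement guidance:

```python
def gpc_opt_out_requested(headers: dict) -> bool:
    """Return True if the request carries a Global Privacy Control signal.

    Under the GPC proposal, a participating browser sends the header
    "Sec-GPC: 1" when the user has enabled the opt-out preference.
    HTTP header names are case-insensitive, so lookup is normalized.
    """
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"


# Example: a request from a GPC-enabled browser
incoming = {"User-Agent": "ExampleBrowser/1.0", "Sec-GPC": "1"}
if gpc_opt_out_requested(incoming):
    # Treat as a "do not sell or share" request under laws such as
    # the CCPA that recognize opt-out preference signals.
    print("Suppress sale/sharing and targeted advertising for this user")
```

In practice the check would sit in request middleware so that the opt-out is applied before any ad-tech or analytics tags fire.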

FEDERAL LAWS & REGULATIONS
FTC Finalizes Updates to COPPA Rule: The FTC announced the finalization of updates to the Children’s Online Privacy Protection Rule (the “Rule”). The updated Rule makes a number of changes, including requiring opt-in consent to engage in targeted advertising to children and to disclose children’s personal information to third parties. The Rule also adds biometric identifiers to the definition of personal information and prohibits operators from retaining children’s personal information for longer than necessary for the specific documented business purposes for which it was collected. Operators must maintain a written data retention policy that documents the business purpose for data retention and the retention period for data. The Commission voted 5-0 to adopt the Rule, but new FTC Chair Andrew Ferguson filed a separate statement describing “serious problems” with the rule. Ferguson specifically stated that it was unclear whether an entirely new consent would be required if an operator added a new third party with whom personal information would be shared, potentially creating a significant burden for businesses. The Rule will be effective 60 days after its publication in the Federal Register.
Trump Rescinds Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence: President Donald Trump took action to rescind former President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“AI EO”). According to a Biden administration statement released in October, many action items from the AI EO have already been completed. Recommendations, reports, and opportunities for research that were completed prior to revocation of the AI EO may continue in place unless replaced by additional federal agency action. It remains unclear whether the Trump Administration will issue its own executive orders relating to AI.
U.S. Justice Department Issues Final Rule on Transfer of Sensitive Personal Data to Foreign Adversaries: The U.S. Justice Department issued final regulations to implement a presidential Executive Order regarding access to bulk sensitive personal data of U.S. citizens by foreign adversaries. The regulations restrict transfers involving designated countries of concern – China, Cuba, Iran, North Korea, Russia, and Venezuela. At a high level, transfers are restricted if they could result in bulk sensitive personal data access by a country of concern or a “covered person,” which is an entity that is majority-owned by a country of concern, organized under the laws of a country of concern, or has its principal place of business in a country of concern, or an individual whose primary residence is in a country of concern. Data covered by the regulation includes precise geolocation data, biometric identifiers, genetic data, health data, financial data, government-issued identification numbers, and certain other identifiers, including device or hardware-based identifiers, advertising identifiers, and demographic or contact data.
First Complaint Filed Under Protecting Americans’ Data from Foreign Adversaries Act: The Electronic Privacy Information Center (“EPIC”) and the Irish Council for Civil Liberties (“ICCL”) Enforce Unit filed the first-ever complaint under the Protecting Americans’ Data from Foreign Adversaries Act (“PADFAA”). PADFAA makes it unlawful for a data broker to sell, license, rent, trade, transfer, release, disclose, or otherwise make available specified personally identifiable sensitive data of individuals residing in the United States to North Korea, China, Russia, Iran, or an entity controlled by one of those countries. The complaint alleges that Google’s real-time bidding system data includes personally identifiable sensitive data, that Google executives were aware that data from its real-time bidding system may have been resold, and that Google’s public list of certified companies that receive real-time bidding bid request data includes multiple companies based in foreign adversary countries.
FDA Issues Draft Guidance for AI-Enabled Device Software Functions: The U.S. Food and Drug Administration (“FDA”) published its January 2025 Draft Guidance for Industry and FDA Staff regarding AI-enabled device software functionality. The Draft provides recommendations regarding the contents of marketing submissions for AI-enabled medical devices, including documentation and information that will support the FDA’s evaluation of their safety and effectiveness. The Draft Guidance is designed to reflect a “comprehensive approach” to the management of devices through their total product life cycle and includes recommendations for the design, development, and implementation of AI-enabled devices. The FDA is accepting comments on the Draft Guidance, which may be submitted online until April 7, 2025.
Industry Coalition Pushes for Unified National Data Privacy Law: A coalition of over thirty industry groups, including the U.S. Chamber of Commerce, sent a letter to Congress urging it to enact a comprehensive national data privacy law. The letter highlights the urgent need for a cohesive federal standard to replace the fragmented state laws that complicate compliance and stifle competition. The letter advocates for legislation based on principles to empower startups and small businesses by reducing costs and improving consumer access to services. The letter supports granting consumers the right to understand, correct, and delete their data, and to opt out of targeted advertising, while emphasizing transparency by requiring companies to disclose data practices and secure consent for processing sensitive information. It also focuses on the principles of limiting data collection to essential purposes and implementing robust security measures. While the principles aim to override strong state laws like California’s, the proposal notably excludes data broker regulation, a previous point of contention. The coalition cautions against legislation that could lead to frivolous litigation, advocating for balanced enforcement and collaborative compliance. By adhering to these principles, the industry groups seek to ensure legal certainty and promote responsible data use, benefiting both businesses and consumers.
Cyber Trust Mark Unveiled: The White House launched a labeling scheme for internet-of-things devices designed to inform consumers when devices meet certain government-determined cybersecurity standards. The program has been in development for several months and involves collaboration between the White House, the National Institute of Standards and Technology, and the Federal Communications Commission. UL Solutions, a global safety and testing company headquartered in Illinois, has been selected as the lead administrator of the program along with 10 other firms as deputy administrators. With the main goal of helping consumers make more cyber-secure choices when purchasing products, the White House hopes to have products with the new cyber trust mark hit shelves before the end of 2025.

U.S. LITIGATION
Texas Attorney General Sues Insurance Company for Unlawful Collection and Sharing of Driving Data: Texas Attorney General Ken Paxton filed a lawsuit against Allstate and its data analytics subsidiary, Arity. The lawsuit alleges that Arity paid app developers to incorporate its software development kit, which tracked location data from over 45 million consumers in the U.S. According to the lawsuit, Arity then shared that data with Allstate and other insurers, who used the data to justify increasing car insurance premiums. According to the Texas Attorney General, the sale of precise geolocation data of Texans violated the Texas Data Privacy and Security Act (“TDPSA”), which requires companies to provide notice and obtain informed consent to use the sensitive data of Texas residents, including precise geolocation data. The Texas Attorney General sued General Motors in August 2024, alleging similar practices relating to the collection and sale of driver data.
Eleventh Circuit Overturns FCC’s One-to-One Consent Rule, Upholds Broader Telemarketing Practices: In Insurance Marketing Coalition, Ltd. v. Federal Communications Commission, No. 24-10277, 2025 WL 289152 (11th Cir. Jan. 24, 2025), the Eleventh Circuit vacated the FCC’s one-to-one consent rule under the Telephone Consumer Protection Act (“TCPA”). The court found that the rule exceeded the FCC’s authority and conflicted with the statutory meaning of “prior express consent” by requiring separate consent for each seller and limiting consent to topically related calls. This decision allows businesses to continue using broader consent practices, maintaining shared consent agreements. The ruling emphasizes that consent should align with common-law principles rather than be restricted to a single entity. While the FCC’s next steps remain uncertain, the decision reduces compliance burdens and may invite challenges to other TCPA regulations.
California Judge Blocks Enforcement of Social Media Addiction Law: The California Protecting Our Kids from Social Media Addiction Act (the “Act”) has been temporarily blocked. The Act was set to take effect on January 1, 2025. The law aims to prevent social media platforms from using algorithms to provide addictive content to children. Judge Edward J. Davila initially declined to block key parts of the law but agreed to pause enforcement until February 1, 2025, to allow the Ninth Circuit to review the case. NetChoice, a tech trade group, is challenging the law on First Amendment grounds. NetChoice argues that restricting minors’ access to personalized feeds violates the First Amendment. The group has appealed to the Ninth Circuit and is seeking an injunction to prevent the law from taking effect. Judge Davila’s decision recognized the “novel, difficult, and important” constitutional issues presented by the case. The law includes provisions to restrict minors’ access to personalized feeds, limit their ability to view likes and other feedback, and restrict third-party interaction.

U.S. ENFORCEMENT
FTC Settles Enforcement Action Against General Motors for Sharing Geolocation and Driving Behavior Data Without Consent: The Federal Trade Commission (“FTC”) announced a proposed order to settle FTC allegations against General Motors that it collected, used, and sold drivers’ precise geolocation data and driving behavior information from millions of vehicles without adequately notifying consumers and obtaining their affirmative consent. The FTC specifically alleged General Motors used a misleading enrollment process to get consumers to sign up for its OnStar-connected vehicle service and Smart Driver feature without proper notice or consent during that process. The information was then sold to third parties, including consumer reporting agencies, according to the FTC. As part of the settlement, General Motors will be prohibited from disclosing driver data to consumer reporting agencies, required to allow consumers to obtain and delete their data, required to obtain consent prior to collection, and required to allow consumers to limit data collected from their vehicles.
FTC Releases Proposed Order Against GoDaddy for Alleged Data Security Failures: The Federal Trade Commission (“FTC”) announced it has reached a proposed settlement in its action against GoDaddy Inc. (“GoDaddy”) for failing to implement reasonable and appropriate security measures, which resulted in several major data breaches between 2019 and 2022. According to the FTC’s complaint, GoDaddy misled customers about its data security practices through claims on its websites and in email and social media ads, and by representing that it was in compliance with the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks. However, the FTC found that GoDaddy failed to inventory and manage assets and software updates, assess risks to its shared hosting services, adequately log and monitor security-related events, and segment its shared hosting from less secure environments. The FTC’s proposed order prohibits GoDaddy from misleading its customers about its security practices and requires GoDaddy to implement a comprehensive information security program. GoDaddy must also hire a third-party assessor to conduct biennial reviews of its information security program.
CPPA Reaches Settlements with Additional Data Brokers: Following its announcement of a public investigative sweep of data broker registration compliance, the CPPA has settled with additional data brokers PayDae, Inc. d/b/a Infillion (“Infillion”), The Data Group, LLC (“The Data Group”), and Key Marketing Advantage, LLC (“KMA”) for failing to register as data brokers and pay an annual fee as required by California’s Delete Act. Infillion will pay $54,200 for failing to register between February 1, 2024, and November 4, 2024. The Data Group will pay $46,600 for failing to register between February 1, 2024, and September 20, 2024. KMA will pay $55,800 for failing to register between February 1, 2024, and November 5, 2024. In addition to the fines, the companies have agreed to injunctive terms. The Delete Act imposes fines of $200 per day for failing to register by the deadline.
Mortgage Company Fined by State Financial Regulators for Cybersecurity Breach: Bayview Asset Management LLC and three affiliates (collectively, “Bayview”) agreed to pay a $20 million fine and improve their cybersecurity programs to settle allegations from 53 state financial regulators. The Conference of State Bank Supervisors (“CSBS”) alleged that the mortgage companies had deficient cybersecurity practices and did not fully cooperate with regulators after a 2021 data breach. The data breach compromised data for 5.8 million customers. The coordinated enforcement action was led by financial regulators in California, Maryland, North Carolina, and Washington State. The regulators said the companies’ information technology and cybersecurity practices did not meet federal or state requirements. The firms also delayed the supervisory process by withholding requested information and providing redacted documents in the initial stages of a post-breach exam. The companies also agreed to undergo independent assessments and provide three years of additional reporting to the state regulators.
SEC Reaches Settlement over Misleading Cybersecurity Disclosures: The SEC announced it has settled charges with Ashford Inc., an asset management firm, over misleading disclosures related to a cybersecurity incident. This enforcement action stemmed from a ransomware attack in September 2023, compromising over 12 terabytes of sensitive hotel customer data, including driver’s licenses and credit card numbers. Despite the breach, Ashford falsely reported in its November 2023 filings that no customer information was exposed. The SEC alleged negligence in Ashford’s disclosures, citing violations of the Securities Act of 1933 and the Exchange Act of 1934. Without admitting or denying the allegations, Ashford agreed to a $115,231 penalty and an injunction. This case highlights the critical importance of accurate cybersecurity disclosures and demonstrates the SEC’s commitment to ensuring transparency and accountability in corporate reporting.
FTC Finalizes Data Breach-Related Settlement with Marriott: The FTC has finalized its order against Marriott International, Inc. (“Marriott”) and its subsidiary Starwood Hotels & Resorts Worldwide LLC (“Starwood”). As previously reported, the FTC entered into a settlement with Marriott and Starwood for three data breaches the companies experienced between 2014 and 2020, which collectively impacted more than 344 million guest records. Under the finalized order, Marriott and Starwood are required to establish a comprehensive information security program, implement a policy to retain personal information only for as long as reasonably necessary, and establish a link on their website for U.S. customers to request deletion of their personal information associated with their email address or loyalty rewards account number. The order also requires Marriott to review loyalty rewards accounts upon customer request and restore stolen loyalty points. The companies are further prohibited from misrepresenting their information collection practices and data security measures.
New York Attorney General Settles with Auto Insurance Company over Data Breach: The New York Attorney General settled with automobile insurance company, Noblr, for a data breach the company experienced in January 2021. Noblr’s online insurance quoting tool exposed full, plaintext driver’s license numbers, including on the backend of its website and in PDFs generated when a purchase was made. The data breach impacted the personal information of more than 80,000 New Yorkers. The data breach was part of an industry-wide campaign to steal personal information (e.g., driver’s license numbers and dates of birth) from online automobile insurance quoting applications to be used to file fraudulent unemployment claims during the COVID-19 pandemic. As part of its settlement, Noblr must pay the New York Attorney General $500,000 in penalties and strengthen its data security measures such as by enhancing its web application defenses and maintaining a comprehensive information security program, data inventory, access controls (e.g., authentication procedures), and logging and monitoring systems.
FTC Alleges Video Game Maker Violated COPPA and Engaged in Deceptive Marketing Practices: The Federal Trade Commission (“FTC”) has taken action against Cognosphere Pte. Ltd and its subsidiary Cognosphere LLC, also known as HoYoverse, the developer of the game Genshin Impact (“HoYoverse”). The FTC alleges that HoYoverse violated the Children’s Online Privacy Protection Act (“COPPA”) and engaged in deceptive marketing practices. Specifically, the company is accused of unfairly marketing loot boxes to children and misleading players about the odds of winning prizes and the true cost of in-game transactions. To settle these charges, HoYoverse will pay a $20 million fine and is prohibited from allowing children under 16 to make in-game purchases without parental consent. Additionally, the company must provide an option to purchase loot boxes directly with real money and disclose loot box odds and exchange rates. HoYoverse is also required to delete personal information collected from children under 13 without parental consent. The FTC’s actions aim to protect consumers, especially children and teens, from deceptive practices related to in-game purchases.
OCR Finalizes Several Settlements for HIPAA Violations: Prior to the inauguration of President Trump, the U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) brought enforcement actions against four entities, USR Holdings, LLC (“USR”), Elgon Information Systems (“Elgon”), Solara Medical Supplies, LLC (“Solara”) and Northeast Surgical Group, P.C. (“NESG”), for potential violations of the Health Insurance Portability and Accountability Act’s (“HIPAA”) Security Rule due to the data breaches the entities experienced. USR reported that between August 23, 2018, and December 8, 2018, a database containing the electronic protected health information (“ePHI”) of 2,903 individuals was accessed by an unauthorized third party who was able to delete the ePHI in the database. Elgon and NESG each discovered a ransomware attack in March 2023, which affected the protected health information (“PHI”) of approximately 31,248 individuals and 15,298 individuals, respectively. Solara experienced a phishing attack that allowed an unauthorized third party to gain access to eight of Solara’s employees’ email accounts between April and June 2019, resulting in the compromise of 114,007 individuals’ ePHI. As part of their settlements, each of the entities is required to pay a fine to OCR: USR $337,750, Elgon $80,000, Solara $3,000,000, and NESG $10,000. Additionally, each of the entities is required to implement certain data security measures such as conducting a risk analysis, implementing a risk management plan, maintaining written policies and procedures to comply with HIPAA, and distributing such policies or providing training on such policies to its workforce.  
Virginia Attorney General Sues TikTok for Addictive Features and Allowing Chinese Government to Access Data: Virginia Attorney General Jason Miyares announced his office had filed a lawsuit against TikTok and ByteDance Ltd., the Chinese-based parent company of TikTok. The lawsuit alleges that TikTok was intentionally designed to be addictive for adolescent users and that the company deceived parents about TikTok content, including by claiming the app is appropriate for children over the age of 12, in violation of the Virginia Consumer Protection Act.

INTERNATIONAL LAWS & REGULATIONS
UK ICO Publishes Guidance on Pay or Consent Model: On January 23, the UK’s Information Commissioner’s Office (“ICO”) published its Guidance for Organizations Implementing or Considering Implementing Consent or Pay Models. The guidance is designed to clarify how organizations can deploy ‘consent or pay’ models in a manner that gives users meaningful control over the privacy of their information while still supporting their economic viability. The guidance addresses the requirements of applicable UK laws, including PECR and the UK GDPR, and provides extensive guidance as to how appropriate fees may be calculated and how to address imbalances of power. The guidance includes a set of factors that organizations can use to assess their consent models and includes plans to further engage with online consent management platforms, which are typically used by businesses to manage the use of essential and non-essential online trackers. Businesses with operations in the UK should carefully review their current online tracker consent management tools in light of this new guidance.
EU Commission to Pay Damages for Sending IP Address to Meta: The European General Court has ordered the European Commission to pay a German citizen, Thomas Bindl, €400 in damages for unlawfully transferring his personal data to the U.S. This decision sets a new precedent regarding EU data protection litigation. The court found that the Commission breached data protection regulations by operating a website with a “sign in with Facebook” option. This resulted in Bindl’s IP address, along with other data, being transferred to Meta without ensuring adequate safeguards were in place. The transfer happened during the transition period between the EU-U.S. Privacy Shield and the EU-U.S. Data Protection Framework. The court determined that this left Bindl in a position of uncertainty about how his data was being processed. The ruling is significant because it recognizes “intrinsic harm” and may pave the way for large-scale collective redress actions.
European Data Protection Board Releases AI Bias Assessment and Data Subject Rights Tools: The European Data Protection Board (“EDPB”) released two AI tools as part of the “AI: Complex Algorithms and Effective Data Protection Supervision” project. The EDPB launched the project in the context of the Support Pool of Experts program at the request of the German Federal Data Protection Authority. The Support Pool of Experts program aims to help data protection authorities increase their enforcement capacity by developing common tools and giving them access to a wide pool of experts. The new documents address best practices for bias evaluation and the effective implementation of data subject rights, specifically the rights to rectification and erasure when AI systems have been developed with personal data.
European Data Protection Board Adopts New Guidelines on Pseudonymization: The EDPB released new guidelines on pseudonymization for public consultation (the “Guidelines”). Although pseudonymized data still constitutes personal data under the GDPR, pseudonymization can reduce the risks to the data subjects by preventing the attribution of personal data to natural persons in the course of the processing of the data, and in the event of unauthorized access or use. In certain circumstances, the risk reduction resulting from pseudonymization may enable controllers to rely on legitimate interests as the legal basis for processing personal data under the GDPR, provided they meet the other requirements, or help guarantee an essentially equivalent level of protection for data they intend to export. The Guidelines provide real-world examples illustrating the use of pseudonymization in various scenarios, such as internal analysis, external analysis, and research.
CJEU Issues Ruling on Excessive Data Subject Requests: On January 9, the Court of Justice of the European Union (“CJEU”) issued its ruling in the case Österreichische Datenschutzbehörde (C‑416/23). The primary question before the Court was when a European data protection authority may deny consumer requests due to their excessive nature. Rather than specifying an arbitrary numerical threshold of requests received, the CJEU found that authorities must consider the relevant facts to determine whether the individual submitting the request has “an abusive intention.” While the number of requests submitted may be a factor in determining this intention, it is not the only factor. Additionally, the CJEU emphasized that Data Protection Authorities should strongly consider charging a “reasonable fee” for handling requests they suspect may be excessive prior to simply denying them.
Daniel R. Saeedi, Rachel L. Schaller, Gabrielle N. Ganz, Ana Tagvoryan, P. Gavin Eastgate, Timothy W. Dickens, Jason C. Hirsch, Tianmei Ann Huang, Adam J. Landy, Amanda M. Noonan, and Karen H. Shin contributed to this article.

Health-e Law Episode 15: Healthcare Security is Homeland Security with Jonathan Meyer, former DHS GC and Partner at Sheppard Mullin [Podcast]

Welcome to Health-e Law, Sheppard Mullin’s podcast exploring the fascinating health tech topics and trends of the day. In this episode, Jonathan Meyer, former general counsel of the Department of Homeland Security and Leader of Sheppard Mullin’s National Security Team, joins us to discuss cyberthreats and data security from the perspective of national security, including the implications for healthcare.
What We Discussed in This Episode

How do cyberattacks and data privacy impact national security?
How can personal data be weaponized to cause harm to an individual, and why should people care?
Many adults are aware they need to keep their own personal data secure for financial reasons, but what about those who aren’t financially active, such as children?
How is healthcare particularly vulnerable to cyberthreats, even outside the hospital setting?
What can stakeholders do better at the healthcare level?
What can individuals do better to ensure their personal data remains secure?

DeepSeek AI’s Security Woes + Impersonations: What You Need to Know

Soon after the Chinese generative artificial intelligence (AI) company DeepSeek emerged to compete with ChatGPT and Gemini, it was forced offline when “large-scale malicious attacks” targeted its servers. Speculation points to a distributed denial-of-service (DDoS) attack.
Security researchers reported that DeepSeek “left one of its databases exposed on the internet, which could have allowed malicious actors to gain access to sensitive data… [t]he exposure also includes more than a million lines of log streams containing chat history, secret keys, backend details, and other highly sensitive information, such as API Secrets and operational metadata.”
On top of that, security researchers identified two malicious packages using the DeepSeek name posted to the Python Package Index (PyPI) starting on January 29, 2025. The packages are named deepseeek and deepseekai, which are “ostensibly client libraries for access to and interacting with the DeepSeek AI API, but they contained functions designed to collect user and computer data, as well as environment variables, which may contain API keys for cloud storage services, database credentials, etc.” Although PyPI quarantined the packages, developers worldwide downloaded them without knowing they were malicious. Researchers are warning developers to be careful with newly released packages “that pose as wrappers for popular services.”
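The deepseeek/deepseekai incident is a classic typosquatting pattern: a malicious package name sits one keystroke away from a trusted one. A simple screen for this can be automated. The sketch below is a hypothetical illustration (the allowlist and package names are invented, and real supply-chain tooling does far more); it uses Python's standard `difflib` to flag installed names that nearly match, but are not, a trusted name:

```python
import difflib

def flag_typosquats(installed, trusted, cutoff=0.8):
    """Return (suspect, lookalike) pairs: names close to, but not in, the trusted set."""
    suspects = []
    for name in installed:
        if name in trusted:
            continue  # exact match to a trusted package is fine
        close = difflib.get_close_matches(name, trusted, n=1, cutoff=cutoff)
        if close:
            suspects.append((name, close[0]))  # near-miss: possible typosquat
    return suspects

# 'deepseeek' is one extra 'e' away from a (hypothetical) trusted 'deepseek'.
print(flag_typosquats(["deepseeek", "requests"], trusted={"deepseek", "requests"}))
# → [('deepseeek', 'deepseek')]
```

A check like this catches only near-miss names; it does not vet a package's actual contents, which is why researchers also advise scrutinizing newly published packages that pose as wrappers for popular services.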
Additionally, due to DeepSeek’s popularity, the company is warning X users of fake social media accounts impersonating it.
But wait, there’s more! Cybersecurity firms are looking closely at DeepSeek and are finding security flaws. One firm, Kela, was able to “jailbreak the model across a wide range of scenarios, enabling it to generate malicious outputs, such as ransomware development, fabrication of sensitive content, and detailed instructions for creating toxins and explosive devices.” DeepSeek’s chatbot provided completely made-up information to a query in one instance. The firm stated, “This response underscores that some outputs generated by DeepSeek are not trustworthy, highlighting the model’s lack of reliability and accuracy. Users cannot depend on DeepSeek for accurate or credible information in such cases.”
We remind our readers that TikTok and DeepSeek are based in China, and the same national security concerns apply to both companies. DeepSeek is unavailable in Italy due to information requests from the Italian DPA, Garante. The Irish Data Protection Commissioner is also requesting information from DeepSeek. In addition, there are reports that U.S.-based AI companies are investigating whether DeepSeek used OpenAI’s API to train its models without permission. Beware of DeepSeek’s risks and limitations, and consider refraining from using it at the present time. “As generative AI platforms from foreign adversaries enter the market, users should question the origin of the data used to train these technologies. They should also question the ownership of this data and ensure it was used ethically to generate responses,” said Jennifer Mahoney, Advisory Practice Manager, Data Governance, Privacy and Protection at Optiv. “Since privacy laws vary across countries, it’s important to be mindful of who’s accessing the information you input into these platforms and what’s being done with it.”

Class Certification Granted – California Website Tracking Lawsuit Reminds Businesses about Notice Risks

A California federal district court recently granted class certification in a lawsuit against a financial services company. The case involves allegations that the company’s website used third-party technology to track users’ activities without their consent, violating the California Invasion of Privacy Act (CIPA). Specifically, the plaintiffs allege that the company, along with its third-party marketing software platform, intercepted and recorded visitors’ interactions with the website, creating “session replays,” which are effectively video recordings of the users’ real-time interactions with the website forms. The technology at issue in the suit is routinely used by website operators to keep a record of a user’s interactions with a website, in particular web forms and marketing consents.
The plaintiffs sought class certification for individuals who visited the company’s website, provided personal information, and for whom a certificate associated with their website visit was generated within a roughly one-year time frame. The company argued that users’ consent must be determined on an individual, not class-wide, basis. It asserted that implied consent could have arisen from multiple sources, including its privacy policies and third-party materials, that provided notice of the data interception and thus should be treated as consent; some of the sources the company pointed to as notice were third-party articles on the issue.
The district court found those arguments insufficient and held that common questions of law and fact predominated as to all users. Specifically, the court found whether any of the sources provided notice of the challenged conduct in the first place to be a common issue. Further, the court found that it could later refine the class definition to the extent a user might have viewed a particular source that provided sufficient notice. The court also determined plaintiffs would be able to identify class members utilizing the company’s database, including cross-referencing contact and location information provided by users.
While class certification is not a decision on the merits and does not determine whether the company failed to provide notice or otherwise violated CIPA, it is a significant step in the litigation process. If certification is denied, the potential damages and settlement value are significantly lower; if plaintiffs clear the class certification hurdle, both increase substantially.
This case is a reminder to businesses to review their current website practices and implement updates or changes to address issues such as notice (regarding tracking technologies in use) and consent (whether express or implied) before collecting user data. It is also important, when using third-party tracking technologies, to audit whether vendors comply with privacy laws and have data protection measures in place.
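A practical first step in that kind of audit is simply inventorying which third-party hosts a page loads scripts from and comparing them against a reviewed, approved list. The sketch below is a minimal illustration using only Python's standard library (the page HTML and approved-domain list are hypothetical; a real audit would also cover pixels, iframes, and dynamically injected tags):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptCollector(HTMLParser):
    """Collect the src attribute of every <script> tag on a page."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

def unapproved_script_hosts(html, approved_hosts):
    """Return script-serving hosts on the page that are not on the approved list."""
    parser = ScriptCollector()
    parser.feed(html)
    hosts = {urlparse(src).netloc for src in parser.sources if urlparse(src).netloc}
    return sorted(hosts - set(approved_hosts))

page = ('<script src="https://cdn.example.com/app.js"></script>'
        '<script src="https://tracker.example.net/replay.js"></script>')
print(unapproved_script_hosts(page, {"cdn.example.com"}))
# → ['tracker.example.net']
```

Flagged hosts are candidates for review: is the vendor disclosed in the privacy notice, and is consent obtained before its script runs?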

HHS’s Proposed Security Rule Updates Will Substantially Increase the Controls Needed to Comply with the Technical Safeguard Requirements

In this week’s installment of our blog series on the U.S. Department of Health and Human Services’ (HHS) HIPAA Security Rule updates in its January 6 Notice of Proposed Rulemaking (NPRM), we are tackling the proposed updates to the HIPAA Security Rule’s technical safeguard requirements (45 C.F.R. § 164.312). Last week’s post on group health plan and sponsor practices is available here.
Existing Requirements
Under the existing regulations, HIPAA-covered entities and business associates must generally implement the following five standard technical safeguards for electronic protected health information (ePHI):

Access Controls – Implementing technical policies and procedures for its electronic information systems that maintain ePHI to allow only authorized persons to access ePHI.
Audit Controls – Implement hardware, software, and/or procedural mechanisms to record and examine activity in information systems that contain or use ePHI.
Integrity – Implementing policies and procedures to ensure that ePHI is not improperly altered or destroyed.
Authentication – Implementing procedures to verify that a person seeking access to ePHI is who they say they are.
Transmission Security – Implementing technical security measures to guard against unauthorized access to ePHI that is being transmitted over an electronic network.

The existing requirements either do not identify the specific control methods or technologies to implement or are otherwise “addressable” as opposed to “required” in some circumstances for regulated entities — until now.
What Are the New Technical Safeguard Requirements?
The NPRM substantially modifies and specifies the particular technical safeguards needed for compliance. In particular, the NPRM restructures and recategorizes existing requirements and adds stringent standards and implementation specifications. HHS has also proposed removing the distinction between “required” and “addressable” implementation specifications, making all implementation specifications required, with specific, limited exceptions.
A handful of the new or updated standards are summarized below:

Access Controls – New implementation specifications would require technical controls to ensure access is limited to individuals and technology assets that need it. Two of the required controls are network segmentation and account suspension/disabling capabilities after multiple log-in failures.
Encryption and Decryption – Formerly an addressable implementation specification, the NPRM would make encryption of ePHI at-rest and in-transit mandatory, with a handful of limited exceptions, such as when the individual requests to receive their ePHI in an unencrypted manner.
Configuration Management – This new standard would require a regulated entity to establish and deploy technical controls for securing relevant electronic information systems and the technology assets in its relevant electronic information systems, including workstations, in a consistent manner. A regulated entity also would be required to establish and maintain a minimum level of security for its information systems and technology assets.
Audit Trail and System Log Controls – Identified as “crucial” in the NPRM, this reorganized standard, formerly identified as the “audit control,” would require regulated entities to monitor all activity in their electronic information systems in real time for indications of unauthorized access and activity. The standard would also require the entity to perform and document an audit at least once every 12 months.
Authentication – This standard enhances the implementation specifications for verifying that persons seeking access to ePHI are who they claim to be. Of note, the NPRM would require all regulated entities to deploy multi-factor authentication (MFA) on all technology assets, subject to limited exceptions with compensating controls, such as during an emergency when MFA is infeasible. Another exception applies where the regulated entity’s existing technology does not support MFA; in that case, the entity would need a transition plan to move the ePHI to a technology asset that does support MFA within a reasonable time. Medical devices authorized for marketing by the FDA before March 2023 would be exempt from MFA if the entity has deployed all recommended updates; devices authorized after that date would be exempt while the manufacturer supports the device or the entity has deployed any manufacturer-recommended updates or patches.
Other Notable Standards – In addition to the above, the NPRM would add standards for integrity, transmission security, vulnerability management, data backup and recovery, and information systems backup and recovery. These new standards would prescribe new or updated implementation specifications, such as conducting vulnerability scanning for technical vulnerabilities, including annual penetration testing and implementing a patch management program.
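To make the audit-trail and integrity concepts above concrete, one common technique for tamper-evident logging is a hash chain, where each log entry incorporates the hash of the previous one so any later alteration breaks every subsequent link. This is only an illustrative sketch of the general idea, not a control the NPRM prescribes, and the log format is invented:

```python
import hashlib

def append_entry(log, event):
    """Append an event, chaining each entry to the hash of the previous one."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    digest = hashlib.sha256((prev_hash + event).encode()).hexdigest()
    log.append({"event": event, "hash": digest})
    return log

def verify_chain(log):
    """Recompute every link; an altered entry invalidates its stored hash."""
    prev_hash = "0" * 64
    for entry in log:
        expected = hashlib.sha256((prev_hash + entry["event"]).encode()).hexdigest()
        if entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "user=alice action=view record=123")
append_entry(log, "user=bob action=edit record=123")
print(verify_chain(log))   # intact chain verifies: True
log[0]["event"] = "user=alice action=delete record=123"
print(verify_chain(log))   # tampering is detected: False
```

Real audit-log controls would add secure time sources, write-once storage, and alerting, but the hash chain captures why altered or destroyed entries become detectable.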

Listen to this article

Privacy Tip #430 – GrubHub Confirms Security Incident Through Third Party Vendor

If you are a GrubHub customer, read carefully. The app has confirmed a security incident involving a third-party vendor that allowed an unauthorized threat actor to access user contact information, including some customer names, email addresses, telephone numbers, and partial payment information for a subset of campus diners.
GrubHub’s response states, “The unauthorized party also accessed hashed passwords for certain legacy systems, and we proactively rotated any passwords that we believed might have been at risk. While the threat actor did not access any passwords associated with Grubhub Marketplace accounts, as always, we encourage customers to use unique passwords to minimize risk.”
If you are a GrubHub customer, you may want to change your password and ensure it is unique to that platform. 
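The “hashed passwords” detail matters because not all hashing is equal: fast, unsalted hashes of common passwords can be reversed with precomputed tables, while a salted, deliberately slow derivation such as PBKDF2 resists that. The sketch below is a generic illustration using Python's standard library (the parameters are illustrative, and nothing here describes GrubHub's actual systems):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=600_000):
    """Derive a salted PBKDF2-HMAC-SHA256 digest; the salt is stored alongside it."""
    salt = salt or os.urandom(16)  # unique salt defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored_digest, iterations=600_000):
    """Re-derive and compare in constant time to avoid timing side channels."""
    _, digest = hash_password(password, salt, iterations)
    return hmac.compare_digest(digest, stored_digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess1", salt, digest))                        # False
```

Even with strong hashing on the server side, reusing a password across sites means one breach can unlock others, which is why unique passwords per platform remain the baseline advice.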

UK Publishes AI Cyber Security Code of Practice and Implementation Guide

On January 31, 2025, the UK government published the Code of Practice for the Cyber Security of AI (the “Code”) and the Implementation Guide for the Code (the “Guide”). The purpose of the Code is to provide cyber security requirements for the lifecycle of AI. Compliance with the Code is voluntary. The purpose of the Guide is to provide guidance to stakeholders on how to meet the cyber security requirements outlined in the Code, including by providing examples of compliance. The Code and the Guide will also be submitted to the European Telecommunications Standards Institute (“ETSI”) where they will be used as the basis for a new global standard (TS 104 223) and accompanying implementation guide (TR 104 128).
The Code defines each of the stakeholders that form part of the AI supply chain, such as developers (any business across any sector, as well as individuals, responsible for creating or adapting an AI model and/or system), system operators (any business across any sector that has responsibility for embedding/deploying an AI model and system within their infrastructure) and end-users (any employee within a business and UK consumers who use an AI model and/or system for any purpose, including to support their work and day-to-day activities). The Code is broken down into 13 principles, each of which contains provisions, compliance with which is either required, recommended or a possibility. While the Code is voluntary, if a business chooses to comply, it must adhere to those provisions which are stated as required. The principles are:

Principle 1: Raise awareness of AI security threats and risks.
Principle 2: Design your AI system for security as well as functionality and performance.
Principle 3: Evaluate the threats and manage the risks to your AI system.
Principle 4: Enable human responsibility for AI systems.
Principle 5: Identify, track and protect your assets.
Principle 6: Secure your infrastructure.
Principle 7: Secure your supply chain.
Principle 8: Document your data, models and prompts.
Principle 9: Conduct appropriate testing and evaluation.
Principle 10: Communication and processes associated with End-users and Affected Entities.
Principle 11: Maintain regular security updates, patches and mitigations.
Principle 12: Monitor your system’s behavior.
Principle 13: Ensure proper data and model disposal.

The Guide breaks down each principle by its provisions, detailing associated risks/threats with each provision and providing example measures/controls that could be implemented to comply with each provision. 
Read the press release, the Code, and the Guide.

Consultation: Ofcom to Auction More Spectrum for 4G and 5G Mobile Use

Ofcom has announced its intention to auction the upper block of 1.4 GHz band (1492-1517 MHz) for 4G and 5G mobile use. It expects that further deployment of the upper block of the 1.4 GHz band will help improve the performance of mobile services, particularly in areas where coverage is patchy, such as some indoor areas and in remote parts of the UK. To avoid potential disruption to Inmarsat satellite receivers on board maritime vessels and aircraft, Ofcom is also proposing to limit the power that mobile networks can transmit around certain ports and airports for an initial period, relaxing this limit later on.
To award the 1492-1517 MHz spectrum, Ofcom plans to use a sealed-bid, single round auction format, with a ‘second price’ rule – where winning bidders pay fees based on the second highest price bid.
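Under the “second price” rule Ofcom describes, the highest sealed bid wins the lot but the winner pays the runner-up's bid, which encourages bidders to bid their true valuation. A toy illustration of the payment rule (bidder names and amounts are invented, and a real multi-lot spectrum auction is considerably more involved):

```python
def second_price_award(bids):
    """bids: dict mapping bidder -> sealed bid. Returns (winner, price paid)."""
    if len(bids) < 2:
        raise ValueError("a second-price rule needs at least two bids")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the second-highest bid, not their own
    return winner, price

print(second_price_award({"OpA": 120, "OpB": 95, "OpC": 150}))  # → ('OpC', 120)
```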
Ofcom’s proposals (including a draft license template) are available here and any interested party can provide comments until 25 April 2025. Ofcom also intends to consult separately on its competition assessment for this award once any spectrum trades, which are being considered as part of the merger between H3G and Vodafone, have been completed. Potentially interested parties should bear in mind the potential risk of antitrust liability arising from the exchange of competitively sensitive information in connection with auctions of this kind.

Ninth Circuit Blocks Enforcement of California Social Media Addiction Law Pending Appeal

On January 28, 2025, the U.S. Court of Appeals for the Ninth Circuit temporarily enjoined enforcement of S.B. 976, the Protecting Our Kids from Social Media Addiction Act (“the Act”), in its entirety, pending an appeal in a case brought by NetChoice, a technology trade group. The Act was set to take effect January 1, 2025, which a district court order extended to February 1, 2025. Our earlier post provides a summary of the Act’s requirements and restrictions. Under an expedited appeal schedule, the Ninth Circuit will hear arguments in this case in April 2025 and will not grant the parties briefing schedule extensions.

OPT OUT WOES: Albertson’s Sued in TCPA Class Action for Allegedly Failing to Honor Stop Request – And It’s a Harbinger of Things to Come

April 11, 2025.
That’s the date the FCC’s new and incredibly dangerous TCPA revocation rules are set to take effect.
If you’re not aware of the new rule, you need to watch the incredibly important webinar that breaks it down.
Even without the massive expansion of revocation scope in April, some companies are having a tough time with their revocation process.
For instance, Albertson’s was just sued in a TCPA class action asserting it failed to honor a consumer opt-out request.
A litigator named Jennifer Schofield claims she personally placed her number on the national DNC way back in 2006. Nonetheless, she contends Albertson’s began texting her in 2024 from shortcode 48687.
According to Schofield, she texted “stop” and received a response from Albertson’s confirming no further texts would be sent.
Yet she received another text.
Schofield again allegedly texted “STOP” and, once again, continued to receive texts.
Schofield contends she was annoyed and harassed and sued Albertson’s under the TCPA. She also hopes to represent two nationwide classes:
IDNC Class: All persons within the United States who, within the four years prior to the filing of this lawsuit through the date of class certification, received two or more text messages within any 12-month period, from or on behalf of Defendant, regarding Defendant’s goods or services, to said person’s residential cellular telephone number, after communicating to Defendant that they did not wish to receive text messages by replying to the messages with a “stop” or similar opt-out instruction.
DNC Class: All persons in the United States who, within the four years prior to the filing of this action through the date of class certification, (1) were sent more than one text message within any 12-month period; (2) where the person’s telephone number had been listed on the National Do Not Call Registry for at least thirty days; (3) regarding Defendant’s property, goods, and/or services; (4) to said person’s residential cellular telephone number; (5) after making a request to Defendant to not receive further text messages by replying with a “stop” or similar opt-out instruction in response to Defendant’s text message(s).
Obviously we don’t know whether the allegations have any merit (the case was just filed), but we will keep an eye on it.
Full complaint here: Albertsons Complaint

No Digital Right of Access: Union Fails in Lawsuit Against Adidas

Since the COVID-19 pandemic at the latest, working from home has become widely popular: around a quarter of all employed persons in Germany work at least partly from home. But while remote work brings a range of advantages for employees and employers alike, it confronts unions, already struggling with declining membership, with growing challenges: potential union members who work from home and do not physically appear on company premises are simply harder to reach than colleagues who commute to the workplace daily and can be recruited by unions in the company parking lot.
The Mining, Chemical and Energy Industrial Union (IG BCE) took precisely this challenge as the occasion to demand that Adidas hand over its employees’ work email addresses, grant access to the company-wide social network Viva Engage, and link the union’s homepage on the start page of the sportswear manufacturer’s intranet. In a decision issued last week, however, the Federal Labor Court (Bundesarbeitsgericht) rejected the union’s demands and, with them, a “digital right of access.”
In its decision (BAG, judgment of January 28, 2025, case no. 1 AZR 33/24), the First Senate, presided over by the court’s president Inken Gallner, clarified that employers are not obligated to hand over work email addresses, whether of current or newly hired employees, to the competent union for membership recruitment purposes. While the freedom of association enshrined in Article 9(3) of the German Basic Law (GG) in principle entitles unions to use employees’ work email addresses to recruit and inform members, it does not obligate employers to actively support that recruitment by disclosing the addresses themselves. Beyond the employer’s competing fundamental rights under Articles 14 and 12(1) GG and its constitutionally guaranteed freedom of economic activity, the affected employees’ fundamental rights under Article 2(1) in conjunction with Article 1(1) GG and Article 8 of the Charter of Fundamental Rights of the European Union must also be taken into account and balanced against the freedom of association. The union’s demand failed to strike an appropriate balance among these rights.
The union also failed with its requests for access to the company’s internal social network and for a link to the union homepage on the start page of the company intranet. According to the Federal Labor Court, the resulting burdens on the employer outweigh the union’s interest, protected by Article 9(3) GG, in carrying out such recruitment measures. In particular, the demand for a link in the company intranet currently has no basis in statute and, absent an unintended gap in the Works Constitution Act, cannot be founded on Section 9(3) sentence 2 of the Federal Staff Representation Act (BPersVG).
Whether the legislature will respond to this decision and to the changing world of work by expressly codifying an electronic right of access remains to be seen. Until then, unions can only recruit new members by traditional means or, as the Federal Labor Court noted, ask potential members on site at the workplace for their work email addresses. Companies facing comparable union demands can, at least for now, confidently point to the case law from Erfurt.

SEC’s Crypto Journey Continues

In a wide-ranging public statement entitled “The Journey Begins,” SEC Commissioner Hester Peirce previewed next steps for the SEC’s Crypto Task Force. As chair of the Crypto Task Force, Commissioner Peirce’s statement lays out a broad agenda for the SEC’s approach to cryptocurrency over the next four years.
The statement begins by criticizing the SEC’s past approach to crypto, noting:
it took us a long time to get into this mess, and it is going to take us some time to get out of it. The Commission has engaged with the crypto industry in one form or another for more than a decade. The first bitcoin exchange-traded product application hit our doorstep in 2013, and the Commission brought a fraud case that had a tangential crypto element that same year. In 2017, we issued the DAO Section 21(a) report, which reflected the first application of the Howey test in this context. Since then, there have been many enforcement actions, a number of no-action letters, some exemptive relief, endless talk about crypto in speeches and statements, lots of meetings with crypto entrepreneurs, many inter-agency and international crypto working groups, discussion of certain aspects of crypto in rulemaking proposals, consideration of crypto-related issues in reviews of registration statements and other filings, and approval of numerous SRO proposed rule changes to list crypto exchange-traded products. 
Commissioner Peirce also sought to manage expectations about the timing and complexity of future SEC action:
Throughout this time, the Commission’s handling of crypto has been marked by legal imprecision and commercial impracticality. Consequently, many cases remain in litigation, many rules remain in the proposal stage, and many market participants remain in limbo. Determining how best to disentangle all these strands, including ongoing litigation, will take time. It will involve work across the whole agency and cooperation with other regulators. Please be patient. The Task Force wants to get to a good place, but we need to do so in an orderly, practical, and legally defensible way.
The statement hits libertarian notes, proclaiming, “In this country, people generally have a right to make decisions for themselves, but the counterpart to that wonderful American liberty is the equally wonderful American expectation that people must decide for themselves, not look to Mama Government to tell them what to do or not to do, nor to bail them out when they do something that turns out badly.” But Commissioner Peirce also warned that “SEC rules will not let you do whatever you want, whenever you want, however you want,” and that the SEC will not “tolerate liars, cheaters, and scammers.”
The heart of the statement lays out a 10-point, nonexclusive agenda for the SEC Crypto Task Force:

providing greater specificity as to which crypto assets are securities;
identifying areas both within and outside the SEC’s jurisdiction;
considering temporary regulatory relief for prior coin or token offerings;
modifying future paths for registering securities token offerings;
updating policies for special purpose broker-dealers transacting in crypto;
improving crypto custody options for investment advisers;
providing clarity around crypto lending and staking programs;
revisiting SEC policies regarding crypto exchange-traded products;
engaging with clearing agencies and transfer agents transacting in crypto; and
considering a cross-border sandbox for limited experimentation.

Commissioner Peirce’s statement concludes with instructions on how to engage with the Crypto Task Force, both in writing and in person.