EDPB Launches Coordinated Enforcement Framework Action on the Right to Erasure

On March 5, 2025, the European Data Protection Board (“EDPB”) announced the launch of its latest Coordinated Enforcement Framework action (“CEF action”) addressing the right to erasure. The new CEF action follows the EDPB’s 2024 CEF action on the right of access.
During the course of 2025, 32 data protection authorities (“DPAs”) across the European Economic Area will take part in this initiative. The EDPB selected the right to erasure for the 2025 CEF action on the basis that it is one of the most frequently exercised rights under the EU General Data Protection Regulation and one that is frequently the basis of individuals’ complaints to DPAs.
As part of the 2025 CEF action, DPAs will contact controllers from various sectors and may conduct fact-finding exercises or open new investigations. DPAs will evaluate how controllers handle and respond to the requests for erasure that they receive and, in particular, how they apply the conditions and exceptions for the exercise of this right.

Tax Transparency and Data Privacy — Which Wins?

As tax authorities embrace new digital technologies, the issue of safeguarding citizens’ data privacy rights steps to the fore. Since the implementation of the EU General Data Protection Regulation (GDPR) in 2018, there has been a greater focus on data privacy from both the public and organisations. At the same time, the cooperative international effort to combat offshore tax evasion has been steadily increasing. Several information-sharing regimes have been conceived to allow tax authorities to share information globally relating to financial accounts and investments under Automatic Exchange of Information Agreements.
In J Webster v HMRC [2024] EWHC 530 (KB), Ms. Webster, a US citizen, brought a case against His Majesty’s Revenue and Customs (HMRC) regarding information sharing under the Foreign Account Tax Compliance Act. At the centre of this case stands the question of which wins — tax transparency or data privacy?
Automatic Exchange of Information (AEOI)
The United Kingdom shares information with foreign tax authorities under two specific regimes:
1. Foreign Account Tax Compliance Act (FATCA): The FATCA regime is US-specific. Financial institutions outside of the United States are required to provide the US tax authorities with information relating to the foreign financial accounts of US individuals. Information includes, for example, the individual’s name and address, account balance and amount of interest accrued.
2. Common Reporting Standard (CRS): Nicknamed “global FATCA” by commentators at its inception, the CRS requires the automatic exchange of financial account information between tax authorities globally. The information shared is largely the same as that under FATCA, with the addition of individuals’ dates and places of birth (in some cases).
In practice, financial institutions in the United Kingdom supply the required data to HMRC, which then provides it to the relevant tax authorities on an annual and automatic basis.
The GDPR
Data privacy in the United Kingdom is regulated by the UK GDPR (the retained version of the EU GDPR) and the Data Protection Act 2018. Under Article 4(1) of the UK GDPR, personal data means any information relating to an identified or identifiable natural person. There are seven key principles for processing personal data (found in Article 5, UK GDPR). Broadly, these require that personal data is: (i) processed lawfully, fairly and transparently, (ii) collected for specified, explicit and legitimate purposes only, (iii) limited to what is necessary for the purposes (minimisation), (iv) accurate, (v) not stored longer than necessary, and (vi) processed in a manner that ensures appropriate security of the data. Finally, the data controller must be responsible for and able to demonstrate compliance with the preceding six principles.
Importantly, personal data must only be transferred outside of the United Kingdom if the receiving country ensures an adequate level of protection for data subjects or appropriate safeguards are in place for the transfer (Articles 45 and 46, UK GDPR).
So, Which Wins?
Ms. Webster argued that information sharing between tax authorities under the FATCA regime breached her data privacy and human rights. In summary, she claimed that there were no appropriate safeguards in place for the transfers by HMRC and that US law failed to provide adequate levels of protection. Additionally, the data transfers allegedly fell foul of the principle of proportionality, as bulk processing did not account for Ms. Webster’s personal circumstances — specifically, that Ms. Webster had no US tax obligations (having modest income in the United Kingdom and owning no assets or income in the United States).
Unfortunately, the central question of “which wins?” remains unanswered. The judgment focused more on questions of procedure than substance — for example, as argued by HMRC, whether the claim should have been brought via judicial review and was, therefore, an abuse of process.
However, it is not difficult to see some merit in Ms. Webster’s claim. The aims of FATCA and the CRS are clearly worthy, and tax transparency is important. Yet, since personal data is processed automatically and whether an individual poses any real risk of tax evasion is immaterial to that processing, it is hard to see how the principles of proportionality and data minimisation are comfortably being met.
Information-sharing regimes have been challenged in other countries as well. For example, the Belgian Data Protection Authority has argued (in a decision that has since been annulled) that data exchanges under FATCA violate the EU GDPR since more information than necessary is shared and the purposes for the data transfers are insufficiently defined. The Slovakian Data Protection Authority also challenged FATCA on the grounds that the AEOI Agreement under which data transfers took place did not contain the necessary safeguards to transfer personal data to third countries.
It is widely agreed that the GDPR is far more comprehensive than US privacy laws — some might remember the highly publicised “Schrems II” case from 2020,[1] where the Court of Justice of the European Union declared that US privacy laws fail to ensure an adequate level of protection. Recent news about the US Treasury being hacked also inevitably raises concerns about the security of the personal data transferred, and with President Donald Trump’s firing of Democratic members of the Privacy and Civil Liberties Oversight Board since the beginning of his second term, more widespread privacy concerns now linger.
We will have to wait and see how the tension between tax transparency and data privacy is resolved. A judgment that engages with the merits of Ms. Webster’s concerns would bring some much-needed answers. What is clear, however, is that the pressure on tax authorities to address concerns about individuals’ data privacy is not subsiding.

[1] Data Protection Commissioner v Facebook Ireland Ltd, Maximilian Schrems and intervening parties, Case C-311/18.
 
Georgia Griesbaum contributed to this article

Cybersecurity in the Nuclear Industry: US and UK Regulation and the Sellafield Case

Key Points:

Real-world examples from both the U.S. and U.K. demonstrate that nuclear facilities are being targeted by sophisticated cyber attackers, including state actors. This isn’t just a theoretical risk—it’s happening now, and facilities must take it seriously.
The successful prosecution of Sellafield, with a significant fine (£332,500), shows that regulators are now willing to take strong enforcement action, even when no actual breach has occurred. Nuclear facilities cannot afford to wait for an incident before improving their cybersecurity—they must be proactive.
With both the U.S. and U.K. strengthening their regulatory frameworks and increasing enforcement powers, nuclear facilities should take steps now to review and upgrade cybersecurity measures. This includes not just updating technical controls, but also ensuring compliance with security plans, auditing systems, and maintaining proper documentation. 

National security regulators are particularly concerned about the vulnerabilities of nuclear facilities to cyberattacks. In March 2022, the U.S. Justice Department unsealed criminal indictments against four agents of the Russian government, charging them with offenses related to cyber “spear-phishing” attacks that compromised the business network of the Wolf Creek Nuclear Operating Corporation (WCNOC) in Burlington, Kansas. Also of note is the October 2024 prosecution and conviction of Sellafield Ltd in the U.K. for three offenses involving inadequate cybersecurity controls. In that case, the company (rather than the hacker) was charged by the Office for Nuclear Regulation (ONR) for failing to protect sensitive nuclear information and for failing to follow its own cybersecurity plan between 2019 and 2023.
Fortunately, the nuclear facilities in both cases were not materially compromised. Nevertheless, the targeting of nuclear facility operators demonstrates that malicious actors intend to exploit cyber vulnerabilities within the nuclear industry.
U.S. Regulatory Framework
The Nuclear Regulatory Commission (“NRC”) has been active in establishing rules and guidelines to enhance the cybersecurity of U.S. nuclear facilities:

10 CFR § 73.54: A key component of the NRC’s regulatory framework, this regulation mandates that nuclear facilities establish and maintain a cybersecurity program to protect digital assets critical to safety, security, and emergency preparedness.
Regulatory Guide 5.71: In February 2023, the NRC revised its regulatory guide to provide detailed guidance on implementing cybersecurity measures. It outlines a defensive strategy that includes the identification of critical digital assets, continuous assessment of threats, and implementation of protective measures.
Nuclear Energy Institute (NEI) 08-09 (2018 Addendum): This document, developed by the nuclear industry with NRC’s endorsement, offers a comprehensive framework for cybersecurity programs. It emphasizes a risk-informed approach, allowing facilities to tailor their cybersecurity measures based on specific threats and vulnerabilities.

In 2013, the NRC’s Office of Nuclear Security and Incident Response established a Cyber Security Branch (CSB) to strengthen internal governance of the agency’s regulatory activities. Today, the NRC actively monitors threats associated with cybersecurity against NRC-licensed facilities. The CSB maintains a dedicated cyber assessment team responsible for analysing and evaluating real-world cyber incidents. 
The team evaluates whether an identified threat could impact licensed facilities and makes recommendations for NRC actions and communications to the licensees. The NRC also coordinates with other intelligence and law enforcement communities including the National Counterterrorism Center, the Department of Homeland Security’s U.S. Computer Emergency Response Team, and the Federal Bureau of Investigation in working to prevent cyberattacks.
U.K. Regulatory Framework
The U.K. Nuclear industry is subject to a range of different cybersecurity regulations that all have at their heart the concept that effective cybersecurity is a mandatory requirement. These rules have existed in various forms over the years, but there is now increasing activity by regulators to strictly enforce them.
The overarching framework is set out in the Civil Nuclear Cyber Security Strategy 2022. This strategy aims to strengthen the cybersecurity posture of the U.K. civil nuclear sector over five years. It focuses on four key objectives:

Risk Management: Prioritizing cybersecurity as part of a holistic risk management approach.
Risk Mitigation: Proactively addressing cyber risks, including those from legacy systems and new technologies.
Incident Management: Enhancing resilience by preparing for and responding to cyber incidents collaboratively.
Culture and Skills: Promoting a positive security culture and developing cyber skills within the sector.

Underpinning this strategy is an overlapping (and growing) regime of cybersecurity laws:

The Nuclear Industries Security Regulations 2003 (“the NISR”) govern a wide range of security issues, including obligations to ensure that “sensitive nuclear information” is kept secure.
The Network and Information Systems Regulations 2018 (“NIS 1”) designate nuclear sites as critical infrastructure and impose an obligation to implement “appropriate technical and operational measures” to protect IT systems and to ensure continuity of service.

Whilst these regimes have been in place for some time, regulators have recently stepped up enforcement, as evidenced by the prosecution of Sellafield.
The Sellafield Case 
Sellafield Ltd, the company licensed to operate the Sellafield nuclear decommissioning and waste site, was fined £332,500 in October 2024 after pleading guilty to three offences relating to the inadequate cybersecurity controls and procedures it had in place over a four-year period.
The prosecution was brought by the U.K.’s independent nuclear regulator (the Office for Nuclear Regulation (“ONR”)) following its investigation where it had identified that Sellafield Ltd had failed to meet the requisite standards, procedures and arrangements set out in its own approved plan for cybersecurity as required under the NISR.
The ONR’s case was not brought on the basis that the security failings had actually been exploited (seemingly because there was a lack of evidence that attacks had been successful, rather than conclusive proof that attacks were stopped). Instead, the prosecution rested on Sellafield’s unsatisfactory management of its IT systems: had the vulnerabilities been exploited, attackers could have gained unauthorised access to critical systems and key data, resulting in disrupted operations, damaged facilities and delays to important decommissioning activities. In particular, Sellafield failed to comply with its own cybersecurity plan and failed to undertake annual checks on the security of its operational and information technology systems.
Following its guilty plea to three offences under the NISR, Sellafield Ltd was ordered to pay a fine of £332,500, along with prosecution costs of £53,253.20. Despite the successful prosecution, the ONR has reported that the cybersecurity failings have yet to be fixed and are subject to ongoing required improvements. 
Going forward, the U.K. legal regime is only going to get stronger. The Government has announced that it plans to introduce a new Cyber Security and Resilience Bill which intends to strengthen the U.K.’s operational resilience to cyber threats by, amongst other things:

Updating the existing (NIS1) regime to ensure that more essential services are protected, including by increasing the scope of digital services and supply chains within the regime;
Increasing regulators’ powers through introducing new cost recovery mechanisms and the ability to proactively investigate potential vulnerabilities (similar to the U.S.’s 2022 update to inspection procedure 71130); and
Expanding reporting requirements. 

It is worth noting that the European Union’s transition from NIS 1 to NIS 2 demonstrates a strengthened approach to cybersecurity, featuring expanded scope, more detailed requirements, and enhanced enforcement measures. This update emphasizes the EU’s dedication to protecting critical infrastructure and extends security obligations to equipment suppliers and service providers. The U.K. Government is likely to use NIS 2 as a model when developing its own Cyber Security and Resilience Bill.

Going forward, the U.K. legal regime is only going to get stronger. The Government has announced that it plans to introduce a new Cyber Security and Resilience Bill which intends to strengthen the U.K.’s operational resilience to cyber threats.

Looking Ahead
U.S. and U.K. regulators are focused on ensuring that organisations providing essential services, and their related key digital suppliers, implement sufficient technical controls to enhance the level of cybersecurity and help protect critical infrastructure. Those in the nuclear industry will be at the sharp edge of these changes and should take the opportunity to review their operational and technical cybersecurity measures now to ensure they are fit for purpose.

FROM CORN DOGS TO COURTROOMS: Sonic’s Texts Might Cost More Than a Combo Meal

Quick update here for you. Have you ever received a text about a fast food deal you never signed up for? Usually, I receive these texts because I signed up for some deal, like a free milkshake or a discount. That is the trade-off: you get a coupon, and in return you let them send you marketing you can opt out of. Well, the Plaintiff in this newly filed class action lawsuit says he got these texts without ever signing up, and he is taking Sonic Drive-In to court over it. The lawsuit, filed in the United States District Court for the Western District of Oklahoma, accuses Sonic of sending promotional texts to consumers who had placed their numbers on the National DNC Registry. See Brennan v. Sonic, Inc., No. 5:25-CV-00280 (W.D. Okla. filed Mar. 4, 2025).
According to the Complaint, Plaintiff added his number to the DNC Registry on February 3, 2024. That should have stopped unsolicited marketing texts, but by March 6, Sonic was already sending him offers for grilled cheese and 99-cent corn dogs. The Complaint details texts sent on March 6, March 11, March 13, March 15, and March 20. Plaintiff claims he never provided his phone number to Sonic, never had a business relationship with them, and never opted into any rewards program. So how did Sonic get his number? Interesting…
The lawsuit argues that Sonic’s “impersonal and generic” messages, their frequency, and the lack of consent all suggest that Sonic used an automatic telephone dialing system (“ATDS”).
This is where things make me ponder. This is not Plaintiff’s first TCPA lawsuit. He has previously filed complaints against Pizza Hut, DirecTV, Meyer Corporation, and Transfinancial Companies. That is a stacked lineup of big-name defendants, and that track record raises some interesting questions. Is Plaintiff an unlucky mass marketing recipient, or is something else at play here? Is this about stopping unlawful texts, or is Plaintiff turning TCPA enforcement into a side hustle? Either way, it puts Sonic in a tough spot. This is where Troutman Amin always steps up to the plate for stellar legal work.
Beyond the Plaintiff’s individual claims, the lawsuit covers a broader group of consumers who allegedly received these messages. The Complaint defines two classes: the DNC Registry Class, comprising those who were on the registry but still received texts, and the Autodialed Text Class, covering anyone who received automated marketing texts from Sonic without providing written consent.
If the Court sides with Plaintiff, Sonic might find itself in a legal pickle that no amount of tots and milkshakes can fix—no pun intended. We’ll be sure to keep you posted.

FCC’s New Consent Revocation Rule Set to Take Effect in April 2025

The Federal Communications Commission (FCC) has a new rule under the TCPA governing revocation of consent for robocalls and text messages, set to go into effect on April 11, 2025. The rule is designed to give consumers greater control over their ability to withdraw consent for marketing communications. Businesses that use text messaging and robocalls to communicate with customers should review their policies to ensure readiness for the new requirements.
Key Provisions of the New Rule
The FCC’s regulation prevents businesses from requiring consumers to use a specific method to revoke consent. Instead, consumers must be able to withdraw consent using any reasonable means that clearly conveys their request to stop receiving further calls or messages.
To provide clarity, the FCC has identified several standardized keywords — including “stop,” “quit,” “revoke,” “opt out,” “cancel,” “unsubscribe,” and “end” — that must be honored as explicit revocation requests. Additionally, the regulation establishes that opt-out requests submitted via automated or interactive voice response systems are presumed valid unless proven otherwise.
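Operationally, the standardized keyword list lends itself to simple normalization of inbound replies. The sketch below is purely illustrative (the function name and normalization rules are our own, not the FCC's, and the rule also requires honoring any other reasonable message that clearly conveys a revocation request, which no keyword match alone can capture):

```python
import re

# Keywords the FCC order identifies as per-se revocation requests.
# Note: any other reasonable message clearly conveying revocation
# must also be honored; this list is a floor, not a ceiling.
REVOCATION_KEYWORDS = {
    "stop", "quit", "revoke", "opt out", "cancel", "unsubscribe", "end",
}

def is_explicit_revocation(message: str) -> bool:
    """Return True if an inbound reply matches a standardized keyword,
    ignoring case, punctuation, and extra whitespace."""
    normalized = re.sub(r"[^a-z ]", "", message.strip().lower())
    normalized = re.sub(r"\s+", " ", normalized).strip()
    return normalized in REVOCATION_KEYWORDS
```

A production system would still need a path for reviewing free-text replies that do not match a keyword, since those may nonetheless be valid revocations under the rule.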
Burden of Proof on Businesses
When a consumer revokes consent using a method other than those listed in the order, a rebuttable presumption is created that the request is valid; the sender may overcome it by demonstrating otherwise. If a business’s texting system does not support reply messages, it must clearly disclose this limitation in each message and offer an alternative, reasonable method for revocation.
Shortened Compliance Timeframe
Previously, companies had more flexibility in processing opt-out requests, but the new rule mandates compliance within 10 business days of receiving a revocation request. Additionally, the rule expands the definition of consent revocation, specifying that withdrawing consent for one type of robocall or text message applies to all robocalls and texts from that sender.
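For illustration only (the helper below is our own, not the FCC's, and it counts only weekends as non-business days, ignoring federal holidays), the 10-business-day window might be computed like this:

```python
from datetime import date, timedelta

def revocation_deadline(received: date, business_days: int = 10) -> date:
    """Latest date to honor a revocation: a set number of business days
    (Monday-Friday) after the request is received. A real implementation
    would also skip federal holidays."""
    d = received
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d
```

For example, a revocation received on Friday, April 11, 2025 (the rule's effective date) would need to be honored by Friday, April 25, 2025, holidays aside.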
Confirmatory Opt-Out Texts Allowed
One aspect of the rule has already gone into effect: Businesses may send a single confirmation text acknowledging the consumer’s opt-out request, provided that it contains no promotional content and is sent within five minutes of the revocation request. In cases where consumers have signed up for multiple types of messages, businesses may ask for clarification about which messages they wish to discontinue. However, if the consumer does not respond, the request must be interpreted as revoking consent for all robocalls and texts from that sender.
What Businesses Need to Know
At the moment, there are no legal challenges to this forthcoming FCC rule. Organizations — especially those engaged in business-to-business (B2B) outreach — should start preparing for compliance with these upcoming changes. The 10-day compliance window and the broad scope of revocation requests mean that companies may need to adjust existing consent management practices to remain in compliance with TCPA regulations.
With the new rule set to take effect soon, businesses should review their opt-out procedures, update their compliance policies, and ensure their customer communication platforms can accommodate these regulatory changes to avoid potential penalties.

Social Engineering + Stolen Credential Threats Continue to Dominate Cyber-Attacks

CrowdStrike recently published its 2025 Global Threat Report, which, among other conclusions, emphasized that social engineering tactics aimed at stealing credentials grew an astounding 442% in the second half of 2024. Correspondingly, the use of stolen credentials to attack systems increased.
Other observations in the report include:

Adversaries are operating with unprecedented speed and adaptability;
China expanded its cyber espionage enterprise;
Stolen credential use is increasing;
Social engineering tactics aim to steal credentials;
Generative AI drives new adversary risks;
Cloud-conscious actors continue to innovate; and
Adversaries are exploiting vulnerabilities to gain access.

The details behind these conclusions include that the time it takes an adversary to begin moving through a network after gaining access “reached an all-time low in the past year. The average fell to 48 minutes, and the fastest breakout time we observed dropped to a mere 51 seconds.” This means that threat actors are breaking in and moving swiftly within the system, making them difficult to detect, block, and tackle.
Vishing “saw explosive growth—up 442% between the first and second half of 2024.”
CrowdStrike’s observations are instructive for planning and hardening defenses against these risks. A crucial piece of the defense is continued education and training of employees, covering:

How social engineering schemes work;
The importance of protecting credentials; and
How credentials are used to enter a system.

Although we have been repeatedly educating employees on these themes, the statistics and real-life experience show that the message is not getting through. Addressing these specific risks through your training program may help stem the tide of successful social engineering campaigns.

Design-Code Laws: The Future of Children’s Privacy or White Noise?

In recent weeks, there has been significant buzz around the progression of legislation aimed at restricting minors’ use of social media. This trend has been ongoing for years but continues to face resistance. This is largely due to strong arguments that all-out bans on social media use not only infringe on a minor’s First Amendment rights but, in many cases, also create an environment that allows for the violation of that minor’s privacy.
Although companies subject to these laws must be wary of the potential ramifications and challenges if such legislation is enacted, these concerns should be integrated into product development rather than driving business decisions.
Design-Code Laws
A parallel trend emerging in children’s privacy is an influx of legislation aimed at requiring companies to proactively consider the best interests of minors as they design their websites (Design-Code Laws). These Design-Code Laws would require companies to implement and maintain controls to minimize the harms minors could face when using their offerings.
At the federal level, although not exclusively a Design-Code Law, the Kids Online Safety Act (KOSA) included similar elements, and like those proposed bills, placed the responsibility on covered platforms to protect children from potential harms arising from their offerings. Specifically, KOSA introduced the concept of a “duty of care,” wherein covered platforms would be required to act in the best interests of minors under 18 and protect them from online harms. Additionally, KOSA would require covered platforms to adhere to multiple design requirements, including enabling default safeguard settings for minors and providing parents with tools to manage and monitor their children’s online activity. Although the bill’s momentum has slowed as supporters try to account for prospective challenges in each subsequent draft, it remains active and has received renewed support from members of the current administration.
At the state level, there is more activity around Design-Code Laws, with both California and Maryland enacting legislation. California’s law, which was enacted in 2022, has yet to go into effect and continues to face opposition largely centered around the law’s alleged violation of the First Amendment. Similarly, Maryland’s 2024 law is currently being challenged. Nonetheless, seven other states (Illinois, Nebraska, New Mexico, Michigan, Minnesota, South Carolina and Vermont) have introduced similar Design-Code Laws, each taking into consideration challenges that other states have faced and attempting to further tailor the language to withstand those challenges while still addressing the core issue of protecting minors online.
Why Does This Matter?
While opposition to bright-line laws banning minors’ social media use has seen success, Design-Code Laws not only have stronger support but will also likely continue to evolve to withstand challenges over time. Although it is unclear exactly where Design-Code Laws will end up (which states will enact them, which will withstand challenges, and what the core elements of the surviving laws will be), the following trends are clear:

There is a desire to regulate how companies collect data from or target their offerings to minors in order to protect this audience. The scope of Design-Code Laws often does not stop at social media companies; rather, these laws are intended to regulate any company providing an online offering likely to be accessed by children under the age of 18. Given the nature and accessibility of the web, many more companies will be within the scope of these laws than of the hotly contested laws banning social media use.
These laws bring the issue of conducting data privacy impact assessments (DPIAs) to the forefront. Already mandated by various state and international data protection laws, DPIA requirements compel companies to establish processes to proactively identify, assess and mitigate risks associated with processing personal information. Companies dealing with minor data in these jurisdictions will need to:

Create a DPIA process if they do not have one.
Build in additional time in their product development cycle to conduct a DPIA and address the findings.
Consider how to treat product roll-out in jurisdictions that do not have the same stringent requirements as those that have implemented Design-Code Laws.

As attention to children’s privacy continues to escalate, particularly at the state level, companies must remain vigilant and proactive in how they address these concerns. Although enactment of these laws may seem far off given the continued challenges, the emerging trends are clear. Proactively creating processes will mitigate the effects these laws may have on existing offerings and will also allow a company to gradually build out processes that are both effective and minimally burdensome to the business.

The BR Privacy & Security Download: March 2025

STATE & LOCAL LAWS & REGULATIONS
Virginia Legislature Passes Bill Regulating High-risk AI: The Virginia legislature passed HB 2094, the High-Risk Artificial Intelligence Developer and Deployer Act (the “Act”). Using a similar approach to the Colorado AI Act passed in 2024 and California’s proposed regulations for automated decision-making technology, the Act defines “high-risk AI systems” as AI systems that make consequential decisions, which are decisions that have material legal or similarly significant effects on a consumer’s ability to obtain things such as housing, healthcare services, financial services, access to employment, and education. The Act would require developers to use reasonable care to prevent algorithmic discrimination and to provide detailed documentation on an AI system’s purpose, limitations, and risk mitigation measures. Deployers of AI systems would be required to implement risk management policies, conduct impact assessments before deploying high-risk AI systems, disclose AI system use to consumers, and provide opportunities for correction and appeal. The bill is currently with Virginia Governor Glenn Youngkin, and it is unclear whether he will sign it.
Connecticut Introduces AI Bill: After an effort to pass AI legislation stalled last year in the Connecticut House of Representatives, another AI bill was introduced in the Connecticut Senate in February. SB-2 would establish regulations for the development, integration, and deployment of high-risk AI systems designed to prevent algorithmic discrimination and promote transparency and accountability. SB-2 would specifically regulate high-risk AI systems, defined as AI systems making consequential decisions affecting areas like employment, education, and healthcare. The bill includes similar requirements as the Connecticut AI bill considered in 2024 and would require developers to use reasonable care to prevent algorithmic discrimination and provide documentation on an AI system’s purpose, limitations, and risk mitigation measures. Deployers of high-risk AI systems would be required to implement risk management policies, conduct impact assessments before deployment of high-risk AI systems, disclose AI system use to consumers, and provide opportunities for appeal and correction.
New York Governor Signs Several Privacy Bills: New York Governor Kathy Hochul signed a series of bills expanding compliance obligations for social media platforms, debt collectors who use social media platforms, and dating applications. Senate Bill 895B—effective 180 days after becoming law—requires social media platforms operating in New York to post terms of service explaining how users may flag content they believe violates the platform’s terms. Senate Bill 5703B—effective immediately—prohibits the use of social media platforms for debt collection purposes. Senate Bill 2376B—effective 90 days after becoming law—expands the scope of New York’s identity theft protection law by including in its scope the theft of medical and health insurance information. Finally, Senate Bill 1759B—effective 60 days after becoming law—requires online dating services to notify individuals who were contacted by members banned for using a false identity, and to provide specific information to help users avoid being defrauded. Importantly, the New York Health Information Privacy Act, which would significantly expand the obligations of businesses that may collect broadly defined “health information” through their websites, has not yet been signed.
California Reintroduces Bill Requiring Browser-Based Opt-Out Preference Signals: For the second year in a row, the California Legislature has introduced a bill requiring browsers and mobile operating systems to provide a setting that enables a consumer to send an opt-out preference signal to businesses with which the consumer interacts through the browser or mobile operating system. The California Consumer Privacy Act, as amended by the California Privacy Rights Act (“CCPA”), provides California residents with the ability to opt out of the sale or sharing of their personal data, including through an opt-out preference signal. AB 566 would amend the CCPA to ensure that consumers have the ability to do so. AB 566 requires the opt-out preference signal setting to be easy for a reasonable person to locate and configure. The bill further gives the California Privacy Protection Agency (“CPPA”), the agency charged with enforcing the CCPA, the authority to adopt regulations to implement and administer the bill. The CPPA has sponsored AB 566.
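For technical context, the most widely adopted opt-out preference signal today, Global Privacy Control (“GPC”), is transmitted by participating browsers as the HTTP request header `Sec-GPC: 1`. The sketch below is illustrative only: the function name and the plain header dictionary are assumptions, not anything prescribed by AB 566 or the CCPA regulations.

```python
def has_opt_out_signal(request_headers: dict) -> bool:
    """Detect a browser opt-out preference signal such as Global Privacy
    Control, which participating browsers send as "Sec-GPC: 1"."""
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {name.lower(): value for name, value in request_headers.items()}
    return normalized.get("sec-gpc", "").strip() == "1"

# A request from a browser with the signal enabled:
print(has_opt_out_signal({"Sec-GPC": "1", "User-Agent": "ExampleBrowser"}))  # True
# A request without the signal:
print(has_opt_out_signal({"User-Agent": "ExampleBrowser"}))  # False
```

In practice, a business detecting such a signal would treat it as a valid opt-out of sale or sharing for that browser or device.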
Virginia Senate Passes Amendments to Virginia Consumer Data Protection Act: Virginia’s Senate Bill 1023 (“SB 1023”) amends the Virginia Consumer Data Protection Act by banning the sale of precise geolocation data. The bill defines precise location data as anything that can locate a person within 1,750 feet. Introduced by Democratic State Senator Russet Perry, the bill has garnered bipartisan support in the Virginia Senate, passing with a 35-5 vote on February 4, 2025. Perry stated that the type of data the bill intends to ban has been used to target people in domestic violence and stalking cases, as well as for scams.
Task Force Publishes Recommendations for Improvement of Colorado AI Act: The Colorado Artificial Intelligence Impact Task Force published its Report of Recommendations for Improvement of the Colorado AI Act. The Act, which was signed into law in May 2024, has faced significant pushback from a broad range of interest groups regarding ambiguity in its definitions, scope, and obligations. The Report is designed to help lawmakers identify and implement amendments to the Act prior to its February 1, 2026, effective date. The Report does not provide substantive recommendations regarding content but instead categorizes topics of potential changes based on how likely they are to receive consensus. The report identified four topics in which consensus “appears achievable with additional time,” four topics where “achieving consensus likely depends on whether and how to implement changes to multiple interconnected sections,” and seven topics facing “firm disagreement on approach where creativity will be needed.” These topics range from key definitions under the Act to the scope of its application and exemptions.
AI Legislation on Kids’ Privacy and Bias Introduced in California: California Assembly Member Bauer-Kahan introduced yet another California bill targeting Artificial Intelligence (“AI”). The Leading Ethical AI Development for Kids Act (“LEAD Act”) would establish the LEAD for Kids Standards Board in the Government Operations Agency. The Board would be required to adopt regulations governing—among other things—the criteria for conducting risk assessments for “covered products,” defined to include AI systems that are intended to, or highly likely to, be used by children. The Act would also require covered developers to conduct and submit risk assessments to the Board. Finally, the Act would authorize a private right of action allowing parents and guardians of children to recover actual damages resulting from violations of the law.

FEDERAL LAWS & REGULATIONS
House Committee Working Group Organized to Discuss Federal Privacy Law: Congressman Brett Guthrie, Chairman of the House Committee on Energy and Commerce (the “Committee”), and Congressman John Joyce, M.D., Vice Chairman of the Committee, announced the establishment of a working group to explore comprehensive data privacy legislation. The working group is made up entirely of Republican members and is the first action in this new Congressional session on comprehensive data privacy legislation. 
Kids Off Social Media Act Advances to Senate Floor: The Senate Commerce Committee advanced the Kids Off Social Media Act. The Act would prohibit social media platforms from allowing children under 13 to create accounts, prohibit platforms from algorithmically recommending content to teens under 17, and require schools to limit social media use on their networks as a condition of receiving certain funding. The Act is facing significant pushback from digital rights groups, including the Electronic Frontier Foundation and the American Civil Liberties Union, which claim that the Act would violate the First Amendment.
Business Groups Oppose Proposed Updates to HIPAA Security Rule: As previously reported, the U.S. Department of Health and Human Services (“HHS”) Office for Civil Rights (“OCR”) issued a Notice of Proposed Rulemaking (“NPRM”) to amend the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Security Rule to strengthen cybersecurity protections for electronic protected health information (“ePHI”). See Blank Rome’s Client Alert on the proposed rule. A coalition of business groups, including the College of Healthcare Information Management Executives, America’s Essential Hospitals, American Health Care Association, Association of American Medical Colleges, Federation of American Hospitals, Health Innovation Alliance, Medical Group Management Association and National Center for Assisted Living, have written to President Trump and HHS Secretary Robert F. Kennedy, Jr. opposing the proposed rule. The business groups argue that the proposed rule imposes great financial burdens on the healthcare sector, including on rural hospitals, which would divert attention and funds away from other critical areas. The business groups also argue that the proposed rule contradicts Public Law 116-321, which explicitly requires HHS to consider a regulated entity’s adoption of recognized security practices when enforcing the HIPAA Security Rule, by not addressing or incorporating this legal requirement.
National Artificial Intelligence Advisory Committee Adopts List of 10 AI Priorities: The National Artificial Intelligence Advisory Committee (“NAIAC”), which was established under the National Artificial Intelligence Initiative Act of 2020, approved a draft report for the Trump administration with 10 recommendations to address AI policy issues. The recommendations cover AI issues in employment, AI awareness and literacy, and AI in education, science, health, government, and law enforcement, as well as recommendations for empowering small businesses, strengthening AI governance, and supporting AI innovation in a way that benefits Americans.
CFPB Acting Director Instructs Agency Staff to Stop Work: Consumer Financial Protection Bureau (“CFPB”) Acting Director Russell Vought instructed agency staff to “stand down” and refrain from doing any work. The communication to CFPB employees followed an instruction to suspend regulatory activities and halt CFPB rulemaking. Vought also suspended CFPB’s supervision and examination activities. This freeze would impact the CFPB’s rule on its oversight of digital payment apps as well as the CFPB’s privacy rule that created a right of data portability for customers of financial institutions.

U.S. LITIGATION
First Washington My Health My Data Lawsuit Filed: Amazon is facing a class action lawsuit alleging violations of Washington’s My Health My Data Act (“MHMDA”), along with federal wiretap laws and state privacy laws. The suit is the first one brought under MHMDA’s private right of action and centers on Amazon’s software development kit (“SDK”) embedded in third-party mobile apps. The plaintiff’s complaint alleges Amazon collected location data of users without their consent for targeted advertising. The complaint also alleges that the SDK collected time-stamped location data, mobile advertising IDs, and other information that could reveal sensitive health details. According to the lawsuit, this data could expose insights into a user’s health status, such as visits to healthcare facilities or health behaviors, without users knowing Amazon was also obtaining and monetizing this data. The lawsuit seeks injunctive relief, damages, and disgorgement of profits related to the alleged unlawful behavior. The outcome could clarify how broadly courts interpret “consumer health data” under the MHMDA.
NetChoice Files Lawsuit to Challenge Maryland Age-Appropriate Design Act: NetChoice—a tech industry group—filed a complaint in federal court in Maryland challenging the Maryland Age-Appropriate Design Code Act as violating the First Amendment. The Act was signed into law in May 2024 and became effective in October 2024. It requires online services that are likely to be accessed by children under the age of 18 to provide enhanced safeguards for, and limit the collection of data from, minors. In its Complaint, NetChoice alleges that the Act will not meaningfully improve online safety and will burden online platforms with the “impossible choice” of either proactively censoring categories of constitutionally protected speech or implementing privacy-invasive age verification systems that create serious cybersecurity risks. NetChoice has been active in challenging similar Acts across the country, including in California, where it has successfully delayed the implementation of the similar California Age-Appropriate Design Code Act.
Kochava Settles Privacy Class Action; Unable to Dismiss FTC Lawsuit: Kochava Inc. (“Kochava”), a mobile app analytics provider and data broker, has settled the class action lawsuits alleging Kochava collected and sold precise geolocation data of consumers that originated from mobile applications. The settlement requires Kochava to pay damages of up to $17,500 for the lead plaintiffs and attorneys’ fees of up to $1.5 million. Among other required changes to Kochava’s privacy practices, the settlement requires Kochava to implement a feature aimed at blocking the sharing or use of raw location data associated with health care facilities, schools, jails, and other sensitive venues. Relatedly, U.S. District Judge B. Lynn Winmill of the District of Idaho denied Kochava’s motion to dismiss the lawsuit brought by the Federal Trade Commission (“FTC”) for Kochava’s alleged violations of Section 5 of the FTC Act. The FTC alleges that Kochava’s data practices are unfair and deceptive under Section 5 of the FTC Act, as it sells the sensitive personal information collected through its Mobile Advertising ID system (“MAIDs”) to its customers, providing customers a “360-degree perspective” on consumers’ behavior through subscriptions to its data feeds, without the consumer’s knowledge or consent. In the order denying Kochava’s motion to dismiss, Winmill rejected Kochava’s argument that Section 5 of the FTC Act is limited to tangible injuries and wrote that the “FTC has plausibly pled that Kochava’s practices are unfair within the meaning of the FTC Act.”
Texas District Court Blocks Enforcement of Texas SCOPE Act: The U.S. District Court for the Western District of Texas (“Texas District Court”) granted a preliminary injunction blocking enforcement of Texas’ Securing Children Online through Parental Empowerment Act (“SCOPE Act”). The SCOPE Act requires digital service providers to protect children under 18 from harmful content and data collection practices. In Students Engaged in Advancing Texas v. Paxton, plaintiffs sued the Texas Attorney General to block enforcement of the SCOPE Act, arguing the law is an unconstitutional restriction of free speech. The Texas District Court ruled that the SCOPE Act is a content-based statute subject to strict scrutiny, and that certain of the SCOPE Act’s content-monitoring-and-filtering, targeted-advertising, and age-verification requirements failed strict scrutiny and should be facially invalidated. Accordingly, the Texas District Court issued a preliminary injunction halting enforcement of those provisions. The remaining provisions of the law remain in effect.
California Attorney General Agrees to Narrowing of State’s Social Media Law: The California Attorney General has agreed to not enforce certain parts of AB 587, now codified in the Business & Professions Code, sections 22675-22681, which set forth content moderation requirements for social media platforms (the “Social Media Law”). X Corp. (“X”) filed suit against the California Attorney General, alleging that the Social Media Law was unconstitutional, censoring speech based on what the state sees as objectionable. While the U.S. District Court for the Eastern District of California (“California District Court”) initially denied X’s request for a preliminary injunction to block the California Attorney General from enforcing the Social Media Law, the Ninth Circuit overturned that decision, holding that certain provisions of the law regarding extreme content failed the strict-scrutiny test for content-based restrictions on speech, violating the First Amendment. X and the California Attorney General have asked the California District Court to enter a final judgment based on the Ninth Circuit decision. The California Attorney General has also agreed to pay $345,576 in attorney fees and costs.

U.S. ENFORCEMENT
Arkansas Attorney General Sues Automaker over Data Privacy Practices: Arkansas Attorney General Tim Griffin announced that his office filed a lawsuit against General Motors (“GM”) and its subsidiary OnStar for allegedly deceiving Arkansans and selling data collected through OnStar from more than 100,000 Arkansas drivers’ vehicles to third parties, who then sold the data to insurance companies that used the data to deny insurance coverage and increase rates. The lawsuit alleges that GM advertised OnStar as offering the benefits of better driving, safety, and operability of its vehicles, but violated the Arkansas Deceptive Trade Practices Act by misleading consumers about how driving data was used. The lawsuit was filed in the Circuit Court of Phillips County, Arkansas.
Healthcare Companies Settle FCA Claims over Cybersecurity Requirements: Health Net and its parent company, Centene Corp. (collectively, “Health Net”), have settled with the United States Department of Justice (“DOJ”) for allegations that Health Net falsely certified compliance with cybersecurity requirements under a U.S. Department of Defense contract. Health Net had contracted with the Defense Health Agency of the U.S. Department of Defense (“DHA”) to provide managed healthcare support services for DHA’s TRICARE health benefits program. The DOJ alleged that Health Net failed to comply with its contractual obligations to implement and maintain certain federal cybersecurity and privacy controls. The DOJ alleged that Health Net violated the False Claims Act by falsely stating its compliance in related annual certifications to the DHA. The DOJ further alleged that Health Net ignored reports from internal and third-party auditors about cybersecurity risks on its systems and networks. Under the settlement, Health Net must pay the DOJ and DHA $11.25 million.
Eyewear Provider Fined $1.5M for HIPAA Violations: The U.S. Department of Health and Human Services (“HHS”), Office for Civil Rights (“OCR”) imposed a $1,500,000 civil money penalty against Warby Parker for violations of the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Security Rule. The penalty stems from a 2018 security incident that OCR investigated: between September 25, 2018, and November 30, 2018, third parties accessed the accounts of nearly 200,000 customers using usernames and passwords obtained from breaches of other websites, a method known as “credential stuffing.” The compromised data included names, addresses, email addresses, payment card information, and eyewear prescriptions. OCR found that Warby Parker failed to conduct an accurate risk analysis, implement sufficient security measures, and regularly review information system activity.
CPPA Finalizes Sixth Data Broker Registration Enforcement Action: The California Privacy Protection Agency announced that it is seeking a $46,000 penalty against Jerico Pictures, Inc., d/b/a National Public Data, a Florida-based data broker, for allegedly failing to register and pay an annual fee as required by the California Delete Act. The Delete Act requires data brokers to register and pay an annual fee that funds the California Data Broker Registry. This action comes following a 2024 data breach in which National Public Data reportedly exposed 2.9 billion records, including names and Social Security Numbers. This is the sixth action taken by the CPPA against data brokers, with the first five actions resulting in settlements.

INTERNATIONAL LAWS & REGULATIONS
First EU AI Act Provisions Become Effective; Guidelines on Prohibited AI Adopted: The first provisions of the EU AI Act (the “Act”) came into force on February 2, 2025. The Act’s prohibitions on certain types of AI systems deemed to pose an unacceptable risk, along with its rules on AI literacy, are now applicable in the EU. Prohibited AI systems are those that present unacceptable risks to the fundamental rights and freedoms of individuals. They include, among other uses: social scoring for public and private purposes; exploitation of vulnerable individuals through subliminal techniques; biometric categorization of natural persons based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation; and emotion recognition in the workplace and educational institutions, unless for medical or safety reasons. The new AI literacy obligations will require organizations to put in place robust AI training programs to ensure a sufficient level of AI literacy for their staff and other persons working with AI systems. Certain obligations related to general-purpose AI models will become effective August 2, 2025. Most other obligations under the Act will become effective August 2, 2026.
UK Introduces AI Cyber Code of Practice: The UK government has introduced a voluntary Code of Practice to address cybersecurity risks in AI systems, with the aim of establishing a global standard via the European Telecommunications Standards Institute (“ETSI”). This code is deemed necessary due to the unique security risks associated with AI, such as data poisoning and prompt injection. It offers baseline security requirements for stakeholders in the AI supply chain, emphasizing secure design, development, deployment, maintenance, and end-of-life. The Code of Practice is intended as an addendum to the Software Code of Practice. It provides guidelines for developers, system operators, data custodians, end-users, and affected entities involved in AI systems. Principles within the code include raising awareness of AI security threats, designing AI systems for security, evaluating and managing risks, and enabling human responsibility for AI systems. The code also emphasizes the importance of documenting data, models, and prompts, as well as conducting appropriate testing and evaluation.
CJEU Issues Opinion on Pseudonymized Data: The Court of Justice of the European Union (“CJEU”) issued a decision in a case involving an appeal by the European Data Protection Supervisor (“EDPS”) against a General Court decision that annulled the EDPS’s decision regarding the processing of personal data by the Single Resolution Board (“SRB”) during the resolution of Banco Popular Español SA in insolvency proceedings. The case reviewed whether data transmitted by the SRB to Deloitte constituted personal data. The data at issue consisted of comments from parties interested in the proceedings that had been pseudonymized by assigning a random alphanumeric code, as well as aggregated and filtered, so that individual comments could not be distinguished within specific commentary themes. Deloitte did not have access to the codes or the original database. The court held that the data was personal data in the hands of the SRB. However, the court ruled that the EDPS was incorrect in determining that the pseudonymized data was personal data to Deloitte without analyzing whether it was reasonably possible that Deloitte could identify individuals from the data. As a takeaway, the CJEU left open the possibility that pseudonymized data could be organized and protected in such a way as to remove any reasonable possibility of re-identification with respect to a particular party, resulting in the data not constituting personal data under the GDPR.
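The mechanics the court described can be sketched in a few lines (an illustration of the general technique only, not of the SRB’s actual process; the function and field names are assumptions): identifiers are replaced with random codes, and the key table needed for re-identification never leaves the original controller.

```python
import secrets

def pseudonymize(comments: dict) -> tuple:
    """Replace each commenter's identity with a random alphanumeric code.

    Returns (records, key_table): the pseudonymized records can be shared
    with a third party, while the code-to-identity key table stays with the
    original controller. Without the key table, the recipient cannot link
    a code back to a person.
    """
    key_table = {}
    records = []
    for identity, text in comments.items():
        code = secrets.token_hex(8)  # random alphanumeric code
        key_table[code] = identity   # retained by the controller only
        records.append({"code": code, "comment": text})
    return records, key_table

records, key_table = pseudonymize({"commenter-1": "Objection to the valuation."})
# `records` carries only the code and comment text; `key_table` is held back.
```

Under the CJEU’s reasoning, whether such records remain “personal data” in the recipient’s hands turns on whether re-identification by that recipient is reasonably possible.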
European Commission Withdraws AI Liability Directive from Consideration; European Parliament Committee Votes to Press On: The European Commission announced it plans to withdraw the proposed EU AI Liability Directive, draft legislation for addressing harms caused by artificial intelligence. The decision was announced in the Commission’s 2025 Work Program, which states that there is no foreseeable agreement on the legislation. However, the proposed legislation has not yet been officially withdrawn. Despite the announcement, members of the European Parliament on the body’s Internal Market and Consumer Protection Committee voted to keep working on liability rules for artificial intelligence products. It remains to be seen whether the European Parliament and the EU Council can make continued progress in negotiating the proposal in the coming year.
Additional Authors: Daniel R. Saeedi, Rachel L. Schaller, Gabrielle N. Ganze, Ana Tagvoryan, P. Gavin Eastgate, Timothy W. Dickens, Jason C. Hirsch, Adam J. Landy, Amanda M. Noonan and Karen H. Shin.

TIME OUT!: NFL Team Tampa Bay Buccaneers Hit With Latest in a Series of Time-Restriction TCPA Class Actions

So TCPAWorld has been reporting on the clear trend of TCPA class action suits against companies (primarily retailers) that deploy text clubs, particularly suits arising out of the timing limitations in the TCPA and state statutes.
Well, the NFL’s Tampa Bay Buccaneers are the latest to fall victim to this trend with a new TCPA class action filed in Florida against the team’s ownership today.
Plaintiff Andrew Leech claims he was texted by the Buccaneers at 9:24 pm his time. He claims to live in Palm Beach County, Florida, so not sure what happened there.
Plaintiff seeks to represent a class consisting of:
All persons in the United States who from four years prior to the filing of this action through the date of class certification (1) Defendant, or anyone on Defendant’s behalf, (2) placed more than one marketing text message within any 12-month period; (3) where such marketing text messages were initiated before the hour of 8 a.m. or after 9 p.m. (local time at the called party’s location)
Notably, the Plaintiff does not say whether he agreed to be texted by the Buccaneers to begin with. As I have previously reported, the TCPA’s timing regulations likely do NOT apply to consented calls, but there is very little case law on the issue.
The case is brought by the Law Offices of Jibrael S. Hindi, the same firm behind a number of similar timing cases. (He is apparently a Dolphins fan…)
Again, until this trend abates, companies deploying SMS need to be EXTREMELY cautious to ensure they comply with timing limitations!
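By way of illustration only, a pre-send gate on the statute’s 8 a.m.–9 p.m. local-time window might look like the sketch below. It is a simplified, assumption-laden example: it does not address consent, carrier queuing delays, or how to determine a recipient’s actual location, which, as the Leech complaint shows, is itself a hard problem.

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

WINDOW_OPEN = time(8, 0)    # 8:00 a.m. local time
WINDOW_CLOSE = time(21, 0)  # 9:00 p.m. local time

def ok_to_text(now_utc: datetime, recipient_tz: str) -> bool:
    """Return True only if the recipient's local time falls inside the
    TCPA's permitted 8 a.m.-9 p.m. window."""
    local = now_utc.astimezone(ZoneInfo(recipient_tz))
    return WINDOW_OPEN <= local.time() < WINDOW_CLOSE

# 9:24 p.m. Eastern (the send time alleged in the complaint) fails the gate:
late = datetime(2025, 3, 6, 2, 24, tzinfo=ZoneInfo("UTC"))  # 9:24 p.m. EST
print(ok_to_text(late, "America/New_York"))  # False
```

A production system would also queue any message arriving outside the window until the window reopens, rather than dropping it.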

DISA Global Faces Class Action After Cyber-Attack

Last week, two separate class actions were filed in the federal district court for the Southern District of Texas against DISA Global Solutions (DISA), a third-party employment screening services provider, related to an April 2024 cyber-attack.
DISA provides drug and alcohol testing and background checks for employers. DISA reportedly faced a cyber-attack from February to April 2024, which resulted in unauthorized third-party access to over 3.3 million individuals’ personal information. According to DISA, the information may have contained individuals’ names, Social Security numbers, driver’s license numbers, and financial account information.
DISA sent notification letters to individuals around February 24, 2025. The lead plaintiffs in both actions claim that they were required to provide their personal information to DISA as part of a job application or to obtain certain employment-related benefits.
Data breach class actions can help inform entities’ risk management strategies. Below are some key takeaways from the class action complaints against DISA.
Reasonable Safeguards
One plaintiff alleges that DISA had a duty to exercise reasonable care in securing data, but that DISA breached that duty by “neglect[ing] to adequately invest in security measures.” The complaint lists numerous commonly accepted security standards, including:

Maintaining a secure firewall configuration;
Monitoring for suspicious credentials used to access servers; and
Monitoring for suspicious or irregular server requests.

The other plaintiff similarly alleges that DISA failed to adequately implement measures. This complaint also enumerates common measures, including:

Scanning all incoming and outgoing emails;
Configuring access controls; and
Applying the principle of least-privilege.

Such claims of inadequate security and privacy measures are common in data breach class action litigation. Organizations should evaluate their security standards and ensure they are aligned with current best practices.
Notification Timeframe
DISA’s notification letter to affected individuals states that the unauthorized access occurred between February and April 2024. DISA sent notification letters in February 2025. One plaintiff alleges that the “unreasonable delay in notification” heightened the foreseeability that affected individuals’ personal information has been or will be used maliciously by cybercriminals.
It can take months to investigate a cyber incident and determine the nature and extent of information involved. Still, organizations that experience such incidents should be mindful of the ways in which plaintiffs can use the notification timeframe in litigation.
Heightened Sensitivity of Social Security Numbers
One plaintiff includes in their complaint that Social Security numbers are “invaluable commodities and a frequent target of hackers.” This plaintiff alleges that, given the type of information DISA maintains and the frequency of other “high profile” data breaches, DISA should have foreseen and been aware of the risk of a cyber-attack.
The other plaintiff states that various courts have referred to Social Security numbers as the “gold standard” for identity theft and that their involvement is “significantly more valuable than the loss of” other types of personal information.
Not all data elements present the same level of risk if subject to unauthorized access. Organizations should track the types of information they maintain and understand that certain information may present higher risk if exposed, potentially requiring heightened security standards to protect it. The suits against DISA highlight that organizations should implement robust measures not only to minimize the risk of cyber-attacks but also to minimize litigation risk in the often-inevitable class actions that follow.
Roma Patel also contributed to this article. 

Warby Parker Settles Data Breach Case with OCR for $1.5M

Eyeglass manufacturer and retailer Warby Parker recently settled a 2018 data breach investigation by the Office for Civil Rights (OCR) for $1.5 million. According to OCR’s press release, Warby Parker self-reported that between September and November of 2018, unauthorized third parties had access to customer accounts following a credential stuffing attack. The names, mailing and email addresses, payment card information, and prescription information of 197,986 patients were compromised.
Following the OCR’s investigation, it alleged three violations of the HIPAA Security Rule, “including a failure to conduct an accurate and thorough risk analysis to identify the potential risks and vulnerabilities to ePHI in Warby Parker’s systems, a failure to implement security measures sufficient to reduce the risks and vulnerabilities to ePHI to a reasonable and appropriate level, and a failure to implement procedures to regularly review records of information system activity.” The settlement reiterates the importance of conducting an annual security risk assessment and implementing a risk management program.