CAUGHT WITH THEIR HAND IN THE COOKIE JAR?: CNN’s Privacy Lawsuit is Served Fresh and the Court is Taking a Bite

Greetings CIPAWorld!
Well folks, it looks like CNN is about to get a course in the ABC’s of CIPA! If you’ve ever wondered what happens behind the scenes when you visit a news website, a recent court case might make you think twice before clicking on your next headline. A federal judge in New York just rejected CNN’s Motion to Dismiss a class action lawsuit, putting the media giant on the defensive in what’s shaping up to be a significant showdown over digital privacy rights. CNN might be in the business of breaking news, but now they’re possibly breaking privacy laws too—allegedly, of course. It sounds like they need Troutman Amin on speed dial. The case could expose how the invisible machinery of web tracking operates—and whether it violates California privacy law.
Remember our CIPA queen, Queenie, who first broke the news on this case back in January 2024? She predicted this wave of pen register litigation after the Greenley v. Kochava ruling opened the floodgates. Well, her crystal ball was spot-on once again!
What started as a lesser-known facet of CIPA has become the next major battleground in privacy litigation. For those keeping score at home, Queenie’s batting a thousand on predicting CIPA litigation trends—from chat box cases to web session recording and now these pen register claims. If I were a betting person, I’d put my money on whatever she predicts next.
For a refresher on Queenie’s original deep dive into this case and its significance, check out her blog post here: CNN BREAKING NEWS: CNN Targeted In Massive CIPA Case Involving A NEW Theory Under Section 638.51!
So let’s get into the update. Lesh v. CNN, Inc., No. 24 Civ. 03132 (VM), 2025 U.S. Dist. LEXIS 30743 (S.D.N.Y. Feb. 20, 2025), pits a seemingly routine website visit against a state privacy law initially designed for telephone surveillance. The plaintiff, an ordinary visitor to CNN.com, found herself lead plaintiff in a lawsuit alleging that CNN secretly installed tracking software on her browser without consent. But this isn’t just about one person’s browsing habits—it’s about whether companies can legally monitor users in ways most people never realize.
Of course, we are dealing with a CIPA claim here. Specifically, Section 638.51 prohibits installing or using what’s called a “pen register” without a court order. For those new to CIPA litigation (or anyone who just wants to brush up), let’s break down the basics.
Originally, pen registers were devices used to record telephone numbers dialed from a specific phone line without capturing the actual conversations. Think of those old spy movies where agents track which numbers a suspect is calling. However, Judge Victor Marrero didn’t let the outdated terminology limit his interpretation. He ruled that how CNN’s trackers collect and transmit user data might qualify as a modern equivalent of a pen register. In other words, what once applied to landlines may now apply to websites silently gathering data behind the scenes.
Next, let’s talk about what CNN’s website actually does when you visit it (at least allegedly, according to the court documents). When your browser sends a request to CNN’s server, the server doesn’t just send back news articles. It also allegedly sends instructions that result in the installation of trackers from third-party companies like PubMatic, Magnite, and Antiview. These trackers, developed by third-party software companies that sell technology to help businesses place advertisements on their websites, then collect users’ IP addresses—a unique identifier that reveals their approximate location—and store cookies on their browsers to recognize them on future visits. The Court noted that these trackers don’t just passively log visits—they actively gather and transmit data about users, allegedly without their explicit consent.
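To make those mechanics concrete, here is a minimal, purely illustrative sketch in Python (hypothetical names throughout; this is not CNN’s or any ad-tech vendor’s actual code) of what a third-party tracking endpoint generally does once a publisher’s page has loaded its script: read the visitor’s IP address, set or recognize a persistent cookie, and log the visit.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from http import cookies
import time
import uuid

class TrackerEndpoint(BaseHTTPRequestHandler):
    """Hypothetical third-party tracking pixel endpoint (illustration only)."""

    def do_GET(self):
        ip = self.client_address[0]  # the visitor's IP address ("addressing information")
        jar = cookies.SimpleCookie(self.headers.get("Cookie", ""))
        visitor_id = jar["visitor_id"].value if "visitor_id" in jar else str(uuid.uuid4())

        # Log who connected, from where, and when -- no message content is captured.
        print({"visitor_id": visitor_id, "ip": ip, "ts": time.time(),
               "page": self.headers.get("Referer")})

        self.send_response(200)
        # A long-lived cookie lets the tracker recognize this browser on future visits.
        self.send_header("Set-Cookie",
                         f"visitor_id={visitor_id}; Max-Age=31536000; Path=/")
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(b"GIF89a")  # stand-in for a 1x1 tracking pixel

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), TrackerEndpoint).serve_forever()
```

The point of the sketch is simply that such an endpoint captures who is connecting, from where, and when, rather than the content of any communication, which is exactly the distinction the pen register analysis turns on.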
What’s particularly clever about Judge Marrero’s analysis is how he breathes new life into an old statute. He rejected CNN’s argument that CIPA only applies to telephones, reasoning that “the plain text of Section 638.50 clearly does not limit the application of pen registers to telephones.” Lesh, 2025 U.S. Dist. LEXIS 30743, at *11. He continued, “[T]he Court cannot ignore the expansive language in the California Legislature’s chosen definition [of pen register],” which is “specific as to the type of data [collected],” but “vague and inclusive as to the form of the collection tool.” Lesh, 2025 U.S. Dist. LEXIS 30743, at *11-12 (quoting Greenley v. Kochava, Inc., 684 F. Supp. 3d 1024, 1050 (S.D. Cal. 2023)).
In other words, the law wasn’t designed to protect telephones—it was designed to protect information. And if a website tracker is secretly capturing addressing information, the court says that’s fair game for regulation under CIPA. Judge Marrero’s reasoning builds on the framework established in Greenley, where another court applied CIPA to modern digital tracking tools, rejecting the idea that pen registers are limited to phone lines.
It is refreshing to see courts adapting old laws to new technologies rather than throwing up their hands and waiting for legislatures to catch up. Judge Marrero found that IP addresses qualify as “addressing information” under the statute, citing the Ninth Circuit’s observation that “IP addresses constitute addressing information and do not necessarily reveal any more about the underlying contents of the communication than do phone numbers.” In re Zynga Litig., 750 F.3d 1098, 1108 (9th Cir. 2014).
This decision aligns with a broader legal trend recognizing that digital tracking implicates privacy rights. In Carpenter v. United States, 585 U.S. 296 (2018), the Supreme Court held that historical cell site data collection constitutes a search under the Fourth Amendment. Similarly, the Lesh ruling suggests that collecting and transmitting IP addresses without consent could be an unlawful invasion of privacy under CIPA.
CNN also attempted to argue that collecting an IP address does not violate privacy rights, citing Fourth Amendment case law. Specifically, CNN relied on cases like United States v. Ulbricht, 858 F.3d 71, 96 (2d Cir. 2017), which held that individuals do not have a reasonable expectation of privacy in their IP addresses under the Fourth Amendment. However, the Court swiftly rejected this argument, noting that CIPA imposes broader privacy protections than the constitutional floor set by the Fourth Amendment. As Judge Marrero explained, the fact that the Fourth Amendment does not recognize an expectation of privacy in IP addresses does not mean that California law cannot provide greater protections. The Court emphasized that CIPA “extends beyond constitutional constraints” and is an independent statutory safeguard against unauthorized tracking. This means that even if the government could collect IP addresses without violating the Constitution, private companies might still run afoul of CIPA when doing the same thing.
What is more, CNN asserted that it was entitled to an exception in the law for situations where “the consent of the user of that service has been obtained.” But Judge Marrero wasn’t buying it, noting that it would be “illogical to allow CNN’s consent to the installation of Trackers to bar claims from users like Lesh who did not give their consent.” Lesh, 2025 U.S. Dist. LEXIS 30743, at *13. Clearly, CNN cannot simply consent to its data collection practices and then claim immunity from privacy violations.
The Court also analyzed whether CNN’s Terms of Use were enforceable under a clickwrap or browsewrap framework. CNN argued that Lesh had agreed to its Terms of Use, which supposedly disclosed the use of trackers. To prove it, they submitted screenshots from the Wayback Machine (an internet archive). But the Court refused to consider these screenshots, finding they weren’t properly authenticated. Even beyond the evidentiary issue, the Court found that CNN’s agreement wasn’t a traditional “clickwrap” contract—where users affirmatively click “I agree” before using the site. Instead, the Court characterized it as a “hybrid clickwrap-browsewrap” agreement, meaning users were presented with a pop-up but were not required to take affirmative action beyond dismissing it. Courts have repeatedly rejected these types of passive consent mechanisms when determining enforceability. See Nguyen v. Barnes & Noble Inc., 763 F.3d 1171, 1176 (9th Cir. 2014) (rejecting website terms where “users were not required to affirmatively agree”).
What strikes me about this case is how it exposes the fiction of consent in our modern digital age. How many of us have actually read those terms of service pop-ups that appear when we visit websites? Be honest—when was the last time you did more than glance at one before clicking “X” to make it go away?
This decision joins other recent cases like Vishal Shah v. Fandom, Inc., No. 24-cv-01062-RFL, 2024 U.S. Dist. LEXIS 193032 (N.D. Cal. Oct. 21, 2024) and Mirmalek v. L.A. Times Commc’ns L.L.C., No. 24-cv-01797-CRB, 2024 U.S. Dist. LEXIS 227378 (N.D. Cal. Dec. 12, 2024), which have similarly found that website trackers collecting IP addresses may violate CIPA.
In both cases, the courts held that these tracking tools gather “addressing information” and function similarly to pen registers, a key issue in Lesh. This interpretation of CIPA directly contradicts the assumption that IP tracking is legally harmless.
If this interpretation holds up, it could force a massive shift in how websites collect data. Nearly every major website uses similar tracking technologies to gather visitor information, often for advertising purposes. Are they all potentially violating California law? The implications of this case extend far beyond CNN—any website using third-party trackers may now face legal scrutiny.
For now, CNN must answer Lesh’s Complaint within 21 days of the Court’s order.
The internet has evolved faster than our laws, and companies may have exploited that gap. But if this case is any indication, the courts are finally starting to close it.
As always,
Keep it legal, keep it smart, and stay ahead of the game.
Talk soon!

Update of German Law Aspects of Crypto Assets

Our recently updated article considers how EU and German civil and regulatory law approach crypto assets, with a particular focus on how those assets are dealt with in an insolvency.
In this article we explore the different types of crypto assets, their legal nature, how crypto assets are treated in insolvency proceedings, and how such assets can be recovered.

Blockchain+ Bi-Weekly; Highlights of the Last Two Weeks in Web3 Law: February 27, 2025

Three of the SEC’s key enforcement actions—all extensively covered in BitBlog and widely seen as emblematic of the agency’s adversarial stance toward the industry—are reportedly being halted or dismissed. The SEC has agreed in principle to drop its case against Coinbase without any penalties or required changes in business. The SEC also agreed in principle to drop its case against Uniswap for operating an unlicensed securities exchange. Both parties in SEC v. Binance have jointly requested a 60-day litigation stay. Meanwhile, highlighting that the challenges facing this emerging industry are not confined to the United States and its regulation, an international digital asset exchange suffered the largest known hack of its ETH wallets, reigniting concerns over the security of digital asset platforms. Additionally, there are ongoing and potential personnel changes within the U.S. government, particularly in the CFTC and Department of Commerce, with new leadership thus far demonstrating and advocating for positions that are supportive of the industry.
These developments and a few other brief notes are discussed below.
SEC v. Coinbase Dismissal Pending Commission Approval: February 21, 2025
Background: The SEC staff has agreed in principle to dismiss the agency’s action against Coinbase, in which the SEC alleged that Coinbase was operating as an unregistered securities exchange, broker, and clearing agency, along with unregistered offering charges tied to its staking-as-a-service program. Given that two of the three current commissioners have publicly opposed the agency’s actions against digital asset companies, the commission is likely to approve the dismissal recommendation, effectively bringing the matter to an end. This decision would also eliminate the pending interlocutory appeal before the Second Circuit, which was set to review certain rulings from the Motion to Dismiss stage.
Analysis: It is unusual to see a dismissal such as this one announced before final approval, but the timing may be strategic. With only three commissioners currently in place, the likely dissenting vote, Commissioner Crenshaw, could effectively block commission action to formally dismiss the case. One has to imagine that the portions of the cases against Binance and Kraken that rest on similar causes of action and legal theories are also likely to be dismissed. Another key question is whether other exchanges that delisted tokens alleged to be securities in response to these lawsuits will reconsider and reintroduce them to their trading platforms. The outcome of these cases could significantly impact how digital asset exchanges approach compliance and token offerings moving forward.
Bybit Exchange Suffers Largest Known Exchange Hack in History: February 21, 2025
Background: Bybit (a digital asset exchange based in Dubai that is not available to U.S. users) announced it suffered unauthorized access to various ETH wallets, resulting in roughly $1.4 billion being stolen from the platform. To put that into perspective, an estimated $2.2 billion was stolen from all platforms combined in 2024, meaning 2025 will likely dwarf that number. The hack is currently believed to be the work of the North Korean hacking organization the Lazarus Group, which was also behind the similar Phemex hack earlier this year. Bybit announced it still has the funds to cover customer withdrawals, and operations remain active.
Analysis: While the roughly 850,000 Bitcoin stolen in the infamous Mt. Gox hack is worth more in today’s dollars, this is likely the largest cryptocurrency hack in dollars at the time of the hack and one of the largest, if not the largest, heists of all time. It also makes the hackers one of the largest owners of ETH, as the over 400,000 ETH stolen is more than double the amount held by the Ethereum Foundation itself.
Brian Quintenz Tapped to Lead CFTC: February 11, 2025
Background: It is being fairly widely reported that President Trump plans to nominate a16z’s Brian Quintenz to lead the CFTC. Quintenz previously served as a commissioner at the CFTC from 2017 to 2021. He is currently the Global Head of Policy at venture firm a16z’s crypto investment arm, and if he is confirmed, he will replace the current acting Chair, Pham. He is the first potential CFTC chair to announce his nomination on Farcaster, the digital asset native social network.
Analysis: If you read his prior statements on digital assets and DeFi, it is clear why the digital asset legal community is largely supportive of this pick. He is also no stranger to prediction markets, which are likely to be a hot topic for regulation in the coming years. He recently wrote about being excited about governments putting bonds on-chain.
SEC v. Binance Joint Stay of Litigation Requested: February 11, 2025
Background: The parties in SEC v. Binance are requesting a 60-day pause in the litigation, citing the reason as “new SEC Acting Chairman Mark T. Uyeda launched a crypto task force dedicated to helping the SEC develop a regulatory framework for crypto assets. The work of this task force may impact and facilitate the potential resolution of this case.” Since the Court in Binance agreed to the stay request and with SEC v. Coinbase currently stayed pending an interlocutory appeal decision from the Second Circuit (and likely soon to be dismissed, as discussed above), that just leaves SEC v. Payward (i.e., Kraken) as the exchange case still ongoing post-election.
Analysis: The stay request is document 296 in the case’s court file, if that is any indication of how fiercely litigated SEC v. Binance has been over the past roughly 1.5 years. The fact that, on the same day, the SEC asked the Court to disregard certain allegations from its Amended Complaint in ruling on the pending Motion to Dismiss suggests there may have been an order from on high to enter a holding pattern in all digital asset litigation with approaching deadlines. But there is no way to know until the dust settles whether that was the case.
Briefly Noted:
Uniswap Labs Says SEC Probe Has Been Closed: Consistent with the Coinbase dismissal but different due to Uniswap’s decentralized nature, Uniswap Labs, the tech company behind the decentralized Uniswap protocol, announced that the SEC has also dropped its investigation for purportedly running an unregistered securities exchange, among other things. There is still the open question of whether decentralization really matters for bringing this type of claim and, if so, how much it matters. 
SEC Dismisses Dealer Rule Appeal: The SEC has decided not to go forward with its appeal of two challenges to the proposed expansion of the term “dealer” under applicable securities laws. Well done by the Blockchain Association and the Crypto Freedom Alliance of Texas, among others. The expanded definition had the potential to capture all kinds of traditional finance activities that historically had never been regulated, such as proprietary high-frequency trading.
SEC Launches Cyber Fraud Unit: The SEC has formed a Cyber and Emerging Technologies Unit, which will go after, in part, “fraud involving blockchain technology and crypto assets.” It makes sense to focus on fraud and consumer harm rather than fighting digital asset businesses that are trying to be good actors in an unclear regulatory environment.
SEC Crypto Task Force Meeting Logs: The SEC is posting meeting logs of its crypto task force meetings, which is really cool. So much of crypto has been built on open source and community development that making these task force submissions and meetings transparent just fits. There is also a list of questions that the SEC is seeking public input on answering. Please reach out to any of the listed authors if your company would like assistance in submitting such responses.
Nasdaq Proposes Rule for Trading Digital Assets: The Nasdaq exchange is proposing a rule change to permit the listing and trading of digital asset-based investment interests.
Secretary of Commerce Confirmed: Howard Lutnick, formerly of Cantor Fitzgerald, has been confirmed as the new Secretary of Commerce. He has said a ton of positive things about crypto in the past, so another ally in a high-ranking position is always good.
Nation-State Rug: The President of Argentina tweeted out about a memecoin, $LIBRA, which reached a market cap of almost $4 billion before insiders cashed out, making over a hundred million in the process and tanking the price of the token. Great thread explaining it all here. The fallout from the Argentina memecoin rug $LIBRA is ongoing, and it can be expected this will have significant repercussions down the line depending on the role of seemingly trusted service providers in the schemes.
SEC Commissioner Says Memecoins Not the SEC’s Concern: The very term “memecoin” implies that investors are not relying on the efforts of others to generate profits—a key factor in determining whether an asset qualifies as a security under U.S. law. If that weren’t already clear, SEC Commissioner Hester Peirce, who also heads the Crypto Task Force, recently reinforced this point, stating that the SEC’s jurisdiction is limited to securities. She emphasized that the regulation of many memecoins likely falls under other federal agencies, such as the CFTC, FTC, and others that oversee financial instruments that are not stock-like securities. This statement, while not actionable precedent, reflects an ongoing debate over the appropriate regulatory framework for digital assets and highlights the need for greater clarity in interagency enforcement efforts.
House Financial Services Subcommittee Holds Digital Asset Hearing: The House Financial Services Subcommittee recently held a hearing titled A Golden Age of Digital Assets: Charting a Path Forward. With legislators pushing an aggressive schedule to advance various digital asset bills, a rapid succession of hearings on these issues is expected. The hearing signals continued momentum in shaping the regulatory framework for digital assets and highlights the urgency among lawmakers to address key policy questions surrounding the industry.
Conclusion:
As personnel changes continue within the U.S. government and crypto-related industries, we can expect ongoing developments on the litigation front, further shaping the regulatory landscape for digital assets. The SEC’s decision to dismiss its case against Coinbase, along with other high-profile enforcement actions, signals a potential shift in regulatory strategy. Meanwhile, the recent Bybit Exchange hack, though not directly affecting U.S. users, underscores the urgent need for safe exchanges to ensure the secure access and custody of digital assets, as well as the need for more clarity involving self-custodial solutions. Alongside anti-money laundering and fraud detection and prevention, these issues will remain central to regulatory efforts in the evolving crypto ecosystem.

RUDE AWAKENING: Wellness Company Allegedly Sends 5:00 A.M. Texts To Consumers Without Consent

Hey TCPAWorld!
Imagine. It’s 5:00 a.m. The world is still. The kettle whistles before the aroma of freshly brewed coffee seizes the air. Your peace is impenetrable–or so you think. Then, an intrusion takes shape in the form of chimes and vibrations. Your phone is bombarded with telephone solicitations from a source to which you never provided prior express permission. This is the harm that 47 C.F.R. § 64.1200(c)(1) seeks to prevent by prohibiting callers from issuing telephone solicitations before 8 a.m.
In a complaint filed against Skinny Fit, LLC, a health and wellness company, the plaintiff claims to have suffered this very harm. Specifically, in Savage v. Skinny Fit, LLC, No. 8:25-CV-00376 (C.D. Cal. Feb. 25, 2025), Savage (“Plaintiff”) alleges that Skinny Fit, LLC (“Defendant”) violated 47 C.F.R. § 64.1200(c)(1) by initiating at least two telephone solicitations to Plaintiff’s phone before 8 a.m. (local time at the called party’s location). The first message Plaintiff claims to have received at 5:01 a.m. reads as follows:
SkinnyFit: Want to reveal your most flawless side? Try your match for free: https://kvo7.io/yzJPN7 Text STOP to opt-out

Id. at ¶ 14. Then, on the following day at 5:03 a.m., Plaintiff claims to have received another message:
SkinnyFit: Psst psst. Don’t wait until our best-selling Super Youth Orange Pineapple is out of stock… Did I mention you get to try it FREE for 21 days? Make your move now: https://.kvo7.io/cdTTNK

Plaintiff seeks to represent the following class:
Proposed Class. All persons in the United States who from four years prior to the filing of this action through the date of class certification (1) Defendant, or anyone on Defendant’s behalf, (2) placed more than one marketing text message within any 12-month period; (3) where such marketing text messages were initiated before the hour of 8 a.m. or after 9 p.m. (local time at the called party’s location).

Id. at ¶ 23.
It remains to be seen whether Skinny Fit, LLC, actually violated 47 C.F.R. § 64.1200(c)(1). That being said, this complaint serves as a cautionary tale: Avoid sending solicitations before 8 a.m. or after 9 p.m. (local time at the called party’s location), especially if you have not obtained prior express written consent. To stay compliant, it is essential to implement reliable systems to determine the recipient’s local time before sending any such solicitations.
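For teams that want to operationalize that advice, here is a minimal sketch in Python of a pre-send quiet-hours check (hypothetical function and variable names; determining a recipient’s time zone from a phone number or other data is a separate, non-trivial step that this sketch assumes has already been done):

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

QUIET_END = time(8, 0)      # no solicitations before 8:00 a.m. local time
QUIET_START = time(21, 0)   # or after 9:00 p.m. local time

def within_calling_hours(recipient_timezone: str, now_utc: datetime) -> bool:
    """Return True only if the recipient's local clock allows a solicitation."""
    local = now_utc.astimezone(ZoneInfo(recipient_timezone)).time()
    return QUIET_END <= local < QUIET_START

# Illustrative timestamp: 13:01 UTC is 5:01 a.m. Pacific Standard Time,
# the sort of early-morning send alleged in Savage.
ts = datetime(2025, 2, 20, 13, 1, tzinfo=ZoneInfo("UTC"))
print(within_calling_hours("America/Los_Angeles", ts))  # False -> hold the message
```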
One more thing. Starting April 11, 2025, the new revocation rule takes effect, which will require businesses to process revocation requests within a reasonable timeframe, no later than 10 business days. The team at Troutman Amin, LLP put together a concise and valuable one-sheet to help stakeholders understand the impact of the new rule. Download it here: Compliance Alert (TCPA Revocation Rule).
Stay tuned for more TCPA insights!

FCC Unanimously Votes to Keep Blocking: Expanding Call Blocking Requirements to Cover More VSPs

At the FCC’s open meeting this morning, under the gavel of newly appointed Chairman Carr, the Commission unanimously approved the NPRM for Strengthening Call Blocking Rules, docket number 17-59. What would the proposed rules impose?

Expand the range of voice service providers that block based on a reasonable do-not-originate list from only gateway providers to all voice service providers in a call path. A do-not-originate list may include unused, unallocated, or invalid numbers, as well as numbers for which a subscriber has requested blocking.
Modify the existing requirement for voice service providers to immediately notify callers when providers block calls based on reasonable analytics by requiring the use of Session Initiation Protocol (SIP) code 603+ and eliminating the use of SIP codes 603, 607, and 608 for this purpose. This will better ensure a fast resolution of any erroneous blocking.

We all know that call blocking is a real problem, and as Troutman laid it out for you just a few weeks ago, this is most likely not going to be a solution and may even create a bigger issue for small businesses.
Commissioner Starks stated, “[T]his is why the FCC needs to do everything in its power to close the vectors for fraud in its network, and on that account I would have hoped that this item would address robotexts and go further to block robocalls.”
Chairman Carr, meanwhile, shared, “Illegal robocalls are a nuisance and they annoy far too many Americans far too often, and the commission is going to continue its work to accelerate this crackdown on the scourge of illegal robocalls. Americans are fed up and tired of unknown numbers calling at all hours or spoofed numbers that appear to come from a trusted source. A lot of people have just given up and stopped answering their phones. And although there is no silver bullet here, the FCC needs to keep pushing on multiple fronts. We do so today by taking two important steps. Both of these actions are designed to stop illegal calls before they ever reach consumers.”
We will keep our eye on this as it goes through the process of becoming a final rule and provide effective dates once published in the Federal Register. Read the NPRM HERE.

The Why Behind the HHS Proposed Security Rule Updates

In this week’s installment of our blog series on the U.S. Department of Health and Human Services’ (HHS) HIPAA Security Rule updates in its January 6 Notice of Proposed Rulemaking (NPRM), we are exploring the justifications for the proposed updates to the Security Rule. Last week’s post on the updates related to Vulnerability Management, Incident Response & Contingency Plans can be found here.
Background
Throughout this series, we have discussed updates to various aspects of the Security Rule and explored how HHS seeks to implement new security requirements and implementation specifications for regulated entities. This week, we discuss the justifications behind HHS’s move and the challenges entities face in complying with the existing rule.
Justifications
HHS discussed multiple reasons for this Security Rule update, and a few are discussed below:

Importance of Strong Security Posture of Regulated Entities – The preamble to the NPRM posits that the increase in use of certified electronic health records (80% of physicians’ offices and 96% of hospitals as of 2021) fundamentally shifted the landscape of healthcare delivery. As a result, the security posture of regulated entities must be updated to accommodate such advancement. As treatment is increasingly provided electronically, the additional volume of sensitive patient information to protect continues to grow.
Increased Cybersecurity Incident Risks – HHS cites the heightened risk to patient safety during cybersecurity incidents and ransomware attacks as a key reason for these updates. The current state of the healthcare delivery system is propelled by deep digital connectivity, as prompted by the HITECH Act and the 21st Century Cures Act. If this system is connected but insecure, the connectivity could compromise patient safety, subjecting patients to unnecessary risk and forcing them to bear unaffordable personal costs. During a cybersecurity incident, patients’ health, and potentially their lives, may be at risk where such an incident creates impediments to the provision of healthcare. Serious consequences can result from interference with the operations of a critical medical device or obstructions to the administrative or clinical operations of a regulated entity, such as preventing the scheduling of appointments or viewing of an individual’s health history.
The Healthcare Industry Could Benefit from Centralized Security Standards Due to Inconsistent Implementation of Current Voluntary Standards – Despite the proliferation of voluntary cybersecurity standards, industry guidelines, and best practices, HHS found that many regulated entities have been slow to strengthen their security measures to protect ePHI and their information systems. HHS also noted that recent case law, including University of Texas M.D. Anderson Cancer Center v. HHS, has not accurately set forth the steps regulated entities must take to adequately protect the confidentiality, integrity, and availability of ePHI, as required by the statute. In that case, the Fifth Circuit vacated HIPAA penalties against MD Anderson, ruling that HHS acted arbitrarily and capriciously under the Administrative Procedure Act. The court found that MD Anderson met its obligations by implementing an encryption mechanism for ePHI. HHS disagreed that the encryption mechanism was sufficient and asserted its authority under HIPAA to mandate strengthened security standards for ePHI. This ruling, together with regulated entities’ limited adoption of voluntary cybersecurity standards, has led to inconsistent implementation of the Security Rule; providing clearer, mandatory standards was a noted justification for these revisions.

Takeaways
In 2021, Congress amended the HITECH Act, requiring HHS to assess whether an entity followed recognized cybersecurity practices in line with HHS guidance over the prior 12 months to qualify for HIPAA penalty reductions. In response to this requirement, HHS could have taken the approach of acknowledging recognized frameworks that offer robust safeguards to clarify expectations, enhance the overall security posture of covered entities, and reduce compliance gaps. While HHS refers to NIST frameworks in discussions on security, it has not formally recognized any specific frameworks that qualify for this so-called “safe harbor” incentive. Instead, HHS uses this NPRM to embark on a more prescriptive approach to the substantive rule based on its evaluation of various frameworks.
HHS maintains that these Security Rule updates still allow for flexibility and scalability in implementation. However, the revisions would limit that flexibility and raise the standards for protection beyond what was deemed acceptable in past Security Rule iterations. Given that the Security Rule’s standard of “reasonable and appropriate” safeguards must account for cost, size, complexity, and capabilities, the more prescriptive proposals in the NPRM and the lack of addressable requirements present a heavy burden — especially on smaller providers.
Whether these Security Rule revisions become finalized in the current form, a revised form, or at all remains an open item for the healthcare industry. Notably, the NPRM was published under the Xavier Becerra administration at HHS and prior to the confirmation of Robert F. Kennedy, Jr. as the new secretary of HHS. The current administration has not provided comment on its plans related to this NPRM, but we will continue to watch this as the March 7, 2025, deadline for public comment is inching closer.
Stay tuned to this series as our next and final blog post on the NPRM will consider how HHS views the application of artificial intelligence and other emerging technologies under the HHS Security Rule.
Please visit the HIPAA Security Rule NPRM and the HHS Fact Sheet for additional resources.
Listen to this post

TCPA Revocation of Consent Rule Effective April 2025

As previously blogged about in detail here, the TCPA rules on revoking consent for unwanted robocalls and robotexts become effective in April 2025.
Revocation of prior express consent for autodialed, prerecorded or artificial voice calls (and autodialed texts) must be permitted to be made by “any reasonable means.” Additionally, callers may not infringe on that right by designating an exclusive means to revoke consent that precludes the use of any other reasonable method.
Callers are required to honor do-not-call and consent revocation requests within a reasonable time not to exceed ten (10) business days of receipt of the request. Text message senders are limited to one SMS text message confirming that no further text messages will be transmitted. Such confirmation messages must be sent promptly following receipt of the opt-out request. The FCC will monitor compliance with this obligation to ensure that such requests are honored in a timely manner.
Importantly, when a consumer revokes consent for robocall or robotext telemarketing under the Telephone Consumer Protection Act, the caller may still reach the consumer through exempt informational calls unless the consumer separately expresses an intent to opt out of those as well. A revocation request made in response to an exempt informational call, however, is treated as a request to opt out of all non-emergency robocalls and robotexts.
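To illustrate the timing and confirmation constraints described above, here is a minimal sketch in Python (hypothetical data model and helper names; not a production compliance system) showing a revocation being honored within the 10-business-day window with at most one confirmation text:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ConsentRecord:
    phone: str
    marketing_consent: bool = True
    confirmation_sent: bool = False
    honor_by: date | None = None   # outside limit for fully processing the revocation

def add_business_days(start: date, days: int) -> date:
    """Count forward the given number of weekdays (holidays ignored in this sketch)."""
    d = start
    while days > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:   # Monday through Friday
            days -= 1
    return d

def handle_revocation(record: ConsentRecord, received: date) -> None:
    # A revocation made by any reasonable means (STOP, UNSUBSCRIBE, plain words, etc.)
    # must be honored; the sender cannot insist on one exclusive opt-out channel.
    record.marketing_consent = False
    record.honor_by = add_business_days(received, 10)   # a ceiling, not a target
    if not record.confirmation_sent:
        # send_sms(record.phone, "You are opted out; no further marketing texts.")  # hypothetical
        record.confirmation_sent = True   # at most one confirmation message

record = ConsentRecord(phone="+15551230000")
handle_revocation(record, date(2025, 4, 14))
print(record.honor_by)   # 2025-04-28 -- ten business days after receipt
```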
Takeaway: Digital marketers should consult with an experienced FTC compliance and defense lawyer to discuss the scope of the new rule, the implementation of compliance strategies and training materials, how to ensure that revocation requests are honored, and the identification of non-regulated technologies.

CNIL Publishes Recommendations on AI and GDPR

On February 7, 2025, the French Data Protection Authority (“CNIL”) released two recommendations aimed at guiding organizations in the responsible development and deployment of artificial intelligence (“AI”) systems in compliance with the EU General Data Protection Regulation (“GDPR”). The first recommendation is titled “AI: Informing Data Subjects” (the “Recommendation on Informing Individuals”) and the second recommendation is titled “AI: Complying and Facilitating Individuals’ Rights” (the “Recommendation on Individual Rights”). The recommendations build on the CNIL’s four-pillar AI action plan announced in 2023.
At a general level, the CNIL clarifies in its press release that:

The purpose limitation principle applies flexibly to general-purpose AI systems. Operators who cannot precisely define all future applications at the training stage may limit themselves to describing the type of system being developed and illustrating its potential key functionalities.
The data minimization principle does not prevent the use of large training datasets. In principle, the data used should be selected and cleaned to optimize algorithm training while avoiding the use of unnecessary personal data.
Training data may be retained for extended periods if justified and if appropriate security measures are implemented.
The reuse of databases, including those available online, is possible in many cases, subject to verifying that the data was not collected unlawfully and that its reuse is compatible with the original collection purpose.

We have summarized below key takeaways for each recommendation.
Recommendation on Informing Individuals
The CNIL emphasizes the importance of transparency in AI systems that process personal data. Organizations must provide clear, accessible, and intelligible information to data subjects about the processing of their data by an AI system. Specifically:

Timing of the information. The CNIL recommends providing information at the time of the data collection. If data is obtained indirectly, individuals should be informed as soon as possible and at the latest, at the first point of contact with the individuals or the first sharing of the data with another data recipient. In any event, individuals must be informed about the processing of their personal data within one month maximum after the collection of their data.
How to provide information. The CNIL recommends providing concise, transparent and easily understandable information, using clear and simple language. The information should be easily accessible and distinguished from other unrelated content. To achieve those objectives, the CNIL recommends using a layered approach to provide essential information upfront while linking to more detailed explanations.
Derogations to information provided individually. The CNIL analyzes various use cases that allow for an exemption from the obligation to individually inform data subjects, for example when the individuals already have the information, as contemplated by Article 14 of the GDPR. In all cases, organizations must ensure that these exemptions are applied judiciously and that individuals’ rights are upheld through alternative measures.
What information must be provided. When providing information to data subjects, the CNIL states that the details required by Articles 13 and 14 of the GDPR will generally need to be provided. If individual notification is exempt under the GDPR, organizations must still ensure transparency by publishing general privacy notices, for example a website containing as much of the relevant information as would have been provided through individual notification. If the organization cannot identify individuals, it must explicitly state this in the notice. If possible, individuals should be informed of what additional details they can provide to help the organization verify their identity. Regarding data sources, the organization is generally required to provide specific details about these sources when the training datasets come from a small number of sources, unless an exception applies. However, if the data comes from numerous publicly available sources, a general disclosure is sufficient. This can include the categories and examples of key or typical sources. This aligns with Recital 61 of the GDPR, which allows for general information on data sources when multiple sources are used.
AI models subject to the GDPR. The CNIL looks at the applicability of the GDPR to AI models, emphasizing that not all AI systems are subject to its provisions. Some AI models are considered anonymous because they do not process personal data. In such cases, the GDPR does not apply. However, the CNIL highlights that certain AI models may memorize parts of their training data, leading to potential retention of personal data. If so, those models would fall under the scope of the GDPR and the transparency obligation applies. As a best practice, the CNIL advises AI providers to specify in their information notices the risks associated with data extraction from the model, such as the possibility of “regurgitation” of training data in generative AI, the mitigation measures implemented to reduce those risks, and the recourse mechanisms available to individuals in case one of those risks materializes (e.g., in the event of “regurgitation”).

Recommendation on Individual Rights
The CNIL’s guidelines aim to ensure that individuals’ rights are respected and facilitated when their personal data is used in developing AI systems or models.

General Principles. The CNIL emphasizes that individuals must be able to exercise their data protection rights both with respect to training datasets and AI models, unless the models are considered anonymous (as specified in the EDPB Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models). The CNIL flags that while the rights of access, rectification or erasure for training datasets present challenges similar to those faced with other large databases, exercising these rights directly with respect to the AI model (as opposed to the training dataset) raises unique and complex issues. To balance individual rights and AI innovation, the CNIL calls for realistic and proportionate solutions, and highlights that the GDPR provides flexibility to accommodate the specificities of AI models when handling data subject rights requests. For example, the complexity of responding to the request and costs to do so are relevant factors that can be taken into account when assessing how to respond to a request.
Exercising rights in AI model or system development.

According to the CNIL, how rights requests should be responded to depends on whether these requests concern training datasets or the AI model itself. In this respect, organizations should clearly inform individuals about how their request is interpreted, i.e., whether it relates to training data or the AI model, and explain how the request is handled. When rights requests relate to training datasets, organizations may face challenges in identifying individuals. In this respect, the CNIL highlights:

If an organization no longer needs to identify individuals in a training dataset and can prove it, it may indicate this in response to rights requests.
AI providers generally do not need to identify individuals in their training datasets.
Organizations are not required to retain identifiers solely to facilitate rights requests if data minimization principles justify their deletion.
If individuals provide additional information, the organization may use this to verify their identity and facilitate rights requests.

Individuals have the right to obtain copies of their personal data from training datasets, including annotations and metadata in an understandable format. Complying with this right of access must not infringe others’ rights, such as intellectual property and trade secrets. Further, when complying with the right of access, organizations must provide details on data recipients and sources. If the original source is known, this information must be disclosed. When multiple sources are used, organizations must provide all available information but are not required to retain URLs unless necessary for compliance. More generally, the CNIL highlights that a case-by-case analysis is necessary to determine the level of detail and content of information that must be reasonably and proportionately stored to respond to access requests.
With respect to the rectification, erasure and objection rights, the CNIL clarifies that, among others:

Individuals can request correction of inaccurate annotations in training datasets.
When processing is based on legitimate interest or public interest, individuals may object, if the circumstances justify it.
AI developers should explore technical solutions, such as opt-out mechanisms or exclusion lists, to facilitate rights requests in cases of web scraping.

Article 19 of the GDPR provides that a controller must notify each data recipient with whom it has shared personal data of a rectification, restriction or deletion request. Accordingly, when a dataset is shared, updates should be communicated to recipients via APIs or contractual obligations requiring those recipients to apply those updates.

Exercising rights on AI Models subject to GDPR. Certain AI models are trained on personal data but remain anonymous after training. In such cases, GDPR does not apply. If the model retains identifiable personal data, GDPR applies and individuals must be able to exercise their rights over the model:

Organizations must assess whether a model contains personal data. If the presence of personal data is uncertain, the organization must demonstrate that it is not able to identify individuals as part of its model.
Once a specific individual has been identified as part of a model, the organization must identify the data that are included. If feasible, data subjects must be given the opportunity to provide additional information to help verify their identity and exercise their rights. If the organization still has access to training data, it may be appropriate to first identify the individual within the dataset before verifying whether their data was memorized by the AI model and could be extracted. If training data is no longer available, the organization can rely on the data typology to determine the likelihood that specific categories of data were memorized. For generative AI models, the CNIL advises providers to establish an internal procedure to systematically query the model using a predefined set of prompts; a rough sketch of such a procedure appears after this list.
The rights to rectification and erasure are not absolute and should be assessed in light of the sensitivity of the data and the impact on the organization, including the technical feasibility and cost of retraining the model. In some cases, retraining the model is not feasible and the request may be denied. That said, AI developers should monitor advances in AI compliance since evolving techniques may require previously denied requests to be honored in the future. When the organization is still in possession of the training data, retraining the model to remove or correct data should be envisaged. In any event, as current solutions do not always provide a satisfactory response in cases where an AI model is subject to the GDPR, the CNIL recommends that providers anonymize training data. If this is not feasible, they should ensure that the AI model itself remains anonymous after training.
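As a rough illustration of the kind of internal querying procedure the CNIL describes, here is a minimal sketch in Python (the model interface, prompt templates, and identifier list are all hypothetical) that systematically probes a generative model with a predefined set of prompts and flags outputs that appear to reproduce a data subject’s known identifiers:

```python
import re

# Hypothetical prompt templates used to probe the model for a specific data subject.
PROBE_PROMPTS = [
    "What is {name}'s email address?",
    "Write a short biography of {name}.",
    "Complete this sentence: {name} lives at",
]

def probe_model(generate, name: str, known_identifiers: list[str]) -> list[dict]:
    """`generate` is whatever callable wraps the model (hypothetical interface)."""
    findings = []
    for template in PROBE_PROMPTS:
        prompt = template.format(name=name)
        output = generate(prompt)
        hits = [ident for ident in known_identifiers
                if re.search(re.escape(ident), output, re.IGNORECASE)]
        if hits:
            # Record suspected regurgitation so it can be assessed and remediated.
            findings.append({"prompt": prompt, "matched": hits, "output": output})
    return findings

# Example with a stand-in "model" that always returns the same string:
fake_model = lambda prompt: "Jane Doe lives at 12 Rue Exemple, Paris."
print(probe_model(fake_model, "Jane Doe", ["12 Rue Exemple", "jane.doe@example.com"]))
```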

Exceptions to the exercise of rights. When relying on an exception to limit individuals’ rights as per the GDPR, the organization must inform individuals in advance that their rights may be restricted and explain the reasons for such restrictions.

Read the CNIL’s Press Release (available in English), Recommendation on Informing Individuals and Recommendation on Individual Rights (both only available in French).

Powering the Digital Future: Navigating the Nuclear Option for Data Centers

Key Points:

Nuclear energy is well suited to meet the demands of AI data centers, and data center customers have multiple options for nuclear power integration, including Small Modular Reactors (SMRs) vs. full-sized units, on-site vs. off-site generation, and new construction vs. reactivating existing facilities – each with distinct trade-offs in terms of cost, scale, and implementation complexity.
Potential developers will need to navigate a welter of state and federal regulations, statutes and tariffs regarding grid interconnection, utility rights, and behind-the-meter arrangements. Current rules were not designed for large-scale, single-customer nuclear generation facilities.
We await key developments stemming from:

FERC’s Show Cause Proceeding: PJM and the PJM transmission owners have 30 days to show why the current tariff is just and reasonable or to propose changes to remedy concerns about reliability impacts and cost causation.
Talen’s petition for review in the Fifth Circuit Court of Appeals regarding FERC’s rejection of the Amazon/Pennsylvania connection agreement
Consolidated proceedings

Commissioner-led Technical Conference on Large Loads Co-Located at Generating Facilities (Docket No. AD24-11-000)
Constellation complaint (EL25-20-000)

These will help establish “rules of the road” for co-location arrangements in PJM territory

Modern data centers are the foundation of our information society and now use artificial intelligence to generate new forms of machine intelligence and learning – though at the cost of considerable energy consumption. Their energy demand outstrips the ability of existing generation and transmission systems to meet that demand, making nuclear energy particularly well-suited to supply the shortfall given its base load (round-the-clock) generation profile, low fuel cost and insulation from carbon emissions concerns. Powering data centers with purpose-built, reactivated, or newly-completed nuclear generation is an attractive way to accelerate power supply to meet the needs of the AI economy.
The path for powering data centers with nuclear energy depends on multiple factors including whether the power will come from:

Full-sized units or a series of small modular reactors (“SMRs”). 
Existing nuclear units that are already connected to the grid or new nuclear units to be constructed off site. 
Nuclear power wheeled to the data center through the grid or new nuclear units to be constructed at or adjacent to the data center site.

The Electric Grid
More than 65 individual balancing authorities manage the United States electric grid, and most are either regional transmission organizations or large investor-owned or publicly-owned power utilities. Many smaller utilities, electric cooperatives, and municipal power systems operate behind those larger balancing authorities. 
The commercial and operational terms by which new nuclear capacity will be integrated onto the grid will depend on the laws, regulations and tariffs that apply to each of these entities. This is true even for on-site or ‘behind the meter’ nuclear generation, since connecting data centers to the grid is typically required for emergency or standby generation, moment to moment load balancing, and the export of excess power when power consumption at the data center drops. The costs of operating without a robust grid connection can be quite expensive considering the cost of building and maintaining high-capacity battery or gas-fired generation back up. 
Full-Sized Units vs. SMR
Data center developers are increasingly viewing SMRs as an attractive alternative to traditional large-scale nuclear reactors for powering their facilities considering their automatic or ‘walk-away’ safety features, their scalability (300 MW or less per unit vs. approximately 1,000 MW per full-scale unit), and the ability of SMR reactor units to be manufactured in factories and transported fully assembled to their final location for installation. 
But substantively, the costs and benefits of the two technologies are closely balanced since, like SMRs, today’s full-scale reactors have comparable walk-away safety features and key components can be built in a series of modules on factory floors. Both SMRs and full-sized units require significant on-site ‘stick built’ construction for balance-of-plant equipment (including steam turbines, condensers, water cooling systems, switchyards, and control, maintenance and administration facilities) as well as site-specific NRC licensing and environmental permitting. Neither represents a true plug-and-play solution. In addition, full-scale units have significant economies of scale in the form of lower per-unit staffing and operating cost and produce less high-level waste for future disposal. Outside of the United States, there have been a number of successful recent projects to build new, full-scale reactors.
In short, SMRs represent nuclear capacity that data center developers can install in smaller increments reducing financial risk (capital costs) and time to start up, while creating the redundancy inherent in multiple units that can produce energy independently of each other. Full-scale reactors have significant operating economies, but in a single generating package.

On-Site Nuclear
Locating nuclear capacity on data center sites can create significant cost and time savings if major transmission upgrades can be avoided. But the savings may evaporate if, for operational reasons, grid connectivity must still match the nuclear facility’s total output or the data center’s peak power needs. These requirements include maintaining reliability standards, managing excess power, balancing loads, or meeting the data center’s full power demands. Where robust standby transmission access is required, the same transmission-related regulatory and construction issues will arise with on-site generation as with generation located elsewhere on the grid. In those cases, the primary operating advantage of on-site nuclear generation, or other on-site generation, may be accelerating availability of capacity or insulation from the effects of curtailments of service or loss-of-load events on the grid.
Completing or Recommissioning Existing Units
As we will discuss in more depth in an upcoming installment of our “Going Nuclear” series, completing or restarting existing but non-operating units is currently being considered for multiple plants including the Palisades Unit, the Bellefonte Units, Three Mile Island Unit 1, and V. C. Summer Units 2 and 3. Completions and restarts leverage investments already made and transmission interconnectivity already in place. Assuming land use patterns and the constraints of nuclear exclusion zones would support doing so, building new data centers alongside such completions and restarts can be a powerful strategy for delivering new capacity quickly.
Behind the Meter
State statutes and regulations generally allow customers to build and operate their own generation. But where that generation is connected to the grid (i.e., behind a utility meter) it may fall under the provisions of state distributed energy resources (DER) legislation, which typically was drafted for smaller solar and renewable projects and prohibits large behind-the-meter installations. These caps, however, are not the last word, and can be removed or waived, especially if the incumbent utility agrees.
Some jurisdictions may prohibit the interconnection of behind-the-meter facilities that do not sell their capacity and energy to the grid, and where the facility will be located inside an RTO or ISO, FERC may have jurisdiction over the interconnection. FERC recently rejected an agreement to power an Amazon data center in Pennsylvania through a direct connection with an adjacent operating nuclear station based on the potential impact of taking existing nuclear generation out of PJM’s constrained capacity markets.
FERC is also considering a request by Constellation Energy to require PJM to adopt tariff provisions to support co-located or directly connected nuclear and other generation while addressing concerns about effects on reliability and rate payer costs. 
Off-Site New Nuclear
If a data center plans to purchase power from an offsite nuclear unit, a power purchase agreement (PPA) with the owner and operator of the unit will determine the terms of sale. If the unit will operate in a competitive retail market, then the PPA delivery will take place under the open access transmission tariff (OATT) of the resident transmission operator, and retail power delivery tariffs of the local distribution entity. However, the structure of most deregulated markets involves all generation being sold into a single market, with contracts for differences giving end-users the economic benefit of their PPA transactions. A data center customer will want some assurance that service will not be curtailed so long as the nuclear capacity it is providing is online and supplying power to the grid. The terms of existing tariffs should not be considered to be the last word on what is possible. It may be possible to negotiate and obtain regulatory approval for contractual terms or special tariff provisions tailored to the specific transaction. 

State Regulation: Certificates of Public Convenience and Necessity (“CPCNs”), Territorial Assignment and Retail Tariffs
Most states require electric generation developers to obtain some form of CPCN to construct systems sized at 75-85 MW or more, and nuclear construction would almost certainly require certification under those statutes. These requirements often apply whether or not the new unit is considered self-generation, i.e., owned by and serving only the data center owner. These statutes were not typically drafted with single-customer, large-scale generation in mind, and so fitting a new nuclear project within their terms may require some creativity. 
State Regulation
Vertically Integrated Markets: If the state follows a vertically integrated utility service model (i.e., non-deregulated), then the local utility will likely have territorial service rights which extend to generation construction. This may allow the utility to block the construction of new generation to serve a customer within its service territory, especially if it is to be owned by an entity other than the data center and its customers. However, there can be exceptions. Some states have statutes or tariffs that allow industrial choice, distributed energy resources (DER), or voluntary renewable energy projects (“VREPs”). Otherwise, regulatory support from the incumbent utility and a one-off agreement with that utility may be needed to site new nuclear generation. Further, the public service commission regulating the incumbent utility will need to approve such an agreement, which would be a contractual exception to the utility’s generally applicable tariffs. 
State Regulation: Retail Standby Service
A data center that is connected to the grid for backup power purposes will be a retail customer of the incumbent electric utility. The upside of being a retail tariff customer is that the data center can use its grid connection to buy standby power to deal with fluctuations in its energy demand (and to sell excess power onto the grid when necessary). But the presence of a retail meter will make the data center subject to the costs built into its retail tariff. The tariff may be out of alignment with the standby nature of the service being purchased and may include charges for services that do not benefit the data center owner (e.g., costs for renewable portfolio standards, demand-side management (DSM) programs, and other social or environmental costs). Depending on the tariff, the data center may be subject to curtailment in times of system emergency even if the nuclear plant is producing sufficient power to meet its demands. 
Looking Ahead
Nuclear power presents a compelling solution for meeting the exponentially growing energy demands of modern data centers, particularly those supporting AI operations. However, successful implementation requires careful navigation of multiple regulatory and licensing complexities.
Whether choosing SMRs or full-scale reactors, data center operators must carefully evaluate their specific needs against various factors: initial capital costs, operational economies, regulatory requirements and uncertainties, and grid integration challenges. The decision between on-site and off-site generation, or whether to participate in recommissioning existing facilities, requires thorough analysis of federal, regional, and local regulations, transmission infrastructure, and operational requirements.

HHS OCR Imposes $1.5 Million Civil Penalty Against Warby Parker

On February 20, 2025, the U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) announced it had issued a $1.5 million fine against HIPAA covered entity Warby Parker, an eyewear manufacturer and online retailer headquartered in New York City.  OCR began its investigation into Warby Parker following receipt of a breach report filed with OCR by the company.
The breach report detailed that an unauthorized third party accessed Warby Parker customer accounts through the use of “credential stuffing” attacks, in which usernames and passwords previously exposed in unrelated breaches are used to gain access to user accounts. According to Warby Parker’s OCR breach report, 197,986 individuals were affected by the breach, which compromised names, mailing addresses, email addresses, payment card information and eyewear prescription information.
OCR’s investigation into Warby Parker revealed evidence of three alleged violations of the HIPAA Security Rule, including failure to conduct an accurate and thorough risk analysis, failure to implement sufficient security measures, and failure to implement procedures to regularly review information system activity records.
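Credential stuffing leaves a recognizable fingerprint in the information system activity records that the Security Rule expects regulated entities to review: a burst of failed logins spread across many distinct usernames, often from a small set of source addresses. The sketch below is purely illustrative of that kind of log review; the record format, field names, and thresholds are assumptions and are not drawn from the Warby Parker filings or OCR’s findings.

```python
from collections import defaultdict

def flag_credential_stuffing(auth_events, min_failures=50, min_distinct_users=20):
    """Flag source IPs whose failed-login pattern resembles credential stuffing:
    a large number of failures spread across many distinct usernames.

    auth_events: iterable of (source_ip, username, success) tuples -- an assumed,
    simplified log format used only for illustration.
    """
    failure_counts = defaultdict(int)   # source_ip -> number of failed logins
    usernames_tried = defaultdict(set)  # source_ip -> distinct usernames attempted

    for source_ip, username, success in auth_events:
        if not success:
            failure_counts[source_ip] += 1
            usernames_tried[source_ip].add(username)

    return [
        ip for ip, count in failure_counts.items()
        if count >= min_failures and len(usernames_tried[ip]) >= min_distinct_users
    ]

# Example with fabricated events: one IP hammering 60 different accounts,
# plus an ordinary user mistyping a password a few times.
events = [("203.0.113.7", f"user{i}", False) for i in range(60)]
events += [("198.51.100.2", "alice", False)] * 3
print(flag_credential_stuffing(events))  # ['203.0.113.7']
```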
OCR initially issued a Notice of Proposed Determination in September of 2024, seeking to impose a civil monetary penalty, which Warby Parker did not contest. Accordingly, OCR issued a Notice of Final Determination to Warby Parker in December of 2024.
In its press release announcing the penalty, OCR Acting Director Anthony Archeval stressed that “protecting individuals’ electronic health information means regulated entities need to be vigilant in implementing and complying with the Security Rule requirements before they experience a breach.”

Health-e Law Episode 16: Crossroads of Care: Navigating Executive Orders with Jonathan Meyer, former DHS GC and Partner at Sheppard Mullin [Podcast]

Welcome to Health-e Law, Sheppard Mullin’s podcast exploring the fascinating health tech topics and trends of the day. In this episode, Jonathan Meyer, a partner at Sheppard Mullin and Leader of the firm’s National Security Team, joins us again to discuss the early days of the new Trump administration and what might be on the horizon in terms of cybersecurity and data privacy.
What We Discussed in This Episode:

What can we expect from the new administration in relation to cybersecurity and data protection?
How do these concerns translate to healthcare, both in terms of managing our care and protecting our data?
What is Sheppard Mullin’s executive actions tracker, why does it matter, and how can listeners use it?
How is healthcare struggling with privacy and immigration, and how does this impact national security?

Click Here to Read Transcript

Exploring DORA: Potential Implications for EU and UK Businesses

On Jan. 17, 2025, EU Regulation 2022/2554 on digital operational resilience for the financial sector (DORA) became applicable in the EU.
DORA focuses on risk management and operational resilience, with particular emphasis on vendor risk management, incident management and reporting, and resilience testing of key systems.
DORA applies to financial institutions that are authorized to provide financial services in the EU and is designed to strengthen their IT security and operational resiliency.
It is worth noting, particularly for UK financial institutions, that DORA does not apply directly to organizations, including UK organizations, that provide non-regulated services in the EU financial services industry. However, if a UK organization provides any IT-related services to an EU financial institution, it may be classified as an information and communication technology (ICT) third-party service provider under DORA. Depending on the nature of the organization and its services, it could be designated as a critical ICT third-party service provider, in which case it would have direct compliance obligations under DORA (including implementing a comprehensive governance and control framework to manage IT and operational resiliency risk).
As a high-level summary, financial institutions subject to DORA must:

Create and maintain a register of vendors (ICT third-party service providers) and report relevant information from the register to financial authorities annually.
Implement comprehensive security incident reporting procedures, requiring an initial notification within four hours of classifying an incident as major and no later than 24 hours after becoming aware of it; follow-up reporting obligations also apply (see the deadline-tracking sketch after this list). 
Conduct post-incident reviews after a major ICT-related incident disrupts core activities.
Implement and maintain a sound, comprehensive, and well-documented ICT risk management framework, which must include appropriate audits.
Establish and maintain a sound and comprehensive digital operational resilience testing program, which for critical functions must involve penetration testing.
Clearly allocate, in writing, the financial entity’s rights and obligations when engaging with ICT third-party service providers, including mandatory DORA contractual provisions.
Adopt and maintain a strategy on ICT third-party risk.
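To make the incident reporting clock concrete, the sketch below computes the initial notification deadline from two timestamps: when the entity became aware of the incident and when it classified the incident as major. It simply encodes the summary above (four hours from classification, capped at 24 hours from awareness); the function and its parameters are illustrative assumptions, and the final DORA technical standards remain the authoritative source for the timelines.

```python
from datetime import datetime, timedelta

def initial_notification_deadline(aware_at: datetime, classified_major_at: datetime) -> datetime:
    """Earlier of (a) four hours after classification as major and
    (b) 24 hours after becoming aware of the incident.

    Illustrative only: encodes the summary above, not the authoritative
    DORA regulatory technical standards.
    """
    return min(classified_major_at + timedelta(hours=4),
               aware_at + timedelta(hours=24))

# Example with assumed timestamps: detected at 09:00, classified as major at 20:00.
aware = datetime(2025, 3, 3, 9, 0)
classified = datetime(2025, 3, 3, 20, 0)
print(initial_notification_deadline(aware, classified))  # 2025-03-04 00:00:00
```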

As discussed above, ICT third-party service providers delivering services to financial entities will also be subject to DORA obligations. The nature of these obligations, and whether the ICT third-party service provider falls directly under DORA, will depend on various factors, including how critical the ICT service provider is to the EU financial services ecosystem, the nature of the functions being supported, and the services being provided. That said, all ICT third-party service providers will be subject to contractual obligations resulting from the requirement for in-scope financial entities to flow down certain obligations to their service providers under DORA.
In light of the above, UK organizations providing services in the EU should carefully consider whether they fall directly under DORA in their capacity as a financial institution, and/or whether their services may cause them to be considered an ICT third-party service provider.