Walking the Talk: Ofcom’s Online Safety Act Enforcement

Back in March 2025, we published an article highlighting that Ofcom would be turning up the heat on platforms in relation to their duties under the UK’s Online Safety Act (OSA). Since then, there has been a flurry of activity from Ofcom on OSA compliance, and it appears that the heat has indeed been turned up. 
The First Wave
On 9 May 2025, Ofcom announced that it had opened investigations into two providers of services regulated under Part 5 of the OSA, Itai Tech Ltd and Score Internet Group LLC. These investigations were initiated as part of Ofcom’s January 2025 enforcement programme into age assurance. Both providers appear to have failed to respond to Ofcom’s January 2025 information request and do not appear to have taken steps to implement measures in line with their duties under the OSA, namely the duty on Part 5 service providers to have highly effective age assurance in place from January 2025. 
Less than a week later, on 12 May 2025, Ofcom announced that it is launching a further investigation, into Kick Online Entertainment S.A., for failing to keep a suitable and sufficient illegal content risk assessment and for failing to respond to a statutory information request.
As outlined in our March 2025 article, platforms were expected to have completed their illegal harms risk assessments by 16 March 2025 and their children’s access assessments by 16 April 2025. The investigation into Kick Online Entertainment S.A. is a clear indication that Ofcom will take a direct and serious approach to OSA enforcement. 
It’s Not Over 
Ofcom has additionally written to a number of services regulated under Part 3 of the OSA (i.e. user-to-user services and search services), noting the deadline for mandatory age assurance on services that allow pornography or adult content and reminding platforms of their duties under the OSA. 
This shows that the initial round of enforcement programmes and investigations is just the beginning for Ofcom, and further requests are likely to come, especially as the protection of children requirements come into force; details are outlined in our previous article, available here.
Ofcom has further opened an enforcement programme into child sexual abuse imagery on file-sharing services, so it is to be expected that a number of platforms are already in communication with Ofcom about complying with their OSA duties. 
What to do when Ofcom (or anyone else) is knocking at your door
It is clear that Ofcom will not be ignored: if Ofcom writes to you, it is important that you respond within the given timeframe. A failure to respond to requests has already triggered three published investigations; platforms should take Ofcom seriously when it writes to them or risk being named publicly. 
Engagement also shows that a platform is taking its duties seriously and fosters a cooperative relationship. Ofcom has suggested in recent communications that it is willing to work with platforms to achieve the wider goal of improving online safety. 
Whilst Ofcom is likely to take a pragmatic approach to enforcement, the duties under the OSA and their deadlines are very clear. Ofcom’s enforcement activity to date demonstrates a direct and serious approach that platforms should not take lightly; otherwise, platforms risk fines of up to £18m or 10% of global turnover, whichever is higher.
The same lesson applies to other regulators, such as the Information Commissioner’s Office (ICO), the UK’s regulator for personal data. The ICO has written to a large number of sites seeking a response on cookie banner compliance. Platforms should not ignore these communications either, or they risk similar regulatory consequences. 
Larry Wong also contributed to this article. 

The Biggest Misconceptions About Digital Estate Planning

The rise of digital platforms, online accounts, and cryptocurrency has reshaped the role of digital assets in modern estate planning. Digital assets, once an afterthought or a minor footnote in the planning process, now warrant their own conversation entirely. Most estate practitioners have likely become more aware of the need to plan for digital assets. 
However, many clients still harbor misconceptions about these assets, which can muddle the planning process, leaving their digital legacies unprotected and their heirs unprepared. And, as you know, nearly all of your clients have digital assets.
Here’s a look at six of the biggest misconceptions your clients may have about the digital side of estate planning, and why addressing them is crucial.
1. “My Will Covers My Digital Assets.”
Many clients believe that simply adding a generic clause about “digital assets” to their will is enough. While this is a good start, a clause alone is inadequate for comprehensive planning. Moreover, wills become public documents; including sensitive digital information in a will, such as account logins or private keys, can create serious security risks.
Notably, without the proper digital asset authorization language included in a will (and other estate planning documents such as Powers of Attorney and Trust Agreements), fiduciaries acting under these documents, including agents, executors, and trustees, may lack legal access to important accounts and information. In addition, clauses in estate planning documents that permit fiduciary access must also specifically authorize disclosure of the contents of electronic communications (such as email messages), which are subject to heightened privacy standards. Of course, even if estate planning documents provide fiduciaries with the requisite legal access, this does not equate to actual access without preplanning measures. 
Proper planning also requires complementary tools, such as digital asset schedules and inventories, secure password vaults, and language that complies with the Revised Uniform Fiduciary Access to Digital Assets Act (RUFADAA), a version of which has been passed in the majority of U.S. states.
2. “Digital Assets Will Automatically Be Handled by the Service Providers After I Die.”
Many clients assume that their digital accounts will simply be managed or closed by service providers after they pass away. However, that assumption is likewise wrong. Most online services, including social media platforms and email providers, do not automatically transfer control of accounts to heirs or legal representatives.
While some providers offer account “legacy” services or online tools, such as Facebook’s Legacy Contact feature, which allows someone to manage a deceased person’s account, many do not. As mentioned above, fiduciary access to online accounts and certain digital assets is governed by RUFADAA. Most clients are not aware of RUFADAA or of the need for specific legal instructions and directives in estate planning documents to access digital assets and accounts where a service provider does not offer an online tool. Otherwise, heirs and legal representatives may be completely locked out or may require a court order for access, which can lead to legal disputes, delays, and frustration for families already grieving a loss.
3. “I Can Just Give My Passwords to My Spouse or Kids.”
Some clients think a handwritten list of passwords (or even a shared note on their phone) is a sufficient means of transfer. This approach is problematic for many reasons:

Information is easily outdated (passwords change frequently).
How information is stored can create security risks (especially if lost, stolen, or seen by the wrong person, or not transmitted and stored with encryption).
Sharing passwords and login information can violate laws and terms of service agreements.

Estate planners need to guide clients toward secure and legal methods for granting access to their digital accounts and devices, such as using encrypted password managers, for starters. Clients who own cryptocurrency, NFTs, or other sophisticated or sensitive digital assets or IP need to use even more advanced methods to secure these interests, such as cold storage vaults, which are a form of digital storage not connected to the internet.
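To make the concept concrete, here is a minimal sketch, in Python, of encrypting a digital asset inventory at rest; the file name and inventory entries are hypothetical, and a purpose-built, audited password manager remains the right tool in practice.

```python
# Minimal sketch (hypothetical file name and inventory entries) of encrypting a
# digital-asset inventory at rest with the "cryptography" library's Fernet
# recipe (authenticated symmetric encryption). Illustrative only; an audited
# password manager is the right tool in practice.
from cryptography.fernet import Fernet

# Generate a key once and store it separately from the encrypted inventory
# (e.g., with the executor's attorney or in a safe-deposit box).
key = Fernet.generate_key()

inventory = b"""\
email: alice@example.com (legacy contact configured)
crypto: hardware wallet; seed phrase in bank safe-deposit box
photos: cloud storage (legacy access enabled)
"""

f = Fernet(key)
token = f.encrypt(inventory)  # ciphertext is safe to store or hand off
with open("digital_asset_inventory.enc", "wb") as fh:
    fh.write(token)

# The fiduciary, holding the key, can later recover the plaintext:
assert Fernet(key).decrypt(token) == inventory
```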
4. “Digital Assets Are Not Subject to Probate.”
Some clients incorrectly assume digital assets automatically bypass the probate process the same way some jointly owned assets or payable-on-death accounts do. But unless those digital assets are titled in a trust or handled via an online tool, which is similar to a beneficiary designation on an insurance policy or retirement account, they often do go through probate — and the process for gaining proper access can be expensive and time-consuming. Moreover, if the requisite legal authorizations were not included in estate planning documents, the information that eventually becomes accessible is more limited and does not include the content of electronic communications. 
For digital assets, and for a growing number of traditional assets, there is often no paper trail at all, given the rise of online statements and account management, and it is becoming harder even to identify what assets exist unless the client has proactively documented them. Finding utility and subscription information, for instance, can be a daunting process that causes unnecessary delays if preplanning measures are not in place. As digital assets become more ubiquitous, it is crucial to ensure that even the most basic online accounts are considered as part of an overall estate plan.
5. “My Digital Assets Are Too Small To Worry About.”
It is common for clients to dismiss the importance of digital assets with the belief that their digital accounts and footprint hold no value after they’re gone. The reality is, even non-monetary digital assets can create significant challenges for heirs and legal representatives at death. 
Your clients’ digital legacy can hold sentimental value that their heirs may want to preserve: photos, emails, and social media profiles all contribute to a person’s digital story. Ensuring these assets are properly managed is just as important as safeguarding tangible personal effects.
Some of your clients may ask, “What’s the big deal if I forget to close my Instagram account? There’s nothing in it.” But this is a dangerous misconception. Even when an online account has no inherent monetary or sentimental value, it can become an entry point for cyberattacks and identity theft and create significant issues, delays, and expense for heirs and legal representatives.
In the last few years, identity theft of the deceased has been on the rise, resulting in protracted estate administrations and thousands of dollars in additional fees. 
Non-financial digital assets can have hidden costs, and the failure to plan for them can lead to administrative headaches and financial burdens for loved ones that can easily be avoided through preplanning measures.
6. “Digital Estate Planning Is Just for Crypto Investors.”
I hear this one all the time. People think “digital assets” and “crypto” are interchangeable and, therefore, that digital asset planning is only relevant to those with significant cryptocurrency holdings or at least a deep understanding of technology. In reality, estate planning professionals should be addressing digital assets for all clients, not just those involved in the tech or cryptocurrency spaces. After all, the average person today has around 168 online accounts, including email, social media, online banking, and cloud storage. That list grows daily, for tech whizzes and Luddites alike. 
Remember: even if a client dies with no crypto and a negative net worth, their families can still inherit a complex digital scavenger hunt. That’s why virtually everyone needs digital estate planning. 
The Bottom Line
If you’re not discussing digital estate planning with your clients in 2025, you’re leaving them — and your practice, potentially — exposed. Digital assets are an integral part of every estate, and planning for them is the only way to ensure a seamless transition of assets, minimize loss, and decrease the likelihood of cybercrimes.
By integrating digital estate planning into your practice, you can provide your clients with the peace of mind in knowing their digital assets are properly protected and will be managed according to their wishes. 
This is too important to put off. Don’t let your clients fall victim to these and other common misconceptions — help them plan for their digital future today.

OCC Confirms Banks’ Authority to Offer Crypto Custody and Execution Services

On May 7, the OCC issued Interpretive Letter 1184, reaffirming that national banks and federal savings associations may provide cryptocurrency custody and execution services, including through sub-custodians. The OCC confirmed that these activities are permissible under existing banking authority so long as banks comply with applicable law and engage in safe and sound practices.
The letter builds upon earlier OCC guidance, including Interpretive Letters 1170 and 1183. Specifically, the OCC clarified the following:

Execution of crypto trades at customer direction is permissible. Banks may buy and sell crypto-assets held in custody or on behalf of customers, so long as the transactions are executed at the customer’s direction and in accordance with the customer agreement.
Outsourcing to third parties is allowed with appropriate oversight. Banks may engage sub-custodians and outsource custody or execution functions, provided they maintain robust third-party oversight practices and ensure proper internal controls are in place.
Crypto custody remains a modern extension of traditional bank custodial services. The OCC reiterated its position that holding crypto-assets is functionally similar to traditional custody services, which fall within banks’ statutory authority.
Fiduciary activities must follow applicable regulations. When acting in a fiduciary capacity, national banks must comply with 12 C.F.R. Part 9 (or, for federal savings associations, Part 150), including rules on the custody and control of fiduciary assets.

Putting It Into Practice: The OCC’s latest guidance offers banks further regulatory clarity in connection with crypto-related services (previously discussed here and here). Banks considering entry into the digital asset space should track these regulatory shifts closely and ensure their compliance, risk management, and third-party oversight frameworks are equipped to support crypto operations.

Ascension Notifies 430,000 Patients of Data Breach

Healthcare system Ascension has notified 437,329 patients of a data breach exposing “demographic information, such as name, address, phone number(s), email address, date of birth, race, gender, and Social Security numbers, as well as clinical information related to an inpatient visit.”
Ascension indicated that the incident occurred when it “inadvertently disclosed information to a former business partner, and some of this information was likely stolen from them due to a vulnerability in third-party software used by the former business partner.”
Ascension is offering affected individuals two years of free identity monitoring, including credit monitoring, fraud consultation, and identity theft restoration.

The VPPA: The NBA and NFL Ask SCOTUS to Referee

On April 22, 2025, the National Football League (NFL) filed an amicus brief asking the United States Supreme Court to take up a Video Privacy Protection Act (VPPA) class action case against the National Basketball Association (NBA). In our last post, we covered a recent VPPA lawsuit against a movie theater company and reviewed the provisions of the Act. In recent years, we have analyzed how plaintiffs have applied the VPPA outside of traditional video contexts. This week, we dive deeper into the VPPA case against the NBA and explore the NFL’s amicus brief supporting the NBA’s position, which asserts why the Act should not apply in the modern video streaming context, particularly for sports leagues.
Case Background
In the case against the NBA, the plaintiff alleged that they subscribed to the NBA’s newsletter and watched free videos on its website while logged into their Facebook account. According to the complaint, the NBA shared the plaintiff’s personal viewing information with Facebook via the Meta Pixel tracking technology. The plaintiff asserted that they were a “subscriber of goods and services” and therefore met the definition of a consumer under the VPPA. See Salazar v. Nat’l Basketball Ass’n, 118 F.4th 533 (2d Cir. 2024).
To recap, the VPPA prohibits a video tape service provider from knowingly disclosing a consumer’s personally identifiable information—including information identifying a person as having requested or obtained specific video materials or services from a video tape service provider—to a third party without the consumer’s express consent. A “video tape service provider” is defined as someone “engaged in the business … of rental, sale, or delivery of prerecorded video cassette tapes or similar audiovisual materials,” and has been interpreted to apply to video streaming service providers. A “consumer” refers to a renter, purchaser, or subscriber of goods or services from a video tape service provider.
In October 2024, the Second Circuit held that the plaintiff was the NBA’s consumer under the VPPA, interpreting that the term “consumer” should include an individual who rents, purchases, or subscribes to any of a provider’s goods or services, not just those that are audiovisual. The Second Circuit also concluded that even though the NBA may have obtained only the plaintiff’s name, email, IP address, and cookies associated with their device, the provision of such information in exchange for receiving services constitutes a “subscription.” Further, the Second Circuit also held that the VPPA applies even for videos accessed on a public page that does not require a sign-in for exclusive content.
The NBA filed a petition for certiorari, requesting the Supreme Court to review the Second Circuit’s decision.
The NFL’s Amicus Brief
The NFL’s amicus brief highlights that the Second Circuit is not alone in this broad interpretation of the VPPA. The Seventh Circuit has also held that a plaintiff need not have rented, purchased, or subscribed to the defendant’s audiovisual goods or services to qualify as a consumer under the VPPA, but that any goods or services are sufficient. However, the Sixth Circuit has held to the contrary, reasoning that the definition of “consumer” in the statute does not encompass consumers of all goods or services imaginable, but only those offered in a video tape service provider context. The NFL supports the latter position.
The NFL warns that the “explosion of VPPA class actions” is a concern for content providers like the NBA and NFL, who risk “massive liability” that was “unforeseen by Congress” when the VPPA was enacted in 1988. According to the NFL, tracking technology is “ubiquitous” and “makes much of the content on the Web free.” The NFL warns that if online content providers face such liability, “many content providers would be forced to pursue alternative sources of revenue as a result of the reduction in targeted advertising revenues,” which may result in consumers paying for currently free applications and services.
For sports leagues specifically, the NFL asserts that these organizations often have “hundreds of millions of fans,” many of whom purchase or rent non-audiovisual goods and services that would qualify them as a consumer under a broad interpretation of the VPPA. For example, a fan who bought tickets to a sports game or purchased league apparel through the NBA or NFL website, who then happened to watch a free video on the league’s website while logged into Facebook, may be considered a consumer, and could seek VPPA damages.
The NFL also asserts that there is no real harm to VPPA plaintiffs because the use of pixels is not a secret and “consumers are well aware that enabling the use of cookies permits personalized advertising.” The NFL emphasizes that the plaintiff in the NBA case admitted they could have seen that the NBA was using the Meta Pixel by viewing the code on the NBA’s website. In addition, Meta’s Cookie Policy informs users that it may obtain information from third parties. On this basis, the NFL also questions whether consumers have standing to bring such VPPA suits absent any real harm.
Last year, plaintiffs initiated over 250 VPPA lawsuits. Yet, the circuit split still leaves open the question: Who qualifies as a consumer under the VPPA in this modern video streaming context? The NBA, with support from the NFL, has punted the question to the Supreme Court. If the writ of certiorari is granted, we might find the ball in SCOTUS’ court.

Privacy Tip #443 – Fake AI Tools Used to Install Noodlophile

Threat actors are leveraging the publicity around AI tools to trick users into downloading the malware known as Noodlophile through social media sites. 
Researchers from Morphisec have observed threat actors, believed to originate from Vietnam, posting on Facebook groups and other social media sites touting free AI tools. Users are tricked into believing that the AI tools are free, and unwittingly download Noodlophile Stealer, “a new malware that steals browser credentials, crypto wallets, and may install remote access trojans like XWorm.” Morphisec observed “fake AI tool posts with over 62,000 views per post.”
According to Morphisec, Noodlophile is a previously undocumented malware that criminals sell as malware-as-a-service, often bundled with other tools designed to steal credentials.
Beware of deals that are too good to be true, and exercise caution when downloading any content from social media.

Virginia Will Add to Patchwork of Laws Governing Social Media and Children (For Now?)

Virginia’s governor recently signed into law a bill that amends the Virginia Consumer Data Protection Act. As revised, the law will include specific provisions impacting children’s use of social media. Unless successfully challenged, the changes will take effect January 1, 2026. Courts have struck down similar laws in other states (see our posts about those in Arkansas, California, and Utah), and thus opposition seems likely here as well. Of note, the social media laws struck down in other states attempted to require parental consent before minors could use social media platforms. This law is different, as it allows account creation without parental consent. Instead, it places restrictions on account use for both minors and social media platforms.
As amended, the Virginia law will require social media companies to use “commercially reasonable” means to determine whether a user is under 16; an example given in the law is a neutral age gate. The age verification requirement is similar to those proposed in other states’ social media laws. (And it was that requirement that was central to the court’s decision striking down Arkansas’ law.) Use of social media by under-16s will default to one hour per day, per app, and parents can increase or decrease these time limits. That said, the bill expressly states that social media companies have no obligation to give parents who provide consent “additional or special access” to, or control over, their children’s accounts or data.
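For illustration only, here is a minimal Python sketch of how a neutral age gate and the one-hour default might interact in application logic; the function names and parental-override parameter are our own hypothetical constructs, and the bill does not prescribe any particular implementation.

```python
# Illustrative sketch only: a "neutral" age gate asks for date of birth without
# hinting at a qualifying age, then applies the law's default usage limit.
# Function names and the parental-override parameter are hypothetical; the
# Virginia bill does not prescribe an implementation.
from datetime import date

DEFAULT_MINOR_LIMIT_MINUTES = 60  # one hour per day, per app, for under-16s

def age_on(dob: date, today: date) -> int:
    """Whole years between dob and today."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def daily_limit_minutes(dob: date, parental_override: int | None = None) -> int | None:
    """Return the daily time limit in minutes, or None if no default applies.

    Parents may raise or lower the default for an under-16 user.
    """
    if age_on(dob, date.today()) >= 16:
        return None  # no statutory default limit
    return parental_override if parental_override is not None else DEFAULT_MINOR_LIMIT_MINUTES

# Example: a 14-year-old with no parental adjustment gets the 60-minute default.
print(daily_limit_minutes(date(2011, 3, 5)))  # -> 60
```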
The law will limit use of age verification information to only that purpose. An exception is if the social media company is using the information to provide “age-appropriate experiences,” though the bill does not explain what such experiences entail. Finally, even though these provisions may increase companies’ costs, the bill specifically prohibits increasing costs or decreasing services for minor accounts.
Putting it Into Practice: We will be monitoring this law to see if the Virginia legislature has success in regulating children’s use of social media. This modification reflects not only a focus on children’s use of social media, but also continued changes to US State “comprehensive” privacy laws.
James O’Reilly contributed to this article.

DOJ Criminal Division Updates (Part 1): DOJ’s New White Collar Crime Enforcement Plan

On May 12, DOJ’s Criminal Division head, Matthew G. Galeotti, issued a memo to all Criminal Division personnel, entitled “Focus, Fairness, and Efficiency in the Fight Against White-Collar Crime,” to “outline the Criminal Division’s enforcement priorities and policies for prosecuting corporate and white-collar crimes in the new administration.” The memo highlights 10 priority areas for investigation and prosecution, calls for a revision of the Division’s Corporate Enforcement and Voluntary Self-Disclosure Policy to provide increased incentives to corporations, and previews “streamlining corporate investigations” with an emphasis on fairness and efficiency as well as a reduction in corporate monitorships.
Ten Priority Areas for Investigation and Prosecution
The memo enumerates the following ten areas of focus:

Health care fraud;
Trade and customs fraud, including tariff evasion;
Fraud perpetrated through VIEs (variable interest entities);
Fraud that victimizes U.S. investors, such as Ponzi schemes and investment fraud;
Sanctions violations or conduct that enables transactions by cartels, TCOs, hostile nation-states, and/or foreign terrorist organizations;
Provision of material support to foreign terrorist organizations;
Complex money laundering, including schemes involving illegal drugs;
Violations of the Controlled Substances Act and the FDCA (Food, Drug, and Cosmetic Act);
Bribery and money-laundering that impact U.S. national interests, undermine U.S. national security, harm the competitiveness of U.S. business, and enrich foreign corrupt officials; and
Digital asset crimes, with high priority to cases involving cartels, TCOs, drug money-laundering or sanctions evasion.

These 10 areas of focus — and the order in which they are listed — echo the priorities laid out in the Trump administration’s enforcement-related executive orders and memos published to date.[1]
More broadly, Galeotti described the priorities as DOJ’s effort to “strike an appropriate balance between the need to effectively identify, investigate, and prosecute corporate and individuals’ criminal wrongdoing while minimizing unnecessary burdens on American enterprise.” Galeotti explained that “[t]he vast majority of American business are legitimate enterprises working to deliver value for their shareholders and quality products and services for customers” and therefore “[p]rosecutors must avoid overreach that punishes risk-taking and hinders innovation.” Galeotti also made clear that DOJ attorneys “are to be guided by three core tenets: (1) focus; (2) fairness; and (3) efficiency,” and directed that the Criminal Division’s Corporate Whistleblower Awards Pilot Program be amended to reflect these priority areas of focus.[2]
Emphasis on Individuals and Leniency Toward Corporations
Galeotti emphasized the Criminal Division’s focus on prosecuting individuals and the need to give further weight to corporations’ efforts to remediate the actions of individual bad actors. Galeotti promised the Criminal Division would “investigate these individual wrongdoers relentlessly to hold them accountable” and directed the revision of the Division’s Corporate Enforcement and Voluntary Self-Disclosure Policy (CEP) to provide more opportunities for leniency, where corporate criminal resolutions are determined to be necessary, for companies that self-disclose and fully cooperate. These revisions include shorter terms for non-prosecution and deferred prosecution agreements, reduced corporate fines, and limited use and terms of corporate monitors.[3] Galeotti has specifically directed a review of the terms of all current agreements with companies to determine whether they should be terminated early; DOJ has already begun terminating agreements it determined have been fully met.
Streamlining Corporate Investigations
Finally, Galeotti emphasizes the need to minimize the unnecessary cost and disruption to U.S. businesses due to DOJ’s investigations and to “maximize efficiency.”
More Efficient Investigations
The memo acknowledges the complexity and frequently cross-border nature of the Division’s investigations, but prosecutors are nonetheless instructed to “take all reasonable steps to minimize the length and collateral impact of their investigation, and to ensure that bad actors are brought to justice swiftly and resources are marshaled efficiently.” The Assistant Attorney General’s office will, along with the relevant Section, track investigations to ensure they are “swiftly concluded.”
Limitation on Corporate Monitorships
DOJ will impose compliance monitorships only when it deems them necessary and has directed that those monitorships, when imposed, should be “narrowly tailored.” Building upon a previous administration’s memorandum,[4] DOJ issued a May 12 Memorandum on Selection of Monitors in Criminal Division Matters, which provides factors for considering whether a monitorship is appropriate and guidelines to ensure a monitorship is properly tailored to address the “risk of recurrence” and “reduce unnecessary costs.” In considering the appointment of a monitor, prosecutors are to consider the:

Risk of recurrence of criminal conduct that significantly impacts U.S. interests;
Availability and efficacy of other independent government oversight;
Efficacy of the compliance program and culture of compliance at the time of the resolution; and
Maturity of the company’s controls and its ability to independently test and update its compliance program.

The chief of the relevant section, as well as the Assistant Attorney General, must approve all monitorships, and the memo lays out additional details regarding the monitor’s appointment and oversight as well as the monitor selection process.
Takeaways
DOJ’s current hiring freeze and recent personnel reductions/reassignments should not be taken as a sign that white collar crime will be permitted to flourish under the current administration. Rather, Galeotti’s May 12 memo further solidifies the enforcement policies and priorities the DOJ has been previewing since day one of the Trump administration and provides more clarity on what to expect when engaging with the Criminal Division and where it will be focusing its now-more-limited resources. Companies should familiarize themselves with this memo and corresponding updates related to whistleblowers, corporate enforcement and self-disclosures, and monitorships to ensure companies are appropriately assessing their risk profile, addressing potential misconduct, and meeting government expectations.

[1] See, e.g., Executive Order 14157, Designating Cartels and Other Organizations as Foreign Terrorist Organizations and Specially Designated Global Terrorists (Jan. 20, 2025) (Cartels Executive Order); Memorandum from the Attorney General, Total Elimination of Cartels and Transnational Criminal Organizations (Feb. 5, 2025) (Cartels and TCOs AG Memorandum); Executive Order 14209, Pausing Foreign Corrupt Practices Act Enforcement to Further American Economic and National Security (Feb. 10, 2025).
[2] See “DOJ Criminal Division Updates (Part 2): Department of Justice Updates its Corporate Criminal Whistleblower Awards Pilot Program”
[3] See “DOJ Criminal Division Updates (Part 3): New Reasons for Companies to Self-Disclose Criminal Conduct”
[4] March 7, 2008 Craig Morford Memorandum (addressing selection and responsibilities of a corporate monitor).

California Privacy Protection Agency Releases Updated Regulations: What’s Next?

This month, the California Privacy Protection Agency (CPPA) Board discussed updates to the California Consumer Privacy Act (CCPA) draft regulations related to cybersecurity audits, risk assessments, automated decision-making technology (ADMT), and insurance.
The CPPA received comments on the first draft of the regulations between November 22, 2024, and February 19, 2025, and the feedback was provided at last month’s board meeting.
Based on the discussions at last month’s meeting, the CPPA made further revisions to the draft, which include the following:

Definition of ADMT: ADMT will no longer include technology that ONLY executes a decision or substantially facilitates human decision-making; the definition will only include technology that REPLACES or substantially replaces human decision-making.
Definition of Significant Decision: Risk assessments and ADMT obligations are triggered by certain data processing activities that lead to “significant decisions” affecting a consumer. The updated draft no longer includes decisions that determine “access” to certain services as triggering events. Financial or lending services, housing, education, employment, and independent contracting remain on the list of services that can implicate a significant decision; insurance, criminal justice services, and essential goods and services were removed from the list in the latest draft.
First-Party Advertising: Under the updated draft, companies are not required to conduct risk assessments or comply with the ADMT obligations simply because they profile consumers for behavioral advertising (i.e., first-party advertising does not trigger these requirements under the new draft).
ADMT Training and Personal Information: Companies will only be required to conduct a risk assessment if they process personal information to train ADMT for specific purposes.
Sensitive Location Profiling: Companies will not be required to conduct a risk assessment simply because they profile consumers through systematic observation in publicly accessible spaces; they will only have to adhere to the risk assessment requirement if the company profiles a consumer based on the individual’s presence in a “sensitive location” (i.e., healthcare facilities, pharmacies, domestic violence shelters, food pantries, housing or emergency shelters, educational institutions, political party offices, legal services offices, and places of worship).
Artificial Intelligence: The updated draft does not refer to “artificial intelligence” (AI), and AI terminology has been removed. However, AI systems that meet the definition of ADMT would still be subject to the relevant requirements under the updated regulations.
Cybersecurity Audits: If a company meets the risk threshold, the first cybersecurity audit must be completed as follows:

April 1, 2028, if the business’s annual gross revenue for 2026 is more than $100 million.
April 1, 2029, if the business’s annual gross revenue for 2027 is at least $50 million but no more than $100 million.
April 1, 2030, if the business’s annual gross revenue for 2028 is less than $50 million.

Thereafter, if a company meets the risk thresholds under the law, it must conduct a cybersecurity audit annually, irrespective of gross annual revenue (see the sketch after this list).

Submission of Risk Assessments: Under the updated draft, companies no longer have to submit their risk assessments to the CPPA; instead, a company must submit an attestation and a point of contact. Such documentation is due to the CPPA by April 1, 2028, for risk assessments completed in 2026 and 2027; after 2027, the documentation must be submitted by April 1 of the year following any year in which a risk assessment was conducted. 
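Returning to the cybersecurity audit phase-in above, here is a minimal Python sketch of the deadline logic; the function name and threshold modeling are ours, not the regulation’s.

```python
# Minimal sketch (our naming, not the regulation's) of the phased deadlines for
# a business's FIRST cybersecurity audit under the draft CCPA regulations,
# keyed to annual gross revenue in the indicated measurement year.
from datetime import date

def first_audit_deadline(revenue: float, revenue_year: int) -> date | None:
    """Return the first-audit deadline implied by the draft regulations.

    revenue: the business's annual gross revenue for revenue_year.
    Only the (year, threshold) pairs named in the draft are modeled.
    """
    if revenue_year == 2026 and revenue > 100_000_000:
        return date(2028, 4, 1)
    if revenue_year == 2027 and 50_000_000 <= revenue <= 100_000_000:
        return date(2029, 4, 1)
    if revenue_year == 2028 and revenue < 50_000_000:
        return date(2030, 4, 1)
    return None  # outside the modeled phase-in combinations

# Example: $120M in 2026 revenue -> first audit due April 1, 2028.
print(first_audit_deadline(120_000_000, 2026))  # 2028-04-01
```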

So, what’s next?

The CPPA initiated another public comment period, ending on June 2, 2025.
The CPPA MUST finalize the draft regulations by November 25, 2025:

If the CPPA files the final regulations by August 31, 2025, then the updates will take effect on October 1, 2025;
If the CPPA files the final regulations AFTER August 31, 2025, then the updates will take effect on January 1, 2026.

Todd Snyder Fined for Technical CCPA Violations

The California Privacy Protection Agency (CPPA) Board issued a stipulated final order against Todd Snyder, Inc., a clothing retailer based in New York, requiring the company to pay a $345,178 fine and update its privacy program to settle allegations that it violated the California Consumer Privacy Act (CCPA). Specifically, Todd Snyder must update its methods for submitting and fulfilling privacy requests and provide training to its staff on CCPA requirements. Todd Snyder is also required to maintain a contract management and tracking process so that required CCPA contractual terms are included in contracts with third parties that access or receive personal information.
The CPPA alleged that Todd Snyder violated the CCPA as follows:

Its consumer privacy rights request process collected far more information than necessary to fulfill privacy requests. Specifically, the privacy portal on Todd Snyder’s website required consumers submitting privacy rights requests to provide their first and last name, email, country of residence, and a photograph of themselves holding their “identity document” (such as a driver’s license or passport, which is considered “sensitive information” under the CCPA), regardless of the type of privacy request. Such sensitive information is unnecessary to exercise a request to opt out of the sale and/or sharing of personal information.
It failed to oversee and properly configure its third-party consumer privacy request portal for 40 days. The Todd Snyder website uses third-party tracking technologies, including cookies, pixels, and other trackers that automatically send data about consumers’ online behavior to third-party companies for analytics and behavioral advertising. The CPPA alleges that the opt-out mechanism on the website was not properly configured for a 40-day period: during that period, if a consumer clicked the cookie preferences link on the website, a pop-up appeared but then immediately disappeared, making it impossible for the consumer to opt out of the sale or sharing of their personal information.

The lesson here is that a company cannot pass on its privacy compliance obligations to a third-party privacy management platform; the company itself is responsible for the functionality of such platforms. Michael Macko, head of the CPPA’s Enforcement Division, stated in a press release, “Using a consent management platform doesn’t get you off the hook for compliance [. . .] the buck stops with the businesses.” Your company cannot rely on its third-party privacy management platform for compliance and expect no accountability in the event of non-compliance; you must conduct due diligence and validate that the operation is functioning and compliant with CCPA requirements.
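By way of illustration, a periodic functional check of a consent flow could look like the following Python sketch using Playwright; the URL and selectors are hypothetical placeholders rather than Todd Snyder’s actual markup.

```python
# Illustrative sketch of a periodic functional check on a cookie consent flow,
# using Playwright for Python. The URL and selectors are hypothetical
# placeholders; adapt them to your own site's markup.
from playwright.sync_api import sync_playwright

def cookie_preferences_dialog_stays_open(url: str) -> bool:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        # Open the cookie preferences dialog, as a consumer would.
        page.click("text=Cookie Preferences")
        # Fail if the dialog is not still visible shortly afterwards; the
        # Todd Snyder order involved a pop-up that vanished immediately.
        page.wait_for_timeout(2000)
        visible = page.is_visible("#cookie-preferences-dialog")
        browser.close()
        return visible

if __name__ == "__main__":
    assert cookie_preferences_dialog_stays_open("https://www.example.com"), \
        "Opt-out dialog disappeared: escalate to the privacy team"
```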
This is likely only the start of the CPPA’s enforcement sweep. The time is now—assess your CCPA compliance program and processes, and ensure they are up to par.

5 Key Contracting Considerations for Digital Health Companies Working with AI Vendors

Artificial Intelligence (AI) is rapidly transforming digital health — from patient engagement to clinical decision-making, the changes are revolutionary. Contracting with AI vendors presents new legal, operational, and compliance risks. Digital health CEOs and legal teams must adapt traditional contracting playbooks to address the realities of AI systems handling sensitive and highly regulated health care data.
To help achieve optimal results, here are five critical areas for digital health companies to address when negotiating contracts with potential AI vendors:
1. Define AI Capabilities, Scope, and Performance
Your contract should explicitly:

Describe what the AI tool does, its limitations, integration points, and expected outcomes.
Establish measurable performance standards and incorporate them into service-level agreements.
Include user acceptance testing and remedies, such as service credits or termination if performance standards are not met. This protects your investment in AI-driven services and aligns vendor accountability with your operational goals.

2. Clarify Data Ownership and Usage Rights
AI thrives on data, so clarity around data ownership, access, and licensing is essential. The contract should state the specific data the vendor can access and use (including whether such data includes protected health information (PHI), other personal information, or operational data) and whether it can be used to train or improve the vendor’s models. Importantly, your contract should ensure that any vendor use of data aligns with HIPAA, state privacy laws, and your internal policies, including by restricting reuse of PHI or other sensitive health data for purposes other than providing the services to your company or other purposes permitted by law. There is much greater flexibility to license de-identified data to the vendor for training or developing AI models, if the company has the appetite for such data licensing. 
You should also scrutinize broad data licenses. Be careful not to assume liability for how a vendor repurposes your data unless the use case is clearly authorized in the contract.
3. Demand Transparency and Explainability
Regulators and patients expect transparency in AI-driven health care decisions. Require documentation that explains how the AI model works, the logic behind outputs, and what safeguards are in place to mitigate bias and inaccuracies.
Beware of vendors reselling or embedding third-party AI tools without sufficient knowledge or flow-down obligations. The vendor should be able to audit or explain the tools it licenses from third parties if those AI tools are handling your company’s sensitive health care data.
4. Address Liability and Risk Allocation
AI-related liability, especially from errors, hallucinations, or cybersecurity incidents, can have sizable consequences. Ensure the contract includes tailored indemnities and risk allocations based on the data sensitivity and function of the AI tool.
Watch out for vendors who exclude liability for AI-generated content. This may be acceptable for internal tools but not for outputs that reach patients, payors, or regulators. Low-cost tools with high data exposure can pose disproportionate liability risk, especially if liability caps are tied only to the contract fees. 
5. Plan for Regulatory Compliance and Change
With evolving rules from federal and state privacy regulators, vendors must commit to ongoing compliance with current and future requirements. Contracts should allow flexibility for future changes in law or best practices. This will better help ensure that the AI tools your company relies on will not fall behind the regulatory curve — or worse, expose your company to enforcement risk due to noncompliance or outdated model behavior.
Incorporating the following AI Vendor Contracting Checklist into your vendor selection process will help you systematically manage risks, compliance, and innovation opportunities when engaging AI vendors.
AI Vendor Contracting Checklist:

Define AI scope, capabilities, and performance expectations.
Clarify data ownership, access, and privacy obligations.
Require transparency and explainability of AI processes.
Set clear liability, risk, and compliance responsibilities.
Establish terms for updates, adaptability, and exit strategy.

AI solutions in the health care space continue to rapidly evolve. Thus, digital health companies should closely monitor any new developments and continue to take necessary steps towards protecting themselves during the contracting process.

UK Government Publishes New Software and Cyber Security Codes of Practice

As cyber security continues to make headline news, it is timely that on 7 May 2025 the UK government published a new voluntary Software Security Code of Practice: Software Security Code of Practice – GOV.UK.
The Code is designed to complement relevant international approaches and existing standards and, where possible, reflects internationally recognized best practice, including as outlined in the US Secure Software Development Framework (Secure Software Development Framework | CSRC) and the EU Cyber Resilience Act (Cyber Resilience Act (CRA) | Updates, Compliance, Training).
The Code consists of 14 principles split across four themes (secure design and development; build environment security; secure deployment and maintenance; and communication with customers) that software vendors are expected, though, given the Code’s voluntary nature, not legally obliged, to implement to establish a consistent baseline of software security and resilience across the market. The principles are stated to be relevant to any type of software supplied to business customers.
“Software Vendors” are defined under this Code as organisations that develop and sell software or software services; “Software” is code, programmes and applications that run on devices including on hardware devices and via cloud/SaaS.
A self-assessment form is also available (Software-Security-Code-of-Practice-Self-Assessment-Template.docx), which software vendors can use to assess and evidence compliance with the Code.
The Code follows on from the Cyber Governance Code of Practice and supporting toolkit published on 8 April 2025 (Cyber Governance Code of Practice – GOV.UK), which support boards and directors of medium and large organizations in governing cyber security risks. That code’s emphasis is on helping boards and directors effectively govern and monitor cyber security within their business; it is not intended for those whose role is the day-to-day management of cyber security.
As cyber security continues to be a high-profile and business-critical issue, in the coming months we may start to see compliance with these voluntary codes becoming a contractual obligation imposed on suppliers.