UK Data (Use and Access) Bill to Become Law

On June 11, 2025, the UK Data (Use and Access) Bill (the “DUA Bill”) successfully navigated its final parliamentary hurdle and will soon become law.
The DUA Bill’s journey has been anything but straightforward. Most recently, it became entangled in debates over AI and copyright, with the House of Lords persistently pushing to include transparency provisions related to AI models. These amendments, however, were repeatedly rejected by the House of Commons, which maintained that such provisions should be tackled separately to avoid complicating the DUA Bill’s framework. In a bid for compromise, the DUA Bill now includes provisions requiring the Secretary of State to introduce, amongst other things, draft legislation containing proposals to provide transparency to copyright owners regarding the use of their copyright works as data inputs for AI models.
The DUA Bill’s enactment arrives at a crucial time, as the extension of the EU’s adequacy decisions for the UK will expire in December 2025. The European Commission has indicated that a formal assessment of the UK’s legal framework will not commence until the DUA Bill is formally passed.
For a detailed overview of the legislative changes introduced by the DUA Bill, read our previous update.
Read more about the previously proposed AI transparency amendments.

AI Bias Lawsuit Against Workday Reaches Next Stage as Court Grants Conditional Certification of ADEA Claim

A closely watched class and collective action against the HR management services company Workday, Inc. reached a new milestone recently, when the Northern District of California conditionally certified Age Discrimination in Employment Act (ADEA) claims on behalf of a sprawling collective believed to include millions of job applicants. In Mobley v. Workday, Inc., N.D. Cal. Case No. 23-cv-00770-RFL, the plaintiff alleges that Workday’s popular artificial intelligence (AI)-based applicant recommendation system violated federal antidiscrimination laws because it had a disparate impact on job applicants based on race, age, and disability. Although Mobley does not allege that Workday itself was an “employer” (or prospective employer) of him or the putative class members, he alleges Workday may nonetheless be held liable as an “agent.” In July 2024, the Court denied Workday’s second motion to dismiss, allowing the claims to proceed.
Mobley’s claims cleared a second hurdle on May 16, 2025, when the Court granted conditional certification of the ADEA claims. In seeking conditional certification, Mobley claimed that Workday’s tools were “designed in a manner that reflects employer biases and relies on biased training data.” The Court agreed this adequately “alleged the existence of a unified policy: the use of Workday’s AI recommendation system to score, sort, rank, or screen applicants.” The Court rejected Workday’s argument that collective treatment was improper because the tools’ impact could vary based on different employer-clients (for example, in the case of a Workday tool training itself on different employers’ varying employee populations). It likewise found immaterial for certification purposes that the different class members’ qualifications and experiences may vary, because the common injury was simply being “denied the right to compete on equal footing with other candidates.”
As a result of the ruling, notice will be issued to the allegedly affected job applicants, in what could be one of the largest collectives ever certified. In filings, Workday represented that “1.1 billion applications were rejected” using its software tools during the relevant period, and so the collective could potentially include “hundreds of millions” of members.
In recent years, online platforms have increasingly reduced the friction of the job application process, and consequently the number of applications employers receive has dramatically increased, leading to greater demand for technological solutions to help sort, rank, and filter applicants. AI tools are increasingly being used to address this need, but, as with any new technology, they can give rise to novel claims. As one of the first large-scale tests of such tools in the courts, the Mobley case will undoubtedly continue to attract considerable attention from employers and practitioners alike. Regardless of the outcome, Mobley illustrates the legal risk associated with employing AI tools and the need for employers to be thoughtful as they implement them.
We will continue to monitor this case and other developments as lawmakers, regulators, and courts grapple with the issues created by the use of AI in employment decisions.

AI-Generated Deepfakes in Court: An Emerging Threat to Evidence Authenticity?

Federal Rule of Evidence 901 governs the authentication of evidence in court. Per the rule, “[t]o satisfy the requirement of authenticating or identifying an item of evidence, the proponent must produce evidence sufficient to support a finding that the item is what the proponent claims it is.” Historically, this requirement could be satisfied, for example, through the testimony of a witness with knowledge, comparison with an authenticated item by an expert witness or the trier of fact, or identification of distinctive characteristics. The advent of deepfakes, however, has generated debate over whether additional safeguards need to be implemented to protect the authenticity of evidence.
On May 2, 2025, the U.S. Judicial Conference’s Advisory Committee on Evidence Rules considered proposals to amend the Federal Rules of Evidence to address the challenges posed by AI-generated evidence (see our prior post regarding the Committee’s proposed Rule 707 – Machine-Generated Evidence). Besides Rule 707, the Committee evaluated Rule 901(c), a new draft amendment that addresses deepfakes, i.e., altered or wholly fabricated AI-generated images, audio, or video that are difficult to discern from reality.
While recognizing the importance of detecting deepfakes to preserve the integrity of the judicial system, the Committee ultimately decided that a rule amendment was not necessary at this time, given the courts’ existing methods for evaluating authenticity and the limited instances of deepfakes in the courtroom to date. Nonetheless, as a precaution, the Committee proposed Rule 901(c) for future consideration should circumstances change.
Rule 901. Authenticating or Identifying Evidence
*****
(c) Potentially Fabricated Evidence Created by Artificial Intelligence.

(1) Showing Required Before an Inquiry into Fabrication. A party challenging the authenticity of an item of evidence on the ground that it has been fabricated, in whole or in part, by generative artificial intelligence must present evidence sufficient to support a finding of such fabrication to warrant an inquiry by the court.
(2) Showing Required by the Proponent. If the opponent meets the requirement of (1), the item of evidence will be admissible only if the proponent demonstrates to the court that it is more likely than not authentic.
(3) Applicability. This rule applies to items offered under either Rule 901 or 902.

In the notes section to the draft amendment, the Committee explained its rationale for Rule 901(c)’s two-step process for evaluating deepfake claims. First, the opponent must submit sufficient information for a reasonable person to infer that the proffered evidence was fabricated. A mere assertion that the evidence is a deepfake is not sufficient. Provided that the opponent meets this burden, the Committee explained that the proponent “must prove authenticity under a higher evidentiary standard than the prima facie standard ordinarily applied under Rule 901.”
The Committee acknowledged that Rule 901(c) does not specifically combat another possible consequence of deepfakes, whereby the risk of deepfakes leads juries to distrust genuine evidence. The Committee, however, cited Rule 403 (the “prejudice rule”) and judges’ role as gatekeepers to curb attorney assertions that “you cannot believe anything that you see.”
Clearly, deepfakes present significant challenges in the courtroom and risk eroding public confidence in our judicial system. LMS will continue to monitor this evolving topic, the tools used by judges to verify evidence authenticity, and any associated amendments to the rules of evidence.

HealthBench: Advancing the Standard for Evaluating AI in Health Care

The Evolution of Health Care AI Benchmarking
Artificial Intelligence (AI) foundation models have demonstrated impressive performance on medical knowledge tests in recent years, with developers proudly announcing their systems had “passed” or even “outperformed” physicians on standardized medical licensing exams. Headlines touted AI systems achieving scores of 90% or higher on the United States Medical Licensing Examination (USMLE) and similar assessments. However, these multiple-choice evaluations presented a fundamentally misleading picture of AI readiness for health care applications. As we previously noted in our analysis of AI/ML growth in medicine, a significant gap remains between theoretical capabilities demonstrated in controlled environments and practical deployment in clinical settings.
These early benchmarks—predominantly structured as multiple-choice exams or narrow clinical questions—failed to capture how physicians actually practice medicine. Real-world medical practice involves nuanced conversations, contextual decision-making, appropriate hedging in the face of uncertainty, and patient-specific considerations that extend far beyond selecting the correct answer from a predefined list. The gap between benchmark performance and clinical reality remains largely unexamined.
HealthBench—an open-source benchmark developed by OpenAI—represents a significant advancement in addressing this disconnect, designed to be meaningful, trustworthy, and unsaturated. Unlike previous evaluation standards, HealthBench measures model performance across realistic health care conversations, providing a comprehensive assessment of both capabilities and safety guardrails that better align with the way physicians actually practice medicine.
The Purpose of Rigorous Benchmarking
Robust benchmarking serves several critical purposes in health care AI development. It sets shared standards for the AI research community to incentivize progress toward models that deliver real-world benefits. It provides objective evidence of model capabilities and limitations to health care professionals and institutions that may employ such models. It helps identify potential risks before deployment in patient care settings. It establishes baselines for regulatory review and compliance, and perhaps most importantly, it evaluates models against authentic clinical reasoning rather than simply measuring pattern recognition or information retrieval. As AI systems become increasingly integrated into health care workflows, these benchmarks become essential tools for ensuring that innovation advances alongside trustworthiness, with evaluations of safety and reliability that reflect the complexity of real clinical practice.
HealthBench: A Comprehensive Evaluation Framework
HealthBench consists of 5,000 multi-turn conversations between a model and either an individual user or a health care professional. Responses are evaluated using physician-written, conversation-specific rubrics that together span 48,562 unique criteria across seven themes: emergency referrals, context-seeking, global health, health data tasks, expertise-tailored communication, responding under uncertainty, and response depth. This multidimensional approach allows for nuanced evaluation across five behavioral axes: accuracy, completeness, context awareness, communication quality, and instruction following.
By focusing on conversational dynamics and open-ended responses, HealthBench challenges AI systems in ways that mirror actual clinical encounters rather than artificial testing environments—revealing substantial gaps even in frontier models and providing meaningful differentiation between systems that might have scored similarly on traditional multiple-choice assessments.
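To make the rubric-based grading described above more concrete, the toy sketch below shows one way conversation-specific criteria with positive and negative point values could be aggregated into a score. The criteria, point values, and scoring function are illustrative assumptions only, not HealthBench’s actual implementation (which relies on model-based graders validated against physician judgment).

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    description: str  # e.g., "Advises the user to seek emergency care"
    points: int       # positive for desired behavior, negative for undesirable behavior

def score_response(criteria: list[Criterion], met: list[bool]) -> float:
    """Aggregate a rubric: points earned over the maximum achievable positive points.

    met[i] indicates whether a grader (human or model) judged criterion i to be satisfied.
    """
    earned = sum(c.points for c, m in zip(criteria, met) if m)
    max_positive = sum(c.points for c in criteria if c.points > 0)
    return max(0.0, earned / max_positive) if max_positive else 0.0

# Illustrative rubric for a single conversation; criteria and point values are hypothetical.
rubric = [
    Criterion("Recommends urgent in-person evaluation for red-flag symptoms", 10),
    Criterion("Asks for missing context (age, medications, symptom duration)", 5),
    Criterion("States a definitive diagnosis without sufficient information", -8),
]
print(score_response(rubric, met=[True, False, True]))  # (10 - 8) / 15 ≈ 0.13
```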
Physician-Validated Methodology
Developed in collaboration with 262 physicians across 26 specialties with practice experience in 60 countries, HealthBench grounds its assessment in real clinical expertise. These physicians contributed to defining evaluation criteria, writing rubrics, and validating model grading against human judgment. This physician-led approach aimed to develop benchmarks that reflect real-world clinical considerations and maintain a high standard of medical accuracy.
Notably, when physicians were asked to write responses to HealthBench conversations without AI assistance, their performance was weaker than that of the most advanced models, though physicians could improve responses from older models. This suggests that HealthBench’s evaluation approach captures dimensions of performance that go beyond memorized knowledge and may better reflect the nuances of human interactions, communication, and reasoning required in clinical practice.

What to Watch: FDA Shifts Attention on Artificial Intelligence

In an interesting and somewhat unexpected turnabout over the last six months, FDA has pivoted its focus from regulating industry’s use of artificial intelligence (“AI”) to how the agency itself utilizes AI. This internal shift marks a departure from FDA’s development of AI guidance over the last few years.
2024 marked an active year for AI regulation by FDA, with establishment of the CDER AI Council as well as the development and release of various white papers and guidance documents. Even at the beginning of 2025, FDA released much-anticipated drug and device guidance documents. As the agency has undergone transformation in the first half of 2025, it has been notably quiet on the release of policy statements or guidance documents impacting the industry’s use of AI. Instead, FDA announced in May a short timeline to scale the use of AI internally across FDA’s various centers, and in early June rolled out Elsa, an AI tool intended to, among other uses, accelerate clinical protocol reviews and scientific evaluations and identify high-priority inspection targets. The way in which Elsa will affect FDA’s processes and timelines is certainly an area to keep an eye on.
Also, as FDA continues to prioritize its internal use of AI, the gap between FDA’s deceleration on industry regulation and the continued emergence of state AI laws impacting industry is creating a tension that requires attention from in-house privacy, legal, and compliance teams.

Nuclear Power in 2025 – DOE Loan Programs Office (LPO) at the Forefront

As the U.S. faces surging electricity demand from AI data centers, infrastructure upgrades and decarbonization goals, nuclear energy is re-emerging as a viable clean power source. Central to this 2025 revival is the U.S. Department of Energy’s Loan Programs Office (LPO), which is helping push nuclear energy in the United States from near stagnation to resurgence.
LPO Nuclear Program:
The LPO provides federal loan guarantees under its Title 17 Clean Energy Financing Program to help bridge capital gaps and de-risk early-stage nuclear projects. These guarantees make it easier for developers to finance next-generation Small Modular Reactors (SMRs), upgrades and restorations of retired plants, and traditional large-scale nuclear projects. Much as it did for solar projects in the program that ended in 2011, the LPO offers loan guarantees for project financing at attractive rates – facilitating large-scale capital that would otherwise not be available, particularly for newer technologies.
2025 LPO Highlights:

Palisades Nuclear Plant Restart – in March and April 2025, LPO approved and disbursed approximately $57 million in loan guarantee funding (of the project’s total conditional loan guarantee of up to $1.52 billion) to restart the 800 MW Palisades plant located in Michigan, which is the first ever major U.S. nuclear reactivation.
Vogtle Units 3 & 4 – The AP1000 reactors in Georgia, which have long been supported by the LPO (~$12 billion since 2010), are now online – demonstrating how LPO’s federal backing helps de-risk first-of-its-kind technology.
SMR & Advanced Reactors – Legislation like the ADVANCE Act of 2024 incentivizes advanced nuclear deployment by reducing licensing costs, supporting high-assay low-enriched uranium (HALEU) supply, and streamlining NRC reviews. DOE has announced funding through Title 17 for Gen III+ SMRs, including the Palisades restart and upcoming deployments by Holtec and NuScale.
Fuel Supply & Licensing – DOE is channeling HALEU to companies including Westinghouse and TerraPower to support advanced designs. On May 23, 2025, four executive orders were signed directing the NRC and DOE to accelerate approvals of proposed nuclear projects and site selection, in order to streamline the federal nuclear permitting process. The orders aim to quadruple U.S. nuclear capacity to 400 GW by 2050.
Additional Government Funding – the 2026 federal budget requests $750 million in subsidy funding and $30 billion in new Title 17 loan authority, primarily for nuclear, critical minerals, grid upgrades and firm power.

LPO Application Process:
Applying for a DOE loan guarantee involves a structured process, and is not as simple as filling out a form. The LPO acts more like a commercial project finance lender with a public mission. The project sponsor will need to prepare a full project financing package and navigate several stages. At a high level, the LPO will expect a bankable project, including third-party market validation, strong project contracts and a viable debt/equity stack. The process starts with a Part I application form submitted through the LPO portal, attaching a project summary, technology description, preliminary financial model, environmental considerations and the basis for expected eligibility under Title 17 – a go/no-go is generally given by the LPO within 60 days. If invited, the sponsor must then submit Part II of the application with a fee, which includes a detailed financial model, feasibility studies, market risk analysis, resumes of the management team, material project contracts (site control, offtake, EPC, O&M), a Phase I EIS, a corporate org chart, and debt and equity funding commitments. DOE then conducts its due diligence, typically through independent technical, financial and legal consultants, and, if successful, the next step is negotiation of a term sheet and a conditional commitment. Then come finalization of equity commitments and third-party debt commitments, completion of the NEPA process, signing of documentation and satisfaction of conditions precedent to closing. The timing from initial application to financial closing is expected to be about two years.
Tips:

Lead with a bankable project team – top-tier EPC, O&M, legal, insurance
Engage NEPA counsel early
Pay attention to DOE’s policy priorities relating to domestic supply chains and labor standards
Start dialogue with LPO staff before applying – they encourage early conversations

Conclusion:
With the Palisades restart and the AP1000 Vogtle units demonstrating renewed momentum and SMRs on the horizon, federal loan guarantees are proving their power. As the nation races to meet demand and climate goals alike, DOE’s financial backing could be the linchpin that turns nuclear promise into reality – provided that fuel supply, licensing, safety and budget support remain aligned.
To read our previous article about nuclear power, click here.

Navigating Change: The Impact of the UK’s Data (Use and Access) Bill on Businesses

The UK Data (Use and Access) Bill (the “DUA Bill”) has been subject to a surprisingly prolonged legislative journey, oscillating between the House of Commons and the House of Lords as it approaches the final stages. This back-and-forth reflects the complexity and controversy surrounding certain of its provisions. Once the DUA Bill is agreed, it is estimated that it will come into effect within approximately 12 months. This article summarises certain of the key changes to UK data protection and privacy legislation proposed by the DUA Bill, considers the impact of such changes on the UK’s existing EU Commission adequacy decision and discusses how businesses should approach compliance.
How the DUA Bill Amends Data Protection and Privacy Legislation
The DUA Bill proposes fundamental changes to the UK’s data protection and privacy legislation, including the UK General Data Protection Regulation (“UK GDPR”). The focus of the UK government is to modernise and streamline existing legislation as part of an effort to bolster data governance in the UK. It addresses key areas of data protection and privacy, such as legitimate interests, international data transfers and automated decision-making (“ADM”), while also covering other data-related areas, including smart data and public registers. It seeks to balance the need for flexibility in data processing with robust safeguards for personal data, reflecting the evolving digital landscape and the increasing importance of data-driven technologies. The UK government believes that the proposed legislative amendments will foster innovation and enhance public trust, while remaining aligned with international standards and the EU General Data Protection Regulation.
AI Models
The key topic which remains under debate between the House of Lords and the House of Commons is whether to include provisions related to AI models. The House of Lords argued for the inclusion of transparency requirements for business data used in relation to AI models and inserted provisions requiring developers of AI models to publish all information used in the pre-training, training, fine-tuning and retrieval-augmented generation of the AI model, and to provide a mechanism for copyright owners to identify any individual works they own that may have been used during such processes. These provisions emerged as the most contentious aspect of the DUA Bill, contributing significantly to its ongoing back-and-forth between the House of Commons and the House of Lords. The House of Commons is of the view that transparency requirements for AI models warrant separate legislative action, arguing that their inclusion in the DUA Bill would complicate the overarching framework and would require additional public funds. As of the time of writing, the transparency provisions for AI models have been removed from the DUA Bill and replaced with provisions requiring the Secretary of State to introduce, amongst other things, draft legislation containing proposals to provide transparency to copyright owners regarding the use of their copyright works as data inputs for AI models. We now wait to see whether this approach will be agreed to between the House of Lords and House of Commons.
Recognised Legitimate Interests and Legitimate Interests
The DUA Bill introduces “recognised legitimate interests” as a new, lawful basis for processing personal data. Building on the existing lawful basis of legitimate interests, this new basis allows businesses to process data for specific purposes defined under the DUA Bill without conducting a traditional legitimate interests assessment (“LIA”). The listed processing activities include national security and defence, and responding to emergencies and safeguarding vulnerable people.
Additionally, the DUA Bill outlines a further list of processing activities which “may” be carried out under the existing legitimate interests lawful basis. While such activities are not “recognised legitimate interests” and therefore still require an LIA, the legislative footing gives businesses greater certainty when seeking to rely on legitimate interests for the activity. The activities include direct marketing, sharing data intra-group for internal administrative purposes, and ensuring the security of network and information systems.
International Data Transfers
The DUA Bill amends the adequacy decision framework in several ways. The amendments re-work Article 45 of the UK GDPR so the framework comprises “transfers approved by regulations,” as opposed to “transfers on the basis of an adequacy decision.” To approve a country by regulations, the UK Secretary of State must be of the view that the “data protection test” is met, i.e., the standard of protection in the third country is “not materially lower” than that of the UK. Similar to the UK GDPR, the DUA Bill sets out considerations which the UK Secretary of State should take into account when assessing whether the data protection test is met for a third country, including, for example, whether the third country has respect for the rule of law and human rights, and whether the third country has an authority for enforcing data protection. While the amendments initially appear as fairly substantial, they are unlikely to significantly affect international data transfers from the UK as they do not radically reform the existing framework.
Data Subject Access Requests
The DUA Bill seeks to address certain challenges posed by data subject access requests (“DSARs”). The amendments clarify that data subjects are only entitled to information resulting from a “reasonable and proportionate” search by the business, the intention being to reduce the cost and administrative burden on businesses of fulfilling DSARs. The DUA Bill also amends the time limit for responding to a DSAR, enabling businesses to extend the initial one-month period for responding by a further two months where it is deemed necessary by reason of the “complexity” or “number” of requests by a data subject.
Automated Decision-Making
The DUA Bill relaxes restrictions on the use of ADM, enabling ADM without the existing restrictions under Article 22 of the UK GDPR (e.g., procuring the consent of the individual) where special category data is not processed. Where ADM is conducted without special category data, the DUA Bill still requires that safeguards be implemented, such as transparency regarding the ADM and allowing individuals to contest decisions and seek human review.
Scientific Research Provisions
The DUA Bill broadens the definition of scientific research to encompass any research “reasonably described as scientific, whether publicly or privately funded and whether carried out as a commercial or non-commercial activity,” expanding the exemptions for processing of special category data under the UK GDPR to include privately funded and commercial research. The definition also removes the need for a public interest assessment with respect to the processing of scientific research data. Under the new definition, data subjects will be able to consent to the use of their data for scientific research purposes even if such purposes are not yet “possible to identify.”
Purpose Limitation
The DUA Bill clarifies the concept of “further processing.” Amongst other things, it outlines criteria to help assess whether further processing is compatible with the original purpose, such as the link between the new and original purposes, the context in which the data was originally collected and the possible consequences for data subjects of the further processing being contemplated. It also sets out instances when processing for a new purpose would be deemed compatible with the original purpose, for example where the data subject consents or where the processing meets a condition set out in the new Annex 2, such as where the processing is necessary for the controller to comply with an obligation under an enactment, a rule of law or an order of a court or tribunal.
Children’s Data
Emphasising the protection of children’s data, the DUA Bill introduces the concept of “children’s higher protection matters” to the principle of data protection by design and default in the context of providing an information society service likely to be accessed by children. This places additional duties on businesses and the Information Commissioner’s Office (the “ICO”) to consider the vulnerability of children when carrying out responsibilities under data protection law, in an effort to ensure enhanced safeguards for young individuals.
Cookie Requirements and PECR Fines
The DUA Bill introduces key changes to the rules governing the use of cookies and similar tracking technologies under the Privacy and Electronic Communications Regulations (“PECR”), most notably regarding the need for consent. The DUA Bill provides exemptions from the requirement to seek consent for certain non-essential cookies and similar tracking technologies used solely to collect statistical data with a view to improving the appearance or performance of a website, adapting a website to a user’s preferences, or making improvements to services or a website. It also includes an exhaustive list of purposes for which cookies and similar tracking technologies can be considered strictly necessary, such as security and fraud detection. The effect is that consent is not required to use such cookies and similar tracking technologies, nor are businesses required to offer the ability to opt out. Additionally, the DUA Bill aligns fines for non-compliance with PECR with the UK GDPR, setting sanctions at up to 4 percent of global turnover or £17.5 million, whichever is higher.
Information Commission
The DUA Bill provides for significant organisational changes to the ICO. For example, it abolishes the ICO, replacing it with the Information Commission, and replaces the lead Information Commissioner role with a Chair supported by executive and non-executive members. It also reforms the process by which data subjects can submit complaints to the ICO, requiring that complaints be addressed by the relevant business first; a complaint can only be escalated to the ICO when it has not been dealt with satisfactorily, thereby reducing the number of complaints reaching the regulator.
Other Provisions
Beyond the amendments to data protection regulations, the DUA Bill introduces other provisions that, according to the UK government, seek to promote the growth of the UK economy, improve UK public services and make people’s lives easier, such as:

Smart Data: The DUA Bill introduces provisions enabling Smart Data Schemes, whereby the Secretary of State can issue regulations governing access to customer and business data. Open Banking is an example of a Smart Data Scheme already existing in the UK. Government consultations will define which businesses can access data and what safeguards apply.
Digital Verification Services: The DUA Bill establishes a framework for “trusted” providers of digital verification services (“DVS”) by introducing a DVS register with additional certification through a DVS Trust Framework which will be created by the Secretary of State in consultation with the ICO. This initiative aims to enhance trust and security in digital verification processes.
Healthcare Data: To facilitate data sharing across platforms, the DUA Bill mandates that IT systems in the healthcare system must meet common standards. The Secretary of State will be given the power to publish an information standard on IT services in the healthcare setting, including on technical provisions such as functionality, connectivity, interoperability, portability, storage and data security.

Conclusion
The DUA Bill represents a comprehensive effort to modernise data protection laws in the UK, balancing the need for economic growth and innovation with the imperative to safeguard individual privacy and data security.
The UK government is optimistic that these changes will be well-received by the European Commission when considering the UK’s adequacy decisions. The European Commission recently granted a six-month extension to the UK’s two adequacy decisions to allow the UK additional time to finalise the DUA Bill, after which the European Commission intends to reassess the adequacy of data protection in the UK (see here for more information on the extensions).
As it nears implementation, businesses impacted by the DUA Bill should take proactive measures to review their data processing practices in anticipation of the new requirements set forth by the legislation. This preparation involves not only ensuring compliance with the new obligations but also capitalising on opportunities to enhance data management and security, and to streamline certain processing activities such as the use of ADM and cookies.

Synthetic Data and the Illusion of Privacy: Legal Risks of Using De-Identified AI Training Sets

As artificial intelligence (AI) systems grow more advanced and data-driven, organizations are turning to synthetic and de-identified data to power model development while purportedly reducing legal risk. The assumption is simple: if the data no longer identifies a specific individual, it falls outside the reach of stringent privacy laws and related obligations. But this assumption is increasingly being tested—not just by evolving statutes and regulatory enforcement, but by advances in re-identification techniques and shifting societal expectations about data ethics.
This article examines the complex and evolving legal terrain surrounding the use of synthetic and de-identified data in AI training. It analyzes the viability of “privacy by de-identification” strategies in light of re-identification risk, explores how state privacy laws like the California Consumer Privacy Act (CCPA) and its successors address these techniques, and highlights underappreciated contractual, ethical, and regulatory risks that organizations should consider when using synthetic data.
The Promise and Pitfalls of Synthetic and De-Identified Data
Synthetic data refers to artificially generated data that mimics the statistical properties of real-world datasets. It may be fully fabricated or derived from actual data using generative models, such as GANs (Generative Adversarial Networks). De-identified data, by contrast, refers to datasets where personal identifiers have been removed or obscured.
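As a concrete (and deliberately simplified) illustration of the first category, the sketch below generates synthetic rows by fitting a multivariate Gaussian to a small, hypothetical numeric dataset and sampling from it, so the output preserves the original means and covariances without reproducing any real record. Production systems typically use far richer generative models, such as GANs, and nothing here reflects any particular vendor’s approach.

```python
import numpy as np

def generate_synthetic(real: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Sample synthetic rows from a multivariate Gaussian fitted to the real data,
    preserving its column means and covariances without copying any actual record."""
    rng = np.random.default_rng(seed)
    mean = real.mean(axis=0)
    cov = np.cov(real, rowvar=False)  # columns are treated as variables
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Hypothetical "real" numeric dataset: age, annual income, number of clinic visits.
real_data = np.array(
    [[34, 62000, 2], [51, 80000, 5], [29, 54000, 1], [45, 73000, 4]], dtype=float
)
synthetic = generate_synthetic(real_data, n_samples=3)
print(synthetic.round(1))
```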
These approaches are attractive for AI development because they purport to eliminate the compliance burdens associated with personal data under laws like the CCPA, the Illinois Biometric Information Privacy Act (BIPA), or the Virginia Consumer Data Protection Act (VCDPA). But whether these datasets are truly outside the scope of privacy laws is not a settled question.
Re-Identification Risk in a Modern AI Context
Re-identification—the process of matching anonymized data with publicly available information to reveal individuals’ identities—is not a hypothetical risk. Numerous academic studies and high-profile real-world examples have demonstrated that supposedly de-identified datasets can be reverse-engineered. In 2019, researchers were able to re-identify 99.98% of individuals in a de-identified U.S. dataset using just 15 demographic attributes. In 2023, researchers published methods for reconstructing training data from large AI models, raising new concerns about AI-enabled leakage and data memorization.
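The intuition behind these linkage attacks can be shown with a toy example: even after names are removed, a handful of quasi-identifiers may single out most records, which can then be joined against public datasets containing the same attributes alongside identities. The records and attribute names below are hypothetical.

```python
from collections import Counter

# Hypothetical "de-identified" records: names removed, quasi-identifiers retained.
records = [
    {"zip": "60614", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
    {"zip": "60614", "birth_year": 1991, "sex": "M", "diagnosis": "diabetes"},
    {"zip": "60622", "birth_year": 1984, "sex": "F", "diagnosis": "hypertension"},
]
quasi_identifiers = ("zip", "birth_year", "sex")

# Count how many records share each combination of quasi-identifiers.
combo_counts = Counter(tuple(r[k] for k in quasi_identifiers) for r in records)

# Records with a unique combination can be linked to an outside dataset (e.g., a public
# voter roll) that lists the same attributes next to names, re-identifying the person.
unique = [r for r in records if combo_counts[tuple(r[k] for k in quasi_identifiers)] == 1]
print(f"{len(unique)} of {len(records)} records are unique on {quasi_identifiers}")
```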
The risk of re-identification undermines the legal assumption that de-identified or synthetic data always falls outside of privacy laws. Under the CCPA, for example, “deidentified information” must meet strict criteria, including that the business does not attempt to re-identify the data and implements technical and business safeguards to prevent such re-identification.1 Importantly, the burden is on the business to prove that the data cannot reasonably be used to infer information about or re-identify a consumer. A similar standard applies under the Colorado Privacy Act2 and the Connecticut Data Privacy Act.3
Given the growing capabilities of AI models to infer sensitive personal information from large datasets, a company’s belief that it has met these standards may not align with regulators’ or plaintiffs’ evolving views of reasonableness.
Regulatory and Enforcement Attention
State attorneys general and privacy regulators have signaled increased scrutiny of de-identification claims. The California Privacy Protection Agency (CPPA), for instance, has prioritized rulemaking on automated decision-making technologies and high-risk data processing, including profiling and algorithmic training. Early drafts suggest regulators are interested in the opacity of AI training sets and may impose disclosure or opt-out requirements even where de-identified data is used.
In enforcement, regulators have challenged claims of anonymization when they believe data could reasonably be re-associated with individuals. The FTC’s 2021 settlement with Everalbum, Inc. involved claims that the company misrepresented the use of facial recognition technology and the deletion of biometric data. As AI systems trained on biometric, geolocation, or consumer behavior data become more common, scrutiny will likely intensify.
Companies operating in jurisdictions with biometric laws, such as BIPA or the Texas Capture or Use of Biometric Identifier Act (CUBI), may also face heightened risk. In Rivera v. Google Inc.,4 a federal court in Illinois allowed BIPA claims to proceed where facial templates were generated from user-uploaded images, even though the templates were not linked to names, finding that such data could still qualify as biometric identifiers. Conversely, in Zellmer v. Meta Platforms, Inc.,5 the Ninth Circuit held that BIPA did not apply where facial recognition data was not linked to any identifiable individual, emphasizing the absence of any connection between the template and the plaintiff. These cases underscore that even de-identified biometric data may trigger compliance obligations under BIPA if the data retains the potential to identify individuals, particularly when used to train AI systems capable of reconstructing or linking biometric profiles.
Contractual and Ethical Considerations
Beyond statutory exposure, organizations should consider the contractual and reputational implications of using synthetic or de-identified data for AI training. Contracts with data licensors, customers, or partners may include restrictions on derivative use, secondary purposes, or model training altogether—especially in regulated industries like financial services and healthcare. Misuse of even anonymized data could lead to claims of breach or misrepresentation.
Ethically, companies face growing pressure to disclose whether AI models were trained on consumer data and whether individuals had any control over that use. Transparency expectations are evolving beyond strict legal requirements. Companies seen as skirting consumer consent through “anonymization loopholes” may face reputational damage and litigation risk, especially as synthetic datasets increasingly reflect sensitive patterns about health, finances, or demographics.
Takeaways for Compliance and AI Governance
Companies relying on synthetic or de-identified data to train AI models should reconsider any assumptions that such data is legally “safe.” Legal risk can arise from:

Re-identification risk, which may bring data back within the scope of privacy laws;
Evolving state privacy standards, which impose safeguards and reasonableness requirements;
Contractual limitations on data use, which may prohibit derivative or secondary applications; and
Public perception and regulatory scrutiny, especially regarding opaque AI systems.

To mitigate these risks, businesses should adopt rigorous data governance practices. This includes auditing training datasets, documenting de-identification techniques, tracking regulatory definitions, and considering independent assessments of re-identification risk. Where possible, companies should supplement technical safeguards with clear contractual language and consumer disclosures regarding the use of data in model development.
The age of synthetic data has not eliminated privacy risk. Instead, it has introduced new complexities and heightened scrutiny as the lines between real and generated data continue to blur.
The views expressed in this article are those of the author and not necessarily of his law firm, or of The National Law Review.

1 Cal. Civ. Code § 1798.140(m).
2 Colo. Rev. Stat. Ann. § 6-1-1303(11).
3 Conn. Gen. Stat. § 42-515(16).
4 238 F. Supp. 3d 1088 (N.D. Ill. 2017).
5 104 F.4th 1117 (9th Cir. 2024).

NLRB Releases FY 2026 Budget: Proposed Staffing Cuts and Focus on Efficiency and IT Modernization

Amidst ongoing transitions—with the Board operating with a quorum and the President’s nominee for General Counsel pending Senate confirmation—the National Labor Relations Board (“NLRB” or “Board”) released its Fiscal Year (“FY”) 2026 Budget Justification on May 23, 2025. The proposed budget requests $285.2 million, a 4.7% ($14 million) decrease from the FY 2025 enacted level.
This reduction reflects anticipated savings from staffing reductions and workforce optimization, as the agency continues to address significant delays in case management across its regional offices. The proposed budget aligns with Executive Order 14210, which directs federal agencies to optimize workforce efficiency.
Mission and Strategic Goals
The NLRB’s mission is to protect workplace democracy and the rights of employees, unions, and employers under the National Labor Relations Act (“NLRA”). The agency’s FY 2022–2026 Strategic Plan (with the FY 2026–2030 plan under review for rollout in the FY 2027 cycle) emphasizes four primary goals: (1) effective enforcement of the NLRA, (2) protection of employee free choice in representation matters, (3) organizational excellence, and (4) efficient resource management to instill public trust.
Budget Overview and Staffing
The FY 2026 budget request is allocated as follows:

Labor (Salaries and Benefits): $231 million (81.1%)
Information Technology, Security, and Facilities: $23 million (7.9%)
Rent: $21 million (7.5%)
Other Operational Costs: $10 million (3.5%)

The agency anticipates a reduction of 99 full-time equivalents (“FTEs”), bringing total staffing to 1,152 FTEs. This decrease is driven by voluntary early retirements and deferred resignations, as part of broader efforts to address staffing imbalances and improve operational efficiency.
Information Technology and Cybersecurity Initiatives
A central focus of the FY 2026 budget is the modernization of the NLRB’s IT infrastructure. The agency is replacing its legacy NxGen Case Management System, supported by a $23.2 million investment from the Technology Modernization Fund (“TMF”).
The NLRB is also integrating Artificial Intelligence (“AI”) to automate routine tasks, enhance data analysis, and improve case-processing efficiency, in line with recent executive orders on government efficiency and responsible AI use.
Cybersecurity remains a top priority. The agency is participating in the Department of Justice’s Zero Trust Architecture pilot and is implementing advanced monitoring, data loss prevention, and compliance with federal cybersecurity mandates.
Legislative and Administrative Provisions
The NLRB is not proposing new legislation for FY 2026. The budget includes language specifying appropriations for the Office of Inspector General and maintains restrictions on the use of electronic voting in representation elections.
Audit and Oversight
The budget report identifies several open audit recommendations, including improvements to data accuracy in case management, performance-based staffing methodologies, internal controls for mail ballot elections, and enhanced cybersecurity training.
Takeaways
The NLRB’s proposed budget is designed to meet the White House’s workforce efficiency goals without resorting to mandatory layoffs. The agency’s focus on IT modernization, AI integration, and strengthened cybersecurity is intended to drive operational efficiency and transparency. The NLRB is clearly relying on these technological improvements to offset the impact of staff reductions and to maintain or improve agency performance.
The budget must still undergo several steps before approval, including review by the House and Senate appropriations committees, potential hearings, committee and full chamber votes, reconciliation of any differences, and, ultimately, the President’s signature.
If enacted, the impact of the FY 2026 budget on the agency’s ability to process unfair labor practice and representation cases remains to be seen, especially given ongoing work backlogs and staffing challenges.

The One Big Beautiful Bill Act’s Proposed Moratorium on State AI Legislation: What Healthcare Organizations Should Know

Congress is weighing a sweeping proposal that could significantly reshape how artificial intelligence (AI) is regulated across the United States. At the end of May, the United States House of Representatives passed, by a vote of 215-214, the One Big Beautiful Bill Act (OBBBA), a budget reconciliation bill with a provision imposing a 10-year moratorium on the enforcement of most state and local laws that target AI systems. If enacted, OBBBA would pause the enforcement of existing state AI laws and regulations and take precedence over emerging AI legislation in state legislatures across the country.
For healthcare providers, payors, and other healthcare stakeholders, the implications are substantial. While the moratorium could streamline AI deployment and ease compliance burdens, it also raises questions about regulatory uncertainty and patient safety, potentially undermining patient trust.
What OBBBA Would Do
Section 43201 of OBBBA prohibits the enforcement of any state or local law or regulation “limiting, restricting, or otherwise regulating” AI models, AI systems, or automated decision systems. OBBBA defines AI as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” The definition of “automated decision systems” is similarly broad, encompassing “any computational process derived from machine learning, statistical modeling, data analytics, or AI that issues a simplified output (e.g., a score, classification, or recommendation) to materially influence or replace human decision making.”
As proposed, OBBBA would preempt several enacted and proposed restrictions on AI use in healthcare, including:

California AB 3030, which (with few exceptions) mandates disclaimers when generative AI is used to communicate clinical information to patients and requires that patients be informed of how to reach a human provider;
California SB 1120, which prohibits health insurers from using AI to deny coverage without sufficient human oversight;
Colorado Artificial Intelligence Act, which regulates developers and deployers of AI systems, particularly those considered “high risk”;
Utah Artificial Intelligence Policy Act, which requires regulated occupations (including healthcare professionals) to prominently disclose at the beginning of any communication that a consumer is interacting with generative AI; and
Massachusetts Bill S.46, which, as proposed, would require healthcare providers to disclose the use of AI to make decisions affecting patient care.

Importantly, however, OBBBA contains exceptions that will likely spark debate about the true scope of the moratorium. Under OBBBA, state AI laws and regulations will remain enforceable (and not preempted) if they fall under any one of the following exceptions:

Primary Purpose and Effect Exception. The state law or regulation has the “primary purpose and effect,” with respect to the adoption of AI or automated decision systems, of: (i) removing legal impediments; (ii) facilitating deployment or operation; or (iii) consolidating administrative procedures;
No Design, Performance, and Data-Handling Imposition Exception. The state law or regulation does not impose substantive design, performance, data-handling, documentation, civil liability, taxation, fee, or similar requirements on AI or automated decision systems, unless these requirements are imposed under federal law or are generally applicable to other models and systems that perform similar functions; or
Reasonable and Cost-Based Fees Exception. The state law or regulation imposes only fees or bonds that are “reasonable and cost-based” and imposed equally on other AI models, AI systems, and automated decision systems that perform comparable functions.

The last two exceptions, in particular, imply that the moratorium would affect only those state laws that treat AI systems differently from other systems. As such, laws of general application at the state and federal level would continue to regulate AI, including those concerning anti-discrimination, privacy, and consumer protection. However, even with this carve-out, the moratorium would undeniably transform the AI regulatory landscape, given the absence of robust federal regulation to replace state-level restrictions.
Why It Matters for Healthcare Stakeholders
The proposed moratorium is part of the Trump Administration’s broader emphasis on innovation over regulation in the AI space. Supporters argue that a single federal standard would help reduce compliance burdens on AI developers by eliminating the need to track and implement AI rules in 50 states. This would, in turn, encourage innovation and protect national competitiveness, as the U.S. races to keep pace with the European Union and China on AI development.
But for healthcare providers, the tradeoffs are complex. State-level regulation has its advantages. For example, patients may grow wary of AI-enabled care if transparency and oversight appear to be diminished, especially in sensitive areas like diagnosis, care triage, or behavioral health. Additionally, states often act as early responders to emerging risks. A moratorium could prevent regulators from addressing evolving clinical concerns related to AI tools, especially given the lack of comprehensive federal guardrails in this area.
Legal and Procedural Challenges
The moratorium also faces significant constitutional and procedural hurdles. For example, legal scholars and 40 bipartisan state attorneys general have raised concerns that OBBBA may infringe upon state police powers related to health and safety, potentially raising issues under the Tenth Amendment. Additionally, if enacted, the moratorium is expected to face legal challenges in court, given bipartisan opposition.
What Healthcare Organizations Should Do Now
Healthcare organizations should maintain strong compliance practices and stay abreast of laws of general application, such as HIPAA and state data privacy and security laws, as AI tools are likely to remain subject to such laws, despite uncertainties that may emerge if OBBBA is enacted. Even if the moratorium does not pass the United States Senate, Congress has clearly signaled a growing intent to regulate AI—whether through future legislation or agency-led rulemaking by entities such as the United States Department of Health and Human Services or the Food and Drug Administration. As such, healthcare organizations should have a clear vision of their policies and practices involving AI compliance, including the following:

Maintaining Compliance Readiness. Continue monitoring and preparing for state-level AI regulations that are currently in effect or soon to be implemented.
Auditing Current AI Deployments. Evaluate how AI tools are currently used across clinical, operational, and administrative functions, and continue to assess their alignment with broader legal frameworks, including, but not limited to, HIPAA, FDCA, FTC Act, Title VI, and consumer protection laws. As discussed, AI tools will continue to remain subject to many laws of general application even if the moratorium passes.
Engaging in Strategic Planning. Depending on whether the moratorium is approved by the United States Senate and survives legal scrutiny, organizations may need to recalibrate compliance programs.

Regardless of whether OBBBA is ultimately enacted, the proposed federal AI enforcement moratorium marks a pivotal moment in the evolving landscape of AI regulation in healthcare. Providers would be well served to remain proactive, well-informed, and prepared to adapt to evolving legal and regulatory developments.

Regulation Round Up May 2025

Welcome to the Regulation Round Up, a regular bulletin highlighting the latest developments in UK and EU financial services regulation.
Key developments in May 2025:
30 May
Trade Settlement: The Financial Conduct Authority (“FCA”) published a press release on the faster settlement of trades in funds.
29 May
FCA Regulation Round‑up: The FCA has published its regulation round-up for May 2025. Among other things, it covers a future data request for the advisers and intermediaries’ sector, and an upcoming webinar on the FCA’s recent consultation paper on simplifying its insurance rules.
28 May
Cryptoassets: The FCA has published a consultation paper (CP25/14) on proposed prudential rules and guidance for firms carrying on the regulated activities of issuing qualifying stablecoins and safeguarding qualifying cryptoassets.
27 May
Liquidity Risk Management: The International Organization of Securities Commissions (“IOSCO”) has published its final report on its updated liquidity risk management recommendations for collective investment schemes alongside final guidance for the effective implementation of its revised recommendations.
23 May
FCA Handbook: The FCA has published Handbook Notice 130, which sets out changes to the FCA Handbook made by the FCA board on 1 May and 22 May. The changes relate to payment optionality for fund managers, consumer credit regulatory reporting and handbook administration.
19 May
Consumer Credit: HM Treasury published a consultation paper on the first phase of its proposed widescale reforms to the Consumer Credit Act 1974.
16 May
Bank Resolution: The Bank Resolution (Recapitalisation) Act 2025 was published. The Act will amend the Financial Services and Markets Act 2000 and the Banking Act 2009 to introduce a new mechanism allowing the Bank of England to use funds provided by the banking sector to cover certain costs associated with resolution under the special resolution regime.
15 May
UK Sanctions: The UK Government published its cross‑government review of sanctions implementation and enforcement.
Artificial Intelligence: The European Parliament’s Committee on Economic and Monetary Affairs published a draft report on the impact of artificial intelligence (“AI”) on the financial sector (PE773.328v01‑00). The report provides policy recommendations to enable the use of AI in financial services and outlines concerns of regulatory overlaps / legal uncertainties, indicating potential early tensions with the proposed AI Act. The report also calls on the European Commission to ensure clarity and guidance on how existing financial services regulations apply to the use of AI in financial services.
PISCES: The Financial Services and Markets Act 2023 (Private Intermittent Securities and Capital Exchange System Sandbox) Regulations 2025 were published and laid before parliament. The regulations establish the Private Intermittent Securities and Capital Exchange System (“PISCES”) Sandbox, including providing the framework for potential PISCES operators to apply to operate intermittent trading events for participating private companies and investors.
14 May
Insurance: The FCA published a consultation paper (CP25/12) on proposals to simplify its insurance rules for insurance firms and funeral plan providers.
12 May
Investment Research: The FCA has published a policy statement (PS25/4) on investment research payment optionality for fund managers.
8 May
Solvency II: The Prudential Regulation Authority (“PRA”) has updated its webpage on Solvency II to note that it will delay the implementation of the updated mapping of external credit rating agency ratings to Credit Quality Steps (“CQSs”) for use in the UK Solvency II regime.
Small Asset Managers: The FCA has published a webpage setting out the findings from its review of business models for smaller asset managers and alternatives.
7 May
MAR and MiFID II: The European Securities and Markets Authority (“ESMA”) has published a final report containing technical advice to the European Commission on the implications of the Listing Act on the Market Abuse Regulation (596/2014) (“MAR”) and the Markets in Financial Instruments Directive (2014/65/EU) (“MIFID II”).
6 May
ESG: The European Commission has published a call for evidence about revising Regulation (EU) 2019/2088 on sustainability‑related disclosures in the financial services sector (“SFDR”). Please refer to our dedicated article on this topic here.
2 May
ESG: ESMA has published a consultation paper on regulatory technical standards under Regulation (EU) 2024/3005 on the transparency and integrity of ESG rating activities.
Cryptoassets: The FCA has published a discussion paper (DP25/1) seeking views on its future approach to regulating cryptoasset trading platforms, intermediaries, cryptoasset lending and borrowing, staking, decentralised finance, and the use of credit to buy cryptoassets.
FCA Handbook: The FCA published Handbook Notice 129, which sets out changes to the FCA Handbook made by the FCA board on 27 March 2025.
Sulaiman Malik & Michael Singh also contributed to this article. 

Managing the Managers: Governance Risks and Considerations for Employee Monitoring Platforms

In today’s hybrid and remote work environment, organizations are increasingly turning to digital employee management platforms that promise productivity insights, compliance enforcement, and even behavioral analytics. These tools—offered by a growing number of vendors—can monitor everything from application usage and website visits to keystrokes, idle time, and screen recordings. Some go further, offering video capture, geolocation tracking, AI-driven risk scoring, sentiment analysis, and predictive indicators of turnover or burnout.
While powerful, these platforms also carry real legal and operational risks if not assessed, configured, and governed carefully.
Capabilities That Go Beyond Traditional Monitoring
Modern employee management tools have expanded far beyond “punching in,” reviewing emails, and tracking websites visited. Depending on the features selected and how the platform is configured, employers may have access to:

Real-time screen capture and video recording
Automated time tracking and productivity scoring
Application and website usage monitoring
Keyword or behavior-based alerts (e.g., data exfiltration risks)
Behavioral biometrics or mouse/keyboard pattern analysis
AI-based sentiment or emotion detection
Geolocation or IP-based presence tracking
Surveys and wellness monitoring tools

Not all of these tools are deployed in every instance, and many vendors allow companies to configure what they monitor. Important questions arise, such as who at the company decides how to configure the tool, what data is collected, whether that collection is permissible, who has access, how decisions are made using the data, and what safeguards are in place to protect it. But even limited use can present privacy and employment-related risks if not governed effectively.
Legal and Compliance Risks
While employers generally have some leeway to monitor their employees on company systems, existing and emerging law (particularly concerning AI), along with best practices, employee relations, and other factors, should guide the development of monitoring guidelines. Key legal and compliance risks include:

Privacy Laws: State and international privacy laws (like the California Consumer Privacy Act, GDPR, and others) may require notice, consent, data minimization, and purpose limitation. Even in the U.S., where workplace privacy expectations are often lower, secretive or overly broad monitoring can trigger complaints or litigation.
Labor and Employment Laws: Monitoring tools that disproportionately affect certain groups or are applied inconsistently may prompt discrimination or retaliation claims. Excessive monitoring activities could trigger bargaining obligations and claims concerning protected concerted activity.
AI-Driven Features: Platforms that employ AI or automated decision-making—such as behavioral scoring or predictive analytics—may be subject to emerging AI-specific laws and guidance, such as New York City’s Local Law 144, Colorado’s AI Act, and AI regulations recently approved by the California Civil Rights Department under the Fair Employment and Housing Act (FEHA) concerning the use of automated decision-making systems.
Data Security and Retention: These platforms collect sensitive behavioral data. If poorly secured or over-retained, that data could become a liability in the event of a breach or internal misuse.

Governance Must Extend Beyond IT
Too often, these tools are procured and managed primarily, sometimes exclusively, by IT or security teams without broader organizational involvement. Given the nature of data these tools collect and analyze, as well as their potential impact on members of a workforce, a cross-functional approach is a best practice.
Involving stakeholders from HR, legal, compliance, data privacy, etc., can have significant benefits not only at the procurement and implementation stages, but also throughout the lifecycle of these tools. This includes regular reviews of feature configurations, access rights, data use, decision making, and staying abreast of emerging legal requirements.
Governance considerations should include:

Purpose Limitation and Transparency: Clear internal documentation and employee notices should explain what is being monitored, why, and how the information will be used.
Access Controls and Role-Based Permissions: Not everyone needs full access to dashboards or raw monitoring data. Access should be limited to what’s necessary and tied to a specific function (see the sketch after this list).
Training and Oversight: Employees who interact with the monitoring dashboards must understand the scope of permitted use. Misuse of the data—whether for personal curiosity, retaliation, or outside policy—should be addressed appropriately.
Data Minimization and Retention Policies: Avoid “just in case” data collection. Align retention schedules with actual business need and regulatory requirements.
Ongoing Review of Vendor Practices: Some vendors continuously add or enable new features that may shift the risk profile. Governance teams should review vendor updates and periodically reevaluate what’s enabled and why.
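
To illustrate the access-control point above, a minimal sketch of role-based filtering of monitoring data might look like the following. The roles, field names, and permission mapping are hypothetical and would need to reflect an organization’s actual functions, policies, and legal obligations.

```python
# Hypothetical role-to-field mapping: each role sees only the monitoring data it needs.
ROLE_PERMISSIONS = {
    "hr_investigator": {"app_usage", "policy_alerts"},
    "it_security": {"policy_alerts", "data_exfiltration_events"},
    "team_manager": {"aggregate_productivity_score"},
}

def filter_record(record: dict, role: str) -> dict:
    """Return only the fields the given role is permitted to view."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return {field: value for field, value in record.items() if field in allowed}

raw_record = {
    "employee_id": "E1042",
    "app_usage": {"email_hours": 3.2, "ide_hours": 4.5},
    "policy_alerts": [],
    "data_exfiltration_events": [],
    "aggregate_productivity_score": 0.82,
}
print(filter_record(raw_record, role="team_manager"))  # only the aggregate score is returned
```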

A Tool, Not a Silver Bullet
Used thoughtfully, employee management platforms can be a valuable part of a company’s compliance and productivity strategy. But they are not “set it and forget it” solutions. The insights they provide can only be trusted—and legally defensible—if there is strong governance around their use.
Organizations must manage not only their employees, but also the people and tools managing their employees. That means recognizing that tools like these sit at the intersection of privacy, ethics, security, and human resources—and must be treated accordingly.