OCC’s Hood Emphasized AI Oversight and Inclusion in Financial Services
On April 29, Acting Comptroller of the Currency Rodney Hood delivered pre-recorded remarks at the National Fair Housing Alliance’s Responsible AI Symposium. In his speech, Hood reiterated the OCC’s commitment to deploying AI responsibly within the banking sector and highlighted the agency’s broader initiatives to promote financial inclusion.
Hood outlined several key OCC initiatives focused on the responsible use of AI in banking, including:
Establishing a risk-based oversight framework. The OCC has issued guidance promoting transparency, accountability, and fairness in AI use in the financial services sector. Hood noted that AI should be governed by the same risk-based, technology-neutral principles that apply to other banking activities.
Encouraging traditional risk management practices for AI. Banks should apply established model risk management principles to AI tools due to the complex nature of AI, including its use of large data sets and intricate algorithms.
Leveraging AI for expanded access to credit. The OCC supports the use of alternative data, such as rent and cash flow information, to improve credit modeling and increase financial inclusion.
Supporting innovation through internal infrastructure. The OCC’s Office of Financial Technology continues to monitor developments in financial technology, including AI adoption and bank-fintech partnerships, and supports supervisory and policy development efforts in those areas.
Hood also discussed the role of Project REACh (Roundtable for Economic Access and Change), an OCC-led initiative that brings together banking, community, and technology stakeholders to expand affordable credit access. Project REACh has supported pilot programs that helped establish over 100,000 accounts for consumers previously unable to access credit. New workstreams under Project REACh aim to tackle homeownership barriers and explore tech-driven inclusion strategies.
Putting It Into Practice: The OCC’s ongoing efforts to promote responsible AI use underscore the federal government’s broader commitment to ensuring AI is integrated safely and equitably into the financial services sector (previously discussed here). With AI expected to play a growing role in financial services, market participants should expect continued developments in the regulation of both AI applications and the use of alternative data in credit decisioning at both the federal and state levels (previously discussed here, here, and here).
The BR Privacy & Security Download: May 2025
Welcome to this month’s issue of The BR Privacy & Security Download, the digital newsletter of Blank Rome’s Privacy, Security, & Data Protection practice.
STATE & LOCAL LAWS & REGULATIONS
State Regulators Form Bipartisan Consortium for Privacy Issues: The California Privacy Protection Agency and the Attorneys General of California, Colorado, Connecticut, Delaware, Indiana, New Jersey, and Oregon have created the Consortium of Privacy Regulators (the “Consortium”), a bipartisan group, to collaborate on various privacy issues. The seven states all have comprehensive privacy laws that are currently or will be in effect, and the Consortium will collaborate on the implementation and enforcement of their respective state laws. The Consortium will hold regular meetings not only to share expertise and resources, but also to coordinate efforts to investigate potential violations of applicable laws.
CPPA Issues Updated ADMT Proposed Rules and Opens Comment Period for Data Broker Deletion Mechanism Proposed Rules; California Governor Urges CPPA to Not Enact ADMT Proposed Rules: The California Privacy Protection Agency (“CPPA”), the regulatory authority charged with enforcing the California Consumer Privacy Act, as amended by the California Privacy Rights Act (“CCPA”), has released a revised version of its proposed regulations on cybersecurity audits, risk assessments, and automated decision-making technology (“ADMT”). Notable modifications offered by the CPPA include narrowing the definition of ADMT, removing behavioral advertising from the ADMT and risk assessment requirements, and reducing the kinds of evaluations that businesses would have to undertake when using ADMT. California’s Governor, Gavin Newsom, sent a letter to the CPPA, urging the agency not to enact the proposed regulations on ADMT, stating that the regulations “could create significant unintended consequences and impose substantial costs that threaten California’s enduring dominance in technological innovation.” In addition to the proposed ADMT regulations, the CPPA has progressed its rulemaking under the California Delete Act. The CPPA has opened the formal public comment period on its proposed regulations for the Delete Request and Opt-Out Platform. The Delete Act requires the CPPA to establish an accessible deletion mechanism to allow consumers to request the deletion of personal information from all registered data brokers through a single deletion request to the CPPA. The comment period will remain open until June 10, 2025.
Bill Introduced to Stop California CIPA Claims: The California Senate introduced S.B. 690, which aims to stop lawsuits for violations of the California Invasion of Privacy Act (“CIPA”) based on the use of cookies and other online tracking technologies. There has been a recent trend of class actions under CIPA, where plaintiffs claim that the use of cookies and tracking technologies on websites violates CIPA because such technologies facilitate wiretapping and constitute illegal pen registers or trap and trace devices. Not even businesses compliant with the CCPA that provide consumers with the ability to opt out of the sharing of personal information with providers of tracking technologies are immune from CIPA class actions. S.B. 690 would exempt online technologies used for a “commercial business purpose” from wiretapping and pen register or trap-and-trace liability. “Commercial business purpose” is defined as the processing of personal information in a manner permitted by the CCPA.
Arkansas’ Social Media Safety Act Struck Down; Arkansas Legislature Passes Amendments in Response: The U.S. District Court for the Western District of Arkansas held that Arkansas’ Social Media Safety Act (“SMSA”), a law limiting minors’ access to social media platforms, was unconstitutional and granted a permanent injunction blocking SMSA from taking effect. The District Court held that SMSA violated the First Amendment because it did not satisfy strict scrutiny. In particular, the District Court held that SMSA’s age verification requirements blocking minors’ access to social media platforms were not narrowly tailored to the State’s goal of protecting minors from online predators and other harmful content. The District Court also found that SMSA was unconstitutionally vague, as it is not clear which of NetChoice’s members are subject to SMSA’s requirements; while SMSA regulates companies like Facebook and Instagram, it specifically exempts Google, WhatsApp, and Snapchat. In response to the District Court’s ruling, the Arkansas Legislature passed a new bill, S.B. 611, to amend SMSA to broaden the scope and applicability of SMSA to include additional online platforms, narrow the age of applicability to users under 16 (rather than 18), strengthen privacy protections for minor users, and add a private right of action for parents of minor users.
Connecticut Attorney General Issues Annual Report on Connecticut Data Privacy Act Enforcement: The Connecticut Attorney General released a new report detailing the actions it has taken to enforce the Connecticut Data Privacy Act (“CTDPA”). The report provides updates on: (1) the Connecticut Attorney General’s broader privacy and data security efforts; (2) consumer complaints received under the CTDPA to date; (3) several enforcement efforts highlighted in the Connecticut Attorney General’s initial report; (4) expanded enforcement priorities; and (5) recommendations for strengthening the CTDPA’s protections. The Connecticut Attorney General appears to remain focused on enforcing the CTDPA’s transparency requirements (i.e., disclosures to be included in privacy notices) and requirements to obtain opt-in consent to process sensitive data, while also broadening its efforts to address opt-out practices and dark patterns. The Connecticut Attorney General’s priorities have further expanded as the CTDPA’s universal opt-out provisions became effective and new legislation related to minors’ privacy and consumer health data took effect.
Oregon Attorney General Reports Spike in Complaints on Use of Personal Data by Government Entities: The Oregon Department of Justice’s (“ODOJ”) Privacy Unit reported a sharp increase in complaints about the Department of Government Efficiency (“DOGE”) during the first three months of 2025. As of March 31, 2025, the Privacy Unit reports it received more than 250 complaints about DOGE. The Privacy Unit also received 47 complaints between January and March of this year relating to the Oregon Consumer Privacy Act (“OCPA”). Separately, ODOJ announced the publication of a 2025 Quarter 1 Enforcement Report, which addresses outreach and enforcement efforts of the OCPA from January 1 to March 31, 2025, and identifies broad privacy trends in Oregon. ODOJ previously issued a Six-Month Enforcement Report, which addressed enforcement efforts for the first six months of the OCPA. ODOJ plans to continue to issue these reports quarterly, with a longer report published every six months.
Ohio’s Age Verification Law Struck Down: The U.S. District Court for the Southern District of Ohio struck down Ohio’s Social Media Parental Notification Act, which required social media companies to verify user age and obtain parental consent for users under 16. NetChoice, a technology industry trade group that has challenged a number of recently enacted social media laws around the country on constitutional grounds, including Arkansas’ SMSA, alleged that the act violated the First Amendment. The District Court agreed and held that the law’s age verification requirement blocking minors’ access to social media is not narrowly tailored to protect children from the harms of social media. The District Court also held that the law’s definitions of which websites had to comply were a content-based restriction because they favored some forms of engagement with certain topics to the exclusion of others.
California Attorney General Appeals Age-Appropriate Design Code Act Decision: As previously reported, NetChoice obtained a second preliminary injunction temporarily blocking the enforcement of the California Age-Appropriate Design Code Act (“AADC”). The California Attorney General has appealed this decision, stating that it is “deeply concerned about further delay in implementing protections for children online.” The AADC would place extensive new requirements on websites and online services that are “likely to be accessed by children” under the age of 18. NetChoice won its first preliminary injunction in September 2023 on the grounds that the AADC would likely violate the First Amendment. In April 2025, NetChoice’s motion for preliminary injunction was again granted on the grounds that the AADC regulates protected speech, triggering a strict scrutiny review, and while California has a compelling interest in protecting the privacy and well-being of children, this interest alone is not sufficient to satisfy a strict scrutiny standard.
FEDERAL LAWS & REGULATIONS
DOJ Issues Data Security Program Compliance Guide and FAQ; Provides 90-Day Limited Enforcement Policy: The National Security Division of the U.S. Department of Justice (“DOJ NSD”) released a compliance guide and FAQ as part of its implementation of its final rule on protecting Americans’ sensitive data from foreign adversaries (the “Final Rule”). The compliance guide is intended to provide general information to assist individuals and entities in complying with the Final Rule’s legal requirements and to facilitate an understanding of the scope and purposes of the Final Rule. The FAQ answers 108 questions regarding Final Rule topics such as the definition of sensitive personal data, prohibited and restricted transactions, and the scope of the Final Rule’s application to certain corporate group transactions, among other topics. Concurrently, the DOJ NSD issued a limited enforcement policy through July 8, 2025. Under the limited enforcement policy, the DOJ NSD stated that it will not prioritize civil enforcement actions against any person for violations of the Data Security Program (“DSP”) established by the Final Rule that occur from April 8 through July 8, 2025, so long as the person is engaging in good faith efforts to comply with or come into compliance with the DSP during that time. NSD stated it will pursue penalties and other enforcement actions as appropriate for egregious, willful violations and is not limited in pursuing civil enforcement if good faith compliance efforts, such as reviewing data flows, conducting data inventories, renegotiating vendor agreements, transferring services to new vendors, and conducting diligence on new vendors, are not undertaken.
FTC Sends Letter to Office of U.S. Trustee Regarding 23andMe Bankruptcy: Federal Trade Commission (“FTC”) Chairman Andrew N. Ferguson issued a letter to the U.S. Trustee regarding the 23andMe bankruptcy proceeding, expressing the concerns consumers have with the potential sale or transfer of their 23andMe data. The letter emphasizes the fact that the data 23andMe collects and processes is extremely sensitive, and highlights some of the public-facing privacy and data security-related representations the company has made. Chairman Ferguson urges the U.S. Trustee to ensure that any bankruptcy-related sale or transfer involving 23andMe users’ personal information and biological samples will be subject to the representations the company has made to users about both privacy and data security.
OMB Issues Memoranda on Federal Government Purchase and Use of AI: The U.S. Office of Management and Budget (“OMB”) issued memoranda providing guidance on federal agency use of AI and purchase of AI systems. The guidance in the memoranda builds on Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence, signed by President Trump in January. The memorandum fact sheet states, “The Executive Branch is shifting to a forward-leaning, pro-innovation and pro-competition mindset rather than pursuing the risk-averse approach of the previous administration.” Notwithstanding that characterization, the guidance does share many risk management and performance tracking concepts included in Biden administration directives. The guidance describes how to manage “high-impact” AI, which is defined as AI where the output serves as a principal basis for decisions or actions that have a legal, material, binding, or significant effect on rights or safety. There are several examples of high-impact AI in the guidance, including enforcement of trade policies, safety functions for critical infrastructure, transporting chemical agents, certain law enforcement activities, and the removal of protected speech. Environmental impacts and algorithmic bias are not mentioned. However, the guidance directs agencies to use AI in a way that improves public services while maintaining strong safeguards for civil rights, civil liberties, and privacy.
States’ Attorneys General Challenge the Firing of FTC Commissioners: A coalition of 21 Attorneys General (the “Coalition”) supported two FTC Commissioners in challenging the decision by President Trump to fire them without cause. Led by the Colorado Attorney General, the Coalition filed an amicus brief in Slaughter v. Trump, emphasizing the important role the FTC has played in consumer protection and antitrust. The Coalition stated that the strong track record of the FTC is due in large part to the bipartisan structure of the FTC’s leadership and that “[a]llowing the president to have at-will removal authority would ruin the FTC’s independence by allowing the commission to become a partisan agency subject to the political whims of the president.”
NIST Releases Initial Draft of New Version of Incident Response Recommendations: The U.S. Department of Commerce National Institute of Standards and Technology (“NIST”) released the initial public draft of Special Publication 800-61 Rev. 3 (“SP 800-61”) for public comment. SP 800-61 is designed to assist organizations in incorporating cybersecurity incident response considerations throughout NIST Cybersecurity Framework 2.0 risk management activities to improve the efficiency and effectiveness of their incident detection, response, and recovery activities. The public comment period is open through May 20, 2025.
NIST Releases Initial Public Draft of Privacy Framework 1.1: NIST released a draft update to the NIST Privacy Framework (“PFW”). Updates include targeted changes to the content and structure of the NIST PFW to enable organizations to better use it in conjunction with the NIST Cybersecurity Framework, which was updated to version 2.0 in 2024 (“CSF 2.0”). The PFW’s draft update makes targeted changes to align with CSF 2.0, with a focus on the Govern Function (i.e., risk management strategy and policies) and the Protect Function (i.e., privacy and cybersecurity safeguards). The new draft also includes changes responsive to stakeholder feedback since the initial release of the PFW five years ago. The draft PFW also includes a new section on AI and privacy risk management and moves PFW use guidelines online. NIST is accepting comments on the draft through June 13, 2025.
FCC Delays Part of TCPA Rule Amendments: The Federal Communications Commission (“FCC”) announced that it was extending the effective date of one part of the amendments to the Telephone Consumer Protection Act (“TCPA”) rules the FCC released last year. The delayed amendments were initially set to become effective April 11, 2025, and relate to consumers’ revocation of consent. Amendments to 47 C.F.R. § 64.1200(a)(10) were designed to make it easier for consumers to revoke consent under the TCPA by requiring callers to apply a revocation request received for one type of message to all future calls and texts. However, in response to industry comments, the FCC extended the effective date of 47 C.F.R. § 64.1200(a)(10) until April 11, 2026, “to the extent that it requires callers to apply a request to revoke consent made in response to one type of message to all future robocalls and robotexts from that caller on unrelated matters.” The remaining portions of the amended rule went into effect on April 11, 2025.
U.S. LITIGATION
Fifth Circuit Vacates FCC Telecommunications Provider Fine: The Fifth Circuit vacated the $57 million fine imposed on AT&T by the FCC in 2024, which was one of several enforcement actions issued concurrently by the FCC against major carriers related to the sale of geolocation data to third parties. All carriers have appealed the fines. AT&T argued that the penalty should be vacated in part because the FCC imposed sanctions without proving the allegations in court, relying on the U.S. Supreme Court’s decision in Securities and Exchange Commission v. Jarkesy, in which the Supreme Court limited the use of government agency courts and held that when the Securities and Exchange Commission seeks civil penalties against a defendant for securities fraud, the Seventh Amendment entitles the defendant to a jury trial. The FCC argued that its enforcement action was rooted in Section 222 of the Telecommunications Act, which does not have roots in common law, and that, therefore, the Seventh Amendment right to a jury trial is inapplicable. However, the Fifth Circuit determined that Section 222’s requirement to use reasonable measures to protect consumer data is analogous to common law negligence. The Court stated that it was not denying the FCC’s right to enforce laws to protect customer data, but that the FCC must do so consistent with the constitutional guarantee of a jury trial.
Illinois Federal Judge Reverses Prior Ruling on Retroactive Application of BIPA Amendments: In two cases before U.S. District Court Judge Elaine Bucklo, Judge Bucklo vacated her prior rulings that amendments to Illinois’ Biometric Information Privacy Act (“BIPA”) applied retroactively, stating that upon reexamination of the issue she concluded that the “better interpretation of the amendment is that it changed the law” rather than clarified the legislature’s initial intent when it first passed BIPA. The Illinois Legislature amended BIPA in 2024 to provide that a company that collects a person’s biometric information multiple times in the same manner has committed only one violation of the law. Previously, the Illinois Supreme Court had held that each instance of collection constituted a violation supporting a claim for damages, resulting in potentially extreme liability for companies using biometric systems for business purposes such as timekeeping, where employees might clock in and out by scanning biometric identifiers multiple times per day. Judge Bucklo’s new ruling aligns with those of two other Illinois federal district courts. The plaintiffs will now be permitted to pursue their claims under the statute as it existed at the time of the alleged violations.
Pennsylvania District Court Holds Online Privacy Terms Sufficient for Implied Consent Under State Wiretapping Law: The U.S. District Court for the Western District of Pennsylvania held that disclosure of third-party data collection in an online privacy statement that would be seen by a reasonably prudent person is sufficient to establish implied consent to that data collection and sharing. Pennsylvania’s wiretapping statute prohibits any person from intercepting a wire, electronic, or oral communication unless all parties have consented to the interception. The website in question, operated by Harriet Carter Gifts, disclosed that the business tracked and shared website visitors’ activity with third parties. The privacy statement was available via a link at the bottom of each page of the website. According to the Court, the description of sharing data with third parties in the privacy statement, combined with the reasonable availability of the privacy statement, provided the plaintiff with constructive notice of the practice of sharing data with third parties and resulted in the plaintiff providing implied consent to such sharing, despite the fact that the plaintiff testified she had never read the privacy statement.
Sixth Circuit Holds Newsletter Subscribers Are Not Consumers Under VPPA: The Sixth Circuit affirmed the dismissal of a proposed class action brought by a plaintiff who had subscribed to a digital newsletter from Paramount Global’s 247Sports. The plaintiff alleged that the subscription qualified him as a “consumer” under the Video Privacy Protection Act (“VPPA”) because the newsletter contains links to video content, making the newsletter “audiovisual materials” subject to the VPPA. The Court rejected this argument, stating that the complaint suggests that the linked video content was available to anyone with or without a newsletter subscription and that the plaintiff did not plausibly allege that the newsletter itself was “audiovisual material.” The Court noted that its reading of the VPPA differed from the Second and Seventh Circuits, which have held that the term “consumer” under the statute should encompass any purchaser or subscriber of goods or services, whether audiovisual or not. U.S. Circuit Judge Rachel S. Bloomekatz dissented, stating that the plaintiff is a “consumer” under the VPPA because he is a subscriber of Paramount, which is a “videotape service provider.”
Ninth Circuit Rules VPPA Not Applicable to Movie Theaters: The Ninth Circuit affirmed a District Court’s dismissal of an action against Landmark Theaters (“Landmark”), holding that the Video Privacy Protection Act (“VPPA”) does not apply to in-theater movie businesses. The plaintiff had purchased a ticket on Landmark’s website. As part of that purchase, the plaintiff alleged that Landmark shared the name of the film, the location of the showing, and the plaintiff’s unique Facebook identification number with Facebook. The VPPA prohibits “video tape service providers” from knowingly disclosing personally identifiable information of a consumer without consent. “Video tape service provider” is defined under the VPPA as “any person, engaged in the business . . . of rental, sale, or delivery of prerecorded video cassette tapes or similar audiovisual materials.” The Court held that the plain language of the statute and the law’s statutory history did not support a finding that selling tickets to an in-theater movie-going experience is a business subject to the VPPA.
U.S. ENFORCEMENT
Defense Contractor Settles FCA Allegations Related to Cybersecurity Compliance: The U.S. Department of Justice (“DOJ”) announced a settlement with defense contractor Morsecorp Inc. (“Morse”) resolving allegations that Morse violated the False Claims Act (“FCA”) by failing to comply with cybersecurity requirements in its contracts with the Army and Air Force. The DOJ alleged that Morse failed to comply with contract requirements by, among other things: using a third party to host Morse emails without requiring or ensuring that the third party met the Federal Risk and Authorization Management Program Moderate baseline and complied with the Department of Defense’s cybersecurity requirements; failing to implement all cybersecurity controls in NIST Special Publication 800-171 (“SP 800-171”); failing to have a consolidated written plan for each of its covered information systems describing system boundaries, system environments of operation, how security requirements are implemented, and the relationships with or connections to other systems; and failing to update its self-reported score for implementation of the requisite NIST controls following receipt of an updated score from a third-party assessor. Morse has agreed to pay $4.6 million to resolve the allegations.
New York Attorney General Fines Auto Insurance Company over Data Breach: The Office of New York Attorney General Letitia James announced that it had fined auto insurance company Root $975,000 for failing to protect personal information following a breach that affected 45,000 New York residents. Root allows consumers to obtain a price quote for insurance through its website. After a consumer entered limited personal information, the online quote tool prepopulated other personal information, such as driver’s license numbers. The Attorney General alleges that Root exposed driver’s license numbers in plaintext in a PDF generated at the end of the quote process, failed to perform adequate risk assessments on its public-facing web applications, did not identify the plaintext exposure of consumer personal information, and employed insufficient controls to thwart automated attacks. In addition to the fine, the settlement requires Root to enhance its data security controls by maintaining a comprehensive information security program that uses reasonable authentication procedures for access to private information and the maintenance of logging and monitoring systems, among other things.
New Jersey Attorney General Sues Messaging App for Failing to Protect Kids: New Jersey Attorney General Matthew J. Platkin and the Division of Consumer Affairs announced that they had filed a lawsuit against messaging app provider Discord, Inc. (“Discord”), alleging that Discord engaged in “deceptive and unconscionable business practices that misled parents about the efficacy of its safety controls and obscured the risks children faced when using the application.” According to the complaint, Discord violated the New Jersey Consumer Fraud Act by misleading parents and kids about its safety settings for direct messages. For example, Discord allegedly represented that certain user settings related to its safe direct messaging feature would cause the app to scan, detect, and delete direct messages containing explicit media content. According to the Attorney General, Discord knew that not all explicit content was being detected or deleted. The complaint also alleges that Discord misrepresented its policy of not permitting users under the age of 13 because of its inadequate age verification processes.
HHS Enters Settlement with Healthcare Network over Phishing Attack that Exposed PHI: The U.S. Department of Health and Human Services (“HHS”), Office for Civil Rights (“OCR”) announced a settlement with PIH Health, Inc. (“PIH Health”), a California healthcare network, relating to alleged violations of the Health Insurance Portability and Accountability Act (“HIPAA”) arising from a phishing attack that exposed protected health information. The phishing attack compromised 45 PIH Health employee email accounts, which resulted in the breach of 189,763 individuals’ protected health information, including names, addresses, dates of birth, driver’s license numbers, Social Security numbers, diagnoses, lab results, medications, treatment and claims information, and financial information. OCR alleges that PIH Health failed to conduct an accurate and thorough risk analysis of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of ePHI held by PIH Health, and failed to provide timely notification of the breach. Under the terms of the settlement, PIH Health will implement a corrective action plan that will be monitored by OCR for two years and pay a $600,000 fine.
INTERNATIONAL LAWS & REGULATIONS
Cyberspace Administration of China Publishes Q&A on Cross-Border Data Transfers: The Cyberspace Administration of China (“CAC”) published a Q&A on cross-border data transfer policies and requirements for organizations. The Q&A is intended to provide guidance on government administrative policies. China’s regulations on cross-border data transfer require one of three mechanisms to be used if personal data or important data is transferred: a regulator-led security assessment, standard contractual clauses, or certification. The Q&A lists several common types of low-risk data transfers that are not required to comply with one of the transfer mechanisms, including transfers related to international trade, cross-border transportation, academic collaborations, and cross-border manufacturing and sales that involve no important data or personal information, as well as transfers by noncritical information infrastructure operators of nonsensitive personal information relating to fewer than 100,000 individuals cumulatively since January 1 of the current year. The Q&A also provides additional detail on assessing the necessity of personal data transfers and describes administrative processes available for obtaining clearance for data transfers on a company group basis, among other things.
ICO Releases Anonymization Guidance: The United Kingdom Information Commissioner’s Office (“ICO”) released new guidance on anonymizing personal data to assist organizations in identifying issues that should be considered to use anonymization techniques effectively. The guidance explains what is meant by anonymization and pseudonymization and how such techniques affect data protection obligations, provides advice on good practices for anonymizing personal data, and discusses technical and organizational measures to mitigate risks to individuals when organizations anonymize data. Among other things, the guidance explains that anonymization is about reducing the likelihood of a person being identified or identifiable to a sufficiently remote level, and that organizations should undertake identifiability risk assessments to determine the likelihood of identification when undertaking anonymization efforts, among other recommended accountability and governance measures. The guidance also includes case studies to assist users in understanding the guidance concepts.
Office of the Privacy Commissioner of Canada Releases Guidance on Risk Assessment in Data Breach; Canada Announces First Phase of Cybersecurity Certification Program: The Office of the Privacy Commissioner of Canada (“Privacy Commissioner”) released an online tool to assist organizations in conducting a breach risk self-assessment. The tool guides users through a series of questions about the details of the breach to assess whether the circumstances create a real risk of significant harm and whether the breach must be reported. Separately, the Government of Canada announced the first phase in the implementation of the Canadian Program for Cyber Security Certification (“CPCSC”). The CPCSC will establish a cybersecurity standard for companies that handle sensitive unclassified government information in defense contracting. The Canadian government stated that the CPCSC will be released in phases, with the first phase involving the release of a new Canadian industrial cybersecurity standard, opening the accreditation process, and introducing a self-assessment tool for level 1 certification to help businesses better understand the program before a wider rollout in successive phases.
NOYB Files Complaint Against ChatGPT over Defamatory Hallucinations: Privacy advocacy organization NOYB has filed a complaint against OpenAI stemming from false information about an individual provided by ChatGPT in response to a query. Specifically, the complaint alleges that when Norwegian user Arve Hjalmar Holmen queried ChatGPT to determine whether it had any information about him, ChatGPT presented the complainant as a convicted criminal who murdered two of his children and attempted to murder his third son. NOYB further alleges that the fake story included real elements of his personal life, including the actual number and genders of his children and the name of his hometown. The NOYB complaint alleges that the output is not an isolated incident and violates the EU General Data Protection Regulation, including Article 5(1)(d), which requires organizations to ensure the personal data they produce about individuals is accurate.
ICO Fines Company for Lax Cybersecurity Following Ransomware Attack: The ICO announced it has fined Advanced Computer Software Group Ltd. (“Advanced”) £3.07 million for cybersecurity failures relating to a ransomware incident in August 2022. Advanced provides information technology services to businesses, including in the healthcare industry. Hackers gained access to Advanced’s systems via a customer account that did not have multi-factor authentication, leading to the disruption of UK National Health Service (“NHS”) operations. The personal information of 79,404 people was exfiltrated in the attack, including details of how to enter the homes of 809 individuals receiving home care. The ICO investigation concluded that Advanced did not have appropriate technical and organizational measures in place to protect personal data prior to the incident. The ICO noted that it reduced the initially proposed fine due to Advanced’s proactive engagement with law enforcement and the NHS, as well as other steps taken by Advanced to mitigate the risk to impacted individuals.
Daniel R. Saeedi, Rachel L. Schaller, Gabrielle N. Ganze, Ana Tagvoryan, P. Gavin Eastgate, Timothy W. Dickens, Jason C. Hirsch, Adam J. Landy, Amanda M. Noonan, and Karen H. Shin also contributed to this article.
HIPAA Compliance for AI in Digital Health: What Privacy Officers Need to Know
Artificial intelligence (AI) is rapidly reshaping the digital health sector, driving advances in patient engagement, diagnostics, and operational efficiency. However, for Privacy Officers, AI’s integration into digital health platforms raises critical concerns around compliance with the Health Insurance Portability and Accountability Act and its implementing regulations (HIPAA). As AI tools process vast amounts of protected health information (PHI), digital health companies must carefully navigate privacy, security, and regulatory obligations.
The HIPAA Framework and Digital Health AI
HIPAA sets national standards for safeguarding PHI. Digital health platforms—whether offering AI-driven telehealth, remote monitoring, or patient portals—are often HIPAA covered entities, business associates, or both. Accordingly, AI systems that process PHI must be able to do so in compliance with the HIPAA Privacy Rule and Security Rule, making it vital for Privacy Officers to understand:
Permissible Purposes: AI tools can only access, use, and disclose PHI as permitted by HIPAA. The introduction of AI does not change the traditional HIPAA rules on permissible uses and disclosures of PHI.
Minimum Necessary Standard: AI tools must be designed to access and use only the PHI strictly necessary for their purpose, even though AI models often seek comprehensive datasets to optimize performance.
De-identification: AI models frequently rely on de-identified data, but digital health companies must ensure that de-identification meets HIPAA’s Safe Harbor or Expert Determination standards—and guard against re-identification risks when datasets are combined (see the illustrative sketch after this list).
BAAs with AI Vendors: Any AI vendor processing PHI must be under a robust Business Associate Agreement (BAA) that outlines permissible data use and safeguards—such contractual terms will be key to digital health partnerships.
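To make the de-identification point concrete, the following is a minimal, hypothetical Python sketch of Safe Harbor-style identifier stripping applied to a record before it reaches an AI pipeline. The field names and the subset of identifiers shown are illustrative assumptions only; an actual program would need to address all eighteen Safe Harbor identifier categories (or rely on Expert Determination) and account for re-identification risk when datasets are combined.

```python
# Minimal, hypothetical sketch of Safe Harbor-style identifier stripping before
# a record reaches an AI pipeline. Field names and the identifier subset are
# illustrative assumptions; a real program must address all 18 Safe Harbor
# identifier categories (or use Expert Determination) and guard against
# re-identification when datasets are combined.

# Illustrative subset of direct identifiers to drop.
DIRECT_IDENTIFIERS = {
    "name", "street_address", "email", "phone", "ssn",
    "mrn", "health_plan_id", "device_id", "ip_address", "photo",
}

def strip_direct_identifiers(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed and
    dates/ZIP codes generalized, in a Safe Harbor-style approach."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "date_of_birth" in cleaned:
        # Safe Harbor generally permits retaining only the year of birth
        # (with additional restrictions for ages over 89).
        cleaned["birth_year"] = str(cleaned.pop("date_of_birth"))[:4]
    if "zip_code" in cleaned:
        # Keep only the first three ZIP digits (further restricted for
        # low-population ZIP3 areas under Safe Harbor).
        cleaned["zip3"] = str(cleaned.pop("zip_code"))[:3]
    return cleaned
```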
AI Privacy Challenges in Digital Health
AI’s transformative capabilities introduce specific risks:
Generative AI Risks: Tools like chatbots or virtual assistants may collect PHI in ways that raise unauthorized disclosure concerns, especially if the tools were not designed to safeguard PHI in compliance with HIPAA.
Black Box Models: Digital health AI often lacks transparency, complicating audits and making it difficult for Privacy Officers to validate how PHI is used.
Bias and Health Equity: AI may perpetuate existing biases in health care data, leading to inequitable care—a growing compliance focus for regulators.
Actionable Best Practices
To stay compliant, Privacy Officers should:
Conduct AI-Specific Risk Analyses: Tailor risk analyses to address AI’s dynamic data flows, training processes, and access points.
Enhance Vendor Oversight: Regularly audit AI vendors for HIPAA compliance and consider including AI-specific clauses in BAAs where appropriate.
Build Transparency: Push for explainability in AI outputs and maintain detailed records of data handling and AI logic.
Train Staff: Educate teams on which AI models may be used in the organization, as well as the privacy implications of AI, especially around generative tools and patient-facing technologies.
Monitor Regulatory Trends: Track OCR guidance, FTC actions, and rapidly evolving state privacy laws relevant to AI in digital health.
Looking Ahead
As digital health innovation accelerates, regulators are signaling greater scrutiny of AI’s role in health care privacy. While HIPAA’s core rules remain unchanged, Privacy Officers should expect new guidance and evolving enforcement priorities. Proactively embedding privacy by design into AI solutions—and fostering a culture of continuous compliance—will position digital health companies to innovate responsibly while maintaining patient trust.
AI is a powerful enabler in digital health, but it amplifies privacy challenges. By aligning AI practices with HIPAA, conducting vigilant oversight, and anticipating regulatory developments, Privacy Officers can safeguard sensitive information and promote compliance and innovation in the next era of digital health. Health care data privacy continues to rapidly evolve, and thus HIPAA-regulated entities should closely monitor any new developments and continue to take necessary steps towards compliance.
AI Circuit Breakers in Legal Contracts: A Safeguard for Business
As artificial intelligence becomes increasingly integrated into business operations, IT contracts covering the provision of AI systems are evolving to include critical safeguards. One emerging concept is the AI circuit breaker, a contractual mechanism that provides for an intervention, or override, where an AI system exhibits undesirable or harmful behavior.
When contracting for AI, businesses should look to proactively include these safeguards in their contracts to mitigate against the risks of AI-driven processes causing unintended harm.
What Is an AI Circuit Breaker?
Borrowing from engineering, an AI circuit breaker triggers a pause or override when an AI system acts unpredictably, exceeds acceptable risk levels, or falls below a minimum performance threshold. This ensures that businesses remain in control of automated processes, mitigating against unintended consequences.
AI circuit breakers take multiple forms, including:
Automated Intervention: a circuit breaker built into the AI system itself that, without requiring human intervention, detects issues and triggers an override or other intervention in certain predefined circumstances (a simplified sketch follows this list).
Human Intervention: a contractual right for a party (whether the provider, customer or both) to intervene and take certain actions, such as interrupting or stopping the AI system, in certain predefined circumstances.
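As a purely illustrative example of the automated form, the Python sketch below wraps calls to an AI system and trips once the recent error rate crosses a threshold. The threshold, window size, and error check are hypothetical stand-ins for whatever trigger conditions the parties actually agree; nothing here reflects a specific product or contract.

```python
# Purely illustrative sketch of an automated AI circuit breaker: a wrapper that
# monitors an AI system's recent outputs and trips (halts further calls) once
# an error-rate threshold is exceeded. Threshold, window size, and error check
# are hypothetical stand-ins for contractually defined trigger conditions.
from collections import deque

class CircuitBreakerTripped(RuntimeError):
    """Raised once the breaker has tripped; callers must fall back or escalate."""

class AICircuitBreaker:
    def __init__(self, model, is_error, max_error_rate=0.2, window=50):
        self.model = model              # callable: prompt -> output
        self.is_error = is_error        # callable: output -> bool (trigger check)
        self.max_error_rate = max_error_rate
        self.results = deque(maxlen=window)
        self.tripped = False

    def __call__(self, prompt):
        if self.tripped:
            raise CircuitBreakerTripped("AI system suspended pending human review")
        output = self.model(prompt)
        self.results.append(self.is_error(output))
        if sum(self.results) / len(self.results) > self.max_error_rate:
            self.tripped = True         # human intervention required to reset
        return output
```

Once tripped, subsequent calls fail fast until a human resets the breaker, mirroring the contractual notions of suspension pending remediation and agreed restart conditions.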
Why Are AI Circuit Breakers Necessary?
Circuit breakers can benefit both providers and customers as they seek to mitigate the risks associated with the deployment of AI systems, including:
to ensure regulatory compliance;
to mitigate the risk of inaccuracies and AI “hallucinations”;
to detect and address inequality and bias;
to identify and mitigate the risk of security breaches; and
to identify potentially infringing output.
Particular benefits of circuit breakers include retaining control and human oversight over AI systems and providing contractual certainty; traditional contractual rights to suspend and terminate services are unlikely to offer sufficient clarity regarding the rights and obligations of each party if an AI system begins to exhibit undesirable or harmful behaviour.
Drafting and Negotiating AI Circuit Breakers
Key considerations when drafting and negotiating circuit breakers include the following (a schematic example of how these terms might be captured appears after the list):
Trigger Conditions: defining specific scenarios where the circuit breaker activates such as if the AI system produces an unacceptable error rate, displays clear signs of bias, infringes third party rights or breaches applicable laws.
Consequences of Trigger Activation: to what extent will a party have the ability to interrupt, suspend and/or potentially terminate depending on the nature of the event giving rise to the trigger?
Remediation: if the AI system is interrupted, the parties will need to address the actions to be undertaken (and costs of doing so), including responsibility for determining the cause of the failure, whether the AI system should be rolled back and how the issues will be resolved, including through redevelopment and retraining.
Payment: impact on any related payment commitments, including payment commitments whilst the AI system is subject to any suspension or is rolled back.
Restart: conditions for lifting any suspension, including completion of audits and testing.
Liability Allocation: responsibility for AI-related errors and liability when the circuit breaker is triggered.
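Purely as a hypothetical illustration, the sketch below shows one way the negotiated terms above might be recorded in a structured, reviewable form, mapping example trigger conditions to consequences, remediation ownership, payment treatment, and restart conditions. Every name, threshold, and category is an assumption made for illustration, not a term of any actual contract.

```python
# Hypothetical, schematic record of negotiated circuit-breaker terms: each
# trigger condition is mapped to a consequence, a remediation owner, payment
# treatment, and a restart condition. All names and categories are
# illustrative assumptions, not terms of any actual contract.
from dataclasses import dataclass
from enum import Enum, auto

class Consequence(Enum):
    INTERRUPT = auto()   # pause the affected function only
    SUSPEND = auto()     # suspend the AI system pending remediation
    TERMINATE = auto()   # right to terminate the affected services

@dataclass
class TriggerClause:
    description: str          # the contractually defined trigger condition
    consequence: Consequence  # what the triggering party may do
    remediation_owner: str    # who investigates and fixes (and bears the cost)
    fees_paused: bool         # whether related payment obligations pause
    restart_condition: str    # what must happen before any suspension lifts

EXAMPLE_CLAUSES = [
    TriggerClause("Error rate above the agreed threshold over a rolling window",
                  Consequence.SUSPEND, "provider", True,
                  "Successful re-test against agreed acceptance criteria"),
    TriggerClause("Output exhibits clear signs of unlawful bias",
                  Consequence.SUSPEND, "provider", True,
                  "Independent audit and retraining sign-off"),
    TriggerClause("Output infringes third-party rights or breaches applicable law",
                  Consequence.TERMINATE, "provider", True,
                  "Not applicable where the termination right is exercised"),
]
```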
Summary
The very nature of AI is that it continually ‘learns’ and evolves, often in an opaque manner, meaning that providers and deployers of AI systems may not fully understand the power and capability of the AI technology at the outset of any deployment.
AI circuit breakers can provide an important safety net in respect of AI system deployments. As AI continues to shape the business and legal landscape, incorporating these safeguards can help providers and deployers of AI systems mitigate AI-driven risks through implementing appropriate guardrails, maintaining oversight and accountability and clearly defining responsibilities and rights in the event of undesirable or harmful behaviour.
The European Commission’s Guidance on Prohibited AI Practices: Unraveling the AI Act
The European Commission published its long-awaited Guidelines on Prohibited AI Practices (CGPAIP) on February 4, 2025, two days after the AI Act’s articles on prohibited practices became applicable.
The good news is that in clarifying these prohibited practices (and those excluded from its material scope), the CGPAIP also addresses other, more general aspects of the AI Act, providing much-needed legal certainty to all authorities, providers and deployers of AI systems/models in navigating the regulation.
It refines the scope of general concepts (such as “placing on the market”, “putting into service”, “provider” or “deployer”) and of the exclusions from the scope of the AI Act, provides definitions of other terms not expressly defined in the AI Act (such as “use”, “national security”, “purposely manipulative techniques” or “deceptive techniques”), and takes a position on the allocation of responsibilities between providers and deployers using a proportionate approach (establishing that these responsibilities should be assumed by whoever is best positioned in the value chain).
It also comments on the interplay of the AI Act with other EU laws, explaining that while the AI Act applies as lex specialis to other primary or secondary EU laws with respect to the regulation of AI systems, such as the General Data Protection Regulation (GDPR) or EU consumer protection and safety legislation, it is still possible that practices permitted under the AI Act are prohibited under those other laws. In other words, it confirms that the AI Act and these other EU laws complement each other.
However, this complementarity is likely to pose the greatest challenges to both providers and deployers of AI systems. For example, while the European Data Protection Board (EDPB) has already clarified in its Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models (adopted in December 2024) that the “intended” purposes of AI models at the deployment stage must be taken into account when assessing whether the processing of personal data for the training of said AI models can be based on the legitimate interest of the providers and/or future deployers, the European Commission clarifies in Section 2.5.3 of the CGPAIP that the AI Act does not apply to research, testing (except in the real world) or development activities related to AI systems or AI models before they are placed on the market or put into service (i.e., during the training stage). Similarly, the CGPAIP provides some examples of exclusions from prohibited practices (i.e., permitted practices) that are unlikely to find a lawful basis in the legitimate interests of providers and/or future users of the AI system.
The prohibited practices:
Subliminal, purposefully manipulative or deceptive techniques (Article 5(1)(a) and Article 5(1)(b) AI Act)
This prohibited practice refers to subliminal, purposefully manipulative or deceptive techniques that are significantly harmful and materially influence the behavior of natural persons or group(s) of persons, or exploit vulnerabilities due to age, disability or a specific socio-economic situation.
The European Commission provides examples of subliminal techniques (visual and auditory subliminal messages, subvisual and subaudible cueing, embedded images, misdirection and temporal manipulation), and explains that the rapid development of related technologies, such as brain-computer interfaces or virtual reality, increases the risk of sophisticated subliminal manipulation.
When referring to purposefully manipulative techniques (to exploit cognitive biases, psychological vulnerabilities or other factors that make individuals or groups of individuals susceptible to influence), it clarifies that for the practice to be prohibited, either the provider or the deployer of the AI system must intend to cause significant (physical, psychological or financial/economic) harm. While this is consistent with the cumulative nature of the elements contained in Article 5(1)(a) of the AI Act for the practice to be prohibited, it could be read as an indication that manipulation of an individual (beyond consciousness) where it is not intended to cause harm (for example, for the benefit of the end user or to be able to offer a better service) is permitted. The CGPAIP refers here to the concept of “lawful persuasion”, which operates within the bounds of transparency and respect for individual autonomy.
With respect to deceptive techniques, it explains that the obligation of the provider to label “deep fakes” and certain AI-generated text publications on matters of public interest, or the obligation of the provider to design the AI system in a way that allows individuals to understand that they are interacting with an AI system (Article 50(4) AI Act) are in addition to this prohibited practice, which has a much more limited scope.
In connection with the interplay of this prohibition with other regulations, in particular the Digital Services Act (DSA), the European Commission recognizes that dark patterns are an example of a manipulative or deceptive technique when they are likely to cause significant harm.
It also provides that there should be a plausible/reasonably likely causal link between the potential material distortion of the behavior (significant reduction in the ability to make informed and autonomous decisions) and the subliminal, purposefully manipulative or deceptive technique deployed by the AI system.
Social scoring (Article 5(1)(c) AI Act)
The CGPAIP defines social scoring as the evaluation or classification of individuals based on their social behavior, or personal or personality characteristics, over a certain period of time, clarifying that a simple classification of people on that basis would trigger this prohibition and that the concept of evaluation includes “profiling” (in particular, analyzing and/or making predictions about interests or behaviors) that leads to detrimental or unfavorable treatment in unrelated social contexts and/or unjustified or disproportionate treatment.
Concerning the requirement that it leads to detrimental or unfavorable treatment, it is established that such harm may be caused by the system in combination with other human assessments, but that at the same time, the AI system must play a relevant role in the assessment. It also provides that the practice is prohibited even if the detrimental or unfavorable treatment is produced by an organization different from the one that uses the score.
The European Commission states, however, that AI systems can lawfully generate social scores if they are used for a specific purpose within the original context of the data collection and provided that any negative consequences from the score are justified and proportionate to the severity of the social behavior.
Individual Risk Assessment and Prediction of Criminal Offences (Article 5(1)(d) AI Act)
When interpreting this prohibited practice, the European Commission explains that crime prediction and risk assessment practices are not outlawed as such; the prohibition applies only when the prediction that a natural person will commit a crime is made solely on the basis of profiling of that individual or on an assessment of their personality traits and characteristics. To avoid circumvention of the prohibition and ensure its effectiveness, any other elements taken into account in the risk assessment must be real, substantial and meaningful in order to justify the conclusion that the prohibition does not apply (therefore excluding from the prohibition AI systems that support a human assessment based on objective and verifiable facts directly linked to a criminal activity, in particular where there is human intervention).
Untargeted Scraping of Facial Images (Article 5(1)(e) AI Act)
The European Commission clarifies that the purpose of this prohibited practice is the creation or enhancement of facial recognition databases (a temporary, centralized or decentralized database that allows a human face from a digital image or video frame to be matched against a database of faces) using images obtained from the Internet or CCTV footage, and that it does not apply to any scraping AI tool that could be used to create or enhance a facial recognition database, but only to untargeted scraping tools.
The prohibition does not apply to the untargeted scraping of biometric data other than facial images, nor to databases that are not used for the recognition of persons (for example, databases used to generate images of fictitious persons). The CGPAIP also clarifies that the use of databases created prior to the entry into force of the AI Act, and not further expanded through AI-enabled untargeted scraping, must comply with applicable EU data protection rules.
Emotion Recognition (Article 5(1)(f) AI Act)
This prohibition concerns AI systems that aim to infer the emotions (interpreted in a broad sense) of natural persons based on their biometric data in the context of the workplace or educational and training institutions, except for medical or security reasons. Emotion recognition systems that do not fall under this prohibition are considered high-risk systems, and deployers will have to inform the natural persons exposed to them of the operation of the system as required by Article 50(3) of the AI Act.
The European Commission refers here to certain clarifications contained in the AI Act regarding the scope of the concept of emotion or intention, which does not include, for example, physical states such as pain or fatigue, nor readily apparent expressions, gestures or movements unless they are used to identify or infer emotions or intentions. Therefore, a number of AI systems used for safety reasons would already not fall under this prohibition.
Similarly, the notions of workplace, educational and training establishments must be interpreted broadly. There is also room for member states to introduce regulations that are more favorable to workers with regard to the use of AI systems by employers.
It also clarifies that authorized therapeutic uses include the use of CE marked medical devices and that the notion of safety is limited to the protection of life and health and not to other interests such as property.
Biometric Categorization for certain “Sensitive” Characteristics (Article 5(1)(g) AI Act)
This prohibition covers biometric categorization systems (except where purely ancillary to another commercial service and strictly necessary for objective technical reasons) that individually categorize natural persons on the basis of their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
The European Commission clarifies that this prohibition, however, does not cover the labelling or filtering of lawfully acquired biometric datasets (such as images), including for law enforcement purposes (for instance, to guarantee that data equally represents all demographic groups).
Real-time Remote Biometric Identification (RBI) Systems for Law Enforcement Purposes (Article 5(1)(h) AI Act)
The European Commission devotes a substantial part of the CGPAIP to this prohibited practice, which refers to the use of real-time RBI systems in publicly accessible areas for law enforcement purposes. Exceptions, based on the public interest, are to be determined by the member states through local legislation.
The CGPAIP concludes with a final section on safeguards and conditions for the application of the exemptions to the prohibited practices, including the conduct of Fundamental Rights Impact Assessments (FRIAs). FRIAs are assessments aimed at identifying the impact that certain high-risk AI systems, including RBI systems, may have on fundamental rights. The CGPAIP clarifies that FRIAs do not replace the existing Data Protection Impact Assessments (DPIAs) that data controllers (i.e., those responsible for processing personal data) must conduct; FRIAs have a broader scope (covering not only the fundamental right to data protection but also all other fundamental rights of individuals) and complement, inter alia, the required DPIA, the registration of the system and the need for prior authorization.
Regulation Round Up: April 2025
Welcome to the UK Regulation Round Up, a regular bulletin highlighting the latest developments in UK and EU financial services regulation.
Key Developments in April 2025:
29 April
ESG: The FCA updated its webpage on its consultation paper on extending the sustainability disclosure requirements (“SDR”) and investment labelling regime to portfolio managers (CP24/8). In the light of feedback received, it has decided that it is not the right time to finalise rules on extending the regime to portfolio managers.
Cryptoassets: The European Securities and Markets Authority (“ESMA”) published its final report on guidelines relating to supervisory practices for national competent authorities to prevent and detect market abuse under Article 92(2) of the Regulation on markets in cryptoassets ((EU) 2023/1114).
24 April
Regulatory Capital: The FCA has published a consultation paper on the definition of capital for investment firms (CP25/10). CP25/10 outlines the FCA’s proposals to simplify and consolidate the rules as they relate to regulatory capital.
17 April
Listing Regime: The FCA has published Primary Market Bulletin No 55, which addresses proposed changes to the listing regime.
16 April
ESG: The Omnibus I Directive (EU) 2025/794 amending Directives (EU) 2022/2464 and (EU) 2024/1760 as regards the dates from which member states are to apply certain corporate sustainability reporting and due diligence requirements was published in the Official Journal of the European Union. Please refer to our dedicated article on this topic here.
15 April
AIFMD 2.0: ESMA has published final reports setting out the final guidelines and draft technical standards on liquidity management tools under the Directive amending the Alternative Investment Fund Managers Directive (2011/61/EU) (“AIFMD”) and the UCITS Directive (2009/65/EC) ((EU) 2024/927) (“AIFMD 2.0”). Please refer to our dedicated article on this topic here.
14 April
Regulatory Initiatives Grid: The Forum members (the Bank of England, the Prudential Regulation Authority, the FCA, the Payment Systems Regulator, the Competition and Markets Authority, the Information Commissioner’s Office, the Pensions Regulator, and the Financial Reporting Council) published a joint regulatory initiatives grid relevant to the financial services sector.
11 April
FCA Regulatory Perimeter: HM Treasury has published a policy paper containing a record of the meeting between the Economic Secretary to the Treasury and the Chief Executive of the FCA. The purpose of the meeting was to discuss the FCA’s perimeter and the issues set out in its December 2024 report.
Trading Platforms: The FCA has published a webpage summarising the findings from its multi-firm review of trading apps (more commonly known as “neo-brokers”).
10 April
ESG: ESMA has published a risk analysis report on the increased incorporation of ESG terms into fund names and its impact on investment flows.
MiFID II: ESMA has published a final report on regulatory technical standards supplementing the Markets in Financial Instruments Directive (2014/65/EU) (“MiFID II”) to specify the criteria for establishing and assessing the effectiveness of investment firms’ order execution policies.
9 April
Artificial Intelligence: The Financial Policy Committee published a report on artificial intelligence in the financial system.
ESG: ESMA has published a final report setting out its analysis and conclusions on a common supervisory action exercise conducted with the national competent authorities on ESG disclosures under the Benchmarks Regulation ((EU) 2016/1011).
8 April
MiFID II: ESMA has published a final report containing technical advice to the European Commission on amendments to the research provisions in the MiFID II Directive in the context of changes introduced by the Listing Act (ESMA35-335435667-6290).
FCA Strategy: The FCA has published its annual work programme for 2025/26, which sets out how it will deliver its four strategic priorities (to be a smarter regulator that is more efficient and effective, supporting growth, helping consumers navigate their financial lives, and fighting financial crime).
7 April
UK AIFMD: The FCA has published a call for input and HM Treasury has published a consultation on the future regulation of alternative investment fund managers. Please refer to our dedicated article on this topic here.
3 April
ESG: The European Parliament plenary session formally adopted at first reading the stop-the-clock proposal for a Directive amending Directives (EU) 2022/2464 and (EU) 2024/1760 with respect to the dates from which member states are to apply certain corporate sustainability reporting and due diligence requirements.
ESG: The FCA has published a summary of the feedback it has received in relation to its discussion paper on finance for positive sustainable change (DP23/1) together with its response and next steps. Please refer to our dedicated article on this topic here.
1 April
Short Selling: The FCA published an updated version of its webpage on the notification and disclosure of net short positions.
Motor Finance: The FCA has published its written submissions to the Supreme Court in the appeal of the Court of Appeal decision in Johnson v FirstRand Bank Ltd (London Branch) t/a Motonovo Finance [2024] EWCA Civ 1282.
ESG: The EU Platform on Sustainable Finance has published a report on its first review of the Taxonomy Climate Delegated Act and the development of technical screening criteria for a list of new economic activities.
Additional Authors: Sulaiman Malik and Michael Singh
UK’s Collective Licensing Initiative Aims to Harmonize AI and Copyright Law
In a significant move to address the tension between copyright and generative artificial intelligence (AI), the UK’s Copyright Licensing Agency (CLA), Authors’ Licensing and Collecting Society (ALCS), and Publishers’ Licensing Services (PLS) have announced plans to launch a collective licensing framework for AI training. The opt-in license would allow AI developers to use text-based published works—such as books, journals, and magazines—for training, fine-tuning, and retrieval-augmented generation (RAG) while ensuring that creators are compensated. The license is expected to roll out in Q3 2025, following further consultation with publishers.
The UK government’s consultation “Copyright and Artificial Intelligence” acknowledged that current UK copyright law leaves both rights holders and AI developers navigating uncertainty. It proposed two main solutions: (1) strengthened rights reservation mechanisms and (2) a fallback copyright exception for AI training where rights were not reserved. However, rights holders strongly opposed a “catch-all” exception, arguing it would erode transparency and remuneration. Their opposition helped catalyze the development of this licensing framework as a market-based alternative.
The framework is being built collaboratively by CLA, ALCS (representing authors), and PLS (representing publishers). CLA—already the UK’s recognized collective management organization for text content—would administer the license, collecting fees and distributing them to rights holders after deducting operational costs. It would cover a broad range of text-based published works under CLA’s mandate, streamlining permissions for AI developers where individual negotiations would be impractical. Rights holders would affirmatively opt in by registering works, while unregistered works would remain outside the license’s scope. This would avoid the “opt-out” burden creators criticized in the government’s exception proposal, instead requiring affirmative consent to participate.
Importantly, the proposed collective license is intended to complement, not replace, direct bespoke licensing deals between large publishers or rights holders and AI firms. It provides an accessible fallback particularly for smaller creators and independent publishers who may otherwise struggle to negotiate individually.
An initial phase of publisher consultation has already concluded, with a second round scheduled for later in 2025. The framework’s rollout will follow the launch of related text and data mining (TDM) and workplace-use licenses slated to commence on May 1, 2025. Exact compensation models are still under negotiation but will aim to balance affordability for AI developers against fair, sustainable remuneration for creators. License fees collected will be allocated among ALCS and PLS members based on established distribution rules.
The framework is accompanied by the development of a national rights reservation registry, intended to facilitate machine-readable licensing metadata and help signal licensing preferences at scale. Licensees would also be subject to robust transparency requirements, including reporting on the works used, methods of content acquisition, and downstream uses, with the ultimate goal of reinforcing trust between the creative industries and AI developers.
Although UK-focused, the licensing framework is being designed with international interoperability in mind, offering access to licensed content to AI developers worldwide, including US-based firms. If successful, it could serve as a model for future cross-border AI licensing solutions.
The UK’s proposed collective licensing framework represents a pragmatic approach to reconciling the needs of AI innovation with the requirements of copyright law. While the implementation process is likely to raise novel challenges, the initiative is designed to provide a voluntary, transparent, and scalable alternative to statutory exceptions in the UK, protecting the economic interests of creators while enabling responsible AI development. If successful, it may offer valuable insights for other jurisdictions seeking to balance technological advancement with the protection of creative rights.
Avoiding Ethical Pitfalls as Generative Artificial Intelligence Transforms the Practice of Litigation
“It has become appallingly obvious that our technology has exceeded our humanity.”
This quote, widely attributed to Albert Einstein, warns that as technology rapidly advances, human ethical oversight is required to ensure tools are used thoughtfully and appropriately. Decades on, the warning rings loud as generative artificial intelligence (“GAI”) transforms the practice of law in ways that eclipse even the introduction of computer-assisted legal research from Westlaw and Lexis in the 1970s.
GAI has brought about revolutionary changes in the practice of law, including litigation, over the last five years, particularly since the launch of ChatGPT on November 30, 2022, and its subsequent rise. Advancements are so major and rapid that the legal profession recently witnessed the first appearance by an avatar before a court in a real proceeding.1 In a profession governed by a defined set of principle-based ethical rules, litigators making new use of the technology will likely find themselves in an ethical minefield. This article focuses on governing ethical rules and tips to avoid violations.
Brief Overview of GAI
GAI is a powerful technology trained on massive datasets, typically taking the form of large language models (“LLMs”) that mimic human intelligence, allowing it to perform a variety of functions seemingly as if it were a person. GAI can analyze data, produce relevant research, and generate new content, including written material, images, and video. For ease of use, GAI functions through “chatbots,” which simulate conversation and allow users to seek assistance through text or voice. The main chatbots include ChatGPT (developed by OpenAI), Gemini (Google), Claude (Anthropic), and Copilot (Microsoft). These chatbots are all public-facing; any member of the public can use them. Unless the functionality is switched off, queries and information shared by a user are retained and used to continue training the model, meaning such information loses its confidentiality. There is also non-public-facing GAI, which utilizes proprietary models that are private to the user.
The main difference between GAI and the versions of Westlaw and Lexis that lawyers today grew up using is that GAI can do the same research and much more. Responding to conversational commands, GAI can engage in human-like functions, including creating first drafts of documents and culling document sets of all sizes for relevance and privilege.
GAI Litigation Use Cases
The early days of GAI have seen five main use cases in the litigation context as set out below.
Legal Research
In little more than a quarter century, the legal profession has seen revolutionary advances in legal research. The manual and laborious practice of visiting law libraries and pulling bound case reporters gave way to technology as Westlaw and Lexis gained widespread use in the early 1990s with the rise of personal computers. Recent years have witnessed the next technological revolution in legal research with the advent of GAI. Searches are now simpler and available to all. Whereas traditional Lexis and Westlaw (each now has a GAI version) rely on more primitive search terms and require a paid subscription, GAI allows for conversational commands – typed or spoken – and basic versions are available free of charge. GAI can also conduct advanced queries, such as mining vast troves of data to ensure no precedent is missed.
E-Discovery / Document Review
“Technology Assisted Review” (more commonly known as “TAR”) has materially changed e-discovery over the last decade. TAR, aka predictive coding, learns from a lawyer’s tagging of sample documents and efficiently classifies the remainder of the population. The process has dramatically sped up e-discovery and has reliably assisted with relevancy and privilege determinations. Following a seminal decision by SDNY Magistrate Judge Peck in 2012,2 where he encouraged the use of predictive coding in large data volume cases, U.S. judges have regularly approved of TAR as an acceptable (and often superior) method for identifying responsive documents.
GAI is elevating TAR to the next level. Before GAI, TAR relied heavily on keyword searches, and the results were only as good as the search terms. By contrast, GAI can learn context and intent, which allows it to surface relevant documents even if they do not contain identified keywords. Similarly, GAI can learn legal concepts and flag privileged material. And unlike legacy TAR systems, which must be re-trained for each project, GAI tools built on LLMs come broadly pre-trained and stand ready for use without project-specific training, as sketched below.
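By way of illustration only, the short Python sketch below shows the general shape of an LLM-assisted first-pass relevance and privilege call. The issue description, prompt wording, and call_llm placeholder are hypothetical and are not drawn from any particular vendor’s product; in practice the model would sit behind a vetted e-discovery platform, and a lawyer reviews every machine-generated tag before anything is produced or withheld.

# Illustrative sketch only: a hypothetical first-pass relevance/privilege triage.
# "call_llm" is a placeholder for whatever vetted, access-controlled model a
# firm has approved; it is not a real vendor API.

from dataclasses import dataclass

REVIEW_PROMPT = """You are assisting with e-discovery document review.
Issue: alleged misstatements about the company's Q3 revenue forecasts.
Label the document below RELEVANT or NOT_RELEVANT to that issue, and
separately label it PRIVILEGED or NOT_PRIVILEGED depending on whether it
reflects legal advice from counsel. Give one sentence of reasoning.

Document:
{text}
"""

@dataclass
class FirstPassCall:
    relevance: str
    privilege: str
    rationale: str

def call_llm(prompt: str) -> str:
    """Placeholder: route the prompt to the firm's approved, non-public model."""
    raise NotImplementedError("Connect a vetted GAI tool here.")

def first_pass_review(doc_text: str) -> FirstPassCall:
    raw = call_llm(REVIEW_PROMPT.format(text=doc_text))
    # Naive string parsing for illustration; production tools return structured
    # output and feed a human quality-control workflow.
    relevance = "NOT_RELEVANT" if "NOT_RELEVANT" in raw else "RELEVANT"
    privilege = "NOT_PRIVILEGED" if "NOT_PRIVILEGED" in raw else "PRIVILEGED"
    return FirstPassCall(relevance, privilege, raw.strip())

The point of the sketch is the division of labor: the model proposes labels at scale, while verification and the final privilege calls remain with counsel.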
Studies have found that GAI-based review can be more accurate and efficient than human review in finding relevant material. For example, a 2024 study published by Cornell University, which pitted LLMs against humans in a contract review exercise, found that LLMs matched or exceeded human accuracy, completed the task in seconds rather than hours, and offered a 99.97 percent reduction in costs.3
Legal Writing
GAI can produce impressive first drafts of virtually any legal document. For a simple example, ask ChatGPT to “please draft an SDNY complaint for a 10b-5 claim where the stock price dropped 10 percent after the CEO misrepresented future prospects.” First-time users will do a double take at the results—a highly workable draft that provides both structure and the start of relevant content. Studies again confirm the efficiencies—with one showing GAI cut brief-writing time by 76%.4
Trial Preparation
GAI has shown great promise in assisting trial preparation by summarizing and organizing documents, such as deposition transcripts. Beyond relevance, GAI can be used to create chronologies and to zero in on key admissions and impeachment material. GAI can compare a deposition transcript to prior statements by the witness or others to speedily find inconsistencies. GAI can even generate deposition transcripts in real time, allowing questioners to raise inconsistencies so they are captured in the official record and to receive suggested follow-up questions on the spot.
Predictive Analytics for Trial Outcomes
Lastly, GAI can sift through vast legal data sets to help forecast how a trial may play out. It can assess legal precedent, rulings by the judge, and even jury behavior. The analysis can get granular, focusing on issues such as how often a particular judge grants motions to dismiss and what types of arguments are most persuasive to that judge. While it is early days, there is a reported episode in which a law firm inclined to settle won at trial after a GAI tool predicted an 80% chance of victory.5
Ethical Issues
While the benefits are immense, use of GAI is laden with risks that must be carefully managed. As Chief Justice John Roberts stated in his 2023 year-end report, which was devoted to artificial intelligence, “Any use of AI requires caution and humility.”6 To apply caution and avoid pitfalls, lawyers should be mindful of two main concerns when using GAI:
(1) While the tools are immensely powerful in applying logic to locate and analyze material, they cannot discern the truth and can become confused, leading to erroneous output – discussed below and colloquially referred to as “hallucinations” – thereby requiring close human review; and
(2) Several of the Model Rules of Professional Conduct (and state versions thereof) apply to use of GAI and need to be complied with at the risk of attorney discipline.
The Rules
No lawyer should use GAI in practice without being aware of the following Model Rules, along with the relevant states’ versions, and their application to the technology. For a fuller review of the governing Model Rules, practitioners would be well served to read ABA Formal Opinion 512, which is dedicated to GAI.7
Competence – Rule 1.1 and ABA Comment 8 thereunder, which require that lawyers keep informed of changes in the law and its practice, including the benefits and risks associated with relevant technology.
Client Communication – Rule 1.4, which requires sharing with the client all information that can be deemed important to the client, including the means by which a client’s objectives are to be met.
Fees and Expenses – Rule 1.5, which requires that both be reasonable.
Confidentiality of Client Information – Rule 1.6, which requires informed consent from clients before disclosing their information to third parties.
Candor Toward the Tribunal – Rule 3.3, which sets forth specific duties to avoid undermining the integrity of the adjudicative process, including prohibitions on submission of false evidence.
Responsibilities of Partners, Managers and Supervisory Lawyers – Rule 5.1, which requires such persons to take reasonable steps to ensure that all lawyers in their firm conform to the Rules of Professional Conduct.
Tips to Avoid Running Afoul
While not meant to be an exhaustive list, the following tips represent critical practice points based on early GAI experience.
Do not learn GAI on the job
Lawyers’ first use of GAI should not be in connection with a live matter. Under Rule 1.1, lawyers have a duty to understand a technology – including its nature of operation, benefits, and risks – prior to use.8 As per ABA Formal Opinion 512, a “reasonable understanding,” rather than expertise, is required.9 To satisfy this requirement, attorneys should be familiar with how a tool was trained, its capabilities (which tasks it can be used for), and its limitations (confidentiality, bias risk based on the data that was inputted, etc.) before using the tool.
A mishap example comes from Michael Cohen, President Trump’s former lawyer, who presented his lawyer with three non-existent cases that made their way into motion papers.10 Cohen obtained the cases from Google Bard, a GAI tool, and, upon realizing the mistake, informed the court that he “did not realize [Bard] … was a generative text service that … could show citations and descriptions that looked real but actually were not. Instead [he had] understood it to be a super-charged search engine.” The court, while describing Cohen’s belief as “surprising,” declined to impose sanctions finding a lack of bad faith.
Never submit GAI-generated research to a court or adversary without human verification of all relevant points – legal and factual.
Cohen aside, there have been at least six additional hallucination cases to date in which erroneous cites were submitted to courts.11 These have included citations to cases that simply do not exist12 as well as to cases that do exist but do not support the proposition for which they are cited.13 Accordingly, it is not enough to verify only that a cited case exists – it must also be confirmed that the citation stands for the point represented. These matters have involved law firms big14 and small,15 showing the risks do not reside only at small firms looking to save on the costs of Lexis and Westlaw.
These risks are real and material. A June 2024 Stanford University study found that leading legal research GAI tools hallucinate between 17% and 33% of the time.16 The consequences can be severe, ranging from professional embarrassment17 to Rule 11 sanctions to referrals to bar associations for potential discipline based on competence violations.
Promptly notify the court and adversaries of any known inaccuracy generated by GAI.
Should an error occur, it is imperative to promptly notify the court and adversaries. The duty of candor imposed by Rule 3.3 requires that lawyers correct any material false statement of law or fact made to a tribunal. Prompt remedial action can also persuade the judge to refrain from imposing sanctions for submission of the hallucination.18
Never input client information into a GAI tool without informed client consent and an evaluation of attendant risks.
As stated clearly in ABA Formal Opinion 512 and its analysis of Rule 1.6 governing the confidentiality of client information, “[B]ecause many of today’s self-learning GAI tools are designed so that their output could lead directly or indirectly to the disclosure of information relating to the representation of a client, a client’s informed consent is required prior to inputting information relating to the representation into such a GAI tool.” Informed consent should come from a direct documented communication and not from statements made in boilerplate engagement letters.
Consent aside, Rule 1.1 governing competence requires an evaluation of the disclosure risks of inputting such information, including disclosure through public access, access by others within the same firm who are walled off from the matter, and cyber breaches. As a baseline, practitioners should read the Terms of Use, privacy policy, and other contractual terms applicable to the GAI tool being used. Such review should focus on: the confidentiality provisions in place; who has access to the tool’s data; the cyber controls that are in place; and whether the information is retained by the tool after the lawyer’s use is discontinued. Even if the system is custom to a firm, issues can still arise if others within the firm, but not working on the matter, can inadvertently view such data.
Never input client information into a public-facing GAI tool, period.
Client information should never be inputted into a public-facing GAI tool such as ChatGPT. While ChatGPT keeps conversations private by default, OpenAI uses chats to improve model performance (i.e., for training) unless the user opts out. The risks of disclosure of client information are too great and the consequences too severe (violations of ethical rules and waiver of attorney-client privilege and work product protections) for client information to be shared with such tools.
Discuss with the client any use of GAI to help form litigation strategy.
Rule 1.4 requires lawyers to advise the client promptly of information which would be important for the client to receive, including the means by which the client’s objectives are to be accomplished. As such, lawyers should consult with their client prior to using GAI to influence a significant litigation decision. As stated in ABA Formal Opinion 512, “A client would reasonably want to know whether, in providing advice or making important decisions about how to carry out the representation, the lawyer is exercising independent judgment or, in the alternative, is deferring to the output of a GAI tool.” By contrast, it is unlikely that using GAI to conduct research gives rise to a consultation requirement, just as it is not the practice to do so today when using Westlaw or Lexis.
Ensure all fees and expenses tied to GAI use are reasonable.
Rule 1.5 provides that a lawyer’s fee must be reasonable and the basis for the charges must be communicated to the client. The new terrain of GAI can lead to violations of this rule. Under ABA Formal Opinion 93-379, lawyers cannot bill for more time than worked. While GAI can speed up the completion of many tasks, lawyers may only bill for the actual time spent on such task, even if compressed. Next, as per ABA Formal Opinion 512, it is permissible for lawyers to bill for the time to input data and run queries, as well as to learn a tool that is custom to the client. By contrast, it is not permissible to bill for the time spent learning a tool that will be used broadly within a lawyer’s practice. As to expenses, as per ABA Formal Opinion 93-379, absent an agreement, a lawyer can charge a client no more than the direct cost associated with the tool plus a reasonable allocation of related overhead – no surcharges or premiums may be added. Flat fee arrangements give rise to their own considerations – as per Model Rule 1.5, the fee must be “reasonable,” which would not always be the case if GAI allowed the project to be completed in rapid fashion. Here, price adjustments may be in order. As stated in Formal Opinion 512, “A fee charged for which little to no work was performed is an unreasonable fee.”
Managerial lawyers must take steps to ensure those at their firm comply with the rules.
To ensure compliance with Rule 5.1, law firms should establish clear policies and provide training on the appropriate use of GAI prior to allowing such use.
Conclusion
As its use takes off, GAI is likely to have several material impacts on the practice of law, including: reducing the demand for law firm associates as research, first drafts of documents, first level document review, and other traditional tasks performed by these lawyers will now be handled by computer; an increase in cost reduction pressure given clients’ greater expectation of efficiencies; and the potential for higher quality product given a broader landscape of data which can be efficiently surveyed for relevance and more lawyer time available for strategy and tactical decisions.
Increased future use of GAI will also assuredly result in more errors, hallucinations and otherwise. To guard against the consequences of missteps, practitioners should follow the tips above, which represent but a baseline for good practice.
1 See Larry Neumeister, An AI Avatar Tried To Argue A Case Before A New York Court. The Judges Weren’t Having It, AP News, April 4, 2025 (discussing a March 26, 2025 proceeding before the New York State Supreme Court, Appellate Division, where a panel of judges shut down an attempt by a pro se plaintiff to show a video of his argument delivered by an avatar moments into delivery; the court scolded the litigant and required him to deliver his argument himself).
2 Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (SDNY 2012).
3 Lauren Martin, Nick Whitehouse, Stephanie You, Lizzie Catterson, and Rivindu Perera, Better Call GPT, Comparing Large Language Models Against Lawyers, Cornell University (January 24, 2024).
4 Bob Ambrogi, CaseText Study Says Its ‘Compose’ Technology Cuts Brief-Writing Time by 76%, LawSites (July 28, 2020).
5 Ashley Hallene and Jeffrey M. Allen, Using AI for Predictive Analytics in Litigation, ABA website, October 2024.
6 Chief Justice John Roberts, 2023 Year-End Report on the Federal Judiciary at 5.
7 ABA Formal Opinion 512, Generative Artificial Intelligence Tools, July 29, 2024.
8 ABA Comment 8 to Model Rule 1.1 (“[A] lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”).
9 Id. at 2.
10 U.S. v. Cohen, 18-CR-602 (JMF) (SDNY March 20, 2024).
11 Sara Merken, AI “hallucinations” in court papers spell trouble for lawyers, Reuters (Feb. 18, 2025) (citing at least seven such cases over the prior two years).
12 See Mata v. Avianca Airlines, No. 22-CV-1461 (SDNY June 22, 2023); Wadsworth v. Walmart LLC, No. 2:23-CV-118-KHR (D. WY Feb. 24, 2025) (lawyers sanctioned for citing six cases that do not exist in a motion to dismiss); U.S. v. Cohen, 18-CR-602 (JMF) (SDNY March 20, 2024) (sanctions considered but not imposed on Michael Cohen and his attorney for citing three cases that do not exist); Iovino v. Michael Stapleton Assoc., No. 5:21-cv-00064 (W.D. Va. July 24, 2024) (sanctions considered but not imposed on lawyer for citing two cases that do not exist).
13 Iovino, id. (also citing two cases for quotes that are not found in the opinions).
14 See Wadsworth, supra note 12 (AmLaw 100 firm filed a motion citing eight cases that do not exist).
15 See, e.g., Mata v. Avianca Airlines, supra note 12 (two lawyers fined $5,000).
16 Varun Magesh, Faiz Surani, Matthew Dahl, Mirac Suzgan, Christopher D. Manning, and Daneil E. Ho, Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools, Stanford University (June 6, 2024).
17 Negative press aside, the lawyers in the Avianca Airlines matter were ordered to send the judge’s opinion sanctioning them – which called their filing “legal gibberish” – to the judges to whom the GAI improperly attributed the fake citations. Similarly, in the Michael Cohen matter, Judge Furman described the episode as “embarrassing” and “unfortunate.” Cohen, supra note 10.
18 Iovino v. Michael Stapleton Assoc., No. 5:21-CV-64, Transcript of Order to Show Cause Proceeding (W.D. Va. Oct. 9, 2024).
States Shifting Focus on AI and Automated Decision-Making
Since January, the federal government has moved away from comprehensive legislation on artificial intelligence (AI) and adopted a more muted approach to federal privacy legislation (as compared to 2024’s tabled federal legislation). Meanwhile, state legislatures forge ahead – albeit more cautiously than in preceding years.
As we previously reported, the Colorado AI Act (COAIA) is set to go into effect on February 1, 2026. In signing the COAIA into law last year, Colorado Governor Jared Polis (D) issued a letter urging Congress to develop a “cohesive” national approach to AI regulation preempting the growing patchwork of state laws. In the letter, Governor Polis noted his concern that the COAIA’s complex regulatory regime may drive technology innovators away from Colorado. Eight months later, the Trump Administration announced its deregulatory approach to AI regulation making federal AI legislation unlikely. At that time, the Trump Administration seemed to consider existing laws – such as Title VI and Title VII of the Civil Rights Act and the Americans with Disabilities Act which prohibit unlawful discrimination – as sufficient to protect against AI harms. Three months later, a March 28th Memorandum issued by the federal Office of Management and Budget directs federal agencies to implement risk management programs designed for “managing risks from the use of AI, especially for safety-impacting and rights impacting AI.”
On April 28, two of the COAIA’s original sponsors, Senator Robert Rodriguez (D) and Representative Brianna Titone (D), introduced a set of amendments in the form of SB 25-318 (AIA Amendment). The AIA Amendment appears targeted at addressing Governor Polis’s concerns, but with the legislative session ending May 7, the Colorado legislature has only a few days left to act.
If the AIA Amendment passes and is approved by Governor Polis, the COAIA would be modified as follows:
The definition of “algorithmic discrimination” would be narrowed to mean only use of an AI system that results in violation of federal or Colorado’s state or local anti-discrimination laws.
The current definition is much broader – prohibiting any condition in which use of an AI system results in “unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.” (Colo. Rev. Stat. § 6-1-1701(1).)
Obligations on developers, deployers and vendors that modify high-risk AI systems would be materially lessened.
An exception would be added for a developer of an AI system offered with “open model weights” (i.e., placed in the public domain along with specified documentation), as long as the developer takes certain technical and administrative steps to prevent the AI system from making, or being a substantial factor in making, consequential decisions.
The duty of care imposed on a developer or deployer to use reasonable care to protect consumers from any known or foreseeable risks of algorithmic discrimination of a high-risk AI System would be removed.
This would be a significant change, shifting the focus to procedural risk-reduction duties and away from a general duty to avoid harm.
Developer reporting obligations would be reduced.
Deployer risk assessment record-keeping obligations would be removed.
A deployer’s notice (transparency) requirements for a consumer who is subject to an adverse consequential decision from use of a high-risk AI system would be combined into a single notice.
An additional affirmative defense would be added for violations that are “inadvertent,” affect fewer than 100,000 consumers, and are not the result of negligence on the part of the developer, deployer, or other party asserting the defense.
Effective dates would be extended to January 1, 2027, with some obligations pushed back to April 1, 2028, for a business employing fewer than 250 employees, and April 1, 2029, for a business employing fewer than 100 employees.
Even if the AIA Amendment is passed, COAIA will remain the most comprehensive U.S. law regulating commercial AI development and deployment. Nonetheless, the proposed AIA Amendment is one example of how the innovate-not-regulate mindset of the Trump Administration may be starting to filter down to state legislatures.
Another example: in March, Virginia Governor Glenn Youngkin (R) vetoed HB 2094, the High-Risk Artificial Intelligence Developer and Deployer Act, which was based on the COAIA and on a model bill developed by the Multistate AI Policymaker Working Group (MAP-WG), a coalition of lawmakers from 45 states. In a statement explaining his veto, Governor Youngkin noted that “HB 2094’s rigid framework fails to account for the rapidly evolving and fast-moving nature of the AI industry and puts an especially onerous burden on smaller firms and startups that lack large legal compliance departments.” Last year, California Governor Gavin Newsom (D) vetoed SB 1047, which would have focused only on large-scale AI models, calling on the legislature to further explore comprehensive legislation and stating that “[a] California-only approach may well be warranted – especially absent federal action by Congress.”
Meanwhile, on April 23, California Governor Newsom warned the California Privacy Protection Agency (CPPA) (the administrative agency that enforces the California Consumer Privacy Act (CCPA)) to reconsider its draft automated decision-making technology (“ADMT”) regulations and leave AI regulation to the legislature. His letter echoes a letter from the California Legislature chiding the CPPA for its lack of authority “to regulate any AI (generative or otherwise) under Proposition 24 or any other body of law.” At its May 1st meeting, the CPPA Board considered and approved staff’s proposed changes to the ADMT draft regulations, which include deleting the definitions and mentions of “artificial intelligence” and “deep fakes.” The revised ADMT draft regulations also include these revisions (among others):
Deleting the definition of “extensive profiling” (monitoring employees, students, or publicly available spaces, or use for behavioral advertising) and shifting the focus to uses that make a significant decision about consumers. Reducing regulation of ADMT training. However, risk assessments would still be required for profiling based on systematic observation and for training of ADMT to make significant decisions, to verify identity, or for biological or physical profiling.
Streamlining the definition of ADMT to “mean any technology that processes personal information and uses computation to replace … or substantially replace human decision-making [which] means a business uses the technology output to make a decision without human involvement.”
Streamlining the definition of significant decisions to remove decisions regarding “access to,” limiting it to “provision or denial of” the following narrower types of goods and services: “financial or lending services, housing, education enrollment or opportunities, employment or independent contracting opportunities or compensation, or healthcare services,” and clarifying that use for advertising is not a significant decision.
Deleting the obligation to conduct specific risk-of-error and discrimination evaluations for physical or biological identification or profiling, while largely retaining the general risk assessment obligations.
Streamlining pre-use notice obligations.
Limiting opt-out rights to uses that make a significant decision.
Giving businesses until January 1, 2027, to comply with the ADMT regulations.
(A more detailed analysis of the CPPA’s rulemaking, including regulations unrelated to ADMT, will be posted soon.)
MAP-WG-inspired bills also are under consideration in several other states, including California. Comprehensive AI legislation proposed in Texas, known as the Texas Responsible AI Governance Act, was recently substantially revised (HB 149) to shift the focus from commercial to government implementation of AI systems. (The Texas legislature has until June 2 to consider the reworked bill.) Other states have more narrowly tailored laws focused on generative AI, such as the Utah Artificial Intelligence Policy Act, which requires any business or individual that “uses, prompts, or otherwise causes [GenAI] to interact with a person” to “clearly and conspicuously disclose” that the person is interacting with GenAI (not a human) “if asked or prompted by the person” and, for persons in “regulated occupations” (generally, those requiring a state license or certification), to “prominently” disclose that a consumer is interacting with generative AI in the provision of the regulated services.
What happens next in the state legislatures and how Congress may react is yet to be seen. Privacy World will keep you updated.
Navigating California’s New Regulations on Automated Decision-Making Tools
The California Civil Rights Department (CRD) has recently approved regulations under the Fair Employment and Housing Act (FEHA) to address discrimination in employment resulting from the use of automated decision-making systems, including artificial intelligence (AI) and algorithms. These regulations apply to all employers covered by the FEHA and will likely take effect in July, once they complete the final administrative process of approval by the Office of Administrative Law.
Definition of Automated Decision Systems
An automated decision system (ADS) is defined as a computational process that makes or assists in making decisions regarding employment benefits such as hiring, promotion, selection for training programs, or similar activities. An ADS can result from AI, machine learning, algorithms, statistics, or other data processing techniques. The definition of ADS does not include word processing software, spreadsheet software, or other commonly used software for day-to-day work.
Regulations Against Discrimination
Under these regulations, it is unlawful for an employer to use ADS or selection criteria that discriminate against applicants or employees based on protected categories defined under FEHA. Evidence of anti-bias testing of ADS or similar practices may support defenses against discrimination claims. Anti-bias testing involves evaluating automated decision-making systems to identify and mitigate biases that may lead to unfair or discriminatory outcomes, ensuring the system operates equitably across different demographic groups. However, methods of conducting anti-bias testing may vary depending on the ADS used.
Recordkeeping
The regulations require preserving ADS data and related records for four years from either the date of the data’s creation or the personnel action involved, whichever occurs later, similar to other types of personnel records and selection criteria. Other revisions add ADS to the definitions of “application” and “recruitment activity.” Additionally, the regulations specify that using ADS for certain skill testing may necessitate providing reasonable accommodations for religious beliefs or disabilities, ensuring non-discriminatory practices.
Compliance for Employers
For employers in California, the regulations make clear that caution should be exercised to avoid discrimination when using ADS for any aspect of employment.
White House Issues Executive Order to Advance AI Education for American Youth
A new executive order establishes a comprehensive federal framework to advance artificial intelligence (AI) education for American youth.
Agencies are directed to launch a national AI Challenge, build public-private partnerships, and expand AI-focused teacher training and apprenticeship programs.
Stakeholders in education, workforce development, and industry should monitor new guidance and funding streams and explore partnership opportunities with federal agencies.
Recognizing AI’s transformative impact on society and the economy, the administration issued an executive order on April 23 establishing a national strategy to integrate AI education across the K-12 system. The order outlines a coordinated federal framework that promotes AI literacy and proficiency among students and educators, leverages public-private partnerships, and expands workforce development opportunities, all aimed at keeping the United States at the forefront of AI innovation and readiness.
For stakeholders in education, industry and workforce development, the order offers a roadmap for potential funding, strategic collaboration and early alignment with federal AI education priorities.
Key Provisions
Establishment of the White House Task Force on Artificial Intelligence Education
The Task Force, chaired by the Director of the Office of Science and Technology Policy and comprising senior officials from multiple federal agencies, is charged with implementing the order’s policy objectives and coordinating federal AI education initiatives.
Presidential Artificial Intelligence Challenge
Within 90 days, the Task Force must develop plans for a national AI Challenge to be held within 12 months. The AI Challenge will recognize student and educator achievements, promote geographic and topical diversity, and foster collaboration among government, academia, industry and philanthropy.
Public-Private Partnerships and K-12 AI Resources
Agencies are directed to establish partnerships with industry, academia and nonprofits to develop online resources for K-12 AI education, with a focus on foundational literacy and critical thinking.
Federal funding mechanisms, including discretionary grants, are to be prioritized for these initiatives, with resources to be made available for classroom use within 180 days of partnership announcements.
Guidance and Support for AI in Education
The Secretary of Education is tasked with issuing guidance on the use of federal grant funds for AI-based instructional resources and identifying ways to leverage existing research programs to support state and local AI education efforts.
Enhanced Educator Training
Within 120 days, the Secretary of Education and the Director of the National Science Foundation (NSF) must prioritize AI in teacher training and research programs, including professional development for integrating AI into curricula and classroom practice.
The Secretary of Agriculture is similarly directed to support AI education through 4-H and the Cooperative Extension System.
Expansion of AI-Related Apprenticeships and Workforce Development
The Secretary of Labor is instructed to increase participation in AI-related Registered Apprenticeships, set growth targets, and use existing funds to develop industry-driven standards.
Guidance will be issued to encourage the use of Workforce Innovation and Opportunity Act (WIOA) funds for youth AI skills development, and grant programs will prioritize providers expanding AI coursework and certifications, including dual enrollment opportunities for high school students.
Federal Fellowship and Scholarship Prioritization
All agencies providing educational grants are to consider AI as a priority area within existing fellowship and scholarship programs.
Takeaways
The executive order signals the federal government’s intent to make AI education a long-term national priority not just for students, but for the educators and institutions that support them. As agencies roll out guidance, funding mechanisms and partnership opportunities, stakeholders in education, industry and workforce development should consider opportunities for engagement and collaboration as the federal government advances its AI education agenda.
As the federal AI education strategy evolves, we’re here to support clients exploring how to align, partner, or participate.
DoD AI Compliance Guidance for Government Contractors
As the Department of Defense (DoD) scales artificial intelligence across its operations, government contractors must ensure their AI solutions align with federal mandates and ethical standards. This guide links to key guidance and templates and outlines essential requirements and actionable steps to help contractors navigate DoD AI compliance effectively.
The DoD has established a comprehensive framework for AI implementation through three key documents. The DoD Data, Analytics, and AI Adoption Strategy (2023) sets the strategic direction for AI deployment, focusing on enabling decision advantage through data integration. The Responsible AI Strategy and Implementation Pathway (2022) provides ethical principles and implementation guidance, establishing concrete expectations for AI system evaluation during acquisition and deployment. The Responsible AI Toolkit (2023) offers practical resources to align with DoD’s responsible AI standards, including templates and assessment guides that streamline compliance efforts.
Contractors must align with DoD’s five Responsible AI Tenets: Responsible (design systems that serve intended purposes without causing unintended harm), Equitable (ensure systems function without bias across diverse populations and scenarios), Traceable (maintain transparency in how AI systems operate and make decisions), Reliable (develop systems that perform consistently under varying conditions), and Governable (design mechanisms for appropriate human intervention and control). Implementation requires comprehensive documentation of data sources and model development, robust bias detection and mitigation, regular security assessments against standards like NIST SP 800-53, and governance structures that maintain alignment with DoD AI Ethical Principles.
Successful compliance implementation typically follows five phases: Assessment (conduct gap analysis against DoD requirements and assign compliance leadership), Documentation (develop AI governance policies and traceability documentation), Technical Integration (implement audit trails, secure data pipelines, and validation routines), Verification (conduct self-assessments and consider third-party certification), and Continuous Monitoring (maintain audit logs, address detected risks, and iterate policies). This structured approach helps organizations methodically build compliance capabilities while maintaining focus on core business objectives. Templates for documentation and self-assessment checklists are available in CDAO’s RAI Toolkit.
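Purely as an illustration (the tenet names below come from DoD’s published guidance, while the checklist items, scoring, and structure are hypothetical and not drawn from the RAI Toolkit), a contractor might track its own gap assessment against the five Responsible AI Tenets in a simple, machine-readable form such as the Python sketch below.

# Hypothetical gap-assessment tracker keyed to DoD's five Responsible AI Tenets.
# The tenet names follow DoD guidance; every checklist item and the scoring are
# illustrative placeholders a contractor would replace with RAI Toolkit content.

GAP_ASSESSMENT = {
    "Responsible": {"intended-use statement documented": True,
                    "harm analysis reviewed by program lead": False},
    "Equitable":   {"training-data provenance documented": True,
                    "bias testing results recorded": False},
    "Traceable":   {"model cards and data lineage maintained": True},
    "Reliable":    {"performance tested under varied conditions": False},
    "Governable":  {"human-override procedure documented": True},
}

def completion_by_tenet(assessment: dict) -> dict:
    """Return the fraction of checklist items completed for each tenet."""
    return {
        tenet: sum(items.values()) / len(items)
        for tenet, items in assessment.items()
    }

def open_items(assessment: dict) -> list:
    """List outstanding items to fold into the next quarterly compliance review."""
    return [
        f"{tenet}: {item}"
        for tenet, items in assessment.items()
        for item, done in items.items()
        if not done
    ]

if __name__ == "__main__":
    for tenet, score in completion_by_tenet(GAP_ASSESSMENT).items():
        print(f"{tenet}: {score:.0%} complete")
    print("Open items:", *open_items(GAP_ASSESSMENT), sep="\n  - ")

Keeping open items in a structured form like this makes the quarterly reviews and documentation-currency obligations discussed below easier to evidence during a post-award audit.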
The defense contracting community’s experience demonstrates that proactive compliance creates competitive advantage. Contractors who have implemented comprehensive model documentation and traceability processes have secured significant contracts by demonstrating superior compliance readiness. Conversely, those neglecting these aspects have faced costly post-award audit findings requiring extensive remediation. Compliance costs may be allowable under FAR Part 31, especially for cost-reimbursable contracts. When preparing proposals, explicitly address how compliance measures contribute to system integrity and mission assurance, positioning compliance capabilities as value differentiators rather than merely added costs.
The Trump administration’s Executive Order on Removing Barriers to American Leadership in AI (2025) emphasizes streamlining AI development while maintaining responsible innovation. Forward-looking contractors should accelerate investment in responsible AI infrastructure aligned with DoD frameworks, participate in public-private pilot programs demonstrating mission-specific capabilities, and engage in consortia that promote global AI standards and cross-sector dialogue.
Contractors should know that DoD’s five Responsible AI Tenets are now evaluation criteria in procurement decisions, compliance documentation requirements are increasing in both depth and breadth, and DFARS 252.204-7012 and related clauses establish enforcement mechanisms with significant consequences. Contractors should immediately designate an AI compliance lead with authority to coordinate cross-functional implementation. Within 30 days, complete a gap assessment against DoD’s Responsible AI requirements. Within 90 days, document your AI governance framework and model development processes. Within 6 months, implement technical measures for traceability, security, and bias mitigation. On an ongoing basis, conduct quarterly compliance reviews and maintain documentation currency.
What’s Next?
Be prepared for increased scrutiny of AI systems during pre-award evaluations, requests for detailed model documentation and bias assessments, flow-down requirements to subcontractors and suppliers, and evolving standards as DoD refines its approach to AI acquisition. By approaching compliance strategically rather than reactively, contractors can transform regulatory requirements into competitive advantages while contributing to the responsible advancement of defense AI capabilities.