The Colorado AI Act: Implications for Health Care Providers
Artificial intelligence (AI) is increasingly being integrated into health care operations, from administrative functions such as scheduling and billing to clinical decision-making, including diagnosis and treatment recommendations. Although AI offers significant benefits, concerns regarding bias, transparency, and accountability have prompted regulatory responses. Colorado’s Artificial Intelligence Act (the Act), set to take effect on February 1, 2026, imposes governance and disclosure requirements on entities deploying high-risk AI systems, particularly those involved in consequential decisions affecting health care services and other critical areas.
Given the Act’s broad applicability, including its potential extraterritorial reach for entities conducting business in Colorado, health care providers must proactively assess their AI utilization and prepare for compliance with forthcoming regulations. Below, we discuss the intent of the Act, what types of AI it applies to, future regulation, potential impact on providers, statutory compliance requirements, and enforcement mechanisms.
1. What Is the Act Trying to Protect Against?
The Act primarily seeks to mitigate algorithmic discrimination, defined as AI-driven decision-making that results in unlawful differential treatment or disparate impact on individuals based on certain characteristics, such as race, disability, age, or language proficiency. The Act seeks to prevent AI from reinforcing existing biases or making decisions that unfairly disadvantage particular groups.
Examples of Algorithmic Discrimination in Health Care
Access to Care Issues: AI-powered phone scheduling systems may fail to recognize certain accents or accurately process non-English speakers, making it more difficult for non-native English speakers to schedule medical appointments.
Biased Diagnostic Tools and Treatment Recommendations: Some AI diagnostic tools may recommend different treatments for patients of different ethnicities, not because of medical evidence but due to biases in the training data. For instance, an AI model trained primarily on data from white patients might miss early signs of disease that present differently in Black or Hispanic patients, resulting in inaccurate or less effective treatment recommendations for historically marginalized populations.
By targeting these and other AI-driven inequities, the Act aims to ensure automated systems do not reinforce or exacerbate existing disparities in health care access and outcomes.
2. What Types of AI Are Addressed by the Act?
The Act applies broadly to businesses using AI to interact with or make decisions about Colorado residents. Although certain high-risk AI systems — those that are a substantial factor in making consequential decisions — are subject to more stringent requirements, the Act imposes obligations on most AI systems used in health care.
Key Definitions in the Act
“Artificial Intelligence System” means any machine-based system that generates outputs — such as decisions, predictions, or recommendations — that can influence real-world environments.
“Consequential Decision” means a decision that materially affects a consumer’s access to or cost of health care, insurance, or other essential services.
“High-Risk AI System” means any AI tool that makes or substantially influences a consequential decision.
“Substantial Factor” means a factor that assists in making a consequential decision or is capable of altering the outcome of a consequential decision and is generated by an AI system.
“Developers” means creators of AI systems.
“Deployers” means users of high-risk AI systems.
3. How Can Health Care Providers Ensure Compliance?
Although the Act sets out broad obligations, specific regulations are still forthcoming. The Colorado Attorney General has been tasked with developing rules to clarify compliance requirements. These regulations may address:
Risk management and compliance frameworks for AI systems.
Disclosure requirements for AI usage in consumer-facing applications.
Guidance on evaluating and mitigating algorithmic discrimination.
Health care providers should monitor developments as the regulatory framework evolves to ensure their AI-related practices align with state law.
4. How Could the Act Impact Health Care Operations?
The Act will require health care providers to specifically evaluate how they use AI across various operational areas, as the Act applies broadly to any AI system that influences decision-making. Given AI’s growing role in patient care, administrative functions, and financial operations, health care organizations should anticipate compliance obligations in multiple domains.
Billing and Collections
AI-driven billing and claims processing systems should be reviewed for potential biases that could disproportionately target specific patient demographics for debt collection efforts.
Deployers should ensure that their AI systems do not inadvertently create financial barriers for specific patient groups.
Scheduling and Patient Access
AI-powered scheduling assistants must be designed to accommodate patients with disabilities and limited English proficiency to prevent inadvertent discrimination and delayed access to care.
Providers must evaluate whether their AI tools prioritize certain patients over others in a way that could be deemed discriminatory.
Clinical Decision-Making and Diagnosis
AI diagnostic tools must be validated to ensure they do not produce biased outcomes for different demographic groups.
Health care organizations using AI-assisted triage tools should establish protocols for reviewing AI-generated recommendations to ensure fairness and accuracy.
5. If You Use AI, With What Do You Need to Comply?
The Act establishes different obligations for Developers and Deployers. Health care providers will in most cases be “Deployers” of AI systems as opposed to Developers. Health care providers will want to scrutinize contractual relationships with Developers for appropriate risk allocation and information sharing as providers implement AI tools into their operations.
Obligations of Developers (AI Vendors)
Disclosures to Deployers: Developers must provide transparency about the AI system’s training data, known biases, and intended use cases.
Risk Mitigation: Developers must document efforts to minimize algorithmic discrimination.
Impact Assessments: Developers must evaluate whether the AI system poses risks of discrimination before deploying it.
Obligations of Deployers (e.g., Health Care Providers)
Duty to Avoid Algorithmic Discrimination
Deployers of high-risk AI systems must use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination.
Risk Management Policy & Program
Deployers must implement a risk management policy and program that identifies, documents, and mitigates risks of algorithmic discrimination.
The program must be iterative, regularly updated, and aligned with recognized AI risk management frameworks.
Requirements vary based on the deployer’s size, complexity, AI system scope, and data sensitivity.
Impact Assessments (Regular & Event-Triggered Reviews)
Timing Requirements: Deployers must conduct impact assessments:
Before deploying any high-risk AI system.
At least annually for each deployed high-risk AI system.
Within 90 days after any intentional and substantial modification to the AI system.
Required Content: Each impact assessment must include the AI system’s purpose, intended use, and benefits, an analysis of risks of algorithmic discrimination and mitigation measures, a description of data processed (inputs, outputs, and any customization data), performance metrics and system limitations, transparency measures (including consumer disclosures), and details on post-deployment monitoring and safeguards.
Special Requirements for Modifications: If an impact assessment is conducted due to a substantial modification, it must also include an explanation of how the AI system’s actual use aligned with or deviated from its originally intended purpose.
Notifications & Transparency
Public Notice: Deployers must publish a statement on their website describing the high-risk AI systems they use and how they manage discrimination risks.
Notices to Patients/Employees: Before an AI system makes a consequential decision, individuals must be notified of its use.
Post-Decision Explanation: If AI contributes to an adverse decision, deployers must explain its role and allow the individual to appeal or correct inaccurate data.
Attorney General Notifications: If AI is found to have caused algorithmic discrimination, deployers must notify the Attorney General within 90 days.
Small deployers (those with fewer than 50 employees) who do not train AI models with their own data are exempt from many of these compliance obligations.
6. How Is the Act Enforced?
Only the Colorado Attorney General has enforcement authority.
A rebuttable presumption of compliance exists if Deployers follow recognized AI risk management frameworks.
There is no private right of action, meaning consumers cannot sue directly under the Act.
Health care providers should take early action to assess their AI usage and implement compliance measures.
Final Thoughts: What Health Care Providers Should Do Now
The Act represents a significant shift in AI regulation, particularly for health care providers who increasingly rely on AI-driven tools for patient care, administrative functions, and financial operations.
Although the Act aims to enhance transparency and mitigate algorithmic discrimination, it also imposes substantial compliance obligations. Health care organizations will have to assess their AI usage, implement risk management protocols, and maintain detailed documentation.
Given the evolving regulatory landscape, health care providers should take a proactive approach by auditing existing AI systems, training staff on compliance requirements, and establishing governance frameworks that align with best practices. As rulemaking by the Colorado Attorney General progresses, staying informed about additional regulatory requirements will be critical to ensuring compliance and avoiding enforcement risks.
Ultimately, the Act reflects a broader trend toward AI regulation that is likely to extend beyond state borders. Health care organizations that invest in AI governance now will not only mitigate legal risks but also maintain patient trust in an increasingly AI-driven industry.
If health care providers plan to integrate AI systems into their operations, conducting a thorough legal analysis is essential to determine whether the Act applies to their specific use cases. This should also include careful review and negotiation of service agreements with AI Developers to ensure that the provider has sufficient information and cooperation from the Developer to comply with the Act and to properly allocate risk between the parties.
Compliance is not a one-size-fits-all process. It requires careful evaluation of AI tools, their functions, and their potential to influence consequential decisions. Organizations should work closely with legal counsel to navigate the Act’s complexities, implement risk management frameworks, and establish protocols for ongoing compliance. As AI regulations evolve, proactive legal assessment will be crucial to ensuring that health care providers not only meet regulatory requirements but also uphold ethical and equitable AI practices that align with broader industry standards.
Bipartisan Push to Strengthen American Supply Chains
Members of the Senate Commerce Committee have demonstrated an early bipartisan interest in continuing to promote U.S. supply chain resilience, highlighting an avenue for bipartisanship in the Trump Administration’s foreign policy agenda.
Sen. Marsha Blackburn (R-Tennessee) has partnered with Democratic colleagues as an original cosponsor on the reintroduction of two pieces of legislation aimed at coordinating the U.S. government’s focus on supply chain resilience: the Strengthening Support for American Manufacturing Act (S. 99) and the Promoting Resilient Supply Chains Act (S. 257).
The Strengthening Support for American Manufacturing Act would require the Secretary of Commerce and the National Academy of Public Administration to produce a report on the effectiveness and management of the Department of Commerce’s various manufacturing support programs. Notably, the report must identify the relevant offices and bureaus within the Department of Commerce with responsibilities related to critical supply chain resilience and manufacturing and industrial innovation, and make recommendations for improving their efficiency by identifying gaps and duplicative duties between offices.
Sen. Gary Peters (D-Michigan), who introduced the Strengthening Support for American Manufacturing Act, explains that the legislation is intended to streamline various manufacturing programs offered by the federal government. Specifically, in a press release associated with the bill, Sen. Peters highlights a 2017 report released by the Government Accountability Office that identified 58 manufacturing-related programs across 11 different federal agencies that serve U.S. manufacturing, several of which are managed by the Department of Commerce.
The Promoting Resilient Supply Chains Act (the “PRSCA”) would establish a Supply Chain Resilience Working Group (the “Working Group”) composed of federal agencies – including the Departments of Commerce, State, Defense, Agriculture, and Health and Human Services, among others. Moreover, under the PRSCA, the Assistant Secretary of Commerce for Industry and Analysis would be required to designate “critical industries,” “critical supply chains,” and “critical goods,” and the Working Group would be charged with mapping, monitoring, and modeling U.S. capacity to mitigate vulnerabilities in these areas.
Notably, during the Commerce Committee’s January 29, 2025, hearing to consider the nomination of Howard Lutnick to become Secretary of Commerce, Sen. Lisa Blunt Rochester (D-Delaware), the author of the PRSCA, asked Mr. Lutnick whether the Department of Commerce would maintain the agency’s supply chain mapping initiatives under his direction. Mr. Lutnick replied in the affirmative.
In discussing the merits of the PRSCA, Sen. Blackburn stated: “To achieve a strong, resilient, supply chain, we must have a coordinated, national strategy that decreases dependence on our adversaries, like Communist China, and leverages American ingenuity.” That emphasis is reflected in the PRSCA’s promise to design and implement an “early warning supply chain disruption system” that would employ artificial intelligence and quantum computing to identify and mitigate potential supply chain shocks. As a crisis response measure, the platform would locate alternative sourcing options for supply chains under imminent threat and press the private sector to shift their supply chains toward “countries that are allies or key international partners” of the United States. Secretary of State Marco Rubio has emphasized that the Trump Administration’s foreign policy program will prioritize “relocating [U.S.] critical supply chains closer to the Western Hemisphere,” namely in Latin American countries, as a means to enhance “neighbors’ economic growth and safeguard Americans’ own economic security.”
Sen. Blackburn’s willingness to support these Democratic pieces of legislation reflects an increasing bipartisan sense that the impacts of recent geopolitical conflicts, natural disasters, and the COVID-19 pandemic highlighted the fragility of U.S. supply chains. Additionally, the PRSCA has been endorsed by the private sector, including the Information Technology Industry Council, the National Association of Electrical Distributors, the National Association of Wholesaler-Distributors, and the Supply Chain Resiliency Consumer Brands Association.
It remains uncertain whether either the PRSCA or the Strengthening Support for American Manufacturing Act can advance this Congress as standalone bills, as the Trump Administration’s tariff and foreign assistance actions deepen partisan trends. Still, the bills’ emphasis on government efficiency, prioritizing American manufacturing, and near-shoring may be able to leverage Trump Administration “America First” and “Department of Government Efficiency” themes to ride momentum into FY 2026 annual appropriations legislation under a national security title. Accordingly, importers interested in the U.S. market would likely benefit from reviewing their supply chains with a long view that seeks to leverage opportunities to reinvest in American manufacturing and looks to near-shore material supply chains, particularly in the Western Hemisphere – where the Trump Administration has underscored its interest in boxing out Chinese investment.
Colorado’s AI Task Force Proposes Updates to State’s AI Law
Stemming from Colorado’s Concerning Consumer Protections in Interactions with Artificial Intelligence Systems Act (the Act), which will impose obligations on developers and deployers of artificial intelligence (AI), the Colorado Artificial Intelligence Impact Task Force recently issued a report outlining potential areas where the Act can be “clarified, refined[,] and otherwise improved.”
The Task Force’s mission is to review issues related to AI and automated decision systems (ADS) affecting consumers and employees. The Task Force met on several occasions and prepared a report summarizing its findings, including recommendations to:
Revise the Act’s definition of the types of decisions that qualify as “consequential decisions,” as well as the definitions of “algorithmic discrimination,” “substantial factor,” and “intentional and substantial modification”;
Revamp the list of exemptions to what qualifies as a “covered decision system;”
Change the scope of the information and documentation that developers must provide to deployers;
Update the triggering events and timing for impact assessments as well as changes to the requirements for deployer risk management programs;
Consider replacing the duty of care standard for developers and deployers (i.e., whether such standard should be more or less stringent);
Consider whether to minimize or expand the small business exemption (the current exemption under the Act is for businesses with fewer than 50 employees);
Consider whether businesses should be provided a cure period for certain types of non-compliance before Attorney General enforcement under the Act; and,
Revise the trade secret exemptions and provisions related to a consumer’s right to appeal.
As of today, the requirements for AI developers and deployers under the Act go into effect on February 1, 2026. However, the Task Force recommends reconsidering the law’s implementation timing. We will continue to track this first-of-its-kind AI law.
With Enough Human Contribution, AI-Generated Outputs May Be Copyright Protectable
After several months of delays, the U.S. Copyright Office has published part two of its three-part report on the copyright issues raised by artificial intelligence (AI). This part, entitled “Copyrightability,” focuses on whether AI-generated content is eligible for copyright protection in the U.S.
An output generated with the assistance of AI is eligible for copyright protection if there is sufficient human contribution. The report notes that copyright law does not need to be updated to support this conclusion. The Supreme Court has explained that individuals can receive copyright protection when they translate an idea into a fixed and tangible medium. When an AI model supplies all of the creative effort, no human can be considered an author, and thus there is no copyrightable work. However, when an AI model assists a human’s creative expression, the human is considered an author. The Copyright Office analogizes this to the principle of joint authorship because a work is copyright-eligible even if a single person is not responsible for creating the entire work.
The contribution level is determined by what a person provides to the AI model. The Copyright Office reasoned that inputting a prompt by itself is not a sufficient contribution to be considered an author. The report analogizes this to a person hiring an artist, where the person may have a general artistic vision, but the artist produces the creative work. Additionally, because AI models generally operate as a black box, a user cannot exert the necessary level of control to be considered an author.
However, when a user inputs a prompt in combination with their own original work, the resulting AI-generated output is copyrightable to the extent the user’s original expression remains perceptible in it. The author’s own work helps provide the AI model with a starting point and limits the range of outputs.
Finally, AI-generated content can be copyrightable when arranged or modified with human creativity. For example, while an AI-generated image is not copyrightable, a compilation of the images and a human-authored story can be protected by copyright. The Copyright Office is currently working on the third part of its report, which should be published later this year and will focus on the implications of using protected works to train AI models.
Nation-State-Backed Groups Using AI for Malicious Purposes
The Google Threat Intelligence Group (GTIG) recently published a new report “Adversarial Misuse of Generative AI,” which is well worth the read. The report shares findings on how government-backed threat actors use and misuse the Gemini web application. Although the GTIG is committed to countering threats across Google’s platforms, it is also committed to sharing findings “to raise awareness and enable stronger protections across the wider ecosystem.” This is an excellent mission.
GTIG found government adversaries, including the People’s Republic of China (PRC), Russia, Iran, and North Korea, are attempting to misuse Gemini through jailbreak attempts, “coding and scripting tasks, gathering information about potential targets, researching publicly known vulnerabilities and enabling post-compromise activities, such as defense evasion in a target environment.”
According to the report, Iranian threat actors used Gemini the most, for “crafting phishing campaigns, conducting reconnaissance on defense experts and organizations, and generating content with cybersecurity themes.” Over ten Iran-backed groups were using Gemini for these purposes.
PRC threat actors used Gemini the second most to “conduct reconnaissance, for scripting and development, to troubleshoot code, and to research how to obtain deeper access to target networks. They focused on topics such as lateral movement, privilege escalation, data exfiltration, and detection evasion.” GTIG found over 20 China-backed groups were using and misusing Gemini.
Nine North Korean-backed groups “used Gemini to support several phases of the attack lifecycle, including researching potential infrastructure and free hosting providers, reconnaissance on target organizations, payload development, and assistance with malicious scripting and evasion techniques. They also used Gemini to research topics of strategic interest to the North Korean government, such as the South Korean military and cryptocurrency. Of note, North Korean actors also used Gemini to draft cover letters and research jobs—activities that would likely support North Korea’s efforts to place clandestine IT workers at Western companies.”
Russian threat actors are using Gemini the least. Three Russia-backed groups focused on coding tasks, including converting publicly available malware into another coding language and adding encryption functions to existing code.
This research confirms our previous suspicions. Google has “shared best practices for implementing safeguards, evaluating model safety and red teaming to test and secure AI systems.” They are also actively sharing threat intelligence that will assist all users of AI tools to understand and mitigate risks of threat actors misusing AI.
The BR Privacy & Security Download: February 2025
STATE & LOCAL LAWS & REGULATIONS
New York Legislature Passes Comprehensive Health Privacy Law: The New York state legislature passed SB-929 (the “Bill”), providing for the protection of health information. The Bill broadly defines “regulated health information” as “any information that is reasonably linkable to an individual, or a device, and is collected or processed in connection with the physical or mental health of an individual.” Regulated health information includes location and payment information, as well as inferences derived from an individual’s physical or mental health. The term “individual” is not defined. Accordingly, the Bill contains no terms restricting its application to consumers acting in an individual or household context. The Bill would apply to regulated entities, which are entities that (1) are located in New York and control the processing of regulated health information, or (2) control the processing of regulated health information of New York residents or individuals physically present in New York. Among other things, the Bill would restrict regulated entities to processing regulated health information only with a valid authorization, or when strictly necessary for certain specified activities. The Bill also provides for individual rights and requires the implementation of reasonable administrative, physical, and technical safeguards to protect regulated health information. The Bill would take effect one year after being signed into law and currently awaits New York Governor Kathy Hochul’s signature.
New York Data Breach Notification Law Updated: Two bills, SO2659 and SO2376, amending the state’s data breach notification law were signed into law by New York Governor Kathy Hochul. The bills change the timeframe within which notice must be provided to New York residents, add data elements to the definition of “private information,” and add the New York Department of Financial Services to the list of regulators that must be notified. Previously, New York’s data breach notification statute did not have a hard deadline within which notice must be provided. The amendments now require affected individuals to be notified no later than 30 days after discovery of the breach, except for delays arising from the legitimate needs of law enforcement. Additionally, as of March 25, 2025, “private information” subject to the law’s notification requirements will include medical information and health insurance information.
California AG Issues Legal Advisory on Application of California Law to AI: California’s Attorney General has issued legal advisories to clarify that existing state laws apply to AI development and use, emphasizing that California is not an AI “wild west.” These advisories cover consumer protection, civil rights, competition, data privacy, and election misinformation. AI systems, while beneficial, present risks such as bias, discrimination, and the spread of disinformation. Therefore, entities that develop or use AI must comply with all state, federal, and local laws. The advisories highlight key laws, including the Unfair Competition Law and the California Consumer Privacy Act. The advisories also highlight new laws effective on January 1, 2025, which include disclosure requirements for businesses, restrictions on the unauthorized use of likeness, and regulations for AI use in elections and healthcare. These advisories stress the importance of transparency and compliance to prevent harm from AI.
New Jersey AG Publishes Guidance on Algorithmic Discrimination: On January 9, 2025, New Jersey’s Attorney General and Division on Civil Rights announced a new civil rights and technology initiative to address the risks of discrimination and bias-based harassment in AI and other advanced technologies. The initiative includes the publication of a Guidance Document, which addresses the applicability of New Jersey’s Law Against Discrimination (“LAD”) to automated decision-making tools and technologies. It focuses on the threats posed by automated decision-making technologies in the housing, employment, healthcare, and financial services contexts, emphasizing that the LAD applies to discrimination regardless of the technology at issue. Also included in the announcement is the launch of a new Civil Rights Innovation lab, which “will aim to leverage technology responsibly to advance [the Division’s] mission to prevent, address, and remedy discrimination.” The Lab will partner with experts and relevant industry stakeholders to identify and develop technology to enhance the Division’s enforcement, outreach, and public education work, and will develop protocols to facilitate the responsible deployment of AI and related decision-making technology. This initiative, along with the recently effective New Jersey Data Protection Act, shows a significantly increased focus from the New Jersey Attorney General on issues relating to data privacy and automated decision-making technologies.
New Jersey Publishes Comprehensive Privacy Law FAQs: The New Jersey Division of Consumer Affairs Cyber Fraud Unit (“Division”) published FAQs that provide a general summary of the New Jersey Data Privacy Law (“NJDPL”), including its scope, key definitions, consumer rights, and enforcement. The NJDPL took effect on January 15, 2025, and the FAQs state that controllers subject to the NJDPL are expected to comply by such date. However, the FAQs also emphasize that until July 1, 2026, the Division will provide notice and a 30-day cure period for potential violations. The FAQs also suggest that the Division may adopt a stricter approach to minors’ privacy. While the text of the NJDPL requires consent for processing the personal data of consumers between the ages of 13 and 16 for purposes of targeted advertising, sale, and profiling, the FAQs state that when a controller knows or willfully disregards that a consumer is between the ages of 13 and 16, consent is required to process their personal data more generally.
CPPA Extends Formal Comment Period for Automated Decision-Making Technology Regulations: The California Privacy Protection Agency (“CPPA”) extended the public comment period for its proposed regulations on cybersecurity audits, risk assessments, automated decision-making technology (“ADMT”), and insurance companies under the California Privacy Rights Act. The public comment period opened on November 22, 2024, and was set to close on January 14, 2025. However, due to the wildfires in Southern California, the public comment period was extended to February 19, 2025. The CPPA will also be holding a public hearing on that date for interested parties to present oral and written statements or arguments regarding the proposed regulations.
Oregon DOJ Publishes Toolkit for Consumer Privacy Rights: The Oregon Department of Justice announced the release of a new toolkit designed to help Oregonians protect their online information. The toolkit is designed to help families understand their rights under the Oregon Consumer Privacy Act. The Oregon DOJ reminded consumers how to submit complaints when businesses are not responsive to privacy rights requests. The Oregon DOJ also stated it has received 118 complaints since the Oregon Consumer Privacy Act took effect last July and has sent notices of violation to businesses that have been identified as non-compliant.
California, Colorado, and Connecticut AGs Remind Consumers of Opt-Out Rights: California Attorney General Rob Bonta published a press release reminding residents of their right to opt out of the sale and sharing of their personal information. The California Attorney General also cited the robust privacy protections of Colorado and Connecticut laws that provide for similar opt-out protections. The press release urged consumers to familiarize themselves with the Global Privacy Control (“GPC”), a browser setting or extension that automatically signals to businesses that they should not sell or share a consumer’s personal information, including for targeted advertising. The Attorney General also provided instructions for the use of the GPC and for exercising opt-outs by visiting the websites of individual businesses.
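For context on how the GPC signal is communicated in practice, the GPC specification has participating browsers send a Sec-GPC: 1 request header with each request. The sketch below is a minimal, illustrative Python example of a web backend honoring that header; the Flask route, endpoint path, and opt_out_of_sale helper are hypothetical and are not drawn from the laws or press releases discussed above.

```python
# Minimal sketch (assumes Flask is installed): honoring the Global Privacy Control signal.
# GPC-enabled browsers send the "Sec-GPC: 1" request header; the route and the
# opt_out_of_sale() helper below are hypothetical placeholders for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)

def opt_out_of_sale(user_id: str) -> None:
    # Placeholder: record the opt-out in whatever consent store the business uses.
    print(f"Recorded sale/share opt-out for {user_id}")

@app.route("/content/<user_id>")
def serve_content(user_id: str):
    gpc_present = request.headers.get("Sec-GPC") == "1"
    if gpc_present:
        opt_out_of_sale(user_id)  # treat the signal as a valid opt-out request
    return jsonify({"gpc_honored": gpc_present})

if __name__ == "__main__":
    app.run()
```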
FEDERAL LAWS & REGULATIONS
FTC Finalizes Updates to COPPA Rule: The FTC announced the finalization of updates to the Children’s Online Privacy Protection Rule (the “Rule”). The updated Rule makes a number of changes, including requiring opt-in consent to engage in targeted advertising to children and to disclose children’s personal information to third parties. The Rule also adds biometric identifiers to the definition of personal information and prohibits operators from retaining children’s personal information for longer than necessary for the specific documented business purposes for which it was collected. Operators must maintain a written data retention policy that documents the business purpose for data retention and the retention period for data. The Commission voted 5-0 to adopt the Rule, but new FTC Chair Andrew Ferguson filed a separate statement describing “serious problems” with the rule. Ferguson specifically stated that it was unclear whether an entirely new consent would be required if an operator added a new third party with whom personal information would be shared, potentially creating a significant burden for businesses. The Rule will be effective 60 days after its publication in the Federal Register.
Trump Rescinds Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence: President Donald Trump took action to rescind former President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“AI EO”). According to a Biden administration statement released in October, many action items from the AI EO have already been completed. Recommendations, reports, and opportunities for research that were completed prior to revocation of the AI EO may continue in place unless replaced by additional federal agency action. It remains unclear whether the Trump Administration will issue its own executive orders relating to AI.
U.S. Justice Department Issues Final Rule on Transfer of Sensitive Personal Data to Foreign Adversaries: The U.S. Justice Department issued final regulations to implement a presidential Executive Order regarding access to bulk sensitive personal data of U.S. citizens by foreign adversaries. The regulations restrict transfers involving designated countries of concern – China, Cuba, Iran, North Korea, Russia, and Venezuela. At a high level, transfers are restricted if they could result in bulk sensitive personal data access by a country of concern or a “covered person,” which is an entity that is majority-owned by a country of concern, organized under the laws of a country of concern, has its principal place of business in a country of concern, or is an individual whose primary residence is in a country of concern. Data covered by the regulation includes precise geolocation data, biometric identifiers, genetic data, health data, financial data, government-issued identification numbers, and certain other identifiers, including device or hardware-based identifiers, advertising identifiers, and demographic or contact data.
First Complaint Filed Under Protecting Americans’ Data from Foreign Adversaries Act: The Electronic Privacy Information Center (“EPIC”) and the Irish Council for Civil Liberties (“ICCL”) Enforce Unit filed the first-ever complaint under the Protecting Americans’ Data from Foreign Adversaries Act (“PADFAA”). PADFAA makes it unlawful for a data broker to sell, license, rent, trade, transfer, release, disclose, or otherwise make available specified personally identifiable sensitive data of individuals residing in the United States to North Korea, China, Russia, Iran, or an entity controlled by one of those countries. The complaint alleges that Google’s real-time bidding system data includes personally identifiable sensitive data, that Google executives were aware that data from its real-time bidding system may have been resold, and that Google’s public list of certified companies that receive real-time bidding bid request data includes multiple companies based in foreign adversary countries.
FDA Issues Draft Guidance for AI-Enabled Device Software Functions: The U.S. Food and Drug Administration (“FDA”) published its January 2025 Draft Guidance for Industry and FDA Staff regarding AI-enabled device software functionality. The Draft provides recommendations regarding the contents of marketing submissions for AI-enabled medical devices, including documentation and information that will support the FDA’s evaluation of their safety and effectiveness. The Draft Guidance is designed to reflect a “comprehensive approach” to the management of devices through their total product life cycle and includes recommendations for the design, development, and implementation of AI-enabled devices. The FDA is accepting comments on the Draft Guidance, which may be submitted online until April 7, 2025.
Industry Coalition Pushes for Unified National Data Privacy Law: A coalition of over thirty industry groups, including the U.S. Chamber of Commerce, sent a letter to Congress urging it to enact a comprehensive national data privacy law. The letter highlights the urgent need for a cohesive federal standard to replace the fragmented state laws that complicate compliance and stifle competition. The letter advocates for legislation based on principles to empower startups and small businesses by reducing costs and improving consumer access to services. The letter supports granting consumers the right to understand, correct, and delete their data, and to opt out of targeted advertising, while emphasizing transparency by requiring companies to disclose data practices and secure consent for processing sensitive information. It also focuses on the principles of limiting data collection to essential purposes and implementing robust security measures. While the principles aim to override strong state laws like that in California, the proposal notably excludes data broker regulation, a previous point of contention. The coalition cautions against legislation that could lead to frivolous litigation, advocating for balanced enforcement and collaborative compliance. By adhering to these principles, the industry groups seek to ensure legal certainty and promote responsible data use, benefiting both businesses and consumers.
Cyber Trust Mark Unveiled: The White House launched a labeling scheme for internet-of-things devices designed to inform consumers when devices meet certain government-determined cybersecurity standards. The program has been in development for several months and involves collaboration between the White House, the National Institute of Standards and Technology, and the Federal Communications Commission. UL Solutions, a global safety and testing company headquartered in Illinois, has been selected as the lead administrator of the program along with 10 other firms as deputy administrators. With the main goal of helping consumers make more cyber-secure choices when purchasing products, the White House hopes to have products with the new cyber trust mark hit shelves before the end of 2025.
U.S. LITIGATION
Texas Attorney General Sues Insurance Company for Unlawful Collection and Sharing of Driving Data: Texas Attorney General Ken Paxton filed a lawsuit against Allstate and its data analytics subsidiary, Arity. The lawsuit alleges that Arity paid app developers to incorporate its software development kit that tracked location data from over 45 million consumers in the U.S. According to the lawsuit, Arity then shared that data with Allstate and other insurers, who would use the data to justify increasing car insurance premiums. The sale of precise geolocation data of Texans violated the Texas Data Privacy and Security Act (“TDPSA”) according to the Texas Attorney General. The TDPSA requires the companies to provide notice and obtain informed consent to use the sensitive data of Texas residents, which includes precise geolocation data. The Texas Attorney General sued General Motors in August of 2024, alleging similar practices relating to the collection and sale of driver data.
Eleventh Circuit Overturns FCC’s One-to-One Consent Rule, Upholds Broader Telemarketing Practices: In Insurance Marketing Coalition, Ltd. v. Federal Communications Commission, No. 24-10277, 2025 WL 289152 (11th Cir. Jan. 24, 2025), the Eleventh Circuit vacated the FCC’s one-to-one consent rule under the Telephone Consumer Protection Act (“TCPA”). The court found that the rule, which would have required separate consent for each seller and limited calls to topics related to the interaction that prompted the consent, exceeded the FCC’s authority and conflicted with the statutory meaning of “prior express consent.” This decision allows businesses to continue using broader consent practices, maintaining shared consent agreements. The ruling emphasizes that consent should align with common-law principles rather than be restricted to a single entity. While the FCC’s next steps remain uncertain, the decision reduces compliance burdens and may challenge other TCPA regulations.
California Judge Blocks Enforcement of Social Media Addiction Law: The California Protecting Our Kids from Social Media Addiction Act (the “Act”) has been temporarily blocked. The Act was set to take effect on January 1, 2025. The law aims to prevent social media platforms from using algorithms to provide addictive content to children. Judge Edward J. Davila initially declined to block key parts of the law but agreed to pause enforcement until February 1, 2025, to allow the Ninth Circuit to review the case. NetChoice, a tech trade group, is challenging the law on First Amendment grounds. NetChoice argues that restricting minors’ access to personalized feeds violates the First Amendment. The group has appealed to the Ninth Circuit and is seeking an injunction to prevent the law from taking effect. Judge Davila’s decision recognized the “novel, difficult, and important” constitutional issues presented by the case. The law includes provisions to restrict minors’ access to personalized feeds, limit their ability to view likes and other feedback, and restrict third-party interaction.
U.S. ENFORCEMENT
FTC Settles Enforcement Action Against General Motors for Sharing Geolocation and Driving Behavior Data Without Consent: The Federal Trade Commission (“FTC”) announced a proposed order to settle FTC allegations against General Motors that it collected, used, and sold drivers’ precise geolocation data and driving behavior information from millions of vehicles without adequately notifying consumers and obtaining their affirmative consent. The FTC specifically alleged General Motors used a misleading enrollment process to get consumers to sign up for its OnStar-connected vehicle service and Smart Driver feature without proper notice or consent during that process. The information was then sold to third parties, including consumer reporting agencies, according to the FTC. As part of the settlement, General Motors will be prohibited from disclosing driver data to consumer reporting agencies, required to allow consumers to obtain and delete their data, required to obtain consent prior to collection, and required to allow consumers to limit data collected from their vehicles.
FTC Releases Proposed Order Against GoDaddy for Alleged Data Security Failures: The Federal Trade Commission (“FTC”) has announced that it reached a proposed settlement in its action against GoDaddy Inc. (“GoDaddy”) for failing to implement reasonable and appropriate security measures, which resulted in several major data breaches between 2019 and 2022. According to the FTC’s complaint, GoDaddy misled customers about its data security practices, through claims on its websites and in email and social media ads, and by representing it was in compliance with the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks. However, the FTC found that GoDaddy failed to inventory and manage assets and software updates, assess risks to its shared hosting services, adequately log and monitor security-related events, and segment its shared hosting from less secure environments. The FTC’s proposed order against GoDaddy prohibits GoDaddy from misleading its customers about its security practices and requires GoDaddy to implement a comprehensive information security program. GoDaddy must also hire a third-party assessor to conduct biennial reviews of its information security program.
CPPA Reaches Settlements with Additional Data Brokers: Following its announcement of a public investigative sweep of data broker registration compliance, the CPPA has settled with additional data brokers PayDae, Inc. d/b/a Infillion (“Infillion”), The Data Group, LLC (“The Data Group”), and Key Marketing Advantage, LLC (“KMA”) for failing to register as a data broker and pay an annual fee as required by California’s Delete Act. Infillion will pay $54,200 for failing to register between February 1, 2024, and November 4, 2024. The Data Group will pay $46,600 for failing to register between February 1, 2024, and September 20, 2024. KMA will pay $55,800 for failing to register between February 1, 2024, and November 5, 2024. In addition to the fines, the companies have agreed to injunctive terms. The Delete Act imposes fines of $200 per day for failing to register by the deadline.
Mortgage Company Fined by State Financial Regulators for Cybersecurity Breach: Bayview Asset Management LLC and three affiliates (collectively, “Bayview”) agreed to pay a $20 million fine and improve their cybersecurity programs to settle allegations from 53 state financial regulators. The Conference of State Bank Supervisors (“CSBS”) alleged that the mortgage companies had deficient cybersecurity practices and did not fully cooperate with regulators after a 2021 data breach. The data breach compromised data for 5.8 million customers. The coordinated enforcement action was led by financial regulators in California, Maryland, North Carolina, and Washington State. The regulators said the companies’ information technology and cybersecurity practices did not meet federal or state requirements. The firms also delayed the supervisory process by withholding requested information and providing redacted documents in the initial stages of a post-breach exam. The companies also agreed to undergo independent assessments and provide three years of additional reporting to the state regulators.
SEC Reaches Settlement over Misleading Cybersecurity Disclosures: The SEC announced it has settled charges with Ashford Inc., an asset management firm, over misleading disclosures related to a cybersecurity incident. This enforcement action stemmed from a ransomware attack in September 2023, compromising over 12 terabytes of sensitive hotel customer data, including driver’s licenses and credit card numbers. Despite the breach, Ashford falsely reported in its November 2023 filings that no customer information was exposed. The SEC alleged negligence in Ashford’s disclosures, citing violations of the Securities Act of 1933 and the Exchange Act of 1934. Without admitting or denying the allegations, Ashford agreed to a $115,231 penalty and an injunction. This case highlights the critical importance of accurate cybersecurity disclosures and demonstrates the SEC’s commitment to ensuring transparency and accountability in corporate reporting.
FTC Finalizes Data Breach-Related Settlement with Marriott: The FTC has finalized its order against Marriott International, Inc. (“Marriott”) and its subsidiary Starwood Hotels & Resorts Worldwide LLC (“Starwood”). As previously reported, the FTC entered into a settlement with Marriott and Starwood for three data breaches the companies experienced between 2014 and 2020, which collectively impacted more than 344 million guest records. Under the finalized order, Marriott and Starwood are required to establish a comprehensive information security program, implement a policy to retain personal information only for as long as reasonably necessary, and establish a link on their website for U.S. customers to request deletion of their personal information associated with their email address or loyalty rewards account number. The order also requires Marriott to review loyalty rewards accounts upon customer request and restore stolen loyalty points. The companies are further prohibited from misrepresenting their information collection practices and data security measures.
New York Attorney General Settles with Auto Insurance Company over Data Breach: The New York Attorney General settled with automobile insurance company Noblr for a data breach the company experienced in January 2021. Noblr’s online insurance quoting tool exposed full, plaintext driver’s license numbers, including on the backend of its website and in PDFs generated when a purchase was made. The data breach impacted the personal information of more than 80,000 New Yorkers. The data breach was part of an industry-wide campaign to steal personal information (e.g., driver’s license numbers and dates of birth) from online automobile insurance quoting applications to be used to file fraudulent unemployment claims during the COVID-19 pandemic. As part of its settlement, Noblr must pay the New York Attorney General $500,000 in penalties and strengthen its data security measures, such as by enhancing its web application defenses and maintaining a comprehensive information security program, data inventory, access controls (e.g., authentication procedures), and logging and monitoring systems.
FTC Alleges Video Game Maker Violated COPPA and Engaged in Deceptive Marketing Practices: The Federal Trade Commission (“FTC”) has taken action against Cognosphere Pte. Ltd and its subsidiary Cognosphere LLC, also known as HoYoverse, the developer of the game Genshin Impact (“HoYoverse”). The FTC alleges that HoYoverse violated the Children’s Online Privacy Protection Act (“COPPA”) and engaged in deceptive marketing practices. Specifically, the company is accused of unfairly marketing loot boxes to children and misleading players about the odds of winning prizes and the true cost of in-game transactions. To settle these charges, HoYoverse will pay a $20 million fine and is prohibited from allowing children under 16 to make in-game purchases without parental consent. Additionally, the company must provide an option to purchase loot boxes directly with real money and disclose loot box odds and exchange rates. HoYoverse is also required to delete personal information collected from children under 13 without parental consent. The FTC’s actions aim to protect consumers, especially children and teens, from deceptive practices related to in-game purchases.
OCR Finalizes Several Settlements for HIPAA Violations: Prior to the inauguration of President Trump, the U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) brought enforcement actions against four entities, USR Holdings, LLC (“USR”), Elgon Information Systems (“Elgon”), Solara Medical Supplies, LLC (“Solara”) and Northeast Surgical Group, P.C. (“NESG”), for potential violations of the Health Insurance Portability and Accountability Act’s (“HIPAA”) Security Rule due to the data breaches the entities experienced. USR reported that between August 23, 2018, and December 8, 2018, a database containing the electronic protected health information (“ePHI”) of 2,903 individuals was accessed by an unauthorized third party who was able to delete the ePHI in the database. Elgon and NESG each discovered a ransomware attack in March 2023, which affected the protected health information (“PHI”) of approximately 31,248 individuals and 15,298 individuals, respectively. Solara experienced a phishing attack that allowed an unauthorized third party to gain access to eight of Solara’s employees’ email accounts between April and June 2019, resulting in the compromise of 114,007 individuals’ ePHI. As part of their settlements, each of the entities is required to pay a fine to OCR: USR $337,750, Elgon $80,000, Solara $3,000,000, and NESG $10,000. Additionally, each of the entities is required to implement certain data security measures such as conducting a risk analysis, implementing a risk management plan, maintaining written policies and procedures to comply with HIPAA, and distributing such policies or providing training on such policies to its workforce.
Virginia Attorney General Sues TikTok for Addictive Features and Allowing Chinese Government to Access Data: Virginia Attorney General Jason Miyares announced his office had filed a lawsuit against TikTok and ByteDance Ltd, the China-based parent company of TikTok. The lawsuit alleges that TikTok was intentionally designed to be addictive for adolescent users and that the company deceived parents about TikTok content, including by claiming the app is appropriate for children over the age of 12, in violation of the Virginia Consumer Protection Act.
INTERNATIONAL LAWS & REGULATIONS
UK ICO Publishes Guidance on Pay or Consent Model: On January 23, the UK’s Information Commissioner’s Office (“ICO”) published its Guidance for Organizations Implementing or Considering Implementing Consent or Pay Models. The guidance is designed to clarify how organizations can deploy ‘consent or pay’ models in a manner that gives users meaningful control over the privacy of their information while still supporting their economic viability. The guidance addresses the requirements of applicable UK laws, including PECR and the UK GDPR, and provides extensive guidance as to how appropriate fees may be calculated and how to address imbalances of power. The guidance includes a set of factors that organizations can use to assess their consent models and includes plans to further engage with online consent management platforms, which are typically used by businesses to manage the use of essential and non-essential online trackers. Businesses with operations in the UK should carefully review their current online tracker consent management tools in light of this new guidance.
EU Commission to Pay Damages for Sending IP Address to Meta: The European General Court has ordered the European Commission to pay a German citizen, Thomas Bindl, €400 in damages for unlawfully transferring his personal data to the U.S. This decision sets a new precedent regarding EU data protection litigation. The court found that the Commission breached data protection regulations by operating a website with a “sign in with Facebook” option. This resulted in Bindl’s IP address, along with other data, being transferred to Meta without ensuring adequate safeguards were in place. The transfer happened during the transition period between the EU-U.S. Privacy Shield and the EU-U.S. Data Protection Framework. The court determined that this left Bindl in a position of uncertainty about how his data was being processed. The ruling is significant because it recognizes “intrinsic harm” and may pave the way for large-scale collective redress actions.
European Data Protection Board Releases AI Bias Assessment and Data Subject Rights Tools: The European Data Protection Board (“EDPB”) released two AI tools as part of the AI: Complex Algorithms and effective Data Protection Supervision project. The EDPB launched the project in the context of the Support Pool of Experts program at the request of the German Federal Data Protection Authority. The Support Pool of Experts program aims to help data protection authorities increase their enforcement capacity by developing common tools and giving them access to a wide pool of experts. The new documents address best practices for bias evaluation and the effective implementation of data subject rights, specifically the rights to rectification and erasure when AI systems have been developed with personal data.
European Data Protection Board Adopts New Guidelines on Pseudonymization: The EDPB released new guidelines on pseudonymization for public consultation (the “Guidelines”). Although pseudonymized data still constitutes personal data under the GDPR, pseudonymization can reduce the risks to the data subjects by preventing the attribution of personal data to natural persons in the course of the processing of the data, and in the event of unauthorized access or use. In certain circumstances, the risk reduction resulting from pseudonymization may enable controllers to rely on legitimate interests as the legal basis for processing personal data under the GDPR, provided they meet the other requirements, or help guarantee an essentially equivalent level of protection for data they intend to export. The Guidelines provide real-world examples illustrating the use of pseudonymization in various scenarios, such as internal analysis, external analysis, and research.
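As a purely illustrative aid to the concept (not anything prescribed by the Guidelines), the short sketch below shows one common pseudonymization technique: replacing a direct identifier with a keyed hash, with the secret key held separately from the pseudonymized dataset. The key value and record fields are made-up assumptions.

```python
# Minimal sketch: pseudonymization via a keyed hash (HMAC-SHA256) over a direct identifier.
# The key must be stored separately from the pseudonymized data; the key value and the
# record layout here are illustrative assumptions only.
import hashlib
import hmac

SECRET_KEY = b"store-this-key-outside-the-analytics-environment"  # hypothetical key

def pseudonymize(identifier: str) -> str:
    # Without the key, the original identifier cannot be re-derived from the digest.
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"patient_id": pseudonymize("patient-12345"), "diagnosis_code": "E11.9"}
print(record)
```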
CJEU Issues Ruling on Excessive Data Subject Requests: On January 9, the Court of Justice of the European Union (“CJEU”) issued its ruling in Österreichische Datenschutzbehörde (C‑416/23). The primary question before the Court was when a European data protection authority may deny consumer requests due to their excessive nature. Rather than specifying an arbitrary numerical threshold of requests, the CJEU found that authorities must consider the relevant facts to determine whether the individual submitting the request has “an abusive intention.” While the number of requests submitted may be a factor in determining this intention, it is not the only factor. Additionally, the CJEU emphasized that data protection authorities should strongly consider charging a “reasonable fee” for handling requests they suspect may be excessive before simply denying them.
Daniel R. Saeedi, Rachel L. Schaller, Gabrielle N. Ganz, Ana Tagvoryan, P. Gavin Eastgate, Timothy W. Dickens, Jason C. Hirsch, Tianmei Ann Huang, Adam J. Landy, Amanda M. Noonan, and Karen H. Shin contributed to this article.
DeepSeek AI’s Security Woes + Impersonations: What You Need to Know
Soon after the Chinese generative artificial intelligence (AI) company DeepSeek emerged to compete with ChatGPT and Gemini, it was forced offline when “large-scale malicious attacks” targeted its servers. Speculation points to a distributed denial-of-service (DDoS) attack.
Security researchers reported that DeepSeek “left one of its databases exposed on the internet, which could have allowed malicious actors to gain access to sensitive data… [t]he exposure also includes more than a million lines of log streams containing chat history, secret keys, backend details, and other highly sensitive information, such as API Secrets and operational metadata.”
On top of that, security researchers identified two malicious packages using the DeepSeek name posted to the Python Package Index (PyPI) starting on January 29, 2025. The packages are named deepseeek and deepseekai, which are “ostensibly client libraries for access to and interacting with the DeepSeek AI API, but they contained functions designed to collect user and computer data, as well as environment variables, which may contain API keys for cloud storage services, database credentials, etc.” Although PyPI quarantined the packages, developers worldwide downloaded them without knowing they were malicious. Researchers are warning developers to be careful with newly released packages “that pose as wrappers for popular services.”
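For development teams, the practical takeaway is to verify package names before installing anything that claims to wrap a popular new service. The minimal sketch below, an illustration rather than a vetted security tool, checks an environment for the specific suspect names reported above; the helper function and the idea of a local denylist are assumptions made for the example.

import importlib.metadata

# Names reported as malicious typosquats of DeepSeek-related packages; extend
# this set as new suspect names are published.
SUSPECT_NAMES = {"deepseeek", "deepseekai"}

def find_suspect_packages() -> list[str]:
    """Return installed distributions whose names match known suspect names."""
    installed = {dist.metadata["Name"].lower() for dist in importlib.metadata.distributions()}
    return sorted(installed & SUSPECT_NAMES)

if __name__ == "__main__":
    hits = find_suspect_packages()
    if hits:
        print("Review and remove these suspicious packages:", ", ".join(hits))
    else:
        print("No known suspect packages found in this environment.")

A check like this only catches names already known to be malicious; pinning dependencies and reviewing newly published packages before adopting them remain the more reliable safeguards.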
Additionally, due to its popularity, DeepSeek is warning users on X about fake social media accounts impersonating the company.
But wait, there’s more! Cybersecurity firms are looking closely at DeepSeek and are finding security flaws. One firm, Kela, was able to “jailbreak the model across a wide range of scenarios, enabling it to generate malicious outputs, such as ransomware development, fabrication of sensitive content, and detailed instructions for creating toxins and explosive devices.” DeepSeek’s chatbot provided completely made-up information to a query in one instance. The firm stated, “This response underscores that some outputs generated by DeepSeek are not trustworthy, highlighting the model’s lack of reliability and accuracy. Users cannot depend on DeepSeek for accurate or credible information in such cases.”
We remind our readers that TikTok and DeepSeek are based in China, and the same national security concerns apply to both companies. DeepSeek is unavailable in Italy due to information requests from the Italian DPA, Garante. The Irish Data Protection Commissioner is also requesting information from DeepSeek. In addition, there are reports that U.S.-based AI companies are investigating whether DeepSeek used OpenAI’s API to train its models without permission. Beware of DeepSeek’s risks and limitations, and consider refraining from using it at the present time. “As generative AI platforms from foreign adversaries enter the market, users should question the origin of the data used to train these technologies. They should also question the ownership of this data and ensure it was used ethically to generate responses,” said Jennifer Mahoney, Advisory Practice Manager, Data Governance, Privacy and Protection at Optiv. “Since privacy laws vary across countries, it’s important to be mindful of who’s accessing the information you input into these platforms and what’s being done with it.”
Assessing Inputs: Determining AI’s Role in US Intellectual Property Protections
The US Patent & Trademark Office (PTO) issued additional guidance on the contribution of artificial intelligence (AI) in its January 2025 AI Strategy. Similarly, the US Copyright Office issued part two of its “Copyright and Artificial Intelligence” report, addressing the copyrightability of AI- or partially AI-made works. Both agencies appear to be walking a fine line by accepting that AI has become increasingly pervasive while maintaining human contribution requirements for protected works and inventions.
In its published strategy, the PTO states that its vision is to unleash “America’s potential through the adoption of AI.” The strategy describes five focus areas:
Advancing the development of intellectual property policies that promote inclusive AI innovation and creativity.
Building best-in-class AI capabilities by investing in computational infrastructure, data resources, and business-driven product development.
Promoting the responsible use of AI within the PTO and across the broader innovation ecosystem.
Developing AI expertise within the PTO’s workforce.
Collaborating with other US government agencies, international partners, and the public on shared AI priorities.
The PTO stated that it is still evaluating the issue of AI-assisted inventions but reaffirmed its February 2024 guidance on inventorship for AI-assisted inventions. That guidance indicates that while AI-assisted inventions are not categorically unpatentable, the inventorship analysis should focus on human contributions.
Likewise, the Copyright Office discussed public comments regarding AI contributions to copyright, weighing the benefits of AI in assisting and empowering creators with disabilities against the harm to artists working to make a living. Ultimately, the Copyright Office affirmed that AI, when used as a tool, can generate copyrightable works only where a human is able to determine the expressive elements contained in the work. The Copyright Office stated that creativity in the AI prompt alone is, at this stage, insufficient to satisfy the human expressive input required to produce a copyrightable work.
UK Publishes AI Cyber Security Code of Practice and Implementation Guide
On January 31, 2025, the UK government published the Code of Practice for the Cyber Security of AI (the “Code”) and the Implementation Guide for the Code (the “Guide”). The purpose of the Code is to provide cyber security requirements for the lifecycle of AI. Compliance with the Code is voluntary. The purpose of the Guide is to provide guidance to stakeholders on how to meet the cyber security requirements outlined in the Code, including by providing examples of compliance. The Code and the Guide will also be submitted to the European Telecommunications Standards Institute (“ETSI”) where they will be used as the basis for a new global standard (TS 104 223) and accompanying implementation guide (TR 104 128).
The Code defines each of the stakeholders that form part of the AI supply chain, such as developers (any business across any sector, as well as individuals, responsible for creating or adapting an AI model and/or system), system operators (any business across any sector that has responsibility for embedding/deploying an AI model and system within their infrastructure) and end-users (any employee within a business and UK consumers who use an AI model and/or system for any purpose, including to support their work and day-to-day activities). The Code is broken down into 13 principles, each of which contains provisions whose adoption is either required, recommended, or optional. While the Code is voluntary, a business that chooses to comply must adhere to those provisions that are stated as required. The principles are:
Principle 1: Raise awareness of AI security threats and risks.
Principle 2: Design your AI system for security as well as functionality and performance.
Principle 3: Evaluate the threats and manage the risks to your AI system.
Principle 4: Enable human responsibility for AI systems.
Principle 5: Identify, track and protect your assets.
Principle 6: Secure your infrastructure.
Principle 7: Secure your supply chain.
Principle 8: Document your data, models and prompts.
Principle 9: Conduct appropriate testing and evaluation.
Principle 10: Communication and processes associated with End-users and Affected Entities.
Principle 11: Maintain regular security updates, patches and mitigations.
Principle 12: Monitor your system’s behavior.
Principle 13: Ensure proper data and model disposal.
The Guide breaks down each principle by its provisions, detailing associated risks/threats with each provision and providing example measures/controls that could be implemented to comply with each provision.
Read the press release, the Code, and the Guide.
Workplace AI – Presidential Change and Unknown Expectations for Retail Employers
The use of Artificial Intelligence (“AI”) in the workplace has spread rapidly since President Trump left the White House in early 2021. In recent years, retail employers have started using AI technology in a variety of ways from automating tasks, to implementing data-driven decision making, to enhancing customer experience. Though the Biden administration started to grapple with the use of AI in the workplace, the second Trump administration could mark a dramatic shift in the federal government’s response to these issues.
The Biden administration took a somewhat cautious approach to the proliferation of AI in the workplace. In response to criticism, including the possibility of AI technology allegedly exhibiting implicit biases in hiring decisions, President Biden issued an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which established parameters for AI usage and directed federal agencies to take steps to protect workers and consumers from the potential harms of AI.
President Trump rescinded the Biden executive order on January 20, 2025, and signed his own order, “Removing Barriers to American Leadership in Artificial Intelligence,” on January 23, 2025, but has not yet implemented a detailed policy. The Trump executive order directs the Assistant to the President for Science and Technology and other administration officials to develop an “Artificial Intelligence Action Plan” within 180 days of the order to advance the administration’s policy to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” The specifics of the “Artificial Intelligence Action Plan” remain unclear. President Trump signed an executive order regarding AI during his first term in 2019 that encouraged AI research and implementation; however, the technology has since developed rapidly. Given the executive order’s statement that previous government action constituted “barriers to American AI innovation,” it is likely the “Artificial Intelligence Action Plan” will promote the development and use of AI rather than create new red tape for employers.
In the wake of the Trump executive order, federal agencies have taken down the limited guidance on workplace AI that they had released during the Biden administration. The Equal Employment Opportunity Commission (“EEOC”), for example, had released guidance documents outlining the ways in which AI tools in the workplace could violate the ADA or Title VII of the Civil Rights Act, particularly with respect to hiring. The Department of Labor had also issued guidance addressing wage and hour issues related to AI and laying out best practices for implementing these tools to ensure transparency in AI use and support workers who are impacted by AI. Both of these documents have been pulled from their respective agencies’ websites.
President Trump’s decision to appoint David Sacks as an “AI & Crypto Czar” also signals what retail employers can expect from the administration moving forward. Sacks is an entrepreneur and venture capitalist who has espoused pro-industry stances on his podcast, “All-In.” He also has a personal stake in AI being utilized by employers as the owner of “Glue,” a software program that integrates AI into workplace chats as a rival to platforms like Slack or Teams.
If the federal government does not regulate AI’s use in the workplace, states may attempt to fill this vacuum with legislation addressing emerging issues or counteracting the Trump administration’s actions. This could lead to a patchwork of different compliance standards for employers from state to state. New York City’s Local Law 144, for example, creates obligations for employers, including conducting bias audits where automated tools play a predominant role in hiring decisions (an illustrative calculation appears below). Illinois has prohibited employers from using AI in a manner that causes a discriminatory effect. Other states may further complicate this landscape in attempts to correct perceived issues with the use of AI in the workplace.
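To make the bias-audit obligation concrete, the short sketch below walks through the core arithmetic such audits typically involve: computing selection rates by demographic category and each category’s impact ratio against the most-selected category. The data and category names are hypothetical, and the sketch is a minimal illustration of the calculation only, not a substitute for the independent audit Local Law 144 requires.

# Hypothetical applicant and selection counts by demographic category.
applicants = {"Group A": 200, "Group B": 150, "Group C": 50}
selected = {"Group A": 60, "Group B": 30, "Group C": 5}

# Selection rate = number selected / number of applicants, per category.
selection_rates = {g: selected[g] / applicants[g] for g in applicants}

# Impact ratio = a category's selection rate divided by the highest selection rate.
highest_rate = max(selection_rates.values())
impact_ratios = {g: rate / highest_rate for g, rate in selection_rates.items()}

for group in applicants:
    print(f"{group}: selection rate {selection_rates[group]:.2f}, impact ratio {impact_ratios[group]:.2f}")

A low impact ratio for a category would flag the tool for closer scrutiny; the audit itself, and any required disclosures, must still follow the law’s specific methodology.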
While President Trump’s stance encourages the use of AI, retail employers should remember that existing anti-discrimination statutes may still provide a vehicle to challenge employers’ use of AI. For example, if AI used in hiring disadvantages a certain race, the employer could still face liability under Title VII. Retail employers should be on the lookout for further actions from the Trump administration and developments regarding AI in the coming year.
Europe – The AI Revolution Is Underway but Not Quite Yet in HR?

A couple of weeks ago we asked readers of this blog to answer a few questions on their organisation’s use of (generative) artificial intelligence, and we promised to circle back with the results. So, drum roll, the results are now in.
1. In our first question, we wanted to know whether your organisation allows its employees to use generative AI, such as ChatGPT, Claude or DALL-E.
While a modest majority allows it, almost 28% of respondents have indicated that use of genAI is still forbidden, and another 17% allow it only for certain positions or departments.
This first question was the logical build-up to the second:
2. If the use of genAI is allowed to any extent, does that mean the organisation has a clear set of rules around such use?
A solid 50% of respondents have already introduced guidelines in this respect. A further 22% are working on it. And that is indeed the sensible approach. It is important that employees know the organisation’s position on (gen)AI, whether they can use it and for what, or why they cannot. They should understand the risks of using genAI inappropriately and what the sanction may be if they use it without complying with company rules.
Essential in the rules of play is transparency. Management should have a good understanding of the areas within the organisation where genAI is being used. In particular when genAI is being used for research purposes or in areas where IP infringements may be a concern, it is essential that employees are transparent about the help they have had from their algorithmic co-worker. The risk of “hallucinations” in genAI is still very real, and knowing that work product has come about with the help of genAI should make a manager look at it with different and more attentive eyes.
Please also note in this respect that under the EU AI Act, as from last weekend, providers and deployers of AI systems must ensure that their employees and contractors using AI have an adequate degree of AI literacy, for example by implementing training. The required level of AI literacy is determined “taking into account [the employees’] technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used”.
Since we had anticipated there would be quite a number of organisations that still prohibit the use of genAI, we had also asked:
3. What was the main driver for companies’ prohibition of the use of genAI in the workplace?
The response was not surprising. Organisations are mostly concerned about the risk of errors in AI’s responses and of its inadvertently leaking their confidential information.
While this fear of leaks is completely justified for free applications, such as the free version of ChatGPT and popular search engines such as Bing and Google that are increasingly powered by Large Language Models (LLMs), it is largely unjustified for the paid versions. The large vendors’ business model depends on trust, and they commit that data submitted to the paid versions of their LLMs will not be reused for training purposes. To our knowledge, there have been no incidents, nor even the slightest indication, that the large vendors have disregarded their promises in this regard.
This leads to the somewhat ironic conclusion that prohibiting the use of genAI by your employees may be more likely to realise the risks that the company fears, as employees may then be tempted to use a free, less safe version of the application on their personal devices instead.
4. In which areas of HR do our respondents use AI?
Where respondents indicated other areas of use in HR, they mentioned intelligence gathering, improvement of communication and specific areas of recruitment, such as writing job descriptions and skills testing.
5. Does your organisation plan to increase its use of AI in the next twelve months?
A narrow majority responded that this would not be the case.
Those respondents who anticipated increased use of AI generally expected it to grow across all areas. Specific predictions for increased use were in areas such as HR bots for benefits enquiries, and forecasting.
6. If your organisation does not currently employ AI in HR, why not?
The response to this question is probably the most surprising: a majority of organisations that are not yet using AI in HR are not reluctant for philosophical, technical or employment relations reasons, but have simply not yet got round to it. It is expected that the next 12-18 months will see a significant increase in usage overall, which should lead to a similar uptick in the HR sector.
We ended our survey with perhaps the most delicate question:
7. Do you expect that in the next 12 to 24 months, there will be redundancies within your organisation due to increased use of AI?
For the large majority of organisations, this is not the case.
To this same question, ChatGPT itself responded the following:
The use of AI in businesses can indeed lead to job loss in certain sectors, especially in roles that rely heavily on routine, repetitive tasks. For example, administrative roles, customer service, or even certain manufacturing and warehouse jobs could be replaced by AI, as it can often perform these tasks more efficiently and cost-effectively. On the other hand, AI can also create new jobs, especially in fields like data analysis, machine learning, AI development, and management. Businesses will likely focus more on roles that require creativity and strategy, areas where human input is essential, like decision-making and improving customer relationships. The key will be how companies combine the use of AI with upskilling their workforce, enabling employees to adapt to the changing job landscape.
As is often – though certainly not always – the case, ChatGPT is not wrong about this. We didn’t ask it specifically about its impact on staffing levels in HR, but we think that considerable comfort can be taken from its reference to the continued sanctity of roles where “human input is essential”. It is a very far-off future where many of the more sensitive and difficult aspects of HR management will be accepted as adequately discharged by an algorithm.
The Opening Act: Significant Developments in Trump’s First Two Weeks
During the first two weeks in office, President Donald Trump’s administration released many policies impacting employers in areas like immigration, labor, and workplace safety, and reshaping federal regulatory and enforcement policies regarding artificial intelligence (AI) and unlawful employment discrimination and harassment.
Here is a roundup summarizing the key provisions of the executive orders and other policies from the first two weeks of the new administration.
Quick Hits
Changes to immigration policy included stopping entry of refugees and restricting birthright citizenship.
The federal government now recognizes only two genders, male and female. This policy included removing previous guidance that protected LGBTQ workers from discrimination and harassment.
Immigration Policy
On January 20, 2025, President Trump issued an executive order (EO 14160) limiting birthright citizenship. The executive order asserts that children born in the United States on or after February 19, 2025, who do not have at least one lawful permanent resident or U.S. citizen parent, will not have a claim to birthright citizenship.
On January 23, 2025, a federal judge in Seattle, WA, blocked enforcement of this executive order in response to four states (Washington, Illinois, Arizona, and Oregon) seeking a temporary restraining order. Two weeks later, on February 5, a Maryland federal judge issued a nationwide preliminary injunction blocking the executive order in response to a request by five pregnant undocumented women who argued that the order is unconstitutional and violates several federal laws.
A different executive order revisits and reviews the United States-Mexico-Canada Agreement (USMCA) and other U.S. trade agreements. The United States’ participation in the USMCA makes the TN professional work visa available for citizens of Canada and Mexico.
A separate executive order aims to utilize in-depth vetting and screening of all individuals seeking admission to the United States, including obtaining information to confirm any claims made by those individuals and assess public safety threats.
Another executive order suspended the entry of refugees into the United States under the United States Refugee Admissions Program (USRAP). That order took effect on January 27, 2025.
A separate executive order tightens enforcement of border policies. That includes:
detaining undocumented people “apprehended on suspicion of violating federal or state law,” and removing them promptly;
pursuing criminal charges against undocumented people and “those who facilitate their unlawful presence in the United States”;
terminating parole programs for Cubans, Haitians, Nicaraguans, and Venezuelans; and
utilizing advanced vetting techniques to determine familial relationships and biometrics scanning for all individuals encountered or apprehended by the U.S. Department of Homeland Security (DHS).
LGBTQ+ Employees
On January 20, 2025, President Trump issued EO 14168, which states that the federal government recognizes only two genders: male and female. The federal government will no longer use nonbinary gender categories in compliance and enforcement actions.
On January 28, 2025, U.S. Equal Employment Opportunity Commission (EEOC) Acting Chair Andrea R. Lucas rolled back much of the EEOC’s Biden-era guidance on antidiscrimination and antiharassment protections for LGBTQ+ employees.
On January 27, 2025, President Trump removed Democratic EEOC commissioners Charlotte A. Burrows and Jocelyn Samuels and discharged EEOC general counsel Karla Gilbride.
Labor
President Trump also took the unprecedented step of removing National Labor Relations Board (NLRB) Member Gwynne Wilcox, a Democratic appointee whose term was not set to end until August 2028. The president also discharged NLRB general counsel Jennifer Abruzzo before the end of her term and later tapped William Cowen, who was serving as the regional director for the NLRB’s Los Angeles Region Office (Region 21), as the new acting general counsel.
The discharge of the general counsel was expected after former President Biden discharged the general counsel who served during President Trump’s first term, which was upheld in the courts. However, the removal of a sitting NLRB member was surprising and leaves the Board without a quorum to hear cases. Former Member Wilcox has filed a lawsuit challenging her removal, which is likely to lead to a lengthy court case that could ultimately land before the Supreme Court of the United States.
Workplace Safety
The Occupational Safety and Health Administration’s (OSHA) proposed Biden-era rules on “Heat Injury and Illness Prevention in Outdoor and Indoor Work Settings” and the “Emergency Response Standard” appear to be on the chopping block following President Trump’s “Regulatory Freeze Pending Review” issued on January 20, 2025. The presidential memorandum directed the agency to refrain from issuing or proposing any new rules until a department or agency head designated by the president has had a chance to approve it.
Higher Education and Title IX
On January 31, 2025, the U.S. Department of Education announced that it would not enforce Title IX of the Education Amendments of 1972 in accordance with a 2024 Biden-era rule that had expanded the definition of “on the basis of sex” to include gender identity, sex stereotypes, sex characteristics, and sexual orientation, and mandated that schools allow students and employees to access facilities, programs, and activities consistent with their self-identified gender.
Instead, the department said it will enforce the protections under the prior 2020 Title IX rule. The change aligns the department with EO 14168 and follows federal court decisions that have vacated or enjoined the 2024 Title IX final rule, finding that it violated the plain text and original meaning of Title IX.
Artificial Intelligence (AI)
President Trump is also reshaping federal policy on artificial intelligence, moving away from the Biden administration’s focus on mitigating potential negative impacts on workers and consumers.
On January 23, 2025, President Trump signed EO 14179, “Removing Barriers to American Leadership in Artificial Intelligence.” The order states, “[i]t is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”
The EO came after President Trump, on his first day in office on January 20, 2025, rescinded President Biden’s EO 14110, which was signed in October 2023 and had sought to implement safeguards for the “responsible development and use of AI.”
Next Steps
President Trump’s recent executive orders and other actions over the first two weeks in office have disrupted labor and employment law and created uncertainty for employers, at least in the near term. It remains to be seen what the lasting effects will be, particularly as the administration appears to have more changes in store. However, some of the executive orders and other actions are being challenged, or are expected to be challenged, in the courts, which could answer questions about the constitutional authority of the president and the statutes creating federal agencies. It is unclear what the outcome of the court cases will be.