Some States Step Up Early to Regulate AI Risk Management
Key Takeaways
A global AI arms race may mean U.S. states are best positioned to regulate AI’s risks.
Colorado and Utah have enacted legislation for how AI is to be used with consumers.
Other states are emphasizing existing laws they say “have roles to play” in regulating AI.
In the span of one month, a 2023 executive order focused on artificial intelligence (AI) safety and security was repealed and replaced by an executive order focused on making the U.S. the global leader in AI innovation, while in the EU a liability directive developed in 2022 was abandoned in favor of a bolder, simpler and faster 2025 Commission work program with an “ambition to boost competitiveness.”
A ‘move fast and break things’ approach to an emerging technology arms race often has drawbacks. For example, the recent rise of DeepSeek provided a glimpse into what was previously unimaginable: an open-source large language model useful for a wide range of purposes, that’s fast, cheap and scalable. But within days it was hacked, sued and discredited.
While nations battle for AI supremacy by “removing barriers” and loosening regulations, 45 U.S. states introduced AI bills last year, and 31 states adopted resolutions or enacted legislation. Overall, hundreds of bills in 23 different AI-related categories have been considered. Two states, Colorado and Utah, stand out for their focus on consumer protection.
Colorado’s AI Act
The Colorado Artificial Intelligence Act (CAIA), which goes into effect on February 1, 2026, applies to developers and deployers of high-risk AI systems. A developer is an entity or individual that develops or intentionally and substantially modifies a high-risk AI system, and a deployer is an individual or entity that deploys a high-risk AI system. A high-risk AI system is one used as a substantial factor in making a consequential decision.
A consequential decision means a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the terms of, education, employment, a financial or lending service, a government service, a healthcare service, housing, insurance or a legal service.
These definitions can seem abstract when not applied to use cases. But a standout feature of the CAIA is its robust set of mitigation techniques, which include a safe harbor if the National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF) is considered when devising the required risk management policy and program.
The NIST AI RMF provides voluntary guidance to individuals and companies on how to best manage AI risks throughout an AI system’s lifecycle, often referred to as the implementation of Trustworthy AI, which includes such characteristics as reliability, safety, security, resilience, accountability, transparency and fairness.1
The CAIA requires that deployers and developers meet certain criteria to ensure they understand what is required to protect consumers from known or foreseeable risks. In addition to a risk management policy and program, covered entities must complete impact assessments at least annually and in some instances within 90 days of a change to an AI system.
An impact assessment under CAIA requires substantial documentation. For instance, the assessment must include such things as a statement, an analysis, a description and overview of the data used, metrics, a description of transparency measures, and post-deployment monitoring and user safeguards.
Utah’s AI Policy Act
Utah is also an early adopter of AI legislation. In fact, the Utah Artificial Intelligence Policy Act (UAIP) has been in effect since May 2024. Among other things, the UAIP seeks to simultaneously increase consumer protections and encourage responsible AI innovation by:
Mandating transparency through consumer disclosure requirements;2
Clarifying liability for AI business operations, including key terms and legal defenses;
Enabling innovation through a regulatory sandbox for responsible AI development, regulatory mitigation agreements (RMAs) and policy and rulemaking by a newly created Office of Artificial Intelligence Policy (OAIP).
The statutory inclusion of RMAs is a unique example of how Utah aspires to balance AI’s potential risks and rewards. The UAIP defines an RMA as an agreement between a participant, the OAIP and relevant state agencies, and defines regulatory mitigation to include restitution to users, cure periods, civil fines (if any) and other terms tailored to the AI technology seeking mitigation.
While not quite a safe harbor from all liability, RMAs provide AI developers, deployers and users with an opportunity to test for unintended consequences in a somewhat controlled environment. In December, the OAIP announced that it had executed its first RMA with ElizaChat, an app schools can offer teens for their mental health.
The 12-page RMA with ElizaChat is notable for its multiple references to cybersecurity – an area the UAIP intends to eventually establish standards for – and schedules. Included in Schedule A under the subheading “Mitigation Offered” are detailed requirements the ElizaChat app must meet, including a Testing Plan and notification obligations should certain incidents occur.3
As to AI liability, the UAIP specifies and clarifies that businesses cannot blame AI for any statutory offenses. The fact that AI “made the violative statement, undertook the violative act, or was used in furtherance of the violation” is irrelevant and cannot be used as a legal defense.4 The UAIP also contemplates the creation of AI cybersecurity standards through the OAIP.
The UAIP also establishes a Learning Lab through which businesses can partner with the OAIP to responsibly develop and test AI solutions. In this way, the UAIP sets the stage for a new era of AI regulation by being the first state law to embed cross-functional learning opportunities for future rules and regulation.
Other States Are Ready To Regulate
On the day this article was published, Virginia announced that it had passed an AI bill. It is similar to the Colorado and Utah acts, with references to AI disclosures, liability standards and the NIST AI RMF. Connecticut also reintroduced “An Act Concerning AI,” and New Mexico introduced an anti-algorithmic discrimination bill.
Not to be outdone, several states’ attorneys general (AGs) have issued guidance in the last few months on how they intend to protect consumers and what they expect from organizations that develop, sell and use AI, none more forcefully than AG Rosenblum of Oregon: “If you think the emerging world of AI is completely unregulated under the laws of Oregon, think again!”
AG Rosenblum discusses how Oregon’s Unlawful Trade Practices Act, Consumer Privacy Act and Equality Act affect the implementation of AI, even providing seven examples under the UTPA. AG Bonta of California followed suit a week later in a seven-page advisory, citing similar laws and providing nine examples of violations of California’s unfair competition law.
How to Prepare
To be sure, it’s still early. But states’ regulation of AI and their inclusion of voluntary guidance frameworks such as the NIST AI RMF or RMAs provide, at a minimum, iterative starting points for the types of industry standards that will emerge as legal obligations. Organizations should therefore consider whether their policies, procedures and plans will enable them to leverage these frameworks.
[1] For further background on the NIST AI RMF see here https://natlawreview.com/article/artificial-intelligence-has-nist-framework-cybersecurity-risk (May 2023) and here https://natlawreview.com/article/nist-releases-risk-profile-generative-ai (May 2024).
[2] Yesterday, the UAIP’s original sponsors proposed an amendment to the required disclosures section, narrowing its application to “high-risk artificial interactions,” which refers to interactions with generative AI involving health, financial, medical, and mental health data. If passed, this limitation to the required disclosures will go into effect in June of this year. https://le.utah.gov/~2025/bills/static/SB0226.html. If adopted, this limitation would go some way toward lessening the compliance burden for small and medium businesses.
[3] Id. at 8.
[4] Utah Code Ann. section 13-2-12(2).
Final Rule Implementing U.S. Outbound Investments Restrictions Goes into Effect
On October 28, 2024, the U.S. Department of the Treasury (Treasury Department) published a final rule (Final Rule) setting forth the regulations implementing Executive Order 14105 of August 9, 2023 (Outbound Investment Order), creating a regime regulating U.S. persons’ investments in a country of concern involving the semiconductors and microelectronics, quantum information technologies and artificial intelligence sectors.[1] According to the Annex to the Outbound Investment Order, China (including Hong Kong and Macau) is currently the only identified “Country of Concern.” The Final Rule took effect on January 2, 2025.
Who are the in-scope persons?
The Final Rule regulates the direct and indirect involvement of “U.S. Persons,” a term broadly defined to include (i) any U.S. citizen, (ii) any lawful permanent resident, (iii) any entity organized under the laws of the United States or any jurisdiction within the United States, including any foreign branches of any such entity, and (iv) any person in the United States.
The Final Rule requires a U.S. Person to take all reasonable steps to prohibit a “Controlled Foreign Entity”, a non-U.S. incorporated/organized entity, from making outbound investments that would be prohibited if undertaken by a U.S. Person. As such, the Final Rule extends its influence over any Controlled Foreign Entity of such U.S. Person.
The Final Rule also prohibits a U.S. Person from knowingly directing a transaction that would be prohibited by the Final Rule if engaged in by a U.S. Person.
Which outbound investments are in-scope?
“Covered Transactions” include equity investments, loan and debt financing conferring certain investor rights characteristic of equity investments, greenfield or brownfield investments, and investments in a joint venture (“JV”) or fund, in each case relating to a “Covered Foreign Person” (discussed below):
Equity investment: (i) acquisition of equity interest or contingent equity interest in a Covered Foreign Person; (ii) conversion of contingent equity interest (acquired after the effectiveness of the Final Rule) into equity interest in a Covered Foreign Person;
Loan or debt financing: provision of loan or debt financing to a Covered Foreign Person, where the U.S. Person is afforded an interest in profits, the right to appoint a director (or equivalent) or other comparable financial or governance rights characteristic of an equity investment but not typical of a loan;
Greenfield/brownfield investment: acquisition, leasing, or development of operations, land, property, or other assets in China (including Hong Kong and Macau) that the U.S. Person knows will result in the establishment of a Covered Foreign Person or its engagement in a Covered Activity; and
JV/fund investment: (i) entry into a JV with a Covered Foreign Person that the U.S. Person knows will engage or plans to engage in Covered Activities; (ii) acquisition of a limited partner or equivalent interest in a non-U.S. Person venture capital fund, private equity fund, fund of funds, or other pooled investment fund that will engage in a transaction that would be a Covered Transaction if undertaken by a U.S. Person.
What are in-scope transactions and carve-out transactions?
The Final Rule identifies three categories of Covered Transactions involving covered foreign persons – Notifiable Transactions, Prohibited Transactions, and Excepted Transactions.
A “Covered Foreign Person” includes the following persons engaging in “Covered Activities” (i.e. Notifiable or Prohibited Activities identified in the Final Rule) relating to a Country of Concern:
A person of China, Hong Kong or Macau, including: an individual who is a citizen or permanent resident of China (including Hong Kong and Macau) and is not a U.S. citizen or permanent resident of the United States; an entity organized under the laws of China (including Hong Kong and Macau), or headquartered in, incorporated in, or with a principal place of business in China (including Hong Kong and Macau); the government of China (including Hong Kong and Macau); or an entity that is directly or indirectly owned 50% or more by any person in any of the aforementioned categories.
A person that directly or indirectly holds a board seat, voting rights, equity interest, or contractual power to direct or cause the direction of the management or policies of any person that derives 50% or more of its revenue or net income, or incurs 50% or more of its capital expenditure or operating expenses (individually or in the aggregate), from China (including Hong Kong and Macau) (subject to a $50,000 minimum); and
A person from China (including Hong Kong or Macau) who enters into a JV that engages, plans to engage or will engage in a Covered Activity.
Notifiable and Prohibited Transactions
The Final Rule:
Requires U.S. Persons to notify the Treasury Department regarding transactions involving covered foreign persons that fall within the scope of Notifiable Transactions, and
Prohibits U.S. Persons from engaging in transactions involving Covered Foreign Persons that fall within the scope of Prohibited Transactions.
The delineation between a Notifiable Transaction and a Prohibited Transaction hinges on how impactful the transaction is as a threat to the national security of the United States: a Notifiable Transaction contributes to national security threats, while a Prohibited Transaction poses a particularly acute national security threat because of its potential to significantly advance the military, intelligence, surveillance, or cyber-enabled capabilities of a Country of Concern.
Specifically, a Prohibited Transaction involves one of the following Prohibited Activities, while a Notifiable Transaction involves one of the following Notifiable Activities:

Semiconductors & Microelectronics

Prohibited Activities:
– Develops or produces any electronic design automation software for the design of integrated circuits (ICs) or advanced packaging;
– Develops or produces (i) equipment for (a) performing volume fabrication of ICs or (b) performing volume advanced packaging, or (ii) any commodity, material, software, or technology designed exclusively for extreme ultraviolet lithography fabrication equipment;
– Designs any IC that meets or exceeds certain specified performance parameters[2] or is designed exclusively for operation at or below 4.5 Kelvin;
– Fabricates ICs with special characteristics;[3]
– Packages any IC using advanced packaging techniques.

Notifiable Activities:
– Designs, fabricates, or packages any IC where doing so is not a Prohibited Activity.

Quantum Information Technology

Prohibited Activities:
– Develops, installs, sells, or produces any supercomputer enabled by advanced ICs that can provide a theoretical compute capacity beyond a certain threshold;[4]
– Develops a quantum computer or produces any of its critical components;[5]
– Develops or produces any quantum sensing platform for any military, government intelligence, or mass-surveillance end use;
– Develops or produces any quantum network or quantum communication system designed or used for certain specified purposes.[6]

Notifiable Activities: None.

Artificial Intelligence (AI)

Prohibited Activities:
– Develops any AI system that is designed or used for any military, government intelligence, or mass-surveillance end use;
– Develops any AI system that is trained using a quantity of computing power greater than (a) 10^25 computational operations, or (b) 10^24 computational operations using primarily biological sequence data.

Notifiable Activities:
– Designs any AI system that is not a Prohibited Activity and that is (a) designed for any military, government intelligence, or mass-surveillance end use; (b) intended to be used for cybersecurity applications, digital forensic tools, penetration testing tools, or the control of robotic systems; or (c) trained using a quantity of computing power greater than 10^23 computational operations.
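To make these compute thresholds concrete, the sketch below applies a simplified, compute-only reading of the three figures cited above (10^25 operations, 10^24 operations where training primarily uses biological sequence data, and 10^23 operations). The function and its labels are hypothetical illustrations, not the rule’s actual test, which turns on the full text of the Final Rule and the other elements of the covered activities.

```python
# Illustrative sketch only: a simplified reading of the Final Rule's AI
# training-compute thresholds. The function name and return labels are
# hypothetical; a real determination depends on the full regulatory text.

def classify_ai_training_compute(ops: float, primarily_bio_sequence_data: bool) -> str:
    """Return a rough category based solely on training compute."""
    if ops > 1e25 or (primarily_bio_sequence_data and ops > 1e24):
        return "potentially prohibited (compute threshold exceeded)"
    if ops > 1e23:
        return "potentially notifiable (compute threshold exceeded)"
    return "below the compute-based thresholds"

# Example: a model trained with 5 x 10^24 operations on general web data
print(classify_ai_training_compute(5e24, primarily_bio_sequence_data=False))
# -> potentially notifiable (compute threshold exceeded)
```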
Excepted Transactions
The Final Rule sets forth the categories of Excepted Transactions, which the Treasury Department has determined present a lower likelihood of transferring tangible benefits to a Covered Foreign Person or are otherwise unlikely to present national security concerns. These include:
Investment in publicly traded securities: an investment in a publicly traded security (as defined under the Securities Exchange Act of 1934) denominated in any currency and traded on any securities exchange or over the counter in any jurisdiction;[7]
Investment in a security issued by a registered investment company: an investment by a U.S. Person in the security issued by an investment company or by a business development company (as defined under the Investment Company Act of 1940), such as an index fund, mutual fund, or ETF;
Derivative investment: a derivative investment that does not confer the right to acquire equity, rights, or assets of a Covered Foreign Person;
Small-size limited partnership investment: a limited partnership (or equivalent) investment in a venture capital fund, private equity fund, fund of funds, or other pooled investment fund, where the investment is at or below two million USD or the U.S. Person has secured a contractual assurance that its capital in the fund will not be used to engage in a Covered Transaction;
Full Buyout: acquisition by a U.S. Person of all equity or other interests held by a China-linked person, in an entity that ceases to be a Covered Foreign Person post-acquisition;
Intracompany transaction: a transaction between a U.S. Person and its Controlled Foreign Entity (subsidiary) to support ongoing operations or other activities that are not Covered Activities;
Pre-existing binding commitment: a transaction for binding, uncalled capital commitment entered into before January 2, 2025;
Syndicated loan default: acquisition of a voting interest in a Covered Foreign Person by a U.S. Person upon default of a syndicated loan made by the lending syndicate and with passive U.S. Person participation; and
Equity-based compensation: receipt of employment compensation by a U.S. Person in the form of equity or option incentives and the exercising of such incentives.
What is the knowledge standard?
The Final Rule provides that certain provisions will only apply if a U.S. Person has Knowledge of the relevant facts or circumstances at the time of a transaction. “Knowledge” under the Final Rule includes (a) actual knowledge of the existence or the substantial certainty of occurrence of a fact or circumstance, (b) awareness of high probability of the existence of a fact, circumstance or future occurrence, or (c) reason to know of the existence of a fact or circumstance.
The determination of Knowledge will be made based on information a U.S. Person had or could have had through a reasonable and diligent inquiry, which should be based on the totality of relevant facts and circumstances, including without limitation, (a) whether a proper inquiry has been made, (b) whether contractual representations or warranties have been obtained, (c) whether efforts have been made to obtain and assess non-public and public information; (d) whether there is any warning sign; and (e) whether there is purposeful avoidance of efforts to learn and seek information.
Key points relating to the notification filing procedures
A U.S. Person’s obligation to notify the Treasury Department is triggered when it knows relevant facts or circumstances related to a Notifiable Transaction entered into by itself or its Controlled Foreign Entity. The U.S. Person should follow the electronic filing instructions to submit the notification at https://home.treasury.gov/policy-issues/international/outbound-investment-program.
The filing of the notification is time-sensitive. The filing deadline is no later than 30 days following the completion of a Notifiable Transaction or otherwise no later than 30 days after acquiring such knowledge if a U.S. Person becomes aware of the transaction after its completion. If a filing is made prior to the completion of a transaction and there are material changes to the information in the original filing, the notifying U.S. Person shall update the notification no later than 30 days following the completion of the transaction.
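As a minimal illustration of the 30-day window described above, the sketch below computes a hypothetical deadline using standard date arithmetic; the helper and example dates are assumptions, and the actual deadline analysis depends on the facts of the transaction and the rule’s text.

```python
# Illustrative sketch only: the 30-day notification window described above.
# The helper name and example dates are hypothetical.
from datetime import date, timedelta

def notification_deadline(trigger_date: date, window_days: int = 30) -> date:
    """Deadline falling `window_days` after the transaction's completion date
    (or after the date the U.S. Person first learned of the completed deal)."""
    return trigger_date + timedelta(days=window_days)

# Example: a Notifiable Transaction completed on March 3, 2025
print(notification_deadline(date(2025, 3, 3)))  # 2025-04-02
```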
In addition to the detailed information requested under the Final Rule, the CEO or another designee of the U.S. Person must certify the accuracy and completeness, in all material respects, of the information submitted.
What are the consequences of non-compliance?
The Treasury Department may impose civil and administrative penalties for violations of the Final Rule, including engaging in Prohibited Transactions, failing to report Notifiable Transactions, making false representations or omissions, or engaging in evasive actions or conspiracies to violate the Final Rule. The Treasury Department may impose fines, require divestment, or refer violations to the U.S. Department of Justice for criminal prosecution.
U.S. Persons may submit a voluntary self-disclosure if they believe their conduct may have violated any part of the Final Rule. Such self-disclosure will be taken into consideration during the Treasury Department’s determination of the appropriate response to the self-disclosed activity.
Texas AG Alleges DeepSeek Violates Texas Privacy Law
On February 14, 2025, Attorney General Ken Paxton announced an investigation into DeepSeek, a Chinese artificial intelligence (“AI”) company, regarding its privacy practices and compliance with Texas law. The investigation also examines DeepSeek’s claims that its AI model rivals leading global models, including OpenAI’s technology.
As part of the investigation, Attorney General Paxton has issued Civil Investigative Demands (“CIDs”) to Google and Apple, requesting their analysis of the DeepSeek application and any documentation DeepSeek submitted before its app became available to consumers.
In a statement, Attorney General Paxton expressed concerns over DeepSeek’s potential connections to the Chinese Communist Party (“CCP”), and its implications for data security and AI competition. Citing national security and privacy risks, Paxton emphasized Texas’ commitment to upholding data protection laws and ensuring compliance with state regulations.
Additionally, on January 28, 2025, the Attorney General banned DeepSeek’s platform from all Office of the Attorney General devices, citing security concerns.
As of this publication date, the investigation remains ongoing.
Oregon’s AI Guidance: Old Laws in Scope for New AI
The Oregon AG’s Office, along with the state’s Department of Justice, issued guidance late last year on how state laws apply to the ways businesses use AI. The guidance may be two months old, but the cautions are still timely. The guidance seeks to give companies direction on times when AI uses might be regulated by existing state laws.
As outlined in the guidance, the Oregon state laws that may apply to a company’s use of AI include a variety of consumer protection laws. Namely, the state’s “comprehensive” privacy law (the Consumer Privacy Act), its Unlawful Trade Practices Act, the Equality Act, and its data security law (the Consumer Information Protection Act). Some key takeaways from the guidance:
Notice: A reminder to companies that they could be viewed as violating Oregon’s “comprehensive” privacy law if they do not disclose how they use personal information with their AI tools. Additionally, the AG may view it as a violation of Oregon’s Unlawful Trade Practices Act if a company does not explain a potential “material defect” with an AI tool, for example, a third-party virtual assistant placed on its website that is known to give incorrect information.
Choice: The guidance reminds companies that under Oregon’s privacy law, consent is required before processing sensitive information, which may occur if putting that information into AI tools. In addition, the guidance reminds companies that the same law requires giving consumers the ability to (a) withdraw consent (when such consent was required to process information) and (b) opt out of AI profiling for significant decisions. Companies will need to keep this in mind, inter alia, when inputting personal information into AI tools.
Transparency: The guidance outlines some potential AI uses that might violate the state’s Unlawful Trade Practices Act. For example, not being clear that someone is interacting with an AI tool. Or, misleading individuals about the AI’s capabilities or how the company will use AI-generated content. Another example given is using AI-generated voices for robocalling without accurately disclosing the caller’s identity.
Bias: The guidance states that using AI in a way that discriminates based on race, gender or other protected characteristics would violate Oregon’s Equality Act.
Security: The guidance reminds companies of the obligations of the state’s data security law. Thus, if an AI tool incorporates personal information, or a business uses personal information in connection with the tool, it will need to keep that law’s obligations in mind. These include obligations to have in place “reasonable safeguards” to protect personal information.
The Impact of AI Regulations on Insurtech
Insurtech is steeped in artificial intelligence (AI), leveraging the technology to improve insurance marketing, sales, underwriting, claims processing, fraud detection and more. Insurtech companies are likely only scratching the surface of what is possible in these areas. In parallel, the regulation of AI is expected to create additional legal considerations at each step of the design, deployment and operation of AI systems working in these contexts.
Legal Considerations and AI Exposure
As with data privacy regulations, the answer to the question “Which AI laws apply?” is highly fact-specific and often dependent on the model’s exposure or data input. Applicable laws tend to trigger based on the types of data or location of the individuals whose data is leveraged in training the models rather than the location of the designer or deployer. As a result, unless a model’s use is strictly narrowed to a single jurisdiction, there is likely to be exposure to several overlapping regulations (in addition to data privacy concerns) impacting the design and deployment of an Insurtech AI model.
Managing Regulatory Risk in AI Design
Given this complexity, the breadth of an Insurtech AI model’s exposure can be an important threshold design consideration. Companies should adequately assess the level of risk from the perspective of limiting unnecessary regulatory oversight or creating the potential for regulatory liabilities, such as penalties or fines. For instance, an Insurtech company leveraging AI should consider if the model in question is intended to be used for domestic insurance matters only and if there is value in leveraging data related to international data subjects. Taking steps to ensure that the model has no exposure to international data subjects can limit the application of extraterritorial, international laws governing AI and minimize the potential risk of leveraging an AI solution. On the other hand, if exposure to the broadest possible data is desirable from an operations standpoint, for instance, to augment training data, companies need to be aware of the legal ramifications of such decisions before making them.
Recent State-Level AI Legislation
In 2024, several U.S. states passed AI laws governing the technology’s use, several of which can impact Insurtech developers and deployers. Notably, state-level AI bills are not uniform. These laws range from comprehensive regulatory frameworks, such as Colorado’s Artificial Intelligence Act, to narrower disclosure-based laws such as California’s AB 2013, which will require AI developers to publicly post documentation detailing their model’s training data. Several additional bills relating to AI regulation are already pending in 2025, including:
Massachusetts’ HD 3750: Would require health insurers to disclose their use of AI, including, but not limited to, in the claims review process, and to submit annual reports regarding training sets as well as an attestation regarding bias minimization.
Virginia’s HB 2094: Known as the High-Risk Artificial Intelligence Developer and Deployer Act, would require the implementation of a risk management policy and program for “high-risk artificial intelligence systems,” defined to include “any artificial intelligence system that is specifically intended to autonomously make, or be a substantial factor in making, a consequential decision” (subject to certain exceptions).
Illinois’ HB 3506: Among other things, this bill would require developers to publish risk assessment reports every 90 days and to complete annual third-party audits.
The Growing Importance of Compliance
With the federal government’s evident step back in pursuing an overarching AI regulation, businesses can expect state authorities to take the lead in AI regulation and enforcement. Given the broad and often consequential use of AI in the Insurtech context, and the expectation that this use will only increase over time given its utility, businesses in this space are advised to keep a close watch on current and pending AI laws to ensure compliance. Non-compliance can raise exposure not only to state regulators tasked with enforcing these regulations but also potentially to direct consumer lawsuits. As noted in our prior advisory, being well-positioned for compliance is also imperative for the market from a transactional perspective.
The Insurtech space is growing in parallel with the expanding patchwork of U.S. AI regulations. Prudent growth in the industry requires awareness of the associated legal dynamics, including emerging regulatory concepts across the nation.
Indian Music Industry Enters the Global Copyright Debate Over AI
The legal battles surrounding generative AI and copyright continue to escalate with prominent players in the Indian music industry now seeking to join an existing lawsuit against OpenAI, the creator of ChatGPT. On February 13, 2025, industry giants such as Saregama, T-Series, and the Indian Music Industry (IMI) presented their concerns in a New Delhi court, arguing that OpenAI’s methods for training its AI models involve extracting protected song lyrics, music compositions, and recordings without proper licensing or compensation. This development follows a broader trend of copyright holders challenging generative AI companies, as evidenced by similar claims in the U.S. and Europe.
This case was originally filed by Asian News International (ANI), a leading Indian news agency, which alleged that OpenAI had used its copyrighted content without permission to train its AI models. Since then, the lawsuit has drawn interest from music companies, book publishers, and news organizations, all highlighting the alleged economic harm and intellectual property concerns stemming from these practices in India. The proceedings emerge amid a global backlash against the use of copyrighted materials in AI training. In November 2024, GEMA, Germany’s music licensing body, filed a lawsuit against OpenAI, alleging that the company reproduced protected lyrics without authorization. In parallel, lawsuits from authors and publishers in the U.S. have accused OpenAI and other AI platforms of improperly using copyrighted materials as training data.
The unfolding litigation raises critical questions about the boundaries and applicability of ‘fair use’ within the context of AI in the digital age. While OpenAI maintains that its reliance on publicly available data falls within fair use principles, commentators warn that a ruling against the tech giant could set a precedent that reshapes AI training practices not only in India but worldwide—given the global nature of AI development and jurisdiction-specific nuances of copyright law. As courts grapple with these complex issues, both creative industries and the broader tech community are watching closely to understand how emerging precedent and legal frameworks around the world might influence future AI development and deployment.
As legal challenges mount globally, this litigation is another reminder for businesses developing AI models or integrating AI technologies to proactively assess data privacy and sourcing practices, secure appropriate licenses for copyrighted content, and thoroughly review existing agreements and rights to identify any issues or ambiguities regarding the scope of permitted AI use cases. Beyond obtaining necessary licenses, companies should implement targeted risk mitigation strategies, such as maintaining comprehensive records of data sources and corresponding licenses, establishing internal and (where appropriate) external policies that define acceptable AI use and development, and conducting regular audits to ensure compliance. For any company seeking to unlock AI solutions and monetization opportunities while safeguarding its business interests, engaging qualified local legal counsel early in the process is essential for effectively navigating the evolving global patchwork of fair use, intellectual property laws, and other relevant regulations.
Charting the Future of AI Regulation: Virginia Poised to Become Second State to Enact Law Governing High-Risk AI Systems
Virginia has taken a step closer to becoming the second state (after Colorado) to enact comprehensive legislation addressing discrimination stemming from the use of artificial intelligence (AI), with the states taking different approaches to this emerging regulatory challenge.
On February 12, 2025, the Virginia state senate passed the High-Risk Artificial Intelligence Developer and Deployer Act (H.B. 2094), which, if signed into law, will regulate the use of AI in various contexts, including when it is used to make decisions regarding “access to employment.” The legislation now heads to Governor Glenn Youngkin’s desk for signature. If signed, the law will come into force on July 1, 2026, and will establish new compliance obligations for businesses that deploy “high-risk” AI systems affecting Virginia “consumers,” including job applicants.
Quick Hits
If signed into law by Governor Youngkin, Virginia’s High-Risk Artificial Intelligence Developer and Deployer Act (H.B. 2094) will go into effect on July 1, 2026, giving affected businesses plenty of time to understand and prepare for its requirements.
The legislation applies to AI systems that autonomously make—or significantly influence—consequential decisions, such as lending, housing, education, and healthcare, and potentially job hiring as well.
Although H.B. 2094 excludes individuals acting in a commercial or employment context from the definition of “consumer,” the term “consequential decision” specifically includes decisions with a material legal or similar effect regarding “access to employment,” such that job applicants are ostensibly covered by the requirements and prohibitions under a strict reading of the text.
Overview
Virginia’s legislation establishes a duty of reasonable care for businesses employing automated decision-making systems in several regulated domains, including employment, financial services, healthcare, and other consequential sectors. The regulatory framework applies specifically to “high-risk artificial intelligence” systems that are “specifically intended to autonomously” render or be a substantial factor in rendering decisions—statutory language that significantly narrows the legislation’s scope compared to Colorado’s approach. A critical distinction in the Virginia legislation is the requirement that AI must constitute the “principal basis” for a decision to trigger the law’s anti-discrimination provisions. This threshold requirement creates a higher bar for establishing coverage than Colorado’s “substantial factor” standard.
Who Is a ‘Consumer’?
A central goal of this law is to safeguard “consumers” from algorithmic discrimination, especially where automated systems are used to make consequential decisions about individuals. The legislation defines a “consumer” as a natural person who is a resident of Virginia and who acts in an individual or household context. And, as with the Virginia Consumer Data Protection Act, H.B. 2094 contains a specific exclusion for individuals acting in a commercial or employment context.
One potential source of confusion is how “access to employment” can be a “consequential decision” under the law—while simultaneously excluding those in an employment context from the definition of “consumers.” The logical reading of these conflicting definitions is that job applicants do not act in an employment capacity on behalf of a business; instead, they are private individuals seeking employment for personal reasons. In other words, if Virginia residents are applying for a job and an AI-driven hiring or screening tool is used to evaluate their candidacy, a purely textual reading of the legislation suggests that they remain consumers under the statute because they are acting in a personal capacity.
Conversely, once an individual becomes an employee, the employee’s interactions with the business (including the business’s AI systems) are generally understood to reflect action undertaken within an employment context. Accordingly, if an employer uses a high-risk AI system for ongoing employee monitoring (e.g., measuring performance metrics, time tracking, or productivity scores), the employee might no longer be considered a “consumer” under H.B. 2094.
High-Risk AI Systems and Consequential Decisions
H.B. 2094 regulates only those artificial intelligence systems deemed “high-risk.” Such systems autonomously make—or are substantial factors in making—consequential decisions that affect core rights or opportunities, such as admissions to educational programs and other educational opportunities, approval for lending services, the provision or denial of housing or insurance, and, as highlighted above, access to employment. The legislature included these provisions to curb “algorithmic discrimination,” which is the illegal disparate treatment or unfair negative effects that occur on the basis of protected characteristics, such as race, sex, religion, or disability, and result from the use of automated decision-making tools. And, as we have seen with other, more narrowly focused laws in other jurisdictions, even if the developer or deployer does not intend to use an AI tool to engage in discriminatory practice, merely using an AI tool that produces such biased outcomes may trigger liability.
H.B. 2094 also includes a list of nineteen types of technologies that are specifically excluded from the definition of a “high-risk artificial intelligence system.” One notable carve-out is “anti-fraud technology that does not use facial recognition technology.” This is particularly relevant as the prevalence of fraudulent remote worker job applicants increases and more companies seek effective mechanisms to address such risks. Cybersecurity tools, anti-malware, and anti-virus technologies are likewise entirely excluded for obvious reasons. Among the other more granular exclusions, the legislation takes care to specify that spreadsheets and calculators are not considered high-risk artificial intelligence. Thus, those who harbor anxieties about the imminent destruction of pivot tables can breathe easy: spreadsheet formulas will not be subject to these heightened regulations.
Obligations for Developers
Developers—entities that create or substantially modify high-risk AI systems—are subject to a “reasonable duty of care” to protect consumers from known or reasonably foreseeable discriminatory harms. Before providing a high-risk AI system to a deployer—entities that use high-risk AI systems to make consequential decisions in Virginia—developers must disclose certain information (such as the system’s intended uses), known limitations, steps taken to mitigate algorithmic discrimination, and information intended to assist the deployer with performing its own ongoing monitoring of the high-risk AI system for algorithmic discrimination. Developers must update these disclosures within ninety days of making any intentional and substantial modifications that alter the system’s risks. Notably, developers are also required to either provide or, in some instances, make available extensive amounts of documentation relating to the high-risk AI tool they develop, including legally significant documents like impact assessments and risk management policies.
In addition, H.B. 2094 appears to take aim at “deep fakes” by mandating that if a developer uses a “generative AI” model to produce audio, video, or images (“synthetic content”), a detectable marking or other method that ensures consumers can identify the content as AI-generated will generally be required. The rules make room for creative works and artistic expressions so that the labeling requirements do not impair legitimate satire or fiction.
Obligations for Deployers
Like developers, deployers must also meet a “reasonable duty of care” to prevent algorithmic discrimination. H.B. 2094 requires deployers to devise and implement a risk management policy and program specific to the high-risk AI system they are using. Risk management policies and programs that are designed, implemented, and maintained pursuant to H.B. 2094, and which rely upon the standards and guidance articulated in frameworks like the National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF) or ISO/IEC 42001, presumptively demonstrate compliance.
Prior to putting a high-risk AI system into practice, deployers must complete an impact assessment that considers eight separate enumerated issues, including the system’s purpose, potential discriminatory risks, and the steps taken to mitigate bias. As with data protection assessments required by the Virginia Consumer Data Protection Act, a single impact assessment may be used to demonstrate compliance with respect to multiple comparable high-risk AI systems. Likewise, under H.B. 2094, an impact assessment used to demonstrate compliance with another similarly scoped law or regulation with similar effects, may be relied upon. In all cases, however, the impact assessment relied upon must be updated when the AI system undergoes a significant update and must be retained for at least three years.
Deployers also must clearly inform consumers when a high-risk AI system will be used to make a consequential decision about them. This notice must include information about:
(i) the purpose of such high-risk artificial intelligence system,
(ii) the nature of such system,
(iii) the nature of the consequential decision,
(iv) the contact information for the deployer, and
(v) a plain-language description of such artificial intelligence system.
Any such disclosures must be updated within thirty days of the deployer’s receipt of notice from the developer of the high-risk AI system that it has made intentional and significant updates to the AI system. Additionally, the deployer must “make available, in a manner that is clear and readily available, a statement summarizing how such deployer manages any reasonably foreseeable risk of algorithmic discrimination that may arise from the use or deployment of the high-risk artificial intelligence system.” In the case of an adverse decision—such as denying a loan or rejecting a job application—the deployer must disclose the principal reasons behind the decision, including whether the AI system was the determining factor, and give the individual an opportunity to correct inaccuracies in the data or appeal the decision.
Exemptions, Cure Periods, and Safe Harbors
Although H.B. 2094 applies broadly, it exempts businesses that operate in certain sectors or engage in regulated activities for which equivalent or more stringent regulations are already in place. For example, federal agencies and regulated financial institutions may be exempted if they adhere to their own AI risk standards. Similarly, H.B. 2094 provides partial exemptions for Health Insurance Portability and Accountability Act (HIPAA)–covered entities or telehealth providers in limited situations, including those where AI-driven systems generate healthcare recommendations but require a licensed healthcare provider to implement those recommendations, or where an AI system is used for administrative, quality measurement, security, or internal cost or performance improvement functions.
H.B. 2094 also contains certain provisions that are broadly protective of businesses. For example, the legislation conspicuously does not require businesses to disclose trade secrets, confidential information, or privileged information. Moreover, entities that discover and cure a violation before the attorney general takes enforcement action may also avoid liability if they promptly remediate the issue and inform the attorney general. And, the legislation contains a limited “safe harbor” in the form of a (rebuttable) presumption that developers and deployers of high-risk AI systems have met their duty of care to consumers if they comply with the applicable operating standards outlined in the legislation.
Enforcement and Penalties
Under H.B. 2094, only the attorney general may enforce the requirements described in the legislation. Nevertheless, the potential enforcement envisioned could be very impactful, as violations can lead to civil investigative demands, injunctive relief, and civil penalties. Generally, non-willful violations of H.B. 2094 may incur up to $1,000 in fines plus attorneys’ fees, expenses, and costs, while willful violations can result in fines of up to $10,000 per instance along with attorneys’ fees, expenses, and costs. Notably, each violation is counted separately, so penalties can accumulate quickly if an AI system impacts many individuals.
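For a rough sense of how quickly per-violation penalties can accumulate, the sketch below multiplies the caps described above by a hypothetical number of affected individuals; it ignores attorneys’ fees, expenses, and costs, and it is an illustration rather than a reading of how a court would count violations.

```python
# Illustrative sketch only: upper-bound penalty math under the caps described
# above ($1,000 per non-willful violation, $10,000 per willful violation,
# with each violation counted separately). Not legal advice.

def max_civil_penalty(violations: int, willful: bool) -> int:
    """Upper bound on civil penalties for a given number of violations."""
    per_violation_cap = 10_000 if willful else 1_000
    return violations * per_violation_cap

# Example: a high-risk AI screening tool that affected 500 job applicants
print(max_civil_penalty(500, willful=False))  # 500000
print(max_civil_penalty(500, willful=True))   # 5000000
```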
Looking Forward
Even though the law would not take effect until July 1, 2026, if signed by the governor, organizations that develop or deploy high-risk AI systems may want to begin compliance planning. By aligning with widely accepted frameworks like the NIST AI RMF and ISO/IEC 42001, businesses may establish a presumption of compliance. And, from a practical perspective, this early adoption can help mitigate legal risks, enhance transparency, and build trust among consumers—which can be particularly beneficial with respect to sensitive issues like hiring decisions.
Final Thoughts
Virginia’s new High-Risk Artificial Intelligence Developer and Deployer Act signals a pivotal moment in the governance of artificial intelligence at the state level and is a likely sign of things to come. The law’s focus on transparent documentation, fairness, and consumer disclosures underscores the rising demand for responsible AI practices. Both developers and deployers must understand the scope of their responsibilities, document their AI processes and make sure consumers receive appropriate information about them, and stay proactive in risk management.
President Trump’s Artificial Intelligence (AI) Action Plan Takes Shape as NSF, OSTP Seek Comments
On January 23, 2025, as one of the first actions of his second term, President Trump signed Executive Order (EO) 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” making good on a campaign promise to rescind Executive Order 14110 (known colloquially as the Biden AI EO).
It is not surprising that AI was at the top of the agenda for President Trump’s second term. In his first term, Trump was the first president to issue an EO on AI. On February 11, 2019, he issued Executive Order 13859, Maintaining American Leadership in Artificial Intelligence. This was a first-of-its-kind EO to specifically address AI, recognizing the importance of AI to the economic and national security of the United States. In it, the Trump Administration laid the foundation for investment in the future of AI by committing federal funds to double investment in AI research, establishing national AI research institutes, and issuing regulatory guidance for AI development in the private sector. The first Trump Administration later established guidance for federal agency adoption of AI within the government.
The current EO gives the Assistant to the President for Science and Technology, the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs, in coordination with agency heads they deem relevant, 180 days—until July 22, 2025—to prepare an AI Action Plan to replace the policies that have been rescinded from the Biden Administration.
OSTP/NSF RFI
To develop an AI Action Plan within that deadline, the National Science Foundation’s Networking and Information Technology Research and Development (NITRD) National Coordination Office (NCO)—on behalf of the Office of Science and Technology Policy (OSTP)—has issued a Request for Information (RFI) on the Development of an Artificial Intelligence (AI) Action Plan. Comments are due by March 15, 2025.
This is a unique opportunity to provide the second Trump Administration with important real-world, on-the-ground feedback. As the RFI states, this administration intends to use these comments to “define the priority policy actions needed to sustain and enhance America’s AI dominance, and to ensure that unnecessarily burdensome requirements do not hamper private sector AI innovation.”
Epstein Becker Green and its Artificial Intelligence practice group, along with its health care, employment, and regulatory teams, are closely monitoring how the administration will address the regulation of health care AI and workplace AI in this plan. During President Trump’s first term, the administration focused its AI policy primarily around national security. Given the great expansion of the types and uses of AI tools since President Trump’s first term, we anticipate the Trump Administration will broaden its regulatory reach during this term—with the aim of “enhancing America’s global AI dominance.”
We have seen an explosion of AI tools adopted by our clients within health care—both clinical and administrative—as well as for employment decision-making. We work closely with clients to manage enterprise risk and drive strategic edge through AI innovation and look forward to helping shape the current administration’s AI policies through this and other opportunities for engagement with federal policymakers.
Submission Guidelines
OSTP seeks input on the highest priority policy actions that should be in the new AI Action Plan. Responses can address any relevant AI policy topic, including but not limited to: hardware and chips, data centers, energy consumption and efficiency, model development, open source development, application and use (either in the private sector or by government), explainability and assurance of AI model outputs, cybersecurity, data privacy and security throughout the lifecycle of AI system development and deployment (to include security against AI model attacks), risks, regulation and governance, technical and safety standards, national security and defense, research and development, education and workforce, innovation and competition, intellectual property, procurement, international collaboration, and export controls.
OSTP encourages respondents to suggest concrete AI policy actions needed to address the topics raised. Comments may be submitted by email to [email protected] or by mail at the address on page 2 of the RFI. Email submissions should be machine-readable, not copy-protected, and include “AI Action Plan” in the subject heading. Additional guidelines, including font and page limits, appear on page 2.
FTC Deadlock May End Next Week
On Tuesday, February 25, 2025, the Senate Committee on Commerce, Science and Transportation will hold a hearing to consider the nomination of Mark Meador to serve as a Federal Trade Commissioner. Meador previously served in the Department of Justice Antitrust Division and the FTC’s Health Care Division. Most recently, he was a founding partner at Kressin Meador Powers LLC, where he was an antitrust litigator. If confirmed, Mr. Meador will be the third Republican Commissioner (along with Commissioners Andrew Ferguson and Melissa Holyoak) and will complete the full roster of five Commissioners. Meador’s confirmation would also end the deadlock that currently plagues the Commission and begin a Republican-majority era at the FTC.
On January 23, 2025, the FTC approved a motion to give Chairman Ferguson the authority to comply with President Trump’s executive orders ending Diversity, Equity, and Inclusion initiatives across the federal government. Following the anticipated confirmation of Mr. Meador, we can also expect the FTC to slow the rulemaking pace of the prior administration, increase scrutiny of Big Tech (including aggressive antitrust enforcement against the industry’s largest players), and reduce regulation of Artificial Intelligence technologies. Maybe we’ll even see updates to the Green Guides.
How to Report Cyber, AI, and Emerging Technologies Fraud and Qualify for an SEC Whistleblower Award
SEC Forms Cyber and Emerging Technologies Unit
On February 20, 2025, the SEC announced the creation of the Cyber and Emerging Technologies Unit (CETU) to focus on combatting cyber-related misconduct and to protect retail investors from bad actors in the emerging technologies space. In announcing the formation of the CETU, Acting Chairman Mark T. Uyeda said:
The unit will not only protect investors but will also facilitate capital formation and market efficiency by clearing the way for innovation to grow. It will root out those seeking to misuse innovation to harm investors and diminish confidence in new technologies.
As detailed below, the SEC’s press release identifies CETU’s seven priority areas to combat fraud and misconduct. Whistleblowers can provide original information to the SEC about these types of violations and qualify for an award if their tip leads to a successful SEC enforcement action. The largest SEC whistleblower awards to date are:
$279 million SEC whistleblower award (May 5, 2023)
$114 million SEC whistleblower award (October 22, 2020)
$110 million SEC whistleblower award (September 15, 2021)
$104 million SEC whistleblower award (August 4, 2023)
$98 million SEC whistleblower award (August 23, 2024)
SEC Whistleblower Program
Under the SEC Whistleblower Program, the SEC will issue awards to whistleblowers who provide original information that leads to successful enforcement actions with total monetary sanctions in excess of $1 million. A whistleblower may receive an award of between 10% and 30% of the total monetary sanctions collected. The SEC Whistleblower Program allows whistleblowers to submit tips anonymously if represented by an attorney in connection with their disclosure.
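As a back-of-the-envelope illustration of the award range described above, the sketch below treats the $1 million sanctions threshold and the 10% to 30% range as the only inputs; the function is hypothetical, and the SEC’s actual award determinations weigh many additional factors.

```python
# Illustrative sketch only: the 10%-30% award range on collected sanctions,
# available only when total monetary sanctions exceed $1 million.
# The function is a hypothetical simplification of the program's rules.

def award_range(sanctions_collected: float) -> tuple[float, float] | None:
    """Return the (minimum, maximum) potential award, or None if the
    $1 million sanctions threshold is not met."""
    if sanctions_collected <= 1_000_000:
        return None
    return (0.10 * sanctions_collected, 0.30 * sanctions_collected)

# Example: $50 million in total monetary sanctions collected
print(award_range(50_000_000))  # (5000000.0, 15000000.0)
```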
In its short history, the SEC Whistleblower Program has had a tremendous impact on securities enforcement and has been replicated by other domestic and foreign regulators. Since 2011, the SEC has received an increasing number of whistleblower tips in nearly every fiscal year. In fiscal year 2024, the SEC received nearly 25,000 whistleblower tips and awarded over $225 million to whistleblowers.
The uptick in received tips, paired with the sizable awards given to whistleblowers, reflects the growth and continued success of the whistleblower program. See some of the SEC whistleblower cases that have resulted in large awards.
CETU Priority Areas for SEC Enforcement
The CETU will target seven areas of fraud and misconduct for SEC enforcement:
Fraud committed using emerging technologies, such as artificial intelligence (AI) and machine learning. For example, the SEC charged QZ Asset Management for allegedly falsely claiming that it would use its proprietary AI-based technology to help generate extraordinary weekly returns while promising “100%” protection for client funds. In a separate action, the SEC settled charges against investment advisers Delphia and Global Predictions for making false and misleading statements about their purported use of AI in their investment process.
Use of social media, the dark web, or false websites to perpetrate fraud. For example, the SEC charged Abraham Shafi, the founder and former CEO of Get Together Inc., a privately held social media startup known as “IRL,” for raising approximately $170 million from investors by allegedly fraudulently portraying IRL as a viral social media platform that organically attracted the vast majority of its purported 12 million users. In reality, IRL spent millions of dollars on advertisements that offered incentives to download the IRL app. Shafi hid those expenditures with offering documents that significantly understated the company’s marketing expenses and by routing advertising platform payments through third parties.
Hacking to obtain material nonpublic information. For example, the SEC brought charges against a U.K. citizen for allegedly hacking into the computer systems of public companies to obtain material nonpublic information and using that information to make millions of dollars in illicit profits by trading in advance of the companies’ public earnings announcements.
Takeovers of retail brokerage accounts. For example, the SEC charged two affiliates of JPMorgan Chase & Co. for failures including misleading disclosures to investors, breach of fiduciary duty, prohibited joint transactions and principal trades, and failures to make recommendations in the best interest of customers. According to the SEC’s order, a JP Morgan affiliate made misleading disclosures to brokerage customers who invested in its “Conduit” private funds products, which pooled customer money and invested it in private equity or hedge funds that would later distribute to the Conduit private funds shares of companies that went public. The order finds that, contrary to the disclosures, a JP Morgan affiliate exercised complete discretion over when to sell and the number of shares to be sold. As a result, investors were subject to market risk, and the value of certain shares declined significantly as JP Morgan took months to sell the shares.
Fraud involving blockchain technology and crypto assets. For example, the SEC brought charges against Terraform Labs and its founder Do Kwon for orchestrating a multi-billion dollar crypto asset securities fraud involving an algorithmic stablecoin and other crypto asset securities. In a separate action, the SEC brought charges against FTX CEO Samuel Bankman-Fried for a years-long fraud to conceal from FTX’s investors (1) the undisclosed diversion of FTX customers’ funds to Alameda Research LLC, his privately-held crypto hedge fund; (2) the undisclosed special treatment afforded to Alameda on the FTX platform, including providing Alameda with a virtually unlimited “line of credit” funded by the platform’s customers and exempting Alameda from certain key FTX risk mitigation measures; and (3) undisclosed risk stemming from FTX’s exposure to Alameda’s significant holdings of overvalued, illiquid assets such as FTX-affiliated tokens.
Regulated entities’ compliance with cybersecurity rules and regulations. For example, the SEC settled charges against transfer agent Equiniti Trust Company LLC, formerly known as American Stock Transfer & Trust Company LLC, for failures to ensure that client securities and funds were protected against theft or misuse, which led to losses of millions of dollars in client funds.
Public issuer fraudulent disclosure relating to cybersecurity. For example, the SEC settled charges against software company Blackbaud Inc. for making misleading disclosures about a 2020 ransomware attack that impacted more than 13,000 customers. Blackbaud agreed to pay a $3 million civil penalty to settle the charges. In a separate action, the SEC settled charges against The Intercontinental Exchange, Inc. and nine wholly owned subsidiaries, including the New York Stock Exchange, for failing to timely inform the SEC of a cyber intrusion as required by Regulation Systems Compliance and Integrity.
How to Report Fraud to the SEC and Qualify for an SEC Whistleblower Award
To report a fraud (or any other violations of the federal securities laws) and qualify for an award under the SEC Whistleblower Program, the SEC requires that whistleblowers or their attorneys report the tip online through the SEC’s Tip, Complaint or Referral Portal or mail/fax a Form TCR to the SEC Office of the Whistleblower. Prior to submitting a tip, whistleblowers should consult with an experienced whistleblower attorney and review the SEC whistleblower rules to, among other things, understand eligibility rules and consider the factors that can significantly increase or decrease the size of a future whistleblower award.
Recent Executive Orders: What Employers Need to Know to Assess the Shifting Sands
In January 2025, President Trump issued a flurry of executive orders. Several may significantly impact employers; the key aspects of these orders are described below, although this is not an exhaustive summary of every provision.
1. Diversity, Equity, and Inclusion (DEI) Programs and Affirmative Action Compliance Obligations
The “Ending Illegal Discrimination and Restoring Merit-Based Opportunity” Executive Order contains many provisions that may significantly impact federal contractors and private employers. First, the order revoked Executive Order 11246 (E.O. 11246), which, among other things, required federal contractors to engage in affirmative action efforts, including developing affirmative action plans concerning women and minorities. In addition to revoking E.O. 11246, President Trump’s order requires that the Office of Federal Contract Compliance Programs (OFCCP) immediately cease promoting diversity, investigating federal contractors for compliance with their affirmative action efforts, and allowing or encouraging federal contractors to engage in workforce balancing based on race, color, sex, sexual preference, religion, or national origin. Further, the order states that federal contract recipients will be required to certify that they do not “operate any programs promoting diversity, equity, and inclusion (DEI) that violate any applicable Federal anti-discrimination laws.” This order does not affect affirmative action obligations concerning individuals with disabilities and protected veterans.
Second, the order also addresses private sector DEI efforts, stating in effect that the President believes such practices are illegal and violate civil rights and anti-discrimination laws. The order further provides that the Attorney General, in coordination with relevant agencies, must submit a report that identifies the most “egregious and discriminatory” DEI practices within the agency’s jurisdiction and includes: a plan to deter DEI programs or principles (whether or not the programs are denominated as DEI); up to nine potential civil compliance investigations of publicly traded corporations, large non-profits, large foundations, select associations, and/or educational institutions with endowments over one billion dollars; “other strategies to encourage the private sector to end illegal DEI discrimination;” and potential litigation, regulatory action, or sub-regulatory guidance that would be appropriate.
In recent weeks, several corporations have rolled back or limited their DEI programs, presumably in anticipation of, or in reaction to, this order. Notably, the order does not prohibit all DEI policies and initiatives; rather, it impacts only those determined to be discriminatory and illegal, e.g., quotas or explicit preferences for women and/or minorities. Policies focusing on workplace inclusion, broadly defining diversity, and adhering to merit-based hiring may reduce the risk of violating this order.
2. Sex and Gender as Protected Characteristics
The “Defending Women From Gender Ideology Extremism and Restoring Biological Truth to the Federal Government” Executive Order redefines federal policy about sex and gender, stating that the federal government will recognize only sex (meaning biological sex, male or female) and not gender. The order directs federal agencies to end initiatives that support “gender ideology”; use the term “sex,” not “gender,” in federal policies and documents; enforce sex-based rights and protections using the order’s definition of “sex”; and rescind all agency guidance that is inconsistent with the order, including the Equal Employment Opportunity Commission’s “Enforcement Guidance on Harassment in the Workplace” (April 29, 2024), among others. The order also mandates that all government-issued identification documents, including visas, reflect the biological sex assigned at birth, and it seeks to limit the scope of the U.S. Supreme Court’s 2020 decision in Bostock v. Clayton County, which held that “sex discrimination” includes gender identity and sexual orientation. Finally, the order directs the EEOC and U.S. Department of Labor (DOL) to prioritize enforcement of rights as defined by the order.
3. Artificial Intelligence
In 2023, former President Biden issued an executive order addressing the potential risks associated with artificial intelligence (AI), which led the DOL to release guidance on May 16, 2024, entitled “Department of Labor’s Artificial Intelligence and Worker Well-being: Principles for Developers and Employers.” On January 23, 2025, President Trump issued an executive order regarding AI entitled “Removing Barriers to American Leadership in Artificial Intelligence,” which rescinded President Biden’s order. President Trump’s order instructs federal advisors to review all federal agency responses to President Biden’s order and rescind those that are inconsistent with the new order. Accordingly, the DOL guidance, along with related guidance from other federal agencies, including the 2024 AI guidance issued by the OFCCP, will be rescinded. Employers that have incorporated such guidance into their policies and practices should revisit those policies accordingly. Despite this shift in the federal landscape, employers should keep in mind that several states have recently passed laws governing AI use in the workplace, and that the use of AI in employment decisions can still give rise to violations of federal and state anti-discrimination laws.
Below are links to the relevant Executive Orders.
Executive Order 14173 – “Ending Illegal Discrimination and Restoring Merit-Based Opportunity”
Executive Order 14168 – “Defending Women From Gender Ideology Extremism and Restoring Biological Truth to the Federal Government”
Executive Order 14151 – “Ending Radical And Wasteful Government DEI Programs And Preferencing”
Executive Order 14179 – “Removing Barriers to American Leadership in Artificial Intelligence”
New Jersey Updates Discrimination Law: New Rules for AI Fairness
The New Jersey AG and the Division on Civil Rights’ new guidance on algorithmic discrimination explains how AI tools might be used in ways that violate the New Jersey Law Against Discrimination (NJLAD). The law applies to employers in New Jersey, and some of its requirements overlap with the new state “comprehensive” privacy laws, in particular those laws’ requirements on automated decisionmaking. Those privacy laws, however, typically do not apply in an employment context (with the exception of California’s). This New Jersey guidance (which mirrors what we are seeing in other states) is a reminder that privacy practitioners should keep AI discrimination in mind beyond the consumer context.
The division released the guidance last month (as reported in our sister blog) to assist businesses as they vet automated decision-making tools, in particular to avoid unfair bias based on protected characteristics like sex, race, religion, and military service. The guidance clarifies that the law prohibits “algorithmic discrimination,” which occurs when artificial intelligence (or an “automated decision-making tool”) creates biased outcomes based on protected characteristics. Key takeaways about the division’s position, as articulated in the guidance, are listed below and can be added to practitioners’ growing rubric of requirements under the patchwork of privacy laws:
The design, training, or deployment of AI tools can lead to discriminatory outcomes. For example, the design of an AI tool may skew its decisions, or its decisions may be based on biased inputs. Similarly, the data used to train a tool may incorporate its developers’ biases, which are then reflected in the tool’s outcomes. And when a business deploys a new tool incorrectly, whether intentionally or unintentionally, the resulting outcomes can create an iterative bias that compounds over time.
The mechanism or type of discrimination does not matter when it comes to liability. Whether discrimination occurs through a human being or through an automated tool is immaterial to liability, according to the guidance. The division’s position is that if a covered entity discriminates, it has violated the NJLAD. Likewise, the type of discrimination, whether disparate impact or intentional discrimination, does not matter. Importantly, if an employer uses an AI tool that disproportionately impacts a protected group, the employer could be liable.
AI tools might not account for reasonable accommodations and thus could produce a discriminatory outcome. The guidance points to specific scenarios that could affect employers and employees. For example, an AI tool that measures productivity may flag for discipline an individual who has timing accommodations due to a disability, or a person who needs time to express breast milk. A tool that does not take these accommodations into account could produce discriminatory results.
Businesses are liable for algorithmic discrimination even if they did not develop the tool or do not understand how it works. Given this position, employers and other covered entities need to understand the AI tools and automated decision-making processes they use and regularly assess their outcomes after deployment.
Steps businesses and employers can take to mitigate risk. The guidance recommends that quality control measures be in place for the design, training, and deployment of any AI tools. Businesses should also conduct impact assessments and regular bias audits (both pre- and post-deployment); a minimal illustration of such an audit appears after this list. Employers and covered entities should also provide notice about their use of automated decision-making tools.
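For readers who want a sense of what a simple post-deployment bias audit could look like in practice, the sketch below applies the familiar “four-fifths rule” screen for adverse impact. It is a generic illustration only, not a methodology prescribed by the New Jersey guidance, and the group names and selection counts are hypothetical.

```python
# Hypothetical post-deployment bias audit: compare selection rates produced by
# an AI screening tool across groups and flag any group whose rate falls below
# 80% of the highest group's rate (the "four-fifths rule" screen for adverse
# impact). Group names and counts are made up for illustration.

selections = {
    # group: (number selected by the tool, number of applicants)
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {group: selected / applicants
         for group, (selected, applicants) in selections.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    status = "review for possible adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, "
          f"impact ratio {impact_ratio:.2f} -> {status}")
```

A flagged ratio is a prompt to examine the tool’s design, training data, and deployment more closely; it is not, by itself, a legal conclusion.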
Putting it into Practice: This new guidance may foreshadow a focus by the New Jersey division on employer use of AI tools. New Jersey is not the only state to contemplate AI use in the employment context. Illinois amended its employment law last year to address algorithmic bias in employment decisions. Privacy practitioners should not forget about these employment laws when developing their privacy requirements rubrics.