Since January, the federal government has moved away from comprehensive legislation on artificial intelligence (AI) and adopted a more muted approach to federal privacy legislation (as compared to 2024’s tabled federal legislation). Meanwhile, state legislatures forge ahead – albeit more cautiously than in preceding years.

As we previously reported, the Colorado AI Act (COAIA) is set to go into effect on February 1, 2026. In signing the COAIA into law last year, Colorado Governor Jared Polis (D) issued a letter urging Congress to develop a “cohesive” national approach to AI regulation that would preempt the growing patchwork of state laws. In the letter, Governor Polis noted his concern that the COAIA’s complex regulatory regime may drive technology innovators away from Colorado. Eight months later, the Trump Administration announced its deregulatory approach to AI, making federal AI legislation unlikely. At that time, the Trump Administration seemed to consider existing laws – such as Title VI and Title VII of the Civil Rights Act and the Americans with Disabilities Act, which prohibit unlawful discrimination – sufficient to protect against AI harms. Three months later, a March 28th Memorandum issued by the federal Office of Management and Budget directed federal agencies to implement risk management programs designed for “managing risks from the use of AI, especially for safety-impacting and rights impacting AI.”

On April 28, two of the COAIA’s original sponsors, Senator Robert Rodriguez (D) and Representative Brianna Titone (D), introduced a set of amendments in the form of SB 25-318 (AIA Amendment). The AIA Amendment seems targeted at addressing Governor Polis’s concerns, but with the legislative session ending May 7, the Colorado legislature has only a few days left to act.

If the AIA Amendment passes and is approved by Governor Polis, the COAIA would be modified as follows:

Even if the AIA Amendment is passed, COAIA will remain the most comprehensive U.S. law regulating commercial AI development and deployment. Nonetheless, the proposed AIA Amendment is one example of how the innovate-not-regulate mindset of the Trump Administration may be starting to filter down to state legislatures.

Another example: in March, Virginia Governor Glenn Youngkin (R) vetoed HB 2094, the High-Risk Artificial Intelligence Developer and Deployer Act, which was based on the COAIA and on a model bill developed by the Multistate AI Policymaker Working Group (MAP-WG), a coalition of lawmakers from 45 states. In a statement explaining his veto, Governor Youngkin noted that “HB 2094’s rigid framework fails to account for the rapidly evolving and fast-moving nature of the AI industry and puts an especially onerous burden on smaller firms and startups that lack large legal compliance departments.” Last year, California Governor Gavin Newsom (D) vetoed SB 1047, which would have focused only on large-scale AI models, calling on the legislature to further explore comprehensive legislation and stating that “[a] California-only approach may well be warranted – especially absent federal action by Congress.”

Meanwhile, on April 23, California Governor Newsom warned the California Privacy Protection Agency (CPPA) (the administrative agency that enforces the California Consumer Privacy Act (CCPA)) to reconsider its draft automated decision-making technology (“ADMT”) regulations and leave AI regulation to the legislature. His letter echoes a letter from the California Legislature chiding the CPPA for its lack of authority “to regulate any AI (generative or otherwise) under Proposition 24 or any other body of law.” At its May 1st meeting, the CPPA Board considered and approved staff’s proposed changes to the ADMT draft regulations, which include deleting the definitions and mentions of “artificial intelligence” and “deep fakes.” The revised ADMT draft regulations also include these revisions (among others):

(A more detailed analysis of the CPPA’s rulemaking, including regulations unrelated to ADMT, will be posted soon.)

MAP-WG-inspired bills also are under consideration by several other states, including California. Comprehensive AI legislation proposed in Texas, known as the Texas Responsible AI Governance Act, was recently substantially revised (HB 149) to shift its focus from commercial to government implementation of AI systems. (The Texas legislature has until June 2 to consider the reworked bill.) Other states have more narrowly tailored laws focused on generative AI. For example, the Utah Artificial Intelligence Policy Act requires any business or individual that “uses, prompts, or otherwise causes [GenAI] to interact with a person” to “clearly and conspicuously disclose” that the person is interacting with GenAI (not a human) “if asked or prompted by the person.” For persons in “regulated occupations” (generally, those requiring a state license or certification), the disclosure must “prominently” state that a consumer is interacting with generative AI in the provision of the regulated services.

What happens next in the state legislatures and how Congress may react is yet to be seen. Privacy World will keep you updated.
