Colorado’s Historic AI Law Survives Without Delay (So Far)

On May 17, 2024, Colorado Governor Jared Polis signed Colorado’s historic artificial intelligence (AI) consumer protection bill, SB 24-205, colloquially known as “Colorado’s AI Act” (“CAIA”), into law.
As we noted at the time, CAIA aims to prevent algorithmic discrimination in AI decision-making that affects “consequential decisions”—including those with a material legal or similarly significant effect on health care services and employment decisions. The law is scheduled to take effect February 1, 2026.
The same day he signed CAIA, however, Governor Polis addressed a “signing statement” letter to Colorado’s General Assembly articulating his reservations. He urged sponsors, stakeholders, industry leaders, and more to “fine tune” the measure over the next two years to sufficiently protect technology, competition, and innovation in the state.
As the local and national political climate steers toward a less restrictive AI policy, Governor Polis drafted another letter to the Colorado legislature. On May 5, 2025, Polis—along with Attorney General Phil Weiser, Denver Mayor Mike Johnston, and others—requested that CAIA’s effective date be delayed until January 2027.
“Over the past year, stakeholders and legislators together have worked to find the right path forward on Colorado’s first-in-the-nation artificial intelligence regulatory law,” the letter states, adding that the collaboration took “many months” and “brought many ideas, concerns, and priorities to the table from a wide range of communities.” Nevertheless, “it is clear that more time is needed to continue important stakeholder work to ensure that Colorado’s artificial intelligence regulatory law is effective and implementable.”
The letter came the same day that SB 25-318, a bill that would have amended CAIA, was postponed indefinitely by the state Senate and reportedly killed by its own sponsor. Colorado Senate Majority Leader Robert Rodriguez introduced SB 25-318, entitled “Artificial Intelligence Consumer Protections,” just one week earlier.
On May 6, 2025, the day before the legislative session in Colorado ended, House Democrats made an eleventh-hour attempt to postpone the effective date of CAIA by inserting the delay into another unrelated bill, but that attempt also failed.
Proponents of the delay are calling for a framework “that protects privacy and fairness without stifling innovation or driving business away from our state,” as the Polis letter states. Technology groups have urged Governor Polis to call a special legislative session to delay implementation of CAIA.
SB 25-318 Key Provisions
Despite SB 25-318’s failure to pass, several of its provisions remain noteworthy and are likely to remain part of the ongoing policy debate. Viewed as “thoughtful amendments” by some commentators, the legislation would have modified the consumer protections of CAIA, which requires developers and/or deployers of AI systems to implement a risk management program, conduct impact assessments, and make notifications to consumers. If passed, SB 25-318 would have delayed many requirements from February 1, 2026, to January 1, 2027, and included the following adjustments:
Definitions. SB 25-318 attempted to redefine “algorithmic discrimination” to mean the use of an AI system that results in a violation of any applicable federal, state, or local discrimination law. It also would have created exemptions to the definition of “developer” of an AI system and exempted certain technologies, such as those performing a narrow procedural task, or cybersecurity and data security systems, from the definition of “high-risk AI systems.”
Reasonable Care. The bill would have eliminated the duty of developers or deployers of a high-risk AI system to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination and further would have eliminated the duty of deployers to notify the attorney general of such risks arising from intended uses or that the system causes algorithmic discrimination.
Developer Disclosures. SB 25-318 sought to exempt developers from specified disclosure requirements if, for example, their systems make 10,000 or fewer consequential decisions per year in 2027-2028, decreasing to 2,500 or fewer for 2029-2030. Other contemplated exemptions included instances where developers received less than $10,000 from investors, have annual revenues of less than $5,000,000, or have operated and generated revenue for less than five years, among other criteria. The bill sought to broaden disclosure requirement exemptions for deployers based on the number of full-time employees (500 instead of 50 between 2027 and 2028, decreasing to 100 in 2029). It also would have exempted developers with respect to the use of AI in hiring. A further exemption would have applied if the AI system produces or consists of a score, model, algorithm, or similar output that is a consumer report subject to the Fair Credit Reporting Act.
Impact Assessments. SB 25-318 sought to amend the requirement that deployers, or third parties contracted by deployers, complete impact assessments within 90 days of a substantial modification to instead require these impact assessments be completed before the first deployment or January 1, 2027, whichever comes first, and annually thereafter. SB 25-318 would have also required deployers to include in an impact assessment whether the system poses any known or reasonably foreseeable risks of limiting accessibility for certain individuals, an unfair or deceptive trade practice, a violation of state or federal labor laws, or a violation of the Colorado Privacy Act.
Disclosures to Consumers. SB 25-318 attempted to require deployers to provide additional information to consumers if a high-risk AI system makes, or is a substantial factor in making, a consequential decision. It further included a transparency requirement that consumer disclosures must include information on whether and how consumers can exercise their rights.
Documentation Requirements. SB 25-318 would have required developers and deployers to maintain required documentation, disclosures, and other records with respect to each high-risk AI system throughout the period during which the developer sells, markets, distributes, or makes available the high-risk AI system—and for at least three years following the last date on which the developer sells, markets, distributes, or makes available the high-risk AI system.
Takeaways
Because Colorado’s 2025 legislative session ended at midnight on Wednesday, May 7, the CAIA will go into effect as originally passed on February 1, 2026, unless Governor Polis calls a special session or a new bill is introduced in time for the new legislative session that convenes on January 14. To the extent additional attempts to modify CAIA arise before February 1, 2026, we anticipate that they will revive certain issues addressed in SB 25-318.
Many outside of Colorado are also following this process closely, including other states that are using CAIA as a framework for their own laws and federal lawmakers whose efforts to pass comprehensive AI legislation through Congress have stalled. On Tuesday, May 13, the House Energy and Commerce Committee will mark up language for potential inclusion in the reconciliation package that would prevent states from passing and implementing such AI laws for 10 years, though this language may not pass.
As we noted last year, organizations should start to consider compliance issues including policy development, impact assessments, engagement with AI auditors, contract language in AI vendor agreements to reflect responsibilities and coordination, and more. Impact assessments, in particular, take time and resources to design and conduct, and therefore we recommend that businesses using high-risk AI systems in Colorado begin preparations to conduct these impact assessments now, rather than waiting for a speculative change to the law. If properly designed, impact assessments will be a useful tool for businesses to ensure that their AI systems are reliable and deliver expected outcomes while minimizing the risk of algorithmic discrimination. 

AI Governance Remains Critical Despite Political Pendulum Swings

Businesses increasingly rely on AI and generative AI for myriad uses. A new body of “AI law” is forming—and some legal requirements are now live. AI governance is a mandatory compliance function right now rather than next quarter or next year. 
AI law is a patchwork across jurisdictions and can be hard to pin down. While some jurisdictions are enacting new laws, others are pulling back. As the political pendulum continues to swing, regulatory retrenchment is among the key themes coming into focus in 2025.
Some hardline AI regulatory regimes that dominated headlines in 2024 are being walked back. For example, at the U.S. federal level, the Trump administration has undone Biden-era AI executive orders, and federal agencies are recalibrating enforcement priorities accordingly. Consistent with the broader deregulatory trend, observers expect that the FTC, SEC, and other agencies will focus primarily on clear cases of fraud, rather than pursuing broader or more novel regulatory actions.
At the state level, the Colorado AI Act is under scrutiny for possible amendments, including through a new bill introduced in April 2025. Meanwhile, the governors of California and Virginia recently vetoed high-profile AI bills. And the U.S. House Energy and Commerce Committee proposed a 10-year moratorium on the enforcement of state AI laws in a recent draft budget reconciliation bill. Across the pond, the EU Commission recently withdrew the draft AI Liability Directive and is reportedly considering amendments to the EU AI Act to soften certain requirements.
But AI regulation is not dead. Newly enacted state laws in the U.S. (e.g., California, Illinois, New York, Utah) address algorithmic discrimination and automated decision-making; disclosure of AI use; impersonation, digital replicas, and deepfakes; watermarking of AI-generated content; data privacy and biometric data; and more. State attorneys general (e.g., California, New Jersey, Oregon) have reiterated that they will enforce existing laws against unlawful uses of AI. And, of course, the AI “copyright war”—testing the boundaries of copyright infringement and fair use for AI training and outputs—wages on in dozens of lawsuits in the U.S. and elsewhere.
The first requirements of the EU AI Act went live in February 2025. For example, companies using AI within the EU are now subject to the “AI literacy” requirement mandating “measures to ensure, to their best extent, a sufficient level of AI literacy” for employees or others who operate or use AI systems. The AI Act is extraterritorial. It applies to U.S. companies using AI systems within the EU or whose AI systems produce outputs intended for use in the EU. Employee training regarding the responsible use of AI is now mandatory for such companies.
Bottom line: while there may be a trend toward softening AI regulation in some areas, this is not a universal truth, and enterprise AI governance remains essential. Some new “AI law” requirements are now live, while others will be soon. In addition, regulators, state AGs, and plaintiffs will seek to apply existing laws to new technology. And, of course, there is the potential for self-inflicted wounds (like data leakage) and the reputational and public relations risks of an AI-powered snafu.
Luckily, there are some common threads in the AI regulatory thicket, and established guidance may ease the governance burden. Voluntary AI compliance frameworks like the NIST AI RMF and ISO/IEC 42001:2023 not only provide useful, detailed guidance for responsible AI governance, but they also form the basis of statutory safe harbors or affirmative defenses under laws like the Colorado AI Act. They provide a wise starting point for compliance programs, in addition to choosing AI model providers, models, and use cases wisely.


Colorado Legislature Fails to Amend Recent Artificial Intelligence Act

In 2024, Colorado passed the first comprehensive state-level law in the U.S. regulating the use of artificial intelligence, the Artificial Intelligence Act (the Act). It imposed strict requirements on developers and users of “high-risk” AI systems, particularly in sectors like employment, housing, finance, and healthcare. The Act drew criticism for its complexity, breadth, and potential to stifle innovation.
In early 2025, lawmakers introduced Senate Bill (SB) 25-318 as a response to growing concerns from the tech industry, employers, and even Governor Jared Polis, who reluctantly signed the Act into law last year.
SB 25-318 aimed to soften and clarify some of the more burdensome aspects of the original legislation before its compliance deadline of February 1, 2026.
Amendments proposed under SB 25-318 included:

An exception to the definition of “developer” if the person offers an AI system with open model weights and meets specified conditions.
Exemptions for specified technologies.
Elimination of the duty of a developer or deployer to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination and the requirement to notify the state attorney general of such risk.
An exemption from specified disclosure requirements for developers if they meet certain financial and operational criteria.

Despite its intention to strike a balance between innovation and regulation, SB 25-318 was voted down 5-2 by the Senate Business, Labor, and Technology Committee on May 5, 2025.
With SB 25-318 dead, the original Act remains intact, and the next step is for the Colorado Attorney General to issue rules and/or guidance. As it now stands, businesses and developers operating in Colorado must prepare for full compliance by early 2026 unless this date is otherwise extended.

Use of AI in Recruitment and Hiring – Considerations for EU and US Companies

1. Use of AI in Recruitment and Hiring
AI is transforming the recruitment landscape across the globe, making processes such as resume screening and candidate engagement more efficient by:

using keyword searches to automatically rank and eliminate candidates from a pool of applicants with minimal human oversight;  
performing recruitment tasks via chatbots that interact with candidates; 
formulating skills and aptitude tests; and 
analyzing video interviews to assess a candidate’s suitability for a particular position.

In addition to maximizing efficiency, AI may also be used to make automated, substantive decisions related to recruitment, hiring, and performance through the use of predictive analytics that forecast a candidate’s success in a specific role.
2. Regulation of AI Use in the European Union and United States
The European Union has taken a unified approach to AI regulation, and all EU member states are currently governed by the EU Regulation on Artificial Intelligence (EU AI Regulation), which took effect on Aug. 1, 2024. The EU AI Regulation applies to all providers and deployers based in the EU, as well as those that place an AI system on the EU market or use the results of an AI system in the EU. Parties located outside the EU should therefore be aware that the EU AI Regulation may apply to them as well.
The EU AI Regulation categorizes AI systems into different risk categories, with the applicable rules becoming stricter as the risk to health, safety, and fundamental rights increases (for example, “minimal” regulation for spam filters; “limited” regulation for chatbots; “high” regulation for use in recruitment; and “unacceptable” use of AI for social scoring and facial recognition). HR tools are considered high-risk AI systems if they (1) are used for recruiting or selecting candidates; and/or (2) provide the basis for HR employment-related decisions, e.g., promoting or terminating employment or monitoring and evaluating performance and behavior.
As of Feb. 2, 2025, the EU AI Regulation requires companies to eliminate “unacceptable” AI systems (as defined by the law) and to thoroughly and comprehensively train all employees using AI systems with respect to compliant AI use under the regulation. 
In contrast to the EU, the United States does not currently have uniform AI regulation at the federal level. Though the Biden administration had tasked government agencies such as the Department of Labor and the Equal Employment Opportunity Commission with monitoring the use of AI tools and issuing guidance to enhance compliance with anti-discrimination and privacy laws, in January 2025 President Trump expressed his support for deregulation, issuing an executive order entitled “Removing Barriers to American Leadership in Artificial Intelligence.” Federal agencies have since removed all previously issued guidance on AI use.
In response to the executive order advocating AI deregulation, regulations governing the use of AI have been introduced and passed at the state level. However, a bill passed by a legislature does not always become binding law. For example, in February 2025, the Virginia legislature passed the High-Risk Artificial Intelligence Developer and Deployer Act, which would have required companies creating or using “high-risk” AI systems in employment and other areas to implement safeguards against “algorithmic discrimination” for such systems. However, the governor vetoed the Act on March 24, 2025, so it does not currently apply.
3. AI Use May Trigger Other Legal Violations
Aside from complying with laws such as the EU AI Regulation, which specifically regulates the use of AI, companies using AI in their recruiting and hiring processes should be careful such use does not trigger a violation of other laws. For example:

Bias and Discrimination: Algorithms used by AI in recruitment and hiring may inadvertently perpetuate bias, leading to discrimination against candidates based on race, gender, age, or other protected characteristics. Discrimination is prohibited in the EU under Council Directive 2000/78/EC, which bans discrimination in employment, education, and public safety, as well as in the United States via more than one hundred federal, state, and local anti-discrimination laws.
Data Security and Ownership: Companies that enter the personal data of potential candidates into an AI system have certain legal obligations with respect to maintaining the security of such data, as well as considerations with respect to the ownership of such data. Such obligations are governed by the EU General Data Protection Regulation (GDPR), which took effect on May 25, 2018. In the United States, more than 20 jurisdictions have passed laws imposing obligations on employers that use AI to collect and process candidate and employee data. 
Invasion of Privacy: Employers that collect candidate and/or employee data via AI tools may inadvertently be invading the privacy of such candidates and employees, and should be mindful of applicable privacy laws, which may require the company to obtain consent from the candidate or employee prior to running certain searches.

4. Penalties for Non-Compliance
An EU employer that violates the above discrimination, data security, and privacy laws risks significant (though typically lower than in the U.S.) damage awards, as well as high administrative penalties from agencies such as the European AI Office and national data protection authorities.
Damages claims for individual breaches can vary significantly between jurisdictions, and EU member states retain national autonomy in determining award sums. However, European Court of Justice (ECJ) landmark judgments emphasize the importance of issuing awards that correspond to the nature and extent of the EU-protected rights violated.
Certain European nations, such as Estonia, Hungary, Ireland, Sweden, Austria, and Finland, have established statutory or customary upper limits on damages awardable to employees when a company fails to comply with applicable anti-discrimination regulations, with such damages ranging from EUR 500 to 104 weeks’ pay. In contrast, in Poland, Germany, and the Netherlands, damages are not formally limited, although in practice awards are relatively low compared to the United States. The national laws of some European countries, such as the UK, provide for punitive damages, which can further increase the damages awarded.
In addition to the above, administrative fines for data security and privacy law violations under GDPR may reach up to the higher of EUR 20,000,000, or 4% of a company’s annual worldwide turnover for the preceding financial year.
Under the EU AI Regulation, both an EU employer and a non-EU employer using the results of an AI system in the EU can be fined up to the higher of EUR 35,000,000 or 7% of a company’s annual worldwide turnover for the preceding financial year. In the United States, penalties range depending on the jurisdiction. In New York City, for example, an employer may incur a fine of up to $500 for a first violation, and between $500 and $1,500 per day for each subsequent or continuing violation.
5. Considerations for Employers
To minimize exposure, employers should consider taking the following steps:

For the EU (including non-EU companies subject to EU laws as provided above):

Eliminate AI use deemed to be “unacceptable” under the EU AI Regulation.
Train employees to use AI in accordance with the EU AI Regulation, applicable data security and privacy laws, and company policies.
Prepare for additional new requirements scheduled to take effect in August 2026.

For the United States:

Inform candidates when using AI in recruiting and hiring, and obtain informed written consent from a candidate prior to using AI to process sensitive data.
Provide an alternate method of screening should the candidate decline the use of AI.
Use AI systems (including testing procedures) that provide clear parameters that can later be verified.
Conduct periodic independent bias testing of AI systems and recruitment tools.
Include human oversight in the decision-making process.

Thilo Ullrich and Dorothee von Einem also contributed to this article. 

Copyright Office Report on Training AI and Fair Use

The Copyright Office released a “Pre-publication” version of Part 3 of its Report on Copyright and AI. Coincidentally (?), Shira Perlmutter, the Register of Copyrights, was fired amid a shakeup at the Copyright Office. The Report was also supposed to address infringement issues, but did not; those issues will now be addressed in Part 4 of the Report.
A more detailed summary will be provided, but some high-level takeaways are as follows.

There is no per se rule on whether training AI on copyrighted content is infringement/Fair Use. It will be a case-by-case analysis.
Each “use” of copyrighted content needs to be considered. Several stages in the development of generative AI involve using copyrighted works in different ways that implicate the owners’ exclusive rights. Each needs to be considered separately. Different end uses may yield different results. 
The fair use determination requires balancing the multiple statutory factors in light of all relevant circumstances. But the first and fourth factors will often be most significant. Nothing new here. 
Various uses of copyrighted works in AI training are likely to be transformative. The extent to which they are fair, however, will depend on what works were used, from what source, for what purpose, and with what controls on the outputs—all of which can affect the market. Nothing really new here. Whether a work was pirated or legally obtained can be considered. 
The Report indicates that voluntary licensing and extended collective licensing should be considered; no compulsory schemes or new legislation are recommended, at least for now.
Perhaps the most controversial part of the Report, in my opinion, is the breadth of the fourth fair use factor – the effect of the use upon the potential market for or value of the copyrighted work.
This section evaluates different ways in which the use of copyrighted works for generative AI can affect the market for or value of those works, including through lost sales, market dilution, and lost licensing opportunities. It also addresses broader claims that the public benefits of unlicensed training might shift the fair use balance. 
The Office’s view is that the statute on its face encompasses any “effect” upon the potential market. The speed and scale at which AI systems generate content pose a serious risk of diluting markets for works of the same kind as in their training data – not just competition for sales of an author’s works. Market harm can also stem from AI models’ generation of material stylistically similar to works in their training data, even though the Report notes that copyright does not protect style.
The report makes clear that using certain “guardrails” could reduce the likelihood of a finding of infringement. 
The Report assesses various infringement considerations with the use of Retrieval Augmented Generation (RAG). 
This is not even close to a complete summary of the 113-page report. But I am working on a more detailed summary.


Colorado’s Artificial Intelligence Act (CAIA) Updates: A Summary of CAIA’s Consumer Protections When Interacting with Artificial Intelligence Systems

During the 2024 legislative session, the Colorado General Assembly passed Senate Bill 24-205, known as the Colorado Artificial Intelligence Act (CAIA). This law will take effect on February 1, 2026, and requires developers and deployers of a high-risk AI system to protect Colorado residents (“consumers”) from risks of algorithmic discrimination. Notably, the Act also requires developers or deployers to disclose to consumers that they are interacting with an AI system. Colorado Gov. Jared Polis, however, had some concerns in 2024 and expected that legislators would refine key definitions and update the compliance structure before the effective date in February 2026.
As Colorado moves forward toward implementation, the Colorado AI Impact Task Force issued its recommendations for updates in its February 1, 2025 Report. These updates — along with the description of the Act — are covered below.
Background
A “high-risk” AI system is defined to include any machine-based system that infers outputs from data inputs and has a material legal or similar effect on the provision, denial, cost, or terms of a product or service. The statute identifies various sectors that involve consequential decisions, such as decisions related to healthcare, employment, financial or credit, housing, insurance, or legal services. Additionally, CAIA has numerous carve-outs for technologies that perform narrow tasks or certain functions, such as cybersecurity, data storage, and chatbots.
Outside of use case scenarios, CAIA also imposes on developers of AI systems the duty to prevent algorithmic discrimination and protect consumers from any known or foreseeable risks arising from the use of the AI system. A developer is one that develops or modifies an AI system used in the state of Colorado. Among other things, a developer must make documentation available for the intended uses and potential harmful uses of the high-risk AI system. 
Similarly, CAIA also regulates a person that is doing business in Colorado and deploys a high-risk AI system for Colorado residents to use (the “deployer”). Deployers face stricter regulations and must inform consumers when AI is involved in a consequential decision. The Act requires deployers to implement a risk management policy and program to govern the use of the AI system. Further, the deployers must report any identified discrimination to the Attorney General’s Office within 90 days and must allow consumers to appeal AI-based decisions or request human review of the decision when possible. 
Data Privacy and Consumer Rights
Consumers have the right to opt out of data processing related to AI-based decisions and may appeal any AI-based decisions. This opt-out provision may impact further automated decision-making related to the Colorado resident and the processing of personal data profiling of that consumer. The deployer must also disclose to the consumer when a high-risk AI system has been used in the decision-making process that results in an adverse decision to the consumer. 
Exemptions
The CAIA contains various exemptions, including for entities operating under other regulatory regimes (e.g., insurers, banks, and HIPAA-covered entities) and for the use of certain approved technologies (e.g., technology cleared, approved, or certified by a federal agency, such as the FAA or FDA). There are some caveats, however. For example, HIPAA-covered entities are exempt to the extent they are providing healthcare recommendations that are generated by an AI system, require the covered entity to take action to implement the recommendation, and are not considered to be “high risk.” Small businesses are exempt to the extent that they employ fewer than 50 full-time employees and do not train the system with their own data. Deployers should therefore closely analyze the available exemptions to ensure their activities fall squarely within them.
Updates
As highlighted in the recent Colorado AI Impact Task Force Report, additional changes to CAIA are encouraged before it takes effect in February 2026. The current concerns center on ambiguities, compliance burdens, and various stakeholder priorities. The Governor is concerned with whether the guardrails inhibit innovation and AI progress in the state.
The Colorado AI Impact Task Force notes that there is consensus to refine documentation and notification requirements. However, there is less consensus on how to adjust the definition of “consequential decisions.” Reworking the exemptions to the definition of covered systems is also a change desired by both industry and the public. 
Other potential changes to the CAIA depend on how interconnected sections are revised in relation to one another. For example, changes to the definition of “algorithmic discrimination” depend on other issues related to the obligations of developers and deployers to prevent algorithmic discrimination and related enforcement. Similarly, intervals for impact assessments may be affected greatly by changes to the definition of “intentional and substantial modification” to high-risk AI systems. Further, because impact assessments are interrelated with developers’ risk management programs, proposed changes to either will likely implicate the other.
Lastly, there remains firm disagreement on amendments related to several definitions. “Substantial factor” is one debated definition that will require a creative approach to defining the scope of AI technologies subject to the CAIA. Similarly, the “duty of care” for developers and deployers is hotly contested, including whether to remove that concept or replace it with more stringent obligations. Other debated topics that are subject to change include the exemption for small businesses, the opportunity to cure incidents of non-compliance, trade secret exemptions, the consumer right to appeal, and the scope of attorney general rulemaking.
Guidance
Given that most stakeholders recognize that changes are needed, any business impacted by the CAIA should continue to watch developments in the legislative process for potential changes that could drastically alter the scope and requirements of the Colorado AI Act.
Takeaways
Businesses should assess whether they, or their vendors, use any AI system that could be considered high risk under the CAIA. Some recommendations include:

Assess AI usage and consider whether that use is within the definition of the CAIA, including whether any exemptions are available
Conduct an AI risk assessment consistent with the Colorado AI Act
Develop an AI compliance plan that is consistent with the CAIA consumer protections regarding notification and appeal processes
Continue to monitor the updates to the CAIA
Evaluate contracts with AI vendors to ensure that necessary documentation is provided by the developer or deployer

Colorado has taken the lead as one of the first states in the nation to enact sweeping AI laws. Other states will likely look to the progress of Colorado and enact similar legislation or make improvements where needed. Therefore, watching the CAIA and its implementation is of great importance in the burgeoning field of consumer-focused AI systems that impact consequential decisions in the consumer’s healthcare, financial well-being, education, housing, or employment.

Affirmative Artificial Intelligence Insurance Coverages Emerge

It was only a matter of time before new insurance coverages targeting the risks posed by artificial intelligence (AI) would hit the market. That time is now.
As the use of AI continues to proliferate, so too does our understanding of the risks presented by this broad and powerful technology. Some risks appear novel in form while others mirror traditional exposures that have long been viewed as insurable causes of loss. AI-related risks are made all the more novel because the meaning of AI itself is not only up for debate, but is constantly evolving as the technology matures. This mixture of old and new has the potential to create coverage gaps in even the most comprehensive insurance programs. Hence the development of specialized, AI-specific insurance solutions. In just the past few weeks, two new affirmative AI coverages have entered the market, signaling an acceleration in this trend.
Armilla’s Affirmative AI Coverage
On April 30, 2025, Armilla Insurance Services launched an AI liability insurance policy underwritten by certain underwriters at Lloyd’s, including Chaucer Group. This product is among the first to offer clear, affirmative coverage for AI-related risks, rather than relying on protections embedded in legacy policies.
While the introduction of this new, affirmative coverage should have no impact on the availability of coverage for AI-related losses that meet the terms of coverage under existing insurance policies such as cyber, directors and officers (D&O), or technology errors and omissions (E&O), this new product should address any unique exposures not contemplated under traditional coverages. Risks specifically contemplated under Armilla’s policy include AI hallucinations, deteriorating AI model performance, and mechanical failures or deviations from expected behavior. Armilla’s affirmative coverage may offer greater certainty for policyholders in an increasingly uncertain risk environment.
Google Cloud’s Entry into AI Risk Management
Earlier in 2025, Google took its own significant step into AI-specific risk mitigation by announcing a partnership with insurers Beazley, Chubb, and Munich Re. This collaboration introduces a tailored cyber insurance solution specifically designed to provide affirmative AI coverage that Google Cloud customers can purchase from the insurers Google has partnered with. 
Customers that purchase the Google-specific insurance coverage receive a policy endorsement that provides a suite of protections, which can include business interruption coverage for failures in Google Cloud services, liability coverage for certain bodily injury or property damage, and protection for trade secret losses linked to malfunctioning AI tools. By embedding insurance directly into its cloud offerings, Google has taken a proactive role in delivering technological innovation while also managing the associated risks.
Insuring the AI Future
The emergence of affirmative AI insurance products marks a key shift in the industry’s approach to managing AI-driven risks. With companies like Armilla leading the charge, insurers are beginning to address perceived coverage gaps that traditional policies may overlook. As momentum builds, 2025 is likely to bring a continued rollout of AI-specific coverages tailored to this evolving landscape. Collectively, these developments reflect a growing recognition across the industry of the distinct and complex nature of AI-related risk.

Revised Draft California Privacy Regulations Lessen Impact on Business

The rulemaking process on California’s Proposed “Regulations on CCPA Updates, Cybersecurity Audits, Risk Assessments, Automated Decisionmaking Technology, and Insurance Companies” (2025 CCPA Regulations) has been ongoing since November 2024. With the one-year statutory period to complete the rulemaking or be forced to start anew on the horizon, the California Privacy Protection Agency (CPPA) voted unanimously on May 1 to move a revised set of draft regulations forward to public comment, which began May 9 and closes at 5 pm Pacific on June 2, 2025. The revisions cut back on the regulation of Automated Decision-making Technology (ADMT), eliminate the regulation of AI, address potential constitutional deficiencies with regard to risk assessment requirements, and somewhat ease cybersecurity audit obligations. The CPPA projects that this substantially revised draft will save California businesses approximately $2.25 billion in the first year of implementation, a 64% savings from the projected cost of the prior draft.
ADMT: The most notable changes relate to ADMT and are said to result in an 83% cost savings for businesses compared to the prior draft. “Cut to the bone” is the way Chair Jennifer Urban characterized it, which is welcome news to many, likely including California Governor Gavin Newsom, who had sent the CPPA a letter stating that he was “pleased to learn about the Board’s decision, at its April 4, 2025 meeting, to direct Agency staff to narrow the scope of the ADMT regulations.” The revised ADMT regulations no longer address “artificial intelligence” at all and include the following revisions (among others):

Deleting the definition of “extensive profiling” (behavioral advertising or monitoring employees, students, or publicly available spaces) and shifting the focus to transparency and choice obligations for uses that make a significant decision about consumers. However, risk assessments would still be required for profiling based on systematic observation and for training of ADMT to make significant decisions, to verify identity, or for biological or physical profiling.
Streamlining the definition of ADMT to “mean any technology that processes personal information and uses computation to replace … or substantially replace human decision-making [which] means a business uses the technology output to make a decision without human involvement.” Prior drafts had covered use to help facilitate human decisions.
Streamlining the definition of significant decisions to remove decisions regarding “access to” and limit applicability to the “provision or denial of” the following more narrow types of goods and services: “financial or lending services, housing, education enrollment or opportunities, employment or independent contracting opportunities or compensation, or healthcare services,” and clarifying that use for advertising is not a significant decision.
Obligations to evaluate the risk of error and discrimination for certain types of ADMT uses were deleted, but the general risk assessment obligations were largely kept. The requirement to implement policies, procedures, and training to ensure that certain types of ADMT work as intended and do not discriminate was removed.
Pre-use notice obligations were streamlined.
Opt-out rights were limited to uses to make a significant decision.
Businesses were given until January 1, 2027, to comply with the ADMT regulations.

Cybersecurity Audits: Also pared back, though more through a phase-in of implementation than regarding substantive requirements, are the draft regulations on cybersecurity audits. Here are the highlights of where the CPPA landed:

Timing for completion of a first annual cybersecurity audit and filing an audit report with the state will depend on the size of the business:

April 1, 2028: $100 million + gross revenue.
April 1, 2029: between $50 million and $100 million.
April 1, 2030: under $50 million.

Rather than requiring the Board of Directors to review audit results and certify their sufficiency, that obligation was shifted to a member of management with responsibility for cybersecurity.
The aspects of what an audit must assess remain broad, including the sufficiency of the inventory and management of personal information and the business’s information systems, including “data maps and flows identifying where personal information is stored, and how it can be accessed,” hardware and software (including cookies) inventories, and approval and prevention processes.

Privacy Risk Assessments: We have detailed the prior drafts of the risk assessment regulations here. The latest draft not only reflects the ADMT changes discussed above but also a more thoughtful approach to the purpose and process for conducting and documenting assessments:

In keeping with the removal of the concept of “extensive profiling” (public monitoring, HR/educational monitoring, and behavioral advertising) under the ADMT regulations, these concepts were also removed from the types of high-risk activities that require a risk assessment. They were replaced with “profiling a consumer through systematic observation of that consumer when they are acting in their capacity as an educational program applicant, job applicant, student, employee, or independent contractor for the business” and “profiling a consumer based upon their presence in a sensitive location.” However, in the draft published for comment, these activities were more narrowly defined to include only the use of such observation “to infer or extrapolate intelligence, ability, aptitude, performance at work, economic situation, health (including mental health), personal preferences, interests, reliability, predispositions, behavior, location or movements,” but excluding “using a consumer’s personal information solely to deliver goods to, or provide transportation for, that consumer at a sensitive location.” These edits are responsive to concerns raised by Board member Mactaggart (e.g., a nurse ordering pizza delivered to work).
The high-risk assessment trigger of training ADMT or AI was modified to remove the reference to AI and limited to where the business intends to use the ADMT for a significant decision concerning a consumer, or to train facial recognition, emotion recognition, or other technology that verifies a consumer’s identity, or conducts physical or biological identification or profiling of a consumer. Triggers tied to the generation of deepfakes and the operation of generative models, such as large language models, were removed.
The other high-risk activities from prior drafts remain: selling personal information, sharing personal information (for targeted advertising), and processing sensitive personal information.
While risk assessments must still include a harm-benefit analysis (Section 7152(a)(4) and (5)), that information, and the business judgment analysis as to the pros and cons thereof, is no longer required to be included in the form of Risk Analysis Report (a new term) that is subject to government inspection. This will make the assessment requirements much less vulnerable to First Amendment challenge as unconstitutional compelled speech on a matter of opinion and not mere recitation of facts, a concern publicly expressed previously by at least one CPPA Board member. This is a very significant development. [Note that while the Colorado regulations require documentation of such a risk-benefit analysis as part of assessment documentation, they also provide that assessments may be subject to legal privilege protections.]
Similarly, the forms of assessment summaries that must be filed with the state are limited to factual recitations and the new draft abandons the prior approach of requiring filing of abridged assessments summarizing each assessment in favor of a single attestation of annual compliance with some basic information on the number of assessments, in total and by type of covered processing activities, and which categories of personal information and/or sensitive personal information was covered. This substantially reduces what must be disclosed in agency filings and again helps insulate against compelled speech challenges.
Also, likely to address Constitutional issues, Section 7154 was changed from prohibiting processing activities if risks to consumer privacy are not outweighed by identified benefits, to expressing that the goal of risk assessments is to serve to prohibit or restrict processing activities if privacy risks outweigh processing benefits. This should go a long way to protect a business’s subjective business judgment and ethical value decisions that should not be subject to second-guessing by the government, absent violation of clear and unambiguous statutory requirements.
While high-risk activities occurring on and after the effective date of the regulations (likely before the end of 2025) will be subject to assessment, businesses will have until December 31, 2027, to complete the documentation of the corresponding Risk Assessment Reports through that date, and the filing of the first annual assessment attestation would not be due until April 21, 2028.

Finally, the amendments to the existing regulations were revised:
What stayed in:

If a business maintains personal information (collected after January 1, 2022) for longer than 12 months, it must enable consumers to specify a date range or treat the request as without time limitation.
A business must ensure that when a consumer corrects their personal information, it remains corrected.
A business must inform consumers who make corrections of their personal information of the source of the incorrect data or inform the source that the information was incorrect and must be corrected.
Symmetry of choice applies to any opt-in, not just an opt-in after opt-out.
A website must display the status of opt-out choice based on GPC / OPPS browser signals or opt-out request. [Most CMPs already have this feature as optional.]
A business must provide a means to confirm that a limit-sensitive information processing request has been received and is being honored.
In verifying agent authority and the consumer’s identity, a business may not require the consumer to resubmit an individual request.
Consumer statements regarding contested accuracy of health data, which are already required, must be shared with recipients of that data if the consumer requests.

What got cut:

The requirements for businesses and service providers to implement measures to ensure deleted personal information remains deleted or de-identified were removed.
The requirement to inform consumers, as part of a request response, of their right to file a complaint with the state was removed.
The requirement to inform parties to which the business has disclosed personal information of a subsequent consumer correction was removed.
The requirements to provide internal and external notice of consumer claims of inaccuracy that were not corrected (due to insufficient proof), unless the request was fraudulent or abusive, were removed.

The current revisions are subject to additional revisions based on the new round of public comment, which could lead to adding back or otherwise changing provisions. However, the CPPA Board members all seemed to express the opinion at the May 1 meeting that a set of regulations needed to be timely completed, and that future rulemaking could build on the foundation of the draft that has been advanced. Accordingly, it would appear that we are at the “home stretch” with the finish line in clear view. 

The Hypocrisy of “Delete IP”: Billionaire Frustrations Masquerading as Policy

Jack Dorsey’s recent tweet, “delete all IP,” and Elon Musk’s echo, “I agree,” are not serious policy proposals—they’re the petulant grumblings of billionaires frustrated that intellectual property laws are interfering with their ambitions. These statements are particularly rich coming from men whose companies have aggressively accumulated intellectual property rights: According to the USPTO’s database, Dorsey’s Twitter secured 410 U.S. patents, while Musk’s Tesla amassed 879. Their current distaste for IP conveniently arises now that their market dominance makes such protections less essential—for them.
Both Dorsey and Musk are deeply involved in artificial intelligence, a field that relies heavily on vast amounts of written material—much of it copyrighted—for training models. Whether this use constitutes copyright infringement is currently being litigated. If courts finally determine that it is infringement, the implications for AI development could be substantial. Of course, developers could turn to public domain materials, but if that’s the route, be prepared for your AI assistant to start speaking in “forsooths,” “prithees,” and “perchances.” The obvious (and apparently odious) alternative? Pay fair value for the content being used.
Musk once told Jay Leno, “patents are for the weak.” There’s a kernel of truth there—not because patents are inherently flawed, but because they level the playing field. Patents exist precisely to empower the weak: to incentivize inventors and protect them as they challenge entrenched giants. Abraham Lincoln famously described patents as having “added the fuel of interest to the fire of genius.” Patents don’t just reward inventors—they give entrepreneurs the confidence to invest in the risky business of innovation.
While Dorsey and Musk may be targeting copyright more than patents, the principle remains the same. Copyright is designed to reward creators by granting them property rights over their work—rights that encourage the creation and dissemination of culture and knowledge. As poet Joel Barlow eloquently argued in 1783, after authors dedicate their lives to honing their craft, it is a matter of “natural justice” that they should profit from their labor and reputation.
This principle was enshrined in the U.S. Constitution on March 4, 1789, when Article I, Section 8, Clause 8 gave Congress the authority “[t]o promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.” Congress exercised this power almost immediately with the Patent Act of 1790 and the Copyright Act of the same year.
For over 235 years, intellectual property has fueled innovation, creativity, and economic growth in America. The call to “delete all IP” reeks of a “pull up the ladder” mentality—a cynical effort by technology oligarchs who benefited from IP protections to now deny them to others, simply because they’ve become inconvenient. If Musk and Dorsey are as visionary as they claim, they should propose a fair compensation system for AI training. Don’t blow up the system that nurtured generations of inventors and authors—build something better.
The opinions expressed in this article are those of the author and do not necessarily reflect the views of The National Law Review.

Two New AI Laws, Two Different Directions (For Now)

Key Takeaways

Colorado legislature rejects amendments to the Colorado AI Act (CAIA).
Proposed amendments sought multiple exemptions, exceptions and clarifications.
Utah legislature enacts amendments that include a safe harbor for mental health chatbots.
Utah’s safe harbor provision includes a written policy and procedures framework.

In Colorado last week, highly anticipated amendments to its AI Act were submitted to the legislature. But, in a surprising turn of events this week, every single one of the proposed amendments was rejected, setting the stage for a sprint to February 1, 2026, the effective date of Colorado’s first-of-its-kind law impacting how AI is to be used with consumers.
Meanwhile, in Utah, which enacted an AI law last year that increases consumer protection but also encourages responsible innovation, amendments to its AI Policy Act (UAIP) took effect this week. The amendments are due in part to guidance found in the Best Practices for the Use of AI by Mental Health Therapists, published in April by Utah’s Office of AI Policy (OAIP).
We recently highlighted how a global arms race may mean U.S. states are best positioned to regulate AI risks, as evidenced by Colorado and Utah’s approaches, and how other states are emphasizing existing laws they say “have roles to play.” While there is still a lot of uncertainty, the outcome of the amendments in Colorado and Utah is instructive.
Colorado’s Rejected Amendments
A lot can be learned by what was rejected in Colorado this week, especially as other states, such as Connecticut, are considering adopting their own versions of an AI law for consumer protection, and as those that have already rejected such laws, such as Virginia, prepare to reconsider newer versions with wider input.
In some ways, it is not surprising that the amendments were rejected. They included opposing input from the technology sector and consumer advocates.[1] This included technical changes such as exempting specified technologies from the definition of “high risk” and creating an exception for developers that disclose system model weights (e.g., parameters, biases).
The amendments also included non-technical changes, such as eliminating the duty of a developer or deployer of a high-risk AI system to use reasonable care to protect consumers. This was always going to be untenable. But there were others that made sense, such as providing exemptions for systems below certain investment or revenue thresholds ($10 million and $5 million, respectively), which is why it is surprising that all the amendments were rejected, including an amendment that would have delayed the attorney general’s authority to enforce CAIA violations until 2027. Given the scope of the proposed amendments that have now been considered and rejected, it appears extraordinary circumstances would be needed for CAIA’s effective date to be delayed.
Utah’s AI Amendments
As previously noted, the UAIP endeavors to enable innovation through a regulatory sandbox for responsible AI development, regulatory mitigation agreements, and policy and rulemaking by the OAIP. Recently, the OAIP released findings and guidance for the mental health industry that were adopted by the legislature as amendments to the Act.
The guidance comprises 54 pages, the first 40 of which describe potential benefits and important risks associated with AI. It then examines use cases of AI in mental health therapy, especially in relation to inaccurate AI outputs, and sets forth best practices across these categories:

Informed consent;
Disclosure;
Data privacy and safety;
Competence;
Patient needs; and
Continuous monitoring and reassessment.

Emphasis is placed on competence. For example, the guidance states that therapists must maintain a high level of competence, which “involves continuous education and training to understand these AI technologies’ capabilities, limitations, and proper use.” This is consistent with how the UAIP specifies that businesses cannot blame AI for errors and violations.[2]
The guidance further states that mental health therapists should know “how frequently and under what circumstances one should expect the AI tool to produce inaccurate or undesirable outputs,” thus seeming to create a duty of care not only for AI system developers and deployers but also users. The guidance refers to these as “digital literacy” requirements.
Also, through its emphasis on continuous monitoring and reassessment, the guidance states that therapists, “to the best of their abilities,” should regularly and critically challenge AI outputs for inaccuracies and biases and intervene promptly if the AI produces incorrect, incomplete or inappropriate content or recommendations.
Based on the guidance, House Bill 452 was enacted and includes provisions relating to the regulation of mental health chatbots that use AI technologies, including the protection of personal information, restrictions on advertising, disclosure requirements and the remedies available for enforcement by the attorney general.
House Bill 452 includes an affirmative defense provision for mental health chatbots. In other words, a safe harbor from litigation initiated due to alleged harm caused by a mental health chatbot. To qualify for safe harbor protection, the supplier must develop a written policy that states the purpose of the chatbot and its abilities and limits.
In addition, the supplier must implement procedures ensuring that mental health therapists are involved in developing a review process, that the chatbot is developed and monitored consistent with clinical best practices, and that it is tested to confirm it poses no greater risk to a user than treatment by a therapist would, along with roughly ten other requirements.
As early statements of best practices, the guidance may evolve into industry standards that establish legal duties and inform the risk management policies and programs contemplated by new laws and regulations such as CAIA and the UAIP. If so, those standards can also form the basis for enforceable contract provisions.
Final Thoughts
We have previously provided recommendations, both holistic and specific, that individuals and organizations should consider to mitigate risks associated with AI, with an emphasis on data collection practices. But, as the rejected amendments in Colorado and the enacted AI amendments in Utah show, AI literacy may be the most essential requirement.

[1] For an insightful description of how the amendments died, see journalist Tamara Chuang’s excellent reporting: https://coloradosun.com/2025/05/05/colorado-artificial-intelligence-law-killed/#
[2] Utah Code Ann. § 13-2-12(2).

An AI Whistleblower Bill is Urgently Needed

Last year, thirteen brave AI whistleblowers issued a letter titled “A Right to Warn about Advanced Artificial Intelligence,” risking retaliation to highlight widespread concerns that products built without proper internal safety and security protocols, and shielded from proper oversight, are being released to and unleashed upon the public. Legislation is needed to help workers responsibly report the development of high-risk systems, which is currently occurring without appropriate transparency and oversight.
The concerns of these AI whistleblowers, in combination with the documented attempts of AI companies to stifle whistleblowing, underscore the urgent need for Congressional action to pass a best-practices whistleblower bill that specifically addresses AI employees.
As with the powerful emerging sectors that preceded it, insiders in the artificial intelligence industry will lack explicit reporting safeguards until legislation is passed. There is historical precedent for sector-based protections: Congress has enacted whistleblower protection laws in past decades covering employees across relevant industries, including nuclear energy in the 1978 Energy Reorganization Act, airlines under AIR21 in 2000, the federal government in the 1989 Whistleblower Protection Act, and Wall Street under Dodd-Frank in 2010. This legislation helps ensure that workers in those specified fields are able to speak out on issues endangering the public. Because AI’s emergence in popular consciousness is recent (ChatGPT was initially released in November 2022), legislation lags behind technological advancement, with executives urging legislators to employ “light touch” regulation. Employees working for AI companies are left without any specialized whistleblower protections.
Why We Need an AI Whistleblower Bill
The whistleblowers’ letter cited public claims from leading scholars, advocates, experts, and AI companies themselves pointing to the significant potential harms of AI technology released into the market without proper safety protocols. These concerns included further entrenchment of existing inequalities, media manipulation and misinformation, and loss of control of autonomous AI systems. The companies themselves have even published reports on their models’ concerning and risky behavior, yet they continue to deploy their products for public, business, government, and military use. Specific statements the group cited last year from companies, governments, and advocacy groups include:

Serious risk of misuse, drastic accidents, and societal disruption … we are going to operate as if these risks are existential (OpenAI) 
Toxicity, bias, unreliability, dishonesty (Anthropic) 
Offensive cyber operations, deceive people through dialogue, manipulate people into carrying out harmful actions, develop weapons (e.g. biological, chemical) (Google DeepMind) 
Exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security (US Government – White House) 
Further concentrate unaccountable power into the hands of a few, or be maliciously used to undermine societal trust, erode public safety, or threaten international security … [AI could be misused] to generate disinformation, conduct sophisticated cyberattacks or help develop chemical weapons (UK Government – Department for Science, Innovation & Technology) 
[From] inaccurate or biased algorithms that deny life-saving healthcare to language models exacerbating manipulation and misinformation (Statement on AI Harms and Policy (FAccT)) 
Algorithmic bias, disinformation, democratic erosion, and labor displacement. We simultaneously stand on the brink of even larger-scale risks from increasingly powerful systems (Encode Justice and the Future of Life Institute) 
Risk of extinction from AI…societal-scale risks such as pandemics and nuclear war (Statement on AI Risk (CAIS)) 

With guidance from the scientific community, policymakers, and the public, these risks can be adequately mitigated. However, AI companies have financial incentives to avoid effective oversight. 
Also in 2024, whistleblowers brought to light broad confidentiality and non-disparagement agreements that were used to muzzle current and former employees from voicing their concerns. OpenAI whistleblowers filed a complaint with the SEC detailing that OpenAI utilized employment agreements that included:

Non-disparagement clauses that failed to exempt disclosures of securities violations to the SEC;
Requirements that employees obtain the company’s prior consent before disclosing confidential information to federal authorities;
Confidentiality requirements applicable to agreements that themselves contained securities violations; and
Requirements that employees waive compensation that Congress intended to incentivize reporting and to provide financial relief to whistleblowers.

While OpenAI claims to have addressed its non-disclosure agreements, the chilling effect of these threats remains in company culture. It is highly concerning that OpenAI whistleblowers, who have inside knowledge of what oversight is needed, have no explicit federal legal protections whatsoever. As it stands, a whistleblower working for a major AI company could be fired for raising concerns about issues such as avenues for misuse or internal and external security vulnerabilities.
Without information from whistleblowers, the U.S. government’s ability to police and regulate this newly developing technology is curtailed, heightening the technology’s risks to public health, safety, national security, and more. Insiders must be able to disclose potential violations safely, freely, and appropriately to law enforcement and regulatory authorities.
What an AI Whistleblower Bill Needs to Include
Legislation must send the message to the AI industry, and to the tech industry at large, that violations of employees’ right to report wrongdoing will not be tolerated. Potential whistleblowers at AI companies must have comprehensive avenues to report even potential violations, instances of misconduct, and safety issues occurring throughout the field. Effective whistleblower laws require that such complaints be welcomed and rewarded as a matter of law and policy, not discouraged by companies sending employees the direct or indirect speech-chilling messages that have preceded so many catastrophes in the past.
It is critical that an AI whistleblower law be passed during the 119th Congress. Any such law must follow the solid precedents set by recent whistleblower legislation that Congress has passed either unanimously or without controversy. The most recent example of a private-sector whistleblower law incorporating the basic due process requirements necessary to protect whistleblowers is the Taxpayer First Act, 26 U.S.C. § 7623(d). That law includes the following basic procedures, all of which need to be incorporated into any AI whistleblower law:

Due Process Protections: This includes the right to file a retaliation case in federal court and a jury trial, if requested.
Protection Against Retaliation: Anti-retaliation language establishing that no employer, individual, or agent of an employer may fire, demote, blacklist, threaten, discriminate against, or harass an employee, former employee, applicant for employment, or contractor who has engaged in protected activities covered under the law, which would include providing truthful information to state or federal law enforcement or regulatory authorities.
Appropriate Damages: A whistleblower who prevails in a retaliation case must be afforded a full “make whole” remedy, including (but not limited to) reinstatement and restoration of all of the privileges of his or her prior employment, back pay, front pay, compensation for lost benefits, compensatory damages, special damages, and all attorney fees, costs, and expert witness fees reasonably incurred. Some laws also provide double back pay or punitive damages, which should also be considered. Moreover, a court must have explicit jurisdiction to afford all equitable relief, including preliminary relief.
An Adequate Definition of a Protected Disclosure: Protected whistleblower disclosures should cover reports made both internally to corporations and to other appropriate authorities, including Congress and federal or state law enforcement or regulatory agencies. Covered disclosures should include reports of threats AI may pose to national security, public health and safety, and financial frauds.
Anonymous and Confidential Reporting: The ability to report anonymously and confidentially to a company’s internal compliance program.
Prohibition Against Contractual Restrictions: A bar on contractual restrictions on the right to blow the whistle, including a prohibition on mandatory arbitration agreements that would restrict an employee from filing a complaint under the whistleblower law.
No Federal Preemption: No preemption of, or interference with, the right to file claims under other state or federal laws.

Modern anti-fraud and public safety laws uniformly include whistleblower protections similar to those outlined above, including the aforementioned Taxpayer First Act as well as the Food Safety Modernization Act, the Sarbanes-Oxley Act, the Anti-Money Laundering Act, and the National Transportation Security Act. Given the potential threats posed by AI, how companies have mishandled deployment, and the importance of ensuring that emerging AI technology is developed safely, there is an urgent need for insiders working in the AI sector to be properly protected when they lawfully report threats to the public interest.

Understanding the Utah AI Act and Newly Effective Amendments: What Your Business Needs to Know

Introduction
On May 7, 2025, the Utah Artificial Intelligence Policy Act (UAIP) amendments will go into effect. These amendments provide significant updates to Utah’s 2024 artificial intelligence (AI) laws. In particular, the amendments focus on regulation of AI in the realm of consumer protection (S.B. 226 and S.B. 332), mental health applications (H.B. 452), and unauthorized impersonations (aka “deep fakes”) (S.B. 271). 
Background (SB 149)
In March 2024, Utah became one of the first states to enact legislation specifically addressing artificial intelligence with the passage of the Utah Artificial Intelligence Policy Act (UAIP, S.B. 149). Commonly referred to as the “Utah AI Act,” these provisions create important obligations for businesses that use AI systems to interact with Utah consumers. The UAIP went into effect on May 1, 2024.
If your business provides or uses AI-powered software or services that Utah residents access, you need to understand these requirements — even if your company isn’t based in Utah. This post will help break down these key amendments and what they mean for your business operations.
GenAI Defined
The Utah Act defines generative AI (GenAI) as a system that (a) is trained on data, (b) interacts with a person in Utah, and (c) generates outputs similar to outputs created by a human (S.B. 149, Utah Code Ann. § 13-2-12(1)(a)).
Transparency and Disclosure Requirements
If a company provides services in a regulated occupation (that is, an occupation that requires a person to obtain a state certification or license to practice), the company shall disclose that the person is interacting with GenAI in the delivery of regulated services if the interaction is defined as “high risk” by the statute. The disclosure regarding regulated services shall be provided at the beginning of the interaction, orally if the interaction is verbal or in writing if it is written. If the GenAI supplier wants the benefit of the Safe Harbor, use of the AI system shall be disclosed at the beginning of any interaction and throughout the exchange of information (S.B. 226).
If a company uses GenAI to interact with a person in Utah in “non-regulated” occupations, the company must disclose that the person is interacting with a GenAI system and not a human when asked by the Utah consumer. 
S.B. 226 further added mandatory requirements for high-risk interactions related to health, financial, and biometric data, or providing personalized advice in areas like finance, law, or healthcare. Additionally, S.B. 226 granted authority to the Division of Consumer Protection to make rules to specify the form and methods of the required disclosures.
Enforcement and Penalties
The Utah Division of Consumer Protection may impose an administrative fine of up to $2,500 for each violation of the UAIP. The courts or the Utah attorney general may also impose a civil penalty of no more than $5,000 for each violation of a court order or administrative order. As made clear by S.B. 226, violations of the Act are also subject to injunctive relief, disgorgement of profits, and payment of the Division’s attorney fees and costs.
Key Takeaways
The 2024 Act requires companies to clearly and conspicuously disclose when a person is interacting with GenAI if asked by the person interacting with the AI system. The restrictions are even tighter when GenAI is used in a regulated occupation involving sensitive personal information or significant personal decisions in the high-risk categories added in 2025 under S.B. 226. In those instances, the company shall disclose the use of GenAI. If the supplier wants the benefit of the 2025 Safe Harbor under S.B. 226, the AI system shall disclose its use at the beginning of an interaction and throughout the interaction.
Conclusion
Utah, along with several other states, has taken the lead in enacting AI-related laws. It is likely that states will continue to regulate AI technology ahead of the federal government.
Stay tuned for subsequent blog posts that will provide updates on mental health applications (H.B. 452) and unauthorized impersonations (aka “deep fakes”) (S.B. 271).