While many AI regulations may apply to a company operating in the Insurtech space, these laws are not uniform in their obligations. Many of these regulations concentrate on different regulatory constructs, and a company’s focus will determine which obligations apply to it. For example, certain jurisdictions, such as Colorado and the European Union, have enacted AI laws that specifically address “high-risk AI systems,” placing heightened burdens on companies deploying AI models that fall into this category.
What is a “High-Risk AI System”?
Although many deployments that are considered a “high-risk AI system” in one jurisdiction may also meet that categorization in another jurisdiction, each regulation technically defines the term quite differently.
Europe’s Artificial Intelligence Act (EU AI Act) takes a graduated, risk-based approach to compliance obligations for in-scope companies. In other words, the higher the risk associated with an AI deployment, the more stringent the requirements for the company’s AI use. Under Article 6 of the EU AI Act, an AI system is considered “high-risk” if it meets both conditions of subsection (1)[1] of the provision or if it falls within the list of AI systems considered high-risk in Annex III of the EU AI Act,[2] which includes AI systems dealing with biometric data, used to evaluate natural persons’ eligibility for benefits and services, used to evaluate creditworthiness, or used for risk assessment and pricing in relation to life and health insurance.
The Colorado Artificial Intelligence Act (CAIA), which takes effect on February 1, 2026, adopts a risk-based approach to AI regulation. The CAIA focuses on the deployment of “high-risk” AI systems that could potentially create “algorithmic discrimination.” Under the CAIA, a “high-risk” AI system is defined as any system that, when deployed, makes—or is a substantial factor in making—a “consequential decision”; namely, a decision that has a material legal or similarly significant effect on the provision or denial of, or the cost or terms of, insurance, among other covered areas.
Notably, even proposed AI bills that have not been enacted have treated insurance-related activity as falling within the proposed regulatory scope. For instance, on March 24, 2025, Virginia’s Governor Glenn Youngkin vetoed the state’s proposed High-Risk Artificial Intelligence Developer and Deployer Act (also known as the Virginia AI Bill), which would have applied to developers and deployers of “high-risk” AI systems doing business in Virginia. Compared to the CAIA, the Virginia AI Bill defined “high-risk AI” more narrowly, focusing only on systems that operate without meaningful human oversight and serve as the principal basis for consequential decisions. Even under that failed bill, however, an AI system would have been considered “high-risk” if it was specifically intended to autonomously make, or be a substantial factor in making, a “consequential decision,” defined as a “decision that has a material legal, or similarly significant, effect on the provision or denial to any consumer of,” among other things, insurance.
Is Insurtech Considered High Risk?
Both the CAIA and the failed Virginia AI Bill explicitly identify an AI system making a consequential decision regarding insurance as “high-risk,” which suggests a trend toward regulating AI use in the Insurtech space as high-risk. However, the inclusion of insurance on these laws’ “consequential decision” lists does not mean that all Insurtech leveraging AI will necessarily be considered high-risk under these or future laws. For instance, under the CAIA, an AI system is high-risk only if, when deployed, it “makes or is a substantial factor in making” a consequential decision. Under the failed Virginia AI Bill, the AI system had to be “specifically intended to autonomously make, or be a substantial factor in making, a consequential decision.”
Thus, the scope of regulated AI use, which varies from one applicable law to another, must be considered together with the business’s proposed application to determine the appropriate AI governance in a given case. While various insurance use cases that leverage AI, such as those that improve underwriting, fraud detection, and pricing, could result in consequential decisions that impact an insured, other internal uses of AI may not be considered high-risk under a given threshold. For example, leveraging AI to assess a strategic approach to marketing insurance, or to make new client onboarding or claims processing more efficient, likely does not trigger the consequential-decision threshold required to be considered high-risk under the CAIA or the failed Virginia AI Bill. Further, even if an AI system is involved in a consequential decision, that alone may not render it high-risk; the CAIA, for instance, requires that the AI system make the consequential decision or be a substantial factor in making it.
Although the EU AI Act does not expressly label Insurtech as high-risk, a similar analysis is possible because Annex III of the EU AI Act lists certain AI uses that may be implicated by an AI system deployed in the Insurtech space. For example, an AI system leveraging a model to assess creditworthiness in developing a pricing model in the EU likely triggers the law’s high-risk threshold. Similarly, AI modeling used to assess whether an applicant is eligible for coverage may also trigger the high-risk threshold. Under Article 6(3) of the EU AI Act, even if an AI system fits a categorization under Annex III, the provider of the AI system should perform the necessary analysis to assess whether the AI system poses a significant risk of harm to individuals’ health, safety, or fundamental rights, including by materially influencing decision-making. Notably, even if an AI system falls into one of the categories in Annex III, if the provider determines through documented analysis that the deployment of the AI system does not pose a significant risk of harm, the AI system will not be considered high-risk.
What To Do If You Are Developing or Deploying a “High-Risk AI System”?
Under the CAIA, when dealing with a high-risk AI system, various obligations come into play, and these obligations differ for developers[3] and deployers[4] of the AI system. Developers are required to display a disclosure on their website identifying any high-risk AI systems they have developed or intentionally and substantially modified and explaining how they manage known or reasonably foreseeable risks of algorithmic discrimination. Developers must also notify the Colorado Attorney General and all known deployers of the AI system within 90 days of discovering that the AI system has caused or is reasonably likely to cause algorithmic discrimination. Developers must also make significant additional documentation about the high-risk AI system available to deployers.
Under the CAIA, deployers have different obligations when leveraging a high-risk AI system. First, they must notify consumers when the high-risk AI system will be making, or will be a substantial factor in making, a consequential decision about the consumer. This notice includes (i) a description of the high-risk AI system and its purpose, (ii) the nature of the consequential decision, (iii) contact information for the deployer, (iv) instructions on how to access the required website disclosures, and (v) information regarding the consumer’s right to opt out of the processing of the consumer’s personal data for profiling. Additionally, when use of the high-risk AI system results in a decision adverse to the consumer, the deployer must disclose to the consumer (i) the reason for the consequential decision, (ii) the degree to which the AI system was involved in the adverse decision, and (iii) the type of data used to make the decision and where that data was obtained, and must give the consumer the opportunity to correct data used about the consumer as well as to appeal the adverse decision via human review. Deployers must also make additional disclosures regarding information about, and risks associated with, the AI system. Given that the failed Virginia AI Bill proposed similar obligations, it is reasonable to view the CAIA as a roadmap for high-risk AI governance considerations in the United States.
Under Article 8 of the EU AI Act, high-risk AI systems must meet several requirements that are more systemic in nature. These include the implementation, documentation, and maintenance of a risk management system that identifies and analyzes reasonably foreseeable risks the system may pose to health, safety, or fundamental rights, as well as the adoption of appropriate and targeted risk management measures designed to address those identified risks. High-risk AI governance under this law must also include:
- Validating and testing data sets involved in the development of AI models used in a high-risk AI system to ensure they are sufficiently representative, free of errors, and complete in view of the intended purpose of the AI system;
- Technical documentation demonstrating that the high-risk AI system complies with the requirements set out in the EU AI Act, drawn up before the system goes to market and regularly maintained thereafter;
- The AI system must allow for the automatic recording of events (logs) over the lifetime of the system;
- The AI system must be designed and developed in a manner that allows for sufficient transparency, so that deployers are positioned to properly interpret the system’s output. The AI system must also be accompanied by instructions describing its intended purpose and the level of accuracy against which it has been tested;
- High-risk AI systems must be developed in a manner that allows them to be effectively overseen by natural persons while in use; and
- High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity, and must perform consistently in those respects throughout the lifecycle of the AI system.
When deploying high-risk AI systems, in-scope companies must allocate the necessary resources not only to assess whether they fall within this categorization, but also to ensure that the full set of requirements is adequately considered and implemented prior to deployment of the AI system.
The Insurtech space is growing in parallel with the expanding patchwork of U.S. AI regulations. Prudent growth in the industry requires awareness of the associated legal dynamics, including emerging regulatory concepts nationwide.
[1] Subsection (1) states that an AI system is high-risk if it is “intended to be used as a safety component of a product (or is a product) covered by specific EU harmonization legislation listed in Annex I of the AI Act and the same harmonization legislation mandates that the product that incorporates the AI system as a safety component, or the AI system itself as a stand-alone product, undergo a third-party conformity assessment before being placed on the EU market.”
[2] Annex III of the EU AI Act can be found at https://artificialintelligenceact.eu/annex/3/.
[3] Under the CAIA, a “Developer” is a person doing business in Colorado that develops or intentionally and substantially modifies an AI system.
[4] Under the CAIA, a “Deployer” is a person doing business in Colorado that deploys a high-risk AI system.