As artificial intelligence becomes increasingly integrated into business operations, IT contracts covering the provision of AI systems are evolving to include critical safeguards. One emerging concept is the AI circuit breaker: a contractual mechanism that provides for an intervention, or override, where an AI system exhibits undesirable or harmful behaviour.

When contracting for AI, businesses should look to include these safeguards proactively in their contracts to mitigate the risk of AI-driven processes causing unintended harm.

What Is an AI Circuit Breaker?

Borrowing from engineering, an AI circuit breaker triggers a pause or override when an AI system acts unpredictably, exceeds acceptable risk levels, or falls below a minimum performance threshold. This helps ensure that businesses remain in control of automated processes and mitigates the risk of unintended consequences.
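In practice, the contractual trigger typically maps onto a technical monitoring layer wrapped around the AI system. The following is a minimal, hypothetical sketch in Python of what such a threshold-based breaker might look like; the metrics, thresholds, and class names are illustrative assumptions rather than any specific provider's implementation or contractually mandated design.

```python
# Illustrative only: a simple threshold-based circuit breaker around an AI system.
# The confidence and error-rate thresholds below are hypothetical assumptions; in a
# real deployment they would reflect the performance levels agreed in the contract.

class CircuitBreakerTripped(Exception):
    """Raised when the AI system breaches an agreed risk or performance threshold."""

class AICircuitBreaker:
    def __init__(self, min_confidence=0.7, max_error_rate=0.05, window=100):
        self.min_confidence = min_confidence  # minimum acceptable model confidence per call
        self.max_error_rate = max_error_rate  # maximum tolerated error rate over the window
        self.window = window                  # number of recent outcomes to monitor
        self.recent_errors = []               # rolling record of recent failures (True/False)
        self.open = False                     # once open, calls are blocked pending human review

    def record(self, was_error: bool) -> None:
        """Record the outcome of a completed call and trip the breaker if the error rate is breached."""
        self.recent_errors.append(was_error)
        if len(self.recent_errors) > self.window:
            self.recent_errors.pop(0)
        error_rate = sum(self.recent_errors) / len(self.recent_errors)
        if error_rate > self.max_error_rate:
            self.open = True  # trip the breaker: pause automated processing

    def guard(self, confidence: float) -> None:
        """Check thresholds before acting on an AI output; raise if intervention is required."""
        if self.open:
            raise CircuitBreakerTripped("Breaker open: human review required before resuming.")
        if confidence < self.min_confidence:
            raise CircuitBreakerTripped(f"Confidence {confidence:.2f} below agreed threshold.")

# Hypothetical usage: breaker.guard(model_confidence) before acting on an output,
# then breaker.record(was_error) once the outcome of that output is known.
```

The key design point this sketch illustrates is that the breaker enforces two distinct triggers commonly discussed for circuit breakers: a per-decision quality floor and a sustained performance threshold measured over a rolling window, with human review required before automated processing resumes.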

AI circuit breakers can take multiple forms, including:

Why Are AI Circuit Breakers Necessary?

Circuit breakers can benefit both providers and customers as they seek to mitigate the risks associated with the deployment of AI systems, including:

Particular benefits of circuit breakers include retaining control and human oversight over AI systems and providing contractual certainty: traditional contractual rights to suspend and terminate services are unlikely to offer sufficient clarity about each party's rights and obligations if an AI system begins to exhibit undesirable or harmful behaviour.

Drafting and Negotiating AI Circuit Breakers

Key considerations when drafting and negotiating circuit breakers include:

Summary

The very nature of AI is that it continually ‘learns’ and evolves, often in an opaque manner, meaning that providers and deployers of AI systems may not fully understand the power and capability of the technology at the outset of a deployment.

AI circuit breakers can provide an important safety net for AI system deployments. As AI continues to shape the business and legal landscape, incorporating these safeguards can help providers and deployers of AI systems mitigate AI-driven risks by implementing appropriate guardrails, maintaining oversight and accountability, and clearly defining each party's rights and responsibilities in the event of undesirable or harmful behaviour.
