Artificial Intelligence (AI) is revolutionizing business operations, offering advancements in efficiency, decision-making, and customer engagement. However, its rapid integration into business processes brings forth a spectrum of legal and financial risks that enterprises must navigate to ensure compliance and maintain trust.
The Broad Legal Definition of AI and Its Implications
In the United States, the legal framework defines AI far more expansively than the average person might expect, potentially encompassing a wide array of software applications. Under 15 U.S.C. § 9401(3), AI is any machine-based system that:
- makes predictions, recommendations, or decisions,
- uses human-defined objectives, and
- influences real or virtual environments.
This broad definition implies that even commonplace tools like Excel macros could be subject to AI regulations. As Neil Peretz of Enumero Law notes, such an expansive definition means that businesses across various sectors must now reappraise all of their software usage to ensure compliance with new AI laws.
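To see how far the statutory prongs reach, consider a deliberately trivial sketch. The function and threshold below are invented for illustration, yet the logic arguably satisfies all three prongs of 15 U.S.C. § 9401(3): it pursues a human-defined objective (keep stock above a threshold), it makes a recommendation, and it influences a real environment (purchasing decisions).

```python
# Hypothetical illustration: a trivial reorder rule with no machine learning
# involved, which nonetheless fits the statute's three-prong definition.

def reorder_recommendation(stock_on_hand: int, reorder_point: int = 20) -> str:
    """Recommend whether to reorder based on a human-defined threshold."""
    if stock_on_hand < reorder_point:
        return "REORDER"
    return "HOLD"

print(reorder_recommendation(12))  # prints "REORDER"
```

A spreadsheet macro performing the same comparison would be functionally identical, which is precisely why commentators warn that ordinary business software may fall within scope.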
Navigating the Evolving Regulatory Landscape
The regulatory environment for AI is rapidly evolving. The European Union’s AI Act, for instance, classifies AI systems into risk categories, imposing strict compliance requirements on high-risk applications. In the United States, various states are introducing AI laws, requiring companies to stay abreast of changing regulations.
According to Jonathan Friedland, a partner with Much Shelist, P.C., who represents boards of directors of PE-backed and other privately owned companies, developments in artificial intelligence are happening so quickly that many companies of even modest size are spending significant time developing compliance programs to ensure adherence to applicable laws.
One result, according to Friedland, is that “[a]s one might expect, the sheer number of certificate programs, online courses, and degrees now offered in AI is exploding. Everyone seems to be getting into the game,” Friedland continues. “For example, the International Association of Privacy Professionals, a global organization previously focused on privacy and data protection, recently started offering its ‘Artificial Intelligence Governance Professional’ certification.” The challenge for companies, according to Friedland, is “to invest appropriately without overdoing it.”
Navigating Bias and Discrimination in AI Systems
AI systems have drawn legal challenges over algorithmic bias and accountability. The core claim is that historical data used to train AI often reflects societal inequalities, which AI systems can then perpetuate and amplify.
Sean Griffin, of Longman & Van Grack, highlights cases where AI tools have led to allegations of discrimination, such as a lawsuit against Workday, where an applicant claimed the company’s AI system systematically rejected Black and older candidates. Similarly, Amazon discontinued an AI recruiting tool after discovering it favored male candidates, revealing the potential for AI to reinforce societal biases.
To mitigate these risks, businesses should implement regular audits of their AI systems to identify and address biases. This includes diversifying training data and establishing oversight mechanisms to ensure fairness in AI-driven decisions.
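One common audit check is the EEOC’s “four-fifths rule”: if a protected group’s selection rate is less than 80% of the highest group’s rate, the outcome is flagged for potential adverse impact. The sketch below uses invented numbers purely for illustration; a real audit would use actual applicant data and appropriate statistical tests.

```python
# Minimal sketch of a four-fifths-rule check on hiring outcomes.
# All figures are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants selected."""
    return selected / applicants

def adverse_impact_ratio(rate_protected: float, rate_reference: float) -> float:
    """Protected group's selection rate relative to the reference group's."""
    return rate_protected / rate_reference

rate_reference = selection_rate(50, 100)  # reference group: 50% selected
rate_protected = selection_rate(30, 100)  # comparison group: 30% selected

ratio = adverse_impact_ratio(rate_protected, rate_reference)
print(f"impact ratio = {ratio:.2f}")  # prints "impact ratio = 0.60"
print("flag for review" if ratio < 0.8 else "within guideline")
```

A ratio of 0.60 falls well below the 0.8 guideline, so this hypothetical screening tool would warrant further investigation before continued use.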
Addressing Data Privacy Concerns
AI’s reliance on vast datasets, often containing personal and sensitive information, raises significant data privacy issues. AI-powered tools might be able to infer sensitive information, such as health risks from social media activity, potentially bypassing traditional privacy safeguards.
Because AI systems potentially have access to a wide range of data, compliance with data protection regulations like the GDPR and CCPA is crucial. Businesses must ensure that data used in AI systems is collected and processed lawfully, with explicit consent where necessary. Implementing robust data governance frameworks and anonymizing data can help mitigate privacy risks.
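One practical governance step is pseudonymizing direct identifiers before records enter an AI pipeline. The sketch below replaces a name with a salted keyed hash so records can still be joined downstream without exposing the identifier. Note that under GDPR terminology this is pseudonymization, not anonymization: the data remains personal data, and the salt must be protected. The salt value and record fields are invented for illustration.

```python
import hashlib
import hmac

# Placeholder only: in practice, store the salt in a secrets manager and
# rotate it under your data-governance policy.
SECRET_SALT = b"store-this-in-a-vault"

def pseudonymize(identifier: str) -> str:
    """Deterministic keyed hash of a direct identifier (HMAC-SHA256)."""
    return hmac.new(SECRET_SALT, identifier.encode(), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "zip": "60601", "score": 0.82}
safe_record = {**record, "name": pseudonymize(record["name"])}
print(safe_record["name"][:12] + "...")  # hash prefix, not the name
```

Because the hash is deterministic, the same person maps to the same token across datasets, preserving analytic utility while keeping raw identifiers out of the model-training environment.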
Ensuring Transparency and Explainability
The complexity of AI models, particularly deep learning systems, often results in ‘black box’ scenarios where decision-making processes are opaque. This lack of transparency undermines accountability and trust. Businesses should also be mindful of the risks of engaging third parties to develop or operate their AI solutions. In many areas of decision-making, explainability is legally required, and a black-box approach will not suffice: when a consumer credit application is denied, for example, the lender must provide the applicant with specific adverse-action reasons.
To address this, businesses should strive to develop AI models that are interpretable and can provide clear explanations for their decisions. This not only aids in regulatory compliance but also enhances stakeholder trust.
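One reason interpretable models ease this burden: with a linear score, each feature’s contribution to a decision is explicit, so the most negative contributions can be surfaced as candidate adverse-action reasons. The weights, feature names, and applicant values below are invented for illustration; real reason codes require legal review.

```python
# Hypothetical linear credit score whose per-feature contributions
# are directly inspectable. All weights and features are invented.

WEIGHTS = {"payment_history": 2.0, "utilization": -1.5, "recent_inquiries": -0.8}

def score(applicant: dict) -> float:
    """Simple linear score: sum of weight * feature value."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def adverse_action_reasons(applicant: dict, top_n: int = 2) -> list:
    """Features contributing most negatively to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    negatives = sorted((v, f) for f, v in contributions.items() if v < 0)
    return [f for v, f in negatives[:top_n]]

applicant = {"payment_history": 0.4, "utilization": 0.9, "recent_inquiries": 3}
print(score(applicant))                   # prints -2.95
print(adverse_action_reasons(applicant))  # ['recent_inquiries', 'utilization']
```

A deep neural network offers no such direct decomposition, which is why post-hoc explanation tools, or simpler surrogate models, are often layered on top when regulated decisions are involved.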
Managing Cybersecurity Risks
AI systems are both targets and tools in cybersecurity. Alex Sharpe points out that cybercriminals are leveraging AI to craft sophisticated phishing attacks and automate hacking attempts. Conversely, businesses can employ AI for threat detection and rapid incident response.
These dual uses, particularly in heavily regulated sectors such as financial services, raise the legal stakes of managing cybersecurity risk. Implementing robust cybersecurity measures, such as encryption, access controls, and continuous monitoring, is essential to protect AI systems from threats. Regular security assessments and updates can further safeguard against vulnerabilities.
Considering Insurance as a Risk Mitigation Tool
Given the multifaceted risks associated with AI, businesses should evaluate the extent to which certain types of insurance can help them manage and reduce risks. Policies such as commercial general liability, cyber liability, and errors and omissions insurance can offer protection against various AI-related risks.
Businesses can benefit from auditing business-specific AI risks and considering insurance as a risk mitigation tool. Regularly reviewing and updating insurance coverage ensures that it aligns with the evolving risk landscape associated with AI deployment.
Conclusion
While AI offers transformative potential for businesses, it also introduces significant legal and financial risks. By proactively addressing issues related to bias, data privacy, transparency, cybersecurity, and regulatory compliance, enterprises can harness the benefits of AI while minimizing potential liabilities.
Generative AI tools tend to tell the prompter what they want to hear, whether or not it is true, underscoring the importance of governance, accountability, and oversight in their adoption. Organizations that establish clear policies and risk management strategies will be best positioned to navigate the AI-driven future successfully.
To learn more about this topic, view Corporate Risk Management / Remembering HAL 9000: Thinking about the Risks of Artificial Intelligence to an Enterprise. The quoted remarks referenced in this article were made either during this webinar or shortly thereafter during post-webinar interviews with the panelists. Readers may also be interested in other articles about risk management and technology.
©2025. DailyDAC™, LLC d/b/a Financial Poise™. This article is subject to the disclaimers found here.