Over 70% of S&P 500 companies now flag AI as a risk in their annual Form 10-K filings. Two years ago? Just 12%. Think about that dramatic swing. It’s tempting to dismiss this increase as corporate lawyers covering their bases. But dig into what these companies actually choose to highlight in their disclosures, and a more revealing picture emerges. 

New research from The Conference Board and Harvard Law School analyzed these disclosures and found something surprising: the most frequently disclosed AI risks among large public companies aren’t what you might expect.

Risk factor disclosure drafting is inherently defensive and often influenced by peer benchmarking, so these disclosure trends don’t necessarily reveal what companies view as their most significant risks. They do, however, provide insight into the themes companies consider important enough to address in their public filings. The disclosure patterns indicate the areas companies most commonly flag as potential pain points and, more importantly, where governance efforts may need to focus right now.

An Obsession with Reputational Risk 

Here’s what companies most commonly chose to highlight: 38% of S&P 500 firms now cite reputational risk in connection with AI — making it the single most frequently disclosed AI risk category.

This isn’t abstract anxiety. It reflects hard-won lessons from watching AI failures go viral. A discriminatory hiring algorithm makes headlines. A chatbot generates offensive content, and screenshots spread across social media. A recommendation engine produces biased results and customers defect en masse. Such incidents don’t just create PR headaches — they can also trigger investor skepticism, regulatory inquiries, and class-action lawsuits. In addition, recent Federal Trade Commission activity around AI, along with recent SEC and DOJ “AI washing” enforcement cases, likely informs why companies are highlighting these particular risks.

The specific reputational threats companies identify offer insight into the issues they perceive as most salient when deploying AI:

Implementation failures top the disclosure list. Forty-five companies explicitly warned that AI projects that fail to deliver promised outcomes can erode stakeholder confidence. We’ve seen this movie before with other technology adoptions: overpromise, underdeliver, watch trust evaporate. But AI failures feel different because the technology was marketed as transformative. And regulators have signaled heightened scrutiny of AI-related claims, which companies appear to be anticipating in their risk factor disclosures.

Consumer-facing applications create a second pressure point. Forty-two companies cited direct customer interaction as a point of vulnerability. When your AI chatbot gives wrong medical advice or your product recommendation engine displays racially biased results, customers see it immediately. Consumer brands’ disclosures indicate they understand that AI mistakes in public view may damage reputations faster than operational failures behind closed doors.

Privacy concerns create a third pressure point. Twenty-four companies — concentrated in technology, healthcare, and financial services — disclosed that handling sensitive data with AI tools may expose them to regulatory risks and create reputational vulnerabilities.

What’s revealing here is the gap between technical failure and perceived failure. Many companies disclosed risks relating to the perception that they’ve deployed AI irresponsibly rather than the risks associated with the technical malfunction itself. The risk isn’t simply that an algorithm makes a mistake; it’s that stakeholders conclude leadership exercised poor judgment or that the organization’s values are compromised.

But reputational risk isn’t the only thing keeping companies up at night.

The Cybersecurity Double Bind

While reputational risk dominated the disclosure landscape, cybersecurity risk tied to AI appeared in 20% of filings — and the nature of these identified risks reveals a fundamental challenge: AI simultaneously expands your attack surface and strengthens your adversaries.

The largest group of companies — 40 firms — described AI as a “force multiplier” that intensifies risks associated with cyberattacks. AI enables adversaries to scale intrusion attempts, compress detection windows, and automate social engineering attacks with unprecedented sophistication. Traditional security defenses assume human-speed threats; AI-enabled attacks adapt faster than human defenders can respond.

But the more operationally significant disclosure theme involves third-party and vendor risk. Eighteen companies explicitly warned that dependence on cloud providers, SaaS platforms, and AI vendors can create vulnerabilities that strong internal safeguards cannot offset if these third parties are compromised.

This echoes recent Federal Trade Commission scrutiny of partnerships between Big Tech cloud providers and AI developers. When a handful of vendors control the infrastructure and foundational models powering AI deployments across industries, concentrated risk becomes systemic risk. A vendor’s security failure creates downstream exposure for your organization; its architectural vulnerability becomes yours.

The disclosure patterns reveal companies wrestling with an uncomfortable truth: you can’t simply build a bigger wall around AI systems. The attack vectors may differ. The threat actors may be more sophisticated. And your most significant vulnerabilities may be inherited from vendors whose security practices you cannot fully audit.

So What Should Your Organization Actually Do?

While risk factor disclosure patterns don’t necessarily reflect internal prioritization, the themes companies emphasize can indicate areas where organizations anticipate heightened governance attention. Two priorities emerge clearly:

For Reputational Risk: Build Defensible Documentation

Strong documentation can help mitigate both litigation and reputational fallout when AI incidents occur. Organizations with mature AI governance maintain: (1) pre-deployment testing protocols with defined thresholds, (2) continuous post-deployment monitoring protocols, and (3) rigorous governance and audit decision trails. If AI-related litigation arises, your defense rests on proving you exercised reasonable judgment at deployment. Vague policy statements won’t cut it.
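To make this concrete, here is a minimal sketch of what a pre-deployment gate with defined thresholds and an append-only audit trail might look like in practice. The threshold values, metric names, and the evaluate_model stub are hypothetical illustrations, not a standard or any particular vendor’s tooling.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical thresholds an organization might define in its AI governance policy.
THRESHOLDS = {
    "min_accuracy": 0.90,        # overall predictive accuracy
    "max_disparity_gap": 0.05,   # e.g., selection-rate gap across protected groups
    "max_toxicity_rate": 0.01,   # share of sampled outputs flagged as harmful
}

@dataclass
class GateDecision:
    model_id: str
    metrics: dict
    passed: bool
    failures: list
    reviewer: str
    timestamp: str

def evaluate_model(model_id: str) -> dict:
    """Stub: in practice this would run the organization's pre-deployment test suite."""
    return {"accuracy": 0.93, "disparity_gap": 0.07, "toxicity_rate": 0.004}

def pre_deployment_gate(model_id: str, reviewer: str) -> GateDecision:
    metrics = evaluate_model(model_id)
    failures = []
    if metrics["accuracy"] < THRESHOLDS["min_accuracy"]:
        failures.append("accuracy below minimum threshold")
    if metrics["disparity_gap"] > THRESHOLDS["max_disparity_gap"]:
        failures.append("disparity gap exceeds maximum threshold")
    if metrics["toxicity_rate"] > THRESHOLDS["max_toxicity_rate"]:
        failures.append("toxicity rate exceeds maximum threshold")

    decision = GateDecision(
        model_id=model_id,
        metrics=metrics,
        passed=not failures,
        failures=failures,
        reviewer=reviewer,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # Append-only audit trail: one JSON record per review decision.
    with open("ai_gate_audit_log.jsonl", "a") as log:
        log.write(json.dumps(asdict(decision)) + "\n")
    return decision

if __name__ == "__main__":
    result = pre_deployment_gate("resume-screener-v2", reviewer="governance-committee")
    print("Deploy approved" if result.passed else f"Blocked: {result.failures}")

The point is less the specific metrics than the pattern: a recorded decision, measured against thresholds defined before deployment, that can later be produced to show reasonable judgment was exercised.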

For Cybersecurity Risk: Treat Vendors as Critical Components of Your AI Security Posture 

The vendor risk disclosures reveal an uncomfortable truth: your AI security posture depends heavily on suppliers whose practices you cannot fully control. Standard vendor management approaches may not fully address AI-specific considerations. AI vendor agreements may benefit from provisions that general SaaS contracts lack, including restrictions on model training, expanded incident notification requirements, and indemnification coverage for AI-generated outputs that create liability. Board-level cybersecurity briefings may now also cover AI red teaming exercises, third-party risk assessments specific to AI vendors, and supply chain vulnerability analyses. Directors overseeing cybersecurity programs may also wish to seek clarity on how management has integrated AI-specific security controls into existing frameworks and where traditional controls may need adjustment.

The disclosure surge from 12% to 72% in two years signals that companies are increasingly choosing to address AI-related risks in their public filings, a trend that appears to track growing organizational attention to how AI intersects with operational, regulatory, and cybersecurity risk management. The question is whether your organization has built the documentation infrastructure and vendor oversight mechanisms to address these risks.


Bottom Line:

Reputation trumps technology: Companies disclosed risks related to trust erosion and stakeholder backlash more than technical failures, indicating that AI governance, testing, and transparency are essential to protecting brand value.

Vendor dependencies create inherited risk: Third-party AI providers expose organizations to security and compliance vulnerabilities they cannot fully control, risks that can be mitigated through enhanced due diligence and AI-specific contractual protections.

Strong documentation beats vague policy statements: Rigorous testing, continuous monitoring, and governance and audit decision trails provide the best protection when AI systems produce discriminatory or harmful outcomes — vague “responsible AI” policy commitments do not.
