According to Cybersecurity Dive, artificial intelligence is no longer experimental technology: a recent report from The Conference Board finds that more than 70% of S&P 500 companies now identify AI as a material risk in their public disclosures, up from just 12% in 2023.

The article reports that major companies are no longer just testing AI in isolated pilots; they are embedding it across core business systems, including product design, logistics, credit modeling, and customer-facing interfaces. At the same time, these companies acknowledge in their public disclosures that they are confronting significant security and privacy challenges, among others.

Perhaps one of the biggest drivers of these risks is a lack of governance. PwC’s 2025 Annual Corporate Directors Survey reveals that only 35% of corporate boards have formally integrated AI into their oversight responsibilities, a clear indication that governance structures are struggling to keep pace with technological deployment.

Not surprisingly, innovation is moving considerably faster than governance, and that gap is contributing to the risks most of the S&P 500 now disclose. If large companies are in this position, there is a good chance that small and mid-sized companies are as well. Strengthening governance through sensible risk assessment, robust security frameworks, and training may help narrow that gap.
