OpenAI recently published research summarizing how criminal and nation-state adversaries are using large language models (LLMs) to attack companies, develop malware, and run phishing campaigns. In addition, the use of deepfakes has increased, including audio and video spoofs used in fraud campaigns.

Security firm Ironscales reports that deepfake attacks are increasing and proving effective for threat actors: although “most organizations are aware of the danger,” they “lag behind in [implementing] technical solutions for defending against deepfakes.” The firm found that the “vast majority of midsized firms (85%) have seen attempts at deepfake and AI-voice fraud, and more than half (55%) suffered financial losses from such attacks.”

CrowdStrike’s 2025 Threat Hunting Report projects that audio deepfakes will double in 2025, and Ironscales reports that 40% of companies surveyed have experienced deepfake audio and video impersonations. Although companies are training their employees on deepfake schemes, they have “been unsuccessful in fending off such attacks and have suffered financial losses.”

Ways to mitigate the effects of deepfakes include:

- Understanding the risk that deepfake audio and video pose to the organization
- Responding with defined verification procedures for sensitive requests (see the sketch after this list)
- Educating employees to recognize and report suspected impersonation attempts
- Monitoring for attempted fraud and unusual requests
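One commonly recommended verification procedure is out-of-band confirmation: a sensitive request that arrives over voice or video is held until it is confirmed through a channel the requester did not choose, such as calling back a number already on file. The Python sketch below illustrates the idea only; the names (PaymentRequest, requires_out_of_band_check) and the threshold value are hypothetical, not taken from any vendor's product or the reports cited above.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    requester: str         # claimed identity, e.g., "CFO"
    channel: str           # how the request arrived: "voice", "video", "email", ...
    amount: float
    callback_on_file: str  # number from the internal directory, NOT one supplied by the caller

# Channels where deepfake impersonation is a known risk (illustrative policy values)
HIGH_RISK_CHANNELS = {"voice", "video"}
APPROVAL_THRESHOLD = 10_000.00

def requires_out_of_band_check(req: PaymentRequest) -> bool:
    """Flag requests that must be confirmed on a channel the requester
    did not choose before any funds are released."""
    return req.channel in HIGH_RISK_CHANNELS or req.amount >= APPROVAL_THRESHOLD

# Usage: a flagged request is held until a human calls the number on file.
req = PaymentRequest("CFO", "voice", 250_000.00, "+1-555-0100")
if requires_out_of_band_check(req):
    print(f"HOLD: confirm via callback to {req.callback_on_file} before releasing funds")
```

The key design point is that the verification channel comes from a trusted record, never from the suspicious call itself; a deepfaked caller can supply a convincing voice, but not the number stored in the company directory.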

Threat actors will continue to use AI to develop and hone new strategies to evade detection and compromise systems and data. Taking these steps now can help mitigate the risks and consequences.
