OpenAI recently published research summarizing how criminal and nation-state adversaries are using large language models (LLMs) to attack companies and to create malware and phishing campaigns. In addition, the use of deepfakes, including audio and video spoofs used in fraud campaigns, has increased.
Although “most organizations are aware of the danger,” they “lag behind in [implementing] technical solutions for defending against deepfakes.” Security firm Ironscales reports that deepfake attacks are increasing and proving effective for threat actors: it found that the “vast majority of midsized firms (85%) have seen attempts at deepfake and AI-voice fraud, and more than half (55%) suffered financial losses from such attacks.”
CrowdStrike’s 2025 Threat Hunting Report projects that audio deepfakes will double in 2025, and Ironscales reports that 40% of companies surveyed have experienced deepfake audio and video impersonations. Although companies are training their employees to recognize deepfake schemes, many have “been unsuccessful in fending off such attacks and have suffered financial losses.”
Ways to mitigate the effects of deepfakes include:
- Training employees on how to detect, respond to, and report deepfakes;
- Creating policies that limit the ability of any single person to cause a compromise;
- Embedding multiple levels of authorization for wire transfers, invoice payments, payroll, and other financial transactions (a simple illustration of such a check follows this list); and
- Employing tools to detect threats that may be missed by employees.
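Deepfake-driven wire fraud often succeeds because a single deceived employee can release funds alone. To make the multi-level authorization point concrete, below is a minimal sketch in Python of a dual-approval check. The roles, threshold, and field names are hypothetical illustrations, not any vendor's API or any organization's actual policy, and would need to mirror a company's real payment workflow.

```python
# Minimal sketch of a dual-approval rule for outgoing wire transfers.
# All roles, thresholds, and field names below are hypothetical.
from dataclasses import dataclass, field

# Hypothetical roles permitted to approve payments.
APPROVER_ROLES = {"finance_manager", "controller", "cfo"}

@dataclass
class Approval:
    approver_id: str   # unique employee identifier
    role: str          # role held by the approver

@dataclass
class WireTransfer:
    amount: float
    destination_account: str
    requested_by: str
    approvals: list[Approval] = field(default_factory=list)

def is_authorized(transfer: WireTransfer, high_value_threshold: float = 25_000.0) -> bool:
    """Return True only if the transfer has enough independent approvals.

    Assumed rules for illustration:
      * every approval must come from an approver role,
      * the requester cannot approve their own transfer,
      * at least two distinct approvers are required,
      * high-value transfers additionally require CFO-level sign-off.
    """
    valid = [
        a for a in transfer.approvals
        if a.role in APPROVER_ROLES and a.approver_id != transfer.requested_by
    ]
    distinct_approvers = {a.approver_id for a in valid}
    if len(distinct_approvers) < 2:
        return False
    if transfer.amount >= high_value_threshold:
        return any(a.role == "cfo" for a in valid)
    return True

if __name__ == "__main__":
    transfer = WireTransfer(amount=50_000.0, destination_account="XX-1234",
                            requested_by="emp-001")
    transfer.approvals.append(Approval("emp-002", "finance_manager"))
    transfer.approvals.append(Approval("emp-003", "cfo"))
    print(is_authorized(transfer))  # True: two distinct approvers, including a CFO
```

The key design point, regardless of the tooling used, is that no single request, voice call, or email can move money: approvals must come from distinct, independently verified people, with higher-value transactions requiring additional sign-off through a separate channel.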
Threat actors will continue to use AI to develop and hone new strategies to evade detection and compromise systems and data. Understanding the threat, responding to it promptly, educating employees, and monitoring for suspicious activity can help mitigate the risks and consequences.