
Combating Voice Cloning and Deepfake Fraud

Feb 24, 2025

The following article was written by Travis DeForge, Director of Offensive Cybersecurity at Abacus Group, and originally appeared on teiss.


With AI capabilities growing in sophistication and new attack vectors emerging all the time, cyber-risks are rising sharply for businesses worldwide. Among the most concerning trends is the use of generative AI to produce deepfake audio and video scams, which bad actors are leveraging in financially motivated attacks such as wire fraud.

In 2024, nearly half of all businesses globally reported incidents of deepfake fraud, according to a survey commissioned by Regula. These figures highlight the growing threat facing organizations, a challenge that continues to intensify as AI grows more capable and accessible.

Bad actors no longer require hours of high-quality audio to clone an individual’s voice. A single minute of clear speech, taken from a keynote address, podcast, or social media video, will often be sufficient. Once cyber-criminals create a voice model, they can run it in real time to make phone calls that appear to come from a trusted person.

These tactics take advantage of the trust people typically place in a voice they recognize. Messages are often urgent: an executive “calling” from a busy conference to request an immediate wire transfer, or an IT leader “needing” password resets as soon as possible to meet an unmovable deadline. By weaving realistic context into the call (noisy backgrounds, plausible references to business challenges) attackers can lower their target’s guard long enough to complete the scam.

A Threat with No Bounds

Voice cloning attacks can impact any business, from a sole trader right through to the largest conglomerate. Small firms typically lack extensive security resources, and their training programs may be patchy and inconsistent, so employees responsible for fielding phone requests may be unprepared for sophisticated voice fraud.

Larger organizations may, in contrast, have stronger security budgets, but they also tend to have more complex attack surfaces. Attackers can consequently hide more easily in a big entity’s phone and email channels, impersonating an executive with lower odds of detection. 

Whatever the company size, the end result is often similar: significant financial loss and potential reputational harm.

Even previously secure methods of authentication, such as biometric voice matching, can now be readily bypassed. Attackers can clone a voice with freely available open-source software and then exploit older voice-based verification systems, meaning voice biometrics are no longer sufficient on their own. Over-reliance on a single measure leaves gaps in defenses that adversaries can quickly evolve to exploit.

Building Multi-layered Defenses

AI-driven detection tools remain key to delivering protection. They can analyze video and audio streams for anomalies and detect subtle changes in visuals or speech. However, technology alone is not enough. Employee education is equally crucial.

Training programs need to highlight how swiftly fraudsters can now clone voices and stage highly convincing calls. To protect against this, every staff member should learn the “verify, then trust” principle: if someone calls with an unusual request, hang up and call them back on a known number from the firm’s internal directory.

Another protective step is introducing code words for especially sensitive conversations. Two colleagues can agree on a phrase known only to them. If an urgent phone request comes in that sounds legitimate, they can hang up, call back and ask for the agreed-upon phrase to confirm the caller’s identity.
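The callback and code-word steps above amount to a simple out-of-band verification policy. As a minimal sketch (the action names, directory flag, and function are hypothetical illustrations, not part of any product described in the article), the decision logic might look like this:

```python
# Hypothetical sketch: encoding the "verify, then trust" workflow as a
# policy check. Action names and steps are illustrative only.

SENSITIVE_ACTIONS = {"wire_transfer", "password_reset", "payroll_change"}

def verification_steps(action: str, caller_in_directory: bool) -> list:
    """Return the out-of-band checks a phoned-in request should trigger."""
    steps = []
    if action in SENSITIVE_ACTIONS:
        # Never act on the inbound call itself: hang up and call back
        # on a number taken from the firm's internal directory.
        steps.append("hang_up_and_call_back_known_number")
        # For especially sensitive conversations, confirm the pre-agreed
        # code word known only to the two colleagues.
        steps.append("confirm_code_word")
    if not caller_in_directory:
        # Unknown callers making any request get escalated.
        steps.append("escalate_to_security_team")
    return steps
```

The point of expressing the policy this way is that it never trusts the inbound channel: a cloned voice can pass every conversational check, but it cannot answer a callback placed to a number the attacker does not control.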

Regulation and Culture Combined

Regulation will become increasingly key in addressing the escalating threat from voice cloning and deepfake fraud. While much guidance remains voluntary today, we are seeing mounting pressure to bring more stringent cyber-security standards into law. Organizations may eventually face specific mandates on testing for AI-driven threats, and that’s likely to mean that routine updates to risk assessments and penetration testing will become the new normal.

In the future, a tighter regulatory landscape will need to be allied to the development of more security-conscious business cultures. Many organizations have thorough processes for patching software vulnerabilities or running antivirus scans. Far fewer are prepared for human-targeted manipulations.

Ultimately, the battle against AI-driven fraud calls for constant vigilance, advanced detection, and comprehensive staff training. Leaders who treat the fight as an urgent priority are better equipped to prevent loss, protect reputations, and sustain trust with investors and clients.

The methods of attackers are evolving rapidly, but so are the tools and strategies available for defense. By uniting employees, technology, and robust procedures under the banner of a unified security program, businesses can maintain a strong defense against the rising tide of deepfake and voice cloning threats.


Learn more about how your firm can benefit from our comprehensive IT and cybersecurity services.

Contact Us