
July 7, 2025

Synthetic Voice Scams: A New Frontier in Fraud

We discuss the use of generative AI tools such as voice cloning and video deepfakes in targeted fraud.

In recent months, AI-powered fraud has taken a dramatic turn. Scammers now use generative AI tools such as voice cloning and video deepfakes in targeted schemes.

Voice cloning has become significantly simpler over the past year. With just a few seconds of recorded speech, services like ElevenLabs can produce unnervingly convincing replicas. Fraudsters pair cloned voices with AI chatbots, which then emotionally manipulate victims. This has enabled “vishing” (voice phishing) campaigns that trick victims into sharing credentials or wiring money. More sophisticated fraudsters run audio “pig-butchering” scams, which rely on trust built over time, using fake investment apps and romantic manipulation to extract thousands of dollars at a time.

This is a clear warning sign for enterprises. More than half of financial crime now involves synthetic media, and Deloitte estimates that global losses from AI-driven fraud may reach $40 billion by 2027. In one high-profile case last year, an enterprise lost $25 million after scammers created a deepfaked video avatar of its CFO. So what should executives do?

Enterprises need to build smarter defenses. High-tech solutions blend AI-powered deepfake detectors, biometric voice verification, and real-time behavior analytics. For instance, a sophisticated technical solution can scan video calls with deepfake detection, head-movement analysis, and pulse detection. These systems should also understand conversational context, not just voice signatures: modern defenses could flag a conversation that suddenly goes off-script, or a shift in tone during an executive call.
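To make the blending idea concrete, here is a minimal sketch of how per-signal risk scores might be fused into a single call-level decision. The function name, weights, and thresholds are all hypothetical, purely for illustration; real systems would calibrate these against labeled fraud data.

```python
# Hypothetical fusion of per-signal risk scores (each in 0.0-1.0)
# from three separate detectors: video deepfake analysis, voice
# biometric mismatch, and conversational-behavior anomaly detection.
def assess_call_risk(deepfake_score: float,
                     voice_mismatch: float,
                     behavior_anomaly: float) -> str:
    # Weighted blend: no single detector is trusted on its own,
    # but a very strong deepfake signal can short-circuit the blend.
    combined = (0.40 * deepfake_score
                + 0.35 * voice_mismatch
                + 0.25 * behavior_anomaly)
    if combined >= 0.7 or deepfake_score >= 0.9:
        return "block"       # terminate the call, alert security
    if combined >= 0.4:
        return "challenge"   # require out-of-band verification
    return "allow"

print(assess_call_risk(0.95, 0.2, 0.3))  # strong deepfake signal alone
print(assess_call_risk(0.1, 0.1, 0.1))   # low risk across the board
```

A tiered output ("allow" / "challenge" / "block") matters here: forcing every borderline call through out-of-band verification keeps false positives cheap while still stopping high-confidence fakes outright.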

Enterprises also need a cultural shift to implement identity verification protocols. Even a simple safe word can reveal an imposter. Such protocols don’t require heavy technology, just consistent processes and user training. Employees must also be educated about the rising number of audio and video deepfakes and the constant need for caution.
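To illustrate how lightweight such a protocol can be, a safe-word check for high-risk requests (say, wire transfers) might look like the sketch below. The function and normalization rules are hypothetical; the safe word itself would be agreed offline and never transmitted over the channel being verified.

```python
import hmac

# Hypothetical safe-word check used during a live call before
# acting on a high-risk request such as a wire transfer.
def verify_safe_word(supplied: str, expected: str) -> bool:
    # Normalize casing and surrounding whitespace, then compare in
    # constant time so timing differences leak nothing about the
    # expected word.
    return hmac.compare_digest(supplied.strip().lower(),
                               expected.strip().lower())

print(verify_safe_word("Blue Heron", "blue heron"))  # True
print(verify_safe_word("guess", "blue heron"))       # False
```

The point is that the technology is trivial; what matters is the process around it, including training employees to refuse any request, however urgent-sounding, that fails the check.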

As AI-aided scams become more common, sophisticated deepfake detection systems have emerged in response. Enterprises can defend themselves against this rising threat by combining technological and cultural change.
