The Federal Trade Commission described a "big wave" of AI-assisted phone scams in February, with criminals using generative AI to conduct IRS impersonation fraud with a scale and convincingness that previous fraud operations could not match. The Washington Post reported the same month on how scammers are using deepfake audio and video to extract money from ordinary taxpayers — not through elaborate corporate heists, but through mass-scale consumer fraud that targets the vulnerable and the distracted. The numbers behind these reports are striking: 487 deepfake-enabled financial attacks in Q2 2025, up 41% from the previous quarter, with approximately $347 million in losses. That is a single quarter. The trajectory is not levelling off.
The mechanism is worth understanding in concrete terms. A scammer acquires a short voice sample — from a voicemail, a social media video, a phone call — and uses commercially available voice-cloning software to generate convincing audio in that person's voice. They then call a family member, posing as the cloned person in trouble and in urgent need of money, or send a voice message that appears to come from a senior executive authorising a wire transfer. The technology required to do this is neither sophisticated nor expensive. What AI has done is eliminate the specialised skill that previously limited this kind of fraud to professional criminal operations; now it is accessible to anyone with a laptop and an afternoon.
At the high end of the market, the attacks are more elaborate. A Hong Kong finance worker was tricked into transferring $25 million after participating in a video call in which every other participant — including what appeared to be a known colleague — was a deepfake. The FBI has issued warnings about AI-generated voice memos impersonating senior US government officials. Microsoft's threat intelligence teams have documented North Korean state actors using AI to improve the scale and sophistication of their social engineering operations. These are not the same as mass consumer fraud, but they share an underlying dynamic: AI has lowered the cost and raised the quality of deception across the board.
The policy response has been fragmented. The TAKE IT DOWN Act addresses non-consensual intimate imagery but not financial fraud. Individual states have passed laws criminalising deepfakes in specific contexts — elections, sexual content — but there is no federal statute specifically targeting AI-assisted financial fraud as a distinct category. Existing wire fraud and identity theft laws apply, but prosecutorial frameworks designed for human-executed crimes do not map cleanly onto AI-assisted operations that may involve no human operator making individual decisions about each victim.
What makes this particularly difficult to address is that the same voice-cloning and video-synthesis technology is used for legitimate purposes — dubbing films into other languages, creating accessible content for people with disabilities, personalising marketing. Banning the technology would cause substantial collateral damage; regulating its misuse requires the kind of surgical precision that financial crime law has rarely achieved against rapidly evolving threats. The FTC can issue warnings and pursue cases after the fact; it cannot prevent the infrastructure from being built or deployed.
The burden, for now, is falling on individuals and institutions to adapt their verification practices. Banks and financial institutions have begun requiring secondary authentication for large transfers and flagging unusual requests regardless of apparent source. But consumer-facing fraud — the grandparent scam, the IRS impersonator — reaches people who are not operating inside institutional security frameworks, and for whom "verify before you transfer" is advice that arrives after the money is already gone. The gap between where AI-assisted fraud is and where the defences are is measurable in hundreds of millions of dollars per quarter, and it is widening.
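As a rough sketch of the kind of institutional check described above, the snippet below shows one way a "flag large or unusual transfer requests for out-of-band confirmation" rule could look. Everything in it (the threshold, the channel names, the requires_secondary_verification function) is hypothetical and illustrative, not drawn from any real bank's controls.

```python
# Hypothetical sketch: the kind of rule an institution might run before
# executing a transfer instruction. The threshold, channel names, and data
# model are invented for illustration.

from dataclasses import dataclass

# Channels whose apparent source can now be spoofed cheaply with cloned audio or video.
SPOOFABLE_CHANNELS = {"phone_call", "voice_message", "video_call", "email"}
LARGE_TRANSFER_THRESHOLD = 10_000  # illustrative figure, not a real policy


@dataclass
class TransferRequest:
    amount: float
    channel: str       # how the instruction arrived
    known_payee: bool  # has this payee been paid before?


def requires_secondary_verification(req: TransferRequest) -> bool:
    """Return True if the request should be confirmed out-of-band
    (for example, a call back to a number already on file) before any
    money moves, regardless of how convincing the request sounded."""
    if req.amount >= LARGE_TRANSFER_THRESHOLD:
        return True
    if req.channel in SPOOFABLE_CHANNELS and not req.known_payee:
        return True
    return False


if __name__ == "__main__":
    urgent_call = TransferRequest(amount=25_000, channel="video_call", known_payee=False)
    print(requires_secondary_verification(urgent_call))  # True: hold the transfer and verify
```

The point of a rule like this is that it does not try to judge whether the voice or video is genuine; it simply refuses to let apparent authenticity substitute for confirmation over a second channel.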