How Organizations Can Fight ‘Voiceprint’ Fraud, Following OpenAI CEO’s Warning
With the rise of AI comes the rise of voice authentication fraud.
Voiceprint, the biometric technology that uses the unique features of an individual’s voice as a digital passkey for identity verification and authentication, is a hot industry. It’s expected to reach a staggering $11.5 billion global market valuation by 2032, up from a $2.5 billion valuation in 2023, according to an Allied Market Research report.
The technology is widely used in the financial, telecommunications, and health care industries, according to the report. However, one tech titan is sounding the alarm about financial institutions continuing to adopt voiceprint in the era of AI.
“A thing that terrifies me is apparently there are still some financial institutions that will accept the voiceprint as authentication,” OpenAI CEO Sam Altman said at a Federal Reserve conference in Washington, D.C., last month, the Associated Press reported.
[RELATED: ChatGPT-5’s Launch And The ‘AI Arms Race’]
“That is a crazy thing to still be doing. AI has fully defeated that,” Altman continued, according to the AP.
Voice deepfake fraud has ridden the coattails of AI’s advancement. In the U.S., there was a 173 percent increase in synthetic voice calls between Q1 and Q4 2024, according to the 2025 Voice Intelligence & Security Report by Pindrop, a provider of voice authentication and fraud detection services.
Moreover, there was a 1,300 percent increase overall in deepfake attacks in 2024, according to the report.
And the attacks are predicted to become more frequent. Pindrop’s report forecasts a 155 percent increase in deepfake calls and a 162 percent increase in deepfake-related fraud this year.
[RELATED: No More Nigerian Prince: Today’s Cyber Threats Require Strong Offense]
Pindrop’s report attributes the rise in voice-related and other deepfake attacks to advances in AI. Altman said that AI’s ability to successfully impersonate voices will create a “significant impending fraud crisis” in the financial sector, according to AP’s report.
“I think Sam Altman’s warning is well-founded when it comes to older, static voiceprint systems. Traditional systems that rely solely on matching a stored template to a spoken phrase are increasingly ineffective in the face of AI-driven fraud,” Ralph Rodriguez, president and chief product officer at cybersecurity-focused Daon, said in an emailed statement shared with MES Computing.
Daon offers digital identity verification and authentication solutions. According to Rodriguez, legacy voiceprint systems are especially vulnerable to attacks.
“Deepfake generators can replicate pitch, cadence, and timbre so convincingly that legacy systems have little chance of telling a real caller from a synthetic clone. The risk is only compounded when you consider these systems often apply the same verification threshold to all transactions, whether someone is simply checking a balance or authorizing a high-value transfer. That leaves institutions incredibly vulnerable to fraud and regulatory scrutiny,” he said.
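The uniform-threshold weakness Rodriguez describes can be illustrated with a minimal sketch. This is not Daon’s implementation; the function names, actions, and threshold values are hypothetical, chosen only to show why one static voice-match cutoff treats a balance inquiry and a high-value transfer as equally risky:

```python
# Hypothetical illustration of the uniform-threshold weakness: a deepfake
# that clones a voice might score, say, 0.90 against a stored template.
# With one static threshold it passes every transaction type.
STATIC_THRESHOLD = 0.85

def static_check(voice_score: float) -> bool:
    """Legacy approach: the same bar for every request."""
    return voice_score >= STATIC_THRESHOLD

# Risk-tiered alternative: the bar rises with transaction risk, and the
# riskiest actions require a second factor regardless of voice score.
TIERS = {
    "balance_inquiry": 0.85,
    "address_change": 0.95,
    "high_value_transfer": None,  # None = always escalate to another factor
}

def tiered_check(voice_score: float, action: str) -> str:
    threshold = TIERS[action]
    if threshold is None:
        return "escalate"  # e.g. a one-time code on a trusted device
    return "allow" if voice_score >= threshold else "escalate"

deepfake_score = 0.90
print(static_check(deepfake_score))                         # True: passes everything
print(tiered_check(deepfake_score, "balance_inquiry"))      # allow
print(tiered_check(deepfake_score, "high_value_transfer"))  # escalate
```

Under the static scheme, the cloned voice clears every action; under the tiered scheme, the same score only unlocks low-risk requests.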
Unlike Altman, who effectively urged financial institutions to stop using voice authentication altogether, Rodriguez said that voice authentication still has “a strong role to play,” and emphasized that securing those systems requires modern capabilities.
The “right foundation” for preventing voiceprint attacks consists of “layered, adaptive models that combine verification throughout an interaction, real-time detection of fraud, and the flexibility to escalate when risk signals demand it,” he said.
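The layered, adaptive pattern Rodriguez outlines can be sketched in a few lines. The signal names, weights, and decision rules below are illustrative assumptions, not a vendor’s actual logic; the point is that several independent signals are checked during the interaction, and any elevated risk triggers a step-up or a block:

```python
# Minimal sketch of a layered, adaptive decision: combine a voiceprint
# match score, a deepfake-artifact detector, and behavioral risk signals,
# then escalate when any risk signal demands it. All values are illustrative.
from dataclasses import dataclass

@dataclass
class CallSignals:
    voice_match: float     # 0..1, similarity to the enrolled voiceprint
    synthetic_prob: float  # 0..1, output of a deepfake-artifact detector
    behavior_risk: float   # 0..1, device/geolocation/velocity anomalies

def decide(s: CallSignals, high_risk_action: bool) -> str:
    # Hard stop: strong evidence of synthetic audio ends the session.
    if s.synthetic_prob > 0.8:
        return "block"
    # Any elevated signal, or a sensitive action, triggers step-up auth.
    if (high_risk_action or s.synthetic_prob > 0.3
            or s.behavior_risk > 0.5 or s.voice_match < 0.9):
        return "step_up"  # e.g. push approval on a registered device
    return "allow"

print(decide(CallSignals(0.95, 0.1, 0.2), high_risk_action=False))  # allow
print(decide(CallSignals(0.95, 0.1, 0.2), high_risk_action=True))   # step_up
print(decide(CallSignals(0.92, 0.85, 0.2), high_risk_action=False)) # block
```

A legitimate caller with clean signals sails through, while the same voice-match score paired with deepfake artifacts or a sensitive request is escalated or blocked, which is the frictionless-but-layered behavior described above.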
“The most effective systems already detect the subtle, tell-tale artifacts of AI-generated audio while maintaining a frictionless experience for legitimate users. What will define the future is how these proven defenses continue to evolve, adapting to new threat vectors and meeting emerging regulatory demands. Organizations already deploying this technology will be best positioned to protect customers, maintain compliance, and stay ahead of the next wave of synthetic fraud,” Rodriguez said.