Faking Out Deepfakes: The Business Of Verifying Who’s Real Or Not

Deepfake detection is still nascent, but with the rise of AI, the technology is bound to get more sophisticated.

The news reads like the script of a Hollywood movie: An internet scammer used deepfake technology to impersonate a Hong Kong-based company’s chief financial officer. A finance worker at the company was fooled by the deepfake and transferred $25 million from that company’s account, thinking he was following a directive from the CFO.

The story may sound like an extreme case, but deepfake technology is on the rise and targets businesses of all types.

According to the 2025 “Voice Intelligence & Security” report by Pindrop, a provider of voice authentication and fraud detection services, deepfake attacks rose 1,300 percent overall in 2024.

A September 2025 report from market research firm Gartner found that 62 percent of businesses experienced a deepfake attack within the last 12 months.

Voice deepfake fraud has ridden the coattails of AI’s advancement. In the U.S., there was a 173 percent increase in synthetic voice calls between Q1 and Q4 2024, according to Pindrop’s report.

As a result, a crop of new startups has emerged of late, offering business-focused solutions to combat deepfakes.

Newest Players In The Deepfake Detection Space

One such startup is Utah-based Attestiv, which has developed a video deepfake detection platform.

“Over the past months, deepfakes have advanced from a social media curiosity to a veritable threat to virtually any business,” said Nicos Vekiarides, CEO of Attestiv, in an emailed statement to MES Computing.

Attestiv combines AI algorithms with a blockchain ledger. That combination “allows comprehensive analysis of digital media” and can identify manipulations of video and images, the company said in a statement.

DebitMyData is a newly released deepfake detection platform that the company describes as an “LLM Security API Suite.” The platform is powered by reinforcement learning and blockchain-secured credentials, the company said in an email.

AI or Not is yet another player in the space; its platform can detect AI-generated images, text, music, video and deepfakes for businesses, according to the company’s website.

One of the latest startups to enter the deepfake battle arena is Netarx. MES Computing spoke with founder and CEO Sandy Kronenberg about his new endeavor and how deepfakes can impact business.

Providing End Users With A Deepfake ‘Traffic Light’

Deepfakes are prolific largely because end users are unaware of them, Kronenberg said.


“We provide a traffic light to end users, and that traffic light is on screen and automatically joins a call and/or a mobile phone call, or an email, text message or document on your screen,” Kronenberg said.

Kronenberg demonstrated how Netarx works during a Microsoft Teams call. A bot automatically joined the call, which Kronenberg said is another aspect of the platform.

“This bot is actually taking a sample of each of our faces and running it through an AI inference to determine if you’re real or you’re fake,” he explained.

However, the platform does not store your data or image, Kronenberg said, likening the way the platform works to how the TSA performs airport security.

“When you go through [TSA] security, they’re not storing [your data] they’re just comparing it to your driver’s license, your passport. In our case, we are not actually comparing it to your driver’s license. We’re comparing it to a multitude of inference models.”

AI inference models make decisions based on patterns learned from the large datasets they were trained on.
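As a rough illustration, here is a minimal sketch of that kind of check: a face crop is preprocessed and passed through a trained binary classifier that outputs a real-versus-fake probability. The PyTorch setup and the `score_face` helper are hypothetical stand-ins for illustration, not Netarx’s actual pipeline.

```python
# A minimal, hypothetical sketch of scoring one face crop with a
# trained real-vs-fake classifier (not Netarx's actual system).
import torch
from torchvision import transforms
from PIL import Image

# Standard image preprocessing: resize and convert to a tensor
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def score_face(model: torch.nn.Module, image_path: str) -> float:
    """Return the model's estimated probability that the face is fake."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    model.eval()
    with torch.no_grad():
        logit = model(batch)  # assumes a single-logit binary head
    return torch.sigmoid(logit).item()
```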

Yet such models are not always successful at detecting deepfakes.

“Some of those models work and some of them don’t. And what I mean by that is we see that the bad guys are constantly evolving their technology, and as they evolve their technology, some of the inference models fall down and don’t work, so then you have to train the AI detection to make it work again,” Kronenberg said.

Deepfakes and other security threats propelled by AI’s rapid development are so prevalent that some organizations are creating a new department to address them, he said.

“What we’re seeing is a new corporate structure that’s coming around all of these problems, and they’re calling that corporate structure trust operations,” Kronenberg said.

The four pillars of trust operations are regulatory compliance, financial approvals, HR and brand protection, he said.

“You think about it, it’s like, ‘Hey, that’s not really a picture of my CEO dancing nude on the bar, right? It’s an AI image, right?’ So that’s ... brand protection. I just had one where it was an investor [who] had passed away, and the private equity firm just needs to verify that the spouse is the trustee ... because we need to make sure, and the private equity firm needs kind of a CYA,” he said.

While deepfake detection technology may still be in its infancy, machine learning will eventually combine multiple inference models to improve detection performance, a technique known as a “stacking ensemble.”

“Our next generation [of deepfake detection] will actually be referred to as a stacking ensemble. Instead of us manually creating these ensembles, we’re going to have an AI lab that will automatically create effective stacking ensembles that [work] on their own. You’re using AI to defeat AI,” Kronenberg said.
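For readers curious what a stacking ensemble looks like in practice, here is a minimal sketch using scikit-learn. The base detectors, features and labels are placeholders chosen for illustration; Kronenberg describes the idea, not this specific implementation.

```python
# A minimal, hypothetical sketch of a stacking ensemble for
# real-vs-fake classification (illustrative; not Netarx's system).
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Each base estimator stands in for one "inference model"; the final
# estimator learns how much to trust each detector's prediction.
stack = StackingClassifier(
    estimators=[
        ("forest", RandomForestClassifier(n_estimators=100)),
        ("svm", SVC(probability=True)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,  # out-of-fold predictions for the meta-learner guard against leakage
)

# X: feature vectors extracted from media samples; y: 0 = real, 1 = fake
# stack.fit(X, y)
# p_fake = stack.predict_proba(new_samples)[:, 1]
```

The appeal of the design is the one Kronenberg describes: when an individual detector “falls down” against a new generation technique, the meta-learner can shift weight toward the detectors that still work.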