Accenture: What We Learned When Our CEO Got Deepfaked
Rather than a mere advance in social engineering, deepfakes represent ‘a paradigm shift in the attack vector,’ says security lead Flick March.
Last May, someone pretending to be an attorney set up a video call between Accenture CEO Julie Sweet and the company’s finance head to discuss an unpaid invoice.
“There was an urgent request,” Flick March, EMEA cyber strategy lead at Accenture, told the audience at Computing’s Cyber Security Festival last week. “The finance leader came on the call. Julie was on camera and told the finance leader to do what she was asked to do. The attorney was off camera.”
But like the attorney, “Julie” was not what she seemed. She was a deepfake. Fortunately, the CFO was aware of the protocols to follow when transferring funds and, refusing to be flustered, set about validating the request. “And of course, funds did not leave the company,” said March. “However, it certainly woke us up.”
Since then, March has been working with customers on combating this fast-growing threat; one customer even turned out to have unwittingly hired deepfake employees.
‘A Paradigm Shift In The Attack Vector’
The pace of change in the field of AI-generated fakes is alarming, she said, and the scale and severity of the problem are being underestimated.
Tools that can create a deepfake video from a few seconds of audio and video footage, or even a photo, are both massively more capable and much cheaper than they were even a year ago. Last June, March spent £20,000 [$26,578.00 USD as of publishing] to create a deepfake of herself to use for training purposes; today it could be done for £20 [$26.58]. And according to March, the capabilities of dark-web providers selling deepfakes as a service have increased by 21,000 percent.
As a result, people’s ability to detect deepfakes, even after training, is falling dramatically. “Don't expect them to have fat fingers. You will not be able to detect them. After a deepfake test we did in the finance industry, even after they'd done deepfake training, 50 percent fell foul of it.”
It’s a coin toss, in other words. There is effectively no way of telling if a photo or video is real or fake just by looking at it. This represents a dramatic hijacking of one of our primary mechanisms of trust in the online world.
“It’s a paradigm shift in the attack vector, and a paradigm shift in what security needs to be to allow you to maintain core purpose and integrity under attack and under duress.”
‘You Have To Redesign Your Security’
Maintaining core purpose and integrity means securing reputation and money. The challenge posed by ultra-convincing deepfakes is that the defensive work required spans the traditional operational boundaries between cybersecurity, data governance and fraud.
“We're seeing a huge amount of activity that’s not quite fraud, not quite cyber,” March explained. The danger is that it becomes someone else’s problem, leaving gaping holes between the silos in which bad actors can operate.
“You have to redesign your security,” she insisted. “Security has to expand. It has to take authority, because your job is not just protecting the [infrastructure]. Now it’s how do you keep your company functioning? And also, what on earth do you do as response?”
She pointed to the current travails of M&S, still trying to recover from a ransomware attack, arguing that organizations need to prepare themselves for a world defined by disinformation. “Crisis comes when no-one knows what to do.”
As a plan of action, businesses should create a strategy to increase awareness and update policies for the deepfake age. They should then modernize their identity and access management (I&AM), permissions and controls, before building and testing response plans and playbooks.
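Playbooks only help if they are concrete enough to drill. As a purely illustrative sketch (the steps, owners and structure below are this article’s assumptions, not Accenture’s published guidance), a deepfake response playbook can be encoded as data and rehearsed like any other runbook:

```python
# Illustrative only: one way to encode a deepfake response playbook as data
# so it can be tested and drilled. Steps and owners are hypothetical.
DEEPFAKE_PLAYBOOK = [
    {"step": "Freeze the transaction or request pending verification",
     "owner": "finance"},
    {"step": "Verify the requester out-of-band via a known-good number",
     "owner": "finance"},
    {"step": "Preserve the call recording, metadata and message trail",
     "owner": "security"},
    {"step": "Notify fraud, legal and communications teams together",
     "owner": "security"},
    {"step": "Brief staff so the same lure fails elsewhere",
     "owner": "comms"},
]

def drill(playbook: list[dict]) -> None:
    """Walk the playbook in order -- a tabletop exercise in miniature."""
    for i, item in enumerate(playbook, 1):
        print(f"{i}. [{item['owner']}] {item['step']}")

drill(DEEPFAKE_PLAYBOOK)
```

The point of writing it down as data rather than prose is that it can be versioned, assigned and rehearsed before the crisis, which is exactly when, as March notes, no-one otherwise knows what to do.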
‘Social Engineering On Steroids’
Deepfakes are popping up everywhere. Call centers are being deepfaked, executives are being deepfaked, celebrities are being deepfaked. Increasingly they are being used in ransomware attacks too, said March. “So, they'll penetrate something then they’ll use a deepfake of the CISO to tell the team to go and turn the SOC off or turn certain systems on again. They work it out through telephone calls and look up who knows who through LinkedIn. It’s social engineering on steroids.”
In response, organizations must embrace the concept of identity security across the entire sphere of operations. There need to be secure channels for communication between professionals to make voice and video fakery harder; there needs to be proper encryption and I&AM; and important decisions must always be verified out-of-band.
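To make “verified out-of-band” concrete, here is a minimal Python sketch that gates high-risk requests behind a callback to an independently sourced number. Every name and threshold in it (KNOWN_CONTACTS, TRANSFER_THRESHOLD_GBP, the Request fields) is a hypothetical illustration, not a description of any real control:

```python
# Minimal sketch of an out-of-band verification gate for high-risk requests.
# All names and thresholds are hypothetical illustrations.
from dataclasses import dataclass

# Directory of independently verified callback numbers, maintained out of
# band (e.g. from the HR system) -- never taken from the incoming request.
KNOWN_CONTACTS = {
    "ceo@example.com": "+44 20 7946 0000",
}

TRANSFER_THRESHOLD_GBP = 10_000  # illustrative policy threshold

@dataclass
class Request:
    requester: str    # claimed identity, e.g. an email address
    channel: str      # "video_call", "phone", "email", ...
    amount_gbp: float
    urgent: bool      # "do it now" pressure is itself a red flag

def requires_callback(req: Request) -> bool:
    """Return True if the request must be confirmed on a separately
    sourced channel before any funds move."""
    if req.amount_gbp >= TRANSFER_THRESHOLD_GBP:
        return True
    # Live audio or video can no longer be treated as proof of identity.
    if req.channel in {"video_call", "phone"} or req.urgent:
        return True
    return False

def callback_number(req: Request) -> str | None:
    # Look the number up ourselves; never dial one supplied by the caller.
    return KNOWN_CONTACTS.get(req.requester)

req = Request("ceo@example.com", "video_call", 250_000, urgent=True)
if requires_callback(req):
    print("Verify via", callback_number(req), "before moving any funds")
```

The key design choice is that the callback number comes from a directory the receiver already trusts, never from the request itself; a deepfaked caller can supply a face, a voice and a phone number, but not an entry in your HR system.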
Technical solutions for assurance are emerging. For example, Google, Microsoft, the BBC and others are working on provenance standards, “a nutritional food label for what you're looking at.”
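The best known of these efforts is the C2PA “Content Credentials” standard, backed by Microsoft, the BBC, Google and others, which embeds a signed provenance manifest in the media file itself. The sketch below is only a crude presence check for that embedded data, on the assumption that scanning for the “c2pa” label C2PA stores in its JUMBF boxes is enough to spot a manifest; real verification means validating the manifest’s cryptographic signatures with a proper C2PA SDK:

```python
# Crude sketch: detect whether a media file carries any C2PA-style
# "Content Credentials" data. Presence of the marker is NOT verification --
# real validation checks the manifest's signatures with a C2PA SDK.
from pathlib import Path

def has_provenance_marker(path: str) -> bool:
    """Scan the raw bytes for the 'c2pa' label that C2PA manifests
    embed (via JUMBF boxes) in JPEGs, PNGs and other containers."""
    data = Path(path).read_bytes()
    return b"c2pa" in data

if __name__ == "__main__":
    for f in ("press_photo.jpg", "suspect_video.mp4"):  # hypothetical files
        try:
            status = ("manifest marker found" if has_provenance_marker(f)
                      else "no provenance data")
        except FileNotFoundError:
            status = "file not found"
        print(f, "->", status)
```

Until such labels are ubiquitous, their absence proves nothing; the label tells you where content came from only when the producer chose to attach one.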
Elsewhere, organizations such as the UK’s NCSC and the US’s NIST offer advice and frameworks for protecting against AI and deepfake scams and disinformation. Accenture also hosts an educational website, First AI ID Kit, which highlights the issues around deepfakes.
But when you can’t trust your eyes and ears, it’s vital to inculcate a culture of critical thinking and insist on adherence to protocol. This is what saved Accenture from what could have been a substantial loss. A similar scenario cost another company $25 million.
If there is an urgent request to ignore protocol, if the person is acting out of character, or if the video is just too good to be true, maybe it’s a deepfake at work. People need permission to be careful, March said.
“When someone phones you at 4 PM on a Friday night, you should have the right to say, ‘sorry, I don't think it's you, phone me back on Monday’.”
This article originally appeared on our sister site Computing.