The poll found that more than half of C-suite and other executives (51.6 percent) anticipate an increase in deepfake attacks on their companies, focused on financial and accounting data, over the next year.
That expectation may reflect firsthand experience: the survey also found that 15.1 percent of the executives polled had experienced at least one deepfake financial fraud incident during the past year.
As AI technology advances, scammers are using deepfakes delivered by email, audio, or video, alone or in combination, to steal money from businesses and financial institutions.
V.S. Subrahmanian, a computer science professor at Northwestern University, has been studying deepfakes for years and gave The Epoch Times what he says is a common example of a scam.
“Say a bank gets an email from someone purported to be Joe Schmoe, and you call him, and he has the right IDs and sounds like him and he wants to wire money. Well, you would believe him because he has all the credentials,” he said.
“The bank would end up executing the wire transfer to [the scammer using the] deepfake voice and then the account is debited from someone who has no way of recovering the money.”
In the Deloitte executive survey report, Mike Weil, Deloitte’s digital forensics leader and managing director, warned that the frequency of scams is increasing.
“Deepfake financial fraud is rising, with bad actors increasingly leveraging illicit synthetic information like falsified invoices and customer service interactions to access sensitive financial data and even manipulate organizations’ AI models to wreak havoc on financial reports,” he said.
“The good news is that concern about future incidents seems to peak after the first attack, with subsequent events tempering concerns as organizations gain more experience and become better at detecting, managing, and preventing fraudsters’ deepfake schemes.”
In a separate report issued earlier this year, Deloitte said generative AI could drive fraud losses in the United States to $40 billion by 2027.
Instituting Protocols and Common Sense
Kevin Libby has served as a fraud analyst for several organizations and is now at Javelin Strategy & Research, where he works on the company’s Fraud and Cyber Security Team. He said cybercriminals predominantly use AI to circumvent security and identity authentication protocols, steal money, run scams, and damage corporate brands and stakeholder reputations.
Libby told The Epoch Times that employing institutional protocols is one of the quickest and most effective ways to combat threats where AI voice deepfakes are used.
“What I hope would become common is that businesses would craft stringent protocols. Of the cases I have read about, there are examples like the boss calls and says send me money and without questioning the request, it gets sent out,” he said.
“If someone is making requests for money, it comes down to authenticating the individual and having a protocol in place that says I can’t do this without these things in place first.”
Several tools are now being developed specifically for deepfake audio detection, including one from the University at Buffalo called the Deepfake-O-Meter.
According to the school, the tool aims to broaden public access to deepfake identification, letting users detect manipulated videos or images.
However, while companies are devoting more resources to upgrading their audio-detection capabilities through their security and IT departments, Subrahmanian said they cannot rely solely on deepfake detectors to keep them safe. He said there are also free, commonsense ways of weeding out potential audio deepfake scammers.
“A simple instrument to use is saying, ‘We’re going to call you back, and we’ll process this request today in the next few hours.’ Then they can call the person making the financial request on their home phone or cell and see if it really is who they say it is,” he said.
“Common sense is a highly underrated quality. We need a combination of security with technology combined with common sense approaches.”
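The callback step Subrahmanian describes amounts to a simple out-of-band verification gate: no transfer executes on the strength of an inbound request alone. Below is a minimal sketch in Python of that idea; the names used here (TransferRequest, NUMBERS_ON_FILE, place_callback, approve_transfer) are hypothetical illustrations, not any institution’s actual system.

```python
from dataclasses import dataclass

@dataclass
class TransferRequest:
    account_id: str
    amount_cents: int
    requester_name: str

# Contact numbers collected when the account was opened. The key point
# is that these never come from the inbound call or email itself.
NUMBERS_ON_FILE = {
    "acct-1001": "+1-555-0100",
}

def place_callback(number: str, request: TransferRequest) -> bool:
    """Hypothetical stand-in for a human agent dialing the number on
    file and confirming the request with the customer verbally."""
    print(f"Calling {number} to confirm a ${request.amount_cents / 100:,.2f} "
          f"transfer requested by {request.requester_name}...")
    return True  # in practice, the agent records the customer's answer

def approve_transfer(request: TransferRequest) -> bool:
    """Hold any transfer that cannot be confirmed out of band."""
    number = NUMBERS_ON_FILE.get(request.account_id)
    if number is None:
        return False  # no out-of-band channel on file: hold for review
    # Verify over a channel the requester (or a scammer) does not control.
    return place_callback(number, request)

if __name__ == "__main__":
    req = TransferRequest("acct-1001", 25_000_00, "Joe Schmoe")
    print("approved" if approve_transfer(req) else "held for review")
```

The design point is the one Subrahmanian makes: a deepfake voice can pass every check on the inbound channel, so the confirmation has to travel over a channel the caller does not control.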
While common sense and in-house training are inexpensive, Libby said investment in security and IT is a necessary complement for companies, and it will be expensive, with few guarantees, given how rapidly deepfake technology evolves.
“You need the key stakeholders to buy in and understand this is a real threat so they will understand there’s ROI involved,” he said, referring to return on investment.
“Part of the problem is that it’s not enough just to build a system. You have to continually update it. Even then, there’s no guarantee it will be effective tomorrow.”