Researchers from the Queensland University of Technology (QUT) are calling on every business and organisation to develop a response plan for the growing threat of deepfake cyber attacks, warning that the perils are real.
AI can not only collect images and video of a person from various sources and feed them into neural networks to mimic that person, but also use deep neural networks to generate entirely new content.
This means counterfeit voices, images, and videos that appear undeniably genuine can be easily created and manipulated by malicious actors to extract money, harm brand reputation, steal intellectual property (IP), and shake customer confidence.
“With such risks becoming more common, anyone whose reputation is of corporate value, including every CEO or board member that has been featured on earnings calls, YouTube videos, TED talks, and podcasts, must brace themselves for the risks of synthetic media impersonation,” the paper said.
Dan Halpin, the CEO and founder of Cybertrace, a private cyber-investigations firm, agrees that the world is now facing an alarming situation.
“Unfortunately, sophisticated deepfake scams targeting Australian businesses and organisations pose a growing threat,” he told The Epoch Times in an email.
Deepfake Threats Real and Already Happening
The warning comes after the CEO of Australia’s largest bank, the Commonwealth Bank of Australia (CBA), Matt Comyn, was impersonated as part of a cyber scam scheme.

“We launched an automatic platform for passive income. You invest in gold, banks, stocks, funds and other profitable instruments. The platform will automatically generate profits,” Mr. Comyn appears to say during a news programme mimicking the production of Nine News bulletins.
However, it was not too hard to see that something was amiss: the Australian-born and educated CEO was speaking with an American accent, and the movement of his mouth did not quite match his face.
“The scammers misuse well-known news brands and the CommBank brand to try and legitimise their scam,” the bank warned.
“Scammers have even used fraudulent, AI-generated videos of [Mr] Comyn, and others, to try and convince people to invest.”
To manage the risks, the researchers recommend a six-phase synthetic media incident response playbook comprising preparation, assessment, detection, containment and eradication, post-incident, and coordination procedures.
Halpin also warned that almost every business and organisation should be aware of, and alert to, deepfake threats.
“As technology evolves and becomes more accessible, any sector that relies on trust, credibility, or sensitive information should remain vigilant and adopt preventive measures against deepfake threats,” he said.
We Are Wired to Believe Synthetic Media
Drawing on theories of human behaviour, the researchers argue that individuals can distinguish human and deepfake faces only about 50 percent of the time, and that consumers have little choice but to believe what they see, read, and hear online.

Due to the innate tendency to avoid cognitive overload, individuals are inclined to trust visual, information-rich media by simplifying the evaluation process, as “the richness of audiovisual material requires the allocation of more cognitive resources, which can lead to cognitive overload.”
“We evaluate the sources of richer audiovisual messages less systematically than leaner information presented via text, assigning more credibility to modalities such as video and audio than we do to text and images,” the paper said.
More worryingly, the researchers argue that deepfakes often cause people to stop trying to reach a genuine understanding of information, thereby eroding the credibility of other information sources.
“These new synthetic realities generate ‘reality apathy’ by causing people to give up trying to discern between what is authentic and synthetic, ceasing their efforts to become informed citizens and thereby potentially eroding the perceived credibility of fundamental civic media, politics, academic institutions—and organizations.”