FBI Says More People Using ‘Deepfake’ Technology to Apply for Jobs

A man using a computer in Dongguan, China's southern Guangdong Province, on Aug. 4, 2020. Nicolas Asfouri/AFP via Getty Images
Nicholas Dolinger

The FBI has published a new statement warning about the increasingly prevalent use of “deepfake” technology in job interviews, in which applicants assume another person’s identity during live video interviews in order to gain access to sensitive data.

On June 28, the FBI released a public service announcement about the growing problem, in which an applicant superimposes another person’s face over their own during a live job interview for remote tech work. In doing so, the applicant hopes to use the fraudulent identity to gain access to sensitive, valuable information.

Deepfake technology has been used in the entertainment industry for years, appearing in mainstream products such as “Rogue One: A Star Wars Story” and Kendrick Lamar’s music video for “The Heart Part 5.” Fans have also used the technology to create purportedly superior versions of the young Robert De Niro in “The Irishman” and the young Mark Hamill in “The Mandalorian” (the YouTuber responsible for the “Mandalorian” edit was later hired by Disney as a direct result of his efforts).

However, deepfake technology has also been put to more sinister uses. In March, a fabricated video of Ukrainian President Volodymyr Zelenskyy telling Ukrainian troops to surrender circulated across the internet, in a likely effort to undermine the morale of the Ukrainian military.

Now the FBI is warning of a rising trend in which deepfake video is used in live interviews to gain access to the sensitive information entrusted to remote workers.

“The remote work or work-from-home positions identified in these reports include information technology and computer programming, database, and software related job functions,” the FBI’s statement reads. “Notably, some reported positions include access to customer PII, financial data, corporate IT databases and/or proprietary information.”

The agency also warned of the use of “voice spoofing,” a separate technology that allows users to imitate the voice of another person through digital manipulation, which could lend further credibility to this pernicious form of identity theft.

The federal government has warned about potential misuse of deepfake technology since at least 2019, when the Department of Homeland Security issued a report on the emerging technology, predicting that it could be used to spread misinformation by preying on people’s inclination to believe what they see at face value.

“The threat of Deepfakes and synthetic media comes not from the technology used to create it, but from people’s natural inclination to believe what they see, and as a result deepfakes and synthetic media do not need to be particularly advanced or believable in order to be effective in spreading mis/disinformation,” the report noted.

Since the 2019 report, deepfake technology has become far more widespread on the internet. A report by the Estonian cybersecurity startup Sentinel counted 145,227 such videos in 2020, more than nine times as many as the year before.

The new technologies present a host of problems for cybersecurity, as existing tools for distinguishing deepfakes from real content remain highly unreliable. According to a report from the Dutch threat-intelligence company Sensity, programs designed to identify deepfake videos accepted them as authentic 86 percent of the time.

However, experts say that deepfake videos may be identified by a few key giveaways visible to the naked eye.

Julia Bayer, a journalist specializing in deepfake detection, recommends several strategies for spotting such media. Telltale signs include uncanny mouth movements, unusual or overly symmetrical faces, mismatched earrings or glasses frames, disembodied fingers or strands of hair, and discordant lighting, among others.

“Trust your senses and gut feeling,” says Bayer. “Always ask yourself: Does this make sense? Could this really be true? Look carefully and always look twice. Focus on details.”

While these methods are imprecise and subjective, they may be the most dependable means of detecting deepfakes until automated detection tools become more reliable.

Until then, recruiters for remote tech jobs must remain vigilant and scrupulous as they conduct video interviews for positions with access to sensitive data.