The FBI has published a new statement warning that “deepfake” technology is increasingly being used in job interviews, allowing applicants to assume someone else’s identity over video in order to gain access to sensitive data.
This technology has been used in the entertainment industry for years, appearing in mainstream products such as “Rogue One: A Star Wars Story” and Kendrick Lamar’s music video for “The Heart Part 5.” Fans have also used it to create purportedly superior versions of the young Robert De Niro in “The Irishman” and Mark Hamill in “The Mandalorian”; the YouTuber responsible for the “Mandalorian” edit was later hired by Lucasfilm’s effects house Industrial Light & Magic as a direct result of his efforts.
However, deepfake technology has also been put to more sinister uses. In March, a fabricated video of Ukrainian President Volodymyr Zelenskyy telling Ukrainian troops to surrender circulated online, in a likely effort to undermine the morale of the Ukrainian military.
Now the FBI is warning of a rising trend: deepfake video being used in live interviews for remote positions that would grant access to sensitive information.
“The remote work or work-from-home positions identified in these reports include information technology and computer programming, database, and software related job functions,” the FBI’s statement reads. “Notably, some reported positions include access to customer PII, financial data, corporate IT databases and/or proprietary information.”
The agency also warned of the use of “voice spoofing,” a separate technology that allows users to imitate the voice of another person through digital manipulation, which could lend further credibility to this pernicious form of identity theft.
The federal government has warned about potential misuse of deepfake technology since at least 2019, when the Department of Homeland Security issued a report on the emergent technology, which it predicted could be used to spread misinformation by preying on people’s inclination to take what they see at face value.
“The threat of Deepfakes and synthetic media comes not from the technology used to create it, but from people’s natural inclination to believe what they see, and as a result deepfakes and synthetic media do not need to be particularly advanced or believable in order to be effective in spreading mis/disinformation,” the report noted.
However, experts say that deepfake videos can often be identified by a few key giveaways visible to the naked eye, such as lip movements that fall out of sync with the audio, unnatural blinking or lighting, and blurring where the face meets the hair or background.
“Trust your senses and gut feeling,” says one such expert, Bayer. “Always ask yourself: Does this make sense? Could this really be true? Look carefully and always look twice. Focus on details.”
While these methods are imprecise and subjective, they may be the best means available for detecting deepfakes until automated detection technologies mature.
Until then, recruiters hiring for remote tech positions must remain vigilant and scrupulous as they conduct video interviews for roles with access to sensitive data.