The Federal Bureau of Investigation (FBI) has warned that criminals are exploiting generative artificial intelligence (AI) to commit fraud on a much larger scale than before, adding that the advanced technology increases the “believability of their schemes.”
As more criminals use AI to commit fraud and extortion, it is becoming increasingly difficult to identify AI-generated content.
In an audio scam, the criminals use AI-generated “short audio clips containing a loved one’s voice to impersonate a close relative in a crisis situation, asking for immediate financial assistance or demanding a ransom,” the alert said.
A man’s voice said that the girl had been kidnapped. However, DeStefano confirmed that her daughter was actually inside the home.
Believing the video, the individual invested in the platform and ended up losing at least $12,000, which amounted to his life savings.
Criminals also use AI to create real-time video chats that impersonate well-known individuals, such as company executives or authority figures.
AI-generated text and images allow fraudsters to create a sense of legitimacy for their schemes. For instance, AI tools are used to create social media profiles with voluminous content to make them look like real accounts.
AI image generation enables criminals to create fake driver’s licenses or other government and banking documents, which are used to carry out impersonation scams.
AI Explicit Content Threat
An FBI alert from June last year warned about malicious actors using AI to manipulate images and videos to create sexually explicit content. To generate such content, the threat actors use videos and photos that targets have uploaded to their social media accounts or elsewhere online. After the fake content is created, it is circulated on social media or pornographic websites, the FBI said.
“The photos are then sent directly to the victims by malicious actors for sextortion or harassment,” the agency said. “Once circulated, victims can face significant challenges in preventing the continual sharing of the manipulated content or removal from the internet.”
One of the teen victims, Elliston Berry, shared her story during a Senate field hearing in June. “I was left speechless as I tried to wrap my head around the fact that this was occurring,” she said.
If the bill is eventually enacted, social media platforms would be obligated to remove such content within 48 hours of a victim filing a complaint.
“For young victims and their parents, these deepfakes are a matter requiring urgent attention and protection in law,” Cruz said.