AI ‘Can Make Bad Actors More Convincing,’ Warns Apple Co-Founder

Steve Wozniak, co-founder of Apple, talks to people during a launch event in Cupertino, Calif., on Sept. 12, 2017. REUTERS/Stephen Lam
Naveen Athrappully

Steve Wozniak, the co-founder of Apple, predicts that artificial intelligence will transform the nature of scams, making them more convincing, and expects stricter laws to control the use of such technologies.

“AI is so intelligent it’s open to the bad players, the ones that want to trick you about who they are,” Wozniak said in an interview with the BBC. Though AI won’t replace human beings because it lacks emotion, the technology can make bad actors more convincing, he said, pointing to ChatGPT’s ability to generate text that sounds “so intelligent.”

Criminals are already using AI to scam people with cloned voices convincing enough to fool many listeners. Wozniak wants AI-generated content to be clearly labeled and regulated.

According to Wozniak, the responsibility for content generated via AI and posted in public space must fall on those who publish such content.

“A human really has to take the responsibility for what is generated by AI,” he said. Wozniak suggested implementing strict regulation to hold accountable the big tech firms that think “they can kind of get away with anything.”

However, Wozniak expressed skepticism about the possibility of regulators getting things right when it comes to AI. “I think the forces that drive for money usually win out, which is sort of sad.”

Though the technology cannot be stopped now, people can be better educated to spot AI scams aimed at siphoning off their personal information, Wozniak stated.

AI Scams

On May 2, online protection company McAfee published a report about AI technology fueling a rise in online voice scams. Just three seconds of audio was found to be enough to clone an individual’s voice, the report stated.

Of the more than 7,000 people from seven countries surveyed by the company, a quarter of the adults had experienced some kind of AI voice scam, with 10 percent experiencing it personally and 15 percent seeing it happen to someone they knew. Seventy-seven percent of victims of such scams ended up losing money.

An incident involving the misuse of AI for criminal purposes took place in April when Jennifer DeStefano, a mother from Arizona, received an unexpected phone call from her 15-year-old daughter, who was sobbing and asking for help.

A man on the call claimed to have kidnapped the girl. DeStefano quickly confirmed that her daughter was safe; she was actually in the house. The criminal had cloned the daughter’s voice in an attempt to scam her mother.

Meanwhile, California’s Department of Financial Protection and Innovation has issued a warning about AI investment scams.

In April, the agency issued desist and refrain orders against Maxpread Technologies for offering unqualified securities. The firm claimed to use AI to trade crypto assets, promising daily returns of at least 0.6 percent. The company also deceived investors about the identity of its CEO by using a fake, AI-generated avatar programmed to read a script.

One scammer used AI to create songs that were sold as leaked tracks by popular R&B singer-songwriter Frank Ocean, according to a report by Vice, raising thousands of dollars in the process. The scammer hired a musician to create nine fake Ocean tracks using a model trained on high-quality vocal snippets of the singer.

Controlling AI

The growing abilities of AI technology have triggered worries among experts. Wozniak was one of the more than 27,500 signatories of a recent open letter by the Future of Life Institute, which urged a temporary pause on AI experiments.
“Contemporary AI systems are now becoming human-competitive at general tasks and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the March 22 letter asked. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?”

“Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The Biden administration is soliciting public input on measures to regulate AI tools like ChatGPT. The National Telecommunications and Information Administration wants public opinion on how to “shape the AI accountability ecosystem,” such as the various kinds of trust and safety testing that developers should conduct on artificial intelligence systems.

“Just as food and cars are not released into the market without proper assurance of safety, so too AI systems should provide assurance to the public, government, and businesses that they are fit for purpose,” the agency said in an April 11 statement.

Naveen Athrappully is a news reporter covering business and world events at The Epoch Times.