Humans Locked in a Digital Arms Race With Robots

It sounds like science fiction, but experts and leaders are now warning that bots could be calling the shots on everything from product purchases to elections.
A robot powered by artificial intelligence is displayed at a stand during the International Telecommunication Union (ITU) AI for Good Global Summit in Geneva, Switzerland, on May 30, 2024. Fabrice Coffrini/AFP via Getty Images
Crystal-Rose Jones
Analysis

If the internet were the real world, it might very well resemble a sci-fi movie in which robots have taken over the planet or, at the very least, started living alongside human beings.

A tell-tale sign of this amalgam of the organic and robotic realms is the absurd artificial intelligence (AI)-generated imagery popping up on social media accounts.

Some show puppies sleeping in a box in the rain, begging users to share.

Others show children in dire poverty or war zones.

Sometimes, on closer inspection, these “children” have three limbs, six fingers, or two noses—the result of an AI that can store information but lacks a human’s understanding of what makes up an actual body.

Then there are AI creations that make no attempt to disguise themselves as homeless puppies or sad children—for example, the “shrimp Jesus” fad.

Images of Jesus Christ spliced in various ways with shrimp started to flood social media, clocking up an astounding number of shares, likes, and comments in what some are now terming “engagement hacking.”

Are Your Social Media Friends Humans or AI Bots?

The concept works like this: an AI creates an image intended either to tug at people’s emotions or to play on their urge to comment or interact by creating controversy or talking points. These images are then commented on by additional AI accounts in what is, essentially, a robot-run forum, populated by and for other robots.

In the case of the shrimp Jesus images, their sheer surreal absurdity spurred countless “likes” and interactions, boosting the profiles they appeared on and tricking the Facebook algorithm into treating them as a worthy source of engagement.
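To see why the trick pays off, consider a toy ranking rule that scores posts by their interactions. This is a hypothetical sketch for illustration only; Facebook’s real ranking system is proprietary and far more complex.

```python
# A toy engagement-weighted ranking rule. An illustrative assumption,
# not Facebook's actual algorithm: it only shows why inflated
# interaction counts translate into reach.

def engagement_score(likes: int, comments: int, shares: int) -> float:
    """Weight heavier interactions (comments, shares) above simple likes."""
    return 1.0 * likes + 3.0 * comments + 5.0 * shares

posts = [
    {"title": "Genuine holiday photo", "likes": 40, "comments": 5, "shares": 1},
    {"title": "AI 'shrimp Jesus' image", "likes": 12_000, "comments": 800, "shares": 300},
]

# Sort the feed by score: the bot-inflated post rises to the top,
# which is exactly the payoff "engagement hacking" is chasing.
ranked = sorted(
    posts,
    key=lambda p: engagement_score(p["likes"], p["comments"], p["shares"]),
    reverse=True,
)
for post in ranked:
    print(post["title"])
```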

The question is whether the motive is simply to generate advertising revenue by boosting profiles with such digital wizardry, or whether there is more at play.

Some media organisations are already catching on to the risk posed by AI bots.

For the 2024 Eurovision Song Contest, viewers were asked to vote by paying a small fee with a bank card—a relatively new development for the song contest.

It’s the same concept that underlies Elon Musk’s musings about charging a fee to use the social media platform X, formerly Twitter.

Bot engagement has become such a threat that many platforms are seeking a way to prove the force behind a computerised process is a human one—and in some cases it’s proving quite a challenge.

People check their phones as AMECA, an AI robot, looks on at the All In artificial intelligence conference in Montreal on Sept. 28, 2023. Ryan Remiorz/The Canadian Press

Humans Susceptible to Being Influenced: Scientist

Ian Oppermann, the New South Wales (NSW) government’s chief data scientist, says the situation is a perfect example of a digital arms race in which AI can be both the cause of, and the answer to, the problem.

He says “likes” are a simple way for people to demonstrate interest in a digital space, and most social media platforms assume likes have come from a human.

Unlike robots, humans are “creatures of the herd,” which makes them susceptible to manipulation, Mr. Oppermann told The Epoch Times.

That manipulation could influence the purchase of a product, or it might influence the outcome of an election.

The Samsung vice president of product management unveils new flagship Galaxy phones packed with artificial intelligence features at a media event in the Silicon Valley city of San Jose, Calif., on Jan. 17, 2024. Glenn Chapman/AFP via Getty Images

More “likes” on a page means it’s likely to be seen by more users, and a highly reviewed product or service will attract more customers—meaning the race is on to secure the attention of humans in the virtual world.

The fact that likes can be faked means tech giants may need to start looking at “robot-proofing” them by making users tick a box or select images.

“Even these barriers however will eventually be defeated by AI,” Mr. Oppermann said. “This leads to the need to rethink how people can easily show appreciation, or less satisfyingly, can easily tell if a review or series of ‘likes’ is fake.”

AI could, in theory, be developed to detect and remove fake likes, or to provide disincentives for using AI-boosted algorithms.
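As a rough sketch of how such detection might work: genuine likes tend to trickle in over hours, whereas bot farms often deliver them in dense bursts from freshly created accounts. The thresholds below are illustrative assumptions, not any platform’s actual rules.

```python
# Illustrative sketch of a burst-based fake-like detector. All names and
# thresholds here are assumptions for demonstration, not real platform rules.
from dataclasses import dataclass

@dataclass
class Like:
    timestamp: float        # seconds since the post went live
    account_age_days: int   # age of the liking account

def looks_fake(likes: list[Like], window: float = 60.0,
               burst_size: int = 100, max_age_days: int = 7) -> bool:
    """Flag a post when many likes from very new accounts land in one short window."""
    times = sorted(l.timestamp for l in likes if l.account_age_days <= max_age_days)
    for i in range(len(times) - burst_size + 1):
        if times[i + burst_size - 1] - times[i] <= window:
            return True
    return False

# 150 likes from day-old accounts within about eight seconds: almost certainly bots.
burst = [Like(timestamp=i * 0.05, account_age_days=1) for i in range(150)]
print(looks_fake(burst))  # True
```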

A Decade of Research into Online Manipulation

The use of AI in manipulating human beings may be a fresh discussion, but it’s an old science.

In 2012, Facebook conducted a week-long study to test whether manipulating news feeds could influence human emotions.

A smartphone and a computer screen displaying the logos of the social network Facebook and its parent company Meta in Toulouse, southwestern France, on Jan. 12, 2023. Lionel Bonaventure/AFP via Getty Images

Results were published two years later in the Proceedings of the National Academy of Sciences.

The study randomly selected almost 700,000 users and altered the algorithm to show them either more positive or more negative posts from their friends.

It ultimately concluded that exposure to positive posts was reflected in happier posts by the subjects, while exposure to negativity was reflected in more negative ones.
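A few lines of code can make the mechanics concrete. The study itself scored posts with a word-counting tool; the crude keyword lookup below is a hypothetical stand-in, meant only to illustrate the idea of a tone-filtered feed.

```python
# Toy tone filter illustrating the study's feed manipulation. The word
# lists and scoring are invented stand-ins, not the study's actual method.

POSITIVE = {"happy", "great", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def tone(post: str) -> str:
    """Classify a post as positive, negative, or neutral by keyword lookup."""
    words = {w.strip(".,!?") for w in post.lower().split()}
    if words & POSITIVE and not words & NEGATIVE:
        return "positive"
    if words & NEGATIVE and not words & POSITIVE:
        return "negative"
    return "neutral"

def filtered_feed(posts: list[str], suppress: str) -> list[str]:
    """Return the feed with posts of one emotional tone withheld."""
    return [p for p in posts if tone(p) != suppress]

feed = [
    "I love this wonderful weather",
    "What an awful, terrible day",
    "Lunch was fine",
]
# Suppressing negative posts leaves the user a sunnier feed; the study
# found users' own posts then shifted in the same direction.
print(filtered_feed(feed, suppress="negative"))
```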

Mr. Oppermann said humans were highly susceptible to “subtle nudges” and that images could be especially influential on people, even more so than words.

“Controlling the media and the message has long been used to influence public opinion,” he said.

“Algorithmic manipulation of what people see (or do not see) is a powerful way to influence someone’s world view.

“If it can be highly personalised, then views on any topic or issue can potentially be influenced.”

Spectre of AI Disinformation Around EU Elections

While netizens have been interacting with surreal images on Facebook, real-world implications of that very same technology have caused sleepless nights for experts in Europe.

Voters in the European Union (EU) started electing lawmakers this week, with the threat of online disinformation looming large.

Left to right: President of the European Commission Ursula von der Leyen; Nicolas Schmit, Party of European Socialists; Terry Reintke, European Greens; Sandro Gozi, Renew Europe Now; and Walter Baier, European Left. Lead candidates for the European Commission presidency face off in a live debate in Brussels, Belgium, on May 23, 2024. Screenshot via The Epoch Times, Reuters

The threat sparked a warning from EU foreign policy chief Josep Borrell, who suggested Russia had been using state-sponsored campaigns to flood the EU information space with deceptive content, according to AAP.

It’s something Mr. Borrell considers a genuine threat to the state of democracy, with cheap AI campaigns taking aim at leaders who’ve been critical of Russian President Vladimir Putin.

While it may appear the threat of AI has become pervasive, the solution, according to experts, is not to take things at face value.

“Not using social media is an option, however if that is too dramatic a step, cross checking with other, unrelated (and so hopefully differently biased) sources is always a good idea,” Mr. Oppermann said.

Regulators Taking Steps

Last year, a lawyer in the United States was fined for misleading a court after using the AI chatbot ChatGPT to research a case.

In Hong Kong, a finance employee was defrauded in a deepfake scam in which everyone else on a video conference call was AI-generated.

The framing of possible harms and reversibility are two core elements of the New South Wales AI Assurance Framework.

Two years ago, the framework became mandatory for all projects containing an AI component or utilising AI-driven tools.

In recent days, the Australian government introduced deepfake laws under which distributing fake sexually explicit AI-generated images could result in up to seven years’ jail.

Meanwhile, U.S. regulators have launched investigations into Nvidia, Microsoft, and OpenAI over their roles in developing AI.

A photo shows a frame of a video generated by a new artificial intelligence tool, dubbed “Sora,” unveiled by the company OpenAI, in Paris on Feb. 16, 2024. Stefano Rellandini/AFP via Getty Images

Mr. Oppermann said it was vital to remember that AI boiled down to the use of data.

He believes the NSW AI Assurance Framework, which he has worked on, explores the relevant risks and mitigations.

It all comes back to information and how it is used—namely, the data fed into AI systems and the biases associated with the algorithms themselves.

Last year, Industry and Science Minister Ed Husic released advice from the National Science and Technology Council (NSTC) as well as a discussion paper on AI, saying it was time to start considering further regulation.

The Australian Information Industry Association says it believes organisations should self-regulate AI in the absence of government legislation, while the government should act as an enabler and adopter of solutions.

Crystal-Rose Jones
Author
Crystal-Rose Jones is a reporter based in Australia. She previously worked at News Corp for 16 years as a senior journalist and editor.