Humanity Faces ‘Orwellian’ Future If AI Is Not Controlled: Australian Human Rights Commissioner

Screens displaying the logos of OpenAI and ChatGPT, in Toulouse, southwestern France, on Jan. 23, 2023. Lionel Bonaventure/AFP via Getty Images
Daniel Y. Teng

It will become increasingly difficult to tell fact from fiction as artificial intelligence (AI) becomes commonplace, warns Australia’s Human Rights Commissioner Lorraine Finlay.

Finlay said that the rise of AI platforms like ChatGPT and Google’s Bard would be welcomed by those with “Orwellian tendencies.”

“In George Orwell’s 1984, the Ministry of Truth exercises absolute control of information according to The Party ethos, ‘Who controls the past, controls the future: who controls the present, controls the past,’” she wrote in The Australian newspaper.

“If the Ministry of Truth existed today, a more accurate slogan would be ‘Who controls the AI controls the past, the present, and the future,’” she continued.

“It will now be easier than ever to use generative AI cheaply and efficiently to run disinformation campaigns both domestically and abroad. There are numerous recent examples that highlight the growing threat posed by deep fakes and disinformation created and spread using generative AI tools.”

Finlay’s comments come after a video emerged in March purporting to show Ukrainian President Volodymyr Zelenskyy calling on his troops to lay down their weapons and surrender to Russia. The footage was fake.
Ukrainian President Volodymyr Zelensky speaks during a press conference in the western Ukrainian city of Lviv, on Jan. 11, 2023. Yuriy Dyachyshyn/AFP via Getty Images

What’s the Role of AI in the Future?

The emergence of ChatGPT (developed by OpenAI, whose major investor is Microsoft), Bard (Google), and LLaMA (Meta) in recent months has spurred discussion over the role of AI in society.

AI chatbots are “trained” on content available online, learning how to sequence and compose original sentences, articles, and even poetry in response to prompts from human users. Other AI platforms can similarly “create” music and art.
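For illustration, the sketch below shows roughly what prompt-driven text generation looks like in code. It is a minimal example only, assuming the open-source Hugging Face transformers library and the small, publicly available gpt2 model as stand-ins; it is not the system behind ChatGPT, Bard, or LLaMA.

```python
# Minimal sketch of prompt-driven text generation, for illustration only.
# Assumes the Hugging Face `transformers` library and the small open `gpt2`
# model as stand-ins; commercial chatbots use far larger proprietary models.
from transformers import pipeline

# Load a pretrained text-generation model (weights download on first run).
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt by repeatedly predicting the next word,
# based on patterns learned from large amounts of online text.
prompt = "Artificial intelligence will change society by"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```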

Attendees take pictures and interact with the Engineered Arts Ameca humanoid robot with artificial intelligence as it is demonstrated during the Consumer Electronics Show (CES) in Las Vegas, Nevada, on Jan. 5, 2022. Patrick T. Fallon/AFP via Getty Images
In fact, Microsoft co-founder Bill Gates has said AI technology could eventually match human teachers and tutors.

“The AIs will get to that ability to be as good a tutor as any human ever could,” Gates told the ASU+GSV Summit in San Diego on April 18.

“We have enough sample sets of those things being done well that the training can be done,” he added. “So, I’d say that is a very worthwhile milestone, is to engage in a dialogue where you’re helping to understand what they’re missing. And we’re not that far.”

Yet there are concerns over the propensity of chatbots to present incorrect information as fact.

“Even knowing whether we are interacting with a human or a machine may become challenging,” Finlay added. “This can have real consequences for fundamental human rights. Most immediately, it threatens our freedoms of expression and thought.”

“With many proponents of generative AI alluding to it being the next generation of search engines, there are real concerns around responses being politically biased, peddling false information, or having censorship and disinformation built into them.”

Authoritarian regimes like the Chinese Communist Party are already working to shape AI, with the Cyberspace Administration of China releasing draft measures that call for innovation in this space to align with “core socialist values.”

Humanity Must Come First, Commissioner Says

Finlay says the key to these challenges is to place humanity at the centre of AI engagement.

“We need to develop, deploy and use generative AI technology in responsible and ethical ways. Fundamental rights and freedoms must be protected at all stages of a product’s lifespan, from concept and design through to sale and use,” she said, while warning that many governments and companies were not considering this at all.

Finlay’s concerns echo those of tech entrepreneur and OpenAI co-founder Elon Musk, who revealed he had a falling out with Google co-founder Larry Page over their views on AI and its relationship with human beings.

“The reason OpenAI exists at all is that Larry Page and I used to be close friends, and I would stay at his house in Palo Alto, and I would talk to him late in the night about AI safety,” Musk told Tucker Carlson.
Elon Musk speaks at the 2020 Satellite Conference and Exhibition in Washington, D.C., on March 9, 2020. Win McNamee/Getty Images

“At least my perception was that Larry was not taking AI safety seriously enough,” he alleged. “He really seemed to want digital superintelligence, basically digital god, as soon as possible.”

“Then he called me a speciesist.”

Musk also said he was concerned that the AI field is now concentrated in the hands of three tech giants, Google, Meta (formerly Facebook), and Microsoft.

The Tesla founder said that while he was “late to the game,” he would try to create a “maximum truth-seeking AI trained to care about humanity and be ideologically neutral and seek to understand the universe’s nature.”

“We want pro-human. Make the future good for the humans. Because we’re humans.”

Caden Pearson contributed to this report.
Daniel Y. Teng
Writer
Daniel Y. Teng is based in Brisbane, Australia. He focuses on national affairs including federal politics, COVID-19 response, and Australia-China relations. Got a tip? Contact him at [email protected].