Study Offers Glimpse Into How Much We Trust AI

A robot from the Artificial Intelligence and Intelligent Systems (AIIS) laboratory of Italy's National Interuniversity Consortium for Computer Science (CINI) is displayed at the 7th edition of the Maker Faire 2019, in Rome on Oct. 18, 2019. Andreas Solaro/AFP via Getty Images

Fear and worry are the dominant emotions of Australians towards artificial intelligence (AI) despite the technology’s increasing popularity and capability, a study by the University of Queensland has revealed.

Published on Feb. 22, the study is the first deep-dive global examination of the public’s attitudes towards AI use. It surveyed more than 17,000 people from 17 countries about their attitudes towards the technology.

It reveals that three out of five respondents (61 percent) are either ambivalent or unwilling to trust AI.

Globally, about three in four (73 percent) acknowledge that AI carries significant risks, with cyber security topping all concerns. This is followed by harmful use of AI, job loss, especially in India and South Africa, loss of privacy, system failure (particularly in Japan), deskilling, and undermining human rights.

Younger generations (42 percent) and the university-educated (42 percent) are more accepting than older generations (25 percent) and those without a degree (27 percent).

Australia is among the nations listed as the most worried about AI, along with Canada, the UK and Japan.

Less than half of Australians (40 percent) have faith in the use of AI at work, with only a quarter believing AI will create more jobs than it will eliminate.

Most Australians want AI to be regulated, but only slightly more than one-third say there are enough safeguards, laws and regulations in place.

By contrast, people in India, China, South Africa and Brazil are the most optimistic about the use of AI.

Lead author Professor Nicole Gillespie, KPMG Chair of Organisational Trust at the UQ Business School, said Australians are most concerned with the involvement of AI in human resources-related tasks such as monitoring, evaluating and recruiting employees.

“Australians are more open to AI being used to automate tasks and help employees complete their work,” Gillespie said.

“In fact, they actually prefer AI involvement in managerial decision-making over sole human decision-making; the caveat is they want humans to retain control.”

Co-author James Mabbott said a key challenge is that a third of people have low confidence in government, technology and commercial organisations to develop, use and govern AI in society’s best interest.

“Organisations can build trust in their use of AI by putting in place mechanisms that demonstrate responsible use such as regularly monitoring accuracy and reliability, implementing AI codes of conduct, independent AI ethics reviews and certifications and adhering to emerging international standards,” Mabbott said.

Despite the perceived risks, the study also shows that an overwhelming 85 percent of respondents globally believe AI will bring about a range of benefits, including greater efficiency, innovation and better use of resources.

Wider Adoption Of AI Could Cause ‘Significant Harm’

The adoption of AI technology in Australia is increasing, with supermarket giant Woolworths set to expand the use of AI to capture customers scanning items at self-checkouts in 250 stores.

Labor MP Julian Hill said ChatGPT had the potential to revolutionise the world but warned that if AI were to surpass human intelligence, it could cause significant damage.

“It doesn’t take long, if you start thinking, to realise the disruptive and catastrophic risks from untamed AGI are real, plausible, and easy to imagine,” he said in a speech in Parliament on Feb. 6.

“AGI has the potential to revolutionise our world in ways we can’t yet imagine, but if AGI surpasses human intelligence, it could cause significant harm to humanity if its goals and motivations are not aligned with our own,” he said.

Sam Altman, the CEO of ChatGPT creator OpenAI, said in a series of tweets on Feb. 18 that it was “critical” for AI to be regulated in the future until it can be better understood. He stated that he believes that society needs time to adapt to “something so big” as AI.
Victoria Kelly Clark and Bryan Jung contributed to this article.
Nina Nguyen
Nina Nguyen is a reporter based in Sydney. She covers Australian news with a focus on social, cultural, and identity issues. She is fluent in Vietnamese. Contact her at [email protected].