Fewer Than Half of Organisations Confident About AI Data Quality: Google Report

Many organisations are discovering new vulnerabilities and weaknesses, especially when it comes to the quality of their data.
People walk past an AI sign at the All In artificial intelligence conference in Montreal on Sept. 28, 2023. (The Canadian Press/Ryan Remiorz)

Despite rising interest in artificial intelligence (AI) in Australia, organisations are increasingly concerned about the technology’s weaknesses, particularly the accuracy of the data it relies on.

A new report on AI trends released by Google on July 23 surveyed hundreds of business and IT leaders about their goals and strategies for harnessing generative AI.

The findings revealed that fewer than half of respondents (44 percent) are fully confident in their organisation’s data quality, while a further 11 percent expressed even less confidence.

Moreover, slightly more than half of respondents (54 percent) rate their organisations as only somewhat mature in data governance, while just 27 percent consider them extremely or very mature in this area.

Meanwhile, over two-thirds (69 percent) of employees reported bypassing their organisation’s cybersecurity guidance in the past 12 months.

This comes despite search interest in AI reaching a record high in May, increasing by 20 percent during the April-June period compared to the year’s first quarter.

“This explosion of new technology has its drawbacks, too,” the report noted.

Many organisations are discovering new vulnerabilities and weaknesses, particularly when it comes to the quality of their data.

The report emphasised that it is “not enough” just to apply large language models (LLMs) to data; these models need to be “grounded in good quality enterprise data or otherwise risk hallucinations.”

LLMs, which power AI chatbots, are machine learning models that can comprehend and generate human language by processing vast amounts of text data.

AI hallucinations occur when LLMs create incorrect or misleading information but present it as fact.
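For readers curious what “grounding” looks like in practice, the sketch below shows one common pattern, often called retrieval-augmented generation: trusted enterprise snippets are retrieved and the model is instructed to answer only from them. This is an illustrative sketch, not anything from Google’s report; the documents, the word-overlap retrieval heuristic, and the prompt wording are all hypothetical stand-ins.

```python
# Minimal sketch of "grounding": retrieve trusted enterprise snippets
# and fold them into the prompt, so the model is asked to answer from
# vetted data rather than invent ("hallucinate") facts.
# All documents, names, and the scoring heuristic here are hypothetical.

DOCUMENTS = {
    "refund-policy": "Refunds are issued within 14 days of purchase.",
    "support-hours": "Support is available weekdays, 9am to 5pm AEST.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        DOCUMENTS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    """Assemble a prompt that confines the model to retrieved context."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    # The resulting string would be sent to whichever LLM the
    # organisation uses; no real model is called here.
    print(build_grounded_prompt("When are refunds issued?"))
```

In a production system, the keyword-overlap retriever would typically be replaced with a vector search over curated enterprise data, which is where the report’s point about data quality bites: retrieval can only be as reliable as the data it draws from.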

‘Full of Hallucinations’

Similar concerns have been voiced by American AI expert Susan Aaronson, who said the datasets produced by AI are “not generally accurate.”

Speaking at an event hosted by the United States Studies Centre on July 9, Ms. Aaronson, a research professor of international affairs at George Washington University, expressed scepticism about AI’s benefits as it is “so full of hallucinations.”

“It is a risk-based system,” she said. “There is no federal law [in the U.S.] saying that AI can be misused. People will misuse it.”

“If I were creating a model, I would strive to ensure that it was trustworthy … but the point is right now, these are not trustworthy.”

She pointed to the childcare benefits scandal in the Netherlands as an example. Between 2005 and 2019, the Dutch tax authority used a self-learning algorithm to flag suspected benefits fraud for investigation by authorities.

However, the algorithm developed a bias and wrongly accused an estimated 26,000 parents of making fraudulent benefit claims, the majority of them from lower-income, immigrant, or ethnic minority backgrounds.

The accusations drove tens of thousands of families into financial hardship, and some people died by suicide under the pressure of the resulting tax bills.

The scandal compelled the government of the Netherlands to resign in 2021.

An Australian Senate inquiry recently echoed these concerns, with media outlets, voice actors, and lawyers calling for guidelines and limitations on AI use.

“One of the biggest concerns that we have heard from our customers and the creative community has been around the misappropriation of image, likeness, voice, and artistic style,” Adobe Asia Pacific public sector strategy director John Mackenney said.

The inquiry is expected to present its findings in September, while Australia’s national AI expert advisory group is evaluating the introduction of mandatory regulations for high-risk AI deployments.

Nina Nguyen is a reporter based in Sydney. She covers Australian news with a focus on social, cultural, and identity issues. She is fluent in Vietnamese. Contact her at [email protected].