News Corp Should Be Paid for Content Used to Train AI, Says CEO

A visitor watches an AI (Artificial Intelligence) sign on an animated screen at the Mobile World Congress (MWC), the telecom industry's biggest annual gathering, in Barcelona. Josep Lago/AFP via Getty Images
Daniel Y. Teng

Media giant News Corporation should be compensated for content used by artificial intelligence engines, says CEO Robert Thomson.

In an investor call on May 12, Thomson said AI would have a profound effect on the media business.

“Generative AI may pose a challenge to our intellectual property and to the future of journalism,” he said in comments reported by The Australian, a News Corp-owned newspaper.

“As those who have experimented with ChatGPT will be aware, the answers are only as insightful and factual as the source material and are more retrospective than contemporary.”

Thomson said News Corp’s content would be aggregated, synthesised, and monetised by other parties.

The News Corp. building on 6th Avenue, home to Fox News, the New York Post, and the Wall Street Journal in New York on March 20, 2019. Kevin Hagen/Getty Images

“We expect our fair share of that monetisation,” Thomson said. “Generative AI cannot be degenerative AI.”

The difficulty with a payment model is that developers train AI engines on the wider indexed internet, meaning AI systems ingest content to learn how to sequence words and sentences.
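At a deliberately toy scale, that word-sequencing idea can be sketched in a few lines of Python: a bigram model counts which word tends to follow which in a scrap of text, then samples new sequences from those counts. Real generative engines use neural networks trained on vastly larger corpora; the corpus snippet and the `generate` helper below are illustrative only.

```python
# Toy illustration (not any vendor's actual method): a bigram model
# "trained" on a snippet of text learns which word tends to follow which.
from collections import Counter, defaultdict
import random

corpus = (
    "news corp says ai engines should pay for news content "
    "ai engines learn to sequence words from news content"
)

# Count, for each word, how often each next word follows it.
follows = defaultdict(Counter)
tokens = corpus.split()
for current, nxt in zip(tokens, tokens[1:]):
    follows[current][nxt] += 1

def generate(start: str, length: int = 8, seed: int = 0) -> str:
    """Sample a short word sequence from the learned bigram counts."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # Weight each candidate by how often it was observed as a follower.
        candidates, weights = zip(*options.items())
        word = rng.choices(candidates, weights=weights, k=1)[0]
        out.append(word)
    return " ".join(out)

print(generate("ai"))
```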

However, AI engines generally use the open web and steer clear of content behind paywalls, which are increasingly common among larger news publishers.
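How a crawler decides what to fetch varies by operator, but one widely documented mechanism is the robots.txt convention, which publishers can use to disallow automated collection. Below is a minimal sketch using Python's standard urllib.robotparser module; the site URL and the “ExampleAICrawler” user-agent are hypothetical, and whether any given AI crawler honours such rules is up to its operator.

```python
# Minimal sketch: consulting a publisher's robots.txt before crawling a page.
# The site URL and the "ExampleAICrawler" user-agent are hypothetical.
import urllib.robotparser

parser = urllib.robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetch and parse the robots.txt file

page = "https://example.com/news/some-article"
for agent in ("ExampleAICrawler", "*"):
    verdict = "allowed" if parser.can_fetch(agent, page) else "disallowed"
    print(f"{agent}: {verdict}")
```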

Some AI engines also cite the sources they quote directly, a capability that newer offerings such as Google’s Bard and GPT-4 are expected to include.

AI’s Role in the Future?

Thomson’s comments come as ChatGPT makes headlines worldwide as one of the first broadly accessible AI engines the public can engage with, although AI has been widely used in a range of other technologies for years.

The rise of ChatGPT has prompted questions about the technology’s wider role in the future.

The World Economic Forum (WEF) has predicted a dramatic shift in the global economy, with its “Future of Jobs Report 2023” forecasting that AI will help create 69 million jobs and eliminate 83 million, a net loss of 14 million.

The WEF claimed clerical and administrative roles were likely to suffer most, with over 26 million fewer jobs by 2027 as automation takes over, while roles in AI, machine learning, business analysis, and software engineering would increase.

Microsoft co-founder Bill Gates said AI technology could eventually match teachers or tutors.

“The AIs will get to that ability to be as good a tutor as any human ever could,” Gates told the ASU+GSV Summit in San Diego on April 18.

“We have enough sample sets of those things being done well that the training can be done,” he added. “So, I’d say that is a very worthwhile milestone, is to engage in a dialogue where you’re helping to understand what they’re missing. And we’re not that far.”

Tesla CEO Elon Musk, a fellow tech entrepreneur, has been more concerned about the direction AI is heading.

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production in the sense that it has the potential—however small one may regard that probability, but it is non-trivial—it has the potential of civilizational destruction,” Musk told Fox News’ Tucker Carlson in a recent interview.

He offered a current example of how AI could be dangerous.

“If you have a super-intelligent AI that is capable of writing incredibly well and in a way that is incredibly influential [and] convincing,” he said. “And it’s constantly figuring out what is more convincing over time—enter social media like Twitter, Facebook—and it potentially manipulates public opinion in a way that is very bad. How would we even know?”

Katabella Roberts and Samantha Flom contributed to this article.
Daniel Y. Teng
Writer
Daniel Y. Teng is based in Brisbane, Australia. He focuses on national affairs including federal politics, COVID-19 response, and Australia-China relations. Got a tip? Contact him at [email protected].