What AI Brings to the Table and Its Risks

Graham Young
Commentary

When thinking about artificial intelligence, or AI, I recall a quote from U.S. business guru Mark Cuban: “It takes 20 years to become an overnight success.”

In the case of AI, he’s off by a factor of three.

It’s been 74 years since Alan Turing wrote “Computing Machinery and Intelligence” and 68 years since the term “artificial intelligence” was coined, but now everyone is talking about AI, including journalists.

Why?

Because of fear. In the abstract, scientific innovation has always worried man. Practically, who knows who is going to lose their job because of it?

Mary Shelley’s novel “Frankenstein: or, The Modern Prometheus” has seeped into popular culture as a cautionary tale against techno-optimism, and it gestures back to much more ancient roots.

Prometheus was punished by the Greek gods for giving man the earliest of advanced technologies: fire.

Mankind was punished by the same gods, who sent Pandora to Earth, where she accidentally released all the woes of the world from her box.

Pandora has echoes in HAL, the shipboard computer that takes murderous control in “2001: A Space Odyssey.” Or in Skynet, the genocidal computer network in the “Terminator” movies.

A significant human faction has been technophobic from the get-go; they worry that artificial intelligence might also gain artificial consciousness, or that some faction could subvert it to create a world-destroying threat.

Putting those fears to one side, why would AI threaten my job?

AI improves productivity by taking over jobs that humans currently do. Most technological improvements achieve the same thing, although one may not see it in advance.

The power loom destroyed the jobs of thousands of weavers, and they could see it coming. In a similar way, the desktop computer, and now the laptop, destroyed thousands of jobs: first secretarial, then clerical, and even professional ones.

Was this a bad thing?

We might not have the same jobs we would have had 100 years ago, but in many cases, we have different ones.

What Is AI?

Perhaps the best way to think of AI is as automated data mining: instead of a human looking for patterns in a dataset, a computer program does it.
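
As a toy sketch of that idea (the dataset, the library choice, and the two-cluster setup are all invented for illustration, not drawn from any particular product), a few lines of Python with the widely used scikit-learn library can find groupings in data that nobody pointed out in advance:

```python
# Toy sketch of automated pattern-finding: the program, not a
# human, works out that this invented dataset contains two groups.
from sklearn.cluster import KMeans

# Hypothetical data: (hours online per day, articles read) for ten readers.
data = [
    [1.0, 2], [1.5, 3], [0.8, 2], [1.2, 4], [0.9, 3],        # light readers
    [6.0, 20], [5.5, 18], [6.3, 25], [5.8, 22], [6.1, 19],   # heavy readers
]

# Ask for two clusters; the algorithm decides which points belong together.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(model.labels_)  # e.g. [0 0 0 0 0 1 1 1 1 1], the two patterns it found
```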

The recent excitement has mostly been elicited by large language models (LLMs), such as OpenAI’s ChatGPT, a version of which has been integrated into Microsoft’s Bing, so you can use it to search the internet.

LLMs add a conversational, ersatz-human front end to these systems, so you can query them using regular words and sentences and they will respond. You can even argue with them.
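
To a programmer, that front end really is just ordinary sentences going over the wire. A minimal sketch of a call to OpenAI’s chat completions endpoint (the model name and question are placeholders, and you would need your own API key) might look like this:

```python
import os
import requests

# Hedged sketch: query an LLM in plain English via OpenAI's
# chat completions API. Model name and prompt are placeholders.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",  # placeholder; any chat model would do
        "messages": [
            {"role": "user",
             "content": "Is the Turing test still a useful benchmark?"},
        ],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```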

There’s a concept called the Turing test.

Alan Turing proposed that a machine could be judged truly intelligent if you could have a conversation with it and not be able to tell that it was a computer.

I’ve been talking to ChatGPT about aspects of this article, and it’s like talking to a very stilted C-3PO, the protocol droid from the “Star Wars” trilogy.

So it kind of passes the Turing test, but in doing so, it proves the test is not adequate.

The answers I get are OK and might earn a 4, or even a 5, out of 7 in a first- or second-year university course, but by third year, the lecturer would be marking much more harshly.

The question it raises for me isn’t whether AI is intelligent. If you can’t tell an average student from AI, then perhaps the average student isn’t intelligent, even if he or she goes on to get the credentials to run a large company or bureaucracy.

News Outlets Jumping Aboard the AI Train

On the horror end, news organizations could use AI to write news stories and sack hundreds of journalists.

Worse, AI is already being used to write content for websites, so there may also be a shortage of alternative jobs requiring writing skills.

There is certainly something to this fear. Since 1984, per capita employment in information, media, and telecommunications has declined by 43 percent. That understates the decline.

Employment actually rose by 19 percent per capita between 1984 and 2007, the peak year.

Since that peak, the past 17 years have seen a decline of 71 percent. In absolute numbers it is not so bad, with employment today the same as it was in 1995, but over the same period the total workforce has grown by 78 percent.

Perhaps things will get worse. This is not an unwarranted fear, with News Corporation in Australia having 600 positions on the chopping block.

With the dismal economics of the way we currently produce news, economies will have to be found somewhere, or our existing institutions will go under.

Then there is the somewhat optimistic end.

Recently, Vox and The Atlantic have signed deals with OpenAI, as has News Corp. These deals are promising because OpenAI will be paying them for access to their news sites and archives.

I’ve recently written about the absurdity of news organizations expecting search engines and social media platforms such as Google, Facebook, and Twitter to pay them for displaying links to news articles, when media organizations derive much of their traffic from those sites, which effectively advertise for them.

That is the whole point of search engines, and to a lesser extent social media, and it is the reason media organizations take such care to curate their web pages to be friendly to discovery and display on those sites.

Search engines are really a primitive form of AI. They scrape content off the net and curate it for users, often personalizing the results based on the user’s previous searches, depending on the engine.
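
Stripped of crawling and personalization, the core of that curation is an inverted index: a map from each word to the pages containing it, with results ranked by how many query words match. Here is a minimal Python sketch over an invented three-page corpus:

```python
from collections import defaultdict

# Invented mini-corpus standing in for scraped web pages.
pages = {
    "page1": "media organizations curate news for readers",
    "page2": "search engines index and rank web pages",
    "page3": "news readers search for media they trust",
}

# Build the inverted index: word -> set of pages containing it.
index = defaultdict(set)
for page_id, text in pages.items():
    for word in text.split():
        index[word].add(page_id)

def search(query):
    """Rank pages by how many query words they contain."""
    scores = defaultdict(int)
    for word in query.split():
        for page_id in index.get(word, set()):
            scores[page_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("news media search"))  # ['page3', 'page1', 'page2']
```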

LLMs do the same thing, but with more sophistication, and they present the content as their own, which creates a legitimate copyright issue. So some payment is due.

Further, most media sites these days limit free access to a fraction of their content. The LLMs add much more value to their own offerings if they can access that content, and if they want to access it, a fee is also warranted.

In that case, the media have effectively found a way to syndicate and sell some of their content, improving their economics on the income side, to the benefit of their employees and shareholders.

It also helps to solve a problem for the LLMs. Chatbots have been caught out making mistakes, getting facts wrong, and even hallucinating: making facts up entirely.

There is also a risk that, because internet content leans left, AI systems will lean left in their answers as well. By accessing credible news sources across a range of positions, they can outsource neutrality and provide more balanced output to their users.

The Limits of AI

I’ve been using AI for 20 years now, and it created my original niche in journalism.

When I first started writing about political campaigns, I didn’t want to make a fool of myself, as most journalists did, by retailing as fact the gossip I picked up around the halls of power, or at Sunday afternoon barbecues with my mates.

So, along with Mike Kaiser, who had been Queensland’s Labor state secretary and briefly a member of parliament, and who was also writing commentary, I devised a way to do focus group research using the internet.

We collected our data online through surveys that involved thousands of responses to open questions.

Questions such as, “What is the major issue for you in this federal election?” This was like a massive scientific version of the vox pop, the form of journalism in which you send a journalist out to a public space to ask individuals what they think.

Analyzing that much data wasn’t easy until I came across Leximancer in 2004. Developed at the University of Queensland, it looked for the occurrence of words and how closely they were associated with other words. It could uncover associations that were hard to spot otherwise.

Leximancer could sift through the data and produce word maps from my survey questions showing how issues affected votes.
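
I can’t speak for Leximancer’s internals, but the underlying idea, counting how often words occur together, is simple to sketch. Here is a minimal Python version over invented survey answers; real survey data and a real tool would add weighting, stemming, and the visual maps:

```python
from collections import Counter
from itertools import combinations

# Invented open-ended survey answers to a question like
# "What is the major issue for you in this federal election?"
answers = [
    "the cost of living and housing",
    "housing affordability and interest rates",
    "the rising cost of living",
    "interest rates and the cost of housing",
]

STOPWORDS = {"the", "and", "of", "a", "for", "in"}

# Count how often each pair of content words appears in the same answer.
pairs = Counter()
for answer in answers:
    words = sorted({w for w in answer.split() if w not in STOPWORDS})
    pairs.update(combinations(words, 2))

# The strongest associations hint at the themes a word map would show.
for (a, b), n in pairs.most_common(3):
    print(f"{a} <-> {b}: {n}")
```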

Suddenly, on a very low budget, I had the best electoral intelligence in the country outside the major political parties, who were spending millions to get it, and I could write accurate and insightful pieces for the major newspapers.

ChatGPT and similar tools will provide many such opportunities.

For example, perhaps AI will write some of the wire-service, “just the facts” type of copy.

But someone will still need to check the copy, because you can’t absolutely trust AI to be accurate, and it wouldn’t look good in court to say you had your publication in self-driving mode when the defamation happened.

And the AI has to get its copy from somewhere else, copy that will be written for other organizations by people with journalistic skills. It’s possible that AI will move employment from news organizations to the dark side of public relations.

It might actually be that the biggest employment risk isn’t to people who use words, but to people who do math and cut code. AI has been used for writing computer code, at which it apparently has variable ability, and for checking code, at which it is much better.

Author Luke Burgis actually thinks AI might lead to a bull market in the humanities because it lacks the insight that a human brings to a task, such as making connections that have never been made before.

If he’s correct, it will require a different method of teaching the humanities. At the moment, humanities departments tend to be dominated by arguments from authority, which is the reason critical theory took over so easily.

Perhaps what AI will lead to is a reformation in human thought, whereby original thinking, by necessity, is preferred over footnoted plagiarism, because the machines are so good at that.

After all, at the bottom of Pandora’s box, once all the ills were let out, remained Hope.

Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.
Graham Young
Author
Graham Young is the executive director of the Australian Institute for Progress. He is the editor and founder of OnlineOpinion.com.au and has conducted qualitative polling on Australian politics since 2001. Mr. Young has contributed to The Australian newspaper, The Australian Financial Review, and is a regular on ABC Radio Brisbane.