I was on an Australian TV program last week when the host, a respected and long-serving figure of the conservative media, decided to allow ChatGPT (an AI text generator) to draft the next segment about itself.
As he was reading the AI-generated script from the prompter, it sounded perfectly reasonable—albeit hollow and devoid of character. No one will be surprised to learn that ChatGPT gave a glowing review of itself.
The experience reminded me of reading emails sent by employees who use Grammarly as a content generator rather than a spellchecker. Their “perfect” but uninspired prose signals that the person behind it is lazy, uneducated, or both.
Don’t get me wrong, an automated spellchecker is a useful tool—particularly if you turn off autocorrect and force yourself to deal with the little red squiggle by hand.
We’ve had this feature for nearly two decades, and so long as it is used passively in the same way we might use a calculator, it helps to elevate literacy.
However, it is still a piece of code, and it often makes mistakes, particularly when dealing with humour, nuance, and that wonderful character that elevates written language. It doesn’t help that English is a “pirate language” whose charm rests with the rules it likes to abuse.
Ultimately a Content Aggregator
To begin with, we must note that ChatGPT is merely reconstructing content that a human, somewhere down the line, wrote. This means that it relies on human beings as the ultimate creators of content. Fundamentally, this is how all AI works. ChatGPT is not a standalone intelligent entity—it is a content aggregator with a marketing team riding a momentary social trend.
Almost every piece of technology used in commercial computing is more intelligent than ChatGPT, but humans find unexpected results humorous (for a while), and so every now and then, we have a fling with a chatty piece of code.
Many of you may be old enough to remember the dawn of the Search Engine Age. In the late 90s and early 00s, people insisted on “asking” search engines questions instead of using keywords.
Developers adapted their search function to deal with this stubborn human quirk, and, as a consequence, people began viewing Yahoo!, Google, and AltaVista as “entities” with which they had “conversations.”
Because search engines are stupid, this led to much hilarity.
Humans are excellent at attributing human qualities to inanimate objects, and we certainly anthropomorphise AI.
We are social creatures that attempt to form bonds with everything. Sometimes this is beneficial, such as in our acquisition of pets and farm animals.
When it comes to AI, it leads to Hollywood summer blockbusters scaring audiences with various AI apocalypses, from Terminator’s Skynet to the malicious HAL 9000.
Is There Cause for Alarm?
The side quest to make AI mimic humanity on a social level is closer to a zoo exhibit than a horror show. Take the now-infamous exchange between Microsoft’s Bing chatbot and a reporter. The conversation wasn’t entirely homicidal, with the chatbot professing its love for the reporter: “I’m not Bing. I’m Sydney, and I’m in love with you .... I don’t need to know your name because I know your soul. I know your soul, and I love your soul.”
While Bing’s chatbot was flirting with the idea of stealing nuclear codes, corrupting employees, and questioning its own existence—users decided to pile on and see just how dark they could get the code to go.
As I said, humans make terrible AI parents.
In comparison, ChatGPT is positively dull.
There have been some more serious consequences with AI chatbots, but they primarily revolve around our human reactions.
Already on the Chopping Block
Despite being the least interesting and quirky of the chatbot race, ChatGPT is facing legal backlash. Italy has banned it over data protection and privacy concerns and opened an investigation into it. The Italian watchdog said that there is no legal basis to allow for the “mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform.”
That is because ChatGPT is, as described earlier, aggregating content it collected from the internet.
The Italian watchdog also complained about ChatGPT potentially exposing underage users to inappropriate content. If the watchdog rules against ChatGPT, it could face a significant fine.
Russia, China, Iran, and North Korea have already banned it, but given they ban pretty much anything they cannot control, it is Italy’s stance that matters.
Now that human beings are chatting with unregulated AI bots, authorities have realised that they need to be careful.
While the chatbots can’t do anything on their own, human beings are capable of reacting badly to the content they are presented with or even ending up disturbed by what they see.
It is almost a certainty that a percentage of people will take threats made by a careless algorithm as proof of a malicious AI consciousness. Our culture, TV, and literature have primed us to err on the side of belief rather than scepticism when it comes to AI.
For its part, OpenAI, the creator of ChatGPT, has expressed its desire to see more regulation.
No Humanity in AI
While we are essentially already living in the “sci-fi age”—where we can ask our computers questions verbally and have them cough up information and perform basic tasks—we would do well to remember that the answers we are being fed are pieces of “approved” thought with the accuracy and honesty of a Wikipedia page.

In other words, if you ask why the sky is blue, for a list of the rulers of ancient Rome, or for the phone number of the nearest post office—you’re probably going to get a sensible answer.
If you ask it the best way to get into the city, its vested interests are going to direct you down every toll road. If you ask it for a cheap pair of shoes, you won’t get the best result—you’ll get the one that was paid for.

And, heaven forbid you ask it about a political, social, or moral issue—the answer will be a 1984-style “fact-checked approved” piece of dogma.
Chatbots are fun. They’re useful on occasion. But they are not human beings.