AI Chatbot Could Become Real Threat If Controlled by Oppressive Power: Cybersecurity Expert

Screens displaying the logos of OpenAI and ChatGPT, in Toulouse, southwestern France, on Jan. 23, 2023. Lionel Bonaventure/AFP via Getty Images
Tiffany Meier

AI chatbots, such as ChatGPT, could become a real threat if controlled by an oppressive power such as China or Russia, according to Rex Lee, a cybersecurity adviser at My Smart Privacy.

He pointed to recent remarks by British-born computer scientist Geoffrey Hinton, often called the "Godfather of AI," who recently left his position as vice president and engineering fellow at Google.

In an interview with The New York Times, Hinton sounded the alarm about the ability of artificial intelligence (AI) to create false images, photos, and text to the point where the average person will “not be able to know what is true anymore.”

Lee echoed the concern, saying, “A legitimate concern is the ability for AI ChatGPT, or AI in general, to be used to spread misinformation and disinformation over the internet.

“But now, imagine a government in charge of this technology or oppressive governments like China or Russia with this technology. Again, it’s being trained by humans. Right now, we have humans who have a profit motive that are training this technology with Google and Microsoft. But now, mix in a government, and then it becomes much more of a threat,” Lee told “China in Focus” on NTD, the sister media outlet of The Epoch Times.

He raised the concern that, with the facilitation of AI, the Chinese Communist Party (CCP) could exacerbate its human rights abuses.

“If you look at this in the hands of a government, like China and the CCP, and then imagine them programming the technology to oppress or suppress human rights, and also to censor stories and identify dissenters on the internet, and so forth, so that they can find those people and arrest them, then it becomes a huge threat,” he said.

According to Lee, AI technology could also enable the communist regime to ramp up its disinformation campaign on social media in the United States at an unprecedented speed.

“Imagine now you have over 100 million TikTok users in the United States that are already being influenced by China and the CCP through the platform. But now, think of it this way, they’re being influenced at the speed of a jet—you add AI to that, then they can be influenced at the speed of light. Now, you can touch millions of people, literally billions of people, within seconds with the misinformation that can be pushed out,” he said.

“And that’s where it becomes very frightening ... how it can be used politically and/or be used by bad actors, including drug cartels, and criminal actors that also can then have access to the technology as well,” he added.

Elimination of Jobs

Lee pointed out that Hinton also expressed concern about the concentration of AI in the hands of Big Tech.

“One of his concerns was that Microsoft had launched OpenAI’s ChatGPT ahead of Google’s Bard, which is their chatbot, and he felt that Google was rushing to market to compete against Microsoft,” Lee said.

“Another big concern is the elimination of jobs ... this technology can and will eliminate a lot of jobs that are out there, that’s becoming a bigger concern,” he said, adding that AI can eliminate jobs “that an automated computer chatbot can do, mainly in the area of customer service, but also in computer programming.”

Mitigate Threats

Lee defined ChatGPT as “a generative pre-trained transformer,” which he said is “basically the transformer, and it’s programmed by humans and trained.”

Thus, he deemed the human factor the biggest concern.

“Basically, AI is like a newborn baby; it can be programmed for good, just like a child. If the parents raise that child with a lot of love and care and respect, the child will grow up to be loving, caring, and respectful. But if it’s raised like a feral animal, and raised in the wild, like just letting AI learn by itself off of the internet with no controls or parameters, then you don’t know what you’re gonna get with it,” he said.

To mitigate such a threat, Lee suggested that regulators who understand the technology at a granular level work with these companies to see how they are programming it and what algorithms are used to train it.

“And they have to make sure that they’re training it with the right parameters to where it doesn’t become a danger not only to them but to their customers.”

Hannah Ng is a reporter covering U.S. and China news. She holds a master's degree in international and development economics from the University of Applied Sciences Berlin.