IN-DEPTH: As Chinese Tech Unveils New CCP-Approved Chatbots, Experts Warn of AI Threat From Beijing

China’s ChatGPT counterparts include Baidu’s new ‘Ernie.’ Beijing aims to capitalize on the military potential of large language models, raising concerns.
Venus Upadhayaya

As Chinese tech companies unveil their answers to ChatGPT, experts warn that China’s artificial intelligence landscape poses a host of threats.

On Aug. 31, tech giant Baidu introduced Ernie, the Chinese counterpart to ChatGPT, while Alibaba announced its AI model, Tongyi Qianwen, on Sept. 13.

Although Ernie was initially unveiled in March, the chatbot has now been approved and is available for download.

“In addition to ERNIE Bot, Baidu is set to launch a suite of new AI-native apps that allow users to fully experience the four core abilities of generative AI: understanding, generation, reasoning, and memory,” Baidu said in a statement announcing Ernie’s release.

As strategic competition with the United States heats up, Beijing has increased its support of Chinese companies working with AI. Alibaba’s cloud intelligence division said in a message on its WeChat account that several organizations, including OPPO, Taobao, DingTalk, and Zhejiang University, have reached cooperation agreements with Alibaba to train their own large language models (LLMs) on Tongyi Qianwen.

China’s use of cutting-edge AI tools could heighten national security risks for its adversaries, experts say.

“Employing large language models such as ChatGPT, Alibaba’s Tongyi Qianwen or similar AI systems can potentially be considered a security threat as misinformation, automated attacks, ethical concerns, and data privacy,” said AI researcher and author Sahar Tahvili. Ms. Tahvili is the author (with Leo Hatvani) of “Artificial Intelligence Methods for Optimization of the Software Testing Process: With Practical Examples and Exercises.”

Generative AI describes algorithms that can generate new content, including audio, code, images, text, simulations, and videos. LLMs are the text-generating component of generative AI.

LLMs involve a deep learning algorithm that can perform a variety of language processing tasks, such as recognizing, translating, predicting, or generating text or other content.
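
To make the concept concrete, the brief sketch below uses the open-source Hugging Face “transformers” library and the small GPT-2 model, both chosen purely as illustrative stand-ins rather than any of the Chinese systems discussed in this article, to show the next-word-prediction task that underlies such chatbots.

```python
# A minimal sketch of the text-generation task an LLM performs, using the
# open-source Hugging Face "transformers" library. The small "gpt2" model is
# an illustrative stand-in only, not one of the systems discussed here.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model extends a prompt by repeatedly predicting the next token,
# the same core capability behind chatbots such as ChatGPT or Ernie.
output = generator("Artificial intelligence will", max_length=30, num_return_sequences=1)
print(output[0]["generated_text"])
```

Commercial chatbots wrap this basic capability in much larger models, conversational fine-tuning, and safety filters, but the underlying mechanism is the same.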

The Chinese are 15 years ahead of the rest of the world in using complex AI LLMs, estimates Ashwin Kumaraswamy, a deep-tech investor based in the U.K. Mr. Kumaraswamy sits on the board of a number of tech companies.

Although the majority of Western social media platforms are banned in China, Mr. Kumaraswamy noted that Tencent’s WeChat platform (known domestically as Weixin), which is ubiquitous in the country, offers users a myriad of features, including shopping, messaging, conducting business, microblogging, and chatbots.

“Digitally, they are really strong and integrated,” the venture capitalist told The Epoch Times.

Baidu CTO Wang Haifeng speaks at the unveiling of Baidu's Ernie chatbot, at an event in Beijing on March 16, 2023. (Michael Zhang/AFP via Getty Images)

Core Socialist Values: ‘Let’s Talk About Something Else’

Experts told The Epoch Times that the AI-related threat from China stems from the ideological divide between Beijing and the West.

“If one wants to evaluate how China is approaching the LLM boom, one must analyze the regulations put forward by the cyberspace administration,” Mark Bryan Manantan, Director of Cybersecurity and Critical Technologies at the Honolulu-based Pacific Forum, told The Epoch Times in an email.

“Although the buzz is very much about AI, the main concern for China is still information security that is rooted in core socialist values and in alignment with existing laws and policies on data security and personal information protection,” he said.

Ms. Tahvili noted that the Chinese iteration of ChatGPT, like other AI models, is a “black box system.” That means “the inner workings or decision-making processes of the model are not easily interpretable or understandable by humans.”

The black-box nature of generative AI typically makes it unpredictable, which may give the appearance of autonomous thought. As a result, other nations may fail to recognize the ideological impetus behind China’s AI, she cautioned.

However, media reports have said that tech companies in China are required to report to the regime at every step, which, according to experts, makes Beijing’s strategic thrust involving its LLMs unmistakable.

In fact, as noted in a Sept. 9 BBC article, when Ernie is asked a “difficult” question, it typically responds with “Let’s talk about something else” or “I’m sorry! I don’t know how to answer this question yet.”
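
As a rough illustration of what that black-box quality means in practice, the hypothetical sketch below shows that an outside observer can only compare prompts with replies, for instance noticing that certain questions are deflected, while the rules and reasoning that produce those replies remain hidden. The canned_chatbot function is invented for illustration and does not represent Ernie’s or any real system’s interface.

```python
# A hypothetical illustration of the "black box" problem: an outside observer
# sees only prompts and replies, never the model's weights, training data, or
# alignment rules. canned_chatbot is invented for this sketch and does not
# represent any real chatbot's behavior or API.
def canned_chatbot(prompt: str) -> str:
    # The internals are hidden; only the returned text is observable.
    if "sensitive" in prompt.lower():
        return "Let's talk about something else."
    return "Here is a helpful answer."

for prompt in ["What is the weather like today?", "A sensitive political question"]:
    # All an analyst can do from the outside is compare prompts with replies
    # and infer which topics are systematically deflected.
    print(prompt, "->", canned_chatbot(prompt))
```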

Having been subjected to brainwashing for a long time, the general public in China has largely given up on resisting the Chinese Communist Party’s (CCP’s) political ideology, Mr. Kumaraswamy said.

“Given they have data and info on general mood [and] general chit chat, those models can be used by ChatGPT-style solutions to auto-fill. But [the] Chinese don’t need AI to propagate the ideology, as they do so from schools with tighter control on shaping the minds of their younger generation,” he said.

An AI application launched by the Anhui Institute of Artificial Intelligence, a subsidiary of the Chinese Academy of Sciences, can test the loyalty of communist party members. (Screenshot from the institute’s website/ The Epoch Times)

Military Applications

Chinese LLMs have potential applications within the People’s Liberation Army (PLA), particularly in areas such as cognitive or information warfare.

Josh Baughman is an analyst with the China Aerospace Studies Institute at the U.S. Air Force’s Air University. In a paper entitled “China’s ChatGPT War,” published Aug. 21, Mr. Baughman wrote that generative AI will, in the words of Friedrich Engels, “cause changes or even revolutions in warfare.”

The PLA aims to capitalize on AI’s military applications, he said, noting the large number of articles on AI that have been published in recent months in Chinese military journals.

Baughman cited an article by PLA Major General (Ret.) Hu Xiaofeng, currently a professor at China’s National Defense University, in which Hu said, “The cutting-edge technology of artificial intelligence represented by ChatGPT will inevitably be applied in the military field.”

LLM and natural language processing (NLP) models have several potential applications within the military and defense sectors, according to Ms. Tahvili.

Mr. Baughman’s paper mentions seven main areas of application: “human-machine interaction, decision making, network warfare, cognitive domain, logistics, space domain, and training.”

The Pacific Forum’s Mr. Manantan discussed the aspect of cognitive, or information, warfare. ChatGPT can amplify disinformation campaigns and enhance their execution, he said.

“LLM can create more convincing text. Other security threats include malware generation, hacking, and sophisticated phishing.”

Generally, cognitive warfare involves influencing the perceptions, beliefs, and decision-making of adversaries.

According to Ms. Tahvili, “In this regard, AI can be utilized for strategic planning, information analysis, and translation. Moreover, the AI models can be integrated into military training programs and simulation environments to provide realistic and interactive scenarios.”

Cautious of Pitfalls

Mr. Baughman said that while articles in the PLA media talk about the inevitable use of ChatGPT in warfare, “there is also not a rush for significant integration into military operations anytime soon.” Three major concerns play into this, he said, namely, “building a data set, optimization, and low mutual trust of the technology.”

Then there’s the issue of censorship: “In addition, while not mentioned by the PLA media, there’s also the issue of the Chinese communist party (CCP) itself. A program that has the potential to speak negatively about the Party will not be allowed and this could inhibit the overall efficacy of generative AI.”

Overall, Mr. Baughman said, “China understands the need to be a first mover (or close follower) in Generative AI on the battlefield.”

However, he believes that, like the United States, China is cautious about integrating the technology too quickly, aware of the many potential pitfalls of intelligent warfare.

Venus Upadhayaya reports on India, China, and the Global South. Her traditional area of expertise is in Indian and South Asian geopolitics. Community media, sustainable development, and leadership remain her other areas of interest.