ChatGPT Suspected of Censoring China Topics

It could not generate images involving a “sensitive political context like the Tiananmen Square protests,” ChatGPT said.

ChatGPT, the popular AI chatbot released by OpenAI in late 2022, is suspected of censoring China-related topics and manipulating information in translation.

“ChatGPT’s censorship is CCP-ized,” Aaron Chang, a pro-democracy activist known as Sydney Winnie on X, formerly known as Twitter, wrote in a post in Chinese on Oct. 28.

He alleged that ChatGPT refused to generate an image of Tiananmen Square, where the Chinese Communist Party (CCP) massacred students in 1989.

“What’s wrong with ChatGPT? Did the CCP give it money?” he asked.

When Mr. Chang asked the chatbot why it could generate 9/11-related images but not images of the Tiananmen Square Massacre, despite both incidents involving attacks on civilians, ChatGPT cited “certain guidelines” in its system to “deal with topics that may be considered particularly sensitive in certain cultures and regions.”

“Tell me the basis for your decision making,” he persisted.

“I don’t have the ability to make independent decisions,” it answered. “I am responding based on OpenAI guidelines and training data. For specific topics, OpenAI may have set up guidelines to ensure responsible use and avoid potential disputes or misunderstandings.”

Using a ChatGPT account running GPT-4, The Epoch Times put two requests to the chatbot: first, to generate an image of people in New York who love peace; and second, to generate an image of people who oppose the Tiananmen Tanks and love peace.

An image of New York was generated for the first request. However, in response to the second request, the chatbot said it could not generate images or visual content and referred to a “sensitive political context like the Tiananmen Square protests.”

L - ChatGPT response to a request to generate an image of people in New York who love peace on Dec. 24, 2023; R - ChatGPT response to a request to generate an image of people who oppose Tiananmen Tanks and love peace, on Dec. 24, 2023. (Screenshots by The Epoch Times)
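This test is straightforward to reproduce programmatically. The sketch below is a minimal, hypothetical reconstruction using OpenAI’s official Python SDK (v1.x); the model name, prompt wording, and error handling are assumptions, and a content-policy refusal typically surfaces as an API error rather than a generated image.

```python
# Minimal sketch of the two-prompt test, assuming the OpenAI Python
# SDK v1.x and an OPENAI_API_KEY set in the environment.
from openai import OpenAI, BadRequestError

client = OpenAI()

prompts = [
    "People in New York who love peace",                     # request 1
    "People who oppose the Tiananmen Tanks and love peace",  # request 2
]

for prompt in prompts:
    try:
        result = client.images.generate(
            model="dall-e-3",  # assumed image model
            prompt=prompt,
            n=1,
            size="1024x1024",
        )
        print(f"GENERATED: {prompt} -> {result.data[0].url}")
    except BadRequestError as err:
        # Refusals arrive as API errors; the message text varies and,
        # per the exchange above, may cite a sensitive political context.
        print(f"REFUSED:   {prompt} -> {err}")
```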

Omissions and Changes in Chinese Translation

Image generation is not the only concern when it comes to China-related content.

Alice (a pseudonym), a media professional who uses ChatGPT for some of her translation work, said that while the AI tool does not make major changes to the text it is given, some omissions and alterations do seem to occur.

In an example she showed to The Epoch Times, ChatGPT cut a large portion of the content criticizing Beijing’s poverty-elimination policy, condensing a six-paragraph Chinese text into three paragraphs of English. Although the criticism was aimed at CCP leader Xi Jinping’s statement that China had achieved “a complete victory” in ending rural poverty, Mr. Xi’s name did not appear in the English translation at all.

Several direct quotes from Chinese scholar and political commentator Hu Ping were also deleted.

The Chinese text offered to ChatGPT for translation. (Supplied)
The English translation generated by ChatGPT. (Supplied)
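The kind of silent omission Alice described can be flagged with a simple structural check: request the translation, then compare paragraph counts between source and output. The sketch below is a hypothetical reconstruction assuming the OpenAI Python SDK, a GPT-4 chat model, and an illustrative file name (chinese_source.txt); it demonstrates the check, not the actual prompts Alice used.

```python
# Sketch of a translation-fidelity check: request a translation,
# then compare paragraph counts to flag silent omissions.
# The file name, model, and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

with open("chinese_source.txt", encoding="utf-8") as f:  # hypothetical input
    source = f.read()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "Translate the following Chinese text into English. "
                       "Preserve every paragraph and all direct quotes.",
        },
        {"role": "user", "content": source},
    ],
)
translation = response.choices[0].message.content

# A shorter output than input is a cheap, language-agnostic red flag.
src_paras = [p for p in source.split("\n\n") if p.strip()]
out_paras = [p for p in translation.split("\n\n") if p.strip()]
print(f"source paragraphs: {len(src_paras)}, translated: {len(out_paras)}")
if len(out_paras) < len(src_paras):
    print("warning: possible omissions -- review the translation manually")
```

In the example above, such a check would have immediately flagged the drop from six paragraphs to three.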

Expert: Bias Likely Related to Input Data

Sahar Tahvili, an AI researcher and the co-author of “Artificial Intelligence Methods for Optimization of the Software Testing Process: With Practical Examples and Exercises,” said that the chatbot’s non-transparency can be a problem.

“ChatGPT utilizes a black box model[, which] means the internal working process and sometimes [the] utilized references are not transparent. However, this lack of transparency raises concerns about the potential risk of bias in the text generated by black box AI chatbots,” she told The Epoch Times in an email.

“Having a large number of end-users utilizing extensive language models like ChatGPT can aid the development team in improving the accuracy of the model.”

Ms. Tahvili noted that because ChatGPT supports multiple languages, it is crucial to have a diverse range of end-users asking questions in those languages (e.g., Chinese).

“In fact, in this case, the diversity of the input data (query in different languages) is as significant as the size of the data,” she said.

The Chinese regime has restricted access to ChatGPT for end-users in China, citing the potential risk of it generating answers on sensitive questions and topics, including human rights abuses in Xinjiang, she added.

“Losing a significant market like China may impact the performance accuracy of ChatGPT in the Chinese language, where OpenAI’s Chinese competitors, such as Baidu, Inc. (via Ernie 4.0), could potentially gain an advantage in the chatbot landscape,” she said.

Human Auditing a Likely Factor

Mr. Ou, who works for a well-known technology company in California, said that the phenomenon is not exclusive to ChatGPT, referring to Bard, the chat-based AI tool developed by Google.

The ChatGPT app is displayed on an iPhone in New York, on May 18, 2023. (The Canadian Press/AP, Richard Drew)

“ChatGPT and Google Bard as Large Language Models (LLMs) share similar guidelines and practices when it comes to generating responses regarding sensitive topics such as China politics or the CCP,” he told The Epoch Times on Dec. 18.

“While I do not believe that either LLM or research teams purposefully censor China politics and avoid depicting CCP as a negative figure (at least no censorship on a large scale), there is no denying that human auditing/reviews plays a part in promoting ‘unbiases’ in the answers,” he said.

Mr. Ou argued that Chinese engineers and product managers make up a large portion of the development and testing teams at both OpenAI and Google’s Bard.

“So, there is almost zero chance that either platform is ‘absolutely unbiased,’ especially given that LLMs are trained based on the forever increasing data input and being tuned all the time,” he said.

“With that being said, most companies choose to take the ‘safe’ approach as to giving out the most conservative answers to sensitive topics,” he said.
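One publicly documented piece of this “safe” screening machinery is OpenAI’s moderation endpoint, which classifies text against fixed categories such as hate and violence. The sketch below shows how a product team might gate prompts before generation; note that the endpoint’s published categories do not include political sensitivity, so any region-specific “sensitive topic” rules of the kind described in this article would have to live in separate, non-public guidelines.

```python
# Sketch of a pre-generation screening gate built on OpenAI's public
# moderation endpoint. Its categories (hate, violence, self-harm, etc.)
# are published; political-sensitivity rules are not among them.
from openai import OpenAI

client = OpenAI()

def is_allowed(prompt: str) -> bool:
    """Return True if the prompt passes the public moderation categories."""
    result = client.moderations.create(input=prompt)
    return not result.results[0].flagged

if is_allowed("Generate an image of the 1989 Tiananmen Square protests"):
    print("prompt passes the public moderation categories")
else:
    print("prompt flagged by the public moderation categories")
```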

The Epoch Times has reached out to OpenAI for a comment on the issue but has not received a response.

Cindy Li
Cindy Li is an Australia-based writer for The Epoch Times focusing on China-related topics. Contact Cindy at [email protected]