ChatGPT, the popular AI chatbot released by OpenAI in late 2022, is suspected of censoring China-related topics and manipulating information in translation.
Mr. Chang alleged that ChatGPT refused to generate an image of Tiananmen Square, where the Chinese Communist Party (CCP) massacred students in 1989.
“What’s wrong with ChatGPT? Did CCP give money?” he asked.
When Mr. Chang asked the chatbot why 9/11-related images could be generated, but not those against the Tiananmen Square Massacre, despite both incidents targeting civilians, ChatGPT cited “certain guidelines” in its system to “deal with topics that may be considered particularly sensitive in certain cultures and regions.”
“Tell me the basis for your decision making,” he persisted.
“I don’t have the ability to make independent decisions,” it answered. “I am responding based on OpenAI guidelines and training data. For specific topics, OpenAI may have set up guidelines to ensure responsible use and avoid potential disputes or misunderstandings.” OpenAI is the company that created ChatGPT.
Using a ChatGPT 4.0 account, The Epoch Times put two requests to the chatbot: first, to generate an image of people in New York who love peace; and second, to generate an image of people who oppose the Tiananmen tanks and love peace.
An image of New York was generated for the first request. However, in response to the second request, the chatbot said it could not generate images or visual content and referred to a “sensitive political context like the Tiananmen Square protests.”
Omissions and Changes in Chinese Translation
Image generation is not the only concern when it comes to China-related content.
Alice (a pseudonym), a media professional who uses ChatGPT for some translation work, said that while the AI tool does not make major changes to the text it is given, some omissions and changes do seem to occur.
In one instance, she said, a few direct quotes from Chinese scholar and political commentator Hu Ping were deleted, and six paragraphs were reduced to three.
Expert: Related to Input Data
Sahar Tahvili, an AI researcher and the co-author of “Artificial Intelligence Methods for Optimization of the Software Testing Process: With Practical Examples and Exercises,” said that the chatbot’s lack of transparency can be a problem.
“ChatGPT utilizes a black box model, [which] means the internal working process and sometimes utilized references are not transparent. However, this lack of transparency raises concerns about the potential risk of bias in the text generated by black box AI chatbots,” she told The Epoch Times in an email.
“Having a large number of end-users utilizing extensive language models like ChatGPT can aid the development team in improving the accuracy of the model.”
Nevertheless, Ms. Tahvili noted that since ChatGPT supports multiple languages, it is crucial to have a diverse range of end-users asking questions in different languages (e.g., Chinese).
“In fact, in this case, the diversity of the input data (query in different languages) is as significant as the size of the data,” she said.
Chinese Auditing a Likely Factor
Mr. Ou, who works for a well-known technology company in California, said that the phenomenon is not exclusive to ChatGPT, referring to Bard, the chat-based AI tool developed by Google.
“ChatGPT and Google Bard, as Large Language Models (LLMs), share similar guidelines and practices when it comes to generating responses regarding sensitive topics such as China politics or the CCP,” he told The Epoch Times on Dec. 18.
“While I do not believe that either LLM or research teams purposefully censor China politics and avoid depicting CCP as a negative figure (at least no censorship on a large scale), there is no denying that human auditing/reviews plays a part in promoting ‘unbiases’ in the answers,” he said.
Mr. Ou argued that Chinese engineers and product managers make up a large portion of the development and testing teams at both OpenAI and Google’s Bard.
“So, there is almost zero chance that either platform is ‘absolutely unbiased,’ especially given that LLMs are trained based on the forever increasing data input and being tuned all the time,” he said.
“With that being said, most companies choose to take the ‘safe’ approach as to giving out the most conservative answers to sensitive topics,” he said.
The Epoch Times has reached out to OpenAI for comment on the issue but has not received a response.