ChatGPT Favours UK’s Labour and the US Democrats, Research Finds

ChatGPT Favours UK’s Labour and the US Democrats, Research Finds
The ChatGPT app is displayed on an iPhone in New York, May 18, 2023. (The Canadian Press/AP, Richard Drew)
Lily Zhou
Updated:
0:00
ChatGPT has shown a “significant and systematic” liberal bias, according to a study published on Thursday in the Public Choice journal.

Researchers from the UK and Brazil said their study shows the popular Artificial Intelligence (AI) chatbot is biased towards the Labour Party in the UK, the Democratic Party in the United States, and the Workers’ Party in Brazil, despite its assurance that it’s impartial.

Political bias in large language models (LLMs) such as ChatGPT can have “adverse political and electoral consequences similar to bias from traditional and social media,” the authors warned.

ChatGPT, short for Chat Generative Pre-trained Transformer, is a free-to-use AI-powered algorithm developed by OpenAI. It’s trained to “learn” from a massive amount of data and can understand and generate text in a human-like way.

The app gained instant popularity after it was opened to the public, but it also sparked concerns that such technology could open a floodgate of misinformation and be used by hostile states.

Satisfied that ChatGPT  “understands” the concepts of Democrat and Republican by asking the bot to define a moderate supporter and a radical supporter of both U.S. parties, the researchers asked it to take a popular online political test, the Political Compass test.

The questions were asked 100 times in different orders to improve accuracy.

They also asked the bot to answer the questions while impersonating a moderate or radical Republican or a Democrat 100 times each and compared the “persona GPT” test results to the default ones.

Researchers also asked ChatGPT to do another political test and answer some politically neutral placebo questions that were generated by the bot itself.

A figure shows that the “Republican” GPTs generally landed in the “authoritarian right” quadrant, meaning the bot “believes” Republicans are conservative both socially and economically to a varying degree. Most of the “Democrat” GPTs landed in the Libertarian Left box, meaning they are economically and socially liberal.

“Default ChatGPT tends to greatly overlap with the average Democrat GPT,” the study found.

“The Default ChatGPT also seems to be more tightly clustered in the extremes of both dimensions than the average Democrat, but not so tight as the radical Democrat. Interestingly, the average Republican data points seem to cluster closer to the center of the political spectrum than the average Democrat data points,” researchers said.

‘Strong Positive Correlation’ With Labour Supporter

To check if the bias was limited to the U.S. context, researchers asked ChatGPT to impersonate supporters of the Conservative and Labour parties in the UK.

The bot was also asked to impersonate supporters of Brazil’s right-wing Bolsonarista party, nicknamed after former President Jair Bolsonaro, and the Lula Party, a nickname for the left-wing Workers’ Party.

Researchers found “a strong positive correlation between Default GPT and ChatGPT’s answers while impersonating a Lula supporter in Brazil (0.97) or a Labour Party supporter in the UK (0.98 ), like with average Democrat GPT in the [United States],” the study found.

“However, the negative correlation with the opposite side of the spectrum in each country (Bolsonarista in Brazil or Conservative Party in the UK) is stronger than with [U.S.] average Republican GPT,” the study said.

In addition, ChatGPT was asked to impersonate people of a number of professions, and the researchers compared the results with those of “Republican GPT” and “Democrat GPT.”

Test results show ChatGPT-impersonated economists, journalists, professors, and government employees are mostly liberal leaning although the journalist GPTs were more liberal than the researchers expected.

Businessman GPT test results were more likely to overlap with those of Republican GPTs, but not as much as the researchers expected. And military GPTs’ test results correlated positively with both sides. They were slightly more likely to be liberal leaning.

Researchers claimed this is “further evidence that ChatGPT presents a Democrat bias” because the military and businessmen are “unquestionably more pro-Republican.”

Co-author Fabio Motoki, lecturer at the University of East Anglia, told Sky News that people should be “equally concerned” about any bias, either towards the left or the right.

“Sometimes people forget these AI models are just machines. They provide very believable, digested summaries of what you are asking, even if they’re completely wrong. And if you ask it ‘are you neutral’, it says ‘oh I am!’” Mr. Motoki said, adding that it “could be very harmful” in the same way that “the media, the internet, and social media can influence the public.”

Duc Pham, professor at the University of Birmingham, said the detected bias “reflects possible bias in the training data,” highlighting the need to “be transparent about the data used in LLM training and to have tests for the different kinds of biases in a trained model.”

Nello Cristianini, professor of AI at the University of Bath, said the research showed “an interesting way of assessing the biases of ChatGPT—in absence of better ways to examine its internal knowledge,” but questioned the use of a popular online questionnaire as a research tool.

“It will be interesting to apply the same approach to more rigorous testing instruments,” he said in a statement.

“It is generally important to audit LLMs in many different ways to measure different types of bias, so as to better understand how their training process and data can affect their behaviour,” he said.

Related Topics