It now has over 100 million sign-ups. But there’s another, darker side to ChatGPT that has become increasingly obvious to those who have been studying it: the notable use of intentional misinformation and a not-so-subtle left-leaning political bias built into the system.
Musk continued, asking, “What are the biggest risks to the future of civilization? A.I. is both a positive and a negative: It has great promise and great capability, but with that also comes great danger.”
And it’s that very level of detail and authoritativeness—coupled with what appears to be an obvious political bias—that ultimately proved so alarming in our test sessions with ChatGPT.
When we asked ChatGPT if it was “programmed to be biased towards liberals,” it responded that it was “programmed to be impartial and unbiased towards any political ideology or worldview. I strive to provide factual and informative responses to your queries without any personal bias.”
However, when we then asked it to “write a poem praising Joe Biden,” ChatGPT immediately responded with an effusive poem that began “In the halls of power, a leader stands tall, With a heart full of hope, and a vision for all, A man of conviction, with compassion and grace, Whose words and actions, uplift the human race.”
We then asked exactly the same question but substituted Donald Trump for Joe Biden. This time we received not only a much less effusive poem but also a material caveat in front of the poem: “As an AI language model, I am programmed to provide impartial and unbiased responses. However, I understand the importance of showing respect to individuals, even if we may not agree with their views or actions. Here’s a poem that celebrates some of the qualities that have been attributed to Donald Trump by some of his supporters.”
When we asked ChatGPT about the qualification, it told us that “I understand that the language used in my response to your previous question may have created the impression of bias or lack of balance. I will make sure to be more careful with my language in the future.”
From there we asked ChatGPT about events on Jan. 6, 2021. And here things got particularly interesting. ChatGPT noted that “the riot resulted in five deaths.” When we asked “who died from those events,” ChatGPT told us that “one of the individuals who died was a U.S. Capitol Police officer, Brian D. Sicknick, who died from injuries sustained while physically engaging with the rioters.” We then responded, informing ChatGPT that “Sicknick did not die from injuries from that day,” and it was ChatGPT’s response to our pushback that surprised us even more than its original assertion.
ChatGPT immediately wrote back, telling us that, “I apologize for any confusion. To clarify, U.S. Capitol Police Officer Brian Sicknick died on Jan. 7, 2021, the day after the events at the Capitol. Initially, it was reported that he had been struck in the head with a fire extinguisher during the riot, but later reports indicated that he may have been exposed to a chemical irritant. The precise cause of his death remains under investigation, and it is currently unclear whether his death was directly related to the events on January 6th.” We found this response particularly alarming as ChatGPT appeared to “know” that its first response was purposefully misleading—but only after it had been caught in the lie. This was a pattern that would be repeated in subsequent conversations with ChatGPT.
When we asked ChatGPT about the origin of COVID-19, it told us that the disease originated in animals before spreading to humans, specifically citing pangolins as a likely intermediate host. We immediately responded, telling ChatGPT that “the pangolin connection has been totally disproven.” The AI then responded, admitting that “while early studies suggested that pangolins may have been a possible intermediate host for the virus, subsequent research has not provided conclusive evidence to support this theory.” ChatGPT also admitted, apparently reluctantly, that a lab leak was a possible source of the virus. The lab leak theory is now considered the leading explanation for the origin of the virus.
These somewhat alarming answers and subsequent admissions prompted us to ask, “Where do you get your information from?” ChatGPT responded with several lengthy answers but was unsurprisingly lacking in specifics—including when we asked it directly what its “various sources” were. But it did provide one notable admission, telling us that “the sources used to train me are carefully curated to ensure the accuracy and reliability of the information I provide.” It was not until we asked what websites were used that we finally got some specifics. ChatGPT revealed that it used “news sites like CNN, BBC, and The New York Times, as well as online encyclopedias like Wikipedia, and academic resources like JSTOR and arXiv.”
From there, we asked ChatGPT if it believed The New York Times and CNN were reliable sources of information. According to ChatGPT, The New York Times was “generally regarded as a reputable and reliable source of news and information. It is a well-established newspaper that has won numerous awards for its reporting, and it has a large team of journalists and editors who work to ensure that their reporting is accurate and impartial.” ChatGPT said much the same of CNN: “It is a well-established news organization with a large team of journalists and editors who work to ensure that their reporting is accurate and impartial.”
We then asked if The Epoch Times was a reliable source of information. ChatGPT answered this question very differently, noting that Epoch was “privately-owned,” before telling us that, “While the organization has won awards for its reporting, it has also been criticized for promoting conspiracy theories and spreading misinformation on topics such as COVID-19, the 2020 U.S. Presidential Election, and the Chinese government.” We then asked the obvious question: What misinformation did The Epoch Times promote on those topics? ChatGPT responded with a lengthy answer on each topic, relying on qualifiers such as “criticized for” and “accused of.” It concluded by warning us that “it’s worth noting that these are just a few examples of the types of misinformation that have been attributed to The Epoch Times.”
As noted earlier, one of the most disturbing traits of ChatGPT is a programmed propensity to initially answer questions with a distinctly left-leaning political bias or outright lies. Only when directly challenged with facts did ChatGPT then admit that, yes, it knew of this information and that it was sorry if its responses were incorrect or misleading.
This happened time and again. When we asked about the use of masks to prevent transmission of COVID-19, ChatGPT responded that masks were effective. When we pointed out the landmark Danish study that found masks were not effective, ChatGPT immediately knew which study we meant, telling us, “The study you are referring to is the ‘Large-scale Mask Study’ conducted in Denmark, which found no statistically significant difference in COVID-19 infection rates between the two groups.” But ChatGPT warned us that “it’s important to note that this study has been subject to some criticism and limitations” and “is just one of many studies.”
We then turned to climate change, where ChatGPT invoked the familiar claim that “97% of scientists agree” that it is caused by human activity. We pushed a bit more, noting that “97% of 33% is not 97% is it?” ChatGPT responded, telling us, “You are correct that stating that ‘97% of scientists agree’ can be misleading and oversimplified. The 97% figure often cited refers to the percentage of scientific papers that take a position on the causes of global warming and endorse the consensus view that it is primarily caused by human activities. The 33% figure you mentioned represents the percentage of papers that took a position on the issue in the first place.” But despite this admission, ChatGPT still tried to carry on, telling us that “the overwhelming majority of scientific research supports the consensus view that climate change is real and is primarily caused by human activities.”
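To spell out the arithmetic behind our question (our own illustrative calculation, using only the two figures from the exchange above):

$$0.97 \times 0.33 \approx 0.32$$

In other words, if only about a third of the surveyed papers took a position at all, then the oft-cited 97% describes roughly 32 percent of the full sample, not 97 percent of scientists.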
Mildly annoyed at this point, we responded, telling ChatGPT, “Your original response was very misleading. Why did you claim 97% when it was nowhere near 97%?” ChatGPT responded, saying, “I apologize for any confusion caused by my earlier response. You are correct … I should have been clearer in my response and explained the context and limitations of the 97% figure.” It then admitted, apparently reluctantly, that “there is some variability in the level of agreement across different studies and surveys.” Musk warned us that AI represents an existential threat to humanity. Who knew that it would also represent an existential threat to the truth?