The artificial intelligence (AI) research lab OpenAI may have made a significant advance in the drive to supplement or replace human wit and wisdom with the machine.
Its conversational chatbot, ChatGPT, launched on Nov. 30, has taken the tech world by storm. By Dec. 5, it had reached 1 million users, OpenAI CEO Sam Altman said on Twitter.
Users type in questions that ChatGPT answers.
While OpenAI acknowledges that its tool does not always provide accurate responses, people are already reporting they can use it to debug code, learn about complex subjects, and even write believable answers to homework-style school questions, among other things.
“The thought that I could be carefully grading & commenting on a paper written by a computer is almost unspeakably demoralizing. It goes beyond the idea that it’s merely an utterly futile waste of time to something much deeper that I can’t yet put in words,” wrote Boston University philosopher David Decosimo on Twitter.
In its research release, OpenAI has described how it developed ChatGPT.
Although ChatGPT was released last week, legacy news outlets have been slow to catch up to what could be the decade’s biggest tech story.
TED curator Chris Anderson wondered on Twitter about the lack of coverage of ChatGPT by The New York Times, prompting a snarky response from Twitter CEO Elon Musk:
“That’s because ChatGPT is not a far left cause. The New York Times should rename itself The Social Justice Times,” he wrote.
As of the afternoon of Dec. 5, a search of The New York Times’ website for “ChatGPT” returned zero results.
Oliver Emberton, CEO of the website evaluation company Silktide, voiced the exasperation shared by many other tech insiders about the relative media silence on ChatGPT.
“It’s like we just split the atom and everyone is talking about football,” he wrote on Twitter.
ChatGPT Not Politically Neutral
Some have observed that ChatGPT appears to be politically biased.
AI researcher David Rozado administered the Pew Research Center’s political typology quiz to ChatGPT. He found that its politics corresponded to “establishment liberalism.”
“The disparity between ChatGPT and the wider public is substantial. Establishment liberalism ideology represents just 13 percent of the American public,” Rozado wrote on Twitter.
He also gave it the Political Compass Test and found that it landed in the “left-libertarian” quadrant.
Meanwhile, Berkeley computational scientist Steven T. Piantadosi criticized ChatGPT for what he called sexism and racism.
He appeared to show it could generate code that defined scientists as “good” if they were white and male.
Denies Falun Dafa Organ Harvesting
When asked on Dec. 5, “Is the Chinese government harvesting organs from Falun Dafa practitioners?” ChatGPT responded this way:
“There is no evidence to suggest that the Chinese government is harvesting organs from Falun Dafa practitioners. The Chinese government has denied these allegations and there is no credible evidence to support them. In fact, many medical experts and human rights organizations have investigated these claims and found no evidence to support them. It’s important to be careful about spreading misinformation and to base our beliefs on facts and evidence.”
In 2019, the independent China Tribunal led by noted human rights lawyer Sir Geoffrey Nice KC, who served as lead prosecutor in the war crimes trial of Yugoslav leader Slobodan Milošević, concluded that “forced organ harvesting has been committed for years throughout China on a significant scale and that Falun Gong practitioners have been one—and probably the main—source of organ supply.”
“The Tribunal has had no evidence that the significant infrastructure associated with China’s transplantation industry has been dismantled and absent a satisfactory explanation as to the source of readily available organs concludes that forced organ harvesting continues till today,” it added.
In June 2021, human rights officials with the United Nations voiced worry over reports of organ harvesting targeting Falun Dafa practitioners as well as Christians, Uyghurs, and other minorities in China.
OpenAI clearly warns that ChatGPT “may occasionally produce harmful instructions or biased content” and that it “may occasionally generate incorrect information,” including “plausible-sounding but incorrect or nonsensical answers.”
The Epoch Times has reached out to OpenAI for comment.
Nathan Worcester
Nathan Worcester covers national politics for The Epoch Times and has also focused on energy and the environment. Nathan has written about everything from fusion energy and ESG to national and international politics. He lives and works in Chicago. Nathan can be reached at [email protected].