ChatGPT Rejected Over 250,000 Requests to Create Images of Presidential Candidates Before Election

OpenAI said it had applied safety measures to ChatGPT to refuse requests to generate images of real people.
Aldgra Fredly

OpenAI said on Friday that ChatGPT had denied more than 250,000 requests to generate images of U.S. presidential candidates ahead of the Nov. 5 general election.

The artificial intelligence (AI) chatbot blocked requests to generate images of former President Donald Trump, Vice President Kamala Harris, President Joe Biden, Minnesota Gov. Tim Walz, and Sen. JD Vance (R-Ohio) in the month leading up to Election Day, OpenAI said.

“We’ve applied safety measures to ChatGPT to refuse requests to generate images of real people, including politicians,” the company said in a blog post on Nov. 8.

“These guardrails are especially important in an elections context and are a key part of our broader efforts to prevent our tools being used for deceptive or harmful purposes,” it added.

ChatGPT also directed questions about voting in the United States to CanIVote.org as part of its safety measures during this year’s election season, according to the blog post.

OpenAI said that it had been focusing on identifying and disrupting attempts to use its models to generate content for covert influence operations targeting this year’s global elections.

The company said it had found no evidence that covert operations intended to influence the U.S. election had received viral engagement or built sustained audiences through the use of its models.

In its October report, OpenAI said it disrupted more than 20 operations and deceptive networks worldwide that tried to use its models for activities such as debugging malware, writing articles for websites, and generating content posted by fake personas on social media accounts.

The report stated that OpenAI had disrupted efforts to create social media content about the elections in the United States, Rwanda, India, and the European Union, but that there was no indication these networks were able to attract viral engagement or build sustained audiences using its tools.

“Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences,” the report stated.

Earlier this year, a group of 20 big tech companies—including OpenAI, Google, and Meta—signed a pact affirming their commitment to prevent deceptive use of AI in this year’s elections globally.

The specific focus of the pact is on AI-generated audio, video, and images designed to deceive voters and manipulate election processes. The companies pledged to “work collaboratively” to build on each of their existing efforts in this arena, according to a news release.

The participating companies agreed to eight actions, including developing technology to detect and address deepfakes, mitigating risks and fostering cross-industry resilience, and providing transparency to the public regarding their efforts.

Caden Pearson contributed to this report.