OpenAI Disrupts Influence Operations Linked to China, Russia, and Others

Actors behind these operations used OpenAI tools to generate comments, produce articles, or create fake names or bios for social media accounts.
Screens displaying the logos of OpenAI and ChatGPT in Toulouse, France, on Jan. 23, 2023. Lionel Bonaventure/AFP via Getty Images

OpenAI announced that it has disrupted five influence operations from four countries that were using its artificial intelligence (AI) tools to manipulate public opinion and shape political outcomes across the internet.

The company stated on May 30 that these covert influence operations were from Russia, China, Iran, and Israel. Actors behind these operations used OpenAI tools to generate comments, produce articles, or create fake names or bios for social media accounts over the past three months.
The report found that the content pushed by these operations targeted a range of ongoing issues, including criticism of the Chinese regime by Chinese dissidents and foreign governments, U.S. and European politics, Russia’s invasion of Ukraine, and the conflict in Gaza.

However, these operations do not appear to have meaningfully increased their audience engagement or reach as a result of using the company’s services, OpenAI said in a statement.

The company identified several trends in how these actors used its AI tools, including generating content, mixing AI-generated material with older and manually created content, faking engagement by generating replies to their own social media posts, and boosting productivity with tasks such as summarizing social media posts.

Pro-Beijing Network

OpenAI stated that it disrupted an operation by Spamouflage, a pro-Beijing disinformation and propaganda network in China. The Chinese operation used the company’s AI models to seek advice about social media activity, research news and current events, and generate content in Chinese, English, Japanese, and Korean.

Much of the content generated by the Spamouflage network praised the Chinese communist regime, criticized the U.S. government, and targeted Chinese dissidents.

Such content was posted on multiple social platforms, including X, Medium, and Blogspot. OpenAI found that in 2023, the Chinese operation generated articles claiming that Japan had polluted the environment by releasing wastewater from the Fukushima Nuclear Power Plant. Actor and Tibet activist Richard Gere and Chinese dissident Cai Xia were also targets of the network.

The network also used the OpenAI model to debug code and generate content for a Chinese-language website that attacks Chinese dissidents, calling them “traitors.”

Last year, Facebook uncovered links between Spamouflage and Chinese law enforcement, noting that the group had been promoting pro-Beijing campaigns on social media since 2018. The company removed about 7,700 Facebook accounts and roughly a hundred pages, as well as Instagram accounts involved in influence operations that pushed positive narratives about Beijing and negative comments about the United States and critics of the Chinese regime.

Russia, Israel, and Iran Operations

OpenAI also found two operations from Russia, one of which is known as Doppelganger. This operation used OpenAI tools to generate comments in multiple languages and post them on X and 9GAG. Doppelganger also used the AI tools to translate articles into English and French and turn them into Facebook posts.

The company stated that the other is a previously unreported Russian network, Bad Grammar, which operates mainly on Telegram and focuses on Ukraine, Moldova, the United States, and the Baltic States. It used OpenAI tools to debug code for a Telegram bot that automatically posts information on this platform. This campaign generated short political comments in Russian and English about the Russia–Ukraine war and U.S. politics.

The ChatGPT maker also found one operation from Israel, linked to the Tel Aviv-based political marketing firm STOIC, and another from Iran. Both used ChatGPT to generate articles. The Iranian operation published its content on a website associated with the threat actor, while the Israeli operation posted its comments on multiple platforms, including X, Facebook, and Instagram.

On May 29, Facebook parent company Meta released a quarterly report revealing that “likely AI-generated” deceptive content had been posted on its platform. The report indicated that Meta disrupted six covert influence operations in the first quarter, including one Iran-based network and another from STOIC.

OpenAI released ChatGPT to the public in November 2022. The chatbot swiftly became a global phenomenon, attracting hundreds of millions of users with its ability to answer questions and engage across a wide array of topics.

OpenAI captured global tech attention last year when its board abruptly fired CEO Sam Altman. The move drew worldwide backlash, forcing the board to reinstate him and resulting in the resignation of most board members and the formation of a new board.
Aaron Pan is a reporter covering China and U.S. news. He graduated with a master's degree in finance from the State University of New York at Buffalo.