OpenAI Exposes Chinese Use of ChatGPT in Propaganda Campaigns

As artificial intelligence grows more powerful and accessible, so does its misuse. OpenAI, the company behind the widely used ChatGPT, has revealed that it shut down several covert influence campaigns – including four linked to China – that used its AI tools to manipulate online conversations and spread propaganda.

These campaigns, part of a wider pattern that also includes alleged operations from Russia, Iran, and North Korea, leveraged ChatGPT to write politically loaded posts and comments, and even to forge identities that drove social media engagement across various platforms. One such Chinese-linked campaign, named “Uncle Spam,” aimed to spark controversy around sensitive U.S. issues.

OpenAI’s investigations team found that these campaigns didn’t stop at content generation. The AI was also used to simulate genuine online engagement and even to draft performance reviews for operatives – a kind of propaganda management system that tracked effectiveness and reported it to superiors.

Ben Nimmo, who leads OpenAI’s threat analysis efforts, warned of a “growing range of covert operations” emanating from China, highlighting how disinformation tactics are evolving. The campaigns went beyond anonymous posting: they included AI-generated emails targeting journalists, politicians, and analysts, designed to build trust under false pretenses and extract information.

This report underscores an urgent need for global collaboration around ethical AI development. While AI can be a powerful force for good – in healthcare, education, and science – it also offers a toolkit for psychological warfare and misinformation when left unchecked.

OpenAI’s disclosures mark a significant moment in AI governance. As misuse grows more sophisticated, the question isn’t just whether AI can be controlled, but who will be accountable when it’s not. If advanced tools like ChatGPT are becoming digital mercenaries in influence campaigns, then transparency, oversight, and resilience must be built into the system from the ground up.

The message is clear: AI is no longer just a tool – it’s a weapon. And like any weapon, it depends on who wields it.