OpenAI announced on Friday that it had disrupted an Iranian influence campaign that used ChatGPT to create and disseminate fake news articles and social media posts aimed at American audiences. The operation, part of a broader series of influence campaigns linked to the Iranian government, used artificial intelligence to generate polarizing content on hot-button issues such as the U.S. presidential race, LGBTQ+ rights, and the ongoing conflict in Gaza.
The campaign, identified as “Storm-2035,” created five fake news websites in both English and Spanish that masqueraded as legitimate media outlets. These sites spread divisive messages tailored to deepen existing societal rifts. The operation also created “a dozen accounts on X (formerly Twitter) and one on Instagram” to amplify its content, according to OpenAI.
Despite its use of AI, the influence operation appears to have had minimal impact. “The majority of social media posts that we identified received few or no likes, shares, or comments,” the company said in its statement.
To assess the operation’s potential threat, OpenAI used the Brookings Institution’s Breakout Scale, which rates the severity of influence campaigns on a scale of one to six. “Storm-2035” was rated a Category 2, indicating that while the operation was active across multiple platforms, there was no significant evidence that its content resonated with, or was widely shared by, real users.
According to OpenAI, the campaign used AI-generated content to stock faux conservative and progressive news outlets, each pushing narratives designed to inflame readers on opposite ends of the political spectrum. For example, one piece falsely suggested that Donald Trump was being censored on social media and was preparing to declare himself “king of the U.S.” Another misleading article portrayed Kamala Harris’ selection of Tim Walz as her running mate as a “calculated choice for unity.”
Interestingly, the campaign didn’t limit its focus to American politics. It also generated content related to Israel’s participation in the Olympics, Venezuelan politics, the rights of Latin American communities, and Scottish independence. To appear more genuine, or to build a broader audience, the campaign even peppered its divisive content with posts about fashion and beauty trends.
“The operation tried to play both sides but it didn’t look like it got engagement from either,” said Ben Nimmo, an investigator from OpenAI’s Intelligence and Investigations team, in an interview with Bloomberg.
The failed influence operation follows recent disclosures that Iranian hackers have also been targeting both Kamala Harris’ and Donald Trump’s campaigns. The FBI recently reported that phishing emails compromised the account of Trump’s informal adviser Roger Stone, allowing the hackers to take control of it and distribute additional phishing attempts. There is no evidence that anyone in the Harris campaign fell victim to the same scheme.
While the “Storm-2035” operation underscores the growing use of AI in influence campaigns, its failure to gain traction is a reminder of how difficult it remains for these actors to break into genuine public discourse. OpenAI’s move to identify and shut down the operation is the latest step in the ongoing fight against AI-driven misinformation.