OpenAI has taken action against a coordinated influence operation linked to Iran that was using ChatGPT to generate content about the U.S. presidential election. The company announced on Friday that it had banned a cluster of accounts associated with the operation, which had been producing AI-generated articles and social media posts. While the operation’s content did not appear to gain significant traction, the move underscores growing concerns about the misuse of generative AI in election interference.
This incident is part of a broader pattern, as OpenAI has previously identified and banned accounts connected to state-affiliated actors attempting to use ChatGPT for malicious purposes. In May, the company disrupted five similar campaigns, all aimed at manipulating public opinion through AI-generated content.
The operation echoes tactics that state actors used on social media platforms like Facebook and Twitter in past election cycles. Now the same groups, or ones that closely resemble them, are turning to generative AI to flood social channels with misinformation. Like the social media companies before it, OpenAI appears to be locked in a continuous effort to identify and shut down these operations as they arise.
A key element in OpenAI’s investigation was a report from Microsoft’s Threat Intelligence team, released last week. That report highlighted the activities of a group dubbed Storm-2035, part of a larger campaign, active since 2020, to influence U.S. elections. According to Microsoft, Storm-2035 is an Iranian network that operates a number of websites designed to mimic legitimate news outlets. The group has been targeting U.S. voter groups across the political spectrum with polarizing messaging on hot-button issues such as the presidential candidates, LGBTQ rights, and the Israel-Hamas conflict. The goal of these campaigns is not necessarily to promote any particular policy but to sow discord and division.
OpenAI’s investigation uncovered five websites linked to Storm-2035, some posing as progressive news outlets and others as conservative ones. The sites used credible-sounding domain names like “evenpolitics.com” and published long-form articles written with ChatGPT’s help. One such article falsely claimed that the social media platform X, formerly known as Twitter, had censored tweets by Donald Trump. In reality, Elon Musk, who now owns X, has actively encouraged Trump’s participation on the platform.
On social media, OpenAI identified a dozen accounts on X and one on Instagram under the operation’s control. The accounts used ChatGPT to rewrite political commentary, which they then posted on those platforms. One tweet, for example, falsely claimed that Vice President Kamala Harris had blamed “increased immigration costs” on climate change, appending the hashtag “#DumpKamala.”
For all its breadth, Storm-2035’s efforts largely failed to attract an audience, OpenAI reports. Most of the group’s social media posts received little to no engagement: few likes, shares, or comments. That outcome is typical of such operations, which AI tools like ChatGPT make quick and cheap to set up. Still, as the U.S. election season heats up, more influence attempts are likely, along with more efforts by companies like OpenAI to counter them.
This incident highlights the ongoing battle against election interference in the digital age, where generative AI has become the latest tool in the arsenal of those seeking to disrupt democratic processes. As we move closer to the election, vigilance will be key in identifying and combating these sophisticated new threats.