OpenAI says it has disrupted more than 20 operations and deceptive networks run by threat actors in a year of global elections.
In an October report, the company behind ChatGPT said it “know[s] it is particularly important to build robust, multi-layered defenses against state-linked cyber actors and covert influence operations that may attempt to use our models in furtherance of deceptive campaigns on social media and other internet platforms.”
A threat actor is a person or group that intentionally causes harm in the cyber sphere.
Since May, the company has continued to build new AI-powered tools that help it detect and dissect potentially harmful activity.
While threat actors have continued to experiment with its models, the AI research firm hasn’t seen “evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences.”
OpenAI has disrupted activity that generated social media content about the elections in the United States, Rwanda, India, and the European Union.
OpenAI remains on ‘high alert’ to detect and disrupt threat actors
Since the beginning of the year, four separate networks that included at least some degree of election-related content have been disrupted. Only one of these networks, in Rwanda, is said to have focused exclusively on election issues while the others generated and posted content on other topics too.
In May and June, two operations that sometimes referenced democratic processes, but whose primary focus lay elsewhere, were disrupted.
Also in June, ahead of the European Parliament elections, a previously unreported operation was disrupted that consisted of “generating comments about the European Parliament elections in France, and politics in Italy, Poland, Germany and the United States.”
In July, a number of ChatGPT accounts based in Rwanda were banned after generating comments about the elections in that country. The comments were then posted by several accounts on X, but the majority received few or no likes, shares, or replies.
With the United States election around the corner, the AI company explains how it will continue its approach to responsible disruption: “We shared threat intelligence with industry partners and relevant stakeholders. We will remain on high alert to detect, disrupt, and share insights into further attempts to target elections and democratic processes.”
Featured Image: AI-generated via Midjourney