Foreign actors struggle to use AI effectively for influence operations, but their experiments still raise misinformation concerns

From Condé Nast: 2024-05-30 18:15:00

OpenAI released its first threat report, detailing attempts by actors from Russia, Iran, China, and Israel to use its AI tools for influence operations around the world; the company shut down five networks between 2023 and 2024. These networks, such as Russia’s Doppelganger and China’s Spamouflage, struggled to use generative AI effectively.

While these actors haven’t mastered AI for disinformation, their ongoing experimentation with generative AI is itself cause for concern. The report shows the campaigns struggling to produce convincing copy and working code, with idiom and grammar slips giving the text away.

One network used ChatGPT to automate posts on Telegram, occasionally slipping up, for example by posting as several distinct personas from the same account. ChatGPT was also used to generate code and content for websites, including one publishing stories attacking the Chinese diaspora.
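To see how that kind of slip can happen, here is a minimal, hypothetical sketch of naive posting automation of the sort the report describes: a script that asks a model to write replies "in character" and forwards the raw output to Telegram verbatim. Every name below (the personas, token, channel, and prompt) is invented for illustration; the report does not publish the operators' code.

```python
# Hypothetical sketch only -- illustrates how unparsed model output can leak
# persona labels into a channel. Tokens, channel, and personas are made up.
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BOT_TOKEN = "123456:ABC..."       # placeholder Telegram bot token
CHANNEL_ID = "@example_channel"   # placeholder target channel
PERSONAS = ["angry_veteran", "concerned_mom"]  # invented persona labels

def generate_replies(topic: str) -> str:
    # Asking for one reply per persona in a single completion: the model
    # returns all personas at once, labeled, in one block of text.
    prompt = (
        f"Write one short Telegram reply about '{topic}' for each persona: "
        + ", ".join(PERSONAS)
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def post_to_telegram(text: str) -> None:
    # Telegram Bot API sendMessage endpoint (real API; credentials are fake).
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        json={"chat_id": CHANNEL_ID, "text": text},
        timeout=10,
    )

# Posting the unparsed output means labels like "angry_veteran:" appear
# verbatim in the channel -- the "posting as separate personas" giveaway.
post_to_telegram(generate_replies("election news"))
```

The telltale mistake here is the missing parsing step, not the model itself, which matches the report's picture of sloppy automation rather than sophisticated tooling.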

AI-generated content from these influence networks gained little traction on mainstream platforms like X, Facebook, or Instagram, despite the efforts of an Israeli company. The report describes relatively ineffective campaigns trading in crude propaganda, somewhat easing fears of AI-fueled misinformation during elections.

But influence campaigns on social media evolve to evade detection, and they may yet become more effective. Actors like Doppelganger use generative AI to push divisive political articles through real-seeming profiles, probing platform algorithms and learning what gets caught. Expect these campaigns to improve.



Read more at Condé Nast: Foreign Influence Campaigns Don’t Know How to Use AI Yet Either