OpenAI identifies and disrupts influence operations by Israeli company Stoic, which had minimal impact.
From Calcalist: 2024-06-02 08:33:00
OpenAI exposed five covert influence operations that used its generative AI models, including one run by Stoic, an Israeli company targeting audiences in multiple countries; Stoic’s activity was also noted in Meta’s recent threat report. While these operations tried to boost engagement using AI, they had little impact, according to OpenAI’s report. OpenAI has faced concerns that its tools could be used to influence politics and conduct cyberattacks since it launched DALL-E and ChatGPT. The report details how state and private actors worldwide attempted to use OpenAI products for influence campaigns, rates the impact of those campaigns as low, and notes that OpenAI disrupted them and blocked their access to its tools. It also highlights trends such as using AI to generate content and to fake engagement. Stoic, described in the report as a “for-hire threat actor,” promoted various agendas across different countries: its campaigns focused on topics such as Gaza and the Indian elections, and on criticizing various organizations and governments. Despite these efforts, Stoic’s activities drew minimal user engagement, according to OpenAI. Meta also removed accounts tied to influence campaigns targeting users in the U.S. and Canada.
Read more at Calcalist: OpenAI cracks down on Israeli “for-hire threat actor” Stoic