Russian, Chinese, Iranian, and Israeli groups use OpenAI's tech for covert propaganda campaigns.

From Washington Post: 2024-05-30 14:52:58

OpenAI discovered groups from Russia, China, Iran, and Israel using its technology to influence global political discourse, deploying generative AI in covert propaganda campaigns. While these operations had minimal impact, the report raises concerns about the misuse of AI in future influence operations as the 2024 U.S. presidential election approaches.

Various groups, including a Russian entity dubbed “Bad Grammar,” used OpenAI’s technology to write posts, translate them, and automate social media postings. Despite their limited reach and small followings, these propagandists were able to generate text at higher volume and with fewer errors, showcasing the potential of AI in misinformation campaigns.

As advances in AI enable the creation of realistic deepfakes, researchers warn of the increasing challenge in detecting and responding to false information and covert influence online. Companies like OpenAI and Google are working on technology to identify deepfakes made with their tools, but concerns remain about the effectiveness of such detection methods.

OpenAI’s report outlined how these groups employed the company’s technology in their influence operations, demonstrating the diverse applications of AI in generating and disseminating propaganda. From social media research and multilingual content creation to automated posting, these groups leveraged AI tools to push their narratives across different platforms.

Meta exposed an Israeli political campaign firm, Stoic, for using OpenAI to generate pro-Israel content targeting audiences in Canada, the US, and Israel during the Gaza conflict. Stoic’s operation involved creating fake accounts that posed as American college students and other identities to support Israeli military actions, with limited success in attracting genuine followers.

Researchers anticipate that AI chatbots could be used to engage individuals in detailed conversations and tailor messages based on user data, potentially amplifying the impact of influence operations. While current AI applications in propaganda represent an evolutionary step, experts caution that more sophisticated uses of AI could emerge, posing new challenges in countering misinformation.

Read more at Washington Post: OpenAI finds Russian, Chinese propaganda campaigns used its tech