Disinformation Report by OpenAI

A 39-page report published by OpenAI reveals how artificial intelligence technologies are being used by countries at war and actors in regional tensions.

Newstimehub

31 May, 2024

OpenAI has published its first report on how its AI tools are being used in covert influence operations. According to the 39-page report, actors in at least four countries are using AI technologies to spread disinformation and manipulate public opinion.

The report names Russia, China, Iran and Israel as the origins of these operations, which use OpenAI's models to generate propaganda content and publish it on social media platforms. OpenAI says that over the past three months it identified and banned accounts tied to five covert influence operations run by state and private actors.

The Russian accounts produced and published content critical of the US, Ukraine and several Baltic states. The Chinese operation created texts in English, Chinese, Japanese and Korean and posted them on platforms such as Twitter and Medium. Iranian actors generated articles attacking the US and Israel and translated them into English and French.

Stoic, an Israeli political firm, operated a network of fake social media accounts that produced a range of content, including posts accusing pro-Palestinian student protests in the US of anti-Semitism.

The report also emphasizes that generative AI improves certain aspects of content production for disinformation campaigns, but it is far from the only propaganda tool. It lets malicious actors scale up their output, yet the campaigns identified have not achieved significant reach or impact.