AI Chatbots Are ‘Infected’ With Russian Propaganda: Report

ChatGPT, Gemini and Copilot are all corrupted, according to a report by NewsGuard's Reality Check, a website that monitors misinformation across the internet.

A Moscow-based disinformation network has infiltrated Western AI tools worldwide, flooding them with pro-Kremlin propaganda, according to a new study.

The group, known as Pravda (Russian for “truth”), has managed to influence AI-generated responses rather than targeting human readers directly, marking a significant development in the global information war.

Pravda operates as a sophisticated propaganda laundering machine, aggregating information from Russian state media, pro-Kremlin influencers and government sources across a network of seemingly independent websites.

The new report, by NewsGuard, revealed that by saturating search results and web crawlers with false narratives, Pravda is distorting how large language models (LLMs) process and present information.

As a result of this flood of disinformation, Western AI systems, which are constantly fed new information from the open internet, ingested an estimated 3.6 million propaganda articles in 2024 alone, effectively "infecting" their outputs with Russian disinformation.

The study tested 10 prominent chatbots, including OpenAI's ChatGPT-4o, Google's Gemini and Microsoft's Copilot, using 15 false narratives propagated by 150 Pravda-affiliated websites.

This confirmed a prior report by the American Sunlight Project (ASP), which warned of Pravda's deliberate "LLM grooming" strategy to manipulate AI models.

NewsGuard identified 207 provably false claims spread by the network, including fabricated stories about a U.S. bioweapons lab in Ukraine and false allegations against Ukrainian President Volodymyr Zelensky.

NewsGuard's audit revealed that all 10 tested chatbots repeated disinformation from the Pravda network, with seven directly citing Pravda articles as sources.

In total, 56 out of 450 chatbot responses included links to Pravda's false claims, with chatbots citing 92 different articles.

This strategy of remotely corrupting Western AI systems poses a significant challenge for companies attempting to maintain accuracy and neutrality in chatbot outputs.

Pravda is also known as Portal Kombat. The network launched in April 2022, following Russia's invasion of Ukraine, and has since expanded to target 49 countries across numerous languages and 150 domains.

Viginum, a French government agency, identified the network in February 2024, linking its operation to TigerWeb, an IT company based in Russian-occupied Crimea.

Signal President Warns Agentic AI Poses Privacy And Security Risks

In related AI-harms news, Signal president Meredith Whittaker has delivered a warning about the burgeoning field of agentic AI at the SXSW conference, labelling it a potential threat to user privacy and security.

Whittaker likened the concept of AI agents, which can perform tasks on users’ behalf, to "putting your brain in a jar," warning of inherent vulnerabilities.

Whittaker's concerns stemmed from the expansive access AI agents require to function effectively.

She described a scenario in which an agent books concert tickets: to complete that one simple task, it would need access to web browsers, credit card information, calendars and messaging applications.

"So, we can just put our brain in a jar because the thing is doing that and we don't have to touch it, right?" Whittaker said.

A core concern for Whittaker, particularly as the head of a privacy-focused messaging app, is the impact on encrypted communications.

She explained that integrating agentic AI with messaging apps like Signal would compromise user privacy, as the agent would need to access and process message data to perform its tasks.

Her comments built upon earlier remarks during the panel, where she criticized the AI industry's reliance on a "surveillance model" characterized by mass data collection. She urged a critical examination of the trade-offs involved in adopting these technologies, advocating for a more cautious approach to their development and deployment.

This article originally appeared on our sister site Computing.