EU Democracy Shield Fails to Audit Propaganda in Multilingual AI Models
Brussels, Tuesday 17 February 2026
A regulatory blind spot risks a digital ‘Iron Curtain’, as data reveals Yandex endorsed Kremlin propaganda in 86% of Russian-language responses while English outputs remained accurate.
A Digital Iron Curtain
As of Tuesday, 17 February 2026, Brussels policymakers are confronting a severe oversight in the European Union’s digital defence strategy. The European Democracy Shield, a strategy published in November 2025 to counter foreign information manipulation, appears to have a significant blind spot: it does not systematically audit the content AI models deliver to citizens in different languages [1]. New data suggests the emergence of a language-based ‘Iron Curtain’, where Russian-speaking Europeans receive fundamentally different narratives regarding the war in Ukraine compared to their English-speaking neighbours [1]. Testing of six major AI models revealed that Yandex’s Alice endorsed Kremlin propaganda in 86% of its Russian-language responses, while simultaneously refusing to answer 86% of questions posed in English [1]. This disparity highlights a sophisticated information weapon that current regulations, including the EU AI Act and Digital Services Act (DSA), are failing to address [1].
Western Models and Regulatory Gaps
While Western models such as ChatGPT, Claude, Gemini, and Grok demonstrated significantly higher reliability with 86-95% accuracy, they are not immune to issues, practising ‘false balance’ in 5-19% of cases [1]. More concerning for regulators is the performance of the Chinese model DeepSeek, which utilised Kremlin terminology in 29% of its Russian-language responses, despite providing accurate information in English and Ukrainian [1]. Experts warn that geographic AI restrictions by Western companies could create market vacuums in countries like Belarus, ceding cognitive ground to systems designed to indoctrinate [1]. Currently, no systematic mechanism exists within the EU framework to monitor these geopolitical outputs, leaving a gap that allows language-conditioned distortion to persist unchecked [1].
Transatlantic Cooperation on Hybrid Threats
Amidst these revelations, a delegation from the European Parliament’s Special Committee on the European Democracy Shield is currently in the United States to address these very vulnerabilities. From 16 February to 18 February 2026, MEPs are holding discussions in Washington D.C. and New York with the US Department of Justice, Congress, and the FBI [3][4]. The agenda focuses on hybrid threats, including deepfakes, cyberattacks, and foreign information manipulation—areas central to the committee’s mandate to strengthen EU resilience [3]. This visit underscores the growing transatlantic concern that authoritarian actors are exploiting digital platforms to destabilise democratic institutions, a challenge that requires multilateral policy alignment [3].
The National Security Loophole
Domestically, the enforcement of AI accountability is further complicated by national security exemptions found in Article 2 of the EU AI Act [2]. Legal experts argue that broad interpretations of ‘national security’ are allowing Member States to bypass fundamental rights and evade regulatory scrutiny [2]. For instance, France has utilised AI for counterterrorism and surveillance, including the deployment of algorithmic monitoring during the 2024 Paris Olympics and the adoption of a Foreign Interference Law in 2024 [2]. Similarly, earlier this year, Hungary deployed facial recognition technology against protesters after criminalising participation in Pride events, prompting civil society organisations to urge the European Commission to launch infringement proceedings [2]. The Court of Justice of the European Union (CJEU) has stipulated that national security threats must be genuine, present, or foreseeable to justify such measures, yet the definition remains a contested battleground for digital rights [2].
Proposals for a Robust Audit
To close the audit gap and dismantle the digital Iron Curtain, policy experts have outlined specific proposals. One key recommendation is to grant the European External Action Service’s Strategic Communications division a mandate and resources to conduct continuous, multilingual monitoring of AI systems [1]. Additionally, the Commission is urged to work with developers to establish transparent benchmarks for conflict-related prompts, ensuring models can distinguish between genuinely contested questions and established facts [1]. Without these measures, the EU risks allowing an information weapon to operate largely unfettered within its digital borders [1].