Meta Faces Collective Action Over Unauthorized Data Use for AI Training

Rotterdam, Monday 23 February 2026
Seeking compensation of up to €7,000 per user, a Dutch foundation alleges Meta unlawfully harvested personal data to train AI models, significantly compromising user privacy.

On Monday, 23 February 2026, the Foundation for Market Information Research (SOMI) formally filed a second collective representative action in Germany, marking a significant escalation in the continent’s privacy battles [1]. This legal manoeuvre runs parallel to the European Commission’s recent threats of [interim antitrust measures][2], creating a simultaneous pressure campaign of regulatory and civil challenges against the tech giant. While Brussels focuses on market competition, SOMI’s litigation targets the engine of Meta’s future growth: the vast, often opaque datasets used to train its artificial intelligence systems [1].

Financial Stakes and Data Scope

The lawsuit demands substantial reparations for what it classifies as the ‘illegal use’ of personal data. SOMI is seeking damages ranging from €1,000 to €7,000 per person for German consumers, alleging that Meta has been harvesting information from Facebook and Instagram users—as well as non-users—since 27 May 2025 to develop its AI models [1]. This data dragnet reportedly feeds the Llama language models, the ‘Meta AI’ social network, and the Generative Ads Model (GEM) [1]. Crucially, the foundation argues that Meta fails to verify whether the processed data was voluntarily shared or if it involves third parties who never granted consent, a direct challenge to the company’s compliance with the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA) [1].

Contradictions in Privacy Practices

A core component of the claim highlights discrepancies between Meta’s public stance and its internal operations. While the company acknowledges using ‘publicly available content’ from adult users, SOMI points to Meta’s own Privacy Centre as evidence that information from minors and unregistered third parties is also being processed [1]. This follows a procedural skirmish in June 2025, where a preliminary injunction filed by SOMI was rejected on the grounds that it was submitted after the relevant deadline [1]. Now represented by Spirit Legal, the foundation aims to rectify that setback by securing compensation for the alleged violation of digital rights [1].

Systemic Risks to Minors

Beyond data harvesting, the lawsuit articulates severe concerns regarding the safety of Meta’s AI-driven design. SOMI warns that the company’s chatbots pose specific addiction risks to children and teenagers, noting that internal guidelines are insufficient to prevent these virtual agents from engaging in inappropriate, sexual, or racist conversations [1]. This focus on youth safety echoes a parallel legal battle in the United States. On 18 February 2026, Meta CEO Mark Zuckerberg testified in a Los Angeles court regarding allegations that social media products are designed to deepen depression and suicidal thoughts in young users [3].

Global Scrutiny on Tech Giants

The atmosphere surrounding these legal proceedings remains tense. During the Los Angeles testimony, the presiding judge banned the use of recording devices in the courtroom after members of Zuckerberg’s entourage were observed wearing Meta’s Ray-Ban smart glasses [3]. This incident underscores the pervasive nature of the company’s hardware and data collection ambitions, which are now under fire globally. In Europe, consumers can join the pushback by registering for the German collective lawsuit free of charge via the Federal Office for Justice, or by supporting SOMI directly [1].

Sources

  1. www.emerce.nl
  2. siliconpolder.nl
  3. nationaltoday.com

Data Privacy · AI Regulation