The future of large-scale AI models in Europe is uncertain. Data protection authorities and activists are increasing the pressure on providers such as OpenAI, while the EU is pushing for clear regulations.
The debate surrounding large AI models trained on massive amounts of data is intensifying. The European Data Protection Board (EDPB) has presented new guidelines on the compatibility of AI with the General Data Protection Regulation (GDPR). The focus is on models such as GPT, Gemini and Claude, whose training methods have drawn heavy criticism under data protection law. Particularly controversial: bans on such systems remain on the table.
The EDPB guidelines include a three-step test to assess the legitimacy of AI systems. However, this approach leaves room for interpretation and creates uncertainty among developers and companies. Data protection activists such as the organization Noyb, founded by Max Schrems, accuse AI providers of systematically violating the GDPR.
The Italian data protection authority Garante has already taken action: it temporarily blocked ChatGPT because the storage of data and its use for training purposes were classified as non-transparent and unlawful. The authority could now re-examine the case in light of the EDPB requirements. The French data protection authority CNIL is also working on implementing the recommendations and is focusing on web scraping, the mass harvesting of data from publicly accessible sources.
In Germany, the data protection authorities of Baden-Württemberg and Rhineland-Palatinate consider the EDPB guidelines an important step. However, the statement does not offer a definitive assessment of existing AI models, but rather "guidelines for individual case reviews". Federal Data Protection Commissioner Andreas Hartl believes politicians have a duty to create clear rules for the processing of training data.
The German Digital Industry Association (BVDW) is disappointed: in its view, the EDPB guidelines are vague and create more legal uncertainty than clarity. Mark Zuckerberg, CEO of Meta, sharply criticizes the slow development of the European guidelines; his teams are increasingly rolling out innovations outside the EU.
While the EU is striving for a responsible approach to AI, the future of large models remains unclear. The balancing act between data protection and technological progress could be decisive for the EU's competitiveness in the AI market.