Wednesday 22 May 2024
Stiftung Datenschutz (DE)


AI needs data. However, AI often does not need that data to be personal in order to be trained sufficiently. The compliant use of training sets, especially for large language models, therefore requires clear guidance on the anonymization of data. In addition to technical specifications, legal clarity is also required. A general practical guide to anonymizing data has already been created by the German Foundation for Data Protection (Stiftung Datenschutz). Official standards, however, are still lacking, and they are urgently needed for all parts of the European data strategy (Data Act, Data Governance Act, upcoming AI Act). New guidelines from data protection supervisory authorities are in the works and eagerly awaited. Meanwhile, case law remains dynamic, and another judgment by the CJEU is imminent. The panel will discuss the future of anonymization: where it should go and what practitioners can expect.

  • How can data be anonymized sufficiently?
  • How much personal data do AI training sets actually need?
  • How is the concept of anonymization changing in the EU?
  • What can be expected from the CJEU and DPAs with regard to the notion of anonymization?
