Academic
Business
Policy
IAPP (International)
Trevor Hughes (moderator, IAPP, United States), Aura Salla (European Parliament, Europe), Thomas Le Goff (Telecom Paris, Institut Polytechnique de Paris, France), Sylvie de Oliveira (L'Oréal, France), Gaia Marcus (Ada Lovelace Institute, United Kingdom), Eduardo Ustaran (Hogan Lovells, United Kingdom)
Regulation of the online world is increasing across the globe, creating a multilayered legal and regulatory landscape against a backdrop of rapid technology deployment. Organizations – policymakers, regulators, enterprises and civil society alike – face the unprecedented challenge of rethinking digital governance. This panel will analyse this regulatory intersectionality and its implications for organizational governance and digital risk management for stakeholders across the board.
Academic
Business
Policy
Utrecht University (Netherlands)
Mirko Tobias Schäfer (moderator, University of Helsinki, Finland), Angela Müller (AlgorithmWatch CH, Switzerland), Arnika Zinke (European Parliament, Europe), Katja Mayer (University Vienna, Austria), Rob Heyman (Vrije Universiteit Brussel, Belgium)
As the EU’s Artificial Intelligence Act (AI Act) approaches implementation, we discuss how this landmark legislation can be effectively upheld in practice and through enforcement. This panel will explore the critical steps needed to turn the AI Act from a legislative framework into a practical tool for governing AI use across Europe. We will focus on practical solutions and key mechanisms for ensuring compliance: compliance practices, the role of oversight bodies and supervisory authorities in monitoring AI systems, the contributions of NGOs and civil society in holding organizations accountable, and the challenges policymakers face in translating regulation into actionable enforcement on the ground. Arguing against the notion of trickle-down policy, the panel will highlight the collective efforts necessary to transform the AI Act into an impactful and enforceable reality.
Academic
Policy
University of Luxembourg (Luxembourg)
Niovi Vavoula (moderator, University of Luxembourg, Luxembourg), Elaine Fahey (City St George's, University of London, United Kingdom), Marco Almada (University of Luxembourg, Luxembourg), Suzanne Nusselder (Tilburg University, Netherlands), Laura Brodahl (Wilson Sonsini Goodrich & Rosati, Belgium)
In recent years, cyber policy has progressively emerged as a distinct policy field at EU level, as exemplified by the various legal instruments adopted and under negotiation. These include the revision of the NIS Directive, as well as the Cybersecurity Act, the Cyber Resilience Act and the Cyber Solidarity Act. In addition, several other EU legal instruments, such as DORA and the AI Act, also include provisions on cybersecurity. This panel will address the evolution of EU cybersecurity law in recent years and explore questions regarding its coherence, its benefits and the challenges of its implementation.
Academic
Business
Policy
AI Collaborative (International)
Fanny Hidvegi (moderator, AI Collaborative, International), Shazeda Ahmed (UCLA, United States), Gerald Hopster (Autoriteit Persoonsgegevens (Dutch DPA), Netherlands), Roel Dobbe (TU Delft, Netherlands), Iverna McGowan (Office of the United Nations High Commissioner for Human Rights, International)
The discussion about AI regulation is often reduced to simple dichotomies, but the reality is far more complex. While prominent voices warn that any regulation will kill innovation, there is also huge enthusiasm across the globe to regulate advanced AI systems. Within the pro-regulation side, however, there is an absolute cacophony of perspectives and ideologies, with very different ideas of what precisely to regulate, and how. With speakers from government, civil society and academia, this panel will explore four prominent concepts within the global debate on AI governance: the role of national ‘AI Safety’ authorities; the need for human rights standards; the role of effective altruism and related ideologies; and the role of infrastructure and industrial policy.
Academic
Business
Policy
University of Turin, Law Department (Italy)
Jacopo Ciani Sciolla (moderator, Law Department, University of Turin, Italy), Bart van der Sloot (Tilburg University, Netherlands), Raphaële Xenidis (Sciences Po Law School, France), Monica Senor (Garante per la protezione dei dati personali, Italy), Stefaan Verhulst (The Data Tank, Belgium)
EU anti-discrimination law was developed before the growing impact of the latest generation of AI systems and models, which can perpetuate existing biases or create new ones, amplifying systemic forms of discrimination in crucial areas such as access to services, employment, and education. General purpose artificial intelligence (GPAI), as defined and regulated in the AI Act, raises normative and epistemological challenges that must be examined in connection with principles and provisions of the GDPR. The panel will bring together experts from academia, institutions, and civil society to discuss regulatory, technical, and cultural strategies to safeguard responsible and inclusive uses of GPAI that respect the fundamental right to personal data protection.
5Rights Foundation (Belgium)
Leanda Barrington-Leach (5Rights Foundation, Europe), Monika Milanovic (European Commission (AI Office), Europe), Natalia Giorgi (European Trade Union Confederation (ETUC), Europe)
Children are early adopters of technologies, including products and services that use or embed AI. Yet their needs, rights and views remain largely overlooked in public, policy and technical debates. Together with leading experts, 5Rights developed the Children and AI Design Code to offer a practical and actionable framework for regulators and innovators. Building on the Code and using concrete examples, the workshop will offer practical insights into how children are impacted by AI systems and how innovation can be child-rights respecting. A facilitated discussion will also be held on the possibility of establishing concrete technical guidance under the AI Act for the implementation and enforcement of robust requirements for children.
Academic
Business
Policy
Council of Europe (Europe)
Peter Kimpian (moderator, Council of Europe, Europe), Anamarija Mladinić (AZOP, Croatian DPA, member of the Bureau of Convention 108, Croatia), Marcello Ienca (TUM School of Medicine and Health, Germany), Murielle Popa-Fabre (Responsible AI Policies and Governance, France), Cathal McDermott (Microsoft, International), Emma Redmond (Open AI, Ireland)
In a period of rapid technological evolution, many ask how to uphold and secure human rights and fundamental freedoms as defined by international instruments — notably the right to privacy. Data processing techniques and technologies such as big data and profiling already seem almost a thing of the past; today we face technologies such as neurotechnology and Large Language Models that elevate the processing of personal data to a complexity never seen before. And as the complexity grows, so does the risk of impact on individuals' private lives through the processing of personal data by these technologies and applications. The panel will look into the current challenges international organisations face when elaborating standards in these fields and into solutions to overcome them. It will also delve into whom these international standards are useful for, and how.
Academic
Business
Policy
Article 19 (International)
Corinne Cath (moderator, Article 19, International), Kai Zenner (European Parliament, Europe), Seda Gürses (Delft University of Technology, Netherlands), Maria-Luisa Stasi (Article 19, United Kingdom), Maria Donde (Coimisiún na Meán, Ireland)
Amidst much fanfare on the need for a ‘competitive’ Europe, geo-political pressures and a clear intention to pursue industrial policy & security approaches, we are concerned that regulating for the ‘common good’ and respect for human rights have been deprioritised. The AI Liability Directive will be withdrawn, and with the rise of general-purpose AI (GPAI) technologies, the EU has developed regulatory responses (e.g. the voluntary AI Code of Practice) which have been criticised as little more than window-dressing with negligible input from civil society. There is a growing risk that meaningful guardrails will be sacrificed in favour of approaches that maintain the status quo power imbalances between technology companies and people affected by their products and services. This has implications for the economy, democracy, and society, which go well beyond the industrial and economic policy aims championed by the Draghi Report. This panel of experts will consider the rapidly changing environment and what a change in regulatory approach towards AI could mean.
Academic
Business
Policy
CEU San Pablo University (Spain), South EU Google Data Governance Chair
Georgios Yannopoulos (moderator, National and Kapodistrian University of Athens, Greece), Vincenzo Zeno-Zencovich (Roma Tre University, Italy), Maria da Graça Canto (Nova University of Lisbon, Portugal), José Luis Piñar (CEU San Pablo University, Spain), Maria Biliri (Vodafone, Greece)
The use of AI-based techniques in the context of databases offers many opportunities for different actors. The question we face with the use of AI, and in particular generative AI, is whether its use always complies with legal and ethical requirements and whether these practices can benefit everyone. Various technical measures and safeguards can minimise the risk of violating the right to the protection of personal data and reinforce the commitment to transparency, also with regard to web scraping techniques. Responsible technological innovation can provide useful and necessary answers and enable a very favourable scenario for the years to come. The recent EDPB Opinion on certain data protection aspects related to the processing of personal data in the context of AI models provides important elements. An analysis is essential to strike a balance between the different rights at stake.
Academic
Policy
Gesellschaft für Freiheitsrechte (Germany)
Luzie Neyenhuys (moderator, Gesellschaft für Freiheitsrechte (GFF) - Lawyer at Centre for User Rights, Germany), Sven Herpig (interface - tech analysis and policy ideas for Europe, Germany), Sophie in ’t Veld (Oxford Martin AI Governance Initiative, University of Oxford, United Kingdom), Lori Roussey (Data Rights, France), Anna Buchta (European Data Protection Supervisor, Europe)
As spyware continues to undermine privacy, democracy, and human rights across Europe, unpatched vulnerabilities are the silent enablers of these threats. This panel brings together legal experts, policymakers, and technologists to explore how robust vulnerability management can shield our digital infrastructure. We’ll discuss the urgent need for laws mandating swift reporting and remediation of software vulnerabilities. By addressing these legal gaps, we can prevent spyware exploitation at its source. Join us to uncover how strategic litigation, policy innovation, and civil society collaboration can drive systemic change. Together, we can build a resilient digital ecosystem that protects citizens, journalists, and activists from surveillance abuses and sets a global precedent for safeguarding fundamental rights in the digital age. We must close the backdoor on spyware—once and for all.
Academic
Business
Policy
Humboldt Institut für Internet und Gesellschaft (HIIG) (Germany)
Anja Wyrobek (moderator, European Parliament, Europe), Christina Michelakaki (OECD, International), Felix Mikolasch (noyb, Austria), Max von Grafenstein (Humboldt Institut for Internet and Society / Law & Innovation, Germany), Malte Beyer-Katzenberger (European Commission, Europe)
The risks of personalised advertising and the problem of ineffective consent in practice have long been recognised by regulators, studied by scientists and fought by privacy activists. With the Fitness Check of EU consumer law on digital fairness, a ban on personalised advertising is being discussed again. However, is a ban necessary to effectively contain the risks, or can consent agents fulfil this function just as effectively while better reflecting the different privacy attitudes of EU citizens? Which solutions are currently available? What methods can be used to empirically test their effectiveness and what are the latest results of such studies? What implications do these solutions have for the online advertising market? And what legal measures are still needed? The panel will provide an overview of current developments, the current state of research and possible practical and legislative development paths.
Privacy Salon
In this discussion titled "A bit, a prompt," we will focus on the eponymous exhibition by artist aaajiao, held in 2024 at SETAREH Gallery in Berlin. The exhibition, spanning video, painting, and installation, explores the internet as a direct manifestation of computational power—how algorithmic mechanisms rooted in the attention economy and ideological tensions gradually alienate our behavior, turning everyday life into flickering, fragmented bits of information. Within this system, aaajiao seeks the void (gaps in the internet)—fluid spaces not yet fully occupied by systemic power, capable of soothing our wounds and dismantling the structures shaped by distraction and hatred.
Pels Rijcken Advocaten (Netherlands)
Lars Groeneveld (Pels Rijcken Advocaten, Netherlands), Lizzy Samuels (Lemstra van der Korst, Netherlands)
The Netherlands has long been recognized for its distinctive approach to collective redress. In recent years, a significant reform of the Dutch collective redress framework has led to a marked increase in class action lawsuits concerning privacy violations. High-profile proceedings against tech giants such as Google, Meta, and TikTok illustrate the growing importance of private enforcement mechanisms under the GDPR. This workshop examines the rapidly evolving landscape of GDPR class actions in the Netherlands. We will begin with an overview of the legal framework governing these proceedings, focusing on the Wet Afwikkeling Massaschade in Collectieve Actie (WAMCA) and its implications for privacy-related claims. The discussion will highlight the unique procedural and substantive aspects that make the Netherlands an attractive jurisdiction for initiating collective GDPR claims. We will analyze recent case law, including landmark decisions by the Amsterdam District Court in actions against Meta and TikTok. Particular attention will be paid to the legal reasoning applied by the courts.
Privacy Studies Journal (International)
Beate Lindegaard (Privacy Studies Journal, International), Mette Birkedal Bruun (Privacy Studies Journal / Centre for Privacy Studies, Denmark)
Privacy Studies Journal emerged in 2021 with the ambition to probe facets of privacy broadly understood. We believe that a multidisciplinary approach reveals how notions of privacy affect everything from family constellations, via technology regulation, to systems of political power. Four years in, the journal has published a series of widely different articles, and we are still looking to incorporate new fields of knowledge. Join us for a dynamic session on notions of privacy in different fields, contexts and cultures. We will explore together how diverse approaches and experiences shed new light on privacy. We want to nourish a transdisciplinary privacy studies community around PSJ and welcome participants of all backgrounds and levels of experience. This workshop is the first of two; it is encouraged, but not required to participate in both.
SRIW e.V. (Germany)
Sarah Hamou (SRIW e.V., SCOPE Europe Monitoring, Germany), Fatimah Diouani (SCOPE Europe, Belgium)
As General Purpose AI (GPAI) models reshape industries, robust governance frameworks become critical. This interactive workshop will touch upon the development of the Code of Practice (CoP) for GPAI, examining its role in fostering ethical innovation while addressing compliance challenges. Participants will engage in discussions on balancing accountability with flexibility, the value of traditional codes of conduct, and alternative compliance pathways under the AI Act’s Recital 117. Through case studies and group activities, attendees will identify ways to mitigate barriers for SMEs, enhance transparency, and develop practical self-regulation strategies. Key subjects to be addressed: challenges startups and SMEs face in CoP adherence and potential solutions; how traditional codes of conduct can address CoP limitations and create market value under Recital 117; lessons from GDPR’s co-regulatory frameworks to inform effective GPAI self-regulation.
Center for European Policy Studies (Belgium)
Cameran Ashraf (Center for European Policy Studies, Belgium)
Human rights investigators are increasingly using open-source, user-generated content scraped from online platforms as crucial digital evidence for human rights advocacy, accountability and analysis (witness accounts, footage of protests and state violence, etc.). This workshop will explore the intersection between the good-faith documentation of sensitive data on human rights violations and data protection and privacy standards. We aim to collaboratively answer the following questions: 1) What data protection standards should apply to gathering and processing such content and how? 2) How can data protection and privacy regulation negatively impact or impede digital evidence gathering and processing? 3) How can sensitive content be archived in challenging political contexts with restricted right to privacy and sophisticated surveillance? The workshop will provide the audience with a short brief on the topic, followed by an interactive exercise simulating both sides of the argument and a guided discussion to share and summarize findings.
Academic
Business
Policy
FRA
Alyson Kilpatrick (moderator, European Network of National Human Rights Institutions (ENNHRI); Northern Ireland Human Rights Commission, Europe), Kilian Gross (AI Office, Europe), Hanne Juncher (Council of Europe, Europe), Sirpa Rautio (European Union Agency for Fundamental Rights, Europe), Wojciech Wiewiórowski (European Data Protection Supervisor, Europe)
The use of AI presents both opportunities and challenges to fundamental rights, and its regulation will impact the future of Europe. Two important legal instruments adopted in 2024 – the EU AI Act and the Council of Europe Framework Convention on AI – include a variety of safeguards to address challenges ahead. Among other things, the AI Act foresees that providers and certain deployers have to identify, analyse and manage risks that high-risk AI systems pose to fundamental rights through risk management obligations and a fundamental rights impact assessment, respectively. The Council of Europe has developed a methodology to assess AI systems’ impact on human rights, democracy and the rule of law. There is much discussion about what these assessments should encompass. This panel will consider ways ahead and the relevance of fundamental rights assessments to protect European values when using AI.
Business
Policy
BEUC - The European Consumer Organisation (Europe)
Urs Buscke (moderator, BEUC - The European Consumer Organisation, Europe), Maria-Myrto Kanellopoulou (European Commission, Europe), Simona De Heer (European Parliament, Europe), Ilya Bruggeman (EuroCommerce, Europe), Frithjof Michaelsen (UFC Que Choisir, France)
Following the Fitness Check of EU consumer law completed in autumn 2024, the European Commission has announced that it will develop a Digital Fairness Act (DFA) to improve the protection of consumers online. The DFA is expected to address dark patterns, addictive design, influencer marketing, unfair data-driven personalisation and other harmful practices. In the Fitness Check report, the Commission points out that existing rules, such as the Digital Services Act (DSA) and the General Data Protection Regulation (GDPR), are not enough to ensure fairness online, and that EU consumer law therefore needs to be updated. The Commission also found that more clarity is needed on how EU consumer law applies to unfair commercial practices online, and that enforcement of EU consumer law is insufficient. Against this background the panel will debate how the EU could improve digital fairness.
Academic
Business
Policy
Centre for IT and IP Law (CiTIP) - KU Leuven (Belgium)
Irmak Erdoğan Peter (moderator, Centre for IT & IP Law (CiTiP), Belgium), Nikos Papadopoulos (Homo Digitalis, Greece), Larisa Munteanu (Erasmus School of Law/Protector PriVit, Netherlands), Eva King (Arbeiterkammer Vorarlberg, Austria), Aída Ponce Del Castillo (ETUI, Belgium)
This panel discusses the relationship between the GDPR and workers’ rights, with particular emphasis on the right to association. It examines the obligations trade unions bear in managing sensitive data, such as membership and political beliefs, and the impact of these obligations on unions’ daily operations and advocacy efforts. The discussions analyse whether these regulatory frameworks may inadvertently impair unions’ ability to advocate effectively on behalf of workers. Another key focus is the safeguards provided by the GDPR against employer access to union membership data, assessing the risks of misuse and potential worker surveillance, while examining whether access can ever be justified. The panel further explores the GDPR’s implications for union-led collective enforcement under Article 80. Ultimately, the discussions aim to propose a more nuanced regulatory approach—one that strikes a careful balance between safeguarding personal data and upholding essential workers’ rights.
Academic
Business
Policy
Karlsruhe Institute of Technology (Germany)
Thorsten Strufe (moderator, Karlsruhe Institute of Technology, Germany), Franziska Boehm (FIZ, Germany), Dominique Schroeder (TU Vienna, Austria), Kim Wuyts (Price Waterhouse Coopers, Belgium), Isabel Wagner (Uni Basel, Switzerland)
Publication and sharing of medical data is considered desirable for precision medicine and research, especially for rare diseases. The European Union and several member states are therefore planning extensive sharing of medical data with public and private research institutions, for example through the Electronic Health Record (ePA). However, medical data is obviously extremely privacy-sensitive. Modelling privacy threats will be necessary for the preparation of DPIAs, but it has proven complex and difficult to achieve objective results. This panel will therefore discuss experiences with, and approaches to, privacy threat modelling and impact assessment in the context of medical data sharing.
Academic
Business
Policy
TILT (Tilburg Institute for Law, Technology and Society) (Netherlands)
Suzanne Nusselder (moderator, TILT (Tilburg Institute for Law, Technology and Society), Netherlands), Lokke Moerel (Morrison & Foerster, Netherlands), Renate Verheijen (ENISA, Europe), Pier Giorgio Chiara (University of Bologna, School of Law and ALMA-AI Research Center, Italy), Niovi Vavoula (University of Luxembourg, Luxembourg)
Cybersecurity, a rapidly evolving regulatory domain, has seen an explosion of legislative developments in recent years (NIS2, DORA, CRA). Effective cooperation and information sharing are crucial for strengthening the overall level of cybersecurity and are increasingly mandated by law. Cybersecurity is a multistakeholder endeavour characterised by a complex institutional landscape in which information sharing occurs between various (decentralised) EU bodies, Member States, and an array of actors, such as CSIRTs, SOCs, public authorities, private actors, and occasionally law enforcement authorities. Importantly, such information sharing ought to be done in accordance with EU data protection law: inappropriate sharing and disclosure of cybersecurity information poses risks not only to cybersecurity itself but also to users. The panel will discuss the data protection challenges arising from cybersecurity information sharing and how they should be addressed to ensure compliance with the EU data protection framework.
LSTS, VUB (Belgium)
This book introduces the revolutionary use of AI in the field of cervical cancer detection. The book explores how advanced computer algorithms can analyse medical images and patient data to enhance early detection and accurate diagnosis of cervical cancer. The book starts by providing a comprehensive overview of cervical cancer, its risk factors, and the importance of early detection. It then delves into the fundamental concepts of artificial intelligence and its application in healthcare. Readers will gain a deeper understanding of how AI algorithms can "see" patterns in cervical cells and tissue, enabling the detection of abnormal cells and precancerous changes that may indicate the presence of cervical cancer. Drawing on the latest research and real-world case studies, the book showcases the various AI techniques used for cervical cancer screening, including the analysis of Pap smear and liquid-based cytology images.
European Center for Not-for-Profit Law / Brussels Privacy Hub / Danish Institute for Human Rights (Netherlands)
Gianclaudio Malgieri (Brussels Privacy Hub / Leiden University, Europe), Ioana Tuta (The Danish Institute for Human Rights, Denmark), Karolina Iwańska (European Center for Not-for-Profit Law, Netherlands)
Under the AI Act, deployers of high-risk AI systems must assess their impacts on fundamental rights, a crucial safeguard for preventing abuses associated with the deployment of risky systems. This workshop brings together civil society, academic experts, the AI Office, human rights and equality bodies, and EU governments to test how FRIAs can help deployers, especially in the public sector, to identify, prevent and mitigate fundamental rights risks. Based on existing experiences, a coalition involving CSOs, academics and human rights institutions will present a draft FRIA methodology designed to inform the upcoming AI Office template. Breakout groups will focus on practical aspects and stress-test the model: can it help deployers meaningfully - and easily - assess and measure the severity and likelihood of impacts on fundamental rights? By offering a practical, cross-stakeholder discussion, we want to build a shared understanding of the role of FRIAs, unpack common challenges, and devise ways to overcome them without compromising the purpose of the assessment.
Privacy Studies Journal (International)
Beate Lindegaard (Privacy Studies Journal, International)
So far, Privacy Studies Journal has published on, e.g., privacy legislation, urban culture, surveillance, and research ethics. What could future contributions look like? Join us for a dynamic session on notions of privacy in different fields, contexts and cultures. Together we will consider how combining diverse perspectives helps us overcome biases and blind spots, and investigate how our different backgrounds can complement each other in collaborations across academic fields, journalistic approaches and artistic practices. We aim for exchanges that could potentially grow into PSJ research articles or other publications, but this session is just as much about nourishing a multifaceted community within transdisciplinary privacy studies. Participants of all backgrounds and levels of experience are welcome. This workshop is the second of two; it is encouraged, but not required to participate in both.
Rathenau Instituut (Netherlands)
Wouter Nieuwenhuizen (Rathenau Instituut, Netherlands), Timo Nieuwenhuis (Rathenau Instituut, Netherlands)
How can we design content moderation that truly respects human rights? AI-driven moderation on large platforms increasingly threatens fundamental rights like freedom of expression and privacy. Rethinking platform design has never been more urgent. LGBTIQA+-people are often the first to experience these harms—and to develop creative ways to navigate and resist them. Their ingenuity offers valuable insights for reimagining moderation. In this workshop, we will explore innovative moderation approaches, drawing on lessons from LGBTIQA+-communities. We will present requirements and prototypes developed by the Rathenau Institute and design studio Idiotes in collaboration with these communities, using them as a starting point to co-create new models with participants. By uniting experts in policy, data protection, and human rights we strive to co-create requirements and prototypes of moderation systems that genuinely protect and empower users. Join us in building platforms that uphold fundamental rights—because better content moderation isn’t just possible, it’s essential.
Academic
Business
Policy
LSTS (Belgium)
Gloria GONZÁLEZ FUSTER (moderator, LSTS/VUB, Belgium), Asha Allen (Centre for Democracy and Technology Europe, International), Elisabetta Stringhi (Università degli Studi di Milano, Italy), Aleksandra Kuczerawy (KU Leuven, Belgium), Farieha Aziz (Bolo Bhi, Pakistan)
Gender-based violence remains a major problem online, with serious implications for our rights and freedoms. The Digital Services Act (DSA) was supposed to contribute to progress in this area, notably by obliging providers of very large online platforms and very large online search engines to consider and act upon the possible systemic risk of negative effects in relation to gender-based violence. How is this working in practice, if it is? This panel, exploring developments in Europe as well as other experiences and perspectives, will notably discuss these and related questions.
Academic
Business
Policy
Bocconi University (Italy)
Oreste Pollicino (moderator, Bocconi University, Italy), Lokke Moerel (Tilburg University; Morrison & Foerster, Netherlands), Yordanka Ivanova (European Commission, Europe), Neil Richards (Washington University School of Law, United States), Andrea Cosentini (Intesa Sanpaolo, Italy), Federica Paolucci (Bocconi University, Italy)
The EU AI Act mandates Fundamental Rights Impact Assessments (FRIA) for high-risk AI systems, establishing them as a critical safeguard for fundamental rights like privacy, non-discrimination, and human dignity. This panel examines the legal, practical, and ethical dimensions of FRIA, addressing key questions: Who conducts these assessments? How are risks identified and mitigated? And what ensures their substantive, not merely procedural, value? Using real-world examples, such as the systemic risks of bias in creditworthiness assessments and surveillance in AI applications like facial recognition, the panel will analyze the challenges of aligning FRIA with EU law while promoting accountability and transparency. The discussion will propose a comprehensive FRIA model, offering actionable strategies for implementation and compliance. By uniting experts from academia, industry, and institutions, the panel aims to ensure fundamental rights are embedded into the governance of AI systems.
CPDP (Belgium)
Manos Roussos (moderator, TILT (Tilburg Institute for Law, Technology and Society), Netherlands), Nicola Leschke (Paris Lodron Universität Salzburg, Austria), David Michels (Queen Mary University of London, United Kingdom), Thiago Guimaraes Moraes (University of Brasilia (UnB) and Vrije Universiteit Brussels (VUB), Belgium), Emmanouil Bougiakiotis (European University Institute, Italy)
The first academic session explores the evolution of data protection through the GDPR in a constantly evolving digital world, through the lens of four related academic papers. Papers to be presented:
Emanuela Podda, Università degli Studi di Milano (IT) & Nicola Leschke, Paris Lodron Universität Salzburg (AT) - Transitioning from Portability in the GDPR to Access in the Data Act: A Multidisciplinary Analysis of Data Access by Design
David Michels, Queen Mary University of London (UK) - Beyond Schrems: The Full Conflict Between US Government Access and the GDPR
Thiago Guimaraes Moraes, University of Brasilia and Vrije Universiteit Brussels (BR/BE) - From GDPR to the AI Act: Implementing Privacy by Design and Data Protection by Design in AI Systems
Emmanouil Bougiakiotis - Fool’s Gold: A Sceptical View of the GDPR as the Golden Privacy Standard and an Alternative Way Forward
Academic
Business
Policy
Inria (France)
Nataliia Bielova (moderator, Inria, France), Wojciech Jukowski (L’Oréal Deutschland, Germany), Sepideh Ghanavati (University of Maine, United States), Cristiana Santos (Utrecht University School of Law, Netherlands), Karel Kubicek (Unaffiliated, Switzerland)
Today’s websites and apps are complex applications built with numerous intermediary services. Such services help website owners to build their websites/apps, help them integrate third-party advertising services, or propose compliance and monetization solutions. While these services often facilitate online tracking and collection of personal data, they sometimes include manipulative practices aimed at website owners, yet it’s unclear how such services are covered by the ePrivacy Directive/GDPR or the DSA. As a result, intermediary services may deflect responsibility onto other actors, placing compliance obligations mostly on website owners. In this panel, we present the chains of dependencies between intermediary services down to website/app owners, analyze the current EU regulatory framework, identify challenges for website owners and discuss how research from Computer Science can help improve compliance.
Academic
Business
Policy
Health & Ageing Law Lab (HALL), VUB (Belgium)
Andrea Martani (moderator, University of Basel, Switzerland), Paul Quinn (Health & Ageing Law Lab (HALL), VUB, Belgium), Ruoxin Su (Health & Ageing Law Lab (HALL), VUB, Belgium), Johanna Rahnasto (Roschier, Attorneys Ltd., Finland), Róisín Costello (Trinity College Dublin, Ireland)
Genomics research drives advancements in precision medicine, clinical diagnostics, and treatments. However, using genetic data in research encounters complex legal, ethical, and regulatory challenges worldwide. The EU's data protection framework sets special rules for genetic data processing in research, while the recent initiative of the European Health Data Space (EHDS) intensifies debates over secondary use. Beyond privacy, genetic data holds unique legal attributes with societal, familial, and national implications that are underemphasized in EU law but evident in other jurisdictions. For example, China views genetic data as a critical national resource tied to bio-sovereignty and security, whereas the US adopts a market-driven approach to boost biotechnology and balance privacy, with growing biosecurity concerns amid shifting geopolitics. This panel aims to explore the changing nature of genomics and discuss genetic data governance in research through these global perspectives.
University of Lausanne (Switzerland)
With the promise of greater efficiency and effectiveness, public authorities have increasingly turned to algorithmic systems to regulate and govern society. In Algorithmic Rule By Law, Nathalie Smuha examines this reliance on algorithmic regulation and shows how it can erode the rule of law. Drawing on extensive research and examples, Smuha argues that outsourcing important administrative decisions to algorithmic systems undermines core principles of democracy. Smuha further demonstrates that this risk is far from hypothetical or one that can be confined to authoritarian regimes, as many of her examples are drawn from public authorities in liberal democracies that are already making use of algorithmic regulation. Focusing on the European Union, Smuha argues that the EU's digital agenda is misaligned with its aim to protect the rule of law. Novel and timely, this book should be read by anyone interested in the intersection of law, technology and government.
Uppsala University (Sweden)
Andreas Kotsios (KTH Royal Institute of Technology/Uppsala University, Sweden), Anna-Kaisa Kaila (KTH Royal Institute of Technology, Sweden), Katja de Vries (Uppsala University, Sweden), Geraint Wiggins (Vrije Universiteit Brussel, Belgium)
AI-generated illustrations. 3D models of cultural heritage. A song sung by a digital voice replica. An avatar imitating the gaming-style of a YouTuber. Who should reap the economic and cultural benefits of the datafied exploitation of creative works and performances? Currently the main public focus is on the opposing interests of large AI companies and artists struggling to protect their works from unlawful uses through intellectual property law. In this workshop we complicate this simple narrative by focusing on more actors, concerns, and legal frameworks. We discuss our Data4SCI ("An Empirical Perspective on Challenges and Opportunities of the European Data Act for SMEs in the Swedish Creative Industry") research project, engage with discussant prof. Wiggins and invite the audience to participate in a Legal Hackathon (15 min) and a roundtable discussion. We focus on three themes: (1) fairness (who should earn from datafied creativity?), (2) consent (for yet unknown future uses and implications), and (3) governance (what kind of intermediaries, standards, legal frameworks?).
d.pia.lab, LSTS, VUB (Belgium)
Alessandra Calvi (d.pia.lab, LSTS, VUB, Belgium), Anastasia Karagianni (LSTS, VUB, Belgium), Maciej Otmianowski (d.pia.lab, LSTS, VUB, Belgium)
Designed as an interactive and interdisciplinary discussion, this workshop invites researchers, activists, civil society organisations and practitioners interested in AI governance to brainstorm and propose concrete pathways to rethinking algorithm impact assessments (AIAs) as tools for meaningful public engagement and accountability. Reclaiming AIAs means challenging the paradigm that sees impact assessments as merely expert-driven exercises whereby the contribution of persons potentially affected by AI systems is marginalised. Instead, turning AIAs into collective exercises giving prominence to their inputs would arguably lead to fairer AI systems. Yet, as participation does not mean co-decision, such an approach may mask forms of participation washing. Therefore, it is necessary to find novel solutions for participatory AI governance. As researchers, we have critically analysed AIAs and tried to propose strategies to address these shortcomings. To kickstart the discussion, we will briefly share our insights. Then, the participants will have the floor.
Academic
Business
Policy
Microsoft (Belgium)
Lorelien Hoet (moderator, Microsoft, Belgium), Michael Aendenhof (Belgian Permanent Representation to the EU, Belgium), Alexandre Ferreira Gomes (Clingendael Institute, Netherlands), Valerie Höss (Commerzbank, Germany), Chiara Manfredini (Access Now, Italy), Andres Raieste (Nortal, Estonia)
In our global and digital world, marked by geopolitical uncertainty, Europe aims to stimulate digital infrastructure that is trusted, resilient and secure, while fostering competitiveness, innovation and tech sovereignty. Some stakeholders believe that achieving these goals requires a shift towards more localized digital infrastructure and fewer data transfers. Conversely, others argue that the key lies in expanding digital solutions to boost Europe's competitiveness, and that this cannot be done without international cooperation. In this panel, we aim to gather insights from selected experts and thought leaders on these diverse viewpoints. Specifically, we seek to explore how robust cybersecurity protection and resilience can go hand in hand with the EU's data protection framework.
Academic
Business
Policy
Georgia Institute of Technology (United States)
Kenneth Propp (moderator, Georgetown University Law Center, United States), Ignacio Gomez Navarro (E-Evidence and Cybercrime, European Commission, Europe), Elonnai Hickok (Global Network Initiative, International), DeBrae Kennedy-Mayo (Georgia Institute of Technology, United States), Jan Kralik (Cybercrime Division, Council of Europe, Europe)
The United Nations has completed work on a new multilateral Convention Against Cybercrime; many UN members are expected to sign in 2025. The Convention obliges State Parties to criminalize a range of cyber-dependent offenses and to assist each other in obtaining electronic evidence for criminal investigations and prosecutions. The UN Convention has many similarities to the Council of Europe (COE) Cybercrime Convention (Budapest Convention). Nonetheless, the new instrument has attracted strong opposition from human rights groups and technology companies. They believe the UN Convention lacks safeguards against abuse by authoritarian governments seeking to suppress free speech and dissent, and that it could undermine data protection guarantees. This panel will explore the UN Convention’s potential law enforcement benefits, its value in relation to the Budapest Convention, and the sufficiency of its protections against misuse.
Business
Policy
EDPS (Belgium)
Fanny Coudert (moderator, EDPS, Europe), Gaëtan Goldberg (CNIL, France), Milla Vidina (Equinet, Belgium), Sven Stevenson (Autoriteit Persoonsgegevens, Netherlands)
As AI systems are being deployed, public and private entities are confronted with the challenging task of complying with different, sometimes competing, legal frameworks. AI systems must be assessed for their multiple impacts on the rights and freedoms of individuals and must comply with the respective laws put into place. One of the most debated examples is the impact the use of AI systems has on the right to non-discrimination. In order to achieve this normative goal, several well-established bodies of legislation, such as data protection, non-discrimination law and the AI Act, play a role in reducing such impact, each contributing a piece of the puzzle. For Data Protection Authorities, this brings a new level of complexity: data protection must now be interpreted and implemented in the light of a multifaceted regulatory framework. From a regulatory perspective, this pushes regulators to build bridges between their different experiences in implementing these laws. This panel will delve into the challenges of supervising this complex regulatory environment.
Academic
Business
Policy
Interdisciplinary Center for Security Reliability and Trust (SnT) of the University of Luxembourg (Luxembourg)
Claudia Negri Ribalta (moderator, University of Luxembourg, Luxembourg), Arianna Rossi (Sant'Anna School of Advanced Studies, Italy), Vincent Toubiana (CNIL, France), Stefan Schauer (noyb, Austria), Trushant Mehta (Fair Patterns, France)
Automation is helping various actors of the data economy to streamline the large-scale detection of illegal deceptive practices online, gather evidence of wrongdoing, enforce actions, or propose remedies. This nascent field of research and practice raises questions concerning the quality of data used to train the tools, the reliability of their outputs, their usability, sustainability, and scalability, as well as other prerequisites that would instill trust in the use of such computing technologies.
Academic
Business
Policy
ALTI - VU Amsterdam (Netherlands)
Georgiana Mirza (moderator, ALTI - VU Amsterdam, Netherlands), Paola Cardozo Solano (ALTI - VU Amsterdam, Netherlands), Johan Keetelaar (Oxera, Netherlands), Anna Colom (The Data Tank, Spain), Alberto Di Felice (DigitalEurope, Italy)
As Europe becomes increasingly digitalised, interconnected, and data-driven, the pursuit of innovation is heralded as a means to enhance citizen welfare and strengthen global economic standing. Yet, in this landscape, fundamental rights can sometimes seem negotiable. This panel critically examines the transformative role of EU Common Data Spaces in shaping digital ecosystems and how this framework interacts with fundamental rights on the one hand, and competition law and industrial policy on the other. Beyond market dynamics, these discussions probe Europe’s core values. Grounded in the principles of fundamental rights, this session evaluates whether emerging EU data policies can support an equitable and innovative digital economy in which individual, corporate, and societal needs do not compete but co-exist.
Privacy Salon ()
In this discussion, we take artist YAO Qingmei’s work The Burrow—Monitor & Control (2022) as a point of departure to explore mechanisms of bodily governance under global surveillance systems.
Inspired by Franz Kafka’s novella The Burrow, the video depicts a female security guard deeply connected to security machines, situated in an underground surveillance room of a gated middle-class community in China. In this hidden digital prison, the controller experiences the outside world—its time, nature, and weather—only through LCD screens and cameras. Yet she herself is also under constant surveillance, reflecting how individuals are disciplined and self-disciplined under invisible regimes of panoptic control.
TechFreedom (International)
Benjamin Shultz (The American Sunlight Project, United States), Berin Szóka (TechFreedom, International), Giovanni De Gregorio (Católica Lisbon School of Law, Portugal), Ramsha Jahangir (Tech Policy Press, Netherlands)
The Trump administration has blasted European countries and the European Commission for “censoring” what Republicans call “free speech” and “hiding behind ugly Soviet-era words like misinformation and disinformation”. How did we get here? Is there any validity to these complaints? Are “European values” being unfairly caricatured? How should European policymakers respond? Is the Trump administration serious about leveraging European dependence on the US military to force changes to European regulation of digital services? Are European regulators, like US tech companies, already giving way? What do Republicans really want anyway? Join us for a lively, interactive discussion of these issues among a diverse panel from both sides of the Atlantic to explore the law, geopolitics, and practical consequences of this brewing feud for Internet users and the health of European democracies. Come share your darkest fears, your unanswered questions, and your practical advice.
Autoriteit Persoonsgegevens (Netherlands)
Ruth Ruskamp (Autoriteit Persoonsgegevens, Netherlands), Niels Kohnstam (Autoriteit Persoonsgegevens, Netherlands), Hannah Erkelens (Autoriteit Persoonsgegevens, Netherlands)
This workshop will start with a short introduction of Article 4 of the AI Act and aspects of AI literacy. Secondly, we touch upon our published framework as a kick-off and inspiration for the case study discussion. This is the starting point for group work on 3-5 case studies. Each table discusses a specific case and possible measures regarding a small/medium or large organisation and its approach to AI literacy. The case studies will be inspired by the Compilation of AI Literacy practices, published by the European Commission. After the group work, there will be a plenary discussion of the key takeaways from each group to highlight interesting ideas and inspiration. Furthermore, the participants will prioritize the most hands-on and effective measures to improve AI literacy in organisations. The workshop will end with an opportunity for discussion and questions.
Nexus Institut (Germany)
Volkan Sayman (nexus Institut für Kooperationsmanagement und interdisziplinäre Forschung e. V., Germany), Gesa Feldhusen (nexus Institut für Kooperationsmanagement und interdisziplinäre Forschung e. V., International), Daniel Guagnin (nexus Institut für Kooperationsmanagement und interdisziplinäre Forschung e. V., Germany)
Does consent information truly help users balance the risks and benefits of data processing? Our research raises more questions than answers. Conducting a set of qualitative interviews with a diverse population, we have developed three user personas: the Guarded, the Wary, and the Whatever. With the participants we will design new ways to communicate privacy issues in the digital sphere to these ideal-typical personas. The aim is to increase user engagement and promote informed decision-making. Objectives: 1. Discuss our empirical findings on privacy attitudes and further elaborate or differentiate the user personas we derived from our interviews. 2. Creatively think of new ways to communicate risks and benefits to the different types of users discussed and refined in step one.
Academic
Business
Policy
AlgorithmWatch (Germany)
Matthias Spielkamp (moderator, AlgorithmWatch, Germany), Jill McArdle (Beyond Fossil Fuels, Ireland), Fanny Hidvégi (AI Collaborative, Hungary), Claude Turmes (Independent, Luxembourg), Boris Ruf (AXA, France)
You’ve read the news: A Three Mile Island reactor will be restarted to power Microsoft's AI operations (this was the site of a nuclear meltdown in 1979), Google’s emissions climbed nearly 50% over the last five years due to AI energy demand, and the only thing impressive about Amazon’s claim that it uses green energy is the chutzpah with which it lies about its creative accounting. Still, if you think you have a good idea of generative AI’s resource demands, think again. The snippets above only present the tip of the iceberg. To show what’s at stake, we will not only present the latest evidence on energy demand for the entire value chain of AI, including hardware production – we will also explain how it threatens the transition to renewable energy and the stability of the grid, aka critical infrastructure.
Academic
Business
Policy
Panoptykon Foundation (Poland)
Katarzyna Szymielewicz (moderator, Panoptykon Foundation, Poland), Alissa Cooper (Knight-Georgetown Institute, United States), Ian Brown (Fundação Getulio Vargas Law School, Visiting Prof, Spain), Marc Faddoul (AI Forensics, France), Ulrik Lyngs (Oxford University, United Kingdom)
The social media ecosystem has been dominated by a handful of tech companies, shaping the flow of information and limiting innovation. The dangers have never been clearer. Their algorithms decide which content is sourced and indexed, which posts rank higher in newsfeeds, which topics will be trending or censored. This power should come with great responsibility. Meanwhile, very large online platforms stick to a reckless business model, optimising their recommender systems for short-term user engagement and advertiser value. This design choice has been criticised by civil society and independent researchers because engagement-based ranking disproportionately amplifies low-quality, misleading and sensational content. Can social media platforms deliver long-term user value and support pluralism? How to make design choices that affect our fundamental rights, civic discourse and public health more responsible? The panel will discuss strategies for transforming a closed social media ecosystem into an open market, benefiting consumers/citizens, ethical innovators and publishers.
Academic
Business
Policy
University of Bradford and CPDP Africa (International)
Tami Koroye (moderator, University of Bradford, United Kingdom), Karine Caunes (Centre for AI and Digital Humanism (Digihumanism), International), Yomi Ajibade (Insulet, Nigeria), Melody Musoni (ECDPM, Netherlands), Mercy King'ori (Future of Privacy Forum (FPF), Kenya)
There is a growing rise in the use of AI technologies across various sectors, which raises various ethical and regulatory concerns for the lawmaker, the users and the developers. This panel will examine the intersection of AI, ethics, security, governance, and the unique challenges and opportunities in the African context. Experts will explore how AI systems can be designed to align with African values, foster trust, and address ethical and regulatory gaps. The session will also examine geopolitical dynamics influencing AI development, particularly the competing roles of global powers and the implications for Africa’s digital sovereignty. Additionally, panelists will consider the need to address underlying inequalities, such as access to technology and the internet, to ensure inclusive AI development that drives innovation and sustainable growth across the continent.
Academic
Business
Policy
NGO Algorithm Audit (Netherlands)
Jurriaan Parie (moderator, NGO Algorithm Audit, Netherlands), Marie Beth van Egmond (Dutch DPA (Autoriteit Persoonsgegevens), Netherlands), Brent Mittelstadt (Oxford Internet Institute, United Kingdom), Carlos Mougan (European AI Office, Europe), Ylja Remmits (NGO Algorithm Audit, Netherlands)
Over the years, lessons have been learned from Dutch scandals involving risk profiling algorithms. Investigations conducted by consultants, academics and NGOs have contributed to a growing body of public knowledge from which best practices emerge. This panel explores the interplay between the qualitative principles of law and ethics and the quantitative methodologies of statistics and data analytics. Specifically, we shed light on how empirical approaches can help interpret and contextualize open legal norms under EU non-discrimination law and public administration law. Examples are drawn from a recent audit conducted in collaboration with the Dutch Executive Agency for Education (DUO), in which aggregated statistics on the migration background of 300,000+ students were analyzed. We discuss whether bias testing inevitably leads to the feared 'battle of numbers', or whether it can play a critical role in fostering meaningful democratic oversight of AI.
Academic
Business
Policy
Open Universiteit (Netherlands)
Andreas Hӓuselmann (moderator, Open Universiteit, Netherlands), Gloria González Fuster (Vrije Universiteit Brussel (VUB), Belgium), Kleanthi Sardeli (noyb, Austria), Stephanie Rossello (Open Universiteit / KU Leuven, Netherlands), Anouk Focquet (Faros, Belgium)
This panel intends to discuss an under-explored data subject right: the right to rectification of Article 16 GDPR. The focus lies on the role the right can have as an instrument of contestation of inadequate evaluations concerning an individual, using AI inferences as a case study. It will take the audience on a journey through the history of the right, from its application by national data protection authorities and courts to traditional pen-and-paper assessments to contemporary automated AI ones. The goal is to uncover the purpose of this right, clarify the meaning of 'accuracy' and 'completeness' and elucidate what 'rectification' entails in practical terms, especially in relation to subjective data. Against this background, the main question raised is: can the right to rectification be used to challenge wrong (AI) assessments concerning an individual and, if yes, how? The panel will discuss this question by means of provocative statements.
Frederik de Wilde is an artist whose practice spans visual art, media art, and philosophy. He investigates and works at the intersection of art, science, and technology. His oeuvre offers critical insights into technology and society, exploring the inaudible, the elusive, and the invisible in both digital and physical spaces.
With TARPIT, the art project he is developing as a net-artist-in-residence, he aims to investigate how web bots shamelessly scrape the internet—including our own websites—in search of relevant data. This raises important questions: For whom? For what purpose?
At CPDP.ai, he is primarily seeking expertise as a source of input for this open-source project. What ethical questions are at stake? They can emerge from all directions—for instance, from the art world or the field of cybersecurity.
The panel is conceived as a workshop and Q&A—not one where questions come only from the audience, but one where the artist will also pose questions to the CPDP.ai community.
University of Luxembourg (Luxembourg)
Vanessa Franssen (University of Liège, Belgium), Stanislaw Tosza (University of Luxembourg, Luxembourg)
Starting 18 August 2026, law enforcement and judicial authorities in the EU will gain the power to issue direct cross-border preservation and production orders regarding electronic communications data against service providers established or represented in another Member State. While the adoption of the e-Evidence Regulation was a major achievement, marking a milestone in cross-border cooperation, its implementation implies considerable technical preparations and legal adjustments. At EU level, the Commission shall adopt implementing acts to establish a secure IT system – a process concerning industry and Member States. States must also adopt national laws, define sanction mechanisms, and ensure appropriate remedies. Equally important, authorities require training to properly apply the new orders. The workshop aims to take stock of this complex implementation process, with representatives of all stakeholders (European/national authorities, industry, NGOs, academia), and to discuss constructive solutions for real-world problems in times when fundamental rights are under increasing pressure.
Featuring Federico Cappelletti (ECBA), Marie-Hélène Descamps (Belgian Ministry of Justice), Aisling Kelly (Microsoft), Marc van der Ham (Leiden University), Tania Schröter (European Commission).
Cybereco (Canada)
Jan Schallaboek (iRights Law, Germany), Claudia Roda (American University Paris, France), Jonathan Fox (INCITS, United States), Antonio Kung (Trialog, Europe)
A new ISO sub-committee has been established - ISO/IEC JTC 1/SC 44, Consumer protection in the field of privacy by design - to continue the work of the Project Committee (PC317) that published ISO 31700. Participants will take part in an interactive consultation/brainstorming activity to propose possible solutions to a problem: how do we shape the future of consumer protection in the world of AI, and what role can this new ISO sub-committee on privacy by design play? What do we need to know? What do we need to do? What are the main priorities? Former members of PC317, representing diverse perspectives (consumer advocacy, academic, technical, legal/regulatory), will facilitate a lively, energetic workshop. Using Robert Jungk's "Future Workshop" methodology, we will brainstorm with colorful Post-it notes on walls and subsequently cluster the inputs, offering an interactive space for exchanging diverse ideas for new areas and approaches to standards in the field. The outcomes will be used to inform the work of SC44 and its future projects.
Data Protection Scholars Network ()
Ready to test your data protection knowledge with fun yet challenging questions? Grab your phone and join us to compete for the "Ultimate CPDPub Quiz Champion" title. And wait, there is more - a surprise prize awaits the ultimate champion, along with esteemed international recognition!
Sponsored by Brasserie de la Senne.
Academic
Business
Policy
Bits of Freedom (Netherlands)
Evelyn Austin (moderator, Bits of Freedom, Netherlands), Caroline Sinders (Convocation Research, United Kingdom), Romayne Gad el Rab (Consultant Psychiatrist, at The Maudsley Hospital and Clinical Research Fellow to Professor Allan Young at the Institute of Psychiatry, Psychology & Neuroscience (IoPPN)., United Kingdom), Kim van Sparrentak (Member of Parliament of the EU, Europe), Maryant Fernández (BEUC, International)
The issue of addictive design has gained significant attention within the European Union. Recently, MEP Kim van Sparrentak highlighted this concern in an initiative report, and the European Commission announced its upcoming Digital Fairness Act, which will include provisions addressing addictive design. The challenge is clear: growing numbers of people, among them minors, are facing social media addiction. Yet the path to a solution remains complex. Unlike other addictive substances, social media itself is not inherently harmful. This raises a pressing question: how can we reshape social media design through legislation to make it less addictive, without compromising its positive aspects?
Academic
Business
Policy
Mozilla Foundation (Belgium)
Maximilian Gahntz (moderator, Mozilla Foundation, Germany), Julia Keseru (Independent, Hungary), Esme Harrington (AWO, United Kingdom), Martin Degeling (AI Forensics, Germany)
The value of independent scrutiny of generative AI systems, in the form of independent testing, red teaming, bug bounties, and other kinds of vulnerability discovery, is well understood. But the information asymmetry between AI providers and users is massive. And while large language models are feasting on the internet for training data, public interest research is starving for safe and reliable data, unsure of its legal protections should companies dislike the research. As the DSA launches its structured data access regime, the AI Act's Code of Practice on GPAI is raising important questions about what constitutes an appropriate third-party evaluator and what role safe harbours should play in AI safety and evaluation. But what should be the methods for encouraging public interest research? Who is a legitimate and independent third party? What are the tensions and trade-offs?
Business
Policy
interface - tech analysis and policy ideas for Europe (Germany)
Thorsten Wetzling (moderator, interface - tech analysis and policy ideas for Europe, Germany), Sharon Bradford Franklin (Independent, United States), Judith Lichtenberg (CTIVD - Review Committee on the Intelligence and Security Services, Netherlands), Corbinian Ruckerbauer (interface - tech analysis and policy ideas for Europe, Germany), Jan Ellermann (Europol, Europe)
Geolocation data and other sensitive information about individuals is for sale in epic quantities. Investigative reporting documents, for example, how frictionlessly one can obtain a near-live subscription to 3.6 billion geolocation records for less than 14,000 euros through a Berlin-based platform. This trade interferes with the rights of millions, and European security agencies are top clients for data brokers. Nascent regulatory efforts on sensitive commercially available data are driven primarily by valid national security concerns. For example, U.S. policymakers are advancing a framework to better protect bulk sensitive data of Americans from being sold to countries of concern. Are legislators also prepared to better protect individuals from unconstrained, unreasonable, arbitrary or disproportionate government use of purchased data? How can European democracies better address the enormous privacy challenges for which the GDPR has not been a cure?
Academic
Policy
FIZ Karlsruhe GmbH - Leibniz Institute for Information Infrastructure (Germany)
Franziska Boehm (moderator, FIZ Karlsruhe GmbH - Leibniz Institute for Information Infrastructure, Germany), Eleni Kosta (Tilburg University - Tilburg Institute for Law, Technology and Society (TILT), Netherlands), Teresa Quintel (European Parliament - Secretariat of the LIBE Committee, Europe), Juraj Sajfert (DG JUST - Data Protection Unit (C.3), Europe), Nora Ni Loideain (Institute of Advanced Legal Studies, University of London, United Kingdom)
The Law Enforcement Directive (LED) was adopted by the European Union (EU) in May 2016 under the shadow of the General Data Protection Regulation (GDPR). While the official legislative process for the LED started together with the negotiations for the GDPR, in reality negotiations on the LED only genuinely started during the second half of 2015. The LED has by no means achieved all its goals, but it has nevertheless paved the way towards a more coherent and comprehensive framework for the protection of personal data for law enforcement purposes at national level. In light of the ongoing development of AI-driven data processing, data protection is embedded in increasingly complex legal and ethical frameworks. This panel examines the challenges of the Law Enforcement Directive in the light of political developments, recent case law and the increasing use of AI in law enforcement.
Academic
Business
Policy
CArE project: Securing individuals’ human rights against technology-facilitated cyberviolence (Netherlands)
Irene Kamara (moderator, Tilburg Law School/TILT, Netherlands), Kim Barker (Lincoln Law School, United Kingdom), Catherine van de Heyning (University of Antwerp, Belgium), Maria Asensio Velasco (Council of Europe, International), Evin Incir (European Parliament, Europe)
Cyberviolence, the use of computer systems and technology more generally to cause, facilitate, or threaten violence against individuals, has many manifestations, ranging from cyber harassment and online stalking to intimate image abuse. While there is recognition that cyberviolence is a major problem, the age of offenders keeps dropping, which means that new generations are engaging in aspects of cyberviolence. AI is often used as a tool to facilitate cyberviolence, for example for the creation of synthetic images or the cloning of voices. Women and children are disproportionately victimised. In April 2024, the new EU Directive on combating violence against women was adopted, which includes aspects of cyberviolence. In November 2024, the CoE Lanzarote Committee issued a Declaration on protecting children against sexual exploitation and sexual abuse facilitated by emerging technologies.
Maastricht University (Netherlands), European Centre on Privacy and Cybersecurity (ECPC), Maastricht University (Netherlands), Digital Constitutionalist (Netherlands)
From Frankenstein to Snow Crash, science fiction has long served as a mirror for our deepest ambitions—and fears—about technology. But what happens when these speculative warnings are misread as roadmaps? In this session, we’ll examine how foundational sci-fi novels, often crafted as dystopian cautionary tales, have been reinterpreted by today’s tech elite as aspirational visions: Elon Musk styling himself after Iron Man, Jeff Bezos chasing Star Trek-inspired space dominion, and Mark Zuckerberg branding his virtual empire with a term lifted from a dystopia of disconnection.
We’ll explore why these misreadings persist—and what alternatives we might imagine. Emerging genres like solarpunk and degrowth fiction offer radically different visions of the future: collaborative, regenerative, and grounded in ecological wisdom. What can we learn from Becky Chambers’ A Psalm for the Wild-Built, where humans and robots find harmony in mutual respect and simplicity? Or from Ursula K. Le Guin’s The Dispossessed, which challenges us to rethink ownership, productivity, and the very structure of society?
Designed for storytellers, technologists, and policymakers alike, this interactive session invites you to consider the narratives shaping our future—and to co-create new ones. Join us as we reimagine speculative fiction not as escapism, but as a toolkit for envisioning just, sustainable, and humane alternatives.
European Institute of Public Administration (EIPA) (Netherlands)
Michaela Sullivan-Paul (The European Institute of Public Administration (EIPA), Netherlands), Florina Pop (The European Institute of Public Administration (EIPA), Netherlands)
With AI systems facing growing regulatory and public scrutiny, understanding and applying AI impact assessments is key to ensuring compliance with fundamental rights and data protection laws. This hands-on workshop introduces participants to existing AI risk assessments and their role in enhancing transparency, accountability, and enforcement. Participants will work in small groups to critically assess the scope, purpose, and practicality of AI impact assessments, such as UNESCO’s Ethical Impact Assessment, the Council of Europe’s HUDERIA, and national frameworks, like the Dutch Fundamental Rights and Algorithms Impact Assessment and the Catalan methodology for fundamental rights impact assessment. Using a structured evaluation method provided by the facilitators, groups will analyse these frameworks in relation to a real-world AI case study, identifying their strengths, limitations, and applicability. Participants will gain awareness of existing tools and develop a comparative understanding of how different frameworks address fundamental rights risks in AI.
Business
Policy
Google
Rob Van Eijk (moderator, International), Sarah Holland (Google, International), Christian Reimsbach-Kounatze (OECD, International), Valda Beizitere (DG Just, Europe)
The private sector is building compelling proof points for privacy enhancing technologies (PETs), showcasing their power to enable data-driven breakthroughs while safeguarding personal information. Yet, the rate of uptake lags behind expectations. How can we unlock the full potential of PETs and overcome the barriers to adoption? Join Google, leading European and international regulators and civil society experts to discover how governments can be the catalyst for widespread PETs adoption. We'll explore how strategic investments, promotion of interoperable standards, and implementation of smart regulatory incentives can build a future where dynamic innovation and robust privacy and data protection go hand in hand.
Academic
Business
Policy
5Rights Foundation (Belgium)
Leanda Barrington-Leach (moderator, 5Rights Foundation, Europe), Linn Høgåsen (Norwegian Consumer Council, Norway), Felix Mikolasch (noyb, Austria), Ali Hessami (Vega Systems, United Kingdom), Ioannis Koutsoumpinas (Ministry for Digital Governance, Greece)
Children are a priority in EU digital policy for 2024-2029. Beyond fake dichotomies between protection and privacy, a focus on children’s rights in the digital environment could drive institutions and tech companies to ensure better online experiences for minors - and everyone else. From fixing recommender systems and edtech, to reining in advertising, detrimental data practices and addictive design features, there is growing awareness that children must be protected against the risks and harms of the digital environment, as well as empowered to learn, play and participate. Robust protection of children’s data under the GDPR, ambitious implementation of the DSA, and reform of consumer law to redress digital asymmetries and tackle vulnerabilities with a Digital Fairness Act can show that regulating and innovating for a better internet is possible. By fixing children’s online experiences, the EU may show a way forward for all.
Academic
Business
Policy
Luminate (Europe)
Emmanuelle Debouverie (moderator, Luminate, Europe), Joris van Hoboken (DSA Observatory, Netherlands), Augustin Reyna (BEUC - Director General, Europe), Simone Ruf (Gesellschaft für Freiheitsrechte (GFF) - Lawyer at Centre for User Rights, Germany), Ursula Pachl (NOYB - Head of Collective Redress, Austria)
As new tech regulations start to be implemented and enforced, there has so far been relatively little focus on the role of private enforcement. Private enforcement, through individual litigation and collective action, can significantly enhance the protection of individual rights and the enforcement of platforms' obligations. It has the potential to complement public oversight by addressing enforcement gaps, while providing individuals and civil society with a means to seek redress for harms and influence regulatory agendas and policies from the ground up. Important knowledge gaps remain regarding the private enforcement of tech regulations. Drawing from their experience in competition law, DSA/DMA and GDPR enforcement, panellists will offer insights on legal and practical questions relating to private enforcement of new tech regulation.
Academic
Policy
Haifa Center for Law & Technology, University of Haifa (Israel)
Tal Zarsky (moderator, Faculty of Law, University of Haifa, Israel), Eldar Haber (Haifa Center for Law & Technology (HCLT), Israel), Nathalie Smuha (KU Leuven Faculty of Law, Belgium), Paula Cipierre (ada Learning GmbH, Germany), Courtney Bowman (Palantir Technologies, United States)
Generative AI is reshaping academia, influencing research, teaching, and governance. This panel examines the transformative impact of AI on scholarly writing, including its potential to empower non-native speakers and early-career academics, alongside concerns about authenticity and cognitive skill erosion. Panelists will address critical issues of bias, reliability, and scholarly integrity, exploring the implications for peer review and research transparency. They will discuss the importance of cultivating comprehensive AI literacy within academia, highlighting ethical preparedness and responsiveness to regulatory frameworks like the EU AI Act. Additionally, the panel will explore the divide between theoretical academic AI research and operational AI applications, offering strategies for effectively bridging this gap. The discussion aims to provide actionable insights for responsibly integrating generative AI, balancing innovation with accountability and academic rigor.
Academic
Business
Örebro University (Sweden)
Ahmed Qadi (moderator, 7amleh: The Arab Center for Social Media Advancement, Palestine), Fabio Cristiano (Utrecht University, Netherlands), Özgün Topak (York University, Canada), Mais Qandeel (Örebro University, Sweden), Mohammad Khader (Birzeit University, Palestine)
The use of technologies in situations of armed conflicts and belligerent occupation continues to lead to social and legal instabilities and aggravated violations. This panel highlights the issues pertaining to the status of Palestinians under Israeli AI-enhanced surveillance and their effects on fundamental rights. In doing so, this session will discuss the use of AI-enhanced systems to track Palestinians, systematizing massive surveillance and automating harsh restrictions to their rights and freedoms as part of a structural conduct of oppression. The session also discusses the role of social media platforms in facilitating digital control leading to authoritarian and genocidal surveillant assemblage, destruction of cyber infrastructure and data violence. This session, finally, explores the legality of such conduct under the applicable rules and norms of international law.
Privacy Salon
The emergence and spread of electronic communication media during the 20th century had a huge impact on societies all over the world, completely changing the ways information is exchanged. But in the Soviet Union and Eastern Bloc countries, these technologies evolved in a specific socioeconomic climate, affected from the very beginning by Marxist-Leninist ideology, military communism, and red terror, which has influenced attitudes towards them in the post-Soviet states to the present day.
The “infra” artistic research project by Boris Shershenkov uses applied media archaeology toolkits to unravel the design patterns, technomythologies and governmental social engineering methods used to exploit civil scientific, cultural, and technological developments as total propaganda and mass surveillance tools for the sake of “state security”.
The talk will focus on the dystopian symbiosis of the authoritarian “secret police state” and Western electronic media technologies, fueling the cold civil war and governmental terrorism, which was predicted by Evgeny Zamyatin in 1920, and implemented almost 100 years later in contemporary Russia.
EU Cloud Code of Conduct (Belgium)
Koen Gorissen (Belgian Data Protection Authority, Belgium), Eva Lievens (Ghent University, Belgium), Nicholas Knoop (HubSpot, United States), Gabriela Mercuri (SCOPE Europe, Belgium)
The European Commission’s Second Report on the application of the GDPR provides a critical opportunity to evaluate regulatory effectiveness and ongoing challenges. This workshop will reflect on the Report’s key takeaways, including persistent implementation hurdles and enforcement gaps. The interactive discussion aims to address the crucial findings of the Report while exploring workable solutions to mitigate compliance obstacles organizations face daily. Participants will, therefore, examine practical steps to enhance GDPR implementation, from harmonizing regulatory guidance to fostering solutions that address operational realities. By bringing together diverse perspectives, this workshop will dive into the role of collaborative efforts to ensure robust and uniform protection of personal data across the EU, ensuring the GDPR’s long-term impact, and a robust and future-proof compliance landscape.
European Data Protection Board (EDPB) (Europe)
Gwendal Le Grand (EDPB, Europe), Amandine Jambert (EDPB, Europe)
In this hands-on workshop, you'll learn how to use the free software EDPB Website Auditing Tool. EDPB case officers will explain how to use the tool in your particular case and are on standby to answer all your questions. They will demonstrate how you can use the tool to (1) check the basic security features of your site, (2) visualise trackers (cookies), (3) evaluate whether these are compliant, and (4) prepare for future re-use (your own knowledge database). You can download the tool on your own PC ahead of the workshop to make sure you get the most out of this session: https://code.europa.eu/edpb/website-auditing-tool/-/releases
powered by TikTok (Europe)
Negotiations on the Commission's Proposal for a GDPR Procedural Regulation are at an advanced stage. This workshop will bring together stakeholders who have been closely engaged in the legislative process, including from the EU institutions, NGOs, industry, and practitioners, with a view to raising awareness of the proposed legislation and discussing how the final agreement can bolster trust in the enforcement of the GDPR, whilst aligning with the Commission's competitiveness and simplification agenda. This workshop will involve an active discussion with the following facilitators: (1) Sara Brandsätter, MLex; (2) Karolina Mojzesowicz, European Commission; (3) Yann Padova, Wilson Sonsini; (4) Romain Robert, EDPS; (5) Ralf Bendrath, European Parliament; (6) Isabelle Vereecken, EDPB.
Academic
Business
Policy
CPDP (Belgium)
Nóra Ni Loideain (moderator, Institute of Advanced Legal Studies, University of London, United Kingdom), Karolina Mojzesowicz (DG Just, Europe), Anu Talus (European Data Protection Board, Europe), Matthias Spielkamp (AlgorithmWatch, Germany), Maximilian von Grafenstein (Berlin University of the Arts/Alexander von Humboldt Institute for Internet and Society, Germany), Charly Helleputte (King & Spalding, Belgium)
Last year, CPDP hosted a panel focused on the question of how the EU digital framework – the range of laws, many newly enacted, regulating the digital, including, for example, the AI Act, the DMA, the DSA, and now the EHDS Regulation – might be realised in practice. Now, one year on from that panel, the realisation of the digital framework remains the subject of significant uncertainty. In particular, questions now emerge as to the ways in which, and the degree to which, patterns and regularities can be identified in the implementation of the framework. In this regard, this panel assembles representatives from legal practice, politics, academia, and civil society, to consider the current state of implementation of the digital framework, and what this means moving forward. The panel will consider, amongst others, the following questions:
Academic
Business
Policy
CPDP (Belgium)
Gabriela Zanfir-Fortuna (moderator, Future of Privacy Forum, United States), Helena Koning (Mastercard, Belgium), Valentina Pavel (Ada Lovelace Institute, United Kingdom), Diarmuid Goulding (Irish Data Protection Commissioner, Ireland), Irene Kamara (Tilburg Law School, Netherlands)
One of the most confronting challenges facing regulators, researchers, and public and private sector organizations today is determining the when, why, and how of the application of fundamental data protection principles to AI systems. The question is not only whether personal data is processed by such systems, but also of how foundational principles such as lawfulness, fairness, and transparency are challenged by the way such systems operate. Complex AI systems, particularly when forming part of a broader product or service, also invite us to determine how requirements around data minimization and accuracy can be assured, and how individual rights can be fulfilled.
Academic
Policy
Fraunhofer ISI (Germany)
Felix Bieker (moderator, ULD, Germany), Katherine Nolan (Technological University Dublin, Ireland), Irmak Erdoğan Peter (Center for IT & IP Law (CiTiP), Belgium), Aljosa Ajanovic Andelic (European Digital Rights (EDRi), Europe), Isabel Barberá (Rhite, Netherlands)
Risk has proliferated since the GDPR and is now also found in the DSA and the AI Act. While the DSA introduces the notion of systemic risks, the AI Act covers anything from 'self-replicating' AI systems to discrimination against individuals, groups and societies. The panel will touch on these more abstract issues and consider the cases of migration and law enforcement to illustrate the implications of these policies, for instance, how framing migration as a "risk" can affect human rights. Broadening the scope, we discuss the role of standardisation as well as how policy and political narratives of risk-based approaches can be code for deregulatory agendas. Internal risk classifications can obscure broader questions of risk to a fundamental rights-based EU legal and political order in a changing geopolitical landscape. With this inflation of risk, our panel will engage the following questions:
Academic
Business
Policy
Europol Data Protection Experts Network (EDEN) (Europe)
Jekaterina Macuka (moderator, Director of the Data State Inspectorate (DPA) of Latvia, Latvia), Daniel Drewer (Europol Data Protection Experts Network (EDEN), Europe), Michael Armstrong (An Garda Síochána, Ireland), Marjolein Louwerse (Dutch Police, Netherlands), Anna Pouliou (CERN, Switzerland)
Law Enforcement DPOs face a myriad of challenges in their efforts to uphold privacy rights while ensuring effective policing. Some of the most pressing challenges include ensuring that only necessary data is collected for legitimate law enforcement purposes, and establishing protocols for the retention and deletion of data to prevent unnecessary storage and potential misuse. The DPO portfolio includes balancing the need to share data between law enforcement agencies for effective collaboration and intelligence-sharing while safeguarding against unauthorized access, data breaches, and potential violations of privacy laws. It also involves assessing the ethical and legal implications of using advanced surveillance technologies such as facial recognition, drones, and biometric systems, including ensuring transparency, accountability, and compliance with privacy law.
Academic
Business
Policy
Institute for Information Law (IViR) (Netherlands)
Kristina Irion (moderator, Institute for Information Law (IViR), Netherlands), Zuzanna Warso (Open Future, Poland), Yordanka Ivanova (European Commission, Europe), Lucie-Aimée Kaffee (HuggingFace, Germany), Arriën Molema (International Council of Music Creators (CIAM), Netherlands)
It is an open secret that generative AI models have been trained on whatever datasets developers could get hold of, regardless of the legality of the practice. Concerns about conformity with copyright law and the General Data Protection Regulation abound. In relation to general-purpose AI (GPAI) models, the European Union's Artificial Intelligence Act foresees the disclosure of public information about training data. Soon the providers of GPAI models will have to publish a sufficiently detailed summary of the data used to train their models. What this summary should look like is subject to ongoing debate as the EU’s AI Office is developing a template for the summary. In this panel we will take a closer look at the emerging contours of the template in light of the purpose of this summary.
Join filmmaker Peter Porta, People versus Big Tech and Global Witness for a screening of ‘The Click Trap’, a documentary about the underlying business model of the internet (digital advertising) and how it links to global disinformation flows, privacy breaches and online scams. Following the screening, a panel discussion will explore how existing EU policies such as the Digital Services Act (DSA) and the General Data Protection Regulation (GDPR) can be used to tackle these issues, and how ordinary citizens can be part of the fight back against the ever-expanding surveillance of social media and advertising companies.
MetaverseUA Chair (Spain)
Javier López-Guzmán (MetaverseUA Chair - University of Alicante, Spain), Eleni Kosta (Tilburg University, Netherlands), Pia Groenewolt (Vrije Universiteit Brussel, Belgium), Martin Sas (KU Leuven Centre for IT & IP Law, International)
The rise of Artificial Intelligence has disrupted almost every aspect of our lives, and Virtual Worlds are no exception. AI will help model the future Metaverse and deeply impact it. Ranging all the way from virtual asset generation, through avatar powering and interaction, to content moderation, many aspects of the Virtual Worlds will fundamentally change under AI. The spread of deepfakes is a threat that cannot be ignored, and it poses many ethical and legal questions when developed in the Metaverse. The regulatory effort for AI in the EU has focused on a horizontal risk approach, technologically neutral and not specific to the Metaverse. However, these risks call for a debate on how the AI Act applies to the Metaverse and its platforms. Could the application of the AI Act on prohibited conducts and the risk-based approach be challenged in the Virtual Worlds? What should be the regulatory approach of the EU regarding AI-generated content in the Metaverse? Is automated decision-making with data protection impact well delimited in the Virtual Worlds?
Multidisciplinary Institute in Artificial Intelligence (MIAI) (France)
Pankaj Raj (Multidisciplinary Institute in Artificial Intelligence (MIAI), India), Shadée Pinto (Multidisciplinary Institute in Artificial Intelligence (MIAI), France)
The integration of AI into healthcare is revolutionizing patient care, offering unprecedented advancements while reshaping medical practices. From medical imaging and personalized treatments to predictive analytics and robotic surgery, AI presents immense opportunities. However, it also introduces risks, such as the manipulation of medical images or deepfakes, which could have severe consequences, including misdiagnosis or loss of life. This transformation raises crucial challenges, particularly regarding liability, secrecy and informed consent. This workshop seeks to explore these issues and to develop actionable recommendations for decision-makers and organisations to ensure safe, ethical and accountable integration of AI in healthcare. The workshop will address: • What mechanisms should apply to ensure liability when AI-driven healthcare decisions lead to adverse outcomes? • How can patient data privacy be safeguarded as AI systems become deeply integrated into healthcare? • What should an effective informed consent framework look like in an AI-powered healthcare ecosystem?
University of Kassel: Project PriDI: Privacy-enhancing Digital Infrastructures (Germany)
Leopold Beer (Open Search Foundation: Project PriDI: Privacy-enhancing Digital Infrastructures, Germany), Christian L. Geminn (University of Kassel, ITeG, provet, Germany), Kai Erenli (UAS BFI Vienna, Austria), Lara Sokol (University of Kassel, Germany)
This interactive workshop explores how a European Open Web Index (OWI) enhances European digital sovereignty and data protection. Using a gamified HeroQuest-inspired format, participants assume roles — Barbarians, Wizards, Dwarves, and Elves — to collaboratively counter the evil Mage and his Monsters (representing, e.g., legal challenges, the status quo, lack of trust and user acceptance). Each group will explore how an OWI supports or challenges digital sovereignty and privacy and how current EU regulations shape the feasibility of an OWI. The workshop includes breakout discussions to refine perspectives and a combat phase in which the evil Mage poses challenges to sharpen the group's arguments, using HeroQuest-like monsters (e.g. the Data Demon and the Blind Oracle) representing real-world obstacles (e.g. user adoption, compliance, market dominance). Key outcomes include recommendations and collaborative strategies for an OWI. By merging different perspectives, OWIQuest gives participants new and actionable ideas for a fairer digital future.
Academic
Business
Policy
European Digital Rights (EDRi) (Europe)
Yassine Chagh (moderator, IGLYO, International), Rita Costa Cots (Build Up, Spain), Manon Baert (5Rights Foundation, Belgium), Stefi Richani (Equinox Initiative for Racial Justice, Belgium), Luísa Franco Machado (Equilabs, Brazil)
From blanket bans to mandatory age verification systems, ‘silver bullet’ solutions have been on the rise to protect minors in online spaces. While these reactive measures are rapidly being adopted in the EU and many parts of the world, they often gloss over the harmful impacts these measures have on the rights of children and adults, including the right to privacy, participation, non-discrimination and access to information. These 'mitigating' measures, which are privacy-invasive and exclusionary, are often based on normative assumptions of the harms and risks young people encounter—harms and risks that are far more complex than what these stop-gap solutions can address. In this panel, we take a step back and look at research, evidence and lived experiences to explore what it really means to feel (un)safe in online spaces at different ages and from different positionalities.
Academic
Business
Policy
Mozilla (United States)
Kirsten Nelson-de Búrca (moderator, Mozilla, International), Graham Mudd (Anonym, International), Vincenzo Tiani (Future of Privacy Forum, International), Arletta Gorecka (Glasgow International College, United Kingdom), Aymeric Pontvianne (CNIL, France)
The online advertising sector is at a turning point. Digital advertising depends both on aggressive data collection practices and on excessive data sharing between platforms and advertisers. This, in turn, raises serious privacy and competition concerns. Regulators are stepping up enforcement and industry players are exploring alternatives to existing privacy-intrusive practices. Developments in Privacy Enhancing Technologies (PETs) offer a promising way to protect user privacy. However, not all PETs are created equal. While they can reduce data exposure, PETs are often deployed in ways that reinforce the market power of dominant players rather than serve as a true privacy-preserving alternative. In some cases, PETs risk becoming a form of privacy-washing, providing a veneer of compliance while maintaining the same underlying data control structures that limit competition. This panel will examine the potential of PETs in digital advertising and the regulatory steps needed to incentivize their adoption while preventing them from becoming another tool for industry consolidation.
Academic
Business
Policy
Microsoft (Belgium)
Anahita Valakche (moderator, Microsoft, United States), Naman Goel (Tony Blair Institute, United Kingdom), Marta Ziosi (Oxford Martin AI Governance Initiative, University of Oxford, United Kingdom), Marta Przywała (SAP, Belgium), Friederike Grosse-Holz (European Commission, Europe)
In this panel discussion, we will delve into some of the intricacies of the EU AI Act’s General-Purpose AI Code of Practice (CoP), which aims to provide a comprehensive compliance framework for managing GPAI models with systemic risk, ensuring that such models are developed and deployed responsibly. This discussion will explore the various methodologies and tools available for risk assessment and mitigation at the model level, the role and views of different stakeholders on how to mitigate these risks, and the challenges and trade-offs faced in implementing and standardizing effective risk management strategies. Key questions to be addressed include: What are the current state-of-the-art methods for identifying and assessing AI model risks? How can the CoP contribute to mitigating these risks effectively, and what are the roles of the various stakeholders? What are the current challenges in implementing the CoP? What areas of the CoP can be improved in the future, in view of emerging AI model risks? What is the overall assessment of the CoP?
Academic
Business
Policy
Center for Quantum & Society, Quantum Delta NL (Netherlands)
Joris van Hoboken (moderator, University of Amsterdam, Netherlands), Golestan (Sally) Radwan (UN Environment Programme, International), Hendrik Hamann (IBM T.J. Watson Research Center, United States), Eric Monjoux (European Space Agency, Europe), Bengi Zeybek (University of Amsterdam, Netherlands)
This panel investigates the relationship between digitalisation and climate action and discusses the response of law and policy to it. Digitalisation is touted as the solution for environmental challenges. EU policy considers digital infrastructures integral to achieving the European Green Deal’s net-zero goals (the "twin transition"). But these create new risks and dependencies, as they implicate power dynamics at the intersection of the digital economy, geopolitics and security. This panel investigates some of these frictions, focusing on two technologies: foundation models and digital twins. For example, foundation models can provide novel climate insights, but they can also transfer bias in context and training data into climate solutions and cement market logics into sustainability efforts. Digital Earth applications (e.g. DestinE), bringing together sensing and computing, can change environmental decision-making processes and have potential uses for disaster prevention, migration management and security. How could the law take account of these dynamics going forward?
Stanford (United States), Tilburg Law School (Netherlands)
In The Tech Coup, Marietje Schaake offers a behind-the-scenes account of how technology companies crept into nearly every corner of our lives and our governments. She takes us beyond the headlines to high-stakes meetings with human rights defenders, business leaders, computer scientists, and politicians to show how technologies—from social media to artificial intelligence—have gone from being heralded as utopian to undermining the pillars of our democracies. To reverse this existential power imbalance, Schaake outlines game-changing solutions to empower elected officials and citizens alike. Democratic leaders can—and must—resist the influence of corporate lobbying and reinvent themselves as dynamic, flexible guardians of our digital world.
SURF (Netherlands)
Julian Rill (SURF, Netherlands), Jan Landsaat (SURF, Netherlands)
Discover the power of technical investigation in compliance assessments. In this interactive workshop we analyse the technical workings of a fictional digital service provider. In small groups, guided by the SURF compliance experts, we explore technical findings, learn how to interpret them and assess their legal impact. Come and enhance your technical skills, and discover how this type of investigation can be a valuable addition to your compliance framework.
Flemish supervisory committee for the processing of personal data (Vlaamse Toezichtcommissie) (Belgium)
Anne Teughels (Flemish supervisory committee for the processing of personal data (Vlaamse Toezichtcommissie), Belgium)
Contribute to better collaboration between DPOs and DPAs, two key roles in data protection. How can they strengthen each other? We will organize small working groups, each including at least one DPA representative (not necessarily yours). Using a questionnaire, both DPOs and DPAs will discuss and identify areas for improvement and best practices. Suggestions for questions are welcome! The results will be presented to all attendees and, after evaluation, disseminated as recommendations. Don't miss this unique opportunity! Academics involved in this topic can also sign up; if there are available spots, they will be asked to lead one of the group discussions and take notes on the conclusions.
EDRi European Digital Rights (Europe)
Itxaso Domínguez de Olazábal (EDRi, Belgium)
The rise of 'Consent or Pay' models—where users must either consent to data collection or pay for privacy—has sparked debates on fairness, choice, and digital rights. But beyond legal arguments and corporate justifications, how does this model shape the daily lives of those with the least power to negotiate? From gig workers and students to marginalised communities and grassroots collectives, many are forced into unfair choices with little say in the rules that govern their digital lives. This highly interactive session shifts the focus away from industry and policy insiders to those most affected. Through critical role-play and collaborative discussions, participants will step into real-world scenarios, exploring how ‘Pay or Okay’ deepens inequalities and imagining alternative models that go beyond this false binary. No panels, no lectures—just fresh perspectives, creative problem-solving, and a rethinking of what meaningful consent could and should look like.
Dastra (France)
Leila Sayssa (Dastra, France), Romain Bidault (Dastra, France)
As AI systems rapidly integrate into organisational processes, the question is no longer if they need to be governed, but how. We’ll explore the current state of internal AI governance through the lens of privacy professionals and DPOs. We know there's no one-size-fits-all solution. That’s exactly why this session is an opportunity to compare models, challenge assumptions, and benchmark your approach against those of your peers. We’ll share initial findings from our ongoing study, “Is the Privacy World Embracing AI?”, and discuss processes, roles, and ways to remediate privacy challenges, to make your governance both actionable and auditable. Whether you’re just starting or refining your AI strategy, this session will help clarify where theory meets operational reality. Workshop format: this session is interactive by design, meaning your contributions are not just welcome, they’re essential. Feel free to ask questions, share challenges, and test your governance assumptions live.
Academic
Policy
EDPS (Belgium)
Leonardo Cervera Navas (moderator, EDPS, Europe), Markus Wünschelbaum (Hamburg's Data Protection Commissioner, Germany), Uljan Sharka (iGenius, Italy), Ignasi Belda (The Spanish Artificial Intelligence Supervisory Agency (AESIA), Spain), Marietje Schaake (Stanford Cyber Policy Center, International)
The European Union’s regulatory landscape is undergoing a transformative phase, marked by the introduction of the Artificial Intelligence Act (AI Act) and the ongoing application and enforcement of the General Data Protection Regulation (GDPR). Both frameworks aim to foster trust, accountability, and fundamental rights in the digital age, but their interplay raises critical questions for stakeholders. This panel will explore the synergies, tensions, and practical implications of ensuring the coherent and consistent application of these landmark regulations. Experts from law, technology, and policy fields will discuss pathways to ensure compliance with the GDPR while fostering sustainable and responsible innovation in AI development and deployment.
Academic
Business
Policy
Lexxion ()
Bart van der Sloot (moderator, Tilburg University, Netherlands), Franziska Boehm (Leibniz-Institute for Information Infrastructure / Karlsruhe Institute for Technologies, Germany), Axel Freiherr von dem Bussche (Taylor Wessing, Germany), Eleni Kosta (Tilburg University, Netherlands), Wojciech Wiewiórowski (European Data Protection Supervisor, Europe)
The EDPL Young Scholar Award, organised by the European Data Protection Law Review (EDPL), is an annual competition for data protection researchers in the early stages of their career. The panel will feature the best authors of this year’s competition who will present the findings of their research and discuss it with the Award’s jury of renowned data protection experts. The panel will conclude with the announcement of the winner of the award and an award ceremony. Note: The panel organisers are grateful to the CPDP leadership for making an exception to the “one speaker-one panel” rule due to the EDPL 10-year anniversary.
Academic
Business
Policy
FARI - AI for the Common Good Institute (Belgium)
Nathan Genicot (moderator, FARI - AI for the Common Good Institute, Belgium), Thiago Moraes (LSTS (VUB), Belgium), Alex Moltzau (European Commission (AI Office), Europe), Sophie Tomlinson (Datasphere Initiative, United Kingdom), Sam Jungyun Choi (Covington & Burling, United Kingdom)
The EU AI Act introduces regulatory sandboxes to foster innovation while ensuring compliance. However, the legal framework surrounding these sandboxes requires careful analysis. A key challenge is determining how these sandboxes will be supervised and by which authorities. This issue is intricately linked to the broader oversight structure of the AI Act, particularly the role of market surveillance authorities, which must be designated by Member States by August 2025. An equally pressing issue is the involvement of data protection authorities when personal data is processed within these sandboxes. This panel will examine how Member States can establish regulatory sandboxes that align with their oversight frameworks, foster effective collaboration between market surveillance and data protection authorities, and achieve a balance between innovation and robust supervision, ensuring regulatory sandboxes effectively support AI development while adhering to the AI Act’s provisions.
Academic
Business
Policy
DPO Track by Uber (Belgium)
Carolien Michielsen (moderator, Stibbe, Belgium), Renato Leite Montero (e&, United Arab Emirates), Helena Koning (Mastercard, Belgium), Isabelle Vereecken (European Data Protection Board, Belgium)
The legal provisions governing the role of the Data Protection Officer (DPO), and their practical implementation, differ across continents and countries. Even within the same jurisdiction, we observe different interpretations of the data protection rules. This panel explores the challenges that DPOs face when functioning in different jurisdictions with different interpretations of their roles, responsibilities, safeguards and liability. What does it mean for a DPO if their degree of independence is better guaranteed in one country than in another? How do multinationals deal with the differences in national legislation regarding personal liability and dismissal protection? And how do these variations influence the positioning of the DPO within the governance structures of organizations, such as in the ‘Three Lines of Defence’ model? In addition, the panel highlights how the DPO function differs globally – not only within the EU, but also beyond – and how companies (including their stakeholders) and supervisory authorities deal with these differences.
Academic
Business
Policy
AI-Regulation.com, Multidisciplinary Institute of Artificial Intelligence, University Grenoble Alpes (France)
Theodore Christakis (moderator, AI-Regulation.com, Multidisciplinary Institute of Artificial Intelligence, University Grenoble Alpes, France), Ben Nimmo (OpenAI, United States), Yann Padova (Wilson Sonsini Goodrich & Rosati, Belgium), Kai Zenner (European Parliament, Europe), Becky Richards (Office of the Director of National Intelligence, NSA, United States)
This panel addresses the increasingly significant concept of "Sovereign AI," reflecting Europe's shift from digital sovereignty to strategic autonomy in AI. Geopolitical tensions—particularly between the U.S., EU, and allied nations—have intensified debates around AI governance, tech dependence, and national security. Europe's concerns about reliance on U.S. infrastructure have spurred initiatives to develop regional data ecosystems and local computational resources. Simultaneously, the U.S., citing national security, has imposed controls on access to advanced AI tools and datasets for adversarial states. The rise of generative AI, like ChatGPT, adds security concerns about misuse by malicious actors, prompting calls for safeguards and oversight. The panel will examine how strategic autonomy, global frictions, and evolving risks are reshaping international AI policy.
LSTS, VUB (Belgium)
As the fortification of Europe's borders and its hostile immigration terrain has taken shape, so too have the biometric and digital surveillance industries. And when US Immigration Customs Enforcement aggressively reinforced its program of raids, detention, and family separation, it was powered by Silicon Valley corporations. In cities of refuge, where communities on the move once lived in anonymity and proximity to familial and diaspora networks, the possibility for escape is diminishing. As cities rely increasingly on tech companies to develop digital urban infrastructures for accessing information, identification, services, and socioeconomic life at large, they also invite the border to encroach further on migrant communities, networks, and bodies. In this book, Matt Mahmoudi unveils how the unsettling convergence of Silicon Valley logics, austere and xenophobic migration management practices, and racial capitalism has allowed tech companies to close in on the final frontiers of fugitivity—and suggests how we might counteract their machines through our own refusal.
Utrecht University (Netherlands)
Julia Straatman (Utrecht University, Netherlands), Iris Muis (Utrecht University, Netherlands)
With the AI Act now in force, fundamental rights impact assessments (FRIA) will be mandatory for high-risk AI systems. This workshop offers a unique opportunity to explore FRAIA: a fundamental rights impact assessment developed by Utrecht University and used by several Dutch governmental agencies for over three-and-a-half years. The workshop trainers will explain the ins and outs of FRAIA and share insights from real-life experiences of its development and implementation. The trainers have facilitated 20+ FRAIA assessments across various governmental institutions and use cases. During this hands-on workshop, you will: - Learn why checks and balances for algorithms are crucial; - Apply FRAIA to a realistic use case; - Learn about FRAIA’s relationship to Article 27 of the AI Act; - Discuss implementation; - Have a Q&A with FRAIA's developers and implementers. This workshop is for professionals working on technology, ethics and/or fundamental rights. Considering last year's success, arrive early to secure your spot!
TU Delft (Belgium)
Christina Dinar (Catholic University of Applied Science Berlin, Germany), Daniel Guagnin (nexus Institut Berlin, Germany), Ben Wagner (TU Delft, Belgium)
This interactive workshop explores how social media platform architectures shape democratic discourse and privacy, using Twitter's recent transformation as a springboard to examine alternative models like the Fediverse. We will present key insights on interoperability, content moderation, and the democratic implications of different technical architectures. The workshop combines brief expert presentations (5 min each) with hands-on world café collaborative sessions at themed tables (45 min), during which participants will rotate through practical exercises exploring technical design implementation, community moderation strategies, and governance frameworks. In the plenary discussion (25 min) we will discuss how architectural choices influence privacy, speech rights, and democratic participation. Workshop objectives: • Analyze technical and governance trade-offs between social media architectures • Develop practical insights into implementing community moderation in distributed networks • Explore accountability structures and regulatory frameworks for democratic platforms
HIVA KU Leuven (Belgium)
A workshop to reflect on findings from literature and expert interviews. The aim is to engage participants in a discussion of the ethical and legislative aspects of “employees monitoring their own wellbeing”. The idea for this workshop grew from research exploring employee wellbeing through subjective survey data and objective sensor data. Over the past two years, Dr. Michiel Bal has become well aware that his project’s destination (improving employee wellbeing) would follow a bumpy road filled with ethical, legal, and technological challenges and hindrances. In a 75’ session, Michiel will present the topic based on literature (10’). Afterwards, participants will be engaged in a first round of group discussions (25’). Following this, Michiel will give further insights stemming from 21 expert interviews (10’). These insights will provide new material and perspectives to deepen the second round of group discussions. If consent is given, the interactive group discussions will be recorded as part of an ongoing Delphi study.
Spirit Legal (Germany)
Peter Hense (Spirit Legal, Germany), Tea Mustać (Spirit Legal, Germany)
This workshop delivers practical insights gained from two years of AI Act implementation in public and private sector organizations. Participants will explore real-world case studies and engage in guided discussions with trainers and peers. Designed for professionals with prior knowledge of the AI Act, ML/AI technologies, and (Data) Quality Management, this intensive workshop focuses on solving complex, real-life challenges. It avoids superficial overviews, pushing participants to tackle advanced tasks and answer difficult questions. Key topics include: * Legal and technical challenges, including conflicts of laws and compliance with standards * Risk categorization, risk management and litigation risks * Human and organizational aspects of implementation * Critical issues such as data collection, accuracy, bias mitigation, data protection, copyright, contract management, and product liability The workshop equips participants with actionable insights and strategies for navigating the complexities of the AI Act.
Academic
Business
Policy
Center for Technology and Society at FGV Law School (Brazil)
Monika Zalnieriute (moderator, Australian Research Council, Australia), Pablo Trigo (Vrije Universiteit Brussel (VUB), Belgium), Agneris Sampieri (Access Now, Mexico), María Pilar Llorens (CETYS, Argentina), Filipe Medon (Center for Technology and Society at FGV Law School, Brazil)
The adoption of Artificial Intelligence (AI) technologies by the Judiciary in Latin American countries has the power to transform judicial systems, improving efficiency, and addressing long-standing challenges such as case backlogs and delays. However, this transformation raises critical questions about transparency, accountability, fairness and, ultimately, data protection. Therefore, the main question is whether these courts can leverage AI to deliver justice while ensuring the protection of fundamental rights and public trust. In light of that, the panel plans to explore the current use of different AI tools across judiciary systems in Latin America. Speakers will analyze success stories, regulatory gaps, and ethical dilemmas associated with the adoption of these technologies, notably considering the recent administrative regulation in Brazil. Special focus will be given to the potential risks of hallucination in Generative AI models, bias in AI decision-making, challenges of ensuring transparency in AI systems, and the implications for due process and human oversight.
Academic
Business
Policy
Wageningen Social and Economic Research (Belgium)
Pia Groenewolt (moderator, Vrije Universiteit Brussel, Belgium), Can Atik (Wageningen Social and Economic Research, Netherlands), Monja Sauvagerd (University of Bonn, Germany), Elena Spolidoro (Ministerie van Landbouw, Visserij, Voedselzekerheid en Natuur, Netherlands), Seth van Hooland (European Commission, Europe)
This panel examines data governance in the agri-food sector, focusing on recent EU regulations that have fostered the creation of data spaces. These initiatives provide an opportunity to revisit the concept of commons, long associated with food systems, and explore the potential of data spaces as inclusive governance models for sustainable food systems. The discussion will address critical issues surrounding data flows, access, and sharing practices, examining the roles of public, private, and civil society actors in shaping agri-food data governance. Drawing on theoretical perspectives, fieldwork, and case studies in agricultural data spaces, the panel will explore how frameworks such as the GDPR and other data regulations influence the balance between public and private interests. This session aims to offer insights into fostering innovation, equity, and sustainability in food data governance by unpacking data governance structures and the commons.
Academic
Business
Policy
Center for AI and Digital Policy (International)
Marc Rotenberg (moderator, Center for AI and Digital Policy (CAIDP), International), Anne-Charlotte Recker (Belgian Data Protection Authority, Belgium), Julia Apostle (Orrick, Europe), Karolina Iwańska (European Center for Not-for-Profit Law, Netherlands), Gregory Lewkowicz (Université libre de Bruxelles, Belgium)
As the EU AI Act moves toward implementation, critical questions remain about its enforcement, scope, and impact on fundamental rights. This panel brings together policymakers, legal scholars, industry leaders, and civil society representatives to explore how AI governance in Europe ensures compliance with human rights protections while addressing risks like biometric surveillance, AI-driven discrimination, and regulatory gaps. With AI regulations emerging worldwide, what role will European AI governance play in shaping global norms? The discussion will focus on enforcement mechanisms, the role of data protection authorities, and cross-border implications.
Academic
Business
Policy
CNIL (France)
Aymeric Pontvianne (moderator, CNIL, France), Ricardo Catalan (Autoriteit Persoonsgegevens, Netherlands), Nadia Arnaboldi (AssoDPO, Italy), Thomas van Gremberghe (Agoria, Belgium), Gerard Buckley (University College London, United Kingdom)
Sometimes required by the GDPR, sometimes voluntary, the appointment of a Data Protection Officer is often seen by firms as a compliance duty and a source of costs. But firms sometimes underestimate the economic interest and the business benefits of having a DPO. Such economic benefits largely overlap with the benefits of GDPR compliance itself. Based on novel results of a statistical investigation in France, as well as interviews, the panel will identify the main types of economic gains associated with the presence of a DPO, set up a typology of the controllers concerned, and return to the conditions for the success of such an approach. Controllers need a new perspective on their DPOs, considering them as an asset generating economic value added, and organising the DPO's role in line with this investment.
In a world shaped by powerful digital systems, how can we reclaim a sense of agency—and what role should art play in that effort? This panel brings together contributors from the CODE network and beyond to explore how creative, critical, and research-based practices can help us push back against the dominance of digital platforms.
We’ll focus on why it’s essential to approach digital challenges—such as surveillance, loss of privacy, algorithmic injustice, and shrinking digital rights—through interdisciplinary collaboration. We’ll also ask what makes working across fields like art, law, technology, and activism so difficult in practice, even when it’s widely encouraged in theory.
Together, we’ll explore how artistic and imaginative work can open up new ways of understanding and resisting digital systems—and why that work deserves to stand alongside legal, technical, and scientific approaches.
University of Lausanne (Switzerland)
Konrad Kollnig (Maastricht University, Netherlands), Luka Bekavac (University of St. Gallen, Switzerland), Simon Mayer (University of St. Gallen, Switzerland)
This workshop explores how official and alternative data access methods—provided by platforms and those developed independently by researchers and civil society—can be combined to audit systemic risks on very large online platforms. Participants will compare research APIs and transparency tools available under the Digital Services Act (DSA) with alternative approaches such as sockpuppet audits, analyzing the strengths and limitations of each. Through scenario-based simulations and interactive group role-play exercises, attendees will collaboratively design and debate audit strategies while confronting challenges like API restrictions, access barriers, platform compliance tactics, and legal uncertainties. Participants will gain hands-on experience with the SOAP (System for Observing and Analyzing Posts) tool, learning to simulate user interactions and collect algorithmic recommendation data. Throughout the session, ethical and legal considerations surrounding data collection and usage will be explored, equipping participants with practical skills to independently audit platform practices.
Centre for Future Generations (Belgium)
Unpacking the realities of mental health and well-being in the digital age. This cocktail will be paired with a short and provocative Q&A with Virginia Mahieu, neurotechnology director at the Centre for Future Generations, about what is happening to our brains on tech, and what we can do about it to protect and promote mental health for current and future generations.
Academic
Business
Policy
The Cordell Institute for Policy in Medicine & Law, Washington University in St. Louis, USA (United States)
Neil Richards (moderator, Washington University in St. Louis, United States), Woodrow Hartzog (Boston University, United States), Claire Boine (The Cordell Institute for Policy in Medicine & Law, Washington University, United States), Carolina Foglia (European Data Protection Board Secretariat, Europe), Orla Lynskey (University College London, United Kingdom)
For the past decade data protection and privacy regimes have increasingly been asked to deal with social problems that transcend informational self-determination and the preservation of the Athenian ideal of private lives and spaces. These problems have included political misinformation, deepfakes, algorithmic decision making, racial bias, attention theft, and the increasingly vast set of challenges presented by artificial intelligence technologies. This panel convened by the Cordell Institute at Washington University in St Louis will examine the extent to which data protection and privacy regimes can continue to address these problems while retaining their coherence, and what other frameworks might be necessary to bring them within the rule of law - including frameworks that might not yet exist. This all-star panel of academics and policy thought leaders will bravely ask the big existential questions facing our field as a whole.
Academic
Business
Policy
PinG (Privacy in Germany) & DAV (Deutscher Anwaltverein) (Germany)
Niko Härting (moderator, PinG (Privacy in Germany), Germany), Chloé Berthélémy (EDRi European Digital Rights, Belgium), Anna Drozd (CCBE, Belgium), Leonardo Cervera Navas (European Data Protection Supervisor, Europe), Thorsten Wetzling (interface – Tech analysis and policy ideas for Europe e.V., Germany)
Surveillance by security authorities, especially intelligence services, is on the rise. Simultaneously, lawyers are increasingly seen as "enablers" in areas such as money laundering and tax evasion, and are therefore targeted in legislative initiatives. In 2021, the ECtHR ruled in two cases against the UK and Sweden: mass surveillance of communications without cause is incompatible with the ECHR unless human rights safeguards are in place; in other words, Big Brother can watch, but the safeguards are decisive. Still, mass communications surveillance, the skimming of data from private service providers, and the exchange of information between intelligence services do not stop at communication protected by legal privilege. As lawyers are treated as "enablers" of criminal or unlawful acts, we would like to discuss their vital role in granting access to justice, the necessary limits of surveillance, and the duties imposed in the interest of preserving the rule of law.
Academic
Business
Policy
CDSL ()
Vagelis Papakonstantinou (moderator, Cyber & Data Security Lab, VUB, Greece), Catherine Forget (Groupe de recherche en matière pénale et criminelle (GREPEC), Belgium), Grace Mulvey (Microsoft, Ireland), Justus Coenraad Reisinger (Van Boom Advocaten, Netherlands), Gilles Robine (European Commission, DG Home Affairs & Migration, Digital investigation, Europe)
In the recently published ProtectEU Internal Security Strategy, lawful access to encrypted data and communications is presented as one of the key challenges for law enforcement authorities. In the coming months, the EC will present a "Technology Roadmap" on encryption to identify and assess "proportionate" solutions. In this context, the massive police data collection conducted in the ongoing SKY ECC case is illustrative of the fundamental rights implications of encryption workarounds, including for the right to privacy and the right to a fair trial. An additional important issue is the sanctioning of procedural irregularities in judicial criminal proceedings. The role of Europol in international data-driven investigations should also be clarified. In our panel, the speakers will take into account the perspective of national jurisdictions in Belgium, the Netherlands, France, Austria, Italy and Germany. They will also discuss the judicial cooperation at stake: How are the European jurisdictions (ECJ/ECtHR) and the EU data protection authorities reacting?
Academic
Business
Policy
CPDP (Europe)
Ivan Szekely (moderator, Central European University/Blinken OSA Archivum, Hungary), Charles Raab (University of Edinburgh, United Kingdom), Colin Bennett (University of Victoria, Canada), Marit Hansen (Privacy Commissioner, Schleswig-Holstein, DE, Germany), Sébastien Ziegler (Europrivacy, Luxembourg)
In Articles 42 and 43, the GDPR sets out provisions for the formation and use of accredited certification systems so that data controllers and processors can demonstrate their compliance with the law. Such systems, including standardization, can play an important part in the fulfilment of data subjects’ rights and in data protection oversight arrangements. Data protection authorities, as supervisory authorities, and EU institutions play a key part in the establishment and regulation of certification, and the establishment and application of relevant standards are important in these processes. But many questions can be asked.
Academic
Policy
CPDP (Belgium)
Kristina Irion (moderator, University of Amsterdam, Netherlands), Anna-Julia Saiger (University of Freiburg, Germany), Rachael Olaitan Aborishade (Center for Artificial Intelligence and Digital Policy, United States), Emma Semaan (University of Oxford, United Kingdom), Maria-Lucia Rebrean (Leiden University, Netherlands)
Anna Julia Saiger, University of Freiburg (DE) - Navigating Disruption: the Case of the European AI Act; Rachael Olaitan Aborishade, Center for Artificial Intelligence and Digital Policy (US) - Navigating Transparency Obligations for Companion Chatbots under the European Union AI Act: Evaluation And Policy Directions; Emma Semaan, University of Oxford (UK) - Technical Standards: Co-Regulatory Pathways for Digital Policymaking; Maria-Lucia Rebrean, Gianclaudio Malgieri, Leiden University (NL) - Vulnerability in the AI Act: Building an interpretation
Society for Civil Rights (GFF) (Germany)
In the past decade, online platforms like Instagram, TikTok and YouTube have been accused of contributing to — in some cases even driving — a host of real-life harms with significant impacts for individuals and communities across the world. Yet even after decades of research, our understanding of platforms’ implications remains limited. Companies tightly control access to their vast amounts of data, leaving researchers dependent on whatever access platforms are willing to provide — which may change at a whim. The Digital Services Act (DSA), Europe’s comprehensive new platform law, aims to address this issue by introducing a new right for researchers to access platforms’ data. The goal of this workshop is to introduce researchers and practitioners to the opportunities offered by the DSA, and to discuss practical obstacles to data access and how to approach them.
Data Protection Moot Court / Department of Innovation and Digitalization in Law, University of Vienna (Austria)
Katja Hartl (OPTIMA Project / Department of Innovation and Digitalization in Law, University of Vienna, Austria), Mariana Rissetto (DPMC / Department of Innovation and Digitalization in Law, University of Vienna, Austria), Kseniia Guliaeva (OPTIMA Project / Department of Innovation and Digitalization in Law, University of Vienna, Austria), Saskia Kaltenbrunner (OPTIMA Project / Department of Innovation and Digitalization in Law, University of Vienna, Austria)
During this workshop, a fictional scenario tests the participants' ability to deal with data protection in practice. The case revolves around a healthcare facility's need to modernize in order to develop an AI tool for prediction, diagnosis and treatment recommendations for cancer patients. Participants will delve into the different roles under the GDPR and be challenged by the interplay and application of the GDPR and other EU legal frameworks, such as the AI Act and the MDR. In this fictional scenario, an external data protection officer is asked to ensure that the tender's requirements make the AI model to be procured a privacy-by-design product. After the successful award, development and deployment of the AI tool, a patient submits a complaint to the Data Protection Authority over the hospital's alleged failure to erase (personal) data from the AI model.
Academic
Business
Policy
Open Markets Institute (International)
Max von Thun (moderator, Open Markets Institute, International), Wolfgang Oels (Ecosia, Germany), Ariel Ezrachi (University of Oxford, United Kingdom), Filomena Chirico (European Commission, Europe), Linda Griffin (Mozilla, International)
Today’s tech giants are at the heart of efforts by governments to promote AI innovation and competitiveness. This strategy has been encouraged by Big Tech corporations themselves, which have sought to portray themselves as the wellspring of AI innovation. But is Big Tech really delivering the kind of innovation we want as a society, and if not, how can we create space for alternative approaches to emerge? With this panel, the Open Markets Institute aims to challenge the prevailing narrative that tech giants are the main source of beneficial innovation. Panelists will discuss the benefits for innovation from open and competitive markets, and explore how different policy tools – from AI regulation and antitrust to industrial policy and procurement – can create a diverse innovation ecosystem that promotes the public interest instead of further entrenching today’s dominant tech firms.
Academic
Business
Policy
Autoriteit Persoonsgegevens (Dutch DPA) (Netherlands)
Ferdi Konyali (moderator, Autoriteit Persoonsgegevens (Dutch DPA), Netherlands), Catherine Jasserand (University of Groningen, Netherlands), Bilgesu Sumer (KU Leuven, Belgium), Oyidiya Oji (European Network Against Racism (ENAR), Belgium), Stefan Kulk (Autoriteit Persoonsgegevens (Dutch DPA), Netherlands)
The use of biometric technologies is becoming increasingly normalized. The adoption of “soft biometrics” has grown significantly in recent years. Unlike traditional biometrics, such as fingerprints or facial recognition, soft biometrics often lack the distinctiveness to identify individuals but are used to categorize people based on physical or behavioral traits. Increasingly, public spaces deploy these technologies to detect potential incidents like theft, violence or other unwanted activities. Soft biometrics raise urgent regulatory challenges. As these technologies become normalized, what risks arise from their use in public spaces? How should Data Protection Authorities (DPAs) tackle this phenomenon? What legal tools do DPAs have at their disposal to adequately address these challenges? Together with civil society and academia, the Dutch DPA will explore these questions during this panel.
Academic
Business
Policy
European Commission, Research Ethics and Integrity Sector, DG RTD (EU) and Politecnico di Torino (IT) (Europe)
Mihalis Kritikos (moderator, European Commission, Research Ethics and Integrity Sector, DG RTD, Greece), David Reichel (European Union Agency for Fundamental Rights, Austria), Alessandro Mantelero (Polytechnic University of Turin, Italy), Zuzanna Warso (Open Future Foundation, Poland), Migle Laukyte (Universitat Pompeu Fabra, Spain)
Regulations on AI pay limited attention to research with the aim of stimulating it and reducing regulatory barriers. However, the increasing use of AI in research projects is moving from technology to societal applications with potentially high impacts on individuals and society. Research ethics and ethical review processes remain the main way to provide safeguards for human-centred AI. This panel will focus on the development of an EU ethical governance framework for AI and highlight the need for a possible revamping of traditional ethical structures and practices in AI research, given the novelty of this set of technologies from a research ethics governance perspective. Ongoing work in this area in Europe will be presented, with a particular focus on concrete advice on how to achieve trustworthy AI research across the innovation ecosystem, including full protection of fundamental rights.
Academic
Policy
Centre for Democracy and Technology Europe (CDT) (Europe)
Laura Lazaro Cabrera (moderator, Centre for Democracy and Technology Europe, Europe), Rocco Saverino (LSTS, VUB, Belgium), Maria Magierska (Maastricht University, Netherlands), Thomas Zerdick (EDPS, Europe), Itxaso Domínguez de Olazábal (EDRi, Belgium)
Data protection authorities have collectively been at the forefront of assessing the impacts of new technologies on fundamental rights, naturally emerging as strong contenders for the role of Market Surveillance Authority under the AI Act. With the deadline to appoint Market Surveillance Authorities approaching in August 2025, this panel aims to identify lessons learned from data protection authorities’ experience in ensuring effective remedies for data protection harms that could be applied to the AI Act’s market surveillance framework. The panel will identify the different enforcement needs for AI harms as opposed to solely data protection harms; compare and contrast the governing frameworks for data protection and market surveillance authorities and how these regulators have traditionally fulfilled their roles; and discuss how civil society and fundamental rights authorities can best support and collaborate with them.
Academic
Business
Policy
Brussels Privacy Hub (Belgium)
Sophie Stalla-Bourdillon (moderator, Brussels Privacy Hub, Belgium), Romain Robert (EDPS, Europe), Lex Zard (Leiden University, Netherlands), Margaux Schaeffer (CNIL, France), Luca Nannini (Privacy Network, Italy)
In a world increasingly shaped by data-driven and AI-mediated practices, the preservation of individual dignity and civic well-being takes a backseat to algorithmic efficiency and commercial interest. This panel will examine the complex challenges of embedding and enforcing fundamental protections—by design and by default—within online service architectures that impact billions daily. From Online Behavioural Advertising (OBA) to pervasive personalisation and algorithmic sorting and scoring systems, there is an urgent need to clarify boundaries and explore alternative service designs to ensure technology serves the people behind the screens, not the other way around. In recent case law, the CJEU appears to draw inspiration from service design principles that advocate for the separation of two layers—primary services and personalized services—to better preserve individual autonomy and agency. This panel will examine what further steps are needed to make service design truly human-centric.
George Washington University Law School (United States)
How is technology changing privacy? Are we doomed? Is there anything we can do? “Privacy is dead!” This cry has rung out again and again as we have witnessed the rapid rise of new digital technologies. Are we destined to watch helplessly as technology turns our society into a terrible blend of Franz Kafka’s The Trial, George Orwell’s 1984, Aldous Huxley’s Brave New World, and Margaret Atwood’s The Handmaid’s Tale? Can anything be done to save us from a dystopian world without privacy? In this short and accessible book, internationally renowned privacy expert Professor Daniel J. Solove reflects on his examination of privacy over the past twenty-five years. On Privacy and Technology describes the profound changes technology is wreaking upon privacy, why these changes matter, and what can be done about them. Solove’s lively discussions of technology and policy, infused with the humanities, argue that the law should focus on power and structure and that it is failing because it is not holding creators and users of technology accountable for the harms they create. Solove deftly weaves together philosophical ideas with concrete practical knowledge from his many years of working with technology companies and talking with policymakers. His book is an essential primer for anyone who wants to understand the threats to privacy in today’s Digital Age and how we can face them effectively. Succinct, understandable, and engaging, On Privacy and Technology is beautifully written, passionate, and filled with surprising insights.
Technical University of Munich (Germany)
Christian Djeffal (Technical University of Munich, Germany)
A hands-on workshop that brings together diverse perspectives on managing regulatory complexity through legal design approaches. After a brief introductory round where participants share their CPDP backgrounds and expectations, the session opens with three dynamic lightning talks (7 minutes each): a law professor discussing theoretical frameworks, an information design expert highlighting visualization techniques, and an in-house counsel sharing practical implementation challenges. Following design thinking methodology, participants break into small groups for a structured process: First, they engage in problem exploration and ideation around specific regulatory challenges. Using creative techniques like brainwriting and affinity mapping, teams develop potential solutions. Ideas are then prioritized through dot voting, ensuring focus on the most promising approaches. Workshop outcomes, including sketches and prototype concepts, will be synthesized into an actionable report. All materials and results will be documented and shared with participants.
ADAPT Centre (Ireland) and Joint Research Centre (Italy) (Europe)
Dave Lewis (ADAPT Centre, Trinity College Dublin, Ireland), Eimear Farrell (Joint Research Centre, European Commission, Italy)
In this interactive workshop, early-stage researchers (ESRs) take the stage to present their cutting-edge work on AI and data governance—ranging from regulatory divergence and risk-based oversight to algorithmic transparency and ethical concerns. Further, renowned experts will explore the challenges and opportunities for ESRs to engage in the global digital regulatory space. Bringing together voices from research, policy, and practice, the session invites participants to actively engage in discussions that bridge theory and implementation. With experts from academia, industry, civil society, and EU institutions in the room, this workshop offers a unique space for dialogue on the future of responsible and evidence-based digital governance in the EU and beyond.
EU AI and Society Fund & European Digital Rights (EDRi) (Europe)
Artificial intelligence hype is at an all-time high. Public and private spending aims to make Europe ‘the AI continent’, and AI tools are posited as the solution to all our problems. While civil society has long critiqued the lack of human rights concerns in these technosolutionist narratives, we need to do more to challenge the severe effects of AI technologies on the environment and climate. This workshop brings together researchers and advocates in the digital rights and climate justice space with those looking to expand their understanding of the problem and strategise about how we can collectively work towards a more just digital future. We will create an informal space where participants engage in a critical reflection on the material relationship between the struggle for climate justice and the proliferation of data centers, extraction of raw materials and labour exploitation that underpin the drive for AI innovation at any cost.
Academic
Policy
noyb (Austria)
Jennifer Baker (moderator, Freelancer, Belgium), Max Schrems (noyb, Austria), Sean O'Sullivan (Barrister, Ireland), Ianika Tzankova (Tilburg University / Rubicon Impact&Litigation, Netherlands), Charles Demoulin (Deminor, Belgium)
The EU Collective Redress Directive is gradually coming into operation, allowing ‘class actions’ for GDPR violations. Courts are also gradually confirming that GDPR violations can create relevant non-material damages. However, so far there has been no boom in such litigation. This panel will look at the reality now that the laws have (largely) been implemented, highlighting practical and legal obstacles.
Academic
Business
Policy
RESOCIAL Project (Leiden University) & VULNERA (Brussels Privacy Hub) (Netherlands)
Gianclaudio Malgieri (moderator, Leiden University & Brussels Privacy Hub, Belgium), Kim van Sparrentak (European Parliament, Europe), Damian Clifford (Australian National University, Australia), Itxaso Domínguez de Olazábal (EDRi, Belgium), Adele Zeynep Walton (Logging Off Club, United Kingdom), Constanta Rosca (Leiden University, Netherlands)
Social media platforms are increasingly becoming spaces of structural dependency and digital addiction, amplifying human vulnerabilities. These dynamics raise pressing questions about the adequacy of current legal and social tools to protect individuals in an age of pervasive connectivity and AI-driven ecosystems. This panel will critically explore the challenges and opportunities in addressing human vulnerability online. Bringing together legal scholars, policymakers (the AI Office and national DSA enforcement authorities), and social scientists, this panel promotes a multidisciplinary dialogue to evaluate the effectiveness of the current regulatory landscape and propose pathways towards a more empathetic and rights-centred digital ecosystem. This session will provide insights for academics, practitioners, and regulators interested in the intersection of technology, human rights, and vulnerability. Join us for an in-depth exploration of the challenges and the way forward in safeguarding dignity and fairness in the digital age.
Academic
Business
Policy
European Trade Union Institute (ETUI) (Belgium)
Aida Ponce Del Castillo (moderator, European Trade Union Institute (ETUI), Belgium), Anastasia Siapka (KU Leuven, Belgium), Anton Ekker (Independent Lawyer, Netherlands), Michele Mole (University of Groningen, Netherlands), Sandy JJ Gould (University of Cardiff, United Kingdom)
In the employment context, both the AI Act and the Directive on Improving Working Conditions in Platform Work introduce rights and obligations concerning the use of AI systems, particularly in automated monitoring and decision-making. While both frameworks intersect with the GDPR, their scope and application differ. The Platform Work Directive applies exclusively to platform workers, whereas the AI Act addresses mainly high-risk AI systems, and even then with important exemptions and ambiguities in scope. This panel will explore whether the existing EU legislative framework provides sufficient safeguards for workers in an increasingly automated workplace, or whether additional or targeted regulation is necessary to fill legal and practical gaps. The discussion will also consider the experience of legal practitioners with recent decisions by Data Protection Authorities in labour-related investigations, and whether these cases can help clarify obligations or address enforcement challenges.
Academic
Business
Policy
eLaw Center for Law and Digital Technologies (Netherlands)
Nanou van Iersel (moderator, Erasmus Law School + eLaw Leiden, Netherlands), Marlon Kruizinga (Erasmus Law School + eLaw Leiden, Netherlands), Tundé Adefioye (St Lucas School of Arts Antwerp, Belgium), Astrid Voorwinden (Infranum, France), Max Gahntz (Mozilla Foundation, Germany)
Public safety is inherently a multi-agency phenomenon. The arrangement of public safety is often framed as a balancing act between implementing controls on the one hand and promoting freedoms and well-being on the other. Data- and AI-'solutions' are often proposed as tools, albeit mostly on the side of the controls. The ELSA Lab project AI-MAPS (NL) studies multi-agency public safety by engaging with a very wide range of stakeholders in use cases around, among others, social 'disorder' and public 'nuisances' in neighbourhoods. In this panel, we aim to conceptualize and address the inherent fragmentation of accountability around public safety arrangements that include technologies, highlighting different narratives around safety, roles, objectives and responsibilities.
Academic
Business
Policy
CRISP (International)
William Webster (moderator, Centre for Research into Information, Surveillance and Privacy (CRISP), United Kingdom), Greg Singh (University of Stirling, United Kingdom), Patricia Lustig (Association of Professional Futurists (APF), Netherlands), Antonia Mochan (Joint Research Centre, European Commission, Europe), Rosamunde Van Brakel (Vrije Universiteit Brussel (VUB), Belgium)
This panel will explore how different organisations and society perceive and plan for the future, especially in relation to the evolution of new digital technologies. This visioning is especially important where the technologies being considered have potential societal harms, such as those associated with enhanced surveillance and/or privacy infringements. Foresight mechanisms can include trend monitoring and analysis, scenario planning, technology road mapping, and foresight workshops and innovation labs. They are designed to help organisations plan for the future. In this panel we will explore the processes of foresight planning from distinctly different perspectives, including from commercial, service, regulatory and literary perspectives. The speakers will contrast how the future is perceived in science fiction, by futurists, and by those who promote, use and regulate such technologies.
George Washington University Law School (United States)
In this session, Professor Daniel J. Solove (George Washington University Law School) discusses notable books about privacy and data protection over the last 50 years.
Business
Policy
Access Now (International)
Daniel Leufer (moderator, Access Now, International), Marwa Fatafta (Access Now, International), Sarah Chander (Equinox Racial Justice Initiative, Europe), Matt Mahmoudi (Amnesty International, International), Lydia De Leeuw (SOMO (Centre for Research on Multinational Corporations), Netherlands)
Israel has deployed digital technologies to fuel the atrocities committed in its war in Gaza. The report by the United Nations Special Committee to Investigate Israeli Practices Affecting the Human Rights of the Palestinian People and Other Arabs of the Occupied Territories highlights how digital technologies are facilitating human rights abuses in Gaza, including actions consistent with the crime of genocide. Israel's digital warfare in Gaza presents an unprecedented blueprint for the militarization of data and digital technologies, “AI-washing” gross violations of international law. It also forces the door wide open for further collusion between Big Tech and the military, posing grave risks to fundamental rights, peace, and security elsewhere. Gaza, and occupied Palestine at large, have become open-air technology expositions: a live laboratory for the testing and marketing of unprecedented technologies of violence, at the expense of Palestinians.
Academic
Business
Policy
CPDP (Belgium)
Hielke Hijmans (moderator, Belgian Data Protection Authority, Belgium), Finn Myrstad (Transatlantic Consumer Dialogue, International), Jacques Mandrillon (Salesforce, France), Estelle Masse (DG Just, Europe), Teodora Lalova-Spinks (Ghent University, Belgium)
The theme of international transfers of personal data outside the EU remains as debated as ever. Questions swirl, for example, as to the suitability of the current range of mechanisms in light of the realities of international data flows, which mechanisms can legally be used for different types of transfers, and the degree to which different mechanisms are, or can be made, future-proof in light of political instability and legal challenge. Much of the discussion on international transfers, however, focuses on legal, technical, and structural issues – for example, how specific third-country adequacy decisions might be evaluated in light of specific principles of EU law. This panel aims to take a different approach and to discuss the current international transfers landscape by putting stakeholders on the ground – individuals and businesses – at the forefront of the discussion. In this regard, the panel will consider, amongst others, questions such as the following:
Academic
Business
Policy
Information Commissioner’s Office (ICO) (United Kingdom)
Declan McDowell-Naylor (moderator, Information Commissioner’s Office (ICO), United Kingdom), Aislinn Kelly-Lyth (Blackstone Chambers, United Kingdom), Halefom Abraha (Utrecht University School of Law, Netherlands), Lindsey Zuloaga (HireVue, United States)
Algorithms are hiring, firing and managing performance of workers all over the world. To support these AI decisions, increasing quantities and types of data are collected from workers through automated monitoring. This panel will explore the ways in which legal regimes can empower workers to individually and collectively control the use of their data, focusing on data protection law—including the principles of transparency and fairness. The panel will also examine the motivation for using AI to make decisions about recruitment and employment; whether AI-driven recruitment and employment practices can be fair; and how that can be achieved if so. The discussion will focus in particular on whether and how legal regimes can support and promote effective and fair deployment of algorithmic tools in the employment context.
Academic
Policy
CPDP (Belgium)
Jo Pierson (moderator, VUB, Belgium), Lucas Reckziegel Weschenfelder (Pontifícia Universidade Católica do Rio Grande do Sul, Brazil), Nafiye Yücedağ (Istanbul University Faculty of Law, Turkey), Evrim Görmüş (MEF University, Turkey), Beatriz de Souza (Lawgorithm Research Association, Brazil), Elif Beyza Akkanat-Öztürk (Istanbul University Faculty of Law, Turkey)
Lucas Reckziegel Weschenfelder, Pontifícia Universidade Católica do Rio Grande do Sul (BR) - A Museum of Great Novelties: The Risks of the Unified Identification Number for Citizens in Brazil; Nafiye Yücedağ & Elif Beyza Akkanat-Öztürk, Istanbul University (TR) - Data Minimisation and the “Reasonably Be Fulfilled by Alternative Means” Test: A Comparative Study of Turkish and EU Approaches; Evrim Görmüş, MEF University (TR) - Navigating the Technopolitics of the Middle East: Implications of AI Cooperation between the UAE and Israel in the Aftermath of the Abraham Accords; Bernardo Fico, Lawgorithm Research Association (BR) - Lessons learned from merger control in digital markets and their contributions to competition in AI markets.
Academic
Policy
Institute for Information Law (IViR) (International)
Brandi Geurkink (moderator, Coalition for Independent Technology Research, International), Mathias Vermeulen (AWO, Belgium), LK Seiling (Weizenbaum Institut, Germany), Kirsty Park (Coimisiún na Meán, Ireland), Paddy Leerssen (University of Amsterdam, Netherlands)
The Digital Services Act (DSA) contains ambitious new rules allowing researchers to demand access to platform data. Over the past year, major steps towards implementation have been taken. For access to publicly available data, large platforms are processing applications from researchers, and the European Commission has launched investigations. For non-public data, a new delegated regulation promises to enable even more detailed investigations. Each of these steps is fraught with legal complexity, including data protection issues. How should research be enabled whilst protecting user privacy? This panel will bring together leading experts to discuss the state of play and next steps.
Australian National University (Australia)
Data protection law is often positioned as a regulatory solution to the risks posed by computational systems. Despite the widespread adoption of data protection laws, however, there are those who remain sceptical as to their capacity to engender change. Much of this criticism focuses on our role as 'data subjects'. It has been demonstrated repeatedly that we lack the capacity to act in our own best interests and, what is more, that our decisions have negative impacts on others. Our decision-making limitations seem to be the inevitable by-product of the technological, social, and economic reality. Data protection law bakes in these limitations by providing frameworks for notions such as consent and subjective control rights and by relying on those who process our data to do so fairly.
Despite these valid concerns, Data Protection Law and Emotion argues that the (in)effectiveness of these laws is often more difficult to discern than the critical literature would suggest, while also emphasizing the conceptual value of subjective control. These points are explored (and indeed, exposed) by investigating data protection law through the lens of the insights provided by law and emotion scholarship and demonstrating the role emotions play in our decision-making. The book uses the development of Emotional Artificial Intelligence, a particularly controversial technology, as a case study to analyse these issues.
The Centre for Democracy and Technology Europe (CDT Europe) (Europe)
Silvia Lorenzo Perez (The Centre for Democracy and Technology Europe (CDT Europe), Europe)
The spyware market in the EU presents a significant challenge for regulators, civil society, and industry stakeholders. Despite growing awareness and some regulatory efforts, the EU has not done enough to curb the proliferation and abuse of spyware. Weak enforcement, export control loopholes, and a lack of accountability have allowed the market to thrive. This workshop will assess the state of spyware production, trade, and use in the EU, exploring potential regulatory interventions at the EU level. Experts, policymakers, and civil society representatives will come together for an interactive roundtable, designed to encourage dialogue between regulators, industry representatives, and civil society experts. The session will:
- Discuss the scale and structure of the spyware market in the EU, focusing on domestic production, intra-EU trade, and third-country imports.
- Highlight challenges in regulation, enforcement, oversight, and accountability of spyware abuses.
- Identify opportunities for a stronger, rights-respecting EU regulatory framework.
UGent-imec (Belgium)
Beatriz Esteves (UGent-imec, Belgium), Ruben Verborgh (UGent-imec, Belgium), Wout Slabbinck (UGent-imec, Belgium)
People worried about privacy often think our personal data is being shared too easily. But the real issue is actually the opposite: our data doesn't flow smoothly enough, pushing companies to resort to cheaper or easier—sometimes illegal—shortcuts just to speed up data exchanges, which more often than not aren’t the most privacy-friendly options. We want to build a world where personalized, tech-assisted, digital trust enables parties to reliably and sustainably exchange data, goods, and services, negotiating on the appropriate legal ground to do so, as an alternative to today’s overuse and misuse of informed consent. To facilitate these mutually beneficial interactions, we are developing technologies to assist with legal processes towards creating and maintaining long-term trust relationships between humans, businesses and machines. In this interactive workshop, we present these technologies and invite participants to actively engage in discussions that bridge theory and implementation.
Academic
Business
Policy
AWO (Belgium)
Lex Zard (moderator, Harvard Carr Center for Human Rights Policy, United States), Nick Botton (AWO, Belgium), Denis Sparas (European Commission, Europe), Harriet Kingaby (Conscious Advertising Network, United Kingdom), Nataliia Bielova (Inria, France)
With the passing of the Digital Services Act and Digital Markets Act, and a potential Digital Fairness Act being discussed by political actors in the new mandate of the European Commission, the EU's legislative and enforcement agenda on online advertising now stands at a crossroads. This panel will look into the impact on the online advertising market of (1) industry-led initiatives such as Apple's ATT or Meta's pay-or-consent initiative, (2) the increased use of AI for targeting, (3) the new online ads transparency provisions of the DMA and the DSA, and (4) outcomes of new relevant court cases such as the DOJ Google Search trial. Can we identify potential regulatory gaps, or does Europe have all the tools it needs to properly regulate this market?
Business
Policy
European Commission (DG JUST) (Europe)
Tamar Kaldani (moderator, Independent, Georgia), Louisa Klingvall (European Commission (DG JUST), Europe), Declan McDowell-Naylor (Information Commissioner's Office, United Kingdom), Beatriz de Anchorena (Argentina's Agency for Access to Public Information, Argentina)
The processing of personal data along the lifecycle of AI systems – often presenting high risk to fundamental rights – is an essential feature of the rapidly expanding worldwide AI technologies. National data protection authorities (DPAs) across the world already closely follow both technological and regulatory developments, in particular when those create additional enforcement tasks. While national DPAs will have a leading role to play in enforcing secure and privacy-oriented development of AI systems, their cross-border collaboration may prove more crucial than ever before. Hence, the panel will explore opportunities and avenues for cooperation between DPAs in light of the anticipated expansion of tasks related to the supervision of AI systems. Drawing on the experience of effective international cooperation, speakers representing various jurisdictions will reflect on the needs and prospects for future joint activities in the age of AI.
Academic
Business
Policy
Data Privacy Brasil (Brazil)
Adriana Schnyder (moderator, Digitus Legal, Spain), Bruno Bioni (Data Privacy Brasil/IDP, Brazil), Carmen Alvarez Valdes (Microsoft, Spain), Adeboye Adegoke (Luminate Foundation, Nigeria), Alison Gillwald (Research ICT Africa, South Africa)
Data, personal and otherwise, are often framed both as an enabler of technological innovation and digital transformation and as warranting limitations – through data governance more generally, and through data protection as a fundamental right of individuals. Looking back at the landscape of global tech governance during 2024, AI was central, but related subjects such as Digital Public Infrastructure also gained prominence, from the G20 to various UN forums and processes to national-level debates. This panel will discuss the most recent developments in global governance forums, their impacts on domestic regulation and governance, the role of data, including personal data, in driving innovation, and the learnings from past and current efforts to govern data and protect the fundamental right to data protection amid new challenges moving forward.
University of California at Santa Cruz (United States), LSTS, VUB (Belgium)
This open access book is about how Israel is using Algorithmic Intelligence (AI) and other computer technology in military operations in the Gaza Strip to achieve goals based on ancient religious entitlements. Changes in Israel Defense Force (IDF) ethical codes and innovation policies have not led to victory, but have resulted in a wide range of War Crimes and Crimes Against Humanity in a strategy focused on The Torture of Gaza, which includes ethnic cleansing and is approaching genocide. It covers the history of using AI in war, and current U.S. and Israeli military AI technologies such as Maven, Iron Dome, Pegasus, the Alchemist, Gospel, Lavender, and Where’s Daddy, all tested and perfected in the Palestinian Laboratory and marketed as such. This book also places the current data-driven and AI-directed assault on Palestine in the context of Postmodern War, which precludes military victories and enshrines the profits and power of the U.S.-Israeli military-industrial complex in a system of perpetual war and militarized technological innovation. Through an analysis of Israeli military policies, AI, sacred texts, and the basic tenets of postmodern war, the book ultimately reveals the limits of the IDF’s embrace of illusions about new technologies producing actual victory. War today is about winning hearts and minds, not body counts. As fundamentalist politics achieve more and more power around the world in the context of new information technologies, there is growing danger to the future of all of us.
Academic
Policy
Centre for Fundamental Rights - Hertie School (Germany)
Francesca Palmiotto (IE University, Spain), Derya Ozkul (University of Warwick, United Kingdom), Joanna Parkin (European Data Protection Supervisor, Europe), Eleftherios Chelioudakis (Homo Digitalis, Greece)
As displaced populations increasingly interact with technologies, effective digital rights protection is more urgent than ever. This panel discusses the role data protection law plays in safeguarding the rights of asylum seekers and refugees. It examines the legal and practical challenges that arise in ensuring privacy, consent, and fairness for these vulnerable individuals. Our panel aims to foster a collaborative dialogue between academia (particularly researchers of the AFAR project), NGOs, and data protection authorities to close gaps between law and practice and to empower displaced individuals to exercise their digital rights. To create a dynamic session, we will invite stakeholders from the fields of Data Privacy and New Technologies to attend our panel as guests. These stakeholders will be engaged during the Q&A session to provide additional perspectives and foster a vibrant, interdisciplinary exchange of ideas.
Academic
Business
Policy
ALTEP DP (Belgium)
Rocco Saverino (moderator, ALTEP DP (VUB), Belgium), Pawel Hajduk (AI In-house lawyer, Poland), Cláudio Teixeira (BEUC, Europe), Sophie Stalla-Bourdillon (Vrije Universiteit Brussel (VUB), Belgium), Sabrina Küspert (AI Office, Europe)
(Over-)Regulation is commonly perceived as stifling innovation, particularly in the competitive race to develop AI. The EU Commission finds itself in a challenging position, balancing the desires of GPAI providers who advocate for lenient enforcement with those who call for stronger measures to protect fundamental rights. How can the EU Commission and its newly established AI Office ensure harmonised enforcement across these various frameworks while effectively protecting individuals' rights and promoting innovation in the AI sector? To address these complex issues, this panel will delve into the role of the AI Office, examining how it interacts with other enforcement mechanisms within the framework of adjacent regulations such as the GDPR, DSA, and DMA. By exploring these dynamics, the discussion aims to shed light on the challenges and opportunities presented by regulatory enforcement in the AI domain.
Academic
Business
Policy
Meiji University (Japan)
Yasutaka Machimura (moderator, Seijo University, Japan), Yoichiro Itakura (Hikari Sogoh Law Offices, Japan), Akemi Yokota (Meiji University, Japan), Laura Drechsler (KU Leuven, Belgium)
Currently in Japan, a "Triennial Review Study Group" has been convened to revise the Act on the Protection of Personal Information, and an interim report has been issued. This includes clarifying provisions on the handling of personal information, introducing Consumer Organization Collective Litigation (group action), and reviewing the surcharge, administrative order, and criminal penalty systems. Discussions on AI regulations are also underway at the Cabinet Office's AI System Study Group. We will review this situation from the perspective of independent researchers and practitioners, and provide responses from researchers and practitioners on the European side. We will be conducting this international discussion with a particularly strong interest in the future of adequacy decisions.
Wojciech Wiewiórowski (European Data Protection Supervisor, Europe)
Center for AI and Digital Policy (Europe)