Towards a Safe Harbour for Public Interest AI Research

  • Panel
  • Maritime
  • Thursday 22.05 — 08:45–10:00

Organising Institution

Mozilla Foundation

Belgium

Mozilla is a global nonprofit dedicated to keeping the Internet a public resource that is open and accessible to all.
  • Academic 2
  • Business 1
  • Policy 3
The value of independent scrutiny of generative AI systems, in the form of independent testing, red teaming, bug bounties, and other kinds of vulnerability discovery, is well understood. But the information asymmetry between AI providers and users is massive. And while large language models feast on the internet for training data, public interest researchers are starving for safe and reliable data access, unsure of their legal protections should companies take issue with their research. As the DSA launches its structured data access regime, the AI Act’s Code of Practice on GPAI is raising important questions about what constitutes an appropriate third-party evaluator and what role safe harbours should play in AI safety and evaluation. But how should public interest research be encouraged? Who counts as a legitimate and independent third party? And what are the tensions and trade-offs?

Questions to be answered

  1. What structured AI researcher access programs exist currently, and what are they like?
  2. What would a good AI researcher access program for foundation models look like?
  3. What are the concerns and risks related to researcher access to foundation models, e.g. security, privacy, and trade secrets?
  4. How should regulators and agencies ensure researcher access, for example in the health sector? What existing lessons do we have from cybersecurity, from platform accountability, or from other fields?

Moderator

Maximilian Gahntz

Mozilla Foundation - Germany

Maximilian Gahntz (he/him) is the Mozilla Foundation's AI Policy Lead, working on the regulation and governance of AI around the world. Previously, he led work on data governance and platform accountability. Before joining Mozilla, he was a fellow of the Mercator Fellowship on International Affairs, working on the EU's AI Act at the European Commission.

Speaker

Julia Keseru

Independent - Hungary

Julia Keseru works at the intersection of emerging technology, justice, power, and human rights. She is a researcher, writer, and activist focused on studying the societal impact of emerging technologies, advising organisations—both large and small—on their tech and data strategies, and advocating for a just, fair, and sustainable internet.

Speaker

Esme Harrington

AWO - United Kingdom

Esme Harrington works for the data rights law firm and consultancy AWO. She conducts research and analysis on AI governance, platform accountability, and data protection in the European Union and the United Kingdom. At AWO, she leads the Algorithm Governance Roundup, a monthly newsletter sharing AI policy developments, interviews with international stakeholders and standard-setters, and research. She holds a Master of Laws from the London School of Economics (2021) and a first-class BA (Hons.) in law from the University of Cambridge (2020).

Speaker

Martin Degeling

AI Forensics - Germany

Martin Degeling is a post-doctoral researcher focused on black-box auditing of algorithmic systems, usable privacy and security, and data protection. He currently works as a freelancer, mainly for AI Forensics and ISD. Before that, he worked at Interface auditing TikTok, conducted research at Ruhr-University Bochum, and worked on personalized privacy assistants at CMU.