Empirical methods for fairer AI – Lessons from auditing a Dutch public sector risk profiling algorithm

  • Panel
  • Class Room
  • Wednesday 21.05 — 17:20 - 18:40

Organising Institution

NGO Algorithm Audit

Netherlands

A European knowledge platform for AI bias testing and AI standards. Connecting statistical, legal and ethical frameworks for responsible AI through a case-based approach.
Over the years, lessons have been learnt from Dutch scandals involving risk profiling algorithms. Investigations conducted by consultants, academics and NGOs have contributed to a growing body of public knowledge from which best practices emerge. This panel explores the interplay between the qualitative principles of law and ethics and the quantitative methodologies of statistics and data analytics. Specifically, we shed light on how empirical approaches can help interpret and contextualize open legal norms under EU non-discrimination law and public administration law. Examples are drawn from a recent audit conducted in collaboration with the Dutch Executive Agency for Education (DUO), in which aggregated statistics on the migration background of 300,000+ students were analyzed. We discuss whether bias testing inevitably leads to the feared ‘battle of numbers’, or whether it can play a critical role in fostering meaningful democratic oversight of AI.

Questions to be answered

  1. To what extent should a state collect sensitive data attributes, such as ethnicity and race, about its inhabitants?
  2. Why do quantitative insights relating to the proxy nature of profiling characteristics matter?
  3. What hinders the widespread adoption of empirical methods for testing bias in algorithmic systems?
  4. Does bias testing inevitably lead to the feared ‘battle of numbers’, or could it play a crucial role in establishing meaningful democratic oversight of AI?