A plethora of ethics manifestos, guidelines and frameworks call for responsible AI and data practices, and legislation to regulate AI is under way. But how can the gap to practical application be closed? How can organisations implement practices that encourage the responsible use of AI systems, and how can increasingly digitized democratic societies establish the necessary checks and balances? Looking beyond the good intentions of ethics guidelines, this panel discusses which practices are most effective in aligning the design and use of AI systems with the values of our open and democratic societies. It investigates how practical approaches, such as impact assessments and audits, together with the work of oversight bodies, can help establish responsible and safe uses of AI and big data and create genuine accountability.
• Should impact assessments be mandatory?
• Who is responsible for oversight and enforcement?
• Can we hold algorithms accountable?
• What are the limits to good governance of AI?