AI is already seamlessly integrated into a variety of sectors, from healthcare to transportation. Yet surveys indicate that trust in AI remains low, especially among individuals in the U.S. and EU. Much of this seems to stem from fundamental misunderstandings of what artificial intelligence and machine learning actually are. At the same time, improving transparency in AI is a moving target: there are hundreds of competing definitions, and new findings on responsible AI development, deployment, and integration emerge continually. Join us for a conversation about what meaningful transparency in AI looks like in practice and how organisations should prepare for GDPR-like rules for AI governance.
• What does “transparency” mean in the context of AI, who are its target audiences, and why is it beneficial?
• Do we need to understand in detail how an AI system works, or rather the positive and negative effects it can produce from its inputs?
• What obligations or incentives should be put in place, and how, when, and on whom?
• How can we effectively demonstrate and verify that obligations are fulfilled and incentives taken up?