Balancing (AI) Act: Risk Assessment and Mitigation under the GPAI Code in Practice

  • Panel
  • Orangerie
  • Thursday 22.05 — 14:15 - 15:30

Organising Institution

Microsoft

Belgium

  • Academic: 1
  • Business: 3
  • Policy: 2

In this panel discussion, we will delve into the intricacies of the EU AI Act’s General-Purpose AI Code of Practice (CoP), which aims to provide a comprehensive compliance framework for managing GPAI models with systemic risk and to ensure that such models are developed and deployed responsibly. The discussion will explore the methodologies and tools available for risk assessment and mitigation at the model level, the roles and views of different stakeholders in mitigating these risks, and the challenges and trade-offs involved in implementing and standardizing effective risk management strategies. Key questions to be addressed include: What are the current state-of-the-art methods for identifying and assessing AI model risks? How can the CoP contribute to mitigating these risks effectively, and what is the role of each stakeholder? What are the current challenges in implementing the CoP? Which areas of the CoP could be improved in future, in view of emerging AI model risks? And what is the overall assessment of the CoP?

Questions to be answered

  1. Overview of the EU AI Act’s General-Purpose AI Code of Practice (CoP): What is the CoP, and how does it aim to provide a comprehensive compliance framework for managing GPAI models with systemic risk and to ensure their responsible development and deployment?
  2. Risk Assessment and Mitigation Tools: What are the methodologies and tools available for risk assessment and mitigation at the model level?
  3. Stakeholder Perspectives: What are the roles and views of different stakeholders on mitigating AI model risks?
  4. Challenges and Future Improvements: What are the challenges in implementing the CoP, which areas could be improved in view of emerging AI model risks, and what is the overall assessment of the CoP?

Moderator

Anahita Valakche

Microsoft - United States

Anahita is part of Microsoft’s Responsible AI Public Policy and European Government Affairs teams, which seek to advance effective and interoperable public policy that helps Microsoft, its customers, and the world secure the benefits of AI while ensuring its trustworthy development and deployment. Anahita specializes in EU policy on AI and data protection, with a focus on Microsoft's AI Act compliance and participation in the General-Purpose AI Code of Practice. Before joining the Office of Responsible AI, she worked on data protection and human rights due diligence as part of Microsoft's European Government Affairs team. Prior to joining Microsoft, she focused on cybersecurity and privacy law and policy at Dell Technologies. Anahita holds a BA in Government and Art from Colby College and an MA in International Relations and Global Conflict Studies from Leiden University.

Speaker

Naman Goel

Tony Blair Institute - United Kingdom

Dr. Naman Goel is a Senior Policy Advisor for AI Policy and Governance at the Tony Blair Institute in London. His work focuses on supporting the development of trustworthy and human-centric AI systems. Naman earned his Ph.D. at the School of Computer and Communication Sciences, EPFL, in Switzerland, and previously studied Computer Science and Engineering at the Indian Institute of Technology (BHU), Varanasi. Further information about Naman and his work is available on his website.

Speaker

Marta Ziosi

Oxford Martin AI Governance Initiative, University of Oxford - United Kingdom

Dr. Marta Ziosi is a Postdoctoral Researcher at the Oxford Martin AI Governance Initiative, where her research focuses on standards for frontier AI. She currently serves as a vice-chair for the EU GPAI Code of Practice in WG2: Risk identification and assessment, including evaluations. Marta has a background in policy, philosophy, and mathematics, and she holds a PhD from the Oxford Internet Institute, where her doctoral research focused on algorithmic bias.

Speaker

Marta Przywała

SAP - Belgium

Marta Przywała is SAP's Government Affairs Lead for EU AI and Cybersecurity Policy and an Aspen Young Leader (2024 Program). Based in Brussels, Belgium, she is a specialist in European affairs with in-depth knowledge of the EU legislative process. She has 10 years of experience in cybersecurity policy, advisory, and research roles in both the private and public sectors. Marta joined SAP in 2020 and has since worked across multiple tech and industry policy areas. Before joining SAP, she worked in the Cybersecurity Policy Unit of the European Commission’s DG CNECT. Marta is a graduate of the double-degree MA program in political science run jointly by the Jagiellonian University in Kraków and the University of Strasbourg, taught in English and French.

Speaker

Friederike Grosse-Holz

European Commission - Europe

Friederike Grosse-Holz is the Lead of AI Safety at the EU AI Office. Committed to building a flourishing future, she has a background in biology and AI safety, with experience working on the convergence of AI and the life sciences at the UK AI Security Institute.