Thursday 23 May 2024
HT Aula
Microsoft (US)


In this panel discussion, we will dive deeper into the concept of “high-impact capabilities”, based on which GPAI models can be classified as presenting “systemic risk” under the EU AI Act. The AI Act will introduce obligations for providers of general-purpose AI (GPAI) models, as well as additional requirements for the subcategory of general-purpose AI models with systemic risk. Currently, the AI Act includes only a single quantitative criterion for determining systemic risk, based on the amount of computing power used for training the model (>10^25 FLOPs). Enforcement and advisory authorities such as the AI Office and the scientific panel of experts may decide to consider additional criteria when determining whether a GPAI model poses systemic risk, such as the number of business users or end-users, the number of parameters of the model, and the quality or size of its data set. The aim of the discussion is to explore the different criteria put forward by the AI Act to determine such risk, and to link these criteria to a model’s actual capabilities and impact on the market.
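To make the quantitative criterion concrete, the sketch below checks whether a model's estimated training compute exceeds the 10^25 FLOP threshold. It uses the common ~6·N·D rule of thumb (roughly 6 floating-point operations per parameter per training token); this heuristic and the example model sizes are illustrative assumptions, not figures from the AI Act itself.

```python
# Rough check against the EU AI Act's 10^25 FLOP presumption threshold.
# The 6*N*D approximation and the example models below are assumptions
# for illustration, not part of the Act.

THRESHOLD_FLOPS = 1e25  # cumulative training compute above which systemic risk is presumed

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the ~6 FLOPs/parameter/token heuristic."""
    return 6 * n_params * n_tokens

# Hypothetical models (parameter count, training tokens)
examples = [
    ("7B params, 2T tokens", 7e9, 2e12),
    ("400B params, 15T tokens", 400e9, 15e12),
]

for name, params, tokens in examples:
    flops = training_flops(params, tokens)
    presumed = flops > THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> systemic risk presumed: {presumed}")
```

Under this heuristic, a 7B-parameter model trained on 2T tokens (~8.4e22 FLOPs) falls well below the threshold, while a 400B-parameter model trained on 15T tokens (~3.6e25 FLOPs) would exceed it — illustrating why a single compute cut-off may diverge from a model's actual capabilities, one of the questions this panel takes up.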

  • What are the current challenges in measuring and evaluating the systemic risks posed by general-purpose AI models?
  • Is there an emerging global consensus on the understanding of systemic risk? What are some differences between the U.S. and EU regulatory approaches?
  • What is the current state of research into establishing reliable performance-based evaluations?

