Wednesday 22 May 2024
BEUC - The European Consumer Organisation (BE)


For the AI Act (AIA), now comes the hard part: ensuring that its implementation, application and enforcement result in trustworthy AI and the protection of individuals. The AIA only imposes specific obligations on certain AI systems; other EU laws (GDPR, consumer protection, product safety, product liability) will be crucial to protect people.

A lot will depend on how effective public and private enforcement will be. Procedural matters, such as the allocation of the burden of proof, will play an essential role. Currently, it is up to enforcers or the claimant to prove that an AI system does not comply with the AIA or that it is "unsafe" or "defective". In the complex and opaque world of AI, this task may prove very challenging.

With the AI Liability Directive (AILD) stalled and the AIA relying heavily on standards, the discussion will also focus on accountability and compliance.

  • Who will bear the burden of proof when an AI system leads to harm: the consumer, the deployer or the provider of the system? And how can proof be provided?
  • What should EU liability rules for AI look like to adequately protect consumers?
  • What will be the role of consumers and civil society in ensuring the successful enforcement of the AI Act?
  • How will the reliance on certification bodies and harmonized standards affect the burden of proof, for the AI Act and for related legal instruments such as liability rules? 
