Digital innovation has reshaped society, bringing benefits but also raising critical issues. These issues have often been addressed through data protection law, but recent applications of AI have revealed a wider range of potentially affected interests. A broader approach focusing on the impact of AI on fundamental rights and freedoms is therefore emerging. Several provisions in the draft EU regulation on AI, as well as in international and corporate documents, push in this direction but do not outline concrete methodologies for impact assessment. Moreover, existing human rights impact assessment (HRIA) models are not easily replicable in the AI context, despite the important role such assessments play in relation to the risk thresholds set out in regulatory proposals. The panel will discuss how fundamental rights can be effectively put at the heart of AI development, providing concrete solutions for a rights-oriented development of AI.
• Are there different types of AI risk assessment and, if so, what are they?
• Who should be entrusted with conducting HRIAs, when and how?
• What are the key criteria that fundamental rights impact assessments need to fulfil to achieve the intended goals?
• How can the HRIA be operationalised in the context of AI by providing measurable thresholds for risk management and human rights due diligence?