The performance of DPIAs, where processing operations present a high risk to the rights and freedoms of natural persons, is a novel legal obligation under Article 35 of the GDPR. It is a promising obligation with the potential to contribute to more effective data protection, given its proactive and comprehensive nature. However, the DPIA also comes with challenges. For example, the concept of 'high risk' and the assessment of identified risks in terms of their 'likelihood' and 'severity' are novel in data protection. This panel will focus on the role this obligation could play in achieving a high level of data protection, especially in the era of algorithms and machine learning. A discussion of practical experience within the industry will substantially contribute to our understanding of the challenges and potential solutions.
• How should the concepts of high risk, likelihood and severity be understood within the context of data protection?
• What key aspects should data controllers take into consideration when building a data protection risk management framework?
• Which factors may influence risk perceptions, and how should risk be measured?
• What lessons can be learned from the lists of processing operations subject to a DPIA published by the Data Protection Authorities?
• How is the risk to rights and freedoms perceived from the perspective of the regulators?
• How could DPIAs encourage efforts to ensure fairness, non-discrimination and procedural justice in Machine Learning (ML) applications?
• How could the assessment of the principles in Article 5 be operationalized in the DPIA process?
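To make the 'likelihood' and 'severity' question concrete, one common way practitioners operationalize risk assessment is a scoring matrix. The sketch below is purely illustrative: the scale labels, thresholds, and the multiplicative scoring are assumptions for discussion, not a method prescribed by the GDPR or endorsed by this panel.

```python
# Illustrative likelihood x severity matrix for ranking data-protection risks
# in a DPIA. Scales and thresholds are hypothetical assumptions, not legal
# requirements; real frameworks (e.g. a DPA's methodology) may differ.

LIKELIHOOD = {"remote": 1, "possible": 2, "likely": 3, "almost_certain": 4}
SEVERITY = {"negligible": 1, "limited": 2, "significant": 3, "maximum": 4}


def risk_level(likelihood: str, severity: str) -> str:
    """Combine the two scales into a coarse risk ranking."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 9:
        # A residual high risk may trigger prior consultation (Art. 36 GDPR).
        return "high"
    if score >= 4:
        return "medium"
    return "low"


# Example: a likely event with significant impact ranks as high risk.
print(risk_level("likely", "significant"))  # high
print(risk_level("remote", "limited"))      # low
```

Even this toy example surfaces the panel's questions: the choice of scales, thresholds, and the combination rule all encode contestable judgments about what 'high risk' means.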