Inferential data allow conclusions that extend beyond the immediate data at hand; inferential statistics, for example, lets us generalize from our data to broader conditions. With more data and powerful machine learning tools, AI can magnify the ability to infer sensitive information about individuals and to create new data sets. Algorithmic inferences can lead to new intrusions into people’s privacy and may be difficult to understand, verify, or refute. Inferred data sets can be used for microtargeting, nudging political choices, and spreading misinformation, with little visibility for the targets, watchdogs, or oversight bodies. Algorithmic inferences can thus be exploited to encroach on democratic freedoms. European data protection law may not sufficiently guard against these novel risks: it places heavy emphasis on data collection and very little on how data is assessed, used, and shared. This panel will discuss how inferred data should be reasonably used and protected.
• What are the novel opportunities and challenges of inferential analytics?
• What are the strengths and weaknesses of European data protection law, and how can we close accountability gaps?
• How can we balance privacy and transparency against business interests (IP law, trade secrets) and freedom of expression?
• What governance strategies are most promising for addressing inferred data, microtargeting, and the spread of misinformation?