There are many possible scenarios for using AI to prevent and prosecute crime. While smart surveillance tools can help to predict crime and to support finding and analysing evidence, automated suspicion algorithms can even open criminal investigations without human intervention. After the fact, AI can be used to support arrest and sentencing decisions; US courts already use software applications such as COMPAS to support such decisions. These scenarios raise questions about the limits of AI in the security sector and the need to regulate its use. Data protection law offers only general answers concerning automated decision-making, while questions regarding anti-discrimination, the presumption of innocence and other issues remain wide open. This panel aims to identify the most important questions raised by AI regulation in security law, and to find answers to them, along the lines of European human rights.
• Which regulations are necessary for predictive policing?
• Which principles should apply to smart prosecution technologies (e.g. sentencing algorithms)?
• Are further anti-discrimination laws needed for the use of AI?
• How can the use of AI respect the presumption of innocence?