Wednesday 22 January 2020
Area 42 Grand

Panel Description

Automated facial recognition and face analysis are increasingly used in both the private and public sectors. Security is the most documented and debated application area, in particular their use by police forces, but these technologies are also deployed, or experimented with, in many other areas, either to make life easier or to improve the "customer experience". For example, they can make it possible to unlock a mobile phone without entering a secret code, or to pay in a shopping center without using cash or a payment card. The serious threats that automated facial recognition poses to privacy, and its impact on human rights, are widely recognized, and organizations such as the CNIL have called for a democratic debate on the topic. The key issue that we would like to discuss in this panel is the best way to control the development of these technologies. In particular, we would like to ask the following questions:

  • Is it possible to draw a red line and identify uses of automated facial recognition or facial analysis (sentiment analysis, etc.) that should be banned?
  • Considering the creeping dissemination of these technologies, is it possible that even seemingly mundane uses of automated facial recognition contribute to acclimatizing people, so that their generalization will soon come to seem natural (meaning the end of anonymity)?
  • If automated facial recognition systems have to be assessed on a case-by-case basis, how should we proceed to evaluate them? Are privacy impact assessment methods well suited to this purpose?
  • Do we need new, dedicated regulation for automated facial recognition in Europe, or is the GDPR sufficient?
