An established line of thinking stresses the importance of designing human values into the development of technology. These values include dignity, fairness, autonomy, and transparency, and extend beyond fundamental, legally recognised human rights. The dominant view is that such values will not be accommodated naturally; there must be a conscious effort to design technology to support them. The emergence of smart technologies such as AI, with their opaque information processes, has made this challenging. There is a clear movement to ensure appropriate governance and oversight of their use, so as to promote human values while managing potential risks. The GDPR and the proposed AI Act illustrate the growing importance of designing-in the regulation of AI and other data-processing systems, and the human values in question have been among those articulated in the host of ethical frameworks produced in recent years. This panel explores the process of designing-in human values by contrasting the design phase of development with the lived experiences of those using the technology. It highlights discrepancies between design and implementation, as well as valuable lessons about how to enhance the design process.
• What are human values and what is the relationship between human values and new technology?
• How can human values be designed into new digital technology, including AI?
• Are there lived-experience examples of how digital technologies reflect earlier attempts at designing-in human values?
• How can lived experiences enhance the process of technology design so that it better reflects the importance of human values?