The proposed EU AI Act has highlighted the need for standards that support the ethical alignment of applications. The IEEE 7000 standard proposes concrete processes for building systems that embody a whole spectrum of social values. First case studies give hope that the standard’s processes deliver what was promised: a shift from shareholder value to social value. But are companies ready to follow such a process framework? Is the positive vision of ‘technology for humanity’ realistic in times of global competition on cost and technological sovereignty; times when personal-data market models and the attention economy seem firmly established? Are thorough planning and documentation, together with tight control over ecosystem partners, in contradiction with the current mantra of agile system development? Do we need a general return to ‘risk-thinking’ for any kind of system development? These concerns crystallize into four questions:
• Are corporate strategists willing to sacrifice profit margins for human values?
• Are engineers ready to forgo some agility for the sake of value-based requirements engineering and transparent system design (which implies documentation)?
• Is it realistic to establish and sustain strong control over ecosystem partners?
• Are risk-based design approaches really needed only for high-risk applications, or for system development in general?