Core elements required for the implementation of the EU AI Act are harmonised technical standards covering risk, quality, data management, testing and verification. Some standards suitable for certifying AI systems are under development internationally by expert committees such as ISO/IEC JTC 1/SC 42 on AI, while European standardisation bodies, including CEN/CENELEC JTC 21 on AI, are addressing how such standards can serve as harmonised standards for the AI Act.
This raises several concerns: the legitimacy of developing such standards in bodies dominated by experts from large multinationals; whether the level of societal stakeholder involvement in this technical rule-making is sufficient to protect fundamental rights; and how effective standardised technical rules can be across different high-risk AI applications and across different member states’ enforcement of the AI Act.
Four questions will be addressed by the panel:
• How well prepared are the bodies that will undertake AI certification and market surveillance for the technical enforcement needed to protect established fundamental rights?
• How can legitimacy and democratic oversight of AI rule-making be effectively extended to the development of standards by experts from multinationals in international committees?
• How can regulators, societal stakeholders and standards developers react effectively to new AI harms that emerge quickly?
• How can regulators, societal stakeholders and standards developers collaborate to determine appropriate, application-specific thresholds for the technical assessment of AI risks, e.g. voice recognition bias tests in education vs. emergency dispatch applications?