Dark patterns are under the spotlight in policymaking, research and practice: the DSA, DMA, AI Act and Data Act proposals define them as design elements that impair user autonomy and informed decision-making. However, the impact on autonomy is difficult to prove, and a focus on graphical interface elements alone may not account for next-generation deceptive patterns emerging from personalised hypernudges, human-robot manipulation, voice and haptic interfaces, and the like. How, then, might we reliably detect, test, measure and regulate digital influences when they are so varied, and when identifying what constitutes manipulation often rests on more or less paternalistic views? The individual and collective perception of adverse effects, together with demonstrable direct and indirect harms, may serve as ideal proxies for identifying and reporting dark patterns.
• Which attributes can we leverage to reliably measure the presence of dark patterns in digital services?
• Which legal, technical and design instruments do we need to quantify the harms of dark patterns at scale?
• How might we determine the risks engendered by dark patterns, and are these risks more severe for certain “vulnerable” users?
• How might we detect manipulation and potential for harm in emerging technologies?