Deceptive design practices, or ‘dark patterns’, are used to steer consumers into actions that run against their own interests, to the benefit of companies. Common privacy-invasive dark patterns include default settings that maximise data collection, ambiguous language designed to confuse, and consent flows that nudge users toward particular choices. Such practices are particularly damaging in the context of the surveillance economy, where large platforms use them to entrench their market power. The harms caused by dark patterns are not distributed evenly: they fall hardest on people in vulnerable situations, such as those on low incomes, children, older people, and people with disabilities. Existing policies, such as the GDPR in the EU or Section 5 of the FTC Act in the US, are not fully equipped to deal with manipulative design practices at scale. However, legislative initiatives are under way on both sides of the Atlantic.
• What are the drivers/business objectives of ‘dark patterns’?
• How do dark patterns affect individuals and society at large?
• What policy solutions are needed internationally to deal with such practices?
• How, if at all, will AI technologies affect dark patterns in the future?