Insurance companies could use algorithmic systems to set premiums for individual consumers, or to deny them insurance altogether. More and more data are becoming available to insurers for risk differentiation. For instance, some insurers monitor people’s driving behaviour to estimate risks. To some extent, risk differentiation is necessary for insurance, and it can be considered fair when high-risk drivers pay higher premiums. But there are drawbacks. Algorithmic decision-making could lead, unintentionally, to discrimination on the basis of, for instance, skin colour. Too much risk differentiation could also make insurance unaffordable for some consumers, and could threaten the risk-pooling function of insurance. Furthermore, risk differentiation might result in the poor paying more: a consumer who lives in a poor neighbourhood with many burglaries might pay more for house insurance because the risk of burglary there is higher. Hence, on average, poor people might pay more.
• How should discrimination on the basis of, for instance, skin colour be avoided?
• Can non-discrimination norms be built into the computer systems of insurance companies?
• Are current laws sufficient to protect fairness and the right to non-discrimination in the insurance area?
• Should the law protect poor people against paying extra?
• Is it always reasonable for high-risk insurance consumers to pay higher premiums?
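On the second question above, one way a non-discrimination norm could be expressed in software is as an automated fairness check. The sketch below is purely illustrative, not an established industry practice: the group labels, premium figures, and tolerance threshold are all assumptions, and a demographic-parity-style comparison of average premiums is only one of several possible fairness criteria.

```python
def mean(values):
    """Arithmetic mean of a non-empty list of numbers."""
    return sum(values) / len(values)

def parity_gap(premiums_by_group):
    """Largest difference in mean premium between any two groups.

    A demographic-parity-style check: a large gap between groups
    defined by a protected characteristic could flag a premium
    model for human review.
    """
    means = [mean(p) for p in premiums_by_group.values()]
    return max(means) - min(means)

# Illustrative annual premium quotes (amounts are invented).
quotes = {
    "group_a": [480, 510, 495],
    "group_b": [520, 505, 500],
}

TOLERANCE = 50  # assumed threshold, purely illustrative
gap = parity_gap(quotes)
print(f"parity gap: {gap:.2f} -> "
      f"{'within' if gap <= TOLERANCE else 'exceeds'} tolerance")
```

Such a check does not resolve the legal question of which gaps count as discrimination; it only shows that, once a norm is made precise, it can be monitored automatically.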