DATE
Thursday 25 May 2023
VENUE
Grande Halle
SLOT
ORGANISED BY
Uber (US)

Panel Description

There is increasing recognition that automated decision-making at scale, using AI or ML, is changing the nature of discrimination. Where discrimination was once driven primarily by individuals and systemic barriers, machine learning systems can now learn from biased data and, in doing so, amplify existing prejudice. As a result, regulators, civil society advocates, and tech companies have recognized the need to test algorithms for bias against historically disadvantaged groups.
In this panel, we bring together practitioners from tech companies, academics, policy-makers, regulators and AI ethics advisors to provide an overview of how fairness testing works (or should work) today. A data scientist from Uber’s Fairness Research team will describe how fairness testing actually works in practice. The AI ethics advisors will describe challenges and best practices for assessing fairness tests. Policy-makers and regulators will offer their views on how existing and upcoming legislation can allow, support or encourage fairness testing. The panel will address the following questions:

• How does fairness testing actually work, and what data and statistical methods are used? (See the illustrative sketch after this list.)
• What are the challenges and best practices for robust fairness testing?
• Can a uniform model work across industries and use cases?
• What can regulators learn from practitioners as they craft legislation?
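To make the first question concrete, the following is a minimal sketch of one common statistical check used in fairness testing: comparing a model's rates of favourable decisions across demographic groups (demographic parity and disparate impact). It is not drawn from any panelist's methodology; the data, group labels, and probabilities are hypothetical, and the 0.8 threshold mentioned in the comments is a heuristic borrowed from US employment law, not a universal standard.

# A minimal sketch (hypothetical data) of a demographic parity check,
# one common statistical method used in fairness testing.
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical binary model decisions (1 = favourable outcome) for two groups.
group = rng.choice(["A", "B"], size=1_000)
decision = rng.binomial(1, np.where(group == "A", 0.60, 0.48))

# Per-group selection rates (share of favourable decisions in each group).
rates = {g: decision[group == g].mean() for g in ("A", "B")}

# Two standard group-fairness statistics.
parity_gap = abs(rates["A"] - rates["B"])                 # demographic parity difference
impact_ratio = min(rates.values()) / max(rates.values())  # disparate impact ratio

print(f"selection rates: { {g: round(r, 3) for g, r in rates.items()} }")
print(f"demographic parity difference: {parity_gap:.3f}")
# The 'four-fifths rule' heuristic from US employment law flags ratios below 0.8.
print(f"disparate impact ratio: {impact_ratio:.3f}")

As the second question suggests, robust fairness testing in practice goes well beyond point estimates like these (confidence intervals, intersectional subgroups, and the choice of metric all matter), which is exactly the kind of nuance the panel is set up to discuss.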
