Inaugural Meeting – TOC4Fairness Seminar – Annette Zimmermann

Date: Wednesday, January 20th, 2021
9:00 am – 10:00 am Pacific Time
12:00 pm – 1:00 pm Eastern Time

Location: Weekly Seminar, Zoom

Title: Algorithmic Fairness and Decision Landscapes

Abstract:

Much of the recent literature on algorithmic fairness in computer science and applied statistics has focused on optimizing the quality of decision outcomes reached by implementing algorithmic decision-making models. The guiding question is: does the model satisfy a number of plausible, mathematically defined fairness metrics, and does this enable it to reach decision outcomes that are qualitatively better than those reached by a human decision-maker or a competing algorithmic model?

Given available evidence that algorithmic decision-making in many different domains of deployment leads to outcomes that reflect and amplify social inequalities, such as structures of racial and gender inequality, these are the right questions to ask about algorithmic decision-making models—but they are not the only ones, and often not the most important ones. If what we care about is fairness, we have to move beyond an approach that focuses exclusively on the decision quality of algorithmic models. In addition to evaluating decision quality for each algorithmic model, we ought to critically scrutinize the decision landscape. Doing so requires investigating not only which alternative decision outcomes are available, but also which alternative decision problems we could, and should, be solving with the help of algorithmic models. This is an underexplored approach to algorithmic fairness, as it requires thinking beyond the internal optimization of a given model, and instead taking into account interactions between models and the model-external social world.

After briefly sketching the state of the contemporary debate on decision quality optimization with respect to algorithmic fairness, I develop three arguments for why scrutinizing available decision landscapes matters in our pursuit of algorithmic fairness: first, the Benchmarking Argument; second, the Aliefs Argument; and third, the Quality-Independence Argument. There are two important upshots, neither of which has received sufficient explicit attention in either the philosophical literature or the literature in computer science. First, assessing the quality of algorithmic decision outcomes is insufficient for assessing algorithmic fairness. The direction of future research on algorithmic fairness must be responsive to this problem. Second, considering decision landscapes in conjunction with decision quality has implications for the question of whether we ought to deploy a given algorithmic tool in a given domain at all.

Bio: 

Dr Annette Zimmermann is a Lecturer (Assistant Professor) in Philosophy at the University of York, and a Technology & Human Rights Fellow at Harvard University. Dr Zimmermann’s current research focuses on the political and moral philosophy of AI and machine learning.

Before that, Dr Zimmermann was a postdoctoral fellow at Princeton University (2018–2020), with a joint appointment at the Center for Human Values and the Center for Information Technology Policy. Earlier, they were awarded a DPhil from Nuffield College at the University of Oxford, for work in contemporary analytic political and moral philosophy—in particular, democratic decision-making, justice, and risk.

Dr Zimmermann’s recent research visitor positions include Yale University (2016), the Australian National University (2019), and Stanford University (2020). They have advised policy-makers on AI ethics issues at UNESCO, the Australian Human Rights Commission, the UK Centre for Data Ethics and Innovation, and the OECD. In recognition of their research, Dr Zimmermann received the 2020 David Roscoe Early Career Award in Science, Ethics, and Society from the Hastings Center, and was named to the 2021 “100 Brilliant Women in AI Ethics” list.