Date: Wednesday, April 19th, 2023
9:00 am – 10:00 am Pacific Time
12:00 pm – 1:00 pm Eastern Time
Location: Weekly Seminar, Zoom
Title: Characterizing algorithmic unfairness: from compounding injustices to social norm bias
In this talk, I will characterize different dimensions of algorithmic unfairness. I will first show how societal biases encoded in data may be compounded by machine learning models. I will relate this to the political philosophy notion of compounding injustices and illustrate it in the context of automated recruiting. I will then discuss residual harms of fairness-aware algorithms that mitigate bias at a group level. In doing so, I will introduce the notion of Social Norm Bias (SNoB), a subtle but consequential type of algorithmic discrimination in which predictions are associated with conformity to inferred social norms. I will relate this to the social psychology notion of descriptive stereotypes and their role in workplace discrimination. I will conclude by discussing implications for algorithm design, deployment, and evaluation.
Maria De-Arteaga is an Assistant Professor in the Information, Risk, and Operations Management (IROM) Department at the University of Texas at Austin, where she is also a core faculty member in the Machine Learning Laboratory and an affiliated faculty member of Good Systems. She holds a joint PhD in Machine Learning and Public Policy and an M.Sc. in Machine Learning, both from Carnegie Mellon University, and a B.Sc. in Mathematics from Universidad Nacional de Colombia. Her research focuses on the risks and opportunities of using machine learning to support experts’ decisions in high-stakes settings, with a particular interest in algorithmic fairness and human-AI collaboration.