Date: Wednesday, October 27th, 2021
9:00 am – 10:00 am Pacific Time
12:00 pm – 1:00 pm Eastern Time
Location: Weekly Seminar, Zoom
Title: Lexicographically Fair Learning: Algorithms and Generalizations
Most notions of group fairness require that some notion of error be equalized across groups of individuals. This creates tension with the accuracy of the classifier, as equalizing errors may require some group errors to be artificially inflated. One alternative approach, minimax fairness, instead guarantees that the maximum error of any group is minimized. In this talk, I will discuss joint work with Emily Diana, Wesley Gill, Michael Kearns, Aaron Roth, and Saeed Sharifi-Malvajerdi on lexicographic fairness, a natural extension of minimax fairness in which, subject to the maximum group error being minimized, the second-highest group error is minimized, and so on. I will introduce the definition, discuss its subtleties in the realizable setting, and explain our algorithms for efficiently finding approximately lexicographically fair solutions in practice.
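To make the ordering concrete, here is a toy sketch (not the authors' algorithm, which works over a much richer model class): given a small hypothetical set of candidate models with known per-group errors, the lexicographically fair choice is the one whose group errors, sorted from highest to lowest, are lexicographically smallest.

```python
# Toy illustration of lexicographic fairness over a finite candidate set.
# Each candidate model is described by its per-group error vector; the
# lexicographically fair choice minimizes the highest group error first,
# then the second-highest, and so on.

def lex_fair_choice(candidates):
    """candidates: dict mapping model name -> list of per-group errors."""
    return min(candidates, key=lambda m: sorted(candidates[m], reverse=True))

# Three hypothetical models evaluated on two groups:
models = {
    "equalized": [0.30, 0.30],  # errors equalized, but both inflated
    "accurate":  [0.10, 0.40],  # low average error, high max group error
    "lex_fair":  [0.25, 0.15],  # smallest max error, then smallest second
}

print(lex_fair_choice(models))  # -> lex_fair
```

Comparing sorted error vectors, [0.25, 0.15] beats [0.30, 0.30] and [0.40, 0.10], so "lex_fair" wins even though "accurate" has the lowest average error; this is exactly the tension with accuracy the abstract describes.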
Ira Globus-Harris (they/them) is a second-year PhD student at the University of Pennsylvania, working with Aaron Roth and Michael Kearns. Their current work focuses on methods for algorithmic fairness that avoid the typical trade-offs between fairness and accuracy, and on differentially private statistics for small datasets. Prior to graduate school, they were a software engineer at Boston University, where they developed differentially private software libraries in collaboration with the OpenDP project. In their spare time, Ira runs the group for LGBTQ+ graduate and professional students at the University of Pennsylvania's School of Engineering and Applied Science, and thinks about how to improve equity and inclusion in computer science.