Date: Wednesday, October 13th, 2021
9:00 am – 10:00 am Pacific Time
12:00 pm – 1:00 pm Eastern Time
Location: Weekly Seminar, Zoom
Title: An Algorithmic Framework for Fairness Elicitation
We consider settings in which the right notion of fairness is not captured by simple mathematical definitions (such as equality of error rates across groups), but may be more complex and nuanced, and thus require elicitation from individual or collective stakeholders. We introduce a framework in which pairs of individuals can be identified as requiring (approximately) equal treatment under a learned model, or requiring ordered treatment such as "applicant Alice should be at least as likely to receive a loan as applicant Bob." We provide a provably convergent and oracle-efficient algorithm for learning the most accurate model subject to the elicited fairness constraints, and prove generalization bounds for both accuracy and fairness. This algorithm can also combine the elicited constraints with traditional statistical fairness notions, thus "correcting" or modifying the latter by the former. We report preliminary findings of a behavioral study of our framework using human-subject fairness constraints elicited on the COMPAS criminal recidivism dataset.
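As a rough illustration of the elicited pairwise constraints described above, the sketch below checks which constraints a model's scores violate. The function name, the constraint encoding, and the tolerance parameter are illustrative assumptions, not details from the talk, which concerns learning a model subject to such constraints rather than merely auditing one.

```python
def violations(scores, constraints, eps=0.05):
    """Return the elicited constraints violated by a model's scores.

    scores: dict mapping individual -> predicted probability of a
        favorable outcome (e.g. receiving a loan).
    constraints: list of tuples. ("equal", a, b) asks that a and b be
        treated approximately equally; ("ordered", a, b) asks that a be
        scored at least as favorably as b.
    eps: slack allowed before a constraint counts as violated.
    """
    bad = []
    for kind, a, b in constraints:
        if kind == "equal" and abs(scores[a] - scores[b]) > eps:
            bad.append((kind, a, b))
        elif kind == "ordered" and scores[a] < scores[b] - eps:
            bad.append((kind, a, b))
    return bad

# Hypothetical scores and elicited constraints:
scores = {"Alice": 0.72, "Bob": 0.80, "Carol": 0.71}
constraints = [
    ("ordered", "Alice", "Bob"),   # Alice at least as likely as Bob
    ("equal", "Alice", "Carol"),   # Alice and Carol treated alike
]
print(violations(scores, constraints))  # prints [('ordered', 'Alice', 'Bob')]
```

The talk's algorithm would then search for the most accurate model whose score function yields no (or few) such violations.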
Logan Stapleton (he/him/his) is a 3rd-year Computer Science PhD student at the University of Minnesota, advised by Professor Steven Wu. Broadly, his interests lie in machine learning, human-computer interaction (HCI), and ethics. His current work focuses on discrimination and fairness in machine learning, algorithmic economics, and algorithmic decision support tools used in governance. He lives in Minneapolis with his cat, Pepe.