Date: Wednesday, October 25th, 2023
9:00 am – 10:00 am Pacific Time
12:00 pm – 1:00 pm Eastern Time

Location: Weekly Seminar, Zoom
Title: Group Fairness with Uncertainty in Sensitive Attributes
Abstract:
Learning a fair predictive model is crucial to mitigating biased decisions against minority groups in high-stakes applications. A common approach to learning such a model is to solve an optimization problem that maximizes the model's predictive power subject to an appropriate group fairness constraint. In practice, however, sensitive attributes are often missing or noisy, resulting in uncertainty. We demonstrate that enforcing fairness constraints directly on uncertain sensitive attributes can fall significantly short of the fairness achieved by models trained without such uncertainty. To overcome this limitation, we propose a bootstrap-based algorithm that achieves better levels of fairness despite the uncertainty in sensitive attributes. The algorithm is guided by a Gaussian analysis for the independence notion of fairness, in which we formulate a robust quadratically constrained quadratic program (QCQP) that ensures a strict fairness guarantee under uncertain sensitive attributes. Our algorithm applies to both discrete and continuous sensitive attributes and is effective in real-world classification and regression tasks for various group fairness notions, e.g., independence and separation.
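To give a flavor of the fairness-constrained optimization the abstract describes, here is a minimal illustrative sketch: the independence notion is relaxed to a soft penalty on the covariance between predictions and the sensitive attribute. This is an assumption-laden toy (synthetic data, a penalty instead of the talk's robust QCQP, known sensitive attributes with no uncertainty), not the speaker's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: sensitive attribute s correlates with feature x1.
n = 2000
s = rng.integers(0, 2, n).astype(float)      # binary sensitive attribute
x1 = s + rng.normal(0, 1, n)                 # feature correlated with s
x2 = rng.normal(0, 1, n)                     # feature independent of s
X = np.column_stack([x1, x2, np.ones(n)])    # include an intercept column
y = 2.0 * x1 + 1.0 * x2 + rng.normal(0, 0.5, n)

def fit(X, y, s, lam, steps=2000, lr=0.05):
    """Least squares plus lam * cov(X @ w, s)^2, a soft relaxation of the
    independence notion of fairness (not the robust QCQP from the talk)."""
    w = np.zeros(X.shape[1])
    sc = s - s.mean()                        # centered sensitive attribute
    for _ in range(steps):
        pred = X @ w
        grad_mse = 2 * X.T @ (pred - y) / len(y)
        cov = sc @ pred / len(y)             # empirical cov(pred, s)
        grad_cov = 2 * lam * cov * (X.T @ sc) / len(y)
        w -= lr * (grad_mse + grad_cov)
    return w

def cov_with_s(w):
    """Absolute empirical covariance between predictions and s."""
    return abs(np.cov(X @ w, s)[0, 1])

w_plain = fit(X, y, s, lam=0.0)    # unconstrained least squares
w_fair = fit(X, y, s, lam=100.0)   # independence-penalized fit
print(cov_with_s(w_plain), cov_with_s(w_fair))
```

The fair model trades some accuracy for a much smaller dependence between its predictions and the sensitive attribute; the talk's contribution is how to retain such a guarantee when s itself is only known with uncertainty.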
Bio:
Abhin Shah is a final-year Ph.D. student in the EECS department at MIT, advised by Prof. Devavrat Shah and Prof. Greg Wornell. He is a recipient of MIT's Jacobs Presidential Fellowship. He interned at Google Research in 2021 and at IBM Research in 2020. Prior to MIT, he graduated from IIT Bombay with a Bachelor's degree in Electrical Engineering. His research interests include theoretical and applied aspects of trustworthy machine learning, with a focus on causality and fairness.