Date: Wednesday, November 10th, 2021
9:00 am – 10:00 am Pacific Time
12:00 pm – 1:00 pm Eastern Time
Location: Weekly Seminar, Zoom
Title: Loss Minimization and Multi-group Fairness
Training a predictor to minimize a loss function fixed in advance is the dominant paradigm in machine learning. However, loss minimization by itself might not guarantee desiderata, such as fairness and accuracy, that one could reasonably expect from a predictor. In contrast, multi-group fairness notions such as multiaccuracy and multicalibration constrain the predictor to match certain statistical properties of the data, even when conditioned on a rich family of subgroups; they make no explicit attempt at loss minimization.
In this talk, we will explore some recently discovered connections between loss minimization and notions of multi-group fairness. We will see settings where one can lead to the other, and other settings where this is unlikely. In doing so, we will introduce some novel notions, such as the omnipredictor, whose predictions are simultaneously “optimal” for a large class of loss functions and hypotheses, and low-degree multicalibration, which interpolates smoothly between multiaccuracy and multicalibration.
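To make the multiaccuracy condition from the abstract concrete, here is a small illustrative sketch (not taken from the talk; the data, the group structure, and the correction step are all hypothetical). A predictor p is alpha-multiaccurate with respect to a collection of groups C if |E[(y - p(x)) * 1[x in g]]| <= alpha for every group g in C. The sketch measures this violation for a group-blind predictor on synthetic data and applies one boosting-style correction step on the offending subgroup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data: a binary subgroup attribute and a real-valued score.
group = rng.integers(0, 2, size=n)   # subgroup membership indicator 1_g(x)
score = rng.normal(size=n)

# True label probabilities depend on both features,
# but the predictor below ignores `group`.
true_p = 1 / (1 + np.exp(-(score + 0.8 * group)))
y = rng.binomial(1, true_p)

pred = 1 / (1 + np.exp(-score))      # group-blind predictor

def multiaccuracy_violation(y, pred, indicator):
    """Absolute average signed residual on the subgroup: |E[(y - pred) * 1_g]|."""
    return abs(np.mean((y - pred) * indicator))

# The group-blind predictor is biased on the subgroup...
v_before = multiaccuracy_violation(y, pred, group)

# ...and one correction step (shift predictions on the group by its
# average residual, then clip back to [0, 1]) shrinks the violation.
residual = np.mean((y - pred)[group == 1])
pred_fixed = np.clip(pred + residual * group, 0, 1)
v_after = multiaccuracy_violation(y, pred_fixed, group)

print(v_before > v_after)
```

Multicalibration strengthens this by requiring the residual condition to hold not just on average per group, but conditioned on the predicted value as well; the low-degree notion mentioned above sits between the two.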
This is based on joint works with Adam Kalai, Michael Kim, Omer Reingold, Vatsal Sharan, Mihir Singhal and Udi Wieder.
Parikshit Gopalan is a researcher at VMware. His current interests are in fairness in machine learning, unsupervised learning, and algorithms/systems for big data. In the past, he has made important contributions to erasure coding for distributed storage, coding theory, and computational complexity. His work has been awarded the 2014 Joint IEEE Communications Society & Information Theory Society Paper Prize, the 2013 Microsoft TCN Storage Technical Award, and the best paper award at the 2012 USENIX Annual Technical Conference. Previously, he was a researcher at MSR (Silicon Valley and Redmond), a postdoc at UW and UT Austin, a graduate student at Georgia Tech, and an undergraduate at IIT Bombay.