TOC4Fairness Seminar – Sumegha Garg

Date: Wednesday, March 6th, 2024
9:00 am – 10:00 am Pacific Time
12:00 pm – 1:00 pm Eastern Time

Location: Weekly Seminar, Zoom

Title: Online Omniprediction

Abstract:

A recent line of work has shown a surprising connection between multicalibration, a multi-group fairness notion, and omniprediction, a learning paradigm that provides simultaneous loss-minimization guarantees for a large family of loss functions [GKR+22, GHK+23, GKR23, GHHK+23]. Prior work studies omniprediction in the batch setting; our work initiates the study of omniprediction in the online adversarial setting. In the talk, we will briefly review the definitions and motivations for these notions (an informal sketch appears below), and then survey the new results for online omniprediction. Our contributions are two-fold:
1. We develop a new online multicalibration algorithm that is well defined for infinite benchmark classes F (e.g., the set of all linear functions) and is oracle-efficient: for any class F, the algorithm takes the form of an efficient reduction to a no-regret learning algorithm for F. This implies an oracle-efficient online omnipredictor, an online prediction algorithm that can be used to simultaneously obtain no-regret guarantees for all Lipschitz convex loss functions.
2. We show upper and lower bounds on the extent to which our regret rates can be improved.   
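
To make the central notion concrete, here is an informal sketch of the online omniprediction guarantee; the notation is illustrative and may differ from the exact formulation in the paper. A sequence of predictions p_1, ..., p_T is an online omnipredictor for a loss family L and a benchmark class F if, for every loss \ell in L, post-processing each prediction through a fixed link function k_\ell yields

    \sum_{t=1}^{T} \ell(k_\ell(p_t), y_t) \le \min_{f \in F} \sum_{t=1}^{T} \ell(f(x_t), y_t) + o(T),

that is, vanishing average regret to the best function in F, simultaneously for every loss in L. The connection this line of work exploits is that multicalibrated predictions, roughly those that are unbiased conditioned on their own value and on each function in F, automatically achieve this guarantee for all Lipschitz convex losses.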

Joint work with Christopher Jung, Omer Reingold, and Aaron Roth.

Bio:

Sumegha Garg is an Assistant Professor in the CS Department at Rutgers University. Before that, she was a postdoctoral fellow at Stanford CS and a Rabin Postdoctoral Fellow in Harvard’s Theory of Computation group. She completed her PhD in Computer Science at Princeton University, advised by Mark Braverman, and received her bachelor’s degree in CSE from IIT Delhi. She uses her background in computational complexity theory to answer questions in applied areas such as learning and data streaming. In particular, she is interested in memory lower bounds and the theory of fair prediction.