TOC4Fairness Seminar – Mahdi Haghifam

Date: Wednesday, May 1st, 2024
9:00 am – 10:00 am Pacific Time
12:00 pm – 1:00 pm Eastern Time

Location: Weekly Seminar, Zoom

Title: Information Complexity of Stochastic Convex Optimization: Applications to Generalization, Memorization, and Privacy

Abstract:

The amount of information that a learning algorithm uses from its training set to produce its output is a natural and important quantity to study. A central idea in learning theory is that a learning algorithm that uses only a small amount of information from its input sample will generalize well. In this talk, we investigate the interplay between information and learning in the context of stochastic convex optimization (SCO). In particular, we answer the question: "How much information about the training set is revealed by a learning algorithm in the context of SCO?" We show that memorization of a large fraction of the training dataset is a necessary component of learning in SCO. The results are based on joint work with Idan Attias, Gintare Karolina Dziugaite, Roi Livni, and Daniel M. Roy.

Bio:

Mahdi Haghifam is a distinguished postdoctoral fellow at Northeastern University's Khoury College of Computer Sciences, hosted by Jonathan Ullman. He holds a PhD from the University of Toronto, where he was also a graduate student researcher at the Vector Institute for Artificial Intelligence. Mahdi received his Bachelor's and Master's degrees in Electrical Engineering from Sharif University of Technology. During his PhD, he worked as a research intern at Google Brain (Privacy Team) and at ServiceNow-Element AI. His current research focuses broadly on statistical learning theory and differential privacy. For more information, please visit https://mhaghifam.github.io/mahdihaghifam/