TOC4Fairness Seminar – Michael Kim

Date: Wednesday, March 3rd, 2021
9:00 am – 10:00 am Pacific Time
12:00 pm – 1:00 pm Eastern Time

Location: Weekly Seminar, Zoom


Title: Outcome Indistinguishability

Abstract:

Prediction algorithms assign scores to individuals that are popularly understood as individual “probabilities”—e.g., given a patient’s risk factors, what is the probability of 5-year survival after cancer diagnosis?  The meaning of individual probabilities for such unrepeatable events has been intensely debated within probability theory, statistics, and philosophy without clear resolution.  Towards a rigorous interpretation of algorithmic risk scores, we introduce and study Outcome Indistinguishability.  Outcome Indistinguishable predictors yield a model of individual probabilities that cannot be efficiently refuted on the basis of the real-life observations produced by Nature.
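To make the definition concrete, here is a toy sketch (my own illustration, not code from the paper) of the weakest, "sample-access" form of the indistinguishability test: a distinguisher receives (individual, outcome) pairs drawn either from Nature or from the predictor's own model of individual probabilities, and the predictor passes if the distinguisher's advantage is small. The names `nature_prob`, `predictor`, and `distinguisher` are all hypothetical placeholders.

```python
import random

random.seed(0)

def nature_prob(x):
    """Nature's true (unobservable) probability of the outcome for individual x."""
    return 0.2 + 0.6 * x

def predictor(x):
    """The predictor under audit; here it happens to match Nature exactly."""
    return 0.2 + 0.6 * x

def sample_world(prob_fn, n):
    """Draw (x, outcome) pairs: x uniform in [0, 1], outcome ~ Bernoulli(prob_fn(x))."""
    return [(x, random.random() < prob_fn(x))
            for x in (random.random() for _ in range(n))]

def distinguisher(sample):
    """One simple statistical test: the positive-outcome rate among individuals with x > 0.5."""
    hi = [y for x, y in sample if x > 0.5]
    return sum(hi) / max(len(hi), 1)

n = 20000
real = distinguisher(sample_world(nature_prob, n))    # outcomes produced by Nature
model = distinguisher(sample_world(predictor, n))     # outcomes simulated from the predictor
advantage = abs(real - model)                         # small advantage: this test fails to refute the model
```

A predictor is Outcome Indistinguishable only if *no* efficient distinguisher achieves non-trivial advantage, not just this one statistic; the hierarchy studied in the talk varies how much access such distinguishers get to the predictor itself.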

Drawing on an understanding of computational indistinguishability developed in complexity theory and cryptography, we investigate a hierarchy of Outcome Indistinguishability (OI) definitions, whose stringency increases with the degree to which distinguishers may access the predictor in question.  Our findings reveal that OI behaves qualitatively differently than previously studied notions of indistinguishability.  First, we provide constructions at all levels of the hierarchy.  Then, leveraging recently-developed machinery for proving average-case fine-grained hardness, we obtain lower bounds on the complexity of the more stringent forms of OI.  The hardness result provides scientific grounds for the political argument that, when inspecting algorithmic risk prediction instruments, auditors should be granted query-access to the algorithm, not simply historical predictions.

Joint work with Cynthia Dwork, Omer Reingold, Guy N. Rothblum, Gal Yona; to appear at STOC 2021.

Bio: 

Michael P. Kim is a Postdoctoral Fellow at the Miller Institute for Basic Research in Science at UC Berkeley, hosted by Shafi Goldwasser.  His research investigates core questions on the foundations of responsible machine learning.  Michael recently obtained his PhD from the Stanford Theory Group, under the sage guidance of Omer Reingold.