Date: Wednesday, February 17th, 2021
9:00 am – 10:00 am Pacific Time
12:00 pm – 1:00 pm Eastern Time
Location: Weekly Seminar, Zoom
While there has been a flurry of research in algorithmic fairness, what is less recognized is that modern antidiscrimination law may prohibit the adoption of such techniques. We make three contributions. First, we discuss how such approaches will likely be deemed “algorithmic affirmative action,” posing serious legal risks of violating equal protection, particularly under the higher education jurisprudence. Such cases have increasingly turned toward anticlassification, demanding “individualized consideration” and barring formal, quantitative weights for race regardless of purpose. This case law is hence fundamentally incompatible with fairness in machine learning. Second, we argue that the government-contracting cases offer an alternative grounding for algorithmic fairness, as these cases permit explicit and quantitative race-based remedies based on historical discrimination by the actor. Third, while limited, this doctrinal approach also guides the future of algorithmic fairness, mandating that adjustments be calibrated to the entity’s responsibility for historical discrimination causing present-day disparities. The contractor cases provide a legally viable path for algorithmic fairness under current constitutional doctrine, but they call for more research at the intersection of algorithmic fairness and causal inference to ensure that bias mitigation is tailored to specific causes and mechanisms of bias.
Daniel E. Ho is the William Benjamin Scott and Luna M. Scott Professor of Law at Stanford Law School, Professor of Political Science, and Senior Fellow at the Stanford Institute for Economic Policy Research. He is also Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence, Faculty Fellow at the Center for Advanced Study in the Behavioral Sciences, and Director of the Regulation, Evaluation, and Governance Lab (RegLab). He received his J.D. from Yale Law School and his Ph.D. from Harvard University and clerked for Judge Stephen F. Williams on the U.S. Court of Appeals for the District of Columbia Circuit. Ho previously served as president of the Society for Empirical Legal Studies and co-editor of the Journal of Law, Economics, & Organization.
Alice Xiang is a Senior Research Scientist at Sony AI, where she leads research on AI ethics. She previously served as Head of Fairness, Transparency, and Accountability Research at the Partnership on AI. Core areas of her research include algorithmic bias mitigation, explainability, causal inference, and algorithmic governance. She has been recognized as one of the 100 Brilliant Women in AI Ethics. Both a statistician and a lawyer, Alice has previously developed machine learning models and served as legal counsel for technology companies. She holds a Juris Doctor from Yale Law School, a Master’s in Development Economics from Oxford, a Master’s in Statistics from Harvard, and a Bachelor’s in Economics from Harvard.