Machine learning and data analysis have enjoyed tremendous success in a broad range of domains. These advances hold the promise of great benefits to individuals, organizations, and society. Undeniably, algorithms are informing decisions that reach ever more deeply into our lives, from news article recommendations to criminal sentencing decisions to healthcare diagnostics. This progress, however, raises (and is impeded by) a host of concerns regarding the societal impact of computation. A prominent concern is that these algorithms should be fair. Unfortunately, the hope that automated decision-making might be free of social biases is dashed by the data on which algorithms are trained and the choices made in their construction: left to their own devices, algorithms will propagate – even amplify – existing biases of the data, the programmers, and the decisions made in the choice of features to incorporate and the measures of “fitness” to be applied. Addressing wrongful discrimination by algorithms is not only mandated by law and by ethics, but is essential to maintaining public trust in the current computation-driven revolution.
The study of fairness is ancient and multi-disciplinary: philosophers, legal experts, economists, statisticians, social scientists, and others have been concerned with fairness for as long as these fields have existed. Nevertheless, the scale of decision making in the age of big data, the computational complexities of algorithmic decision making, and simple professional responsibility mandate that computer scientists contribute to this research endeavor. This project aims to establish firm mathematical foundations, through the lens of computer science theory, for the emerging area of algorithmic fairness, following the example set by the revolutionary role of theory in cryptography, algorithmic economics, privacy, quantum information, computational biology, the social sciences, and more. The project will form and study basic frameworks that address discrimination concerns, aiming at a much clearer and more widely accepted understanding of the algorithmic fairness landscape and at new algorithmic solutions. The project will strengthen the impact of theory by extending its reach to rigorously address a much richer set of fairness concerns (such as handling faulty data, strategic and adversarial parties, non-binary outputs, complex compositions of multiple “fair” components, and fairness in dynamically evolving systems). Finally, the project will promote stronger interactions with other areas within theory, other areas of CS, and areas outside of CS, toward a more evolved foundational perspective on Algorithmic Fairness.