Notes on Algorithmic decision making and the cost of fairness

 

Whether intelligent decision-making algorithms produce fair and unbiased results is widely discussed among researchers. Often the bias is not contained in the algorithm itself but in the data used to train it. Viewed in the context in which these algorithms are used, they offer starting points for sophisticated ethical discussions.

In a paper published by Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq (see Reference), the authors summarize the crucial points:

„Algorithms are now regularly used to decide whether defendants awaiting trial are too dangerous to be released back into the community. In some cases, black defendants are substantially more likely than white defendants to be incorrectly classified as high risk. To mitigate such disparities, several techniques recently have been proposed to achieve algorithmic fairness. Here we reformulate algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. We show that for several past definitions of fairness, the optimal algorithms that result require detaining defendants above race-specific risk thresholds. We further show that the optimal unconstrained algorithm requires applying a single, uniform threshold to all defendants. The unconstrained algorithm thus maximizes public safety while also satisfying one important understanding of equality: that all individuals are held to the same standard, irrespective of race. Because the optimal constrained and unconstrained algorithms generally differ, there is tension between improving public safety and satisfying prevailing notions of algorithmic fairness. By examining data from Broward County, Florida, we show that this trade-off can be large in practice. We focus on algorithms for pretrial release decisions, but the principles we discuss apply to other domains, and also to human decision makers carrying out structured decision rules.“
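The contrast the abstract draws between the two decision rules can be sketched in a few lines of code. The following Python snippet is only an illustration of the idea, not the authors' implementation; the risk scores and threshold values are invented placeholders.

# A minimal sketch (not the authors' code) of the two decision rules the
# abstract contrasts: a single uniform risk threshold versus group-specific
# thresholds chosen to satisfy a formal fairness constraint. All scores and
# threshold values below are invented placeholders.

def detain_uniform(risk_score, threshold=0.5):
    """Unconstrained rule: every defendant is held to the same standard."""
    return risk_score >= threshold

def detain_group_specific(risk_score, group, thresholds):
    """Constrained rule: each group gets its own threshold, tuned so that a
    chosen fairness constraint (e.g., equal detention rates) is met."""
    return risk_score >= thresholds[group]

# Hypothetical group-specific thresholds, purely for illustration.
group_thresholds = {"group_a": 0.45, "group_b": 0.55}
print(detain_uniform(0.50))                                      # True
print(detain_group_specific(0.50, "group_a", group_thresholds))  # True
print(detain_group_specific(0.50, "group_b", group_thresholds))  # False

Under the unconstrained rule every defendant faces the same bar; under the constrained rule the bar shifts per group so that the fairness constraint holds, and the difference between the two is exactly the tension between public safety and formal fairness that the paper quantifies.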

The Process towards mathematically “fair” Algorithms

Some people think that debiasing algorithms is inherently impossible. But, much as self-driving cars will inevitably get into accidents, the first step is to design systems that are safer or less biased than their human counterparts. The process of mathematically defining “fair” decision-making metrics also forces us to pin down tradeoffs between fairness and accuracy that must be faced and that have sometimes been swept under the carpet by policy-makers. It makes us rethink what it really means to treat all groups equally: in some cases, equal treatment may only be achievable by learning different group-specific criteria.
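To make “pinning down the tradeoff” concrete, here is a minimal Python sketch with invented toy data. It computes overall accuracy together with per-group false positive rates for a candidate threshold; false positive rate parity is just one of several fairness metrics one might choose, and nothing here is taken from the paper's data.

# A toy sketch of measuring the fairness/accuracy tradeoff for one threshold.
# All risk scores, outcomes, and group labels below are invented.

def evaluate(scores, labels, groups, threshold):
    decisions = [s >= threshold for s in scores]
    accuracy = sum(d == y for d, y in zip(decisions, labels)) / len(labels)
    fpr = {}
    for g in set(groups):
        # indices of group members who did not reoffend (label 0)
        negatives = [i for i, gi in enumerate(groups) if gi == g and labels[i] == 0]
        fpr[g] = (sum(decisions[i] for i in negatives) / len(negatives)
                  if negatives else float("nan"))
    return accuracy, fpr

# Hypothetical risk scores, outcomes (1 = reoffended), and group labels.
scores = [0.2, 0.7, 0.6, 0.4, 0.8, 0.3]
labels = [0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "b", "b", "b"]
print(evaluate(scores, labels, groups, threshold=0.5))

Raising or lowering the threshold, or choosing group-specific thresholds, moves accuracy and the per-group error rates in different directions; making that movement explicit is what forces the tradeoff into the open.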

There is an entirely new field emerging at the intersection of computer science, law, and ethics. It will lead not only to fairer algorithms, but also to algorithms that track accountability and make clear which factors contributed to a decision. There is much reason to be hopeful!

Reference:

Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. 2017. Algorithmic decision making and the cost of fairness. In Proceedings of KDD ’17, Halifax, NS, Canada, August 13-17, 2017, 10 pages.
Digital Object Identifier: 10.1145/3097983.3098095

Gerhard Schimpf, recipient of the ACM Presidential Award 2016 and of the 2024 Albert Endres Award of the German Chapter of the ACM, has a degree in Physics from the University of Karlsruhe. As a former IBM development manager and a self-employed consultant for international companies, he has been active in ACM for over four decades. He was a leading supporter of ACM Europe, serving on the first ACM Europe Council in 2009, and was instrumental in coordinating ACM’s role as one of the founding organizations of the Heidelberg Laureate Forum, an annual meeting of laureates in computer science and mathematics with students. Gerhard Schimpf is a member of the German Chapter of the ACM (Chair 2008 – 2011) and of the Gesellschaft für Informatik.

