Notes on "Algorithmic decision making and the cost of fairness"
The question of whether intelligent decision-making algorithms produce fair and unbiased results is widely discussed by researchers. It appears that the bias is contained not in the algorithms themselves but in the data used to train them. Viewed in the context in which these algorithms are deployed, they provide starting points for sophisticated ethical discussions.
In a paper by Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq (see Reference below), the authors summarize the crucial points in their abstract:
"Algorithms are now regularly used to decide whether defendants awaiting trial are too dangerous to be released back into the community. In some cases, black defendants are substantially more likely than white defendants to be incorrectly classified as high risk. To mitigate such disparities, several techniques recently have been proposed to achieve algorithmic fairness. Here we reformulate algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. We show that for several past definitions of fairness, the optimal algorithms that result require detaining defendants above race-specific risk thresholds. We further show that the optimal unconstrained algorithm requires applying a single, uniform threshold to all defendants. The unconstrained algorithm thus maximizes public safety while also satisfying one important understanding of equality: that all individuals are held to the same standard, irrespective of race. Because the optimal constrained and unconstrained algorithms generally differ, there is tension between improving public safety and satisfying prevailing notions of algorithmic fairness. By examining data from Broward County, Florida, we show that this trade-off can be large in practice. We focus on algorithms for pretrial release decisions, but the principles we discuss apply to other domains, and also to human decision makers carrying out structured decision rules."
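To make the contrast concrete, here is a minimal sketch (not the paper's actual optimization) of the two kinds of decision rules the abstract describes: a single uniform risk threshold applied to everyone, versus group-specific thresholds chosen to satisfy a fairness constraint. The group names, score distributions, and the statistical-parity-style constraint (equal detention rates) are all illustrative assumptions:

```python
# A minimal sketch contrasting the two decision rules from the abstract:
# one shared risk threshold vs. per-group thresholds chosen so that both
# groups have the same detention rate. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical risk scores (estimated probability of reoffending).
scores = {
    "group_a": rng.beta(2, 5, size=1000),  # mostly lower-risk scores
    "group_b": rng.beta(3, 4, size=1000),  # mostly higher-risk scores
}

def uniform_rule(scores, threshold=0.5):
    """Unconstrained optimum: detain everyone above one shared threshold."""
    return {g: s >= threshold for g, s in scores.items()}

def parity_rule(scores, target_rate=0.4):
    """Constrained rule: pick a per-group threshold so that every group
    has the same detention rate. This generally produces *different*
    risk thresholds for different groups."""
    decisions, thresholds = {}, {}
    for g, s in scores.items():
        t = np.quantile(s, 1.0 - target_rate)  # top `target_rate` detained
        thresholds[g] = t
        decisions[g] = s >= t
    return decisions, thresholds

detained_uniform = uniform_rule(scores)
detained_parity, group_thresholds = parity_rule(scores)

for g in scores:
    print(g,
          f"uniform rate={detained_uniform[g].mean():.2f}",
          f"parity rate={detained_parity[g].mean():.2f}",
          f"parity threshold={group_thresholds[g]:.2f}")
```

With one shared threshold, the detention rates differ across groups; equalizing the detention rates forces the thresholds themselves apart. This is exactly the tension between the unconstrained and constrained optima described above.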
The Process Towards Mathematically "Fair" Algorithms
Some people think debiasing algorithms is inherently impossible, but, as with self-driving cars that will inevitably get into accidents, the first step is to design systems that are safer or less biased than their human counterparts. The process of mathematically defining "fair" decision-making metrics also forces us to pin down tradeoffs between fairness and accuracy that must be faced, and that have sometimes been swept under the carpet by policy-makers. It makes us rethink what it really means to treat all groups equally: in some cases, equal treatment may only be possible by learning different group-specific criteria.
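The sketch below illustrates this tradeoff on synthetic data. Because the scores are calibrated by construction, a single threshold of 0.5 maximizes accuracy, so the hand-picked group-specific thresholds that narrow the false-positive-rate gap necessarily give up some accuracy. All group names, distributions, and threshold values are hypothetical, chosen purely for illustration:

```python
# A toy demonstration of the fairness/accuracy tradeoff: with calibrated
# risk scores, thresholding at 0.5 maximizes accuracy, so group-specific
# thresholds that shrink the false-positive-rate (FPR) gap cost accuracy.
import numpy as np

rng = np.random.default_rng(1)

def make_group(n, a, b):
    """Calibrated scores from Beta(a, b); outcome is 1 w.p. equal to the score."""
    scores = rng.beta(a, b, size=n)
    outcomes = rng.uniform(size=n) < scores
    return scores, outcomes

groups = {
    "group_a": make_group(20_000, 2, 5),  # mostly low-risk scores
    "group_b": make_group(20_000, 4, 3),  # mostly higher-risk scores
}

def evaluate(thresholds):
    """Overall accuracy and the gap in false positive rates between groups."""
    correct, total, fpr = 0, 0, {}
    for g, (scores, y) in groups.items():
        pred = scores >= thresholds[g]
        correct += int((pred == y).sum())
        total += len(y)
        fpr[g] = pred[~y].mean()  # flagged among those whose outcome was 0
    return correct / total, abs(fpr["group_a"] - fpr["group_b"])

# Uniform (accuracy-optimal) thresholds vs. hand-picked group-specific
# thresholds that narrow the FPR gap at some cost in overall accuracy.
for name, ts in [("uniform", {"group_a": 0.5, "group_b": 0.5}),
                 ("constrained", {"group_a": 0.35, "group_b": 0.65})]:
    acc, gap = evaluate(ts)
    print(f"{name:12s} accuracy={acc:.3f}  FPR gap={gap:.3f}")
```

Running this shows the pattern the paper formalizes: the uniform rule is more accurate but leaves a large FPR disparity, while the group-specific thresholds shrink the disparity at a measurable accuracy cost.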
There is an entirely new field emerging at the intersection of computer science, law, and ethics. It will not only lead to fairer algorithms, but also to algorithms that support accountability and make clear which factors contributed to a decision. There's much reason to be hopeful!
Reference:
Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. 2017. Algorithmic decision making and the cost of fairness. In Proceedings of KDD '17, Halifax, NS, Canada, August 13-17, 2017, 10 pages. DOI: https://doi.org/10.1145/3097983.3098095