Goodbye cake and welcome to the era of the possible

You can’t always get what you want, a young man once sang. It’s simple wisdom, but wisdom worth remembering. Boris Johnson was widely, and rightly, derided in 2016 for declaring, “Our policy is to have our cake and eat it.” That was a dishonest refusal to acknowledge that the Brexit referendum forced the UK government to make some painful choices. But it is not always easy to see when Mick Jagger’s wisdom applies.

Consider the question of whether algorithms make fair decisions. In 2016, a team of reporters at ProPublica, led by Julia Angwin, published an article titled “Machine Bias.” It was the result of a more than year-long investigation into an algorithm called COMPAS, which was widely used in the US justice system to inform recommendations on parole, pretrial detention, and sentencing. Angwin’s team concluded that COMPAS was more likely to rate white defendants as less risky than black defendants. What’s more: black defendants were twice as likely to be classified as high risk yet not go on to reoffend, while white defendants were twice as likely to be charged with new crimes after being classified as low risk.

This looks bad. But Northpointe, the maker of COMPAS, had a rebuttal: black and white defendants with a risk rating of, say, 3 were equally likely to be rearrested. The same was true of black and white defendants with a risk rating of 7, or any other score. The risk scores mean the same thing regardless of race.

Shortly after ProPublica and Northpointe produced their findings, rebuttals and counter-rebuttals, several teams of academics published papers making a simple but surprising point: there are several different definitions of what it means to be “fair” or “unbiased,” and it is mathematically impossible to be fair in all of these ways at once. The algorithm could satisfy ProPublica’s definition of fairness, or it could satisfy Northpointe’s, but not both.

Here are Corbett-Davies, Pierson, Feller, and Goel: “It is virtually impossible for a risk score to satisfy both fairness criteria at the same time.”

Or Kleinberg, Mullainathan, and Raghavan: “We formalize three fairness conditions . . . and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously.”

This is not just a fact about algorithms. Whether decisions about parole are made by human judges, robots or dart-throwing chimpanzees, the same unforgiving arithmetic applies.
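That arithmetic can be sketched in a few lines of Python. The numbers below are invented for illustration, not drawn from the COMPAS data: suppose a score is calibrated in Northpointe’s sense, so that “high risk” and “low risk” correspond to the same reoffending probabilities in every group. If two groups have different underlying reoffending rates, the false-positive rates then cannot match.

```python
# Toy illustration (invented numbers, not the COMPAS data): a score that is
# calibrated in both groups cannot also equalize false-positive rates when
# the groups' underlying reoffending rates differ.

def error_rates(base_rate, p_high=0.6, p_low=0.2):
    """Calibrated score: those flagged 'high risk' reoffend with probability
    p_high, those flagged 'low risk' with probability p_low, identically in
    every group. Returns (share flagged high risk, false-positive rate)."""
    # The share flagged high risk must make the overall rate match the group:
    # share * p_high + (1 - share) * p_low = base_rate
    share_high = (base_rate - p_low) / (p_high - p_low)
    # False-positive rate: flagged high risk but did not reoffend,
    # as a fraction of all non-reoffenders in the group.
    fpr = share_high * (1 - p_high) / (1 - base_rate)
    return share_high, fpr

for group, base in [("A", 0.5), ("B", 0.3)]:
    share, fpr = error_rates(base)
    print(f"group {group}: base rate {base:.0%}, flagged high risk "
          f"{share:.0%}, false-positive rate {fpr:.0%}")
```

With these made-up numbers, group A (base rate 50%) ends up with a false-positive rate of 60%, while group B (base rate 30%) gets about 14%: the score is “fair” by Northpointe’s criterion and “unfair” by ProPublica’s, at the same time.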

We need more scrutiny of, and less naivety about, the supposed magic of algorithmic decision-making, so ProPublica’s analysis has been invaluable in highlighting the dangers of automating life-altering judgments. But if we want to improve algorithmic decision-making, we need to remember Jagger’s aphorism. These decisions cannot be “fair” by every possible measure. When it is impossible to have everything, we will have to choose what really matters.

Painful choices are, of course, the bread and butter of economics. One particular genre seems to fascinate economists: the “impossible trinity”. The wisest of all impossible trinities will be well known to fans of Armistead Maupin’s More Tales of the City (1980). It’s “Mona’s Law”: you can have a hot job, a hot lover and a hot apartment, but you can’t have all three at once.

In economics, the impossible trinities are more prosaic. The most famous holds that while you might want a fixed exchange rate, free movement of capital across borders and an independent monetary policy, at best you can have two of the three. Another, coined by the economist Dani Rodrik, is more pointed: you can set rules at the national level, you can be deeply integrated into the world economy, or you can let democratic votes decide policy, but you cannot do all three. An economically integrated national technocracy is conceivable; so is democratic policy-making at a supranational level. If you can stomach neither, you will need to set limits on economic globalization.

Much like Mona’s Law, these impossible trinities are more rules of thumb than mathematical proofs. There may be exceptions, but don’t hold your breath.

Mathematicians call such results impossibility proofs, or simply impossibility results. Some of them are elementary: we will never find the largest prime number, because there is no largest prime to be found, and we cannot express the square root of two as a fraction.
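Both of those elementary results have short classical arguments; here is a sketch of each, stated only from the claims above:

```latex
% No largest prime (Euclid): given primes $p_1, \dots, p_n$, the number
% $p_1 p_2 \cdots p_n + 1$ leaves remainder 1 when divided by each $p_i$,
% so any prime factor of it lies outside the list. No finite list of
% primes is complete, hence there is no largest prime.
%
% $\sqrt{2}$ is not a fraction: suppose $\sqrt{2} = p/q$ in lowest terms.
% Squaring gives $p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even:
% write $p = 2k$. Then $4k^2 = 2q^2$, i.e. $q^2 = 2k^2$, so $q$ is even
% too, contradicting the assumption that $p/q$ was in lowest terms.
```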

Others are deeper and more mind-bending. Perhaps the most profound is Gödel’s incompleteness theorem: in 1931, Kurt Gödel proved that any consistent mathematical system rich enough to express arithmetic will contain true statements that cannot be proven within it. Mathematics is therefore incomplete, and the legions of mathematicians trying to develop a complete and consistent foundation for mathematics had been wasting their time. At the end of the symposium at which Gödel detonated this intellectual bomb, the great John von Neumann remarked, “It’s all over.”

No one likes being told they can’t have it all, but a painful truth is more useful than a comforting lie. Gödel’s incompleteness theorem was one of the painful truths I studied as a young logician, alongside Liz Truss. Perhaps she has now learned the lesson. It matters to understand when something is impossible: that knowledge frees us from trying in vain to always get what we want, and lets us focus instead on getting what we need.

Written for and first published in the Financial Times on October 28, 2022.

The paperback of The Data Detective was published on February 1 in the United States and Canada. Title elsewhere: How to Make the World Add Up.

I have set up a storefront on Bookshop in the United States and the United Kingdom. Links to Bookshop and Amazon may generate referral fees.
