AI, Algorithms, and Awful Humans

April 1, 2024

A profound shift is occurring in the way many decisions are made, with machines taking greater roles in the decision-making process.  Two arguments are often advanced to justify the increasing use of automation and algorithms in decisions.  The “Awful Human Argument” asserts that human decision-making is often awful and that machines can decide better than humans.  Another argument, the “Better Together Argument,” posits that machines can augment and improve human decision-making.  These arguments exert a powerful influence on law and policy.

In this Essay, we contend that in the context of making decisions about humans, these arguments are far too optimistic.  We argue that machine and human decision-making are not readily compatible, making their integration extremely complicated.

It is wrong to view machines as deciding like humans do, except better because they are supposedly cleansed of bias.  Machines decide fundamentally differently, and bias often persists.  These differences are especially pronounced when decisions require a moral or value judgment or involve human lives and behavior.  Making decisions about humans involves special emotional and moral considerations that algorithms are not yet prepared to make—and might never be able to make.

Automated decisions often rely too heavily on quantifiable data to the exclusion of qualitative data, resulting in a change to the nature of the decision itself.  Whereas certain matters, such as the weather, might be readily reducible to quantifiable data, human lives are far more complex.  Human and machine decision-making often do not mix well, and humans often perform poorly when reviewing algorithmic output.

We contend that algorithmic decision-making is being relied upon too eagerly and with insufficient skepticism.  For decisions about humans, there are important considerations that must be better appreciated before these decisions are delegated in whole or in part to machines.