This month, in a Harvard Business Review blog, Andrew McAfee wrote a stirring article called Big Data’s Biggest Challenge? Convincing People NOT to Trust Their Judgment. He argues that management education and human nature encourage people to trust their guts and instincts in business, and that this is a bad thing, calling it “the most harmful misconception in the business world today (maybe in the world full stop).”
Instead of trusting intuition, the article says businesspeople need to build, and rely on, better empirical data and algorithms to guide decision-making. In short, intuition is a poor man’s algorithm.
As a data scientist who uses algorithms to predict business survival or failure, this argument is dear to my heart. Andrew… go on with your bad self.
The point he makes is, without question, a controversial one.
On the one hand, algorithms have proven far more accurate than intuition at predicting a variety of things. The article lists examples, but for a longer list check out Ian Ayres’s website for the book Super Crunchers or read Daniel Kahneman’s Thinking, Fast and Slow. There are domains where algorithms are simply better. End of argument.
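To make the “fitted rule beats gut rule” idea concrete, here’s a minimal sketch on purely synthetic data. Everything in it is invented for illustration: the “cash runway” feature, the true 9-month cutoff, the 12-month gut-feel rule, and the noise level are all hypothetical, not from the article or any real dataset. The “algorithm” here is just the simplest possible one, a threshold chosen to fit training data.

```python
import random

random.seed(42)

# Synthetic firms: one feature (months of cash runway). Survival is mostly
# driven by runway > 9 months, with 10% label noise. All numbers are made up.
firms = []
for _ in range(500):
    runway = random.uniform(0, 24)
    truly_survives = runway > 9
    survived = truly_survives if random.random() < 0.9 else not truly_survives
    firms.append((runway, survived))

train, test = firms[:300], firms[300:]

def accuracy(threshold, data):
    """Fraction of firms the rule 'survives iff runway > threshold' gets right."""
    return sum((runway > threshold) == survived
               for runway, survived in data) / len(data)

# "Intuition": a fixed gut-feel cutoff (12 months, a plausible rule of thumb).
gut_threshold = 12.0

# "Algorithm": pick the cutoff that best fits the training data.
candidates = [t / 2 for t in range(0, 49)]  # 0.0, 0.5, ..., 24.0
fit_threshold = max(candidates, key=lambda t: accuracy(t, train))

print(f"gut rule    (cutoff={gut_threshold:5.1f}): {accuracy(gut_threshold, test):.1%}")
print(f"fitted rule (cutoff={fit_threshold:5.1f}): {accuracy(fit_threshold, test):.1%}")
```

On data like this, the fitted cutoff lands near the true 9-month boundary and outscores the fixed gut-feel cutoff on held-out firms. The point isn’t that a one-parameter threshold is a serious model; it’s that even the crudest empirical rule, tuned to data, can beat a plausible-sounding guess.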
This makes some people very uncomfortable. Sometimes the discomfort is justified, other times it isn’t. In either case, there’s a human tendency to rush to intuition’s defense whenever it gets challenged.
Some of the better arguments against algorithmic decision-making are as follows:
The algorithm has to work in the first place, or at least do a better job than the alternatives (obviously you shouldn’t rely on a broken tool); and
Beware of unintended consequences or moral hazards (mortgage-backed securities… anyone?).
Meanwhile, weaker objections include:
Algorithms are soul-crushing, creativity-smashing bludgeons that sap the joy out of everything (art is always better than science);
Math sucks, so I don’t want it to catch on more than it already has;
Embracing algorithms will make everything over-constrained and rigid, making it impossible to think “outside the box”;
Accurate predictions in <insert domain here> are simply impossible, even if it’s actually happening right now (this is called ignorance or denial); and
Human intuition is inherently better all the time, no matter what.
In between strong and weak objections are more nuanced, circumstance-dependent concerns such as:
You can’t take the “human” out of an algorithm – they’re built by humans, run by humans, and their outputs are acted on by humans. Never forget algorithms are only as good as the people surrounding them; or
Algorithms threaten my value or livelihood (fear of being replaced by a robot).
I think any algorithm or technology that solves a big human problem in an empirically better, morally positive way is a good thing. It’s a great aspiration. In medicine, it’s sometimes called a “cure.” Viewed on a continuum between perfect knowledge and total ignorance, algorithms and educated guesses are simply different waypoints. We’d all like to know everything, about everything, all the time. Until then, bits of knowledge remain scattered across the varied landscape of progress.
Algorithms aren’t all equal. Not all of them will have the same impact on their domains and the human lives they touch. We need to see some algorithms, the good ones, for what they are – a sign of progress (even if it makes us a little uncomfortable at first). Humans do learn things. Knowledge moves forward, often taking the shape of algorithms as our understanding of the universe deepens and matures. Beware of knee-jerk resistance to algorithms because, just as in the case of intuition, sometimes the biggest leaps forward come from unexpected places.
(BTW: special thanks to Jared for sending me this HBR article the other day.)