Thomas Thurston

The Black Box Fallacy: is it keeping you in the dark?

It can be a bad idea to defer blindly to algorithms, statistics or AI. For example, we’ve all seen The Terminator[i] and The Matrix[ii]. It can also be a bad idea to be dismissive of algorithms, statistics or AI. For example, most of us have seen (or read) Moneyball[iii] or The Big Short[iv], or have used Google.


Said differently (with an emphasis on AI), we all know that sometimes it can be a very bad idea for humans to accept whatever AI tells them. We’re all comfortable with that. It’s a given. What’s less comfortable is the idea that, sometimes, it’s also a very bad idea for humans not to accept what AI tells them. Between these lazy extremes of “always do” and “never do” is where the real work lives.


The Black Box Fallacy is on the “never do” extreme.


The term "black box" is used as an offhand objection to AI. It’s a way to be dismissive without having to think too hard. It usually sounds something like this: “I can’t use that AI because it’s a black box.”


The objection is derived from the idea of a physical black box that, when closed, you can’t see inside. It’s used as a metaphor to describe something complex or opaque that’s hard to understand. Said differently, calling AI a black box is shorthand for saying “I don’t fully understand what all the inputs are, or how they’re used, so the AI should be dismissed.”



What makes the Black Box Fallacy tricky is that it contains a kernel of truth. Yes, it can be a bad idea to rely on anything you don’t understand. Obviously. The Black Box idea is a fallacy, however, because it unintentionally or deliberately dismisses the equally obvious fact that we humans rely on things we don’t fully understand every day, and that doing so can often be reasonable, responsible and sound.


Do you use Google?


Probably.


Do you know everything about how Google’s search engine works? Do you know exactly what inputs it uses and exactly how those inputs are combined?


Probably not.


Apologies for the rhetorical questions. The point is, anyone who invokes the Black Box Fallacy to dismiss AI offhand while simultaneously using Google for online searches is a living contradiction. The contradiction probably hasn’t occurred to people in the midst of it, but that’s why I’m calling attention to it now.


This raises an interesting question: why do Google searches feel different? Yes, it’s a “black box,” but for some reason searching on Google doesn’t feel especially scary. The reason some black boxes feel less problematic than others, even when none of them fully disclose their inputs or how those inputs are combined, is this:


Google’s search algorithm inputs may be hidden, but its outputs are objectively verifiable.


In other words, we don’t know how Google’s search engine does what it does, but we do know how relevant (or not) its results are to what we’re looking for.


This is why the Black Box Fallacy is a fallacy. It assumes the only way, or even the best way, to judge an opaque system is to have perfect knowledge of its inputs. Perfect knowledge is a theoretical ideal and is certainly worth pursuing, but the everyday real world tends to be far more about outputs than inputs.


Think of it this way – antibiotics have saved millions of lives. Yet despite tons of progress, we still don’t fully know how antibiotics work. They're a "black box." Mysteries remain about the inputs and how those inputs interact. For example, we’re still trying to better understand why antibiotics work, why they sometimes don’t, how bacterial resistance develops, how intracellular activity works, how countless host-microbe interactions play out, etcetera.


Still, doctors can responsibly prescribe antibiotics in many cases because, despite not having perfect knowledge of the inputs, we know enough about the outputs. We’ve been able to objectively verify how often antibiotics stop infection and how this can vary under different circumstances. Yes, there are still surprises. It would be better to know all the inputs, but the outputs are where the rubber meets the road. In the case of penicillin, if we’d had to wait for perfect knowledge in theory, millions of people would be unnecessarily dead in reality.


AI should be held to a high standard, and the bigger the stakes, the higher that standard needs to be. I don’t need the same level of vetting for AI that recommends songs as I’d demand from AI that attempts to find a cure for Alzheimer’s. Still, it’s the outputs that matter most.


If the AI recommends songs I hate, I’m less likely to use it. If the AI fails to cure Alzheimer’s, clearly that’s a bad thing too. That said, if it does happen to cure Alzheimer’s, it would be a huge breakthrough even if we never fully understand the inputs.


So the next time you’re deliberating some use of AI and you start to feel anxious about a “black box,” remember that inputs may not be the gating issue. Instead, focus on the outputs.


What does this black box tell you? What results does it produce? Are those results objectively verifiable? What level of testing, control groups, statistical rigor and pragmatic interrogation have the outputs been subjected to?


Said differently, don’t hold AI to a standard of imaginary theoretical perfection – which is a goalpost that forever moves away as you approach it. Instead, hold AI to a standard akin to a Google search, or any scientific breakthrough. Does it work? How well? How often? Under what conditions does this change? What kind of vetting has been done with the outputs? Is it enough?
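To make that output-first checklist concrete, here’s a minimal sketch in Python of what vetting a black box purely by its outputs can look like. Everything in it is hypothetical scaffolding I’ve made up for illustration (the names black_box_predict, evaluate_black_box, majority_baseline, and the toy data), not any particular product’s API. The point is simply that you can measure how often an opaque model gets things right on held-out data, attach rough error bars, and compare it to a dumb baseline without ever opening the box.

```python
# A minimal sketch of output-first vetting for an opaque model.
# `black_box_predict` is a hypothetical stand-in for any AI whose
# internals you can't inspect; all you need is labeled holdout data.
import math
import random


def evaluate_black_box(predict, holdout):
    """Score an opaque predictor on data it has never seen."""
    correct = sum(1 for x, label in holdout if predict(x) == label)
    n = len(holdout)
    accuracy = correct / n
    # Rough 95% confidence interval (normal approximation), so
    # "how well does it work?" comes with error bars attached.
    margin = 1.96 * math.sqrt(accuracy * (1 - accuracy) / n)
    return accuracy, margin


def majority_baseline(holdout):
    """The dumbest possible rival: always guess the most common label."""
    labels = [label for _, label in holdout]
    top = max(set(labels), key=labels.count)
    return sum(1 for label in labels if label == top) / len(labels)


if __name__ == "__main__":
    random.seed(0)

    # Toy stand-ins: a fake labeled holdout set and a fake opaque model
    # that happens to be right about 90% of the time.
    holdout = [(i, i % 2) for i in range(200)]

    def black_box_predict(x):
        return x % 2 if random.random() < 0.9 else 1 - x % 2

    acc, margin = evaluate_black_box(black_box_predict, holdout)
    base = majority_baseline(holdout)
    print(f"Black box accuracy: {acc:.2%} ± {margin:.2%}")
    print(f"Majority baseline:  {base:.2%}")
    # The question isn't "do we understand the inputs?" but
    # "does it beat the baseline, by how much, and how often?"
```

The same pattern scales up: swap the toy data for real labeled cases, the accuracy metric for whatever outcome actually matters (relevant search results, infections cured), and the baseline for whatever you’d otherwise rely on.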


You don’t need a Ph.D. in data science or perfect information about the inputs. Rather, it boils down to this:


Black box or no black box, does the darn thing actually work?


Avoid the Black Box Fallacy because it keeps people in the dark.







[i] The Terminator. Directed by James Cameron, performances by Arnold Schwarzenegger, Linda Hamilton, and Michael Biehn, Hemdale Film Corporation, 1984.

[ii] The Matrix. Directed by Lana Wachowski and Lilly Wachowski, performances by Keanu Reeves, Laurence Fishburne, and Carrie-Anne Moss, Warner Bros., 1999.

[iii] Michael Lewis. Moneyball: The Art of Winning an Unfair Game. W. W. Norton & Company, 2003.

[iv] The Big Short. Directed by Adam McKay, performances by Christian Bale, Steve Carell, Ryan Gosling, and Brad Pitt, Paramount Pictures, 2015.
