From Antifragile:
Let us introduce the philosopher’s stone back into this conversation. Socrates is about knowledge. Not Fat Tony, who has no idea what it is.
For Tony, the distinction in life isn’t True or False, but rather sucker or nonsucker. Things are always simpler with him. In real life, as we saw with the ideas of Seneca and the bets of Thales, exposure is more important than knowledge; decision effects supersede logic. Textbook “knowledge” misses a dimension, the hidden asymmetry of benefits—just like the notion of average. The need to focus on the payoff from your actions instead of studying the structure of the world (or understanding the “True” and the “False”) has been largely missed in intellectual history. Horribly missed. The payoff, what happens to you (the benefits or harm from it), is always the most important thing, not the event itself.
Philosophers talk about truth and falsehood. People in life talk about payoff, exposure, and consequences (risks and rewards), hence fragility and antifragility. And sometimes philosophers and thinkers and those who study them conflate Truth with risks and rewards.
My point taken further is that True and False (hence what we call “belief”) play a poor, secondary role in human decisions; it is the payoff from the True and the False that dominates—and it is almost always asymmetric, with one consequence much bigger than the other, i.e., harboring positive and negative asymmetries (fragile or antifragile). Let me explain.
We check people for weapons before they board the plane. Do we believe that they are terrorists: True or False? False, as they are not likely to be terrorists (a tiny probability). But we check them nevertheless because we are fragile to terrorism. There is an asymmetry. We are interested in the payoff, and the payoff of the True (that they turn out to be terrorists) is too large while the cost of checking is too low. Do you think the nuclear reactor is likely to explode in the next year? False. Yet you want to behave as if it were True and spend millions on additional safety, because we are fragile to nuclear events. A third example: Do you think that this random medicine will harm you? False. Do you ingest these pills? No, no, no.
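One crude way to see the asymmetry is a minimal sketch with made-up numbers (the probabilities and costs below are illustrative assumptions, not figures from the text): even when the True is overwhelmingly unlikely, a small, bounded cost of checking can be worth paying against an enormous cost of being wrong.

```python
# Illustrative sketch: all numbers are hypothetical assumptions,
# chosen only to show how a tiny probability can still drive the decision
# once the consequence on one side is vastly larger than on the other.

p_true = 1e-7            # assumed probability that a given passenger is a terrorist
harm_if_unchecked = 1e9  # assumed (enormous) cost if the True occurs and nobody checked
cost_of_checking = 10.0  # assumed small, bounded cost of screening one passenger

expected_harm_no_check = p_true * harm_if_unchecked

print(f"expected harm without checking: {expected_harm_no_check:,.2f}")
print(f"cost of checking:               {cost_of_checking:,.2f}")

# The belief "they are terrorists" is almost certainly False,
# yet the decision goes the other way because of the payoff asymmetry.
decision = "check" if cost_of_checking < expected_harm_no_check else "do not check"
print(f"decision: {decision}")
```

The same shape of reasoning covers the reactor and the pills: the probability of harm is low in each case, but the side carrying the larger consequence decides.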
If you sat with a pencil and jotted down all the decisions you’ve taken in the past week, or, if you could, over your lifetime, you would realize that almost all of them have had asymmetric payoffs, with one side carrying a larger consequence than the other. You decide principally based on fragility, not probability. Or, to rephrase: you decide principally based on fragility, not so much on True/False.
Let us discuss the idea of the insufficiency of True/False in decision making in the real world, particularly when probabilities are involved. True or False are interpretations corresponding to high or low probabilities. Scientists have something called a “confidence level”; a result obtained with a 95 percent confidence level means that there is no more than a 5 percent probability of the result being wrong. The idea, of course, is inapplicable, as it ignores the size of the effects, which makes things worse with extreme events. If I tell you that some result is true with a 95 percent confidence level, you would be quite satisfied. But what if I told you that the plane was safe with a 95 percent confidence level? Even a 99 percent confidence level would not do, as a 1 percent probability of a crash would be quite alarming (today commercial planes operate with a probability of crashing below one in several hundred thousand, and the ratio keeps improving, since, as we saw, every error leads to an improvement in overall safety). So, to repeat, the probability (hence True/False) does not work in the real world; it is the payoff that matters.
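As a rough illustration of why a confidence level alone cannot settle the matter, here is a minimal sketch, with all numbers assumed for the example rather than taken from the text: the same, or even a smaller, probability of being wrong can be tolerable or unacceptable depending entirely on the size of the consequence attached to the error.

```python
# Illustrative sketch with assumed numbers: the error probability alone is not enough;
# the size of the consequence attached to that error is what decides.

def expected_harm(p_wrong: float, harm_if_wrong: float) -> float:
    """Probability of being wrong times the size of the consequence when wrong."""
    return p_wrong * harm_if_wrong

# Hypothetical comparison: a lab result at 95 percent confidence
# versus a flight that is "safe" at 99 percent confidence.
lab_result = expected_harm(p_wrong=0.05, harm_if_wrong=1.0)        # mild consequence
plane_trip = expected_harm(p_wrong=0.01, harm_if_wrong=1_000_000)  # catastrophic consequence

print(f"expected harm, lab result: {lab_result:,.2f}")   # small: acceptable
print(f"expected harm, plane trip: {plane_trip:,.2f}")   # huge: unacceptable
# A smaller probability of error (1% versus 5%) still yields the far worse exposure.
```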