Do not cross a river if it is on average four feet deep.
(a) The more nonlinear, the more the function of something divorces itself from the something. If traffic were linear, there would be no difference in travel time between the following two situations: 90,000 cars in one period and 110,000 in the next, on the one hand, or 100,000 cars in each period on the other.
(b) The more volatile the something—the more uncertainty—the more the function divorces itself from the something. Consider the average number of cars again. The function (travel time) depends on the volatility around that average: things degrade if the distribution is uneven. For the same average you would prefer 100,000 cars in both time periods; 80,000 then 120,000 would be even worse than 90,000 then 110,000.
(c) If the function is convex (antifragile), then the average of the function of something is going to be higher than the function of the average of something. And the reverse when the function is concave (fragile).
As an example for (c), which is a more complicated version of the bias, assume that the function in question is the squaring function (multiply a number by itself). This is a convex function. Take a conventional die (six sides) and consider a payoff equal to the number it lands on, that is, you get paid a number equivalent to what the die shows—1 if it lands on 1, 2 if it lands on 2, up to 6 if it lands on 6. The square of the expected (average) payoff is then ((1 + 2 + 3 + 4 + 5 + 6)/6)², that is, 3.5², here 12.25. So the function of the average equals 12.25. But the average of the function is as follows. Take the square of every payoff, (1² + 2² + 3² + 4² + 5² + 6²)/6, that is, the average square payoff, and you can see that the average of the function equals 15.17. So, since squaring is a convex function, the average of the square payoff is higher than the square of the average payoff. The difference here between 15.17 and 12.25 is what I call the hidden benefit of antifragility—here, a 24 percent “edge.”
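The dice arithmetic above can be checked in a few lines of Python (a sketch added for illustration, not part of the original text):

```python
# Jensen's inequality on a fair die: for a convex function (squaring),
# the average of f(x) exceeds f(average of x).

def average(values):
    return sum(values) / len(values)

die = [1, 2, 3, 4, 5, 6]

f_of_average = average(die) ** 2                # square of the average payoff: 3.5^2 = 12.25
average_of_f = average([x ** 2 for x in die])   # average square payoff: 91/6, about 15.17

edge = average_of_f / f_of_average - 1          # the hidden benefit of convexity

print(round(f_of_average, 2))   # 12.25
print(round(average_of_f, 2))   # 15.17
print(round(edge * 100))        # 24 percent "edge"
```

The same comparison, with the inequality reversed, would demonstrate the penalty of concavity for the fragile.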
The hidden benefit of antifragility is that you can guess worse than random and still end up outperforming. Here lies the power of optionality—your function of something is very convex, so you can be wrong and still do fine—the more uncertainty, the better.
This explains my statement that you can be dumb and antifragile and still do very well.
A squeeze occurs when people have no choice but to do something, and do it right away, regardless of the costs.
Squeezes are exacerbated by size. When one is large, one becomes vulnerable to some errors, particularly horrendous squeezes. The squeezes become nonlinearly costlier as size increases.
“small is beautiful.”
Bottlenecks are the mothers of all squeezes.
Many large-scale projects a century and a half ago were completed on time; many of the tall buildings and monuments we see today are not just more elegant than modernistic structures but were completed within, and often ahead of, schedule.
the Crystal Palace project did not use computers, and the parts were built not far from the source, with a small number of businesses involved in the supply chain. Further, there were no business schools at the time to teach something called “project management” and increase overconfidence. There were no consulting firms.
Black Swan effects are necessarily increasing, as a result of complexity, interdependence between parts, globalization, and the beastly thing called “efficiency” that makes people now sail too close to the wind.
There is an asymmetry in the way errors hit you—the same as with travel.
on a timeline going left to right, errors add to the right end, not the left end of it.
In the United States the prime example remains the Iraq war, expected by George W. Bush and his friends to cost thirty to sixty billion, which so far, taking into account all the indirect costs, may have swelled to more than two trillion—indirect costs multiply, causing chains, explosive chains of interactions, all going in the same direction of more costs, not less.
fragility (and antifragility) detection heuristic,
Let’s say you want to check whether a town is overoptimized. You measure that when traffic increases by ten thousand cars, travel time grows by ten minutes. But if traffic increases by ten thousand more cars, travel time now extends by an extra thirty minutes. Such acceleration of travel time shows that traffic is fragile and that you have too many cars; you need to reduce traffic until the acceleration becomes mild (acceleration, I repeat, is acute concavity, or a negative convexity effect).
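The heuristic reduces to probing the system with equal-sized increases in load and checking whether the extra cost accelerates. A minimal sketch, with the travel-time numbers taken from the example above (the 40-minute baseline is assumed for illustration):

```python
def is_fragile(cost_at, load, step):
    """Return True if an equal step in load causes a larger jump in cost
    the second time than the first (a positive second difference,
    i.e., accelerating costs = concavity = fragility)."""
    first_jump = cost_at(load + step) - cost_at(load)
    second_jump = cost_at(load + 2 * step) - cost_at(load + step)
    return second_jump > first_jump

# Travel time as in the text: +10 minutes for the first 10,000 extra cars,
# +30 minutes for the next 10,000. Baseline of 40 minutes is assumed.
times = {100_000: 40, 110_000: 50, 120_000: 80}

print(is_fragile(times.get, 100_000, 10_000))   # True: travel time is fragile to traffic
```

A linear system, by contrast, shows equal jumps for equal steps and the detector returns False.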
Likewise, government deficits are particularly concave to changes in economic conditions. Every additional deviation in, say, the unemployment rate—particularly when the government has debt—makes deficits incrementally worse. And financial leverage for a company has the same effect: you need to borrow more and more to get the same effect. Just as in a Ponzi scheme.
Time is an eraser rather than a builder, and a good one at breaking the fragile—whether buildings or ideas.
fragility was simply vulnerability to the volatility of the things that affect it
For the fragile, shocks bring higher harm as their intensity increases (up to a certain level).
Your car is fragile. If you drove it into a wall at 50 miles per hour, it would cause more damage than if you drove it into the same wall ten times at 5 mph. The harm at 50 mph is more than ten times the harm at 5 mph.
Jumping from a height of thirty feet (ten meters) brings more than ten times the harm of jumping from a height of three feet (one meter)—actually, thirty feet seems to be the cutoff point for death from free fall.
For the fragile, the cumulative effect of small shocks is smaller than the effect of a single large shock of equivalent total magnitude.
For the antifragile, shocks bring more benefits (equivalently, less harm) as their intensity increases (up to a point).
Simply, if for a given variation you have more upside than downside and you draw the curve, it will be convex; the opposite for the concave.
If you double the exposure to something, do you more than double the harm it will cause? If so, then this is a situation of fragility. Otherwise, you are robust.
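The doubling test can be written down directly. A sketch, with hypothetical harm functions (the quadratic form for crash damage is an illustrative assumption, consistent with the car example above):

```python
def more_than_doubles(harm, dose):
    """Fragility test from the text: does doubling the exposure
    more than double the harm?"""
    return harm(2 * dose) > 2 * harm(dose)

def crash_damage(speed):
    # assumed for illustration: damage grows with the square of speed (convex harm)
    return speed ** 2

def linear_wear(speed):
    # assumed for illustration: wear proportional to speed (no acceleration of harm)
    return 3 * speed

print(more_than_doubles(crash_damage, 25))   # True  -> fragile
print(more_than_doubles(linear_wear, 25))    # False -> robust

# The car example: one 50 mph crash outdoes ten 5 mph crashes.
print(crash_damage(50) > 10 * crash_damage(5))   # True
```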
When I last met Alison Wolf we discussed this dire problem with education and illusions of academic contribution, with Ivy League universities becoming in the eyes of the new Asian and U.S. upper class a status luxury good. Harvard is like a Vuitton bag or a Cartier watch. It is a huge drag on the middle-class parents who have been plowing an increased share of their savings into these institutions, transferring their money to administrators, real estate developers, professors, and other agents. In the United States, we have a buildup of student loans that automatically transfer to these rent extractors. In a way it is no different from racketeering: one needs a decent university “name” to get ahead in life; but we know that collectively society doesn’t appear to advance with organized education.
Which scam in history has lasted forever? I have an enormous faith in Time and History as eventual debunkers of fragility. Education is an institution that has been growing without external stressors; eventually the thing will collapse.
We check people for weapons before they board the plane. Do we believe that they are terrorists: True or False? False, as they are not likely to be terrorists (a tiny probability). But we check them nevertheless because we are fragile to terrorism.
You decide principally based on fragility, not probability. Or to rephrase, You decide principally based on fragility, not so much on True/False.
the probability (hence True/False) does not work in the real world; it is the payoff that matters.
You have taken probably a billion decisions in your life. How many times have you computed probabilities? Of course, you may do so in casinos, but not elsewhere.
doing is wiser than you are prone to believe—and more rational.
the reader can see how the ancients saw naive rationalism: by impoverishing—rather than enhancing—thought, it introduces fragility. They knew that incompleteness—half-knowledge—is always dangerous.
Clearly, Wittgenstein would be at the top of the list of modern antifragile thinkers, with his remarkable insight into the inexpressible with words.
exposure is more important than knowledge; decision effects supersede logic.
The need to focus on the payoff from your actions instead of studying the structure of the world (or understanding the “True” and the “False”) has been largely missed in intellectual history. Horribly missed.
The payoff, what happens to you (the benefits or harm from it), is always the most important thing, not the event itself.
Philosophers talk about truth and falsehood. People in life talk about payoff, exposure, and consequences (risks and rewards), hence fragility and antifragility. And sometimes philosophers, thinkers, and those who study them conflate Truth with risks and rewards.
(i) Look for optionality; in fact, rank things according to optionality; (ii) prefer open-ended, not closed-ended, payoffs; (iii) do not invest in business plans but in people, so look for someone capable of changing six or seven times over his career, or more (an idea that is part of the modus operandi of the venture capitalist Marc Andreessen); one gets immunity from the backfit narratives of the business plan by investing in people—it is simply more robust to do so; (iv) make sure you are barbelled, whatever that means in your business.
The biologist and intellectual E. O. Wilson was once asked what represented the most hindrance to the development of children; his answer was the soccer mom.
soccer moms try to eliminate the trial and error, the antifragility, from children’s lives, move them away from the ecological and transform them into nerds working on preexisting (soccer-mom-compatible) maps of reality.
Only the autodidacts are free.
Some can be more intelligent than others in a structured environment—in fact school has a selection bias as it favors those quicker in such an environment, and like anything competitive, at the expense of performance outside it.
Again, I wasn’t exactly an autodidact, since I did get degrees; I was rather a barbell autodidact as I studied the exact minimum necessary to pass any exam, overshooting accidentally once in a while, and only getting in trouble a few times by undershooting. But I read voraciously, wholesale, initially in the humanities, later in mathematics and science, and now in history—outside a curriculum.
Trial and error is freedom.
I realized that school was a plot designed to deprive people of erudition by squeezing their knowledge into a narrow set of authors. I started, around the age of thirteen, to keep a log of my reading hours, shooting for between thirty and sixty a week,
It was a barbell—play it safe at school and read on your own, have zero expectation from school.
An extraordinary proportion of work came out of the rector, the English parish priest with no worries, erudition, a large or at least comfortable house, domestic help, a reliable supply of tea and scones with clotted cream, and an abundance of free time. And, of course, optionality.
It means the right policy would be what is called “one divided by n” or “1/N” style, spreading attempts in as large a number of trials as possible: if you face n options, invest in all of them in equal amounts. Small amounts per trial, lots of trials, broader than you want. Why? Because in Extremistan, it is more important to be in something in a small amount than to miss it. As one venture capitalist told me: “The payoff can be so large that you can’t afford not to be in everything.”
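A toy simulation makes the 1/N logic concrete. All the numbers here are hypothetical (a 1-in-1,000 chance of a 10,000x payoff per trial stands in for an Extremistan-style payoff); the point is only that spreading small stakes over every option almost never misses the rare jackpot, while concentrating in a few options almost always does:

```python
import random

def jackpot():
    # hypothetical Extremistan payoff per unit staked:
    # usually nothing, rarely a 10,000x return (probability 1 in 1,000)
    return 10_000 if random.random() < 0.001 else 0

def outcome(n_backed, budget=1.0):
    """Back n_backed options equally; return total payoff minus the budget spent."""
    stake = budget / n_backed
    return sum(stake * jackpot() for _ in range(n_backed)) - budget

random.seed(7)
trials = 400
spread = [outcome(5_000) for _ in range(trials)]   # 1/N: tiny stakes, many options
narrow = [outcome(3) for _ in range(trials)]       # concentrated: a few big bets

hit_rate_spread = sum(x > 0 for x in spread) / trials
hit_rate_narrow = sum(x > 0 for x in narrow) / trials
print(hit_rate_spread, hit_rate_narrow)   # spreading almost never misses the payoff
```

The expected value of the two policies is identical by construction; what differs is the chance of ending up with nothing, which is exactly what the venture capitalist’s remark is about.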
no, you cannot centralize innovations, we tried that in Russia.
Corporations are in love with the idea of the strategic plan. They need to pay to figure out where they are going. Yet there is no evidence that strategic planning works—we even seem to have evidence against it. A management scholar, William Starbuck, has published a few papers debunking the effectiveness of planning—it makes the corporation option-blind, as it gets locked into a non-opportunistic course of action.
It turns out, strategic planning is just superstitious babble.
Nobody discusses the possibility of the birds’ not needing lectures—and nobody has any incentive to look at the number of birds that fly without such help from the great scientific establishment.
So the illusion grows and grows, with government funding, tax dollars, swelling (and self-feeding) bureaucracies in Washington all devoted to helping birds fly better.
The Soviet-Harvard illusion (lecturing birds on flying and believing that the lecture is the cause of these wonderful skills) belongs to a class of causal illusions called epiphenomena.
Whenever an economic crisis occurs, greed is pointed to as the cause, which leaves us with the impression that if we could go to the root of greed and extract it from life, crises would be eliminated. Further, we tend to believe that greed is new, since these wild economic crises are new. This is an epiphenomenon: greed is much older than systemic fragility.
one journalist (Anatole Kaletsky) saw the influence of Benoît Mandelbrot on my book Fooled by Randomness, published in 2001 when I did not know who Mandelbrot was.
Academia is well equipped to tell us what it did for us, not what it did not—hence how indispensable its methods are. This ranges across many things in life. Traders talk about their successes, so one is led to believe that they are intelligent—not looking at the hidden failures.
The real world relies on the intelligence of antifragility, but no university would swallow that—just as interventionists don’t accept that things can improve without their intervention.
It would seem a reasonable investment if one accepts the notion that university knowledge generates economic wealth. But this is a belief that comes more from superstition than empiricism.
That very same day I stopped reading economic reports. I felt nauseous for a while during this enterprise of “deintellectualization”—in fact I may not have recovered yet.
People with too much smoke and complicated tricks and methods in their brains start missing elementary, very elementary things. Persons in the real world can’t afford to miss these things; otherwise they crash the plane. Unlike researchers, they were selected for survival, not complications.
So I saw the less is more in action: the more studies, the less obvious elementary but fundamental things become; activity, on the other hand, strips things to their simplest possible model.
To our great excitement, we had proof after proof that traders had vastly, vastly more sophistication than the formula. And their sophistication preceded the formula by at least a century. It was of course picked up through natural selection, survivorship, apprenticeship to experienced practitioners, and one’s own experience.
Practitioners don’t write; they do. Birds fly and those who lecture them are the ones who write their story. So it is easy to see that history is truly written by losers with time on their hands and a protected academic position.
No, we don’t put theories into practice. We create theories out of practice. That was our story, and it is easy to infer from it—and from similar stories—that the confusion is generalized. The theory is the child of the cure, not the opposite—ex cura theoria nascitur.
And take a look at Vitruvius’ manual, De architectura, the bible of architects, written about three hundred years after Euclid’s Elements. There is little formal geometry in it, and, of course, no mention of Euclid, mostly heuristics, the kind of knowledge that comes out of a master guiding his apprentices. (Tellingly, the main mathematical result he mentions is Pythagoras’s theorem, amazed that the right angle could be formed “without the contrivances of the artisan.”) Mathematics had to have been limited to mental puzzles until the Renaissance.
Cooking schools are entirely apprenticeship based.