Don't Be a Sucker

Book Review:

Nassim Nicholas Taleb, "The Black Swan: The Impact of the Highly Improbable", Penguin Books, 2010.


The MIRI and LessWrong founder, Eliezer Yudkowsky, once wrote:
"There is a meme which says that a certain ritual of cognition is the paragon of reasonableness and so defines what the reasonable people do. But alas, the reasonable people often get their butts handed to them by the unreasonable ones, because the universe isn't always reasonable. [...] If you keep on losing, perhaps you are doing something wrong. Do not console yourself about how you were so wonderfully rational in the course of losing. That is not how things are supposed to go. It is not the Art that fails, but you who fails to grasp the Art."
At the very least, a rationalist should avoid shooting their own foot off. As a corollary, they should avoid being a sucker -- which is certainly what the philosopher/statistician/trader/essayist Prof. Nassim Nicholas Taleb would say. Taleb is quite the phenomenon, with several best-selling books under his belt, including Fooled by Randomness, Antifragile and Skin in the Game, which are part of his ongoing "Incerto" series. But the focus of today's blog post is probably his best-known book, The Black Swan.


Title notwithstanding, the swan is not the only kind of bird featured in this book. Throughout, the author uses the metaphor of a turkey who, before Thanksgiving, gets acclimated to being fed well every day, until one day its neck is wrung. The first thousand days of the turkey's life gave it a feeling of safety, yet this history said nothing about the surprise slaughter on day one thousand and one. That's why Taleb writes in Chapter 4: "In a way, all I care about is making a decision without being the turkey." In other words, "to not be a sucker in things that matter", and "to avoid crossing the street blindfolded" (p. 49). He repeats this in a footnote on page 268, where he writes that "people think that science is about formulating predictions (even when wrong). To me, science is about how not to be a sucker." This echoes Yudkowsky's notion of non-self-destruction, which is why I think Taleb has prime rationalist credentials.

The "turkey problem" runs in the same vein as the Black Swan problem, which itself is based on the problem of induction -- the limitations of generalizing from past data. As the legend goes, Europeans for centuries used to believe that all swans were white, until black swans were discovered in Australia. The philosophical-logical problem has been popularized by David Hume, but was formulated much earlier by Sextus Empiricus and Algazel. The turkey example comes from Bertrand Russell. But Taleb's own version of the Black Swan problem goes further: he wants to emphasize the role of exceptional/extreme events in life, and the need to be robust to negative Black Swans (while being exposed to positive ones).

To be clear, a Black Swan event according to Taleb is defined as (i) rare; (ii) impactful; and (iii) predictable only in hindsight. In the book's prologue he argues that most things in our world -- ranging from world wars to technological innovations, epidemics, economic crashes, the spread of particular religions, and even our personal lives -- are the products of Black Swan dynamics. In other words, a small number of unplanned but significant events explain most of the cumulative effect. Moreover, Taleb argues that we (including social scientists) tend to be blind to Black Swans, such that we try to make predictions and forecasts without focusing on robustness to errors or the consequences of such errors, and then call ourselves "experts".

According to Taleb, we focus too much on what we know, and not enough on what we don't know. We focus too much on precise facts, and not enough on general rules. For example, the terrorist attacks of 11 September 2001 happened because people didn't conceive of the risk beforehand; and afterwards people learned to put locks on cockpit doors, but they didn't learn that some events are largely unpredictable. We remember famous martyrs rather than silent heroes. (Had a hypothetical legislator imposed locks on cockpit doors before 9/11, preventing the attacks, that person would probably not have been honored with public statues or mentions in history books.) We focus too much on the "normal" states of phenomena and not enough on the "outliers". We focus too much on pure, ideal Platonic forms and not enough on messy realities. Thus, to understand phenomena, we need to start by looking at the extremes, and by practicing true decision-making under uncertainty rather than engaging in nerdy "cheap talk".

This idea that "what you don't know is more relevant than what you do know" is reflected in the title of Part 1 of The Black Swan: "Umberto Eco's Antilibrary". The writer Umberto Eco keeps a large personal library as a research tool, whose purpose is not to show off how many books he has read, but to collect as many unread books as he can. As Nassim Nicholas Taleb says: "Read books are far less valuable than unread ones. ... Indeed, the more you know, the larger the rows of unread books. Let us call this collection of unread books an antilibrary" (p. 1). The point of this anecdote is to illustrate how people generally, unlike Eco, seek to validate what they know. This is a feature of human psychology, which is dealt with in the first half of the book -- Taleb calls it "literary" in subject and treatment (presumably because it reviews the literature on how our perceptions of history are distorted).

The second half of the book deals more with business and natural science, including the "unadvertised limits" of science and prediction (Part 2) and the technical science of "complexity" (Part 3). Part 4 briefly wraps up the original edition and includes a handy glossary. But since I am reviewing the second edition, there is also a fifth part, a Postscript Essay, which offers additional reflections on robustness and fragility and responds to some misconceptions about the first edition. At the end there are notes, a lengthy bibliography, and an index.

***

Be an Antischolar

Nassim Nicholas Taleb uses the term "antischolar" to refer to a skeptical empiricist: someone who does not treat his or her knowledge as a treasure. The skeptical and empirical traditions owe much to Menodotus of Nicomedia, Sextus Empiricus, Al-Ghazali, Pierre Bayle, Pierre-Daniel Huet, and David Hume. Unfortunately, people seem to prefer the anecdotal over the empirical -- that is the topic of Part 1 of The Black Swan. Taleb begins by describing his background as a Levantine (i.e. Greco-Syrian-Byzantine) immigrant to the United States, who, as a teenager, witnessed the Lebanese civil war -- a Black Swan event that came out of nowhere and lasted much longer than people expected. Taleb uses this story to illustrate the "triplet of opacity":
  • The illusion of understanding; we think we know what is going on but we don't.
  • The retrospective distortion; history seems clearer in the rear-view mirror due to our minds' self-deluding ability to produce explanations.
  • The overvaluation of factual information; supposed experts who try to learn every detail are at no advantage when it comes to predicting random events.
The last point relates to the tendency of elite thinkers to Platonify -- to "cut reality into crisp shapes" (p. 15) and to ignore "those objects of seemingly messier and less tractable structures" (p. 303). Platonicity leads to categorizing the world into arbitrary clusters, for example the idea that libertarians are right-wing (or left-wing). The consequence is a reduction in complexity that skews our understanding of reality, often increasing our blindness to Black Swans. What may be interesting to readers familiar with the rationality community is that Taleb also defines Platonicity in terms of the map-territory distinction. (As he writes in the prologue, p. xxx: "These models [maps of reality] are like potentially helpful medicines that carry random but very severe side effects.")

Taleb continues his story by talking about how he became obsessed with the Black Swan problem after studying at Wharton and working at an investment bank in Manhattan. The stock market crash of 1987 was another Black Swan, but this time Taleb had bet on an improbable yet consequential event. While other traders were sobbing, he made "f*** you money". This gave him the financial independence to become a flâneur and meditate on his ideas. One of his conclusions is that the Black Swan problem is to a large extent psychological.

Before continuing, it is important to define the crucial distinction between "Mediocristan" and "Extremistan". Mediocristan is, as the name suggests, an environment with very few large or extreme events; most observations fit onto a Gaussian bell curve (i.e. normal distribution), and no single data point can significantly skew the aggregate. For example, the world's tallest or heaviest person will still represent only a tiny fraction of the total height or mass of the population. Mediocristan is subject to "mild" (type 1) randomness, and things tend to be nonscalable: a dentist or a prostitute works in a nonscalable profession because they cannot see more than a certain number of clients per hour, and their income depends on continuous effort. By contrast, Extremistan is the province where large deviations are possible and a single observation can disproportionately impact the total -- it is subject to "wild" (type 2) randomness, and things tend to be scalable. Some professionals, like speculators and writers, can add extra zeroes to their income without much more work, if they have high-quality ideas; they are not limited by the hours of their labor. But in Extremistan, the divide between success and failure is much more... well, extreme. A few giants can take a large share of the pie, and the very wealthiest skew the average.
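To make the contrast concrete, here is a minimal simulation of my own (not from the book; the height parameters and sample size are arbitrary, while the Pareto exponent of 1.1 is Taleb's estimate for American net worth, discussed later): the largest observation in a Gaussian sample barely registers, while under a Pareto tail a single observation can claim a sizeable slice of the sum.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Mediocristan: human heights in cm, roughly Gaussian.
heights = rng.normal(170, 10, n)
print(f"tallest person's share of total height: {heights.max() / heights.sum():.4%}")

# Extremistan: wealth with a Pareto tail, P(X > x) = x**(-1.1), sampled by
# inverse transform; a single draw can take a sizeable slice of the total.
u = 1.0 - rng.uniform(size=n)   # uniform on (0, 1]
wealth = u ** (-1 / 1.1)
print(f"richest person's share of total wealth: {wealth.max() / wealth.sum():.2%}")
```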

In general, social and informational matters tend to belong to Extremistan. Wealth, book sales, academic citations, company size, city population, deaths in a war, commodity prices and financial markets all seem to fall into this category. In other words, they are especially vulnerable to the Black Swan. On the other hand, safer from Black Swans are the physical quantities of Mediocristan: the height and weight of people, the income of a prostitute, car accidents, and so on.

Now we can better understand what Nassim Nicholas Taleb means when he writes that people (especially economists and social scientists) tend to focus excessively on the Platonic form of the bell curve and neglect outliers and extremes. To put it another way: people treat event generators from Extremistan as if they belong to Mediocristan! But why do people do this? As Taleb writes in Chapter 4, it is simply more convenient to assume that Black Swan surprises don't exist; it sweeps the problem of induction under the rug. However, this is the road that leads to being a turkey.

There are five other psychological aspects to the Black Swan problem. They can all be summarized under the statement, "the cosmetic and the Platonic rise naturally to the surface" (p. 131).
  1. The error of confirmation: we generalize from the seen to the unseen in a way that mistakes absence of evidence for evidence of absence. When a medical test finds "no evidence of disease", that is not the same as "evidence of no disease"! Yet people tend to focus on corroborating instances (e.g. signs of health) rather than looking into the dark at negative instances (e.g. signs of cancer). The problem, as the philosopher of science Karl Popper pointed out, is that you can never be certain that a theory is right, but you can be certain that it is wrong by falsifying it. This asymmetry suggests we should practice one-sided semiskepticism.
  2. The narrative fallacy: we like stories that simplify reality and attribute causes to everything. By default, our brains go on theorizing -- it takes deliberate effort not to invent explanations. The need to reduce dimensionality also has something to do with the nature of information: information is costly to obtain, store, and retrieve. (This applies as much to robots as to humans.) Compressing information lets us store more of it in our memory. Alas, this imposition of order (à la Platonicity) may leave out Black Swans. It also leads to irrationalities like the conjunction fallacy. The solution is, of course, to favor experimental experience over storytelling.
  3. The emotion of hope vs. the existence of Black Swans: human nature simply isn't programmed for Black Swans. In Extremistan, success is concentrated very unequally, yet our reward mechanisms (both biological and social) have evolved for a steady and regular environment. Most scientists and artists are not famous or influential -- this makes them seem like losers or idiots to others. One day they might be vindicated by a Nobel Prize... or not. Reality is lumpy and nonlinear. The main factor that drives people to gamble on writing or entrepreneurship is hope; but sadly, even if these people do win, they might be worse off in terms of happiness than if they had spread their satisfaction out over many small but frequent rewards.
  4. The problem of silent evidence: what we see is not necessarily all there is, because history hides Black Swans from us. Thinkers as far back as Cicero, Montaigne and Francis Bacon have recognized that superstitions arise when we gather evidence in biased ways. We see evidence for miracles in shipwreck survivors who prayed, yet we fail to see those who prayed and then drowned. As the saying goes: history is written by the victors. The difference between successful millionaire CEOs and failed ones is not courage, risk-taking or optimism (as the successful ones write in their memoirs), but luck. Taleb calls this bias vicious, because "the more lethal the risks, the less visible they will be" (p. 108). Worse, if we have been lucky enough to survive risks in the past, we will retrospectively underestimate how serious the threats were. It may be that human existence itself is a Black Swan, as per the anthropic bias.
  5. The ludic fallacy: we tend to build knowledge from the world of games (hence "ludus", the Latin for games). This is because we focus too much on a well-defined, specific list of Black Swans that come easily to mind -- Taleb also calls this "tunneling", presumably after tunnel vision. When we tunnel, we scorn the abstract by favoring contextual thinking. For example, "nerdy" types tend to think inside the box when it comes to uncertainty: they focus on the Platonic world of casino games, which can be modeled by Gaussian probabilities. In fact, gambling profits belong to Mediocristan, and as such, the "sterilized" uncertainty of the casino does not transfer to the wild, non-computable uncertainty of real life. On the flip side, Taleb notes that military people seem to deal with randomness better than academics, because they understand the idea of the "unknown unknown".
According to Taleb, these are really just one topic: the unseen side of Umberto Eco's library. Or put another way, "... we are naturally shallow and superficial -- and we do not know it" (p. 132). Taleb's advice is to learn to spot the difference between the sensational and the empirical.

***

Just Give Up Already

Part 2 of The Black Swan dives deeper into the topic of prediction. Taleb calls it "scandalous" that we continue to fall for people who, using phony mathematics, purport to help us navigate uncertainty even though their track record is dismal. The irony is that the more knowledge we have, the more confident we tend to be, and this confidence makes us less competent at prediction. An example is the planning fallacy, as illustrated by the Sydney Opera House, which opened ten years late and cost nearly AU$100 million more than initially planned. Taleb talks about "epistemic arrogance": thinking one knows more than one actually knows. Unfortunately, most of us are bad at calibrating our level of confidence.

The consequence is that we underestimate the role of uncertainty and outliers -- Black Swans. Professional forecasters are especially vulnerable to this kind of miscalculation, because they are more likely to gather lots of information. And more information can actually make people worse at prediction, because (i) some of the information may be useless noise; and (ii) people experience confirmation bias and don't easily reverse their initial theories. Indeed, experiments have shown that when people are given more information, they don't become more accurate; they just become more confident. This leads to something Taleb calls "the expert problem" or "empty-suit problem": the puzzling abundance of so-called experts who do not have genuinely unique abilities. Clinical psychologists, economists, financial risk experts, political and military analysts, CEOs and college admissions officers tend to be non-experts.

To be fair, some professions, especially those that don't deal with things that move (like the future), do have real expert skills. These include physicists, accountants, chess masters, and judges of soil and livestock. But what is it about political and economic forecasters that leads to epistemic arrogance? Well, one factor is the self-serving bias of attributing successes to one's own expertise while attributing failures to external circumstances. Another factor is that they fool themselves with complex and sophisticated statistical methods, even when those models don't necessarily improve accuracy. (This relates to the tunneling effect discussed previously.) Finally, they forecast without incorporating an error rate, with the upshot that they neglect the importance of variability, neglect forecast degradation over time, and of course, neglect Black Swans. So what are these forecasters supposed to do? As Taleb writes: get another job.
"People who are trapped in their jobs who forecast simply because "that's my job," knowing pretty well that their forecast is ineffectual, are not what I would call ethical. What they do is no different from repeating lies simply because "it's my job." [...] Please, don't drive a school bus blindfolded." (p. 163)
Ouch. Next, Taleb goes on to explain why prediction has inherent limitations. Here, he invokes the Berra-Hadamard-Poincaré-Hayek-Popper conjecture.
  • From the baseball coach Yogi Berra we get the sayings, "It is tough to make predictions, especially about the future," and "You can observe a lot by just watching," and "You got to be very careful if you don't know where you're going, because you might not get there."
  • From mathematician Jacques Hadamard we get the insight that a tiny change in the input parameters of a model can lead to wildly divergent results -- a point that Edward Lorenz later built on to describe the "butterfly effect" and establish chaos theory.
  • From Henri Poincaré we get the "three-body problem", which shows the following: take a system of two planets and you can easily predict their behavior; but add a third body (e.g. a comet) and your error rate grows the farther into the future you project. The dynamical system that is our world is, of course, even more complicated than this!
  • From economist-philosopher Friedrich Hayek we get the idea that a single agent cannot aggregate all the information in a system. Hayek used this to argue for libertarianism, but also against using the tools of hard sciences in the social sciences.
  • From Sir Doktor Professor Karl Raimund Popper (as Taleb refers to him) we get the insight that historical events are unpredictable because in order to predict them, you would also need to predict technological innovation... but if you are certain about what an invention will look like, then you can already build it! Thus, "we do not know what we will know" (p. 173).
These structural limitations imply that many great discoveries and inventions came serendipitously. It also means that the true significance of a find is often not realized immediately. Yet, economists have imposed top-down, sterilized models onto social matters (see: "rationality" and "optimization") in order to aid projections. In this context, Taleb celebrates the empirical psychologists who study heuristics and biases, as they have demonstrated how people make inconsistent choices.

What to do about epistemic arrogance? Taleb proposes epistemic humility: holding one's knowledge in great suspicion. If we could somehow structure our laws around human fallibility, we would have an "Epistemocracy", as promoted by Michel de Montaigne. This would be Taleb's idea of utopia, governed by those who emphasize ignorance rather than knowledge. Importantly, this includes awareness of the asymmetry between past and future; i.e. understanding that it is easier to predict how an ice cube will melt into a puddle than to reconstruct, from the puddle, the shape of the ice cube it came from. Some folks seem to have "future blindness", in that they fail to see that future people will view us today the way we view past people (with pity). One cannot reverse-engineer history. Know history, but resist the temptation to theorize and find causal links.

But Taleb also gives more practical advice. The first piece is, surprisingly, to be human: go ahead and make predictions and have opinions -- just don't depend on large-scale predictions that can harm you. You can forecast your next picnic, but beware economic pundits' social security forecasts for the year 2040. Second, be prepared. Third, expose yourself to positive accidents through (i) trial and error, and (ii) the "barbell strategy", which entails putting about 90% of your investments in maximally safe instruments and the remaining 10% in extremely speculative leveraged bets. Collect as many opportunities as possible -- for example by living in a big city, going to cocktail parties, and getting into the business of movies, publishing, venture capital, or scientific research. Base your decisions on how you guess the unknown might affect you.
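A rough sketch of the barbell logic (the 90/10 split is from the book; the risk-free yield here is my own made-up number): the downside is capped near the size of the speculative sleeve, while the upside stays open-ended.

```python
SAFE, SPECULATIVE = 0.90, 0.10   # the 90/10 barbell split
RISK_FREE = 0.04                 # assumed T-bill yield -- not from the book

def barbell_return(speculative_return: float) -> float:
    """Overall portfolio return given the outcome of the speculative sleeve."""
    return SAFE * RISK_FREE + SPECULATIVE * speculative_return

print(f"speculative bets wiped out: {barbell_return(-1.0):+.1%}")   # floor of about -6%
print(f"a 20x positive Black Swan:  {barbell_return(20.0):+.1%}")   # open-ended upside
```

No matter how wild the bets get, you cannot lose more than the 10% sleeve (plus forgone interest); the Black Swan exposure is all on the upside.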

The general theme of Part 2 is that "we just can't predict". The plea for epistemocracy and petition to "get another job" is reminiscent of an article by Eliezer Yudkowsky entitled "Just Lose Hope Already". In it, Yudkowsky cites the hedge fund Long-Term Capital Management (LTCM) as an example of what happens when people refuse to admit when they have lost. Interestingly, LTCM pops up in Part 3 of Taleb's The Black Swan too, as we shall see...

***

Fake Uncertainty

Part 3 is the most technical part of the book. Here, Nassim Nicholas Taleb explains why the world is moving deeper into Extremistan, why the Gaussian bell curve is a "great intellectual fraud", how we can avoid being suckers by turning Black Swans into Gray Swans, and how some philosophers mislead us by focusing on phony uncertainty.

It is worth pausing to make the point that not all events from Extremistan are Black Swans; some of these events are somewhat predictable and scientifically tractable, such that they are less surprising. Taleb refers to them as Gray Swans, because it's possible to know a bit about how they scale. They tend to follow a distribution of Mandelbrotian (fractal) randomness, or "power laws". While we cannot produce precise calculations, we can at least take them into account. Such events include earthquakes, stock market crashes, and best-selling books.

However, if you try to model them with bell-curve statistics, you'll have a bad time. The economists Robert C. Merton and Myron Scholes won the 1997 Nobel Prize in economics for their option-pricing formula -- yet they were also founding partners of LTCM, which accumulated one of the biggest trading losses in history in the summer of 1998, abruptly going bust. Taleb uses this as an example of the dangers of Platonified knowledge and the ludic fallacy, which he sees as endemic in Modern Portfolio Theory and neoclassical economics in general. He even calls out Paul Samuelson, John Hicks, Kenneth Arrow and Gérard Debreu as "Locke's madmen", because they use impeccable, rigorous reasoning but start from faulty premises. Others in this category include Harry Markowitz and William Sharpe. All of them have been crowned by the Bank of Sweden's Nobel Committee. This is part of the reason why Taleb advocates "academic libertarianism", i.e. skepticism of academic tenure and institutional authority. According to Taleb, academic guilds will favor cosmetic yet fake knowledge when it furthers their self-perpetuation.

But why exactly is the bell curve a "great intellectual fraud" (GIF)? Well, the issue that Taleb has been hinting at throughout the book is that the Gaussian may be appropriate in Mediocristan, but not in Extremistan. Mathematically speaking, the further an observation lies from the middle of the bell curve, the less likely it becomes -- and the decline accelerates, each step away from the mean costing more probability than the last. For example, the odds that a person is 10 centimeters taller than average may be 1 in 6.3, but the odds of being 50 centimeters taller than average are about 1 in 3,500,000. With a large enough sample, no single observation will significantly impact the total, which means that fluctuations around the average tend to be small and "cancel out". This gives people a degree of certainty, which is why they like to use the Gaussian bell curve. Taleb describes it as a "longing for the aurea mediocritas, the golden mean" (p. 241), and blames Adolphe Quételet and Francis Galton for walking around with bell curves in their heads. Unfortunately, reality is not Mediocristan and does not resemble the idealized, Platonic form of the Gaussian and related concepts such as standard deviation, correlation, regression, and R-square. Gaussian statistics can be used when you're running a casino (with bet limits) or doing empirical psychology or genetics, but in matters that belong to Extremistan, like portfolio returns or book sales, "fuhgedaboudit" (in Taleb's words).
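Taleb's height odds are reproduced by a Gaussian with a standard deviation of about 10 cm -- an assumption on my part, since the book doesn't spell out the parameters -- which makes +10 cm a one-sigma event and +50 cm a five-sigma event:

```python
import math

def gaussian_tail(z: float) -> float:
    """P(Z > z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# Each extra 10 cm above the average costs vastly more probability than the last.
for cm in (10, 20, 30, 40, 50):
    p = gaussian_tail(cm / 10)
    print(f"+{cm} cm above average: 1 in {1 / p:,.1f}")
```

The odds go from 1 in 6.3 to roughly 1 in 3.5 million in five equal steps -- the accelerating decline that makes genuine Gaussian outliers vanishingly rare.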

By contrast, look at the distribution of wealth: if the odds of having a net worth above €1 million in Europe are 1 in 62.5, then the odds of having a net worth above €4 million are about 1 in 1,000; above €8 million, 1 in 4,000. Unlike the height example, the rate of decrease here remains constant. That is what Taleb means when he calls it scalable. These kinds of distributions are sometimes known as Pareto power laws, but Taleb prefers to call them Mandelbrotian, after his friend Benoît Mandelbrot, to whom The Black Swan is dedicated.
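The quoted odds imply a tail exponent of about 2: each doubling of the wealth threshold multiplies the rarity by 2^2 = 4 (my inference from the numbers above; the book doesn't state the exponent for this example).

```python
# Scalable wealth: P(net worth > x) proportional to x**(-alpha), with alpha = 2.
alpha = 2.0
base_odds = 62.5   # 1 in 62.5 Europeans have a net worth above 1 million euros

for millions in (1, 2, 4, 8, 16):
    odds = base_odds * millions ** alpha
    print(f"above {millions} million euros: 1 in {odds:,g}")
```

Unlike the Gaussian case, the relative rarity per doubling never accelerates -- that constant rate is exactly what "scalable" means.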

Mandelbrot is perhaps best known for the Mandelbrot set, whose visualizations produce the famous fractal images.


Taleb himself feels a connection to Benoît Mandelbrot because "...it was as if we both originated from the same country, meeting after years of frustrating exile, and were finally able to speak in our mother tongue without straining" (p. 254). His respect for Mandelbrot is due to the way Mandelbrot "connected the dots". The geometry of reality does not look like pure Euclidean triangles, circles and squares -- it looks more jagged, rough, and broken. When we look at mountains and trees, we see geometric patterns similar to those in rocks and branches; this recursive resemblance across scales is how Mother Nature incorporates fractals. There is an aesthetic aspect to fractals, too, as they are also found in the visual arts, music and poetry.

But Mandelbrot's key mathematical idea was that statistical measures of wealth, city size, war casualties etc. are also somewhat scale-independent (unlike the Gaussian). For example, if you assume that the power law exponent of city population is about 1.3, it means that the top 1% of cities contribute 34% of the total population. Taleb assumes that the exponent for American net worth is 1.1 (meaning the top 1% owns 66% of the wealth). Notice the difference between 1.1 and 1.3! But keep in mind that these are approximations; you often cannot compute the exponents precisely, because it may take a long time for a fractal process to reveal its properties. Just as a water puddle can be made from variously shaped ice cubes, a sample of data can be generated by various processes. This is why we ought to be cautious around complexity theory and "dynamical systems" with "critical points": people in these fields try to be precisely predictive, but in doing so they commit the error of traveling from representation to reality (rather than the other way around). While you shouldn't take these models at face value, they do provide the benefit of making us less surprised when the stock market crashes. As Taleb writes: "You are indeed much safer if you know where the wild animals are" (p. 273).
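Those top-1% shares follow from standard Pareto arithmetic (the formula below is textbook power-law math, not something the book spells out): with tail exponent α, the share of the total held by the top fraction p is p^((α-1)/α).

```python
def top_share(alpha: float, p: float = 0.01) -> float:
    """Share of the total held by the top fraction p under a Pareto tail."""
    return p ** ((alpha - 1) / alpha)

print(f"city population, alpha = 1.3: top 1% holds {top_share(1.3):.1%}")  # ~34%
print(f"US net worth,    alpha = 1.1: top 1% holds {top_share(1.1):.1%}")  # ~66%
```

A shift of just 0.2 in the exponent nearly doubles the concentration -- which is why small errors in estimating the exponent matter so much.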

So, Mandelbrotian fractal randomness can help turn Black Swans gray (but not white). It gives us a glimmer of hope as the world moves deeper into the wild randomness of Extremistan. According to Taleb we are "gliding into disorder" due to globalization (i.e. interlocking fragility, especially in financial institutions) and the World Wide Web (which allows garage startups like Google to rise to dominance very quickly). Actually, the concentration of banks and corporations will create more periods of calm and stability, and make crises less likely... it's just that when a crisis does happen, it will be more devastating and global in scope. Nobody stays a king for long in Extremistan. Models such as winner-take-all tournaments, the "Matthew effect" (cumulative advantage), preferential attachment, and network theory tell us how inequalities arise. But when you incorporate the role of luck and randomness, you can see how these inequalities are (i) arbitrary, and (ii) impermanent. As Taleb puts it, luck "both made and unmade Rome" (p. 222). He finds randomness to be unfair and revolting; it gives a few superstars a disproportionate share of intellectual influence. Studies have even shown that people with higher social rank live longer! At the same time, Taleb admits that luck is more egalitarian than intelligence, because people don't choose their abilities.

Unfortunately, the whole issue of randomness is swept under the rug by phony philosophers who treat uncertainty as if we were playing a casino game (see the ludic fallacy above). They invoke Heisenberg's uncertainty principle as the "limit of prediction", while ignoring the elephant in the room: that they can't even predict how long their marriages will last, or how the conflict between Israel and Hezbollah will develop. Taleb argues that these philosophers can be dangerous, because their job is to help us think -- yet they can be rather closed-minded, compartmentalizing their "critical thinking" away from subjects like Gaussian methods. They are happy to debate Wittgenstein but don't question the abilities of their pension plan manager. (Note how this section is reminiscent of Yudkowsky's essay "Outside the Laboratory".)

Taleb's antidote to Black Swans is to act more like Fat Tony than Dr. John. These are purely fictional characters, but Taleb uses them to illustrate a point. Tony from Brooklyn is gregarious yet shrewd: he can effortlessly make a buck through his connections and by finding the sucker (often a bank clerk). He may not ace an IQ test, but his knack for thinking outside the box leads him to do well in any real-life situation. By contrast, Dr. John is an engineer-turned-actuary who wears a suit and always arrives on time. He is a nerd but is blind to Black Swans due to his Platonic approach to risk. At his insurance company job he uses methods like "value-at-risk" which are based on bell curve mathematics, defending them on the grounds that "these models are all we have" and that they fit "the rigor of economic theory". According to Taleb, Fat Tony is actually the more scientific one here, because unlike Dr. John, Tony is a skeptical empiricist who develops his intuitions from bottom-up practice and observation. He minimizes theory, is not afraid of messy approximate math, and is willing to admit when he doesn't know.

***

Life is a Black Swan

The original edition of The Black Swan ends with the following consideration:
"In refusing to run to catch trains, I have felt the true value of elegance and aesthetics in behavior, a sense of being in control of my time, my schedule, and my life. Missing a train is only painful if you run after it! [...] It is more difficult to be a loser in a game you set up yourself.
In Black Swan terms, this means that you are exposed to the improbable only if you let it control you." (p. 297).
This Stoic mindset reappears at the end of the revised edition too. Here, Taleb cites Seneca and the willingness to lose everything, every day. This extends to one's wife, children, and one's own life. As Taleb writes: "A Black Swan cannot so easily destroy a man who has an idea of his final destination" (p. 377), referring to his family cemetery in Lebanon.

The rest of the Postscript Essay has eight mini-chapters:
  • Firstly, we can learn about fragility and robustness from Mother Nature, because she is the oldest and wisest system around. Mother Nature has picked up a few tricks: (i) redundancies act as insurance against perturbations; (ii) big things tend to be more fragile; and (iii) hyper-connectivity allows more scalability, and hence Extremistan events. According to Taleb, these ideas should make us more aware of the side-effects of globalization.
  • Living organisms need variability (i.e. occasional acute stressors) to avoid fragility. For example, intermittent fasting plus occasional yet random high-intensity exercise are presumably better than steady nutrition and steady exercise. This is an application of the "barbell strategy".
  • In the third mini-chapter, Taleb addresses some misconceptions. For instance, some folks say that any map is better than no map, but Taleb responds that if you don't have a good map of La Guardia airport you wouldn't fly there -- you'd take the train or stay home. This doesn't mean "no forecasts", but "no sterile forecasts with huge error". Some people want to talk about "objective" Black Swans, but Taleb points out that "a Black Swan for the turkey is not a Black Swan for the butcher". And others have (perhaps unfairly?) suggested that Taleb's work is nothing new atop Popperian falsification, Knightian uncertainty, power laws, behavioral economics and so on.
  • A common mistake by otherwise intelligent people is to use an overly systematizing mindset (almost like Asperger syndrome) which makes it difficult to imagine the world from the perspective of other people's knowledge, thereby blinding them to risk. In other words, they don't take into account the meta-probability that their probabilistic measure is wrong. (Perhaps this can be compared to Yudkowsky's Mind Projection Fallacy.)
  • Decision-making is not about epistemology (what is True or False) but about the payoffs of decisions. A huge problem in modern philosophy is how we can represent rare events a priori given that their frequency cannot be estimated from observation. Statistics faces a self-reference problem: we need data to tell us which probability distribution we are dealing with, yet in a circular regress we need to know the distribution to tell us how much data is enough. Taleb's solution is not to bother computing small probabilities -- just categorize decisions based on the severity of potential estimation error.
  • This brings us to the "Four Quadrants", a map which makes only one a priori assumption, namely that large events reside in Extremistan, not Mediocristan. This gives us the following:
    • First Quadrant: decisions with simple binary payoffs in Mediocristan (e.g. yes/no answers in a psychology experiment or medical test);
    • Second Quadrant: complex open-ended payoffs in Mediocristan (e.g. some casino bets);
    • Third Quadrant: simple payoffs in Extremistan;
    • Fourth Quadrant: complex payoffs in Extremistan; here be dragons!
  • You cannot change the distribution, but you can change your exposure. Thus, for negative Black Swans we should move from the Fourth Quadrant into the third one. This is simple advice, yet people prefer to hear positive advice (e.g. "actionable next steps"). Taleb points out that current knowledge and methods in science and medicine can actually harm us in some places (see iatrogenics), so using no models can be better than using defective mathematical acrobatics.
  • Finally, Taleb gives some practical tips for robustness in society. Don't give bank executives short-term bonuses when their long-term returns are negative. Avoid debt and overspecialization, and learn to love savings and insurance. Beware predictions of remote events with open-ended payoffs and accompanying risk metrics like "Sharpe ratio". Remember that absence of volatility does not mean absence of risk. Let fragile things fail early when they're still small (and nationalize those that really need to be bailed out). Don't listen to the economics establishment that failed us in 2008. Ban complex financial products that nobody understands. Do not depend on investments that you do not control (better to get anxiety from your own business). And "make an omelet with the broken eggs".
So there you have it: The Black Swan in a (long) nutshell. According to the author, his main problem is that many folks agree with him on the role of rare events, yet still use conventional risk metrics based on Mediocristan. As he writes in Chapter 17: "Not one of these users of portfolio theory in twenty years of debates, explained how they could accept the Gaussian framework as well as large deviations. Not one" (p. 281). In the same chapter is one of my favorite graphs from the book:


It shows how ten days account for nearly half the returns in fifty years! It's a nice piece of evidence for Taleb's general message that the way we perceive events is distorted, that there are hard limits to prediction because incomplete information is practically indistinguishable from randomness, and that we ought not to apply bell curves from statistics books to real-world problems but go from problems to books.
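The pattern is easy to reproduce with a toy fat-tailed return generator (a stand-in for the market data behind Taleb's graph; the drift, scale and Student-t tail are my own made-up choices, not his):

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 50 * 252   # roughly fifty years of trading days

# Fat-tailed daily log-returns: small drift plus Student-t (df=3) noise.
log_r = 0.0003 + 0.01 * rng.standard_t(df=3, size=n_days)

full_growth = np.exp(log_r.sum())
without_best_10 = np.exp(log_r.sum() - np.sort(log_r)[-10:].sum())
print(f"growth over the full period:     {full_growth:,.1f}x")
print(f"growth without the 10 best days: {without_best_10:,.1f}x")
```

Dropping ten days out of more than twelve thousand slashes the cumulative return -- the signature of Extremistan, where a handful of observations carry the total.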

***

What is amazing about The Black Swan is that it can serve as a foundation for an entire worldview. Consider economic theory:
"... the reason free markets work is because they allow people to be lucky, thanks to aggressive trial and error, not by giving rewards or "incentives" for skill. The strategy is, then, to tinker as much as possible and try to collect as many Black Swan opportunities as you can." (p. xxv)
Taleb uses this point to compare the United States favorably to Europe. He argues that the US is more creative because it tolerates more bottom-up tinkering and "undirected trial and error" (versus the European mathematical snobbery). But it's not all rainbows and sunshine: "Indeed, the tragedy of capitalism is that since the quality of the returns is not observable from past data, owners of companies, namely shareholders, can be taken for a ride by the managers who show returns and cosmetic profitability but in fact might be taking hidden risks" (p. 44).

Of course, the ideal scenario according to the author would be to be conservative and skeptical when (model) errors can hurt, and aggressive in gaining exposure to positive Black Swans. (Side note: this does not mean literally collecting lottery tickets, since they are not scalable.)

The skeptical empiricism Taleb promotes also has implications for conformity and contrarianism. As he writes in Chapter 3: "To be genuinely empirical is to reflect reality as faithfully as possible; to be honorable implies not fearing the appearance and consequences of being outlandish" (p. 27). This has much in common with Eliezer Yudkowsky's notion of lonely dissent: "If you do things differently only when you see an overwhelmingly good reason, you will have more than enough trouble to last you the rest of your life." Suffice it to say, both Yudkowsky and Taleb are rather unconventional fellows.

One unconventional practice, at least among the educated, is to avoid reading the news. Bryan Caplan has blogged that most news is unimportant, and Robin Hanson has written that news is a substitute for insight (and is, of course, about signalling). Nassim Nicholas Taleb echoes these concerns when he says in The Black Swan that newspapers favor the sensational yet ordinary (i.e. focus on narrativity and neglect silent evidence), and that reading the news can actually decrease your knowledge of the world.

When it comes to general life philosophy, Taleb echoes Montaigne in accepting human weaknesses and imperfections. As he writes in Chapter 13:
"We cannot teach people to withhold judgment; judgments are embedded in the way we view objects. I do not see a "tree"; I see a pleasant or an ugly tree. It is not possible without great, paralyzing effort to strip these small values we attach to matters. Likewise, it is not possible to hold a situation in one's head without some element of bias. Something in our dear human nature makes us want to believe; so what?" (p. 202)
We need to come to terms with the fact that nobody is a good predictor of anything. But by shifting our focus away from probabilities, we can think more about the consequences of events, and therefore prepare for events like earthquakes, wars or market crashes. Indeed, Taleb writes in the Postscript that a robust society and economy is not about correcting mistakes or eliminating randomness, but about letting "human mistakes and miscalculations remain confined" (p. 322).

We can also find solace in the fact that randomness "reshuffles" society's cards by occasionally knocking down the big guys, and giving the little guys hope.

And in the end, we can perhaps take a cue from Taleb's take on Stoic philosophy -- to love fate (amor fati) and to learn how to die. (Personally, I still like transhumanism.)

***

I gave The Black Swan 5/5 stars on Goodreads because it really is an impressive book. It is well-written and often humorous in tone; Taleb loves to poke fun at hotshot economists and, for some reason, the French. He also creatively incorporates elements of fiction in a nonfiction book: besides the characters of Dr. John and Fat Tony, there's also Yevgenia Krasnova (a best-selling writer who uses literary form to express her scientific theories) and Nero Tulip (a trader who is a friend of Fat Tony and Yevgenia, and who uses a "bleed" financial strategy which entails small frequent losses outweighed by infrequent yet big wins). Nassim Taleb's argument for using stories, vignettes and metaphors in a book attacking "narrative disciplines" is basically that one must fight fire with fire.

Another unique trait of this book is that each chapter begins with a rather cryptic overview of the chapter's contents. For example, at the start of Chapter 4: "Surprise, surprise - Sophisticated methods for learning from the future - Sextus was always ahead - The main idea is not to be a sucker - Let us move to Mediocristan, if we can find it" (p. 38). Even the headings are sometimes a bit opaque; for instance, "Saw Another Red Mini!" in Chapter 5 and "The Evolution of the Swimmer's Body" in Chapter 8, which won't make sense until you've read the sections. But at least the book does have a structure. It also contains images, figures and tables, which help make it engaging -- perhaps as Taleb intended, since he confesses in Chapter 19 that half the time (i.e. when not dealing with Black Swan matters) he prefers aesthetics, poetry, dignity and elegance over cold truth.

As I mentioned earlier, The Black Swan has a lengthy bibliography and notes section, indicating that the author is well-read and familiar with the viewpoints of those he attacks. In the Acknowledgements section, he writes: "Over the years I have ended up reading more material from those I disagree with than from those whose opinion I share [...] It is the duty of every author to represent the ideas of his adversaries as faithfully as possible" (pp. 434-435). Whether Taleb succeeds in being fair while savagely ripping into his victims is a question I won't answer here, partly because I'm not familiar with the arguments against Black Swan theory.

In any case, Taleb argues persuasively for his side, and the book is worth reading for its insights on a range of issues. I do wonder what he would say about Pascal's Mugging and its application to things like AI risk, since these seem like obvious issues that aren't touched on in The Black Swan.
