Both of Taleb’s books are highly entertaining, but he over-reaches, and there are some odd mistakes. For example, he makes much of a supposed kink in the integral of the Student-t distribution (where, he claims, tail probability declines linearly with deviation from the mean), but if you compute the integral using R software there is no kink; the tail declines smoothly, so Taleb evidently made a mistake.
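A quick check of this (a sketch in Python rather than R, integrating the density numerically with the standard library; R's pt() or scipy's t.sf would give the same answer): the tail probability of a Student-t falls off as a smooth power law in the deviation, with no kink anywhere.

```python
import math

def t_tail(x: float, nu: float, upper: float = 200.0, n: int = 100_000) -> float:
    """P(T > x) for a Student-t with nu degrees of freedom,
    by trapezoidal integration of the density from x out to `upper`.
    (Truncation beyond `upper` is negligible for these purposes.)"""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    pdf = lambda t: c * (1.0 + t * t / nu) ** (-(nu + 1) / 2)
    h = (upper - x) / n
    s = 0.5 * (pdf(x) + pdf(upper))
    for i in range(1, n):
        s += pdf(x + i * h)
    return s * h

# The tail declines as a smooth power law (roughly x**-nu far out),
# not linearly and not with any kink:
for x in (2.0, 4.0, 8.0, 16.0):
    print(x, t_tail(x, 3))
```

For 3 degrees of freedom, doubling the deviation from 4 to 8 cuts the tail probability by a factor of about 7 — a smooth polynomial decline, and vastly fatter than a Gaussian tail at the same distance.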
Based on various web sites, Taleb made his fortune using a kind of option trade: he purchases only call options, that is, options to buy securities at a certain price during a specified time window in the future. If the security ends up trading above the strike price, he exercises the option and immediately sells at the market price, thereby making a profit. He says that his life involves long periods of watching while nothing happens and his options expire worthless. But once in a great while he makes a killing. This is exactly like predators that specialize on very large prey, fishing for big game fish, or hunting rare but big game animals.
His writings have done a lot to publicize the importance of huge rare events. I think this is a good thing. But here too he over-reaches. In some recent interviews he seems to be gloating over the current economic collapse. And (according to economist colleagues) some economists see his ideas as rather routine. Yet he is a provocative and entertaining writer; if sales measure impact, he has made a difference.
To me, the most novel feature of the current ongoing collapse is the coincidence of huge shocks with apparently different triggers. Who would have thought that an epidemic of bad loans in America, a steep ramp in energy prices, and biofuels tightening the link between energy and food prices would coincide, against a backdrop of lower economic firewalls between countries and increasingly intense food limitation of the human population, with almost no scope for growth of the food supply? It’s a wonderland for testing resilience ideas and a global tragedy, all at the same time.
For a recent talk I re-analyzed a bunch of information from the Millennium Assessment, to try to figure out if humanity had any chance at all for making it through the next few decades.
If everyone shifts trophic status to roughly herbivore level, and we educate all the world’s women to secondary level, we have a chance.
The difference between 12 billion and 9 billion people in 2050 is one child per woman. If all the world’s women were educated to secondary level, fertility would drop by about 1.7 children per woman. And we can probably feed 9 billion herbivorous people, if we can maintain the crop diversity of the major grain crops high enough to avoid catastrophic disease outbreaks.
Energy needs for agriculture and climate change could make it pretty hard to achieve the rosy scenario; climate heating, more variable precipitation and sea level rise have bad implications for agriculture. So the rosy scenario itself may be way out on the tail of the distribution. And what will happen to relations among people as the going gets rough? Human conflict can wreck agriculture. What are the chances that no one will use nuclear weapons? Even a few nukes would take out huge areas of arable land for millennia. And, as Will Rogers said about land, they ain’t makin’ any more of it. A Taleb-like fat tail breakdown seems not so implausible.
This is a sort of book review. By now you may have heard of The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb, published by Random House (2007).
Taleb is from Lebanon, but he prefers to be called a Levantine. He worked as a trader in currencies, and maybe also derivatives. He claims that nothing of importance in finance can be predicted, except its unpredictability. His book will undoubtedly attract attention for its claim of an inevitable financial collapse, like the one we are experiencing.
He writes on p. 225:
I spoke about globalization in Chapter 3; it is here, but it is not all for the good: it creates interlocking fragility, while reducing volatility and giving the appearance of stability. In other words, it creates devastating Black Swans [events that are extremely rare and important]. We have never lived before under the threat of a global collapse. Financial institutions have been merging into a smaller number of very large banks. Almost all banks are now interrelated. So the financial ecology is swelling into gigantic, incestuous, bureaucratic banks (often Gaussianized [assuming normal deviations] in their risk measurement) — when one falls, they all fall. [lengthy footnote here, which includes the statement that "Fannie Mae, when I look at their risks, seems to be sitting on a barrel of dynamite"] The increased concentration among banks seems to have the effect of making financial crisis less likely, but when they happen they are more global in scale and hit us very hard. We have moved from a diversified ecology of small banks with varied lending policies, to a more homogeneous framework of firms that all resemble one another. True, we have fewer failures, but when they occur … [no deletion here] I shiver at the thought. I rephrase here: we will have fewer but more severe crises. The rarer the event, the less we know about its odds. It mean[s] that we know less and less about the possibility of a crisis.
Taleb goes on to mention the power blackout of 2003 as an example of what happens when things are tied too closely together. He points out that all the experts use Gaussian assumptions for risk analysis, which delivers precisely the wrong answer. Hence I think it extremely likely that the favored solution to the world financial crisis will be to tie the financial system even more tightly together, thus ensuring an even bigger collapse next time. It seems to be happening already. There is no sign that Obama has twigged to the hazards of greater financial integration. And there is no sign that the experts learn from collapses; as Taleb points out, they don’t seem to have learned from past ones.
I think we can learn from Taleb: he writes very forcefully, but exaggerates his points too much. It may be that if we confine ourselves to financial situations, then his statements are valid, even though they are extreme. Taleb seems to have been treated very nastily by the financial establishment: Scholes, Merton & Co. He seems to be both hurt and angry. Perhaps this causes his arrogance as well. I had to grit my teeth to get through to the later chapters, which have most of the substance.
Taleb offers some financial advice:
1. Above all try to protect yourself from the big drops that are coming (have already come). This implies investing a very high percentage in lower risk securities such as government bonds.
2. Try to participate in the big booms that are also sure to come. Taleb advises spreading some money across venture capital. In view of the behavior of the Vancouver stock exchange, I should think it would be necessary to try to avoid scams. See, e.g., David Baines in the Vancouver Sun for details.
What has this to do with ecology?
Buzz Holling has been talking for years about “surprise”, which is just another name for Black Swans. Anyone who has ever looked at ecological data knows that deviations are not Gaussian. Of course, if we drop the Gaussian or some similar assumption, we lose most of statistics, and we lose all of “risk analysis”. So we lose just about all theory. Experts can’t function without theory, so they make unrealistic assumptions, and come up with the wrong answers in Black Swan situations.
Since Black Swans are rare, ordinary experience doesn’t show any, and the experts are confirmed in their misleading assumptions, until the next time.
We can use analogies instead of theory. I recall the raft analogy we used years ago to illustrate resilience: in order to survive on rough seas, we use loose coupling rather than strong coupling. Likewise, we guard against overconfidence: another of Buzz’ favorite themes. Managing for resilience involves guarding against collapses, even though they might be rare: it implies a precautionary principle. In light of the recent financial collapse, this latter point might finally be accepted for ecological management.
Nassim “Black Swan” Taleb writes on Edge about how an unwillingness to consider and remember extreme events leads to financial disaster, in The Fourth Quadrant: A Map of the Limits of Statistics.
Statistical and applied probabilistic knowledge is the core of knowledge; statistics is what tells you if something is true, false, or merely anecdotal; it is the “logic of science”; it is the instrument of risk-taking; it is the applied tools of epistemology; you can’t be a modern intellectual and not think probabilistically—but… let’s not be suckers. The problem is much more complicated than it seems to the casual, mechanistic user who picked it up in graduate school. Statistics can fool you. In fact it is fooling your government right now. It can even bankrupt the system (let’s face it: use of probabilistic methods for the estimation of risks did just blow up the banking system).
The current subprime crisis has been doing wonders for the reception of any ideas about probability-driven claims in science, particularly in social science, economics, and “econometrics” (quantitative economics). Clearly, with current International Monetary Fund estimates of the costs of the 2007-2008 subprime crisis, the banking system seems to have lost more on risk taking (from the failures of quantitative risk management) than every penny banks ever earned taking risks. But it was easy to see from the past that the pilot did not have the qualifications to fly the plane and was using the wrong navigation tools: The same happened in 1983 with money center banks losing cumulatively every penny ever made, and in 1991-1992 when the Savings and Loans industry became history.
It appears that financial institutions earn money on transactions (say fees on your mother-in-law’s checking account) and lose everything taking risks they don’t understand. I want this to stop, and stop now— the current patching by the banking establishment worldwide is akin to using the same doctor to cure the patient when the doctor has a track record of systematically killing them. And this is not limited to banking—I generalize to an entire class of random variables that do not have the structure we think they have, in which we can be suckers.
And we are beyond suckers: not only, for socio-economic and other nonlinear, complicated variables, we are riding in a bus driven by a blindfolded driver, but we refuse to acknowledge it in spite of the evidence, which to me is a pathological problem with academia. After 1998, when a “Nobel-crowned” collection of people (and the crème de la crème of the financial economics establishment) blew up Long Term Capital Management, a hedge fund, because the “scientific” methods they used misestimated the role of the rare event, such methodologies and such claims on understanding risks of rare events should have been discredited. Yet the Fed helped their bailout and exposure to rare events (and model error) patently increased exponentially (as we can see from banks’ swelling portfolios of derivatives that we do not understand).
Are we using models of uncertainty to produce certainties?
…So the good news is that we can identify where the danger zone is located, which I call “the fourth quadrant”, and show it on a map with more or less clear boundaries. A map is a useful thing because you know where you are safe and where your knowledge is questionable. So I drew for the Edge readers a tableau showing the boundaries where statistics works well and where it is questionable or unreliable. Now once you identify where the danger zone is, where your knowledge is no longer valid, you can easily make some policy rules: how to conduct yourself in that fourth quadrant; what to avoid.
Now let’s see where the traps are:
First Quadrant: Simple binary decisions, in Mediocristan: Statistics does wonders. These situations are, unfortunately, more common in academia, laboratories, and games than real life—what I call the “ludic fallacy”. In other words, these are the situations in casinos, games, dice, and we tend to study them because we are successful in modeling them.
Second Quadrant: Simple decisions, in Extremistan: some well-known problems studied in the literature. Except, of course, that there are not many simple decisions in Extremistan.
Third Quadrant: Complex decisions in Mediocristan: Statistical methods work surprisingly well.
Fourth Quadrant: Complex decisions in Extremistan: Welcome to the Black Swan domain. Here is where your limits are. Do not base your decisions on statistically based claims. Or, alternatively, try to move your exposure type to make it third-quadrant style (“clipping tails”).
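Taleb's tableau amounts to a two-by-two classification, which can be sketched in a few lines of code (the quadrant labels below are this post's paraphrase of his map, not his exact wording):

```python
def quadrant(complex_payoff: bool, extremistan: bool) -> int:
    """Classify a decision problem on Taleb's map.

    complex_payoff: does the decision depend on more than a simple
        binary outcome (e.g., on the magnitude of a gain or loss)?
    extremistan: is the underlying variable fat-tailed, capable of
        extreme values (Extremistan), rather than thin-tailed
        (Mediocristan)?
    """
    if not complex_payoff:
        return 1 if not extremistan else 2
    return 3 if not extremistan else 4

# Only the fourth quadrant is the Black Swan domain, where
# statistically based claims should not be trusted:
assert quadrant(False, False) == 1  # simple decision, Mediocristan
assert quadrant(False, True) == 2   # simple decision, Extremistan
assert quadrant(True, False) == 3   # complex decision, Mediocristan
assert quadrant(True, True) == 4    # complex decision, Extremistan
```

Taleb's "clipping tails" advice corresponds to forcing the first argument toward the third-quadrant case: restructure your exposure so the payoff no longer depends on the extreme values.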
Below I’ve redrawn Taleb’s figure. His article provides a fuller picture.
Similarly, Scientific American reprints Benoit Mandelbrot’s 1999 How Fractals Can Explain What’s Wrong with Wall Street:
Individual investors and professional stock and currency traders know better than ever that prices quoted in any financial market often change with heart-stopping swiftness. Fortunes are made and lost in sudden bursts of activity when the market seems to speed up and the volatility soars. Last September, for instance, the stock for Alcatel, a French telecommunications equipment manufacturer, dropped about 40 percent one day and fell another 6 percent over the next few days. In a reversal, the stock shot up 10 percent on the fourth day.
The classical financial models used for most of this century predict that such precipitous events should never happen. A cornerstone of finance is modern portfolio theory, which tries to maximize returns for a given level of risk. The mathematics underlying portfolio theory handles extreme situations with benign neglect: it regards large market shifts as too unlikely to matter or as impossible to take into account. It is true that portfolio theory may account for what occurs 95 percent of the time in the market. But the picture it presents does not reflect reality, if one agrees that major events are part of the remaining 5 percent. An inescapable analogy is that of a sailor at sea. If the weather is moderate 95 percent of the time, can the mariner afford to ignore the possibility of a typhoon?
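Mandelbrot's point can be made numerical with a toy comparison (a sketch; the ten-sigma figure is illustrative, not calibrated to Alcatel): the probability a Gaussian model assigns to a ten-standard-deviation daily drop versus what a fat-tailed Student-t with 3 degrees of freedom (which has a closed-form distribution function) assigns.

```python
import math

def gaussian_tail(k: float) -> float:
    """P(Z < -k) for a standard normal: how often a Gaussian model
    expects a move k standard deviations below the mean."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def t3_tail(k: float) -> float:
    """P(T < -k) for a Student-t with 3 degrees of freedom,
    via its closed-form distribution function."""
    return 0.5 - (math.atan(k / math.sqrt(3))
                  + math.sqrt(3) * k / (k * k + 3)) / math.pi

k = 10.0  # a ten-standard-deviation daily move
print(gaussian_tail(k))  # astronomically small under the Gaussian
print(t3_tail(k))        # about one day in a thousand under fat tails
```

Under the Gaussian the event is so rare it should never occur in the lifetime of the universe; under the fat-tailed model it is merely uncommon — roughly the difference between "impossible" and "every few years of trading," which is the benign neglect Mandelbrot describes.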
Nassim Nicholas Taleb uses the term Black Swan to identify significant unexpected events. Holling made some similar points from a different perspective in his 1973 paper on resilience and his 1986 paper The Resilience of Terrestrial Ecosystems: Local Surprise and Global Change. On the interdisciplinary site Edge, Taleb writes on Learning to Expect the Unexpected and defines what he means by Black Swans:
A black swan is an outlier, an event that lies beyond the realm of normal expectations. Most people expect all swans to be white because that’s what their experience tells them; a black swan is by definition a surprise. Nevertheless, people tend to concoct explanations for them after the fact, which makes them appear more predictable, and less random, than they are. Our minds are designed to retain, for efficient storage, past information that fits into a compressed narrative. This distortion, called the hindsight bias, prevents us from adequately learning from the past.
From my perspective, Black Swans occur when there are significant mismatches between the models people use to understand the world, the expectations those models produce, and observations. In other words, Black Swans are model errors – something that I’ve written about (Peterson, Carpenter and Brock 2003) in the context of ecological management.