…it’s a straightforward implication of standard economic analysis that the more uncertain the rate of climate change, the stronger the optimal policy response. That’s because, in the economic jargon, the damage function is convex. To explain this, think about the central IPCC projection of a 3.5 degree increase in global mean temperature, which would imply significant but moderate economic damage (maybe a long-run loss of 5-10 per cent of GDP, depending on how you value ecosystem effects). In the most optimistic case, that projection might be totally wrong – there might be no warming and no damage. But precisely because this is a central projection, it implies an equal probability that the warming will be 7 degrees, which would be utterly catastrophic. So a calculation that takes account of uncertainty implies greater expected losses from inaction, and therefore a stronger case for action. This is partly offset by the fact that we will learn more over time, so an optimal plan may involve an initial period in which the reduction in emissions is slower, but with investment in the capacity to reduce emissions quickly if the news is bad. This is why it’s important to get an emissions trading scheme in place, with details that can be adjusted later, rather than to argue too much about getting the short-term parts of the policy exactly right.
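The convexity argument can be made concrete with a small sketch (my own illustration, not from the original text; the quadratic damage function and its coefficient are hypothetical, chosen only to land in the 5-10 per cent ballpark mentioned above):

```python
# Illustrative sketch: with a convex damage function, mean-preserving
# uncertainty about warming raises expected damage (Jensen's inequality).
# All numbers here are hypothetical.

def damage(temp_c):
    """Hypothetical convex damage function: GDP loss grows with the
    square of warming (3.5 C -> ~6% of GDP, 7 C -> ~24.5%)."""
    return 0.005 * temp_c ** 2

# Case 1: warming is certainly at the central projection of 3.5 C.
certain_loss = damage(3.5)

# Case 2: same mean warming, but equal chances of 0 C and 7 C.
uncertain_loss = 0.5 * damage(0.0) + 0.5 * damage(7.0)

print(f"loss if warming is certainly 3.5 C: {certain_loss:.1%}")   # ~6.1%
print(f"expected loss under 0 C / 7 C uncertainty: {uncertain_loss:.1%}")
# The uncertain case has the same mean warming but roughly double the
# expected damage, strengthening the case for action.
```

Because damages rise faster than linearly in temperature, symmetric uncertainty is not a wash: the catastrophic tail outweighs the benign one.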
Anyway, back to my main point. The huge scientific uncertainty about the cost of inaction has obscured a surprisingly strong economic consensus about the economic cost of stabilising global CO2 concentrations at the levels currently being debated by national governments, that is, in the range 450-550 ppm. The typical estimate of costs is 2 per cent of global income, plus or minus 2 per cent. There are no credible estimates above 5 per cent, and I don’t think any serious economist believes in a value below zero (that is, a claim that we could eliminate most CO2 emissions using only ‘no regrets’ policies).
For anyone who, like me, is confident that the expected costs of doing nothing about emissions, relative to stabilisation, are well above 5 per cent of global income, that makes the basic choice an easy one.
Minsky called his idea the “Financial Instability Hypothesis.” In the wake of a depression, he noted, financial institutions are extraordinarily conservative, as are businesses. With the borrowers and the lenders who fuel the economy all steering clear of high-risk deals, things go smoothly: loans are almost always paid on time, businesses generally succeed, and everyone does well. That success, however, inevitably encourages borrowers and lenders to take on more risk in the reasonable hope of making more money. As Minsky observed, “Success breeds a disregard of the possibility of failure.”
As people forget that failure is a possibility, a “euphoric economy” eventually develops, fueled by the rise of far riskier borrowers: what he called “speculative borrowers,” whose income would cover interest payments but not the principal, and “Ponzi borrowers,” whose income could cover neither and who could only pay their bills by borrowing still further. As these latter categories grew, the overall economy would shift from a conservative but profitable environment to a much more freewheeling system dominated by players whose survival depended not on sound business plans, but on borrowed money and freely available credit.
Once that kind of economy had developed, any panic could wreck the market. The failure of a single firm, for example, or the revelation of a staggering fraud could trigger fear and a sudden, economy-wide attempt to shed debt. This watershed moment – what was later dubbed the “Minsky moment” – would create an environment deeply inhospitable to all borrowers. The speculators and Ponzi borrowers would collapse first, as they lost access to the credit they needed to survive. Even the more stable players might find themselves unable to pay their debt without selling off assets; their forced sales would send asset prices spiraling downward, and inevitably, the entire rickety financial edifice would start to collapse. Businesses would falter, and the crisis would spill over to the “real” economy that depended on the now-collapsing financial system.
It sounds very similar to Holling’s adaptive cycle and the pathology of natural resource management.
The Ecological Society of America’s new policy statement on ecosystem services, Ecological Impacts of Economic Activities, proposes:
To encourage decision makers to account for the environmental costs of growth, we propose the following four strategies:
1. Internalize externalities
Environmental impacts and resource shortages caused by economic activities often affect people far removed in space and time from those whose actions produced these problems. This separation of cause from consequence represents what economists refer to as externalities. Agribusiness, for example, benefits from using nitrogen fertilizers but does not bear the costs associated with oxygen-depleted “dead zones” that agrochemical runoff produces in aquatic ecosystems. Because the adverse environmental impacts of fertilizer use are not reflected in fertilizer prices, they do not affect decisions about how much fertilizer to use.
Resolving this disparity would drive more environmentally and socially sustainable investments, but only following significant changes to our existing economic framework. Environmental economists advocate a range of measures to internalize externalities. Examples include property rights for environmental assets, payments for ecosystem services, and liabilities for environmental damage. Developing effective incentives requires an in-depth understanding of the ecological implications of externalities.
2. Create mechanisms for sustaining ecosystem services
Environmental economists have long recommended creating markets for ecosystem services such as pest control and carbon sequestration. Such markets would provide incentives for environmentally sound investments, while allowing communities to be compensated for actions that benefit others. Whether this means clean air in Beijing, China or safe drinking water in Central Valley, California, people would be able to invest in their welfare and the welfare of their children, just as they are currently able to invest in more material forms of security.
Markets must often be coupled with other strategies in order to be effective. In the emerging market for carbon sequestration, for example, if sequestration is priced while other services like freshwater provisioning remain unpriced, negative ecological outcomes may ensue. Carbon markets need to be paired with other strategies, such as the regulation of land use, the direct protection of biodiversity, and the development of “green standards” to which projects must adhere.
3. Enhance decision makers’ capacity to predict environmental impacts
Society is growing increasingly aware of the economic repercussions of environmental change. Still, this linkage often only becomes apparent after the environment has been damaged, sometimes irreversibly. Routine assessments of environmental risks, such as environmental impact statements, play an important role in identifying short-term environmental damage, but they rarely account for impacts that take decades to emerge. For example, DDT, a synthetic pesticide, was widely used for almost 20 years before its harmful effects on human and bird populations were recognized. The resulting US ban on DDT led to marked recoveries in bald eagles and other impacted species, but not all environmental impacts can be reversed with such success. Similarly, deforestation in Panama displaced mosquito populations in the canopy, causing a dramatic increase in Yellow Fever cases. Such outbreaks of zoonotic diseases are rarely foreseen in routine environmental risk assessments but can quickly escalate to unmanageable proportions, leading to the loss of countless human lives as well as billions of dollars in damages, lost output, and livestock mortality.
Recognizing that environmental impacts are often highly uncertain, it is important to develop models better able to project the consequences of anthropogenic environmental change. Equally important are new monitoring systems to detect problematic trends before they surpass society’s ability to address them.
4. Manage for resilient ecosystems
When ecosystem thresholds are breached, undesirable and often irreversible change can occur. For instance, grassy savannas capable of supporting grazing and rural livelihoods can suddenly “flip” to woody systems with lower productive capacity. Many common management strategies move ecosystems closer to these thresholds. Ecosystem management strategies need to leave a “margin of error”, trading some short-term yield for long-term resilience that sustains a suite of services.
Fewer than a dozen prominent economists saw this economic train wreck coming — and the Federal Reserve chairman, Ben Bernanke, an economist famous for his academic research on the Great Depression, was notably not among them. Alas, for the real world, the few who did warn us about the train wreck got no more respect from the rest of their colleagues or from decision-makers in business and government than prophets usually do.
How could the economics profession have slept so soundly right into the middle of the economic mayhem all around us? Robert J. Shiller of Yale University, one of the sage prophets, addressed that question in an earlier commentary in this paper. Professor Shiller finds an explanation in groupthink, a term popularized by the social psychologist Irving L. Janis. In his book “Groupthink” (1972), Janis theorized that most people, even professionals whose careers ostensibly thrive on originality, hesitate to deviate too much from the conventional wisdom, lest they be marginalized or even ostracized.
If groupthink is the cause, it most likely is anchored in what my former Yale economics professor Richard Nelson (now at Columbia University) has called a “vested interest in an analytic structure,” the prism through which economists behold the world.
This analytic structure, formally called “neoclassical economics,” depends crucially on certain unquestioned axioms and basic assumptions about the behavior of markets and the human decisions that drive them. After years of arduous study to master the paradigm, these axioms and assumptions simply become part of a professional credo. Indeed, a good part of the scholarly work of modern economists reminds one of the medieval scholastics who followed St. Anselm’s dictum “credo ut intellegam”: “I believe, in order that I may understand.”
Adam Phillips and Barbara Taylor, authors of the recent book On Kindness, which uses history and psychoanalysis to examine the idea of kindness in western society, write in the Guardian article Love thy neighbour: why have we become so suspicious of kindness?
Kindness was mankind’s “greatest delight”, the Roman philosopher-emperor Marcus Aurelius declared, and thinkers and writers have echoed him down the centuries. But today many people find these pleasures literally incredible, or at least highly suspect. An image of the self has been created that is utterly lacking in natural generosity. Most people appear to believe that deep down they (and other people) are mad, bad and dangerous to know; that as a species – apparently unlike other species of animal – we are deeply and fundamentally antagonistic to each other, that our motives are utterly self-seeking and that our sympathies are forms of self-protectiveness.
Kindness – not sexuality, not violence, not money – has become our forbidden pleasure. In one sense kindness is always hazardous because it is based on a susceptibility to others, a capacity to identify with their pleasures and sufferings. Putting oneself in someone else’s shoes, as the saying goes, can be very uncomfortable. But if the pleasures of kindness – like all the greatest human pleasures – are inherently perilous, they are none the less some of the most satisfying we possess.
On Comment is Free, economist Kenneth Arrow writes that The financial turmoil is a challenge to economic theory
The current financial crisis – the loss of asset values, the refusal to extend normally given credit, and the great increase in defaults on obligations ranging from individual mortgages to the debts of great investment banks – presents, of course, a pressing challenge to the fiscal authorities and central banks to take measures to minimise the consequences. But it also presents a challenge to standard economic theory, a challenge all the more important since the development of policies to prevent future financial crises will depend on a deeper understanding of the processes at work.
That economic decisions are made without certain knowledge of the consequences is pretty self-evident. But, although many economists were aware of this elementary fact, there was no systematic analysis of economic uncertainty until about 1950. There have been two developments in the economic theory of uncertainty in the last 60 years, which have had opposite implications for the radical changes in the financial system. One has made explicit and understandable a long tradition that spreading risks among many bearers improves the functioning of the economy. The second is that there are large differences of information among market participants and that these differences are not well handled by market forces. The first point of view tends to argue for the expansion of markets, the second for recognising that they may fail to exist and, if they do come into being, may fail to work for the benefit of the general economic situation.
There is obviously much more to the full understanding of the current financial crisis, but the root is this conflict between the genuine social value of increased variety and spread of risk-bearing securities and the limits imposed by the growing difficulty of understanding the underlying risks imposed by growing complexity.
The idea that bad mathematical models used to evaluate investments are at least partially to blame for the financial crisis has plenty of appeal, and perhaps some validity, but it doesn’t justify a lot of the anti-intellectual responses we are seeing. That includes this NY Times headline In Modeling Risk, the Human Factor Was Left Out. What becomes clear from the story is that a model that left human factors out would have worked quite well. The elements of the required model are
- in the long run, house prices move in line with employment, incomes and migration patterns
- if prices move more than 20 per cent out of line with long run value they will in due course fall at least 20 per cent
- when this happens, large classes of financial assets will go into default either directly or because they are derived from assets that can’t pay out if house prices fall
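The three elements above can be turned into a minimal sketch (my own illustration; the functions, thresholds, and numbers are hypothetical, not taken from the post):

```python
# A toy sketch of the simple housing model described above: prices
# tethered to fundamentals, a correction when they stray too far, and
# defaults in housing-derived assets when the correction hits.

def long_run_value(employment_index, income_index, migration_index):
    """Fundamental value as a simple product of normalized drivers."""
    return employment_index * income_index * migration_index

def expected_correction(price, fundamentals):
    """If prices are more than 20% above long-run value, expect a fall
    of at least 20%; otherwise no predicted correction."""
    if price > 1.2 * fundamentals:
        return -0.2
    return 0.0

def assets_at_risk(price_change, housing_linked_assets):
    """When a 20% correction arrives, assets derived from housing
    default alongside the mortgages themselves."""
    if price_change <= -0.2:
        return housing_linked_assets
    return 0.0

fundamentals = long_run_value(1.0, 1.0, 1.0)
bubble_price = 1.35  # 35% above long-run value
correction = expected_correction(bubble_price, fundamentals)
print(correction)                          # -0.2: a fall of at least 20%
print(assets_at_risk(correction, 1_000))   # 1000: derived assets at risk
```

The point of the sketch is how little machinery the argument needs: no behavioral micro-modeling, just a long-run anchor, a threshold, and the knock-on exposure of derived assets.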
It was not the disregard of human factors but the attempt to second-guess human behavioral responses to a period of rising prices, so as to reproduce the behavior of housing markets in the bubble period, that led many to disaster. A more naive version of the same error is to assume that particular observed behavior (say, not defaulting on home loans) will be sustained even when the conditions that made that behavior sensible no longer apply.
…More generally, the headline result from a large and complex model can usually be reproduced with a much simpler model embodying the same key assumptions. If those assumptions are right (or wrong), the results of both models will be right (or wrong) together. The extra detail usually serves to produce more detailed results rather than to produce significant changes in the headline results.
…There are at least 15,000 professional economists in this country, and you’re saying only two or three of them foresaw the mortgage crisis? Ten or 12 would be closer than two or three.
What does that say about the field of economics, which claims to be a science? It’s an enormous blot on the reputation of the profession. There are thousands of economists. Most of them teach. And most of them teach a theoretical framework that has been shown to be fundamentally useless.
And a review of the impact of the crisis.
During the go-go investing years, school districts, transit agencies and other government entities were quick to jump into the global economy, hoping for fast gains to cover growing pension costs and budgets without raising taxes. Deals were arranged by armies of persuasive financiers who received big paydays.
But now, hundreds of cities and government agencies are facing economic turmoil. Far from being isolated examples, the Wisconsin schools and New York’s transportation system are among the many players in a financial fiasco that has ricocheted globally.
First, Gretchen Morgenson in the New York Times writes Behind Insurer’s Crisis, Blind Eye to a Web of Risk:
“It is hard for us, without being flippant, to even see a scenario within any kind of realm of reason that would see us losing one dollar in any of those transactions.”— Joseph J. Cassano, a former A.I.G. executive, August 2007
…Although America’s housing collapse is often cited as having caused the crisis, the system was vulnerable because of intricate financial contracts known as credit derivatives, which insure debt holders against default. They are fashioned privately and beyond the ken of regulators — sometimes even beyond the understanding of executives peddling them.
Originally intended to diminish risk and spread prosperity, these inventions instead magnified the impact of bad mortgages like the ones that felled Bear Stearns and Lehman and now threaten the entire economy.
In the case of A.I.G., the virus exploded from a freewheeling little 377-person unit in London, and flourished in a climate of opulent pay, lax oversight and blind faith in financial risk models. It nearly decimated one of the world’s most admired companies, a seemingly sturdy insurer with a trillion-dollar balance sheet, 116,000 employees and operations in 130 countries.
Second, America’s National Public Radio’s Planet Money has a lot of recent in-depth coverage of the financial crisis in this vein available as podcasts, including a recent one called The Day America’s Economy Almost Died.
Looking more at the big economic picture, Predicting Crisis in the United States Economy, a profile of Nouriel Roubini, discusses the selective vision of models and the biases against discontinuities or nonlinear change.
Recessions are signal events in any modern economy. And yet remarkably, the profession of economics is quite bad at predicting them. A recent study looked at “consensus forecasts” (the predictions of large groups of economists) that were made in advance of 60 different national recessions that hit around the world in the ’90s: in 97 percent of the cases, the study found, the economists failed to predict the coming contraction a year in advance. On those rare occasions when economists did successfully predict recessions, they significantly underestimated the severity of the downturns. Worse, many of the economists failed to anticipate recessions that occurred as soon as two months later.
The dismal science, it seems, is an optimistic profession. Many economists, Roubini among them, argue that some of the optimism is built into the very machinery, the mathematics, of modern economic theory. Econometric models typically rely on the assumption that the near future is likely to be similar to the recent past, and thus it is rare that the models anticipate breaks in the economy. And if the models can’t foresee a relatively minor break like a recession, they have even more trouble modeling and predicting a major rupture like a full-blown financial crisis. Only a handful of 20th-century economists have even bothered to study financial panics. (The most notable example is probably the late economist Hyman Minsky, of whom Roubini is an avid reader.) “These are things most economists barely understand,” Roubini told me. “We’re in uncharted territory where standard economic theory isn’t helpful.”
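Roubini’s point about extrapolation can be illustrated with a toy sketch (mine, with made-up numbers, not from the article): a naive “persistence” forecaster that carries the recent trend forward fits smooth data by construction, and so can never anticipate a structural break.

```python
# A persistence forecaster: the simplest version of the assumption that
# the near future resembles the recent past. Data are hypothetical.

def persistence_forecast(history, window=4):
    """Predict the next value by carrying the average recent change forward."""
    changes = [b - a for a, b in zip(history[-window - 1:-1], history[-window:])]
    return history[-1] + sum(changes) / len(changes)

smooth_growth = [100, 102, 104, 106, 108]   # steady expansion
forecast = persistence_forecast(smooth_growth)
actual_next = 88                            # an unforeseen structural break

print(forecast)                # 110.0 -- the model projects the trend onward
print(actual_next - forecast)  # -22.0 -- the break is invisible to the model
```

As long as the data really do resemble their recent past, the forecaster looks accurate; the one event it can never see coming is exactly the kind of discontinuity that matters most.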
Finally, Science Fiction writer Charlie Stross writes about the increasing difficulty of projecting the near future at all:
We are living in interesting times; in fact, they’re so interesting that it is not currently possible to write near-future SF.
… Put yourself in the shoes of an SF author trying to construct an accurate (or at least believable) scenario for the USA in 2019. Imagine you are constructing your future-USA in 2006, then again in 2007, and finally now, with talk of $700Bn bailouts and nationalization of banks in the background. Each of those projections is going to come out looking different. Back in 2006 the sub-prime crisis wasn’t even on the horizon but the big scandal was FEMA’s response (or lack thereof) to Hurricane Katrina. In 2007, the sub-prime ARM bubble began to burst and the markets were beginning to turn bearish. (Oh, and it looked as if the 2008 presidential election would probably be down to a fight between Hillary Clinton and Rudy Giuliani.) Now, in late 2008 the fiscal sky is falling; things may not end as badly as they did for the USSR, but it’s definitely an epochal, historic crisis.
Now extend the thought-experiment back to 1996 and 1986. Your future-USA in the 1986 scenario almost certainly faced a strong USSR in 2019, because the idea that a 70 year old Adversary could fall apart in a matter of months, like a paper tiger left out in a rain storm, simply boggles the mind. It’s preposterous; it doesn’t fit with our outlook on the way history works. (And besides, we SF writers are lazy and we find it convenient to rely on clichés — for example, good guys in white hats facing off against bad guys in black hats. Which is silly — in their own head, nobody is a bad guy — but it makes life easy for lazy writers.) The future-USA you dreamed up in 1996 probably had the internet (it had been around in 1986, in embryonic form, the stomping ground of academics and computer industry specialists, but few SF writers had even heard of it, much less used it) and no cold war; it would in many ways be more accurate than the future-USA predicted in 1986. But would it have a monumental fiscal collapse, on the same scale as 1929? Would it have Taikonauts space-walking overhead while the chairman of the Federal Reserve is on his knees? Would it have more mobile phones than people, a revenant remilitarized Russia, and global warming?
There’s a graph I’d love to plot, but I don’t have the tools for. The X-axis would plot years since, say, 1950. The Y-axis would be a scatter plot with error bars showing the deviation from observed outcomes of a series of rolling ten-year projections modeling the near future. Think of it as a meta-analysis of the accuracy of projections spanning a fixed period, to determine whether the future is becoming easier or harder to get right. I’m pretty sure that the error bars grow over time, so that the closer to our present you get, the wider the deviation from the projected future would be. Right now the error bars are gigantic.
Nassim Nicholas Taleb writes in the Financial Times that because financial economics focuses on normal and marginal behaviour at the expense of shocks and market reorganizations, it is a pseudo-science hurting markets:
I was a trader and risk manager for almost 20 years (before experiencing battle fatigue). There is no way my and my colleagues’ accumulated knowledge of market risks can be passed on to the next generation. Business schools block the transmission of our practical know-how and empirical tricks and the knowledge dies with us. We learn from crisis to crisis that MPT [modern portfolio theory] has the empirical and scientific validity of astrology (without the aesthetics), yet the lessons are ignored in what is taught to 150,000 business school students worldwide.
Academic economists are no more self-serving than other professions. You should blame those in the real world who give them the means to be taken seriously: those awarding that “Nobel” prize.
In 1990 William Sharpe and Harry Markowitz won the prize three years after the stock market crash of 1987, an event that, if anything, completely demolished the laureates’ ideas on portfolio construction. Further, the crash of 1987 was no exception: the great mathematical scientist Benoît Mandelbrot showed in the 1960s that these wild variations play a cumulative role in markets – they are “unexpected” only by the fools of economic theories.
Then, in 1997, the Royal Swedish Academy of Sciences awarded the prize to Robert Merton and Myron Scholes for their option pricing formula. I (and many traders) find the prize offensive: many, such as the mathematician and trader Ed Thorp, used a more realistic approach to the formula years before. What Mr Merton and Mr Scholes did was to make it compatible with financial economic theory, by “re-deriving” it assuming “dynamic hedging”, a method of continuous adjustment of portfolios by buying and selling securities in response to price variations.
Dynamic hedging assumes no jumps – it fails miserably in all markets and did so catastrophically in 1987 (failures textbooks do not like to mention).
Later, Robert Engle received the prize for “Arch”, a complicated method of prediction of volatility that does not predict better than simple rules – it was “successful” academically, even though it underperformed simple volatility forecasts that my colleagues and I used to make a living.
The environment in financial economics is reminiscent of medieval medicine, which refused to incorporate the observations and experiences of the plebeian barbers and surgeons. Medicine used to kill more patients than it saved – just as financial economics endangers the system by creating, not reducing, risk. But how did financial economics take on the appearance of a science? Not by experiments (perhaps the only true scientist who got the prize was Daniel Kahneman, who happens to be a psychologist, not an economist). It did so by drowning us in mathematics with abstract “theorems”. Prof Merton’s book Continuous Time Finance contains 339 mentions of the word “theorem” (or equivalent). An average physics book of the same length has 25 such mentions. Yet while economic models, it has been shown, work hardly better than random guesses or the intuition of cab drivers, physics can predict a wide range of phenomena with a tenth decimal precision.
via 3 Quarks Daily.
For more see Taleb’s home page – Fooled by Randomness.