Tag Archives: models

Controversies around the Social Cost of Carbon

What is the social cost of carbon? That is, the monetary value of the long-term damages done by greenhouse gas emissions? Frank Ackerman from the Stockholm Environment Institute U.S. Center recently gave a fascinating talk at the Stockholm Resilience Centre where he presented the widely used FUND model, an integrated assessment model of climate change that links climate change science with economics. According to Ackerman, the interesting aspect of this model is not only that it is commonly cited by policy-makers in the US, but also that some of its basic assumptions lead to quite bizarre results. The policy implications cannot be overstated.

As Ackerman notes in the TripleCrisis blog:

True or false: Risks of a climate catastrophe can be ignored, even as temperatures rise? The economic impact of climate change is no greater than the increased cost of air conditioning in a warmer future? The ideal temperature for agriculture could be 17°C above historical levels?

All true, according to the increasingly popular FUND model of climate economics. It is one of three models used by the federal government’s Interagency Working Group to estimate the “social cost of carbon” – that is, the monetary value of the long-term damages done by greenhouse gas emissions. According to FUND, as used by the Working Group, the social cost of carbon is a mere $6 per ton of CO2. That translates into $0.06 per gallon of gasoline. Do you believe that a tax of $0.06 per gallon at the gas pump (and equivalent taxes on other fossil fuels) would solve the climate problem and pay for all future climate damages?

I didn’t believe it, either. But the FUND model is growing in acceptance as a standard for evaluation of climate economics. To explain the model’s apparent dismissal of potential harm, I undertook a study of the inner workings of FUND (with the help of an expert in the relevant software language) for E3 Network. Having looked under the hood, I’d say the model needs to be towed back to the shop for a major overhaul.

A working paper that lays out the critique in detail can be found here. To summarize the conclusions for non-economists: the social cost of carbon is way higher than $6 per ton of CO2….

Resilience 2011: notes on regime shifts and coupled social-ecological systems

The Resilience 2011 conference was a unique opportunity to meet people and encounter new ways of thinking about resilience. This post is dedicated to the sessions I enjoyed the most, and my research interests biased me towards sessions on regime shifts and coupled social-ecological system analysis.

As a PhD student working on regime shifts, I was unsurprisingly drawn to the panel on research frontiers for anticipating regime shifts. Marten Scheffer from Wageningen University introduced the theoretical basis of critical transitions in social-ecological systems. His talk was complemented by his PhD student Vasilis Dakos on early warnings. Their methods are based on the statistical properties of systems approaching a bifurcation point: a gradual increase in spatial and temporal autocorrelation, as well as in variability. A perfect counterpoint to these theoretical approaches was offered by Peter Davies from the University of Tasmania, who presented the case study of a river catchment in Tasmania. Davies and colleagues introduced Bayesian networks as a method for estimating regime shifts, their likelihood and possible thresholds. Victor Galaz from the Stockholm Resilience Centre presented an updated version of his work with web crawlers, exploring how well-informed Internet searches can give early warnings of, for example, disease outbreaks. Galaz pointed out the role of local knowledge as a fundamental component of the filtering mechanism for early warning systems. Questions from the audience and organizers focused on the intersection of theory and practical applications of early warnings.

While Dakos’ technique does not require a deep understanding of the system under study, his time series analysis approach does require long time series. Bayesian networks, on the other hand, require a deep understanding of the system and its feedbacks in order to make the well-informed assumptions needed to design the models. An alternative approach was proposed in a parallel session by Steve Lade from the Max Planck Institute, who used generalized models to estimate the system’s Jacobian. Although his approach does need basic knowledge of the system, it can identify critical transitions from the limited time series typical of social-ecological datasets in developing countries.
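
For readers unfamiliar with these indicators, here is a minimal sketch of how rising variance and lag-1 autocorrelation can be tracked in a rolling window over a single time series. This is my own illustration on made-up data, not the Dakos toolbox, and the detrending is deliberately crude:

```python
import numpy as np

def rolling_indicators(x, window=100):
    """Rolling variance and lag-1 autocorrelation, the two classic
    early-warning indicators of critical slowing down."""
    variances, autocorrs = [], []
    for start in range(len(x) - window):
        w = x[start:start + window]
        w = w - w.mean()  # crude detrending: remove the window mean
        variances.append(w.var())
        autocorrs.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(variances), np.array(autocorrs)

# Made-up example: a noisy system whose recovery rate slowly weakens,
# so both indicators should creep upward as the series progresses.
rng = np.random.default_rng(0)
n = 2000
x = np.zeros(n)
for t in range(1, n):
    recovery = 0.5 * (1 - t / n)  # recovery rate decays towards zero
    x[t] = x[t - 1] - recovery * x[t - 1] + rng.normal(scale=0.1)

var, ac1 = rolling_indicators(x)
print("variance: first window %.4f, last window %.4f" % (var[0], var[-1]))
print("lag-1 autocorrelation: first %.2f, last %.2f" % (ac1[0], ac1[-1]))
```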

Most of the work on regime shifts is based on state variables that reflect either ecological processes or social dynamics, but rarely both. Thus, I was also interested in advances in operationalizing the concept of critical transitions for social-ecological systems in a broader sense. I looked for modeling examples where it is easier to track how researchers couple social and ecological dynamics. Here are some notes from the modeling sessions.

J.M. Anderies and M.A. Janssen from Arizona State University (ASU) presented their work on the impact of uncertainty on collective action, using a multi-agent model based on irrigation experiments (games in the lab). Their work caught my attention for two reasons. First, they capture the role of asymmetries in common-pool resources, which is often overlooked; in irrigation systems, the asymmetry comes from the relative positions of “head-enders” and “tail-enders”, who have different access to the resource. Second, they used their model to explore how uncertainty, both in water variability and in shocks to infrastructure, affects the evolution of cooperation.
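
To make the asymmetry concrete, here is a toy sketch of sequential water extraction along a canal. It is my own illustration, not the Anderies-Janssen model: upstream “head-enders” simply take their share before “tail-enders” see any water, so supply variability is borne almost entirely downstream.

```python
import random

def irrigation_round(demands, water_supply):
    """Agents listed upstream-to-downstream each take what they demand
    (if available) before passing the remainder on."""
    harvests, remaining = [], water_supply
    for demand in demands:
        taken = min(demand, remaining)
        harvests.append(taken)
        remaining -= taken
    return harvests

# Five agents with identical demands; supply varies randomly between years.
random.seed(1)
demands = [10, 10, 10, 10, 10]
for year in range(5):
    supply = random.uniform(20, 60)  # uncertain water availability
    print(year, [round(h, 1) for h in irrigation_round(demands, supply)])
```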

Ram Bastakoti and colleagues (ASU) complemented the previous talk by bringing Anderies and Janssen’s insights to the field, particularly to cases in Thailand, Nepal and Pakistan. Bastakoti is studying the robustness of irrigation systems to different sources of disturbance, including policy changes, market pressure and the biophysical variability associated with resource dynamics. In the following talk, Rimjhim Aggarwal (ASU) presented the case of India, a highly populated country facing a food security challenge in the forthcoming decades, where groundwater levels are falling faster than expected. Aggarwal’s research explores the tradeoffs among development trajectories. The focus on technological lock-ins and debt traps as socially reinforced mechanisms leading towards undesirable regimes makes this case study a potential regime shift example.

My colleagues from the Stockholm Resilience Centre at Stockholm University also presented interesting work on modeling social-ecological dynamics. Emilie Lindqvist uses a theoretical agent model to explore the role of learning and memory in natural resource management. Her main result is that long-term learning and memory are essential for coping with abrupt declines or cyclic resource dynamics. Jon Norberg and Marty Anderies, for their part, presented a theoretical agent model in which social capital dynamics are coupled with a typical fishery model. Although their work is still preliminary, it was the only talk I saw that actually coupled social and ecological dynamics.
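
As an illustration of what such coupling can look like, here is a minimal sketch under simple assumptions of my own (it is not the Norberg-Anderies model): a logistic fish stock is harvested at an effort that falls as community cooperation rises, while cooperation itself builds when the stock is depleted and erodes when it recovers.

```python
import numpy as np

def coupled_fishery(steps=5000, dt=0.01):
    """Toy coupled social-ecological system integrated with Euler steps."""
    r, K = 1.0, 100.0      # fish growth rate and carrying capacity
    q, e_max = 0.02, 60.0  # catchability and maximum fishing effort
    a, b = 0.5, 0.2        # cooperation build-up and decay rates
    F, c = 80.0, 0.1       # initial stock and cooperation level
    trajectory = []
    for _ in range(steps):
        effort = e_max * (1.0 - c)                   # cooperation restrains effort
        dF = r * F * (1.0 - F / K) - q * effort * F  # ecological dynamics
        dc = a * (1.0 - F / K) * (1.0 - c) - b * c   # social dynamics
        F = max(F + dt * dF, 0.0)
        c = min(max(c + dt * dc, 0.0), 1.0)
        trajectory.append((F, c))
    return np.array(trajectory)

traj = coupled_fishery()
print("final stock %.1f, final cooperation %.2f" % (traj[-1, 0], traj[-1, 1]))
```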

Resilience 2011 gave me the opportunity to rethink and learn a lot about regime shifts. Although my main question, how to study regime shifts in coupled social-ecological systems, remains unanswered, the discussions in the panel sessions gave me some possible ways of tackling it.

The research agenda on regime shifts is developing strongly towards early warnings. Three competing methods stand out:

  1. look for signals in spatial and temporal data by examining the statistical properties of a system approaching a threshold: increases in variance and autocorrelation
  2. acquire a deep knowledge of feedback dynamics and apply Bayesian networks to understand and predict potentially interacting thresholds
  3. use shallow knowledge of the system to estimate its Jacobian from short time series (a rough sketch of this idea follows the list).
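
As a rough sketch of the third idea, here is one crude way to “estimate a Jacobian” from a short record: fit a first-order linear (vector autoregressive) map to the demeaned data and inspect its dominant eigenvalue, which drifts towards magnitude one as recovery slows. This is my own linear illustration on made-up data, not Lade’s generalized modeling method itself.

```python
import numpy as np

def estimate_jacobian(data):
    """Fit x_{t+1} ~ A x_t by least squares on demeaned data and return A.
    Eigenvalues of A close to 1 in magnitude suggest slow recovery."""
    x = data - data.mean(axis=0)
    X, Y = x[:-1], x[1:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return B.T  # so that x_{t+1} is approximately A @ x_t

# Made-up short two-variable series with weak mean reversion.
rng = np.random.default_rng(2)
true_A = np.array([[0.95, 0.02], [0.01, 0.90]])
x = np.zeros((80, 2))  # deliberately short record
for t in range(1, 80):
    x[t] = true_A @ x[t - 1] + rng.normal(scale=0.05, size=2)

A_hat = estimate_jacobian(x)
print("dominant eigenvalue magnitude:", round(max(abs(np.linalg.eigvals(A_hat))), 2))
```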

Social and ecological dynamics are hard to couple, and not only because they are usually studied in different disciplines with different methods. My guess is that their main variables change at very different rates. As a consequence, social scientists treat ecological dynamics as constant or as external drivers, while natural scientists assume the “social stuff” to be constant as well.

Modelers have started breaking the ice by introducing noise to the external variables (e.g. rainfall variability, political instability, market pressure), or by looking at how memory or social capital at the individual level scale up to resource dynamics. However, their main insights remain confined to particular case studies, making it difficult to generalize or to study the coupling of society with global change trends.

Systems theorist Vladimir Arnold has died

Vladimir I. Arnold, one of the major creators of the dynamical systems theory used to represent ecological regime shifts, died June 3rd this year.

He was one of the creators of the mathematics behind what are known as catastrophe theory and singularity theory, which are used to represent regime shifts. The New York Times writes:

Singularity theory predicts that under certain circumstances slow, smooth changes in a system can lead to an abrupt major change, in the way that the slipping of a few small rocks can set off an avalanche. The theory has applications in physics, chemistry and biology.

“He was a genius and one of the greatest and most influential mathematicians of our time,” said Boris A. Khesin, a former student of Dr. Arnold’s and now a professor of mathematics at the University of Toronto.

One of Dr. Arnold’s biggest contributions was applying the methods of geometry and symmetry to the motion of particles. Dr. Arnold’s work on how fluids flow was applied to the dynamics of weather, providing a mathematical explanation for why it is not possible to make forecasts months in advance. Infinitesimal gaps or errors in information cause forecasts to diverge completely from reality.

A similar approach can also be applied to the motion of planets. If Earth were the only planet to circle the Sun, its orbit would follow a precise elliptical path, but the gravity of the other planets disturbs the motion. Scientists found that it is impossible to calculate the precise motion of the planets over very long periods of time, or even to prove that Earth will not one day be flung out of the solar system.

Understanding the subtle and difficult-to-predict boundary between stability and instability is important not only in the study of planetary dynamics but also in other endeavors, like designing a nuclear fusion reactor.

In 1954, the Russian mathematician Andrey Kolmogorov figured out a key insight for calculating whether such systems are stable. Dr. Arnold provided a rigorous proof in 1963 for one set of circumstances. Another mathematician, Jürgen Moser, provided the proof for another. The work is now collectively known as KAM theory.
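
A standard textbook illustration of the behaviour described above, slow smooth change producing an abrupt jump, is the fold (saddle-node) bifurcation, the simplest singularity in this family. This is a generic sketch, not a result taken from Arnold’s own papers:

```latex
% Fold (saddle-node) bifurcation: a state variable x relaxes under a
% slowly varying control parameter r.
\[
  \frac{dx}{dt} = r + x^{2}
\]
% For r < 0 there are two equilibria, x_{\pm}^{*} = \pm\sqrt{-r}
% (the lower one stable, the upper one unstable); they collide at r = 0
% and vanish for r > 0, so an arbitrarily slow drift of r through zero
% produces an abrupt, large change in x.
```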

Andrew Gelman’s statistical lexicon

On his group’s weblog, influential Bayesian statistician Andrew Gelman proposes a statistical lexicon to make important methods and concepts related to statistics better known:

The Secret Weapon: Fitting a statistical model repeatedly on several different datasets and then displaying all these estimates together.

The Superplot: Line plot of estimates in an interaction, with circles showing group sizes and a line showing the regression of the aggregate averages.

The Folk Theorem: When you have computational problems, often there’s a problem with your model. …

Alabama First: Howard Wainer’s term for the common error of plotting in alphabetical order rather than based on some more informative variable.

The Taxonomy of Confusion: What to do when you’re stuck.

The Blessing of Dimensionality: It’s good to have more data, even if you label this additional information as “dimensions” rather than “data points.”

Scaffolding: Understanding your model by comparing it to related models.

Multiple Comparisons: Generally not an issue if you’re doing things right but can be a big problem if you sloppily model hierarchical structures non-hierarchically.

Taking a model too seriously: Really just another way of not taking it seriously at all.
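
To make the first entry concrete, here is a minimal sketch of the “secret weapon” on made-up data (my own illustration, not Gelman’s code): the same simple regression is fitted separately to each year’s data and the estimates are collected for display side by side.

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up data: in each year, x predicts y with a slope that drifts over time.
estimates = []
for i, year in enumerate(range(2000, 2010)):
    x = rng.normal(size=200)
    y = (0.5 + 0.05 * i) * x + rng.normal(scale=1.0, size=200)
    slope, intercept = np.polyfit(x, y, 1)  # fit the same model to every year
    estimates.append((year, slope))

# The "secret weapon": show all per-year estimates together
# instead of reporting a single pooled fit.
for year, slope in estimates:
    print(year, round(slope, 2))
```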

Computer trading producing new financial dynamics?

In October 1987, stock markets around the world crashed, with the Dow Jones dropping 22%. The causes of this crash are still unclear, but one of the suspected causes was computer automated trading. This concern led to attempts to design mechanisms to break potential vicious cycles by creating ‘circuit breakers’, rules that halt trading if the Dow falls too rapidly. However, as financial engineers innovate, new risks are emerging. The Financial Times writes Computer-driven trading raises meltdown fears:

An explosion in trading propelled by computers is raising fears that trading platforms could be knocked out by rogue trades triggered by systems running out of control.

Trading in equities and derivatives is being driven increasingly by mathematical algorithms used in computer programs. They allow trading to take place automatically in response to market data and news, deciding when and how much to trade similar to the autopilot function in aircraft.

Analysts estimate that up to 60 per cent of trading in equity markets is driven in this way.

… Frederic Ponzo, managing partner at GreySpark Partners, a consultancy, said: “It is absolutely possible to bring an exchange to breaking point by having an ‘algo’ entering into a loop so that by sending them at such a rate the exchange can’t cope.”

Regulators say it is unclear who is monitoring traders to ensure they do not take undue risks with their algorithms.

The Securities and Exchange Commission has proposed new rules that would require brokers to establish procedures to prevent erroneous orders.

Mark van Vugt, global head of sales at RTS Realtime Systems, a trading technology company, said: “If a position is blowing up so fast without the exchange or clearing firm able to react or reverse positions, the firm itself could be in danger as well.”

For more details on current problems, see the Financial Times article Credit Suisse fined over algo failures:

NYSE Euronext revealed on Wednesday it had for the first time fined a trading firm for failing to control its trading algorithms in a case that highlights the pitfalls of the rapid-fire electronic trading that has come to dominate many markets.

The group, which operates the New York Stock Exchange, said it had fined Credit Suisse $150,000 after a case in 2007 when hundreds of thousands of “erroneous messages” bombarded the exchange’s trading system.

Asked if the exchange’s systems could have been knocked out, he said: “If you had multiplied this many times you’d have had a problem on your hands.”
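
To illustrate the circuit-breaker idea mentioned at the top of this post, here is a toy sketch of a rule that halts trading when prices fall more than a set percentage from a recent peak. It is my own simplified example, not an exchange’s actual rulebook:

```python
from collections import deque

class CircuitBreaker:
    """Halt trading if the price falls more than max_drop (as a fraction)
    below the highest price seen in the last `window` observations."""
    def __init__(self, max_drop=0.07, window=300):
        self.max_drop = max_drop
        self.recent = deque(maxlen=window)
        self.halted = False

    def update(self, price):
        self.recent.append(price)
        peak = max(self.recent)
        if peak > 0 and (peak - price) / peak > self.max_drop:
            self.halted = True
        return self.halted

# Example: a slow downward drift followed by a sudden drop trips the breaker.
breaker = CircuitBreaker()
prices = [100 - 0.01 * t for t in range(300)] + [88.0]
for p in prices:
    if breaker.update(p):
        print("trading halted at price", p)
        break
```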

Shared pre-analytical vision, groupthink and economics

On the Economix Blog, economist Uwe Reinhardt writes An Economist’s Mea Culpa, arguing that economists have become locked into too narrow a pre-analytical vision of how the world works:

Fewer than a dozen prominent economists saw this economic train wreck coming — and the Federal Reserve chairman, Ben Bernanke, an economist famous for his academic research on the Great Depression, was notably not among them. Alas, for the real world, the few who did warn us about the train wreck got no more respect from the rest of their colleagues or from decision-makers in business and government than prophets usually do.

How could the economics profession have slept so soundly right into the middle of the economic mayhem all around us? Robert J. Shiller of Yale University, one of the sage prophets, addressed that question in an earlier commentary in this paper. Professor Shiller finds an explanation in groupthink, a term popularized by the social psychologist Irving L. Janis. In his book “Groupthink” (1972), the latter had theorized that most people, even professionals whose careers ostensibly thrive on originality, hesitate to deviate too much from the conventional wisdom, lest they be marginalized or even ostracized.

If groupthink is the cause, it most likely is anchored in what my former Yale economics professor Richard Nelson (now at Columbia University) has called a “vested interest in an analytic structure,” the prism through which economists behold the world.

This analytic structure, formally called “neoclassical economics,” depends crucially on certain unquestioned axioms and basic assumptions about the behavior of markets and the human decisions that drive them. After years of arduous study to master the paradigm, these axioms and assumptions simply become part of a professional credo. Indeed, a good part of the scholarly work of modern economists reminds one of the medieval scholastics who followed St. Anselm’s dictum “credo ut intellegam”: “I believe, in order that I may understand.”

Financial crisis: bad models or bad modellers

Australian economist John Quiggin writes on Crooked Timber about the contribution of models to the financial crisis:

The idea that bad mathematical models used to evaluate investments are at least partially to blame for the financial crisis has plenty of appeal, and perhaps some validity, but it doesn’t justify a lot of the anti-intellectual responses we are seeing. That includes this NY Times headline In Modeling Risk, the Human Factor Was Left Out. What becomes clear from the story is that a model that left human factors out would have worked quite well. The elements of the required model are

  1. in the long run, house prices move in line with employment, incomes and migration patterns
  2. if prices move more than 20 per cent out of line with long run value they will in due course fall at least 20 per cent
  3. when this happens, large classes of financial assets will go into default either directly or because they are derived from assets that can’t pay out if house prices fall

It was not the disregard of human factors but the attempt to second-guess human behavioral responses to a period of rising prices, so as to reproduce the behavior of housing markets in the bubble period, that led many to disaster. A more naive version of the same error is to assume that particular observed behavior (say, not defaulting on home loans) will be sustained even when the conditions that made that behavior sensible no longer apply.

…More generally, in most cases, the headline result from a large and complex model can usually be reproduced with a much simpler model embodying the same key assumptions. If those assumptions are right (wrong) the model results will be the same. The extra detail usually serves to produce more detailed results rather than to produce significant changes in the headline results.
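
As a closing illustration of how simple the required model can be, here is a toy sketch, with made-up numbers of my own rather than Quiggin’s, of the second element: compare a house price index against an income-based fundamental and flag any gap above 20 per cent.

```python
# Made-up annual indices (base year = 100). The fundamental tracks incomes;
# prices drift away from it during the boom years.
incomes = [100, 103, 106, 109, 112, 115, 118]
prices = [100, 105, 113, 124, 138, 150, 155]

for year, (income, price) in enumerate(zip(incomes, prices), start=2001):
    gap = price / income - 1.0
    flag = "expect a fall of at least 20%" if gap > 0.20 else "roughly in line"
    print(year, f"price/income gap {gap:+.0%}", "-", flag)
```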