Mapping global soil degradation

UNEP global map of soil degradation

UNEP provides access to a number of global environmental datasets via the GEO data portal. For example, the Global Assessment of Human-Induced Soil Degradation (GLASOD) database was created in 1990 by ISRIC. The database contains information on soil degradation within map units, based on a survey of soil experts from around the world. It includes the type, degree, extent, cause, and rate of soil degradation.
The main causes of degradation are (in order) overgrazing, deforestation, and agricultural mismanagement; each is responsible for a bit less than a third of the total degraded area. Other causes of degradation make up less than 8% of the total. Deforestation dominates in Asia, while overgrazing is the main driver in Africa and Australia.
For details of the assessment see:

Oldeman LR, Hakkeling RTA and Sombroek WG. 1991. World Map of the Status of Human-Induced Soil Degradation: An Explanatory Note (rev. ed.). UNEP and ISRIC, Wageningen (pdf).

Women’s work and the global economy

The April 12th 2006 Economist has an article on Women and the world economy:

… it is misleading to talk of women’s “entry” into the workforce. Besides formal employment, women have always worked in the home, looking after children, cleaning or cooking, but because this is unpaid, it is not counted in the official statistics. To some extent, the increase in female paid employment has meant fewer hours of unpaid housework. However, the value of housework has fallen by much less than the time spent on it, because of the increased productivity afforded by dishwashers, washing machines and so forth. Paid nannies and cleaners employed by working women now also do some work that used to belong in the non-market economy.

Nevertheless, most working women are still responsible for the bulk of chores in their homes. In developed economies, women produce just under 40% of official GDP. But if the worth of housework is added (valuing the hours worked at the average wage rates of a home help or a nanny) then women probably produce slightly more than half of total output.

The increase in female employment has also accounted for a big chunk of global growth in recent decades. GDP growth can come from three sources: employing more people; using more capital per worker; or an increase in the productivity of labour and capital due to new technology, say. Since 1970 women have filled two new jobs for every one taken by a man. Back-of-the-envelope calculations suggest that the employment of extra women has not only added more to GDP than new jobs for men but has also chipped in more than either capital investment or increased productivity. Carve up the world’s economic growth a different way and another surprising conclusion emerges: over the past decade or so, the increased employment of women in developed economies has contributed much more to global growth than China has.

In particular, there is strong evidence that educating girls boosts prosperity. It is probably the single best investment that can be made in the developing world. Not only are better educated women more productive, but they raise healthier, better educated children. There is huge potential to raise income per head in developing countries, where fewer girls go to school than boys. More than two-thirds of the world’s illiterate adults are women.

More jobs, more babies (from The Economist, April 12th 2006):

It is sometimes argued that it is shortsighted to get more women into paid employment. The more women go out to work, it is said, the fewer children there will be and the lower growth will be in the long run. Yet the facts suggest otherwise. Chart 3 shows that countries with high female labour participation rates, such as Sweden, tend to have higher fertility rates than Germany, Italy and Japan, where fewer women work. Indeed, the decline in fertility has been greatest in several countries where female employment is low.
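The three sources of growth listed in the excerpt correspond to the standard growth-accounting identity, in which GDP growth is roughly the labour share times labour growth, plus the capital share times capital growth, plus productivity growth. A minimal sketch with invented numbers (the shares and growth rates below are illustrative assumptions, not The Economist's figures):

```python
# Growth accounting: decompose GDP growth into contributions from
# labour, capital, and total factor productivity (TFP).
# All numbers below are invented for illustration.
labour_share = 0.65    # assumed share of labour in income
g_labour = 0.018       # growth of hours worked, boosted by more women in jobs
g_capital = 0.025      # growth of the capital stock
g_tfp = 0.010          # growth of total factor productivity

contrib_labour = labour_share * g_labour
contrib_capital = (1 - labour_share) * g_capital
g_gdp = contrib_labour + contrib_capital + g_tfp

print(f"GDP growth:           {g_gdp:.2%}")
print(f"  from extra labour:  {contrib_labour:.2%}")
print(f"  from extra capital: {contrib_capital:.2%}")
print(f"  from productivity:  {g_tfp:.2%}")
```

Comparing the labour contribution under different assumptions about female employment growth is the kind of back-of-the-envelope calculation the article alludes to.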

via Three Quarks

Hurricanes, Risk Models, and Insurance

Roger Pielke Jr has an interesting post, Are We Seeing the End of Hurricane Insurability?, on the Prometheus weblog. The insurance industry uses models of expected losses to set rates for catastrophic losses from events such as hurricanes. However, these models are proprietary and not open to public evaluation. Now consumer groups are attacking the providers of “catastrophe models”, arguing that the models’ main purpose is to justify increases in insurance rates.

In the past consumer groups have argued:

Consumers were told that, after the big price increases in the wake of Hurricane Andrew, they would see price stability. This was because the projections were not based on short-term weather history, as they had been in the past, but on very long-term data from 10,000 to 100,000 years of projected experience. The rate requests at the time were based upon the average of these long-range projections. Decades with no hurricane activity were assessed in the projections as were decades of severe hurricane activity, as most weather experts agree we are experiencing now. Small storms predominated, but there were projections of huge, category 5 hurricanes hitting Miami or New York as well, causing hundreds of billions of dollars in damage. Consumers were assured that, although hurricane activity was cyclical, they would not see significant price decreases during periods of little or no hurricane activity, nor price increases during periods of frequent activity. That promise has now been broken.

While the catastrophe modelling firms argue:

Given a constant climatological state (or if annual variations from that state are short lived and unpredictable) the activity rate in a catastrophe model can best be represented as the average of long-term history. In this situation there is no need to characterize the period over which the activity is considered to apply because, with current knowledge, it is expected that rate will continue indefinitely. The assumption that activity remains consistent breaks down, however, where there are either multi-year fluctuations in activity or persistent trends. It then becomes necessary to characterize the time period over which the activity in the Cat model is intended to apply.

Pielke argues that the disaster modellers are implying:

…that the historical climatology of hurricane activity is no longer a valid basis for estimating future risks. This means that the catastrophe models that they provide are untethered from experience. Imagine if you are playing a game of poker, and the dealer tells you that the composition of the deck has been completely changed – now you don’t know whether there are 4 aces in the deck or 20. It would make gambling based on probabilities a pretty dodgy exercise. If RMS [Risk Management Solutions – a catastrophe modelling company] is correct, then it has planted the seed that has potential to completely transform its business and the modern insurance and reinsurance industries.

What happens if history is no longer a guide to the future? One answer is that you set your expectations about the future based on factors other than experience. One such approach is to ask the relevant experts what they expect. This is what RMS did last fall, convening Kerry Emanuel, Tom Knutson, Jim Elsner, and Mark Saunders in order to conduct an “expert elicitation”.

… RMS conducted its elicitation October, 2005 with the intent that it will shape its risk estimates for the next 5 years. This is wholly unrealistic in such a fast moving area of science. It is unlikely that the perspectives elicited from these 4 scientists will characterize the views of the relevant community (or even their own views!) over the next five years as further research is published and hurricane seasons unfold. Because RMS has changed from a historical approach to defining risk, which changes very, very slowly, if at all over time, to an expert-focused approach, it should fully expect to see very large changes in expert views as science evolves. This is a recipe for price instability, exactly the opposite from what the consumer groups, and insurance commissioners, want.

From the perspective of the basic functioning of the insurance and reinsurance industries, the change in approach by RMS is an admission that the future is far more uncertain than has been the norm for this community. Such uncertainty may call into question the very basis of hurricane insurance and reinsurance which lies in an ability to quantify and anticipate risks. If the industry can’t anticipate risks, or simply come to a consensus on how to calculate risks (even if inaccurate), then this removes one of the key characteristics of successful insurance. Debate on this issue has only just begun.

Hedging one’s bets with insurance is a good strategy for dealing with risk, where known outcomes are expected to occur with some known probability. However, when confronting more uncertain situations, other approaches are needed, such as building resilience to potential classes of shock, engaging in experimental management to decrease uncertainty, and accelerating learning by integrating sources of knowledge across a wider variety of domains (e.g. meteorology, ecology, and urban planning) and different regions (e.g. Sri Lanka, the Netherlands, and New Orleans).
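To see why the assumed storm activity rate matters so much for pricing, here is a minimal sketch of catastrophe-style rate-setting: yearly event counts drawn from a Poisson distribution, loss sizes from a lognormal, and a premium proportional to expected annual loss. Every number is invented, and real catastrophe models are vastly more elaborate, but the point survives: shift the assumed rate and the indicated premium shifts with it.

```python
import numpy as np

rng = np.random.default_rng(42)

def expected_annual_loss(annual_rate, n_years=200_000):
    """Monte Carlo expected annual loss: Poisson event counts per year,
    lognormal loss severities (all parameters illustrative)."""
    counts = rng.poisson(annual_rate, n_years)
    losses = rng.lognormal(mean=2.0, sigma=1.0, size=counts.sum())
    return losses.sum() / n_years

# A long-term historical landfall rate vs. the higher near-term rate an
# expert elicitation might produce: the indicated premium moves with it.
for label, rate in [("historical long-term rate", 0.6),
                    ("elicited near-term rate  ", 0.9)]:
    eal = expected_annual_loss(rate)
    premium = 1.5 * eal   # crude loading for expenses and cost of capital
    print(f"{label}: expected annual loss = {eal:.2f}, premium = {premium:.2f}")
```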

ATEAM: Modelling ecosystem services

Worldchanging guest writers David Zaks and Chad Monfreda, from the Center for Sustainability and the Global Environment at the University of Wisconsin, have a post, ATEAM: Mr.T takes on ecosystems services, about a project to model ecosystem services in Europe.

The ATEAM (Advanced Terrestrial Ecosystem Analysis and Modeling) project (also here and here) is not made up of rogue soldiers of fortune, but of academics in Europe. The scientific assessment links changes in human well-being with future changes in climate and land use. Researchers combined global climate models and land-use scenarios using innovative interdisciplinary methods to show how ecosystem goods and services are likely to change through the 21st century in Europe. ATEAM paints a mixed picture of a continent divided into a vulnerable south and an adaptive north. The results are freely available online as a downloadable (PC only) mapping tool that displays the vulnerability of six key sectors: agriculture, forestry, carbon storage, energy, water, and biodiversity.

Stakeholder input helped to quantify regional adaptive capacity, while climate and land-use models estimated potential impacts. Adaptive capacity and potential impacts together define the overall vulnerability of individual ecosystem services. Even when ‘potential impacts’ are fixed, differential vulnerability across Europe indicates an opportunity to boost ‘adaptive capacity’. An emphasis on adaptation certainly doesn’t condone inaction on climate change and environmental degradation. Rather, it stresses resilience in a world that must prepare for surprise threats that are increasingly the norm.

ATEAM is a wonderful example of sustainability science that lets people imagine the possible futures being shaped through decisions taken today. Integrated assessments like ATEAM and the MA (also here) have a huge potential to create a sustainable biosphere by offering solutions that are at once technical and social. Combined with many ideas that WC readers are already familiar with—planetary extension of real-time monitoring networks, open source scenario building, and pervasive citizen participation—the next generation of assessments could help tip the meaning of ‘global change’ from gloomy to bright green.
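The vulnerability framework described above can be made concrete with a toy calculation. This is only a schematic in the spirit of ATEAM; the project's actual indices, data, and method of aggregation differ:

```python
import numpy as np

# Toy gridded indices on [0, 1]; all values are invented.
# Cells stand in for map regions (e.g. a vulnerable south, an adaptive north).
potential_impact = np.array([[0.8, 0.7],
                             [0.3, 0.2]])
adaptive_capacity = np.array([[0.3, 0.4],
                              [0.7, 0.8]])

# One simple way to combine them: vulnerability rises with potential
# impact and falls with adaptive capacity.
vulnerability = potential_impact * (1.0 - adaptive_capacity)
print(vulnerability)
# Raising adaptive capacity lowers vulnerability even when potential
# impacts are held fixed, which is the point made in the excerpt above.
```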

Climate change, smoking, and gaming mental models

The environmental journalist Mark Hertsgaard has an article about climate change politics and journalism in Vanity Fair (May 2006) that shows how climate denial involved many of the same people who worked to deny the health impacts of smoking.

Although scientists apply the neutral term “climate change” to all of these phenomena, “climate chaos” better conveys the abrupt, interconnected, wide-ranging consequences that lie in store. “It’s a very appropriate term for the layperson,” says Schellnhuber, a physicist who specializes in chaos theory. “I keep telling politicians that I’m not so concerned about a gradual climate change that may force farmers in Great Britain to plant different crops. I’m worried about triggering positive feedbacks that, in the worst case, could kick off some type of runaway greenhouse dynamics.”…

No one pretends that phasing out carbon-based fuels will be easy. The momentum of the climate system means that “a certain amount of pain is inevitable,” says Michael Oppenheimer. “But we still have a choice between pain and disaster.”

Unfortunately, we are getting a late start, which is something of a puzzle. The threat of global warming has been recognized at the highest levels of government for more than 25 years. Former president Jimmy Carter highlighted it in 1980, and Al Gore championed it in Congress throughout the 1980s. Margaret Thatcher, the arch-conservative prime minister of Britain from 1979 to 1990, delivered some of the hardest-hitting speeches ever given on climate change. But progress stalled in the 1990s, even as Gore was elected vice president and the scientific case grew definitive. It turned out there were powerful pockets of resistance to tackling this problem, and they put up a hell of a fight.

In the 1970s and 1980s, R. J. Reynolds Industries, Inc. funded medical research, with the help of Dr. Frederick Seitz, a former president of the National Academy of Sciences, to the tune of $45 million. The research, however, avoided the central health question facing Reynolds: Seitz admitted that they were not allowed to study the health effects of smoking. Despite this, the tobacco industry used the multi-million-dollar research program as evidence of its commitment to science, and to argue that the evidence on the health effects of smoking was inconclusive.

In the 1990s, Seitz began arguing that the science behind global warming was likewise inconclusive and certainly didn’t warrant imposing mandatory limits on greenhouse-gas emissions. He made his case vocally, trashing the integrity of a 1995 I.P.C.C. report on the op-ed page of The Wall Street Journal, signing a letter to the Clinton administration accusing it of misrepresenting the science, and authoring a paper which said that global warming and ozone depletion were exaggerated threats devised by environmentalists and unscrupulous scientists pushing a political agenda. In that same paper, Seitz asserted that secondhand smoke posed no real health risks, an opinion he repeats in our interview. “I just can’t believe it’s that bad,” he says.

Al Gore and others have said, but generally without offering evidence, that the people who deny the dangers of climate change are like the tobacco executives who denied the dangers of smoking. The example of Frederick Seitz, described here in full for the first time, shows that the two camps overlap in ways that are quite literal—and lucrative. Seitz earned approximately $585,000 for his consulting work for R. J. Reynolds, according to company documents unearthed by researchers for the Greenpeace Web site ExxonSecrets.org and confirmed by Seitz. Meanwhile, during the years he consulted for Reynolds, Seitz continued to draw a salary as president emeritus at Rockefeller University, an institution founded in 1901 and subsidized with profits from Standard Oil, the predecessor corporation of ExxonMobil.

Seitz was the highest-ranking scientist among a band of doubters who, beginning in the early 1990s, resolutely disputed suggestions that climate change was a real and present danger. As a former president of the National Academy of Sciences (from 1962 to 1969) and a winner of the National Medal of Science, Seitz gave such objections instant credibility. Richard Lindzen, a professor of meteorology at M.I.T., was another high-profile scientist who consistently denigrated the case for global warming. But most of the public argument was carried by lesser scientists and, above all, by lobbyists and paid spokesmen for the Global Climate Coalition. Created and funded by the energy and auto industries, the Coalition spent millions of dollars spreading the message that global warming was an uncertain threat. Journalist Ross Gelbspan exposed the corporate campaign in his 1997 book, The Heat Is On, which quoted a 1991 strategy memo: the goal was to “reposition global warming as theory rather than fact.”

“Not trivial” is how Seitz reckons the influence he and fellow skeptics have had, and their critics agree. The effect on media coverage was striking, according to Bill McKibben, who in 1989 published the first major popular book on global warming, The End of Nature. Introducing the 10th-anniversary edition, in 1999, McKibben noted that virtually every week over the past decade studies had appeared in scientific publications painting an ever more alarming picture of the global-warming threat. Most news reports, on the other hand, “seem to be coming from some other planet.”

The deniers’ arguments were frequently cited in Washington policy debates. Their most important legislative victory was the Senate’s 95-to-0 vote in 1997 to oppose U.S. participation in any international agreement—i.e., the Kyoto Protocol—that imposed mandatory greenhouse-gas reductions on the U.S.

Tim Lambert on Deltoid provides some further background on the funding of smoking and climate denialists, as documented in tobacco industry documents.

Balance, Bias, and Complexity in Climate Change Journalism

The Society of Environmental Journalists’ SEJ Publications has an interesting set of articles on climate change and journalism: an interview with New York Times journalist Andrew Revkin and an article on journalistic balance.

Do you think that climate change is covered adequately by the media? I mean, what kind of job do you think they’re doing?

AR: It’s certainly a decent amount of coverage these days, but I still…I don’t think people are covering it wrong. It doesn’t fit the norms of journalism. The heft of the story is not conveyed. Either the uncertainties make us all fuzz out and look at something more germane like a new explosion in Iraq or the latest scandal in Washington with lobbyists. So we turn away from it. Or we latch onto some new finding that feels like news (abrupt change) and our endless sift for the “front-page thought” makes us minimize the uncertainties.

But it’s not just a journalism problem. After covering it for twenty years…you can write the perfect story capturing both the gravitas and the uncertainties of human-induced climate change, perfect on every level, and it won’t change things.

We are not attuned to things on this time scale and with this level of uncertainty. Partly because of our political system being so short term, our business cycle being so short term, and because our concerns are focused mainly on what affects my family, then what affects my community, then what affects my state, then what affects my country, and then what affects my globe.

What would be the key points you’d stress with other journalists about climate change? What subjects should they hit?

AR: Not just for climate change, but just in general. When you can step back, whether it’s sprawl or nonpoint source pollution or climate change, there are things going on around you that are profound, that are transforming landscapes. And we ignore them because they are happening in this incremental fashion that journalism just does not recognize.

And it’s not the kind of thing that you can do daily or maybe even yearly. But once in a while, when there’s a slow news cycle, step back and see how many houses are being built on steep slopes, or how much leakage there is from underground gas tanks. Or what ecologists and biologists are saying about the way a valley, watershed or coast will be transformed over the next century and how does that relate to the surrounding institutions?

A perfect example is coastal development and sea level rise. One of the firmest things coming out of any climate model is that rising seas are the new normal for centuries to come. So if you are a journalist on the coast, this immediately starts a series of stories to see what is being done to reflect that.

You have to look at the world and ask, “Do our institutions reflect this? Are we still granting flood insurance to low-lying areas?” It can lead to these types of stories.

On the mitigation side, college activism is exploding now. When I went to Montreal to cover the last round of climate-treaty talks the only people there who seemed to be talking sense were the youngest ones.

The Earth and Sky site also has an interview with Andrew Revkin about his interest in the North Pole.

Tremors and Tipping Points

Tipping points cause some important ecosystem surprises. Examples include collapses of rangelands, water quality, and some fisheries. The trouble with tipping points is that they are hard to anticipate. However, tremors may provide an advance warning of some tipping points.

The graphic shows a model of a pastoral system. There is a tipping point when the stocking level of herbivores is about 5. Above the tipping point, grassy vegetation disappears and the grazing system collapses. As the tipping point is approached from low levels of herbivores, the standard deviation of grass biomass rises sharply before the tipping point is reached. If the herbivore level is rising slowly enough, the rise in standard deviation could provide advance warning of impending collapse. If the pastoralist were attentive to the warning, sheep numbers could be reduced in time to prevent the collapse.
[Figure: Pastoral Ecosystem]
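A rough sketch of this kind of early-warning simulation, using a Noy-Meir-style grazing model (logistic grass growth minus a saturating grazing term) as a stand-in for the model in the graphic. All parameters are illustrative assumptions, chosen so that the fold bifurcation sits near a stocking level of 5:

```python
import numpy as np

# dG/dt = r*G*(1 - G/K) - c*H*G^2/(G^2 + a^2), plus small noise.
# With these (assumed) parameters the vegetated state vanishes near H ~ 5.
r, K, a, c = 1.0, 10.0, 0.5, 0.5
dt, sigma = 0.1, 0.05
steps = 60_000

rng = np.random.default_rng(1)
H = np.linspace(1.0, 6.0, steps)     # stocking level rises slowly
G = np.empty(steps)
G[0] = 9.5                           # start near the vegetated equilibrium
for t in range(steps - 1):
    growth = r * G[t] * (1 - G[t] / K)
    grazing = c * H[t] * G[t]**2 / (G[t]**2 + a**2)
    noise = sigma * np.sqrt(dt) * rng.standard_normal()
    G[t + 1] = max(G[t] + (growth - grazing) * dt + noise, 0.0)

# The rolling standard deviation of grass biomass rises as the
# tipping point is approached, before the collapse itself occurs.
window = 2_000
for step in (5_000, 25_000, 45_000, 48_000):   # ever closer to the fold
    sd = G[step - window:step].std()
    print(f"H = {H[step]:.2f}, rolling sd of G = {sd:.3f}")
```

With a slow enough ramp in stocking, the rolling standard deviation grows well before the collapse, which is the tremor described above.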

Thomas Kleinen and colleagues have shown that reddening of the variance spectrum can anticipate rapid climate changes such as those that could result from a breakdown in ocean circulation. Steve Carpenter and Buz Brock have analyzed water pollution, air pollution, and social systems that tremble before they tip. They demonstrate increases in variance, which may be more easily detected than reddening of spectra. Importantly, the variance increases can be detected with simple statistical filters using common time-series data. No particular knowledge of the actual ecosystem dynamics is required.

Berglund and Gentz compare hard losses of stability, in which an attractor vanishes (such as the pastoral system shown here), with soft losses of stability, where an attractor divides like a braided river. Hard losses of stability — the regime shifts that cause resource collapses — may provide stronger advance warnings than soft losses of stability — the regime shifts that gradually and imperceptibly create traps for ecosystem management. Ludwig, Walker and Holling provide a more general discussion of hard and soft losses of stability in ecosystems.
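A sketch of the sort of simple filter this refers to (a generic early-warning recipe, not Carpenter and Brock's exact method): detrend with a moving average, then track rolling variance and lag-1 autocorrelation. Applied to the grass-biomass series G from the sketch above, both statistics rise ahead of the collapse.

```python
import numpy as np

def early_warning_stats(x, window=2_000):
    """Rolling variance and lag-1 autocorrelation of a time series after
    removing a slow trend. Needs no model of the underlying system."""
    kernel = np.ones(window) / window
    trend = np.convolve(x, kernel, mode="same")   # moving-average detrend
    resid = x - trend

    variance = np.empty(len(x) - window)
    autocorr = np.empty(len(x) - window)
    for i in range(window, len(x)):
        w = resid[i - window:i]
        variance[i - window] = w.var()
        autocorr[i - window] = np.corrcoef(w[:-1], w[1:])[0, 1]
    return variance, autocorr

# Example: var, ac = early_warning_stats(G)
# Both trend upward before the fold, signalling critical slowing down.
```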

 

Adaptive environmental assessment and management: course reading evaluation

At the end of my adaptive management course at McGill, I asked my students to evaluate the course readings and suggest which three I should keep and which three I should cut. There was substantial agreement on what to keep, but more disagreement on what to cut.

Keep 

A favourite reading for over half the class was:

The students liked this chapter because it was a real-world case, told from the point of view of an individual, that was also well connected to theory.

Students also really liked the Holling readings. Both the summary of the book Panarchy

  • CS Holling. 2001. Understanding the complexity of economic, ecological and social systems. Ecosystems 4: 390–405.

and the pathology of resource management.

  • CS Holling, and G. K. Meffe. 1996. Command and Control and the Pathology of Natural Resource Management. Conservation Biology 10(2): 328-37.

The next favourite was controversial

This paper was popular with about a third of the class but an equal number thought it was one of the readings that should be cut.

Other readings that got more than one vote were readings from Kai Lee’s book, Carl Walters’ book, my scenario planning paper, and the Fazey learning article.

Cut 

The paper most recommended to be cut was:

Students thought it didn’t add a lot to the course. While some students thought it was one of the best papers, more than three times as many thought it should be cut as thought it should be kept.

The second recommendation for cutting was

Students found this paper too technical (I don’t think it is). This rating probably indicates that I need to rework how I discuss Bayesian statistics, learning, and experimental design in the class.

The third least popular paper was the Olsson et al. paper mentioned above.
The excerpts from Kai Lee’s book were the only other readings to have more than two recommendations for removal; however, an equal number of students thought they were some of the best readings.
Reading Revisions 

What I plan to do is reduce the number of core readings, add some supplementary readings, and rethink the quantitative part of the course – I think I need some good in-class exercises and homework assignments on Bayesian stats and experimental design. But I might change my mind after I read the reports from their adaptive management projects.

MA: Putting a Price Tag on the Planet

Putting a Price Tag on the Planet is a long article by Lila Guterman on the Millennium Ecosystem Assessment in the April 7, 2006, issue of The Chronicle of Higher Education. The article describes the history, funding, and operation of the MA, as well as its findings.

As the 20th century drew to a close, leaders in the field of ecology decided they were failing at one of their primary goals. They had presented sign after sign that people were harming the environment — killing off species, destroying rain forests, polluting the air and water — but the warnings had little effect. So, to encourage conservation, they decided to appeal to humanity’s baser instincts.


Self-Organization of Ecosystem Lumpiness

We have growing evidence that ecosystems are lumpy. Along an axis such as body size, for example, we find clusters of similar-sized species separated by intervals of body size in which no species are found. Multiple explanations exist for lumpy patterns, and their causes are still debated. Scheffer and van Nes present a simple mathematical explanation for the evolution of lumpy patterns in ecosystems. Their article appears in the Early Edition of PNAS on 3 April 2006. The abstract states:

Here we show that self-organized clusters of look-a-likes may emerge spontaneously from evolution of competitors. The explanation is that there are two alternative ways to survive together: being sufficiently different or being sufficiently similar. Using a model based on classical competition theory, we demonstrate a tendency for evolutionary emergence of regularly spaced lumps of similar species along a niche axis . . . Our results suggest that these patterns may represent self-constructed niches emerging from competitive interactions.
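The model based on classical competition theory that the abstract mentions is essentially Lotka-Volterra competition among many species arranged along a one-dimensional niche axis, with competition strength declining with niche distance. A rough sketch (parameters are invented, not those of Scheffer and van Nes):

```python
import numpy as np

rng = np.random.default_rng(0)
n_species = 200
x = np.sort(rng.uniform(0.0, 1.0, n_species))   # positions on the niche axis
niche_width = 0.08                              # competition niche width

# Gaussian competition kernel: similar species compete most strongly.
alpha = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * niche_width**2))

# Lotka-Volterra dynamics with a uniform carrying capacity.
N = np.full(n_species, 0.01)
r, K, dt = 1.0, 1.0, 0.05
for _ in range(40_000):
    N += dt * r * N * (1.0 - alpha @ N / K)
    N = np.maximum(N, 0.0)

# Survivors collapse into regularly spaced clusters of similar species
# ("lumps"), rather than spreading out evenly along the axis.
survivors = x[N > 1e-3]
print(np.round(survivors, 2))
```

Running this, the surviving niche positions come out in discrete clusters of near-identical species separated by gaps: coexistence by being sufficiently similar within a lump, and sufficiently different between lumps.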

Later, the authors comment

Finally, it is worth noting a remarkable link to Hotelling’s theory in social sciences suggesting that competition of companies or political parties will often lead to convergence rather than differentiation. In this field of research, the focus is on the problem that such convergence is not in the interest of the public. For instance, having more of the same kind of TV channels is not better. By contrast, the seeming redundancy of similar species in nature may be essential to ensure ecosystem functioning in the face of adverse impacts.

When Scheffer and van Nes’s article is published in the print version of PNAS, it will be accompanied by a commentary written by Craig Allen which places the new findings in the context of research on lumps dating to the original discovery by C.S. Holling in 1992 (Ecological Monographs 62: 447-502).