
August 2009 Archives

Speakers of English in my subject, glaciology, have never been afraid of borrowing good words from other languages. My all-time favourite glaciological word comes from Icelandic: jökulhlaup.

First things first. How do you pronounce it? Icelandic j is pronounced like English y. Then you stretch the ö into an uh noise, and emphasize the h, which should be like the ch in Scots loch. The au is roughly like the vowel in cow, or possibly slurp. You can listen to an Icelander saying it here.

Next, what does it mean? It translates literally into English as "glacier burst", which will be more informative for most readers but is nowhere near as much fun. A jökulhlaup is a large, sudden and usually unwelcome increase in the rate of flow of a stream draining a basin in which there is an ice-dammed lake.

Glacier ice can be very effective as a dam for its own meltwater. Unfortunately it is also untrustworthy for this job. Being less dense than the water it is damming, it is vulnerable to flotation. If the water depth reaches nine tenths of the thickness of the ice dam (the ratio of ice density to water density), the ice will float.
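To put numbers on that flotation condition, here is a minimal back-of-envelope sketch in Python. The densities are assumed round textbook values (about 900 kg/m3 for glacier ice, 1000 kg/m3 for fresh water), not measurements from any particular dam:

```python
# Minimal sketch of the flotation condition for an ice dam.
# Densities are assumed round textbook values, not measurements.
RHO_ICE = 900.0     # kg/m^3, typical glacier ice
RHO_WATER = 1000.0  # kg/m^3, fresh water

def flotation_depth(ice_thickness_m):
    """Water depth at which an ice dam of the given thickness starts to float."""
    return (RHO_ICE / RHO_WATER) * ice_thickness_m

# Example: a 100 m thick ice dam floats once the lake reaches ~90 m depth.
print(flotation_depth(100.0))  # -> 90.0
```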

Flotation can be avoided if the water manages to tunnel beneath the ice. A subglacial channel forms and the water starts squirting out of the lake. The flow rate grows because the water steadily enlarges the channel, melting its walls. This kind of jökulhlaup ends abruptly when the supply of water runs out, that is, when the lake has emptied. The glacier carries on deforming slowly, squeezing the channel shut over the course of the winter, and the same thing happens again next summer once the lake has refilled with meltwater.

The nastier kind of jökulhlaup is the one in which flotation is sudden and on a large scale. Huge volumes of water can begin to flow almost immediately. How long the jökulhlaup lasts depends on how long the supply of water lasts, how good it is at enlarging the channel and how long it can keep the dam afloat. The nastiness lies in the unpredictability of flood onset. If you live downstream, or have invested in valuable downstream structures such as bridges or oil pipelines, you get little or no warning of the arrival of an enormous wall of water.

The biggest jökulhlaup we know of is probably the one that emptied Lake Agassiz-Ojibway and led to the disconcerting cold snap 8200 years ago. But in modern times they happen on a smaller scale every year, in hundreds of glacierized drainage basins.

Icelanders learned to live with jökulhlaups long ago. One option, in sparsely inhabited terrain, is simply to stay away from the rivers. Nepal and other Himalayan countries don't have that option. There are many more people than in Iceland, and the rivers are too important as a resource sustaining agriculture. Here the jökulhlaups have come to be referred to as "GLOFs" – glacial lake outburst floods – which I think is not nearly as good as jökulhlaup but does cover the point that not all of these floods are due to the breaching of ice dams. Some of them come from the sudden collapse of moraine dams, and some from the drainage of lakes that are not proglacial (in contact with the glacier margin) but subglacial or supraglacial (on the glacier surface).

Whatever their etymological merits, GLOFs are a serious hazard, and have spurred the completion by ICIMOD, the International Centre for Integrated Mountain Development, of extensive glacier and glacier-lake inventories along the length of the Himalaya. Fears that the hazard is worse now than in former times, when glacier retreat was less rapid, are rational. Glacier retreat creates space for the impounding of water between the glacier and the moraine it left behind at its position of maximum extent. There is more meltwater nowadays, and more scope for it to pond in whatever embayments result from the changing relationship of the glacier to its confining walls. Call them what you will, jökulhlaups or GLOFs are worth all of the attention they are beginning to get.

About 6250 BC, there was a drop in temperature, followed by recovery to what was normal for the time, over perhaps 200 years. The cooling amounted to as much as 5 °C at the summit of the Greenland Ice Sheet, and signs of a cooler, or in some places a drier or a dustier, atmosphere can be seen across much of the Northern Hemisphere.

Palaeoclimatologists call this cold snap "the 8.2 ka" (ka being thousands of years before the present, present being defined as 1950 AD). It is a much lesser phenomenon than the 15,000-year course of deglaciation, and lesser even than some of the other short-term anomalies we can see during deglaciation, but the 8.2 ka is nevertheless a clearly-defined, short, sharp blip in the record.

Why did the 8.2 ka start, and why, having started, did it stop? We now think we know the answer to the first of these questions. The simplest explanation is one which also explains, in a rather satisfying way, some of the other cold spells during deglaciation. It rests on how the bulk of the meltwater from the Laurentide Ice Sheet, covering northern North America, found its way to the ocean, and what it did when it got there. The four largest outlets for Laurentide meltwater were the Mississippi, the Mackenzie, Hudson Strait (draining all of the region now occupied by Hudson Bay) and the St Lawrence. As the ice sheet shrank, the volume of meltwater varied but so too did the path it followed to reach the Atlantic.

We can connect evidence from earlier times during deglaciation quite confidently with switches from the Mississippi to the St Lawrence as the main Laurentide meltwater outlet, and from the St Lawrence to the Mackenzie.

Both of these switches were followed by hemisphere-wide cold spells, but the 8.2 ka has a more dramatic precursor than either. It began with, or at least followed, the final dismemberment of the ice sheet into two parts, one east and one west of Hudson Bay. The dismemberment was due to a colossal flood. At the time, meltwater was ponded between the ice margin and the higher ground to the south, as the long-vanished but very large Lake Agassiz-Ojibway. Apparently the lake water was able to force open a subglacial channel beneath the dwindling neck of the ice sheet, draining in one fell swoop (or possibly two) towards Hudson Strait over a time believed to be a year or less. The level of the world ocean would have risen by something like 100–200 millimetres in each swoop, but more significantly the catchment delivering fresh water to the Atlantic via Hudson Strait would thereafter have been close to its present-day extent.
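As a rough check on those figures, here is a small sketch that converts a given sea-level rise back into the freshwater volume required. The ocean area is an assumed round figure, about 3.6 × 10^8 km2, not taken from the studies discussed above:

```python
# Back-of-envelope: what freshwater volume corresponds to 100-200 mm
# of global sea-level rise? Ocean area is an assumed round figure.
OCEAN_AREA_KM2 = 3.6e8  # ~3.6 x 10^8 km^2

def flood_volume_km3(sea_level_rise_mm):
    """Volume (km^3) needed to raise the world ocean by the given amount."""
    rise_km = sea_level_rise_mm * 1e-6
    return OCEAN_AREA_KM2 * rise_km

for rise_mm in (100, 200):
    print(rise_mm, "mm ->", flood_volume_km3(rise_mm), "km^3")
# roughly 36,000 to 72,000 km^3 per swoop
```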

Why should it matter how the meltwater gets where it is going? The key to this question is in the adjective "fresh". In oceanography, fresh means not salty, and not salty means less dense. Make the surface layers of the north Atlantic less dense and you make them less likely to sink, which is bad news for the meridional overturning circulation or MOC. If you think the Laurentide Ice Sheet was big, you should check out the MOC, which is a major player on the global climatic playing field.

The 8.2 ka is one signal from the past for which I can't think of an immediate near-future angle. There are plenty of worrying ice-dammed lakes in the modern world, but there is no chance at all of a repeat of the 8.2 ka in modern times because there are no stores of ice-dammed water anywhere near the size of Lake Agassiz-Ojibway.

However, we don't know the answer to the second question: why did the 8.2 ka stop? Evidently it wasn't big enough to switch off the MOC, and according to the most authoritative recent assessment such an event is "very unlikely" in the foreseeable future. But it would be nice if we could be more confident about such assessments. It would help a lot if we could find out whether the 8.2 ka was a near miss or just a mildly interesting blip in the climatic record.

In a book to be published in October, Dr Nina Pierpont, a New York paediatrician, says she has identified a 'wind turbine syndrome' ('WTS') due to the disruption or abnormal stimulation of the inner ear's vestibular system by wind-turbine infrasound and low-frequency noise. Its most distinctive feature is a group of symptoms which she calls 'visceral vibratory vestibular disturbance', or VVVD. Evidently this can cause problems ranging from internal pulsation, quivering, nervousness, fear and a compulsion to flee to chest tightness and tachycardia (increased heart rate). Turbine noise can also trigger nightmares and other disorders in children, as well as harm cognitive development in the young. However, Dr Pierpont made clear that not all people living close to turbines are susceptible.

That might explain why, as the British Wind Energy Association (BWEA) noted in an initial response, 'an independent study on wind farms and noise in 2007 found only four complaints from about 2000 turbines in the country, three of which were resolved by the time the report was published'.

Nevertheless, it's wise to be cautious. Some small domestic-scale machines, which being small have very high rotation speeds, can be noisy, but audible noise from large modern wind turbines is nowadays hardly an issue: gearless, variable-speed turbines are very quiet, since the blade rotation speed is better matched to the changing wind speed, which increases energy-transfer efficiency and reduces aerodynamic noise. But low-frequency sound might conceivably be a problem for some people. The only way to find out whether this is the case, and then to assess its significance, is to carry out research on a large scale – Pierpont's sample was tiny, evidently based mostly on interviews with just 10 families living near wind turbines – 38 people in all.

In 2006 the DTI published a study by Hayes McKenzie, which investigated claims that infrasound or low-frequency noise emitted by wind turbine generators was causing health effects. The report concluded that there was no evidence of health effects arising from infrasound or low-frequency noise generated by wind turbines. But it noted that a phenomenon known as Aerodynamic Modulation (AM) was, in some isolated circumstances, occurring in ways not anticipated by ETSU-R-97, the report which describes the method of assessing the local noise impact of a wind farm. So the Government commissioned Salford University to conduct further work. This study concluded that AM is not an issue for the UK's wind farm fleet. Based on an assessment of 133 operational wind projects across Britain, it found that, although the occurrence of AM cannot be fully predicted, its incidence at operational turbines is low. Out of all the working wind farms at the time of the study, there were four cases where AM appeared to be a factor. Based on these findings, the Government said it did not consider there to be a compelling case for more work on AM, but that it would keep the issue under review.

Pierpont's views have attracted significant media attention, with, for example, the normally pro-wind Independent even suggesting that 'there is a prudential argument for postponing the commissioning of land-based wind farms until they are shown to be safe'. A little less radically, Pierpont has called for a 2 km safety zone.

The NHS website stepped in with a short critique, which noted that "The study design was weak, the study was small and there was no comparison group. There is also no information on how the group was selected in the first place and some uncertainty as to which countries these people come from." It concluded 'it is physically and biologically plausible that low frequency noise generated by wind turbines can affect people' but, 'this study provides no conclusive evidence that wind turbines have an effect on health or are causing the set of symptoms described'.

The BWEA, in a special Factsheet, then went on the offensive, noting that 'Dr Pierpont is a known anti-wind campaigner'. It also pointed to a peer-reviewed study by Geoff Leventhall which had refuted the allegations about infrasound, concluding that they were 'irrelevant and possibly harmful, should they lead to unnecessary fears'.

The Pierpont study does sound very weak but, to clear the air, it seems that the issue has to be resolved once and for all by an authoritative independent study.

Pierpont: www.windturbinesyndrome.com/wp-content/uploads/2009/03/non-clinicians-3-2-09-with-pics.pdf

Geoff Leventhall, 'Infrasound from Wind Turbines – Fact, Fiction or Deception?', Canadian Acoustics Vol. 34, No. 2 (2006): www.wind.appstate.edu/reports/06-06Leventhall-Infras-WT-CanAcoustics2.pdf

NHS: www.nhs.uk/news/2009/08August/Pages/Arewindfarmsahealthrisk.aspx

In China, India and South-East Asia the automobile industry is booming. By January 2009 the Chinese market had become larger than the US market, and the Indian Tata Nano is eagerly awaited and aspires to revolutionize the global car industry. The environmental degradation and unsustainability of this trend are obvious. But, of course, the Asian car boom doesn't come from nowhere. So let's look in detail at the good and the bad of it.

At least in China, the car industry is centrally planned and promoted. This copies the recipe for economic success in OECD countries: all previously successful major economies, such as the US, Japan and Europe, have an established car industry and an internal automobile market. The car industry is understood as a so-called base-multiplier industry that substantially increases economic (and resource) turnover, i.e. GDP. As a base multiplier, the industry drives a whole economy and supports many services around that base. The indirect effects of a substantial industry with huge economic turnover can be very beneficial, GDP-wise, for a society, particularly when economic well-being starts from a low baseline. From this perspective, promoting the car industry makes sense.

Furthermore, the car is the focal point of a rising, aspirational middle class, which sees it as the decisive status symbol. With GDP the most predictive factor for car ownership, and indirectly for car usage, rapidly growing Asian economies are developing into car-ownership societies.

Daylight in Beijing. Image credit: the author.

Finally, there are real mobility advantages for car users – at least in theory. The bad consequences are quite clear. With more than 10% annual growth in the Chinese automobile market, and similar rates in other countries, GHG emissions in the transport sector are rising rapidly, dwarfing any marginal progress made elsewhere. Asian countries may be hit hardest and first by climate change, especially by dwindling water supplies from the Himalaya and transformed monsoon patterns. Air pollution becomes, or remains, a dangerous health hazard in cities, and congestion immediately erodes any expected mobility benefits – for car drivers, but also for users of other modes. Asian cities certainly don't become more attractive for tourists, and if business has a choice, it avoids the most polluted cities.

The consequences are worse than in established car cultures, particularly the US. There, for decades, infrastructure was built around the car – a possibility only for such a young country with abundant space and resources. Asian societies, however, have developed over millennia, population densities are higher, and many important cities were built before the advent of the automobile. As a result, cities are not designed for the car, and transformations certainly can't keep pace with Asia's automobilization. Unsuited infrastructures and high population densities magnify the social costs of the car: infarction of the major arteries is reached even at relatively low car-ownership rates, and air pollution is concentrated where most people live (a so-called very high receptor density).

Fortunately, Asia is still not completely locked into carbon dependence. It's time, however, to develop a truly modern vision for sustainable mobility. This vision will only be successful when the advantages of the Asian car boom are also addressed and substituted for.

Even the smallest glacier is too heavy to weigh, at least by the classical method of lifting your object on to a set of scales and measuring the force with which it deflects a spring, or a known weight on the other arm of the scales. But that is not the only way to weigh something.

What we really want to know about the glacier is its mass balance, that is, the change in its mass over a stated period of time. Various ideas have evolved for measuring the mass balance, but until recently the list did not include what would be the obvious idea – repeated weighing – if the things were not too heavy and unwieldy.

That changed in 2002, when the U.S. National Aeronautics and Space Administration and the German Aerospace Center (DLR) launched the GRACE satellite mission. GRACE, short for Gravity Recovery And Climate Experiment, is revolutionizing the measurement of glacier mass balance.

GRACE is actually two satellites in the same orbit, one 200 km behind the other. Each continually measures its separation from the other with radar. Subtle differences in this distance are due to equally subtle differences in gravity as experienced by the two satellites – the two arms of the scales. These differences in the force of gravity are dominantly due, once a long list of corrections has been made, to the distribution (and redistribution) of mass in the solid and liquid Earth beneath the satellites.

Late last year Anthony Arendt and colleagues, and Scott Luthcke and colleagues, showed in impressive detail what GRACE was able to make of four years in the evolution of the mass balance of glaciers in southern Alaska. They were able to resolve changes every ten days, and to show that GRACE can see changes, if they are big enough, within regions as small as 200 km across.

Ten days is much better time resolution than that offered by traditional methods, which are expensive, time-consuming, hazardous and sparse. GRACE's weak points for our purpose are its poor spatial focus and the fact that it needs the signal of mass change to be strong. The Alaskan glaciers were losing mass at an average rate of 20.6 gigatonnes per year, give or take 3.0 gigatonnes. A gigatonne is an awful lot of ice, but here the most interesting number is the error-bar number. If GRACE can resolve changes as small as 3.0 Gt/yr, what are the prospects for the GRACE follow-on mission for which glaciologists and others are already slavering?

Estimates of the global average glacier mass balance involve a lot of interpolation, which is a fancy word for guesswork. For example the number of annual mass balances that have ever been measured by traditional methods in the Karakoram and western Himalaya is exactly four, and they required a good deal of unsatisfactory corner-cutting. My interpolated estimates for this region suggest an annual loss in the neighbourhood of 3.0 Gt/yr. In other words the present GRACE would have a hard time seeing the glaciological signal from these remote and poorly-covered mountains. But if the GRACE follow-on had sharper focus and better sensitivity it would give invaluable answers, and there are several other mountain ranges and high-latitude archipelagos where it would do equally well or better.
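To make the comparison explicit, here is a trivial signal-to-noise sketch. The numbers are simply those quoted above; nothing new has been analysed:

```python
# Rough signal-to-noise comparison using the figures quoted in the text.
alaska_loss_gt_per_yr = 20.6      # Alaskan glacier mass loss seen by GRACE
grace_uncertainty_gt_per_yr = 3.0 # quoted error bar
karakoram_loss_gt_per_yr = 3.0    # interpolated estimate, Karakoram / W Himalaya

print(alaska_loss_gt_per_yr / grace_uncertainty_gt_per_yr)    # ~6.9: clearly resolvable
print(karakoram_loss_gt_per_yr / grace_uncertainty_gt_per_yr) # ~1.0: marginal at best
```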

Technically, the improvements now being sketched by GRACE specialists will come mainly from switching from radar to laser interferometry for measuring satellite separation, reducing drag on the satellites, and lowering their orbit. They won't amount to a complete solution of the problem of undersampling of glacier mass balance. There will always be glaciers too small for GRACE to notice, they will continue to contribute a significant proportion of the meltwater flowing into the sea, and we will still need to do small science if we want to understand glacier mass balance. But three cheers for the big science of the GRACE follow-on nevertheless.

Beyond NIMBYism


At the launch of the film 'The Age of Stupid' earlier this year, Ed Miliband, Secretary of State for Energy and Climate Change, commented: "The government needs to be saying, 'It is socially unacceptable to be against wind turbines in your area – like not wearing your seatbelt or driving past a zebra crossing'."

Certainly there has been a lot of invective hurled at so-called NIMBYs – those who, while perhaps professing to be in favour of renewables generally, resist the deployment of wind farms and the like near where they live. The NIMBY (Not In My Back Yard) concept has gained credence because national opinion surveys show overwhelming support for renewables, but when it comes to specific schemes there is often a lot of opposition, to wind farms in particular.

Of course it may be that in many cases this opposition comes only from a noisy minority – aided and abetted by national anti-wind lobby groups like Country Guardian. But there is no question that opposition to wind power has slowed its progress in the UK. So NIMBYs are seen as a major problem.

Thus the Danish company Vestas, when seeking to explain why it was closing its wind turbine blade manufacturing plant on the Isle of Wight, with the potential loss of over 600 jobs, saw local opposition to new projects around the UK as part of the problem – along with the recession and the local planning process, which it said 'remains an obstacle to the development of a more favourable market for onshore wind power.' The British Wind Energy Association echoed these views: 'There is now a direct correlation between nimbyism and the curtailment of the economic benefits of wind power. A positive factor of this unfortunate crisis is that the public are now aware of the fact that the opposition to wind farms is affecting the economic opportunities available to this country.'

So are NIMBYs really the problem? A multi-university study funded by the ESRC and led by Dr Patrick Devine-Wright (then at the University of Manchester, now at Exeter) aimed to deepen understanding of the factors underlying public support for, and opposition to, renewable energy technologies. Eight case studies were undertaken, covering 10 projects across four sectors: onshore and offshore wind, biomass and marine.

The research found little evidence of nimbyism - only 2% of the respondents to a survey of over 3,000 people fitted the stereotype of being strongly in favour of renewable energy in general, yet strongly against a local proposal.

Dr Patrick Devine-Wright said: "We have identified what the key issues are that shape public concerns about new proposals. Developers and government should be acting to address these key issues, not labelling protestors as nimbies. They need to pay more attention to how the benefits or drawbacks of a proposal are perceived by local people", and to "avoid the politically expedient term of nimby".

He added: "Government needs to do much more to make sure that planning decision processes are open, fully informed and fair. At the moment local people often feel disenfranchised as their concerns are not properly listened to or decisions end up being taken in a 'black hole' in London. Under such conditions local resistance can easily escalate."

The research summary notes that 'When opposition occurred this was characterised in particular by developers as emotionally based and outside of what they saw as 'rational' planning concerns. These conceptions of the public have a number of implications. First, for the design and engineering of technologies, with marine developers, for example, aware of the need to 'design in' potential public reactions from the beginning. Second, for the locational strategies of where projects are developed. Third, for public engagement practices. Here it was found that engagement has become routinised and not dependent directly on public responsiveness. Engagement was essentially conceptualised in terms of information provision and addressing public concerns'.

Overall, they say 'we found a range of supportive (38.1%), neutral (38.2%) and oppositional (23.7%) attitudes to specific projects. Marine energy projects tended to be most supported, whilst onshore wind projects tended to be least supported. Lack of trust in developers was consistently found, as well as strong concerns about the fairness of planning procedures. For example, in each of our Welsh case studies, there was substantial opposition to planning decisions being made in London. Only 2% (61 individuals) of survey respondents held the stereotypical NIMBY attitude of being strongly in favour of renewable energy generally, but strongly against a proposed project. We found no significant relationship between project support and personal characteristics commonly assumed to characterise opponents, including length of residence in the area, perceived proximity of home to project site, and age. Our analysis showed that project support was best explained by the perception of the local impact of the project (drawbacks vs. benefits); attitude to the technology sector; the perception that the developer listened to local residents; levels of trust in the developer and the perceived fairness of planning procedures'.

In conclusion they say: 'The research found evidence of substantial social consent, both for renewable energy generally and for specific projects, and little evidence to support the continued use of the NIMBY concept to explain why some people oppose project proposals. We conclude that rather than trying to dismiss and undermine legitimate questioning and criticism of particular renewable energy projects, industry and policy makers should instead focus on protecting and nurturing social consent for what is a key part of a low carbon future. No simple formula will achieve this, as each place and context has distinctive characteristics, but our findings show the importance of factors such as enhancing local benefits; timely and meaningful engagement by developers; trust; and fair planning procedures'.

Project Summary Report

www.sed.manchester.ac.uk/research/beyond_nimbyism/

There may of course also be political aspects to opposition to wind projects. A Greenpeace survey found that between December 2005 and November 2008, Tory councils blocked 158.2 MW of wind projects, approving just 44.7 MW, while Labour councils fared only a little better, rejecting 62.6 MW while approving 68.3 MW.

It's also perhaps worth noting that local opposition is much less apparent elsewhere in the EU e.g. in Denmark, which now gets around 20% of its electricity from wind projects. One reason could be that, unlike in the UK, most of them are locally owned by farmers or wind co-ops. As the Danish proverb goes 'your own pigs don't smell'.

There is a tremendous amount of talk these days about the costs of health care in the United States, and whether the government should provide a public insurance option for citizens who either do not have health insurance or who do not like their employer-provided insurance. For those in much of the EU, Canada, and other countries with some form of socialized health care, this may seem a tad ridiculous.

However, a recent article about resources and health care caught my attention. The article is about the dwindling supply of technetium-99m, the preferred radio-isotope for medical imaging scans for conditions indicating heart disease and cancer. The energy part of this equation is that it takes a nuclear reactor to create the isotope, usually starting from highly enriched uranium. The article notes that the US is the sole supplier of this uranium, except for that going into a single reactor in Australia.

This sounds like a normal story of running out of a natural resource and having to adapt technology to find a better substitute, or deal with a lower-quality one. Except that this material is not itself a mined natural resource – it is created from another mined substance (uranium ore) which has yet to be depleted. It is more of a production issue, but someone has to build a nuclear reactor as part of the supply chain. I'd say using UPS to ship some Chinese-made hair clips is an easier supply chain to manage.

I can only imagine the embodied energy in these radio-isotope procedures. Given the cost and complexity of many modern medical instruments and procedures, from magnetic resonance imaging (MRI) to computerized axial tomography (CAT) scans, together with the tremendous amount of research and design behind the instruments, the embodied energy is large. Siemens is one company making MRI machines, and the Siemens website discusses results from a life-cycle analysis indicating approximately 460 MWh/year consumed by a certain MRI model. An average household in the US might consume 10–15 MWh/yr. So the operating energy of one MRI machine could run all the modern amenities of 30–45 homes.
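For what it's worth, the arithmetic behind that comparison is simple. The sketch below just uses the figures quoted above (460 MWh/yr for the scanner, 10–15 MWh/yr per household), which I have not independently verified:

```python
# Simple arithmetic behind the MRI-versus-households comparison above.
mri_mwh_per_yr = 460.0               # quoted life-cycle figure for one MRI model
household_mwh_per_yr = (15.0, 10.0)  # typical US household range

homes = [round(mri_mwh_per_yr / h) for h in household_mwh_per_yr]
print(homes)  # -> [31, 46], i.e. roughly 30-45 homes
```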

So how much will health care suffer if we don't have technetium-99m or electricity to power MRI machines? Well, if you look at measures of life quality such as the United Nations' Human Development Index, the marginal gain in 'development' (of which lifespan makes up one third of the index) from increased energy usage is very small for the US and most industrialized countries. A recent study by Julia Steinberger of the Institute of Social Ecology in Vienna, Austria, shows how over the last few decades we might be eking out increases in HDI with less energy consumption. Steinberger's study deserves more description than one sentence, so perhaps I'll save that for a later post! Until then, eat healthy and increase your chances of avoiding the need for energy-intensive medical procedures – although I have to admit, I had an MRI once for a torn ligament in my knee, and I don't think it had anything to do with what I ate that day!

A recent study, summarized here, described the Gamburtsev Mountains in the heart of East Antarctica. This buried landscape exhibits many of the classical results of alpine glaciation, including cirques – great bowl-shaped hollows carved out of mountainsides by glaciers – and overdeepened valleys.

The work may have begun as early as 34 million years ago, when records from elsewhere show that ice began to accumulate in Antarctica in significant amounts. But the bowl-shaped hollows are still there. Ice sheets are about 2000 km across, and they don't carve bowl-shaped hollows that are only about 5 km across. So the cirques were probably shaped more than 14 million years ago, which is when we think the ice in Antarctica grew to continental proportions. If this is right, the ice must have shifted from carving up the bedrock surface to protecting it.

We don't know when the Gamburtsev Mountains were first lifted up. The dates just given are from indirect reasoning. On the other hand, the glaciation of Antarctica had to start sometime, somewhere, and a preglacial mountain range not far from the South Pole sounds like a good nucleus. But there is more missing from the story than just the age of the Gamburtsevs.

First, whether mountainous or not, there has been an extensive landmass over the South Pole for much longer than 34 million years. Motions due to plate tectonics brought Antarctica to roughly its present position almost 100 million years ago, yet it seems to have enjoyed a benign climate for the first 60 or more million years of that span. The switch from benign to cool and then frigid could well have been triggered by the uplift of the Gamburtsevs, or possibly of the more extensive Transantarctic Mountain Range, but in the one case we have no evidence as yet and in the other the uplift has been going on for even longer than 100 million years.

Second, this is a good excuse for me to tell you about the widely-unread paper I published 25 years ago about the subglacial topography of Antarctica. Developments since then have not altered the main conclusion: if you take away the ice that now covers East Antarctica, and allow the bed to rebound from the load of 3 to 4 km of ice, you get a rather unusual preglacial continent. This ice-free East Antarctica of the geomorphologist's imagination is a full 700 m higher than all of the continents we know today (except that it is only 500 m higher than Africa – but that is another story). We are not talking about a single mountain range here. This is the whole continent, or in other words a broad plateau cooler than a normal continent would have been by perhaps 4°C.

Unfortunately, we don't know when Antarctica became an elevated plateau, any more than we know when the Gamburtsev Mountains first appeared. There are far too many ifs in the story of Antarctic topography and glaciation. That is a strong argument for reducing the number of ifs, but lurking in the background there is a familiar friend: the greenhouse effect.

Less greenhouse gas in the atmosphere would account for all of the evidence that Antarctica has been getting colder for several tens of millions of years. The evidence that the greenhouse effect has been diminishing for a long time is in fact extremely good. One, or to be more accurate Bob Berner of Yale University, does a detailed accounting of all the carbon in the rocks, and uses the book-keeping to drive calculations of how the carbon would have passed to and fro between the various stores, such as the Earth's deep interior, the biosphere and the atmosphere. The atmospheric stock, nearly all of it carbon dioxide, was about five times its present size 100 million years ago. (Why so? That is yet another story.)

The more diverse the facts that a hypothesis succeeds in explaining, the more do we respect it. The long-term cooling of Antarctica is not as remote from our 21st-century concerns as it sounds. In fact the same explanation holds for the climate of Antarctica over the past 100 million years as for the temperatures we have measured over the past 100 years and the temperatures we expect over the next 100 years. Greenhouse gas makes our home warmer.

Hydroelectric plants generate about 17% of total world electricity and are the largest existing renewable source of electricity. However, many environmental/development organisations, including WWF, FoE, and Oxfam, while backing micro hydro, have opposed large hydro projects because of the large social and environmental impacts.

The social dislocation resulting from flooding areas for new reservoirs is an obvious issue, but there are also more subtle eco-issues. For example, a few years back the World Commission on Dams claimed that in some hot climates biomass carried downstream is collected by the dam and can rot, generating methane, so that the net greenhouse emissions can be greater than from a fossil plant of the same capacity. This effect is site-specific, but it does indicate that in some locations hydro may not be quite such an attractive renewable source as some suggest.

Nevertheless, there is still a strong push for more hydro. For example, the African Union (AU), the Union of Producers, Transporters and Distributors of Electric Power in Africa (UPDEA), the World Energy Council (WEC), the International Commission on Large Dams (ICOLD), the International Commission on Irrigation and Drainage (ICID), and the International Hydropower Association (IHA) have all recently agreed that hydro is an important answer to some of Africa's major problems.

They note that 'During the past century, hydropower has made an important contribution to development, as shown in the experience of developed countries, where most hydropower potential has been harnessed. In some developing countries, hydropower has contributed to poverty reduction and economic growth through regional development and to expansion of industry. In this regard, we note that two-thirds of economically viable hydropower potential is yet to be tapped and 90% of this potential is still available in developing countries. In Africa, less than 7% of hydropower potential has been developed'.

They say 'We firmly believe that there is a need to develop hydropower that is economically, socially, and environmentally sustainable. Regarding the environmental and social impact of hydropower, a number of lessons have been learnt from past experience. Governments, financing agencies and industry have developed policies, frameworks and guidelines for evaluation and mitigation of environmental and social impacts, and for addressing the concerns of vulnerable communities affected by hydropower development. Those guidelines must be adjusted to the relevant individual country context. We note that the key ingredients for successful resettlement include minimization of resettlement, commitment to the objectives of the resettlement by the developer, rigorous resettlement planning with full participation of affected communities, giving particular attention to vulnerable communities. The decision making process should incorporate the informed participation of the vulnerable communities and those negatively affected, who must in all circumstances derive sustainable benefits from the project. The costs of social and environmental mitigation measures and plans should be fully assessed and integrated in the total cost of the project'.

They point to giant potential projects like Grand Inga on the Congo river – 40,000 MW, which could generate more than 280 TWh/year of exceptionally cheap electricity, at less than $0.01/kWh. For comparison, diesel generation, widely used in Africa, costs from $0.15 to $0.30/kWh.
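A quick back-of-envelope check of those figures gives the implied capacity factor. This is only a sketch based on the quoted numbers, not on any project documentation:

```python
# Implied capacity factor for the Grand Inga figures quoted above.
capacity_mw = 40_000
annual_output_twh = 280
hours_per_year = 8760

max_output_twh = capacity_mw * hours_per_year / 1e6   # ~350 TWh at 100% output
capacity_factor = annual_output_twh / max_output_twh
print(round(max_output_twh), round(capacity_factor, 2))  # -> 350, 0.8
```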

Certainly schemes like this have large potential. The proposed £40bn Grand Inga hydro project could, its supporters say, double the amount of electricity available on the continent and jump-start industrial development, bringing electricity to hundreds of millions of people as well as exporting power to South Africa, Nigeria and Egypt, and even Europe and Israel. It would supply twice as much electricity as the world's current largest dam, the Three Gorges in China.

However, not everyone is so keen. The Guardian reported (21/4/08) that environmental groups and local people have warned that 'it could bypass the most needy and end up as Africa's most ruinous white elephant, consigning one of the poorest countries to mountainous debts'.

Grand Inga was proposed in the 1980s but never got beyond feasibility studies because of political turmoil in central Africa. Now there seem to be prospects for it to go ahead and be completed by 2022. The big change is that banks and private companies can earn high returns from the emerging global carbon-offset market and, in some cases, from Clean Development Mechanism credits.

Terri Hathaway, Africa campaigner with International Rivers, a watchdog group monitoring the Grand Inga project, said that 'As it stands, the project's electricity won't reach even a fraction of the continent's 500 million people not yet connected to the grid. Building a distribution network that would actually light up Africa would increase the project's cost exponentially. It would be very different if rural energy received the kind of commitment and attention now being lavished on Inga.'

While it is clear that hydro has many attractions and that Africa needs power, there are also clearly counter-views about whether hydro, especially large hydro, is the best bet. Large projects are expensive and involve large companies that may not be much concerned about local impacts. Large centralised projects may in any case be the wrong answer for Africa – the very large distances involved make it unlikely that grids could ever cover the entire continent. As with the Grand Inga project, much of the power seems likely to be exported on HVDC links to remote markets, not used locally. Local decentralised power may make more sense. That can be micro hydro, or wind, or biomass or solar – technologies which can be installed quickly, with low local impacts, a potential for direct local involvement and possibly for the creation of local manufacturing enterprises to build the equipment. The debate over the way ahead continues.

For more see IRN: www.internationalrivers.org/

Also see: Wind in Africa: www.theecologist.org/News/news_round_up/293874/kenya_to_build_africas_biggest_windfarm.html

A couple of years ago, I was asked to help in the writing of a report from the U.S. Climate Change Science Program on Abrupt Climate Change. The 460-page report appeared some months ago, but mercifully a short summary was also provided. I was pleased to see that the lead authors defined "abrupt" carefully and clearly – I will come to that definition later – but working out what "abrupt" means caused me a good deal of trouble when I was just getting started.

Abrupt climate change is the name of a new branch of climatology, or rather of palaeoclimatology. I kept asking the palaeoclimatologists what the word meant, and got a variety of answers that were not entirely satisfactory. Eventually one of them said, laconically, "faster than the forcing", and I decided that that was the answer I was looking for.

You need to understand here that "forcing" is scientists' jargon for "cause", a word that we don't like because it is philosophically very shaky. Loosely, the forcing is the input to the system and, in the case of the atmosphere, "climatic change" is the output. Carbon dioxide from fossil fuels is an input. Warming is one of the expected outputs, and so is sea-level rise. In their present-day configuration, glaciers (excluding the ice sheets) are transferring about an additional half a gigatonne of water to the ocean, per year, for every gigatonne of carbon dioxide added to the atmosphere.

The current rate of addition of carbon dioxide to the atmosphere is about 28 Gt/yr. You are free to think either that this is or is not rapid. It is consistent with the calculation that the rate of transfer of meltwater, currently about 400 Gt/yr, is growing at about 12 Gt/yr. Again, you may or may not consider this rapid.
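Tying those numbers together, here is a one-line sanity check: roughly half a gigatonne of extra meltwater per gigatonne of CO2, times about 28 Gt/yr of CO2, should give a growth rate of the same order as the quoted 12 Gt/yr. The figures are simply those quoted above, rounded:

```python
# Order-of-magnitude check on the meltwater growth rate quoted above.
meltwater_per_gt_co2 = 0.5     # Gt of extra meltwater per Gt of CO2 (glaciers, excl. ice sheets)
co2_addition_gt_per_yr = 28.0  # current rate of CO2 addition to the atmosphere
quoted_growth_gt_per_yr = 12.0

implied_growth = meltwater_per_gt_co2 * co2_addition_gt_per_yr
print(implied_growth, quoted_growth_gt_per_yr)  # 14.0 vs 12.0: the same order
```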

My original problem was that I think "abrupt" has to be more serious than "rapid". If you don't think the present-day changes are rapid, just wait a bit. If you want abrupt change, you may have to wait a bit longer, and could well be disappointed, but you can find lots of examples by looking back rather than ahead. Dansgaard-Oeschger transitions are examples of abrupt warming, well-documented over the course of the last ice age. There was a disconcerting cold snap at about 6250 BC – disconcerting to us, although our Mesolithic forebears had so little capital invested that they may have shrugged it off or even failed to notice it. There is evidence of still bigger abrupt changes further back in the past, up to tens of millions of years ago.

The definition settled on for our report on abrupt change was "a large-scale change in the climate system that takes place over a few decades or less, persists (or is anticipated to persist) for at least a few decades, and causes substantial disruptions in human and natural systems". In other words, "inconveniently rapid for the next generation or two of human beings". You may prefer this to "faster than the forcing" because of its greater immediacy, but it doesn't tell you as much about how things work.

Whichever definition you choose, it is important to realize that "rapid" does not merge smoothly into "abrupt" as the forcing grows more intense. The point about "abrupt" is that it is not what the forcing would lead you to expect. Could it happen to us, or to our grandchildren? There doesn't seem to be any reason why not, even though we can't assign a probability to it with any confidence.

What would the United States look like if 69% of today's electricity were generated from solar technologies, photovoltaic and concentrated solar power? A recent paper in Energy Policy (see reference 1 below: Fthenakis, Mason and Zweibel, 2009) proposes just such a vision of generating 69% of US electricity from solar by 2050 (also using large quantities of energy-storage technologies such as molten salts and compressed-air energy storage). The authors note that 640,000 km2 of available land area exists in the Southwest US that can be used for solar power stations (see the National Renewable Energy Laboratory website for resource maps, without land restrictions – NREL Solar Maps). This area is 48% of the total area of the included states of California, Nevada, Arizona, and New Mexico, which in total cover 1,320,000 km2.

In 2008 approximately 4,100 TWh of electricity were generated in the US, so 69% is approximately 2,840 TWh. If we assume that installed solar systems convert sunlight (at an average of 6.4 kWh/m2/day for the Southwestern US) at 8% efficiency from sunlight to electricity delivered to the consumer, then approximately 2.4% of the sunlight falling on the 640,000 km2 would be needed. Note that current photovoltaic (PV) and concentrated solar power (CSP) systems can have solar-to-electric conversion efficiencies from 7% to 40%, where the high end of the range is achieved by laboratory multi-junction PV cells under concentrating lenses.
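That 2.4% figure can be reproduced with a few lines of arithmetic. The sketch below simply uses the assumptions stated above (6.4 kWh/m2/day, 8% sunlight-to-delivered-electricity efficiency, 640,000 km2, 2,840 TWh):

```python
# Reproducing the ~2.4% sunlight figure from the assumptions stated above.
land_km2 = 640_000
insolation_kwh_per_m2_day = 6.4
efficiency = 0.08          # sunlight to delivered electricity
target_twh_per_yr = 2_840

# Sunlight falling on the whole area, in TWh/yr (1 TWh = 1e9 kWh).
sunlight_twh = insolation_kwh_per_m2_day * land_km2 * 1e6 * 365 / 1e9  # ~1.5 million TWh
needed_twh = target_twh_per_yr / efficiency                            # ~35,500 TWh of sunlight
print(round(100 * needed_twh / sunlight_twh, 1))  # -> ~2.4
```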

As stated by Fthenakis et al. (2009), their projected 2050 solar electricity generation (which is more than 2,870 TWh, owing to assumed increases in generation each year) would come from an installed capacity of approximately 5.5 TW – 1.5 TW of CSP and 4.0 TW of PV. For a 500 MW solar plant the authors estimate that 10.6 km2 of land is needed (at 14% efficiency). Applying this ratio to all solar installations, the 5.5 TW of capacity would need approximately 110,000 km2 of land, or about 17% of the available 640,000 km2.
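Similarly, the land-area figure follows from the 10.6 km2 per 500 MW ratio. The sketch below reproduces a number close to the ~110,000 km2 and ~17% quoted; the small differences are just rounding:

```python
# Reproducing the land-area estimate from the 10.6 km^2 per 500 MW ratio.
capacity_tw = 5.5
km2_per_500_mw = 10.6
available_km2 = 640_000

plants = capacity_tw * 1e6 / 500    # number of 500 MW plant-equivalents
land_km2 = plants * km2_per_500_mw  # ~117,000 km^2
print(round(land_km2), round(100 * land_km2 / available_km2))  # ~117000 km^2, ~18% of the area
```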

Only 2.4% of the sunlight hitting the land needs to be converted to electricity, yet 17% of the 640,000 km2 of land would be required for power plants. The reason is that the conversion efficiencies are calculated assuming that the PV panels are tilted toward the sun at an angle equal to the latitude of installation. A tilted panel shades the next panel just to the north if it is placed too closely, so there needs to be some spacing between panels, and between mirrors for CSP systems. If the solar power plants were distributed evenly across the Southwestern US (a practical impossibility, and strategically undesirable) one could probably not walk any path from southern California and Nevada through Arizona and New Mexico without seeing a solar power plant. Solar power plants would be essentially pervasive throughout the Southwestern United States.

This scale of solar pervasiveness brings to mind the question of whether there are enough materials to manufacture all of the panels and mirrors. A paper published this year in Environmental Science and Technology by Wadia, Alivisatos and Kammen takes a first-order look at this question for the various semiconductor materials that can be used to create the functional component of PV panels. They point out that the known reserves of inorganic photovoltaic materials in ore deposits are sufficient for creating a broad variety of PV panels to generate the 17,000 TWh of electricity the world uses today. The other necessary materials associated with circuitry (silver, copper, etc.) and mounting (aluminium) would also need to be available. Because there are many competing products for these resources (e.g. indium for flat-panel displays), and some elements are only secondarily mined – meaning they are essentially byproducts of mining for other primary materials – only the future will tell to what uses these elements are put. It might also be interesting to think about how much energy (and power) would be required to mine all of the necessary materials ...

Perhaps we should use our fossil fuels to mine the fossil minerals and elements that are needed to stop using fossil fuels. Sounds like a chicken or the egg problem. We may not know which came first, but eventually, we'll learn which one is last.

1. Fthenakis, V.; Mason, J. E.; Zweibel, K. The technical, geographical, and economic feasibility for solar energy to supply the energy needs of the US. Energy Policy 2009, 37, 387–399, doi:10.1016/j.enpol.2008.08.011.

Critics say that wind turbines can't be relied on to provide secure grid power because of the variability of the winds, and will have to be backed up by conventional power plants. So no conventional plants will actually be replaced, costs will be excessive and there will be very few emissions saved net.

Views like this, regularly expressed by groups ranging from Country Guardian to the Renewable Energy Foundation, have just as regularly been challenged by detailed academic and agency studies – for example a major review of the various studies produced in 2006 by the UK Energy Research Centre, and the overview Earthscan book Renewable Energy and the Grid in 2007. The debate nevertheless continues.

The latest batch of reports includes one entitled 'Managing Variability', produced for Greenpeace, Friends of the Earth, RSPB and WWF by energy consultant David Milborrow. He concludes forcefully that there are no major technological problems in dealing with variable wind inputs to the grid, just minor economic costs: 'If wind provides 22% of electricity by 2020 (as modelling for Government suggests), variability costs would increase the domestic electricity price by about 2%'.

Basically, this is to pay for the fact that some fossil-fuelled plant would have to be run a bit more each year to balance out low-wind periods. These 'standby' plants already exist – they are used to run up and down to full power on a daily basis to meet the standard demand peaks, and can also be used to cope when some other plant (e.g. a nuclear plant) shuts down unexpectedly. So there is very little extra cost in using this existing standby capacity a bit more, occasionally, to back up wind plant. And very few extra emissions would result from the extra use of these plants: Milborrow says it would reduce the carbon emissions saved by having 20% of UK electricity supplied by wind by about 1%.

This issue, once sometimes cited by critics as a major problem for wind, now seems finally to have been resolved. As the House of Lords Select Committee on Economic Affairs put it in its review of the economics of renewable energy last year, 'The need to part-load conventional plant to balance the fluctuations in wind output does not have a significant impact on the net carbon savings from wind generation' – a view subsequently accepted by the government.

However, some of the other issues are still fiercely debated. The critics insist that there will be times when there is no wind at all over the whole of the UK. In which case, as consultant Denis Stephens put it in a critical review of a recent Carbon Trust report on wind, drawing on a study produced by Oswald for the Renewable Energy Foundation, 'for every megawatt of output from wind turbines there has to be an equivalent backup facility of conventional power generation'.

Milborrow, by contrast, insists that 'numerous studies have shown that, statistically, wind can be expected to contribute to peak demands', although he accepts that the amount it can reliably supply then (its capacity credit) will be much less than the full rated capacity. He notes that 'system operators do not rely on the rated power of all the installed wind farms being available at the times of peak demand, but a lower amount – roughly 30% of the rated capacity at low penetration levels, falling to about 15% at high penetration levels'. He does accept that wind variation adds extra uncertainty to the management of the grid, although this 'is not equal to the uncertainty of the wind generation, but to the combined uncertainty of wind, demand and thermal generation', which is already dealt with by existing balancing measures. It simply adds a bit to the costs, and these can, if necessary, be reduced by a range of new measures.

He notes that 'Improved methods of wind prediction are under development worldwide and could potentially reduce the costs of additional reserve by around 30%. Most other mitigation measures reduce the costs of managing the electricity network as a whole. "Smart grids", for example, cover a range of technologies that may reduce the costs of short-term reserves; additional interconnections with continental Europe, including "Supergrids", also deliver system-wide benefits and aid the assimilation of variable renewables. Electric cars hold out the prospect of reduced emissions for the transport network as a whole and could act as a form of storage for the electricity network – for which the electricity generator would not have to pay.'

This view is shared by National Grid, who have produced a new consultative report which includes a look at some of these ameliorative balancing options. Overall they seem to think that there should be no major problems in balancing the grid: 'As wind generation increases, so does geographic dispersion of the wind farms and we believe that this combined with ongoing improvements in wind forecasting will allow us to minimise the reserve requirements for wind going forward.' They admit that 'The need to carry operating reserve means the effective "capacity credit" for wind output of 15% of capacity will therefore be less than 15%', but say that 'National Grid's view at this stage is that for 2020, a wind generation output assumption of up to 15% of capacity at times of peak demand is reasonable'.

The debate continues, although the emphasis now seems to be more on the costs than on the technical viability. For example, in a new report on the 'Impact of Intermittency' in the UK and Ireland, Poyry Energy Consulting has opened up a new issue, claiming the problem is not so much the familiar short-term variations in wind availability as the year-to-year variations. It looked at the period 2000–2007 and found that annual wind generation output varied by almost 25% in the Irish market and 13% in the British market. As a result there could be significant economic problems facing fossil-fuel backup capacity: 'plant may only operate for a few hours one year and then hundreds of hours the next year', making revenue planning hard.

Overall, they say that grid interconnectors will be important for grid balancing, especially for Ireland, but, as National Grid and Milborrow argue, there are also other technological adjustments that could change the situation – not just interconnectors but also pumped hydro storage and load-management techniques like smart metering.

In addition we could use other non-variable renewable plants for balancing, not just hydro plants, but also biomass fired plants and possibly geothermal generation as well. A recent German study showed that it was possible to use biomass generation to balance wind and PV solar over the year on a national basis, despite weather cycles and demand cycles, and new enhanced geothermal systems are being developed in Germany and elsewhere which can provide firm power outputs.

What's interesting for the present is that the Poyry study shows that, given some interconnection, Ireland can cope reasonably well without nuclear power. Indeed, Poyry note that their assumption that there would be new EPR nuclear plants in the UK 'had the expected impact of increasing response requirements', i.e. from backup fossil plant, given expected nuclear-plant fault levels. So having nuclear on the grid makes it even harder to balance wind!

Milborrow report: www.greenpeace.org.uk/files/pdfs/climate/wind-power-managing-variability-ngo-summary.pdf

National Grid report: www.nationalgrid.com/uk/Electricity/Operating+in+2020

Pöyry report: www.ilexenergy.com

Oswald et al., Energy Policy, Vol. 36, Issue 8, August 2008, pp. 3202–3215.