
June 2009 Archives

Superposition means putting one thing on top of another. Nature does it all the time, but it is only in the past thousand years or so that we have worked out how to exploit it. Ibn Sina, an 11th-century Persian better known in the West as Avicenna, understood how Nature piles younger sediment on top of older, and Leonardo came very close. The first person to articulate the Principle of Superposition clearly, though, was a 17th-century Dane, Steno: In any pile of sediment, the youngest is on top and the oldest is on the bottom.

It is an idea so blindingly obvious as to sound stupid, but for a long time the obviousness blinded us to its potential: depth in the pile is equivalent to time before the date of the top. With patience and hard work, there is a historical record waiting there for us to decode.

There are exceptions that prove the rule. For example folding can overturn the layers. Little beasts that live just under the sea floor can blur the layers by burrowing. In glaciers, where the accumulating snow is a sediment just as much as the mud on the sea floor, the main problems are flow, which stretches and squeezes the layers, and refreezing of meltwater, which mixes this year's accumulation with that of earlier years. A satisfactory solution is to drill at the summit of an ice sheet, where there is no melting and the flow rate is negligible.

The payoff has been invaluable. Ice cores give us our most detailed picture of the Earth's history over the past million years. We have barely begun to unravel the story. The wealth of incident in the story is so rich that it is hard to know how to pick and choose, but a recent technical advance by Elizabeth Thomas and colleagues makes a good start. They cut slices just 2 millimetres thick from a 4.5-metre section of a core from the interior of the Greenland Ice Sheet. This section, 2070 metres beneath the surface, is estimated to represent the years from 36,401 to 36,169 BC - at a rate of 7 to 11 samples per year. The assignment of calendar years is a bit dodgy. The dates could be out by more than 1400 years. But the relative error, from bottom to top of the section, is only about three years, and the march of the seasons all those years ago can be seen distinctly in the varying concentrations of dissolved ions. We also learn interesting facts such as that 36,263 BC was a rather dry year, while 36,262 BC was so-so and 36,261 BC rather snowy.
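As a rough check on those sampling figures (my own arithmetic, not the authors'), here is the calculation laid out in Python, using only the numbers quoted above:

# Rough check of the quoted sampling rate (my arithmetic, not from the paper).
section_length_mm = 4.5 * 1000        # 4.5-metre section of core
slice_thickness_mm = 2                # thickness of each sample
years_spanned = 36401 - 36169         # about 232 years represented by the section

n_samples = section_length_mm / slice_thickness_mm    # 2250 slices
samples_per_year = n_samples / years_spanned          # about 9.7 on average
print(round(n_samples), round(samples_per_year, 1))

The average of roughly 10 samples per year sits comfortably inside the quoted range of 7 to 11; the spread reflects year-to-year variation in the thickness of the annual layers.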

There is more to this work than minute detail. It tells the story of the transition from a full glacial state to the warm climatic stage DO-8. The last ice age is peppered with these DO or Dansgaard-Oeschger events, warmer episodes that lasted 1000-1500 years and began abruptly.

The authors are properly cautious about interpretation. Their aim was more to show what attention to detail can uncover than to write the last word about the transition to DO-8. But they do suggest that the transition lasted just 21 years, during which snowfall increased by a half and temperature rose by 11.4 °C. This last number calls for particular caution. It needs to be seen in context, because it probably represents a local rather than a global change, and there are some technical complications to be sorted out. But at face value it implies warming at 0.5°C per year, a hundred times faster than the global warming of the 20th century and ten times faster than some extreme predictions for the 21st century.
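For the comparison in that last sentence, here is the arithmetic spelled out; the 20th-century and 'extreme 21st-century' warming figures are my own nominal assumptions (about 0.6 °C and about 6 °C per century respectively), not numbers from the paper:

# Back-of-envelope comparison of warming rates.
do8_rate = 11.4 / 21           # ~0.54 C per year during the transition to DO-8
c20_rate = 0.6 / 100           # ~0.006 C per year: assumed 20th-century global warming
extreme_c21_rate = 6.0 / 100   # ~0.06 C per year: an assumed extreme 21st-century scenario

print(round(do8_rate / c20_rate))          # ~90, i.e. roughly a hundred times faster
print(round(do8_rate / extreme_c21_rate))  # ~9, i.e. roughly ten times faster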

Dansgaard-Oeschger transitions are not like the warming that is about to happen this century. For one thing, they are almost certainly not due to increases in greenhouse-gas concentrations, at least not primarily. They are more probably related to abrupt changes in the circulation of the north Atlantic Ocean. But they do share the attribute of abruptness with our near future, and that makes them intensely interesting. Avicenna and Leonardo would have understood why.

With the climate conference in Copenhagen in December seen by many as the make-or-break event, the EU position is relatively clear- a 20% cut in emissions by 2020 (from 1990 levels), rising to 30% if a good global agreement can be reached.

The UK is amongst the leaders in pushing for high targets. The Budget in April set what was claimed as the world's first carbon budget, as required by the new Climate Change Act, with a legally binding 34% reduction in emissions by 2020. The government said it will 'increase the level of ambition of carbon budgets once a satisfactory global deal on climate change is reached'. Longer term, there is a firm commitment to an 80% cut by 2050.

While welcome, all that will mean very little if the US and China don't come up with decent targets. The good news from the USA is that, after years of denial under Bush, the US government now sees greenhouse emissions as a major issue: the Environmental Protection Agency is now regulating them. And progress is being made on national targets. Against strong opposition, the House of Representatives has just voted 219 to 212 to bind the US to cutting carbon emissions by 17% from 2005 levels by 2020 and by 83% by 2050. It also agreed that a national carbon 'cap and trade' system should be established and to a 15% 2020 target for electricity from renewables. However, all of this has still to be passed by the Senate- where opposition is likely to be even stronger.

The opposition has already led to a watering down of targets. For example, the draft US Clean Energy act called for a 20% cut on 2005 emission levels by 2020, and for the US to get 25% of its electricity from renewables by 2025. The fossil lobby wanted just a 6% cut by 2020 and lower renewable targets. Even so, the 17% emission cut now agreed by the House of Representatives is a significant compromise, and the 15% target for electricity from renewables is an even bigger one, especially since it seems 12% could be allowed in some regions with poor resources, and energy efficiency gains may be allowed as a substitute for some renewables.

In any case, even if finally passed into law by the Senate, these are just paper targets. The crucial thing is the proposed new US carbon trading system - a key element in translating the targets into reality. Indeed, although much was made of the $150 billion over ten years that Obama allocated to renewables and other green energy projects earlier this year, as part of the US economic stimulus package, much of that funding will only materialise if the carbon trading system goes ahead. This may explain why the very large stimulus allocation (around 10 times current support levels) was not fought much by Republicans- they may have been waiting to block it at source by opposing the carbon trading system. If that proves to be the case, the fear is that the new proposals won't get through in time for the USA to make a clearly positive contribution at the Copenhagen conference.

While this may be a problem, it seems that the simple fact that Obama is now taking the US into climate negotiations has been enough for the Chinese to engage in the process more fruitfully - and that in turn has helped Obama, since one of the main reasons for opposition to the Kyoto protocol in the US was that it didn't apply to newly developing countries like China, whose emissions were expanding rapidly - indeed China has recently overtaken the US as the largest emitter. But China now seems to be thinking in terms of, if not absolute cuts, then at least a commitment to reducing the growth of its rapidly expanding carbon emissions.

Su Wei, a leading figure in China's climate change negotiating team, said that officials were considering introducing a national target that would limit emissions relative to economic growth in the country's next 5-year plan from 2011: 'China hasn't reached the stage where we can reduce overall emissions, but we can reduce energy intensity and carbon intensity', i.e. carbon emissions per unit of GDP. Whether an agreement will be reached on that before the Copenhagen conference remains to be seen.

The stakes are high- for Obama and for the world. The EU is pushing hard, and, whatever might be happening at home, the USA seems to be bending over backwards to get a global agreement. It has proposed that developing nations like China should not be required to commit to specific emission targets, but should be asked to commit to boosting energy efficiency standards and improving the take-up of renewable energy. And there are positive signs, with talk of China being able to go beyond the current target of getting 15% of all energy from renewables by 2020, to 18% and possibly 20% - on a par with the EU and well ahead of the USA.

We may make it yet.

Waxman-Markey, a bill "to create clean energy jobs, achieve energy independence, reduce global warming pollution and transition to a clean energy economy", is due to be voted on by the end of this week in the House. A lot of attention has highlighted the global warming parts of the bill, and rightly so. In the current draft, the emission reduction target is a 17% reduction from 2005 levels by 2020. This is no more than a 4% reduction from 1990 levels and may not be enough to persuade China, Europe and other world regions to get tougher on their own targets. Also, potentially ineffective offsets can be purchased, hence avoiding emission reduction at the smokestack.

However, the bill is surprisingly comprehensive, also addressing large-scale clean energy deployment, sustainable transportation, smart grid advances and transmission issues. All these measures support a transition to a clean energy economy, as the bill claims.

In particular, Waxman-Markey holds quite some promise, as it  

  • aims to invest $190 billion into renewable energies
  • provides grants for transmission infrastructure and requires coordination of electricity transmission planning with the goal of building out the grid to facilitate deployment of renewables (i.e., brings the wind energy of the Mid-West to urban centers)
  • asks regional electric grid planning to take into account all significant demand-side and supply-side options, including energy efficiency, distributed generation, renewable energy and zero-carbon electricity generation technologies, smart-grid technologies and practices, demand response, electricity storage, voltage regulation technologies, and even more detailed measures. (Thanks to Cathy Kunckel for pointing this out.)

And this transition is actually the bottom line. Make it more lucrative to invest in renewable energies than in coal plants, and more attractive to move into mixed-use neighborhoods with high-quality public transit than to rely on gas-guzzling monsters in exurbia. If the bill heads in this direction, it will be a huge success for avoiding disastrous human-made climate change. Currently, utilities have expertise in operating coal plants and know this market. However, when coal plants get a little more expensive to operate and renewable energies get a little cheaper to deploy, utilities start to reconsider their investment decisions. At some point the market may switch over to new technologies, like wind, geothermal and concentrated solar power. The current gradual change can accelerate into a switch in the way our energy economy operates. If that happens, weak targets for emission reductions can much more easily be strengthened; the system dynamics will have changed and the vested interest in coal plants will be weaker.

One of the emergent technologies is wind. It is mature by now, the market is well developed, and in many locations in the US wind is cost-competitive with conventional sources of energy. With more policy attention on the grid infrastructure, a wave of investment in wind energy can be expected within the next few years. For example, a study published in PNAS points out that US wind resources, particularly in the central plain states, could supply 16 times more energy than current total US demand.

One of the truths about field work on glaciers is that most of the time the weather is rotten, even in summer, when most field work is done. But every so often the clouds lift and even disappear altogether. Whether or not the temperature goes up on one of those infrequent sunny days, the view makes up for all the sleet, wind and fog that represent the norm, and the field workers get out their cameras. The remote-sensing specialists are also grateful for these cloud-free days, because they make air photography and satellite imaging possible.

The outcome of all this fair-weather photographic activity is pretty spectacular, and much of the best work has found its way onto the internet. Here are a few of my favourite places in cyberspace for pictures of glaciers.

Glaciers Online is a web site maintained by Jürg Alean and Michael Hambrey. Jürg Alean is a Swiss teacher who studied Baby Glacier on Axel Heiberg Island, northern Canada, as an M.Sc. student. Baby Glacier is a glacier in which my university, Trent University, has a special interest, and we were fortunate to be able to arrange a return visit to Axel Heiberg Island for Alean in summer 2008. You can see the results at Glaciers Online.

Alean has translated his prowess with the camera into a distinguished career showing the world what glaciers look like. Together, he and Hambrey, a structural glaciologist (among other things) at Aberystwyth University, have published the magnificently illustrated Glaciers (2004, Cambridge University Press). Many of the illustrations are posted at Glaciers Online.

Much of the photoglaciology on the web has a flavour about it of Last Chance to See, Douglas Adams and Mark Carwardine's 1990 book about filming animals that are on the verge of extinction. The glaciers, almost without exception, are getting smaller, and if you return to a place from which somebody photographed a glacier several decades ago there is an increasing chance that there won't be any ice left to see. Al Gore exploited this plain fact in the award-winning An Inconvenient Truth. At OceanAlaska Kenai Fjords, Bruce Molnia, of the United States Geological Survey, uses the technique of morphing - animating a transition between before and after images - to impressive effect to show what has been happening to the glaciers of the Kenai Mountains in southern Alaska. Most are much smaller now than they used to be.

Molnia is also the author of Glaciers of Alaska, chapter 1386-K in the Satellite Image Atlas of Glaciers of the World, which has been appearing since the 1970s as U.S. Geological Survey Professional Paper 1386. Chapter K, like the Satellite Image Atlas as a whole, is a tour de force in the patient assembly of scattered information about some well-known and a great many almost unknown glaciers.

I have to say, though, that the U.S. Geological Survey's idea of a "chapter" is not well aligned with mine. I don't know how much chapter 1386-K weighs because I only have it as a 90-MByte PDF file, but chapter 1386-J, Glaciers of North America (excluding Alaska), weighs 1.6 kg according to the scales in our kitchen. Most of the chapters that have appeared so far are gorgeous.

Finally, back briefly to Gutenberg space. Three visually stunning books about glaciers, not available electronically as far as I know, are Glacier Ice by Austin Post and Ed LaChapelle (revised edition, 2000, University of Washington Press, Seattle); The Opening of a New Landscape: Columbia Glacier at Mid-retreat by Tad Pfeffer (American Geophysical Union, 2007); and Glaciologi by Per Holmlund and Peter Jansson (Stockholm University, 2002). If you are looking for visual delight, then like me you will not mind if you are unable to read the Swedish text of the latter.

The use of nuclear power and/or renewable energy is seen as part of the response to climate change, but climate change may have a negative impact on some of these energy sources, limiting the contribution they can make.

Most of the UK's nuclear plants are on the coast, so as to get access to sea-water for cooling. In future, some of these sites may be inappropriate as locations for the new plants that have been proposed, due to the risk of flooding and storm-sea ingress. The Nuclear Consultation Group, which includes leading UK experts in the field of environmental risk, said, in response to the Government's new criteria for the siting of proposed new nuclear plants, that 'the Strategic Siting Assessment process is flawed and inadequate. It is inconceivable that the selection of sites on vulnerable coasts in southern England represents good sense', given that 'the risks from climate change in the form of sea level rise, storm surge and coastal erosion at the favoured sites are serious and increasing over time'. It noted that the Flood Hazard Research Centre at Middlesex University had concluded that there could be problems at four of the favoured sites: Bradwell, Hinkley, Dungeness and Sizewell.

Nuclear Consultation Group member Prof. Andy Blowers, writing in the TCPA journal, said the new UK siting criteria amounted to nothing less than a means of trying to justify putting a new generation of power stations and spent fuel waste stores on existing coastal sites, most of which are likely to become submerged during the next century under the impact of sea level rise and storm surges. It's the on-site spent fuel stores, expected to hold old fuel for 100 years, that he felt were particularly worrying.

It's not a trivial issue. Climate scientists are now predicting that sea levels could rise by 1 metre or more by 2100, and maybe up to 2 metres, and with increased storm surges likely as well, that could pose threats to many locations around the world- the UK included. The Institution of Mechanical Engineers, which recently published a report on 'Climate Change, Adapting to the Inevitable', said that coastal sites like Sizewell might have to be abandoned or relocated in the long term.

Dr Colin Brown, IMechE's director of engineering commented: "The Sizewell B nuclear plant has been built on the Suffolk coast, a site that has been earmarked for the construction of several more nuclear plants. However, Sizewell will certainly be affected by rising sea levels. Engineers say they can build concrete walls that will keep out the water throughout the working lives of these new plants. But that is not enough. Nuclear plants may operate for 50 years, but it could take hundreds of years to decommission them. By that time, who knows what sea-level rises and what kinds of inundations the country will be experiencing?"

Sea level issues are not the only climate related problem that may impact on nuclear projects. In continental Europe, the USA and elsewhere, many plants are located near rivers, but climate change could make this problematic too. Recent episodes of excessively hot summer weather in France led to nuclear plants being closed since the exit cooling water temperature was higher than environmental regulations allowed. Longer term, getting access to cooling water in summer could be a major issue in many countries.  This could well become another key issue in reactor location and design.

Changing climate and weather systems could also undermine the viability of some renewables to some extent. Changing rainfall patterns will have a significant impact on the amount of energy that some hydroelectric plants can generate. Increased temperatures may also lead to more evaporation in some locations. And changed wind patterns may mean that in some locations wind turbines will not be able to produce as much energy as expected. A recent preliminary study in the Journal of Geophysical Research suggested that average and peak winds may have been slowing across the US midwest and eastern states since 1973, with a 10% decline in average wind speed noted over the past decade. Climate modelling has evidently suggested a further 10% decline in wind levels could occur over the next four decades, although this has not been confirmed, and in any case it may not be a general trend. However, it seems possible that, if temperature differentials between the poles and the equatorial regions decrease, then so will wind flows.

Given that waves are the result of wind moving over the oceans, then if wind flows are reduced, wave energy will also be reduced. Tidal flows should be unaffected by climate change, while direct solar generation may actually benefit, but the impact of changing climate and weather patterns on biomass as an energy source may be more complex.

The Institution of Mechanical Engineers report is at: www.imeche.org/NR/rdonlyres/D72D38FF-FECF-480F-BBDB-6720130C1AAF/0/Adaptation_Report.PD

This past week I had the pleasure of meeting with seven colleagues for a Water and Energy workshop in Brussels. The purpose of the gathering, organized by COST (European Cooperation in Science and Technology), was to organize a set of case studies on the links between water and energy for a special journal issue and presentation at a side event during the United Nations Framework Convention on Climate Change conference in Copenhagen, Denmark this December (aka the Conference of Parties 15: COP 15).

The case studies span four continents and cover the breadth of interactions. I list here the topics and the colleagues (in attendance) working on the papers:

1. Food-Water-Energy in Spain (Anna Osann, Universidad de Castilla La Mancha)
2. How the carbon reduction policies in Australia will affect the Water-Energy Nexus (Debborah Marsh, Australian National University)
3. Water needed for bioenergy crops in Tuscany Region of Central Italy (Anna Della Marta)
4. Energy-Water Nexus of Texas (Carey King, University of Texas at Austin)
5. Underground Thermal Energy Storage in The Netherlands (Adriana Hulsmann, Watercycle Research Institute)
6. Energy-Water Nexus - China Case Study (Xingshu Zhao, Chinese Academy of Sciences)
7. Opportunities for Greenhouse Gas reductions in water and wastewater supply, use, and treatment in England and Wales (Andy Howe, Environment Agency)
8. Conflicts and Synergies Between Climate Change policies and Sustainable Water Management (Jamie Pittock, Australian National University and WWF)

What has become more and more apparent as we study the ties between energy and water is that historically water has not proven to be a constraint on the development of energy supply and use. However, most of the world's fresh water resources are now already allocated to one purpose or another. So as people want water for new energy (e.g. mining, cooling for electricity generation, growth of bioenergy crops, etc.) it is now beginning to be supplied at the expense of other water needs. In many cases, integrated water resource management planning has already set limits on the use of water in a certain river basin or region.

When water is fully allocated or already scarce, and new energy needs arise, a showdown can ensue. The question becomes: Is the sustainable and ecological mentality of water resource management going to influence the energy sector, or is the energy sector's more exploitive and revenue-maximizing style going to overtake the water management priorities?

So far, it may still be unclear which position will win out, as a couple of examples show. In Australia, an ongoing drought since the beginning of this century has caused power generating stations to ask for environmental flow restrictions to be lifted for certain rivers. The problem for them was that they needed the water for cooling, but were only allowed to extract it when flows were sufficiently high. Because the flows were not high enough due to prolonged drought, and they were not successful in lobbying for the removal of certain river flow restrictions, they were forced to buy water on the rural water market in Australia. This was a major cause of electricity prices rising by up to 270% for a period last year.

In Texas, a 200 mile interbasin water transfer project ("LCRA-SAWS") from the central coastal region of Texas to the San Antonio Water System was studied for over seven years before recently being cancelled by the water supplier, the Lower Colorado River Authority. General cost overruns were much of the issue, combined with energy costs for pumping and restrictions for freshwater inflow into the Texas bays. However, these kinds of issues are not much of a stumbling block for China trying to keep its northern, now rather dry, agricultural regions productive and growing cities healthy. The "South North Water Transfer Project" is expected to take 40 years to construct 3 main arteries, transfer 38-43 billion m3 of water per year and cost almost 500 billion yuan (~ 75 billion US dollars). Additionally, there are plans for 83 GW (almost 1/10 of the US electric capacity) of hydropower dams to be constructed from 2005 to 2020. Natural river flows are not really an issue in China. They need electricity (hydropower) and water to maintain economic growth and thus, political stability.

When we look to the biofuels push, this is where we may see water management lose out. Agriculture already withdraws and consumes the most water of any sector. Historically, this has been for food production, and using water to grow food crops has been a fundamental use of water since the dawn of civilization. Using water to grow crops that then get converted to liquid fuels, on the massive scale of billions of gallons per year, is a more recent trend. Should irrigation water be used for growing biofuel crops? Is there some target percentage of irrigation water that should be an upper limit, given that some parts of the world are still malnourished? I think this is where the debate should go. I don't believe that agricultural energy interests should be completely shut out from irrigation, but at the same time I don't believe we should allow free rein over aquifers and surface water for irrigating biofuels. A common argument for some 2nd generation biofuel crops, such as grasses and other cellulosic material, is that they can be grown on marginal lands. Well, marginal lands are just that, so yields will be higher with fertilizers and irrigation. If irrigation water is subsidized for these purposes, then there is no reason to believe that the drive for higher yields and more fuel will not lead to irrigation of these crops in areas where we are assured it will not be used.

On the morning of 12 October 2007 I sat down at my computer and learned that the IPCC, the Intergovernmental Panel on Climate Change, had been awarded a share of the Nobel Peace Prize. I was a contributor to the report of IPCC Working Group I, and I recall the exhilaration vividly. I also remember thrilling, or at least impressing, the students in my climatology lecture later that day by telling them the news and suggesting that a little bit of the magic Nobel dust might settle on them if they listened to me carefully.

Contributing authors are the lowest form of life in the IPCC pond, but the facts that there were 800 of us, and that we were repeating an exercise carried out three times before, have a lot to do with the success of the IPCC. Plans are afoot now for the next, fifth IPCC assessment, developing concurrently with a post-mortem on the fourth. Already there are signs that the fifth assessment may have to be more disturbing than the fourth.

Some of the reasons are laid out in a set of commentaries, The road to Copenhagen, in a recent issue of Nature. For example, earlier estimates of climatic sensitivity, the amount of warming for a given amount of extra greenhouse gas, were too low. The same target for maximum carbon dioxide concentration now means a higher maximum temperature. Second, new modelling efforts show persuasively that recovery will be much slower than we thought - many centuries, not just a couple. The biosphere and the ocean cannot soak up greenhouse gas fast enough to draw down the atmospheric concentrations at rates that previously seemed probable.

In part this is just the natural evolution of understanding. The accumulating facts yield a clearer picture as they are subjected to more and more study. But this works for past events as well as for things that haven't happened yet.

With three or four colleagues, my contribution to the IPCC's fourth assessment was to show that glaciers have been losing mass more and more rapidly over the past three or four decades. We had assembled as many of the relevant facts as we could, but a leading problem was that there aren't enough of such facts: too few measurements, too unevenly distributed.

Now, further study is showing that the IPCC numbers for glacier mass balance need revising in the pessimistic direction. First, in a paper in Annals of Glaciology I brought in a large quantity of previously unused facts by working out a way to handle measurements made by so-called geodetic methods (based on repeated mapping, as opposed to direct measurements on the glacier). These newly-accessed facts make the mass balance appreciably more negative.

Second, a study led by Regine Hock shows that the IPCC work probably didn't allow properly for the glaciers around Antarctica. We had to handle these by guesswork, because there are practically no measurements down there. The new study uses alternative but credible information, modelling the mass balance from a knowledge of temperature and precipitation, to find that the IPCC guesses were too optimistic. Call their work educated guesswork if you like, but their guesses are very likely to be better than the IPCC guesses.

It now seems probable that the glaciers were contributing about 1.3 mm/yr to sea-level rise in recent years, rather than the IPCC estimate of about 1.0 mm/yr.

We IPCC contributors did our best. If you can't find any facts, you have to think of a substitute. And the facts that you do have will keep evolving. It takes time to find, process and test them, and therefore the picture will keep changing in detail. Sometimes it will look more and sometimes less rosy. But it hasn't changed in broad outline for a long time. Indeed, it hasn't changed much since Arrhenius calculated that doubling the atmospheric concentration of carbon dioxide would increase the temperature by 5 to 6 °C. That was in 1896. The very latest estimates of this number, higher than that of the IPCC, are about the same.

Offshore wind is the big new thing, with the UK doing quite well - taking the lead from Denmark, with around 600 MW now installed and more planned. However there may be battles over how these projects, installed by rival companies, are linked up to land via power transmission links.

Most of the projects are now being sited several miles off the coast, linked to land via sea-bed marine cables. There are various ways in which these could be arranged. So far, however, it could be a case of each offshore project having its own parallel (and very expensive) links back to shore. In some cases that seems likely to involve duplication of effort, with links to rival projects running close to each other, in parallel. It would arguably be more rational and cheaper overall to have a network of offshore links, with possibly a single link back to shore for each region, offering a common service for each project to use. That is even more the case as we go further out to sea, and would be vital if we also build links across the North Sea to the continent- as part of the EU supergrid concept.

In its report last year on Renewables, the Innovation, Universities, Science and Skills Select Committee said that they were: 'concerned that the proposed offshore transmission arrangements are not appropriate for the UK's target of 33GW of offshore wind by 2020. We urge the Government to reconsider the development of an offshore grid.'

www.publications.parliament.uk/pa/cm200708/cmselect/cmdius/216/216.pdf

Imera, who have proposed a Europa Grid linking up North Sea wind to the UK and the continent, made a similar point- there is 'unused cable capacity in traditional radial connections', whereas a grid network would be better, especially since it could also be used to import/export power. www.imerapower.com/.

However, not everyone is enthusiastic about a supergrid network. In a submission in March 2009 (FBEN 29) to the new Energy and Climate Change Select Committee, the German-owned utility E.ON commented: 'A super grid connecting offshore wind farms to adjacent countries is an exciting proposal, but it is unclear whether this is the most cost effective route for connecting new offshore wind. Timely delivery of the supergrid will be an issue. For example, round three offshore windfarms should not be delayed because the connection of a zone is dependent upon a wider interconnection project'.

Ofgem, the energy regulator, has also noted that the advantage of the parallel 'point to point' radial approach is that it 'allows generators to proceed individually and avoid delays due to third parties', but it has said that it's also happy with the more integrated network approach. Ofgem nevertheless got a pretty rough ride on this issue at last year's BWEA wind conference - it was argued that the proposed grid regime would not encourage joined-up networks, and that change was needed to ensure collaborative development and a strategic approach. Do we really need a host of separate lines just to protect competition in the short term?

That was certainly an issue for Green MEP Claude Turmes, who was the European Parliament's lead negotiator for the Renewable Energy Directive. Speaking at the UK Renewable Energy Association's annual conference earlier this month, he claimed that the competitive tender process favoured by Ofgem was delaying grid connections for offshore wind projects: 'The UK approach, imposed by Ofgem, for competitive bids for chunks of 40 km cables for offshore, is not very productive, to put it mildly. Much better is the Danish model and the German model, where you have one system operator, the Danish grid company or the regional grid operator in Germany. This company is in charge of delivering the cable to the offshore platform where you then have to plug in your wind turbine. You have to get rid of Ofgem's over-liberalised idea, by which you can have competition on grid installation.'

Source: NewEnergyFocus.com

Like all specialists, glaciologists are fond of acronyms and need to be reminded not to use them if they want to be understood by normal people. I don't know why acronyms exert such power over the modern mind. It has to be more than laziness. Perhaps it is the way the string of letters squeezes such a lot of concept into such a small space.

If you are a normal person who would like some clues to the decoding of glaciobabble, two acronyms stand out. One is ELA, short for equilibrium line altitude. The one I would like to focus on here, though, is AAR, which is short for accumulation-area ratio and is a close relative of ELA.

Two recent studies by Dave Bahr, Mark Dyurgerov and Mark Meier illustrate nicely why the AAR is a powerful concept. The first, with Bahr as first author, appeared in Geophysical Research Letters, and the other, with Dyurgerov as first author, will appear shortly in Journal of Glaciology.

Glaciers are moving bodies of ice on the Earth's surface. Ice, being near or at its melting point, is a soft solid, and flows from where there is net accumulation, usually at higher altitude, to where there is net loss due to melting or calving or both, usually at lower altitude. (Calving glaciers, however, are special cases, not considered here.)

The ELA is the altitude separating net gain above from net loss below. The AAR is the extent of the upper or accumulation area, above the ELA, divided by the total extent of the glacier.

There are some important ideas squeezed into these acronyms. First, the glacier is a whole. You must consider the accumulation area and the ablation area (the area of net loss) together. Next, the size of the whole depends on the shape. For the glacier to be in balance, neither growing nor shrinking in an unchanging climate, there has to be a balance between the accumulation area and the ablation area.

If nothing happens to alter the balance between gain and loss, the ice flow adjusts matters until a particular, equilibrium AAR is attained. At typical speeds of flow, a glacier somewhat out of balance will need from a few years to a few centuries to reach equilibrium.

The authors' best estimate of the equilibrium AAR is about 58 percent (give or take 1). Earlier estimates were larger. This finding comes from well-studied glaciers for which the authors plot annual averages of measured AAR against measured mass balance. They get a cloud of dots that defines a clear straight-line relationship. The equilibrium AAR is the AAR at which this line crosses the line representing zero mass balance.
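To make the method concrete, here is a minimal sketch of that kind of fit; the (mass balance, AAR) pairs below are invented purely for illustration, not data from the studies:

# Estimating the equilibrium AAR: fit a straight line to annual (mass balance, AAR)
# pairs and read off the AAR at zero mass balance. The data here are made up.
import numpy as np

mass_balance_mm_we = np.array([-1200, -800, -450, -100, 150, 400])  # mm water equivalent
aar_percent        = np.array([   31,   38,   45,   54,  61,  66])

slope, intercept = np.polyfit(mass_balance_mm_we, aar_percent, 1)
equilibrium_aar = intercept   # the AAR where the fitted line crosses zero balance
print(round(equilibrium_aar))

With real measurements the cloud of dots is noisier, but the principle is the same: the intercept at zero balance is the equilibrium AAR.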

The Dyurgerov study makes it clear that the AARs actually measured over recent decades are well below 58. The average for 1997 to 2006 is 44, give or take 2. Today's accumulation areas are too small, and today's glaciers too big, for today's climate, consistent with today's mass balances nearly always being negative.

The argument which follows from this finding rests mainly on an earlier demonstration that glacier volume is related to glacier area. Given current total area and AAR, we can estimate the change in volume required to reach the equilibrium total area and its AAR of 58. The authors find that to get to equilibrium with today's climate the glaciers will have to shed meltwater equivalent to 184±33 millimetres of sea-level rise. That would require somehow stopping climatic change in its tracks. But if we carry on with business as usual, the loss over the next hundred years comes out at 373±21 mm, or 3.7 mm/yr. The recent rate of mass loss is near to 1.3 mm/yr.

The upshot of all this is a new twist on the notion of committed change. Just as we will have to live with the greenhouse gases we have already added to the atmosphere, so we will have to watch our glaciers melt away - no matter what we do.

Over three decades ago the US government, through the newly established Solar Energy Research Institute (SERI), set up a Biofuels Program that included the Aquatic Species Program (ASP) to explore the ability to develop biofuels from microalgae. Today, SERI is known as the National Renewable Energy Laboratory (NREL), and in 1998 it closed out the ASP, as progress had slowed and there was a belief that advances in biological control and genetic engineering of algae were required to create a viable algae-based biofuel industry. Regarding carbon sequestration, NREL reports that: "Algal biodiesel is one of the only avenues available for high-volume re-use of CO2 generated in power plants. It is a technology that marries the potential need for carbon disposal in the electric utility industry with the need for clean-burning alternatives to petroleum in the transportation sector." [Sheehan et al., 1998]

Furthermore, NREL states: "...we believe that biodiesel made from algal oils is a fuel which can make a major contribution to the reduction of CO2 generated by power plants and commercial diesel engines." [Sheehan et al., 1998]

Finally, the NREL closeout report reads: "When compared to the extreme measures proposed for disposing of power plant carbon emissions, algal recycling of carbon simply makes sense." [Sheehan et al., 1998]

If we combine these statements made in 1998 with proposed legislation in 2009 for greenhouse gas (GHG) reductions, we can pose the question of the viable size of an algal-based biofuel industry in the United States. The most popular climate bill in the current Congress is the American Clean Energy and Security Act of 2009 (ACES Act) by Henry Waxman and Edward Markey, and it calls for reducing GHG emissions by 83% from 2005 levels by 2050.

In 2005, US carbon dioxide (CO2) emissions were 6,030 million metric tons (MtCO2). The electricity sector accounted for 2,510 MtCO2 and the transportation sector accounted for 1,980 MtCO2. Under the proposed legislation, the emissions remaining in 2050 - 17% of 2005 US CO2 emissions - would be approximately 1,000 MtCO2. For simplicity of this analysis, we'll assume that total CO2 emissions, rather than all GHG more generally, will need to be reduced to the 17% target by 2050.

Algae production requires CO2. And because algae grow in aquatic environments instead of on land, the surface area of the algae exposed to the air, which contains CO2, is more limited than that of terrestrial biomass. Therefore, to grow algae biomass on industrial scales (i.e. profitable scales) CO2 is pumped into the algae-bearing water at much higher concentrations than in the atmosphere. Estimates for the amount of CO2 required for making biodiesel from algae are approximately 0.02 +/- 0.004 tons of CO2 per gallon of biodiesel (tCO2/gal). For example, NREL reports that 60 billion gallons (Bgal) of biodiesel would require 900 - 1,400 MtCO2. This quantity of CO2 is 36%-56% of total US power plant emissions.
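Those numbers hang together, as a quick consistency check shows (my arithmetic):

# Consistency check on the per-gallon CO2 requirement.
t_co2_per_gal = 0.02                 # central estimate, tCO2 per gallon of algal biodiesel
uncertainty = 0.004
biodiesel_gal = 60e9                 # NREL's example of 60 billion gallons

central = biodiesel_gal * t_co2_per_gal / 1e6                  # in MtCO2
low     = biodiesel_gal * (t_co2_per_gal - uncertainty) / 1e6
high    = biodiesel_gal * (t_co2_per_gal + uncertainty) / 1e6
print(central, low, high)   # ~1200 MtCO2, with a range of roughly 960 to 1440 MtCO2

That range is in line with the 900 - 1,400 MtCO2 quoted for 60 Bgal.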

So to get a maximum limit on how much biodiesel could be produced per year under the carbon restriction of the ACES Act, we can assume that all CO2 emissions come from transportation only. The figure below plots a simplified trajectory of US CO2 emissions (left axis) under the ACES Act, along with emissions from the electricity and transportation sectors. On the right axis, I've plotted the amount of biodiesel from algae that can be produced assuming that 100% of power plant emissions are captured and used for growing algae to make biodiesel (clearly an overestimate). This inherently assumes that (1) there will be absolutely no net CO2 emissions from any other industrial process, industry, or combustion of any hydrocarbon aside from burning the biodiesel in vehicles and (2) that no technology will feasibly exist for re-capturing the CO2 from combustion of biodiesel in the vehicle itself.

[Figure: AlgebraOfAlgae_image.jpg - projected US CO2 emissions under the ACES Act (left axis) and the corresponding maximum algal biodiesel production (right axis)]

The plot shows that in 2050, 50 Bgal/yr of biodiesel from algae would be the maximum amount allowed. Compare this to the 2008 US consumption of approximately 138 Bgal of gasoline and 61 Bgal of diesel. About half of the diesel was for freight trucks. Therefore, in 40 years, for the US to meet the ACES Act carbon reductions, we could produce 50 Bgal of biodiesel from algae, with the 1,000 MtCO2 coming from fossil fueled power plants (as assumed), if and only if no other fuel or economic sector had a net emission of CO2. Thus, if the CO2 supplied for algae came from coal power plants, then we would essentially be producing electricity from coal with CO2 capture, but without geologic or other storage systems, in the quantity of approximately 1,000 TWh, or 50% of today's coal-powered generation. This does not mean that additional coal or natural gas power plants could not operate, but each would have to capture and sequester 100% of its CO2 emissions - a practical impossibility, but a sufficient assumption for this back-of-the-envelope analysis.
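For anyone who wants to see where that ceiling comes from, here is the arithmetic, under the same assumptions spelled out above (all allowed emissions end up as tailpipe CO2 from algal biodiesel, and all the CO2 fed to the algae is captured from power plants):

# How the ~50 Bgal/yr ceiling in 2050 falls out of the numbers above.
emissions_2005_mt = 6030                        # total US CO2 emissions in 2005, MtCO2
aces_cap_2050_mt = 0.17 * emissions_2005_mt     # ~1,025 MtCO2 still allowed in 2050

t_co2_per_gal = 0.02                            # tCO2 needed per gallon of algal biodiesel
max_biodiesel_bgal = aces_cap_2050_mt * 1e6 / t_co2_per_gal / 1e9
print(round(aces_cap_2050_mt), round(max_biodiesel_bgal))   # ~1025 MtCO2, ~51 Bgal/yr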

So what are some implications or conclusions from this quick analysis?

To drive as many miles as we do today (2.7 trillion/yr by cars and light trucks alone) on 25% of current liquid fuels consumption, we need our transportation sector to be roughly four times more "liquid fuel" efficient - in the range of 80 MPG of biodiesel for light-duty vehicles - leaving 16 Bgal for freight (about half the fuel used by today's freight).
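The 80 MPG figure follows directly from those numbers (my arithmetic, using only the quantities already given):

# Where the ~80 MPG requirement comes from.
miles_per_year = 2.7e12            # current miles driven by cars and light trucks
total_biodiesel_bgal = 50          # the ceiling set by the carbon cap
freight_bgal = 16                  # reserved for freight (about half today's freight fuel)

light_duty_bgal = total_biodiesel_bgal - freight_bgal    # 34 Bgal left for light-duty vehicles
required_mpg = miles_per_year / (light_duty_bgal * 1e9)
print(round(required_mpg))   # ~79 MPG fleet-wide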

This is not entirely difficult to imagine for light duty vehicles, which currently have a fleetwide average of approximately 21 MPG. By creating plug-in hybrids and making cars lighter, the capability of meeting this fuel economy has been demonstrated. Imagining the implications for freight trucks may be more difficult, as they would still have to get over twice as efficient as today, though increasing freight travel by rail could help get goods around the country with less fuel. There are other possibilities, but knowing what we have to work with in terms of a carbon balance can prevent an "algae to biodiesel" bubble while still moving us to a lower-carbon future.

The limits to renewables

Prof. David MacKay from Cambridge University has been getting good media coverage for his seminal self-published book 'Sustainable Energy without the hot air', in which he attempts to construct and then test a range of possible energy mixes for the UK. It's a very stimulating- and sobering- exercise. His clearly presented analysis offers a challenging assessment of the renewable resource, and he is obviously worried that enthusiasts for renewables sometimes overstate what they can deliver- he says 'plans must add up'.

It would be interesting then to see his reactions to a bold new paper in The Electricity Journal (Vol. 22, No.4, May 2009, pp95-111) by Ben Sovacool and Charmaine Watts, who ask whether 'Going Completely Renewable' is possible and desirable - and say yes, for electricity in both the USA and New Zealand, which they select as case studies, and also, ultimately, for the world as a whole: 'Excluding biomass, and looking at just solar, wind, geothermal, and hydroelectric, the world has roughly 3,439,685 TWh of potential- about 201 times the amount of electricity the world consumed in 2007'.
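It is worth noting what 2007 consumption figure those two numbers imply (my arithmetic; this only checks internal consistency, not the resource estimate itself):

# The world electricity consumption implied by the quoted potential and multiple.
potential_twh = 3_439_685      # Sovacool and Watts' estimate, excluding biomass
multiple = 201                 # "about 201 times" 2007 consumption

implied_2007_consumption_twh = potential_twh / multiple
print(round(implied_2007_consumption_twh))   # ~17,000 TWh

which is the right order of magnitude for world electricity consumption in 2007.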

MacKay's focus is just on the UK and he is at pains to alert people to the fact that if they want to use renewables to meet their energy needs, the scale of deployment will have to be very large in land use terms ('country sized'). Even then he doubts if enough can be obtained- we may also need nuclear or CCS, or both.

His approach is based on an assessment of averaged watts per square metre, and he calculates that, for example, on-land wind delivers 2 watts/sq. m. He says that he is 'not anti wind, just pro arithmetic'. However his sums seem to ignore the possibility that the land around wind turbine bases can be used for other activities- e.g. farming or energy crop growing. And going off shore avoids land-use limits altogether, as MacKay recognises- although he points out that very large areas ('the size of Wales') would have to be involved to get significant amounts of energy.
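To get a feel for what 2 watts per square metre means in practice, here is an illustrative calculation; the 10 GW target is my own arbitrary example, not a figure from MacKay's book:

# Land area implied by MacKay's 2 W per square metre for onshore wind.
power_density_w_per_m2 = 2.0

def area_km2_for_average_power(gw):
    """Land area (km^2) needed to deliver a given average power at 2 W/m^2."""
    return gw * 1e9 / power_density_w_per_m2 / 1e6

print(round(area_km2_for_average_power(10)))   # 10 GW average -> ~5,000 km^2

Roughly 5,000 square kilometres for 10 GW of average output - about a quarter of the area of Wales - which is the scale of deployment MacKay wants readers to confront.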

Looking beyond the UK, Sovacool and Watts mention the potential of concentrating solar arrays in desert areas. As MacKay points out, they certainly use a lot of land, but there are plenty of low value desert areas, for example in North Africa.

Even so, there is a major gap between MacKay's cautious resource analysis and some of the more speculative data used by Sovacool and Watts. But then again, while we have to avoid over-enthusiastic assessment, there is also a need to challenge overly conservative estimates. One issue is costs. MacKay mostly sidesteps this by focusing on resources and physical data. By contrast, Sovacool and Watts are stronger on the economics - although, once again, there will be disputes about their selection of economic data.

Overall, in looking at these two studies, we have on one hand an attempt at a hard nosed physical assessment, and on the other, a more speculative vision of what we might aim for. That's not to say MacKay's book lacks vision- it's packed with ideas and insights on how we might reduce emissions effectively. However, his overall approach does sometimes feel overly deterministic. While his calculations are clearly valuable in setting order of magnitude boundary conditions, I'm still reminded of Bertrand Russell's dictum that "Science may set limits to knowledge, but should not set limits to imagination."

David MacKay's book can be downloaded for free from http://www.withouthotair.com

For an object that is supposed to be in the frozen state, your typical glacier has a surprising amount of liquid water in it. Even in the coldest parts of the Antarctic Ice Sheet, you can find water in the spaces between the ice crystals. It is liquid partly because it is under pressure but mainly because it is very salty, and in truth the amounts are tiny.

At the glacier bed, though, there is often a great deal of water. This makes sense when you think about the energy balance down there. The ice may, as in Antarctica, begin its sojourn on the Earth's surface at a temperature tens of degrees below the freezing point, but as more of it accumulates the ice at the bottom can only get warmer. Some of the energy comes from friction as the ice moves over the bed. More comes from geothermal heat, the slow leakage of energy from the Earth's interior. The ice has to warm up to its melting point eventually.

The basal meltwater has to go somewhere. A lot of it ends up in subglacial lakes, such as Subglacial Lake Vostok in East Antarctica. Subglacial Lake Vostok is about the same size, and coincidentally about the same shape, as Lake Ontario, but the resemblance is superficial. This is hardly an appropriate use of "superficial": Subglacial Lake Vostok, apart from being hundreds of metres deep and millions of years old, is buried under 3.5 kilometres of ice. By comparison Lake Ontario is an ephemeral puddle.

Vostok is the biggest one we know of, but a couple of hundred other subglacial lakes are also known and more are being found all the time. One was newly identified last year only 15 km from Amundsen-Scott, the American base at the South Pole. Now, in a paper in Journal of Glaciology, Helen Fricker and Ted Scambos have described in more detail than before a complex system of connected subglacial lakes in West Antarctica.

Two things fascinate me about this report. First, although you can find subglacial lakes by hauling a radio echo-sounder across the surface of the ice sheet, these were found and their behaviour tracked by repeated observation from 600-700 km away. MODIS is an imaging satellite, and ICESat is a satellite travelling in a precisely repeating orbit from which it points a laser altimeter at the Earth's surface. Subglacial lakes show up in ICESat track records as flat patches. The flatness means that the ice must be afloat. When Fricker and Scambos compared earlier with later observations, they saw these flat patches rising and falling through a few tenths of a metre, sometimes more. The only reasonable explanation is filling and draining of the subglacial meltwater. Filling pushes the surface up, draining allows it to subside.

The second fascinating thing is the connection of the subglacial lakes one with another. When one drains, one or more others fill a short time later. This kind of thing has been reported before (with an interesting very recent update by Sasha Carter and others), but the accumulation of evidence is showing that Antarctic subglacial hydrology must be a lively and a complicated business.

Fricker and Scambos show a block diagram of the subglacial topography, in which the lakes are arranged in the sequence you would expect. The uphill ones, when they are full (whatever that means), feed water to downhill ones. But "downhill" doesn't mean what you would expect. This topography is a fiction, consisting of the real topography of the bed of the ice sheet plus the effect of the varying thickness of ice. The water is pressurized by this overburden, so to understand the flow of the water you have to redefine downhill so that it means "direction of the force of gravity plus the force due to the ice-thickness gradient".
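Here is a minimal sketch of the standard way this redefined "downhill" is formalized - the hydraulic potential at the bed, assuming typical densities. This is the textbook argument, not anything specific to the Fricker and Scambos paper:

# The "effective downhill" for subglacial water: water flows down the gradient of
# phi = rho_w * g * z_bed + rho_i * g * H, so thinning ice can drive water up a rising bed.
RHO_W = 1000.0    # density of water, kg/m^3
RHO_I = 917.0     # density of glacier ice, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2

def hydraulic_potential(z_bed_m, ice_thickness_m):
    """Hydraulic potential (Pa) at the glacier bed."""
    return RHO_W * G * z_bed_m + RHO_I * G * ice_thickness_m

# Example: the bed rises 50 m between points A and B, but the ice thins by 100 m.
phi_a = hydraulic_potential(z_bed_m=0.0,  ice_thickness_m=3500.0)
phi_b = hydraulic_potential(z_bed_m=50.0, ice_thickness_m=3400.0)
print(phi_b - phi_a)   # negative: water would still flow from A to B, "uphill" along the bed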

There is no suggestion that this work is revealing environmental change. Antarctic meltwater has presumably been behaving like this for ages. But there are two outstanding questions, both with intriguing implications. How, as it surely must, does subglacial hydrology affect the behaviour of the ice sheet? And how does the meltwater get out?

As of last week, the United States government will own nearly 72% of General Motors (GM) after going through a bankruptcy procedure. Additionally, new Corporate Average Fuel Economy (CAFE) standards will target 35.5 miles per gallon (MPG) of gasoline, or approximately 15 kilometers per liter. The 35.5 MPG by 2016 is broken down as 39 MPG for cars and 30 MPG for trucks. Taken together, these actions early in President Obama's tenure have appalled free market capitalists. People discuss how political motives, mostly those pushing environmental agendas, are unduly forcing consumers to "buy cars that they don't want". They say the profit motive of a car company will best guide the decisions. Environmentalists say we are simply incorporating external costs, such as greenhouse gas emissions (global scale) or emissions of particulate matter and smog-forming gases (local scale).

First of all, GM had been losing money and market share for the last couple of years. The typical capitalist will tell you that private industry will make better decisions about making cars than the government, and I agree. Unfortunately in this case, GM made enough incorrect decisions over the last decade that they are now a failed company. GM was out-marketed and out-designed by Japanese and German automakers that focused broadly on the overall world market and were not over-committed to the US consumer who wanted to buy light trucks and sport utility vehicles. This is not to say that Toyota does not have top-selling full size pickups and SUVs that supported the sales of their flagship hybrid Prius.

Secondly, GM suffered from the general short-sightedness of mainstream economics. There is a major disconnect between the time frames of interest in economics and the time frame of energy resource development. The lure of making large margins by selling more light trucks and SUVs in the short term (think of quarters to years) was just too great. When global forces significantly increased the operating cost of these vehicles - interpret that as high oil and gasoline prices - people "wanted" more fuel efficient cars. Then when US gasoline prices dropped from over $4/gallon in the summer of 2008 to near $2/gallon by the end of 2008 (a tremendously quick change) people were again considering relatively fuel-inefficient cars, and now one can buy a hybrid vehicle off a car lot instead of needing to pre-order a Prius months in advance.

I believe we are crossing into a new era of less prosperity governed by increasingly expensive energy resources, and most politicians and economists do not comprehend the situation. The prerequisite of available energy for economic growth is simply not universally understood well enough. For instance, the usual reason cited for the tremendously quick rise and fall of oil prices in 2008 was that "speculators" were pushing up the price. Well, speculators are part of the market system, so you can't say that the system was being "gamed" by part of the system itself. For the first time in the history of oil, the world market found out what price of oil was high enough that consumers would legitimately begin to alter their lifestyles ... and that means a lower standard of living in the form of lower purchasing power. Because this oil price increase (and subsequent crash) was not politically driven, as during the 1973 OPEC oil embargo, it is a much more important data point. What most people neglect to discuss is that world oil production was essentially level from 2005 to 2008, hovering in the range of 85 million barrels per day, after experiencing annual increases of 1-1.5 million barrels per day from 1990 to 2005. This means that demand continued to increase, as evidenced by increases in consumption in China and the US, while oil production did not. The price of oil had to go up.

So we have a market system that can cause the price of oil to rise and fall by over 300% within the span of one year. The oil resource and the technologies for extracting oil cannot possibly change that quickly and at that magnitude. It takes up to a decade for investments in the oil and most other energy industries to come to fruition. In making investments, or incentives for investments, in energy production and generation infrastructure or energy consumption infrastructure - such as automobiles and buildings - governments and businesses cannot judge success or failure based upon time frames of only a few years. It takes approximately a decade to see the benefits of changes in energy investment. This time frame is much longer than quarterly financial reports and election time scales. There is much evidence to suggest that US presidents who made good long-term energy policies that caused pain in the short term lost reelection (e.g. Carter) or much of their popularity (Nixon).

Elected officials in the United States, the European Union, and around the world must focus energy policy on time scales longer than fiscal and election cycles, because the market is not set up to perform this necessary function. Putting a price on greenhouse gas emissions, or carbon, is the major option for connecting the long time scales (centuries) of energy and the environment with the short time scales (years) of economic markets. A price on carbon will be the most influential change to the economic system since banking began. It brings the externalities of energy resources and environmental impacts into economics in a way that has never been done before. Some detractors say it will destroy the economy to have such a "tax" on carbon, but what it really does is redefine what the economy is.

The economic influence of a price on carbon will be more of an artifact of the abundance and quality of current and future energy resources. In other words, the abundance of energy resources will dictate economic prosperity many times more than a tax/price on carbon. After all, if there were limitless fossil fuel supplies, we could (1) capture 90% of the CO2 emissions from all fossil fuel combustion at centralized power plants and (2) use the electricity to power industrial machinery and run homes and businesses as well as electrolyze water to create hydrogen as a stored fuel for transportation. In this case, the price on carbon wouldn't matter because we could use our limitless energy supply to prevent the carbon from being emitted. Unfortunately, we know that we do not have easily accessible and limitless supplies of fossil fuels.