
May 2010 Archives

The full Stokes equation is a precise description of the flow of a deformable continuum. It says that, in an ice sheet, the pressure gradient force and the force of gravity, resisted by the temperature-dependent stiffness of the ice, are balanced by the motion of the ice. Stated that way, it is simple arithmetic – but there is a devil of a lot of arithmetic to do.

At each point in the ice sheet, the pressures, or more accurately the shear stresses and the normal (compressive or extensional) stresses, are directed along each of three coordinate axes (pointing either way), and they can change from point to point along each of those axes. You want the best spatial resolution you can afford. But the resolution you need, if you are to maintain accuracy, is simply not affordable even on today's fastest computers. In a recent study, Gaël Durand and others remarked laconically that in going from a grid spacing of 20 km to 2.5 km one of their simulations ballooned from two days of supercomputer time to two weeks.
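To see why resolution is so expensive, here is a back-of-the-envelope count of horizontal grid cells, sketched in Python. The 4,000 km square domain is an illustrative assumption of mine, not a figure from the Durand study.

```python
# Back-of-the-envelope: how the number of horizontal grid cells grows as
# the spacing shrinks. The 4,000 km square domain is illustrative only.
domain_km = 4000.0

for spacing_km in (20.0, 10.0, 5.0, 2.5):
    cells = (domain_km / spacing_km) ** 2
    print(f"{spacing_km:4.1f} km spacing -> {cells:,.0f} horizontal cells")

# Going from 20 km to 2.5 km is a factor of 8 along each horizontal axis,
# i.e. 64 times more columns, before counting vertical layers or the
# smaller time steps that finer grids usually require.
```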

Various simplifications of the full-Stokes treatment have been developed. If you ignore along-flow velocity gradients, tending to stretch or squeeze the ice in the horizontal direction, you get the so-called shallow-ice approximation, oddly named because it works better the thicker the ice. Ice sheets tend to feed floating ice shelves. Here the underlying water cannot support horizontal shearing, so the ice flows at the same speed throughout the thickness; the vertical gradients of the horizontal velocities are negligible. This is the shallow-shelf approximation. Both approximations are valuable time-savers.

Unfortunately they both break down near the grounding line that separates the ice sheet from the ice shelf. Faced with the impracticality of the full-Stokes treatment and of any one-size-fits-all approximation, the dynamicists have been working hard to make the problem tractable.

Rectangular arrays of grid cells are definitely old hat. Nowadays the favoured approach to the numerics is the finite-element method, in which you describe your ice sheet with cells of variable size and shape. This is laborious but reduces the computational burden later. You give the ice sheet the full-Stokes treatment everywhere, but spend little time where full Stokes isn't really necessary.

There is an obvious snag. The grounding line is the focus of interest because it might migrate unstably towards the interior of the ice sheet. But if it migrates away from where you have laboriously set up lots of little cells, you are sunk. Instead of migrating in little steps it gains the computational freedom to take great big ones. It can, and may well, end up in some entirely unrealistic location.

So Durand and co-authors created an adaptive grid consisting of cells that were small near the grounding line, growing larger progressively with distance from it. But they re-centred the grid on the grounding line after each model time step, such that the little cells kept company with the grounding line. More purely preliminary labour, but with the smallest cells only 200 m in size they were able to obtain consistent numerical behaviour and to confirm Christian Schoof's finding, from a different theoretical angle of attack, that grounding lines are indeed unstable when the bed slopes upwards.
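Here is a minimal, one-dimensional sketch of that adaptive strategy, assuming cell sizes grow geometrically away from the grounding line. The 200 m minimum cell size is the figure quoted above; everything else (window width, growth factor, the pretend retreat) is invented for illustration.

```python
def build_grid(gl_position_m, half_width_m=50_000.0,
               min_cell_m=200.0, growth=1.3):
    """Return sorted 1-D cell edges clustered around the grounding line.

    Cell widths start at min_cell_m right at the grounding line and grow
    geometrically towards either edge of the window. Illustrative only:
    apart from the 200 m minimum, all numbers are invented.
    """
    edges = [gl_position_m]
    for direction in (+1, -1):            # march outwards in both directions
        x, width = gl_position_m, min_cell_m
        while abs(x - gl_position_m) < half_width_m:
            x += direction * width
            edges.append(x)
            width *= growth
    return sorted(edges)

# Re-centre the grid on the grounding line after each model time step,
# so the smallest cells keep company with the grounding line.
gl_position = 0.0
for step in range(3):
    grid = build_grid(gl_position)
    # ... one time step of the ice-flow model would run on `grid` here ...
    gl_position -= 500.0   # pretend the grounding line retreats 500 m
```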

There is irony in this greed for number-crunching power. Long before ice sheets became objects of scientific scrutiny, Stokes laid all of the conceptual groundwork with a pencil (or maybe a quill – did they have pencils in the 1840s?). Much of our understanding of how ice sheets work was developed on computers to which you would not give desk room (even if they would fit). Now, the glacier dynamicists are right up there with the astrophysicists, climate modellers and the like, baying for time on unimaginably fast computers that have trouble satisfying the demand.

The glaciological demand, though, is real and pressing. The full-Stokes treatment is getting attention because of the socioeconomic risks of grounding-line instability, which was identified in the Fourth Assessment by the Intergovernmental Panel on Climate Change as one of our biggest gaps in understanding of how the Earth works. My dynamicist colleagues have to have something to say about it in time for the IPCC's next assessment, due in 2014. They have made tremendous progress by working overtime, but if yet more time is what it takes to crack the problem then I hope they will resist this pressure to deliver.

Offshore wind energy is booming, with, for once, the UK in the lead, having installed over 1,000 megawatts (MW) of offshore wind-farm generation capacity. Denmark is second in the league table, with 640 MW in place, followed by the Netherlands at 250 MW and Sweden at 164 MW. But several other EU countries are moving ahead. Belgium, Finland and Ireland all have working offshore projects, while Germany has started up its first large offshore project – it wants 10,000 MW by 2020. France has announced 10 zones for offshore projects off its Atlantic and Mediterranean coasts – it wants to have 6,000 MW in place by 2020. However, the UK seems likely to stay in the lead – it aims to install up to 40,000 MW by around 2020, maybe more.

That's not to say there have not been problems. Costs have risen, in part because of the increased cost of materials like steel, which in turn reflects the increased costs of conventional energy. And there have been teething problems with some of the designs. A minor fault has been detected in the design of the transition piece that connects the tower to the monopile foundations of the newer machines, which has resulted in movement of a few centimetres in a number of turbines. Fortunately it is not thought that there is any safety risk or threat to service or output, and the plan is evidently to deal with it as part of the usual rolling programmes of operation and maintenance, with any necessary repairs carried out turbine by turbine, so that there should be no impact on the operation of the rest of the wind farm. The fault evidently does not affect earlier offshore designs.

Clearly issues like this will have to be taken into account in the design of the new, much larger 10 MW machines now being developed. However, one of the newer designs, the 10 MW SWAY floating turbine being developed in Norway, won't face quite the same problem – it's actually designed to tilt by 5–8 degrees in the wind.

Outside the EU, China installed a 3 MW offshore turbine in 2009, the first unit of a 100 MW project. And, after nearly 10 years of sometimes heated debate, the Cape Wind project in Nantucket Sound in New England has at last got the go-ahead. It will be the USA's first offshore wind farm – with 130 turbines. But many others are being considered, including floating versions for use in deeper water. For example, researchers at the Worcester Polytechnic Institute (WPI) have a $300,000 grant from the US National Science Foundation for a three-year project on floating wind-turbine platforms.

The EU is of course well advanced in this field – with, for example, the Norwegian SWAY device mentioned above, StatoilHydro's Hywind and the UK's 10 MW Nova project. There is also the novel floating Poseidon wave-and-wind platform system being developed in Denmark – a 10 MW version is now planned.

But a report released by the US Dept of Energy in 2008 says that the 28 US states that have coastlines consume about 80% of all the electricity the US produces, so maybe they'll have an incentive to push ahead too.

As in the EU, the idea of an offshore supergrid to link up offshore wind projects has also been mooted in the US. Researchers from the University of Delaware and Stony Brook University say that linking Atlantic Coast offshore wind parks with high-voltage direct current (HVDC) cables under the ocean would substantially smooth out the fluctuations. As a fix for intermittency, they say "transmission is far more economically effective than utility-scale electric storage".

http://www.pnas.org/content/early/2010/03/29/0909075107.full.pdf+html
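The statistical intuition behind that smoothing claim can be seen in a toy calculation. The sketch below assumes five hypothetical wind farms with independent, random hourly outputs (real coastal sites are partially correlated, so the effect would be weaker than shown): the combined output varies proportionally less than any single site's.

```python
import random

random.seed(1)

# Toy hourly outputs (MW) for five hypothetical wind farms, fluctuating
# independently around the same mean: a crude stand-in for sites spread
# along a long coastline. Real sites are partially correlated.
hours = 1000
farms = [[max(0.0, random.gauss(100.0, 40.0)) for _ in range(hours)]
         for _ in range(5)]

def relative_spread(series):
    """Coefficient of variation: standard deviation divided by the mean."""
    mean = sum(series) / len(series)
    var = sum((x - mean) ** 2 for x in series) / len(series)
    return var ** 0.5 / mean

single = relative_spread(farms[0])
combined = relative_spread([sum(hour) for hour in zip(*farms)])
print(f"one farm alone:      {single:.2f}")
print(f"five farms combined: {combined:.2f}")   # noticeably smaller
```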

Currently there are proposals for five offshore wind farms from Delaware to Massachusetts. As plans stand, each would have separate underwater transmission cables linked into the nearest state electric grid. But the report suggests a single, federal offshore Atlantic Transmission Grid would be a better bet. Co-author Brian Colle said: "A north-south transmission geometry fits nicely with the storm track that shifts northward or southward along the U.S. East Coast on a weekly or seasonal time scale. Because then at any one time a high or low pressure system is likely to be producing wind (and thus power) somewhere along the coast."

http://solveclimate.com/

Offshore wind isn't the only offshore option. The use of wave energy and tidal streams is also moving ahead around the world, with once again the EU, and the UK especially, in the lead. For example, 1,200 MW of wave and tidal-current turbine projects have just been given the go-ahead in Scotland. But US company Ocean Power Technologies (OPT) has been making progress winning contracts for its PowerBuoy wave device, including one from the Australian government.

Tidal-current turbine projects are also developing around the world; for example, Ireland's OpenHydro has linked with Nova Scotia Power to deploy a 1 MW tidal turbine in the Bay of Fundy, and the UK's Marine Current Turbines Ltd is to install a 1.2 MW SeaGen there too. Meanwhile, South Korea is pushing ahead with a range of ambitious tidal projects, over 2,000 MW in all, while businessgreen.com has reported that Israeli marine renewables company SDE Energy recently completed construction of a 1 MW wave power plant in China. The $700,000 plant consists of a floating buoy attached to a breakwater. It's been installed near the city of Dong Ping in Guangdong province. SDE is also reportedly in the final stages of negotiations over other projects to be built near Zhanjiang City and in the province of Hainan. SDE has talked in terms of ultimately having 10 GW of wave energy systems along the Chinese coastline.

It looks like offshore renewables could really become a significant new option. The big advantage of going offshore is that there is less visual impact. The energy potential is also large – wind speeds are usually higher and less variable, and, for tidal-flow systems, there is a lot more energy in moving water than in moving air. But there may be some environmental impacts (e.g. on fish and sea mammals), something that the device developers are very keen to avoid through careful location and sensitive design.

However, it is argued that relatively slowly rotating free-standing tidal rotors, or wave energy buoys or platforms, should not present many hazards, while it seems that offshore wind turbine foundations can provide a substrate for a range of sea-life to exploit. As with on-land wind turbines, birds can be at risk of collision with moving wind turbine blades, but observations have suggested that sea birds avoid offshore wind turbines.

Even so, environmental and wildlife impact issues need attention, for example in terms of influencing the choice of location and layout. Overall, a precautionary approach has been adopted: developers have to submit detailed Environmental Impact Statements and there is much research on specific impacts.

http://www.cefas.co.uk/publications/files/windfarm-guidance.pdf

But most of the problems seem to be during the installation process (e.g. noise impacts when driving piles for wind-turbine foundations and disruption during cable laying). Once installed, there seem to be fewer problems, other than possibly sea-bed sediment movements, although navigation hazards have led to some debates.

http://www.offshorewindfarms.co.uk/Pages/COWRIE/

* A new UK report co-ordinated by the Public Interest Research Group puts the total practical UK resource for offshore wind, wave and tidal power at 2,131 TWh p.a. (six times current UK electricity use: www.offshorevaluation.org). I'll be looking at that in a subsequent blog.



In the continuing saga of the oil leak after the April 20 explosion and subsequent sinking of Transocean's Deepwater Horizon drilling rig, operated by BP, there has been no shortage of people quoted in the news media wondering why we can't just throw money at the problem and have the well plugged. We've heard "Why aren't BP and the government responding?" over and over. But they have been responding, just ineffectively until BP's "top kill" procedure, which seems to be having success (as of this writing) but has not yet been completed by cementing the well. The idea that we should easily be able to stop this leak stems from the fact that many people are uneducated about the principles of science, and that all things new are viewed as equally innovative. If this fallacy persists it will undermine research and education in energy.

To give an example, I've heard prominent policy speakers on prominent talk shows say that if we'd simply hire Google employees to tackle the problem of plugging the leaking oil well, it would be completed within days. This mentality assumes that, when it comes to environmental remediation of an oil leak a mile below the sea surface, the people who invented the drilling technology itself are at some level less competent than those who make their revenue from linking advertisements to Web searches. Granted, both Google and BP are generally very good at what they do. But suggesting Google is best qualified to stop an oil leak is akin to suggesting that BP should be in charge of Google's strategy for operating its search engine in China. This suggestion also implies that the past research on energy alternatives has been performed by buffoons.

Recall "The Marine Biologist" episode of the popular 1990s US sitcom Seinfeld: we might as well ask Kramer (the clumsy neighbor) to hit a golf ball into the ocean to plug up the well, just as he plugged up a whale's blowhole with his "hole-in-one". Oh wait, I forgot, he actually did plug up the hole. In the case of the whale – not the well – unplugging the passageway was what was needed. The call came out for a marine biologist, a relevant expertise for the task at hand. The fact that George (who often lied about his intellectual capabilities to get ahead) solved the problem by pretending to be a marine biologist, and did so successfully, is relevant to my point. Society perceives that we don't need a foundation in science and engineering to solve energy problems that involve science and engineering.

We as people are more prone to act in times of crisis than when continual change is required. Former President George W. Bush's decision to go to war with Iraq to oust Saddam Hussein was based upon the highly uncertain belief that there were weapons of mass destruction (WMD) that needed confiscation before Saddam chose to use them. As we all know today, there were no WMD in Iraq, but within a few years we at least knew the answer to the question.

We wait until financial crises occur, such that we have to take drastic measures to bail out banks, and then justify our actions by saying we didn't have time to pursue other solutions. These justifications persist even though past data show that total debt in the US, public and private, has been increasing continuously for practically its entire history, and is now near 350% of GDP. And the US public debt is at 90% of GDP. With these trends, why do we need a crisis to act? In group planning exercises, simulated crises are often created to force people to make concrete decisions and to explore the effectiveness of those decisions. For example, say that your region is experiencing drought, and the demand for water is 10% higher than the supply – for whom, and by how much, do you reduce water access to meet the supply?

In the research community we should do a much better job of explaining the differences in making decisions under uncertainty. There are measurable decisions that produce short-term feedback about their effectiveness (e.g. acts of war, plugging an oil leak), that have highly uncertain outcomes, but that history has shown people pursuing out of choice or necessity. There are also decisions whose feedbacks occur over long times and which, because of their dispersed nature, succeed only through multiple coordinated actors (e.g. climate-change mitigation, energy investments, land-use management to preserve aquatic environments, such as the prevention of hypoxic zones). We're good at the former and bad at the latter. Because these latter decisions about environmental management require group coordination, regulation and government involvement are usually needed, and those who are affected but unaware question the motives, to the point of noncompliance. Only after convincing them that their personal actions make a difference as part of a coordinated effort do they believe they should change their actions.

With regard to energy investments, given the existing measures for economic growth, which discount the future and keep environmental impacts external to the growth equation, oil still makes sense. As long as value is measured by the flow of goods instead of the stock of goods, we will favor energy- and fuel-consuming items and systems. The "innovative" energy-efficient investments in web servers that lower the energy per bit have simply followed Jevons' paradox, as we now process even more bits than were saved. We stream movies on YouTube and constantly check the web on our mobile phones.

We assume that solar electric generating technologies will someday be cheaper than coal, and we assume that putting a sufficient price on greenhouse-gas emissions will drive innovation in energy systems that enable continued living at high standards in the developed world while bringing the developing world up to par. Most of these assumptions about innovation in new technologies are based upon the study of gadgets that consume, rather than produce, energy. There is a reason why solar power is not cheaper than coal power – it is hard to take a diffuse energy resource such as sunlight and make it as productive as energy-dense resources like fossil fuels. There are real physical constraints that limit the power that can be produced. These physical constraints can't be removed by programming a search engine (Google), or by mimicking a sitcom (Seinfeld), or by simply believing they will work.

We need to understand how well renewable energy systems can replace fossil fuels. This is not because the fossil fuel industry is necessarily evil, but because fossil resources will inevitably become uneconomical, no matter how we quantify that. And because today renewable energy technologies are manufactured by burning fossil fuels, they will also not be economical in the long run unless they are made with their own energy as an input.

We are all familiar with the idea of hardness. Falling on your knees is more painful if you fall on a pavement than on a lawn. But most of us would be puzzled to make the idea precise and quantitative. The geologists, thanks to Friedrich Mohs (not Moh), have a good working scale for the hardness of minerals. More surprisingly, so do the snow scientists for the hardness of snow.

Hardness can indeed be defined precisely. All those with a serious interest in the hardness of substances agree that the everyday concept is proportional to the force required to produce an indentation in the surface of the substance.

In Mohs' hardness test, you press a series of test minerals into the mineral whose hardness is to be estimated. The one that produces an indentation, and powder when you drag it across the test surface, gives you the Mohs' hardness of the unknown mineral. Mohs' hardness is just a number on a scale from 1 to 10, but careful studies have shown that it is proportional to the logarithm of the force, measured in newtons, N, or preferably N m-2 (newtons per square metre, because the size of the indenter makes a difference) that is applied to the surface.

It is the same with snow, but the test indenters are even simpler. In the classical hardness test for snow, introduced by de Quervain in 1950 and explained in last year's new edition of The International Classification for Seasonal Snow on the Ground (search on "Fierz"), you use successively (1) your fist, (2) the ends of your four fingers, (3) just one finger, (4) the tip of a pencil and (5) the blade of a knife. You apply gentle force, and your hardness index is the number of the first list entry that penetrates the snow.

It sounds very fuzzy, doesn't it? Surely these indenters differ in size and shape? What does "gentle" mean? Are you allowed to wear gloves? (The rugged answer to that one, apparently, is No.) Do you sharpen the pencil? The International Classification says Yes, but in the only photo I have ever seen of a pencil in snow-science service it was unsharpened. Why would people bother with a procedure with so many question marks attached?

The answer to that one, of course, is that the procedure works. It is also fast, and above all cheap. Assuming that you borrowed the pencil and somebody gave you a penknife for your birthday, the cost is nil. And in a recent paper in Annals of Glaciology, Höller and Fromm show that the hand hardness index does reproduce, with acceptable accuracy, measurements made with more advanced instruments. If your fist indents the snow, the implied force is about 20 N and the implied strength or resistance about 4,000–8,000 N m-2. If you need a knife to do the job, the implied strength is about 0.1 to 2 million N m-2.
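As a rough translation into code, here is a minimal lookup of the scale, using only the de Quervain indenter list and the two strength figures quoted above from Höller and Fromm. The intermediate indices are deliberately left blank rather than guessed, and the knife range is my reading of "0.1 to 2 million" as 0.1–2 million N m-2.

```python
# Hand hardness index, after de Quervain: the index is the number of the
# first indenter on the list that penetrates the snow under gentle force.
INDENTERS = {1: "fist", 2: "four fingers", 3: "one finger",
             4: "pencil tip", 5: "knife blade"}

# Implied strength (N m-2), using only the two figures quoted above from
# Höller and Fromm: fist roughly 4,000-8,000, knife roughly 0.1-2 million.
# Intermediate indices are left out rather than guessed.
IMPLIED_STRENGTH = {1: (4_000, 8_000), 5: (100_000, 2_000_000)}

def describe(index):
    name = INDENTERS[index]
    if index in IMPLIED_STRENGTH:
        low, high = IMPLIED_STRENGTH[index]
        return f"index {index} ({name}): roughly {low:,} to {high:,} N m-2"
    return f"index {index} ({name}): somewhere between the fist and knife values"

for i in sorted(INDENTERS):
    print(describe(i))
```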

Why worry about the hardness of snow? There are plenty of reasons. Soft snow, especially a lot of soft snow, is a bore if you have to walk over, or rather through, it. Skiers have a fairly obvious interest in the hardness of snow on the surface. But probably the biggest justification for snow scientists who stick their fingers into snow is the risk of avalanches.

Snow can be jerked into catastrophic motion in a range of ways, and its hardness is only one of the factors to be considered. But a safe prediction is that abrupt failure is more likely where there is a sharp discontinuity of strength between adjacent layers. More precisely, the shear strength is the ability of the snow on either side of an interior plane to resist relative motion over that plane. The hardness test, applied at regular intervals in a snow pit, measures the shear strength in a roundabout way, by measuring the compressive strength of adjacent layers.
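A toy sketch of how such a pit profile might be screened for sharp contrasts between adjacent layers; the profile values and the threshold are invented for illustration.

```python
# Hand hardness index at successive depths in a snow pit, top to bottom.
# The values and the threshold are invented for illustration.
profile = [1, 1, 2, 2, 5, 5, 4]
THRESHOLD = 2   # flag jumps larger than this between adjacent layers

for layer, (upper, lower) in enumerate(zip(profile, profile[1:])):
    if abs(lower - upper) > THRESHOLD:
        print(f"sharp strength contrast below layer {layer}: "
              f"hardness {upper} -> {lower}")
```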

Predicting avalanches is difficult at best, but if it were expensive then the prediction might not happen at all. So the hand hardness index has won and held a place for itself in keeping the death rate down in cold, mountainous terrain. It also tends to confirm my hypothesis that science doesn't have to be expensive to be worthwhile.


Few people see nuclear power as a cheap option. The capital cost is high, and the ultimate cost, if something goes seriously wrong, could be very large. The UK's nuclear liability law is based on the Paris and Brussels Conventions on Nuclear Third Party Liability, which have been in operation since the 1960s. The operator is required to take out the financial security necessary to cover its liabilities, and in the UK this is currently set at £140m. Recent amendments, which are not yet in force, are aimed at ensuring that greater compensation is available to a larger number of victims in respect of a broader range of nuclear damage. In particular, it will be possible to claim compensation for certain kinds of loss other than personal injury and property damage, including loss relating to impairment of the environment. The period of operators' liability for personal injury has been increased from 10 to 30 years and, more generally, the limit on operators' liability has been increased to €700m. That's the situation as summarised recently by Lord Hunt, then energy minister.

However if the worst comes, then even €700m is unlikely to be enough. The cost of just upgrading the emergency containment shelter at Chernobyl in 1997 was $758 m. Quite apart from the loss of life, with estimates of early deaths ranging up to several thousand and beyond, and also lifelong illnesses (e.g. related to immune system damage) for some of those exposed, the total economic costs of the Chernobyl disaster were much larger: e.g. Belarus has estimated its losses over 30 years at US $235 bn, with government spending on Chernobyl amounting to 22.3% of the national budget in 1991, declining gradually to 6.1% in 2002. And 5-7% of government spending in the Ukraine still goes to Chernobyl-related benefits and programmes. www.greenfacts.org/en/chernobyl

Some of these estimates may be inflated (like any insurance claim), but it seems clear that major nuclear accidents are expensive. They are of course rare. Less rare is overspend on building nuclear plants. The EPR being built in Finland has seen its cost overrun put at 55% so far. If and when a new nuclear construction programme goes ahead in the UK, we are told it will be the private sector's problem – the government is not providing subsidies. Lord Hunt commented: 'We have taken clear steps to minimise the risk of costs falling to Government'. But he added: 'Of course, in extreme circumstances, if the protections we have put in place prove insufficient, the Government would step in to meet the cost of ensuring the protection of the public and the environment'.

It is of course not entirely true that the taxpayer is not providing subsidies for nuclear power. For example, the Labour government helped set up a range of new support projects, including £20m for a 'Nuclear Centre of Excellence', 'up to £15m' for a 'Nuclear Advanced Manufacturing Research Centre' and an £80m loan to help a Sheffield steel company link into the nuclear supply chain. And of course, via the NDA, it has taken over the legacy debts associated with waste management and decommissioning old plants and facilities – over £70 billion and rising. For more on subsidies see: www.energyfair.org.uk/home.

In line with the Con-Lib Dem coalition agreement, Chris Huhne, the new Lib Dem Energy and Climate Change Secretary, has said that there will be no public subsidy for nuclear power. He even seemed, perhaps ill-advisedly given the liability arrangements mentioned by Hunt above, to extend that to support in the event of a disaster. He told the Times (15/5/10): "That would count as a subsidy absolutely. There will be no public bailouts . . . I have explained my position to the industry and said public subsidies include contingent liabilities." If that is adhered to strictly, it would certainly face the nuclear industry with extra, and possibly open-ended, costs. But then, in line with Lib Dem views, Huhne has made it clear that he doesn't think the industry can actually proceed without subsidies.

The Coalition does, however, want the carbon trading system to yield much higher prices for carbon, so that fossil fuel would cost more. Electricity prices would then rise across the board, helping nuclear. Renewables would of course also benefit, and there are good environmental reasons for increasing energy prices. But it would be an indirect subsidy for nuclear and renewables – paid for by consumers. In addition, the Coalition has backed the idea of setting a minimum 'floor price' for carbon, to stabilise the carbon market. In the event of a carbon-market downturn, it would have to be backed up by the taxpayer – in effect a direct subsidy.

Against these and other costs and risks there will also be benefits – low-carbon energy and employment gains. However, that would also be true if the money were spent instead on other low-carbon energy options – of which there are plenty, many of which could be cheaper and could be deployed more rapidly.

The UK is backing some of them – e.g. offshore wind, wave and tidal – but it is sometimes argued that nuclear will add diversity to the mix. The problem is that there may be conflicts and incompatibilities in trying to mix basically inflexible nuclear with variable renewables. That could lead to extra costs and waste – for example, when there is excess wind generation available over and above the nuclear base-load, the wind input will be 'curtailed', making wind power artificially expensive and wasting useful carbon-free energy. Moreover, since overall budgets are limited, any spending on nuclear must inevitably mean less spending on renewables, so they may not all develop as rapidly as they could. We may thus end up with less overall diversity.

To be fair, that would also be true if we focussed on some other options. Some critics say that we are focussing too much on wind power. But that has to be put in perspective – over the years, nuclear has had the lion's share of R&D and public investment in new energy technology, and we are still spending about half our energy R&D funding on nuclear fission and fusion. By comparison, renewables, as a group, have been starved of funding until recently. Wind has been the first of the new options to break through, but hopefully we can now develop a fuller range of renewable options. After all, as a group, they are very diverse, and arguably offer a viable route to a more sustainable future. From that perspective, nuclear just gets in the way – diverting technical, skill and financial resources away from renewables and other sustainable options, and making it harder and more costly to deliver a viable low-carbon future.


In the aftermath of the fuss about Himalayan glaciers, I have noticed a tendency among my colleagues to hesitate about citing so-called "grey" literature – loosely, stuff that has not been reviewed by scientific peers and accepted for publication by an editor acting on recommendations from such reviewers.

Some have argued that we should stop citing any publication that has not appeared in a peer-reviewed journal. The snag about this idea is that it would make scientific studies of the climate in general, and glaciers in particular, almost impossible. Much of the raw data appears in documents, and nowadays files on the internet, published by governments or quangos. Sometimes there is a reviewed paper to document the work underlying the measurements, sometimes not.

In glaciology, some of our mass-balance measurements are superbly documented in high-profile journals. I don't know of any wrong numbers in this kind of source, but by definition the documentation is not superb if it doesn't include a thorough analysis of uncertainties. It is a pity, but hardly the fault of the authors, that readers in a hurry tend to read the name of the journal but to skip the thorough analysis.

Some measurements are mentioned only briefly in very obscure documents, with few or no accompanying details. You can only decide whether to accept this kind of measurement by reading critically and judging whether the measurers knew what they were doing. (I will come back to this idea of reading and judging.)

Most of the measurements lie between these extremes. You can find them in black and white, with some background information, but in a grey source. Your choices, in a context in which you are desperately short of hard facts, are to reject the measurements because they were not peer-reviewed, reject them because they do not stand up to judgement, or accept them with appropriate reservations.

It is not as if publication in the peer-reviewed literature is a guarantee of correctness. There are some appalling wrong results in the literature. Among the most famous examples is the 1989 claim by Fleischmann and Pons (in the Journal of Electroanalytical Chemistry, volume 261(2A), pp301–308) that they had observed cold fusion, that is, the fusion of atoms at room temperature. It would have altered our world forever, but it was a report better suited to the Journal of Irreproducible Results.

During the recent furore, a recurrent criticism of the Intergovernmental Panel on Climate Change was that it had cited grey literature, as if that were a mortal sin. Indeed, at the centre of the Himalayan maelstrom was a report by the World Wildlife Fund, usually berated by the ill-disposed or mischievous as a "green advocacy group". Somebody then counted other IPCC references to WWF reports and found 16. I haven't read the other 15, but the Himalayan one was in fact pretty good – thorough, reliable except for one howler and, ironically, reviewed. The WWF made the same regrettable error as the IPCC's Himalayan-glacier authors, namely swallowing nonsense uncritically from a popular science magazine. It is further ironic that the WWF and the IPCC can be shown to have made this error independently, but that the IPCC erred additionally by splicing in a reference to the WWF instead of to the popular magazine. So the WWF was a victim of friendly fire, but not an innocent victim.

The culminating irony, however, is that the IPCC's guidelines for the treatment of grey literature, in Appendix A of the Principles Governing IPCC Work, are a model of reasonableness. (See Annex 2 in particular.) Had they been followed with respect to the Himalayan glaciers the fiasco would never have happened. The IPCC's guidelines for handling work that has not been reviewed by peers boil down to "read the darn thing for yourself and do your own review".

It strikes me as good advice. If more people – preferably everybody – were to heed it, we would all be better off. And whether the source were grey or not wouldn't make any difference. The most basic error is accepting authority as a substitute for reasonableness.

A study by consultants Poyry on behalf of the European Wind Energy Association claims that wind power generation can reduce electricity prices by between €3 and €23 (£2.60 and £19.90) per megawatt-hour, depending on the scale and location. Entitled Wind Energy and Electricity Prices, the report looks at a range of studies of prices and concludes that an increased penetration of wind power reduces wholesale spot prices. Overall it says that "due to market dynamics and the lower marginal costs of wind power compared to conventional power, the spot electricity prices decrease by 1.9 €/MWh per additional 1,000 MW wind capacity in the system".
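Taken at face value, that headline figure is a simple linear rule of thumb. A minimal worked example (the 5,000 MW of added capacity is an arbitrary choice of mine):

```python
# Poyry/EWEA headline figure quoted above: spot prices fall by about
# 1.9 EUR/MWh for each additional 1,000 MW of wind capacity in the system.
REDUCTION_PER_1000_MW = 1.9   # EUR/MWh

def spot_price_drop(added_wind_mw):
    """Implied wholesale spot-price reduction in EUR/MWh (linear rule of thumb)."""
    return REDUCTION_PER_1000_MW * added_wind_mw / 1000.0

print(f"{spot_price_drop(5000):.1f} EUR/MWh")   # 5,000 MW -> 9.5 EUR/MWh
```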

That already seems to have been playing out in practice. Poyry reports studies suggesting that, for example, in Germany, increased wind power has reduced costs by €1.3–5 bn per year. One result, it seems, has been that some power companies have begun to reduce charges to consumers, although this is mainly driven by the need to avoid wind curtailment: at periods of low demand, when there is excess wind available, a price-rebate system has been used in Germany to avoid curtailment (i.e. power dumping), and, according to an article in Offshore Wind Magazine (OWM), "payments have risen as high as 500.02 euros a megawatt-hour" for some users. In effect it's negative pricing – selling power at below cost.

Anti-wind cynics might of course say that this just goes to show that we should not subsidise wind – then suppliers wouldn't have to pay people to use it when there was excess. However, the counter-point is that, although the capital cost is high, so that subsidies may initially be needed to establish wind in the market, once it is established the marginal operating costs are low (there is no fuel cost), making it very competitive, especially if curtailment can be limited.

Dropping the price locally can certainly help to avoid having to curtail wind-turbine output, but of course, like curtailment, this eats into profits. And since spot-price volatility is unpredictable, it is hard to plan ahead. So arguably there have to be limits on how low prices can fall, or else it becomes uneconomic to produce power, at least within a competitive market system.

To avoid this, and the risk of the market collapsing, OWM notes: "Nord Pool, the Nasdaq OMX Group Inc.-owned Scandinavian power bourse, last year took steps to encourage generators to limit production by implementing a minimum price. The most generators would pay users to take their power is 200 euros per megawatt hour if there is excess electricity from too much wind." It added that the measures are meant to "increase the effectiveness of the market, forcing power generators to consider reducing their electricity generation or having to pay for delivering electricity". Similar arrangements have evidently been made in Australia, where excess wind generation has also at times been forcing pool prices to go negative, leading the market regulator to set a minimum floor price. But that is a pretty crude approach – fixing the market. It's not a sustainable long-term solution.

Diversity is another way out. OWM noted that RWE, Germany's second-largest utility, minimizes the risk of having to pay consumers to use power by using a "broad" range of different generation technologies in different markets. RWE said that rebates or negative prices didn't have a big effect on the company.

Longer term, what's needed is diversity plus better and wider-ranging grid links, which can allow excess power to be sold further afield, and also compensate for any shortfalls by drawing power from other areas (i.e. via regional and even inter-country power trading). OWM noted that this is already done among France, the Netherlands and Belgium, and Germany plans to join them soon – selling excess power to power-short areas. That's where the HVDC supergrid would come into play, allowing for EU-wide optimization of green energy.

Storing excess energy (e.g. using it to pump water up into hydro reservoirs) is another option, and this could be done on an EU-wide basis, using the supergrid. That's what Denmark already does with some of its surplus wind-derived electricity, selling the excess to Norway and Sweden, who, if they don't need it immediately, may store it in their hydro reservoirs. Denmark then, in effect, buys it back when there is a wind shortfall in Denmark. The only problem is that the Danes get less for the excess wind that they sell than they have to pay for the bought-back power.

Negative pricing and surpluses are not just an EU issue. OWM noted that, according to the Electricity Reliability Council of Texas, Texas had so-called negative power prices in the first half of 2008 because wind turbines in the western part of the state weren't adequately linked with more populated regions in the east. See my earlier blog on grid congestion and wind-curtailment issues in the US.

OWM quoted Andrew Garrad, chief executive officer of GL Garrad Hassan, a wind-consulting company, who commented that in parts of Texas some utilities are using wind power because it's the cheapest form of energy. But until there's more integration and better transmission grids, prices will probably continue to fluctuate, leading to negative prices, with payment to consumers being reflected as a discount on their monthly bills. Garrad concluded that "we do need to get the right market mechanisms in place" to better integrate wind power into energy grids.

That's clearly true. But rather than just leaving it up to the market or fixing floor prices, one idea might be to introduce a cross-feed tariff on power transfers, to help balance cash and energy flows and so optimize carbon savings by avoiding curtailment. For example, it could raise money, via a levy on the overall transfer or from sales, to stimulate investment in grid upgrades and to reward suppliers who can deal with excess generation, and then meet shortfalls, using energy-storage facilities.

Source: www.offshorewind.biz/2010/04/25/offshore-wind-boom-lowers-electricity-prices/

A version of the OWM article originally appeared in Bloomberg Businessweek, authored by Jeremy van Loon.


Delhi urban transport is surprisingly difficult to grasp. Delhi travellers can not only choose between cars, taxis, different sorts of buses, the subway, bicycles or walking, but can also take three-wheelers, such as auto rickshaws or bicycle rickshaws. The old city is so narrow and crowded with people that a single car suffocates street flow, putting rickshaws and hundreds of pedestrians on hold. In contrast, South Delhi boasts wide avenues and immense space.

Delhi urban transport is also confronted with massive challenges. Rapid urbanization and, more crucially, rapid motorization cause constant congestion, asthma-provoking air pollution, honking-induced noise stress, regular accidents, and GHG emissions (not that this last matters much to local citizens). A number of recent articles address this issue. Sen et al. calculate the marginal external costs of urban transport in Delhi [1]. The marginal costs are dominated by congestion induced by private motorized transport, with air pollution a dominant second. Noise and accidents play a much smaller role. This is reminiscent of results from Beijing, where urban-transport-induced air pollution produces about the same monetized externalities as congestion does [2]. For citizens of Asian cities the massive consequences of air pollution may come as no surprise. Nonetheless, the observation matters because it differs from the situation in European or US cities, and hence transportation planners in Asian cities need to pay particular attention to air pollution.

Another paper, by Woodcock et al., dealing with the health effects of alternative urban transport in London and Delhi, uncovers an additional dimension [3]. The main finding is that a combination of reduced reliance on motorized traffic and increased active travel (pedestrian and cycling modes) produces huge health benefits. Crucially, in both cities, but even more so in Delhi, the health benefits are dominated by active travel itself (reductions in ischaemic heart disease, cerebrovascular disease and diabetes). The benefits of reduced air pollution are, however, also very significant. For Delhi, the burden of road-traffic injuries would also be reduced in this scenario, compared with business as usual.

Han et al. calculate the benefits of a shift from individual motorized transport to mass rapid transit in Delhi [4], recommending an acceleration of the development of the rail-based system, increased fuel taxes and a lower priority for road extension. In contrast, coming from an external-cost perspective, Sen et al. recommend market-based mechanisms, such as road pricing [1].

The Delhi municipality is of course cognisant of the challenge. By 2002 the public transport fleet had already been converted from diesel to CNG [5], improving air quality. The metro system is rapidly expanding, with 190 km of track expected to be open for the Commonwealth Games in September [6]. The network is expected to span 413 km in 2021. Delhi has also constructed its first bus rapid transit line. Arguably, the municipality does its share in the domain of public-transit investment – again similar to Beijing. However, and sharing this experience with Beijing too, mass rapid transit, which mostly induces additional transport demand, is not enough in times of rapidly increasing motorization to make even a dent in the increase of the massive generalized congestion costs. To make the city more liveable, cars need to be restrained by physical or financial disincentives. A newspaper report from May 11, 2010, suggests that the Delhi municipality is heading in this direction: a congestion charge is under discussion [7]. By implementing this more aggressive measure, Delhi would move seriously towards sustainable transport and would leapfrog cities like New York and Beijing.



[1] Sen et al. (2010) Estimating marginal external costs of transport in Delhi. Transport Policy 17: 27–37

[2] Creutzig and He (2009) Climate change mitigation and co-benefits of feasible transport demand policies in Beijing. Transportation Research D 14: 120–131

[3] Woodcock et al. (2009) Public health benefits of strategies to reduce greenhouse-gas emissions: urban land transport. Lancet 374: 1930–43

[4] Han et al. (2010) Assessment of Policies toward an Environmentally Friendly Urban Transport System: Case Study of Delhi, India. Journal of Urban Planning and Development 136: 86–93

[5] Clean Air Initiative, CNG buses in Delhi, retrieved 13 May 2010

[6] Washington Post, New Delhi residents cheer arrival of new Metro system 11 May 2010

[7] ExpressIndia, Congestion charge for cars to clear air, govt weighs options, 11 May 2010

I was asked recently to make suggestions for a list of classic papers in the Journal of Glaciology and Annals of Glaciology, the two main publications of the International Glaciological Society (IGS). If you are keen, you can expect to see a feature on glaciological classics in the Journal's 200th issue later this year. It will be interesting to see what the community of glaciologists comes up with as its selection of the papers about which it is proudest.

My own little list begins with Anonymous 1969. We have got into the habit of calling it that, although it baffles people from neighbouring disciplines (and in fact most of us know who wrote it). It codified the thinking on which we have relied over the past 40 years for describing the components of glacier mass balance, enshrining for example bn as the symbol for net mass balance, and c and a for accumulation and ablation respectively. Even its author would not, I imagine, describe it as exciting, but that hundreds of glaciologists take it for granted every day shouldn't disqualify it from classic status to my mind.

Then I added Jay Zwally's 1977 paper about the emission of microwaves by cold snow. This work opened up a new part of the electromagnetic spectrum to glaciological investigation. We know what glaciers look like in the visible part of the spectrum, in which our eyes make pretty good sensors. But microwaves, with wavelengths of millimetres rather than nanometres, show us a new world. Not the least of their advantages is that they pay no attention to clouds and don't need sunlight. If you have a microwave radiometer, or better still a radar with which to make your own microwaves and bounce them off your target, you can look at your glaciers whenever you want. (Oh, you also have to have an orbiting satellite on which to mount your instrument.)

Zwally's particular contribution was to show that the strength of microwave emission from cold snow is proportional to temperature and grain size and therefore, by an ingenious and very productive analysis, to the rate of accumulation of the snow. This has become a leading way of estimating accumulation rates above the dry-snow line. (Things become a lot more complicated if the snow starts to melt).

My all-time most significant IGS paper is probably Geoff Boulton's 1979 work on the deformation of the glacier bed by the flowing ice. He showed that, by comparing the along-glacier stress due to the glacier's flow to the downward pressure due to its thickness (possibly offset by pressurized basal water), you can fashion any of a variety of intriguing and familiar shapes. For example you can make drumlins (by lodgement of the glacier's sediment load; steep end up-glacier) or roches moutonnées (by abrasion of the bed; steep end down-glacier) algebraically, both from the same equation.

Boulton's classic measurement at the bed of Breiðamerkurjökull.

The crucial insight for this study came from a "Why didn't I think of that?" measurement at the bed of Breiðamerkurjökull, a large outlet glacier in southern Iceland. Drill a hole in the sediment of the bed, and drop into it a metal rod with many metal rings fitted around it, one on top of another. Withdraw the central rod. Return ten days later, dig an access pit, and make the observations summarized in the diagram and its caption: "90% of the forward motion of the glacier sole is accounted for by deformation of the till".

Deformable glacier beds are now universally understood to be fundamental pieces of many glaciological puzzles, on scales ranging from the 50-metre tunnel dug by Boulton (or, if he had any sense, by his student assistants) to reach the bed of Breiðamerkurjökull up to the behaviour of whole ice sheets.

Boulton has gone on to a distinguished career as a glacial geologist, elaborating his early ideas about the interaction of glaciers with their beds and about the intellectual importance of coupling observation with thought. At about the time you read this you will be hearing from him as a member of the team commissioned by the University of East Anglia to look into the doings of its Climatic Research Unit. I notice that there is a surprising quantity of nonsense about Boulton on climate-denialist web sites. You can safely ignore it. Reading his classic paper would be a far more profitable investment of your time.

Come to think of it, reading classic papers is a profitable investment of time, period.

Renewable energy isn't cheap, but prices should fall as markets build and technology develops. Indeed, the basis of 'learning curve' theory is that this is what happens over time. So why are some renewables getting more expensive – offshore wind, for example?

One reason is that materials, like steel and aluminium, have been getting more expensive, in part reflecting the increased costs of (conventional) energy – these are very energy intensive materials. Some of those price rises were just blips (e.g. linked to the oil crisis last year), but the long-term trend for fossil (and fissile) fuels must surely always be up, since they are finite reserves. So the old technology undermines the new. At some point in the future we will be using renewable energy to produce/process these and other materials, so the continual price escalation problem will be avoided. But that's some way off.

Certainly we will need an initial fossil-fuel input to kick off a renewable energy future – to build the new renewable-energy technologies and produce the materials used in their construction. So some say that net emissions will increase if we try to expand renewables quickly. That ignores the fact that we will have to replace the existing fossil plants anyway in the years ahead, as they get old and are retired. Moreover, the energy (and carbon) debt from replacing the existing system with renewables should be less than that from either building more fossil plants or going nuclear. For example, the embedded energy content of wind turbines is low – typically you get around 80 times more energy out over their lifetime's operation than is needed for their construction. By comparison, according to a 2002 Hydro Quebec study, the output from nuclear plants over their lifetime is only around 16 times the energy needed for their construction and for the (energy-intensive) production and processing of their fuel. Even more dramatically, the equivalent figure quoted for coal was 7, and for gas CCGT 5. That for PV solar was initially about 9 – PV cell production is energy intensive – but it has been improving. Similarly for wind and other renewables – the energy/carbon debts are falling.
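A small sketch using only the energy-return ratios quoted above (the roughly 80:1 figure for wind and the Hydro Quebec figures as reported here); the 100 TWh lifetime output is an arbitrary example of mine.

```python
# Lifetime energy output divided by the energy needed for construction
# (and, where relevant, fuel processing), as quoted above. The wind figure
# is the ~80x in the text; the rest are the reported Hydro Quebec values.
ENERGY_RETURN_RATIO = {"wind": 80, "nuclear": 16, "pv (early)": 9,
                       "coal": 7, "gas ccgt": 5}

def implied_input_energy(lifetime_output_twh, technology):
    """Rough embodied/input energy implied by the quoted ratios."""
    return lifetime_output_twh / ENERGY_RETURN_RATIO[technology]

# Arbitrary example: 100 TWh of lifetime output from each technology
for tech in ENERGY_RETURN_RATIO:
    print(f"{tech:10s}: ~{implied_input_energy(100, tech):5.1f} TWh of input energy")
```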

High energy (and therefore carbon) debts are only incurred when fossil, or to a lesser extent nuclear, fuels are used to build renewables: fossil fuels are obviously the worst, but the review by Ben Sovacool (Energy Policy 36 (2008) pp2940–2953) suggests that a nuclear system generates about seven times more CO2 over its lifetime than wind turbines do. However, after the high-carbon input phase, the energy for building more renewables can come from these initial renewables. In effect we would then have renewable breeders. There are already some PV-powered PV manufacturing plants – solar breeders.

That said, the rate of ramp up needs debate: what rate of new renewables growth could be sustained by the existing renewables – including providing the energy needed for producing basic materials?

Some say, rather bleakly, that we won't even be able to get to that point: with peak oil, peak gas, peak uranium and even peak coal looming, we just won't have the energy to start ramping up renewables seriously – or for anything else! Some add that this at least means that climate change will not be a problem! But others argue that, while peak oil and gas now seem very likely soon, and peak uranium not far off (see the German Energy Watch Group's study), there is still plenty of coal, so that is what is likely to be used – and, tragically, can be, given slow progress on a global climate agreement.

While it might be possible to reduce the impacts with Carbon Capture and Storage, rather than just burning off the remaining fossil fuels carelessly we really ought to use whatever is left to start building the new renewable energy system. Using these fuels to build a nuclear system, which is the only other supply option on the table, seems very short-sighted – given the energy-intensive nature of the nuclear fuel cycle and the relatively limited uranium reserves. Though, if all else fails and there isn't enough fossil fuel, you could say that nuclear plants might, in the interim, provide some of the energy we need to start building up the renewables system. More sensibly, though, we should try to cut energy demand, leaving more fossil energy to build the initial renewables.

Could it be done – will there be enough energy to support the development of a full renewable system? At the most obvious level if there is not enough energy to replace our existing plants with new plants of whatever type, then we are all doomed. But if we choose renewables for the future, it would be good not to incur massive emission debts. That rather depends in part on which renewables we choose – some are more energy intensive than others. It also depends on how quickly we need to ramp up renewables: if it could be done more slowly, then they can bootstrap themselves in energy terms, to some extent. But given the climate crisis, we may have to move faster than that. In which case, demand reduction and also possibly Carbon Capture and Storage, would become even more important, while we are still stuck in a mainly fossil-fuelled economy.

For the Hydro Quebec study see: L. Gagnon, C. Belanger and Y. Uchiyama, 'Life Cycle assessment of electricity generation options: the status of research in the year 2001', Energy Policy 30 (2002) pp1267–1278.


One of the more unseemly sideshows of the Climategate fuss has been the argument about editorial treatment of two papers in the International Journal of Climatology. In late 2007, Douglass, Pearson, Christy and Singer published, online, a paper about the mismatch between modelled and observed temperature trends in the lower atmosphere. The paper did not appear in print until October 2008, so I will call it D08. It came just before a paper, S08, by Santer and co-authors that criticized the Douglass paper.

You can find more than you may want to know about the editorial treatment of D08, as filtered through the minds of Douglass and Christy, in a blog contributed in December 2009 to the right-wing magazine American Thinker. But that is not what I want to discuss here. I am more interested in the golden opportunity offered by American Thinker, 14 months on, for Douglass and his co-authors to rebut the criticisms of S08. Golden as it was, they passed it up.

The core of the argument is the assertion in S08 that D08 used an incorrect statistical test. Although the connection to glaciers may seem tenuous, it is real and of broad interest: the dispute is about wiggle room. I know I said wiggle room was dull — but just look at how excited we get about it. This kind of thing is the essence of the search for expensive signals buried in distracting noise.

The aim of the test used in D08 is to decide whether two sets of numbers, in this case observed temperatures and modelled temperatures, are "different" in a sense that can be defined precisely and with a known amount of confidence. There comes a point during the test where you have to divide by the square root of a number called the "effective sample size". In general this number is smaller than the sample size, because correlations between the numbers in the sample reduce the amount of wiggle room you have while making your test decision. In the jargon of statistics, the wiggle room is called the "degrees of freedom".

If you use the sample size instead of the effective sample size, you get an insidiously wrong answer. Your error bars come out too small and you end up being too confident about your decision. This is precisely the trap into which D08 walked. S08 did the test properly, and concluded correctly that there is no reason to believe that, on average, the climate models are mis-modelling the observed temperatures.
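For readers who want to see the arithmetic, here is a minimal sketch of the effective-sample-size idea. This is my own illustration, not the actual calculation in S08: it uses the common lag-1 autocorrelation adjustment n_eff = n(1 − r1)/(1 + r1) on a synthetic autocorrelated series, and shows how the standard error computed with the full sample size comes out misleadingly small.

```python
import numpy as np

def effective_sample_size(x):
    """Effective sample size via the common lag-1 autocorrelation
    adjustment: n_eff = n * (1 - r1) / (1 + r1)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r1 = np.sum(x[:-1] * x[1:]) / np.sum(x * x)   # lag-1 autocorrelation
    return n * (1.0 - r1) / (1.0 + r1)

# Synthetic series: AR(1) noise, strongly correlated from one value to the
# next, standing in for something like monthly temperature anomalies.
rng = np.random.default_rng(42)
n = 240                                   # e.g. 20 years of monthly values
series = np.zeros(n)
for t in range(1, n):
    series[t] = 0.7 * series[t - 1] + rng.normal(scale=0.1)

n_eff = effective_sample_size(series)
s = series.std(ddof=1)
print(f"sample size:            {n}")
print(f"effective sample size:  {n_eff:.1f}")
print(f"standard error with n:      {s / np.sqrt(n):.4f}   (too small)")
print(f"standard error with n_eff:  {s / np.sqrt(n_eff):.4f}   (honest wiggle room)")
```

With a lag-1 correlation of about 0.7, the 240 values behave, for the purposes of the test, more like 40-odd independent ones, and the error bars widen accordingly.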

The trap is not widely understood, even among scientists, but that is no excuse when, as did D08, you choose to play for high stakes. American Thinker gave them a chance to respond to the criticisms of S08, and all they produced was whingeing about the editorial process.

In an appendix to their blog, D08 offer a scientific discussion of their work. They say that S08 "strongly objected to the narrowness of our error bars. Their view was to allow models to have a very wide range of possibilities of trends (roughly the range from the coolest model to the warmest) no matter what their associated surface trends might be." Never mind the "roughly"; the parenthesis misrepresents the meaning and importance of wiggle room. The bit about surface trends is a red herring. The quotation, and the appendix as a whole, show that D08 still misunderstand S08 comprehensively.

Santer and his co-authors are right. Douglass, Pearson, Christy and Singer are wrong. These two points ought to be at the centre of this part of the climate-wars discussion. They cannot be stressed too often.

There is a further irony in this story of denialism and wiggle room. An important part of the denialists' weaponry is the term "settled science", repeated over and over again as a criticism of the conventional wisdom about the climate. Yet the settled science elucidated in S08 features enormous error bars. The denialists often leave out the error bars, but D08 did have error bars. The trouble is that they were tiny, and wrong. Settled science with a vengeance.

The UK Department of Energy and Climate Change has been developing a 2050 Roadmap. A report was expected in parallel with the recent pre-election Budget, but in the event all that emerged was an interim statement, in an appendix to a Treasury/DECC 'Energy Market Assessment' covering the period up to and beyond 2020. This concluded tentatively that:

* Based on the analysis to date, total UK energy demand in 2050 will need to fall significantly, potentially to as much as 25% below 2007 levels.

* A substantial level of electrification of heating and surface transport will be needed.

* Electricity supply needs to be decarbonised, and may need to double. It says: 'The use of electricity for significant parts of industry, heating and transport means that demand for electricity is likely to rise, even as overall energy use declines.'

It says we might look to higher levels of interconnection with neighbouring countries, so that fluctuations in demand and supply can be smoothed across a number of countries; to new storage technologies, such as large-scale batteries; and to smart or flexible demand, such as off-peak charging of electric vehicles. The distribution network would also need to become bigger and smarter, to enable a potential doubling of overall electricity demand and to cope with new sources of energy supply and demand.

Overall it says that: 'Low-carbon electricity will provide a very large proportion of the UK's future low-carbon energy. It can be used for a wide range of activities, often with high efficiency compared to other fuels, and can, to a large extent, be scaled up to meet demand', although it accepts that 'other technologies are also likely to be required. For example, in heating, the use of waste heat from power stations, solar thermal technologies and energy from waste may be important and could reduce the burden on the electricity system. In road transport, biofuels and fuel cells may also be long-term contributors, particularly for modes that are hard to electrify. Even so, a significant degree of electrification appears to be necessary.'

www.hm-treasury.gov.uk/d/budget2010_energymarket.pdf

What the report doesn't say is which low-carbon sources we might need. A key issue is the role of nuclear – something that the long-term EU and US studies mentioned in my previous blog included only tangentially, in some low-renewables scenarios, or not at all. Instead they all focused on renewables, with 100% by 2050, or even earlier, being seen as viable.

For more information, visit:

www.pwc.co.uk/eng/publications/100_percent_renewable_electricity.html

www.roadmap2050.eu

www.rethinking2050.eu/

www.energywatchgroup.org/Renewables.52+M5d637b1e38d.0.html

www.stanford.edu/group/efmh/jacobson/sad1109Jaco5p.indd.pdf

A nuclear future?

The UK government's recent National Policy Statement on Energy, by contrast, focuses mainly on nuclear, asserting that 'by 2050 the UK may need to produce more electricity than today' as part of its justification for an expanded nuclear programme. But it had little to offer to back this up, and the new DECC statement above goes only a little further. Basically, we don't yet have a 2050 UK scenario.

The Sustainable Energy Partnership (SEP) spotted this omission and also pointed to an apparent contradiction: the 2008 Nuclear White Paper said that, given attention to energy efficiency, total electricity demand by 2050 could 'remain at roughly today's levels despite the UK's GDP being three times larger than it is today'.

SEP brings together nearly all environmental and fuel poverty NGOs and relevant trade groups, including ACE, AECB, BWEA, CPRE, CHPA, FoE, Green Party, Greenpeace, Micropower Council, NEA, PV-UK, PRASEG, RSPB, REA, SERA, Solar Century and WWF-UK. In its submission to a Select Committee review of the NPS, SEP commented: 'It defies common sense to approve a massive [nuclear] building programme to achieve the long term objectives of energy policy without a proper assessment of the future long term need for electricity.'

In the absence of a 2050 scenario, what the NPS does, says SEP, is assert that 'under central assumptions there will be a need for approximately 60 GW of new capacity by 2025' – and then quote a Redpoint study as the source for this. But Redpoint's study simply looked at how 'a goal of achieving around 28–29% of electricity from renewables by 2020' might be achieved, not at how or whether we could generate enough electricity to meet demand without nuclear – or, one could add, at what the 2050 situation might be.

The government was evidently aware that the longer-term rationale for nuclear was a little weak (to put it mildly), which is no doubt why it commissioned a review of energy policy issues and options up to 2050. Hopefully the interim DECC statement will be followed by something more substantial in due course, which will take note of the very encouraging EU scenarios mentioned above.

Meanwhile, in its reply to 'Energy Envoy' Malcolm Wicks' proposal for 30–40% nuclear 'beyond 2030', the government indicated that, in addition to the roughly 35 GW of renewables expected by 2025, we might need 25 GW of new 'non-renewables' (presumably nuclear and CCS) by 2025. But a specific nuclear target was said not to be needed at this stage.

Maybe that's not surprising. Most of the large number of scenarios now available seem to agree that '100% from renewables by 2050' is possible, and that the costs would not be prohibitive – after all, there would be no fuel costs.

Interestingly, even the relatively conservative International Energy Agency concluded, in a recent report on the Projected Costs of Generating Electricity produced with the Nuclear Energy Agency (NEA), that in comparative cost terms there was now not a lot in it, depending mainly on location: 'Nuclear, coal, gas and, where local conditions are favourable, hydro and wind, are now fairly competitive generation technologies for baseload power generation.'

Certainly, even under the present economic framework, wind power is already competitive in some US states (it's the cheapest source on the grid in California), and it is moving ahead rapidly in the US, as it is in China, India and the EU. As the costs of wind and other renewables fall further, while the costs of fossil and fissile fuels rise, we are likely to see a steady transition in most places around the world. And if more rational climate policies are adopted, this transition could accelerate, so that 100% by 2050 becomes a real possibility.

An all-electric future?

Most of the new scenarios focus on electricity, but accept that other renewable heat-supply and transport options may also be significant – solar and biomass in particular. Electricity is obviously important, but the best balance between large and small projects, local generation and supergrid feeds, heat and power production, and of course energy efficiency, remains to be determined.

Clearly the energy system of the future will look very different from the one that has grown up in the fossil-fuel era. Demand and supply will be matched dynamically via interactive load-management systems, with a wide range of renewables of varying scales and types linked up by supergrids, and with energy storage becoming important. While some look to an almost all-electric future, others look to hydrogen as a new, more easily storable energy vector, and others again argue that heat is the best storage option. In the event, a mixture of some or all of these is likely to emerge, differing in each geographical context.

Whether nuclear has a continuing long-term role in this remains to be seen.
