
May 2009 Archives

There is reputedly a nuclear renaissance underway, with new reactor technology providing some of the impetus. However, problems seem to have emerged for one of the new reactor types that have been developed - the European Pressurised Reactor (EPR). The first two EPRs, being built in Finland and France, are both behind schedule and over budget. Olkiluoto 3 in Finland, now over three years behind schedule, was originally budgeted at €3bn, but is now expected to cost at least €4.5bn. The follow-up French EPR at Flamanville is around nine months behind schedule, with the cost of its power now expected to be around 20% more than planned - around 55 euros per megawatt hour, instead of the 46 euros announced when the project was launched in May 2006.

In the UK, much of the running is being made by the French company EDF, who have taken over British Energy and have talked of building possibly 4 new plants here, presumably EPRs. They have claimed that they will not need subsidies, but on May 26 2009 Vincent de Rivaz, chief executive of the UK subsidiary of EDF, told the Financial Times that a "level playing field" had to be created, suggesting that the government needed to put a guaranteed floor under the price of carbon permits in the EU's emissions trading scheme. He said: "We have a final investment decision to make in 2011 and, for that decision to give the go-ahead, the conditions need to be right," adding that "We will not deliver decarbonised electricity without the right signal from carbon prices."

Meanwhile, South African power company Eskom has decided not to press ahead with a planned nuclear build programme, for which an EPR was one option, saying the costs were too high. This means the only nuclear build programme currently underway in South Africa is the experimental 165MW Pebble Bed Modular Reactor (PBMR), and costs for that have risen significantly. In 1999, construction costs were budgeted at R2 billion (£200m). By 2005, they had risen by a factor of seven, to R14 billion (£1,400m), not including decommissioning and waste processing.

Interestingly, Eskom is seeking finance of R5 billion (£500m) to build a 100MW concentrating solar power (CSP) plant in the Northern Cape. CSP, which uses light-focussing mirrors, troughs or dishes to generate steam for a turbine, is still expensive, but even so, on the basis of the figures above, the PBMR will cost about 1.7 times more per MW installed. Plus, of course, once built there are fuel-cycle costs - which don't exist for CSP.
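As a back-of-envelope check on that 1.7 figure, here is the arithmetic using only the cost and capacity numbers quoted above (a rough sketch in Python; it ignores capacity factors, fuel-cycle and decommissioning costs entirely):

```python
# Rough cost-per-MW comparison using the figures quoted above
# (R14bn for the 165 MW PBMR, R5bn for the proposed 100 MW CSP plant).
# Illustrative only: excludes decommissioning, waste and fuel-cycle costs.

pbmr_cost_rand, pbmr_capacity_mw = 14e9, 165
csp_cost_rand, csp_capacity_mw = 5e9, 100

pbmr_per_mw = pbmr_cost_rand / pbmr_capacity_mw   # ~R85m per MW
csp_per_mw = csp_cost_rand / csp_capacity_mw      # ~R50m per MW

print(f"PBMR: R{pbmr_per_mw/1e6:.0f}m per MW installed")
print(f"CSP:  R{csp_per_mw/1e6:.0f}m per MW installed")
print(f"Ratio: {pbmr_per_mw / csp_per_mw:.1f}x")  # ~1.7
```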

Around the world, CSP seems to be catching on. There are large projects operating in Spain and the USA, and more are planned there and in Egypt, Algeria, Morocco, the UAE, Iran, Israel and Jordan - in all, there is 1.2 GW under construction. According to estimates in a CSP Today.com overview of new CSP in Europe, North Africa and the Middle East, more than 3000 MW of new CSP projects were announced last year.

The US has 75 MW of CSP under construction, and 8.5 GW scheduled for installation by 2014, while the American Solar Energy Society claims that in theory CSP plants in the SW states of the USA 'could provide nearly 7,000 GW of capacity, or about seven times the current total U.S. electric capacity'. Globally, CSP could supply 7% of electricity by 2030, and up to 25% by 2050, according to a report by the IEA SolarPACES group, Greenpeace, and the European Solar Thermal Electricity Association.

That would of course take massive investment, but investment this year was already over 2 billion euros worldwide, and technology advances are being made. Some of the 480MW of projects already in place globally are hybrid solar-gas plants, with gas providing the steam overnight, but some are now making use of molten-salt heat stores to produce solar heat around the clock. There are also plans to transmit power from CSP plants in North Africa to Europe via high-voltage DC undersea links. That of course adds to the cost. Even so, CSP looks like it could be an interesting new renewable option.

Indirect solar, in the form of wind energy, is still the most economic of the major new renewables, with over 120 GW now in place globally, and biomass represents a very large solar-derived energy source, but the prospects for direct solar energy are also looking good. In addition to the new CSP projects, there is around 120 GW(th) of solar heat-producing capacity installed worldwide at present, and over 10 GW of solar PV electricity-generating capacity.

There is clearly some way to go before these and other new renewables can rival nuclear, which has around 372GW of operating capacity. However, if the 760 GW or so of existing hydro capacity is included, along with contributions from geothermal plants and modern biomass/waste powered plants, then despite their generally lower capacity factors (e.g. around 20-30% for wind and CSP without storage, and 40-50% for large hydro, compared to 70-80% for nuclear), the renewables overall would, even now, seem to be able to offer a similar level of output. And more is coming on line rapidly.
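As a very rough illustration of that claim, the sketch below turns the capacities and capacity-factor ranges quoted above into approximate annual output, using mid-range values. It is a hedged back-of-envelope calculation, not a rigorous comparison: geothermal, biomass, PV and CSP are left out because no capacity factors are given for them above, so the renewables total is, if anything, understated.

```python
# Back-of-envelope comparison of annual output implied by the capacities and
# capacity-factor ranges quoted above (mid-range values used).

HOURS_PER_YEAR = 8760

def annual_twh(capacity_gw, capacity_factor):
    return capacity_gw * capacity_factor * HOURS_PER_YEAR / 1000  # TWh

nuclear = annual_twh(372, 0.75)   # 70-80% capacity factor, mid-point used
hydro   = annual_twh(760, 0.45)   # 40-50%
wind    = annual_twh(120, 0.25)   # 20-30%

print(f"Nuclear:      ~{nuclear:.0f} TWh/yr")
print(f"Hydro + wind: ~{hydro + wind:.0f} TWh/yr")
```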

By contrast, although some new plants are under construction or planned, the nuclear contribution fell over the last year to about 14% of global electricity generation, due in part to the extended shutdown at Japan's Kashiwazaki Kariwa plant. Six of the site's seven reactors have been out of action since the Niigata Chuetsu offshore earthquake in July 2007. The seventh unit restarted this month, but it is still not clear when the others will follow.

Cool cities


Steven Chu is cheerleading another approach to fighting global warming: painting roofs and even pavements white to reflect sunlight. Such a measure has been suggested to offset the equivalent of 44 Gt of CO2 emissions via negative radiative forcing if implemented globally. White painting is often grouped with geo-engineering approaches such as ocean fertilization with iron and sending sulphate aerosols into the stratosphere. However, those approaches can endanger ecosystems, are potentially very limited in their efficacy, cost a lot and pose significant risks (e.g., see here for the sulphate case). White painting seems to have fewer of these disadvantages, besides the need for more sunglasses.



White painting is also not an esoteric method. Precisely because it works so well, white painting is an established practice all over the world. Consider the white towns in Andalusia and Nicaragua. From above, such a town - Arcos de la Frontera in Andalusia - looks like this.

Arcos de la Frontera in Andalusia. Source: Wikipedia.

Of course, the historic towns were not built to mitigate or geo-engineer climate change. The motivation was simply to cool down the town in the summer heat. This observation is crucial for understanding that climate change mitigation does not necessarily have to be pursued through the lens of climate change. Cities may be motivated by the more direct impacts of white painting, for example. The effect of white painting is twofold:

  • Direct: Higher urban albedo reduces summer temperature in cities
  • Indirect: Less air-conditioning is needed

Both effects mitigate climate change, benefit house dwellers, and benefit fellow city inhabitants. These co-benefits at different agent levels are pleasant from a game-theoretic perspective. Common national measures discussed in international negotiations are often framed in the context of a modified prisoner's dilemma, where national action produces costs locally but externalizes the benefits. White painting, like many other measures, offers a no-regrets option for city dwellers: either it helps to cool the global climate, or it provides adaptation and protection against increased global heat. White painting has another advantage: no big investments or government action are needed. Motivated dwellers can act themselves, or municipalities can initiate urban schemes. In particular, engaged US cities do not need to wait for a reluctant Congress. Action can be incremental, and it does not depend on a huge technological breakthrough.

Urban gardening and parks are another related measure to reduce the urban heat island effect and cool down cities. It would be interesting to explore to what degree urban greening and urban whitening complement or compete with each other, and where each kind of strategy is more appropriate.

Today, prices for electricity are based almost entirely - over 90% - on the capital and fuel costs of power plants and transmission lines. The remaining small percentage of the price of grid-based electricity is based upon the ancillary services that coordinate activities on the grid. Through the grid operator, consumers pay for these ancillary services, such as enabling generation to come online or go offline on short notice (minutes) and demand-response mechanisms in which consumers can choose to be paid to turn off large loads, such as factories and commercial buildings.

The push for renewable energy essentially needs to flip that relationship, to roughly 90% for ancillary services and only 10% for fuel and capital. Getting the cost of electricity to be less than 50% based upon fuel and capital costs may never happen, but because the fuel for renewables is free (sun, wind, geothermal heat, waves, etc.), the fuel cost can certainly be minimized and brought near zero. Many people believe this is why a grid based upon renewable electricity generation will ultimately be cheaper than the current grid based upon fossil fuels. It is not obvious that intermittent electricity with free fuel will be cheaper, but a good goal is to keep the price of electricity the same while taking the fuel costs to zero. Furthermore, electricity that is twice as expensive will not be a new burden if we design our buildings and usage patterns to consume half as much electricity.

The reason that a renewable electric grid will not necessarily be cheaper than a fossil fuel grid is that the intermittent nature of renewable generation forces two basic actions to happen - both of which increase costs:

1) People adapt themselves to the grid – people can adapt their habits and usage of electricity to the output profile of renewable generation, and

2) People adapt the grid to themselves – we can adapt the electricity output profile of renewable generation to our electricity usage habits.

Both actions, or strategies, will happen simultaneously in an undetermined proportion to be discovered in the future. The first item in the list above means that people alter their schedule, which we could assume is now optimized for their leisure and income, and any changes to that schedule mean someone will have less leisure time or income, or both. No new gadgets necessarily need to be added to the electric grid except the actual generation systems themselves. The second item above implies that we install gadgets such as storage systems along with smart appliances and meters such that we can essentially program these new infrastructure items to shift and store electricity from when we don't want it to when we do want it. Obviously, manufacturing, installing, and operating these new gadgets will cost money that is not currently being spent - although many regions of the US are in the early stages of moving to the "smart grid" and "smart home" systems that will define how much the grid will adapt to people. Ideas go so far as to even include plug-in electric vehicles in the smart grid concept, marrying devices that can be used for transportation and home electricity usage.
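To make the second strategy concrete, here is a minimal sketch (with entirely hypothetical hourly numbers) of shifting a block of flexible load - electric-vehicle charging, say - from the hour with the biggest renewable shortfall to the hour with the biggest surplus. Real smart-grid scheduling is far more sophisticated; this only illustrates the basic idea.

```python
# Minimal load-shifting illustration with hypothetical hourly figures.

renewable = [2, 2, 3, 5, 8, 9, 8, 6, 4, 3, 2, 2]   # GW available, hypothetical
demand    = [5, 5, 6, 6, 6, 6, 7, 8, 9, 8, 6, 5]   # GW wanted, hypothetical
flexible  = 1.0                                     # GW of shiftable load (e.g. EV charging)

# Move the flexible load out of the hour with the largest shortfall and into
# the hour with the largest surplus.
shortfall = [d - r for d, r in zip(demand, renewable)]
worst_hour = shortfall.index(max(shortfall))
best_hour = shortfall.index(min(shortfall))

shifted = demand[:]
shifted[worst_hour] -= flexible
shifted[best_hour] += flexible

unmet_before = sum(max(d - r, 0) for d, r in zip(demand, renewable))
unmet_after  = sum(max(d - r, 0) for d, r in zip(shifted, renewable))
print(f"Unmet demand before shifting: {unmet_before:.1f} GWh")
print(f"Unmet demand after shifting:  {unmet_after:.1f} GWh")
```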

The future cost of electricity from a high-renewables scenario is unknown, but don't expect it to be cheaper. It may or may not even be more stable, as stability costs money. And although future electricity prices may be more variable on short time scales (minutes to hours), the long-term (multi-year) swings in price are likely to be smaller than for natural gas-based generation, for example.

So we need to "flip flop" the composition of electricity prices from being primarily based upon fuel to being primarily based upon the capital costs of renewable and storage systems with free fuel. We need to get these renewable systems on the grid so we can learn, sooner rather than later, how best to integrate them into our grid ... and our daily lives.

How many glaciers are there? For quite some time the standard estimate, offered in the 1990s by Mark Meier, was about 160,000. But when you dig a little deeper it becomes clear that there are probably a lot more than that. And of course the question connects with a closely-related question: Who cares how many glaciers there are?

The World Glacier Inventory is the place to go for a really long list of glaciers. The idea of listing the world's glaciers originated more than 50 years ago with the Special Committee for the International Geophysical Year, which resolved in 1955 that "at the conclusion of the International Geophysical Year there be published as complete a list as possible of all known glaciers".

I always like the definiteness and clarity of this directive. The International Geophysical Year ended in December 1958. The first recommendations about how to present the list of glaciers were published in 1959, in which year the first actual list, of some of the glaciers of Italy, also appeared. Since then, although there have been several ups and downs, the story has unfolded at about the pace with which it began.

Today the World Glacier Inventory exists as a computer database at the National Snow and Ice Data Center in Boulder, Colorado, containing records for just over 100,000 glaciers. I have made available a version, called WGI-XF, with records for 131,000 glaciers. If we kept counting glaciers at the average 1959-2009 rate, a little over 2,000 a year, we would have the Special Committee's complete list 15-30 years from now, only about 70 years late. By that time a fair proportion of the glaciers that were there in the 1950s will have disappeared.

Why so slow? Part of the problem is remoteness. Surprisingly, for some parts of the world there are still no maps of large enough scale to show individual glaciers. For others there are maps but they are closely-guarded military secrets, or are very hard to obtain, or are not accurate enough. Nowadays satellite imagery is making a big difference, but even with modern data sources we come up against the second big problem, which is that compiling a useful list of glaciers, even for a single region, is very time-consuming. To be useful, the list has to give not just a position but an outline or at least an area for each glacier, and preferably a good deal of other information as well.

The third major problem is money, combined with indifference. Apparently the more money you have, the less do you care how many glaciers you have. Bhutan has a complete inventory of its glaciers. There are 677. The two least complete national inventories are those of the United States and of my country, Canada. It is true that both of these countries have a great many glaciers, and therefore a lot of work to do, but this is what makes Mark Meier's estimate of a total of 160,000 glaciers worldwide seem improbable. Guessing very roughly, there may be tens of thousands of unlisted glaciers in the U.S.A., and more than that in Canada. Add numbers like those to the 131,000 we have in hand, and speculate that there might be another 50-100,000 in the uncovered regions in the rest of the world, and you end up with an answer to the original question that has to be in the region of 300 to 500 thousand.

Who cares? I don't think anybody actually cares about the count. It is the ancillary information for which glaciologists are hungry. For the study of global change, you can't get a proper handle on glaciological change until you have a proper list of glaciers. You need to know, for example, how big each glacier was at some initial date before you can assess the significance of its size at a later date. The general picture is that all of them are smaller now, but that is an imprecise generalization. For informing policy in a responsible way we have to do better, if only because incomplete samples are vulnerable to the objection that they might be biased.

62% of global electricity from renewables by 2030

The UK Energy Research Centre's new study of the UK energy mix up to 2050 presents a range of possible energy scenarios with different mixes of technology, including versions with a lot of wind, some with a lot of coal with Carbon Capture and Storage, and some with no nuclear - although most include it, in some cases a lot. But the point is, as UKERC say, 'There are multiple potential pathways to a low-carbon economy'. It added that 'a key trade-off across the energy system is the speed of reduction in energy demand versus decarbonisation of energy supply'.

Clearly, investment in energy efficiency and demand-side management is going to be of central importance, but we will still need new low- or zero-carbon energy supplies. Accelerating renewables to avoid reliance on nuclear is a possibility, but it is seen by UKERC as quite a stretch, and a viable mix would also require a contribution from fossil Carbon Capture and Storage (CCS). The dominant view remains that we will also need nuclear.

However, there are other views, some of which see renewables being able to expand very rapidly, and on a global basis. A new study by the Energy Watch Group in Germany claims that there is no need to construct new nuclear-power facilities to meet demand. It looks at two scenarios - a 'High Variant' and a 'Low Variant'. By 2030, it says, renewables could contribute about 62% of final electricity globally and about 16% of final heat in the High Variant, and 35% of final electricity and 10% of final heat in the Low Variant, with the overall share (heat and power) being 29% in the High Variant and 17% in the Low Variant.

The study looks into the decrease in technology costs resulting from increased production volume, as well as the assumed individual development of the various world regions. It suggests that in the OECD region, on the High Variant, 54% of electricity demand and 13% of heat demand could be met from renewable sources by 2030, with the total final energy share (heat and power) being 27% (Low Variant: almost 17%). In the non-OECD region, renewables could supply almost 68% of electricity and about 17% of final heat demand (Low Variant: 36% of electricity and 11% of heat), while the overall share of renewables rises to 30% in the High Variant (Low Variant: 18%).

Given that the present renewable energy generation capacity globally is just over 1000GW, including hydro, that is quite a stretch, but annual growth rates for wind and PV solar are around 30% and new technologies for electricity and heat production are emerging rapidly, including marine renewables like wave and tidal power, and Concentrating Solar Power.

Some of these new renewable options will take time to mature, but some are ready now for deployment - and can be installed quickly. We are talking months for wind farms, weeks for rooftop PV solar devices, and of course days for simple energy efficiency measures - compared to years for major nuclear plants. Of course, if the funding is available, it is also possible to install a large amount of nuclear capacity. However, one strategic issue is whether it makes sense to divert funding away from the wide variety of different types of renewables to focus mainly on nuclear, or whether a more diverse programme would be better in terms of cost-effective carbon saving in both the short and long terms.

The study "Renewable Energy Outlook 2030" is at: http://www.energywatchgroup.org/Renewables.52+M5d637b1e38d.0.html

The UKERC report is at: http://www.ukerc.ac.uk/ResearchProgrammes/UKERC2050/UKERC 2050homepage.aspx

The Wilkins Ice Shelf in West Antarctica has been in the news again lately. As ice shelves go, it is rather small, but it has claimed attention because it is falling to pieces. This is not the breaking off and drifting away of one large iceberg, but a disintegration into lots of slivers, each a few tens of metres thick.

Why not one big chunk? Evidently forces are at work that cause cracks to appear and to propagate through a previously homogeneous block of floating ice, but they are more distributed in the Wilkins collapse than in the really big calving events we sometimes hear about. The slab of ice, weakly supported by the water underneath but pinned more firmly at its sides and at the grounding line, is not strong enough to resist what the engineers call "bending stresses", originating from variations in the vigour of ice flow or perhaps from the ocean tides.

Why now? Why has the Wilkins joined the growing list of spectacular recent ice-shelf failures? The trouble with the bending explanation is that it does not account for change. Ice shelves are always at risk of bending until they crack, and I know of no evidence that they are suffering more of this now than in the past. In the ordinary way, the loss by calving of icebergs (and perhaps melting on the underside) would balance the gain by snowfall and flow across the grounding line. The shelf would stay roughly the same size.

At this point, enter a now-familiar suspect: global warming. Cracks in ice shelves behave very differently when they have water in them. It is a thousand times denser than air, and magnifies greatly the forces encouraging the tip of the crack to propagate downwards. If the crack propagates all the way to the bottom of the shelf, and also sideways until it meets another crack or the shelf edge, then the job of making a new sliver is done. If there are lots of cracks, and they all fill with water and propagate right through the shelf, then the shelf falls to pieces. This seems to have happened to the Wilkins in recent months. On the other side of the Antarctic Peninsula, continuing study of the collapse of the Larsen B Ice Shelf, an even more dramatic event in summer 2002, is showing that it too disintegrated when, and probably because, its cracks filled with meltwater.

The water can only have come from melting at the surface. All of the West Antarctic ice shelves that have collapsed recently, in whole or in part, are now on the warm side of the -5ºC isotherm of mean annual air temperature at the surface. This isotherm has been migrating rapidly southwards in West Antarctica, a part of the world which, according to weather-station records, has warmed faster than almost any other during the past 50 years.

It would be wrong to think that -5ºC is a kind of tipping point. All it means, given the rather equable climate of West Antarctica, is that an ice shelf which crosses from the cold to the warm side of the isotherm begins to suffer a "significant" summer melt season. At the other end of the world, the ice shelves which once fringed the north coast of Ellesmere Island in Canada have all but disappeared over the last century. The mean annual temperature is much colder than in West Antarctica, but the annual range is much greater, and in northern Canada too the culprit seems to be increased summertime melting.

Why worry? The shelf ice made its contribution to sea-level rise when it flowed across the grounding line, roughly balancing the sea-level fall due to snow accumulating on the grounded ice. The main reason for concern is back stress. At the grounding line, just as there is a force pushing the shelf ice further out to sea, where it will eventually calve and drift away, so there is an opposing force - the back stress - discouraging the grounded ice from accelerating seawards. Take away the ice shelf and you remove the back stress.

Again Larsen B is instructive. Its collapse has provoked the land-based glaciers which formerly fed it to calve icebergs from their grounding lines at greatly increased rates. So the loss of an ice shelf is indeed a cause for concern in the context of sea-level rise. And we think we know why these losses are happening more frequently now.

Wind-powered electricity generation in Texas in 2008 reached 14.6 terawatt-hours (TWh) compared to 8.2 TWh in 2007. Given that the total electricity generation in Texas was 402.7 TWh in 2008, the wind generation as a percentage of the total was 3.6%, compared to approximately 2.0% in 2007. This is not the highest percentage of generation for any state, but it is the highest total quantity of wind electricity. Texas also had approximately 8,000 MW of wind power capacity installed by the end of 2008.
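For reference, the percentages quoted above follow directly from the reported generation totals:

```python
# Quick check of the shares quoted above using the reported generation totals.
wind_2008, total_2008 = 14.6, 402.7   # TWh
wind_2007, total_2007 = 8.2, 405.6    # TWh

print(f"2008 wind share: {100 * wind_2008 / total_2008:.1f}%")   # ~3.6%
print(f"2007 wind share: {100 * wind_2007 / total_2007:.1f}%")   # ~2.0%
print(f"Wind growth:     {wind_2008 - wind_2007:.1f} TWh")       # 6.4 TWh
```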

Likely due to the economic downturn, but perhaps also to a relatively mild summer in 2008, total electricity generation in Texas dropped 2.9 TWh from the 405.6 TWh generated in 2007. Another interesting note is that electricity generation using natural gas was 199.2 TWh in 2007 and 192.8 TWh in 2008, a drop of 6.4 TWh. Not entirely coincidentally, this 6.4 TWh is the same as the increase in wind generation from 2007 to 2008. A study by General Electric in 2008 showed that as wind generation increases in the Texas electric grid, ERCOT, it will primarily displace natural gas generation. This is because natural gas generation is on the margin in Texas and accounted for approximately 49% of electricity in 2007; furthermore, Texas generation capacity is approximately 70% natural gas units. This large amount of natural gas capacity makes it comparatively easy - though by no means trivial - for Texas to incorporate wind power into its electricity grid, because gas units are the most capable (as compared to coal and nuclear) of ramping up and down to follow wind power fluctuations.

Early in 2009 the first wind farm along the Texas coast became operational, in Kenedy County in South Texas. This wind farm is positioned in one of the most heavily traveled migratory bird routes in the world, as many North American birds are funneled along it by the Gulf of Mexico. This makes the wind farm controversial compared with all the others in Texas, which are in West Texas, where the wildlife is different but generally sparser. It was the only wind farm opposed by the Audubon Society, because of its location in a highly traveled zone for birds. However, the wind farm operators, Babcock and Brown, have put in place procedures for curtailing the wind turbines should weather force birds to fly as low as the turbines. Many remain skeptical.

The full impact of these wind farms remains to be seen, and we may get some indication over the course of this year. Wildlife and scenic issues have so far held up all other offshore and coastal wind farm sites in the US, and scenic criteria have already been ruled not applicable in Texas for locating wind farms. Let's hope bird impacts do not occur widely enough to create a black eye for the wind industry, which has grown away from the initial bird issues at Altamont Pass in California - designs using tubular towers eventually evolved, removing the built-in perches that attracted some birds. Let's hope the birds avoid the turbines on the Texas coast, because Texas is set up well to continue to lead in wind power production, and there's no better place than the #1 oil, gas ... and wind power producer in the US.

Lunar Race


The effects of the gravitational pull of the moon on the seas, coupled to a lesser extent with that of the sun, provide a significant potential source of renewable energy. There is a race underway to tap into it.

Tidal power extraction technology comes in various shapes and forms. Barrages across estuaries, trapping high tides to create a head of water, may be the most familiar, but free-standing tidal current turbines, working on the horizontal flows rather than the vertical tidal range, are a less environmentally invasive and easier to install option. There are now reputed to be 150 or so tidal current turbine projects of various types and scales underway in the UK and elsewhere, with the most developed being MCT's 1.2MW Seagen, now installed in Strangford Narrows in Northern Ireland.

Other UK projects include the nicely named 'Lunar energy' seabed-mounted ducted rotor, the 'Pulse Tidal' oscillating hydroplane system being tested in the Humber, and the 'Tidal Delay' system, which feeds power to a heat store, so that power can be generated continuously. There is also a proposal for a permeable 'tidal fence' across the Severn estuary, housing a series of tidal turbines, as an alternative to a solid 'tidal range' barrage, which would in effect dam the estuary.

There are many more tidal current projects and devices at various stages of development- there is very much a flurry of innovation going on, with tidal technology being seen as basically simpler than wave energy technology. That's hardly surprising since, with wave energy, we are trying to tap into complex chaotic wave motions in an interface between air and water, whereas tidal flows, running smoothly below the surface, are by comparison much more linear.

Although the UK still leads in this field, challenges are emerging from overseas. For example, Singapore-based Atlantis is planning to install some of its ducted rotor units in a 30MW project in the Pentland Firth in Scotland. And Ireland's OpenHydro is planning to install a series of 1MW versions of its novel Open Centre turbine in France, Canada - and Alderney. In addition, Voith Siemens have developed a novel gearless 1MW propeller turbine design, which is to be used in a 100 MW array in the Wando project in South Korea. Canada and the USA are also pushing ahead with a range of systems - the US Dept of Energy has allocated its first Marine Energy Grants, $7.3m in all, to 14 projects, with more now being planned.

Clearly tidal power has gone international, and there is a race to be first in what could be a very large global market. In the end, what will probably decide the winners is economics. There is talk of some tidal current devices getting down to 2-3p/kWh in time, so the prospects look good, while there are dark mutterings amongst some of the tidal current enthusiasts about the cost of the main rival approach in the UK - the proposed Severn Tidal Barrage, put by some at 9p/kWh.

The race is on

Although it's often seen as the front-runner in the UK, the large Cardiff to Weston Barrage isn't the only tidal range option. There have been proposals for even larger barrages further down the Severn estuary. Alternatively, smaller barrages on the Severn and elsewhere (e.g. the Mersey, Solway Firth, Humber, etc.) might be less invasive, and offshore tidal lagoons even less so.

There are also various different operational options. In terms of increasing the continuity of power output, there is the option of having segmented lagoons, so that some degree of storage might be possible, and it is also possible to pump water uphill behind barrages or lagoons, using excess off-peak power from the grid. Barrage operation on the incoming tidal flow is also an option, although that means using two-way turbines, which are more complex, expensive and prone to excessive wear and breakdown.

Although pumped storage is sometimes included as an option, most new barrage designs use conventional one-way turbines. The problem then is that they will only fire off roughly twice every twenty-four hours, with large pulses of energy for a couple of hours, which may not be matched to electricity demand. So, for example, although the 8.6GW Severn Barrage might be able to generate 4.6% of the UK's electricity, only some of that could actually be used effectively in practice - unless we also spent money on major electricity storage facilities. According to the generally pro-barrage Sustainable Development Commission, once operational some time after 2020, the barrage would only reduce UK emissions by about 0.92% - not very much for £20 billion, the expected construction cost.
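A rough sense of what that pulsed output means can be had from the barrage's implied load factor. The sketch below assumes UK annual electricity demand of roughly 380 TWh - an assumed figure, not one from the SDC report - and combines it with the 8.6GW capacity and 4.6% share quoted above; the twice-daily pulses mean even this modest average output would be hard to use in full without storage.

```python
# Rough implied load factor for the barrage, assuming UK annual electricity
# demand of roughly 380 TWh (an assumption, not a figure from the text).
barrage_capacity_gw = 8.6
uk_annual_twh = 380            # assumed
barrage_share = 0.046          # 4.6% of UK electricity, as quoted

barrage_output_twh = barrage_share * uk_annual_twh          # ~17.5 TWh/yr
max_output_twh = barrage_capacity_gw * 8760 / 1000          # ~75 TWh/yr at 100%
print(f"Implied output:      ~{barrage_output_twh:.0f} TWh/yr")
print(f"Implied load factor: ~{100 * barrage_output_twh / max_output_twh:.0f}%")
```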

By contrast, a network of smaller tidal turbines around the coast could deliver more continuous output, since peak tide occurs progressively later at each site. Tidal turbines can also be designed to swivel around to run on both the flow and the ebb, i.e. to generate four times per 24-hour period. And they can be installed on a modular basis relatively quickly.

All in all, with most UK environmental groups strongly opposed to large estuary-wide barrages like that proposed for the Severn, the tidal current option looks like the one to watch. But who will win that race is far from clear. The UK may be ahead technologically, but, for example, S. Korea has plans to install a total of around 500MW of tidal current projects.

Colleagues from the Energy and Resources Group in Berkeley tell me that California and the Western States could get a significant part of their power from concentrated solar power (CSP) by 2022 - at least given the implementation of a carbon tax of more than $40/tCO2. CSP is also part of the suggested EU supergrid. Is CSP the second renewable energy technology, after wind, to break through? Let us contrast the abstract macro-economic modeling with a business perspective. Here are some insights that I gained from a financial analyst, Shujia Ma.

How does CSP compete with other energy sources?

A CSP plant can be compared with a gas plant, in that steam drives a turbine. The CSP plant itself is currently more expensive than the gas plant, but no fuel costs have to be paid - solar energy is free. In this regard, CSP is similar to photovoltaics (PV). In contrast to current PV installations, CSP relies on economies of scale and is supposed to be cheaper. However, due to its plant characteristics, additional transmission lines are needed that are not required for rooftop solar panels. CSP and wind operate in different market segments, as CSP operates more in peak demand times whereas wind is mostly available at night (in California).
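The trade-off Ma describes - higher capital cost but zero fuel cost - can be illustrated with a very simplified levelised-cost sketch. All of the numbers below (capital costs, capacity factors, the capital recovery factor, the gas fuel cost) are illustrative assumptions of mine, not figures from the interview.

```python
# A very simplified levelised-cost comparison (illustrative numbers only):
# CSP trades higher capital cost for zero fuel cost.

def simple_lcoe(capex_per_kw, fixed_om_per_kw_yr, fuel_per_mwh,
                capacity_factor, crf=0.08):
    """Levelised cost in $/MWh; crf is an assumed capital recovery factor."""
    mwh_per_kw_yr = capacity_factor * 8760 / 1000
    annual_capital = capex_per_kw * crf
    return (annual_capital + fixed_om_per_kw_yr) / mwh_per_kw_yr + fuel_per_mwh

# Hypothetical inputs: CSP ~ $4500/kW capex, no fuel; gas ~ $900/kW plus fuel.
csp = simple_lcoe(capex_per_kw=4500, fixed_om_per_kw_yr=60, fuel_per_mwh=0,
                  capacity_factor=0.30)
gas = simple_lcoe(capex_per_kw=900, fixed_om_per_kw_yr=20, fuel_per_mwh=50,
                  capacity_factor=0.50)
print(f"CSP: ~${csp:.0f}/MWh")
print(f"Gas: ~${gas:.0f}/MWh")
```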

What are the challenges that limit CSP deployment?

For the US, financing is currently the dominant challenge. The few banks that have sufficient resources are reluctant to lend money. Another problem is state regulation, which is often very slow in processing applications. The market structure for CSP is also unusual, as only two firms control important parts of the supply chain, such as mirrors and tubes (part of the heat-transfer system). More competition in the supply market would bring down costs significantly.

Can technological progress bring down the price of CSP closer to grid parity?

Yes. Advancement in materials will lower the price for CSP deployment. However, Solar Millenium is mostly a developer and does not have sufficient resources for extensive research. The CSP business relies on R&D from public research agencies such as NREL.

That leaves open the question of what role policies play. The current renewable portfolio standards (RPS) of the western US states clearly push utilities to purchase large-scale renewable capacity, including CSP, and thus develop the market. An important condition here is that the RPS is actually enforced. It is ironic that the US currently relies much more on this regulatory approach, while it is Europe that builds on market instruments. In fact, in the US too, a combination of a carbon price and a feed-in tariff, providing long-term reliable incentives for investors, could brighten up the market for CSP and other renewables. A US-Europe comparison is somewhat narrow, though: given low labor costs (important for CSP), good solar resources, fewer transmission-line problems, the potential availability of financial resources from the government, and uncomplicated procedures, China may also turn out to become an important CSP player.

Why don't they get the message? Environmentalists take different approaches to putting the message across. Some have allowed distinguished careers to evolve into activism. James Hansen, the director of NASA's Goddard Institute for Space Studies, is a leading example. Having spent several decades modelling climatic change and charting the course of global warming, he now wants energy producers to stop building coal-fired power stations, and is telling them why in increasingly emphatic terms. But they aren't listening to him.

Fingerprint of glacier mass balance

I am not saying Hansen's approach isn't right, but in my little corner of the problem, in which I try to document the rate at which glaciers are shedding mass, I have followed most of my fellow-scientists by trying to present the facts deadpan, with due attention to all the caveats and uncertainties. The dinosaurs aren't listening to me either.

Of course, it isn't a small problem, but human inertia has a lot to do with the reluctance of the dinosaurs. Recently the politicians have realized that, apart from survival, there is money to be made out of trying to be a mammal instead. Perhaps this new message from a new quarter will help. On the whole, though, I am not disposed to be too hard on the dinosaurs, considering the scale of the problem, my own familiarity with inertia, and the fact that I am part of the problem. I buy the stuff they sell.

But we have lost a lot of time already. After Waterloo, Wellington said that it had been "a damn close-run thing", and solving this problem is going to be like that.

What can I do to help? The best I can think of is to keep sending the message. Time is running out, but it is not yet time to panic, and you can't beat facts if you want a cumulative effect. I will try in this blog to write about some of the ways in which the field of glaciology helps to pile up the facts.

I will try to show why the facts are not just important but interesting, but if I get carried away sometimes - if the facts seem more exciting to me than they do to you - you will have to bear with me if you can.

I want to begin, though, by regretting that scientists' caveats and uncertainties are not equally interesting to everybody. In my experience numbers, the scientist's stock in trade, tend to turn people off as soon as there are more than about three of them. Error bars are even less popular, but your average scientist probably spends more time on the error bars than on the measurements themselves.

The errors, of course, help to explain why we have spent more than two decades arguing about climatic change rather than taking action. But you have to be able to see the measurements in their context before you can know what they allow you to say. It is a risky mistake to think that science, because it seems to be about numbers, must therefore be crisp and focussed. In fact we spend most of our time wandering around in a fog, looking for clear patches.

Not long ago I was asked an interesting question about the fingerprint of glacier mass balance: has it changed in recent decades? It is quite clear, as we will see in a moment, that the pattern has become stronger. But has its geographical shape changed?

Consider the graph. The top panel shows the fingerprint for 1961-1990, presented as a sea-level equivalent (the result of spreading the mass lost by the glaciers uniformly over the ocean). The blip near 50 degrees south is the Patagonian icefields, the blip at 30-40 north is the Himalaya, Karakoram and so on, and most of the loss is from glaciers at high northern latitudes. We are in a fog about several of the latitude belts, because their error bars cross the zero line (no detectable change of mass).

But we are not in a fog about the global total. Add up the losses from all of the latitude belts, allowing for the uncertainty of each, and it becomes clear that in 1961-1990 the ocean was gaining mass from the glaciers as meltwater.
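The bookkeeping behind that statement looks something like the sketch below: sum the per-belt contributions and, treating the errors as independent, combine them in quadrature. The numbers here are hypothetical, chosen only to show how individual belts can straddle zero while the global total is still clearly positive.

```python
# Sketch of summing per-belt contributions with uncertainties added in
# quadrature (hypothetical numbers, in mm/yr of sea-level equivalent).

from math import sqrt

# (contribution, uncertainty) per latitude belt - hypothetical values
belts = [(0.02, 0.03), (0.05, 0.02), (0.01, 0.02), (0.15, 0.05), (0.10, 0.04)]

total = sum(c for c, _ in belts)
error = sqrt(sum(e**2 for _, e in belts))   # assumes independent errors
print(f"Global total: {total:.2f} +/- {error:.2f} mm/yr sea-level equivalent")
```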

The lower panel shows the fingerprint for the more recent period 1993-2007, divided by the 1961-1990 fingerprint. First, the signal has grown stronger everywhere, although our confidence about this is low in many latitude belts (the error bars cross the 1 line which means no change). But second, where it matters (that is, where the signal was strong to begin with, north of 30 north, say), the signal is now two to five times stronger. Third, as in the upper panel, finding number two holds for the whole world when we average the ratios with due attention to their error bars. Glacier mass loss is much greater now than it was 20-50 years ago. We are not in a fog about this.

Fourth, the answer to the original question is Don't Know. The error bars, some of them grotesque, make it certain that we cannot say anything about whether the shape of the fingerprint has changed. We are definitely in a fog about this.

We are not entirely clueless about how to search for the nearest clear patch. For example, more measurements would help. It looks as though the glaciers in the Himalaya and Karakoram may have begun to suffer more than glaciers elsewhere. Tackling this question would require measuring them. So if the Taleban would kindly get out of the way, we could travel to the head of the Swat valley and produce an answer to a question which is more important than any of those they are asking.

We will know we are out of the fog when the error bars shrink to a more reasonable size. And at that point neither the Taleban nor the dinosaurs will be able to reject the message of the measurements, whatever it turns out to be.

A recent paper from the University of Minnesota, by Sangwon Suh and others, estimates the change in embodied water in corn ethanol from 2005 to 2008 (see: Water Embodied in Bioethanol in the United States; http://pubs.acs.org/doi/abs/10.1021/es8031067?prevSearch=water+ethanol+suh&searchHistoryKey).

This paper indicates that the state with the highest quantity of embodied water per unit of corn ethanol is California - strangely, higher than Arizona and New Mexico, both of which are estimated to be both growing corn and producing ethanol from it. The range of consumptive water embodied in corn ethanol is 5 to 2,140 liters of H2O per liter of ethanol. The high end of the range is higher than in my previous study on the water intensity of transportation, likely due to a different assumption regarding how much irrigation water is consumed. Suh assumes that all withdrawn irrigation water is consumed, whereas I used United States Geological Survey data to estimate consumption by subtracting how much withdrawn irrigation water is returned to the source.

The difference in the methods of Suh and myself is not that important, and Suh does us a favor by tracking the changes from 2005 to 2008. Over this period he estimates that the embodied water in corn ethanol increased by 46% and the total consumptive water use increased by 68%. This implies that more marginal lands, with worse climates for corn agriculture, are being used to grow corn, for food or fuel. This is of course the fear of many: that we will be using irrigated agriculture for biofuels on marginal lands, even though many assume that annual crops or perennial grasses on these lands will not be irrigated. In many regions, such as over the Ogallala Aquifer, industrial agriculture has already been overexploiting groundwater resources. It is important to note that the vast majority of the water embodied in corn ethanol comes from farming, and thus it is farming corn in general that impacts water resources, not necessarily the push for corn ethanol. The Renewable Fuels Standard simply creates another market for corn, and exacerbates the situation.

Recent work we've done at the University of Texas at Austin's Center for International Energy and Environmental Policy, together with members of the Bureau of Economic Geology, shows that in 2005 the water embodied in light-duty-vehicle transportation fuels in the US accounted for approximately 2.5% of total water consumption. By 2030, we estimate this could rise to 10%, mainly due to increasing ties in the energy-water nexus in the form of biofuels. This consumption of nearly 14,000 billion liters holds across a wide range of assumptions about the mix of miles driven on different fuels - from 23% to 71% based upon petroleum - quite a spread! So we have many ways to use our water resources for creating new fuels for transportation ... or food, but that's another story.

At the inaugural meeting of the Union for the Mediterranean recently, UK Prime Minister Gordon Brown said "...in the Mediterranean region, concentrated solar power offers the prospect of an abundant low carbon energy source. Indeed, just as Britain's North Sea could be the Gulf of the future for offshore wind, so those sunnier countries represented here could become a vital source of future global energy by harnessing the power of the sun".

How realistic is this? Dr Gregor Czisch from the University of Kassel in Germany claims that it is possible to provide a 100% renewable power supply for Europe at competitive cost if we build a transcontinental supergrid using High Voltage Direct Current (HVDC) links. Transmission losses with HVDC are claimed to be low - about 2% per 1000 km. The EU could then share renewable sources such as offshore wind farms in the North Sea - the EU has put the potential at 150 gigawatts (GW) - and Concentrating Solar Power (CSP) plants in North Africa and the Middle East.
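Taking that quoted figure of about 2% loss per 1000 km at face value, a rough estimate of end-to-end transmission losses for, say, a 3000 km route from North Africa to northern Europe (an assumed distance; converter-station losses are not included) looks like this:

```python
# Rough HVDC transmission-loss estimate at the quoted ~2% per 1000 km,
# for an assumed 3000 km route (converter-station losses not included).

loss_per_1000km = 0.02
distance_km = 3000          # assumed route length
delivered = (1 - loss_per_1000km) ** (distance_km / 1000)
print(f"Power delivered after {distance_km} km: {100 * delivered:.0f}%")  # ~94%
```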

The latter idea has already been floated by the DESERTEC group, and over 1GW of projects are already underway or planned in Morocco, Algeria, Egypt, Jordan, the UAE and elsewhere, with, in some cases, undersea grid links to the EU also being planned. Some projects plan to have molten-salt heat stores, to allow power generation to continue overnight. See http://www.desertec.org/

However, in the Czisch scenario, wind provides almost 70% of the energy, with over 1000GW of generating capacity installed, some of it in the North Sea, but some of it outside the EU, with electricity being brought in from wind projects in, for example, Kazakhstan, Russia and Southern Morocco - this wider footprint helping to compensate for local variations in wind availability. The wind resource in these regions is huge, with many hundreds of GW of potential in each. See: http://www.iset.uni-kassel.de/abt/w3-w/projekte/LowCostEuropElSup_revised_for_AKE_2006.pdf

The first stage might be Airtricity's plan for a Supergrid running from Spain to the Baltic Sea, linking up countries around the North Sea and beyond, with a network of offshore windfarms at nodal points in the North Sea and off the west coast of the UK and Ireland. Their initial proposal is for a £20bn 10GW wind farm project in the southern part of the North Sea, with a 5GW HVDC link carrying power west to the UK, and a second 5GW line running east to continental Europe, perhaps to the Netherlands.

See http://www.airtricity.com/ireland/wind_farms/supergrid/.

This idea has also been taken up by Mainstream Renewables http://www.mainstreamrp.com and by Norwegian transmission company Imera Power, who have announced plans for what they call the EuropaGrid: www.imerapower.com/EuropaGrid/EuropaGrid.htm

The European Commission has indicated interest in this approach, allocating €150m (£139m) for early work on a possible North Sea grid, and €100m (£93m) for a link between Ireland and Wales to help renewables generators in Ireland access the UK energy market.

The idea also got strong support from a meeting of more than 100 leading European engineers, representing 21 European countries, brought together last November under the auspices of the Royal Academy of Engineering in London to discuss Europe's renewables challenges.

Mega schemes like this do of course raise many issues - for example, the security aspects of the supergrid and the problems of negotiating grid-link access across the whole continent. Some also worry that we would simply be switching from reliance on Middle Eastern oil and Russian gas to North African solar and eastern and southern wind. There is also the danger that EU governments might buy into this approach rather than sorting out energy problems at home - it could be used as an excuse for inaction. But, on the positive side, it could open up a vast new resource and provide income for some relatively poor areas on the periphery of the EU in the south and east - as long as the trading contracts negotiated were fair. One thing seems clear - the supergrid idea could open up not just a major new resource but also a new energy geopolitics.

Obama has promised to put climate change front and center in Washington politics. As one of the first direct measures, Obama has directed the EPA to reconsider permitting California to impose tougher fuel standards. What does this measure mean for fighting climate change?

Obviously, measures in the transportation sector are very relevant to avoid potentially catastrophic climate change. In the US, transportation, and specifically motor vehicle use, is the largest and fastest growing source of GHG emissions among all energy sectors. Transportation alone accounts for one-third of all US emissions [1].

Current federal fuel standards (CAFE) are around 25 mpg. The Californian standards would require a fleet-wide average standard of around 35 mpg by 2016. An increase of 40% in this short time sounds quite ambitious. So is it even feasible? A look across the Atlantic is useful: by 2002, Europe already had an average consumption of 37 mpg. Just comparing these numbers, it is obvious that no technological barrier hinders California from reaching its goal. But where does this difference come from? Popular wisdom suggests that the Big Three--GM, Ford and Chrysler--were unable to develop advanced technologies. But this is not really the heart of the issue. In fact, the Detroit manufacturers contributed significant technological advancements over the last 20 years. What then is going on here?

It helps to understand what exactly "fuel efficiency" means. It turns out that fuel efficiency in the US, measured in miles per gallon, did not improve in the last two decades. Yet in that same period, manufacturers pushed heavier, more powerful cars onto the market. In fact, the consumption of gasoline per vehicle weight improved dramatically. In terms of consumption per vehicle weight, the US fleet is as good as any other region in the world [2]. Instead, the bad overall mileage is related to the high number of heavy, tank-like vehicles on the streets. By 2008, more than half of all vehicles purchased in the US were SUVs or light trucks (though the trend is currently changing). So, in effect, additional weight consumed all the technological improvement.

That makes one curious about why purchases shifted towards heavier cars. SUVs became more popular worldwide, but especially so in the US. Is it simply that Americans like big cars more than the rest of the planet does? It is probably true that gas-guzzlers are chosen as status symbols. But the story is richer. Many car owners, in fact, cite safety concerns: they can't drive a small car when others are driving big cars. It is an arms race. Where did it find its origin?

Let us go back to the fuel standards. The US led the way, introducing fuel standards in the 1970s as a reaction to the oil crisis. The Big Three feared the incoming Japanese competition, which produced much smaller, more fuel-efficient cars. Doing what they do best, the Big Three lobbied Congress for loopholes for light trucks, exempting them from stringent regulation and taxes. At the same time, a 25% tax on imported pickup trucks was put into place. That didn't seem like a big deal then, as those big vehicles made up a small market share. However, rather than competing with the Japanese, the Detroit manufacturers invested heavily in this loophole, creating a new market for big and, due to their size, fuel-inefficient vehicles [3]. Detroit chose a strategy that was successful in the medium term but ultimately a dead end.

From this perspective, it looks much more as if supply first created the demand for big vehicles. Dysfunctional fuel standards, which allow for different vehicle categories, are partially to blame.

Comparison of fuel economy standards [4].

What lessons can the Obama administration learn from this when re-regulating fuel efficiency?

The Californian standard is promising but still adheres to the current double standard: a lower standard for SUVs and light trucks and a higher one for everybody else. The updated federal CAFE standards from 2007 are also a step in the right direction, especially as the changes are front-loaded, i.e. the biggest steps in fuel-efficiency requirements must be taken first. The new CAFE standards are differentiated by size, i.e. larger cars have lower fuel-efficiency requirements. This means there is an incentive to reduce vehicle weight for any given car size. However, there is no incentive to reduce weight by moving to smaller cars, or to promote them.

The Obama administration has indicated it wants to implement progressive but harmonized standards. (Harmonization makes sense, as it helps car manufacturers without harming greenhouse gas emissions.) The harmonized standards could set weight-independent targets. Another option is to supplement CAFE standards with market-based incentives for consumers to buy more fuel-efficient cars overall, e.g. by a fee-bate scheme, where buyers of fuel-efficient cars get a rebate while buyers of fuel-inefficient cars pay a fee. This is a revenue-neutral scheme.
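A minimal sketch of how a revenue-neutral fee-bate could work is given below. The pivot point, fee rate and the three-model 'fleet' are all hypothetical, chosen only to show that fees and rebates can cancel out across sales while still rewarding the more efficient choice.

```python
# Minimal fee-bate sketch (hypothetical parameters): cars that beat a pivot
# fuel economy earn a rebate, cars below it pay a fee, with the rate chosen
# so fees and rebates roughly cancel across total sales.

pivot_mpg = 30          # assumed pivot point
rate_per_mpg = 200.0    # assumed $ per mpg above/below the pivot

# (model, mpg, annual sales) - hypothetical fleet
sales = [("compact", 38, 400_000), ("sedan", 30, 500_000), ("SUV", 22, 400_000)]

def feebate(mpg):
    return (pivot_mpg - mpg) * rate_per_mpg   # positive = fee, negative = rebate

net_revenue = sum(feebate(mpg) * n for _, mpg, n in sales)
for model, mpg, _ in sales:
    f = feebate(mpg)
    if f > 0:
        print(f"{model}: fee of ${f:,.0f}")
    elif f < 0:
        print(f"{model}: rebate of ${-f:,.0f}")
    else:
        print(f"{model}: at the pivot, no fee or rebate")
print(f"Net revenue across fleet: ${net_revenue:,.0f}")   # ~0 if sales balance
```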

There is much promise in fuel-efficiency standards. If Obama follows the proposed California regulation, or a similar approach, and implements it at the federal level, overall US GHG emissions will be around 5-6% lower by 2016, all else being equal. That is a very significant achievement for a single measure!

References

[1] Energy Information Administration (EIA), Emissions of Greenhouse Gases in the United States 2004, Washington DC, 2005

[2] Lee Schipper, Automobile Fuel Economy and CO2 Emissions in Industrialized Countries: Troubling Trends through 2005/6, EMBARQ, the World Resources Institute Center for Sustainable Transport, 2007

[3] Daniel Sperling, 2 billion cars, 2009.

[4] Feng An and Amanda Sauer, Comparison of Passenger Vehicle Fuel Economy and Greenhouse Gas Emission Standards around the World, Pew Center on Global Climate Change, 2004

Energy and water are inextricably linked. If we consider food as energy, which we should since it is the substance that powers our bodies, then the energy-water nexus is perhaps the most important in the agricultural sector. When we use agriculture to grow crops for biomass that is later converted to liquid fuels, then the energy-water nexus is even more apparent. I calculated how much water is used for driving light duty vehicles (cars, vans, light trucks and sport utility vehicles) and published the results in Environmental Science and Technology (10.1021/es800367m). Results are also summarized in a commentary in Nature Geoscience volume 1.

The results vary from 0.1-0.4 gallons of water per mile (0.2-1 L H2O/km) for petroleum gasoline and diesel, non-irrigated corn for E85 vehicles, and non-irrigated soy for biodiesel. Additionally, driving on electricity from the US grid consumes around 0.2-0.3 gallons of water per mile (0.5-0.7 L H2O/km), because water is used to cool the coal, natural gas and nuclear power plants on the grid. However, if irrigated corn is used to make E85 ethanol in the United States, then water consumption jumps by one to two orders of magnitude, to 10-110 gallons of water per mile (23-260 L H2O/km), with an average of 28 gallons of water per mile driven. Keep in mind that only about 15-20% of the US corn crop is irrigated to any extent.
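For readers who want to check the unit conversions, the figures above translate between gallons of water per mile and litres of water per km as follows (a simple conversion sketch, using nothing beyond the numbers already quoted):

```python
# Unit-conversion check: gallons of water per mile driven to litres per km.

L_PER_GAL = 3.785
KM_PER_MILE = 1.609

def gal_per_mile_to_l_per_km(gpm):
    return gpm * L_PER_GAL / KM_PER_MILE

for gpm in (0.1, 0.4, 10, 28, 110):
    print(f"{gpm:>6} gal/mile  =  {gal_per_mile_to_l_per_km(gpm):6.1f} L/km")
```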

This information is only an introduction to the energy-water nexus, but the US government has recently been looking at it more closely, due to research showing how constraints on one side can create problems for the other. This is typified by legislation proposed by Senator Bingaman: the Energy and Water Integration Act of 2009. This bill uses similar language to my Env. Sci. and Tech. paper in measuring the life-cycle impact of water for transportation in terms of water consumed per distance traveled. Hopefully, research, industry, and government efforts can minimize impacts on water resources and use them wisely for our energy future.

The concept of using water resources sustainably, especially for growing biomass for liquid fuels, makes one wonder about the water embodied in imports and exports in general. Is it better for the US to import biofuels from Brazil, grown from sugar cane that might not stress water resources as much as corn agriculture does in the US? The US has a tariff on imported ethanol, but not on imported oil. If the US has an energy-policy goal of reducing imports from the Middle East, then you might expect the tariffs to be the other way around, since the US has friendly relations with Brazil. So it seems the current US energy policy is, quite literally, to trade domestic water for foreign oil. I guess it could be worse.

A bit of a revelation emerged recently when EDF's submission to the UK government's renewable energy strategy consultation was made public. EDF, the French company that is planning to build a new fleet of nuclear plants in the UK now that it owns British Energy, said:

'As the intermittent renewable capacity approaches the Government's 32% proposed target, if wind is not to be constrained (in order to meet the renewable target), it would be necessary to attempt to constrain nuclear more than is practicable'.

EDF presumably want to build some European Pressurised Reactors (EPRs), but noted that, although 'EPR nuclear plant design can provide levels of flexibility that are comparable to other large thermal plant...there are constraints on this flexibility (as there are for other thermal plant). For example, the EPR can ramp up at 5% of its maximum output per minute, but this is from 25% to 100% capacity and is limited to a maximum of 2 cycles per day and 100 cycles a year. Higher levels of cycling are possible but this is limited to 60% to 100% of capacity'.

At present, when demand is low, e.g. at night in summer, fossil fuel plants are throttled back, and they are also run up to full power to meet peak demand. So, given demand changes, they may have to cycle up and down several times a day. With wind plants on the grid they may have to do this a few more times a year, when wind input is low, but that's hardly a major issue. However, if we have a lot of wind and other variable renewables on the grid, and also a lot of nuclear, then balancing the system gets harder. As EDF admit, the nuclear plants can't cycle up and down well - and there will be less fossil plant to do this. So, in the absence of major electricity storage or export facilities, when demand for energy is low but there is plenty of wind, what happens? Do we just curtail wind generation then? Or should we limit how much nuclear capacity we install?
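To see what the quoted EPR flexibility figures imply in practice, here is a small illustrative calculation. The ramp and cycle numbers come from the EDF submission quoted above; the assumption of one deep cycle per day to follow wind lulls is mine, purely for illustration.

```python
# Sketch of what the quoted EPR flexibility figures imply: how long a ramp
# from minimum to full output takes, and how quickly the allowed cycle budget
# would be used up if the plant cycled deeply once a day (assumed, for
# illustration) to balance variable renewables.

ramp_rate = 0.05           # fraction of maximum output per minute (quoted)
min_level, max_level = 0.25, 1.00
ramp_minutes = (max_level - min_level) / ramp_rate
print(f"25% -> 100% ramp: {ramp_minutes:.0f} minutes")      # 15 minutes

cycles_per_day_needed = 1   # assumed: one deep cycle per day
annual_cycle_limit = 100    # quoted limit
days_until_limit = annual_cycle_limit / cycles_per_day_needed
print(f"Cycle budget exhausted after ~{days_until_limit:.0f} days of daily cycling")
```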

Perhaps unsurprisingly, EDF say we will have to limit how much variable electricity generating capacity we build, although they are happier with heat supplying renewables - since they don't enter into the electricity balancing equation: 'A lower volume of intermittent renewable electricity generation and higher volume of renewable heat generation by 2020 would create a better investment climate for all low carbon technologies, including nuclear and CCS'.

Source: 'UK Renewable Energy Strategy: Analysis of Consultation Responses', prepared for the Dept of Energy and Climate Change: www.berr.gov.uk/files/file50119.pdf