

January 2010 Archives

Prices of solar panels and wind turbines are dropping worldwide. This article in the New York Times highlights the role of China in this market. The bottom line is as follows: China is now the world's leading manufacturer of solar panels and wind turbines. This gives it economies of scale, allowing it to offer products at a relatively low world market price.

There are two immediate consequences:

  1. Installation of renewable energy becomes cheaper worldwide 
  2. Manufacturers in other countries face tough competition from Chinese manufacturers

Why was China able to get into this position in the first place? The NYT article offers the following explanation:

"China's biggest advantage may be its domestic demand for electricity, rising 15 percent a year [...] In the United States, power companies frequently face a choice between buying renewable energy equipment or continuing to operate fossil-fuel-fired power plants that have already been built and paid for. In China, power companies have to buy lots of new equipment anyway, and alternative energy, particularly wind and nuclear, is increasingly priced competitively."

In other words: in the US and Europe, renewable energy has to compete against the existing energy supply; in China it doesn't, because there is room for both rapid expansion of coal power plants and renewable energy. Practically unlimited demand requires expansion of energy supply across the board. Another reason for the low prices, of course, is lower labor costs in China.

An interesting exercise is to consider the consequences of this observation the other way around. If saturated economies decided to phase out conventional power plants – coal and nuclear – rapidly, there could be room for economies of scale in renewable energy supply locally, too. However, such a decision can only be made politically, as those who would have to invest massively in renewable energy supply are the current owners of the existing coal plants.

Understanding the dynamics of this game (and the relevance of the argument above) is highly relevant for OECD countries. For example, in Germany the phase-out of nuclear plants is being renegotiated, with stakeholders arguing that longer running times for nuclear plants serve as a 'bridge' towards a renewable energy future. From a different perspective, the nuclear power plants pose a barrier to economies of scale in renewable energy supply, with two consequences: (a) losing a competitive edge in international competition in renewable energy technologies and (b) getting the intertemporal optimization wrong. The intertemporal optimization point is that investing in renewables now helps to get prices down more quickly and lowers abatement costs in the future (this argument is so central that it deserves its own blog post).
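
To make the intertemporal point concrete, here is a minimal experience-curve sketch; the 20% learning rate, the starting cost and the capacity figures are illustrative assumptions of mine, not numbers from the article.

```python
# Illustrative experience-curve sketch: unit cost falls by a fixed
# "learning rate" for every doubling of cumulative installed capacity.
# The 20% learning rate, the $4,000/kW starting cost and the capacity
# figures are assumptions for illustration, not figures from the article.
import math

def unit_cost(cumulative_gw, initial_gw=10.0, initial_cost=4000.0, learning_rate=0.20):
    """Cost per kW once cumulative_gw of capacity has been installed worldwide."""
    b = -math.log2(1.0 - learning_rate)   # experience-curve exponent
    return initial_cost * (cumulative_gw / initial_gw) ** (-b)

for gw in (10, 20, 40, 80, 160):
    print(f"{gw:4d} GW cumulative -> ~{unit_cost(gw):5.0f} $/kW")
# Four doublings cut the unit cost to roughly 41% of its starting value,
# which is why deploying early lowers abatement costs later.
```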

The debate for and against nuclear power from the climate perspective is still open. The Chinese evidence of economies of scale, however, provides some quantitative indication in favor of phasing out conventional plants rapidly. 

The wrong FIT?


In April the government is due to launch a Feed-In Tariff for small renewables – the 'Clean Energy Cashback' scheme – and details of the tariff rates should emerge next week. Under it, electricity supply companies will offer guaranteed payments for electricity generated by renewable energy devices that consumers have installed in their own homes, or by small projects installed by community organisations and the like, up to a limit of 5 MW. See my earlier blog for details: http://environmentalresearchweb.org/blog/2009/10/uk-tries-to-get-fit.html

Eligible technologies include micro wind turbines, photovoltaic modules, micro hydro plants, and biomass-fired units. In the domestic sector, PV solar seems likely to dominate – micro wind is only really viable in a limited number of places and micro biomass units for electricity generation (usually micro CHP) are still relatively novel.

The Feed-In Tariff (FiT) that has been running for several years now in Germany has certainly helped get PV established, so maybe that will also happen here. The theory is that this guaranteed subsidy helps build the market for PV, so that prices begin to fall – and the FiT support can then be reduced. The German system has a built-in annual price 'degression' formula to take account of that. And it seems to work – PV prices have fallen and installed capacity has grown.
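
As a rough illustration of how such a degression formula operates, here is a minimal sketch; the starting tariff and the 9% annual degression rate are hypothetical values, not the actual German rates, which varied by technology and year.

```python
# Hypothetical feed-in tariff with a fixed annual degression rate.
# The 43 ct/kWh starting tariff and the 9% degression are assumed values
# chosen only to illustrate the mechanism, not the real German rates.

def tariff(year, start_tariff=43.0, degression=0.09, start_year=2010):
    """Tariff (ct/kWh) locked in for installations commissioned in `year`."""
    return start_tariff * (1.0 - degression) ** (year - start_year)

for year in range(2010, 2016):
    print(year, f"{tariff(year):5.1f} ct/kWh")
# Each later cohort of installations receives a lower, but still guaranteed,
# rate, so support shrinks as the technology moves down its learning curve.
```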

However, it has to be said that this has come at a cost: the supply companies pass the FiT charges on to all electricity consumers, and PV is expensive. But since the PV element of the German FiT has so far been relatively small (most of the FiT has been used to support wind, which is cheaper), the overall cost of the FiT to consumers has been relatively small – initially 3–4% or so extra on average bills. However, with demand for PV increasing due to the FiT and the reduced cost of PV, there have been concerns about loading consumers up with the higher costs. That has already led to a cap being placed on the total PV capacity supported under the FiT in Spain. And the German government has now decided to reduce the FiT support rate for PV by 15% to reduce the cost to consumers.

Initially, the German government was clearly convinced that PV was a major option for the future – as is widely accepted to be the case. It did of course have to balance the costs to consumers, the expected reduction in prices as the FiT helped PV move down its learning curve, and how much capacity was wanted, but it obviously felt it was right to push PV ahead rapidly. Now, however, following a shift to the political right, it is being more cautious. That change was no doubt buttressed by claims by the German news magazine Spiegel that the additional costs of subsidizing new PV installations in 2009, based on initial industry estimates for new installations of around 700 MW, could be as high as €10bn over the course of the 20-year FiT programme. And also by the study published last year by RWI (Rheinisch-Westfälisches Institut für Wirtschaftsforschung), which claimed that the extra cost added to consumers' bills was around 7.5% p.a., and calculated that the total cost of PV to German electricity users could be more than €77 bn over a 25-year period. These estimates may be inflated: the German Institute for Economic Research (DIW) put the latter cost at €50 bn. But it did seem that a continuing rapid expansion of PV was going to put more cost on consumers.

Basically the problem is that, although they are falling under the FiT, PV costs are still high at present – much higher than for other renewables – and the rapid expansion of PV meant the cost to consumers was too high. A problem of success, really! By contrast, near-market options like large wind turbines are much cheaper per kW and per kWh, and so FiT support for wind cost consumers less in total. And so wind has been the main focus, with the result that the German FiT has helped support 25 GW of wind capacity, and only about 4 GW of PV.

The UK FiT

What does this mean for the UK? If FiTs aren't that good at supporting expensive options like PV without loading consumers up with high costs, arguably we've got our approach the wrong way around: FiTs should be used for the big cost-effective stuff, yet we are using ours for the small expensive stuff.

It could be that, nevertheless, as the UK's 'Clean Energy Cashback' FiT gets going, customers who are willing and able to borrow money to install the equipment will push ahead, as happened in Germany. The guaranteed FiT income does make it easier to get loans from banks. And it's certainly better than the UK's dismal ROC/Renewables Obligation system. But what smaller expensive projects like PV really need is up-front capital grants. The UK tried that earlier with the PV grants system in the Low Carbon Building programme – but the level of demand for grants was such that it overwhelmed the relatively small scheme, and there were limits to how much more taxpayers' money the government felt it could provide. Hence the interest in a FiT for PV and other small renewables – then it's the consumers who pay.

The FIT may work well for some people. At present, for those with money, investing in PV solar will give a better return, via the FiT, than banks offer! But what about those without money? In the pre-budget report last December, the government said that 'although feed-in tariffs and the Renewable Heat Incentive will make payments over the life of installations, low-income households may still find it difficult to meet upfront costs'. It added that 'building on the experience of pilot projects for Pay as You Save financing and Warm Front,' it will consult 'on measures to help low-income households take advantage of clean energy cash-back'. That could help. And some community schemes may also prosper.

Even so, sadly, not much is expected of the UK FiT. At best, the government sees it as delivering just 2% of electricity by 2020. The Renewables Obligation (RO) is seen as the main way ahead, helping us to get about 30% of electricity from renewables, mostly wind, by 2020. So far, using the RO, plus a few capital grants, we've barely made it to 6%, with only tiny amounts of PV. And the RO has loaded consumers up with much higher prices per kW and kWh than the German FiT system – even though the latter also included support for much more PV. Ofgem, the energy regulator, has reportedly estimated that the RO cost consumers £1 bn last year and a total of £4.4 bn so far. But it has only helped support 4 GW of wind generation capacity (some of which also benefited from grants), compared to the 25 GW installed under the FiT in Germany. So in general, in terms of capacity and costs, FiTs are a much better option.
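
As a back-of-envelope check on the scale of those figures, the quoted Ofgem estimate and the supported wind capacity give the cumulative consumer cost per kilowatt supported; this ignores output per kW and the PV share, so it is indicative only.

```python
# Back-of-envelope cost per kW using the figures quoted in the text:
# the RO has reportedly cost consumers about £4.4bn so far and has helped
# support roughly 4 GW of wind capacity. No equivalent cumulative figure
# for the German FiT is quoted here, so only the UK number is computed.

ro_cost_gbp = 4.4e9   # cumulative RO cost to consumers (Ofgem estimate, as quoted)
ro_wind_kw  = 4e6     # ~4 GW of supported wind capacity, in kW

print(f"RO support so far: ~£{ro_cost_gbp / ro_wind_kw:,.0f} per kW of wind capacity")
# Roughly £1,100 of cumulative consumer cost per kW supported, before even
# counting the capital cost of the turbines themselves.
```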

Whether that will prove true of the limited UK version for small projects remains to be seen. It will be interesting to see what the government comes up with next week in terms of tariff levels. Will PV get enough to move ahead seriously? And if so, what will that cost us?

Interest in using the scheme seems high. A YouGov survey for Friends of the Earth, the Renewable Energy Association and the Cooperative Group found that 71% of homeowners said they would consider installing green energy systems if they were paid enough cash – and 64% of those asked thought that the government's plans were not ambitious enough. But what if it puts bills up significantly? The poll showed that 70% of respondents said that they would be prepared to pay an extra 10p on their electricity bills each month (£1.20 annually), on top of the already proposed annual increase of £1.17, until 2013 when the scheme is due to be reviewed. So maybe there is an appetite for change.

This blog is going to be a little bit different, because I need to let off steam about Himalayan glaciers, addressing myself mainly to readers, if any, who don't believe in global warming.

Ben Santer is a climatologist who has done much more than most to advance our understanding of human influence on the climate. In his words from the IPCC's Second Assessment, published in 1995, "The balance of evidence suggests a discernible human influence on climate." Advances since 1995 are encapsulated in the words of the IPCC's Fourth Assessment, published in 2007: "Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations."

In a press conference call last week, Santer asserted that it would be wrong to use the mouse to cast doubt on the elephant. He was reacting to recent excitement in the media about Himalayan glaciers. Himalayan glaciers are the mouse in the room. Denialists evidently have no interest in the fact that sustained dispassionate study of the Earth and its atmosphere shows unequivocally, as summarized in the IPCC's periodical assessments, that there is also an elephant in the room.

Santer is absolutely right about both the elephant and the mouse. I, however, want to focus on the mouse.

I am the guy who found the typo. That is, I found the sources of the mistaken claim, made in the second volume (section 10.6.2) of the IPCC's Fourth Assessment, that Himalayan glaciers are very likely to disappear by 2035 or perhaps sooner. I am also the guy who tipped off Fred Pearce, the author of the 1999 news story in New Scientist that is the de-facto source of the mistaken claim. Pearce's story in last week's New Scientist (16 January) is the spark that ignited the present firestorm threatening the IPCC in general and its chair, Dr Rajendra Pachauri, in particular. I am also the guy who, with three fellow glaciologists, wrote to Science describing the nature of the Himalayan errors.

Finally, I am a guy who like several thousand other scientists holds a tiny share of the 2007 Nobel Prize for Peace along with Dr Pachauri. That is, I contributed to the IPCC's Fourth Assessment.

Rated on clarity and factual accuracy, the widespread media coverage of the Himalayan mistake has ranged from not very good to very good indeed. On the whole, except for some misattributions and for their addiction to sound bites, I have no substantial fault to find with the journalists. But many of the online news outlets invite comments from readers. With rare exceptions, those comments make unutterably dismal reading.

No scientist can fault members of the public for not being experts. On big questions that are also complicated, they have to trust somebody. Any failure of trust must be painful. But that does not excuse illogic, ignorance and failure to check facts.

The most illogical of the comments on Himalayan-glacier stories last week are those that take the part for the whole. Those commenters who dismiss the entire Fourth Assessment should read the part of it (section 4.5 of the first volume) to which I contributed. If they find anything wrong with it (which I doubt) they should let me know and I will try to fix it. Pachauri is dead right when he says that the Himalayan mistake was a collective failure. We could have fixed section 10.6.2 of the second volume, but failed to because the right mechanisms for making 3000 pages of text all consistent with one another were not in place. We have to do better next time.

Ignorance is unpardonable, or at least very risky, if you feel inclined to shoot your mouth off. Speech is free, but if you want to be taken seriously you need to know your stuff. One point on which many of the newspaper readers are ignorant has to do with money. I don't know how many salaried persons work for the IPCC. But none of the contributing authors is paid, at least up to the level of Chapter Lead Author. The unknown colleague who wrote the mistaken paragraph about Himalayan glaciers was not paid to do so. I got nothing for the couple of hundred hours I put in on my contribution to the Fourth Assessment, or for tracking down the typo. I do get a salary, but it is for being a university professor. Contributing to IPCC assessments is not what they pay me for.

Failure to check facts is a tough one for an IPCC contributor to tackle, given that we are talking about a failure of IPCC to check its facts. But the difference is that we have to take the consequences and the irresponsible commentator doesn't.

If you write that the atmospheric concentration of carbon dioxide is "widely accepted as being about 350 parts per million", and walk away, it doesn't do much good for me to answer that it is known with high confidence to be between 385 and 390 parts per million (in 2009, on a global annual average). If you write that the hockey-stick graph "has been discredited", you have a good chance of getting away with it, but that doesn't stop it being a wrong fact. Every objection to the hockey-stick graph, and there have been some plausible ones, has been unpicked, found to have no scientific basis, and explained. If you write that "Latest sea level measurements from benchmark island shows sea level is dropping", you need to be told, if you are still there, that that is rubbish. I don't know what "benchmark island" means, but the current best estimate of the rate of sea-level rise, averaged over the world during 2003 to 2008, is +2.5 millimetres/yr, give or take 0.4 mm. (I suspect it might be on the low side, but that is another story.)

You may have noticed that there is nothing about Himalayan glaciers in the last paragraph. That is because there is nothing about Himalayan glaciers in the readers' comments. Although they should be, they aren't interested in Himalayan glaciers.

Probably the least excusable of the failings of the denialist commentators, however, is muddle-headedness. Many of the opinions they express are actually about the levying and spending of tax, and are opinions to which as taxpayers they are clearly entitled. But you need a clear head to grasp that opinions about tax are not a warrant for any opinion whatsoever about Himalayan glaciers or the findings (as opposed to the funding) of the IPCC.

Few as they are, the real facts about Himalayan glaciers are disturbing enough that there is no need – and of course no justification – for exaggerating them. Allowing for undersampling, measurement uncertainty and all the other things that make scientific pronouncements fuzzy, Himalayan glaciers are indeed losing mass, and it is more likely than not that they are losing mass faster now than a few decades ago.

When you make a scientific pronouncement about the future, you add new dimensions of fuzziness. Still, it is easy to show on the back of an envelope that there is no chance at all of Himalayan glaciers being gone by 2035. There is no plausible scenario, even with plausible exaggeration of human interference with the climate, that would deliver the energy required to melt the Himalayan ice in the time available. But Himalayan ice is a non-renewable resource. The more of it we pour into the ocean, the less our stock of fresh water, the less our chance of keeping life bearable for the people of the Indian subcontinent, and the less our chance of keeping sea-level rise within reasonable bounds.
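
For the curious, a back-of-envelope version of that calculation might look like the sketch below; the ice volume and glacierized area are order-of-magnitude assumptions of mine rather than measured figures, but the conclusion is robust to them.

```python
# Back-of-envelope: what continuous extra energy flux would be needed to
# melt the Himalayan glaciers by 2035? The ice volume (~4,000 km^3) and
# glacierized area (~35,000 km^2) are order-of-magnitude assumptions;
# the ice density and latent heat of fusion are physical constants.

ICE_VOLUME_M3   = 4.0e12          # ~4,000 km^3 of ice (assumed)
GLACIER_AREA_M2 = 3.5e10          # ~35,000 km^2 of glacier surface (assumed)
RHO_ICE         = 900.0           # kg/m^3
LATENT_HEAT     = 3.34e5          # J/kg
SECONDS         = 25 * 365.25 * 24 * 3600   # 2010 to 2035

energy_needed = ICE_VOLUME_M3 * RHO_ICE * LATENT_HEAT        # joules
flux_needed   = energy_needed / (GLACIER_AREA_M2 * SECONDS)  # W per m^2 of glacier

print(f"Energy to melt the ice:      ~{energy_needed:.1e} J")
print(f"Extra flux needed for 25 yr: ~{flux_needed:.0f} W/m^2")
# The answer comes out at tens of watts per square metre of *additional*
# melt energy, sustained year-round for a quarter of a century - far beyond
# anything plausible warming could deliver.
```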

Your options as a denialist are limited. You can elect legislators who have accepted the IPCC's findings and will use tax as an instrument for encouraging people to get to work more cheaply. Or you can allow them to use the law, making it an offence to drive to work. Or you can continue to refuse to accept the presence of the elephant in the room, seizing on mice as an excuse. Sooner or later the market will make driving to work too expensive for you, although that will be among the least of your worries. Pick the least unpalatable option.

Neil Crumpton, a member of the Bath-based Claverton Energy Group of energy experts and practitioners, and also Friends of the Earth Cymru's energy campaigner, has produced a draft zero-carbon, non-nuclear scenario to 2050 and beyond intended to initiate feedback and debate in the Claverton Energy Group. It aims to identify the low-carbon energy generating and supply infrastructure needed to build a resilient, demand-responsive UK energy system. It relies heavily on renewables, urban heat grids, possibly suburban hydrogen networks, and carbon capture and storage (CCS) during the four decades of transition.

It is very ambitious. Renewables would supply about 200 TWh/y by 2020, scaling up to more than 1,100 TWh/y by 2050. Offshore windfarms, at least 10 miles from any coast and occupying some 20,000 sq. km, would supply ~550 TWh/y, about half his estimated 2050 final energy demand. But the real innovation starts on the heat side, with much use of Combined Heat and Power plants and large heat pumps feeding industrial users and town/city heat grids. Up to 15 GWe of industrial Combined Heat and Power (CHP) plants would supply industrial clusters, while 15 GWe or more of urban Combined Heat Pump and Power (CHP&P) schemes (typically 0.5–100 MW) would distribute reject heat from fast-response 'aero-derivative' gas turbines and large heat pumps.

They would feed heat grids, with up to 5 GWe of 'initiator' CHP&P schemes progressively linked up to form wider district and eventually town-wide and city-wide heat grids over the next 15–20 years. Large-scale heat pump installations would deliver renewable heat from air and ground, and from solar thermal and geothermal sources.

Even more innovatively, large thermal stores (accumulators), up to traditional gasometer-scale, would optimise the system. Peaking renewable electricity, particularly from marine technologies, would primarily be stored as heat at electricity 'regenerator' sites comprising a mix of technologies like molten salt stores and 10 GWe or more of steam turbines, electrolysers and hydrogen fuel cells and compressed air. Chemicals and fuel synthesis could also feature and connection to the heat grids would greatly aid conversion and regeneration efficiency and heat demand response.

Crumpton says 'such an energy storage and electricity regeneration capability would be a significant aid to delivering the UK's large but highly variable renewable energy resources, particularly wind energy, to consumers as and when demanded'.

Initially the energy input for the heat grids would be mostly from gas, but all the gas-fired industrial CHP and urban CHP&P capacity would be progressively converted to hydrogen, piped in from coal and biomass CCS gasifiers. There could also be a direct solar heat input to local heat stores, and possibly also some from geothermal sources. Low-pressure hydrogen might also be supplied to the 9.5 million suburban homes via the existing (upgraded) gas network to power 10–30 GWe of mCHP boilers (possibly fuel cell) and domestic heat pumps.

All large emitters would be fitted with Carbon Capture by 2025. CCS fitted gasifiers co-fired with 15+% biomass or imported solar synthetic fuels would then provide 'carbon-negative' baseload to the extent climate protection policy required. The 10 GW of CCGTs already consented would operate until about 2040, then be retained for occasional duty during prolonged winter anti-cyclones.

There would also be HVDC links to Europe, including Norwegian hydro and pumped storage schemes, which would help optimise the system to high marine renewable variability, and open the option of delivering net imports of around 10% of final energy from Saharan wind and concentrated solar power schemes.

The complete system, with molten salt heat stores at regeneration sites, would comprise some 50 GW of firm electricity generation, plus peaking plant, suburban mCHP, and inter-connectors. He says the system's firm generation and storage capacity would be designed to supply 'smart' demand even during a deep winter anti-cyclone lasting days. And he says that 'Depending on the availability of sustainable bio-sources and transport sector emissions, the UK could be net zero carbon by 2040'.

It is of course all very speculative, although the use of large heat pumps is not novel: The Hague has a 2.7 MW (ammonia) seawater community heating scheme and Stockholm has a total of 420 MW (multiple 30 MW units) of heat/cold pumps feeding its district heating/cooling grid. Crumpton says 'The large heat pumps would harness heat from sources which 11 million urban domestic heat pumps could not do, including large solar thermal arrays and geothermal'.

Using coal still might worry some environmentalists, but there would be CCS and he says it would be used in minimal amounts by 2050. Generating and piping hydrogen is also a novel idea – but there are now some pilot schemes in the UK. And piping heat is much more common – on the continent.

Installing that, and the rest of the system, would though involve a lot of new infrastructure, but he claims that 'strategic siting of the gasifiers would combine locations with good transport access for coal and biomass (dock-sides, railheads, collieries), together with hydrogen pipeline routes to CHP schemes, and CO2 pipelines to geological storage sites under the North Sea or Liverpool Bay'. And similarly 'regeneration schemes should be sited adjacent to industrial clusters, refineries, and existing chemical sites with hydrogen, CO2 and heat grid pipeline access'. In addition, 'coastal locations with direct HVDC connection from marine renewables would minimise need for new cross-country transmission lines'.

So disruption would be reduced. Nevertheless, building the heat grids (polypropylene pipes) would involve some short-term local disruption to pavements and roads during the pipe/conduit laying. But he says it would 'provide low-carbon energy infrastructure for the children of today and future generations'.

The draft scenario is outlined in more detail in the current issue of Renew (183): www.natta-renew.org

How do we get land transport on the track towards sustainability?

This was one of the questions that saw intense and exciting exchanges last week at the Transportation Research Board meeting and at a special conference on Transforming Transportation in Washington, D.C. From a climate perspective, sustainable translates into low-carbon transportation. However, sustainable transportation also comprises equity and accessibility, public health – including air pollution and noise, but also effects related to physical activity – and the time and monetary costs of transportation.

Land transportation is responsible for 5–30% of countries' greenhouse gas emissions. Currently, transport's share of GHG emissions is significantly lower in developing countries than it is in OECD countries, notably the U.S. However, emission growth is heading north. Sustainable transport policies are not incredibly challenging to understand. They include pedestrian facilities, a network of well maintained bicycle lanes, parking facilities for bicycles, a bus or tram network for medium-sized cities, and an additional subway/metro network for larger cities and metropolitan regions. Crucially, non-motorized transport and public transit must have priority over car transport wherever these modes compete for space. The spatial dimension is indeed the most interesting and challenging: what is the optimal land-use policy in relation to sustainable transport? When facilities, jobs and residential areas are well connected to public transit, sustainable modes of transport can guarantee accessibility. Now, sustainable transport is less expensive than building highways, but it still must be financed. Let's look at the financial flows of the development banks as of 2007 (figure below).

[Figure (adb.png): financial flows of the development banks in the transport sector as of 2007. Source: ADB, 2009]

· The World Bank and the Asian Development Bank commit about three-quarters of their transport lending to roads and highways (ADB, 2009). There is basically no funding for pedestrian and cycling infrastructure.

· More generally, multilateral development banks still fund dirty (fossil-fuel-related) projects with 4 times more money than green projects. Bilateral agencies are only a little better (Hicks et al., 2008; for some further discussion and data see Creutzig and Kammen, 2009).

· According to a former Bank staff member, the World Bank has never funded a pedestrian project. One proposed project was rejected because the financing volume was too small.

If financing of sustainable transport projects increased substantially, a huge number of projects could be funded, as bicycle lanes, and even bus rapid transit systems, are not incredibly expensive. Of equal importance is the reduction of conventional projects, such as highway construction. In transport, infrastructure supply induces demand, and additional road networks increase automobile dependency and can even lock developing countries into car dependency, as has happened to other countries before. Hence, a goal from the top-down perspective is to invert the factor of 4 in financing: 4 times more money for sustainable transport projects than for road construction (certain road projects probably still make sense). As banks and donors work mostly with large chunks of money, and as sustainable transport projects work best in a systems approach, it is in many cases best to bundle projects into city-wide packages.

As a side note: the factor of 4 also shows up in the papers of last week's TRB conference. A simple word search in TRB papers found approximately 4 times more hits for 'highway' (1822) than for 'sustainable' (337), and for 'cars' (1822) versus 'pedestrians' and 'bicycles' combined (495). Science needs to switch, too.

ADB, 2009. Rethinking Transport and Climate Change. Working paper series.

Hicks, R., Parks, B. C., Roberts, J. T., & Tierney, M. J. (2008). Greening Aid? Understanding the Environmental Impact of Development Assistance. Oxford, UK: Oxford University Press.

Creutzig, F., & Kammen, D. M. (2009). The Post-Copenhagen Roadmap Towards Sustainability: Differentiated Geographic Approaches, Integrated Over Goals. Innovation, 4(4), 301–321.

The term "basket-of-eggs topography" in glacial geomorphology is a metaphor for the appearance of drumlin fields. Drumlin is Gaelic for a rounded but elongate hill or ridge. Where you find one drumlin you usually find a whole field. They tend to be quite tightly packed, and a basket of eggs is a rather apt analogy.

More apt than you might think. Laying an egg is a practical problem in hydrodynamics, solved long ago by our amphibian and reptilian ancestors. Forcing glacier ice over a resistant bed is an analogous problem, at least to the extent that both the bird and the glacier – usually an ice sheet – have to balance force against resistance. One of the most distinctive attributes of drumlins is that they are smooth.

This does not get us very far, though. Drumlins might look like eggs because they represent roughening of an originally smooth (flattish) glacier bed or, equally likely, smoothing of a rough bed. But why did the ice sheet and its bed find it mutually convenient to generate the particular amount of smoothness that we can see today? Why don't we see drumlin fields everywhere? Are there drumlin fields beneath the modern Antarctic and Greenland Ice Sheets? And if so, can we learn about the behaviour of ice sheets, and in particular their behaviour in the worrisome near future, by working out how the ancient ice sheets drumlinized their beds?

The answer to the last question is Yes. Progress, however, has been frustratingly slow. Several intriguing papers demonstrate, either analytically or by numerical modelling, how drumlins could possibly form, but as yet there is no sign of a compelling universal explanation.

Now, Chris Clark and co-authors have fallen back on an old strategy, that of compiling a large sample of simple measurements in the hope that insight will emerge from the sheer weight of the evidence. It is easy to criticize this approach as mindless, and it is true that they have not tackled the big questions, but in my view they have indeed produced food for thought.

The first thing to note about the Clark sample is its impressive scale. They counted all of the drumlins in the British Isles – all 58,983 – and assembled aggregate statistics for half as many more from other glaciated regions. Inadequate sampling is not likely to be one of the major concerns about their results.

They measured the length and, when possible, the width of each drumlin. The average elongation (length divided by width) is 2.9, and the most common elongations are between 2.0 and 2.3, so drumlins are typically two or three times as long as they are wide.

Several non-obvious facts emerge immediately. First, drumlin lengths and widths have unimodal frequency distributions (well-defined single peaks). I buy the argument that this means that "drumlin" is a meaningful single concept and not, for example, a jumble of other concepts. Second, drumlins are no shorter than 100 m. This suggests that, whatever dynamical phenomena are represented by the word "drumlinization", they have a physical lower limit. (To me it smells like a fraction of the ice thickness, but that is as far as my intuition takes me.) Third, the frequency distributions are skewed, meaning that increasingly small proportions of the total sample are very long (or wide, or elongate). There does not seem to be any particular upper limit to the dimensions of drumlins. Perhaps they grade into the very elongate features that geomorphologists call megaflutes.

What Clark and colleagues find most surprising about their sample is that it exhibits a clear scaling law: for any given drumlin length, the greatest observed elongation is equal to the cube root of the length. I agree that this is both clear and surprising, and that it must mean something, although I have no idea what. But for me the most striking thing about their paper is Figure 7, a map that shows that drumlins are essentially lowland landforms. (For some reason, they left the ice-sheet margin off this map, but you can find it in many textbooks. Right now I am looking at Figure 12.1 of Glaciers and Glaciation by Benn and Evans.) Drumlin-free lowlands are not uncommon in the glaciated parts of the British Isles, but all of the uplands, especially the most rugged parts, seem to be entirely free of drumlins. Were they already too rugged, so that drumlinization was unnecessary? Was the ice too thin? Too slow? Too cold? As I said, food for thought.
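
For readers who want to see what the scaling law implies in numbers, here is a minimal sketch that simply evaluates it; lengths in metres are assumed, and the law is taken exactly as reported (maximum elongation equal to the cube root of length).

```python
# The reported scaling law: for a drumlin of a given length, the greatest
# observed elongation (length divided by width) equals the cube root of the
# length. Lengths are assumed to be in metres; the sketch just evaluates
# what the law implies for the narrowest drumlin possible at each length.

for length_m in (100, 300, 1000, 3000, 10000):
    max_elongation = length_m ** (1.0 / 3.0)
    min_width_m = length_m / max_elongation
    print(f"length {length_m:6d} m   max elongation {max_elongation:5.1f}   "
          f"minimum width ~{min_width_m:5.0f} m")
# A 1,000 m drumlin can be at most ten times longer than it is wide, i.e.
# no narrower than about 100 m; longer drumlins may be relatively more
# elongate, but only as the cube root of their length.
```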

I've been working on some analysis lately tying energy return on energy investment (EROI) to financial parameters such as project internal rate of return and levelized cost of energy. An interesting question arises when you think about the energy costs of financing a project. This is a particularly relevant question today, given the level of scrutiny and discussion ongoing around financial and banking regulation.

The conventional economic wisdom is that financial speculation, mostly in real estate, combined with a decade of overspending and a general lack of savings, led to a bubble in economic growth (e.g. GDP) that then popped, resulting in a recession. We are now told that, thanks to massive government spending, the recession caused by this overspending is close to ending. This logic certainly sounds backwards: that is to say, the way the government claims we will get out of a financial downturn caused by spending beyond the rate of economic growth is, in fact, to spend more money than we are making. Of course this reverse logic has not convinced many people. I now look at it by contrasting energy and money from the viewpoint of debt financing.

I'll define debt financing simply as spending less money/energy at the beginning of a 'project' than the total required cost of the project. Thus, if my wind project costs $2M and I use debt financing, I might give a bank $400,000 at the beginning of the project and pay the other $1.6M over 20 years. However, when manufacturing and installing the wind turbine, I can't consume only 20% of the energy inputs at the beginning of the project and consume the other 80% of the required energy inputs over the next 20 years. This is because approximately three-quarters of the energy inputs for a wind turbine are consumed before the turbine actually starts to operate. The other 25% of the energy inputs are consumed roughly uniformly for operations and maintenance while the turbine is generating electricity.

We know that the energy for manufacturing and construction has to be spent at the beginning of the project, and we know it takes 2–4 years to pay back this energy (when considering the energy consumed by employees of the wind farm) in the form of electricity generated by the turbine. Note that most life-cycle analyses of energy payback time for wind turbines count only the fuel inputs to the wind turbine life cycle, such that the energy payback is calculated at less than one year for modern turbines. Either way, 75% of the energy inputs (analogous to monetary capital costs) are required at the beginning just to make the wind turbine function in the first place. Twenty percent of a wind turbine produces no energy. With this point of initial energy consumption in mind, how do we build turbines in the first place without "energy financing"? The answer is that nature inherently provided the "energy financing" for us over the last 100 million years, and we call the energy savings accumulated over that time "fossil fuels".
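
A minimal sketch of the contrast being drawn here: the $2M project cost, the 20% down payment and the roughly 75%/25% split of energy inputs come from the discussion above, while spreading the remainder uniformly over 20 years is a simplification of mine.

```python
# Contrast between monetary debt financing and the physical energy-input
# schedule of a wind project. The $2M cost, the 20% down payment and the
# ~75%/25% split of energy inputs come from the text; spreading the
# remainder uniformly over a 20-year life is a simplification.

LIFETIME_YEARS = 20
PROJECT_COST = 2.0e6   # dollars

# Money: 20% down, the remaining 80% paid off over the project lifetime.
money_schedule = [0.20 * PROJECT_COST] + [0.80 * PROJECT_COST / LIFETIME_YEARS] * LIFETIME_YEARS

# Energy (as a fraction of lifetime energy inputs): ~75% is consumed up front
# in manufacture and installation, ~25% goes to operations and maintenance.
energy_schedule = [0.75] + [0.25 / LIFETIME_YEARS] * LIFETIME_YEARS

print(f"Money spent in year 0:  {money_schedule[0] / sum(money_schedule):.0%}")
print(f"Energy spent in year 0: {energy_schedule[0] / sum(energy_schedule):.0%}")
# Money can be deferred; the energy cannot. Three-quarters of it is sunk
# before the turbine generates a single kilowatt-hour - which is why fossil
# fuels have, in effect, provided the "energy financing" so far.
```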

Thus, the concept of financing is one lens by which to view the difference between energy and money. Because energy is a physical quantity that must obey physical laws, we cannot make up concepts, such as financing, and have them apply to energy. It is arguable that the level of debt financing allowed in a society has a strong correlation with EROI. That is to say, it takes energy consumption 'now' to make goods 'now', so all of the extra energy to make those goods is based upon the extra energy (EROI > 1) that is currently flowing from the energy sector to all others.

Because society's high EROI for the last 200 years has been based upon a stock, or 'storage', of energy in the form of fossil fuels, it is likely that a similar EROI from a flow of renewable energy (mostly solar-derived resources) will not yield a society with as much energetic/economic productivity or societal complexity. This lower potential for a complex society based upon renewables arises because, to create a stock of energy from renewable energy flows, we must build energy storage systems to work with the renewable technologies. With fossil fuels, nature built the storage systems in the ground for us. And those stored energy resources needed tens of millions of years of sunlight – the reverse of financing, and the definition of saving.

Thus, we are currently spending the energy savings that nature provided us a million times faster than it took to build that fossil-fuel 'nest egg'. What we do and learn while spending this nest egg will determine how complex a society we can have without it. Time has an arrow, and if we consume the same amount of energy 200 years from now as we did 200 years ago, we will not necessarily have the same standard of living. We can only speculate about how different 2200 will be from 1800, but our actions today will certainly shape the outcome. I'm betting that learning how to live without fossil fuels while we still have them will give those in 2200 a better chance of living better than those in 1800.

The debate over how to deal with the variable energy output from wind turbines continues to rumble on. Some say that, when wind availability is low, there will be a need for extensive back-up from conventional plant to maintain grid reliability. However, this back-up may already exist: we have a lot of gas-fired capacity, much of which is used regularly, on a daily basis, to balance variations in conventional supply and in demand. Balancing wind variations means this will just have to be used a few more times each year, adding a small cost penalty and undermining the carbon savings from using wind very slightly. But some say we will need much more than that. A report from Parsons Brinckerhoff (PB) claims that "the current mix of generating plant will be unable to ensure reliable electricity supply with significantly more than 10 GW of wind capacity. For larger wind capacity to be managed successfully, up to 10 GW of fast response generating plant or controllable load will be needed to balance the electricity system".
www.pbpoweringthefuture.com

"Controllable load" includes the idea of having interactive smart grids which can switch off some devices when demand is high or renewable supplies are low.

However, even if that option is available, some say that, with more wind on the grid, we will still need more back-up plant than we have in order to meet peak demand. By contrast, wind energy consultant David Milborrow claims we have enough, and that some fossil-fired plants can actually be retired when wind capacity is added. That depends on the "capacity credit" of wind – how much of the wind plant capacity can be relied on statistically to meet peak demand. Milborrow puts the capacity credit of wind at around 30% with low levels of wind on the grid, falling to 15% at high levels (at, say, 40% wind on the grid). That indicates how much fossil plant can be replaced.
www.greenpeace.org.uk/media/reports/wind-power-managing-variability
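
As a rough illustration of what those capacity-credit figures imply, the sketch below converts them into the amount of conventional plant that could statistically be retired; the wind fleet sizes are illustrative, while the 30% and 15% credits are Milborrow's figures as quoted above.

```python
# Capacity credit: the fraction of installed wind capacity that can be
# relied on statistically at peak demand, and hence how much conventional
# plant it can displace. The 30% and 15% credits are Milborrow's figures
# as quoted above; the wind fleet sizes are illustrative.

def displaced_gw(wind_gw, capacity_credit):
    """Conventional capacity (GW) the wind fleet can statistically replace."""
    return wind_gw * capacity_credit

print(f"10 GW of wind at 30% credit -> ~{displaced_gw(10, 0.30):.0f} GW of plant displaced")
print(f"40 GW of wind at 15% credit -> ~{displaced_gw(40, 0.15):.0f} GW of plant displaced")
# The credit falls as penetration rises, so each extra gigawatt of wind
# displaces less conventional plant - but the figure never drops to zero.
```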

PB see it very differently: "A high penetration of intermittent renewable generation drastically reduces the baseload regime, undermining the economic case for more-efficient plant types with lower carbon emissions."

Milborrow admits that balancing wind variations has the effect of reducing the load factor for thermal plant, but says that this only costs ~£2.5/MWh at 20% wind, or ~£6/MWh at 40%. PB will have none of this: "Very high early penetration of wind generation is likely to have adverse effects on the rest of the generating fleet, undermining the benefits of an increased contribution of renewable electricity."

PB also seems to slam the door on a possible way out, importing power from continental Europe, the wider footprint then helping to balance variations across a much larger geographical area. It says: "Electricity interconnection with mainland Europe would offer some fast-response capability, but would be unlikely to offer predictable support. Without additional fast-response balancing facilities, significant numbers of UK electricity consumers could regularly experience interruptions or a drop in voltage."

Addressing the interconnector issue, among others, TradeWind, a European project funded under the EU's Intelligent Energy-Europe Programme, looked at the maximal and reliable integration of wind power in Trans European power markets. It used European wind power time series to calculate the effect of geographical aggregation on wind's contribution to generation. And it looked ahead to a very large future programme, with its 2020 Medium scenario involving 200 GW – a 12% pan-EU wind power penetration. It found that aggregating wind energy production from multiple countries strongly increased the capacity credit.
www.trade-wind.eu

It also noted that "load" and wind energy are positively correlated – improving the capacity factor, the degree to which energy output matches energy demand. For the 2020 Medium scenario the countries studied showed an average annual wind capacity factor of 23–25%, rising to 30–40% when considering power production during the 100 highest peak-load situations – in almost all the cases studied, wind generation was found to produce more than average during peak-load hours.

Given that "the effect of windpower aggregation is the strongest when wind power is shared between all European countries", cross-EU grid links were seen as vital. If no wind energy is exchanged between European countries, the capacity credit in Europe is 8%, which corresponds to only 16 GW for the assumed 200 GW installed capacity. But since "the wider the countries are geographically distributed, the higher the resulting capacity credit" if Europe is calculated as one wind energy production system and wind energy is distributed across many countries according to individual load profiles, the capacity credit almost doubles to a level of 14%, which it says corresponds to approximately 27 GW of firm power in the system.
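
Those TradeWind figures can be reproduced with simple arithmetic (capacities as quoted; rounding explains the small gap between 27 and 28 GW):

```python
# TradeWind's aggregation effect, using the capacities quoted above:
# 200 GW of installed wind, with an 8% capacity credit when no wind energy
# is exchanged between countries and roughly 14% when Europe is treated as
# a single wind energy production system.

installed_gw = 200
print(f"No exchange between countries: {installed_gw * 0.08:.0f} GW of firm capacity")
print(f"Europe as one system:          {installed_gw * 0.14:.0f} GW of firm capacity")
# Wider geographical aggregation roughly doubles the firm capacity delivered
# by the same installed wind fleet (the study rounds the latter to ~27 GW).
```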

Clearly then, with very large wind programmes you do get diminishing returns and need more backup, but it seems that can be offset to some extent by wider interconnectivity – the supergrid idea, linking up renewables sources across the EU.

That is already underway. The UK's National Grid has agreed with its Norwegian counterpart Statnett to draw up proposals for a £1 bn grid interconnector link-up, to be funded on a 50:50 basis, which could help solve the problem of wind's intermittency, given that Norwegian hydro could act as back-up for the UK in return for electricity from the UK on windy days. As yet no UK landfall site has been indicated, but the link could include connection nodes along the route with spurs taking power from offshore wind farms, and become the backbone of a new North Sea "supergrid": the UK and eight other north-west EU countries have now agreed to explore interconnector links across the North Sea and Irish Sea. National Grid said: "Greater interconnection with Europe will be an important tool to help us balance the system with large quantities of variable wind generation in the UK."

A little soot can make a big difference to the brightness of snow. Freshly fallen snow, when clean, is one of the brightest of substances, reflecting well over 90% of incident sunlight and presenting the risk of snow blindness to ill-equipped travellers on glaciers.

As the snow ages, the snowflakes collapse and become rounded. Opportunities for photons to bounce off and head back into the sky become fewer. Opportunities for absorption become more frequent because the photons spend more of their time passing through grain interiors. Eventually, as the snow turns into glacier ice, the reflected fraction of incoming radiation drops to as low as one half or less.

There is more than this to the radiative physics of snow and ice. For example the wavelength of the impinging photon makes a difference, and so does the angle at which it strikes the surface (more reflection when the angle is closer to horizontal). When a thaw begins, some of the snow turns into liquid water, which, ironically, is one of the darkest of substances. So wet snow is not particularly bright. Dirt also makes a difference.

If the dirt is black enough then even a small amount significantly reduces the brightness, or albedo, of the snow. This was shown dramatically as long ago as 30 years by Warren and Wiscombe. The more soot, the more darkening, but as little as a few parts per billion by weight reduces the albedo of pure snow (that is, collections of grains of ice) by a few per cent in the visible part of the spectrum. We also get significant sunlight in the (invisible) near-infrared, but the effect of soot is much reduced there because ice is itself very dark in the near-infrared. All the same, soot makes a difference.

Photon for photon, exposed glacier ice yields two or more times as much melt water as clean snow, assuming both are at the melting point. So we are very interested in anything, such as soot, that reduces the radiative contrast between the ice and the overlying snow. What with industrialization, growth of the human population and more intense clearance of forests by burning, there is more soot about now than there used to be. How much of it actually reaches the glaciers, and precisely how large its contribution is to the faster rates of mass loss observed in recent decades, remain open questions. But it would be surprising if we were to look for evidence of a link and fail to find it.
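
To get a feel for the sensitivity, here is a rough sketch converting a small albedo reduction into extra melt; the incident flux, the melt-season length and the 2% albedo drop are assumptions of mine, not values from the studies discussed, and only the latent heat is a physical constant.

```python
# Rough sensitivity sketch: extra melt produced by a small soot-induced
# drop in snow albedo. The incident flux, the 100-day melt season and the
# 0.02 albedo drop are assumptions for illustration; only the latent heat
# of fusion is a physical constant.

INCIDENT_FLUX  = 250.0               # W/m^2, assumed mean shortwave flux in the melt season
ALBEDO_DROP    = 0.02                # assumed soot-induced reduction in albedo
SEASON_SECONDS = 100 * 24 * 3600     # assumed 100-day melt season
LATENT_HEAT    = 3.34e5              # J/kg, heat of fusion of ice

extra_energy_j = INCIDENT_FLUX * ALBEDO_DROP * SEASON_SECONDS   # J/m^2 per season
extra_melt_mm  = extra_energy_j / LATENT_HEAT                   # kg/m^2 = mm water equivalent

print(f"Extra absorbed energy: ~{extra_energy_j:.1e} J/m^2 per season")
print(f"Extra melt:            ~{extra_melt_mm:.0f} mm water equivalent per season")
# Even a 2% darkening of snow already at the melting point is worth on the
# order of a hundred millimetres of extra melt over a season.
```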

Evidence of a link is just what Xu Baiqing and colleagues, writing in a recent issue of the Proceedings of the National Academy of Sciences, appear to have found. They measured soot concentrations in ice cores from five Tibetan glaciers, and found radiatively significant amounts in all but one, with evidence for recent increases in at least two. These glaciers are downwind of two of the world's largest sources of airborne soot, India and western Europe. (Yes, Tibet is a long way from Europe, but the soot particles are tiny and once they are aloft they can travel thousands of kilometres before being washed out.)

And at the recent Fall Meeting of the American Geophysical Union, Bill Lau of NASA drew attention to another way in which soot can affect glacier mass balance. While the soot is still in the atmosphere it constitutes what he calls an "elevated heat pump". It heats the air (rather than the surface), the heated air rises, and new air is drawn in from elsewhere to replace it. In the Himalayan-Tibetan region, the new air comes from the south and is warm and moist, so this amounts to an induced intensification of the summer monsoon. Warmer air means more melting, but moister air means more precipitation and therefore, where the temperature is right, more snowfall. Working out the net impact on the glaciers, then, will be a challenge.

These studies leave us a long way from nailing down soot as one of the reasons for more negative glacier mass balance, which will require concurrent measurements of sootfall, incident radiation, temperature and rates of snowfall and melting. But at the very least, the soot concentration measurements show that the soot is there, and the most solid part of the deductive chain – the fact that soot makes snow absorb more radiation – is already firmly in place. Greenhouse gas is not the only pollutant we should be worrying about.

In recent years, "science" has increasingly been asked to provide guidance on issues such as climate change. However, there are limits both to what science can do and to its influence. For example, although there has been much talk of relying on "evidence-based research" as a guide to action, in reality, decisions are often made on the basis of other information, or views, or influences. In some cases science is just used to justify decisions already taken. Scientists are very much "on tap, not on top".

Science is not, however, entirely impotent. Climate change has been an example of where science has created a new agenda. But that involves some responsibilities. Given the cultural and ideological relativism that seems to be the norm these days, it is perhaps not surprising that some have latched on to the climate change issue as a new fundamentalism. Here at last was a really powerful determinism, providing a clear "scientifically backed" case, with added moral and ethical imperatives for the sort of technical, social and political prescriptions preferred by, among others, radical "greens". It has, for some, gone beyond science and become a belief. Hence the bitterness at any hint of climate-change denial, or any scepticism about humans being responsible for it.

Trying to sustain this degree of absolute certainty, and denying rival views, is bad for science, which needs open, pluralistic debate and challenges to keep it as objective as possible. That is not to say that all views have equal weight. There are processes for weighing the strength of arguments and analysis, for testing conjectures, for checking data. The "scientific method" may not be perfect – it's a human system after all – but it is arguably the best that we have come up with so far for trying to make sense of the material world.

As far as climate issues are concerned, what it has suggested is that, with about 90% certainty, climate change is underway and is mostly due to human actions. That still leaves room for other views – and other explanations. Less palatably, it also leaves room for bitter disagreements and invective. Some climate sceptics attribute base and deceitful motives to some climate scientists – and vice versa. "They" are, variously, in the pay of evil oil companies/or devious eco-fascists/leftists, and so on.

The recent (Nov 2009) affair involving the e-mail leak at the UEA played well to the sceptics – and even to the full-on climate-change deniers. It even worried fundamentalist climate-change believers – some of the scientific priesthood looked like it might be corrupt! The reality seems to be that some poorly worded descriptions of quite normal data-processing activities were made public: with many data sets, it is necessary to subtract spurious trends to see what is actually the main process at work. That can of course sometimes be controversial: but we rely on scientists to make clear what they are doing and why, to their peers, so that their analysis can be tested and, if necessary, challenged. This is not best done via leaked e-mail extracts and invective from climate sceptics. After all, they are often, to put it mildly, prone to deceitful use of data. Pots calling kettles black springs to mind. Even so, this episode does remind us of the need for proper scientific rigor – on all sides.

Other leaks concerned the peer-review and publishing process, which is how analyses and conclusions are meant to be tested and checked. A degree of collusion seems to be implied – in part, evidently, to keep deviant views out. This is much less palatable. The risk is that we end up with a self-serving, self-selected elite who review each other's papers and funding bids. I am sure most climate scientists are not like this; what motivates them is finding the truth. But in the face of ever-increasing, often very illiberal and incoherent attacks by climate deniers and sceptics, it is understandable that some scientists may resort to defensive measures – by "keeping nutters out". That is tragic if it corrupts the debate, which should be as open as possible. But both sides have to play by reasonable standards to have a proper debate. Some climate deniers and sceptics do not. So it is understandable that some climate scientists feel they have to fight back and, for example, publicly disparage views they see as dubious. But, although there is a need for more "public understanding of science", it is demeaning for scientists to get too involved in brawls with lobby groups, some of whose mission seems to be an ideological one. It might be better to leave that to green pressure groups and the wider political process, and focus instead on cleaning up your own act, so that there are no grounds for adverse publicity.

All that said, in an imperfect world, maybe we have to accept the need for virulent sceptical oversight of all things, including science. So the climate sceptics may have done us all a favour by putting the scientists' work under tighter scrutiny. But if cynicism sets in across the board, then we are not much further forward, and we may even be losing ground to those who say all that matters is "belief". Faith may be a wonderful thing, and vision too – science is not the whole story. As the UEA's Mike Hulme noted in a recent Guardian article, science can't help you decide about values or ideology. But science, properly done, can help when you are trying to decide politically about practical changes in material reality.

Hulme: http://www.guardian.co.uk/commentisfree/2009/dec/04/laboratories-limits-leaked-emails-climate.

The most satisfying experiment that I ever did was done with glass beakers and a cheap thermometer. A student, Mark Aikman, and I were trying to learn more about the composition of our local glacial till – the sediment deposited in our region by the Laurentide Ice Sheet. The till is a mixture of the local limestone, which is soluble in strong acid, and material from the Canadian Shield to the north, which is not soluble. Dissolve the till in acid and you get a measure of the ratio of local to distantly derived components. The distant or "erratic" component must have been delivered by the ice sheet.

One day, Mark wandered into my office wearing a worried look. His samples were still fizzing vigorously even after immersion in acid for the stipulated time, 30 minutes. By pestering my colleagues in chemistry I found that the time was a red herring. The author of the methods textbook we were using had blundered, prescribing an inadequate amount of hydrochloric acid. Mark and I were able to clear up this point with a thermometer because the reaction of hydrochloric acid with calcium carbonate, the basic ingredient of limestone, is exothermic – it releases energy in the form of a known amount of heat per amount of reaction products.

This "finding", new to us if not to chemistry, matched nicely the field work of another student, Dan Stokes. Dan had tried to explain the genesis of the till by counting different-coloured rocks in roadside exposures. The limestone is grey, while the erratics from the Canadian Shield are pink or black, making a striking contrast. The upshot of all this AIS (absurdly inexpensive science) was that most of the till is local, having been carried no more than 2 to 5 km by the ice, but about one eighth is distantly derived, having originated who knows how many hundred kilometres up-glacier. What does this mean? Search me.

When it comes to cheap glaciology, Ernst Sorge has the Trent University geography department beaten soundly. No contest. Sorge overwintered at Eismitte, in the middle of Greenland, during 1930–31, while the leader of the expedition, Alfred Wegener, sledded back towards the coast that he was destined never to reach. Documenting his results in Publication 23 of the International Association of Scientific Hydrology in 1938, Sorge says that he and his companions Fritz Loewe and Johannes Georgi were short not only of many necessities of life, such as a living hut, but also of scientific instruments. They solved the hut problem by digging a hole in the snow, but if they wanted to make measurements they would have to improvise.

Sorge's most important instrument was his Firnschrumpfschreiber or firn compaction recorder. (Firn is snow that has settled part way to the density of ice, but isn't there yet.) It was "contrived out of pieces of board, sheet metal, jam jars, wire, string, paper and ice" and measured the rate at which two horizontal arms frozen into the firn, 1–2 metres apart vertically, approached each other. Sorge smuggled a pen into the apparatus somehow. The jam jars served as recording drums, hand-turned, so that the pen would have something to write on. They worked very well, allowing measurements of the compaction rate with a precision of one part in a thousand.
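To give a feel for what that precision means, here is a minimal sketch – my own illustration, not Sorge's procedure – of how a compaction rate follows from the changing separation of two such arms:

```python
# Vertical compaction (strain) rate from the separation of two arms frozen into
# the firn. The spacings and time interval below are made up for illustration.

def compaction_rate(initial_sep_m, final_sep_m, days):
    """Return the vertical strain rate per day (negative means shortening)."""
    strain = (final_sep_m - initial_sep_m) / initial_sep_m
    return strain / days

# e.g. arms 1.5 m apart that close up by 3 mm over 30 days
rate = compaction_rate(1.500, 1.497, days=30)
print(f"strain rate: {rate:.2e} per day")   # about -6.7e-05 per day

# Roughly speaking, a precision of one part in a thousand on a 1.5 m spacing
# means resolving changes of about 1.5 mm on the jam-jar recording drum.
```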

The stimulus for the instrument was the observation that their cave was settling. Sorge wanted to know whether the settling betokened the impending collapse of his home. It cannot have taken him long to make his five Firnschrumpfschreiber, because during the course of the winter he also dug a 50-foot-deep shaft in which to install them and actually made a large number of measurements. Besides, he remarks that, "During an overwintering one has time to commune with one's self about how Nature is unfolding around one."

The results from the Firnschrumpfschreiber and from density measurements in the walls of the shaft showed that the compaction rate decreases, and the density increases, with depth. More importantly, they are steady at any given depth, and are now immortalized as the basis for Sorge's Law: in a cold column of settling snow, the density at any given depth remains constant over time.
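Stated a little more formally – my paraphrase of the standard textbook consequence, not Sorge's own notation – the law says the density profile is fixed in time, which by conservation of mass ties the sinking speed of the firn to the accumulation rate:

```latex
% Sorge's Law: at a fixed depth z below the surface of a dry, settling snow
% column in steady state, the density does not change with time,
\frac{\partial \rho(z,t)}{\partial t} = 0 .
% One-dimensional mass conservation then makes the downward mass flux the same
% at every depth, equal to the surface accumulation rate \dot{b}
% (mass per unit area per unit time):
\rho(z)\, w(z) = \dot{b} \quad\Longrightarrow\quad w(z) = \frac{\dot{b}}{\rho(z)} ,
% where w(z) is the downward speed of the firn relative to the surface; the
% difference in w between two depths is what the arms were recording.
```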

I have nothing against billion-dollar satellite missions, and I bet Sorge and his companions got sticky, but all in all, the jam jars were a sound scientific investment.

2010 looks like being the year when offshore wind power takes off globally in a major way. Europe is currently the global leader – during 2008, 366 MW of new offshore wind capacity was installed, bringing the cumulative total in EU waters to 1,471 MW. And the UK is for once at the front of the pack, having overtaken Denmark last year with over 600 MW installed. Longer term, the European Wind Energy Association says cumulative offshore wind capacity in Europe may reach 40 GW by 2020 and 150 GW by 2030.

So far, the largest project planned is the 1 GW London Array in the Thames estuary, with work expected to start early next year and completion of the first 630 MW phase scheduled for 2012. But details will shortly emerge of the nine winners in the UK's 'Round Three' offshore wind competition, which could see wind farms of up to 5 or even 10 GW being established off the east coast.

Initially, in the first phase of offshore wind deployment, on-land wind turbine designs were in effect 'marinized' for offshore use and located relatively near to shore. But new design concepts are now emerging specifically for the challenging offshore environment. They are larger: several 10 MW offshore turbines are currently under development, including Clipper Windpower's Britannia and the novel vertical-axis 'NOVA'. And they involve new ways of installing turbines further out to sea: seven radical new designs for offshore wind turbine foundations have been identified by the Carbon Trust in a global competition involving over 100 candidates from engineering companies around the world. Selected designs include floating turbines anchored to the seabed and spider-like tripod structures, along with more conventional monopiles – steel tubes driven into the seabed.

The overall aim of the Carbon Trust programme is to accelerate the construction of offshore wind farms by reducing construction costs and overcoming the key engineering challenges of going further out to sea – up to 100 miles offshore, in water depths of up to 60 metres or possibly more. Up to three final winners will have their designs built and installed in large-scale demonstration projects in 2010–2012, with funding from a consortium led by the Carbon Trust.

The new wind turbine designs

The NOVA is perhaps the most dramatic new turbine design. It is a giant 120-metre-tall, 10 MW, V-shaped vertical-axis turbine, funded as part of the Energy Technology Institute's £20m wind support programme. The NOVA consortium includes groups from Cranfield, Sheffield and Strathclyde Universities. Prof. Feargal Brennan from Cranfield told Cleantech (Vol. 4 Issue 4) that 'offshore vertical-axis wind turbines offer the potential for a breakthrough in offshore wind energy availability and reduced life-cycle costs due to their design characteristics of few moving parts and the siting of the generator at base level, potentially allowing large-scale direct drive. Their relatively low centre of gravity and overturning moments make the turbines highly suitable for offshore installation'.

Meanwhile, VertAx Wind Ltd is working with SeRoc, Giffords and others on their own 10 MW H-shaped vertical-axis offshore turbine. VertAx say it will be ready for operation in about five years. But Clipper Windpower's more conventional propeller-type 10 MW Britannia should be ready by 2011.

Perhaps the most novel design is the ETI-supported Blue H 'Deepwater' turbine project, which is focused on the design and feasibility of a 5 MW floating offshore wind turbine for use in up to 300 m of water. A steel prototype is being tested in the Mediterranean, but the ETI-funded project will focus on concrete construction to cut costs. The development consortium is led by Blue H Technologies of the Netherlands with BAE Systems, the Centre for Environment, Fisheries and Aquaculture, EDF Energy, Romax and SLP Energy.

According to a study by the Carbon Trust, the UK will need to build at least 29 GW of offshore wind by 2020. The UK Energy Research Centre says that 'Whilst this represents a challenge similar in scale to developing North Sea oil and gas, it is seen as technically feasible'. But UKERC is currently looking at the economic prospects. It notes that 'during 2000 to 2004, offshore wind power costs were relatively stable, with typical CapEx ranging from c. £1m to £1.5m/MW. But in the last five years costs have risen dramatically, doubling from approximately £1.6m to £3.2m/MW. The main drivers for this are supply-chain constraints for components (e.g. wind turbine generators) and services (e.g. installation) and, to a lesser extent, fluctuations in the euro/sterling exchange rate and commodity prices'.
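Putting the quoted figures together – a back-of-envelope sketch using only the numbers above, not a UKERC calculation:

```python
# Implied capital cost of the 29 GW 2020 target at the CapEx range quoted above.
# Purely illustrative: ignores future cost reductions, grid connection, O&M, etc.

target_mw = 29_000       # 29 GW of offshore wind by 2020
capex_low = 1.6e6        # £/MW, lower end of the recent range quoted
capex_high = 3.2e6       # £/MW, upper end of the recent range quoted

low_bn = target_mw * capex_low / 1e9
high_bn = target_mw * capex_high / 1e9
print(f"Implied capital cost: roughly £{low_bn:.0f}bn to £{high_bn:.0f}bn")
# roughly £46bn to £93bn
```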

Looking forward, it says recent estimates of the short- to medium-term cost outlook suggest that, in the absence of extreme movements in macroeconomic conditions and/or the onshore wind power market, offshore wind CapEx is not expected to alter dramatically over the next five years. In fact, it says, the industry consensus is for a slight rise in the next two years, followed by a slight fall out to 2014/2015.
See: www.ukerc.ac.uk/support/tiki-index.php?page=Offshore+Wind+Costs.

The UK is of course not alone in developing offshore wind. In addition to projects in Denmark, the Netherlands, Norway, Sweden, Belgium, France, Germany and elsewhere in Europe, in 2009 China installed its first 3 MW, 90-metre-diameter Sinovel offshore wind turbine, the first unit of the 100 MW Shanghai Donghai Bridge demonstration project. In parallel, some 37 offshore wind projects are said to be under development in the USA.

Although costs are higher offshore, load factors are also higher, and, potential impacts on sea mammals aside, there are fewer environmental impact issues to contend with than with on-land siting. Moreover, as wind projects are sited further and further out to sea, there will be fewer conflicts with in-shore/coastal fishing and navigation interests. With the North Sea possibly able to offer 200 GW or more of generation capacity ultimately, and large resources also available elsewhere in the world, offshore wind seems to have a bright future.
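As a rough indication of what higher load factors mean in energy terms – the 200 GW figure is from above, but the load factors are illustrative assumptions rather than quoted values:

```python
# Annual energy output for a given capacity and load factor. The 40% offshore
# and 28% onshore load factors are assumed round numbers for illustration only.

HOURS_PER_YEAR = 8760

def annual_twh(capacity_gw, load_factor):
    """Annual output in TWh for the given capacity (GW) and load factor."""
    return capacity_gw * load_factor * HOURS_PER_YEAR / 1000

print(f"200 GW offshore at ~40% load factor: ~{annual_twh(200, 0.40):.0f} TWh/yr")
print(f"200 GW onshore at ~28% load factor:  ~{annual_twh(200, 0.28):.0f} TWh/yr")
# roughly 700 vs 490 TWh per year: the higher load factor delivers over 40%
# more output from the same installed capacity
```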

For a roundup see: http://www.renewableenergyworld.com/rea/news/article/2009/12/optimism-in-offshore-wind-a-market-buzzing-with-activity?cmpid=WNL-Friday-December11-2009