
August 2010 Archives


An interesting paper entitled "Public perceptions of energy consumption and savings" by Attari et al. has recently been published in the Proceedings of the National Academy of Sciences (see http://www.pnas.org/content/early/2010/08/06/1001509107.shortb). The paper provides insights into how people view the quantity of energy consumed by tasks that are routine in an industrial society. The authors conclude that people generally overestimate the savings from changing habits that save small quantities of energy, while underestimating the savings associated with changes that save larger quantities.

This research shows some of the difficulties in using surveys to assess perceptions, and the reality, of how energy impacts our lives. Take, for example, the following, in which the respondent is asked to select how strongly he/she agrees or disagrees with the statement:

"We are approaching the limit of the number of people the earth can support."

Today, the human population is approximately 6.7 billion. If you believe that the earth can only support 2 billion people, then you could strongly disagree with the statement on the grounds not that we are approaching that limit, but that we have far surpassed it. However, if you believe the earth can support 12 billion people, then you might also strongly disagree with the statement, because you think we are far from the earth's limits (i.e. we are not yet "approaching the limit"). So two completely different beliefs might prompt the same response to the statement.

The results for the questions on values and behavior (e.g. how hard do you think it is to change your energy-consuming habits?) are not presented in the PNAS paper by Attari, but these are important questions to ask. Many people believe that the vast majority of people will not willfully conserve energy without financial penalties (e.g. high prices or taxes) for consumption. I fall into that category myself. We find ourselves in an interesting time: for only the second time in the last 40 years, we (in the US) have reached a point where over 10% of GDP is spent directly on primary and secondary energy.

The first such period ran from the mid-1970s to the mid-1980s, and the 10% threshold was likely crossed again in 2008 (see figure). The first episode was driven by political events - particularly the Arab oil embargoes and the Iran-Iraq War. The most recent worldwide economic recession, starting in 2008, was not driven by a particular political event; instead, the rising share of spending on energy had been a growing trend for almost a decade (at least with particular reference to the US).

The US broke out of the recessions caused by the oil shortages of the 1970s by investing in energy efficiency for vehicles (Corporate Average Fuel Economy, or CAFE, standards), only to find itself equally or more dependent upon oil for economic growth today than in 1970. Important questions arise: Will the US meet its new CAFE goals (reaching 35.5 miles per gallon for vehicle sales; 39 mpg for cars and 30 mpg for trucks and sport-utility vehicles) by 2016? This targeted increase is approximately the same percentage increase in fuel efficiency as occurred from the 1970s to the late 1980s in meeting the original CAFE standards (see the sketch below). If the US (and the world) succeeds in reducing oil consumption per mile traveled by 2016 (or soon thereafter), will we only find ourselves in the same position 10-30 years down the road? In other words, will we just wait until we consume so much gasoline that it takes too much out of our wallets before we again think about restructuring the way our economy functions and consumes energy?
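
As a rough sanity check on that claim, here is a back-of-envelope sketch in Python. The historical baselines (an 18 mpg car standard in 1978 rising to 27.5 mpg by 1985, and a pre-standard fleet average of roughly 25 mpg today) are my assumptions for illustration, not figures from the post:

```python
# Rough check that the 2016 CAFE target represents a similar percentage
# jump to the original 1978-1985 standards. The historical baselines are
# assumptions for illustration, not figures from the post.

def pct_increase(old_mpg, new_mpg):
    """Percentage increase from old_mpg to new_mpg."""
    return 100.0 * (new_mpg - old_mpg) / old_mpg

# Original CAFE era: the passenger-car standard rose from 18 mpg (1978)
# to 27.5 mpg (1985) -- assumed baseline values.
original_era = pct_increase(18.0, 27.5)

# New goal from the post: a combined fleet target of 35.5 mpg by 2016,
# assuming a current combined fleet average of roughly 25 mpg.
new_era = pct_increase(25.0, 35.5)

print(f"1978-1985 increase: {original_era:.0f}%")  # ~53%
print(f"to-2016 increase:   {new_era:.0f}%")       # ~42%
```

With these assumed baselines the two jumps land in the same ballpark, which is the point the post is making.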

There are reasons to think this time is different. This time we may well be past peak crude oil production in the US; so far the statistics seem to point to that possibly being true (but it will take several more years to confirm). The August 13, 2010 issue of Science, which discusses the scaling up of renewable energy, carries two articles about biofuels. One in particular ("Challenges in Scaling Up Biofuels Infrastructure" by Tom Richard) notes the logistical issues with making fuels out of biomass. Richard discusses how we are supposed to create a viable supply chain that moves relatively low-density biomass materials from the farm to the biorefinery and finally to the consumer. The reason this is such a hard problem is that the net energy of biomass fuel is so low that it is not obvious we can run our current economy as designed if using these fuels to any large degree. That is also a major difference from the 1970s - this time we are actually trying to grow an economy using biofuels instead of just making cars run on less fuel and importing more oil.


Floating offshore wind turbines are one of the big new ideas in the renewable energy field. They allow us to go far out to sea, into deep water where it would be impossible, or vastly expensive, to install devices fixed to the sea bed. They also avoid the environmental intrusion of sea-bed foundations or monopiles.

There is a range of systems being tested, many of them owing much to experience with offshore oil and gas rigs. So perhaps it's not surprising that the leaders in the field are the Norwegians. Statoil/Norsk Hydro has developed and tested a 2.3 MW Hywind system in water 220 metres deep off the coast of Norway, while Sway is developing a novel 10 MW floating turbine. Both are essentially like the vertical bobbing floats used for fishing, with a long tail under the water providing buoyancy and supporting a conventional propeller-type wind turbine above the water line. They are kept vertical by sea-bed tethers, although the Sway system is designed to be able to tilt by up to 8 degrees. See http://www.sway.no and http://www.statoil.com/en/TechnologyInnovation/NewEnergy/RenewablePowerProduction/Pages/default.aspx

The Dutch have also developed a tension-leg platform system based on oil-rig technology, called Blue H (http://www.bluehusa.com/). A prototype is being tested in the Mediterranean, in 108 metres of water 10.6 nautical miles off the coast of Puglia in southern Italy. France plans to install four floating offshore wind turbines, and a 3.5 MW Blue H is a candidate.

The UK is also in the race. Engineering firm Arup is working with an academic consortium backed by Rolls Royce, Shell and BP, and linked to architects Grimshaw, on a 10 MW "Aerogenerator X", a novel V-shaped vertical-axis floating offshore wind machine measuring 300 m from blade tip to tip. The first unit could be ready in 2013-14. See http://www.windpower.ltd.uk/ and http://vimeo.com/13654447

This grew out of the NOVA (Novel Offshore Vertical Axis) project, backed by the Energy Technology Institute (ETI) and involving Cranfield University, QinetiQ, Strathclyde University and Sheffield University. See http://www.nova-project.co.uk/

The ETI has also funded development work on a concrete version of the Blue H design, with involvement from, amongst others, BAE Systems. In parallel, US wind company Clipper is planning to build the more conventional 7.5 MW "Britannia" deep-sea offshore turbine in north-east England, in conjunction with NaREC, the New and Renewable Energy Centre. The first in UK waters, though, could be a version of Hywind: Statoil is planning to build an offshore wind farm off the north-east coast of Scotland using 3 to 5 floating turbines developed for its Hywind project.

Feargal Brennan, head of offshore engineering at Cranfield University, where much of the Aerogenerator X development work has been carried out, told the Guardian (26/7/10) 'There is a wonderful race on. It's very tight and the prize is domination of the global offshore wind energy market. The UK has come late to the race, but with 40 years of oil and gas experience we have the chance to lead the world. The new [Aero-generator] turbine is based on semi-submersible oil platform technology and does not have the same weight constraints as a normal wind turbine. The radical new design is half the height of an equivalent [conventional] turbine.' He added that the design could be expanded to produce turbines that generated 20MW or more.

Bigger machines have both advantages and problems. Doubling the diameter gives four times as much power but eight times as much weight, and can also increase costs by a factor of eight. However, vertical-axis machines may be lighter, since they need no tower, and may also offer easier maintenance access and higher stability in high winds.
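
That factor-of-eight claim is just the square-cube law. A minimal sketch, assuming power scales with swept area (diameter squared) and weight with material volume (diameter cubed):

```python
# Square-cube scaling sketch for wind turbines: swept area (and hence
# power) grows with diameter squared, while material volume (and hence
# weight, and roughly cost) grows with diameter cubed.

def scale(diameter_ratio):
    power = diameter_ratio ** 2   # power ~ swept area ~ D^2
    weight = diameter_ratio ** 3  # weight ~ material volume ~ D^3
    return power, weight

power, weight = scale(2.0)  # double the rotor diameter
print(f"power x{power:.0f}, weight x{weight:.0f}")  # power x4, weight x8
```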

The USA, so far rather backward in developing offshore wind, has now also got into the race. One argument had been that, unlike the UK, which has shallow water offshore (and 1 GW of offshore wind now in place), US coastal waters were deep, so offshore wind wasn't very practical. There were also environmental and visual-intrusion objections to near-shore locations. For example, the wind farm proposed off Cape Cod has only just now been given the go-ahead after 10 years of regulatory and legal battles; it will be the USA's first offshore wind project. But deepwater floating turbines may change the game.

The US National Science Foundation has now funded a three-year project to see whether the construction of floating wind turbines in the deep ocean is viable. And the State of Maine has just proposed a plan for installing up to 30 MW of deepwater marine power, of which only 5 MW can be tidal, the rest being wind, which must be floating - and be expandable to 100 MW or more. Hywind is said to be a candidate.

Back in the UK, the potential for floating offshore wind is very large. The PIRC Offshore Valuation puts the practical potential at 870 TWh/yr, with 660 TWh/yr more beyond 100 nautical miles out. In all, that's about four times UK electricity consumption. See www.offshorevaluation.org

Of course, location that far out will mean that the cost of the undersea grid links will be high, but deep-sea wind seems likely to become increasingly attractive as a North Sea supergrid network is established, with new UK-to-continental HVDC power grids linking in offshore wind farms across the area and helping to balance local and regional variations in wind availability.

For updates on renewable energy developments in the UK and elsewhere see www.natta-renew.org

In glaciology we like to distinguish between maritime glaciers and continental glaciers, because the climates that sustain these two kinds of glacier are quite different. The two adjectives make it sound as though the maritime ones ought to be near the sea and the continental ones oughtn't, but the reality is more complicated and more subtle.

Most glaciers consist of an accumulation zone at high altitude, where they gain mass, and an ablation zone at low altitude, where they lose mass. Roughly in the middle is the equilibrium line, at an altitude (the equilibrium line altitude or ELA) where loss and gain just balance.

From year to year the ELA varies through hundreds of metres, but on average over the decades it changes much more slowly, as the changing climate alters the balance. But what pins the equilibrium line to a particular average altitude? Why 1000 m, say, and not 2000 m, or 0 m?

A short but inaccurate answer is "Temperature". The hotter it is, the more ice you can melt. A slightly longer but much more accurate answer is "Temperature and snowfall". The more snow falls, the more heat you need to melt it. So the climate at the slowly-changing ELA is a measure not just of how hot it is, or of how snowy it is, but of how hot and snowy it is.

By definition, you need just the right amount of heat at the ELA to melt just the amount of snow that falls. Observations show that, although the ratio varies quite a bit, you get about 5 mm of melted snow for every positive degree-day, that is, for every degree Celsius sustained above the melting point for 24 hours. The snowier it is, the lower the ELA has to be, because it is warmer at lower altitude.

If the winter snowfall is equivalent to 10,000 mm of meltwater, you need about 2,000 positive degree-days in summer to melt it all. A snowfall of 1,000 mm of meltwater-equivalent requires only 200 positive degree-days, and 100 mm means only 20 positive degree-days. These numbers span very roughly the range of actual snowfall on real glaciers, from coastal Norway and Patagonia near the snowy end to the highest glaciers in Bolivia and Tibet near the dry end.
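
The arithmetic behind these numbers is a one-line degree-day model. A minimal sketch using the ~5 mm per positive degree-day melt factor quoted above (the factor itself, as noted, varies quite a bit from glacier to glacier):

```python
# Minimal positive-degree-day (PDD) melt sketch using the ~5 mm of
# meltwater per positive degree-day factor quoted in the post.

MELT_FACTOR = 5.0  # mm of meltwater per positive degree-day

def degree_days_to_melt(snowfall_mm):
    """Positive degree-days needed to melt a winter snowfall
    expressed in mm of meltwater equivalent."""
    return snowfall_mm / MELT_FACTOR

for snowfall in (10_000, 1_000, 100):  # snowy (maritime) -> dry (continental)
    print(f"{snowfall:>6} mm snowfall -> {degree_days_to_melt(snowfall):>5.0f} PDD")
# 10000 mm -> 2000 PDD, 1000 mm -> 200 PDD, 100 mm -> 20 PDD
```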

If your glacier has to keep flowing downhill to find an ELA that is hot enough, it eventually reaches sea level. It becomes a tidewater glacier, and icebergs start falling off. The ocean is doing some of the work (of adjusting the size of the glacier to the climate) that the atmosphere can't manage by itself.

Mountain ranges are traps for moisture, and potentially for snow, because they force the air to rise, cooling it and condensing out the moisture. What is more, once the wind has negotiated the mountain range it will be a lot drier, so we invariably find that across our glacierized mountain ranges the ELA rises to leeward.

This is the essence of what "maritime" and "continental" mean to glaciologists. We stretch the words so that maritime means "warm and snowy ELA" and continental means "cold and dry ELA". Some of the most "continental" glaciers are close to shorelines. There are not many maritime glaciers far inland, but the ones in the eastern Himalaya probably qualify, because the monsoon still packs a punch even after blowing over peninsular India.

Like all scientists, I get a lot of scientific papers to review. Between a fifth and a third of the ones I get start with "Glaciers are sensitive indicators of climate change", and I always cross it out, commenting that it is a boring cliché. The thing about glaciers is that they are insensitive indicators of climate change, because they integrate over temperature, precipitation, winter, summer, altitude, and possibly a significant horizontal extent. In the process they tell us things that no thermometer or rain gauge can.

They tell us, for example, that there is more to climatic change than rising temperatures. Because it had got snowier, the maritime glaciers of coastal Norway, but not the continental ones further inland, were gaining mass until about a decade ago. But more recent measurements show that rising temperature has now overcome this effect. And in the big picture, careful comparisons make it clear that less snowfall is definitely not why nearly all glaciers are losing mass. The message from the glaciers is that ELAs are rising, and rising because it is getting warmer. The reduction of snowfall that would be required to explain rising ELAs is enormous, and far beyond what we observe.


A Desertec 'energy from the deserts' initiative was launched last year as a feasibility study by a group of major German energy companies and banks keen to install large Concentrating Solar Power (CSP) arrays in desert areas and transmit some of the power back to the EU via undersea High Voltage Direct Current (HVDC) supergrids.

Desertec is not the only player, however. Transgreen is a new French-led supergrid project being developed as part of the Mediterranean Union's Solar Med programme. Solar Med includes proposals for installing 20 gigawatts (GW) of renewables by 2020, using a mix of technologies: around 6 GW of wind, 5.5 GW of CSP and nearly 1 GW of solar PV in North Africa and the Middle East. And to link it all up, the Mediterranean Union has proposed a 'Mediterranean Ring' - a grid system linking up countries around the Med, with power from CSP in North Africa and the Middle East transmitted to the EU via undersea HVDC links.

That's where Transgreen comes in. It might be seen as a rival to the German-led Desertec project, given that Transgreen's aim is to bring together power companies, network operators and high-tension equipment makers under the leadership of French energy giant EDF. But the idea seems to be that Transgreen will focus on transmission, delivering part of the energy generated by the Desertec CSP projects to the EU - and there is already some overlapping membership. A €5m study phase is under way.

The energy potential for CSP is huge - there is a lot of desert! There is talk of 200 GW or more eventually being installed. The technology exists (in Spain and the USA) and its economics and performance are improving, with molten-salt heat stores allowing solar power to be used overnight. So CSP could become a large-scale reality, while HVDC transmission losses are put at 1-2% per 1,000 km, so long-distance export seems credible. An EU commissioner recently said that the first power could arrive in the EU within 5 years.
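
Those loss figures compound gently with distance. A quick sketch, in which the 3,000 km route length is an assumed illustrative distance rather than a figure from any particular project:

```python
# Back-of-envelope HVDC transmission losses, using the 1-2% per
# 1,000 km figure quoted above. The 3,000 km route length is an
# assumed illustrative North Africa-to-northern-Europe distance.

def delivered_fraction(distance_km, loss_per_1000km):
    """Fraction of sent power delivered, compounding losses per 1,000 km."""
    return (1.0 - loss_per_1000km) ** (distance_km / 1000.0)

for loss in (0.01, 0.02):
    frac = delivered_fraction(3000, loss)
    print(f"{loss:.0%}/1,000 km over 3,000 km -> {frac:.1%} delivered")
# roughly 94-97% of the power arrives, which is why long-distance
# export looks credible
```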

However, despite talk of a $400bn programme, at present the Desertec initiative is only a concept - not yet a formally funded project - although some independent CSP projects are already under way, e.g. in Egypt, Jordan, Algeria and Morocco, which could become part of it. A 470 megawatt (MW) hybrid solar/gas-fired unit, with 22 MW of CSP, has just been started up in Morocco; Egypt is nearing completion of a €250m 150 MW hybrid unit near Cairo; and the UAE has plans for a 1,000 MW CSP unit.

These and other projects may of course just stay independent, at least initially, and not export any power. After all, Transgreen too is only just starting up as a concept. And there will certainly be some interesting routing issues for it to face. For example, does the proposed French Transgreen link-up to Morocco have to go across Spain, or can it go undersea direct? The latter is much more expensive: 1,200 km of marine cable at maybe €1m/km, rather than two stretches of 200 km undersea. The trans-Spain option also allows Spanish wind power to be fed in, but the undersea line avoids land use and local, regional or indeed national political conflicts. Spain has recently had a major battle getting an HVDC grid link across the Pyrenees to France. It had to be put underground, at increased cost - although it's worth noting that it is evidently easier to put HVDC cables underground than conventional AC grid links, since there are fewer heat losses to deal with.
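
The routing trade-off is easy to put in numbers, using the distances and the ~€1m/km marine-cable figure from the text; the overland cost per km here is purely an assumed placeholder:

```python
# Rough cost comparison of the two routings discussed above. Only the
# ~1m euro/km marine figure and the cable lengths come from the post;
# the overland cost per km and distance are assumptions for illustration.

MARINE_COST_PER_KM = 1.0   # million euros, from the post
LAND_COST_PER_KM = 0.5     # million euros -- assumed, for illustration

direct_undersea = 1200 * MARINE_COST_PER_KM

# Via Spain: two ~200 km undersea stretches plus an assumed ~800 km overland.
via_spain = 2 * 200 * MARINE_COST_PER_KM + 800 * LAND_COST_PER_KM

print(f"Direct undersea: ~{direct_undersea:.0f}m euros")  # ~1,200m
print(f"Via Spain:       ~{via_spain:.0f}m euros")        # ~800m on these assumptions
```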

There are also some wider regional and indeed international political issues. We have suffered enough from the politics of oil; it is to be hoped that the politics of solar would be different. But there is the risk of a neocolonial resource grab, with investors rushing to get sites in North Africa and the Middle East and then seeking access to the power at favourable rates. At the very least it will be important to negotiate fair trading arrangements. However, there is also a need to consider what is wanted locally. While the EU media have often waxed lyrical about the concept of power from the desert, some of the countries that would be the likely hosts of CSP projects supplying the supergrid evidently feel they have not been sufficiently consulted. They may have their own views.

The trade journal Sun and Wind recently carried an interview with a leading Egyptian renewable energy supporter, Prof. Amin Mobarak, and others involved with a Masters course run jointly by the Universities of Kassel and Cairo, which aims to help local people get on top of the technical and political issues associated with CSP (Sun and Wind 5/20/10). One point that emerged was that the use of CSP for the desalination of water might be more important locally than electricity production (that is actually part of the Desertec plan). In addition, electricity demand is rising rapidly in the region, so there might not actually be that much spare for the EU! However, electricity prices are often subsidised locally (e.g. in Egypt) for social-policy reasons, and that would be hard to change. So, initially at least, the relatively high price of CSP power might mean that it could only realistically be sold abroad.

But then comes the accreditation issue. EU Commissioner Guenther Oettinger recently said that, to avoid the import of non-renewable electricity from coal- and gas-fired plant in North Africa, 'we need ways to ensure that our import of electricity is from renewables'. He believed it was technically possible to monitor electricity imports to the EU to see whether they came from renewables, though it is not immediately obvious how. And of course there is the wider question of overall security of supply: if it were getting 15% of its electricity from desert projects, as Desertec plans, the EU would not want to risk being cut off. Plenty of issues to haggle over, then - quite apart from the issue of local environmental impacts on fragile desert ecosystems, which has already led to some limitations on projects in the USA in relation to the protection of desert wildlife.

Some argue that we should not import green power but should focus on our own resources in the EU. Certainly we should not use remote desert CSP as an excuse not to develop our own extensive renewable resources as fast as we can - large and small, locally and nationally. However, a fully integrated supergrid system, including links to offshore wind, wave and tidal in the north-west as well as to CSP in desert areas, could offer benefits to all, in terms of helping to balance variations in the availability of renewable energy around the entire region. And locally, CSP could offer power, fresh water, jobs and income. Moreover, as far as the planet is concerned, it doesn't matter where the projects are located, as long as they reduce or avoid emissions. If the best sites for solar are in desert areas, so be it. But politics and economics intervene. The CSP/supergrid concept opens up a range of new - and old - geopolitical and development issues. Not least: who gets the power, at what cost, and who gets the profits?

Parts of the above are from my presentation to a Summer Academy on 'Transnational Energy Grids' at Greifswald University, Germany, in July.

Strange and beautiful ice


Rupert the Bear, a staple of my early literary diet, was regularly very impressed by the window art of his visitors Jack Frost and his sister Jenny Frost. We can explain much of this art rationally, but that enhances rather than diminishes its strange beauty.

At the Earth's surface, water ice crystallizes in the hexagonal system and snowflakes tend to have six arms or to grow as six-sided plates — which begins to account for their beauty. You can find beautiful snowflakes by the million in cyberspace. And of course when the weather is right you can just go outside, or even look at your window. But crystallography isn't the whole story.

What got me thinking about the beauty and strangeness of frozen water was a recent article by Toshiyuki Kawamura and co-authors. They describe spray ice, derived from the freezing of spray onto trees and shoreline structures. In bulk, the spray ice resembles "very large monster-like forms", as they put it.

They also describe ice balls, (roughly) spheres of ice up to tens of centimetres across. The ice balls seem to form from lumps of slush on the surface of the lake. The lumps get rolled about by the waves and swell, are frozen hard after a drop in temperature, and are then washed ashore by the waves and the wind.

The Kawamura photographs are technical rather than dramatic, but the freezing of liquid water onto solid objects can offer more drama than you need. Around where I live, we still remember the ice storm of 1998. Across eastern Ontario and Québec there are still trees, bent over in 1998, that haven't yet got back to something like vertical.

That ice was technically glaze, with a density near that of pure ice. The Kawamura spray ice is technically rime, less dense because it forms in irregular masses. Hoar is the equivalent, usually of still lower density, that forms by the deposition of water vapour as ice.

Rime or hoar often forms on our glaciological instruments when they are left out over the winter. If the instrument is a simple mass-balance stake, you just end up with an interesting photograph, like the one taken by my co-author Marco Möller on Austfonna in Svalbard. We like this one so much that we are hoping to persuade the publisher of our forthcoming Glossary of Glacier Mass Balance and Related Terms to put it on the front cover.

Rime on a mass-balance stake, Austfonna, Svalbard. A mass-balance stake photographed by Marco Möller on the ice cap Austfonna in Svalbard. The sun has done just enough work to loosen the rime (or hoar?) with which the stake had been coated, and the rime has fallen to the surface as an intact body. (I also like the waves in the middle troposphere, picked out at the thinning edge of the cirrostratus veil by ice crystals condensing at the wave crests and sublimating in the troughs.)

But rime can be a glaciological nuisance. If your instrument is an automatic weather station or AWS, it will have been built for ruggedness and is likely to be still chugging away, recording the temperature, wind speed and other variables, even while more or less buried in the rime. How do you know how to interpret these records, or indeed whether you should?

For me, the weirdest frozen phenomenon of all has to be ice spikes. These protuberances grow out of confined bodies of water that are freezing from the top down. Their base-to-tip length can exceed 10 cm, and can be 10 or more times the base breadth. They taper towards the tip and grow upwards at seemingly random angles. According to Kenneth Libbrecht, the best way to study them is in ice-cube trays filled with distilled water and placed in an ordinary freezer with an air temperature near to -7° C. About half of the ice cubes will produce spikes in these conditions. A fan to promote air circulation also promotes spike formation. Although tap water doesn't yield as many spikes, plenty of spikes have been reported from out of doors, including the very first in 1921.

Water in a tray freezes at its free surface, starting at the confining walls and growing inwards. When the ice cover is all but complete, the water, under gentle pressure, has nowhere to go but up and into the little hole. If the liquid travels to the edge of the hole before entering the solid phase, you have a hollow tube that is evidently a working funnel for liquid water, which doesn't freeze until it reaches the propagating tip.

But why? Evidently there is an exquisite balance at the tip between the arrival of liquid, the removal of heat, the release of heat due to the freezing, and probably other factors. Nobody has yet managed to write down the algebra describing this balance.

Upside-down and inside-out icicles are somewhere beyond the borderline of relevance, but a lot of science is like that. The fact that the irrelevant is also the unexplained has a lot to do with why people get excited about ice spikes. And although they can't compete with Jack and Jenny Frost for beauty, they do show that strangeness can sometimes be strangely close to beauty.

By Graham Cogley

The Department of Energy and Climate Change has produced a new DIY energy supply-and-demand model, running up to 2050, developed under the supervision of Prof. David MacKay, DECC's chief scientific advisor. You can try it out: http://2050-calculator-tool.decc.gov.uk/.

It allows you to vary the energy supply mix so as to try to meet emission targets in line with various possible demand patterns, including charging electric car batteries overnight. Following the line adopted in earlier DECC scenarios, what emerges is that it's hard to stay in balance and get emissions down, without nuclear and Carbon Capture and Storage (CCS). The model tests the ability of the chosen supply mix to meet demand during a five-day anticyclone, with five days of low wind output and an increase in heating demand associated with the cold weather. But there are ways to balance variations like this, and the DECC report on '2050 energy pathways', which sets the model in context, does explore some of these options.

Even so, it concludes that, if CCS is not widely used, 'because of the large amount of renewables in this pathway, the challenges of balancing the electricity grid in the event of a five-day peak in heating and a drop in wind are more substantial. We would need a very significant increase in energy storage capacity, demand shifting and interconnection, together with 5GW of fossil-fuel-powered standby generation that would be inactive for most of the year.' And if nuclear was not used, 'the challenges of balancing the electricity grid are very substantial: we would need an extremely substantial increase in storage, demand shifting and interconnection'. By contrast 'without renewables in the system, it is easier to balance the electricity grid and no additional back-up capacity beyond what exists today is required'. Nuclear then dominates, and, DECC seems to say, that path gives lowest overall costs long term.

www.decc.gov.uk/en/content/cms/what_we_do/lc_uk/2050/2050.aspx

Not everyone may agree with that. Indeed, Chris Huhne, the new Lib Dem Energy and Climate Change Secretary of State, is on record as saying that the economics of nuclear mean that it will need state support to be competitive - something he and the coalition government were not prepared to provide. A recent version of this view was his response to Lord Marland's statement on behalf of the government that 'there should be no dramatic increase' in current plans for around 14 GW of on-land wind power. Defending wind, Huhne commented: 'We have seen that with onshore wind, whose cost has come down dramatically precisely because of the encouragement of the public sector. I am afraid that the same argument cannot be made for nuclear power, which has been around for a long time. It is not an infant industry, but an established and mature one and it can and should compete on that basis, along with all other comers.'

However he seems to have been under pressure to back nuclear none the less. In a letter to The Financial Times (4 Aug) he said: 'Given our policy framework, and the outlook for oil, gas and carbon prices, I am nevertheless confident that there will be new nuclear power as planned by 2018.'

He went on: 'What nuclear will not have - and this is common across all three parties in Britain - is public subsidy specific to the industry.' But there will - or could - be a subsidy, in all but name, in the form of a floor price for carbon, which will benefit all non- and low-carbon options.

So what could actually emerge? DECC's 2050 Pathways are not based on funding or support programmes; they simply offer four different 'levels' of possible technical response - from more or less none (Level 1) to the absolute maximum (Level 4) - and then use these as a basis for assembling a range of pathways with different mixes on both the supply and demand sides, on the way to 2050.

Focusing here just on the supply side, on-land wind comes out reasonably well: by 2050, at Level 2, it is assumed that there could be 20 GW in place (delivering 55 TWh of electricity); at Level 3, 32 GW (delivering 84 TWh); and at the maximum Level 4, 50 GW (132 TWh). But offshore wind does much better - 60 GW (delivering 184 TWh) at Level 2, 100 GW (307 TWh) at Level 3 and a massive 140 GW (430 TWh) at Level 4. Wave/tidal-stream devices do quite well at, respectively, 11.5 GW (delivering 25 TWh), 29 GW (68 TWh) and, at maximum, 58 GW (139 TWh), with wave leading tidal currents up to Level 4, when wave is put at 70 TWh and tidal stream at 69 TWh. But tidal-range projects (barrages and lagoons) only manage, respectively, 1.7 GW (3.4 TWh), 13 GW (6 TWh) and 20 GW (40 TWh) at Level 4 - significantly less than either wave or tidal stream in each case.

PV solar comes out of almost nowhere to yield, by 2050, 80 TWh at Level 3 and a massive 140 TWh at Level 4, although DECC is at pains to point out that the latter is very ambitious, expansion on this scale presenting 'an unprecedented challenge'. In addition, small-scale wind delivers only 8.6 TWh/yr at maximum, while geothermal reaches just 5 GW by 2030 at Level 4, and hydro moves up to 4 GW at Level 4.

More radically, by 2050, at Level 4, there are 140 TWh of imports from Concentrating Solar Power in desert areas, covering 5,000 sq km; it is assumed that the UK's share of this is 20%. There are also some imports of biomass - at the maximum, around 135 TWh each of wet and dry biofuels, by 2050.
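
To see how these supply-side numbers stack up, here is the Level 4 list gathered in one place. The figures are the TWh/yr-at-2050 values quoted above (geothermal and hydro are quoted in GW in the post, so they are left out); this is just a reading aid, not DECC's model:

```python
# The Level 4 (maximum effort) supply contributions quoted in the post,
# collected to show relative sizes. All values are TWh/yr at 2050.

level4_twh = {
    "offshore wind": 430,
    "biomass imports (wet + dry)": 135 + 135,
    "solar PV": 140,
    "CSP imports": 140,
    "wave + tidal stream": 139,
    "onshore wind": 132,
    "tidal range": 40,
    "small-scale wind": 8.6,
}

for source, twh in sorted(level4_twh.items(), key=lambda kv: -kv[1]):
    print(f"{source:<28} {twh:>6.1f} TWh/yr")
print(f"{'total':<28} {sum(level4_twh.values()):>6.1f} TWh/yr")  # ~1300 TWh/yr
```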

In terms of grid balancing for variable renewables, DECC notes that, being relatively inflexible, neither nuclear nor CCS can be relied on for very large balancing capacity. So it looks to other options - energy storage, new grid interconnections to and from the continent, and demand flexibility (e.g. using the batteries of the proposed fleet of electric vehicles (EVs) and plug-in hybrid electric vehicles (PHEVs) as an overnight store). At Level 3, storage-capacity peak output increases to 7 GW by 2050, interconnection increases to 15 GW, and around half of all EVs and PHEVs have a shiftable electricity demand. At Level 4, storage-capacity peak output reaches 20 GW, interconnection 30 GW, and 75% of all EVs' and 90% of all PHEVs' storage capacity is utilised for shifting demand.
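
How much balancing might an EV fleet actually provide? A hedged sketch: apart from the 75% Level 4 utilisation figure above, every number here (fleet size, overnight charge per vehicle) is an assumption for illustration:

```python
# Hedged sketch of how much overnight demand an EV fleet could shift.
# Only the 75% utilisation figure comes from the post; the fleet size
# and per-vehicle charge are assumptions for illustration.

fleet_size = 20e6        # assumed number of EVs on UK roads by 2050
daily_charge_kwh = 8.0   # assumed average overnight charge per vehicle
utilisation = 0.75       # share of EV storage usable for shifting (post, Level 4)

shiftable_gwh = fleet_size * daily_charge_kwh * utilisation / 1e6
print(f"~{shiftable_gwh:.0f} GWh of shiftable demand per night")
# ~120 GWh on these assumptions -- a useful nightly buffer, though small
# against five days of winter heating demand in a low-wind spell
```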

Finally there's nuclear. At Level 1 it is assumed that 'the Government no longer wishes to take new nuclear forward and that a lack of clarity over planning and licensing timescales would lead to no planning applications coming forward and potentially the suspension of activities at sites where planning applications had been submitted'. At Level 2 there would be 'continued government and public support for new nuclear and that the facilitative actions would progress', with a total capacity of 39 GW at 2050 delivering 275 TWh of electricity per year, implying a build rate of just over 1 GW/year. (Interestingly, by comparison, DECC notes that in Germany in recent years the build rate for wind has averaged around 2.1 GW/year.)

At Level 3, a nuclear build rate of 3 GW/year is achievable from 2025, leading to 90 GW at 2050 (633 TWh). And at Level 4 we move from a build rate of 3 GW/yr up to 2025 to a maximum of 5 GW/yr thereafter, with government interventions being needed, so that 146 GW is reached at 2050, delivering 1,025 TWh. But we couldn't do that by ourselves - we would need overseas help!
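
The build-rate arithmetic is easy to reproduce approximately. In this sketch the start year is assumed and retirements are ignored, which is why the gross figure overshoots the 146 GW the pathway quotes:

```python
# Piecewise build-rate arithmetic for the Level 4 nuclear pathway above:
# ~3 GW/yr to 2025, then ~5 GW/yr to 2050. The start year is assumed and
# retirements are ignored, so this is a gross upper bound.

def gross_capacity(segments):
    """Sum capacity from (years, GW_per_year) build segments."""
    return sum(years * rate for years, rate in segments)

segments = [(15, 3.0),   # ~2010-2025 at 3 GW/yr (assumed start year)
            (25, 5.0)]   # 2025-2050 at 5 GW/yr
print(f"gross build: {gross_capacity(segments):.0f} GW")
# 170 GW gross; allowing for ramp-up and retirements brings this down
# toward the 146 GW the pathway quotes
```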

Looking 40 years ahead is risky and hard to cost, but it's a brave effort, especially as it also tries to cover a range of heat supply options, like solar, and end-use sectors, including transport. There is a call for evidence, and there will no doubt be plenty of debate on the details and on the viability and desirability of some of the Level 4 suggestions - not least 146 GW of nuclear!

If you burrow in the literature of palaeoclimatology, climatic dynamics and glacial geomorphology, you can find all kinds of snowline: the transient snowline, the annual snowline, the climatic snowline, the regional snowline, the orographic snowline, and for all I know some others. Most of them are misnomers.

It is easy to see how the word "snowline" became popular. If you look at a mountain, the line separating snow-covered from snow-free terrain is often the most striking feature of the view. This transient snowline is often straight and, as far as the eye can judge, horizontal. But it doesn't have to be. It can be messy, because of outlying disconnected patches of snow and patches above the snowline that are free of snow. What is more, it can and does wander up and down over the contours, which leads to uncertainty because we often say "snowline" when we mean "snowline altitude". On a single mountain, or even a single glacier, the snowline can range in altitude through hundreds of metres, so we really mean "average altitude of the snowline".

In glaciology, we have two further complications: we don't really want to know about the transient snowline but about the annual snowline, and we don't even want to know that so much as the annual equilibrium line.

The annual snowline is the transient snowline just before the first snowfall of winter, or whenever the mass of the glacier reaches its minimum in the annual cycle. Year upon year, comparing annual snowlines is much safer than comparing transient snowlines from random points in the cycle of the seasons. On the other hand, observing these annual snowlines is much trickier because of the likelihood that you won't get there in time or won't see anything if you do (because of the weather).

The annual snowline sometimes separates the glacier into an upper part that has gained mass over the year from a lower part that has lost mass, which makes it the same as the annual equilibrium line. But not always. If meltwater from the surface refreezes when it reaches the contact of the snow with the underlying glacier ice, we distinguish it as "superimposed ice", which represents mass gained by the glacier during the current mass-balance year. This is important for accurate book-keeping, but it also means that the snowline altitude can be tens of metres or more above the equilibrium-line altitude, with exposed superimposed ice in between.

In climatology, especially at regional and broader scales, these complications are usually set aside. This is where the word "misnomer" comes into its own because, except when they plot the snowline altitude on a graph against latitude, the climatologists' snowline (regional, climatic, orographic, whatever - let's agree to call it the climatic snowline) is not a line at all. It is a two-dimensional surface. But we all know what the climatologists mean, and misuse of words is sometimes curiously unimportant as a barrier to the advancement of understanding.

The climatic snowline is a generalization, but a very valuable generalization, summarizing the state of the atmosphere near the Earth's surface in a very distinctive way.

Look at the climatic snowline in the graph. Set aside the complications and inaccuracies of usage, and ignore for a moment the colour scheme and the fact that the "line" is discontinuous — pretty fat at some latitudes, missing altogether at others. What we see is the altitude at which, if there were some land, there would probably be glaciers, or glacier equilibrium lines to be precise.

A global approximation of the climatic snowline. South Pole on the left, North Pole on the right. Each little square is at an altitude which is the average of many "mid-altitudes", each of which is the average of one glacier's minimum and maximum altitude.

One curious thing about the climatic snowline is that it is nearly always assumed to be an isotherm, often the one representing a mean annual temperature of 0° C. The colour scheme shows that this is only a good assumption if you are willing to do a lot of ignoring. For example, at latitudes between 55° N and 65° N the mean temperature of the snowline during the warmest month can be as high as +8° C (the lowest snowlines) or as low as -4° C (the highest snowlines).

The graph has been lying around on my hard drive for two decades. I only dug it out because I wanted to put Snezhnika, at 44.8° N and an altitude of 2450 m, into context. You can find small but stable glaciers like Snezhnika in mountain ranges where the climatic snowline is well above the highest peaks, but they are not a reliable reflection of the big picture. There is a lot more to be said about the big picture, but it will have to wait for another occasion.

The EU is set to contribute 45% of the construction costs for ITER, the new international fusion reactor being built in France, which some estimates now put at €15 bn, three times the 2006 cost estimate. But the EU's financial problems may mean that it can't deliver all of its share of around €7.2 bn.

The most pressing problem is a €1.4 bn gap in Europe's budget for ITER in 2012–13. Nature commented: "Left unresolved, the impasse in Europe will, at best, delay the project further. At worst, it could cause ITER to unravel entirely." It added: "The crunch is so serious that some European states have gone as far as to ask the commission to investigate the possibility of withdrawing from ITER, according to sources familiar with the negotiations. The price of such a withdrawal would probably be in the billions, as the treaty governing ITER requires heavy compensation to other partners."

A temporary solution would be a loan from the European Investment Bank to cover the immediate €1.4 bn budget gap. Another possibility would be to build a smaller version. But that could compromise its viability and aims.

Following a crisis meeting in July, it now seems that some interim refinancing has been agreed, but details of who is paying more are scarce. However, the larger point remains. As Stephen Dean, president of Fusion Power Associates, a US non-profit advocacy group, told Nature: "There are serious questions about the affordability of fusion as a whole as a result of ITER."

On their website, Fusion Power Associates say that: "It would be premature at this stage to judge which of the variety of magnetic and inertial fusion concepts will ultimately succeed commercially." But, although it is part of the ITER magnetic-containment programme, the US is also pushing laser-powered inertial fusion strongly these days, while the UK is the base for an international HiPER laser fusion project.

HiPER is being supported by a consortium of 25 institutions from 11 nations, including representation at a national level from six countries. Following positive reviews from the EC in July 2007, the preparatory-phase project will run up to 2011, aiming to establish the scientific and business case for full-scale development of the HiPER laser fusion facility. This phase is timed to coincide with the anticipated achievement of laser-fusion ignition and energy gain on the National Ignition Facility laser in the US. Then, the website says, "…future phases can proceed on the basis of demonstrable evidence. Construction of the HiPER facility is envisaged to start mid-decade, with operation in the early 2020s…", possibly at the Rutherford Appleton Laboratory in Oxfordshire, since the UK is a leading contender to host the HiPER laser facility.

The physics is tricky, but the engineering is even more so - 1 mm pellets of deuterium and tritium have to be presented accurately and fired by lasers continually, many times a second, and the debris cleared away. But the HiPER group seems confident it can be done and that inertial fusion may be easier than the "magnetic containment" approach being adopted by the ITER project. They say that "Inertial fusion offers some unique benefits – for example the potential to use advanced fuels (with little or no tritium). This greatly reduces the complexity of the process and further reduces the residual radioactivity. Inertial Fusion also allows for the use of flowing liquid wall chambers, thus overcoming a principal challenge: how to construct a chamber to withstand thermonuclear temperatures for the lifetime of a commercial reactor. In addition, Inertial Fusion allows for the direct conversion of the fusion products into electricity. This avoids the process of heating water, and so increases the net efficiency of the electricity generation process." It would be good to hear more about that. Otherwise it's back to running pipes through the outer blanket to raise steam!

Interestingly, though, electricity production may not be the main aim. As with other fusion projects, and some new fission projects, there is now talk of focusing more on hydrogen or synfuel production (e.g. for the transport sector, presumably either by electrolysis or by using the heat directly for high-temperature dissociation of water). Or the heat could be used for other industrial process-heating purposes.

Whatever the final end-use, the engineering does sound mind-boggling, even if the HiPER website does try to make it seem familiar: "The principle is conceptually similar to a combustion engine - a fuel compression stage and an ignition stage" (i.e. "analogous to a petrol engine (compression plus spark plug) approach").

Well yes, but it's at 100 million °C, and, after a few firings, the whole thing will become fiercely radioactive due to the blast of neutrons that will be produced.

The neutrons are also the source of the energy that would have to be tapped if power is to be produced. But this bombardment means that, as with ITER, the containment materials and other components will be activated by the neutron flux and will have to be stripped out periodically and stored somewhere, although the half-lives would be relatively short - decades. The radioactive tritium in the reactor also represents a hazard: although the quantities at any one time would be small, an accidental release could be very serious.

Overall it all sounds very complex and daunting and not a little worrying. On current plans we might be seeing a prototype working in the 2030s, although that sounds a little optimistic. Meanwhile, other ideas may yet emerge. There is talk of hybrid fusion/fission systems – perhaps using the neutron flux to convert thorium into a fissile material, or to transmute some of the active wastes from fission. And there was me thinking that fusion was meant to replace fission – not support it!

The UK is spending about half its energy R&D budget on fusion, including a contribution via EURATOM to ITER as well as its national programme (£20m last year), following on from JET at Culham. HiPER may achieve an earlier breakthrough than ITER, but the UK Atomic Energy Authority has said that, assuming all goes well with ITER and the follow-up plants that will be needed before anything like commercial scale is reached, fusion only "has the potential to supply 20% of the world's electricity by the year 2100". Renewables, including hydro, already supply that much globally, and the new renewables, like wind, solar, tidal and wave power, are moving ahead rapidly - and could be accelerated.

As I said in a previous blog, since we need to start responding to the climate problem now, it might make more sense to speed up the development and deployment of the full range of renewable technologies, and make use of the free energy we get from the fusion reactor we already have - the sun.

http://fusionpower.org/

http://www.hiper-laser.org

What a bad idea it was for some layout person at New Scientist to label the photo of a Himalayan glacier with the caption "Himalayan glaciers will vanish by 2035". Putting "Some" before "Himalayan" would have made the story true, as opposed to false. Of course it would also have made the story boring, as opposed to attention-grabbing. That glaciers are vanishing is a commonplace of the journalists, and up to a point it is a truism.

But truisms need to be seen in true perspective. Some Himalayan glaciers have undoubtedly vanished already. When any regional inventory of glaciers is repeated, typically after a few decades, the count usually goes down slightly. Sometimes it goes up, if larger glaciers have fragmented into smaller ones. More often a few percent, or even just a fraction of a percent, of the glaciers have indeed vanished.

The true perspective on this is that the glaciers that have vanished were never very big in the first place. The last large-scale episode of glacier growth, the Little Ice Age, culminated 100-300 years ago depending on where you look. But wherever we look, the evidence is that nearly all glaciers have been shrinking since that time. It is likely that the ones that have vanished already are mostly the ones that came into existence during the few centuries leading up to the date of peak ice.

There is more to the true perspective, though. For a start, given the climatological evidence for warming, we need to know whether the rate of loss of ice is greater now than, say, a few decades ago. Here the glaciological evidence is unequivocal: it is. But there is still more to be said.

Plenty of small glaciers have failed to make it. South Africa lost its only glacier during the 1990s. Chacaltaya Glacier in Bolivia was highlighted as a disappearing glacier in volume II of the IPCC's Fourth Assessment, where you can see it dwindling from 0.22 km2 in 1940 to 0.01 km2 in 2005. It disappeared in 2009.

On the other hand, some tiny glaciers have survived. Recently Grunewald and Scheithauer reported on those of southern Europe (excluding the Caucasus). It might be a challenge to identify in some of them the flow that is required by most definitions of the difference between a glacier and a snowpatch, but all are the genuine article in the sense that they are still there at the end of every summer.

I bet you didn't know that there are two glacierets in Bulgaria. (I am betting on whether you knew, not whether you care.) The authors managed to retrieve ice cores from one of them, Snezhnika. It was 12 m thick at its thickest, and on average 3 m thick over an area of 0.01 km2, or 10,000 m2. Since the early-20th-century disappearance of Corral del Veleta in the Sierra Nevada of southern Spain, Snezhnika, at 41.77° N, has been Europe's southernmost glacier. Working their way northwards, Grunewald and Scheithauer document glaciers in Albania, Montenegro (this one a monster, five times the size of Snezhnika) and Slovenia.

These shrimps seem to be doing OK, although the picture is mixed elsewhere. For example in the Cantabrian mountains of northwest Spain all that is now left of the Little Ice Age glacierets is four buried lumps of ice, while Calderone Glacier, in the Apennines of Italy, split in two during 2009.

All this might provoke subdued mirth among more macho glaciologists, but glaciers that refuse to go away should elicit admiration for their pluck and stubbornness. They also remind us that gains by snowfall and losses due to sunshine are not the whole story of glacier mass balance.

Chris de Beer and Martin Sharp studied 86 glaciers smaller than 0.4 km2 in southern British Columbia and showed that between 1951 and 2004 a few disappeared and a few shrank, but most didn't change much. By careful analysis, they found that these objects have found sizes that are in equilibrium with their prevailing microclimates. Nearly all were in shadow for much of the time and were nourished significantly by snow avalanches from the surrounding terrain.

So the survivors offer a twist in the plot of the mass-balance story, but they do not point to flaws in our understanding of climatic change, and nor do the less fortunate ones. We should expect more and more disappearances as time passes, but should not panic when the journalists tell us that "Glaciers are vanishing".


The current conundrum discussed in the news and among the public is between (1) Western governments' continuing to spend to stimulate their economies after a decade-long period of overspending and (2) saving to prevent the future collapse of governments under their own debt burdens. Unfortunately, energy resource availability is rarely part of the discussion, and pundits never point to it as a core driver.

There is no consensus among mainstream economists on the "economic growth" issue, as the proper choice, or series of choices, is quite unclear. There appears to be no good path, only a choice between bad paths. Ecological or biophysical economic arguments have historically been quickly dismissed as invalid, yet no other economic theories are based upon anything tangible. We hear of the need for "consumer confidence" as if that were a tangible and meaningful reason to invest. Irrational exuberance, or extreme confidence, is exactly what pushed us into two boom-bust cycles (dot-com and now housing) over the last two decades. Confidence only takes you so far; at some point you need something tangible upon which to base economic theory. That tangible good is essentially natural resources, primarily energy, and the technologies that convert those resources into consumer products and services.

Because increasing consumption of natural and energy resources is the key driver of economic growth, if you do not increase their consumption, you do not grow. Yes, more efficient energy production and conversion systems (power plants, vehicles, mining, etc.) also induce economic growth, but the past indicates that higher efficiency begets higher total consumption - the Jevons paradox. However, when fossil resource availability does decline due to depletion, we'll be happy for higher-efficiency services even as total consumption decreases.
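
The rebound logic behind the Jevons paradox can be made concrete with a toy model: an efficiency gain cuts the cost per unit of energy service, and demand responds according to a price elasticity. The elasticity value here is an assumption purely for illustration; Jevons-style "backfire" appears whenever demand is elastic enough:

```python
# Toy rebound-effect (Jevons) sketch: efficiency cuts the cost per unit
# of energy service, and demand responds. The elasticity value is an
# assumption for illustration, not an empirical estimate.

def demand_after_efficiency(gain, elasticity=-1.2):
    """Relative demand for the energy service after an efficiency gain.

    gain: fractional efficiency improvement (0.25 = 25% more service
    per unit of energy). Cost per unit of service falls to 1/(1+gain)
    of its old value; demand scales as cost**elasticity."""
    relative_cost = 1.0 / (1.0 + gain)
    return relative_cost ** elasticity

service = demand_after_efficiency(0.25)  # 25% efficiency gain
energy = service / 1.25                  # energy used = service / efficiency
print(f"service demand x{service:.2f}, energy use x{energy:.2f}")
# ~x1.31 service, ~x1.05 energy: with demand elastic enough (beyond -1),
# total energy use rises despite the efficiency gain
```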

Adding or switching to energy resources and technologies, where they exist, takes decades. Translation: this is longer than election cycles. Thus, a US president who implements energy efficiency or conservation policies will generally not reap the rewards or drawbacks of those policies; the next president, or perhaps the second one down the line, will be dealing with them. Since 2000, the United States has consumed roughly the same total amount of primary energy each year, about 100 quadrillion Btu. There has never been a time in US history when total energy consumption was stagnant for this long. Much of the reason for the stagnation was the offshoring of energy-intensive industries to developing countries, and thus there are fewer and fewer non-skilled jobs available after each economic downturn. The US economy restructured in response to increasing energy prices during the last decade, and companies traded cheap energy, in the form of the muscle of Chinese workers, for more expensive energy in the form of natural gas and petroleum.

Thus, major structural changes in the US economy have occurred over the last decade, and no policy can reverse these trends in less than another decade. The reason that economists, and even Federal Reserve Chairman Ben Bernanke, are calling the economic future "unusually uncertain" is that the US has never encountered the situation in which we now reside. Energy consumption is flat. World oil production is at a plateau. We have shipped jobs to China and borrow their profits to feed our consumption habit. Unemployment is high.

Policy can't ship more jobs to China, because hindering employment even further is a political death knell. Policy can promote offshore oil and renewable energy technologies, but those resources and technologies have a lower energy return on energy invested (EROI) than the resources we have used in the past. Lower EROI means more of the economy must focus on energy production itself rather than producing other, more discretionary economic goods. And a change in transportation mode (electric cars, electric and/or high-speed trains) will take decades; these changes can work, but they may never be as economically productive as burning petroleum at $20/BBL to $60/BBL.
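
The EROI point follows from simple arithmetic: the fraction of gross energy output left over for the rest of the economy is 1 - 1/EROI. The EROI values in this sketch are illustrative assumptions, not measurements:

```python
# Net-energy arithmetic behind the EROI point above: the share of gross
# energy output not consumed by the energy sector is 1 - 1/EROI.
# The EROI values below are illustrative assumptions, not measurements.

def net_fraction(eroi):
    """Share of gross energy output left for the rest of the economy."""
    return 1.0 - 1.0 / eroi

for label, eroi in [("historical oil (assumed)", 30.0),
                    ("deepwater oil (assumed)", 10.0),
                    ("some biofuels (assumed)", 1.5)]:
    print(f"{label:<26} EROI {eroi:>5.1f} -> {net_fraction(eroi):.0%} net to society")
# 30 -> 97%, 10 -> 90%, 1.5 -> 33%: lower EROI means a bigger slice of
# the economy devoted to energy production itself
```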

So the reason that economists see a "sluggish" or "low-growth" economy in the foreseeable future is energy. From 2000-2008 we pretended that high rates of GDP growth could occur without increasing energy consumption. Increasing prosperity in the developing world has strained energy resources to the point that we must adjust to a future in which energy consumption is both lower and drawn from new resources and technologies. These technologies and resources, even without considering altering them to prevent greenhouse-gas emissions, are less productive. So if you put these concepts together, you end up with the result that we must (1) invest in new energy technologies that (2) employ more people per unit of output (kWh, liter of fuel, etc.) and produce (3) lower net energy than historical coal, natural gas and oil (even future coal, oil and natural gas are less productive), such that (4) the energy sector grows as a proportion of the economy and (5) by definition the rest of the economy must shrink. Either this reality will come true, or the scientists working on fusion will pull a rabbit out of a hat. No presidential tax policy will do much to significantly alter this equation. Energy consumers can only wait to see whether we do or do not pull off sufficient technology solutions, and adjust their habits accordingly.