
February 2010 Archives

I attended the conference Understanding, Measuring, and Managing Water Scarcity Risks and Footprints in the Supply Chain last week. The conference was attended primarily by corporate sustainability managers, along with a few academics and non-governmental organizations. There was much discussion of how to measure the water impacts of industry, as well as how to act on measured or calculated information. Many of the speakers and attendees were familiar with several methods for measuring water "usage", such as the Water Footprint (www.waterfootprint.org) and the Global Water Tool of the World Business Council for Sustainable Development (www.wbcsd.org/web/watertool.htm). The former presents information on the green water (soil moisture, for the most part provided by precipitation) and blue water (water stored in rivers, lakes, and aquifers) consumed in the supply chain of a product. The latter is a mapping program that allows businesses to see whether they have operations in regions of the globe that face water scarcity.

There was general agreement within the community that the Water Footprint is not being properly used; Jason Morrison of the Pacific Institute summarized the situation by saying that "different interests use the term 'water footprint' to mean different things" for their own purposes. Technically speaking, the water footprint is in units of water volume per time. By multiplying by the time per product manufactured, one can obtain the water footprint in units of water volume per product. This last quantity is the one most commonly presented, in such examples as the quantity of water needed for a pair of jeans or a cup of coffee, and it is a handy unit of measure that consumers and business people can easily grasp. The problem is that it doesn't seem to be helping either water resource management practitioners or sustainability managers at companies.
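As a minimal sketch of that unit conversion, with made-up numbers purely for illustration:

    # Toy conversion of a water footprint from volume-per-time to
    # volume-per-product. All numbers are hypothetical.
    footprint_m3_per_day = 50_000.0  # water consumed across the supply chain per day
    products_per_day = 5_000.0       # e.g. pairs of jeans manufactured per day

    # (m^3/day) * (day/product) = m^3/product
    footprint_per_product = footprint_m3_per_day / products_per_day
    print(footprint_per_product)     # 10.0 m^3 of water per product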

The issue stems from collapsing into one number the water consumed over a supply chain that is spread out in time and in space. If your supply chain for a product occurs in more than one location and/or at more than one time, then by definition you cannot capture all of that information in a single number. Mathematically this is like taking the derivative of a function: each time you take the derivative, you lose one degree of freedom, or one piece of information. For example, the volume of a sphere is V = 4/3*pi*r^3, in units of cubic meters (m^3), describing a three-dimensional object. Taking the derivative of volume with respect to radius gives the surface area of the sphere, A = 4*pi*r^2, in units of square meters (m^2), describing a two-dimensional surface. Hence we went from three dimensions to two. If I show you only a final value for the surface area, say 1 m^2, you do not know that a sphere is being described. However, if I tell you the equation for the sphere's surface area and tell you it is equal to 1 m^2, then you can calculate the radius (and hence the volume), because I have just provided the extra information that describes the third dimension.
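A quick numerical check of that last step (given only the surface-area formula and A = 1 m^2, the radius and volume follow):

    import math

    # Recover the sphere from its surface area alone.
    A = 1.0                            # surface area in m^2
    r = math.sqrt(A / (4 * math.pi))   # invert A = 4*pi*r^2
    V = (4 / 3) * math.pi * r**3       # volume from the recovered radius

    print(round(r, 3))   # 0.282 m
    print(round(V, 4))   # 0.094 m^3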

What does this have to do with water footprinting? Well, just as you need more than one piece of information to know that a sphere is being described (any two of the equation, the radius and the surface area), you need more than a single value for the water footprint of a product to understand the environmental impact caused by its production. For example, if a shirt requires water during the farming of cotton and the dyeing of the fibers, then one could present the information as two numbers on a bar chart (among many other ways of presenting information). Part of the bar would represent the cotton farming, and the other part would represent the dyeing step. By telling people where you source your cotton and where you perform your dyeing, you have now presented more information – information, again, that cannot be conveyed by a single value. I have just described four pieces of information: water for cotton, water for dyeing, location for cotton, and location for dyeing. A map showing the water consumption in each location where water is consumed could present all four pieces of this information. I could go on about temporal components. The Global Water Tool exists to relate the kind of information described in this paragraph to water scarcity around the globe. It, of course, uses a map for this.

This thought exercise is meant to show that describing environmental impacts is inherently somewhat complex. In describing water flows for human-appropriated needs, as a basic starting point we should avoid the word "use" to describe water flows. Instead, use "consumption" to describe water that enters the system as a liquid and exits as water vapor or in another chemical form. Use "withdrawal" to describe water entering and exiting the system in liquid form, and note that consumption is a subset of withdrawal. The water footprint is a consumptive descriptor that for the most part includes evapotranspiration (green water) on top of what the term consumption (the blue water component) takes into account. If we stick to some of these basic rules, we can better understand how human and ecosystem services are subject to risks in water availability.

Tides come in


The use of tidal energy for generating electricity is moving ahead rapidly around the world, and the potential for expansion is significant. The emphasis is on tidal current turbines, although some tidal barrages are also being developed or planned – for example, various barrage and lagoon schemes are still under consideration for the Severn estuary. A decision on which to go for should emerge later this year.

The global potential is quite large. Trade network Tidal Today's second annual 'Tidal Summit', held in London last November, heard from a speaker from the Fraunhofer Institute for Wind Energy and Energy System Technology (IWES), who relayed some estimates of annual tidal energy potentials: China, 50 TWh; Ireland, 10 TWh; UK, 31 TWh; France, 10 TWh; Norway, 3 TWh; US, 115 TWh. The big ones, in terms of capacity, included Canada (>40 GW) and South Korea (1,000 GW).

In terms of tidal turbine development and deployment, the UK still leads the pack, with Marine Current Turbines' 1.2 MW Seagen in Strangford Narrows to be followed soon by a 10 MW tidal farm off Anglesey – MCT has just got £4.8 m from Siemens, EDF Energy, HighTide and others for the next stage. Meanwhile, the European Marine Energy Centre (EMEC) on Orkney is providing key test facilities for UK devices and for systems from overseas. It has the world's only grid-linked tidal test facilities – five open-sea berths, with 11 kV, 5 MW subsea cables. And, as the Crown Estate noted at the summit, Scotland has 25% of the available European tidal potential – Pentland Firth and Orkney Waters contain six of the top-10 UK tidal sites. The Crown Estate seems to have been taking its time assessing these, but is about to announce site licences for selected projects. So prospects for the future look good.

One of the leaders among the new projects is OpenHydro's novel 'open centre' rim-generator turbine, which was tested at EMEC. There are plans for deployment off Alderney and in the Bay of Fundy. Next up, Atlantis Resources Corp, the international marine energy company, is to test its AK-1000 tidal turbine at EMEC this summer. Tidal Today reported that the AK-1000 has been designed specifically for harsh marine environments. It features what are claimed to be the world's most efficient blades – with an 18 m rotor diameter – and has minimal moving parts. It should be capable of generating 1 MW of power in a 2.6 m/s tidal current. The company says that it will award contracts associated with the deployment supporting over 100 UK jobs. The AK-1000 will be its second grid-linked turbine. Its Nereus turbine in San Remo, Australia, first deployed in 2006, continues to generate power for the grid.

The US is also active in the field. The summit heard updates on Verdant Power's seven-turbine project in New York's East River, and there are several other North American projects, including the Canadian 'Clean Current' ducted rotor, which is now being developed in conjunction with Alstom. At the summit it was reported that 'in September 2006, Clean Current installed a 3.5 m diameter, 65 kW demonstration unit at Race Rocks, BC, Canada,' and that 'performance met or exceeded expectations'. Ongoing testing is now being carried out, with a view to installation in the Bay of Fundy.

There's also a lot happening in South Korea – which seems likely to become the world leader. Developer Voith Hydro noted that tidal range (barrage) projects in planning or execution included Sihwa, 254 MW; Garorim Bay, 520 MW; and Incheon Bay, 1,320 MW, and that the Korean Ministry of Knowledge Economy estimates the tidal current potential at 470 MW. Voith is installing 100 MW of its 500–1,000 kW propeller-type turbines at Jindo, Jeollanam-do Province, and there's also a project involving the UK's Lunar Energy ducted-rotor system.

But not everyone was quite so positive. RWE npower told the summit that costs and financing were still issues. There was an 'immature market so commercial costs are not yet apparent' and 'utilities won't loss-lead demo projects for uncertain future market – demo projects must promise stand-alone economic returns'. There were also 'inadequate support mechanisms'. And there was a 'massive offshore wind potential', whereas tidal 'has a fair way to go'. Consultants Douglas Westwood evidently felt similarly: they told the summit that it costs about '£60 m to take a device through to commercialization', felt that the 'forecast of 41 MW installed capacity by 2013 may be missed', and warned that without proper support, tidal might only provide '1% of UK power generation capacity by 2030'.

Fortunately, in February the UK government finally got its support system working and allocated £22 m from its new Marine Renewables Proving Fund (MRPF) to six projects: two wave systems (Oyster and Pelamis) and four tidal current projects, using new turbines from Atlantis, Voith, MCT and Hammerfest Strom UK.

The MRPF is managed by the Carbon Trust, which notes that all of the devices receiving MRPF funding will be deployed in UK waters and will stimulate supply-chain opportunities associated with the construction and deployment of these technologies, with over 75% of the funding released through the MRPF going to the UK supply chain.

The MRPF aims to accelerate the development of marine energy devices to the stage when they can qualify for the Government's existing, but so far unused, £42 m Marine Renewables Deployment Fund (MRDF) support scheme and, ultimately, be deployed on a commercial scale with support from the Renewables Obligation. MCT's Seagen has in fact managed to jump straight to that stage, after several years of grant-aided and independent development, and is now getting 2 ROCs/MWh for its 1.2 MW unit in Strangford Narrows. The new MRPF money means that it, and the other developers, can now get moving on new projects.

In addition to those already mentioned, there is quite a range of projects at the starting gate, or at least under development, with some novel UK designs emerging. Scottish projects are well represented, with, for example, Scotrenewables' floating device and TidalStream Ltd's unique tidal turbine platform design, Triton. But Wales is also in the race, with Swanturbines' Cygnet, developed at Swansea University, and Tidal Energy's DeltaStream device, which is to be put through trials at Ramsey Sound, near St Davids.

Humberside is also figuring strongly. For example, following tests on models at the University of Hull, a full-scale prototype of Neptune's ducted vertical-axis turbine, the 'Proteus' device, is being tested in the Humber estuary at Hull. Pulse Tidal's hydrofoil device is also under test in the Humber; its 100 kW test rig currently feeds power to a chemicals company on the banks of the river. Tidal Today reported that the company is to receive a grant of €8 m from the EU's technology research and development fund (Framework Programme 7), enabling it to begin work immediately on developing its first fully commercial tidal energy generator – a 1 MW unit, to be commissioned in 2012.

Not all of the many new tidal current devices emerging will succeed or get sufficient funding to prove themselves. But there is a mood of pioneering enthusiasm, with developers like Pulse Tidal's chief executive Bob Smith being very positive about the future: 'We have developed an economic way to recover predictable, renewable energy from the tides and are entering a young market predicted to be worth at least £6 bn annually in electricity sales'. He added: 'The Pulse system is expected to match the cost of offshore wind after only 100 MW has been installed. In the future tidal energy is set to surpass wind as the most economic and predictable source of offshore power.'

* For more information, visit www.tidaltoday.com.

The idea that a super-enormous volcanic eruption — or hypereruption — would alter the climate dramatically has been around for a long time. It fits the facts about the biggest historical eruptions we know of, and also our understanding both of how volcanoes work and how the atmosphere works. But could the drama extend to tipping the climate from an interglacial state to a full-blown ice age?

The answer, as has long been believed, is still No, according to Alan Robock and colleagues in a paper published last year. They added several new kinds of potential cooling mechanism to two climate models, and were unable to trigger an ice age.

When a volcano goes off, it is always unpleasant for those in the immediate neighbourhood. The climatologist's concern, however, is with the broader consequences. A violent enough eruption can loft its products into the stratosphere, where they can persist long enough to spread around the world.

The main culprit is sulphur dioxide, SO2. It reacts with water vapour to form a haze of sulphuric acid droplets. The droplets increase the scattering of incoming solar radiation, making the atmosphere more reflective and cooling the Earth slightly. The more SO2, the more cooling.

The snag is that the haze doesn't last. The atmospheric effects of Pinatubo in 1991, the largest eruption of recent times, were detectable for a few years at most.

Krakatau in 1883 was bigger than Pinatubo. Tambora in 1815 was even bigger, and still stands as the largest eruption in the historical record. If we turn to the geological record, the largest eruption we know of is that of Toba in Sumatra, in about 72,000 BC. Toba yielded a quantity of stratospheric SO2 hundreds of times that of Pinatubo, which was about 20 megatonnes.

Robock and colleagues injected 300 "Pinatubos" of SO2 into the baseline run of their models, but also tried amounts as great as 900 Pinatubos. With a dynamic vegetation module, they explored the feedback on global temperatures of widespread death of vegetation due to the volcanic cooling. The feedback was not very impressive. Precipitation dropped markedly, but cooling reached about 10 degrees at most, and recovery was nearly complete after about a decade. Coupling the climate model to an interactive model of atmospheric chemistry, they found that the SO2 reaction products persist for longer and produce greater total cooling — as much as 18 degrees — but still no permanent, ice-age-like change in the climatic state. The cooling was partly offset by warming influences, such as more water vapour and ozone in the stratosphere, and more methane in the troposphere. All of these are greenhouse gases.

One thing that bothers me about the Robock study, which is a step forward, is that it still may not cover all the bases. For example the model runs may not have been long enough to pick up delayed responses of the ocean to reduced inputs of heat during the cooling episode. And the climate models were unable to follow the behaviour of the other sluggish players in the drama, the glaciers themselves.

On the other hand, look at what actually happened. In an older paper, Zielinski and co-authors found a signal from Toba in an ice core drilled in Greenland: about six years of strongly enhanced deposition of sulphate, followed by a 1,000-year-long "stadial". Stadials, identified by looking at ratios of the isotopes of oxygen, are relatively short cool episodes within ice ages. However, Toba was preceded by 2,000 years of more moderate cooling, which suggests that the stadial proper might have happened anyway. What is more, the oxygen isotopes repeat a very similar pattern in the 2,000–3,000 years after the end of the "Toba stadial": rapid warming, moderate cooling, rapid cooling, with no evidence for volcanism at all. In fact, these two excursions look rather like Dansgaard-Oeschger events.

So we have a plausible but not compelling link between our only known hypereruption and a limited amount of long-term cooling. If a Toba happened tomorrow, it might presage a short stadial, but not a long one, and anyway stadials ought not to be at the top of your list of things to worry about. But on the purely intellectual side, the effort to understand Toba nevertheless bears on an important question. How hard do we have to hit the climate system before it really gets upset, or, putting it another way, what does "tipping point" mean?

Carbon dioxide is not the only greenhouse gas. Co-emitted air pollutants also significantly affect global climate. Because they interact in various ways, their overall effect can be complicated to evaluate. In general, however, most air pollutants have a relatively short lifespan in the atmosphere and hence can be classified as short-lived species. Many air pollutants, including organic aerosols, have a global cooling effect, whereas black carbon and ozone contribute to global warming. The effect of the short-lived species is not marginal: in fact, their combined radiative forcing may outweigh that of carbon dioxide (Forster et al., 2007). In view of this observation, there is increasing attention on the mitigation potential of some air pollutants, such as ozone and black carbon, short-lived species with high radiative forcing. On the other hand, other aerosols, excluding black carbon, exert a cooling effect that may have masked about 50% of the global warming from greenhouse gases. The fight against air pollution, waged for public health, can hence have an unwanted side effect by accelerating global warming. Against this background, Unger et al. from NASA ask, in their PNAS article 'Attribution of climate forcing to economic sectors', the following question: how does the total radiative forcing break down by economic sector, the drivers of emissions?

The authors point out that emissions of black carbon (positive RF) and organic aerosols (negative RF) are often coupled. Hence, the ratio between the two is important for evaluating the total radiative forcing of a specific sector. An analysis according to this ratio reveals significant differences across sectors (a toy numerical sketch follows the list):

  •  There are sectors, such as the power industry, that have high emissions of species with both positive and negative radiative forcing.
  •  Other sectors, such as road transport, are dominated by species with positive radiative forcing. More specifically, the ratio of black carbon to organic carbon aerosols is relatively high in road transport.
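The role of the ratio can be seen in a toy calculation; this is a minimal sketch, not the authors' method, and the per-unit forcings and emission amounts below are hypothetical, purely for illustration:

    # Toy sketch: why the black carbon / organic carbon (BC/OC) ratio matters.
    # Per-unit forcings and emission amounts are hypothetical.
    BC_FORCING = +2.0   # warming per unit of black carbon emitted (arbitrary units)
    OC_FORCING = -0.5   # cooling per unit of organic carbon emitted

    sectors = {
        # sector: (black carbon emitted, organic carbon emitted)
        "road transport": (4.0, 3.0),   # high BC/OC ratio
        "power industry": (1.0, 6.0),   # low BC/OC ratio
    }

    for name, (bc, oc) in sectors.items():
        net = bc * BC_FORCING + oc * OC_FORCING
        print(name, round(bc / oc, 2), round(net, 1))
    # road transport: BC/OC = 1.33, net aerosol forcing = +6.5 (net warming)
    # power industry: BC/OC = 0.17, net aerosol forcing = -1.0 (net cooling)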

The accumulated climate impact over all sectors is visualized in the figure below (source: Unger et al., 2010). In the short term (2020), road transport dominates the accumulated climate impact. In the long run (2100), the power sector dominates, as greenhouse gases persist significantly longer in the atmosphere than short-lived species.

[Figure: climate impact of economic sectors in 2020 and 2100 (Unger et al., 2010).]

This technical analysis suggests that effective climate change mitigation can most easily be obtained in on-road transportation – with significant co-benefits, as air pollutants from transport are more harmful than those from other sectors. This is, for example, due to a higher intake fraction (the fraction of pollutants that is inhaled; e.g. Marshall et al., 2005), but mitigation here can also have more general benefits for public health and overall mobility (Creutzig and He, 2009). From a climate perspective, however, road transportation is under-regulated. For example, in Europe road transport is not part of the emissions trading scheme and is thereby favoured relative to electrified rail transport. A clever mix of instruments that prices the harmful parts of mobility while enhancing its beneficial aspects (e.g. accessibility) could hence have a positive impact in both the near and long term.

Thanks to Jan Minx for pointing out the PNAS article.


References

Forster, P., et al. (2007) Changes in atmospheric constituents and in radiative forcing. In Climate Change 2007: The Physical Science Basis, Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, eds Solomon, S., et al. Cambridge University Press, New York.

Unger, N., Bond, T. C., Wang, J. S., Koch, D. M., Menon, S., Shindell, D. T. and Bauer, S. (2010) Attribution of climate forcing to economic sectors. PNAS, doi:10.1073/pnas.0906548107.

Marshall, J. D., Teoh, S. K. and Nazaroff, W. W. (2005) Intake fraction of nonreactive vehicle emissions in US urban areas. Atmospheric Environment 39(7), 1363.

Creutzig, F. and He, D. (2009) Climate change mitigation and co-benefits of feasible transport demand policies in Beijing. Transportation Research D 14, 120–131.

What happens in China, in terms of energy use, is widely seen as key to whether serious global climate impacts can be avoided or limited. China relies heavily on coal but is also turning increasingly to non-fossil energy sources. Its nuclear programme often gets the headlines, but in 2008 China had as much wind capacity in place as it had nuclear capacity – 8.9 GW. Of course, the relatively low load factor for wind (under 20%) meant that nuclear produced more energy – 68 TWh as against 13 TWh for wind. Moreover, new nuclear plants are planned, including fast neutron reactors to be supplied by Russia. In all, plans announced in recent years call for nuclear stations to supply 4% of China's power needs by 2020, up from about 2% now; since its energy use is expanding rapidly, that is more than a doubling in capacity. But wind has already more than doubled – installed capacity reached 25 GW in 2009, and a 2020 wind target of 150 GW has been mentioned. China's wind programme is also moving offshore: it recently installed its first 3 MW, 90-metre diameter "Sinovel" offshore turbine, the first unit of a 100 MW Shanghai Donghai Bridge demonstration project.
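Those load factors can be cross-checked with a little arithmetic; a back-of-envelope sketch using only the capacity and output figures quoted above:

    # Back-of-envelope load factors implied by the 2008 figures above.
    HOURS_PER_YEAR = 8760

    def load_factor(capacity_gw, output_twh):
        # Actual annual output divided by output at full power all year.
        max_output_twh = capacity_gw * HOURS_PER_YEAR / 1000  # GWh -> TWh
        return output_twh / max_output_twh

    print(round(load_factor(8.9, 13.0), 2))  # wind: ~0.17, i.e. under 20%
    print(round(load_factor(8.9, 68.0), 2))  # nuclear: ~0.87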

Certainly renewable energy, along with clean coal (i.e. with carbon capture), seems to be seen as a key way ahead. Chen Mingde, vice-chair of the National Development and Reform Commission, in comments quoted by the China Daily newspaper last year, claimed that "nuclear power cannot save us because the world's supply of uranium and other radioactive minerals needed to generate nuclear power are very limited". He saw the expansion of China's nuclear power capacity as a "transitional replacement" for the country's heavy reliance on coal and oil, with the future for China lying in more efficient use of fossil fuels and expanded use of renewable energy sources like wind, solar and hydro.

China's current target is to get 15% of its energy (not just electricity) from renewables by 2020, although this is likely to be raised to 20%. In addition to wind, it's pushing ahead with solar as well as hydro and biomass. China's hydro capacity is expected to nearly double to 300 GW by 2020. And a recent REEEP study suggested that 30% of China's rural energy demand could be met through bioenergy. China already has 65 GW of installed solar thermal capacity, and the potential for expansion is significant (e.g. for large-scale concentrating solar power units in desert areas, feeding power by HVDC links to the cities). A 1 GW prototype plant is planned.

PV solar is also set to expand rapidly. China is already the largest producer of solar cells globally and, although until recently most of them were exported (around 1 GW in 2007), the emphasis has now changed, so that the current national target of having 3 GW of capacity in place by 2020 could be exceeded by perhaps a factor of three. Looking further ahead, work is also underway on tidal and wave energy projects.

Some major integrated projects are also emerging. For example, Reuters reports that China is currently developing a demonstration zone in Hangjin Banner, with a planned 11,950 MW renewable-energy park, which, when completed, should have 6,950 MW of wind generation, 3,900 MW of photovoltaics, 720 MW of concentrating solar power, 310 MW of biomass plants and 70 MW of hydro/storage.

Some innovative new grid links are also being established, designed to deal with the problem that much of the renewable electricity resource is remote from the mostly coastal centres of population. The new extended grid system could also help with balancing the variable output from some renewables. Modern Power Systems reports that Siemens Energy and China Southern Power Grid have started commissioning part of a High Voltage Direct Current (HVDC) transmission line with a capacity of 5,000 MW, covering a distance of more than 1,400 km. It's claimed to be the first HVDC link in the world operating at a transmission voltage of 800 kV. Commissioning of the second phase, and startup of the full system, is scheduled soon.

The Yunnan–Guangdong interconnector will transmit power generated by several hydro power plants in central China to the rapidly growing industrial region in the Pearl River delta in Guangdong Province, with its megacities Guangzhou and Shenzhen. This system can, it is claimed, reduce by over 30 megatonnes the annual CO2 emissions that would otherwise have been produced by fossil-fuelled power plants. In addition, Modern Power Systems reports that there is the 800 kV Xiangjiaba–Shanghai link, on which ABB has been working with the State Grid Corporation of China (SGCC). It will be capable of transmitting 6,400 MW of power from the Xiangjiaba hydropower plant, located in the southwest of China, to Shanghai – a distance of over 2,000 km. It is claimed that transmission losses on the line will be less than 7%.

China is now the world's largest carbon dioxide emitter and its energy demand is still rising rapidly, despite the global economic recession. However, in the run-up to the COP 15 climate negotiations in Copenhagen last December, while not willing to commit to reductions in net emissions, China said it would cut its carbon intensity (emissions per unit of GDP) dramatically – by 40–45% by 2020. That's not the same as reducing net emissions, of course, but it would be a start. And if that pledge is acted on, renewables would clearly play a major part.

China's role at COP 15 has been much debated – essentially it seemed to want to protect its continued growth and avoid imposed emission targets, much like the US. But, like the US, it also seems keen to be a leader in the move to green energy technology – perhaps becoming the "green workshop of the world", feeding the expanding markets for renewable energy systems around the world. In addition to exporting solar PV cells, it was even planning to build wind turbines for, and in, the US – although a US senator's objections may have scotched that.

Just how rapidly China can and will green itself, though, is less clear. Certainly China has massive renewable resources: the wind resource, for example, is put at around 2 TW. And a new study by Michael McElroy and colleagues at Harvard and Tsinghua University in Beijing, published in the journal Science, has claimed that, in theory, wind power could meet all of China's electricity demand by 2030.

That is very unlikely to happen by then, of course, but China is likely to become a major player in the green-energy revolution.

According to Japanese researchers Taikan Oki and Shinjiro Kanae, approximately 500,000 cubic km of water per year are evaporated over the ocean (437,000 cubic km) and the land (66,000 cubic km).1 With water at a density of 1,000 kg per cubic meter, this is 5x10^17 kg of water evaporated per year. Using a latent heat of vaporization for water of 2,270 kJ/kg, this means that a minimum of approximately 1,135,000 exajoules per year (1 exajoule, or EJ, = 10^18 J) of solar energy is used to evaporate the world's water and drive much of the planet's hydrologic cycle.

Given that humans consume approximately 500 EJ/yr in primary energy, this means that evaporating the Earth's water consumes at least 2,000 times more solar energy each year than we consume in primary energy resources. Eighty-seven percent of this water is, in effect, desalinated by evaporation from the oceans. So when we talk about desalinating water, or recycling water, remember that we are inherently deciding that 2,000 times our direct consumption of primary energy resources is not enough for the creation of fresh water!
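The arithmetic behind these figures, as a quick sketch using the numbers quoted from Oki and Kanae:

    # Reproducing the evaporation-energy estimate above.
    ocean_evap_km3 = 437_000.0          # evaporation over the ocean, km^3/yr
    land_evap_km3 = 66_000.0            # evaporation over land, km^3/yr
    total_km3 = ocean_evap_km3 + land_evap_km3     # ~503,000 km^3/yr

    density_kg_m3 = 1000.0              # density of water
    latent_heat_j_kg = 2.27e6           # latent heat of vaporization, J/kg

    mass_kg = total_km3 * 1e9 * density_kg_m3      # 1 km^3 = 1e9 m^3
    energy_ej = mass_kg * latent_heat_j_kg / 1e18  # 1 EJ = 1e18 J

    human_primary_ej = 500.0
    print(round(energy_ej))                      # ~1,142,000 EJ/yr, i.e. ~1,135,000 EJ
                                                 # when the mass is rounded to 5e17 kg
    print(round(energy_ej / human_primary_ej))   # ~2,300 times human primary energy
    print(round(ocean_evap_km3 / total_km3, 2))  # 0.87 -> the 87% ocean share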

So when we think of using desalination, even matched with carbon-free sources or with technologies that are more efficient than those of a couple of decades ago, what we are really saying is that our original allocation of solar energy for water desalination is no longer sufficient for our purposes. With that mindset, should we rearrange our priorities in terms of the uses of water, locating people where the solar resource combines with precipitation patterns and the Earth's contours to deliver water to us renewably? Or should we continue to bet that energy will be cheap, becoming ever more dependent upon it for delivering fresh water? These kinds of questions are mainly for rich countries; only they are lucky enough to have these options.

1. Oki, Taikan and Kanae, Shinjiro (2006) Global hydrological cycles and world water resources. Science, 25 August 2006, 313(5790), 1068–1072, DOI: 10.1126/science.1128845.

We have made some astounding intellectual advances in the past few millennia, and we do right to honour our fellows who make these forward leaps. It is proper to regard Isaac Newton, for example, as one of the most important human beings ever. But the 1% of inspiration would be nothing without the necessary 99% of perspiration, and most intellectual advances have been anonymous.

The oldest recognizably modern ideas about glaciers are no older than a couple of centuries, but the way for them was paved by a lot of preparatory observation and thought. Although neither the Greeks nor the Romans had a word for glacier, I cannot believe that nobody had observed or thought about glaciers before the eighteenth century.

Ötzi, the Iceman who died at the main drainage divide of the Alps about 5,300 years ago, probably crossed the divide regularly for pastoral purposes. Surely he must have had a word for the thing, now called the Niederjochferner, on which he died. Besides glacier and its relatives, there are several other words for the thing in alpine languages: vedretta and vadret in Romansch and Friulian, ferner in the dialect of the Tyrol, and kees, another Tyrolean dialect word. Several small neighbours of Ötzi's glacier are called kar. We do not know whether Ötzi used an ancestor of one of these words, but it would have been hard for him to move around and do his work without some such token.

I have not found any information about the history of vadret. The -et may be a diminutive suffix, or a relic of some meaning that has now been lost, but is it naive to wonder whether the vadr- part is a relative of English water? You have to allow a certain slipperiness in the meanings once attached to these tokens. The evidence consists of seeming parallels between the tokens. If you accept the parallelism, you may uncover evidence of thinking. In this case, the implied intellectual achievement is the recognition that ice and water are different aspects of the same thing. Somebody had to be the first to work this out.

Of course the tokens themselves are not unchanging. They gain and lose bits from time to time, which is why the linguistics experts are satisfied that water and Greek hudor, the ancestor of our prefix hydro-, are the same. For some reason speakers of Greek are not keen on the w sound.

The words kar, kees and ferner also seem to have no known history before the last few centuries, although ferner is interesting because it resembles firn, a German word for compacted snow, and perhaps fonna, Norwegian for an ice cap or snowfield.

Perhaps we can get somewhere by looking for the most basic idea. Latin glacies, ice, is traceable to a reconstructed Indo-European root *gel-, cold, freezing, with descendants in the Italic, Teutonic and possibly Slavic branches of Indo-European. English belongs to the Teutonic branch, and according to The American Heritage Dictionary of Indo-European Roots modern descendants of *gel- in English include chill, cool and cold itself.

When the root acquired its -k suffix, and what it signified, are unknown. Pokorny, in his monumental 1959 Indogermanisches etymologisches Wörterbuch, suggests that it was in fact -g and was simply an example of reduplication of the initial consonant. Pokorny also thinks that *glag became glacies under the influence of other Latin nouns such as facies, appearance, and acies, edge. Some say there is a connection with Greek galaktos, milk, explicable because milk and bubbly ice can resemble each other in colour. If this is correct, which word influenced the other is not clear, and anyway the independent status of Latin *gel- is demonstrated by gelu, frost, and gelidus, frosty, icy cold.

If we have not lost the track, the deepest layer of meaning in the word glacier is the idea of cold. It makes sense to me. Even Isaac Newton showed early promise as a glaciologist. The second sentence of his Mathematical Principles of Natural Philosophy, published in 1687, was about the compaction of snow. It is true that he then lost the track, for which we should be grateful because of the new path he opened up for later glaciologists. But we should also be grateful for all of the thinking that went on before Newton. Somebody had to be first to notice that you only get snow (Indo-European *sneigwh) and ice (Indo-European *eis) where it is cold.

"Global Sustainability - a Nobel Cause", is the title of a book that just has been published by Cambridge University Press. A number of nobel price winners and eminent scientists in sustainability research gathered to write up their thoughts on sustainability. This book certainly does not fulfill coherent scientific standards (peer-reviewed, novel insights) but is very promising in bringing some relatively simple but profound thoughts together. Also, as a clear pro: this book is freely available online, and individual chapters can be downloaded as pdf.

What, then, can we learn from this book? Let us start with Geoffrey West's observation that current scientific endeavours have, to a large degree, failed to come to grips with the essence of the long-term sustainability challenge: the pervasive interconnectedness and interdependency of energy, resource, environmental, ecological, economic, social and political systems (West, 2010; see also Creutzig and Kammen, 2009). My own contribution (Creutzig and Kammen, 2010) looks at the specific example of biofuels, rephrasing the by-now established view that a simple-minded picture of biofuels as zero-carbon sources of energy misses the point: (a) their carbon content can be not only significant but can even surpass that of conventional fuels, and (b) a number of additional sustainability issues, such as biodiversity loss, deforestation and food insecurity, threaten to diminish any feasible positive impact on the climate change front.

So what are the possibilities and challenges of a coarse-grained systemic approach that builds on, but also goes beyond, piecemeal investigations? Wolfgang Lucht calls for an enterprise to construct new mental images of the whole (or to take "a crude look at the whole", as Murray Gell-Mann puts it), founded in rational analysis and, equally important, in cultural production (Lucht, 2010). In fact, according to Lucht, a controlled transition in the interlinked social-environmental world system can be achieved only by making progress not just in the environmental domain, where the impacts have to be lessened, but also in the social domain, where the problems have their origin. This progress, building on rational-scientific insights (that transcend the techno-economic totalitarian aspects of enlightenment), could then enable a sustainable transition path as depicted in the figure below.

[Figure: sustainable transition path (taken from Lucht, 2010, partially based on Schellnhuber, 1999).]

The small problems left are (1) to fill these grand themes with content and (2) to integrate the findings into the "societal self constructions that dominate human processes" (Lucht, 2010). Regarding the first issue, my blogging colleague Carey King suggests looking at EROI (energy return on energy invested), noting that EROI is lower for renewables than for fossil fuels on human time scales, and that a future lower EROI implies a reduction in societal complexity (though this conjecture requires more thought). Geoffrey West points to the fundamental nature of time scales, explaining the supra-linear scaling of agglomerations and hence supra-exponential growth, which requires an ever-accelerating rate of innovation (not sustainable). These spotlights indicate the pressing need for a consistent growth theory that includes natural capital degradation (and appropriate resource-flow exploitation), agglomeration dynamics and structural change, a clear definition of a perspicuous welfare function for sustainability and, finally, a proposal for a non-catastrophic deceleration of the human socio-economic system.


References

Bettencourt, L. M. A., Lobo, J., Helbing, D., Kühnert, C. and West, G. B. (2007) Growth, innovation, scaling, and the pace of life in cities. Proceedings of the National Academy of Sciences of the United States of America, 104(17), 7301–7306.

Creutzig, F. and Kammen, D. (2009) The post-Copenhagen roadmap towards sustainability: differentiated geographic approaches, integrated over goals. Innovation, 4(4), 301–321.

Creutzig, F. and Kammen, D. (2010) Getting the carbon out of transportation fuels. In H. J. Schellnhuber, M. Molina, N. Stern, V. Huber and S. Kadner (Eds.), Global Sustainability - A Nobel Cause. Cambridge University Press, Cambridge, UK.

Schellnhuber, H. J. (1999) Earth system analysis and the second Copernican revolution. Nature 402, Suppl., C19–C23.

West, G. (2010) Integrated sustainability and the underlying threat of urbanization. In H. J. Schellnhuber, M. Molina, N. Stern, V. Huber and S. Kadner (Eds.), Global Sustainability - A Nobel Cause. Cambridge University Press, Cambridge, UK.

Wind power seems likely to remain the main new renewable energy source for the UK, given the large offshore and onshore resource and its relatively good economics. However, some lobby groups, like the Renewable Energy Foundation (REF), argue that we have placed too much emphasis on wind and should look more seriously at other renewable options. Actually, although the British Wind Energy Association (BWEA) sees wind supplying 30% of UK electricity in the decades ahead, it may agree. The BWEA no longer focuses just on wind – it has increasingly been looking to wave and tidal power, particularly tidal current turbines. It has just changed its name to "RenewableUK" (RUK) to reflect this.

It's not surprising that the BWEA/RUK has been keen to take wave and tidal power under its wing, as well as wind. They can all work together beneficially to help cope with the variability of each source. Wave energy is in effect stored/delayed wind energy and so is less sensitive to wind variations, while tides, though cyclic, are unrelated to wind.

A recent Redpoint scenario, produced for the BWEA, is the starting point for a study of the optimum balance between wind, wave and tidal. In particular, it looks at the extent to which wave and tidal power could reduce the grid-balancing costs associated with the use of variable renewables, and also reduce the "spillage" of excess wind, when there is too much wind power to be used on the grid. The study, The Benefits of marine technologies with a diversified renewable mix, suggests that, to get the best from the different time correlations of these sources, the optimum might be around a 70% wind and 30% wave/tidal-current mix or, if tidal range projects were included along with tidal current systems, a 60/40 wind to wave/tidal ratio. The former ratio could reduce the need for fossil-fuel backup plants by 2.15 GW, the latter by 2.3 GW; overall carbon savings could be increased by up to 6%, and wholesale costs reduced by up to 3.3%, since there would be less spillage of wind.

All of these options concern electricity production, mainly on the larger scale, whereas it can be argued that smaller-scale electricity generation, and also renewable heat production, are equally important in developing an optimal mix. The BWEA has taken an interest in micro wind, but otherwise it has mainly been another trade lobby, the Renewable Energy Association (REA), that has covered the microgeneration area (e.g. PV solar), along with biomass-based heat and power generation (e.g. micro CHP). One of the REA's main strengths has always been biomass/waste-related energy systems, including sewage gas, landfill gas and other sources of biogas, with new community-scaled anaerobic digestion (AD) plants being one of the latest growth areas. Along with others, it has pushed hard for a feed-in tariff for micro power systems, with some success – the government is introducing a Clean Energy Cashback scheme for small projects in April.

A year or so ago the BWEA and REA were discussing a merger, but that came to nothing. So now, while BWEA/RUK will focus on wind, wave and tidal, the REA will be left covering the rest – and possibly, increasingly, renewable heat options. That's the focus of the proposed new Renewable Heat Incentive, a feed-in tariff for heat that the government is planning to start next year. It's also something that the REF has focused on, in its belief that we need a more diversified approach.

The division of areas of technological interest by the trade lobbies is not absolute (e.g. REA still has a strong interest in wave and tidal power). And while some sort of rough division may make sense institutionally, it would be a shame if the potential for a more integrated approach was reduced – there is a lot of overlap. BWEA/RUK and REA have collaborated in the past. Hopefully that will continue. After all, what seems likely to emerge is a new energy system in which a range of electricity and heat producing renewable energy based technologies, large and small, are integrated together to balance heat and power needs via heating networks and smart power grids. For example, along the lines proposed by Neil Crumpton – as I reported in an earlier post.

The issue of integration, and of choosing the right mix of renewables, will no doubt be high on the agenda being addressed by Prof. Bernard Bulkin, ex-BP and ex-AEAT, who is now the "expert chair" of DECC's new Office for Renewable Energy Deployment. DECC is currently looking at what we might expect by 2050. It will be interesting to see what emerges in its "2050 Roadmap", which should be published in conjunction with the Budget in April.

www.renewable-UK.com
www.bwea.com
www.r-e-a.net
www.ref.org.uk

Climatic change in Antarctica is complicated. The northernmost part of the continent, the Antarctic Peninsula, is warming at extreme rates, while elsewhere the pattern is mixed and in some parts there appears to be little or no warming. Up to a point, we glaciologists don't mind whether Antarctica is warming or not. It is so cold that even an implausible temperature increase wouldn't come close enough to the melting point to affect the mass balance.

Indeed, there is a plausible argument that warming would make the mass balance more positive. The Antarctic interior is extremely dry because the capacity of the intensely cold atmosphere to deliver water vapour, and therefore snow, is minimal. Warmer air can carry more water vapour, so snowfall should increase in a warmer Antarctica.

The evolving mass balance of Antarctica is most interesting around the edges, though. Warmer ocean water is increasing melting at the bases of ice shelves and pulling grounded ice across the grounding lines at increasingly scary rates. A modest increase in interior snowfall would not make this picture less scary.

Ice-stream dynamics is not the only interesting thing about the periphery of Antarctica. Here, in the least cold latitudes, we observe what little melting does happen. Spread over the continent, it amounts to a few mm of water-equivalent loss per year, against gains by snowfall of about 150 mm/yr. Losses by discharge across the grounding line are much greater. But melting, if negligible in the big picture, is still interesting.

In a recent paper, Tedesco and Monaghan update a standard measure of melt intensity in Antarctica, the so-called melting index. They watch the ice sheet's emission at microwave wavelengths (8 to 16 mm) and exploit one of the most useful radiative attributes of water. At these wavelengths, the emissivity of frozen water is low, and as conventionally presented in imagery it looks bright, but when it melts its emissivity rises dramatically and it looks black. An intermittently wet snow surface flickers between bright and dark, and we can keep track of melting by noting, in twice-daily overpasses by the imaging satellites, whether the image pixels are bright (cold) or black (warm).

The melting index, summed over a glacierized region for a span of time, is measured in square-kilometre-days, an odd-sounding unit but one that captures what we want to know. For each pixel it is just the number of days on which the pixel was black times the area of the pixel. For the whole region it is the sum of these pixel counts.
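In code, the bookkeeping is simple; a minimal sketch with a made-up 2 x 2 pixel region observed for four days (the pixel area is hypothetical):

    import numpy as np

    # melted[d, p] is True when pixel p looked "black" (wet) on day d.
    melted = np.array([
        [True,  False, False, False],
        [True,  True,  False, False],
        [False, True,  False, False],
        [False, False, False, False],
    ])
    pixel_area_km2 = np.full(4, 625.0)  # hypothetical 25 km x 25 km pixels

    # Melting index: melt days per pixel times pixel area, summed over pixels.
    melt_days = melted.sum(axis=0)                        # [2, 2, 0, 0]
    melting_index = float((melt_days * pixel_area_km2).sum())
    print(melting_index)                                  # 2500.0 km^2 days

    # Melt extent: total area that melted on at least one day.
    melt_extent = float(pixel_area_km2[melted.any(axis=0)].sum())
    print(melt_extent)                                    # 1250.0 km^2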

The Antarctic melting index averaged about 35 million km2 days per year (October to September, to be sure of keeping the austral summer months together) between 1980 and 2008. Here comes the intriguing feature: in 2009 it was only 17.8 million km2 days, which is not only a record low but also continues a trend towards smaller annual indices that began in 2005. The melt extent (the area experiencing at least one day of melting) was the second lowest on record, reaching only about half the long-term average of 1.3 million km2.

Tedesco and Monaghan account for this oddity in terms of slow organized variability in how the atmosphere behaves. Two patterns of multi-annual variation in the circulation of the southern atmosphere, the Southern Oscillation and the Southern Annular Mode, together correlate rather well with the melting index. But the authors acknowledge that the correlation breaks down in some Antarctic regions, and that the common variance does not point to a clear-cut physical explanation. (Translation: we don't understand what is happening.)

Antarctica is a happy hunting ground for climate denialists, but they need to be ignored because they are on a wild goose chase. In the first place, anomalous patterns of temperature change haven't stopped melting rates from accelerating, and ice shelves from disintegrating, in the warmest part of the continent. Second, global warming is global. Regional non-warming, and even regional cooling, don't invalidate the main conclusion. The fact that we don't understand why Antarctica is anomalous doesn't invalidate it either. Finally, when it comes to Antarctic change it's the ocean that we need to worry about. From the glaciological standpoint, warmer water is the problem, not warmer air.

The economic struggles since mid-2008 are bringing out factions that highlight both the uncertainty of the future and ignorance of how the past has led us to where we are today. In the US, we have the conservative "Tea Party" movement on the right, complaining about excessive government spending, and the liberal "anti-banking" faction on the left, fed up with the fat cats on Wall Street skimming too much off the top. Both sides are correct in recognizing that large organizations and bureaucracies (e.g. government and banks) are having a harder time coping with the current economic and social problems.

What has unfortunately been quite absent from most of the political discussions about how to get the economy "back on track" is the true role of energy resources and technologies. With all of the talk in the United States about the need to "connect the dots" for the "War on Terrorism", what we really need to do is accept the way the energy and economic dots are connected in our modern industrial society.

By taking the following factors into account and enhancing our knowledge of how we can and cannot affect these indicators, we will "connect the dots" on our future as well as possible:

  • (1) Jevons' Paradox states that increased efficiency in the use of resources (in this case energy resources), achieved through technology and structural change, increases total resource consumption.
  • (a) Policy point: if we target increased efficiency, we can expect only to delay environmental problems.
  • (2) The energy return on energy invested (EROI) of the combination of energy resources, renewable and fossil, together with the technology that converts those resources into services, dictates the level of complexity attainable by society (a minimal numerical sketch of EROI follows this list).
  • (a) Policy point: society seems to have reached a level of complexity in the last 1–3 decades such that:
  • (3) the EROI of energy services has been extremely high with the use of fossil fuels, but EROI will eventually fall to a value at which it is equal for fossil and renewable resources. That time of EROI equality will mark a turning point in human civilization.
  • (4) The human species has now grown to such a size that it is capable of affecting the environment on a global scale, as opposed to the very localized impacts possible before the industrial revolution.
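As a minimal numerical sketch of the EROI concept (the resource values here are hypothetical):

    # EROI = energy delivered to society / energy invested to obtain it.
    def eroi(energy_delivered, energy_invested):
        # Both arguments in the same unit (e.g. joules), so the ratio is unitless.
        return energy_delivered / energy_invested

    print(eroi(50.0, 1.0))   # 50.0 -> large net surplus, room for complexity
    print(eroi(10.0, 1.0))   # 10.0 -> smaller surplus beyond energy supply itself
    print(eroi(1.0, 1.0))    # 1.0  -> break-even: no net energy for society at all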

The connecting of the dots goes as follows:

  • (1) Humans organized into agrarian societies, and this was beneficial because it raised the EROI of farming, where the energy produced was the energy embodied in food, not primary energy for operating machinery. The invention of tools and the use of beasts of burden (horses, oxen, etc.) also enhanced human EROI (by reducing the amount of human energy required to grow food for human consumption).
  • (2) The discovery of fossil fuels and subsequent technological change to enable further exploitation of fossil fuels led to the industrial revolution and the capabilities of production and economy in our present industrialized society.
  • (3) Resource constraints via any combination of technical, physical, economic, and political factors act as a driver to increase efficiency in the use of energy resources, but there are thermodynamic limits.
  • (a) For example, the Arab oil embargoes of the 1970s drove up the price of oil, which in turn drove the US and Europe to increase the fuel efficiency of vehicles to get the same service (moving passengers and cargo from point A to point B) with less fuel, or energy. Energy efficiency has increased since the 1970s, but the rate of energy consumption changed from exponential growth to linear growth, and US economic growth also slowed compared with the earlier post-World War II rates.
  • (4) Today, technological change, in terms of increased energy efficiency and high EROI, has not kept pace with what is needed to enable economic growth equal to that of the pre-2000 years, and consequently the top of the economic food chain has decided to hoard recent profits rather than distribute them to the middle and lower classes. This is evidenced by the widening income gap between the top and the bottom.
  • (5) The inherently lower EROI of renewable resources will not enable the same level of economic production and societal complexity as that provided by higher-EROI fossil fuels. This is because renewable technologies are based upon current flows of energy (e.g. sunlight, wind, waves), whereas fossil fuels are based upon stocks of energy stored over hundreds of millions of years.

To contemplate the final point above, consider that the Earth stored the renewable energy of the Sun (in the form of biomass) over a period on the order of 100 million years, and we are now consuming that energy on a timescale of hundreds of years. What humans learn, and choose to practise, during this century will dictate the type of societies that are even possible after peak fossil-fuel production.

Most people are surprised by how quiet modern wind turbines are when they visit a wind farm. Mechanical noise is usually minimal, even right up close, and the aerodynamic blade noise is often less than the noise of wind in any nearby trees or bushes. However, at some locations problems have still been reported.

An international panel of experts convened by the American and Canadian Wind Energy Associations recently released a report based on a review of a large body of scientific literature on sound and health effects with regard to the sound produced by wind turbines. After extensive review, analysis and discussion, the panel concluded that sounds or vibrations emitted from wind turbines have no adverse effect on human health – an issue that was recently given new prominence by a US report claiming that physiological damage could be caused by low-frequency sound from wind turbines (see my earlier blog).

The new review states:

  • There is no evidence that the audible or sub-audible sounds emitted by wind turbines have any direct adverse physiological effects.
  • The ground-borne vibrations from wind turbines are too weak to be detected by, or to affect, humans.
  • The sounds emitted by wind turbines are not unique. There is no reason to believe, based on the levels and frequencies of the sounds and the panel's experience with sound exposures in occupational settings, that the sounds from wind turbines could plausibly have direct adverse health consequences.

Even so, there are still reports that aerodynamic "swishing" sounds from wind farms are an issue at some sites at night, disturbing some people's sleep. The Telegraph quoted a nurse, Jane Davis, who says that she was forced to move from her home in Lincolnshire after eight wind turbines were built in 2006. "All I know is the amount of health problems people have suffered," she said – problems including sleep deprivation, tinnitus, depression and psychological stress – which "seem to be excessive". She added: "These things have devastated my life."

The AWEA/CanWEA report does say that, although "work with low frequencies has shown that an audible low frequency sound does not normally become objectionable until it is 10 to 15 dB above hearing threshold", an exception is "when a listener has developed hostility to the noise source, so that annoyance commences at a lower level". The panel notes that "a major cause of concern about wind turbine sound is its fluctuating nature; some may find this sound annoying, a reaction that depends primarily on personal characteristics as opposed to the intensity of the sound level" and reports that a "study of more than 2000 people suggested that personality traits play a role in the perception of annoyance to environmental issues, such as sound".

However, they add, a bit abruptly perhaps, that while "some people may be annoyed at the presence of sound from wind turbines", "annoyance is not a pathological entity". It's certainly true that once a noise gets annoying, however low the level (e.g. a dripping tap), it can become intolerable.

The report concludes that, though "there is no evidence that sound at the levels from wind turbines as heard in residences will cause direct physiological effects…a small number of sensitive people, however, may be stressed by the sound and suffer sleep disturbances".

That verdict may not please sufferers! Of course, you could say that road traffic and aircraft landing and taking off can lead to much more noise annoyance for a lot more people, as can city living. But should we be adding more stress? Rural areas are, after all, usually quieter, which is one of their attractions. Or, assuming it cannot be resolved by careful wind-turbine location or modified operational patterns, is that just a cost that has to be borne by a small minority, who might in any case find a conventional power plant near them significantly less attractive?

The government's view certainly seems unchanged. NewEnergyFocus reported that in January energy minister David Kidney dismissed claims that the permitted night-time noise limit for onshore wind turbines is too high. He said that the 43 dB night-time limit in the ETSU-R-97 guidance was derived from the sleep-disturbance criteria in Planning Policy Guidance 24, with an addition to allow for an open window. ETSU-R-97, he said, gave indicative noise levels thought to offer "a reasonable degree of protection to wind farm neighbours", and there was no evidence that they needed to be reviewed. He added that residents' comfort had to be balanced with the needs of wind-farm developers, so the guidelines should not place unreasonable restrictions on wind-farm development or impose undue costs and administrative burdens on developers or local authorities. He concluded: "We have no robust new evidence to suggest that the current guidance is not achieving its aim."

AWEA/CanWEA report

It seems that if you want a bunch of comments about your blog you have to say something controversial. At least, that is what happened to this blog last week. I hope that we can get back sooner rather than later to tranquil consideration of the pleasures of studying glaciers, but in response to some of the comments there is more to be said about "denialists".

I will stick with that term, which some do not like, because it emphasizes a valuable distinction between denial and doubt. I was not talking about doubters, and in fact as an antidote to breach of trust I have lately been plugging the wisdom of Bertrand Russell as encapsulated in his first commandment: "Do not feel absolutely certain of anything". There is a world of difference between denial on the one hand and healthy scepticism, or even just asking questions because you don't know what to think, on the other.

There is also a world of difference between genuine ignorance and culpable ignorance. It is a capital mistake to suppose that I know a lot more than you do. I remember a long-ago field trip to look at glacial sediments during which we managed to get, er, lost. I was sitting at the front of the bus and by a fluke managed to get us unlost. One of the students said, "How did you do that?!", at which point the bus driver interjected: "That's why you're a student and he's a professor." True, but superficial. There is an infinity of subjects about which I am genuinely ignorant.

I do know a superficially greater amount about glaciers than you (probably), which is why I am the blogger and you are the reader, but that is not important. One of the points I tried to make last time was that the denialists who commented on the news stories about the Himalayan-glacier fiasco are culpably ignorant.

I admit that "trust me, I'm a scientist" makes a lousy sales pitch, but nearly all of the denialist comments that I was deploring boil down to "trust me, even though I'm a dope". Seen from one angle, what I have just put down is a terrible thing for a scientist and university professor to say. It is rude and probably hurtful. It breaks elementary rules about how to make conversations work. (Don't rile your adversary. Give him a way out.) So it cannot possibly advance the discussion. Or can it? I have been worrying a good deal about this recently.

First of all, I am not selling anything. My scientific contributions about glaciers are just contributions, aimed at pushing the frontier of knowledge and understanding forward by a little bit. They are intended to be read critically and accepted or rejected according to the best judgement of the reader. Second, and more fundamentally, there is an awkward attribute of "trust me, even though I'm a dope" that I can't shake out of my mind, namely that whether or not it is helpful or kind or sensible it is a true paraphrase of the denialist comments.

To put it as diplomatically as I can, there is a problem at the core of the debate about climatic change, and the problem is the uniformly low calibre of the arguments on one side. The arguments on the other side vary from pretty good to compelling. There are loopy environmentalists, of course, but none of them contributed to the newspaper discussions I am talking about. I don't know how to solve this problem, but winking at it doesn't make it go away.

One thing about glaciers that doesn't get a lot of attention is that they are independent indicators of the state of the atmosphere. The river of reasoning has the spectral absorption bands of carbon dioxide at one of its sources, but further downstream it is braided. The information from weather stations is one of the channels, but the information from glaciers is a different channel. Even if, against all probability, the denialists were to succeed in knocking out my colleagues at the Climatic Research Unit at the University of East Anglia, they would still have succeeded at most in blocking one of the channels temporarily. The carbon dioxide molecules would still be absorbing and re-emitting infrared radiation. The consequent feedbacks would still be at work. The atmosphere would still be getting warmer. Most awkwardly for the denialist cause, the glaciers would still be shedding mass at an accelerating rate.
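To put a rough number on that last chain of physics (a standard back-of-envelope figure of mine, not something from the newspaper discussions): the simplified expression for the extra radiative forcing from carbon dioxide is deltaF = 5.35*ln(C/C0) in W/m^2 (Myhre et al. 1998). Taking C0 = 280 ppm for the pre-industrial concentration and C = roughly 390 ppm for today gives deltaF ≈ 5.35*ln(390/280) ≈ 1.8 W/m^2 of extra heating, before any feedbacks are counted. No weather stations are needed to arrive at that figure.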

If you forget for a moment about the weather-station channel and about the carbon dioxide molecules at the headwaters of the stream, and try to explain why the glaciers are shrinking, and shrinking faster now than formerly, you come up against the considered judgement of the scientific community. Science has agreed that you can't answer these questions satisfactorily if you forget about the carbon dioxide molecules. And even if you persist in forgetting, you still have no coherent basis for tackling the question "What should we do about this?". The denialist answer is "nothing", but that brings me to my last point.

I want to emphasize that my comment-eliciting remarks last time were a direct criticism of those members of the public who can be described accurately as denialist, as opposed to sceptical or doubting. I haven't got a satisfactory answer for Clif Carl's poser about how to have his doubts addressed or for Steve Carson's thoughtful analysis of how best to bring travellers back from the borders of denial. I am not attacking the shadowy "vested interests" that are often blamed for climatic misinformation. Nor am I saying that the denialist citizens who comment on the newspaper articles are the dupes or stooges of these vested interests – which would be truly insulting. I am saying that we have to do something about improving the calibre of the debate, and I have no idea what. When it comes to the study of how the public makes up its mind, I am just another member of the public.

What I am saying seems to lead us to the absurdity of requiring ordinary citizens to spend their evenings and weekends boning up on glaciology, spectroscopy and a long list of other special subjects. The alternative seems to be for them to trust somebody, to which, as we have seen, there are objections. That is why I prefer to write about jam jars, baskets of eggs, fiords that turn out to be astonishing and stuff like that. Boning up on glaciers can be a lot more fun than it sounds like.