
IOP A community website from IOP Publishing

October 2010 Archives

Scotland the brave?


Scottish First Minister Alex Salmond has announced that Scotland's renewable electricity target for 2020 is being raised from 50% to 80% of electricity consumption, putting Scotland well in the lead in the EU. He's also said that 100% by 2025 was possible.

Scotland's existing 50% target was established in 2007 and, aided by a rapid expansion in wind power, the country is on course to exceed its interim target of 31% in 2011. According to the Scottish government, much higher levels of renewables could be deployed by 2020 with little change to Scotland's current policy, planning or regulation framework. A separate study commissioned by industry body Scottish Renewables reached similar conclusions: 123% was possible! And Scottish Green Party co-leader Patrick Harvie even called for a 100% renewable target, 'perhaps even before 2020'.

The Scottish Renewables' report notes that, since the original 50% target was implemented in 2007, industry and government have announced agreements for 10.6GW of offshore wind development, commitments to 1.2GW of wave and tidal power in the Pentland Firth and Orkney Waters, 1.2GW of additional potential hydro capacity and proposals for over 500MW of biomass heat and power. It says that, together, even a small proportion of these plans would add significant capacity to Scotland's generation mix, changing the scale of development that Scotland can achieve over the next decade and beyond.

www.scottishrenewables.com/MultimediaGallery/a7bd4f4f-efb2-477d-9576-26a0dd9a5dea.pdf

At present, Scotland has 7 GW of renewables installed, under construction or consented. Salmond claimed that, given the scale of lease agreements now in place to develop offshore wind, wave and tidal projects over the next decade, 'it is clear that we can well exceed the existing 50% target by 2020.' He may be right, but 80% by 2020 is stunningly ambitious. Even the Centre for Alternative Technology only looked to 2030 in its 'Zero Carbon Britain' scenario, and that was pushing it very hard. While visionary scenarios can inspire and motivate people to try harder, they have to be at least in principle credible. Unless there is a radical deployment and infrastructure-development programme, beyond anything so far discussed, getting to 80% by 2030 might be a bit more realistic for Scotland.

They do seem to be trying though, with their own versions of support schemes that are much more ambitious than those so far introduced by the Whitehall government (e.g. under the Renewables Obligation Scotland they offer 5 ROCs/MWh for wave energy projects and 3 ROCs/MWh for tidal projects, compared with the 2 ROCs/MWh offered by the UK-wide RO scheme). Scotland also has a direct grant-support system for marine renewables, which has provided £13m for wave and tidal projects so far, plus a £10m Saltire Prize for marine renewables.
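To get a feel for what the enhanced Scottish banding means in cash terms, here is a rough sketch. The banding levels are taken from the figures above, but the device size, capacity factor and ROC value are illustrative assumptions only (traded ROC prices vary):

```python
HOURS_PER_YEAR = 8760

def annual_roc_revenue(capacity_mw, capacity_factor, rocs_per_mwh, roc_value_gbp):
    """Annual ROC revenue for a generator: output (MWh) x ROCs/MWh x GBP/ROC."""
    output_mwh = capacity_mw * capacity_factor * HOURS_PER_YEAR
    return output_mwh * rocs_per_mwh * roc_value_gbp

# Hypothetical 1 MW wave device at an assumed 30% capacity factor and an
# assumed ROC value of 40 GBP:
ros = annual_roc_revenue(1, 0.30, 5, 40)   # 5 ROCs/MWh under the ROS
uk = annual_roc_revenue(1, 0.30, 2, 40)    # 2 ROCs/MWh under the UK-wide RO
print(f"ROS: {ros:,.0f} GBP/yr; UK-wide RO: {uk:,.0f} GBP/yr")
```

On these assumptions the Scottish banding is worth two-and-a-half times the UK-wide support per unit of output, which is simply the ratio of the bands.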

However, this may not be enough, an implication that emerges from a new report by Geoff Wood from Dundee University, which looks critically at the way the Scottish government has adjusted the Renewable Obligation Scotland. 'Renewable Energy Policy in Scotland: An Analysis of the Impact of Internal and External Failures on Renewable Energy Deployment Targets to 2020' is available at https://www.buyat.dundee.ac.uk/?compid=2.

Nevertheless, there is no denying that the Scottish government is pressing ahead hard. It has outlined its plans for achieving its ambitious target of reducing emissions by 42% by 2020, after a draft order to set annual emissions targets for 2010–22 was laid before parliament. The targets proposed in the draft order take account of advice from the Committee on Climate Change and the deliberations of a cross-party working group over the summer. The annual targets for 2011–22 start at 0.5% for 2011 and end with 3% for 2022, peaking at 9.9% in 2013 – going further than those recommended by the Committee. Scottish climate-change minister Stewart Stevenson said: 'Scotland has the most ambitious climate-change legislation anywhere in the world and these annual targets set a clear framework for achieving our 2020 target'.

It's certainly bold stuff. The SNP is clearly being courageous – some might say adventurist. But even if their brave targets are not met, Scotland will still be doing more than many countries. And their non-nuclear approach does seem to be popular. A recent poll for the Scotsman newspaper found that only 18% of Scots supported new nuclear construction: http://thescotsman.scotsman.com/news/Only-1837-of-Scots-say.6551329.jp.

You do have to be careful with poll data. An earlier YouGov poll for EDF found that 47% of Scots supported replacing existing nuclear plants when they closed. But it also found that 80% backed offshore wind farms and 69% were in favour of onshore turbines. It looks like they will get what they want in that area.


This past weekend the Public Broadcasting Service (PBS) series Nature aired an episode, "A Murder of Crows," focused on research into crow behaviour. Because crows have a brain size, relative to their bodies, closer to that of primates than of most other animals, they show high levels of intelligence. That intelligence makes them resilient and capable of multi-step problem solving. As noted in the documentary, crows are social animals that play, mourn their dead and pick monogamous mates for life.

One interesting story from the documentary with an energy slant is that crows in Japan use wire clothes hangers to build nests. Given the scarcity of good trees in cities, crows nest in many places, often among power lines and transformers. Put wire hangers together with electricity and you have an obvious recipe for disaster, both for the crows and for electricity reliability. This has become such a problem that one utility has created a dedicated team of employees who travel the power lines in search of crow nests to clear. Nest-clearing is not the kind of task one imagines when estimating the operation and maintenance costs of an electric grid, but it shows nature's resourcefulness in causing problems for humans. The term "smart grid" is turned on its head here: a smart animal is making use of our trash to rewire the grid for us.

Dirt on glaciers


Dirt is more or less ubiquitous on glaciers. Even when there isn't much, it makes the ice darker, a climatically important fact that can also be put to intriguing uses. Sometimes the dirt gives us striking visual effects in the form of multiple medial moraines. When there is enough dirt to cover the surface nearly completely, we call the glacier a debris-covered glacier.

The glacier isn't necessarily buried entirely. In the accumulation zone, fresh snow will mask older debris, and usually not all of the ablation zone, where all of the snow melts, is debris-covered. Indeed the debris typically covers only a few to perhaps 20 percent of the surface, at the lowest elevations. But this part is critical, because it is where we expect to observe most of the mass loss that is supposed to balance the mass gain in the accumulation zone.

Debris-covered glaciers are a nuisance from a number of standpoints. Probably the biggest nuisance is that we don't know how to judge the effect of the debris on the mass balance. Conventional wisdom has it that thin debris increases, and thick debris reduces, the melting rate of the underlying ice. The debris, being darker than the ice, absorbs more of the incoming radiation, but the more debris there is the less radiative heat reaches the ice.

In a recent laboratory study Natalya Reznichenko and co-authors reinforced the conventional wisdom. In an interesting twist, they showed that the daily cycle of radiation makes a big difference. If you irradiate a sample of debris-covered ice continuously, then after a delay of some hours, increasing with the thickness of the debris, you get the same rate of meltwater production as if there were no debris at all. But if you cycle your lamps with a 12-hour period, to mimic day and night, the underlying ice never gets a steady input of heat. The night undoes much of the work accomplished during the day.

The melt rate is slower beneath debris more than 50 mm thick, about half a handsbreadth, and faster beneath thinner debris. This is the "critical thickness", at which the debris has no net impact on the melting rate. In an elegant graph summarizing the field measurements, the Reznichenko study shows that the critical thickness varies with altitude or equivalent latitude. On higher-latitude glaciers, or equivalently at lower altitude, the critical thickness can be as low as 20 mm, but it can exceed 100 mm at low latitudes or high altitudes.
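The shape described above – enhanced melt under thin debris, suppressed melt under thick debris, and no net effect at the critical thickness – can be sketched as a toy curve. The functional form and constants below are illustrative assumptions only, not a fit to the Reznichenko measurements:

```python
import math

def relative_melt(h_mm, h_crit_mm=50.0, c=0.02):
    """Toy Ostrem-type melt curve, normalized so bare ice (h=0) melts
    at rate 1.0. Thin debris darkens the surface and enhances melting;
    thick debris insulates and suppresses it; the curve returns to 1.0
    at the critical thickness. Purely a sketch, not fitted to data.
    """
    # Choose the linear coefficient so the curve crosses 1.0 exactly
    # at h_crit_mm: (1 + k*h_crit) * exp(-c*h_crit) = 1
    k = (math.exp(c * h_crit_mm) - 1.0) / h_crit_mm
    return (1.0 + k * h_mm) * math.exp(-c * h_mm)

for h in (0, 20, 50, 100):
    print(f"{h:3d} mm debris -> {relative_melt(h):.2f} x bare-ice melt rate")
```

With these made-up constants the curve peaks at around 20 mm of debris and falls well below the bare-ice rate by 100 mm, mimicking the qualitative behaviour the field measurements show.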

So the real world is a bit more complicated than the laboratory when it comes to the effect of real debris on real glaciers. What about real-world melting rates? There are distinct signs that real debris-covered glaciers are even harder to understand than ideal ones.

For example, Akiko Sakai and co-authors pointed out that on debris-covered glaciers in Nepal there are lots of small meltwater ponds. One such pond absorbed seven times more heat than the debris cover as a whole, accelerating the melt rate and producing a knock-on effect because the departing warm meltwater enlarged its own conduit, a phenomenon known as "internal ablation". And on a debris-covered glacier in the Tien Shan of central Asia, Han Haidong and co-authors showed that exposed ice cliffs, accounting for only 1% of the debris-covered area, produced about 7% of all the meltwater generated across the debris-covered area.

It would be nice if we knew the fractional debris cover of every glacier, and even nicer if we could say by how much the mass balance of each glacier is altered by its debris cover. This is a pipe dream for the moment, but the careful small-scale measurements on debris-covered ice seem to suggest that ignoring the debris, as we have to do now when estimating mass balance on regional and larger scales, could overestimate the mass loss quite seriously.

The trouble is that although we have very few measurements of the whole-glacier mass balance of debris-covered glaciers, they seem not to be consistent with the detailed laboratory and field studies. One of the most careful regional-scale measurements, by Etienne Berthier and co-authors, found that Bara Shigri Glacier, in the Lahul region of the western Himalaya, thinned by 1.3 m/yr over five years, a rate significantly faster than the 0.8 m/yr determined by photogrammetry for all the glaciers in the region.

"Bara Shigri" means "great debris-covered glacier" in Hindi. Clearly we don't know enough about the behaviour of debris-covered glaciers, great or small, and as there are lots of them, and lots of people depend on them, they need continued study.

A better future



In September, Dr Paul Hatchwell, a consultant with ENDS, wrote a useful overview of UK energy options in an article for The Independent supported by Shell: www.independent.co.uk/life-style/newenergyfuture/britains-energy-challenge-meeting-energy-generation-and-carbon-emission-targets-2068598.html

It laid out the problems as he saw them – chiefly, on the supply side, that nuclear was likely to be limited, Carbon Capture and Storage (CCS) was uncertain, and renewables were not expanding fast enough – and set this against the government's evident belief that all was basically well, with, for example, renewables on target to reach 30% of electricity by 2020.

He didn't, however, say what could be done to make sure this happened, or to do better than 30%, as many feel we must and could, although he did mention the supergrid idea.

But the recent 'Declaration of Support for an efficient renewable energy future', signed by several leading international academics, laid out a basic framework:

'We need to plan, optimize, and implement the renewable and efficiency revolution with all deliberate speed. We must improve the efficiency of our existing processes. This includes reducing waste and taking advantage of thermodynamic efficiencies through cogeneration, district heating and cooling, and heat pumps.

Renewable electrification means we must intelligently integrate and optimize decentralized and distributed energy resources, linking them over long distances with energy storage on a continental and even intercontinental scale if needed. Whatever we do must be subject to analysis of triple bottom line consequences – economic, ecological, and social, as subject to democratic control.

We must adopt, as needed, new market rules and regulations, such as proper utility and manufacturer incentives for efficiency and distributed generation, feed-in tariffs, high renewable portfolio standards, provision of sufficient loan and investment capital, and varied opportunities for investment.

We might use natural gas as a diminishing transition fuel for district heating and cooling, cogeneration, and transport applications while renewable electrification and fuels are not yet fully available.

We need to be cognizant of the security aspects and consider all possible advantages of the use and decentralized control of modern end-use devices, renewable generators, and cogenerators'.

See: www.policyinnovations.org/ideas/innovations/data/000170

To fill the gaps in terms of specific policies, here are some suggestions for the UK context that have emerged recently, focusing on electricity:

  1. Impose a 'wind-fall' tax on oil companies to fund major energy efficiency programmes across the board.

  2. Impose a 'wind-fall' tax on electricity supply companies to fund the rapid upgrade of the power grid, including more interconnections with the rest of the EU – the supergrid.

  3. Introduce a Cross-Feed Tariff to support the flow of green power between the UK and the rest of the EU on the supergrid.

  4. Introduce a Feed-in Tariff (FiT) to support wave power and tidal stream turbine projects.

  5. Expand the existing FiT to cover larger community projects and local biomass-waste fired Combined Heat and Power (CHP), and push ahead with the RHI.

  6. Re-direct the various nuclear subsidies and R&D programmes to support rapid development of new offshore wind technologies, like floating wind turbines.

A new White Paper on Energy is due out next year, following the revised National Policy Statement on Energy. Much of this will focus on nuclear, but there may also be opportunities for more progressive commitments as outlined above, e.g. a revised FiT.

Similar lists are emerging for 'green heat' options – some looking to solar and biomass/biogas-fed micro-CHP at the domestic level, but others suggesting a new focus on local district-heating grids fed increasingly by medium/large biogas-fired CHP or even large solar-collector arrays and heat stores, as is being done widely on the continent – see my earlier blog: http://environmentalresearchweb.org/blog/2010/10/solar-power-brightens-up.html.

Some of the proposals above are contentious, and most would increase energy costs to consumers, at least in the short term. But then so would just about any measure to deal with climate change, and the proposals above are all targeted at specific technological and/or sectoral goals, with good prospects for costs to fall as the technology develops.

Another approach is an across-the-board carbon tax of the type attempted unsuccessfully earlier this year by France. That relied on market mechanisms to steer the choice of technology in response to the revised costs of energy – a short-term market approach, focusing on the currently cheapest low-carbon options.

The UK government's proposal for a guaranteed 'floor price' for carbon to boost the EU Emissions Trading System would make renewables and/or nuclear look more attractive economically. But otherwise it's untargeted, again with a short-term focus. Moreover, if it worked to raise the (currently very low) value of carbon, consumer costs would rise, and taxpayers' money might also have to be provided to maintain the high carbon price if there was a market downturn.

It might be easier, although still untargeted, just to tighten the carbon cap set for the next round of EU ETS. But as happened last time, that would lead to conflicts with, and special pleading from, countries with currently high levels of fossil emissions. They might for example ask for continuation of the system where some carbon permits are offered free rather than being auctioned.

We certainly need to push harder to get green energy technology deployed, but as can be seen, there are disagreements about how best to do this. While market-led approaches have been adopted so far, and are still promoted, a more targeted approach, using Feed-in Tariffs coupled with hypothecated special taxes, might be a more effective way to raise and direct the money that will be needed.

You can find all sorts of things in glaciers if you look hard enough. Among the oddities that come to mind are volcanic sulphur, soot, and bacteria and fungi.

But now Andrei Kurbatov and co-authors, writing in the Journal of Glaciology about fieldwork on the western margin of the Greenland Ice Sheet, have found something really surprising: diamonds. Don't get too excited. If you look in just the right place you can expect to find trillions of them per litre of melted ice, but these are nanodiamonds, the biggest only a few hundred billionths of a metre across. There is no danger of prices collapsing in the international diamond market.

However there is definitely a likelihood of a diamond rush spearheaded by scientists. The stimulus for this work was the discovery of nanodiamonds in ordinary sediments from several sites across North America. At all of the sites, much of the diamond is actually lonsdaleite, and there are other indications that point to the material being non-terrestrial. Lonsdaleite is elemental carbon that has crystallized in the hexagonal system, so it is a polymorph of the more familiar diamond belonging to the cubic system. Cubic diamond forms at temperatures and pressures appropriate to depths greater than about 150 km beneath the Earth's surface. To make lonsdaleite it appears that you need much greater temperatures and pressures even than that. At any rate, it is known only from meteorites and impact craters. We conclude that either it arrived with the meteorite or it formed during the impact.

The next exciting thing about these non-terrestrial diamonds is their age. They are found exactly at the base of the Younger Dryas cold snap, dating to about 11,000 BC. You could not ask for a sharper spike in abundance than the one shown in the Kurbatov paper, and it matches the evidence from elsewhere perfectly.

The first synthesis of this evidence showed that there are non-terrestrial "event markers" all across North America at the base of the Younger Dryas. It was a bold, if partly conjectural, synthesis, linking the impact not just to the cold snap but to the extinction of the mammoths, the disappearance of the palaeoamerican Clovis culture and the formation of the Carolina Bays.

The Carolina Bays can be seen in the atlas as the multiple arcs that form the coastline of the two Carolinas, but inland from the coast there are also numerous lakes of elliptical outline. They might be just quirks of Nature, but they would also be consistent with the putative Younger-Dryas impact having been in fact an airburst, followed by the impact of multiple smaller fragments.

We are now unambiguously in the realm of conjecture, but the Carolina Bays have been a geographical puzzle for centuries. Perhaps they are about to turn out to be not just a puzzle, as happened with the jigsaw fit of western Africa and eastern South America. Whatever their status, we can expect an energetic search over the next few years for the locality of the impact or airburst at the base of the Younger Dryas. We can also expect energetic discussions about its efficiency as a trigger for cooling.

The Greenland nanodiamonds are thus a small part of what is beginning to look like a much bigger picture, but they also represent a glaciological tour de force. I said that the Kurbatov spike was found "exactly" at the base of the Younger Dryas. So it was, but not in an ice core, as you might have guessed. The authors went to the ice exposed in the ablation zone about 1 km in from the margin of the ice sheet. All of it must have travelled some hundreds of kilometres from where it fell as snow in the ice-sheet interior. The text is rather coy here: "One of the authors (Jorgen Steffensen) ... identified a candidate for the Younger-Dryas-age section based on visual inspection of dust stratigraphy."

The atmosphere is slightly dustier when it is colder, and the dust makes ice that accumulated during cold episodes greyer. The nanodiamonds were found at the base of a band of greyish ice bounded above and below by whiter ice. There is therefore a sense in which the "exact" location of the base of the Younger Dryas has only been pinpointed circumstantially. But I doubt that there will be much questioning of the identification, and the success of the search is not less astonishing and gratifying for being due to use of the human eyeball as a search tool.

Solar power is often seen as nice, but a bit marginal, in chilly northern countries. The reality is different. There is now over 20 GW-thermal of solar heating capacity in the EU, with much of it in northern countries like Germany, Austria and Denmark.

A lot of it is rooftop domestic-scale, but there is also now a growing contribution from large-scale systems. For example, solar district heating is now moving ahead around Europe. The district-heating network in the Austrian city of Graz has 6.5 MW of solar thermal capacity. And further north, Danish collector manufacturer Arcon Solvarme has installed a 10,073 sq. m installation in the village of Gram in the region of Syddanmark, and an 8,019 sq. m system in the village of Strandby in North Jutland, which meets 18% of the average energy demand for heating and domestic hot water of 830 households. A third solar thermal system, of 10,000 sq. m, has been installed in the town of Broager in the south of Denmark. It's claimed that schemes like this can achieve payback times of 7–9 years. See www.solarcap.dk and www.arcon.dk.

Germany also has some solar/DH projects. Nine research and demonstration plants have been built since 1996, including some with inter-seasonal heat stores. Depending on their size, they can meet 40–70% of the annual heating needs of a residential estate. In Friedrichshafen, a residential estate with some 600 housing units has a small-scale solar district-heating system: www.managenergy.net/products/R430.htm

PV solar meanwhile is also moving ahead rapidly around the world. The main issue has often been cost, but prices are now falling (some thin-film amorphous silicon modules are now below 70 cents/watt), with claims being made that PV will be competitive with grid power in some locations within two or three years. Indeed, it has been claimed that in North Carolina consumer charges in $/kWh for PV-delivered power are now less than for power that might be delivered at some point from new nuclear plants: www.ncwarn.org/?p=2290.

Of course PV enjoys subsidies in the US, but then so does nuclear. PV has also benefited from subsidies in the EU under the various Feed-in Tariffs, to the extent that a major market boom emerged, leading to price reductions, which further stimulated uptake. That rapid expansion led to some cutbacks in subsidy levels in Germany and Spain, since it was claimed that too much extra cost was being imposed on electricity consumers, who in the end pay for the subsidy. But with prices continuing to fall, the extra cost should fall too, and the rapid progress of PV seems likely to continue around the world. One recent area of expansion in the UK, stimulated by the new Feed-in Tariff, is on farms, with solar arrays now being installed: see my earlier 'solar farm' blog.

Globally there is around 22 GW of PV capacity in place – still much less than the 150 GW-thermal or so of solar heating capacity around the world, but catching up fast. By 2020 PV is projected to generate 126 TWh of renewable power around the world, according to the latest International Energy Outlook 2010 from the US Department of Energy; by 2025, 140 TWh; and by 2035, 165 TWh. China alone aims to have 20 GW by 2020. The DECC 2050 Pathways analysis suggests that the UK might have 70–95 GW (peak) of PV in place by 2050, or more if we really went at it hard, supplying 140 TWh a year by 2050 in its maximum scenario.
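As a quick sanity check on figures like these, one can work out the average load factor implied by 70–95 GW of peak capacity delivering 140 TWh a year. The sketch below simply restates the arithmetic; nothing beyond the quoted figures is taken from the DECC analysis:

```python
HOURS_PER_YEAR = 8760   # hours in a non-leap year

def capacity_factor(energy_twh_per_year, capacity_gw):
    """Average output as a fraction of nameplate (peak) capacity.
    1 TWh = 1000 GWh, so CF = annual GWh / (GW x hours in a year)."""
    return (energy_twh_per_year * 1000) / (capacity_gw * HOURS_PER_YEAR)

for gw in (70, 95):
    cf = capacity_factor(140, gw)
    print(f"{gw} GW peak delivering 140 TWh/yr -> load factor {cf:.0%}")
```

That works out at roughly 17–23%: the scenario implicitly assumes the panels run at around a fifth of their rated output averaged over the year.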

In parallel, we are likely to see rapid expansion of Concentrating Solar Power (CSP) in desert areas, with focused solar heat being used to generate electricity, as well as CPV (concentrating PV) units in deserts, with some of this power being exported to the EU. The International Energy Agency says 11.3% of global electricity could be provided by CSP by 2050. Others say much more. See my earlier CSP blog.

Impacts

As solar expands around the world, a key issue, which will become increasingly important, is cleaning. Like windows, the cell and mirror surfaces will collect grime, dust and road grit, which must be regularly removed or performance will fall – by perhaps 5% a year. Desert dust and sandstorms can also present problems for CSP mirrors – an effect known as 'soiling'. But it could be that self-cleaning technology developed for lunar and Mars missions could be used to keep terrestrial solar panels dust-free. Working with Nasa, Malay Mazumder from Boston University originally developed the technology to keep the solar panels powering Mars rovers clean. Now he is working on a terrestrial version. It uses a layer of an electrically sensitive material to coat each panel. Sensors detect when dust concentrations reach a critical level, and an electric charge then energises the material, sending a dust-repelling wave across its surface. He says that this can lift away as much as 90% of the dust in under two minutes and uses only a small amount of electricity. Sadly, though ideal for deserts, back in the EU it probably won't be useful for bird droppings!

The use of water and detergents for cleaning PV cells and solar heating panels could clearly open up some new environmental issues but, otherwise, as long as care is taken to dispose of old PV cells carefully or, better, to recycle their constituent materials, there would seem to be few negative environmental implications from the domestic use of solar – apart perhaps from the issue of glare, which is a siting issue shared with other forms of glazed area. There can be toxic materials and health-and-safety issues in PV cell production, and some conflicts have been identified with desert wildlife in relation to CSP, but these problems should be amenable to regulatory resolution.

Water use by CSP in deserts has been raised as an issue. It's not just the water needed for washing mirrors: CSP needs water for efficient operation, since, as with any heat engine, you need cooling. That can be done with air (fans blowing air across radiators), but it's inefficient (it uses energy) and adds about 10% to the cost. Water-cooling is better, but water is one thing you don't have in deserts. However, you could import sea water if you are within reasonable reach of the sea. That's one idea being considered for some CSP projects in North Africa – piping in sea water from the Mediterranean for cooling, and also for desalination. The pipes could be hundreds of miles long, which adds to the capital cost and uses some energy, and you'd end up with a lot of salt. But then, to be fair, other energy technologies also need water. A study by Virginia Tech's Water Resources Research Center found that conventional fossil-fuel plants require anywhere from 5 to 8 times as much water per million kWh produced as CSP, while nuclear plants need even more – 10 to 20 times as much per kWh as CSP: nuclear plants consume about a gallon of water for each kWh of electricity produced.

http://switchboard.nrdc.org/blogs/pbull/at_the_confluence_of_water_use_1.html
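The quoted ratios can be turned into absolute figures with a line of arithmetic. Only the roughly 1 gallon/kWh nuclear figure and the 10–20x nuclear-to-CSP ratio come from the sources above; the rest follows:

```python
NUCLEAR_GAL_PER_KWH = 1.0   # approximate consumption figure quoted in the text

def implied_csp_water_use(nuclear_to_csp_ratio):
    """CSP water consumption (gal/kWh) implied by the quoted ratio."""
    return NUCLEAR_GAL_PER_KWH / nuclear_to_csp_ratio

for ratio in (10, 20):
    use = implied_csp_water_use(ratio)
    print(f"nuclear/CSP = {ratio}x -> CSP ~ {use:.2f} gal/kWh")
```

In other words, on these figures wet-cooled CSP would consume somewhere around 0.05–0.1 gallons of water per kWh.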

Of course, nuclear plants are not (yet) usually in deserts… although with climate change worsening, there are likely to be increasing problems in providing cooling water even so. France has already had to shut nuclear plants down in summer because the exit-water temperature was higher than local river regulations allowed. It's likely to get worse. Getting access to cooling water could be an increasing issue for many land-based energy technologies – solar PV and wind apart.

For more, see www.rivernetwork.org/resource-library/energy-demands-water-resources and www.sandia.gov/energy-water/docs/121-RptToCongress-EWwEIAcomments-FINAL.pdf.

Chains of reasoning can be quite long, and quite tortuous, but they can join up the most surprising places. Consider beryllium-ten.

Almost all of the Earth's beryllium is beryllium-nine, the isotope symbolized as 9Be and defined by having four protons and five neutrons in its nucleus. We also have a small stock of 10Be, which has an additional sixth neutron. Most of the 10Be is created in the atmosphere when incoming cosmic rays collide with gas molecules: it is a cosmogenic nuclide.

Cosmic rays are not rays but particles, mostly protons, that originate outside the solar system. Some are energetic enough to destroy the molecules with which they collide. For example, not only may a molecule of nitrogen be split into its two constituent atoms, but one of those atoms, a 14N (seven protons and seven neutrons), may be split into a helium (4He) and a 10Be.

10Be dissolves in rainwater, which is weakly acidic. When the rainwater falls on land, it becomes more alkaline by reacting with the surface minerals, which induces the 10Be to precipitate out. Unless it is carried away by running water or the wind, it accumulates.

The point of the story so far is simple: the longer a patch of surface has been accumulating the products of cosmic-ray collisions, the greater its stock of 10Be. If it remains in place, as is likely on the surfaces of large boulders, we can count the 10Be atoms, correct for the slow radioactive loss (10Be has a half life of 1.36 million years), and obtain the exposure age of the surface by a relatively straightforward calculation — as long as we know the rate of arrival of the cosmic rays.
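That "relatively straightforward calculation" can be made concrete. Under a constant production rate P, the inventory grows as N(t) = (P/λ)(1 − e^(−λt)), which inverts to give the exposure age t. Here is a minimal sketch with purely illustrative numbers: a production rate of about 4 atoms/g/yr is typical for 10Be in quartz at sea level and high latitude, and all cosmic-ray-flux corrections are assumed to have been applied already:

```python
import math

HALF_LIFE_YR = 1.36e6                 # 10Be half-life quoted above
LAMBDA = math.log(2) / HALF_LIFE_YR   # decay constant, 1/yr

def exposure_age(n_atoms_per_g, prod_rate):
    """Exposure age (yr) from a measured 10Be concentration.

    Assumes a constant, fully corrected production rate and no
    erosion, so N(t) = (P/LAMBDA) * (1 - exp(-LAMBDA * t)),
    which inverts to t = -ln(1 - LAMBDA*N/P) / LAMBDA.
    """
    return -math.log(1.0 - LAMBDA * n_atoms_per_g / prod_rate) / LAMBDA

# Illustrative numbers only: concentration and production rate are
# hypothetical, chosen to give a Younger-Dryas-scale age.
age = exposure_age(n_atoms_per_g=5.2e4, prod_rate=4.0)
print(f"{age:.0f} years")   # roughly 13,000 years on these numbers
```

For ages much shorter than the 1.36 Myr half-life the decay correction is tiny, which is why the inventory is, to first order, simply production rate times exposure time.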

But here we come to a tangle in the chain. The cosmic-ray flux is not constant. The list of corrections that have to be made is quite long, allowing for factors such as the varying strengths of the solar wind and the terrestrial magnetic field, the altitude and latitude of the exposed boulder, and the extent to which it is truly "exposed" and not shielded by the surrounding terrain, nearby obstacles, seasonal snow and so on.

The allure of that exposure age, however, has stimulated intense efforts over the past couple of decades. We have developed a good understanding of many of the required corrections, and, like the atoms on the boulders, reliable 10Be exposure ages are now accumulating in the literature.

Writing in Nature, Michael Kaplan and co-authors offer a new collection of 10Be ages from the terminal moraines of a glacier that once occupied a cirque in the Southern Alps of New Zealand. Three of the 37 ages are oddballs, but the remainder all cluster nicely, in groups from different moraines and morainic ridges.

The outermost moraine, about 2 km from the cirque headwall, has boulders that were first exposed to the atmosphere just before 11,000 BC. In sequence, the moraines nearer the headwall have ages of about 10,700 BC, 10,100 BC, 10,200 BC and finally, only a couple of hundred metres from the headwall, 9,500 BC. All of these ages are uncertain by 400-600 years.

The payoff of these observations is in the dates, which span very neatly the cold snap of the Younger Dryas. While abundant evidence for cooling was piling up in various northern palaeoclimatic archives, this little New Zealand glacier was dwindling into nothingness.

Kaplan and co-authors have thus nailed down, much more firmly than before, the conclusion that the hemispheres were out of sync at the end of the last ice age.

The atmospheric concentration of carbon dioxide increased during the Younger Dryas, ruling out a reduced greenhouse effect as an explanation for the northern cooling. The generally agreed explanation is that the north Atlantic received a rapid influx of buoyant fresh water from North America. This reduced the overturning of ocean water, and damped down the meridional circulation. But it also pushed the climate belts southwards, shifting the southern-hemisphere westerlies to higher southern latitudes at which they were better able to provoke oceanic upwelling. The resulting enhanced outgassing of deep-ocean carbon explains the increased atmospheric CO2, and the 10Be atoms on the boulders show that the Younger Dryas was a time of net warming at least as far north as New Zealand.

So beryllium helps us to understand the hemispheric asymmetry of glacial climate. Long and tangled the chain of reasoning may be, but it does illustrate how complexity can be unravelled given doggedness and ingenuity.


This week I attended the meeting of the US chapter of the Association for the Study of Peak Oil and Gas (ASPO-USA) in Washington, D.C. (http://www.aspousa.org/worldoil2010/). What is interesting about this meeting is the range of backgrounds of the individuals who attend and speak. There were practicing and academic economists, people working in the oil and gas industry, ecologists, a congressman (Roscoe Bartlett), and a Navy admiral (Lawrence Rice). Across all of these backgrounds, the basic consensus of the group is that oil production is in fact peaking now – production has stayed within 5% of the same level, around 83 to 85 million barrels per day, over the last five years – and will begin to irreversibly decline within the next five years. Additionally, the current economic downturn and high unemployment levels are directly tied to the precipitous rise in the oil price from 2003 to the summer of 2008.

Simply put, the world economy, and primarily that of the US and the rest of the OECD, could not afford and is not structured to function in a world of oil prices > $100/BBL. Southeast Asia is growing up in an oil economy just as oil production peaks, but it is adjusting upwards from transport systems such as scooters and bicycles. Additionally, as Jeff Rubin (http://www.jeffrubinssmallerworld.com/meet-jeff/) likes to point out (and he's a good speaker), the OPEC exporting countries are consuming oil at a faster rate than anyone because they keep their domestic prices artificially low (Iran, Saudi Arabia, Venezuela, etc.). Thus, if the Asian economies have to fall back soon to lower oil consumption, the adjustment won't be that drastic, and the OPEC countries will simply export less rather than impose anything close to market prices on their citizens – a necessity to maintain order. The OECD countries, on the other hand, will have a hard time adjusting, and this adjustment of the economy will probably take at least a decade. Think of people in the suburbs of the US moving to carpooling, then trying to move closer in to cities or out of urban life altogether, leaving abandoned lots in suburbia that the remaining inhabitants can use for suburban farming. This is not such a bad outcome, depending upon your world outlook. But as Jeff Rubin pointed out this past week (and in his book), this is how the world will get smaller: oil simply gets too expensive to "lubricate" the world transportation of goods and raw materials that is necessary for much of globalized trade.

More and more "mainstream" organizations are accepting the reality of peak oil production. Widely mentioned and quoted at the conference was a report from the US military's Joint Chiefs of Staff, the Joint Operating Environment (JOE) (http://www.jfcom.mil/newslink/storyarchive/2010/JOE2010o.pdf). I quote a few passages here:

"Peak Oil As the figure at right shows, petroleum must continue to satisfy most of the demand for energy out to 2030. Assuming the most optimistic scenario for improved petroleum production through enhanced recovery means, the development of non-conventional oils (such as oil shales or tar sands) and new discoveries, petroleum production will be hard pressed to meet the expected future demand of 118 million barrels per day."

"By 2012, surplus oil production capacity could entirely disappear, and as early as 2015, the shortfall in output could reach nearly 10 MBD."

These statements are strong support for the oncoming peak-oil scenario, but the rest of the section on energy does little to suggest that the JOE report is going very far out on a limb, given its caveats and its continued discussion of possible 100 million barrel per day (MMBBL/d) production in the future. If you are a real peak-oil person, then you believe we're at the peak now, near 85 MMBBL/d.

On other fossil fuels, there was a good presentation on the "true" economics and production levels of natural gas from shales by Arthur Berman, who has often presented interpretations of well data and financial statements supporting a view quite contrary to that of the shale gas producers. There were also presentations from David Rutledge and David Summers arguing for much lower future coal production (and hence lower CO2 emissions from coal) than is assumed in the emissions scenarios (the so-called SRES) used for the various Intergovernmental Panel on Climate Change (IPCC) climate model simulations. The data are compelling, and they sit well alongside the recent paper from Tad Patzek arguing that world coal production will peak soon (i.e. in 2011). Granted, there were audience members who greatly disagreed that we are anywhere near peak coal production, and obviously we do not precisely know the speed of development of new coal mining areas. However, I'd say the evidence is leaning toward a near-term peak-coal scenario, given the remoteness and coal quality of some virgin coal field locations (e.g. lignite in Eastern Siberia).

Please visit the ASPO-USA website for more information as the presentations are uploaded for public viewing and download in the future (http://www.aspousa.org/worldoil2010/).

There has been a proposal from B9 Coal to use AFC Energy's alkaline fuel cell technology with hydrogen produced by burning coal in situ underground in a 500 MW project in Northumberland. Underground coal gasification (UCG) produces syngas, which is then passed through a clean-up process, resulting in separate streams of hydrogen and carbon dioxide. Upwards of 90% of the CO2 can then, it is claimed, be captured as a by-product at no extra cost. The pure H2 is passed through the fuel cell, which converts it to electricity at 60% efficiency and at a projected cost as low as 4p per kWh.

UCG does avoid mining, with all its costs and risks. But UCG has some problems, as was found in early projects in the US and Russia. There have been accidental fires in coal seams underground which have been hard to control: one in Columbia County, Pennsylvania, started in 1962 and is still burning. The official response is that, though relatively shallow coal seams can burn if an air flowpath exists, UCG cannot burn out of control. Combustion requires a source of oxygen, and this can in theory be controlled, so there is allegedly no possibility of oxygen reaching the coal which, for UCG, needs to be at a depth of 500 to 2,000 metres and lying beneath impermeable rock strata. But even if that is true in practice, in situ coal does not burn cleanly or evenly – you get partial oxidation and a range of pollutants, including tars, phenols and ammonia. So there are clean-up costs.

More on UCG at: www.ucgp.com/.

Also, the efficiency of making hydrogen from coal is usually said to be only about 65%. So with fuel-cell efficiency at best 60%, that gives an overall efficiency of under 40%, which may be low compared with direct use of mined coal in Integrated Gasification Combined Cycle (IGCC) plants, even with Carbon Capture and Storage (CCS).
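As a quick sanity check on that arithmetic (using the post's quoted figures, not measured ones), efficiencies of conversions in series simply multiply:

```python
def chain(*efficiencies):
    """Overall efficiency of conversion stages in series: the product."""
    result = 1.0
    for e in efficiencies:
        result *= e
    return result

coal_to_h2 = 0.65   # quoted efficiency of making hydrogen from coal
fuel_cell = 0.60    # quoted best-case fuel-cell conversion efficiency

print(f"{chain(coal_to_h2, fuel_cell):.0%}")  # prints 39%
```

Which is where the "under 40%" overall figure comes from.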

However, why bother with complex and expensive IGCC plants? Why not just use the hydrogen directly as a heating fuel, sent to users via the gas main (piping gas is cheaper than distributing power via the electricity grid)? Or, if you really do need electricity, use the hydrogen in homes in a CHP fuel cell – recycling some of the otherwise wasted heat and raising the overall efficiency to maybe 70%.

Biomass as an alternative

Then again, why use coal as the hydrogen source? What's wrong with biomass? That's more or less carbon neutral if it's replaced by regrowing. There is a range of ways of producing hydrogen, methane or syngas from biomass, including anaerobic digestion, pyrolysis and gasification. In one approach, biomass is gasified to make carbon monoxide, which is then converted to hydrogen using the standard shift reaction (CO + H2O → CO2 + H2); if the CO2 is captured and stored, the process becomes carbon negative overall.

See: www.claverton-energy.com/wp-content/uploads/2010/07/Tetzlaff_Birmingham2010.pdf.
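The shift reaction's carbon bookkeeping is easy to check. Using standard molar masses (textbook values, not figures from the post), each mole of CO yields one mole of H2 and one mole of capturable CO2:

```python
# Molar masses in g/mol (standard values)
M_CO, M_H2O, M_CO2, M_H2 = 28.01, 18.02, 44.01, 2.016

# CO + H2O -> CO2 + H2 is mole-for-mole, so per kilogram of H2 produced:
co2_per_kg_h2 = M_CO2 / M_H2   # kg of CO2 available for capture and storage
co_needed = M_CO / M_H2        # kg of CO consumed

print(f"{co2_per_kg_h2:.1f} kg CO2 and {co_needed:.1f} kg CO per kg H2")
```

Roughly 22 kg of storable CO2 per kilogram of hydrogen from the shift step alone, which is why capturing it turns the biomass route carbon negative rather than merely neutral.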

There are land-use and biodiversity limits to how much we want to rely on biomass, but, intriguingly, the Sahara Forest Project includes the idea of growing algae in seawater-fed desert greenhouses, and there is plenty of desert and sea water.

Of course, there is also quite a lot of coal and in situ coal gasification may open up a new approach to using old part-worked coal seams. But if we want to avoid both coal and biomass, then what's wrong with getting hydrogen using solar-, wind-, wave- or tidal-derived electricity, via electrolysis of water, or even by direct high-temperature dissociation of water via focused solar?

See www.hionsolar.com/n-hion96.htm.

The latter is still relatively inefficient (1–2%) and both approaches are still expensive compared with conventional approaches to hydrogen production. However, the technology is improving. One 2009 study suggested that, while hydrogen produced via steam reforming of natural gas costs around $6–8 per kilogram of hydrogen, H2 from solar (via electrolysis) costs $10–12 per kg, from wind (via electrolysis) $8–10 per kg, and from solar via thermo-chemical cycles (assuming the technology works on a large scale) $7.50–9.50 per kg.

www.h2carblog.com/?p=461

So we are getting there. For example, a 2002 study noted that PV costs of ~$300/kWpk would be needed to get a H2 cost of $7–8/MMBtu via electrolysis, comparable with the cost of hydrogen production from coal, which, with current gasification technology, is $6.50–7.00 per MMBtu, or just over $8.00/MMBtu with CCS. www.netl.doe.gov/technologies/hydrogen_clean_fuels/refshelf/pubs/Mitretek%20Report.pdf.
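Comparing the 2009 study's per-kilogram figures with the 2002 study's per-MMBtu figures needs a unit conversion. A rough sketch, assuming hydrogen's higher heating value of about 141.8 MJ/kg (a standard figure, not taken from either study):

```python
MJ_PER_MMBTU = 1055.06        # one million Btu expressed in megajoules
H2_HHV_MJ_PER_KG = 141.8      # higher heating value of hydrogen, MJ/kg

def per_kg_to_per_mmbtu(price_per_kg):
    """Convert a hydrogen price from $/kg to $/MMBtu on an HHV basis."""
    mmbtu_per_kg = H2_HHV_MJ_PER_KG / MJ_PER_MMBTU   # ~0.134 MMBtu per kg
    return price_per_kg / mmbtu_per_kg

# The 2009 study's $8/kg wind-electrolysis figure, in the 2002 study's units:
print(f"${per_kg_to_per_mmbtu(8):.0f}/MMBtu")  # prints $60/MMBtu
```

On this basis even the 2009 steam-reforming figures sit far above the 2002 coal benchmark, so the two studies evidently assume different cost bases (e.g. delivered versus plant-gate) and the comparison is only indicative.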

But some PV modules are now claimed to cost below 76 cents/Wpk, and it's claimed that in some locations PV can deliver energy at costs below that from new nuclear plants, as can wind power. See www.ncwarn.org/?p=2290 and www.sourcewatch.org/index.php?title=Comparative_electrical_generation_costs.

However, electrolysis is only about 60% efficient, unless the waste heat can be recovered, and there is still a way to go before it, and other novel renewable-powered or biomass-fed approaches, can rival conventional steam reforming of fossil fuels for hydrogen production on a large scale. So, if we want hydrogen, maybe we could go for coal UCG/CCS just as an interim step?

Some of the above is based on discussions in the Claverton Energy Group (www.claverton-energy.com).

Everyone knows something about Ötzi, the Iceman of the Ötztal valley in the Tyrol of Austria. For example, many know that he was found, slightly embarrassingly for Austria, just inside South Tyrol, which is on the Italian side of the frontier. He is now at rest in the South Tyrol Museum of Archaeology in Bolzano.

Schnidi, by contrast, is less well known. There are reasons. First and foremost, he doesn't exist. He, or quite possibly she, is a collage of human detritus spread over the northern approach to the Schnidejoch, a 2,756 m pass in the Bernese Alps in southwestern Switzerland. More remarkably, Schnidi is spread over 6,500 years of Alpine prehistory. Like Ötzi, though, he has a lot to tell us, and a lot to ask us.

The archaeological finds at the Schnidejoch are documented by Martin Grosjean and co-authors in the Journal of Quaternary Science. They were exposed by the recession of a small ice patch, recently detached from the larger Tungel Glacier, during the record-breaking hot summer of 2003.

This is the second remarkable thing about Schnidi. Among the clothes he discarded were perishable goatskin leggings and shoes, from as long ago as 4,500 BC according to new results announced in 2008. Apart from making the early part of Schnidi a good deal older than Ötzi (about 3,300 BC), this means that 4,500 BC was the last time it was as warm in the Bernese Alps as it is now. The leggings must have been preserved beneath the ice since then.

The Schnidejoch is not a particularly hard climb, but a kilometre or two downvalley a moderate advance of Tungel Glacier from its modern extent would close off the valley, making the route difficult if not positively impassable. This is the simplest explanation for the next remarkable thing about Schnidi. He clusters in time. There is late Neolithic clothing and hunting gear from 2,950 to 2,500 BC; arrows, pins and other material from the early Bronze Age (2,150 to 1,700 BC); shoe nails, coins and a woollen tunic from Roman times (the first century BC to the second century AD); and a few items from mediaeval times.

These intervals coincide rather well with nearby evidence for warm periods, but they are also complementary because Schnidejoch is much higher than the other sources of information, and it is a "binary and non-continuous archive" — the pass was either open or closed.

Ötzi and Schnidi raise all sorts of questions, some sobering and some frivolous. Why do we westerners see Ötzi as someone who can tell us things, while aboriginal Americans see his counterparts on their continent as in need of re-burial, to be left in peace? All I can offer is the reflection that it is a pity we can't tell things to Ötzi, and the thought that if I were to make an exit like his I would be rather happy than otherwise to have the chance to tell things to my distant descendants.

Why was Schnidi so careless of his belongings? They are strewn over about 100 m of the route just below the pass. It is easy to see why the discards are preserved just here, in the former accumulation zone of a now-vanished glacier. But I cannot think of a reason why they should have been discarded just here.

Did Schnidi's religion oblige him, as thanksgiving for a successful crossing of the pass, to take off his trousers? More plausibly, perhaps our ancestors were about as careless as we are, losing stuff at random all along the route, but only the items buried by the ice have been preserved. On this interpretation, the Schnidejoch, when the valley was passable, was a moderately busy thoroughfare. In that case, why didn't the travellers cross by either of the passes lying a few kilometres to east and west, which are 200 to 500 m lower than Schnidejoch? Perhaps they did. Those passes may never have been in the accumulation zone of a glacier, in which case the Bernese Alps might have been even busier than implied by the Schnidejoch evidence.

Lastly, a question I have asked before, knowing that it won't be answered. What was Schnidi's word, or words, for the glacier over which he walked? He or she is concrete evidence for human interaction with glaciers in Roman times, and yet we have no record of a word for glacier in Latin.