
September 2010 Archives

Solar farming


Solar farms seem to be catching on with farmers - stimulated by the Feed-In Tariff, which offers a good income if you have the money to invest and space for large PV arrays. Madeleine Lewis of Farming Futures, a body backed by the Department for Environment, Food and Rural Affairs (Defra), the National Farmers' Union (NFU) and Forum for the Future, the environmental group, told the Daily Telegraph (14/8/10): "A year ago, farmers thought of solar as not very profitable and that's obviously changed. They are now very keen to invest in renewables. They are using a lot of energy, prices are going up and that has hit their businesses hard. Renewables projects insulate them against rising prices and provide a new income. There's a lot of buzz around it."

According to the Telegraph's report, nearly 40 farmers in Cornwall 'have already inquired about planning permission for solar projects in the county and more are likely to follow. Installations are also being planned in Herefordshire, Somerset and even the North East of England, despite there being 20% less sun than in England's south-western foot.'

It noted that Michael Eavis, the high-profile farmer and host of the Glastonbury music festival, is installing a £550,000 project using 1,100 panels on the roof of his cow barn. It will be the biggest solar roof in the country. In Herefordshire, a group of eight farmers is clubbing together through advisory company 7Y Energy, owned by 450 farmers, to buy solar panels for their barn roofs. Another group of 35 is having its sites surveyed to see whether they are suitable.

The Telegraph noted that one of the most ambitious investors in solar energy in the UK is MO3 Power, which is already engaged with 15 farmers and wants to have 100 sites generating 500 MW within five years. A typical site would, it says, cover 13–15 hectares and generate 5 MW, with the potential to provide an annual income of £50,000 for farmers leasing their land for the solar farm.
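As a rough, back-of-the-envelope check on those figures (the capacity factor and the per-kWh comparison below are my assumptions, not numbers from the article), a 5 MW array at a typical UK solar capacity factor of around 10% would generate roughly 4.4 million kWh a year:

    # Illustrative sketch only: the 10% capacity factor is an assumed typical
    # value for fixed UK PV arrays, not a figure quoted by MO3 Power.
    site_area_ha = 14          # mid-point of the quoted 13-15 hectares
    capacity_mw = 5            # quoted capacity per site
    capacity_factor = 0.10     # assumed annual average for UK PV

    annual_kwh = capacity_mw * 1000 * capacity_factor * 8760
    lease_income_gbp = 50_000  # quoted annual lease income

    print(f"Annual output        : {annual_kwh/1e6:.1f} million kWh")
    print(f"Lease income per ha  : GBP {lease_income_gbp/site_area_ha:,.0f}/yr")
    print(f"Lease income per kWh : {100*lease_income_gbp/annual_kwh:.1f} p")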

And as I noted in an earlier blog, Ecotricity also has plans for solar farms – 500 MW in the SW. And it has recently also submitted a proposal for a 1 MW 'sun farm' at Fen Farm, Conisholme, near Grimsby, on land next to a 20-turbine wind farm. It would consist of 59 rows of south-facing solar panels on a 4.7 acre site. It has also applied for permission to install five more wind turbines and sees the project as 'one of the first combined wind and sun energy parks in the world'.

However, the first to go ahead seems likely to be a 5,000-panel project developed by 35 Degrees, occupying 7.3 acres of Wheal Jane, a Cornish tin mine abandoned in 1992. It's expected to generate 1.34 million kWh each year and has evidently just got approval from Cornwall Council planners. Cornwall Council has estimated that solar power developments could bring up to £1 bn in investment. Lucy Hunt, a manager at Cornwall Development, commented: 'We're seeing the start of a Cornwall solar gold rush as developers need to have built their farms, with full planning consent, by April 2012 to take full advantage of the Government scheme.'

35 Degrees says that it plans to install 100 MW of solar farms in due course.

Another project, Benbole farm in St Kew in Cornwall, is also well advanced. It has applied for planning permission for a 2 MW array in a seven-acre field. It's being backed by Penzance-based renewables specialist Renewable Energy Cooperative (R-ECO), along with the commercial arm of the University of Exeter, in a consortium of local companies calling itself "Silicon Vineyards", which has a wider £40 m programme aiming for 20 MW of capacity with 10 solar farms.

Impacts

The NFU evidently believes that as many as 100 farmers will be setting up major solar projects by next year with many more already planning small-scale developments. The NFU is encouraging farmers to mount PV panels on barn roofs or to use land around the edge of fields for solar panels rather than using fertile agricultural land for so-called 'solar vineyards'. Dr Jonathan Scurlock, the NFU's chief adviser on renewable energy, told the Telegraph that farmers could graze chickens, geese or even sheep underneath field-based panels to maintain agricultural use.

The Campaign to Protect Rural England, which has opposed the construction of many wind farms, said it would prefer that 'car parks and factory roofs' were considered first when siting these sorts of projects. However, R-ECO says that it is 'very carefully selecting sites that will not impact the local environment, either visually or through impacting on natural ecosystems. As a co-operative business, we understand and respect the importance of maintaining natural beauty of our environment and we are working diligently to ensure that our farms visual impact will be mitigated through bush and tree plantation which in turn will act to offset the carbon footprint caused by our developments.'

Commenting on Benbole Energy Farm, R-ECO told the Guardian (18/5/10) that 'the visual interference will be negligible. It's very low to the ground and the surface of the panels are matt rather than reflective. No planning concerns have been raised by the local planning authority after initial inspections.' It added 'the array will be hidden from view behind willow coppice or by traditionally built Cornish hedge rows by our in-house Cornish Hedger'.

R-ECO says that around 10% of the income from the scheme will be set aside for a community fund, and it is also keen to support local community-owned solar farms. It says it is 'going to open up the doors to PV farms to everyone by developing solar farms paid for by communities and individuals. This will allow anyone and everyone to benefit from the financial rewards of solar PV even if they do not own their own home, do not have a south-facing garden, live in a flat/apartment and do not have the thousands of pounds to invest in their own system.'

It adds: 'Our community solar farms will allow anyone to invest any sum of money and get the lucrative financial rewards for doing so as well as be satisfied that their investment has gone towards bettering the environment and improved fuel security.'

If you expect a global agreement on curbing greenhouse gas emissions, the upcoming climate negotiations in Cancun will certainly disappoint you. A climate bill has stalled in the US Senate, and without the US, China will not move either. Meanwhile, however, negotiators are focusing on more tangible topics, such as REDD (reducing emissions from deforestation and forest degradation). UN parties are ready to settle on an agreement that includes a 4.5 billion dollar fund to pay for the protection of forests. Global forests, and the ecosystem services they provide, would thereby get a market price. As deforestation and land-use change currently contribute around 20% of annual greenhouse-gas emissions, a reduced rate of deforestation would be a real benefit to the climate.

One caveat with the market-based approach comes from the literature on managing the commons. Elinor Ostrom (Nobel Prize, 2009) and co-workers have accumulated massive evidence on how commons - such as forests - are successfully managed. They demonstrate that decentralized, fair sharing of resources, monitoring and social sanctions are crucial ingredients of commons management. Conversely, a purely market-based approach, especially if it is top-down, can decrease acceptance among local stakeholders and increase the risk of gaming: stakeholders could follow REDD to the letter but not in spirit, e.g. by substituting biodiversity-rich forest with tree monocultures.

Now, deforestation and GHG emissions are a global problem and, hence, need a global solution. REDD is urgently needed. Ostrom's results, however, suggest that REDD (besides safeguards that aim to avoid gaming) would profit from the explicit inclusion of local stakeholders, such as indigenous tribes and squatters. In many cases, ownership of forests is not well defined (offering opportunities for land grabbing). In these cases, community-based commons management, requiring the build-up of institutional capacity, may be the best way forward.

From another perspective, however, there is not enough market involved in REDD. More precisely, a crucial market distortion is painfully ignored. The drivers of deforestation and increasing land consumption are mostly economic forces. Locally, squatters live by burning down rainforest to gain land for agricultural production, and logging companies live by selling timber. In addition, global market forces demand further land: population growth, increased demand for meat, and increased demand for biofuels. Industrially produced biofuels and industrially produced meat have been shown to have large carbon footprints, a significant part of which is related to land-use change. A suitable market signal, hence, would be for the US and the EU to reconsider their 2020 quota targets for (first-generation) biofuels, and for OECD countries and China to introduce carbon taxes on biofuels and meat. This would decrease deforestation pressure in countries such as Brazil and Indonesia and would provide a strong stick to complement the REDD carrot. The tax income could then fund REDD and other forest-protection schemes.


Last month I found out what a mosh pit is. Doug MacAyeal told me. I gather that anybody more than about 15 years younger than me already knows this, but for the rest of us a mosh pit is the place in front of the stage at a rock concert, where extreme violence is likely to break out. (But apparently it is good-natured violence.)

Like me, Doug MacAyeal was attending a symposium of the International Glaciological Society to mark the 50th anniversary of the Byrd Polar Research Center in Columbus, Ohio. He is a glaciologist who is unusually gifted in the understanding of forces. In fact, one of the reasons he was at the symposium was to accept the Byrd Polar's Goldthwait Polar Medal, an award made only intermittently to the most distinguished of glaciologists. Doug's talk entitled "The glaciological mosh pit" made clear why he is a Goldthwait medallist.

The glaciological mosh pit, just before the violent release of gravitational potential energy that justifies the name, is a collection of icebergs, detached from each other but in close physical contact. They are the descendants of blocks of ice in an ice shelf that were formerly separated along crevasses, but the crevasses have now propagated through the whole thickness of the shelf. The bergs are typically much longer than they are wide, but the crucial point is that they are in a gravitationally unstable state, being several times taller than they are wide.

Although their weight is supported by the water, they would topple over if they were not propping each other up. Do they topple as soon as the crevasses break through to the base of the shelf? If so, why do all the crevasses apparently make the breakthrough at the same time? If not, what keeps them from toppling, and what triggers the eventual catastrophe?
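How tippy is a tall, thin berg? A standard hydrostatic result for an idealized rectangular block (this is textbook buoyancy theory, not something taken from MacAyeal's talk, and the densities are assumed round numbers) says the upright position is stable only if the width-to-height ratio exceeds roughly 0.8:

    # Stability of an idealized rectangular iceberg: upright is stable only if
    # the metacentre sits above the centre of gravity, which works out to
    # (width/height)^2 > 6*r*(1 - r), where r is the ice/water density ratio.
    # Textbook hydrostatics with assumed densities - not from MacAyeal's talk.
    rho_ice, rho_water = 900.0, 1025.0       # kg/m^3, assumed round values
    r = rho_ice / rho_water
    critical_ratio = (6 * r * (1 - r)) ** 0.5
    print(f"Capsize threshold: width/height ~ {critical_ratio:.2f}")
    # ~0.80, so a slab several times taller than it is wide is far into the
    # unstable regime and stays upright only because its neighbours prop it up.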

I have trouble sorting out the ways in which this adds up to an exciting mechanical and glaciological problem. First and foremost, perhaps, the mosh pit is already a horrible mess and is about to get very much messier, but there is the prospect of reducing the chaos to intellectual order.

Then there is the question of how it got that way in the first place. There is a link here to global warming. The crevasses probably would not penetrate to the base of the ice shelf if they were not wedged open, and driven deeper, by an influx of surface meltwater. The ice shelves have been around for a long time, and that they are disintegrating now suggests that surface meltwater is now more abundant than formerly.

A related question is why some floating slabs of ice disintegrate mosh-pit fashion but some others break up along just a few cracks, or even a single crack, to form ice islands.

And then there is the really big question: what happens to the released energy when the berg finally switches from being a vertically extended slab to being a more civilized, horizontally extended slab? It seems that, apart from a little bit of heat and a little bit of noise (waves in the air), nearly all of the gravitational energy becomes kinetic energy (waves in the water). This wave energy has to go somewhere in turn, and it can do an astonishing amount of damage when the waves break.

Doug MacAyeal and his students are grappling with this question along several lines of attack. Doug's talk was mainly about the theory of the balance of forces on a collection of gravitationally unstable icebergs, and about how tricky it is to write this balance down algebraically. Justin Burton told us about the research group's efforts to simulate the mosh pit with fake plastic icebergs in a large tank of water, showing fascinating movies of the collapse and "seaward" advance of the collection and the subsequent sloshing about of the water. Nicholas Guttenberg described early work on computational simulation of the mosh pit, with an equally fascinating movie showing "virtual collapse".

So far we have a sample of only two observed glaciological mosh pits, the disintegration of most of Larsen B Ice Shelf in 2002 and of part of Wilkins Ice Shelf beginning in March 2008. But two examples are more than twice as interesting as one, as well as being infinitely more interesting than none at all. Two mosh pits suggest a pattern, and the possibility of more to come and perhaps to guard against.

Biochar reviewed

| | Comments (10) | TrackBacks (0)

'Contemporary interest in biochar is, first and foremost, driven by its potential role as a response to the problem of climate change, through the long-term storage of carbon in soils in a stable form.' With that preamble in mind, and noting that it could also reduce the use of fertilisers, the Biochar Research Group at Edinburgh University was commissioned by DEFRA to review the potential of biochar, and in particular to look at some major uncertainties surrounding its impacts upon soils and crops, its overall performance and its costs compared to other carbon mitigation options. As the preamble put it, 'whilst biochar might improve productivity, is this effect really understood well enough that we can factor-in a long-term enhancement of the carbon sink in vegetation and soils?'

As I indicated in an earlier blog, there were also uncertainties as to whether it would be as effective in terms of carbon dioxide abatement as other ways of using biomass, including use simply as a carbon-neutral fuel, offsetting fossil-fuel emissions. There were also concerns that, if it did prove effective, the result might be vast biomass plantations undermining biodiversity and competing with food production.

The final report on Pyrolysis-Biochar Systems ('PBS') from the project has now emerged. It says that it provides 'preliminary evidence that PBS are an efficient way to abate carbon, and tend to out-compete alternative ways of using the same biomass (in terms of carbon abated per tonne of feedstock, or in terms of abatement per hectare of land).' It suggests that you can get abatement of 1.0–1.4 t CO2eq per oven-dry tonne of feedstock used in slow pyrolysis. 'Expressed in terms of delivered energy PBS abates 1.5-2.0 kg of CO2eq/ kWh, which compares with average carbon emission factor (CEF) of 0.5 kgCO2eq/ kWh for the national electricity grid in 2008, and current CEF for many biomass feedstocks of 0.05–0.30 kgCO2eq/kWh. Expressed in terms of land-use, PBS might abate approximately 7–30 t CO2eq/ha/yr using dedicated feedstocks compared with typical biofuel abatement of between 1–7 t CO2eq /ha/yr.'

And so it concludes that, provided the Carbon Stability Factor (the proportion of total carbon in freshly produced biochar that remains fixed as recalcitrant carbon over a defined time period) remains above 0.45, 'PBS will out-perform direct combustion of biomass at 33% efficiency in terms of carbon abatement, even if there is no beneficial indirect impact of biochar on soil greenhouse-gas (GHG) fluxes, or accumulation of carbon in soil organic matter'. But it says there is also 'an, in principle, credible case that biochar deployment in UK soil will produce agronomic gains (and possibly suppress GHG emissions)', so it's doubly blessed, though, perhaps inevitably, the report says that more research is needed to be sure.
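To get a feel for what a Carbon Stability Factor means in tonnes of CO2, here is a toy calculation with assumed char properties (the carbon fraction is my illustrative figure; the report's own accounting also credits pyrolysis energy co-products and soil effects, which this ignores):

    # Toy illustration of the Carbon Stability Factor (CSF) - assumed char
    # properties, not the report's own accounting.
    char_carbon_fraction = 0.75   # assumed carbon content of fresh biochar
    csf = 0.45                    # threshold value discussed in the report
    co2_per_c = 44.0 / 12.0       # t CO2 locked up per t of stable carbon

    stable_co2 = char_carbon_fraction * csf * co2_per_c
    print(f"~{stable_co2:.1f} t CO2 held long-term per tonne of fresh biochar")
    # ~1.2 t CO2/t char at these assumptions; the remaining carbon is assumed
    # to return to the atmosphere over the accounting period.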

There are also some other caveats (e.g. on costs, which it puts at maybe £42/t CO2). It says that: 'Biochar is, currently, an expensive way of abating carbon, although the costs would likely come down with investment'. It notes that: 'There has been relatively little attention to the logistics of PBS, even though this is likely to be very important to the economic and practical viability. The issues raised include the need for (and cost of) storage, the acceptability of truck movements, and how economies of scale in producing and distributing biochar might be achieved. Biochar is currently expensive to produce due to feedstock, capital and operational costs. Extensive PBS implies an extensive infrastructure, involving pyrolysis units probably at a range of scales that will take some time to be built and operated, especially given the current lack of dominant design.'

Nevertheless it says 'Biochar could, however, increase quite significantly the opportunities for carbon abatement in the agriculture and land-use sectors. In the UK the availability of land is unlikely to present an absolute barrier to biochar deployment, although the land potentially providing the highest returns from biochar addition (such as horticulture) is relatively small in extent. The supply and cost of biochar also depends upon the extent to which organic waste feedstocks could be utilised. There are some 'niche' areas where PBS could have particular advantages over alternative ways of dealing with organic residues, even within current economic conditions.'

The Centre for Alternative Technology came to similar conclusions in its 'Zero Carbon Britain 2030' study: see my earlier blog. They saw biochar playing a key role.

Although the Edinburgh study does highlight some potential problems and unknowns (e.g. on cost and how long carbon will stay trapped), it calls for more pilot projects, and it does look like biochar, if sensibly managed, could be a winner. However, somewhat oddly, DECC's new '2050 Pathways' report only sees biochar as playing a fairly limited role, as one possible geo-sequestration option – perhaps trapping 1 Mt CO2 p.a. in the UK by 2050.

By contrast a US study, 'Sustainable biochar to mitigate global climate change', is very positive. Biochar could, it says, offset 1.8 bn tonnes of carbon emissions annually in its most successful scenario – around 12% of current global greenhouse-gas emissions – without endangering food security, habitat or soil conservation.

The DEFRA/Edinburgh Biochar report is at: http://randd.defra.gov.uk/Document.aspx?Document=SP0576_9141_FRP.pdf

If you drill a hole right through your glacier, one of the things you get is a measurement of its thickness. But if you want the mean thickness of the entire glacier, an expensive and time-consuming borehole doesn't get you very far. The only realistic way to measure the mean thickness of a glacier is ground-penetrating radar (GPR).

You drag your radar across the glacier surface. It emits pulses of radiation and keeps track of the echoes, in particular those reflected from the bed. With one or two additional items of information you can convert the travel time of the echo to a thickness. This is still expensive, especially if you try to improve coverage by flying your radar in an airplane instead of dragging it over the surface.

But with reasonably dense coverage, you do end up with a reasonable estimate of the mean thickness. With a measurement of the area, and some reasonable assumption about the bulk density, you can estimate the total mass.
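As a minimal sketch of the arithmetic involved (the radio-wave speed in ice of about 168 m per microsecond and the bulk density are standard textbook values, assumed here rather than taken from any particular survey):

    # From echo to thickness, and from thickness to mass - illustrative values.
    v_ice = 168.0          # m per microsecond, radio-wave speed in cold ice (assumed)
    two_way_time = 4.0     # microseconds, example two-way travel time of the bed echo
    thickness = v_ice * two_way_time / 2.0          # halve it: down and back
    print(f"Ice thickness ~ {thickness:.0f} m")     # ~336 m

    area_km2 = 10.0        # example glacier area
    mean_thickness = 150.0 # m, say, from many such soundings
    density = 900.0        # kg/m^3, assumed bulk density of glacier ice
    mass_gt = area_km2 * 1e6 * mean_thickness * density / 1e12
    print(f"Glacier mass ~ {mass_gt:.2f} Gt")       # ~1.35 Gt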

One problem with all this is that we only have measurements of mean thickness for a few hundred glaciers at most. What do we do about the mean thickness of the remaining several hundred thousand?

The most common answer is "volume-area scaling". The term, which is a firm fixture in glaciological jargon, is misleading because it is really thickness-area scaling. When we plot the measured mean thicknesses against the areas of their glaciers, we get a nice array of dots that fall on a curved line — or a straight line on logarithmic graph paper. The thickness appears to be proportional to the three-eighths power of the area. There is an equally nice theoretical scaling argument that predicts this power and makes us suspect that we are working on the right lines.

Unfortunately the so-called coefficient of proportionality, the factor by which we multiply the three-eighths power of the area to turn it into an estimated thickness, is much harder to pin down. It varies substantially from one collection of measurements to another.
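A sketch of how the scaling is used in practice (the coefficient and exponent below are one parameter set in the commonly cited range, chosen purely for illustration; as just noted, the coefficient is the uncertain part):

    # Volume-area (really thickness-area) scaling: V = c * A**gamma,
    # with V in km^3 and A in km^2, so mean thickness h = V/A = c * A**(gamma - 1).
    # c and gamma are assumed illustrative values, not a recommended set.
    def mean_thickness_m(area_km2, c=0.034, gamma=1.375):
        return 1000.0 * c * area_km2 ** (gamma - 1.0)   # gamma - 1 = 0.375, the 3/8 power

    for a in (1, 10, 100, 1000):
        print(f"A = {a:5d} km^2  ->  mean thickness ~ {mean_thickness_m(a):4.0f} m")
    # Roughly 34 m for a 1 km^2 glacier and ~190 m at 100 km^2 - and doubling c
    # doubles every estimated thickness, which is why totals are so uncertain.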

Recently I have been using volume-area scaling to try to say something useful about the size of the water resource represented by Himalayan glaciers. As you may have noticed, the fate of Himalayan glaciers has been in the news lately. Will they still be there in 2035? Yes. Will they be smaller in 2035? Yes. How much smaller? Don't know.

Among the reasons why we can't say anything useful about Himalayan glaciers as they will be in 2035, one is that we can't say much about how they are in 2010. So I have been trying to work out some basic facts, by completing the inventory of Himalayan glaciers and using the glacier areas to estimate their thicknesses and masses. The inventory data were obtained over a 35-year span centred roughly on 1985. So forget the challenge of getting to 2010. What can we say about Himalayan glaciers in 1985 or thereabouts?

It turns out that, including the Karakoram as well as the Himalaya proper, there were about 21,000 of them. To estimate total mass by volume-area scaling, we have to treat each glacier individually. The result depends dismayingly on which set of scaling parameters you choose. Five different — but on the face of it equally plausible — sets give total masses between 4,000 and 8,000 gigatonnes. (Difficult to picture, I agree, but these numbers translate to region-wide average thicknesses between 85 and 175 metres.)

In short, we only know how much ice there used to be in the Himalaya to within about a factor of two. Let me try, like a football manager whose team has just been given a hammering on the pitch, to take some positives from this result. For example, it pertains to a definite time span and to a region that is defined quite precisely. Earlier estimates have been hard to compare for lack of agreement on, or definition of, the boundaries. It is also a better estimate than the 12,000 gigatonnes suggested casually by the Intergovernmental Panel on Climate Change in 2007.

But what does "better" mean in this context? Apart from being wrong about the longevity of Himalayan glaciers, it looks as though the IPCC was also wrong about the size of the resource, which is a good deal smaller than suggested. Where does that leave us as far as water-resources planning is concerned? With a lot of work still to do, that's where.

Is banking, as practiced today, sustainable? Does it become more sustainable with Basel III regulation? An article by Mark Joob in the Financial Times Deutschland of 16 September 2010 claims not.

Basel III requires banks - among other measures - to retain more core equity capital, limiting banks' ability to extend credit that they cannot cover themselves. It has been criticized from a variety of perspectives (e.g. that its administrative complexity crowds out small banks and gives big banks an additional edge).

The FTD article is more fundamental in its criticism - credit growth itself is the problem. The argument follows Binswanger's book "Die Wachstumsspirale" (literally, 'the growth spiral'). Companies borrow money to invest in production. To cover the opportunity costs of money, and to compensate for the risk of bankruptcy, creditors ask for interest. To pay back the credit, companies therefore need to earn profits. As the credit enters circulation as additional money, more products can be bought from existing production, securing the profits of companies. Crucially, the total amount of money has to keep growing along the way - otherwise no net profits are possible in aggregate.
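A toy numerical version of that last point (my own minimal illustration, not Binswanger's model): if firms must repay credit plus interest out of sales, repayment in aggregate is only possible if new credit keeps expanding the money stock.

    # Toy growth-spiral arithmetic - a deliberate oversimplification.
    interest = 0.05
    money = 100.0                           # money stock after the first round of credit
    for year in range(1, 6):
        owed = money * (1 + interest)       # what firms must repay in total
        new_credit = owed - money           # cannot be earned from the existing stock...
        money = owed                        # ...unless new credit of this size is issued
        print(f"year {year}: money stock {money:6.1f}, new credit needed {new_credit:4.1f}")
    # The stock compounds at the interest rate; halt the expansion and, in
    # aggregate, someone must default.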

The author of the FTD article and Binswanger suggest that this growth dynamic is a sort of large-scale Ponzi scheme, because the additional income is ultimately derived from natural (finite) capital.

How does this claim relate to concepts of sustainability? Sustainability is a normative perspective that tries to take care of the well-being of both current and future generations. There are two dominant frameworks: weak and strong sustainability. Weak sustainability assumes that one has to consider aggregate welfare only, and in particular that natural resources can be substituted with man-made capital. More precisely, if we know the (marginal) shadow prices of natural resources, services and sinks, and price them in, there is nothing to worry about. Strong sustainability assumes that substitutability between natural and man-made capital is rather limited and, hence, that we have to strictly protect sources and sinks.

Clearly, the growth-spiral argument belongs to the camp of strong sustainability. Assume the opposite view - that growth is based on real added value and not substantially on natural capital - and the growth spiral is not much of a problem, at least from an ecological point of view.

Whatever one precisely thinks about weak and strong sustainability - it is interesting to see that these sustainability perspectives have their natural correlate in monetary policies.


Wind power enthusiasts sometimes point to Denmark as a good example of what can be done - claiming that it gets around 20% of its electricity from wind. Wind opponents also sometimes point to Denmark, claiming that in fact not much of this is actually used in Denmark, since it's often available at the wrong time, when demand is low, and so has to be exported. Worse still, Denmark has to import power (mainly hydro) from Norway and Sweden to back up its wind plants when they can't deliver enough and demand is high. This costs more than Denmark gets from its exported wind, so, the argument goes, wind is a net financial loss and a drain on the Danish economy. A debate on this issue has been raging on the Claverton Energy Group website and e-conference over the past year.

Some say it's all because Denmark has a small, inflexible energy system. It's been pointed out that a major effort was made in Denmark to move away from oil over to coal. Much of the coal capacity consists of large centralised Combined Heat and Power (CHP) plants which, although more efficient than conventional electricity-only plants, can't easily be used to balance the variable wind output. Worse still, heat is often needed when there is no major demand for electricity. In these circumstances some CHP electricity has had to be exported, along with any excess from wind. Unfortunately this will tend to be when there is also a surplus of wind power in the surrounding countries, so the price on offer is quite low.

Against that, these exports will be reducing the amount of fossil fuel which is used in places like Germany. In Norway and Sweden this electricity may sometimes only be replacing (zero-carbon) hydro, but that depends on the timing – if their hydro reservoirs are low, the imported power can be used to pump them up, in effect storing wind power and excess CHP power for later use. On this view all is well in climate terms, though it does cost the Danes more: what's really needed is a pan-EU balancing system, with perhaps a Cross Feed tariff to ensure that using stored power is not prohibitively expensive. More flexible generation and load management in Denmark would be good too, along with better interconnections and an economically synchronised market system covering all the participating and connected countries (e.g. the Baltic states and Germany). Certainly, some say that Denmark is too small to be treated as a coherent energy system, but others say that being small is actually a great advantage: it means that wind can be backed up (via interconnectors) by larger grid neighbours – something that would be much more difficult in the UK's case.

Lessons for the UK

Some argue though that, being larger with a range of flexible back up plants, the UK could in fact do much better than Denmark – although they accept that it must strengthen its grid and interconnections. But even without much of that, there should be no need for extra backup for some while: you could add 30 GW wind and use existing UK fossil plants as back up – you don't need to build 30 GW of new plant and therefore "pay twice", as some anti-wind people claim. We already have it – and have paid for it. And some of it is already used as back-up – for conventional and nuclear plants, and to deal with the daily demand peaks. On this view, basically, when available, what wind power does is replace some output from the existing power system – so you don't need to build any extra back-up for when wind is not available.

Although the probability of zero wind output is relatively low, as things stand at present we would have to retain all, or most of, the existing system – wind has a low capacity credit, perhaps 15% depending on the total capacity. However, more existing capacity could be retired if we had a more flexible system, with more load management, more storage and more interconnections, plus inputs from firm, non-variable renewables like biomass and geothermal, though the optimal mix is as yet undetermined. There is also the problem that operating fossil plants occasionally at lower power means they are run inefficiently, part loaded, which adds to costs. But they may not have to run often and the cost penalties will therefore be low. Inputs from other renewables could also help (e.g. although they are variable/cyclic, wave and tidal availabilities are phased differently from wind). Inflexible nuclear, however, just gets in the way. At least Denmark doesn't have that problem!

You can join the debate at www.claverton-energy.com.

On or shortly before 5 August 2010, a big chunk of the floating tongue of Petermann Glacier in northwest Greenland broke off. It is now an ice island, about 260 km2 in area, and is destined to do a left turn into Nares Strait, between Greenland and Ellesmere Island, whence it will drift southwards, falling to pieces as it goes. The new ice island joins a quite long list of old ice islands.

The calving event had been expected for at least a couple of years, based on observations of the floating tongue of the glacier. The island itself seems to have been noticed first by Trudy Wohlleben of the Canadian Ice Service, which scrutinizes satellite imagery continuously for the monitoring of hazards to navigation in northern waters.

These days, big environmental events invite speculation that they are "caused" by global warming. Thus a large new iceberg has to be a sign that either its parent glacier is disintegrating or the global-warming alarmists are at it again. The truth, as usual, is that we cannot put any single event down to global warming in this simple-minded fashion. (Which doesn't mean, by the way, either that the glacier isn't disintegrating or that we alarmists are not at it again.)

Some ice shelves in the Antarctic have disintegrated spectacularly in the past couple of decades, and there we do suspect a link with global warming. But the calving of icebergs is a normal part of the mass balance of any tidewater glacier, and once in a while we get a berg that, like the new Petermann berg, is big enough to qualify as an ice island. To show that the balance has shifted to faster calving is very difficult because the big events happen so infrequently.

That doesn't mean that the new ice island isn't interesting, and especially not that it isn't dangerous. The Canadian Ice Service will no doubt eventually produce a story about this one at least as interesting as the one about the last big berg from Petermann. It calved in July 2008, and bits of it remained identifiable near the southern tip of Baffin Island a year later.

But as ice islands go, even the much bigger Petermann island of 2010 is not that big a deal. The first ice island to be given a name — of a sort — was T1. The T stands for "target". T1 was discovered by U.S. Air Force pilots flying out of Barrow, Alaska, in August 1946. By that date it was clear that the tense wartime alliance between the western allies and the Soviet Union had fallen apart. T1 immediately became a military secret, but it took only a few years for the U.S. military to work out that it is a bit silly trying to keep a 700 km2 chunk of ice secret. T1 was followed by T2, of more than 1000 km2; by T3, of about 50 km2; and eventually by several dozen smaller islands, all of them bigger than your typical iceberg.

Most were in the Arctic Ocean. Each of the biggest ones was spotted from time to time, and found to be drifting in about the expected direction, that is, clockwise, around the Beaufort Sea.

Apart from a debatable suggestion that T2 might have been seen at 72°N off east Greenland in 1955, I haven't managed to find out what happened to either T1 or T2, but T3 became a research station in 1952. It was occupied intermittently until the early 1970s and was last sighted in 1983, after which it is conjectured to have found its way into Fram Strait and thence into the Atlantic.

The odds are heavily in favour of all these objects having broken free from the northern coast of Ellesmere Island, sometime during the 1920s or later. The evidence of the earliest visitors, and the results of more recent field studies, agree that there was once a continuous ice shelf all along that coast. Today, it consists of a dwindling collection of small remnants. According to the Canadian Ice Service, relying on imagery up to 22 August 2010, another 50 km2 fragment has just detached from what is left. This fragmentation over decades is wholly consistent with the emergence of the global climate from the Little Ice Age.

Moira Dunbar, in the paper from which I have distilled this information, presents several accounts from 19th-century explorers which sound persuasively like descriptions of ice islands. So we have a long record of ice islands off the northern coast of North America. What we cannot do, and will probably be unable to do given the small number of calving events, is to establish that the rate of breakoff has increased.

Energy from the air


Rather than seeing excess carbon dioxide in the atmosphere as a problem to be dealt with by, for example, expensive carbon capture and underground storage, why not make use of it to produce fuel? The basic chemistry is simple, assuming you have some spare hydrogen: CO2 + H2 = CO + H2O. The CO can then be converted to hydrocarbon fuel, for example via the Fischer–Tropsch process. That of course needs more hydrogen. Fortunately, not only do we have plentiful supplies of CO2 from the air, air movements can also supply the energy, via wind turbines, to make hydrogen by the electrolysis of water. Problem solved! Except of course that each stage in this process is difficult, with conversion efficiencies of around 60%, unless the waste heat can be recycled or used.
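A quick illustration of how those per-stage losses compound (the 60% per stage is the round figure quoted above, and the ~90% for direct battery-electric use, discussed further below, is likewise a round assumption):

    # Chained conversion losses in an air-to-fuel route - illustrative round numbers.
    stage_eff = 0.60
    stages = ["electrolysis (H2)", "CO2 capture + reverse water-gas shift", "Fischer-Tropsch"]

    overall = 1.0
    for stage in stages:
        overall *= stage_eff
        print(f"after {stage:38s}: {overall:5.1%} of the wind electricity remains")

    print(f"Air-fuel chain overall : ~{overall:.0%}")   # ~22%
    print(f"Battery-electric route : ~{0.90:.0%}")      # assumed, for comparison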

Carbon capture techniques are of course being developed for use with power station emissions. But there has been a parallel idea of 'air capture' – for example the 'absorption tower' approach developed by Prof. David Keith at the University of Calgary, in which a fine mist of strong sodium hydroxide solution is brought into contact with an air flow. The big advantage of that is that you can do it anywhere – not just at power plants. But rather than just storing the resultant mix, the CO2 can be recovered, ready for conversion into a fuel.

That is what a team led by Prof. Tony Marmont is planning to do, using an electrochemical process based on a patented design developed by Prof. Dereck Pletcher, formerly of Southampton University.

Prof. Marmont is a long-time proponent of renewable energy in the UK and funded the set-up of the CREST organisation at Loughborough University. His team at Beacon Energy in Leicestershire has been working on hydrogen generation, storage and use for some while – with the electricity supplied by their own wind and PV systems – so the next stage should be a bit easier.

In the 'air fuel synthesis' (AFS) approach being developed by Marmont, the recovered carbon dioxide will then be reacted with electrolytic hydrogen; either directly to make methanol and thence to petrol via the Mobil Methanol-to-Gasoline route; or via the Reverse Water Gas Shift reaction (as above) with hydrogen, to make carbon monoxide, which in turn will be reacted with more hydrogen in a Fischer–Tropsch reaction to make hydrocarbons. In the latter case, variation of the reaction conditions could enable petrol, diesel or aviation fuel to be made.

It's a fascinating idea, essentially using renewable energy to do what nature does with photosynthesis – convert atmospheric carbon dioxide back into organic molecules. But it does rely on multiple stages, each with significant energy losses. In terms of road transport applications, you would presumably get a much better return on the wind-generated energy if it was just used in battery electric cars – with the overall conversion efficiency then being 90% or so. So the AFS approach may only be an interim option while electric cars are improved. But liquid fuels have a much higher utility and energy-storage density than batteries, and there may well be some applications (e.g. for heavy goods vehicles and, crucially, for aircraft) where liquid fuels will have major advantages.

So there could well be a future for ideas like this, and the wind power resource could be up to it in time. The Marmont team calculate that 'to make all UK oil – 140,000 tons a day – as synthetic, would take a windfarm area 175 miles by 175 miles in the North Sea. To make only aviation, marine and military fuel as synthetic, would take an area 72 miles by 72 miles.'
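That claim can be roughly sanity-checked (the figures below - oil at about 42 GJ/tonne, an overall wind-to-fuel efficiency of ~30% and an offshore wind yield of ~3 W per square metre of farm area - are my assumptions, not the Marmont team's):

    # Back-of-envelope check of the quoted 175 x 175 mile figure - assumed inputs.
    oil_t_per_day = 140_000
    oil_power_w = oil_t_per_day * 42e9 / 86400      # average fuel energy rate, W (~68 GW)
    wind_power_w = oil_power_w / 0.30               # electricity needed, at ~30% chain efficiency
    area_m2 = wind_power_w / 3.0                    # at ~3 W per m^2 of offshore farm area
    side_miles = area_m2 ** 0.5 / 1609.0

    print(f"Fuel energy rate : {oil_power_w/1e9:.0f} GW")
    print(f"Wind farm needed : ~{side_miles:.0f} miles on a side")
    # Comes out around 170 miles square - the same order as the quoted figure.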

www.airfuelsynthesis.com/

The Los Alamos Lab in the US has proposed something similar, but with the energy supplied by a nuclear reactor. They gave their idea the somewhat cringeworthy label 'Green Freedom'.

For good measure they suggested that conventional power station cooling towers could be used for the carbon dioxide-trapping NaOH spray system. But as with the wind-AFS approach, it would probably be more efficient to use the nuclear electricity directly in electric cars, a point made elegantly at http://ergosphere.blogspot.com/2010/01/revisiting-green-freedom.html.

Nevertheless, with there being no easy aviation fuel substitutes, Air Fuel Synthesis does still seem worth exploring, as are the various other novel 'Green Chemistry' ideas for fuel production being developed around the UK and elsewhere, including the use of biomass as a feed stock for hydrogen production. See for example: www.claverton-energy.com/wp-content/uploads/2010/07/Tetzlaff_Birmingham2010.pdf.

For more on new renewable-energy developments, visit www.natta-renew.org from which some of the above was drawn. Thanks to Dave Benton for his input on AFS.

If the climate were to change, you would expect the snowline altitude to change. It does, and we can show that it has in recent centuries, but we can also turn the proposition around. The snowline makes a very good tool with which to think about the climate. Here, again, is a graph of the global snowline.

A global approximation of the climatic snowline. South Pole on the left, North Pole on the right. Each little square is at an altitude which is the average of many "mid-altitudes", each of which is the average of one glacier's minimum and maximum altitude.

It isn't just that the snowline makes sense of the glaciologist's definition of "maritime" and "continental". The temperature at the snowline varies in the graph from purple (very cold and continental, on the ice cap covering Illimani and on other peaks above 6 km in the Bolivian Andes) to dark red (very warm and maritime, in the northern mid-latitudes).

Why is the "line" fat in the northern mid-latitudes? Obviously there are a good many glaciers to sample there, but it is not obvious until we colour the little squares that the fatness is because regional climates vary in continentality. In southern Alaska, for example, the shoreline runs crudely east-west, continentality increases inland, and the snowline actually rises towards the pole.

Putting aside regional variations, why doesn't the global snowline define a neat triangle, highest at the equator and lowest at the poles? The answer lies in the so-called general circulation of the atmosphere. The snowline dips in the tropics, between 30°S and 30°N, because that is the region through which the Inter-Tropical Convergence Zone travels as it follows the Sun. Here the airflow derived from subsidence over the desert belts of each hemisphere converges on the ITCZ. The subsidence implies warming of the air and therefore reduction of its relative humidity, which is why the desert belts are desert belts. Glaciologically, the subsidence means that you don't need much heat to melt what little snow accumulates, so the snowline (strictly, the equilibrium line) is very cold and therefore very high. Between the desert belts, convergence at the ITCZ forces the air to rise and cool, provoking snowfall. The extra snow requires more heat, and a lower and therefore warmer equilibrium line, than in the desert belts.

I wonder if I can convince you that in the mid-latitudes of each hemisphere the snowline is concave up? It is a subtle but physically genuine depression of the equilibrium line, and as at the ITCZ it is due to convergence and thus to lifting and cooling of air. Again, the cooling provokes more snowfall and in turn a lowering of the equilibrium line. This time the converging airmasses are flowing poleward from the desert belts and equatorward from the poles.

Did you notice the asymmetry of the hemispheres? Anywhere poleward of the tropics, the snowline is hundreds of metres or more lower in the southern hemisphere than at the equivalent latitude in the northern hemisphere. It reaches sea level at about 60–65°S, but where we run out of land at 84°N it is still a few hundred metres above sea level.

The temperature at the surface of the Antarctic Ice Sheet is about 25°C colder than at the surface of the Arctic Ocean. Something like 18°C worth of the difference is simply because the ice sheet is about 3 km above sea level. The remainder, and the depression of the snowline throughout the southern extra-tropics relative to the north, are due to the chilling effect of the ice sheet on the general circulation.
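(For the arithmetic: assuming a typical lapse rate of about 6 °C per kilometre, 3 km × 6 °C/km ≈ 18 °C of altitude effect, leaving roughly 7 °C of the difference to be accounted for by the circulation.)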

Finally, a question that always makes my head spin. What would the altitude of the snowline be if there were no mountain range? There would be no orographic moisture trap, and no glaciers of course. If we knew the temperature of the snowline, we could go to the atmospheric temperature records and find where the snowline would be if there were land. But, first, by supposition there isn't any land. Second, we know that if there were it would draw the snowline down to meet it, that being why glaciers start out maritime at the coast and become more continental the further inland you go. Third, that means that if there were land the temperature would be different from what it is in the free atmosphere, which returns us to where we started from but with the realization that we ought not to have started from there.

So I can't produce an answer to the question. But I can see that the snowline teaches us a lot about the climate, including the proposition that the general circulation, the temperature and the topography are all mixed up in it together.

How much carbon dioxide is produced from nuclear generation? Certainly the power plants do not generate carbon dioxide directly. But there are indirect carbon implications – including from uranium mining and fuel fabrication, which are very energy-intensive activities.

In a 2001 study, Jan-Willem Storm van Leeuwen and Philip Smith commented that: 'The use of nuclear power causes, at the end of the road and under the most favourable conditions, approximately one-third as much CO2-emission as gas-fired electricity production. The rich uranium ores required to achieve this reduction are, however, so limited that if the entire present world electricity demand were to be provided by nuclear power, these ores would be exhausted within three years. Use of the remaining poorer ores in nuclear reactors would produce more CO2 emission than burning fossil fuels directly'. They developed this analysis in subsequent studies: www.stormsmith.nl

This analysis was strongly challenged by the World Nuclear Association, which disputed some of the figures and assumptions, and the nuclear industry has also pointed to the use of in situ leaching techniques that are claimed to reduce the energy costs of uranium ore extraction: www.world-nuclear.org/info/inf11.htm

However this claim, and the WNA figures, as well as the estimates by Storm van Leeuwen and Smith, have been challenged by Prof. Danny Harvey. In a new textbook on Carbon Free Energy (Earthscan), he estimates the 'energy return over energy invested' (EROEI) ratio for nuclear power production as being 19.5 for uranium ore grades of 1%, down to 17–19 for the current world average grade of 0.2–0.3%. For an ore grade of 0.01%, the EROEI ratio drops to 5.6 for underground mining and to 3.2 for open-pit mining, but could be as low as 2 or as high as 10 for in situ leaching ('ISL') techniques. However, he suggests that ISL involves 'significant and irreversible chemical and radioactive contamination of underground aquifers.' And of the 5.4 million tonnes of identified uranium resources, he says only 0.6 million tonnes are amenable to ISL.

Although there can be debates about assumptions and methodology, the basic issue seems clear. As lower and lower grades of uranium ore have to be used, increasing amounts of energy are needed to make the fuel. Since the bulk of this energy will for the present come from fossil-fueled plants, the emissions they produce will undermine the advantage of the nuclear plants, which emit nothing at the point of generation, and ultimately could make the whole exercise pointless – you would be producing more CO2 than if you just used the electricity from the fossil-fueled plants directly as normal.

The assessment of when the so-called 'point of futility' is reached - when the energy used (and carbon produced) to mine and process the fuel exceeds the carbon-free energy produced by the reactor - depends on a variety of complex factors, including the energy efficiency of the fuel fabrication and enrichment processes, and how this energy is provided. Centrifuge methods are much less energy-intensive than the diffusion processes so far mostly used for enrichment, but it's hard to see how improvements in fabrication efficiency could continually compensate when lower and lower quality ores have to be used. The high-grade ores currently used contain around 2% uranium (20,000 parts per million), the lower-grade ores only 0.1% (1,000 ppm). Granite contains just 4 ppm and seawater around 0.0003 ppm. If we had unlimited cheap carbon-free energy, then maybe we could extract some of this, but then we wouldn't need to!
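One way to see the argument in numbers is a deliberately crude model: if all of the energy invested in the fuel cycle comes from fossil generation, the effective emissions attributable to a nuclear kWh are roughly the fossil emission factor divided by the EROEI. This is my simplification for illustration, not Harvey's or Storm van Leeuwen's methodology.

    # Crude "point of futility" illustration - not anyone's published methodology.
    fossil_cef = 500.0   # gCO2/kWh, assumed for the fossil electricity doing the mining etc.

    for eroei in (19.0, 10.0, 5.6, 3.2, 2.0, 1.0):
        print(f"EROEI {eroei:4.1f} -> ~{fossil_cef / eroei:5.0f} gCO2 per nuclear kWh")
    # ~25 g/kWh at EROEI ~19 (good ores); as EROEI falls towards 1 the nuclear kWh
    # carries the same burden as a fossil one, which is the point of futility.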

Other analysts have focused on the energy balance issue and compared nuclear with other options. A study by Gagnon of Hydro Quebec, looking at energy outputs to energy inputs ('energy payback ratios') over the complete life cycle, indicated that, at present, nuclear plants (PWRs) only generate up to 14–16 times as much energy as is required to build them and produce their fuel. By comparison, on-land wind turbines could produce up to 34 times as much energy as is needed for their construction (they of course don't need any fuel for operation). Moreover, this figure is likely to improve as new technologies emerge (an earlier paper by Gagnon had wind ranging up to 79), while, as we have seen, the figure for nuclear is likely to fall as lower-grade ores have to be used. (Gagnon, L., 'Civilisation and energy payback', Energy Policy 36 (2008) 3317–3322.)

The study of energy balances by Harvey mentioned earlier came to similar conclusions. While, as we have seen, he claimed that the EROEI ratio for nuclear was below 20 and possibly as low as 2, he found that the EROEI ratios for most major renewables were already better than nuclear's at current grades of uranium, and 'will be decidedly better at lower grades'. Solar PV, which is one of the more energy-intensive renewables, had an EROEI ratio of 10–20 at present, and this is expected to rise to over 20 given new technical developments, while the EROEI ratio for wind was, he calculated, already up to 50.

Perhaps the last word should go to Benjamin Sovacool from the National University of Singapore, who has produced a paper trying to resolve the differences in views on this issue. It assessed 103 life-cycle studies of the nuclear fuel cycle. He says that the quality of most life-cycle estimates is very poor, with a majority obscuring their assumptions (sometimes intentionally) and relying on poor and/or non-transparent data; but when one selects only the most methodologically rigorous studies, typical life-cycle emissions from nuclear plants appear to be about 66 g CO2e/kWh. Although that is less than the estimate of 112–166 g CO2/kWh produced by Storm van Leeuwen and Smith, it is more than most renewables and 10 times greater than the industry often claim for nuclear power – he says they typically put the life-cycle emissions from nuclear plants, including ancillary fuel fabrication and (in some studies) waste disposal, at 1–3 grams of CO2e/kWh.

Sovacool, B., 'Valuing the greenhouse gas emissions from nuclear power: A critical survey', Energy Policy 36 (2008) 2940–2953.

All of this may not matter if we are just talking about a few extra nuclear plants, but if larger programmes are envisaged here and elsewhere, then it begins to be important. It certainly matters if we are thinking in terms of a very major UK expansion along the lines of the 146 GW by 2050 seen as possible, if very ambitious, in the new DECC 2050 Pathways report, or even of the UK 'nuclear renaissance' programme envisaged by Robin Grimes and Bill Nuttall in their recent Science review paper (www.sciencemag.org/cgi/content/full/329/5993/79) and by the IMechE in its recent report, which seemed to back the earlier suggestion by Malcolm Wicks MP that nuclear should provide 35–40% of UK electricity 'beyond 2030': www.imeche.org/industries/power/nuclear

Hoar, the medium in which Jack and Jenny Frost work on our windowpanes and other canvases, is formed by the condensation of water vapour as ice. But there is also depth hoar, a product of Jack Frost's ingenuity underground, or rather under the surface because it forms in snow, not in the soil.

Glaciologists take a dim view of depth hoar. So do snow scientists, and so should you.

Snow is an excellent insulator, especially when it is not very dense and most of its volume is air. That is why igloos work: partly because air flows only inefficiently through the tortuous void spaces in the snow, and still or sluggish air is an even better insulator — not much use either at conducting heat or at carrying it around — than ice; and partly because, although they are better conductors of heat than the air, the snow grains are in limited contact with each other — so the contacts are thermal bottlenecks.

Good insulation means that the snow can be much warmer below the surface than at the surface. Or colder, but that doesn't favour depth hoar. In Jenny Frost's favourite subsurface setup, the snow at depth is near to the freezing point but the surface is very cold indeed. Because it is in close contact with lots of (frozen) water, the air at depth saturates with water vapour — no, wait, the air throughout the snowpack is saturated. The point is that the warm air below has a much higher capacity to hold water vapour than the very cold air above.

Air flow being inefficient, this gradient in concentration (saturation specific humidity, to get technical) is why water vapour diffuses upwards through the pores to a depth where, because of the cold, it condenses as the crystalline substance we know as hoar.
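The strength of that gradient can be put in numbers with a standard saturation-vapour-pressure formula for ice (the Magnus-type coefficients below are the usual textbook ones, and the two temperatures are just example values, not measurements from any particular snowpack):

    # Saturation vapour pressure over ice - standard Magnus-type formula,
    # example temperatures for a warm snowpack base and a cold near-surface layer.
    from math import exp

    def e_sat_ice_pa(t_c):
        """Saturation vapour pressure over ice, in Pa, for temperature in deg C."""
        return 611.2 * exp(22.46 * t_c / (272.62 + t_c))

    warm_base, cold_top = -2.0, -25.0
    print(f"{warm_base:6.1f} C : {e_sat_ice_pa(warm_base):6.1f} Pa")
    print(f"{cold_top:6.1f} C : {e_sat_ice_pa(cold_top):6.1f} Pa")
    # Roughly a factor of eight between the two - that is the concentration
    # gradient driving water vapour upwards through the pore spaces.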

Crystals like to begin to grow at solid nucleation sites, and the surfaces of the snow grains are perfect for the purpose. Beyond this point, things are explained well in a classic paper by Sam Colbeck. When the temperature gradient is very steep, the crystals like to grow as plates or facets that often join to form upside-down cup-like shapes. What is more, they begin to consume the grains on which they nucleated.

A vertically elongated facet is a better conductor than the mixture of air and grains at the same depth, so its base is slightly colder than average for its depth, while the top of the grain it is consuming is slightly warmer. This means that, at the scale of single facets and their grains, vapour tends to sublimate from the grain top and diffuse downwards to the tip of the facet.

Relying on this physics, Jack Frost can make lots of depth hoar in a single cold snap, say a few days. Sometimes a half or more of the snow gets turned into depth hoar. The resulting facets and cups are commonly a few millimetres across, and single crystals the size of your fingernail are not unknown. These are giants compared to the original snow grains, whose typical sizes might well have been much less than a millimetre.

That is why we are not keen on depth hoar. The cups look cool, but they have replaced not just countless small grains of snow but countless bonds between grains. Depth hoar is weaker than the granular snow it replaces because the giant crystals haven't had time to bond to each other, a phenomenon called "sintering".

What are the consequences? First of all, depth hoar is so friable that it makes retrieving shallow ice cores very difficult. Second, depth hoar complicates the interpretation of microwave emissions from snow and ice which we could otherwise use to estimate the accumulation rate. And finally, layers of depth hoar are among the prime reasons for avalanches. When they collapse, they make excellent slip surfaces for the snow above.

The glaciological attitude to depth hoar is not uniformly disapproving, though. A good place to grow depth hoar is near the bottom of autumnal snowfalls that rest on the so-called summer surface — the glacier surface as it was at the end of summer. When we come along at the end of the winter, we want to measure the mass balance, that is, the mass between the summer surface and the surface at the time of measurement. The depth hoar can be very useful as a marker.

But, all things considered, life would be simpler, and safer, if Jack and Jenny Frost were to concentrate on window art.


Peak Oil has grown from an obscure theory of some weirdos (or so it was perceived) into a mainstream theory - most disagreement now concerns the exact timing of peak oil, not the fact itself.

The Strategic Institute of the German Bundeswehr has now published a document on the implications of peak oil for security (more precisely: the study was leaked). The study is very well written and recommended as an essential read, not only for geostrategists but especially for those involved in global sustainability questions. In fact, at least in their wording, the authors care about such diverse issues as the environmental impact of unconventional oils and the impact of global-market-induced land-use change on indigenous populations. It is worthwhile to have a closer look at some of their results:

  •  While resource conflicts have existed before, peak oil poses a systemic risk for global economies, as oil is required for a multitude of energy-related processes and for chemical purposes (e.g. fertilizer production).
  •  Scarcity of oil is coupled with a concentration of the remaining resources, notably in the Middle East.
  •  Aggressive oil-resource policy used to be expensive in terms of exploration and political costs (e.g. China in Africa). However, with rising oil prices, the cost-benefit ratio will change in China's favour. With concentrated resources, the geopolitical leverage of oil-rich regions increases; this will be reflected in UN institutions.
  •  Scarce resources are most efficiently distributed via market mechanisms. However, moral hazard among national actors (trying to get preferred access to resources via bilateral agreements, and perhaps secret arrangements) may induce a straightforward prisoner's dilemma.
  •  It is speculated that unconventional resources in environmentally conscious nations will be used as a strategic reserve only (I don't see this as consistent with the current exploitation of tar sands in Alberta).
  •  Oil-dependent agriculture (both in terms of production processes and the associated transportation of intermediate goods and products) means increasing food prices, hitting poor city populations globally, and - together with increased demand for "bio"-fuels - increased pressure on land use.
  •  In fact, a sustained rise in global food prices could be the most harmful outcome of peak oil.
  •  Increasing oil prices induce (besides higher food prices) higher coal prices and greater demand for further electrification. The latter requires other resources (copper, lithium, etc.), possibly inducing further "peaks".
  •  Finally, there is a risk of reaching a "tipping point" via peak oil, i.e. higher food and transport prices trigger the collapse of industries (such as the car industry), setting off a global recession with banking crises and whatever else you don't want to think about.

The report has more to say on LNG and other crucial issues. As a bottom line, however, it repeatedly calls for proactive countermeasures, notably a reduction of oil consumption and more effort on renewable-energy infrastructure. That is a conclusion founded on a security-centred analysis, and one that is, perhaps surprisingly, broadly in accordance with both climate-change and sustainability concerns.
