March 2010 Archives
Holes in glaciers are of interest because of the work they do, transferring meltwater through the glacier. They are also interesting in their own right, although crawling into them is an acquired taste. They are dark and cold, you don't know where you are going, and you do know that the journey will get harder and even more dangerous the further you go. But this is just what attracts one group of people, the cavers or speleologists.
Two recent papers illustrate the (to me) doubtful pleasures of glacier speleology, while also offering valuable insights into how the meltwater finds its way into the hole and, once there, keeps on doing its job (which is to obey the forces of gravity and pressure, while also conforming to the constraints of thermodynamics).
Doug Benn and co-authors describe caving in three glaciers of diverse type, in Svalbard, Nepal and Alaska, while Jason Gulley has more detail about the ice caves of Benn's Alaskan glacier, Matanuska Glacier.
Even cavers draw the line, usually, at plunging into water-filled holes, but some of these caves are only partly full. Others are dry. The most favourable time for exploring ice caves is after the summer meltwater has gone, but before the ice has had time to squeeze them shut.
The photographs alone are worth it. Some of the dry caves show evidence of former water levels in the shape of sills. These are shelves of ice protruding from the cave wall. They record episodes in which the water began to refreeze at its contact with the overlying air, but then drained away.
Others have keyhole-shaped cross-sections, recording a transition from enlargement of a water-filled hole, by net melting all around the cross-section, to deepening of just the cave floor by a lesser flow of water.
Two of the caves have scalloped walls. The scallops are dish-shaped, and roughly dish-sized, indentations covering all surfaces around the cave wall. Their size reflects the vigour of interaction between the turbulent water flow and the wall, which is vulnerable to erosion by melting. Scallops are common features in many caves in limestone, where the wall is vulnerable to erosion because the rock is slightly soluble in water.
Veins of clear ice are common, running through the milky-white ice making up most of the cave wall. The veins record the refreezing of meltwater in a now-vanished crack. Refreezing yields clear ice. The milkiness of the milky-white ice derives from the abundant tiny bubbles that get left over because you can never squeeze all of the air out as you turn snow into ice.
Probably the most significant item of photographic evidence is that every one of these holes preserves some evidence of its origin as a much narrower crack. Sometimes you can see the crack in the cave roof, sometimes in the floor.
So the unifying theme of this work is hydrofracturing. Suppose your glacier already has a crack in it. If the crack is big enough we call it a crevasse. Whatever its size, it tells us that the body of the glacier is under a tensile stress exceeding the fracture toughness of ice. Whether the crack grows depends on the balance of forces at its tip, the point (in a two-dimensional diagram; in the glacier it is a line) where its width becomes negligible. This is where meltwater comes in. It is a thousand times denser than air, and may well be under the pressure of still more meltwater arriving from the glacier surface. The force balance is very different when the matter pushing against the wall of the crack is water instead of air.
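The density contrast can be made concrete with a back-of-envelope sketch (my own illustration, not from the papers): compare the pressure exerted at the tip of an air-filled crack with that of a water-filled one, against the pressure of the surrounding ice. The densities are standard values; the 30 m depth is just an assumption for illustration.

```python
# Back-of-envelope sketch: pressure at the tip of a 30 m deep crevasse,
# comparing an air-filled crack with a water-filled one.
# Standard densities (kg/m^3); the depth is an illustrative assumption.
RHO_ICE = 917.0
RHO_WATER = 1000.0
RHO_AIR = 1.2
G = 9.81  # m/s^2

def tip_pressure_balance(depth_m, rho_fill):
    """Return (fill pressure, ice overburden) in pascals at the crack tip."""
    p_fill = rho_fill * G * depth_m   # pressure exerted by the filling fluid
    p_ice = RHO_ICE * G * depth_m     # closure pressure of the surrounding ice
    return p_fill, p_ice

depth = 30.0
p_air, p_ice = tip_pressure_balance(depth, RHO_AIR)
p_water, _ = tip_pressure_balance(depth, RHO_WATER)

# Air contributes essentially nothing, so the ice wins and the crack closes.
# Water is denser than ice, so a brim-full crack is pushed open at the tip.
print(f"air:   {p_air/1e3:6.1f} kPa vs ice {p_ice/1e3:6.1f} kPa")
print(f"water: {p_water/1e3:6.1f} kPa vs ice {p_ice/1e3:6.1f} kPa")
```

Because water is slightly denser than ice, a crack kept brim-full of meltwater can in principle keep propagating all the way to the bed.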
It is now widely accepted that hydrofracturing — crack enlargement facilitated by meltwater — is a crucial piece of several glaciological puzzles, notably the disintegration of ice shelves.
I am glad to have these studies of holes because they increase my understanding of how surface meltwater gets to the bed of the glacier, where it spells trouble, or at least complexity. But it takes all sorts to make a world. No doubt my caving glaciological colleagues, in addition to being curious in general about glacier meltwater, are also glad to have the holes just so they can crawl into them.
As insights from the climate sciences become more and more policy relevant, dominate international negotiations and begin to reshape the industrial policies of advanced as well as developing economies, these insights themselves become a political issue. In recent months, some errors in the latest IPCC Assessment Report have rightly been uncovered, and there is appropriate pressure on climate scientists to work even more accurately. Yet these shortcomings have also been used to attack the credibility of (climate) science per se. To understand these public relations campaigns, a book published last autumn--i.e. before the campaign's start--is of some interest. Entitled Climate Cover-Up: The Crusade to Deny Global Warming and written by James Hoggan and Richard Littlemore, it accumulates valuable insights into the backdrop of what they call a "crusade".
The main points can be summarized as follows:
- A number of fake grassroots organizations (so-called astroturf groups) produce and distribute documents and press releases casting doubt on results from climate science.
- Grassroots organizations and higher-level think-tanks (such as the Cato Institute or the Heartland Institute) are said to receive funding from the oil and coal industries.
- The initial focus is on provincial media that don't have the resources to do their own research.
- A number of so-called climate "experts" are employed who typically lack expertise in climate science but - as a relevant qualification - know the PR business and how to frame messages.
- A main strategy is to induce doubt rather than to produce valuable counter-evidence. Rather than winning the argument, it is sufficient to keep climate science from winning it.
- The media are then overwhelmed with information and - rather than trying to understand the scientific literature themselves - prefer to present a two-sided story with "experts" from each side. Joe Romm specifically criticizes The Washington Post and The New York Times for this; these two outlets should have sufficient resources to do real science reporting.
Another piece of the strategy is labeled "The O.J. Simpson Moment" by Bill McKibben [see footnote below]. Altogether, Climate Cover-Up accumulates valuable information and evidence of how the public perception of climate change is shaped by public relations agencies. On the downside, Climate Cover-Up is a sometimes tedious read - more a small encyclopedia of climate misconceptions than a brilliant piece of journalism. However, one can appreciate the authors' effort to gather this information, as it provides the reader with the tools to understand the current wave of media outbursts. Of course, the dynamics have already changed again in recent months. It seems that rather than focusing on inducing doubt about climate facts, the post-Copenhagen game is now to induce doubt about climate science institutions such as the IPCC. Finally, Climate Cover-Up focuses mostly on North America; it would be interesting to read about the European side, too.
Footnote "If anything, [O.J. Simpson's defense team] were actually helped by the mountain of evidence. If a haystack gets big enough, the odds only increase that there will be a few needles hidden inside. Whatever they managed to find, they made the most of: in closing arguments, for instance, Cochran compared Fuhrman to Adolf Hitler and called him "a genocidal racist, a perjurer, America's worst nightmare, and the personification of evil." His only real audience was the jury, many of whom had good reason to dislike the Los Angeles Police Department, but the team managed to instill considerable doubt in lots of Americans tuning in on TV as well. That's what happens when you spend week after week dwelling on the cracks in a case, no matter how small they may be. Similarly, the immense pile of evidence now proving the science of global warming beyond any reasonable doubt is in some ways a great boon for those who would like, for a variety of reasons, to deny that the biggest problem we've ever faced is actually a problem at all. If you have a three-page report, it won't be overwhelming and it's unlikely to have many mistakes. Three thousand pages (the length of the latest report of the Intergovernmental Panel on Climate Change)? That pretty much guarantees you'll get something wrong."
Some see the term "clean coal" as an oxymoron – you can't burn coal without producing carbon dioxide as well as other environmentally problematic solids, liquids and gases. But it is possible to move towards "cleaner coal" by filtering out or capturing some of the emissions. We already do this for acid emissions, but since the sulphur content of most fuels is relatively small, SO2 is a relatively small issue: the big issue is of course carbon dioxide – the main product of combustion.
"Carbon Capture" is the buzz word – separating out the CO2 produced in power stations. It requires complex and relatively expensive chemical processes, and then you have to find somewhere to store the resultant gas. The options for that include old coal seams, exhausted oil and gas wells, and as yet undisturbed geological strata of various types.
Along with many others, the IEA Clean Coal Centre has been promoting Carbon Capture and Storage as vital for the future. They point out that around 40% of the electricity generated globally comes from coal and that this is bound to grow – as countries like China expand. If they don't adopt CCS, then climate impacts could be significant.
There are some CCS-type projects in China, including the Green Gen project due to come online next year, eventually planned to expand to 650 MW, but most of the running is being made in the US, and to a lesser extent the EU.
The IEA group suggests that globally there is the potential for replacing 300 GW of existing coal plant with new CCS plants and for 200 GW of upgrading with CCS. But it currently looks like we will only see around 29 GW of coal-fired CCS in place globally by 2020. So what's stopping us?
Firstly, and mainly, the cost. CCS adds perhaps 50% to the capital cost of a plant ($735/950 m for a new/retrofitted 400 MW plant). It also reduces overall energy conversion efficiency – some energy is needed to run the capture process. The IEA team says that the "energy penalty" is somewhere between 10–15% at present, although higher figures have been quoted, especially for CCS added on to existing plants. However, there are hopes of reducing the energy penalty to below 6% of output by 2030. Improved technology may also reduce the costs.
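To see what the energy penalty means in practice, here is a minimal sketch (my own, using the figures quoted above) of the saleable output left over once the capture equipment has taken its share:

```python
# Sketch of the "energy penalty": how much saleable output a 400 MW coal
# plant loses when some of its generation is diverted to run the capture
# equipment. The 10-15% penalty range and the 6% hope are the IEA figures
# quoted above; the plant size matches the cost example.
def net_output(gross_mw, penalty_fraction):
    """Electrical output left over after powering the capture process."""
    return gross_mw * (1.0 - penalty_fraction)

gross = 400.0  # MW
for penalty in (0.10, 0.15, 0.06):  # today's range, then the hoped-for 2030 value
    print(f"penalty {penalty:.0%}: net output {net_output(gross, penalty):.0f} MW")
```

So at today's penalties a nominal 400 MW plant delivers only 340–360 MW to the grid, which is part of why the cost per tonne of CO2 avoided is so sensitive to capture efficiency.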
The cheapest option at present, which can be added on to existing plants, is simple post-combustion capture, but it's inefficient – it is hard to extract the CO2 from the large volumes of low-pressure exhaust gases. It's much cleverer to extract CO2 at an earlier stage – as in pre-combustion capture systems. In these Integrated Gasification Combined Cycle (IGCC) plants the coal is gasified to produce a mix of methane, carbon monoxide, carbon dioxide and hydrogen. The carbon dioxide is then separated out while the other gases are used as fuel for power production. An extension of this approach is provided by firing with oxygen rather than just air – that increases overall efficiency, but adds complexity. Vattenfall has built a 30 MW (thermal) oxy-fuel demonstration plant. Meanwhile, there are many plans and programmes underway around the world (e.g. the EU has already put €1 bn into CCS, and the US $3.4 bn). Within the EU, RWE is planning a 450 MW IGCC unit, while the UK is planning four CCS plants, two pre-combustion and two post-combustion, to be backed by a new levy of 2–3% on electricity charges.
As this indicates, CCS is still a relatively expensive option for carbon reduction, at $35–70/tonne of CO2. Although there are hopes of it falling to $25–35/tonne of CO2, that's still much more than the current value of carbon under the EU Emission Trading System. So far CCS has not been eligible for support under the Clean Development Mechanism – India and Brazil, amongst others, have objected to its proposed inclusion. The fear is that supporting CCS will deflect resources away from renewables and other low-carbon options more relevant to them.
The second issue is eco impacts. It is conceivable that stored CO2 might suddenly be released in large amounts – and the resultant ground-hugging cloud of dense cold gas could asphyxiate any people it engulfed. The CCS lobby sees this as unlikely – we already pump gas into part-empty wells for Enhanced Oil Recovery, and oil and gas strata have stored methane and oil for millennia, so replacing them with CO2 should not lead to any risk of sudden catastrophic release. That may not be as certain with as yet undisturbed aquifers though (e.g. undersea earthquakes do occur). And on land storage in, for example, old coal seams, seems even more potentially risky, given nearby populations.
The IEA Group says that globally there is room for perhaps 40 Gt of CO2 in coal seams, and that is being followed up in the USA – where there has been some local opposition on safety grounds. There is more room, possibly for 1,000 Gt globally, in old oil and gas wells, but the big option is saline aquifers – maybe 10,000 Gt. For comparison, according to Vaclav Smil's Energy at the Crossroads: Global Perspectives and Uncertainties, in 2005 world CO2 production was around 28 Gt p.a.
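A quick sketch puts these capacities in perspective by dividing each by the roughly 28 Gt annual output quoted above (the round numbers are just the estimates given, nothing more):

```python
# Rough sketch: how many years of current global CO2 output each storage
# option could hold, using the capacity estimates and the ~28 Gt/yr
# figure quoted above (per Smil, for 2005).
ANNUAL_EMISSIONS_GT = 28.0

capacities_gt = {
    "coal seams": 40.0,
    "old oil and gas wells": 1000.0,
    "saline aquifers": 10000.0,
}

for option, capacity in capacities_gt.items():
    years = capacity / ANNUAL_EMISSIONS_GT
    print(f"{option}: roughly {years:.0f} years of current emissions")
```

On these numbers, coal seams could hold only a year or two of global emissions, old wells a few decades, and only the saline aquifers offer century-scale headroom.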
The main driver for CCS is clearly the fossil-fuel industries' desire to stay in business despite pressures to reduce emissions. In theory CCS can cut CO2 emissions from coal burning by 85% or more, and of course it's not just coal – some of the CCS projects involve natural gas. For example, French oil company Total has retrofitted a gas-fired plant at Lacq in the south of France with CCS in a £54 m pilot project, with the CO2 being sent down the existing pipeline back to a major local gas well at Rousse, which used to supply the plant, for storage at a depth of 4,500 metres. Moreover, power production may not be the only option – as noted above, coal gasification can produce a range of synfuels, some of which can be used for heating or for fuelling vehicles. Indeed, this may offer a way to improve the economic prospects for fossil-fuel CCS – by moving into new or additional markets.
However, a rival option is biomass. That could be burnt just for power production, or gasified to also provide synfuels, with, in either case, the CO2 being separated and stored. And if the biomass feedstock is replaced with new planting, then in effect CCS would mean not just zero or low net emissions, but an overall reduction of carbon in the atmosphere – negative emissions. Production of biochar from biomass, with CCS, is seen by some as an even better option. But there are land-use limits to the widespread development of biomass, whereas there is claimed to be a lot of coal available around the world, with large reserves in China, North America and Russia/Eastern Europe – enough for more than 150 years at current use rates (although estimates vary).
While many environmentalists would like coal to be left in the ground, some see coal CCS as not only inevitable but also positively attractive, at least in the interim, and possibly as a low-carbon alternative to nuclear – as long as it does not detract from the development of renewables.
For more information about Clean Coal, visit www.iea-coal.org.uk.
Thank goodness for isotopes. The conventional wisdom about the history of glacier ice used to allow for four ice ages in the geologically recent past. Then, in the 1960s and 1970s, oxygen-isotope records from ocean sediments obliged us to increase the number of ice ages, to eight in the past million years (Ma) and many more in the past three or so Ma. It also became clear — although clear should be in quotation marks — that there has been an ice sheet's worth of ice in Antarctica, apparently continuously, since about 14 Ma. More recently, it has become clear that major glaciation began in Antarctica around 34 Ma.
But there is an increasingly persuasive argument that clear should still be in quotation marks. Kenneth Miller and co-authors argue that the isotope records suggest episodic withdrawals of water from the ocean as far back as the later Cretaceous, 100 Ma ago. The only place to put the implied amounts of withdrawn water is into ice sheets.
The argument is appealing because it does away with a long-standing puzzle. Why hasn't Antarctica been glacierized ever since it first drifted into place over the South Pole, where it has been sitting for the past 100 Ma? Miller's answer is simple: it has.
Roughly two oxygen atoms in every thousand are of the heavier 18O isotope, with two more neutrons and therefore 2/16ths more mass than the lighter, more abundant 16O. (The superscript to the left of the elemental symbol O is the mass number, or number of neutrons and protons in each atom, of the isotope.)
Different isotopes of the same element are chemically indistinguishable. But any process that moves stuff around, such as evaporation, is likely to be sensitive to mass. It takes more energy to move heavier objects. Water molecules with an 18O instead of a 16O tend to lag behind in the liquid reservoir. Technically, they have a lower vapour pressure. So they also tend to condense out of the vapour phase more readily.
One result of this fractionation, on the scale of global glaciology, is that an ice sheet, necessarily fashioned out of ocean water, must be isotopically light (more 16O) and the ice-age ocean correspondingly heavy (more 18O). By coring the ice sheet, or the sediment that accumulates on the ocean floor, we get highly accurate and detailed records of — what?
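For readers who want the notation: records like these are usually reported in the standard per-mil delta form, the deviation of a sample's 18O/16O ratio from an ocean-water reference standard. A small sketch (the sample ratios below are illustrative round numbers, not from any particular core):

```python
# Sketch of the standard delta notation used to report oxygen-isotope
# records. delta-18O is the per-mil deviation of a sample's 18O/16O
# ratio from a reference standard; VSMOW (standard mean ocean water)
# is the usual one. The sample ratios below are illustrative only.
R_STANDARD = 2.0052e-3  # 18O/16O ratio of VSMOW

def delta_18O(r_sample):
    """Per-mil deviation of the sample's 18O/16O ratio from the standard."""
    return (r_sample / R_STANDARD - 1.0) * 1000.0

# Glacial ice is isotopically light (strongly negative delta); an
# ice-age ocean, having lost light water to the ice sheets, ends up
# slightly heavy (positive delta).
print(f"polar ice (illustrative):     {delta_18O(1.93e-3):+.1f} per mil")
print(f"ice-age ocean (illustrative): {delta_18O(2.0076e-3):+.1f} per mil")
```

The point of the per-mil scale is sensitivity: shifts of a few tenths of a per mil in ocean-sediment records are measurable and carry the glacial signal.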
There is a serious complication. Fractionation depends on the temperature as well as the isotope masses.
In ice-core records, the dominant influence is temperature. In ocean sediment cores, the signal due to sequestration of ice tends to be stronger. Unlike the ice sheets, the ocean does not suffer the preferential condensation of heavy oxygen that goes on as the evaporated water makes its way, cooling as it goes, to the site of snowfall. Ocean temperature is still a major confounding factor, however. The story is preserved in fossil micro-organisms, the shells of which are usually assumed to have the isotope abundances of the water in which they lived. But when it is colder the micro-organisms prefer water (and carbonate) molecules with more 18O.
Without information from some other source, therefore, Miller and his colleagues are tackling a problem that is underdetermined, with more unknowns than equations. They draw on several such sources, but the most important is a record of sea-level changes from New Jersey. Of course one sea-level record does not establish a case, but as they cannot resolve the lowest sea-level stands very well their estimates are conservative. The extra independent data turn the exercise into a kind of intellectual, if still speculative, triangulation: a heavy-isotope excursion is likely to be glacial if it coincides with a sea-level fall, and thermal if it does not.
I put the argument for Cretaceous glaciation of Antarctica on the persuasive side of the persuasive/convincing borderline. So many factors contribute to the way the world used to be — palaeogeography, greenhouse gas concentrations and short, sharp changes of sea level are just a few — and hardly any of the evidence remains. But work like Miller's is a fine demonstration of how tantalizing the frontier of knowledge can be. And without the isotopes we would never get anywhere at this speculative frontier.
For a 30-minute interview with myself and two others on the energy-water nexus topic, with particular focus upon renewable energy, visit Renewable Energy World.
What with new observations from space of the flow of water beneath the Antarctic Ice Sheet, and against a backdrop of long-standing knowledge that glaciers go faster in the summertime, holes in glaciers have seen an upsurge of interest. We are not talking about cracks in glaciers — crevasses — although the holes usually owe their genesis to pre-existing cracks. These holes, or moulins, are tubes of crudely circular cross-section, they are made by meltwater, and all the evidence suggests that we need to know a lot more about them if we want to understand glacier flow properly.
One way to find out more about natural holes in glaciers is to drill artificial ones. Among the more useful but fundamentally simple technologies in glaciology is borehole video. You lower a camera down your borehole and shoot. You may find, as did Luke Copland and his fellow-workers on Haut Glacier d'Arolla in Switzerland some years ago, that your own hole has intercepted a hole made by the glacier itself. That is, there is a hole in the wall of your own hole.
The forces at work inside glacier ice are varied, but only one can produce this kind of hole: transfer of thermal and mechanical energy from flowing meltwater. The borehole video is showing us conduits. It is reasonable to conclude that the water is coming from the glacier surface. But where is it going?
One thing we have learned from holes in the walls of boreholes, a few centimetres in diameter at most, is that some of the conduits of the englacial drainage system are small. We also know that some are not so small, because boreholes sometimes penetrate larger voids, and sometimes with disconcerting results. If you tap into a void that is filled not just with water but with water under pressure, you get a geyser.
So boreholes in general, and borehole video in particular, are showing us fragments of a complicated system that conveys meltwater through the glacier. Presumably the water sometimes ends its journey by refreezing, but if it can transfer enough heat to the conduit wall it will keep the conduit open and may even enlarge it. If that happens, the water will eventually reach the margin of the glacier or, more interestingly, the bed.
Finding englacial meltwater conduits the size of your finger is a pretty impressive feat, but there is a limit to the number of boreholes we can drill, and finger-sized holes are not physically impressive. A recent study by Catania and Neumann of holes in the Greenland Ice Sheet is impressive as to both technology and physical scale.
They used ice-penetrating radar to image the holes remotely. The ice was 400–600 m thick, and they made a number of assumptions, such as that the holes, with diameters of about a metre, are vertical cylinders. In their radar traverses the holes show up as strong diffractors, visually striking hyperbolic shapes, superimposed on the well-defined layering that represents the history of accumulating ice.
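Why a point-like conduit traces out a hyperbola is simple geometry: the two-way travel time to the diffractor grows with the antenna's horizontal offset from it. A sketch, assuming a mid-range depth from the figures above and the standard radar velocity in glacier ice:

```python
import math

# Sketch of why a point-like conduit appears as a hyperbola in a radar
# traverse: the two-way travel time to a diffractor at depth d, seen
# from horizontal offset x, is t(x) = 2*sqrt(d^2 + x^2)/v, which plots
# as a hyperbola against x. The velocity is the standard value for
# radio waves in ice; the depth is an illustrative assumption.
V_ICE = 168.0e6  # m/s, approximate radar wave speed in glacier ice

def two_way_time_us(depth_m, offset_m):
    """Two-way travel time (microseconds) to a point diffractor."""
    return 2.0 * math.hypot(depth_m, offset_m) / V_ICE * 1e6

depth = 500.0  # m, mid-range of the 400-600 m ice thickness above
for x in (0.0, 100.0, 200.0):
    print(f"offset {x:5.0f} m -> {two_way_time_us(depth, x):.2f} us")
```

The apex of the hyperbola sits directly over the hole, which is how the diffractors can be located in the traverse.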
The layers are downwarped in association with two of their holes. They argue persuasively that this is because the meltwater delivered by the holes keeps on melting the basal ice, releasing gravitational potential energy as it flows away. Further, to achieve observable downwarping the holes must be persistent. The system of conduits is embedded in a medium that is flowing slowly downhill. Any one hole ought to be short-lived, getting pinched off as the ice carries it away from its source of surface meltwater, or simply squeezed shut. The two persistent holes appear to have supplied meltwater to the bed for long enough, one to a few decades, to achieve about 30 m of further basal melting in one case and 15 m in the other.
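The energetics behind the downwarping can be sketched with a simple calculation (my own illustrative numbers, not from Catania and Neumann): the heat available for melting basal ice is the gravitational potential energy the water gives up as it flows away along the bed.

```python
# Sketch of the downwarping mechanism: meltwater flowing along the bed
# releases gravitational potential energy as heat, which melts basal
# ice. The discharge and head drop are illustrative assumptions.
RHO_WATER = 1000.0   # kg/m^3
RHO_ICE = 917.0      # kg/m^3
G = 9.81             # m/s^2
L_FUSION = 334e3     # J/kg, latent heat of fusion of ice

def basal_melt_rate(discharge_m3s, head_drop_m):
    """Volume of ice melted (m^3/s) by water losing head_drop_m of head."""
    power_w = RHO_WATER * G * discharge_m3s * head_drop_m  # heat released
    return power_w / (L_FUSION * RHO_ICE)                  # ice volume melted

# Illustration: 1 m^3/s of meltwater dropping 100 m of head.
rate = basal_melt_rate(1.0, 100.0)
seconds_per_year = 365.25 * 24 * 3600
print(f"about {rate * seconds_per_year:.0f} m^3 of basal ice melted per year")
```

Even a modest discharge, sustained for decades and concentrated along a flow path, can plausibly account for tens of metres of extra basal melting of the kind the downwarped layers record.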
Yet again we have fragments of an evidently complicated picture: two long-lived holes, several more short-lived ones (no layer downwarping), and part of the study area in which the diffractors are so numerous that they obscure the layering completely. The simplest explanation of the numerous diffractors is that they are more closely spaced, smaller holes, perhaps on the same scale as the finger-sized ones seen directly by Copland.
Evidently we still have a lot to learn about holes in glaciers.
A few weeks ago on This Week (ABC, http://abcnews.go.com/ThisWeek/video/exclusive-sen-alexander-9969974), US Senator Lamar Alexander (R-Tennessee) said the United States is now too complex for very large sweeping bills to be good for the country. The reasoning is that the bills are now so long that there are too many unintended consequences and surprises embedded in them. He thus pushed for more incremental bills to make continuous progress. On the other hand, President Obama says the health care system is so complex that you can't overhaul it in a piecemeal fashion. So which is it?
What do these conflicting statements from US elected officials say about the state of governing the United States, or perhaps the industrialized world generally, as regards reaching a point of diminished marginal returns on the complexity of how we are organized? And, in the reasoning of Joseph Tainter (http://www.cnr.usu.edu/htm/facstaff/memberID=837), do energy resources, or the lack of the per-capita abundance of the past, have something to do with our inability to solve new problems? I'll quote from an article on Slate's website (http://www.slate.com/id/2225820/):
"Over the last several decades, the number of bills passed by Congress has declined: In 1948, Congress passed 906 bills. In 2006, it passed only 482. At the same time, the total number of pages of legislation has gone up from slightly more than 2,000 pages in 1948 to more than 7,000 pages in 2006. (The average bill length increased over the same period from 2.5 pages to 15.2 pages.)
Bills are getting longer because they're getting harder to pass. Increased partisanship over the years has meant that the minority party is willing to do anything it can to block legislation--adding amendments, filibustering, or otherwise stalling the lawmaking process. As a result, the majority party feels the need to pack as much meat into a bill as it can--otherwise, the provisions might never get through. ... And as new legislation is introduced, past laws need to be updated. The result: more pages."
So governing the country is becoming more and more difficult due to increasing size and complexity. Theoretically, this requires more and more money and energy to operate the government and distribute services among the citizens. Given that US energy consumption has been effectively flat at between 99 and 101 quadrillion BTU (1 quad = 1 × 10^15 BTU) since 2004, perhaps this has finally caught up with us in the form of the mortgage and financial crisis causing the current recession. Economists are saying that they don't see jobs recovering much at all this year even if the overall economy does grow.
It is disappointing not to hear more discussion among politicians of how energy resource quality (measured by energy return on energy invested (EROI), net energy, etc.) could serve as an indicator of the future path of our society. I hosted a panel session at the American Association for the Advancement of Science Annual Meeting on "The Consequences of Changes on Energy Return on Energy Invested" (see: http://aaas.confex.com/aaas/2010/webprogram/Session1710.html). During this session we discussed how the quality of energy resources (primarily fossil fuels) as measured by EROI is getting lower. Thus, the same amount of energy production (in total Btu/yr) at a low EROI is not able to sustain the same level of complexity and growth as when that same quantity of energy has a higher EROI. More fuel, and more of the economy, is needed simply to support the functioning of society, and society must rearrange itself. Many people believe this rearrangement is happening via a switch to alternative energy resources such as renewables for liquid fuels and electricity, but these resources are inherently inferior (thinking only from an EROI standpoint) to the fossil fuels we have used in the past and are still consuming today. Thus, energy systems must inherently get simpler, not more complex. It is not clear whether the "smart grid" is simpler or more complex. In some instances it allows decisions to be made more locally, and that sounds simpler. On the other hand, there are more decision-making nodes, and that sounds more complex. I'm inclined at the moment to think that the smart grid is an increase in complexity, but this is a ripe area for future research.
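The surplus-energy arithmetic here is worth making explicit: of every unit of gross energy produced, 1/EROI has to be reinvested in producing energy itself, so only the remainder is available to run the rest of society. A minimal sketch (the EROI values are illustrative, not measurements):

```python
# Sketch of why lower EROI leaves less surplus for the non-energy
# economy: of each unit of gross energy produced, 1/EROI must be
# reinvested just to keep producing energy. EROI values illustrative.
def net_energy_fraction(eroi):
    """Fraction of gross production left over for the rest of society."""
    return 1.0 - 1.0 / eroi

for eroi in (100.0, 20.0, 5.0, 1.5):
    surplus = net_energy_fraction(eroi)
    print(f"EROI {eroi:5.1f}: {surplus:.1%} of output is surplus")
```

Note the non-linearity: the drop from an EROI of 100 to 20 costs only a few per cent of surplus, but the drop from 5 to 1.5 is catastrophic. That is the "net energy cliff" that makes resource quality, not just quantity, the relevant variable.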
I call on the energy community to push for a more integrated approach to thinking about how critical energy quality is to economic production and societal organization. Instead of blaming the current politician in office for running up the budget or spending too many tax dollars, we need to show that our future options for private and public services are fundamentally limited by the quantity and quality of the energy resources we consume. Thus, we should not be surprised that our politicians are having extreme difficulty solving the current challenges. The smaller amount of excess energy floating in the economy simply demands that actions be performed much more precisely, with less and less room for error. When there is excess energy available, you can more easily afford to mess up, and, for that matter, to clean up your mess.
Under Ofgem's new 'Green Energy Supply' Guidelines, launched in February, suppliers offering 'green electricity' to consumers under the voluntary tariff system must demonstrate that their green tariff involves a commitment above and beyond what is required from existing government targets for sourcing renewable electricity and reducing emissions. In most cases that will involve some sort of fund to support additional projects, which might include community-scaled renewables or energy saving projects, or even carbon offsetting projects.
The rules for domestic tariffs in the new scheme require that offsetting projects save or avoid the emission of at least 1 tonne of carbon dioxide equivalent annually, and 50 kg of CO2 equivalent emissions p.a. for all other environmental activities, such as community-scaled renewable-electricity projects, these all having to be additional to that saved from any existing programmes (e.g. as counted within the Renewable Obligation (RO)). Basically they can't just use power already credited under the RO. To meet the new rules they must do more, and the new scheme provides specifications, which will be accredited by an independent panel, overseen by the National Energy Foundation. Visit www.greenenergyscheme.org.
The voluntary green-power market has always sat uneasily on the margins of the UK renewables market – which is driven by the Renewables Obligation. All electricity consumers already pay their suppliers extra for that, so the voluntary green-power schemes have to offer something else to give extra value – they just can't charge extra twice for the same electricity used to meet the supplier's RO requirements. Most suppliers have already been offering additional green benefits to justify premium prices – some have set up funds to support green projects. But not all have charged more. For example, npower set up a self-financed fund for its Juice scheme to support new marine-renewables projects – it has reached over £2 m so far.
Quite a range of schemes has emerged, with some confusion and indeed scepticism about the validity of some of the claims to 'green-ness' being made. The new rules put these schemes, and their additional elements, on a more formal basis.
All the large main suppliers including British Gas, E.On, EDF Energy, RWE Npower, Scottish and Southern Energy and Scottish Power, and linked groups, have signed up to the new scheme, as well as independent supplier Good Energy.
Unlike the 'big six' suppliers, who also sell non-green power, Good Energy buys in and sells 100% green power from mostly local independent sources – and retires any Renewable Obligation Certificates (ROCs) it gets, rather than selling them on. So it claims that it will help renewables to expand, since the value of ROCs will rise proportionately. The other main independent, Ecotricity, sells a roughly 50/50 mix of green and conventional power, which it sees as reasonable since that is still four times as much green power as is currently required by the RO targets. It also charges a premium green-tariff rate, but says the income helps it to invest in new renewable-energy projects – and it certainly has been pushing ahead with major wind projects.
However, Ecotricity has been very critical of the new Ofgem scheme and has not joined. It argues that the renewable energy used under the new tariffs will still come from Britain's same pool of RO-linked renewable electricity, which means that the big energy companies will not be required to build any extra major renewable-energy capacity. They will simply provide added-on schemes such as carbon offsetting, help with micro-generation, or energy-efficiency schemes.
When the guidelines were first proposed last year, Ecotricity's CEO Dale Vince said: "In these guidelines Ofgem are accrediting everything you can imagine except the thing that really counts – green electricity. Of course we believe in planting trees, protecting wildlife and cutting carbon, all of these things have an important role to play – but not in green tariffs. Green tariffs and consumer choice of green-tariffs – people power – could play a crucial role helping us to reach government renewable-energy targets. But Ofgem has sidelined the consumer in one fell swoop by excluding real green electricity from its definition of so-called green-tariffs."
After the launch earlier this year he reaffirmed his view: "Green-electricity tariffs should be about more than feel-good charity schemes. If suppliers want to plant trees or even help old ladies across the road, I'm all for that but not under the guise of green electricity. Ofgem's new 'rules' set an artificial standard of what green electricity really is. This can only result in them becoming an expensive niche product in a charity ghetto, doing more harm than good. Consumers will get poorer, but Britain won't get any greener as a result of this."
That may be overstated – after all, the new scheme does require real carbon-emission reductions – but he may be right in principle: while some small community projects may get some support, it won't lead to extra capacity in the mainstream renewables sector. Basically the problem is that the government wants the Renewables Obligation to be the main vehicle for supporting renewables and sees the green consumer tariff as additional and voluntary. Certainly, so far, uptake has been marginal – only about 2% of UK consumers have signed up to such schemes. What's not clear is what will happen when the new Feed-In Tariffs (FiTs) for small projects come online from April onwards. Since it's outside the RO, will that power, including some from community projects, be available for 'voluntary' tariff schemes? That might change things, even though the FiT is also only seen as a small, marginal exercise, leading at most to a 2% contribution to UK electricity by 2020.
Elsewhere in the EU, Feed-In Tariffs and green-energy certificate schemes used by consumers are taken seriously, and have had major impacts. In the UK, though, they are still seen as marginal – the focus remains on the competitive, market-orientated RO, despite the fact that, so far, this has been poor at delivering much renewable-energy capacity.
For more on renewable-energy developments and policies, visit Renew: www.natta-renew.org.
As the most significant renewable fuel standard worldwide by volume, the details of this regulation merit a closer look – they help to illuminate the issues behind decarbonization policies in transport. The RFS2 states that
“EPA is making threshold determinations based on a methodology that includes an analysis of the full lifecycle of various fuels, including emissions from international land-use changes resulting from increased biofuel demand. EPA has used the best available models for this purpose, and has incorporated many modifications to its proposed approach based on comments from the public, a formal peer review, and developing science. EPA has also quantified the uncertainty associated with significant components of its analyses, including important factors affecting GHG emissions associated with international land use change.”
Specific lifecycle GHG emission thresholds were established for each of four types of renewable fuel, each requiring a percentage improvement on the lifecycle GHG emissions of gasoline or diesel. One of these fuels – ethanol produced from corn starch at a new natural-gas-fired facility using advanced, efficient technologies – will meet the 20% reduction threshold relative to the 2005 gasoline baseline (says EPA). Other fuels must meet the 50% or 60% benchmark.
Iowa harvest 2009 (Bill Whittaker, licenced under GNU free documentation licence)
While the EPA's lifecycle methodology is fairly comprehensive, a few important caveats were noted in a review of the RFS2 by Richard Plevin:
- EPA performs its analysis in a projected 2022 world, assuming a variety of technology changes. This is similar to accounting for today's emissions from coal power plants as if they had already implemented anticipated CCS technology. In 2012 all, and in 2017 most, of the corn-ethanol pathways analyzed by EPA fail to meet the 20% GHG reduction requirement, or even produce greater GHG emissions than the gasoline baseline.
- In the EPA model, corn ethanol achieves productivity gains without additional use of fertilizer. The peak of corn-ethanol production is reached in 2016 – inducing most of the ILUC – while the productivity assumptions refer to 2022, with an additional 9.4% crop yield. Hence, ILUC emissions are systematically underestimated.
- EPA attributes large soil-carbon sequestration to biodiesel, most likely for increased use of no-till. However, no-till may increase N2O emissions (Six et al., 2004). There is uncertainty on this issue, but EPA treats net soil-carbon sequestration as a fact.
- Cellulosic ethanol obtains a low GHG rating from co-product credits: electricity generated by biochemical cellulosic refineries is assumed to displace average US grid electricity. Taking the average US grid as the benchmark is a courageous assumption, and more detailed analysis could significantly change the lifecycle emissions.
- An additional supply of biofuels reduces the world market price of petroleum, thereby increasing petroleum demand. In one study, this global petroleum rebound effect is estimated at around 27%, implying that each MJ of biofuel replaces only 0.73 MJ of petroleum (Stoft, 2009). Hence, biofuels whose lifecycle emissions are less than 27% below the gasoline baseline could have a net positive global-warming effect. This effect is acknowledged but not modeled by EPA.
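To make the rebound arithmetic concrete, here is a minimal sketch of the argument. The gasoline carbon intensity below is an assumed, illustrative value (not EPA's figure), and the function name is hypothetical:

```python
# Back-of-the-envelope sketch of the petroleum rebound argument (Stoft, 2009).
# GASOLINE_CI is an illustrative carbon intensity, not an official number.

GASOLINE_CI = 93.0   # assumed gasoline lifecycle emissions, gCO2e/MJ
REBOUND = 0.27       # global petroleum rebound: each MJ of biofuel displaces
                     # only (1 - 0.27) = 0.73 MJ of petroleum

def net_saving_per_mj(biofuel_ci, gasoline_ci=GASOLINE_CI, rebound=REBOUND):
    """Net GHG saving per MJ of biofuel once the rebound is counted.

    Each MJ of biofuel avoids only (1 - rebound) MJ worth of gasoline
    emissions, while still emitting its own lifecycle emissions in full.
    """
    return (1.0 - rebound) * gasoline_ci - biofuel_ci

# A biofuel exactly 27% below the gasoline baseline just breaks even:
breakeven_ci = (1.0 - REBOUND) * GASOLINE_CI
print(net_saving_per_mj(breakeven_ci))          # no net benefit
# A fuel only 20% below the baseline is a net emitter under this argument:
print(net_saving_per_mj(0.80 * GASOLINE_CI))    # negative
```

Under these assumptions, even a fuel that meets the RFS2's 20% threshold comes out as a net emitter once the rebound is included.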
Interested readers should consult the detailed analysis by Richard Plevin (here). Perhaps most important is the treatment of uncertainty. EPA performs only a basic uncertainty analysis, and a number of uncertainties are completely ignored – most importantly the uncertainty about the fraction of land displaced by biofuels that must be replaced elsewhere, and the assumed production period (Plevin et al., forthcoming). As a result, numbers are presented with relative certainty where epistemic uncertainty dominates. There are two additional important issues that go beyond pure carbon accounting. First, there is considerable interaction between biofuel and food production. The EPA's comprehensive analysis treats reductions in food consumption, e.g. in India and Africa, as a GHG benefit; without this shift from food to fuel production, biodiesel from soybean would not meet the threshold. Second, the economic feasibility of large-scale cellulosic ethanol production is unclear. For example, the cellulosic-biofuel target for 2010 has already been scaled down by more than 90%.
In summary, EPA's carbon accounting should be treated with some care. In particular, today's corn ethanol may have higher GHG emissions than baseline gasoline (e.g., Hertel et al., 2010). Because the analysis focuses on potential 2022 technologies, this emissions disbenefit is insufficiently reflected. Some policymakers pressure EPA over corn ethanol, arguing that corn-ethanol production increases energy independence and creates jobs. However, from that perspective, pro-corn-ethanol policies should be designed around jobs and energy independence, rather than using the RFS2 as camouflage.
Hertel, T. W., A. Golub, et al. (2010). “Global Land Use and Greenhouse Gas Emissions Impacts of U.S. Maize Ethanol: Estimating Market-Mediated Responses.” BioScience 60(3): 223-231.
Plevin, R. J., M. O’Hare, et al. (forthcoming). The greenhouse gas emissions from market-mediated land use change are uncertain, but potentially much greater than previously estimated, UC Berkeley.
Six, J., S. M. Ogle, et al. (2004). “The potential to mitigate global warming with no-tillage management is only realized when practised in the long term.” Global Change Biology 10(2): 155-160.
Stoft, S. (2009). “The Global Rebound Effect Versus California’s Low-Carbon Fuel Standard.”
The shrinking extent of sea ice in the Arctic has been a cause of concern for some decades, and the record low extent measured with passive-microwave radiometers in September 2007 gathered a good deal of publicity. The September minimum was 7.11 million km2 on average during 1979-1998. In 2007 it was 4.30 million km2. The two minima since then have each been greater. 2008 saw the second lowest and 2009 the third lowest extent.
You can check out the state of Arctic sea ice at the National Snow and Ice Data Center in Boulder, Colorado. Thus far during the present winter, 2009-2010, the extent has been tracking pretty closely the course followed in 2007, so two successive years of increased minimum annual extent do not justify concluding that the ice pack might be recovering. Equally, there is no sign of an impending catastrophe at the top of the world, but we would still like to understand why 2007 was a record-breaker. There have been several interesting attempts to explain it.
A particularly interesting attempt by Ron Kwok and colleagues appeared last month. They focus on a detail of the bigger picture, the outflow of sea ice through Nares Strait, the narrow gap between northwest Greenland and Ellesmere Island. To put this study in context, we need to step up from thinking about sea-ice extent to thinking about sea-ice mass.
Very roughly, the ice is 3 m thick on average, for a total mass each average September of about 19,000 gigatonnes (but only about 12,000 Gt in 2007). The mass of the ice pack is the result of a balance between freezing, melting and export. The exported bergs and floes eventually melt, but not within the Arctic Ocean.
Most of the export, about 2000 Gt/yr, is through Fram Strait, between Greenland and Svalbard. Of the other possible outlets, the channels between the islands of the Canadian arctic archipelago contribute little. Apart from being narrow, they are most often blocked at their northward ends by plugs or "arches" of immobile ice. The arches form during the winter and persist until the end of summer, so that for much of the year there is effectively no southward ice export.
Kwok and colleagues found that no arch formed in 2007 at Nares Strait, which was therefore an open passageway for the full 365 days. Between 1998 and 2006, the open-channel state prevailed for only 140 to 230 days per year. From 2004 to 2006, when ice thickness measurements are available from the ICESat laser altimeter, the mass export was about 80-85 Gt/yr, but in 2007 it was 230 Gt/yr.
Why worry about such tiny amounts? The export through Nares Strait in 2007 was only 10% of that through Fram Strait, and insignificant in comparison with the total mass of the pack. The answer is that the ice in the Lincoln Sea, just north of Nares Strait, is some of the thickest, at about 5 m on average, in the whole Arctic Ocean.
The decline of Arctic sea ice is usually discussed in terms of its extent, but that is mainly because we have lots of information about extent. Measurements of thickness are harder to come by, and therefore so are estimates of total mass (area times thickness, multiplied by 900 kg m-3, the density of ice). But one of our main concerns about Arctic sea ice is that apart from shrinking in extent it is also getting thinner.
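The arithmetic is simple enough to sketch. The function below is just an illustration (the name is mine), using the rough September extents and the 3 m mean thickness quoted earlier:

```python
RHO_ICE = 900.0  # density of ice, kg per m^3

def pack_mass_gt(extent_km2, mean_thickness_m, rho=RHO_ICE):
    """Sea-ice pack mass in gigatonnes (1 Gt = 1e12 kg):
    area times thickness times density."""
    area_m2 = extent_km2 * 1.0e6   # km^2 -> m^2
    return area_m2 * mean_thickness_m * rho / 1.0e12

# September mean extent 1979-1998 (7.11 million km^2) and the 2007
# minimum (4.30 million km^2), both at a rough 3 m mean thickness:
print(pack_mass_gt(7.11e6, 3.0))  # about 19,200 Gt
print(pack_mass_gt(4.30e6, 3.0))  # about 11,600 Gt
```

Against these totals, the Fram Strait export of about 2000 Gt/yr implies a residence time of roughly a decade for the ice in the pack.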
The impact on ice extent of the free evacuation of thick Lincoln Sea ice in 2007 was small, but it depleted the thick end of the frequency distribution of ice thickness. An ice pack with relatively more thin ice is less likely to survive the summer melt season, so the non-formation of ice arches in Nares Strait constitutes a positive feedback, magnifying the vulnerability of the ice pack as a whole. And it may be a positive feedback in another sense. Presumably arching, that is, blockage, is more likely when the supply of thick chunks of ice is greater. If an episode of free outflow decreases that supply, future episodes of free outflow become more probable.
The debate on the UK's new Feed-In Tariff (FiT) has been quite lively, with the Guardian's George Monbiot arguing that, with solar PV being still very expensive, the way the FiT provided the support needed was economically regressive.
It does look that way at first glance – those who could afford to invest, say, £10,000 in PV might get £1000 p.a. back for the electricity they generated and used, paid for by all the other consumers, who would be charged extra via their electricity bills. It's been suggested that this would lead to an £11 p.a. surcharge on bills by 2020.
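As a quick sanity check on those figures (which are illustrative, not official tariff rates), the implied return and simple payback look like this:

```python
# Illustrative numbers from the paragraph above, not official tariff rates.
capex = 10_000.0         # up-front PV investment, GBP
annual_return = 1_000.0  # FiT payments plus bill savings, GBP per year

rate_of_return = annual_return / capex   # 10% p.a., before any costs
simple_payback_years = capex / annual_return

print(rate_of_return)         # 0.1
print(simple_payback_years)   # 10.0 years, ignoring interest and degradation
```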
However, in a rebuttal to Monbiot's analysis, Jeremy Leggett from Solar Century said "the average household levy in 2013, when tariff rates are all up for review, is likely to be less than £3" and he added "this is far less than the average saving from the government's various domestic energy efficiency measures over the same period. So there is no net subsidy. The levy is not 'regressive' at all".
The extra cost is certainly small, since the expected size of the FiT scheme is small – perhaps leading to only 2% of UK electricity by 2020 – so maybe this is not a major issue. But it is good to see that the government has now announced a "green-energy loan" scheme (part of its new "Warm Homes, Green Homes" strategy) under which energy-supply companies and others (e.g. the Co-op) may offer consumers zero- or low-interest loans for installing new energy systems, to be paid back out of the resultant energy savings. Details have yet to be agreed, but up to £7 bn may be made available over the next decade in this way – although it seems it will start off slowly, from 2012 onwards.
This scheme could help the less well-off to invest in new energy technologies like PV, and join in the FiT. Providing up-front loans via a "pay-as-you-save" system certainly seems likely to be more effective at ensuring wide uptake than just using revenue over time from a FiT. And there would be no extra charges on the taxpayer or the other consumers. So it could be popular.
There does seem to be a lot of support for self-generation. A YouGov survey for Friends of the Earth, the Renewable Energy Association and the Cooperative Group found that 71% of homeowners who were asked said that they would consider installing green-energy systems if they were paid enough cash. So perhaps, one way or another, uptake will be significant.
However, there are still some uncertainties. I argued in an earlier blog, before the UK FiT details emerged, that, while it worked very well for wind in Germany, using a FiT to push PV down its learning curve, to lower prices, might not be the most effective approach for PV.
Now we have the details of the tariff, which has set the price for PV so that those who install it get the same rate of return as those using other, cheaper options. This may be fine if you are desperate to get PV accelerated. That's a matter of judgment. For electricity, in the UK context, large-scale on-land and offshore wind is clearly a better bet for the moment in terms of price, and also the scale of the resource. But PV prices are falling, and it could well be next in line for expansion, helped by the FiT plus the loan scheme. Certainly there are benefits: localized generation using micro-power units like PV does avoid long-distance transmission losses, which can amount to up to 10% across the whole UK, and that is important.
However, domestic micro-generation has its limits – it's arguably the wrong scale. PV is one of the better options – there are no real technical economies of scale, except via bulk buying and sharing installation costs for larger projects. But micro wind is only relevant in a very few urban UK locations – larger grid-linked machines in windy places are much more efficient and cost-effective. Solar heating (to be supported under the forthcoming Renewable Heat Incentive) may be the best domestic option, but even there, there are economies of scale (e.g. grouped solar schemes sharing a large heat store, or even solar-fed district heating). The same goes for micro combined heat and power (CHP): larger-scale mini or macro CHP, linked to district-heating networks, is arguably more sensible.
Fortunately the 5 MW UK FiT ceiling, though low, gives us a chance to operate at slightly larger community scale, which may redeem the whole thing. See the excellent Energy Saving Trust report Power in Numbers, which states that "the economics of all distributed energy technologies improve with increasing scale, leading to lower cost energy and lower cost carbon savings and justifying efforts for community energy projects". And for some smaller-scale renewables, it adds that "it is only when action occurs at scales above 50 households, and ideally at or above the 500 household level, that significant carbon savings become available".
Palaeogeography is a seductive subject. The appeal of conjuring a vanished landscape out of a few strands of evidence and a good deal of restrained conjecture is irresistible. We know that the reconstructed world is imaginary, but with the right treatment of the evidence, under the right constraints, we also know that it must represent a realistic approximation to the way things were.
That is what Douglas Wilson and Bruce Luyendyk appear to have done for West Antarctica at the end of the Eocene, about 34 Ma (million years ago). Records of oxygen isotope ratios in the microfossils preserved in deep-sea sediment suggest that glacier ice began to accumulate in Antarctica at about that time. But models of the palaeoclimate have trouble simulating as much ice on East Antarctica as the isotopes suggest. East Antarctica, palaeoclimatically interesting in itself but not our focus here, is the larger, higher part of the continent.
West Antarctica is not big enough to house the missing ice. Most of it is below sea level. The prevailing wisdom is that you can't grow a marine ice sheet — that is, one with its bed below sea level — from scratch, at least not quickly. So, assuming we are reading the isotopes aright, where was the missing ice?
Enter Wilson and Luyendyk. Their geography of West Antarctica as it may have been at 34 Ma offers a persuasive answer: most of it wasn't below sea level back then.
The first step in the palaeogeographic reconstruction, starting from a map of the modern ice thickness, is to remove the ice and allow the underlying lithosphere to recover from the removed load, rebounding and flexing. You have to juggle with the calculated new surface elevations because in some parts, where the new surface is below sea level, the place of the ice load is taken by a new load of ocean water. This requires care, but it is not a deep problem. On the other hand the result isn't much help. Most of West Antarctica remains under water.
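A minimal, purely local (Airy-type) sketch of this first step is easy to write down: remove the ice load, let the lithosphere rebound, and, where the rebounded surface sits below sea level, account for the new ocean-water load pushing it back down. The densities below are standard textbook values, the local-compensation treatment is a simplifying assumption (Wilson and Luyendyk allow for flexure), and the function name is hypothetical:

```python
RHO_ICE = 900.0      # kg/m^3
RHO_WATER = 1028.0   # kg/m^3, sea water
RHO_MANTLE = 3300.0  # kg/m^3, fluid mantle rock

def deglaciated_bed_elevation(bed_m, ice_thickness_m):
    """Bed elevation (m, relative to sea level) after removing the ice load,
    assuming purely local isostatic compensation."""
    # Rebound from unloading the ice column:
    elev = bed_m + ice_thickness_m * RHO_ICE / RHO_MANTLE
    if elev < 0:
        # Sea water floods the depression, and its load depresses the bed
        # further. Solving e = elev - (-e) * RHO_WATER / RHO_MANTLE for the
        # final elevation e gives:
        elev = elev / (1.0 - RHO_WATER / RHO_MANTLE)
    return elev

# A bed at -1000 m under 2000 m of ice rebounds to about -660 m:
print(deglaciated_bed_elevation(-1000.0, 2000.0))
```

The example illustrates the point in the text: even after rebound, much of a deep marine bed stays under water, which is why the later corrections matter.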
The second step is to account for thermal contraction of the lithosphere. As constrained by palaeomagnetic and other measurements, the main feature of the tectonic-plate system around here until about 28 Ma, the West Antarctic Rift System, was the surface expression of the rising limb of a convection cell in the Earth's mantle. The convection stretched the overlying lithosphere while causing the two sides of the rift to spread apart. Now too high for the fluid mantle rock supporting it, the lithosphere subsided gradually. Wilson and Luyendyk estimate that the subsidence since 34 Ma has varied from 200 to 500 m across West Antarctica.
Apart from correcting for the subsidence, you also have to undo the stretching, moving the Pacific side of the rift some tens of kilometres back towards the East Antarctic side.
A lot of erosion can happen in 34 million years. How do you restore a landscape that has ceased to exist? The answer is that we know, first, that the erosion products go downhill, and second, that they have to end up somewhere. Most of them end up as sediment offshore, and not too far away. Wilson and Luyendyk rely on estimates of offshore sediment thickness for this, their third step. Not all of the sediment is due to erosion, a good deal of it being the fossils of marine microorganisms, and the eroded rock would have been denser than the deposited sediment. Nevertheless, their approach is very conservative: the volume they restore is only 13% of the volume of offshore sediment.
This step also requires corrections for rebound and subsidence. Shifting loads of sediment are just like shifting loads of water and ice when it comes to the response of the underlying lithosphere.
In the end, Wilson and Luyendyk found another 1.5 million km2 of land, turning the West Antarctica of 34 Ma from an archipelago into a landmass. This plausible landmass, imaginary as it is, is consistent with all the evidence and is constrained by basic principles of physics. In turn it makes what the isotope records imply, that there was quite a lot of ice at 34 Ma, more plausible.
Getting beyond plausibility is a challenge, but one that would be worth rising to because it would allow us to move on to the next questions. Why so much ice? And why then and not earlier or later?