

November 2010 Archives

How long have the Gamburtsev Mountains been there, deep in the interior of Antarctica? In a paper just published in Geophysical Research Letters, S.E. Cox and co-authors explain how they think they have the answer, which is a bit surprising.

Apatite is an interesting mineral. It contains most of the phosphorus in the Earth's crust, is familiar to many as the mineral that defines a hardness of 5 on Mohs' scale of hardness, and is unfamiliar to just about everybody as a basic constituent of tooth enamel. Its name comes from Greek apatao, "I cheat" — allegedly because of the variety of its forms, although all the chunks of apatite I have ever seen are a pleasing shade of light green with a hint of lemon.

One curious attribute of apatite is that uranium quite likes it, snuggling in happily, in trace amounts, into the basic structure of the crystal lattice. Every so often, an atom of uranium-238 splits into two fragments that set off at high speed, crashing through the molecules in their neighbourhood. The collisions slow the fragments down and eventually they stop, but not before having done a good deal of damage. The trail of wreckage is a fission track, and it can be brought to light under the microscope.

Here comes one of the more fascinating twists in the tale: the damaged crystal lattice gets better. It can heal itself by restoring the disordered array of molecules to something like its original tidy state, a process called annealing.

The payoff for the drudgery of counting fission tracks in apatite crystals is that annealing reduces the number of tracks in a way that depends principally on temperature and time. Above about 120°C, the so-called closure temperature, annealing erases the tracks as fast as they form. Below about 90°C, annealing is so slow that the number of tracks depends on the time elapsed since cooling through the closure temperature.

The temperature decreases as the apatite crystal travels upwards through the geothermal gradient, which is about 30°C for every kilometre nearer to the surface. The fission tracks tell us when the crystal was last at a depth greater than 3 to 4 km. (Very roughly. The geothermal gradient had to be guessed in this study.)

In other words, fission-track dating is a way to estimate long-term erosion rates.

How do you estimate the erosion rate of a mountain range buried beneath several kilometres of ice? You go to the sediments deposited offshore as a result of the erosion. Cox and co-authors went to Prydz Bay, offshore from Lambert Glacier, the largest outlet of the Antarctic Ice Sheet. It drains the northern part of the Gamburtsev Mountains. They sampled Eocene sediments, about 35 Ma (million years) old, and found fission-track erosion rates of the order of 10 to 20 m Ma⁻¹ that must have been sustained for at least 250 Ma.
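
For readers who like to see the arithmetic, here is a minimal sketch of how a closure depth and a cooling age turn into an erosion rate. The surface temperature, the exact closure temperature and the use of a single 250 Ma age are illustrative assumptions of mine, not details of the Cox analysis:

    # Back-of-the-envelope version of the fission-track argument above.
    # All input numbers are illustrative assumptions, not values from the Cox paper.

    SURFACE_T = 0.0     # assumed mean surface temperature, deg C
    GRADIENT = 30.0     # geothermal gradient, deg C per km ("about 30 degrees" above)

    def closure_depth_km(closure_temp_c):
        """Depth at which the crystal sits at the closure temperature, for a linear gradient."""
        return (closure_temp_c - SURFACE_T) / GRADIENT

    def erosion_rate_m_per_ma(closure_temp_c, cooling_age_ma):
        """Mean erosion rate needed to bring the crystal from that depth to the surface."""
        return closure_depth_km(closure_temp_c) * 1000.0 / cooling_age_ma

    for t in (90.0, 120.0):   # the partial-annealing window quoted above
        print(f"closure at {t:.0f} C -> depth ~{closure_depth_km(t):.1f} km, "
              f"~{erosion_rate_m_per_ma(t, 250.0):.0f} m per Ma over 250 Ma")

With these assumptions the answer comes out at roughly 12 to 16 m per Ma, comfortably inside the 10 to 20 m Ma⁻¹ range quoted above.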

Such rates are extraordinarily low. The Alps are shedding sediment at 400 to 700 m Ma⁻¹, and while the Appalachians are suffering rates of only about 30 m Ma⁻¹ they are much less rugged than the Gamburtsevs. The Gamburtsev rates are more typical of very low-relief terrains like the Canadian Shield. Incidentally, they are upper limits. The crystals sampled in this study are likely to have come from whichever part of the Lambert basin has been shedding sediment fastest.

The geomorphologists, then, have the problem of explaining why the Gamburtsev Mountains have been rugged without yielding significant detritus for several hundred Ma. One possibility is aridity. If the Gamburtsevs and their surroundings were a desert for most of the required time span, that would account for their not evolving very rapidly. It doesn't seem probable. They have been far from the desert belts for at least 100 Ma.

Burial beneath glacier ice seems like a better bet, according to the Cox paper. It also seems harder to swallow. Before, we glaciologists had the problem of the survival of alpine relief in the heart of Antarctica for tens of Ma, and the related problem of the apparent non-glaciation of the polar continent for tens of Ma before that. If Cox and co-authors are on the right track, the problem metamorphoses into trying to explain a protective ice cover on the Gamburtsevs even though they were not near a pole, and even though the rest of the world was warm. They are holding up what Winston Churchill called the flickering lamp of history, and the scene it reveals is decidedly murky at present.

As scientists modeling sustainable urban transport, we are confronted with a significant conundrum. On the one hand, non-motorized transport (NMT) comprises the most sustainable modes: neither walking nor cycling emits greenhouse gases, contributes to air pollution, produces noise externalities or causes congestion. Hence, we are of course interested in understanding what incentivizes NMT, and how the modal share of cyclists and pedestrians can be increased while satisfying mobility demands. On the other hand, however, the literature on the incentivizing factors of NMT and the cross-elasticities between other modes and NMT is sparse. This is to some degree because investments and political attention go into motorized transport; NMT is mostly treated as a given that will find its place and does not need further attention. Furthermore, the monetary costs of motorized transport make it an accessible object for transport economists, whereas the intrinsically non-monetary nature of NMT makes its incentives much harder to measure – a considerable knowledge gap results. As sustainable urban transport gains more attention, pedestrians and cyclists are shifting into the spotlight.

A recent study of Montreal cycling by Larsen and El-Geneidy helps to shed light on the relationship between bicycle lane availability and the attractiveness of cycling. These are some of the main conclusions:

  • Recreational cyclists are more likely to use bicycle facilities (e.g. bike lanes)
  • Frequent cyclists use lanes less and travel greater distances
  • Greater separation of bike lanes from vehicle traffic – e.g. by bicycle alleys – increases trip distance
  • Connectivity of the network matters: the longer bicycle lanes are, and the better they are connected with other bicycle lanes and facilities, the more attractive they become for users

The authors conclude by suggesting that physically separated bike lanes are best to encourage novice cyclists. The connectivity of a bicycle network may, however, be the most important design and investment criterion.

It would be very interesting to see more studies on this topic. What is, for example, the cost–benefit relation of different kinds of bicycle facilities (measured in $ infrastructure investment versus marginal increase in bicycle use)? How does this cost–benefit relation change as a function of bicycle network connectivity? Can these results be generalized to other cities?


The UK government has now dismissed three proposed sites for new nuclear power plants: first it dropped Dungeness in Kent and then Braystones and Kirksanton in the Lake District, which leaves eight locations, all of them at already established nuclear sites. The revised National Policy Statements, Regulatory Justification and Generic Design sign-offs are all more or less done, or will be done soon, with the Secretary of State deciding, following consultation exercises, that 'it would not be expedient to the making of his decisions to hold a public inquiry or other hearing'.

What's not sorted is the money. It may be hard to reform the market and the EU Emissions Trading System enough to make nuclear viable without formal subsidy. Energy and climate change secretary Chris Huhne's shifting definition of subsidies has raised some eyebrows. While he still says 'there will be no levy, direct payment or market support for electricity supplied or capacity provided by a private sector new nuclear operator', he now adds 'unless similar support is also made available more widely to other types of generation'.

So Huhne seems to end up diluting his initial hard anti-subsidy line: 'Arguably, few economic activities can be absolutely free of subsidy in some respect, given the wide-ranging scope of state activity and the need to abide by international treaty obligations. Our "no subsidy" policy will therefore need to be applied having regard to proportionality and materiality.'

Letting developers duck the full potentially huge insurance liability is no doubt one such grey area. The government says that it 'has not ruled out the maintenance of a limit on operator liability set at an appropriate level provided that it is justifiable in the public interest, is the right way of ensuring that risk is appropriately managed'. Energy minister Charles Hendry has explained that DECC were 'not ruling out action by the government to take on financial risks or liabilities for which they are appropriately compensated or for which there are corresponding benefits'.

Hendry has also indicated that, in addition to the EU-ETS floor price, other forms of support might also be offered – a capacity payment for low-carbon electricity generation and an obligation on suppliers to provide a certain proportion of low-carbon power (i.e. something like the Renewables Obligation extended to include nuclear).

One way or another the government does seem desperate to provide support for nuclear companies. But the financial risks are large – and apparently growing. Stephen Thomas, professor of energy policy at the University of Greenwich, says that when the 'nuclear renaissance' was first talked about six years ago, the capital costs quoted were about $1,000 per kilowatt. Now they have risen to about $6,000 per kilowatt. The problems with the European Pressurised Water Reactor (EPR) – a candidate for the UK programme – have further weakened the case for nuclear. EDF-Areva's EPR construction projects in Finland and France are now years behind schedule and heavily over budget. And the loan guarantees for new plant construction on offer in the US were evidently not enough for Constellation's EPR project – expected to cost $10 bn. It has now been abandoned, leaving EDF in an even more difficult financial situation: this was meant to be one of the follow-ups to the Finnish and French projects. In a new report on the EPR, Prof. Thomas says: 'From a business point of view, the right course for EDF and Areva seems clear. They must cut their losses and abandon the EPR now.'

http://216.250.243.12/EPRreport.html

Meanwhile, the other major contender for the UK programme, the Westinghouse AP1000, is also in trouble. Westinghouse has been told by the US Nuclear Regulatory Commission (NRC) to resubmit its assessment of aircraft impact on the AP1000 reactor. The NRC said that documents put to it did not include 'realistic' analysis.

The UK Health and Safety Executive and the Environment Agency have been engaged in a 'Generic Design Assessment' (GDA) of the AP1000 and EPR and issued a statement of design acceptability (SODA) for each design in August, though they said that there were still a 'number of potential issues still to be resolved' and put their conclusions out for consultation. They noted that GDA is 'solely to decide the acceptability of a design for permitting in the UK, and will not be used to express a preference for any particular design'. Following the consultation they hope to come to a view about the acceptability of the designs in June.

https://consult.environment-agency.gov.uk/portal/ho/nuclear/gda

Meanwhile opposition continues at the proposed UK sites, led by local groups like BANNG at Bradwell, which are particularly concerned about the plan to keep highly active spent fuel on site for many decades, and perturbed that the overall Justification Process has been completed before regulatory approval of the designs, and before we have any idea where the waste will ultimately go.

Opposition is even stronger elsewhere, notably in Germany, where in September yet another 100,000-strong protest in Berlin indicated the scale of resistance to the plan to delay the agreed nuclear phase out, and extend the operating lives of Germany's existing nuclear plants. More major demonstrations are likely, right up to the end of year deadline for the plan to become law, and beyond, with the government coalition's majority clearly being threatened. No-one in the government dares to even consider new replacement plants, much less a nuclear expansion.

A study by Engineering the Future (EtF), an alliance which includes the Institution of Civil Engineers (ICE) and the Royal Academy of Engineering (RAEng), brings together lessons learnt from past and current nuclear projects that, if adopted, should, it says, help to ensure the success of the future UK nuclear new-build programme. The report, Nuclear Lessons Learnt, includes a look at what went wrong at Olkiluoto 3 (Finland) and Flamanville (France).

In parallel with but, for practical purposes, independently of higher temperatures, we expect the environment to respond to an enhanced greenhouse effect with a more intense hydrological cycle. More evaporation where there is enough water (for example over the ocean) and a lot of evaporation already, and more precipitation where there is already a lot of precipitation. There are some pretty good indications that this is happening, but now a group of oceanographers has found more evidence in a surprising place (surprising to non-oceanographers like me, I suppose).

Kieran Helm and co-authors document just the kind of changes in the distribution of salt in the sea that you would expect if the hydrological cycle had intensified. Between 1970 and 2005 the maximum salinity of the water column, found at a depth of about 100 m, increased. In contrast, the minimum salinity, at about 700 m, decreased.

They analyzed the measurements by projecting them on to isopycnals, surfaces of constant density. The density of seawater increases when you add salt and decreases when you add heat. The payoff for the extra complexity is that heat and salt, added to or withdrawn from the ocean at the surface, are carried into or out of the interior of the ocean along these surfaces, and it is reasonable to interpret changes of salinity observed (strictly, inferred) on isopycnals as being due to changes at the surface.

The water balance of the atmosphere is a sort of zero-sum game. There isn't room up there to store more than the equivalent of a few tens of millimetres of liquid water. In the big picture, more evaporation means more precipitation, but probably in a different place. Added water vapour stays in the air for long enough, on average, to be carried up to several thousand kilometres by the wind before it condenses and falls back out.

The atmospheric water balance is usually studied in terms of the single quantity P − E, precipitation minus evaporation, which (because I used to be a hydrologist) I will call Q for brevity. If Q is positive, the surface beneath the air column we are studying is getting wetter. If Q is negative, the surface is getting drier. If the air column is over the ocean, and its Q is positive, the ocean beneath, which is already as wet as it can be, is getting fresher (less salty), while if Q is negative the ocean is getting saltier.

The simplest way to make sense of the Helm results is to interpret the 1970-2005 changes in the distribution of salt as due to increases in oceanic Q of 7% in the higher latitudes of the Northern Hemisphere and 16% in the Southern Ocean, with decreases of 3% in the tropics. Each of these changes is subtle but statistically significant. (Another recent analysis, by Paul Durack and Susan Wijffels, suggests that the numbers might be on the large side.)

What has this got to do with glaciers? For one thing, Q is not the whole story. Glaciers that lose mass, as most do nowadays, are freshening the ocean, and sea ice that melts, as at the surface of the Arctic Ocean, is doing the same. But the thing that really interests me from the glaciological angle is the challenge. The hydrologists and now the oceanographers have produced evidence for a more intense hydrological cycle, and by implication a more intense greenhouse effect. Can we glaciologists rise to the same challenge?

A global approximation of the climatic snowline. South Pole on the left, North Pole on the right. Each little square is at an altitude which is the average of many "mid-altitudes", each of which is the average of one glacier's minimum and maximum altitude.

A more intense hydrological cycle should make the shape of the snowline more curvaceous, lowering it by increasing snowfall near the equator and in the middle latitudes, and raising it by increasing evaporation in the desert belts. The snowline, remember, is at the altitude at which accumulation of snow is just balanced by losses due to melting and evaporation (actually, sublimation).

So the challenge is to detect snowline change due to the more intense hydrological cycle, against a background of snowline rise due to general warming. My guess is that, although it would be a big job, we might just be able to manage it. It would also be a race against time, because some of the most important glaciers for the purpose are losing mass so fast that they will not be with us much longer. But it would be worth the attempt, because demonstrating a change in the shape of the snowline is different from demonstrating simply that glaciers are losing mass, which in turn is different from demonstrating that the temperature is rising. The more independent but mutually consistent lines of evidence we have, the more confident we can be that we are on the right lines in interpreting what is happening to our world.

Recently I had a new article published in Environmental Research Letters, the journal associated with environmentalresearchweb. The title of the letter is "Energy intensity ratios as net energy measures of United States energy production and expenditures". In this letter I explore the Energy Intensity Ratio (EIR) as a proxy measure for energy return on energy invested (EROI). By calculating the EIR – dividing the energy intensity of a fuel (Btu/$) by the energy intensity (total energy consumption/GDP) of the overall US economy – I can track the relative cost of energy over time. In this way, the price of energy is scaled to the energy efficiency of the economy. Essentially, high EIR values mean that energy is cheap and is not constraining the economy. Low EIR values mean that energy is expensive and, if the value becomes low enough, energy costs can constrain economic growth because too much economic activity is spent obtaining and purchasing energy instead of on other activities.
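
As a rough illustration of the ratio (the numbers below are invented for the example; this is not the data or code behind the letter):

    # Illustrative calculation of the Energy Intensity Ratio (EIR) described above.
    # The input numbers are placeholders, not values from the letter.

    def eir(fuel_price_per_mmbtu, energy_use_btu, gdp_dollars):
        """EIR = energy intensity of the fuel (Btu/$) / energy intensity of the economy (Btu/$GDP)."""
        fuel_intensity = 1.0e6 / fuel_price_per_mmbtu      # Btu bought per dollar spent on the fuel
        economy_intensity = energy_use_btu / gdp_dollars   # Btu consumed per dollar of GDP
        return fuel_intensity / economy_intensity

    # Hypothetical economy: 100 quadrillion Btu of primary energy, $14 trillion GDP,
    # oil at $12 per million Btu (roughly $70 per barrel).
    print(eir(12.0, 100e15, 14e12))   # ~11.7: a dollar spent on oil buys ~12x the economy-average Btu per dollar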

A major benefit of this EIR approach is that it uses readily available data: energy prices, energy consumption totals, and gross domestic product (although GNP would also provide additional insight). Thus, this method connects economists (who believe in an efficient market in which price includes all information) and those in the energy analysis community who work to calculate EROI from core energy and materials data. The analysis shows that EIR is an effective proxy measure for EROI, as they follow the same trends over time.

Often people interpret the steady decline of the economy's energy intensity as an indicator that the economy is becoming more decoupled from energy consumption. However, as my paper shows, this is a misleading view. What matters more is whether or not obtaining energy also takes less energy input over time. As seen in the figure, during the 1970s the EIRs for oil, natural gas, and coal all dropped for over a decade (due to the Arab oil embargoes raising oil prices), and economic growth was negative for 38 out of 96 months (40% of the time) from November 1973 to November 1982 (also see www.nber.org/cycles/cyclesmain.html). It took a decade for the US to effectively break from the stagnant economy by investing in energy efficiency (vehicle fuel standards, appliance efficiency, etc), new energy resources and technologies (Western subbituminous coal, enhanced oil recovery), and largely removing oil from electricity generation.

A parallel scenario exists for the last ten years in that again the EIRs of coal, natural gas, and oil all dropped significantly, to levels not seen since the early 1980s. And again, at the end of this drop in energy quality came a prolonged economic recession (18 months from December 2007 to June 2009) from which the economy has not fully recovered. US unemployment has been above 9.5% for a length of time unprecedented since the Great Depression.

Image: http://environmentalresearchweb.org/blog/assets_c/2010/11/EIRPlot_RecessionMatching-1828.html

The conclusion from this analysis is that, three decades after the oil crises of the 1970s, we are essentially at the same point with respect to EROI and EIR as we were in 1980. In other words, for all of our technological advances in the last three decades – including computers, information technology, horizontal drilling and unconventional oil and gas development, and energy-efficient appliances – we are just treading water with respect to energy quality. The US economy broke free from the energy chains of the 1970s by using energy more efficiently, and that will be the key to new economic growth. Unfortunately, these efficiency investments can take another decade to pay off. Although not widely cited as the reason by most economists and "experts" on news shows, low-EIR and low-EROI energy supplies are the major reason why economists do not see near-term economic growth being as large as in the past.

The government's spending review brought fears that the government would backtrack or water down the existing Feed-In Tariff (FiT) for electricity and also the proposed Renewable Heat Incentive (RHI). A coalition of 22 groups, including the Renewable Energy Association, the National Farmers Union and the Federation of Master Builders, warned energy secretary Chris Huhne that cutting schemes that subsidise household generation of renewable energy would jeopardize job creation, energy security and greenhouse-gas targets. An open letter to Vince Cable and Danny Alexander from 64 companies, including E.ON & British Gas, adopted a similar stance: 'premature adjustments to the tariff would have a profoundly damaging effect on long term investor confidence in the clean tech and renewable energy sectors, and may cause investors to flee altogether'.

Energy and climate change minister Charles Hendry had said: 'We inherited a situation where we could see who was going to benefit commercially but we couldn't really see how it was going to be paid for and that it would create pretty substantial bills.'

Neither the existing FiT nor the RHI costs the government anything directly, other than administrative effort – it's suppliers and ultimately consumers who pay. But if the FiT leads to a take-up boom, these costs could grow faster than prices fall, outpacing the built-in price-degression mechanism, as arguably happened in Germany and Spain. And the government may then wish to limit the cost to consumers. The electricity FiT levels are due for reassessment in 2012, but it was feared that this might be brought forward prematurely.

One of the problems with the RHI is that, whereas it's relatively easy to identify who the suppliers are for grid electricity, and levy a FiT charge on them accordingly, heat is supplied by a range of companies in a range of forms – natural gas, propane, butane, oil, wood and other biomass, and even direct heat. And the scale is much larger than for electricity – heat accounts for about 49% of UK energy end use. But the problem ought to be faced, and as the REA/NFU coalition argued: 'Costs come down when the industry can plan and invest with confidence, and economies of scale are achieved – that is one of the simple aims of these policy mechanisms.'

In the event, the campaigning seems to have paid off: the electricity FiT was left untouched for now, and the RHI will go ahead, although cut back to £860 m p.a. and with its start delayed by two months, to June 2011. The government said: 'This will drive a more-than-tenfold increase of renewable heat over the coming decade, shifting renewable heat from a fringe industry firmly into the mainstream.' However it added that it would 'not be taking forward the previous administration's plans of funding this scheme through an overly complex Renewable Heat levy'.

The government also noted that the existing Feed-In Tariffs will be refocused on the most cost-effective technologies, saving £40 m in 2014–15. 'The changes will be implemented at the first scheduled review of tariffs, unless higher than expected deployment requires an early review' – presumably because of high-cost solar PV.

There may be a case for changes, but it does seem sensible to leave the FiT system to bed in first to see how it goes. Friends of the Earth had commissioned Arup to review the current Feed-in Tariff. The report Small Scale Renewable Energy Study: FIT for the Future uses financial modelling of the performance of 20 generic renewable-energy schemes, and concluded that for some technologies, it could 'seriously damage investor confidence' to amend the tariff levels before the end of the previously announced review period in 2013.

Arup found that, while there were some perverse scale effects for wind and also PV projects, due to the structure of the FiT price bands, in some cases the FiT could work very well (e.g. a community co-operative that buys a 1.5 MW wind turbine could earn a 15.9% return on investment annually for 20 years). This would mean the scheme would pay for itself in seven years. But micro-wind only had an Internal Rate of Return (IRR) of 7%. Micro hydro was in the range 10–13% IRR.
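
Incidentally, the seven-year figure follows almost directly from the 15.9% return, if one assumes – my simplification, not Arup's cash-flow modelling – that the return is a constant fraction of the upfront capital:

    # Simple payback implied by a constant annual return on the upfront capital.
    # This is an illustrative simplification, not the Arup model.

    def simple_payback_years(annual_return_fraction):
        return 1.0 / annual_return_fraction

    print(f"{simple_payback_years(0.159):.1f} years")   # ~6.3, i.e. roughly seven years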

On the heat side, heat pumps had an IRR of 7%, unless used in off-gas-grid contexts, when they reached 12%. Domestic-scale biomass boilers had very good returns, with an IRR of 18%, but biomass-fired micro-CHP was less attractive, with solid-biomass micro-CHP coming in at under 5%. Individual domestic solar heating was also very poor, with an IRR of only 3%, although grouped schemes were better. The IRRs for AD biomass were even lower.

So coming up with a viable RHI system is obviously going to be tricky. That point was made strongly in a report The Renewable Heat Initiative: Risks and Remedies produced on behalf of Calor Gas Ltd by the Renewable Energy Foundation. It said that the government should scrap the proposed Renewable Heat Incentive (RHI) scheme and start again because it would be bad for the sector by encouraging technologies that 'are not quite ready'. RHI was 'an expensive leap into the dark', relying on a major deployment of technologies that are new to, and untested in, the UK context. REF also uses government data to estimate that the RHI could, in practice, consume around 2% of the annual income of the poorest households – funds that REF claims will go directly towards reducing bills of the richest households, who are able to put up the initial capital for installations and so benefit from the RHI subsidies.

Dr John Constable, REF research director, said: 'It appears to be a severely regressive policy; I can't believe the previous government anticipated this impact as it is clearly an iniquitous policy. The only winners from this are those with initial capital to install the technologies in the first place.' This is the same argument that has been used by some against the FiT.

Overall REF claimed that the cost of the RHI could potentially increase the average domestic gas bill by 14% p.a. by 2020. Constable commented: 'The simplest thing to do is to stop it. It is in the public interest to cross this one off and start again. Otherwise, significant changes will need to be made to avoid the risks we have identified.' In free market mode, he added: 'Left to its own devices, the market will learn. The RHI on the other hand would embed and shelter bad technologies and bad implementations,' pointing to the recent Energy Saving Trust's report on heat pumps as an example. That had found that about 87% of heat-pump systems tested in the UK didn't achieve a system efficiency (COP) of 3, which the Trust considers the level of a "well-performing" system. And 80% failed to meet 2.6. EST blamed the use of multiple contractors for fitting systems instead of a single contractor as used in Europe, wrongly sized systems, complicated controls and a lack of education for householders using them. Obviously there are some issues to be resolved before the RHI can be sorted but, although the government did say that it would scrap the RHI levy idea, it clearly did not take REF's advice and scrap the whole thing.

The EST study: www.energysavingtrust.org.uk/Generate-your-own-energy/Heat-pump-field-trial

The FoE/Arup study: www.foe.co.uk/resource/reports/fit_for_future.pdf

As an interesting coda to the debate on the FiT and its possible amendment, energy minister Greg Barker seems to be worried about the recent boom it has created in solar farms – large ground-mounted PV arrays. He said that the government would not act retrospectively, but 'large green field-based solar farms should not be allowed to distort the available funding for domestic solar technologies'. Roof-mounted PV is probably preferable aesthetically, but what seems to be the issue here is a concern that the very limited FiT allocation will get used up rapidly by large commercial schemes.

Hannibal is not the only figure from deep in history who is known to have come close to noticing a glacier. One of the better known references to glaciation is from early renaissance times, in the Travels of Marco Polo.

There is a good deal of uncertainty about this book. Marco Polo set off from Venice in 1271, bound for the Orient. On his return to Italy in 1291 he was captured by the Genoese, who were then at war with Venice, and clapped into jail. The usual account is that he told the story of his travels to a cellmate, Rustichello of Pisa, who wrote them up in Old French. There is, however, no authoritative text. The travels were an immediate hit, and manuscript copies proliferated in several languages.

The uncertainty extends to the contents. It is unclear how close Marco Polo ever came to Mount Ararat, of which Rustichello says he said (in the English rendition of Henry Yule and Henri Cordier from 1902):

And you must know that it is in this country of Armenia that the Ark of Noah exists on the top of a certain great mountain on the summit of which snow is so constant that no one can ascend; for the snow never melts, and is constantly added to by new falls. Below, however, the snow does melt, and runs down, producing such rich and abundant herbage that in summer cattle are sent to pasture from a long way round about, and it never fails them. The melting snow also causes a great amount of mud on the mountain.

Except perhaps for some in the Icelandic sagas, this is one of the earliest glaciological remarks ever written down. It is therefore worth a close look. Resist the tempting byways (Noah's Ark; the pastoral aspect; the mud), and never mind whether it is the account of an eye-witness. This is an avenue for gauging the extent to which late 13th-century observers understood glaciers.

First, it is not true that no one can ascend Mount Ararat, as alpinists have shown repeatedly since the first ascent in 1829. This late date has more to do with lack of time, lack of inclination, and in short with attitude, than with any real difficulty. Of course Ararat is a long way from Italy, and there may have been a religious tint in the attitude of 13th-century Armenians. But the scientific attitude to glaciers, and to mountains generally, was a thing of the future.

Second, it is probably not true that the snow on top of Mount Ararat never melts. Ararat is about 50 km south of Yerevan on what is now the Turkish side of the River Araks. At 5,137 m above sea level in latitude 39.7° north, there should be at least a short season of above-freezing temperatures every year. But here Marco Polo was on the ball at least to the extent of recognizing, or even taking for granted, a basic fact of glaciology and meteorology: it is colder higher up. He was, however, more a traveller than an analytical thinker. Taken literally, his account implies that Mount Ararat should have been getting steadily higher and, probably, pointier.
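
A rough lapse-rate check makes the point about the summit snow. The valley temperature and the 6.5°C-per-kilometre lapse rate below are my assumptions for illustration, not measurements:

    # Rough check that Ararat's summit (5,137 m) should see a short above-freezing season.
    # The valley temperature and the lapse rate are assumed, illustrative values.

    LAPSE_RATE_C_PER_M = 6.5 / 1000.0   # typical free-air lapse rate
    VALLEY_ALTITUDE_M = 900.0           # approximate Araks valley floor
    VALLEY_SUMMER_MAX_C = 32.0          # assumed midsummer afternoon temperature in the valley

    summit_summer_max = VALLEY_SUMMER_MAX_C - LAPSE_RATE_C_PER_M * (5137.0 - VALLEY_ALTITUDE_M)
    print(f"~{summit_summer_max:.0f} C")   # a few degrees above freezing: some summit melting is plausible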

And so we come to the big gap in 13th-century understanding. How does the snow manage to stay perpetual at the top of the mountain but to stay ephemeral part way down? Apparently Marco Polo and his contemporaries didn't even notice the contradiction — that you cannot pile snow up indefinitely, as observed at the tops of mountains (including the Alps, only 200 km from Marco Polo's birthplace), without something having to give.

If this contradiction was difficult to recognize, it was yet harder to explain. What was required was the realization, first, that snow will turn into ice if it keeps on piling up, and then that if the snow keeps coming the ice must flow.

Neither of these discoveries was proposed until the 18th century, and neither was nailed down firmly until the 19th. Making the necessary intellectual progress called not just for more detailed observation, but for a change of attitude. To show that the ice moves you can put a stake in it, and measure its position accurately twice — not all that difficult. Why it was not sensible, or possible, to do this or to think this way in the 13th century, but it became sensible by the 18th century, is another question.

Barrage sinks


It's been a long running saga – over the years there have been many reports and studies on the idea of building a tidal barrage across the Severn Estuary. Most have said it was technically possible, but economically and environmentally challenging – although no final conclusion emerged, just more studies. The most recent study looked not just at the large Cardiff to Weston barrage idea, but also at other options for exploiting the large tidal range in the estuary – smaller barrages and tidal lagoons. However this time a decision emerged – basically they were all too expensive. The government's 'Severn Tidal Power Feasibility Study: Conclusions and Summary Report' in October seems to have finished off tidal range projects in the UK, at least for the moment. Although it did say that the results for other locations around the UK might be different, it is hard to see how they could do better than the Severn – the best site by far in terms of tidal range. Given that most environmental groups strongly opposed large barrages, the government decision not to provide support did not lead to complaints from them about 'ignoring green options'.

The main conclusions were pretty clear. 'The Government has concluded that it does not see a strategic case to bring forward a tidal energy scheme in the Severn estuary at this time, but wishes to keep the option open for future consideration. The decision has been taken in the context of wider climate and energy goals, including consideration of the relative costs, benefits and impacts of a Severn tidal power scheme, as compared to other options for generating low carbon electricity. The Severn Tidal Power feasibility study showed that a tidal power scheme in the Estuary could cost in excess of £30bn, making it high cost and high risk in comparison to other ways of generating electricity.'

www.decc.gov.uk/severntidalpower

The report text is a little less abrupt. It says: 'The Cardiff-Weston barrage is the largest scheme considered by the study to be potentially feasible and has the lowest cost of energy of any of the schemes studied. As such it offers the best value for money, despite its high capital cost which the study estimated to be £34.3 bn including correction for optimism bias. However this option would also have the greatest impact on habitats and bird populations and the estuary ports.'

It went on: 'A lagoon across Bridgwater Bay (£17.7 bn estimated capital cost) is also considered potentially feasible, as is the smaller Shoots barrage (£7bn). The Bridgwater Bay lagoon could produce a substantial energy yield and has lower environmental impacts than barrage options. It also offers the larger net gains in terms of employment.' By contrast: 'The Beachley Barrage and Welsh Grounds Lagoon are no longer considered to be feasible. The estimated costs of these options have risen substantially on investigation over the course of the study.' It added: 'Combinations of smaller schemes do not offer cost or energy yield advantages over a single larger scheme between Cardiff and Weston.'

It noted that, in addition, the study funded further work on three proposals using innovative and immature technologies (the Severn Embryonic Technologies study). It said: 'Of these, a tidal bar and a spectral marine energy converter showed promise for future deployment within the Severn estuary – with potentially lower costs and environmental impacts than either lagoons or barrages. However, these proposals are a long way from technical maturity and have much higher risks than the more conventional schemes the study has considered. Much more work would be required to develop them to the point where they could be properly assessed.'

The spectral marine energy converter (SMEC), which makes use of the venturi effect, tapping off tidal flows to drive a separate generator, certainly looks interesting for the Severn and also, in smaller versions, elsewhere.

See www.verderg.com/.

DECC had previously indicated that if a scheme had been considered viable for the Severn, further consultation would have had to continue up to maybe 2014, with construction then between 2015 and 2022, followed by operation perhaps in 2023. But all that is now wiped out. With DECC faced with spending cuts and the capital cost of the big barrage now put at £34.3 bn, it just wasn't viable as a public project and, with the generation costs of the other options put higher, private-sector interest would be unlikely. So tidal range seems to have been written out of the story for some while. Which leaves tidal currents – a much less invasive and rapidly developing approach, with, it is claimed (in the DECC 2050 Pathways study and the PIRC Offshore Valuation), a larger overall UK energy resource than offered by tidal-range projects.

Tidal-current turbines can be much more flexible and modular – although there are site limitations. It seems that the Severn estuary isn't suited to the pile-driven support system needed. It does seem tragic to ignore the huge tidal energy potential of the Severn, so a tidal-range project or some other concept (e.g. SMEC) may yet be back there, and also elsewhere – there are several proposals for smaller barrages (e.g. for the Mersey, Duddon and Solway Firth) – but maybe not for a while. A rearguard action is being fought by some of the original backers of the Severn Barrage (see www.corlanhafren.co.uk/), but it seems to be doomed in the current financial and political climate.

I'll be looking at the tidal-stream options shortly. See also www.natta-renew.org for regular updates.

Abundant newspaper analyses point out the higher costs of renewable energies (e.g. 'Cost of Green Power Makes Projects Tougher Sell', New York Times, 7 November). However, both industry employees and economists who analyse intertemporal dynamics understand renewable energies to be cost-effective and socially desirable in the long term. What is the source of this conceptual divergence?

A recent paper by Christian Breyer and colleagues highlights a possible source of this divergence by looking at the photovoltaics (PV) industry (Breyer et al., 2010). Cost reduction is mostly influenced by growth rates and technological learning curves. A technological learning curve shows the cost reduction achieved by technological progress, economies of scale and market volume growth. Annual growth rates for PV have hovered around 30% for the last three decades, and learning rates around 20%. Hence, PV was initially economically unviable but can by now compete successfully on an industrial scale at geographically advantageous locations, and within a few years will do so in Western Europe and the northern US as well. In contrast, conventional electricity-generating technologies have lower growth rates, lower learning rates, or – in the case of nuclear energy – may even have negative learning rates (Grubler, 2010). Meanwhile, PV achieved a manyfold cost reduction with only 2% of the R&D costs of nuclear-power plants.
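
The arithmetic behind those two rates is the standard experience curve: costs fall by the learning rate with every doubling of cumulative production. The sketch below shows how quickly this compounds; the time spans, the starting point and the equating of cumulative growth with market growth are my own simplifications, not results from Breyer et al.:

    import math

    # Experience-curve arithmetic for the rates quoted above: a 20% learning rate per
    # doubling of cumulative production, with production assumed to grow 30% per year.

    LEARNING_RATE = 0.20
    ANNUAL_GROWTH = 0.30
    EXPONENT = math.log2(1.0 - LEARNING_RATE)   # about -0.32

    def relative_cost(years):
        """Cost relative to today after `years` of compound growth in cumulative production."""
        cumulative = (1.0 + ANNUAL_GROWTH) ** years
        return cumulative ** EXPONENT

    for y in (10, 20, 30):
        print(f"after {y} years: ~{relative_cost(y):.0%} of today's cost")

Under these assumptions costs fall to roughly 40% of today's level after a decade and below 10% after three decades, which is the kind of manyfold reduction the paragraph above describes.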

There is possibly a deeper explanation underlying these observations. PV is a highly sophisticated technology that relies heavily on knowledge, experience and research but less on resources. By contrast, economies of scale for nuclear and fossil-fuel technologies are fundamentally bounded by their respective fuels. In fact, because they are resource-based technologies, conventional technologies are more subject to decreasing returns to scale, whereas PV, as a very knowledge-intensive technology, is more subject to increasing returns to scale.

Another crucial property of PV – in comparison to coal and nuclear power plants – is its modularity, and also its lower energy density. As a result, PV allows for, and requires, many agents (including homeowners) and diffusion over many sites. Interestingly, rent seeking by investors can be (but is not necessarily) more equally distributed across the population, contrasting with the rent seeking of a few highly leveraged investors in resource-based, conventional technologies. This opposing view not only of environmentally friendly technologies, but also of socially desirable economic systems, is exemplified by the recent protests and demonstrations around Gorleben in Germany, with thousands of people blocking a nuclear-waste transport in cold November weather for several days and nights ('Despite Protests, Waste Arrives in Germany', New York Times, 8 November).

The GRACE satellites have transformed our understanding of how kilograms dance around on and beneath the Earth's solid surface, but nobody would claim that analyzing what they are telling us is a simple job. A recent analysis by Riccardo Riva and co-authors exemplifies this point.

The problems start with a list of technical details to do with processing of the raw observables. "Observables" is jargon, short for "observable quantities", but it is a valuable clue to how to think about the "inferables" that we are concerned about.

The point is that an "inferable", such as relative sea-level change, may be quite some distance down the chain of reasoning from the observable, which in this case is the rate at which the two satellites are accelerating away from or towards each other. This rate depends directly on all the gravitational attractions they feel at the time of each measurement. We want to remove the technical noise so that we can infer the signal of the fluctuating gravity field experienced by the satellites, and so infer the transfers of mass that explain the gravitational fluctuations.

One of the technical details, for example, has to do with spatial resolution, which for GRACE is about 300 km. But the regions between which mass is being transferred generally have quite sharp boundaries, for example the coastline. The jargon for this part of the problem, "leakage", is quite expressive. It hints that part of the signal we want has strayed out of our study region and into neighbouring regions.

Riva and co-authors have two study regions, the land and the ocean. Signal could be leaking either way across the coastline, but they argue that the oceanic signal of mass gain, expressed as relative sea-level change, is probably much smoother than the terrestrial signal of mass loss. So they simply define a 250-km wide buffer in the offshore waters and "unleak" all of its supposed signal back onto the landmasses.

There then follow a number of other corrections, including a correction for movements of mass within the solid Earth and a trial-and-error phase that seeks to undo the addition of some oceanic signal to the land signal during the unleaking phase.

Now the geophysical part of the problem can be addressed. Riva and co-authors reckon that +1.0 mm/yr of equivalent sea-level rise moved from the continental surfaces to the oceans between 2003 and 2009, give or take 0.4 mm/yr. This surprises me.

My estimate for the transfer from small glaciers (those other than the ice sheets) is about +1.2 mm/yr for the same period. Several recent estimates for the transfer from the Greenland Ice Sheet lie between about +0.5 and +0.7 mm/yr, and for the Antarctic Ice Sheet at about +0.5 mm/yr. (All of these abouts are partly because of the uncertainty of the measurements, or rather of the inferables, but also because of the difficulty of matching the different time spans of the analyses.) The glaciers, then, seem to be adding more than twice the mass to the ocean that is estimated by the Riva analysis.

It gets worse. Yoshihide Wada and co-authors, in a paper to appear shortly, argue that the mining of groundwater is running at present at a rate equivalent to +0.8 mm/yr. This addition is partly offset by the filling of reservoirs, estimated at −0.5 mm/yr over the past 50-60 years. The rate during the past decade is probably lower, because the frenzy of dam-building has abated somewhat recently. But it is not possible to get all of the continental surface contributions to add up to less than, say, +2.6 to +2.8 mm/yr, give or take perhaps 0.4 mm/yr.
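
Putting the numbers quoted above side by side makes the discord plain. In the little sum below I take the Greenland term at the middle of its range and the reservoir term at an assumed recent-decade rate of −0.3 mm/yr, which is my guess rather than a published figure:

    # Land-to-ocean contributions quoted above, in mm/yr of sea-level equivalent.
    contributions = {
        "small glaciers":       +1.2,
        "Greenland Ice Sheet":  +0.6,   # midpoint of the +0.5 to +0.7 range
        "Antarctic Ice Sheet":  +0.5,
        "groundwater mining":   +0.8,
        "reservoir filling":    -0.3,   # assumed recent rate, below the 50-60 year mean of -0.5
    }

    total = sum(contributions.values())
    print(f"continental total ~{total:+.1f} mm/yr vs the GRACE-derived +1.0 +/- 0.4 mm/yr")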

What we have here is stark discord, well outside the error bars, between several "inferables", and we haven't even got to the sea-level rise due to thermal expansion and the estimated sea-level rise itself. This is a classic example of unsettled science in a context of settled science. We can draw a diagram to depict the water balance of the ocean, or write down a little equation. A balance is, after all, simple arithmetic. The boundary between the settled and unsettled parts of the problem lies somewhere beyond the diagram, and indeed beyond the signs, + or −, attached to the various terms in the equation. But at the moment it is definitely before we get to the first decimal digits of the numbers, at least one of which must be wrong.

Green bonfires


Bonfire Night (5 November) came early this year with the government's 'Bonfire of the Quangos' in late October. It is to abolish or downgrade many quangos (quasi-autonomous non-departmental public bodies), notably the Sustainable Development Commission, and the long-established Royal Commission on Environmental Pollution, along (less worryingly) with the Infrastructure Planning Commission. The full list, of 200 or so, also includes British Nuclear Fuels Ltd, NESTA, the Design Council and, crucially, the Renewables Advisory Board, the Renewable Fuels Agency and even the Regional Development Agencies – who have been strong in backing renewables locally. Still evidently under review (although not necessarily for abolition, just reorganization) are the Environment Agency, the Carbon Trust, the Energy Saving Trust, and the UK Atomic Energy Authority.

Some of this was just sabre (or rather axe) rattling, and some of the agencies or organisations were pretty defunct shells (e.g. most of the UKAEA's work has been privatized, as has BNFL's). But some, like the SDC, the RCEP, the RAB, and the (so far untouched) EST and Carbon Trust, might be seen as crucial to the proper development of a sustainable future – although some rationalisation could be merited. The Regional Development Agencies will be sorely missed, but some of their work will be taken over by central government. And over the last few years the government has set up some new agencies and functions, which may in effect replace some of those now lost – notably the Climate Change Committee. Soon we may get more details of the proposed Green Investment Bank, which some see as eclipsing some of the Carbon Trust's functions. So far, all we've been told though is that the Department of Business Innovation and Skills will 'lead the creation of a UK-wide Green Investment Bank that will be capitalized initially with a £1bn spending allocation with additional significant proceeds from the sale of government-owned assets, to catalyse additional investment in green infrastructure'. That's less than the £3–4bn thought to be needed, and the £2bn initially proposed, but it's a start.

There has been some discussion about the future role of the Environment Agency. It's hard to imagine how it could be abolished. But actually these days, much of the running is being made by the Crown Estate, given that many new renewable-energy projects are offshore. And there are some interesting new issues emerging. For example, WWF recently highlighted an obscure legality in Crown Estate leases that continues to prioritize oil and gas exploration off the UK's coast to the detriment of renewables. Basically it seems the Crown Estate can terminate existing rights granted to offshore wind-farm operators whenever the government declares a license for oil and gas exploration in the same area. Not only can wind-farm operators lose their lease, but they face premature decommissioning costs when their lease is revoked and are not entitled to any compensation to recover expected financial returns.

WWF claims that such uncertainty over the financial viability of these leases could deter investors, with knock-on effects for the renewables industry and the future growth of the green economy. This, it says, comes in a context that is already very favourable to the oil and gas industry.

WWF says that where there is a conflict between offshore renewables and oil and gas exploration, priority should clearly be given to renewable energy projects, in light of the UK's climate-change commitments and the sector's potential to create a substantial number of new jobs in the UK. But the lure of oil and gas revenues and taxes may dominate. We need agencies that resist this sort of thing and push effectively for more progressive approaches. Sadly we may have lost some of them.

Much of the skill and expertise base created by the offshore oil and gas industry is of course very relevant to the newly emerging offshore wind industry, as is some of the supply chain and servicing infrastructure. So there ought to be positive opportunities for collaboration. But there are clearly also conflicts. As a further example, looking to the future, trade body Oil and Gas UK recently said that a number of planned wind farms around the UK potentially impinge on the operations of offshore oil rigs, and that there needed to be clearer legislation to avoid legal ambiguities over rights. In its submission to a government consultation on the National Policy Statement on energy policy, Oil and Gas UK said that the policy statements did not take account of the way offshore wind farms could impede mobile drilling rigs, disrupt helicopter flights and get in the way of pipelines and underwater equipment. And it added: 'It would be most unfortunate if individual licensees were forced to resort to legal processes in order to defend the rights granted under their existing petroleum licences.' Although, according to Windpower Monthly, Oil and Gas UK has denied reports that it was actually planning legal action against wind developers, the potential for conflict is clearly there.

We have got used to stand-offs between oil and gas exploration companies and Greenpeace, especially these days in deepwater sites. Are we to expect a new version, with wind developers defending their patch – as they too move out to deepwater locations? Maybe at some point, in the absence of an appropriate Quango, the Navy will have to intervene!

No Bonfire of the Heretics

On 4 November a Channel 4 TV documentary assembled some environmental heretics in an attempt to demonstrate splits in the green movement over nuclear power and GM, but their views were met with polite, if somewhat bemused, reactions from representatives of green organizations in a subsequent studio discussion session. Stewart Brand's minority views on nuclear have in any case been pretty much demolished in an un-hectoring analysis by Amory Lovins at www.rmi.org/images/PDFs/Energy/2009-09_FourNuclearMyths.pdf, while a conflict over their evidently very different views on climate change was avoided by not having Mark Lynas and Patrick Moore in the studio together.

The first impression of an East European city such as Poznan or Krakow in Poland, Chişinău in Moldova, or Sofia in Bulgaria is one of old, crumbling public transport systems and unrenovated housing stock in Soviet-style packaging. Indeed, the majority of inhabitants strive for higher material well-being, in particular those within the service industry (e.g. education and public health), which pays rather low salaries and is decoupled from gains in the rent economy.

Nonetheless, East European cities may be considerably ahead of their Western counterparts in crucial respects. Their public transport systems are comprehensive, and ridership is consistently high. In Sofia, Bulgaria's capital of 1.3 million people, distances within the city center are walkable, and access to peripheral parts of the city is mostly guaranteed by low-cost tramways and buses.

As a side effect of a high modal share of public transport and walkable distances, the inner part of the city is full of people, making public space enjoyable and relaxing. Indeed, as Jane Jacobs put it:

"Under the seeming disorder of the old city, wherever the old city is working successfully, is a marvelous order for maintaining the safety of the streets and the freedom of the city. It is a complex order. Its essence is intricacy of sidewalk use, bringing with it a constant succession of eyes. This order is all composed of movement and change, and although it is life, not art, we may fancifully call it the art form of the city and liken it to the dance – not to a simple-minded precision dance with everyone kicking up at the same time, twirling in unison and bowing off en masse, but to an intricate ballet in which the individual dancers and ensembles all have distinctive parts which miraculously reinforce each other and compose an orderly whole."

Jane Jacobs (The Death and Life of Great American Cities)

These East European compact cities, with their old, sometimes slow, public transport, came out of the Soviet era – and though they are of course the result of a planned economy, their form is also rooted in a lack of resources to build a car-dependent infrastructure. The built environment reacts slowly to external forces, and the historical circumstances induce a path dependency in the land-use transport interaction. As a side effect, the GHG emissions of urban transport are relatively low. Furthermore, future oil price rises, as for example anticipated by the IEA, may have little effect on the urban functioning of these cities. As such, East European cities may be more 'resilient' than more car-dependent (and richer) West European cities.

Of course, it would be social romanticism to idealize the existing infrastructure. To stay with transport: bicycle networks do not (yet) exist or are very poorly implemented; a relatively small number of cars, coming from more distant suburbs, swamps the city and causes safety issues for non-motorized transport (NMT), as well as noise and air pollution. Parking management, though formally in place, lacks enforcement, and wild parking causes considerable congestion and inconvenience for pedestrians, who walk on sidewalks that sometimes resemble adventure parks.

Even considering these considerable drawbacks, the overall picture is differentiated. While East European cities can surely improve on NMT infrastructure, modern public transport stock and car regulation, it is West European cities that are more vulnerable to external shocks, and they can learn to some degree from Eastern transport land-use models.

Suppose you have a kilogram of something, and you know where it is, somewhere near the surface of the Earth. And suppose it has been there for quite a long time.

It will have been obeying Newton's laws of gravitation, like all the other six trillion trillion kilograms. They will all have got used to each other, and will be relatively at rest, because all of the gravitational accelerations will have decreased to zero (pretend).

Now suppose you take your kilogram and put it somewhere else. It will attract all the other kilograms towards its new location, more strongly the nearer they are. Remember, Newton says that the acceleration drawing any two bodies together is inversely proportional to the square of the distance between them.

As kilograms move around, they induce other kilograms to move around as well. Recently Julia Fiedler and Clinton Conrad identified the steps in one part of this dance of the kilograms: the removal of about 10,000 trillion kilograms from the ocean into reservoirs since 1950. These kilograms represent a hypothetically uniform lowering of sea level by 28 mm, and a nearly equivalent displacement of fresh air by reservoir water. (Only nearly, because almost a quarter of the sea water has seeped into the aquifers beneath the new reservoirs.)
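
The 28 mm is easy to check by spreading the impounded water over the ocean. The ocean area below is a rounded textbook value, and the check ignores the quarter that seeped into aquifers:

    # Spread ~10,000 trillion kg of impounded water uniformly over the ocean surface.
    OCEAN_AREA_M2 = 3.6e14      # approximate area of the world ocean
    WATER_DENSITY = 1000.0      # kg per cubic metre
    impounded_kg = 10_000e12    # "about 10,000 trillion kilograms"

    drop_m = impounded_kg / (WATER_DENSITY * OCEAN_AREA_M2)
    print(f"~{drop_m * 1000:.0f} mm of hypothetically uniform sea-level lowering")   # ~28 mm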

The surface of the sea is an equipotential, a surface on which the gravitational potential is a constant. The value of the constant is of no interest, except that it is just right for accommodating all the sea water there is. Take some water out of the sea and the new sea surface is still an equipotential, but a different, lower one (the new constant is smaller).

Sea level, though, has been rising steadily. As the ocean warms, it expands — each of its kilograms takes up more space. And as the glaciers melt — which is where I come in — they add kilograms to the ocean. Since the early 1990s we have been able to track this rise with satellite altimeters, but for times before then we have to rely on tide gauges. A tide gauge measures RSL or relative sea-level, the distance between the sea-surface equipotential and the part of the solid Earth to which it is attached.

In the present context the solid Earth is more like toothpaste than rock. It moves because the reservoirs squeeze the toothpaste, which flows away towards where the kilograms came from. The solid surface falls beneath the reservoirs, so relative sea level rises there. There is a compensating relative fall, spread over the oceanic source of the kilograms.

But here comes a new arabesque of the dance. The dammed kilograms are busily attracting all the others — including the ones still in the ocean — towards the reservoirs. They have changed the shape of the sea-surface equipotential, which is higher (further from the Earth's centre of mass) near the reservoirs than it used to be, and lower over the oceanic source. For practical reasons we can only install tide gauges on coastlines, so they give us a biased view: no sampling at all of the open ocean, and an index of coastal RSL that deviates from the global average, gauge by gauge, depending on the number of kilograms we have moved into reservoirs nearby.

Fiedler and Conrad estimate that some gauges, in southern locations far from reservoirs, have been recording less than the global-average change of RSL that is due to the reservoirs. Others have been recording more, and at some the sea level has actually gone up simply because they are close to big reservoirs. Gauges in Ghana, not far from the 148 trillion kilograms that we moved into Lake Volta beginning in 1965, are good examples. But, based on a sample of 200 gauges, Fiedler and Conrad reckon that the tide gauges have been seeing only about −0.3 mm/yr instead of the true average reservoir signal, −28 mm over 58 years or about −0.5 mm/yr.

So the tide-gauge estimates of global-average sea-level rise are too high by +0.2 mm/yr. There are reasons for thinking that the necessary correction might be smaller, but the total rate (over the past few years) is in the neighbourhood of +2.5 to +3.0 mm/yr. Looking on the bright side, we have reached the stage of worrying about tenths of a millimetre. All the same, people like me, who try to estimate contributions to the water balance of the ocean, now have to learn new dance steps because the band is playing a subtly different tune.