
April 2010 Archives


One current climate and energy bill in the Committee on Energy and Natural Resources of the United States Senate is S. 1462, the American Clean Energy Leadership Act of 2009. The stated purpose of this bill is to:

" ... promote the domestic development and deployment of clean energy technologies required for the 21st century through the improvement of existing programs and the establishment of a self-sustaining Clean Energy Deployment Administration that will provide for an attractive investment environment through partnership with and support of the private capital market in order to promote access to affordable financing for accelerated and widespread deployment of-- (1) clean energy technologies; (2) advanced or enabling energy infrastructure technologies; (3) energy efficiency technologies in residential, commercial, and industrial applications, including end-use efficiency in buildings; and (4) manufacturing technologies for any of the technologies or applications described in this section."

To achieve the goal of the deployment of clean technologies, not research, a Clean Energy Deployment Administration (CEDA) is proposed to be established in the Department of Energy. The agency will be an independent administration within the DOE with a Technology Advisory Council to advise on the technical aspects of new technologies. CEDA is to provide different types of credit such as loans, loan guarantees, other credit enhancements as well as secondary market support such as clean energy-backed bonds that are aimed at allowing less expensive lending in the private sector.

The mission of CEDA is to help deploy (not research) technologies that are perceived as too risky by commercial lenders. Thus, the agency aims to promote riskier technologies but with high potential to solve climate and energy security needs. At the same time, a portfolio approach is supposed to mitigate risk and enable CEDA to become economically self-sustaining over time after getting initial seed capital allocated by Congress (possibly up to $16 billion from existing funds reallocated to CEDA).

If other private investors are also pursuing balanced portfolios of risky and safe energy investments, what exactly might be the difference between the government CEDA and a private equity energy investor? Would it be that CEDA has a mandate to invest only in energy and climate technologies, whereas a private fund can invest mostly in energy technologies or even change the energy-related portion of its portfolio over time? No doubt many would be skeptical that the government, even with private advice via the Technology Advisory Council, could run a profitable investment fund for clean energy, much less one obliged to invest in technologies that are too risky for the private market. It is also not clear how far $16 billion can go in this endeavor. For instance, for wind turbines (not a risky clean energy technology) at a cost of $2,000/kW, $16 billion could purchase 8 GW of installed capacity. Riskier and unproven technologies would be much more expensive, so the CEDA fund could fund only on the order of tens to perhaps hundreds of MW of effective installed capacity (via energy conservation or generation technologies), or less. If a new technology were deployed and operated successfully for a year or two at a scale of 0.1–1 MW, it would begin to be seen as less risky from an investment standpoint, and business-model and upscaling issues would take over in importance, with CEDA divesting and, one hopes, handing the reins to private capital. Thus perhaps a few dozen technologies could get funding from CEDA to expedite their deployment.
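As a rough sanity check on how far the money might go, here is a minimal sketch; the $2,000/kW wind figure and the $16 billion fund size come from the discussion above, while the cost levels for riskier technologies are purely illustrative assumptions:

```python
# Rough estimate of how much capacity a fixed fund could deploy
# at different assumed capital costs. The $16 billion fund size and
# the $2,000/kW wind figure come from the text above; the higher
# cost levels for unproven technologies are illustrative assumptions.

FUND_DOLLARS = 16e9  # possible CEDA seed capital

cost_per_kw = {
    "mature wind (text's figure)": 2_000,     # $/kW
    "risky technology (assumed)": 20_000,     # $/kW, illustrative
    "very early-stage (assumed)": 100_000,    # $/kW, illustrative
}

for tech, cost in cost_per_kw.items():
    capacity_mw = FUND_DOLLARS / cost / 1e3   # kW -> MW
    print(f"{tech}: ~{capacity_mw:,.0f} MW of installed capacity")
```

At the wind figure this reproduces the 8 GW (8,000 MW) quoted above; at illustrative costs ten to fifty times higher, the same fund buys only hundreds down to tens of MW.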

It is not clear what the returns to CEDA will be in what will surely be rare cases of success. CEDA is meant to be more creative and flexible than existing government programs that have loan guarantees as the only funding and assistance mechanism. On the grand scale of problems and budgets, $10-$20 billion on CEDA may be a worthwhile bet. After all, that's only about a dozen stealth bombers!

A cold snap that began about 6250 BC is attributed to catastrophic drainage of Lake Agassiz-Ojibway. Meltwater from the waning Laurentide Ice Sheet, ponded north of the Arctic-Gulf of Mexico drainage divide, was able to force open a channel beneath the ice and pour enough fresh water into the north Atlantic Ocean, via Hudson Strait, to weaken the meridional overturning circulation.

The palaeoclimatic records of the ice age tell us of other cold snaps lasting up to a millennium or two. The longest and best known, the Younger Dryas, began about 11,000 BC, lasted about 1,300 years, and is also thought to have been due to catastrophic drainage of North American meltwater.

The Younger Dryas is named after Dryas octopetala, a flower which is an indicator for the episode in records from Scandinavian peat bogs. And yes, there was an Older Dryas, and even an Oldest Dryas, but they are much less prominent and may not have been worldwide.

Dryas integrifolia, a cousin of the Younger Dryas's eponymous D. octopetala, growing on Ellesmere Island in northern Canada. Both are also referred to as the "mountain avens".

Now Julian Murton and co-authors describe what they believe to be evidence of the flood that triggered the Younger Dryas cooling. They worked on Richards Island, at the eastern edge of the Mackenzie River delta. The evidence consists in essence of an erosion surface truncating older glacial till and overlain by coarse gravel. The gravel is interpreted as a lag deposit, left over when the flood waters ceased to be capable of continuing to carry it.

The interpretation is rather persuasive. The dates bracket the events, and match the onset date of the Younger Dryas, about as closely as could be asked for. The gravel is pebbly to bouldery in size, not at all what would be expected in the delta of one of the world's biggest rivers. The ice-sheet margin of the time lay 600 km to the south, ruling out a local source of glacial meltwater. And there is a possible candidate for the lake outlet from which the flood might have issued, near Fort McMurray in northern Alberta, where the ice sheet formed a dam by abutting on hills to the west.

Persuasive as it is, this may well prove controversial. I have some naive questions of my own. For example, even this putative flood might have had trouble carrying boulders the 2,000-plus km from Fort McMurray, at 370 m above modern sea level (lower at the time, because of differential rebound of the land surface) to Richards Island. And if the boulders didn't come from the lake outlet or the uplands, where did they come from? And I am not clear at all about why the authors reject the Saint Lawrence as a possible outlet. They don't seem to have evidence against it, just evidence in favour of a different outlet.

On a personal level, I can't decide whether to be happy or sad about the relocation of the Younger Dryas flood to the Mackenzie and away from the Saint Lawrence. The building that houses my office sits on the floor of a spillway that drained meltwater from the ancestral Great Lakes, and perhaps even Lake Agassiz further to the north, for a brief period at about the right time. But the spillway's cross-sectional area is only 50-100 times that of the modern river which occupies a small part of its floor. That is probably not big enough. The evidence offered by Murton and colleagues suggests a flood many kilometres wide and up to tens of metres deep. There are other spillways, to the north of mine, which also fed meltwater to the Saint Lawrence, but their carrying capacities, dates and durations of occupancy are not pinned down as well as they need to be.

A number of further questions arise. Is it just a coincidence that both this cold snap and that at 6250 BC show signs of having been double bursts? I can't think of a plausible mechanism that would require these floods to happen in exactly two stages. But perhaps there is a prosaic explanation in terms of fluctuations of the ice-sheet margin. Repeated brief advances might close multiple spillways, not just two, and therefore produce multiple incarnations of the lake with its surface at different elevations at different times. Second, when is somebody going to find evidence of catastrophic drainage of the Eurasian equivalents of Lake Agassiz? Were there such outbursts? Finally and most generally, why does caprice, in the form of unpredictable outbursts of meltwater, seem to play such a substantial role in the evolution of past climates?

Europe could switch to low carbon sources of electricity, with up to 100% coming from renewables by 2050, without risking energy reliability or pushing up energy bills, according to a major new study, Roadmap 2050: a practical guide to a prosperous, low-carbon Europe, developed by the European Climate Foundation (ECF) with contributions from McKinsey, KEMA, Imperial College London and Oxford Economics. It says that a transition to a low- or zero-carbon power supply based on high levels of renewable energy would have no impact on reliability, and would have little overall impact on the cost of generating electricity.

Matt Phillips, a senior associate with the ECF, said: "When the Roadmap 2050 project began it was assumed that high-renewable energy scenarios would be too unstable to provide sufficient reliability, that high-renewable scenarios would be uneconomic and more costly, and that technology breakthroughs would be required to move Europe to a zero-carbon power sector. Roadmap 2050 has found all of these assertions to be untrue." (As quoted by BusinessGreen.com).

ECF claimed that the widely held assumption that renewable energy is always more costly than fossil fuels is increasingly outdated, arguing that, while the initial capital investment needed for a low-carbon energy infrastructure is larger than for a conventional high-carbon system, the long-term operating costs of low-carbon energy will be lower. As a result of this, of the reduced use of increasingly expensive fuels, and of the gradual adoption of more efficient ways of generating and using energy, it says that, although GDP might initially be depressed very slightly, from 2020 it would rise, and in the 2030 to 2050 period the cost of energy per unit of GDP output could be about 20 to 30% lower.

The study focuses on electricity generation and use, including use in the transport and heating sectors, but says that 'should other (non-electric) decarbonisation solutions emerge for some portion of either sector, these will only make the power challenge that much more manageable'.

It looks at scenarios supplying 40% more electricity than at present by 2050, with various mixes of renewables, from 40% up to 100%, all of which it claims are technically viable. Carbon capture and storage (CCS) and nuclear are used in all its scenarios up to the 80% renewables mix, but in that scenario about half of the current level of nuclear production is replaced, and in the 100% renewable scenario all of it goes, as does CCS.

However the report notes that a successful transition to zero carbon power will depend on EU member states prioritising energy efficiency measures (it assumes a cumulative energy saving of 2% p.a.) and supporting the rapid development of a European electricity "supergrid" to help distribute and balance the green energy and manage demand.

For the 40–80% renewable scenarios there would also be a need for 190 to 270 GW of backup generation capacity to maintain the reliability of the electricity system, but ECF notes that 120 GW of that already exists. For new backup it looks to more gas-fired plants, biomass/biogas fired plants, and hydrogen-fueled plants, potentially in combination with hydrogen production for fuel cells.

In the case of the 100% renewables scenario, 15% of the energy would be imported via a supergrid link from Concentrating Solar Power (CSP) plants in North Africa, and 5% would be obtained from enhanced geothermal around the EU. Given the wider footprint and supergrid links, backup requirements in this scenario would be reduced to 215 GW. However, the extra cost was put at 5–10% more than the 60% renewables option.

www.roadmap2050.eu

A study by consultants PriceWaterhouseCoopers, in collaboration with researchers from the Potsdam Institute for Climate Impact Research (PIK), the International Institute for Applied Systems Analysis (IIASA) and the European Climate Forum (ECF), has also claimed that Europe and North Africa could be powered exclusively by renewable electricity by 2050, if this is supported by a single European power market, linked with a similar market in North Africa.

Like ECF above, they also look to a cross-national power system, the proposed Super Smart Grid, to allow for load and demand management, and to help integrate green energy. They too see power coming from concentrating solar projects in the deserts of North Africa, and also in southern Europe, as well as from the hydro capability of Scandinavia and the European Alps, onshore wind farms and offshore wind farms in the Baltic and North Sea, plus increasingly tidal and wave power and biomass generation across Europe.

Like the ECF study, it concludes that 'the most recent economic models show that the short term cost of transforming the power system may not be as large as previously thought', and that overall reliability would not be compromised. And it adds that the development of North African resources 'could pay big dividends in terms of regional development, sustainability and security.'

www.pwc.co.uk/eng/publications/100_percent_renewable_electricity.html

An even more radical conclusion was reached in the study by the European Renewable Energy Council (EREC), Rethinking 2050, which claims that the EU could not only meet up to 100% of its electricity demand from renewables by 2050, but also all of its heating/cooling and transport fuel needs.

Like the studies above, it assumes a major commitment to energy saving – overall energy demand, it says, can be reduced by 30% against the assumed consumption for 2050. And there would be a parallel rapid rise in renewables, with an average annual growth rate of renewable electricity capacity of 14% between 2007 and 2020, and then an even more rapid expansion of some options. Between 2020 and 2030, geothermal electricity is predicted to see an average annual growth rate of installed capacity of about 44%, followed by ocean energy with about 24% and CSP with about 19%. This is closely followed by 16% for PV, 6% for wind, 2% for hydropower and about 2% for biomass. By 2030, total installed renewable capacity amounts to 965.2 GW, dominated in absolute terms by PV, wind and hydropower. Between 2020 and 2030, total installed renewable capacity would increase by about 46% with an average annual growth rate of 8.5%. And after 2030, expansion continues, leading to almost 2,000 GW of installed capacity by 2050.

www.rethinking2050.eu/

Some even more radical scenarios have emerged, suggesting that we could move even more rapidly. For example, the German Energy Watch Group claims that (non-hydro) renewables could supply 62% of global electricity, and 16% of global final heat demand, by 2030.

www.energywatchgroup.org/Renewables.52+M5d637b1e38d.0.html

And last November, Prof. Mark Jacobson and Mark Delucchi from Stanford University in the US published a very ambitious scenario in Scientific American, which suggested that up to 100% of global energy could be obtained from renewables by 2030, with electricity also meeting heating and transport needs.

www.stanford.edu/group/efmh/jacobson/sad1109Jaco5p.indd.pdf

Although they claimed that 100% was technically feasible by 2030, recognising that there were sunk costs in existing systems, in their conclusion they pulled back a bit and said that, in practice, 'with sensible policies', nations could set a goal of generating 25% of their new energy supply from renewables 'in 10 to 15 years and almost 100% of new supply in 20 to 30 years'. But they insisted that 'with extremely aggressive policies, all existing fossil-fuel capacity could theoretically be retired and replaced in the same period' although, 'with more modest and likely policies full replacement may take 40 to 50 years'.

Delucchi is scheduled to report on this analysis at a conference on long range scenarios being organised jointly by the UK Energy Research Centre and Claverton Energy Group on 21 May at University College London. Other contributors will include Dr Mark Barrett from UCL, who has developed a detailed 100% UK Renewables scenario. Visit www.bartlett.ucl.ac.uk/markbarrett/Elec/Electricity.htm.

Conference details: www.claverton-energy.com/.

Shortly the Centre for Alternative Technology in Wales is expected to publish its revised and updated Zero Carbon Britain scenario for up to 2030. That too is likely to be very radical. Perhaps somewhat less so, the Department of Energy and Climate Change meanwhile is still working on its own 2050 Road Map. Some brief interim conclusions have emerged, but the full thing is still being developed. I'll be reporting on that in my next blog. Clearly we are not short of scenarios!


Whether this blog is the dullest or the second-dullest blog you have ever read will depend on whether you think shortage of wiggle room on glaciers is duller than spatial interpolation. But the two are connected, and there is a bright side to lack of wiggle room. It is what makes mapping possible and useful.

Interpolation is what we have to do when we have a geographically broad problem and measurements that don't cover the ground densely enough. If the measurements were not correlated (the same correlation that reduces the wiggle room for estimating uncertainty), we couldn't interpolate between them. We are about equally uncertain about all of the measurements, but at least the correlation lets us look at the broad picture.

We tend to trust contoured topographic maps because the height of the terrain is highly correlated over distances that are smaller than the typical distance between adjacent contours, and because they are based on dense samples of heights from air photos or satellite images. However, interpolating variables that are sparsely sampled presents problems.

Glacier mass balances tend to be well correlated over distances up to about 600 kilometres, making spatial interpolation possible. In turn that makes it possible to judge whether our sample of measurements is spatially representative, but it isn't much help for estimating mass balance in all the regions where there are no measured glaciers within 600 km.

There are several methods of interpolating to a point where there is no measurement. I like to fit polynomials — equations in the two spatial coordinates, easting and northing — to obtain smooth surfaces centred on each interpolation point.

All of the methods have one thing in common: adjustable parameters. Broadly speaking, you get a tradeoff. You can choose smoothness of the resulting map or fidelity to the raw measurements, or any combination. Some methods guarantee, mathematically, that your interpolated numbers will agree exactly with the measured numbers at the points where there are measurements. Most methods, however, acknowledge that the measurements themselves are uncertain, and allow a bit of wiggle room. To tell the truth, by twiddling the knobs on your interpolation algorithm you can seem to create as much wiggle room as you like — in other words, any shape of surface you like (almost).
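To make the idea concrete, here is a minimal sketch of one way to do this kind of local polynomial fit (NumPy only; the Gaussian distance weighting and the quadratic surface are illustrative choices of mine, not the author's actual algorithm):

```python
import numpy as np

def interpolate_point(x0, y0, x, y, z, length_scale=100.0):
    """Estimate z at (x0, y0) by fitting a quadratic surface in
    easting/northing to nearby measurements, weighted by distance.
    The Gaussian weighting and quadratic form are illustrative choices."""
    dx, dy = x - x0, y - y0
    w = np.exp(-(dx**2 + dy**2) / (2 * length_scale**2))  # distance weights
    # Design matrix for z = a + b*dx + c*dy + d*dx^2 + e*dx*dy + f*dy^2
    A = np.column_stack([np.ones_like(dx), dx, dy, dx**2, dx*dy, dy**2])
    W = np.sqrt(w)
    coeffs, *_ = np.linalg.lstsq(A * W[:, None], z * W, rcond=None)
    return coeffs[0]  # surface value at (x0, y0), where dx = dy = 0

# Toy example: noisy samples of a smooth surface
rng = np.random.default_rng(0)
x = rng.uniform(0, 500, 40)
y = rng.uniform(0, 500, 40)
z = 0.01 * x - 0.005 * y + rng.normal(0, 0.5, 40)
print(interpolate_point(250.0, 250.0, x, y, z))
```

Twiddling length_scale is exactly the kind of knob referred to above: a large value gives a smoother map, a small one follows the raw measurements more faithfully.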

There seems to be only one objective way of judging the merit of your interpolated surface: cross-validation. If you have n measurement points, redo the interpolation n times, leaving out one point each time, and calculate the typical (technically, the root-mean-square) disagreement between the omitted measurement and the corresponding interpolated estimate. The trouble with this is that it tells you nothing about how you are doing at places where there are no measurements, which is the aim of the exercise.
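Here is a minimal sketch of that leave-one-out procedure, reusing the illustrative interpolate_point function from the previous sketch:

```python
import numpy as np

def loo_cross_validation(x, y, z, interpolate):
    """Leave-one-out cross-validation: re-interpolate at each measured
    point with that point withheld, and return the RMS disagreement."""
    errors = []
    for i in range(len(z)):
        keep = np.arange(len(z)) != i
        z_hat = interpolate(x[i], y[i], x[keep], y[keep], z[keep])
        errors.append(z_hat - z[i])
    return np.sqrt(np.mean(np.square(errors)))

# Usage, with the x, y, z and interpolate_point from the previous sketch:
# rmse = loo_cross_validation(x, y, z, interpolate_point)
```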

One thing that no interpolation algorithm can do is manufacture facts. This is an insidious problem, because it often looks as though that is what they are doing. But what are the alternatives?

The most obvious is more measurements. But we scientists always say that, don't we, and although technology keeps advancing we still have lots of gaps. Lately I have been wondering whether we need to be more brazen about this. Yes, more measurements mean more money for scientists to spend. But suppose it were more money on an economy-altering scale. Jobs in glacier monitoring, and environmental monitoring generally, would multiply. The diversion of financial resources would mean that jobs such as making, fuelling, driving and repairing motor vehicles would dwindle. That would be a good thing, wouldn't it?

Before it happens, though, the economists will have to work out how to fool the economy into thinking that accurate maps of glacier mass balance are worth more than motor cars.

A more practical alternative is to take the average of what you know as the best guide to what you don't know. In fact the average is just a polynomial of order zero, that is, a limiting case of the idea of spatial interpolation. It works beyond 600 km, and indeed it works anywhere. But best does not necessarily mean good. New measurements might not change the picture much. On the other hand, they might.

Data voids mean irreducible uncertainty. It may be uncertainty we can live with, but it is also uncertainty in the face of impending trillion-dollar decisions. From that angle, a billion-dollar decision to make better maps and fewer motor cars begins to look good. In the meantime, beware of smooth maps and of maps that are slavishly faithful to the measurements. There is more going on beneath the contours than meets the eye.

Beyond baseload


We are always told that it's vital to have 'baseload' – that is, 'always available' generation capacity – to meet minimum energy demand. Otherwise the lights would go out! Baseload used to be provided mainly by coal plants; these days it's also nuclear. Indeed, on summer nights when UK demand drops to 20 GW or so, it can mostly be nuclear, plus whatever we are getting from our ~6 GW of wind and other renewables. But when wind expands (to maybe 40 GW!) and nuclear also expands (to say 20 GW), there will be conflicts over which to turn off ('curtail') during those periods, especially if there is also, say, 10 GW of tidal on the grid. In which case the concept of baseload starts to look unhelpful – the problem being a potential surplus of electricity, not a shortfall.

To avoid 'curtailment' problems, we might store some of the excess power or export it to other countries on a supergrid system. That might also help us to balance the variations in output from wind and other renewables – in effect we export the excess and then re-import it later when and if there is a shortfall. It gets stored in, say, large hydro reservoirs in Norway or Sweden (as Denmark does with its excess wind output), although it's really just 'virtual' storage. We don't get the same electrons back! But for this to be possible we need the grid links.

The same message emerges from recent US National Renewable Energy Laboratory studies of wind curtailment, though their problem is a bit different. A 2009 NREL study concludes that, so far, congestion on the transmission grid, caused mainly by inadequate transmission capacity, is the primary cause of nearly all US wind curtailment.

NREL says that wind curtailment has been occurring frequently in regions ranging from Texas to the Midwest to California. For example, it notes that at one point in 2004 nearly 14% of wind generation MWh had to be curtailed in Minnesota, though this fell to under 5% subsequently. It notes that curtailment has also become a significant problem in Spain, Germany, and the Canadian province of Alberta – up to 60% in Germany in some cases, while in Spain, NREL notes, 'the amount of wind power curtailed as part of the congestion management program has increased steadily over the past two years'. This is a terrible waste of potential green power…

NREL says that building additional transmission capacity is the most effective way to address wind curtailment, a point also made in its 2010 'EWITS' study of eastern US options – which concluded that wind could replace coal and natural gas for 20–30% of the electricity used in the eastern two-thirds of the US by 2024. That would involve 225–330GW of wind capacity, and an expensive revamp of the power grid. However, like the earlier NREL study, it says that, with an improved grid, especially with long distance HVDC transmission allowing for balancing across the country, the amount of wasted wind energy, and the need for back-up, would decline.

The 2009 study also discusses other possible measures for reducing wind curtailment, including greater dynamic scheduling of power flows between neighbouring regions. That's moving in the direction of 'smart grids' and possibly on to dynamic load management – adjusting demand in line with supply. It's already common to reschedule some large load to meet shortfalls in supply – some supply contracts specify interruption options – and reduce prices accordingly. But more sophisticated smart grid systems may have fully interactive load control – switching off some loads temporarily when supply is weak. The classical example is domestic (and retail) freezers, which can happily coast for several hours without power or damage to food stocks.

What we are seeing is a move beyond simple real-time 'baseload' thinking and on to balancing supply and demand dynamically, over time. This can involve more than just rescheduling loads or shifting electricity across regions via supergrids, and more than just storing electricity virtually. It can mean storing some of the electrical energy as heat, which is much easier to store than electricity (e.g. in molten salt heat stores). And local heat stores can be topped up with heat from solar and other renewable sources. Most of this heat would be used to meet heating needs directly, but some could be converted back to electricity, for example in steam turbine units, as is planned for the large Concentrating Solar Power plants being built in North Africa – to allow them to carry on generating power from stored solar energy overnight.
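As a rough illustration of why heat stores are attractive, here is a back-of-envelope sketch; the specific heat and operating temperatures are typical figures for nitrate 'solar salt' and are assumptions of mine, not numbers from the plants mentioned above:

```python
# Rough energy density of a molten-salt heat store.
# The specific heat and temperature range are typical of nitrate
# 'solar salt' CSP stores, used here only as illustrative assumptions.

cp = 1.5e3                      # specific heat, J/(kg K), approximate
t_hot, t_cold = 565.0, 290.0    # assumed operating temperatures, deg C
delta_t = t_hot - t_cold

energy_per_kg_kwh = cp * delta_t / 3.6e6   # J -> kWh
print(f"~{energy_per_kg_kwh:.2f} kWh of heat per kg of salt")

# So a 10,000-tonne store holds very roughly:
store_kwh = energy_per_kg_kwh * 10_000 * 1_000
print(f"~{store_kwh/1e6:.1f} GWh of heat in a 10,000 t store")
```

On those assumptions a single large tank of salt can hold on the order of a gigawatt-hour of heat, which is what lets a CSP plant keep its turbines running overnight.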

A parallel option is conversion of electricity to hydrogen gas via electrolysis, for later use as a fuel, for vehicles, or for heating, or for electricity (and heat) generation in a fuel cell. The efficiency losses from some of these conversion processes may limit how much of this we can use cost-effectively, but we need to start thinking about new optimisation approaches which go beyond simple real-time power links. Neil Crumpton's scenario tries to do that: see my earlier report.

It's definitely a challenge to conventional thinking. As Eric Martinot puts it in Renewable World (Green Books): "The radical concept that 'load follows supply' on a power grid (i.e. the loads know about the supply situation and adjust themselves as supply changes) contrasts with the conventional concept of 'supply follows load' that has dominated power systems for the past hundred years. Storage load represents a variable-demand component of the power system that can adjust itself, automatically within pre-established parameters, according to prevailing supply conditions, for example from renewable power."

Energy storage is of course expensive, which is the main reason why we don't have much of it at present. But as we move to a new, more interactive energy supply and demand system, the value of stored energy will increase. You could think of it as 'virtual' baseload. But it's more flexible than that – and flexibility seems likely to be a key requirement in future.

For more on renewable energy and allied smart energy developments, visit Renew.


By Hamish Johnston, PhysicsWorld

"We saw no evidence of any deliberate scientific malpractice in any of the work of the Climatic Research Unit and had it been there we believe that it is likely that we would have detected it."

That is the main conclusion of an independent panel of scientists nominated by the UK's Royal Society to scrutinize the scientific methodology of researchers at the University of East Anglia's Climatic Research Unit (CRU).

The seven-member panel was set up by the university and chaired by Ron Oxburgh – a geologist, former oil-company executive and member of the UK's upper house of parliament. It released its findings today.

The panel looked at 11 "representative publications" produced by CRU members over the past 24 years.

While the report is good news for CRU scientists, some climate-change sceptics have accused the panel of being biased because Oxburgh is chairman of the wind energy company Falck Renewables and president of the Carbon Capture and Storage Association. Oxburgh has insisted that the panel had no pre-conceived views on the CRU science.

This is the second report published after private e-mails of CRU members were hacked last year and made public. Critics of the CRU have alleged that the e-mails show that the scientists incorrectly interpreted data to support manmade climate change and also flouted freedom-of-information requests to make data and computer code available to their critics.

The first report – which was released on 31 March by the House of Commons Science and Technology Committee – concluded that the University of East Anglia was mostly to blame for supporting a culture of non-disclosure.


I haven't written all that many blogs, but this one could be the dullest I have ever written. How do you persuade readers to be interested in how confident the rules of statistics allow them to be?

Statistics is a minefield in large part because we call on it mainly when we have to work out an explanation of what is happening, given a few scraps of more or less reliable information. You can tiptoe through the minefield and hope to reach the other side without getting blown up, or you can turn yourself into a professional statistician. But the latter option is time-consuming.

One option that sometimes crops up is to start not with a small but with a large amount of information and work out what the statistics are. This happened to me a while ago when I wanted to work out how confident I ought to be about the mass balance of a glacier in the Canadian High Arctic, White Glacier.

We measure the mass balance by inserting stakes into the glacier surface. Once a year we measure the rise or fall of the surface with respect to the top of each stake. Mass is volume times density, so the calculation of mass change (per unit of surface area) is simple: rise or fall times density. We measure the density in snow pits. To get the mass balance of the whole glacier, we add up the stake balances, multiplying each by the fraction of total glacier area that it represents.
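Here is a minimal sketch of that bookkeeping; the stake readings, densities and area fractions are invented purely for illustration:

```python
# Area-weighted glacier mass balance from stake measurements.
# All numbers here are invented for illustration.

# (surface rise(+)/fall(-) at stake in mm, density in kg/m3, area fraction)
stakes = [
    (-1200, 900, 0.25),   # ice loss in the ablation zone
    ( -400, 900, 0.35),
    ( +300, 400, 0.40),   # snow gain in the accumulation zone
]

# mm of surface change times density gives kg/m2, which is numerically
# equal to mm of water equivalent (water density = 1000 kg/m3)
balance_mm_we = sum(dh * rho / 1000.0 * frac for dh, rho, frac in stakes)
print(f"whole-glacier balance: {balance_mm_we:.0f} mm w.e.")
```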

Now the statistical fun begins. People have been measuring several dozen or more stakes per year for several decades on White Glacier, so we have a large sample and a "forward" problem: what is the uncertainty in the annual average mass balance?

There is a simple answer from classical statistics. This uncertainty is equal to a measure of the uncertainty in each single stake measurement divided by the square root of the number of stake measurements. It is a pity that this simple answer is, for practical purposes, wrong.

The uncertainty in the stake measurement itself is not the problem. You can measure changes of stake height to within a few millimetres, even when it is cold and windy. What you want is a measure of how representative your single stake is of the part of the glacier which it supposedly represents. Working this out is much trickier, but a typical number is a couple of hundred to a few hundred millimetres.

Next, and fatally from the minefield standpoint, the number of stake measurements is not the number whose square root you need to divide into the stake uncertainty. The statisticians are not to blame for the prevalence of this crude error, although I think their predecessors of a few generations ago could have done a better job of discouraging its spread.

The early statisticians coined the term degrees of freedom for the correct divisor. It is the number of numbers that are free to vary in your formula. The trouble is that degrees of freedom doesn't suggest much, if anything, to normal people, who have grasped ill-advisedly at the fact that it may be equal to the sample size as a crutch on which to tiptoe through the minefield. Recently, the statisticians have tried the term effective sample size instead. It is better, because it suggests that the actual sample size may not be the right number, but "effective" is still mysterious jargon for most people, including many scientists.

I think wiggle room would be a still better alternative. The essential point is that you are only allowed to divide by the square root of your sample size if all your samples are independent, meaning that you cannot predict any one stake measurement from any other. The more dependent or correlated they are, the less your wiggle room, the smaller your divisor, and the bigger your uncertainty.

Suppose you measure the mass balance at 49 stakes, a convenient number because its square root is seven. If the stakes were independent you could divide your stake uncertainty by seven to get the uncertainty of your whole-glacier mass balance. If the stake uncertainty happens to be 210 mm, another convenient number, the whole-glacier uncertainty comes out as only 30 mm.
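Here is a minimal sketch of the contrast between the naive divisor and an effective sample size; the formula used for n_eff, based on an average pairwise correlation, is one common textbook approximation, not the specific analysis done for White Glacier:

```python
import math

stake_uncertainty_mm = 210.0
n_stakes = 49

# Naive: treat all 49 stakes as independent
naive = stake_uncertainty_mm / math.sqrt(n_stakes)   # 210 / 7 = 30 mm

# One common approximation for the effective sample size when
# measurements share an average pairwise correlation rho_bar:
#   n_eff = n / (1 + (n - 1) * rho_bar)
def n_eff(n, rho_bar):
    return n / (1 + (n - 1) * rho_bar)

for rho_bar in (0.0, 0.2, 0.9):
    eff = n_eff(n_stakes, rho_bar)
    print(f"rho_bar={rho_bar}: n_eff={eff:5.1f}, "
          f"uncertainty={stake_uncertainty_mm / math.sqrt(eff):6.1f} mm")
```

With no correlation the uncertainty is the comfortable 30 mm quoted above; with strong correlation the effective sample size collapses towards one and the uncertainty heads back up towards the single-stake value of a couple of hundred millimetres.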

Correlations between series of annual balances measured at thousands of pairs of stakes on White Glacier. The pairs are arranged by how far apart the two stakes are in altitude, and by their correlation — +1 being perfect correlation and 0 being complete lack of correlation. White means no stake pairs; otherwise, the paler the little box the more pairs fall into it. The green dots are the best estimates, according to statistical theory, of the real as opposed to the observed correlation at each value of vertical separation. (Notice that the observed correlation is sometimes negative even when the "real" correlation is quite large.) The red curve, again from theory, summarizes the green dots.

Too bad that, call it what you will, the wiggle room for the mass balance of White Glacier is about one. The large collection of correlations in the graph is strongly skewed towards predictability. If you know one stake balance, you can do a fair to excellent job of predicting others. That the correlations drop off as the stakes grow further apart turns out not to make much difference to the wiggle room. For the purpose of estimating uncertainty, it is as if we had only one stake, not dozens.

The stakes on White Glacier are so highly correlated that we have to live with the fact that we only know our mass balances to within a few hundred millimetres. This is a serious constraint, considering that typical annual mass balances these days are negative by only a few hundred millimetres. No wonder it has taken a long time for signals of expected change to emerge.

Will installing large numbers of wind turbines have an impact on the environment – not just visual intrusion, but actual measurable effects on the atmosphere and climate? After all, if you are taking many gigawatts of power out, won't that slow the winds down? And have other effects?

Well the first thing to realise is that wind turbines only extract very small amounts of the energy in the wind front – which after all can be a mass of air hundreds of miles wide and several miles high. That said there can be local wind shadow effects. Indeed in medieval times there were regular disputes about blocking wind access to corn grinding windmills. But in macro terms the extraction issue would seem to be negligible.

What about more subtle impacts? In a paper in Atmos. Chem. Phys. 10 2053–2061, 2010, entitled "Potential climatic impacts and reliability of very large-scale wind farms", C. Wang and R. G. Prinn from the Center for Global Change Science and Joint Program on the Science and Policy of Global Change, Massachusetts Institute of Technology, argue that the widespread use of wind energy could lead to temperature changes. They used a three-dimensional climate model to simulate the potential climate effects associated with installation of wind-powered generators over vast areas of land or coastal ocean. They claim that 'Using wind turbines to meet 10% or more of global energy demand in 2100, could cause surface warming exceeding 1 °C over land installations. In contrast, surface cooling exceeding 1 °C is computed over ocean installations,' but they add 'the validity of simulating the impacts of wind turbines by simply increasing the ocean surface drag needs further study'.

They go on 'Significant warming or cooling remote from both the land and ocean installations, and alterations of the global distributions of rainfall and clouds also occur,' and explain that 'these results are influenced by the competing effects of increases in roughness and decreases in wind speed on near-surface turbulent heat fluxes, the differing nature of land and ocean surface friction, and the dimensions of the installations parallel and perpendicular to the prevailing winds'. They also say: 'These results are also dependent on the accuracy of the model used, and the realism of the methods applied to simulate wind turbines. Additional theory and new field observations will be required for their ultimate validation.'

www.atmos-chem-phys.net/10/2053/2010/acp-10-2053-2010.html

Well yes, they do seem to have adopted a rather simplified 'top down' model – assuming increased drag from increased surface roughness averaged out over entire coarse-resolved grid cells, rather than looking at the impacts of individual wind turbines. An earlier 'bottom up' study, 'Investigating the Effect of Large Wind Farms on Energy in the Atmosphere', in Energies 2009, 2, 816-838, by Magdalena R.V. Sta. Maria and Mark Z. Jacobson of the Atmosphere/Energy Program, Civil and Environmental Engineering Department, Stanford University, used Blade Element Momentum theory to calculate the forces on individual turbine blades. It claimed that 'Should wind supply the world's energy needs, this parameterization estimates energy loss in the lowest 1 km of the atmosphere to be ~0.007%', which it said was 'an order of magnitude smaller than atmospheric energy loss from aerosol pollution and urbanization, and orders of magnitude less than the energy added to the atmosphere from doubling CO2.'

It added that, although there may be small moisture content changes and other minor effects, 'the net heat added to the environment due to wind dissipation is much less than that added by thermal plants that the turbines displace'.

www.mdpi.com/1996-1073/2/4/816/pdf

Pretty clearly, then, we have a disagreement, especially given that Wang and Prinn say '1 degree from a 10% wind contribution': would that mean 10 degrees for the 100% total energy contribution looked at by Maria and Jacobson?

Underlying this conflict are pro and anti wind postures, with Maria and Jacobson obviously very much in favour, while Wang and Prinn's paper adds in for good measure some familiar negative comments about the intermittency of wind power and the need for backup generation capacity, very long distance power transmission lines, and onsite energy storage.

Wind power does have its problems, but they seem to be scraping the bottom of the barrel by trying to talk up minuscule temperature effects.


Writing in Geophysical Research Letters, Nick Barrand and Martin Sharp tell us about the evolution of glaciers in the Yukon Territory of Canada over the past 50 years. They are less studied than their counterparts across the border in southern Alaska, and it is valuable to have a more complete picture. As with the glaciers of the Subantarctic, the message is not exactly headline material — "Yukon Glaciers Not Surprising". But repeating the message ad nauseam, in the face of public weariness with global change, seems to be the best we can do. After all, the changes are happening.

Barrand and Sharp measured glacier extents on air photos from 1958-1960 and Landsat images from 2006-2008. An important point is therefore that, with information from only two dates, there is no way to tell whether rates have changed. For practical reasons they truncated a small proportion of the glacier outlines at the territorial boundary, but as a compensation they did include the glaciers of the eastern Yukon, which are less impressive than the big ones nearer to the Pacific. They are often forgotten and have never been studied in detail.

The main result is robust. In the late 1950s the extent of glacier ice in the Yukon was 11,622 km2 and in the late 2000s it was 9,081 km2. The rate of shrinkage was therefore -0.44% per year, give or take about 0.03%/yr. Four glaciers grew, 10 remained the same size, and the other 1,388 shrank. Of the shrinking ones, about a dozen disappeared.

Barrand and Sharp also venture on a so-called volume-area scaling analysis. The name of this method is slightly confusing. It exploits the observation that glacier thickness "scales" with glacier area: the bigger the glacier, the thicker it is likely to be. Multiply the average thickness by the area and you get an estimate of the volume. The problem is that although measurements of area are comparatively easy, there are very few reliable measurements of thickness. So volume-area scaling is based on relationships fitted statistically to the small number of glaciers with good measurements of both area and thickness.
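For illustration, here is a minimal sketch of the scaling relationship; the coefficient and exponent are commonly cited illustrative values, not necessarily the set Barrand and Sharp adopted:

```python
# Volume-area scaling: V = c * A**gamma, with A in km2 and V in km3.
# c ~ 0.034 and gamma ~ 1.375 are commonly cited illustrative values;
# Barrand and Sharp compared several published coefficient sets.

def glacier_volume_km3(area_km2, c=0.034, gamma=1.375):
    return c * area_km2 ** gamma

for area in (1, 10, 100, 1000):
    v = glacier_volume_km3(area)
    mean_thickness_m = v / area * 1000.0   # km -> m
    print(f"A={area:5d} km2: V={v:8.1f} km3, "
          f"mean thickness ~{mean_thickness_m:4.0f} m")
```

The point of the exponent being greater than one is visible in the output: the bigger the glacier, the thicker, on average, it is likely to be.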

Estimates of volume obtained in this way are wildly variable for single glaciers. But if you have 1,400 single glaciers, and if your scaling coefficients are not biased (systematically off — the evidence for this is pretty good), then the law of large numbers kicks in and your wild errors cancel each other out. Even so, Barrand and Sharp reckon, by comparing different published sets of scaling coefficients, that their Yukon-glacier total volumes are uncertain by about ±30%.

Ice volume is interesting, but the mass is even more interesting. Assuming a density of 900 kg m-3, the volumes translate to masses of 1832 gigatonnes in 1956-58 and 1497 Gt in 2006-08. The difference equates to a 50-year mass balance of -6.7 Gt/yr, or -650 mm/yr of water-equivalent depth spread uniformly over the glaciers (give or take 30% in both cases, remember). These numbers could be biased high by the assumed density being too great, but the bias is not likely to be large.
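The arithmetic can be reproduced from the figures quoted in this post; spreading the loss over the average of the two glacier extents is my own simplifying assumption:

```python
# Reproducing the headline numbers in this post from the figures quoted above.

area_1958, area_2008 = 11_622.0, 9_081.0      # km2
mass_1958, mass_2008 = 1_832.0, 1_497.0       # Gt
years = 50.0

# Fractional shrinkage rate of the glacierized area
rate_pct_per_yr = (area_2008 - area_1958) / area_1958 / years * 100
print(f"area change: {rate_pct_per_yr:.2f} %/yr")            # ~ -0.44

# Mass balance in Gt/yr, then spread over the (average) glacierized area
dm_gt_per_yr = (mass_2008 - mass_1958) / years
mean_area_m2 = 0.5 * (area_1958 + area_2008) * 1e6            # km2 -> m2
mm_we_per_yr = dm_gt_per_yr * 1e12 / mean_area_m2             # kg/m2 = mm w.e.
print(f"mass balance: {dm_gt_per_yr:.1f} Gt/yr, ~{mm_we_per_yr:.0f} mm/yr w.e.")
```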

Are there any exceptions to what looks like a global rule that glaciers are losing mass? Not in the regions with adequate measurements, for sure. Two other recent analyses agree broadly with the Yukon findings, and like them exemplify our modern ability to estimate long-term mass balance over regions of substantial size. Nuth and co-authors studied most of Svalbard and found a mass balance of -360 mm/yr water equivalent over varying periods of a few decades. Next door to the Yukon, Berthier and co-authors constructed digital elevation models of Alaskan glaciers and found multi-decadal rates that average to -480 mm/yr water equivalent for the whole state. This last number revises upwards an earlier estimate of -700 mm/yr water equivalent.

The number of regions like these three is increasing gradually, and the closer we get to complete coverage the less likely does it become that our global estimates are seriously wrong. One region without adequate measurements is the Karakoram in Kashmir, where limited information suggests that some of the glaciers are not only advancing but thickening. But even if there is a surprise in store in the Karakoram, we need more than one such surprising region to make the global rule look debatable.

The Earthscan book version of 'Prosperity without Growth' substantially updates Tim Jackson's groundbreaking report for the Sustainable Development Commission, which rapidly became the most downloaded document in the Commission's nine year history and contributed to a burgeoning debate about economic growth and its consequences for people and planet.

In the new book Prof. Tim Jackson from the University of Surrey argues that building a new economic model fit for a low carbon world is 'the most urgent task of our times'. He says 'The current model isn't working. Instead of delivering widespread prosperity, our economies are undermining wellbeing in the richest nations and failing those in the poorest. The prevailing system has already led us to the brink of economic collapse and if left unchecked it threatens a climate catastrophe.'

Global carbon emissions have risen 40% since 1990 and will continue to rise inexorably unless action is taken urgently. By the year 2050, the carbon content of each dollar of economic activity will need to be a staggering 130 times lower than it is today if we are to make room for much-needed development in the poorer nations and stay within 2 °C of warming.

He says it won't be easy. 'We are caught in a profound dilemma. Economic growth is the default mechanism for achieving social stability. And at the same time it drives the scale of ecological damage. What's needed now is an urgent commitment to building a different kind of economic system, one which puts people and planet at its heart. For the advanced economies of the western world, prosperity without growth is no longer a utopian dream. It is a financial and ecological necessity'.

Jackson challenges the belief that economic growth can be decoupled from growth in material/resource use via increased technological productivity – more for less. This is a key tenet of what has been called 'ecological modernisation'. But he says that improvements in energy (and carbon) intensity are offset by increases in the scale of economic activity – a macroeconomic version of the so-called 'rebound' effect. Moreover, even if we can find ways of using resources more efficiently in net terms, we are wanting more, and so are using more resources overall.

While this may be true so far, what would happen if, for example, we switched entirely over to renewables as our energy source? There would still be some material resource requirements, e.g. for constructing the systems, but the main operational input (fuel) would be zero and would remain so, however much energy we used, and could be supplied up to the overall limit of renewable energy availability – which ultimately is set mainly by the level of incoming solar energy and the efficiency of the systems we can develop for using it. Of course, there may be ecological and resource limits to economic growth before that ultimate energy limit – e.g. competing land uses for food production. That clearly is an issue with biofuels. But for most other renewables the limits seem some way off. So, while we really do need to begin to face these issues, if we can make the transition to using renewables, there could be room for continued growth for a while yet.

That said, it would obviously be easier to meet our needs, and make the transition, if we were less demanding, less driven by consumption as part of our sense of identity. We might even be happier, and less conflictual. That's the basis of the idea of moving to 'quality' of consumption rather than 'quantity'. However, that disguises some social and economic issues. Well-off people are able to be more selective about quality and, if they want, can choose less eco-impacting options. The current economic system is also well able to meet this kind of need – profitably. But what about the rest – and especially those in developing countries living at or below subsistence level? They don't consume much at present, and so are in effect outside the system. Many others will also be unhappy to give up expectations of benefits from continued growth in material consumption. Tom Burke (a visiting professor at Imperial College London) once put it nicely: 'Life has got better in unsustainable ways', but not everyone is willing to change. We need to think in terms of sustainable consumption, but as Maurie Cohen and Joseph Murphy put it in 'Exploring Sustainable Consumption: Environmental Policy and the Social Sciences' (Pergamon, 2001), 'any action that tries to limit the use of material objects but does not offer alternative ways of satisfying social and psychological objectives is likely to fail'. We need new motivations and new visions of what we feel life is about.

It is important to talk about the limits of the current approach – that provides negative warnings – but we also need to look to the future, both to the possible attractions of a less consumerist world and to its limits. Jackson talks of 'prosperity within limits', but it's not really clear what those limits are – technically or socially. Green energy technology might extend them, and so might green living. While changing technology is usually seen as easier than achieving social change, we don't know if either will be enough in the long term.
