July 2009 Archives
I just loved reading the following article about North Carolina State Representatives discussing whether people can hang their clothes out to dry for all to see: http://www.news-record.com/blog/53964/entry/64277. What this exemplifies is a very "down home" version of the energy-security nexus. This time we're not directly talking about national security regarding nuclear proliferation of reprocessed spent nuclear fuel. We're talking about the privacy of someone not having to look at your underpants blowing in the wind. How much electricity savings is that worth?
The article discusses how the initial drive to use clotheslines was to keep home owners associations from imposing restrictions on them. Then some of the representatives noted how using the sun and wind to dry your clothes saves energy. This was not a very convincing argument, but it poses the interesting point about the energy service that is really provided by your gas or electric clothes dryer.
Ask yourself, after you have washed your clothes and they are wet, how soon do you want them dry? Another way of asking this is: When are you going to wear the clothes that you just washed? If you are like most people, you wash several days' worth of clothes at one time (to save some combination of time, water, and energy). Therefore, you can't really wear them all in the first few hours out of the wash, so letting them dry over several hours is entirely feasible. If you are a person who hangs your clothes, you can hang them on hangers (while being careful not to stretch the shirt collars) and already be much of the way toward putting them in your closet! Putting clothes in the dryer certainly makes the entire process go faster, at the expense of using on-demand primary energy instead of the intermittent solar and wind energy (not electricity!) from the environment.
The fact that using the clothes dryer is so mainstream in America speaks to our desire for convenience and to the cheapness of electricity: we pay extra for a service (clothes getting dry) that occurs FASTER THAN WE NEED. I applaud Representatives Pricey Harrison and Malcolm Graham for pushing for both freedom (the freedom to literally hang your clothes out to dry for all to see if you so choose) and energy security with climate change implications. What these Representatives may have in gusto (or perhaps it was the article's author), they lack in energy knowledge, as the article mentions that Rep. Harrison notes 10%-25% of household energy usage can be consumed by clothes dryers. I guess they "can be used" that much, but the Annual Energy Outlook (2007) of the Energy Information Administration indicates that approximately 0.9 quads of primary energy are consumed by household clothes dryers. This is approximately 4% of residential energy use.
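A quick sanity check on that 4% figure (the residential total below is my own round number for US residential primary energy use, roughly 21 quads, not a figure from the article):

```python
# Rough check: clothes dryers' share of US residential primary energy use.
DRYER_QUADS = 0.9          # from the EIA Annual Energy Outlook (2007), per the text
RESIDENTIAL_QUADS = 21.0   # approximate US residential primary energy (my assumption)

share = DRYER_QUADS / RESIDENTIAL_QUADS
print(f"{share:.1%}")   # ~4.3%, consistent with "approximately 4%"
```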
But I forgive the state Reps. If we all can think outside of the dryer like they can, then we can probably save 200% of our energy consumption, and that's got to be good for the economy!
It sounds like something they might do to you at a health spa, doesn't it? But to students of glaciers, basal lubrication is the key that unlocks a long list of puzzles.
Why do precise measurements of glacier motion often show stick-slip behaviour, that is, hours and hours of near motionlessness punctuated by half-hours of rapid movement? Why do some glaciers surge, that is, accelerate suddenly every few decades, flowing rapidly for a year or two before returning, sometimes suddenly but more often gradually, to normal? Why does the landscape of southern Ontario, which I can see from my window, undulate? Why, in the sediment of the northern Atlantic Ocean, are there occasional layers of sand, interrupting the blanket of ultra-fine-grained mud?
The layers of sand beneath the Atlantic are spaced irregularly, 10,000-15,000 years apart, according to the Principle of Superposition, at depths below the sea floor that correspond to the last ice age. They are thin on the European side, thicker towards the northwest, and thickest of all in the neighbourhood of Hudson Strait, which separates Quebec from Baffin Island. The simplest explanation of this pattern is that every so often the bed of the Laurentide Ice Sheet, which covered most of Canada, became much more slippery. Much of its interior was drained by the Hudson Strait Ice Stream, which accelerated occasionally and discharged icebergs in huge numbers. With the icebergs came the sand. All of the plausible accounts of this instability have variations in basal meltwater supply, or possibly just its behaviour, as a critical ingredient.
Around where I live, we are rather proud of our drumlin field. Somebody counted these egg-shaped hills and got up to about four thousand. But geomorphologists now reckon that the tunnel channels are even more interesting. Tunnel channels are drainage networks shaped by subglacial meltwater at the end of the last ice age, after the ice had shaped the drumlins and indeed not long before the ice disappeared altogether. For a long time I simply could not see these things, and I still suspect that the geomorphologists are asking for more meltwater than is probable, but recent evidence from beneath the modern ice sheets is vindicating their interpretations. Now I can see the ancient tunnel valleys in the light of modern ones, apparently hard at work, beneath the Antarctic Ice Sheet.
I don't know why most glaciers do not surge but a few do. Nor does anyone else. Surging is a phenomenon that has eluded explanation over several decades of concentrated observation and analysis. But we are all positive that subglacial hydrology contains the answer if we can only put together the pieces of the puzzle. The most recent instance of a surging glacier, detected by the U.S. Geological Survey on 3 July 2009, happens also to be a famous glacier - Malaspina Glacier in Alaska.
Many glaciers go faster in summer, suggesting that meltwater supply has something to do with glacier speed. Where the ice is observed to move in short bursts, there is usually also a suggestion, from one line of evidence or another, that it spends most of the time frozen - that is, stuck - to its bed. Slip happens when that immobile state is disturbed, in other words when the bed is lubricated upon the arrival of meltwater. But where does the meltwater come from? And go to?
It might not go anywhere, if the stuff that is moving around is not water but heat. That is, stick-slip may be telling us not about patterns of meltwater flow but about patterns of thawing and freezing. In fact, there may not be any heat moving around either. The melting temperature depends, slightly but measurably, on the confining pressure. So the thaw-freeze patterns could actually be patterns of subtle fluctuations of pressure, not just squeezing the water from one place to another but determining which of the two states, solid or liquid, it is stable in.
It is all very complicated, at scales from sticky patches up to the width of the north Atlantic and beyond. Great fun for glaciologists, but not without consequences for society - for example, if the Antarctic or Greenland Ice Sheet should decide to do what, according to the lesson from the sand under the Atlantic, the Laurentide Ice Sheet did repeatedly.
Construction has started on the first part of the 'Leningrad II' nuclear plant on the existing nuclear plant site on the outskirts of what is now St Petersburg. The first 1170 MWe pressurised water reactor is scheduled for commissioning in October 2013 and the second a year later, at a cost of $3.0-3.7 bn per pair. Leningrad II will eventually have four new reactors. They are claimed to be super safe, with passive as well as active safety features.
As well as supplying electricity to the grid, the four new Leningrad II reactors, like the existing plant there, will also provide heat to the city- 9.17 petajoules per year of district heating. In well insulated pipes, heat losses over 10-20 km are relatively low, and the demand for heat in Russia is high given the climate. Nuclear Cogeneration/Combined Heat and Power (CHP) capacity in Russia will then be supplying about 12 petajoules of heat per year, and it plans to have 5 GW of small nuclear reactors for electricity generation and district heating by 2018 at Arkhangelsk, Voronezh, Chukotka and Severodvinsk.
So will others follow the lead and install nuclear plants in or near cities- and feed the waste heat into district heating pipes? So far most nuclear plants in the West have been located in relatively remote areas- for safety reasons and to be near sources of cooling water. But there are economic attractions in being able to use the otherwise wasted heat produced by the steam turbines. As with all steam-raising power plants, whatever the fuel, the losses represent nearly 70% of the heat energy produced from the fuel, more than half of which can be reclaimed by operating in CHP mode i.e. supplying heat as well as electricity.
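The arithmetic behind that claim can be sketched as follows (illustrative round numbers only: I assume ~30% of the fuel's heat becomes electricity, and that half of the waste heat is recoverable, per the figures above):

```python
# Rough steam-plant energy balance (illustrative numbers from the post).
FUEL_HEAT = 100.0            # units of heat released from the fuel
ELECTRICAL_EFFICIENCY = 0.30 # ~30% to electricity, so ~70% is "lost"

electricity = FUEL_HEAT * ELECTRICAL_EFFICIENCY   # 30 units of electricity
waste_heat = FUEL_HEAT - electricity              # 70 units of waste heat
reclaimed = 0.5 * waste_heat                      # half reclaimable in CHP mode

overall_utilisation = (electricity + reclaimed) / FUEL_HEAT
print(overall_utilisation)   # 0.65 - versus 0.30 for electricity alone
```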
Even if you don't like the idea of urban nukes (and if nothing else it could have a significant impact on property values!), there are some other interesting and possibly equally controversial ideas related to CHP/district heating. We are all used to the standard argument that putting insulation on buildings is the cheapest energy option. 'The cheapest watt is the negawatt,' and so on. But is it always so? Studies by CHP consultants Orchard Partners London Ltd have suggested that, in terms at least of retrofit/rehab options, it may be cheaper and lead to more carbon emission savings, to provide piped heat from urban gas-fired CHP plants, than to install insulation, especially for some hard to access high rise flats. Put simply, the argument is that it's easier and cheaper to insulate pipes than whole buildings. They have developed a clever kerb stone pipe module that they say makes urban retro-installation easy. They claim that 'Houses, when connected to low CO2 piped heat supply would immediately achieve the highest rating for buildings without any investment in demand side measures. The disruption to residents and traffic with the new route that has been identified will be minimal compared to the disruption replacing an unsustainable gas network under the roads and minimal for residents compared to retrofitting insulation and glazing to existing premises particularly all our pre 1950s stock.'
If you also moved to biomass, or biowaste/biogas fuelled CHP plants, then you would be more or less zero carbon- and cities have a lot of biowastes, sewage biogas for example being one of the cheapest energy sources around. At present much of it is used just for electricity production, but CHP is the logical next step. With interest in renewable heat supply growing (heating accounts for about 40% of UK energy use) some fascinating new ideas are emerging about new ways of supplying consumers' needs- using pipes. At present we transmit electricity long distances from large power plants, with up to 10% transmission losses and even more local distribution losses. But you could have a large remote biomass fired plant, perhaps using biogas from a landfill site, which transmits heat to consumers in cities. Or a large anaerobic biogas digester on the edge of a town or city, or even miles away, using municipal waste, and feeding biogas along a pipe to an inner city CHP plant. More likely biogas will just be added into the conventional gas main - that's actually now been agreed as an acceptable option and some projects are underway or planned. But one way or another we may be seeing a lot more pipes in future...whether running heat or gas, the attraction over electricity being that both can be stored. We may even see the return of the classic large inner city gasometer for gas storage, along with local buffer heat stores, as are used in parts of northern Europe as a part of local district heating systems.
Other renewable energy sources can also be run into heat stores- solar heat for example, allowing for variable supply to be matched to variable demand. There are even some inter-seasonal solar heat stores in operation. Once again, put simply, while no one is suggesting we don't do both where appropriate, it's easier to insulate a heat store than a whole house. And it's not just solar heat, or geothermal heat, we could store. Dr Mark Barrett, senior researcher at the UCL Energy Institute, has suggested that we could use our domestic hot water cylinders as a national distributed heat store, storing excess energy from wind turbines and other large but variable electricity supplying renewables, like wave and tidal power, ready for use to reduce heat demand peaks.
Plenty of new ideas then to suit all tastes, and good news for plumbers and pipelayers! And an interesting challenge to those who think in terms of an 'all-electric' future.
James Thurber taught us that the world is divided into two kinds of people: those who think the world is divided into two kinds of people and those who don't. I belong to the former group, believing that the world is divided into two kinds of people: those who think words are interesting and those who don't. Again, I belong to the former sub-group. If you belong to one of the other sub-groups, you might want to read something other than this article, because an obsession with words can be tiresome.
One of my jobs recently has been to coordinate the compilation of a glaciological glossary for the International Association of Cryospheric Sciences. The Glossary of Mass-balance and Related Terms is likely to be of interest mainly to specialists, but one of the words we had to define was "glacier". We had a lot of fun with this, it being the sort of thing about which specialists find it easy to disagree. But where did the word come from?
The Oxford English Dictionary says that a glacier is "a large accumulation or river of ice ...", and quotes William Windham and Peter Martel as the first persons to use the word in English, in a letter printed in 1744 in which they describe excursions from Geneva to the glaciers around Chamonix in 1741 and 1742. They spelt it glaciere, apparently conforming to the spelling and pronunciation of the local inhabitants. The French speakers of Savoy seem to have been undecided about whether the noun should be feminine or masculine. In modern French a glacier is a glacier and a glacière is an icehouse or cold cellar. That the English word comes from French will not surprise anyone, but where did the French, or the Savoyards, get it from?
The word may first have appeared in print in 1574, in Josias Simler's De Alpibus Commentarius: "... it is called Gletscher by our people". Gletscher is one of the words for glacier in modern German, and it looks like a fairly obvious borrowing from a French or Italian dialect of the Alps. The modern Italian word is ghiacciaio. The Trésor de la langue française in my university's library tells me that glacier or glacer is first recorded from the west of Switzerland in 1332.
You can read Windham and Martel if you can find a library with a subscription to Eighteenth Century Collections Online. It is an intriguing insight into the enquiring outdoor 18th-century mind, and is the first serious work of glaciology ever written in English. One of the intriguing things about it is that the authors' understanding of glacier has all the ingredients that we settled on a few months ago as essential for our 21st-century glossary definition: low temperature, frozen water on land and, most intriguing of all, motion: "the Glaciere is not level, and all the Ice has a Motion from the higher Parts towards the lower".
Windham and Martel were relying for these facts on the local inhabitants, who not only understood that the ice itself "has a motion", but knew from their own observation that the glaciers were extending progressively further into the valleys. We have the opposite problem in the 21st century.
So the idea that a glacier is ice in motion has a long history. I suspect that the idea goes back long before 1332, to the unknown first coiner of the word. The glac- with which it begins is from glacies, the Latin for ice, but the ending -ier is more suggestive. Speakers of French have made heavy use of this suffix. The Trésor de la langue française takes several pages to list all the work it does, but a leading part of its job is to describe the idea of carrying or delivering. Just as a pear tree is a poirier or carrier of pears, so it seems reasonable to guess that a glacier is a deliverer of ice.
We like to think that our new Glossary of Mass-balance and Related Terms is an up-to-the-minute summary of glaciological thinking, but this kind of thinking has evidently been going on for longer than we are apt to think.
Austin Energy, the municipal utility of Austin, Texas, sells the most renewable energy in the United States, and has done so for the last several years. It sells the renewable power via its voluntary GreenChoice® program, and has done so since about 2000. However, as the utility strives to increase the percentage of renewable energy in its total mix, the latest batch of green power is not selling, as it is priced almost 80% higher than the last one.
To get residential and commercial customers to sign up for renewable power that was more expensive than the normal rate, Austin Energy sold the GreenChoice® power using a charge fixed for ten years. This was the selling point that caused the program to sell out for the last 8 years. The customer gets 100% renewable power at a fixed price that can hedge against rising natural gas prices, which dictate the marginal cost of electricity in Texas. And since 2000, the natural gas price at the wellhead has risen from approximately $2/MMBtu to the range of $6/MMBtu, even though as of this writing the price is below $4/MMBtu.
Austin Energy bills electricity using two different charges, both on a $/kWh basis. One charge is applied to all customers, but differs for residential, commercial, and industrial customers, and covers many of the costs of distributing electricity. For residential customers during the summer, this base charge is 3.55 cents/kWh for the first 500 kWh consumed in a month and 7.82 cents/kWh for each kWh over 500. The second charge is the fuel charge, which fluctuates as necessary to cover the costs of fuel; the GreenChoice® charge replaces this fuel charge for those who sign up. Listed here are the GreenChoice® "fuel charges" for all of the batches of renewable power sold by Austin Energy:
GreenChoice® "Fuel Charge"
Batch-1 Green Power Charge: $ 0.0170 per kWh
Batch-2 Green Power Charge: $ 0.0285 per kWh
Batch-3 Green Power Charge: $ 0.0330 per kWh
Batch-4 Green Power Charge: $ 0.0350 per kWh
Batch-5 Green Power Charge: $ 0.0550 per kWh
Batch-6 Green Power Charge: $ 0.0950 per kWh
and for comparative purposes, the fuel charge since 2000 that is replaced by the GreenChoice® charge:
Austin Energy "Fuel Charge"
Jan 1999 - Jul 2000 1.372 cents/kWh
Aug 2000 - Oct 2000 1.635 cents/kWh
Nov 2000 - Jan 2001 2.211 cents/kWh
Feb 2001 - Dec 2001 2.682 cents/kWh
Jan 2002 - Jun 2003 1.774 cents/kWh
Jul 2003 - Oct 2003 2.004 cents/kWh
Nov 1, 2003 - Dec 31, 2003 2.265 cents/kWh
Jan 1, 2004 - Dec 31, 2005 2.796 cents/kWh
Jan 1, 2006 - Dec 31, 2006 3.634 cents/kWh
Jan 1, 2007 - May 31, 2007 3.343 cents/kWh
Jun 1, 2007 - Dec 31, 2007 3.044 cents/kWh
For electric bills received beginning Jan 1, 2008 3.653 cents/kWh
Batch-5 of GreenChoice® power sold out during 2008. Batch-6 is now open for voluntary subscription, but is not selling. Recall that this charge substitutes for the normal fuel charge that is currently 0.03653 $/kWh, so the GreenChoice® charge is 2.6 times larger than the comparable fuel charge. Therefore, considering the base charge plus the fuel charge, a residential customer signing up for the latest batch of GreenChoice® power will pay 13.05 cents/kWh for the first 500 kWh, and 17.32 cents/kWh for each additional kWh - all at a fixed price for 10 years. Comparatively, a non-GreenChoice® residential customer will pay 7.2 cents/kWh for the first 500 kWh, and 11.5 cents/kWh for electricity after the first 500 kWh in the month.
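The effective rates above follow directly from adding the base and fuel charges; here is a small sketch (figures in cents/kWh, taken from this post - not an official Austin Energy calculator):

```python
# Austin Energy residential summer pricing: two-tier base charge plus either
# the standard fuel charge or the Batch-6 GreenChoice charge (cents/kWh).
BASE_FIRST_500 = 3.55      # base charge, first 500 kWh in a month
BASE_OVER_500 = 7.82       # base charge, each kWh over 500
STANDARD_FUEL = 3.653      # standard fuel charge from Jan 2008
GREENCHOICE_BATCH6 = 9.50  # Batch-6 GreenChoice "fuel charge"

def monthly_bill_cents(kwh, fuel_charge):
    """Total bill in cents for a month's usage under the two-tier base charge."""
    first = min(kwh, 500)
    rest = max(kwh - 500, 0)
    return first * (BASE_FIRST_500 + fuel_charge) + rest * (BASE_OVER_500 + fuel_charge)

# Effective rates quoted in the post:
print(round(BASE_FIRST_500 + GREENCHOICE_BATCH6, 2))  # 13.05 cents/kWh
print(round(BASE_OVER_500 + GREENCHOICE_BATCH6, 2))   # 17.32 cents/kWh
print(round(BASE_FIRST_500 + STANDARD_FUEL, 1))       # ~7.2 cents/kWh
print(round(BASE_OVER_500 + STANDARD_FUEL, 1))        # ~11.5 cents/kWh
```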
This difficulty in selling the #1 renewable power program in America provides useful data on how people perceive the value of renewables. Before Batch-6 of GreenChoice®, the only renewable energy that Austin Energy was purchasing was wind power generated in West Texas. As of this year, Austin Energy has made power purchase agreements to buy power from 100 MW of biomass generation and 30 MW of photovoltaic solar power on land just to the east of Austin. Due to the increase in wind turbine prices leading up to the middle of 2008, most of the new wind farms constructed in 2008 were built at significantly higher cost than those built just a few years earlier - when one would usually expect costs to decrease due to better technology. In fact, data from the Energy Efficiency and Renewable Energy office of the US Department of Energy clearly show that wind turbine prices bottomed out in 2001 at approximately $1,400/kW and were over $1,900/kW in 2008. However, the price of power ($/MWh) is still decreasing.
All of this means that renewable energy may be becoming more expensive and some of the easy options that people thought were expensive in the past were not. This also brings to the forefront the need for conservation of electricity as using half of the electricity at twice the price is still the same expenditure. The model of decoupling electricity sales from utility profits is a good start that many utilities and regulatory bodies have made. Let's see what other good ideas we can come up with. "Cash and prizes" seem to work for gameshows, maybe it could work for "energy conservation games!"
Dealing with nuclear waste has proved to be the Achilles heel of the nuclear fuel cycle. Although there are plans, no one has yet established a full-scale long-term repository for high-level waste. However, enthusiasts for reprocessing spent nuclear fuel have claimed that, as well as producing more fuel, it is a way to reduce the amount of high-level waste.
That's true up to a point, in that reprocessing is basically about extracting the plutonium and the left-over uranium- so what's left is lower or medium grade waste. But that doesn't change the total amount of radioactive material. Indeed the chemical separation process used in plants like THORP at Sellafield creates a lot of secondary activated materials- more low and medium grade wastes to deal with. And the separation process is complex, expensive and a major source of radiation exposure risk for both workers and the public: e.g. most of the accidental leaks, and also the permitted emissions, from the nuclear cycle have come from reprocessing plants. It's been estimated that nearly 80% of the collective occupational radiation dose associated with the complete nuclear fuel cycle comes from reprocessing activities, measured on the basis of the dose per kWh of power finally generated.
The UK government has decided that fuel from any new reactors built here should not be reprocessed. It is cheaper and easier to dry store the spent fuel. Quite apart from the cost, that's not a surprising decision given that THORP has been out of action since an internal leak in 2005- it's not clear if it will ever reopen fully before it's due for final decommissioning. Moreover, we don't now need plutonium for weapons (we have enough) or to fuel Fast breeder reactors- the UK Fast Breeder programme at Dounreay was closed a decade or so ago. Some of the existing stock of plutonium has been turned into a new fuel, mixed with uranium oxide, called MOX, but the plant for making this has also had problems and may soon be abandoned.
The UK and France have been the only countries with major reprocessing plants- they have reprocessed fuel for other countries. The US backed away from reprocessing in the 1970s- there were worries about the cost and about the risk of illegal diversion of plutonium. But President (W) Bush reinstated the idea- as part of a major US-led global nuclear push, the Global Nuclear Energy Partnership (GNEP). One version of the idea was that the USA would supply small sealed nuclear plants (e.g. mini nukes of the sort being developed for remote sites) to client states around the world, focussing on developing countries, the spent fuel from which would be brought back to the US for reprocessing- to extract the plutonium and uranium. This material could then be used in a new fleet of US nuclear plants, including possibly fast breeder reactors. The claim was that this would be a closed fuel cycle, controlled by the US, with no (or less) risk of proliferation/diversion.
Enthusiasts talked up the role that could be played by the proposed Integral Fast Reactor (IFR) linked in with on-site pyroprocessing to recycle spent fuel. That approach is claimed to produce much less secondary waste than conventional chemical PUREX reprocessing. And it was claimed that some wastes could actually be burnt up in the reactor itself.
However, President Obama seems much less enamoured of the nuclear option. While not opposed to it, his pre-election New Agenda web site said 'Before an expansion of nuclear power is considered, key issues must be addressed including: security of nuclear fuel and waste, waste storage, and proliferation'.
And in office he withdrew $50 billion in loan guarantees for new nuclear plants that had been expected to be included in the US Economic Stimulus funding, and also halted work on the proposed nuclear waste repository at Yucca Mountain in Nevada. It was expected to cost $96.2 bn. About $13.5 bn has already been spent on it. Other sites may now be looked at.
Most recently the US Dept of Energy cut its assessment work on the US part of the Global Nuclear Energy Partnership programme, since, as the World Nuclear News service put it, the US 'is no longer pursuing domestic commercial reprocessing, which was the primary focus of the prior administration's domestic GNEP program'. It added that 'As yet, DoE has no specific proposed actions for the international component of the GNEP program'.
25 countries had joined the GNEP, but the US was the leader, so it's unclear what will happen to it - and to reprocessing. Japan has been trying to develop its own reprocessing capacity, but at $20 billion, it's proved to be about three times more expensive than originally budgeted. And as the world's only non-nuclear-weapons state operating such a facility, Japan has attracted substantial national and international opposition to the project. Overall, while it's certainly not yet abandoned everywhere, it does seem clear that reprocessing is increasingly falling out of favour around the world- except perhaps in countries seeking a way to make bombs!
An article in the July 7th edition of the Wall Street Journal (WSJ) describes how the use of wood pellets is on the rise as a fuel for electricity in the EU. They are the "new Tulip", now being traded as a commodity on the Amsterdam energy exchange. Looking up the price for industrial wood pellets, I find it hovering in the range of 130-140 €/MT (metric ton). With an energy content near 7,500 Btu/lb, this is similar to low rank (brown or lignite) coal.
With the well-written WSJ article referenced above, I won't discuss the issue further here, other than to marvel that wood is competing with fossil fuels and other modern renewables for the generation of electricity and heat. Granted, wood pellet stoves are much more controlled and efficient than your great-great-great-great-great-great-great-great (i.e. great8 to great10) grandfather's wood burning stove. But this story is still a good example of technology struggling to overcome the limits of resources and moving back to a previous fuel. Unfortunately, we know that there are limits to the quantity of wood that can be regrown sustainably, but this is also an important feedback.
Over the last 300 years industrialized society has moved away from wood because the fossil energy resources had much higher energy density and concentration in mines and fields with oil and gas. Now industrialized society is moving somewhat (albeit on a very small scale) back to biomass in the form of these wood pellets for heat and electricity (often thrown into boilers with coal), but also including biomass for biofuels. This represents the struggle of "technology" to solve problems that society wishes to solve (in this case less greenhouse gas emissions) and the lack of focus of businesses to create substitutes for the services that primary energy resources (i.e. oil, gas, nuclear materials, biomass, sun, etc.) provide and that people want.
Unfortunately we have traditionally waited for the retrospective economic effects to tell us to return to a focus upon the value that energy resources and services provide to civilization. Data from the last 40 years shows us that people and the government focus upon energy costs when they become approximately 9-10% (at least half is for oil) of the gross domestic product of the US (for example see data at the EIA and chart in this paper by Charles Hall). Full data are not out for the last two years, but it appears as though we passed this threshold in 2007 and were likely well above 10% in 2008.
But this focus upon energy services has become, will become further, and should be central to discussions of economics. What percentage of GDP should we be spending upon energy? If this percentage gets too low, consumers and industry are not focused upon energy conservation and long term impacts. If this percentage gets too high (or rises too quickly) then general economic slowdowns tend to occur, mostly driven by oil prices over the last 40 years. Another question: How much are we in control of what this "energy/GDP" ratio is or can be? The amount of energy obtained compared to the energy spent searching for and obtaining primary energy resources, or energy return on energy invested (EROI), is highly influential. If we get 50 barrels of oil when using the energy equivalent of 1 barrel of oil (e.g. EROI = 50) to find and extract that oil, then we have 49 barrels of oil to power the rest of the economy. If the EROI is 20, then the picture is significantly different. Less energy for health care. Less energy for education. Less energy for agriculture and food production. Etc.
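The barrels example works out as follows (a toy sketch of the EROI arithmetic, echoing the numbers in this post, not a model from any source):

```python
# Net energy delivered to the rest of the economy per unit of gross output.
def net_energy(gross_barrels, eroi):
    """Barrels left over after subtracting the energy spent finding/extracting."""
    invested = gross_barrels / eroi
    return gross_barrels - invested

print(net_energy(50, 50))   # 49.0 barrels net at EROI = 50
print(net_energy(50, 20))   # 47.5 barrels net at EROI = 20

# The net fraction of gross output, 1 - 1/EROI, falls steeply at low EROI:
for eroi in (50, 20, 10, 5, 2):
    print(eroi, 1 - 1 / eroi)
```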
We must understand the association between EROI and GDP much better than we currently do. Otherwise, we won't be able to foresee the longer term and very important implications of our energy policy and technology decisions. EROI must be used and understood as a measure of technical innovation.
It's well known that cats kill many more birds than anything human beings have come up with, including cars and aeroplanes, but there's an interesting survey of avian deaths from wind, fossil and nuclear plants: 'Contextualizing avian mortality: A preliminary appraisal of bird and bat fatalities from wind, fossil-fuel, and nuclear electricity', by Benjamin K. Sovacool in Energy Policy 37 (2009) 2241-2248.
Based on operating performance in the US and Europe, this study offers an approximate calculation of the number of birds killed per kWh generated for each system. It estimates that wind farms and nuclear power stations are each responsible for between 0.3 and 0.4 fatalities per gigawatt-hour (GWh) of electricity, while fossil-fuelled power stations are responsible for about 5.2 fatalities/GWh. Although this is only a preliminary assessment, the estimate implies that wind farms killed approx. 7,000 birds in the USA in 2006, while nuclear plants killed about 327,000 and fossil-fuelled power plants 14.5 million. However, the data is sparse, especially on bats, and the paper concludes that further study is needed.
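To see how those headline totals follow from the per-GWh rates, here is a back-of-envelope check (the fatality rates are mid-range values from the figures above; the generation figures are my rough assumptions for US output in 2006, not numbers from the paper):

```python
# Back-of-envelope: annual bird deaths = fatality rate (birds/GWh) x generation (GWh).
fatality_rate = {"wind": 0.35, "nuclear": 0.42, "fossil": 5.2}          # birds per GWh
generation_gwh = {"wind": 20_000, "nuclear": 780_000, "fossil": 2_800_000}  # assumed

for source, rate in fatality_rate.items():
    deaths = rate * generation_gwh[source]
    print(f"{source}: ~{deaths:,.0f} birds")
# Yields roughly 7,000 (wind), 330,000 (nuclear) and 14.6 million (fossil),
# close to the totals quoted above.
```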
Even so it provides a useful introduction. It notes that coal, oil, and natural gas-fired power plants induce avian deaths at various points throughout their fuel cycle: e.g. during coal mining, through collision and electrocution with operating plant equipment and transmission cables, and from poisoning and death caused by acid rain, mercury pollution and climate change. The scale of the direct impacts is quite surprising: e.g. a two-year observation of 500 m of power lines feeding a 400 MW conventional power plant in Spain estimated that they electrocuted 467 birds, with an additional 52 killed in collisions with the lines and towers. By comparison wind turbines seem quite benign (birds tend to avoid moving objects), although they too will have power grid links, and there were some early major problems with multiple bird strikes when wind farms were located in migratory paths e.g. in Southern Spain. Clearly sites like that should be avoided, and we must also reduce casual impacts to the minimum possible by sensitive location.
That is something the Royal Society for the Protection of Birds is keen to improve. It has come out with a positive approach to wind farm spatial planning, which it says can avoid problems and help ensure the development of wind power - which it backs as a response to climate change and energy security problems.
The RSPB report, 'Positive Planning for Onshore Wind', warns that 'inappropriately sited wind farms can damage fragile wildlife and habitats, through habitat loss, mortality through collisions and a range of different disturbance effects' but Ruth Davis, RSPB's head of climate change policy, said 'Left unchecked, climate change threatens many species with extinction. Yet, that sense of urgency is not translating into action on the ground to harness the abundant wind energy around us. This report shows that if we get it right, the UK can produce huge amounts of clean energy without time-consuming conflicts and harm to our wildlife. Get it wrong and people may reject wind power. That could be disastrous.'
It now wants to see a UK-wide system of strategically chosen areas. That was the basis of the Welsh "TAN8" planning policy introduced in 2005, which set out seven "Strategic Search Areas" that wind developers were to consider for wind farms. However, TAN8 has had mixed results so far; the RSPB suggests this is partly because there wasn't enough consultation over the selection of the Strategic Search Areas. Scotland is now beginning to implement a "spatially explicit" approach to onshore wind planning via local development plans, but the RSPB report also highlights successful systems in Germany and Denmark: the RSPB commissioned a report from the Institute for European Environmental Policy, which found that wind farms were being developed rapidly on the Continent without harmful impacts on bird populations.
* A one-year research programme carried out in the USA by the Bats and Wind Energy Cooperative (a partnership of the wind industry, government and conservation groups), at the Mountaineer windfarm in W. Virginia and the Meyersdale windfarm in Somerset County, Pennsylvania, found "substantial" bat mortality at both sites, with a daily kill rate of 0.7 bats per turbine. At both windfarms, most bats were killed on nights when average wind speeds and power production were low but the turbine blades were still moving at relatively high speeds, with fatalities increasing just before and after the passage of storm fronts, and when bat activity was highest in the first two hours after sunset. Temporarily shutting turbines down during such periods is one possible remedial response.
Sorry for yet another acronym, and sorrier still that the odds are against your guessing correctly what it stands for. No, it is not a file in Portable Document Format, nor yet a Post-Doctoral Fellow, but a Probability Distribution Function. These PDFs are an essential tool in modern efforts to grapple with environmental risk, uncertainty and just plain ignorance.
Suppose you knocked on a lot of doors, asking to measure the height of the person answering the door. (This is just a thought experiment. Actually doing it would not be a good idea.) At each door you would have a fair chance of getting a measurement equal to the average height of the human population. In fact, averaging your large sample of heights would be the best way to estimate that height, but the sample would also tell you a lot about how often different actual heights crop up. Once in a while the door would be opened by a dwarf or a giant, but the odds are in favour of the person who opens the door being average.
That is the essence of a probability distribution function. The PDF is just the frequency with which anything that varies is spread across the range of its possible values. The idea works just the same for the outcomes of climate model runs as for measured human heights.
We know, from previous large samples, that the PDF of human height is symmetrical. Short people are less likely than average people, and so - with about the same odds - are tall people. This is not true of climate model runs. Their PDFs have fat tails.
All credible climate models predict warming over the next century. The most common outcomes are those in which, by the time the carbon dioxide concentration reaches twice the value it had in about 1850, the temperature will have risen by between 2.0 and 4.5 °C.
The snag is that "most common" is not the same as "average". A rather small proportion of the model runs predict much greater warming than do the commonest ones, and there is no compensating fatness of frequency on the less-warming side. The average of the predictions is noticeably warmer than the commonest prediction.
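The mean-versus-mode point can be illustrated numerically. The sketch below is not a climate model: a lognormal distribution simply stands in for a right-skewed, fat-tailed spread of model outcomes, and its parameters are arbitrary choices made for illustration, not climate data.

```python
# Illustration: for a fat-tailed, right-skewed distribution, the mean
# (average) sits above the mode (commonest value), pulled up by the tail.
import random
import statistics

random.seed(42)
# 100,000 samples from a lognormal chosen to cluster around ~3
samples = [random.lognormvariate(1.05, 0.35) for _ in range(100_000)]

mean = statistics.mean(samples)

# Crude mode estimate: centre of the most heavily populated 0.25-wide bin
counts = {}
for s in samples:
    b = round(s * 4) / 4
    counts[b] = counts.get(b, 0) + 1
mode = max(counts, key=counts.get)

print(f"mean = {mean:.2f}, mode = {mode:.2f}")  # the mean exceeds the mode
```

No compensating fatness on the low side means the gap between mean and mode is one-directional: the average prediction is warmer than the commonest prediction.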
How do we respond to this undoubted, unpleasant fact? It is telling us something about the real climate of the future, but in an unsatisfactory way. I think that it is at least a guide to right action. There are actions that will make an improbable catastrophe even more improbable, and if their costs are acceptable and their benefits worthwhile in their own right, then those actions look all the more prudent. Policies founded on this reasoning are called "no-regrets policies". We probably need more of them.
This is supposed to be a blog about glaciology, and I haven't yet said anything about glaciers. Well, consider the West Antarctic Ice Sheet. Most of it is grounded below sea level, and there is a chance that continued warming will cause it to drain catastrophically into the ocean, starting perhaps in the next hundred years. Our best understanding is that not only is this chance extremely slim but also it would take several hundred years for the catastrophe to play out. But we cannot be certain on either of these points, or on whether the other tail of the PDF (ice-sheet growth because of increased snowfall in a warmer world) is equally probable.
In fact, we glaciologists are so far behind the climatologists that we haven't even got a PDF yet. One course of right action is therefore obvious: study the problem intensively. The PDF either will or will not turn out to be fat-tailed, and if it is fat it will be either because of physics or because of ignorance. The first exploratory efforts to model the vulnerability of ice sheets are just beginning to appear, and the rush is on to be able to say something quantitative about the likelihood of ice-sheet collapse in time for the next major assessment by the Intergovernmental Panel on Climate Change, due in 2014. At the moment, though, all we can say is "Watch this space".
Arctic sea ice has become one of our bellwethers, in part because for the past 30 years we have been able to watch it expanding and retreating nearly in real time, but also because it definitely obeys the laws of physics.
Recently the extent of Arctic sea ice has decreased fairly steadily. It is also notable that 2007 was a year of record minimum extent (in September), prompting conjecture about the Arctic becoming ice-free soon, with an ice-free September perhaps as early as 2013. However 2008 did not quite match the 2007 record. This year, and the next couple of years, will show us whether 2007 was a blip or the start of something even more serious than the 30-year trend.
I still remember our geography teacher at school starting to teach us climatology by saying "The climate is the average weather". He went on to tell us about the convention of presenting the climate as "normals", or averages, over 30 years. One bad summer simply doesn't add up to a climatic change. But it seems that it is human nature both to get rattled when a record is broken and to forget about the problem when the record-breaking behaviour is not repeated.
Equally, and unfortunately, it is not in human nature to worry much about systems like the climate that evolve over 30-year time scales. It is too easy to make the mistake of thinking that bad things aren't going to happen for 30 years.
Knowing that sea ice obeys the laws of physics, we should expect replacing bright sea ice with dark open water to illustrate the idea of feedback. The less reflective the surface, the more the radiative heating, and the more the loss by melting, and the less reflective the surface ... . In other words, the future of sea ice could start to look much, much worse quite suddenly. However, knowing too that we don't know everything, we should be concerned about the inability of climate models to agree on the future of sea ice.
Sea-ice predictions come from a couple of dozen large-scale climate models. They diverge wildly over the next century. Many of them don't even reproduce the recent evolution of sea ice, a clear indication that the modellers have development work to do. Two recent analyses show just how unsure we are about what is coming, but both also offer interesting ideas about how to cope with uncertainty as manifested in poor model performance.
Boé and colleagues point out that the models that do the best job of simulating the past 30 years are also the models that predict the earliest disappearance of sea ice. Using the A1B scenario for greenhouse-gas emissions, considered middle-of-the-road, this happens in about 2060-80 if we agree that disappearance means dropping below 10% of the 1979-2007 average in September. Boé and colleagues explain this observation in terms of the models' accuracy in describing the proportion of the ice that was thin to begin with, and therefore more at risk of disappearing altogether.
Wang and Overland selected the best-performing models by requiring them not only to come within 20% of the September observations for 1980 to 1999, but to beat the same target for the range of extents observed during the whole year. The thinking is that a model is likely to be more trustworthy if it matches more of the firm evidence. They find, with the six models that qualify, that September sea-ice extent is most likely to drop below about 10% of the recent average in 2030-2050.
Comparing the two studies, Mat Collins prefers to put his money on the later Boé estimate rather than on Wang and Overland's. His reasons are cogent, but I think that the crucial point is to exclude the models that obviously get the recent history of sea ice wrong, something Boé and colleagues do not do.
Our understanding of the laws of physics already gives us the message "Sooner or later", but focusing on the models that are not obviously wrong - not the same thing as obviously right, of course - the message becomes "Sooner rather than later".
One of the big hopes for the energy future is nuclear fusion - not messy uranium fission, with all its problems, but allegedly clean and hopefully prolific hydrogen-based nuclear fusion. However, it's a long time coming - and it's costing a lot: $20 billion globally so far, and more soon. The cost of the new ITER project at Cadarache in Southern France has risen from £9 billion to, reportedly, around £18 billion. It's a joint EU, Russia, US, China, Japan and S. Korea project, toward which it seems the UK is contributing around £20m p.a. That's in addition to the £26m for nuclear fusion research in the UK through the Engineering and Physical Sciences Research Council for 2007-8.
For comparison, government expenditure on research and development for all the renewable energy sources in 2007-8 was £15.92 million via the Research Councils and £7.53m via the Technology Strategy Board, plus some related policy work via the UK Energy Research Centre and the Tyndall Centre for Climate Change Research.
Fusion is clearly getting favourable treatment compared to renewables. Is this wise? After all there is a wide range of renewable technologies, a dozen or more very different systems, not just one. Why the imbalance?
The claim is that fusion offers, as the EURATOM web site says, 'an almost limitless supply of clean energy'. And yet the UK Atomic Energy Authority say that, assuming all goes well with ITER and the follow up plants that will be needed before anything like commercial scale is reached, fusion only 'has the potential to supply 20% of the world's electricity by the year 2100.' That's not a misprint - 20%, if all goes well, in 90 years time. Renewables already supply that now globally, including hydro, and the new renewables like wind, solar, tidal and wave power, are moving ahead rapidly- and could be accelerated.
By comparison, the prospects for fusion are actually rather mixed. The physics may be sorted, up to a point. The UK's JET experiment at Culham managed to generate 16MW briefly (though not net, of course: it needed 23MW of input). But the engineering is going to be complicated. How do you generate electricity from a radioactive plasma at 200 million degrees C? The answer, it seems, is by absorbing the neutron flux in a surrounding blanket that then gets hot, with pipes running through it to extract the heat, which is then used to boil water and raise steam - as with traditional power plants. Not very 21st century...
As yet, few people would hazard a guess as to the economics of such systems. The ITER web site (www.iter.org) says 'it is not yet possible to say whether nuclear fusion based on magnetic confinement will produce a competitive energy source'.
But at least there won't be any fission products to deal with. However, the neutron flux will activate materials in the fusion reactor, which will interfere with its operation and have to be stripped out regularly - so there will still be a radioactive waste storage problem, albeit a lesser one. The materials will only have to be kept secure for a hundred years or so, rather than thousands of years as with some fission products.
The risk of leaks and catastrophic accidents is said to be lower than with fission. Fusion reactions are difficult to sustain, so in any disturbance to normal operation the reaction would be likely to shut itself down very rapidly. But it is conceivable that some of the radioactive materials might escape. The main concern is the radioactive tritium in the core of the reactor, which, if accidentally released, could be dispersed in the environment as tritiated water, with potentially disastrous effects. To put it simply, it could reach parts of the body that other isotopes couldn't.
Finally, what about the fuel source? The basic fuels in the most likely configuration would be deuterium, an isotope of hydrogen found in water, and tritium, another isotope of hydrogen, which can be manufactured from lithium. Water is obviously plentiful, while it is claimed that lithium reserves might last for perhaps 1,000 years, depending on the rate of use. That rate presumably depends on the competing use of lithium in Li-ion batteries in consumer electronics and, possibly soon and on a much larger scale, in electric vehicles.
While that could be a problem for the future, there is a long way to go before we need worry about fuel scarcity. The ITER project is small (rated at 500 megawatts) and won't start operating until 2018, and, even assuming all goes well, it is only a step toward a commercial pilot plant - and that, at best, is decades away. There could of course be breakthroughs. While the ITER magnetic confinement device is seen as the main line of attack, the US is also looking at laser fusion. But, even if it works, that too is unlikely to provide a practical energy source for some while.
We need to start responding to the climate problem now. So why are we spending so much taxpayers' money on fusion? It might eventually be useful for powering spacecraft. But on Earth? Wouldn't it make more sense to speed the development and deployment of the full range of renewable technologies, and make use of the free energy we get from the fusion reactor we already have - the sun?
The June 13th edition of The Economist has a cover story about the heavy debt burden being placed on the younger generation of US citizens by government spending on bailout packages. There are some staggering numbers (e.g. every household effectively owes $483,000 to pay for unfunded obligations for elderly pensions and health care), and the US is almost assured of reaching a debt of at least 100% of GDP within 1-2 years. This debt is worrisome not only because of the typical concerns about paying back money that is owed, but because the energy situation is different now than when the US carried gross debt of 120% of its gross domestic product just after World War II. Future energy resources are poised to have less energy return on energy invested than past energy resources - and that means less of a driver for economic growth.
The situation in the late 1940's was that much of the US spending that caused the debt to increase went into factories, technology development, and worker training. These investments in infrastructure and skills then transferred back to the private sector, much of which had been repurposed for the war effort in the first place - think typewriter factories turning out rifles. Then, with much of Europe and Japan destroyed after serving as the battlefield, the US was the one remaining industrialized country with no infrastructure damage at home. Therefore, the world bought American products ... and neither the US nor the world had yet peaked in conventional oil production.
Thus, there are two important concepts (at least) that are fundamentally different for our future than our past:
(1) In dealing with the current economic recession and increasing debt load, the US is no longer the only industrial game in town, and
(2) Energy resources are generally more scarce and those remaining have less energy return on energy invested (EROI), or "bang for the buck".
I will not dwell on item (1), as it is obvious today that China is the world's manufacturer. China now emits about as much CO2 as the US, although Americans still consume more (i.e. are responsible for more) CO2 embodied in purchased products and services than the Chinese. In that sense China is a net exporter of embodied CO2 and the US a net importer.
It is item (2) above that is the more long term concern and a fundamental driver in how world society will change. The US used its oil production capacity to fuel the Allied victory in World War II as the Germans had succumbed to turning coal into liquid diesel, and the Japanese were forced to pursue Southeast Asian islands in search of oil. The US strategy was still affected by not having oil and diesel available at any quantity desired, but to a much lesser extent than that of the opposition.
During the 1950's the oil available in the US fuelled the boom of the automobile age and the rebel attitudes embodied by James Dean. With the US oil production peak in 1970 and the Arab oil embargoes of 1973-74, Phase I of the "limits to growth" movement began. People could now conceptualize that there were limits to finite resources such as oil, but that is only part of the story. The energy return on energy invested - the amount of energy obtained, say in barrels of oil, per unit of energy (e.g. one barrel of oil) spent obtaining it - is inevitably declining for fossil resources, and the EROI of renewable resources is not as high as the best (and past) EROI of oil, natural gas, and coal. In fact, some research (http://www.esf.edu/EFB/hall/images/Slide1.jpg) suggests that the renewable resources/technologies with the highest EROI are wood and hydropower - the oldest form of primary energy and the oldest means of powering automated processes (water wheels). You could also say that investing in modern hydropower dams played a large part in bringing the US out of the Great Depression.
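A toy calculation shows why EROI matters so much: of each unit of gross energy produced, 1/EROI must be reinvested to obtain the next unit, leaving 1 - 1/EROI as the net fraction available to the rest of society. The EROI values below are hypothetical round numbers chosen for illustration, not measurements from the EROI literature.

```python
# Net energy available to society as a function of EROI.
def net_fraction(eroi: float) -> float:
    """Share of gross energy left after paying the energy cost of
    obtaining that energy (net energy = gross * (1 - 1/EROI))."""
    return 1.0 - 1.0 / eroi

# Hypothetical, illustrative EROI values only.
for name, eroi in [("mid-century oil", 50.0), ("recent oil", 15.0),
                   ("hydropower", 40.0), ("solar PV", 8.0)]:
    print(f"{name:16s} EROI {eroi:4.0f}:1 -> net {net_fraction(eroi):.1%}")
```

Note the asymmetry: falling from 50:1 to 40:1 barely dents the net fraction, but each further step down toward single-digit EROI eats into it faster and faster - the so-called "net energy cliff".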
We are now in Phase II of the "limits to growth" movement, with greenhouse gas emissions looming as an even larger environmental limit. The original concerns about energy resource limits are also back at the center of discussion. The "easy oil", or that with high EROI, has been found and extracted. Shale gas finds and extraction methods (i.e. fracking) have turned new natural gas resources into economic reserves accessible at any time, but at a higher cost than in the past, and likely with lower EROI than past natural gas fields (an ongoing study is attempting to quantify the EROI of shale gas).
One set of solutions envisioned to solve our energy, environmental, and economic dilemma is the use of renewable energy technologies. Modern renewable technologies such as wind turbines, solar photovoltaic panels, and concentrating solar power systems produce power when the input resources (wind and sun) are available. Since these resources are intermittent, research and technology focus upon how to create either steady electrical output or output on demand. This essentially means generating excess electricity at some time, storing that energy, and releasing it when desired. This extra investment in storage systems requires energy, and again lowers the EROI of the electricity-generating system as a whole.
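The storage penalty can be sketched with a simple energy balance, under stated assumptions (all figures below are hypothetical, chosen only to illustrate the mechanism): some share of output is cycled through storage with round-trip losses, and the storage hardware adds embodied-energy cost on top of the generator's own.

```python
# System EROI = energy delivered / energy invested, once storage is added.
def system_eroi(gen_eroi: float, storage_embodied_frac: float,
                stored_share: float, round_trip_eff: float) -> float:
    e_out = 1.0                               # gross energy generated
    e_invested = e_out / gen_eroi             # energy cost of the generator
    e_invested *= 1.0 + storage_embodied_frac # plus storage's embodied energy
    # Round-trip losses on the fraction of output cycled through storage:
    delivered = e_out * (1.0 - stored_share * (1.0 - round_trip_eff))
    return delivered / e_invested

# Hypothetical wind farm: EROI 20:1, storage adds 50% embodied energy,
# 30% of output is cycled through storage at 80% round-trip efficiency.
print(f"system EROI ~ {system_eroi(20.0, 0.5, 0.3, 0.8):.1f}:1")
```

Under these assumed numbers the firmed system returns roughly 12.5:1 rather than 20:1 - the same generator delivers noticeably less net energy once it has to supply power on demand.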
Because economic growth and energy availability are highly correlated, lower EROI also implies lower monetary return on investment. This in turn means that large investments in modern renewable energy infrastructure will not return as much energy or money in the future as the same quantity of investment in hydropower in the 1930's or oil in the 1950's and 1960's. So economic models that do not account for the energy return on clean/green/renewable technologies, as well as the returns on nuclear and fossil systems, will likely overestimate future economic growth. Some think that we have significantly decoupled our economic system from our energy system, but this is a short-sighted view of the use and influence of historically high-EROI fossil fuels on human civilization. Look at the figure below and ask yourself two questions:
(1) Could this increase in primary energy consumption and GDP have existed without fossil fuels?
(2) Can energy consumption and GDP continue their upward trends into the future, or even maintain current levels, without fossil fuels?
When looking at this 3,000-year time scale, it is appropriate to ask these questions. One can visualize the day when fossil fuels will no longer be economic to extract. This is why more research is needed into EROI and how it affects economic growth. How energy conservation programs (i.e. less energy consumption overall) affect economics and business structures also needs to be better understood. This means going beyond energy efficiency ("energy per unit output"), which businesses are already technically incentivized to pursue. It's time to find out how people (at least those in the middle class who pay taxes yet don't have much disposable income) value their time and habits versus investing in concepts such as the smart grid, electricity storage systems, biofuels, and oil shale.
Why do Americans, who consume on average 350 GJ/person/yr (1 GJ = 1 billion joules of energy) compared to a world average of 80 GJ/person/yr, think that any sacrifice in energy consumption is intolerable when other countries consume one-third to one-half the energy and have just as fulfilling, if not more fulfilling, lives? There are a few good reasons, such as the fact that the US is a relatively large country that requires a good deal of energy to move from city to city. But designing cities and transportation systems to use less energy should be a priority, not an afterthought. Transportation across a large country is just one example of a strategic challenge, but until we stop paying photographers to chase Angelina Jolie and Britney Spears around, I'll know we still aren't serious as a society about tackling the energy challenges of our future. I believe we can have equal if not better livelihoods in the US while consuming less energy in the future. It's not hard to convince Western Europeans of this, but they still have a large number of paparazzi ...