

September 2009 Archives

For a long time, the conventional wisdom was that the maximum extent of glaciers during the last ice age was reached in about 16,000 BC. Then it was realized that better calibration of the rate of production of carbon-14, the main radioactive clock we have for that period, required that the date of the maximum be pushed back to 19,000 BC. Now Peter Clark and colleagues, writing in a recent issue of Science, provide further clarity. They show that the maximum extent of the ice lasted from about 24,500 to about 17,000 BC.

Plants and animals don't grow underneath ice sheets, so datable evidence for them means no ice at the site of observation. Assemble several thousand of these dates and you get a picture of the changing extent of the ice sheets. The picture is sharper for some ice sheets than for others, but cumulatively the evidence for several thousand years of near-stability is fairly impressive.

It contrasts with some other guides to the palaeoclimate. Oxygen isotopes in ocean-floor microfossils are a guide (unfortunately) to two variables: sea level (or the volume of the ice sheets, which amounts to the same thing) and water temperature. The heavier of the two common isotopes of oxygen tends to stay behind when molecules of liquid H2O evaporate from the ocean surface, so a) if the light molecules accumulate in the ice sheets instead, the oceanic oxygen gets relatively heavier, but b) this effect is less marked when it is warmer because now the heavier molecules evaporate more readily.

Disentangling these two controls on oxygen isotopes is a major challenge, but accepted pictures of deep-ocean temperature show a quite sharp minimum at about 17,000 BC. What Clark and colleagues have shown, in contrast, is that the ice volume was near to its maximum, equivalent to a sea level about 128 m lower than today's, for several thousand years before that. This agrees with calculated variations of the solar radiation regime due to changes in the Earth's orbit. The calculations are reliable, and show a broad minimum of Northern Hemisphere radiation centred near 20,000 BC. The concentration of atmospheric CO2 also varies only moderately, between 185 and 200 parts per million, during their suggested glacial-maximum span.

This is evidence that the climate can be stable in more than one state. On graphs which span tens or hundreds of thousands of years, the last ten thousand years, the Holocene, stand out as a time of little change. I am not saying that the climate has not changed during this time. We know, for example, that it was a bit warmer in its earlier than its later part, and the cool spell known as the Little Ice Age, which bottomed out about 1850 AD, is well documented. But the Holocene climate was much less variable than the irregularly cooling climate during the preceding hundred thousand years of the ice age. Now Clark and colleagues have shown that peak ice can be stable as well, in the sense of looking rather flat on a graph (but in a much cooler way). We would naturally like to know why.

Another interesting point is that different ice sheets reached their maxima at different dates. The big standout in this respect is West Antarctica, where the maximum was as late as 12-13,000 BC, but the big northern ice sheets, now long gone except for Greenland, had maxima up to a few thousand years apart, reminding me of the smaller glaciers during the Little Ice Age, which peaked as early as the late 17th century in some places and as late as the early 20th century in others. Evidently, as they paced through their stately dance, neither the Little Ice Age glaciers nor the ice-age ice sheets were much good at keeping in step.

Regional variations are therefore a fact of the global climate, about which it is nevertheless legitimate to make on-average statements. One implication is that we should not pay much attention to people who point out correctly that some parts of the world are not warming very much at the moment, but argue wrongly from this that global warming is a myth.

On January 29 of this year, the Environmental Defense Fund, together with the UK Consulate, hosted a climate conference at the Capitol: "Texas' Changing Economic Climate." At the beginning of the conference, we heard a personal message from Charles, Prince of Wales, to the State of Texas imploring Texans to lead the US, and hence the world, in climate mitigation. At the end of the conference, one of our elected officials suggested Texas may in fact already be a leader in carbon emissions mitigation while at the same time increasing the gross state product. And if Texas has been taking this leadership role by promoting things like a business-friendly environment and a deregulated electricity market, then perhaps other states, and countries, should look to Texas for how to mitigate carbon emissions.

Are those claims true? Is Texas a leader in reducing carbon emissions while increasing economic productivity?

On the surface, it seems plausible. From 2000 to 2005, total CO2 emissions in the state decreased 4.4 percent while economic output increased 16.5 percent. But dig deeper, and claims of real leadership on climate mitigation evaporate. It turns out that global energy prices were the main drivers of those changes, not the state's regulatory environment or business initiatives. Much of the CO2 reduction came from decreased natural gas use by the chemical industry as a result of the rising cost of natural gas. Electricity deregulation in Texas fostered the increased use of natural gas combined cycle technology for electricity generation – helping to maintain relatively steady electric sector CO2 emissions since 2000. Much of the rise in the state's economic output is attributable to the oil and gas industry, buoyed by the same rise in global energy prices.

It is a mistake to think that significant steady and long term CO2 emissions reductions, together with increased gross state product, can be achieved by simply continuing actions of the past five to ten years.

This report examines the data behind claims that Texas has been a leader in reducing carbon emissions while increasing economic productivity. The data shows that the external economic factor of higher energy prices was the main driver in decreasing emissions in Texas from 2000 to 2005, not our pro-business or deregulatory policies. Furthermore, Texas must prepare for the future. Federal climate legislation is on the horizon. This legislation is likely to impose constraints on the Texas economy that will demand even greater reductions in emissions. Texas and the rest of the US states should work to understand how specific industries and consumers will be affected by a federal CO2 constraint. By promoting those businesses that are well-positioned and facilitating restructuring for those ill-positioned, Texas can successfully transition to and maintain leadership within the new carbon-constrained energy economy.

Texas CO2 emissions data

Aggregated data from the Energy Information Administration of the Department of Energy show that, from 2000 to 2005, Texas' CO2 emissions went from 654 million metric tons (MtCO2) to 625 MtCO2 – a decrease of 4.4%. Looking at the data in Figure 1, one can see that the peak year for Texas CO2 emissions was 2002, at 672 MtCO2. Emissions in both 1999 and 2001 were lower than in 2000, and because Texas' CO2 emissions in 1999 are listed at 626 MtCO2, the decrease from 1999 to 2005 was only 0.2%. The choice of baseline year for CO2 emissions can therefore have a large impact on the headline figure, which is a good argument for using a running average that levels out short-term fluctuations in the economy and energy prices.
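
As a quick illustration of the baseline-year point, here is a short Python sketch (my own, not part of the original analysis) that reproduces the two headline percentages from the figures quoted above, plus a simple running-average helper; any years not quoted in the text would need to be filled in from the full EIA series.

```python
# Illustrative sketch: how the choice of baseline year changes the headline
# number, and how a running average damps single-year swings.
# Only the years quoted in the post are real EIA values; the rest of the
# series would come from the full EIA state data.

emissions = {            # Texas CO2 emissions, MtCO2 (values quoted in the post)
    1999: 626,
    2000: 654,
    2002: 672,           # peak year
    2005: 625,
}

def pct_change(start_year, end_year):
    """Percentage change in emissions between two baseline choices."""
    e0, e1 = emissions[start_year], emissions[end_year]
    return 100.0 * (e1 - e0) / e0

print(f"2000 -> 2005: {pct_change(2000, 2005):+.1f}%")   # about -4.4%
print(f"1999 -> 2005: {pct_change(1999, 2005):+.1f}%")   # about -0.2%

def running_mean(series, window=3):
    """Trailing running average over the years present, oldest first."""
    vals = [v for _, v in sorted(series.items())]
    return [sum(vals[max(0, i - window + 1): i + 1]) /
            len(vals[max(0, i - window + 1): i + 1]) for i in range(len(vals))]

print(running_mean(emissions))   # smoother view of the same sparse series
```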

The source of the emissions decrease is revealed by looking one level deeper into the data – at emissions from the industrial sector (see Figure 2). In 2005, the Texas industrial sector was responsible for 179 MtCO2, compared with 218 in 2000 – a 17.6% decrease. By comparison, the drop in overall US industrial sector emissions was only 6.4%. No other major sector, transportation or electric power, decreased its emissions in Texas during the 2000–2005 span. Furthermore, the Texas industrial sector's emissions are dominated by natural gas consumption, and the two track each other closely: Texas' total consumption of natural gas dropped 21% from 2000 to 2005.


Figure 1. Texas' CO2 emissions by fuel.

Figure 2. Texas' CO2 emissions by sector.


Table 1. Comparison of US and Texas CO2 emissions, 2000 to 2005 (MtCO2).

Interpreting Texas CO2 emissions data

There is an important question to ask in interpreting the data showing a drop in industrial natural gas usage and the consequent emissions: did Texas industries simply make fewer goods, or did they find a way to make the same amount, or even more, while consuming less natural gas?

From 2000 to 2005, data from the Texas Comptroller of Public Accounts show that the gross state product increased from $850 billion to $989 billion in constant 2005 dollars – a 16.5% increase in economic output. During that same 2000–2005 span, Texas' total industrial output dropped a few percent before coming back to 2000 levels (see Figure 3). The only industries with substantial economic growth were oil and gas extraction, refining, and primary metals (not shown). The real price of oil and natural gas rose 40% from 2000 to 2005 – and roughly doubled from 1999 to 2005 – providing substantial income and revenue to the Texas oil and gas sector, as well as to the state budget. However, the chemical sector, which uses substantial quantities of natural gas as a feedstock, was down 11%, perhaps tied to the increased cost of natural gas. Additionally, a 13% drop in employment in the chemical industry from 2000 to 2005 provides some evidence of a drop in the quantity of chemical goods produced.


Figure 3. Industrial production indices for Texas.

One can still ask what industrial energy efficiency improvements occurred early this decade in Texas. At the beginning of 2000, approximately 10.3 MW of cogeneration capacity was installed in Texas. By the end of 2005, this had grown to 17.5 MW – a 71% increase in capacity in six years. This matters because cogeneration facilities, also commonly known as combined heat and power facilities, get more useful energy out of the same amount of fuel. Generating electricity and heat from more efficient systems decreases fuel consumption and emissions when it displaces less efficient systems.

However, electricity generation within the industrial sector was relatively constant from 2000 to 2005. Electricity generation from combined heat and power (CHP) facilities increased from 70 to 97 million MWh from 2000 to 2002, and then decreased to 85 million MWh by 2005. Overall, CHP generation increased 21% from 2000 to 2005, practically all outside of the industrial sector. Thus, many CHP facilities were installed, but the demand for their services did not seem to hold up.

The signing of SB 7 in 1999 launched the deregulated electricity market in Texas. This change in policy ended up driving a tremendous increase in the installation and use of natural gas combined cycle (NGCC) units for electricity generation (see Figure 4), although the move to NGCC technology had already begun in the early 1980s. NGCC units use the excess heat from a combustion turbine to generate steam for a steam turbine. This combination makes NGCC power generation much more efficient than generating electricity from either the steam or the combustion turbine alone. Figure 4 shows the clear impact that the deregulation policy had on investment strategy in the electric power sector: from 2000 to 2005 the installation of NGCC units increased by 400%.
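
To see roughly why the combined cycle wins, here is a back-of-envelope Python sketch; the turbine efficiencies and heat-recovery fraction are illustrative assumptions, not data for any actual Texas plant.

```python
# Back-of-envelope sketch of why a combined cycle beats either turbine alone.
# All numbers below are assumptions for illustration; the formula idealizes
# the heat-recovery steam generator.

def combined_cycle_efficiency(eta_gas, eta_steam, heat_recovery=1.0):
    """Fraction of fuel energy converted to electricity by an NGCC unit.

    eta_gas       : gas (combustion) turbine efficiency
    eta_steam     : steam turbine (bottoming cycle) efficiency
    heat_recovery : fraction of the gas turbine's exhaust heat captured
    """
    return eta_gas + (1.0 - eta_gas) * heat_recovery * eta_steam

eta_gt, eta_st = 0.35, 0.35        # assumed stand-alone efficiencies
print(f"gas turbine alone:   {eta_gt:.0%}")
print(f"steam turbine alone: {eta_st:.0%}")
print(f"combined cycle:      {combined_cycle_efficiency(eta_gt, eta_st, 0.9):.0%}")
```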


Figure 4. The cumulative installed capacity of natural gas plants in Texas shows that installation of combined cycle plants increased significantly starting in 2000. ST = steam turbine operating stand-alone, CT = combustion turbine of an NGCC plant, CA = steam turbine of an NGCC plant, GT = gas combustion turbine operating stand-alone, and CS = an NGCC plant where the combustion turbine and steam turbine are connected mechanically.

The employment situation in the industrial manufacturing sector shows a marked contraction (see Figure 5). Employment in the chemical and plastics industry was representative of the overall Texas manufacturing employment trend from 2000 to 2005. Employment in the oil and gas extraction industry was slightly up from 2000 to 2005, and followed the continually climbing energy prices through 2007. Interestingly, employment went down even in some industries, such as primary metals, that saw economic growth during this period because of rising prices for their products. Many of the industries that shed employment are also among the most energy and natural gas intensive.


Figure 5. Employment indices for the overall Texas manufacturing sector as well as selected industries.

Conclusions

This analysis shows a few major points regarding Texas' gross state product and CO2 emissions from 2000 to 2005: (1) the growth of the Texas gross state product during the first half of this decade was driven largely by a rise in global energy prices and the increased value of chemical products; (2) the boom in natural gas cogeneration installations comes nowhere near accounting for the 32% drop in natural gas consumption in the industrial sector, as generation from these facilities increased only slightly from 2000 to 2005; and (3) a drop in cogeneration output from 2002 to 2005, together with a drop in output from the chemical industry, accounts for a large portion of the decrease in natural gas consumption, and consequently in Texas' CO2 emissions. Texas' emissions may even have decreased slightly since 2005 with continued increases in natural gas and oil prices.

It is a mistake to think that significant steady and long term CO2 emissions reductions, together with increased gross state product, can be achieved by simply continuing actions of the past five to ten years. High energy prices benefit some Texas industries while hurting others, and there is evidence to suggest that higher energy prices have been influential in decreasing emissions from 2000 to 2005. Impending federal climate legislation will impose constraints on the economy that go beyond the reductions in emissions that have occurred in Texas as a consequence of external factors rather than by directed policy. Texas and the rest of the US states should work to understand how specific industries and consumers will be affected by a CO2 constraint. By promoting those businesses that are well-positioned and facilitating restructuring for those ill-positioned, Texas can successfully transition to and maintain leadership within the new carbon-constrained energy economy.


Underground energy


Geothermal energy is coming back in favour in the UK as an energy option, after some years in the wilderness. A major geothermal "hot dry rock" test project in Cornwall was abandoned in the 1980s after it was assessed as not being likely to produce sufficient energy for electricity production, and although Southampton persevered with a more conventional aquifer heat-based system, geothermal was basically left to other countries.

There is now over 11 GW of installed geothermal aquifer electricity generation capacity around the world and even more heat supplying capacity, but deep "hot rock" geothermal technology has recently had a renaissance, in part due to the availability of improved drilling techniques developed in the oil industry.

Enhanced geothermal systems (EGS) as they are now called, are beginning to move towards commercialization. A 2.9 MWe plant is operating commercially in Landau, western Germany, while projects are now being developed in Australia, the US and Japan, and plans are taking shape for a 3 MWe plant in Cornwall, at the Eden Project.

In its recent Renewable Energy Strategy, the UK government said that it would "commit up to £6 m to explore the potential for deep geothermal power in the UK helping companies carry out exploratory work needed to identify viable sites".

Depths of 3,000 to 10,000 metres can now be reached, with water pumped down to be heated by the hot rocks to around 200 °C. Fed back to the surface, this water can then be used to drive turbines to generate electricity.

Martin Culshaw, of the Geological Society's engineering group, said: "Cooling one cubic kilometre of rock by one degree provides the equivalent energy of 70,000 tonnes of coal. This has the potential of equalling the nuclear industry in providing 10–20% of Europe's energy." Geothermal systems have the big advantage over many other renewables of supplying "firm" continuous power, although they aren't strictly 100% renewable, in that heat wells exhaust the local heat resource over time. But it is eventually topped up by heat from deeper inside the earth, derived from the decay of radioactive isotopes. That makes it one form of nuclear power that seems benign.
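
Culshaw's figure is easy to sanity-check with a rough calculation. The sketch below uses assumed textbook values for rock density, specific heat and coal energy content, and lands in the same ballpark as the 70,000 tonnes he quotes.

```python
# Rough sanity check (assumed property values) of the claim that cooling
# 1 km^3 of rock by 1 degree C releases the energy of ~70,000 tonnes of coal.

rho_rock = 2700.0          # kg/m^3, typical crystalline rock (assumption)
c_rock = 800.0             # J/(kg K), specific heat of rock (assumption)
volume = 1.0e9             # m^3  (= 1 km^3)
delta_t = 1.0              # K

energy_joules = rho_rock * volume * c_rock * delta_t

coal_energy = 29.0e9       # J per tonne of coal (~29 GJ/t, assumption)
coal_equiv_tonnes = energy_joules / coal_energy

print(f"heat released:   {energy_joules:.2e} J")
print(f"coal equivalent: {coal_equiv_tonnes:,.0f} tonnes")  # same order as the quoted 70,000 t
```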

However, there can still be problems. Iceland has been developing geothermal electricity production on a large scale, but as Lowana Veal has reported in IPS News, there have been concerns about emissions of hydrogen sulphide gas. Moss in the area was being affected. Levels were well below those thought to pose any health risk to humans, but they are being monitored. H2S can be filtered out of the steam and water vapour emitted by geothermal plants, or it can be reinjected back into the well in a closed-loop binary system. But that all adds to the cost.

Perhaps more importantly, carbon dioxide gas is also present. To deal with this, a "Carb Fix" programme is being developed. The idea is to dissolve the CO2 in water under high pressure and then pump the solution into layers of basalt about 400–700 m underground, in the expectation that the dissolved CO2 will react with calcium in the basalt to form solid calcium carbonate. The project is a form of carbon capture and storage (CCS), but rather than filling empty oil or gas wells with CO2 gas under pressure, mineral storage offers a safer bet, since there is less chance of leakage.

There may also be a potential problem with earthquake risk from deep drilling in some locations. Drilling kilometres down and then pressurising the system can lead to the release of geological stresses, lubrication of fissures and small earth tremors. Some were recorded, for example, at a geothermal project in Switzerland in 2006, when water was injected at high pressure into a 5 km deep borehole. A shock measuring 3.4 on the Richter scale was detected, which caused local alarm but evidently no injuries or serious damage, although further work was halted. A magnitude 3.1 tremor followed. This issue has recently led to concerns about some of the new German projects.

Problems like this apart, the prospects for geothermal seem good. The USA is in the lead in terms of geothermal electricity production, and has around 4,000 MW of new capacity under development. Google.org recently put £5.4 m into enhanced "hot rock" geothermal systems, supporting three new projects in the USA, and Obama allocated $350 m to geothermal work under the new economic stimulus funding.

The resource potential is very large. The US Department of Energy has suggested that in theory the US could ultimately have at least 260,000 MW of geothermal capacity. Large resources also exist elsewhere in the world and there are many projects in operation or being developed. As already mentioned, Iceland is a leading user, but the Philippines, which generates 23% of its electricity from geothermal energy, is the world's second biggest producer after the United States. It aims to increase its installed geothermal capacity by more than 60% by 2013, to 3,130 MW. Indonesia, the world's third largest producer, plans to add 6,870 MW of new geothermal capacity over the next 10 years – equal to nearly 30% of its current electricity generating capacity from all sources. Kenya has announced a plan to install 1,700 MW of new geothermal capacity within 10 years – 13 times greater than its current geothermal capacity and one-and-a-half times greater than the country's total electricity generating capacity from all sources.

www.earthpolicy.org/Updates/2008/Update74.htm

Finally, the use of ground-source heat-pump technology is also expanding rapidly, with perhaps 200,000 units having been installed in domestic and commercial buildings around the world. This is also sometimes labelled "geothermal", not entirely correctly, since, at least for near-surface heat extraction, the heat is mostly ambient heat ultimately derived from the sun, not from deep in the earth. But some heat pumps do use deeper pipes, and they can also be used to upgrade the value of geothermal heat. In addition, heat can be stored in the earth via underground piping, creating local underground heat stores.

Death and glaciers


When I was starting my PhD, I met Fritz Müller. He had no stake in my studies, but he insisted on being taken to my field site and delivering a string of inspirational remarks that helped me through the following few years. He was a man of boundless energy and foresight, gifted with the rare ability to make others do what he wanted them to do without them minding (at least, most of them). I keep finding that ideas that come into my head were actually in Fritz Müller's head several decades ago, and it is a tragedy for glaciology that one day in 1980, when he was conducting an excursion for journalists on Rhone Glacier in Switzerland, he felt unwell, sat down and died of a heart attack.

Fritz's wife Barbara suffered the compound tragedy of being widowed by the ice twice. Her first husband, the glacial geomorphologist Ben Battle, was drowned in a meltwater stream on Baffin Island in 1953.

Alfred Wegener may be the best known of all victims of the ice because of his arguments for continental drift, first advanced in 1912. Like Fritz Müller, he was decades ahead of his time, but what many do not realize is that Wegener was actually a meteorologist and glaciologist. Some of his measurements of snow accumulation on the Greenland Ice Sheet are still part of standard compilations. In October 1930, having resupplied the forward camp of his meteorological expedition at Eismitte, near the centre of the ice sheet, he and Rasmus Villumsen began the return journey by dog sled to the western margin of the ice. Villumsen was never seen again, but Wegener's body was found the following May. He seems to have died of overexertion and heart failure.

Perhaps Robert Scott is even better known than Wegener. The staggering story of his 1911–1912 trek to the South Pole, which he reached five weeks later than Roald Amundsen, has been retold times without number. The most recent retelling, Susan Solomon's The Coldest March (Yale University Press, 2001), may well be the best, and not just for the way it sets out the heroism, fading into fatalism, of Scott and his companions. It also offers insight into the role in Scott's tragedy of unusually cold weather, and of some of the physiological implications of low temperature that are not fully understood even today.

Amundsen's journey to and from the South Pole was rather uneventful, but he too died on the ice, in this case the sea ice of the north Atlantic, into which his seaplane is believed to have plunged in 1928 while he was searching for Umberto Nobile. Nobile had flown an airship to the North Pole, but it crashed north of Svalbard on the return flight. Amundsen and his crew joined several of Nobile's crew on the list of fatalities. Nobile himself, and most of his crew, were fortunately found and rescued, not without further loss among the search parties, and Nobile died at an advanced age in 1978.

One of the lessons we learn from these famous fatalities is that your physical condition can have a lot to do with whether you come back alive. Setting out in good health, and in company, and equipped with ways of keeping warm, dry and well-fed, are necessities of safe travel on the ice. But the glacier itself can strike at you without warning. The annual toll taken worldwide by crevasses, avalanches and meltwater is difficult to determine because nobody keeps a centralized record, but deaths on glaciers are reported regularly in the media. You don't have to be famous to fall a victim to the ice.

All of the deaths are tragedies, but many were avoidable. We have learned a lot from the sacrifices of the explorers, and it is a shame that we continue to repeat some of their mistakes. Andy Selters' Glacier Travel and Crevasse Rescue (The Mountaineers Books, 1999), and the freely-available manual compiled by Georg Kaser, Andrew Fountain and Peter Jansson (UNESCO, 2003; 3 megabytes), are two among many sources for an understanding of how to come back from the glacier alive.

As we reach the one-year anniversary of the collapse of Lehman Brothers, many are still wondering what happened to the US and world financial system. Many in government are calling for better regulation of the financial and banking industry, but perhaps there is one regulation that towers above all others: the banking reserve ratio.

The reserve ratio, or reserve requirement, specifies the fraction of customer deposits that a bank must hold in reserve; the bank is allowed to lend out the rest. Currently the US reserve requirement is 10%. Thus, for every 100 dollars deposited, 90 dollars can be lent to borrowers.
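
A minimal sketch of that arithmetic, and of what happens when the lent money is redeposited and re-lent, looks like this (illustrative only):

```python
# Minimal sketch of fractional-reserve lending with a 10% reserve requirement.
# Each deposited dollar can be re-lent and re-deposited; in the limit the
# system supports 1/r dollars of deposits per dollar of base money.

reserve_ratio = 0.10       # current US requirement cited in the post
deposit = 100.0

lent_out = deposit * (1.0 - reserve_ratio)
print(f"of ${deposit:.0f} deposited, ${lent_out:.0f} can be lent")    # $90

# If the lent money is redeposited and re-lent repeatedly:
total_deposits = deposit / reserve_ratio
print(f"theoretical money multiplier: {1.0 / reserve_ratio:.0f}x")    # 10x
print(f"total deposits supported:     ${total_deposits:.0f}")         # $1000
```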

The reserve ratio is important here because it is a conceptual parallel to another ratio of concern in the area of energy: energy return on energy invested (EROI). To some, the question remains whether this parallel is also a correlation caused by the physics and thermodynamics describing energy, rather than by the "laws" of economics and financial practice. But to me, there is no debate. An industrialized society without much excess energy – that is, without a high EROI – is not feasible. And because net energy and economic growth are so highly coupled, there likely cannot be a continuing industrialized society without a relatively low banking reserve ratio.

Economists model macroeconomic output (GDP) as a function of three basic factors (which are not necessarily independent): labor, energy (energy services), and capital. Research since the 1970s by a group of dedicated ecological economists has unequivocally shown that the modern US economy grows significantly with more energy (energy services) and capital. Over the last 100 years in the US, the labor factor has become insignificant. That is to say, an increase in the labor force causes practically no economic growth (see Robert Ayres (2008) Ecological Economics). The reason is that in the US, labor has been almost entirely replaced by primary energy sources including fossil fuels, nuclear energy, and renewables. Consider that economic capital includes the intellectual capital and education of the workforce, and we see that physical human labor is valued quite poorly.

Before you say this doesn't make any sense, keep in mind that people expect a "jobless recovery" yet again, after we apparently had such an economic recovery (in the US) following the dot-com bust. I say apparently because it is probable that the US fiscal policies fighting off recession during the early 2000s just kicked the can down the road until the current economic recession.

While there are no systematic analyses of how EROI should relate to the banking reserve ratio, I think this is a fruitful area for study. Lending money and expecting a return on investment is analogous to, and ultimately reliant on, lending energy now to obtain more energy in the future. It is likely that the inverse of the reserve ratio (that is, the amount of money lent out relative to that held on deposit in the bank) cannot be larger than EROI. The EROI of fossil fuels delivered as energy services seems to be only slightly above 10 (while the inverse of the US reserve ratio is 9); it is in the 10–20 range at the "mine mouth" and even less for finished products such as gasoline and electricity, so we might very well already be operating society on an energy-services EROI below 10. Can our society operate as it exists if we lend out more money than we lend ourselves energy? I hope we can learn the answer soon.
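
To make the comparison concrete, here is a small sketch that sets the lending leverage implied by a 10% reserve requirement against a couple of assumed EROI values; the EROI numbers are placeholders within the ranges mentioned above, not measurements.

```python
# Sketch of the comparison suggested in the post: the banking system's lending
# leverage versus EROI. The EROI values are assumptions spanning the quoted
# ranges, not measured data.

reserve_ratio = 0.10
lending_leverage = (1.0 - reserve_ratio) / reserve_ratio   # money lent per dollar held = 9

for label, eroi in [("mine-mouth fossil fuels (assumed)", 15.0),
                    ("energy services (assumed)", 8.0)]:
    ok = eroi >= lending_leverage
    print(f"{label:34s} EROI {eroi:4.1f}  "
          f"{'>=' if ok else '< '} leverage {lending_leverage:.0f}  "
          f"-> {'consistent' if ok else 'lending outpaces energy return'}")
```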

In its recent report on geo-engineering, the Royal Society argues that "air capture" carbon dioxide absorption techniques are probably the best geo-engineering option, in that we should "address the root cause of climate change by removing greenhouse gases from the atmosphere". Solar heat reflector techniques were seen as generally less attractive. It may well be true that carbon dioxide absorption is the best type of geo-engineering option, but surely geo-engineering of whatever type in no way deals at source with the "root cause" of climate change – which is the production of carbon dioxide in power stations, gas boilers and vehicles.

The Royal Society report, like the parallel report from the Institution of Mechanical Engineers, does stress that "No geo-engineering method can provide an easy or readily acceptable alternative solution to the problem of climate change" and that mitigation and adaptation programmes are vital. However, there is a risk that "technical fix" geo-engineering approaches may be seized on as an alternative to dealing with the problem at source, since they could seem to offer ways to allow continued use of fossil fuels. That's not to say there is no role for geo-engineering, but we need a hierarchy of options.

Mitigation via renewables would come top of my list, along with improved energy efficiency. Adaptation will inevitably have to occur – given the emissions that we have already produced, whatever we do about mitigation, or for that matter geo-engineering, we are going to be faced with some climate change. Geo-engineering, as a pretty inelegant "end of pipe" approach that tries to clean up after the event, might be seen as an ancillary option, rather than as a last line of defence or as "Plan B".

Tim Fox, who led the IMechE study, commented sensibly that "We're not proposing that geo-engineering should be a substitute for mitigation [but] should be implemented alongside mitigation and adaptation. We are urging government not to regard geo-engineering as a plan B but as a fully integrated part of efforts against climate change."

Even so, there are major uncertainties over costs, reliability and eco-impacts, as both reports recognised. Both proposed a £10 m pa UK research programme, which seems not unreasonable, to try to identify the best options and the risks more clearly. But let's not get too deflected from what ought to be the primary aim of avoiding carbon dioxide release in the first place.

Prof. John Shepherd, from Southampton University, who chaired the Royal Society's study, said: "It is an unpalatable truth that unless we can succeed in greatly reducing CO2 emissions, we are headed for a very uncomfortable and challenging climate future. Geo-engineering and its consequences are the price we may have to pay for failure to act on climate change."

Fair enough. But, if we really are worried about climate change, it would be better if we got seriously stuck into mitigation, and didn't have to add to our problems by launching potentially risky large-scale geo-engineering programmes.

Of course not all will be risky – though they still may not be wise. It was good to see re-afforestation mentioned by the Royal Society as an option, even if it could only realistically absorb a smallish proportion of our ever-increasing emissions. However, while the IMechE backed the idea of painting roof tops white to reflect solar heat and reduce global, or at least local, heating, the Royal Society said: "The overall cost of a 'white roof method' covering an area of 1% of the land surface would be about $300 billion/yr, making this one of the least effective and most expensive methods." Putting solar collectors on roof-tops might be a better idea! I'm not so sure about chemical air capture though. Both reports back the "Artificial Tree" idea for carbon dioxide absorption. Submarines, and, famously, Apollo spacecraft, used sodium hydroxide to do this. If we are thinking along the same lines now for the whole planet, we must be getting desperate. Biochar might be a better option – but not if on a very large scale, surely?

Geo-engineering may have a role, and these reports are useful, but there are still a lot of unknowns – after all, it's basically about tinkering further with the climate and linked ecosystems, albeit consciously rather than accidentally. Quite apart from the cost, there is the risk that, if we adopt large-scale programmes like seeding the oceans with nutrients to increase CO2 uptake, or pumping aerosols into the atmosphere to reflect sunlight, we could create major new, unexpected eco-problems.

Geoengineering the climate

http://www.imeche.org/about/keythemes/environment/Climate+Change/Geoeng

For more discussion of renewable energy options and policies, visit Renew.

By Hamish Johnston, physicsworld.com

Last week James Dacey blogged about the growing skepticism of the British public regarding the dangers of manmade global warming.

One reason could be that in Britain – and some other places bordering the North Atlantic – it doesn't seem to have become warmer recently. The two places that I am familiar with (the west of England and eastern Canada) have recently had relatively cold winters and cool summers.

Anecdotal and unscientific I know, but I'm guessing that most people form opinions on global warming based on personal experience – which is why climate expert Mojib Latif of Kiel University in Germany is concerned about what he believes to be happening in the North Atlantic.

Despite relentless manmade climate change, Latif believes the North Atlantic is actually cooling thanks to something called the Atlantic multidecadal oscillation – which seems to occur with a period of about 60–80 years.

Speaking on BBC Radio 4, Latif said that this oscillation could be significant enough to make it cooler in the North Atlantic over the next ten years. In other words, people in the rich and carbon intensive countries that border the North Atlantic could be lulled into thinking that there is no problem. Until the oscillation turns and it gets hotter very quickly.

You can listen to the interview here.

Hamish Johnston, physicsworld.com

When Slartibartfast was given the job of shaping the Earth's surface, the part he enjoyed most was the fiords. I am sure he could explain how to create a fiord much better than I can, but I will have a go. The explanation is informative but disturbing.

To make a fiord, you need fast-flowing ice, which implies that the glacier bed must be a site of intense dissipation of energy. Most of the motion is by basal sliding, which implies that the bed cannot be frozen and also suggests that much of the energy will be used up in entraining and removing bed material. This agrees with the visible fact that fiords are much deeper than the mountain and plateau terrains through which they are threaded. It also agrees with an explanation of what is going on that appears in a recent study.

Michèle Koppes and colleagues measured the volume of sediment beneath the waters of Marinelli Fiord in Tierra del Fuego. These waters have taken the place of the tongue of Marinelli Glacier, which has retreated 13 km since about 1945. Before then, its terminus was stable at the mouth of the fiord. The sediment must have been delivered by the glacier since its retreat began. The implied rate of erosion is an astonishing 39 millimetres/yr, give or take 40%. That is, the sediment implies that the glacier has stripped 39 mm off the land surface during each of the past 50–60 years. It seems certain that most of the erosion must be happening beneath the fast-flowing trunk of the glacier. If so, its bed is being overdeepened very rapidly indeed.
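
For readers who want to see how a sediment volume turns into an erosion rate, here is an illustrative Python sketch. The sediment volume and porosity are hypothetical round numbers chosen only to show the arithmetic; the ~160 km2 glacier area and the 50–60 year retreat period are taken from this post.

```python
# Illustrative sketch: converting a measured sediment volume into a
# basin-averaged erosion rate. Sediment volume and porosity are hypothetical;
# area and retreat period are the figures quoted in the post.

sediment_volume = 0.45e9     # m^3 of fiord-bottom sediment (hypothetical)
porosity = 0.30              # pore-space fraction of the deposit (assumption)
glacier_area = 160.0e6       # m^2 (~160 km^2, from the post)
years = 55.0                 # retreat since about 1945 (50-60 yr in the post)

rock_volume = sediment_volume * (1.0 - porosity)           # solid rock eroded
erosion_rate_m_per_yr = rock_volume / (glacier_area * years)

print(f"erosion rate: {erosion_rate_m_per_yr * 1000:.0f} mm/yr")  # tens of mm/yr
```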

Why was Marinelli Glacier stable before 1945 and unstable thereafter? The answer has something to do with climatic change, but more immediately with the glacier's own behaviour. When it was stable, its terminus rested on the pile of sediment it had itself deposited at the mouth of its fiord. Beyond that there was deeper water, into which the glacier did not advance because the ice arriving at the terminus simply broke off as icebergs. The deeper the water, the faster the iceberg discharge rate. When the climate changed, the glacier was no longer able to deliver the amount of ice needed to keep the terminus where it was. So it retreated.

The trouble is that the retreat moved the terminus down a slope. Not towards the deeper water beyond the glacier's terminal moraine, but towards a deeper part of its own bed, carved by its own erosive handiwork. Just as it was unable to advance into deeper water, so Marinelli Glacier has been unable to stop retreating into deeper water. Its overdeepened bed has aggravated its inability to deliver the ice to keep up the iceberg discharge rate. It has done its best, by thinning and accelerating, with the side effect of increasing its erosive performance dramatically and thus setting itself up for renewed unstable behaviour after the next period of cool climate. But it is not going to settle down this time until it finds a part of its bed that slopes upwards inland, bringing the delivery of ice and the discharge of icebergs back into balance.

Many of the fiords we know about have shallow lips at their mouths, suggesting that they are deglaciated analogues of Marinelli. Most of the spectacular glacier retreats that have been documented in recent years, such as those of Jakobshavn Glacier in Greenland, and Columbia Glacier and the glaciers of Icy Bay in Alaska, have, like that of Marinelli, been from shoals produced at the heads of fiords by the glaciers themselves during less benign climatic times. But the scariest fiords of all are the ones we can't see because they are still full of ice, and the scariest retreats are the ones that haven't begun yet.

Marinelli Glacier is a shrimp (area about 160 km2) beside the leviathans of West Antarctica. Pine Island Glacier, draining about 185,000 km2 of the Antarctic Ice Sheet, resembles Marinelli in having a deep channel and a bed that slopes inland. As yet its terminus position hasn't changed by much, but in a study just published Duncan Wingham and colleagues report that the ice just upstream from the terminus of Pine Island Glacier is now thinning at about 80 metres/yr, more than ten times the rate of just ten years ago.

The public debate and the government consultations in 2006 and 2007 on nuclear power were framed in the context of a replacement programme for existing reactors scheduled to close. On this basis it has been suggested that there was, if not a clear consensus, then at least a majority in favour.

However, the government subsequently began to talk about going beyond replacement. For example, in May 2008 Prime Minister Gordon Brown commented "I think we are pretty clear that we will have to do more than simply replace existing nuclear capability in Britain", while Secretary of State John Hutton said that, although it was up to the private-sector developers, he would be "very disappointed" if the proportion of electricity generated by nuclear did not rise "significantly above the current level". In August 2009 Malcolm Wicks MP, the PM's Special Representative on International Energy, produced a report calling for a UK nuclear contribution of 35–40% "beyond 2030".

The government has also indicated that it saw a major role for exporting UK nuclear technology and expertise. Gordon Brown has indicated that he believes the world needs 1,000 extra nuclear power stations and has argued that Africa could build nuclear power plants to meet growing demands for energy. In 2009 a new UK Centre for supporting the export of nuclear technology was set up with a budget of up to £20 m.

You do not have to be anti-nuclear to feel some sense of unease over the global expansion programmes being discussed, not least since they could lead to much greater long-term risks for global security, in terms of the proliferation of nuclear weapons-making capacity and the potential for nuclear terrorism. There are other geopolitical issues as well. For example, uranium is a finite resource and, if a major global expansion programme emerges based on existing burner technology, then there must inevitably come a time when there will be conflicts over diminishing high-grade reserves. That is one reason why interest has been rekindled in fast breeder reactors, which can use the otherwise wasted parts of the uranium resource, and also in the use of thorium, which is more abundant than uranium. But those options are some way off. For the moment, the programmes around the world are mostly based on upgraded versions of the standard Pressurised Water Reactor, with passive safety features to reduce the risk of major accidents, plus, in some cases, higher fuel burn-up to improve their economics – though that will result in higher-activity wastes, which could present safety and waste management problems.

There are also other operational issues. In the UK the various contenders – EDF, E.ON etc – have "reserved" a total of 23.6 GW of grid links for new nuclear capacity with National Grid. That's about the same as the wind power capacity we are aiming to have by 2020, albeit with wind's lower load factors. But as EDF has pointed out, there are operational and economic reasons why a major expansion of nuclear would be incompatible with a major expansion of renewable electricity generation – at periods of low demand you would not need both. So which would give way?
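
The load-factor caveat matters more than it might sound. As a rough, assumption-laden sketch (the capacity factors below are illustrative, not forecasts), the same 23.6 GW delivers very different amounts of energy depending on the technology:

```python
# Rough sketch of the "same capacity, different load factor" point:
# 23.6 GW of nuclear and 23.6 GW of wind deliver very different annual energy.
# The load factors are illustrative assumptions.

HOURS_PER_YEAR = 8760
capacity_gw = 23.6

load_factors = {"nuclear": 0.85, "wind": 0.30}   # assumed typical values

for tech, lf in load_factors.items():
    twh = capacity_gw * lf * HOURS_PER_YEAR / 1000.0
    print(f"{tech:8s}: {capacity_gw} GW at {lf:.0%} load factor -> {twh:.0f} TWh/yr")
```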

In addition, the renewables and nuclear will inevitably also be in direct conflict for funding. A major nuclear programme could divert money, expertise and other resources away from renewable energy and energy efficiency, which arguably are the only long term sustainable energy options.

It used to be argued that renewables were interesting but marginal. Now however, they have moved into the mainstream – with, for example, more than 120 GW of wind generation capacity in place around the world. And they are expanding. Last year solar PV generation capacity grew by 70% around the world, wind power by 29% and solar hot water increased by 15%. By 2008, renewables represented more than 50% of total added generation capacity in both the United States and Europe, i.e. more new renewables capacity was installed than new capacity for gas, coal, oil and nuclear combined. Interestingly, by 2008 China had installed as much wind capacity as it had nuclear capacity (8.9 GW) and there are plans for continued rapid expansion of wind, to 100 GW and beyond. However, there are also plans for nuclear expansion.

It is sometimes argued that you can and should have both nuclear and renewables – to ensure diversity. But, quite apart from the conflicts mentioned above, nuclear is not only one of the most expensive options, it is also just one option. By contrast, there are dozens of renewable energy technologies of various sorts, using a range of sources. It is true that they are at varying stages of development, but given proper funding they seem likely to offer a more diverse set of options.

What's the best bet for the future? An energy source with limited resource availability and major waste and security implications? Or a range of new technologies based on natural energy flows, with no emissions, no wastes, no fuel resource limits, no fuel price rises, and no security implications – unless, that is, we start squabbling over the wind and solar resource around the planet.

I used some of the arguments above in a recent resignation letter to the Labour Party, as reported in the Guardian.

Too hot to handle


By James Dacey, physicsworld.com

As the scientific community has moved towards a stronger consensus that manmade climate change is happening, the general public must have become less sceptical about the issue – right?

Wrong.

Well, wrong in the case of the British public, according to social scientist Lorraine Whitmarsh, who carried out separate opinion surveys in 2003 and 2008.

Over this five-year period, the number of respondents who believe that claims about the effects of climate change have been exaggerated rose from 15% to 29%.

What's more, over half of respondents in the latest survey feel that the media have been too "alarmist" in their reporting of the issue.

Sceptics are more likely to be men, older people, rural dwellers and – perhaps surprisingly – higher earners.

Speaking at the British Science Festival in Guildford, Whitmarsh also referred to a recent Eurobarometer poll to say that Brits are more sceptical than most Europeans on the issue.

When asked if she could explain the rising scepticism, Whitmarsh replied that it could be something to do with the way science is taught in British schools.

"Perhaps the way we teach science should reflect the inevitable uncertainty of the scientific process," she said.

James Dacey, physicsworld.com

Update from Beijing


Back in Beijing. On first appearance, things (read 'transportation') haven't changed very much since spring 2008. That is a surprising statement. It means that congestion, as perceived by the casual observer, hasn't deteriorated – it even looks acceptable at most times during the day. A Beijing success story?

Beijing implemented a number of measures for the Olympic Games to deal with congestion and air pollution. These included an alternating driving ban based on odd/even plate numbers, effectively excluding half of the vehicle fleet every day. To no surprise, the congestion situation improved dramatically during the Games. Due to this success, some of the measures were continued, modified or extended:

• Ban of cars according to licence plate number once a week
• Parking management (from 1 Yuan/hr outside the 4th ring to 5 Yuan/hr for the 'affluent' neighbourhoods inside the 4th ring)
• By now 650 bus routes, >250 km of bus-only lanes, 200 km of subway with 6 lines, and 3 BRT lines
• Car-free day (but forget that one: 2 streets are to be closed on Sep 22).

Beijing added subway and BRT routes

According to BJTRC, peak-hour speed increased by 2–3 km/h, and the congestion index decreased from 7.54 to 5.15 (on a scale from 0–10, where 10 is worst congestion and 0 is no congestion; the index is calculated as a nonlinear function of the percentage of time during which streets are below a certain congestion speed – 20 km/h for expressways, 15 km/h for secondary roads, and 10 km/h for collectors). Such an improvement translates into billions of Yuan per year in time savings. The real savings/social benefits depend on the opportunity costs of those who can't drive their car due to the ban. Can they work at home? Can they easily switch to public transit, or car pool? A pricing instrument, such as a city toll for Beijing, would be more efficient according to economic theory, but a full accounting of transaction and opportunity costs might reveal that a car ban is not such a bad thing in economic practice. It's not a long-term measure, however, as rising car ownership overcomes the driving ban by sheer numbers.
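
For the curious, here is a toy Python sketch of what an index like this measures. The speed thresholds are the ones quoted above, but the nonlinear mapping onto the 0–10 scale is a placeholder of my own, since BJTRC's actual formula isn't given here.

```python
# Toy version of the congestion index described above. The speed thresholds
# come from the post; the mapping from "share of time below threshold" to a
# 0-10 score is a placeholder, not the real BJTRC formula.

CONGESTION_SPEED = {"expressway": 20.0, "secondary": 15.0, "collector": 10.0}  # km/h

def share_congested(speeds_kmh, road_class):
    """Fraction of observations below the congestion speed for this road class."""
    threshold = CONGESTION_SPEED[road_class]
    return sum(1 for v in speeds_kmh if v < threshold) / len(speeds_kmh)

def toy_index(share):
    """Placeholder nonlinear map of congested-time share onto a 0-10 scale."""
    return 10.0 * share ** 0.5

# Hypothetical hourly speed samples (km/h) for one expressway corridor:
samples = [35, 28, 22, 18, 15, 12, 25, 40, 19, 16]
share = share_congested(samples, "expressway")
print(f"share of time congested: {share:.0%}, toy index: {toy_index(share):.1f}")
```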

Public transit, i.e. bus plus subway, increased its modal share from 30.2% to 36.6%, thus absorbing nearly all the additional transport demand (4 million more trips per day). However, the number of car trips also increased, so there was probably not much of a modal shift from car to public transit. At least some additional car transport may have been avoided.

There are currently two issues that need more attention: land use and NMT (non-motorized transport). Land use means the construction of new satellite cities, their distance to work places, and work-place distribution. For non-motorized transport the big problems are safety and barriers. Whereas Beijing has broad, separate bike lanes along many roads, crossing the big arteries is a huge safety issue, and many people cite this as their number-one reason for not cycling. Also, on some streets it's difficult to find a suitable crossing – hence streets and expressways constitute a considerable obstacle for pedestrians and cyclists. Much remains to be done – but Beijing is on the right track.

A recent article ("Leaping the Efficiency Gap") in the August 14, 2009 edition of Science discussed the now classic argument over how far energy efficiency takes you toward conserving energy. The general answer is: so far, not much. The article discusses Arthur Rosenfeld starting an energy efficiency program at Lawrence Berkeley National Laboratory in the mid-1970s, and how he and many others were convinced that reductions in energy consumption could be achieved by advances in technology. The article also notes how Lee Schipper of Stanford's Precourt Energy Efficiency Center took offense at President Jimmy Carter's 1977 'cardigan' speech, which said that in order to save energy sacrifice was needed and some sacrifices would be painful. Schipper wrote a letter to a congressman arguing that conservation did not have to be painful. Schipper is then quoted as saying he was wrong and Carter was right.

After 35 years of the efficiency v conservation debate, I think there is much more understanding that energy in the US is still cheap and has not generally dictated decision making by businesses and citizens. Perhaps the last few years represent a turning point, in that the CO2/climate argument has put a different spin on the question. But when we think of the word sacrifice, particularly in the US, are we really sacrificing if we reduce our annual per capita energy consumption from the range of 350–370 GJ/person (330–350 million BTUs/person)? The world average is 75–80 GJ/person, and approximately 23% of the world's population live in countries consuming up to 100 GJ/person/year.

Since the 1970s the accumulation of statistics on energy and human development has allowed us to see that most basic human needs in terms of food access, health, and longevity are achieved at approximately the 100 GJ per capita level. There are of course a few exceptions to any rule, but the tendency of diminishing returns on most of the important quality-of-life measurements when consuming over 100 GJ/person is extremely enlightening and provides great perspective.

As a possible extreme example, I recently purchased a house that had a three-star green building rating (from the local utility) yet has 17 recessed lighting sockets in the main room of approximately 800 square feet. Two light switches control 7 light bulbs each, so there was clearly a need to install low-power light bulbs (or remove some of the bulbs) so that 400–500 W of power are not consumed just to light half of a room. When materials and energy got cheaper through efficiency gains and technological advancement, many times people just bought or installed more gadgets.
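
The arithmetic behind that 400–500 W figure is simple enough to sketch. The wattages and hours of use below are my own assumptions for illustration; the bulb count comes from the two switches controlling 7 bulbs each.

```python
# Quick sketch of the lighting arithmetic in that room: 14 bulbs on two
# switches. Wattages and daily usage are assumptions for illustration
# (7 bulbs x ~65 W is the 400-500 W "half a room" figure in the text).

bulbs = 14                     # 2 switches x 7 bulbs each
incandescent_w = 65            # typical recessed incandescent flood (assumption)
cfl_w = 13                     # compact fluorescent replacement (assumption)
hours_per_day = 4              # assumed usage

def annual_kwh(watts_per_bulb):
    return bulbs * watts_per_bulb * hours_per_day * 365 / 1000.0

print(f"as built:  {bulbs * incandescent_w} W, {annual_kwh(incandescent_w):.0f} kWh/yr")
print(f"with CFLs: {bulbs * cfl_w} W, {annual_kwh(cfl_w):.0f} kWh/yr")
```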

On the other hand, the per capita energy consumption of the US has remained relatively flat for the last 35 years due to many infrastructural and behavioral changes, even though total energy consumption has gone up due to population increase. The same pattern holds for the electricity consumption of the state of California – a point that the Science article makes, as total electricity consumption there rose similarly to the rest of the country even while per capita consumption stayed flat.

It seems that most of the solid energy conservation gains in the US stemmed from actual mandates and legislation, not business operations. After all, we measure economic growth based on the flow of goods and services, not the amount of resources that we have in stock. Businesses are naturally incentivised to increase the efficiency of their operations, including using less energy per product, so subsidizing them to do what economics should drive them toward anyway is perhaps a bit ridiculous. Subsidizing them to actually consume less energy, measured as total energy rather than energy per product, can make more sense.

Many famous entrepreneurs and politicians have stated their visions in the past, and we have achieved them. President Kennedy targeted putting a man on the moon. Bill Gates (Microsoft founder) targeted a personal computer in every home. But none of these targets had anything to do with using less; they were always about using more.

I think what we need is confidence that we can actually remove something from our homes and lives (here I'm thinking of the US), because we know that decreasing our energy consumption by 100 GJ/person/yr probably won't affect anything fundamental: Americans would still be consuming energy at a rate higher than Western Europe. Granted, some things would certainly be hard to give up – I'm sitting in Texas right now, where high temperatures of at least 38 °C can occur for three months of the year, but other regions of the world experience this as well. But do we seriously think that we don't have the ability to cut 25% of our per capita energy consumption? Most of the reduction would likely come from a combination of reduced travel and smaller/lighter vehicles. Telecommuting and teleconferencing exist as well. Exposing people to the correct prices of energy and food will also help.
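
The numbers behind that roughly 25% figure, using the values quoted earlier in this post (midpoints assumed where ranges were given), are sketched here:

```python
# Arithmetic behind the proposed challenge: remove 100 GJ/person/yr from the
# average American energy footprint. Midpoints of the quoted ranges are
# assumptions for illustration.

us_per_capita = 360.0        # GJ/person/yr, midpoint of the quoted 350-370 range
world_average = 78.0         # GJ/person/yr, midpoint of the quoted 75-80 range
basic_needs = 100.0          # GJ/person/yr level where most basic needs are met
cut = 100.0                  # proposed reduction

after_cut = us_per_capita - cut
print(f"reduction: {cut / us_per_capita:.0%} of current US consumption")  # ~28%, i.e. the rough 25%
print(f"after the cut: {after_cut:.0f} GJ/person/yr, "
      f"still {after_cut / basic_needs:.1f}x the basic-needs level and "
      f"{after_cut / world_average:.1f}x the world average")
```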

So I propose this new challenge of removal, not addition: remove 100 GJ/person from the average American footprint.

The level of 'sacrifice' is to be determined.

The technology for finding out more about glaciers keeps getting more diverse. In addition to bouncing laser beams and radar waves off them to find out about changes in their shape, and to finding out about changes in their mass by weighing them, an intriguing recent innovation is to listen to them.

At glaciers that terminate in tidewater, the noise of great chunks of ice cracking off and collapsing into the water inspires awe if you happen to be within earshot, but I am thinking here of noises that are detectable not by the human ear but rather by seismographs. Most earthquake waves are sound waves, that is, more or less rapid fluctuations of pressure. Except that the seismographs are picking up waves that have travelled through the earth rather than the air, seismic waves are not different fundamentally from those to which our ears are sensitive. But the study of icequakes is still in its infancy.

Icequakes are fluctuations of pressure that originate in sudden motions of glacier ice, not of the rocky earth. They have been known for quite some time, but their interest and potential were first highlighted by Göran Ekström and colleagues (here, but you can't get back to this page from there for some reason), who filtered the records from seismic observatories and identified several hundred long-period (that is, rumbly) events that did not look like ordinary earthquakes. Of the 71 found in areas usually regarded as seismically quiet, 46 were from glaciers, and of these 42 were from fast-flowing outlet glaciers in southern Greenland.

People sat up and took notice. Since the Ekström report in 2003, at least three different ways in which glaciers can make a noise have been documented. Their original suggestion can be interpreted as stick-slip motion at a patchily-frozen glacier bed. The ice lurches jerkily downstream, and some of the energy thus released finds its way to the seismic observatories, or to specially-deployed arrays of seismometers in the neighbourhood of the icequake.

Abrupt failure at the propagating tip of a water-filled crevasse can do essentially the same thing. This is an explanation favoured by Shad O'Neel and Tad Pfeffer, and is interesting because such failures seem to be possible precursors of even bigger events due to the calving of icebergs.

That brings us to the third mechanism, the calving itself, and to a recent analysis by J.A. Rial and colleagues that may be the most interesting of all. These authors studied Jakobshavn Glacier in west Greenland, a major ice-sheet outlet of which the terminus is falling to pieces dramatically. The rumblings they observed are consistent in many ways with earlier explanations, and in particular with the idea that things start with an iceberg breaking off the end of the glacier. But the rumblings go on for tens of minutes, and end with a large "culminating event" that can often be pinpointed to a part of the glacier margin 10-12 km upstream from the calving front. Evidently the loss of back-pressure due to loss of the iceberg leads to abrupt release of stress about half an hour later, at the frozen contact between the glacier and its valley wall.

This concern with the physics of how glaciers lose icebergs may strike you as finicky. What makes it worthwhile for the authors, and for their readers, is given away when they write that this sort of pattern suggests "a highly repeatable process of local glacier dynamics currently unknown to us". In other words, something genuinely new is awaiting efforts to make sense of it.

What is more, it offers the prospect of finally being able to measure the calving rate. Seismic measurement of calving rates is not around the corner, but it would clear away one of the major obstacles to quantifying the mass balance of large tidewater glaciers. The rate at which the terminus advances or retreats is not the main part of the problem. Rather, we need to know the rate at which ice is arriving at and discharging from the terminus. At present the best method, at least for regular monitoring, is to watch the icebergs falling off and guess at their sizes. Listening to them might turn out to be a better idea.
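
To see what kind of bookkeeping is involved, here is a toy calculation; every number in it is an assumption chosen for illustration rather than a measurement of any particular glacier.

```python
# A toy version of the bookkeeping: the ice lost at the front is the ice
# delivered to the front minus (or plus) whatever the change in terminus
# position accounts for. All figures below are illustrative assumptions.
width_m = 5_000                    # assumed terminus width
thickness_m = 800                  # assumed ice thickness at the front
speed_m_per_yr = 7_000             # assumed ice speed at the terminus
terminus_change_m_per_yr = -500    # assumed front position change (negative = retreat)

flux_in = width_m * thickness_m * speed_m_per_yr                      # m^3/yr arriving
loss_at_front = width_m * thickness_m * (speed_m_per_yr
                                         - terminus_change_m_per_yr)  # m^3/yr calved + melted

print(f"ice delivered to the front: {flux_in / 1e9:.0f} km^3/yr")
print(f"ice lost at the front:      {loss_at_front / 1e9:.0f} km^3/yr")
```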

One of my earlier posts discussed how Austin Energy, the #1 US utility in selling renewable electricity, had priced its latest GreenChoice batch of renewable electricity so high that it attracted no more takers. The major issue coming to the fore is that, at some point, a small percentage of residential and commercial customers cannot pull an entire city, much less a state or a country, toward high percentages of renewable energy all by themselves.

In trying to find a way to meet its goals, Austin Energy changed its standard 10-year fixed-price offer for renewable energy (a 9.5 cents/kWh GreenChoice charge plus a standard 3.6 cents/kWh) by adding a 5-year option as well (an 8.0 cents/kWh GreenChoice charge plus the standard 3.6 cents/kWh). Now that no one is buying the latest batch of green pricing, the charge has come under scrutiny from some local experts, who say that Austin Energy is not open enough about how it calculates this price; a task force has been set up to work out a solution. Additionally, Austin Energy is now proposing a 5-year fixed GreenChoice charge of 5.7 cents/kWh.
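
To put those cents-per-kWh figures into household terms, here is a quick sketch; the 1,000 kWh of monthly usage is an assumption for illustration, while the rates are the ones quoted above.

```python
# Hypothetical bill comparison: 1,000 kWh/month is an assumed usage figure;
# the GreenChoice charges and the standard 3.6 cents/kWh are as quoted above.
usage_kwh = 1_000
standard_charge = 3.6           # cents/kWh

greenchoice_offers = {
    "10-yr fixed (9.5 c/kWh)": 9.5,
    "5-yr fixed (8.0 c/kWh)": 8.0,
    "proposed 5-yr fixed (5.7 c/kWh)": 5.7,
}

for name, green_charge in greenchoice_offers.items():
    total_rate = green_charge + standard_charge     # cents/kWh
    monthly_dollars = usage_kwh * total_rate / 100
    print(f"{name:>32}: ${monthly_dollars:6.2f}/month")
```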

A local paper has covered the issue well; see this Austin Chronicle article. Also see PowerSmack, a website organized by a local energy consultant to discuss these issues.

Much of the consternation over the price of the green electricity stems from the grid transmission charges applied to much of the wind power coming from West Texas through a limited set of transmission lines. The state of Texas has a plan in motion to build more transmission lines to relieve this congestion, but that solution is 4-5 years away while the lines are sited and built. So we wait for the transmission lines; this is not a unique problem, though, and further expansion of renewable energy in Texas and other locations will face similar issues. Even with the transmission-constraint charges, reports are showing that overall electricity prices in Texas are actually lower.

But as Austin Energy general manager Roger Duncan says in the Austin Chronicle article, the GreenChoice program was intended to stimulate the market for renewables, not to continue forever. The city council of Austin (which officially approves the prices Austin Energy charges for electricity) is now coming to grips with the unavoidable fact that to meet goals for low carbon emissions (and we really haven't even started) and high percentages of renewable electricity, sooner or later everyone must contribute in one form or another. These levels of contribution by poor, middle class, rich, environmentalists, industrialists, greenies, turquoisers, left, right, up, down and everything in between are what the future is all about. That future is being determined at a local level by a small group of people representing just under 1 million citizens in Austin, TX, USA, and perhaps at a global level this December in Copenhagen by representatives of almost every country in the world.

Chemical engineers keep coming up with clever new ideas for producing green energy from novel sources. In many of them, hydrogen gas plays a key role. It can be produced, as it mostly is at present, by high-temperature steam reforming of methane (natural gas or biogas), or by electrolysis of water using electricity, which can be generated from renewable sources or from nuclear plants. It can also be produced by very high-temperature direct dissociation of water, using focused solar energy or heat in, or from, nuclear plants – or, more efficiently, by thermo-chemical processes aided by catalysis. Once produced, hydrogen can be used as a fuel for a conventional combustion engine or a gas turbine, or fed to a fuel cell to generate electricity. It can also be used as a feedstock to produce synfuels.

Dr Charles Forsberg of MIT's Nuclear Fuel Cycle Study project has proposed using nuclear plants combined with biomass gasification to provide the energy and carbon feedstock for the production of liquid synfuel using the well-known Fischer–Tropsch process. The nuclear plant provides electricity for the electrolysis of water, generating hydrogen and oxygen. The oxygen is used to run a biomass gasifier, the output of which is used, along with the hydrogen, as a feedstock for the production of synfuel – diesel or gasoline. He says that the use of external heat and hydrogen can double or triple the liquid fuel output per ton of biomass compared with using biomass alone as both the feedstock and the process energy source.
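
A rough carbon balance shows why external heat and hydrogen make such a difference; the figures below are my own illustrative assumptions, not Forsberg's numbers.

```python
# A rough carbon balance, not Forsberg's own figures: the carbon contents of
# biomass and of Fischer-Tropsch fuel are approximate, and the fractions of
# biomass carbon that end up in the fuel are assumptions for illustration.
carbon_per_tonne_biomass = 0.45    # tC per dry tonne of biomass (assumed)
carbon_per_tonne_fuel = 0.85       # tC per tonne of FT diesel (approximate)

scenarios = {
    "biomass supplies process energy and hydrogen": 0.30,  # assumed C-to-fuel fraction
    "external nuclear heat and hydrogen":           0.85,  # assumed C-to-fuel fraction
}

for name, c_to_fuel in scenarios.items():
    fuel_tonnes = carbon_per_tonne_biomass * c_to_fuel / carbon_per_tonne_fuel
    print(f"{name}: ~{fuel_tonnes:.2f} t of fuel per dry tonne of biomass")
```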

There are several possible variations (e.g. heat or steam from the nuclear plant can be used for high-temperature biomass reforming rather than for electrolysis of water). Or you can go for direct hydrogenation of biomass with nuclear-derived hydrogen. The production of ethanol from biomass using steam from nuclear plants is another option. Forsberg's claim is that approaches like this offer a way to make better use of high-capital-cost nuclear plants, producing a high-value storable fuel as well as electricity. He doesn't see hydrogen as a replacement fuel across the board, but as a means of producing products like this.

Even so, there may be some options for hydrogen as a new energy vector. Advocates of the hydrogen economy argue that one attraction of hydrogen is that, like natural gas, it can not only be transported down a pipe with relatively low losses but also be stored, giving it advantages over electricity as an energy vector. However, it's bulky. Cryogenic storage, as a liquid, is expensive and energy-inefficient. Chemi-absorption techniques exist for trapping it in organic lattices, but are not yet widely available on any scale. Storage as a gas in pressurised tanks is the easiest option, but takes up room. Underground storage in caverns is about the cheapest bulk option.
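
To put 'bulky' into numbers, here is a rough comparison of energy per unit volume for a few storage options; the densities and heating value are approximate textbook figures, used here as assumptions.

```python
# Rough numbers only: approximate storage densities and a lower heating value
# of ~120 MJ/kg for hydrogen, used as assumptions to show the volume problem.
LHV_H2_MJ_per_kg = 120.0

storage_densities_kg_per_m3 = {
    "H2 gas at 200 bar": 17.0,        # approximate
    "H2 gas at 700 bar": 40.0,        # approximate
    "H2 liquid (cryogenic)": 71.0,    # approximate
}

for name, density in storage_densities_kg_per_m3.items():
    energy_density_gj = density * LHV_H2_MJ_per_kg / 1000
    print(f"{name:>24}: ~{energy_density_gj:.1f} GJ/m^3")

print(f"{'diesel or gasoline':>24}: ~34 GJ/m^3, for comparison")
```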

The generation and storage of hydrogen could also be one way to allow nuclear plants to meet variable or peak energy demands. Forsberg suggests using excess electricity from nuclear plants at periods of low grid demand to generate hydrogen by electrolysis, and then running the electrolyser in reverse to generate electricity to meet demand peaks: evidently high-temperature electrolysis units can be operated as high-temperature fuel cells. A parallel, probably more energy-efficient, approach would be to use the nuclear hydrogen, and the oxygen also produced by electrolysis, to fuel an oxy-hydrogen burner unit producing high-temperature steam for a gas turbine. That could have an overall efficiency of 70%, since no boiler is required.
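
A back-of-envelope round-trip calculation makes the comparison concrete; the electrolyser and fuel-cell efficiencies below are my assumptions, and I have taken the 70% figure quoted above as applying to the burner-plus-turbine step.

```python
# Back-of-envelope round trip for the two peaking routes. Only the ~70%
# figure for the oxy-hydrogen burner + turbine step comes from the text;
# the electrolyser and reversed-electrolyser (fuel cell) efficiencies are
# assumptions.
electrolyser_eff = 0.80            # assumed, electricity -> hydrogen
fuel_cell_eff = 0.55               # assumed, electrolyser run in reverse
oxyhydrogen_step_eff = 0.70        # quoted above for the burner + turbine

print(f"electrolyser then fuel cell:      {electrolyser_eff * fuel_cell_eff:.0%} round trip")
print(f"electrolyser then oxy-H2 turbine: {electrolyser_eff * oxyhydrogen_step_eff:.0%} round trip")
```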

Solar hydrogen/synfuel

In passing, Forsberg does mention that solar thermal could be used as the heat source for some of the systems he outlines. This would make a lot of sense. The SOLASYS 'Power Tower' in Israel is already being used to steam 'reform' methane into hydrogen and carbon monoxide at around 700 °C.

Meanwhile, the US Department of Energy's Energy Efficiency and Renewable Energy (EERE) website describes how a solar concentrator can use mirrors and other reflective or refractive optics to capture and focus sunlight, producing temperatures of up to 2,000 °C. The CNRS solar heliostat/parabolic-mirror system at Odeillo in southern France has in fact been doing that since 1970. Direct dissociation of water at temperatures like this is possible but relatively inefficient – perhaps 1–2%. However, systems are being developed: e.g. see Hion Solar's approach: http://www.hionsolar.com/n-hion96.htm.

An alternative is to use the high-temperature focused solar energy to drive chemical reactions that produce hydrogen, possibly aided by catalysis. For example, the US DoE EERE website says that in one such system 'zinc oxide powder is passed through a reactor heated by a solar concentrator operating at about 1,900 °C. At this temperature, the zinc oxide dissociates to zinc and oxygen gases. The zinc is cooled, separated, and reacted with water to form hydrogen gas and solid zinc oxide. The net result is hydrogen and oxygen, produced from water. The hydrogen can be separated and purified. The zinc oxide can be recycled and reused to create more hydrogen through this process.' See: http://www1.eere.energy.gov/hydrogenandfuelcells/production/water_splitting.html
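
The stoichiometry fixes the ideal hydrogen yield per kilogram of zinc oxide cycled; the little sketch below works it out from molar masses, ignoring real-world losses.

```python
# Ideal-case stoichiometry only: each mole of ZnO cycled (ZnO -> Zn + 1/2 O2,
# then Zn + H2O -> ZnO + H2) yields one mole of H2, so the yield per kilogram
# of ZnO follows from molar masses. Real yields would be lower.
M_ZnO = 65.38 + 16.00      # g/mol
M_H2 = 2.016               # g/mol
LHV_H2 = 120.0             # MJ/kg, approximate lower heating value of hydrogen

h2_per_kg_zno = M_H2 / M_ZnO                    # kg of H2 per kg of ZnO cycled
energy_per_kg_zno = h2_per_kg_zno * LHV_H2      # MJ of H2 (as fuel) per kg of ZnO

print(f"{h2_per_kg_zno * 1000:.1f} g of H2 per kg of ZnO cycled "
      f"(~{energy_per_kg_zno:.1f} MJ as fuel)")
```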

Clearly the nuclear fission lobby is looking at how to redeem its capital-intensive technology, for example by using otherwise-wasted heat. Combined Heat and Power operation is one option, if there are heat loads nearby. But supporting hydrogen-to-synthetic-fuel production, as proposed by Forsberg, is obviously another. General Atomics has developed a sulphur–iodine cycle for thermo-chemical water splitting which, it's claimed, can in principle achieve cycle efficiencies of 50% using heat at 850 °C. That's achievable by some fission plants – and possibly also, at some point in the future, by fusion plants. But it's also a route that could be taken by solar, with arguably fewer problems: no wastes to deal with, and no fuel to find.

*Forsberg spoke at the World Nuclear University in Oxford last July. For more from him, see International Journal of Hydrogen Energy 3, 4 (2009).

If you want to keep up to date on issues and developments in the renewable-energy field, then see the long-running Renew newsletter, now re-launched as a PDF-delivered file. Details can be found at http://www.natta-renew.org.

The dangerous vagary in 'geoengineering'


By James Dacey, physicsworld.com.

OK, maybe I'm being a bit pedantic here, but am I the only one to be slightly confused and a little concerned by the vagueness of the term "geoengineering"?

I raise this question now because yesterday the UK's most prestigious scientific academy, the Royal Society, released a major report on the topic with the aim of clarifying the technical issues to better inform climate policy. Politicians, however, like things to be spelt out veeerry cleeaarrly. Therefore, any confusion surrounding the central term in this policy document could stall the debate on what may become a key component of the fight against climate change.

So, let's consult the Chambers English Dictionary, which just happens to be the only dictionary within grabbing distance at the time of writing:

"Geo" is the prefix – taken from Greek – for "Earth"; and engineer means "to put to practical use, engines or machinery of any type".

I think you'll agree that both of these words hold a broad range of meanings and a combination of the two makes for a very wide semantic field indeed. Use your own imagination here but I can picture all sorts of ways in which the naked Earth could be engineered – from spectacular agricultural terraces like those in the Andes to the idea of a giant Eiffel Tower replica carved into the Antarctic ice.

The Royal Society report, however, gives a specific definition of geoengineering as the "deliberate large-scale manipulation of the planetary environment to counteract anthropogenic climate change".

The report reviews a range of proposals, such as launching giant mirrors into space to reflect the Sun's rays, or injecting iron into the world's oceans to rapidly increase the amount of phytoplankton that consume carbon dioxide.

The point is that all of these geoengineering proposals are related to the climate – specifically, they are technological approaches to minimizing the effects of anthropogenic climate change. So why are we not calling this "climate engineering"? It's certainly not perfect, but it's surely a closer fit to the definition.

It just seems like the Royal Society has missed a great opportunity to kick this vague, poorly-chosen term into touch once and for all.

Anyway, if you can think of a better term that more accurately fits the definition then please feel free to offer your suggestion.
