December 2010 Archives
Pine Island Glacier is a giant, an outlet glacier draining about 160,000 km² of the West Antarctic Ice Sheet. It is the focus of intense current concern because the area near its grounding line, where it feeds a floating ice shelf, has exhibited rapidly increasing rates of thinning and concurrent retreat of the grounding line. With its neighbours along the coast of the Amundsen Sea, it is now contributing something like 0.15 to 0.30 mm/yr to a total rate of sea-level rise of about 2.5 to 3.2 mm/yr.
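As a quick sanity check, the share of current sea-level rise implied by those figures can be bracketed with one line of arithmetic, using only the numbers quoted above:

```python
# Share of current sea-level rise attributable to Pine Island Glacier
# and its Amundsen Sea neighbours, bracketed from the figures above.
contribution = (0.15, 0.30)   # mm/yr
total = (2.5, 3.2)            # mm/yr, total sea-level rise

low = contribution[0] / total[1]    # smallest share
high = contribution[1] / total[0]   # largest share
print(f"{100*low:.0f}% to {100*high:.0f}% of the total")  # 5% to 12% of the total
```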
It is natural to be rattled by these observations. There is no immediately obvious reason why the rate of ice loss should not continue to increase. Indeed, the recent observations might presage even faster acceleration, perhaps involving the discharge of a substantial fraction of the 1500 mm of sea-level equivalent still stored in Pine Island Glacier and its neighbours. And we have a serious enough problem even if Pine Island Glacier simply maintains its present rate of loss.
Knowing what they know and what they don't know, "alarmist" is therefore not a label about which glaciologists need to be embarrassed. But they also know that alarmist projections have a way of turning out to be exaggerated.
Consider the energy-balance models that describe how the climate responds to changes in radiative forcing. The first two such models, published independently by Mikhail Budyko and William Sellers in 1969, projected that the Earth's surface temperature would drop to tens of degrees below freezing if the output of the Sun were to decrease by only two percent. That made people sit up, and yielded a flurry of publications showing that there are plenty of ways in which the climate system moderates the severity of the positive ice-albedo feedback that was the basis for the original findings.
Even though they are based on measurement rather than on modelling, might our concerns about the recent behaviour of outlet glaciers in Antarctica and Greenland be similarly exaggerated? In a recent modelling study, Ian Joughin and co-authors suggest that the answer is "Probably, but not necessarily".
The model is not quite state-of-the-art, in that it does not solve the full Stokes equation but a simpler form of the dynamical system that is appropriate for ice shelves and ice streams. The authors were obliged to handle the grounding line, where the grounded ice stream feeds into the floating ice shelf, somewhat roughly. Nevertheless the calculations allow for careful treatment of the rapid sliding at the base of the ice stream, and the implied very large rates of basal melting. And the model does a good job of reproducing the documented behaviour of Pine Island Glacier up to 2009.
Most of the ice in the Pine Island Glacier catchment is flowing very slowly indeed, at a few metres per year at most. But as it converges on the outlet of the catchment it accelerates spectacularly, and is moving at thousands of metres per year by the time it starts to float at the grounding line. Most of the speed is the result of basal sliding, so the ice stream is not unlike a rigid plug, punching its way through the much slower ice on its flanks. This peculiar setup is the core of the problem.
Joughin and his co-authors simulated responses of the glacier to a variety of scenarios that might or might not represent the next hundred years. Even the more extreme scenarios, featuring basal melting at four times the present rate, did not lead to flotation of the entire 200-kilometre length of the ice stream, as one earlier study had suggested. Nor did the model come anywhere close to an even simpler extrapolation of current behaviour, based on kinematics rather than dynamics.
Don't breathe out yet, however. The results considered by the authors to be the most probable have Pine Island Glacier continuing to lose mass at rates comparable to the recent rates. It doesn't continue to accelerate, but it doesn't slow down either. The grounding line doesn't continue to migrate inland, but the inland thinning implied by the fast flow does continue.
It would be wrong to write off this heroic but tentative modelling effort, which is an important step towards the goal of understanding Pine Island Glacier. Models like this one, and like the energy-balance models that followed up on Budyko and Sellers, are part of the learning process. They suggest that doomsday isn't going to happen just yet. In short, doomsday scenarios are educational.
As a lapsed nuclear physicist, nowadays active in the renewable energy policy area, I sometimes make forays back to see how the subject is progressing.
I once worked in high-energy physics, so it's interesting to see how fusion research is moving on. A while back I did a tour of Culham and was impressed by the dedication of the research staff there - who seem prepared to spend their careers churning through data from endless JET runs, while knowing that it will be many decades before anything solid comes of it by way of a viable energy device. Maybe I'm just unable to defer gratification that long! But, more prosaically, I was also struck by the row of bottles in the toilet - for staff urine samples. That reminded me of one of the reasons why I got out of nuclear research. It seems that, while often touted as being a 'clean' option, fusion still has safety issues, just like fission.
I see that the EU is trying to find a way to maintain funding for the follow-up, ITER, in France, after it was discovered that there was a €1.4bn shortfall in the EU budget for the programme over the 2012-13 period. One option considered was to raid the EU 7th Framework research programme, but that would have reduced funding for other projects. It seems that the start of ITER construction may have to be pushed back to 2012. The EU's eventual contribution to construction is now expected to be around €6.6bn. That seems a lot of money for a very long-term project, which may (or may not) eventually lead to a technically and economically viable energy device sometime after mid-century. No help, then, with our current energy problems.
More recently I visited CERN in Geneva and the Large Hadron Collider, and went round their ATLAS project display. Sadly, visitors can't go underground, but it is still an impressive project. Another €6 billion's worth, I'm told. Pure curiosity-led research, although their PR made much of the training aspects, international collaboration and technical spin-off possibilities.
Well yes, but €6 billion would go a long way to helping us develop cheaper more efficient renewable energy technologies. As would the €6bn for ITER. It may be good to try to develop novel energy options for the very long term, and to know what happened in the first few nano-seconds after the Big Bang, but personally, I'm more concerned about what will happen in the next few years as we try to grapple with climate change and energy security.
However, there is no denying that 'big science' can be intriguing, inspiring and even fun! So, good luck to them. But spare a thought for the hard pressed innovators trying to develop and deploy new solar, wind, wave, tidal and bioenergy systems in an ever more competitive and risk averse market environment, often with minimal state funding.
We did get a Christmas present of sorts though from the UK government - a set of proposals for Electricity Market Reform, including, maybe, a new form of support for low-carbon technologies. They say it's a Feed-In Tariff, but the variant they seem to favour has variable, market-determined prices and possibly involves a contract auction/tendering process. I know it's traditional not to like your Xmas presents, but I wonder if we can swap the one they are offering for a proper fixed-price Feed-In Tariff, of the sort that has worked so well in Germany.
I have a horrible suspicion that what has actually happened, as occasionally does at Christmas, is that they have wrapped up an old, unwanted, discarded present from a few years back to try to offload it - in this case the old Non Fossil Fuel Obligation. The NFFO used a contract tendering process and led to lots of optimistic bids for renewable energy projects, many of which were then given the go-ahead on the basis of price/capacity conflation. Tragically though, very few projects actually happened - developers often found they couldn't deliver at the price they had specified to win the contract.
As with the system that eventually replaced the NFFO, the Renewables Obligation, the competitive mechanism in the NFFO also meant that only the most developed renewables got supported - sewage gas, landfill gas and then wind. And it could be the same with the proposed new 'auction contracts for difference' system - emerging options, such as wave and tidal stream, could be squeezed out. As Chris Huhne put it, there was the risk that 'the contract arrangements exclude technologies that may in the long run actually perform a very useful role in providing low-carbon electricity.' So some other form of support might have to be offered.
It's good that the government has recognised, at long last, that the Renewables Obligation has problems, and is prepared to phase it out. That will cause disruption of course, but we have to make changes - the RO is an expensive way of subsidising a limited range of projects (the relatively high payments may be why some of those who get projects supported under it like it). But before we throw away the wrapping on its proposed replacement, maybe we could ask, via the handy consultation process that is attached, for a proper fixed-price FiT - and, while we are at it, one that doesn't also support nuclear. That was the really unwelcome part of the present: if nuclear projects are eligible for support they could well squeeze out renewable projects. Indeed some even see that as the aim: http://realfeed-intariffs.blogspot.com/2010/12/uk-government-to-subsidise-nuclear.html
Certainly anti-nuclear Scotland won't want anything to do with it. Scottish First Minister Alex Salmond said 'it could see support mechanisms for nuclear generation in England at the expense of renewable energy sources and CCS [carbon capture and storage] in Scotland.' Oh dear. Whatever happened to peace and goodwill to all men?
California has long been leading the US towards environmentally sound policies, starting with air quality standards in the early 1970s. The California Global Warming Solutions Act of 2006 (AB32) established a comprehensive program of regulatory and market mechanisms to achieve quantifiable, cost-effective reductions in greenhouse gases.
Cap and trade is the most prominent mechanism envisioned by AB32, and in December 2010 the California Air Resources Board endorsed the cap-and-trade regulation.
The decision is a huge step forward for climate protection - but it also has drawbacks. The major drawback is the free give-away of GHG allowances to polluters. As the allowances still have a market value, energy companies can (partially) charge the value of the allowances to consumers and generate windfall profits. While the efficiency of the instrument may still be guaranteed, rewarding the polluter is inappropriate. In this, California repeats the mistakes of the European Union.
On the big plus side, California is trying to be comprehensive as soon as possible. From 2015, transport fuels and natural gas will already be included under the cap. In doing so, California overtakes the EU in terms of coverage (the EU does not cover the transport sector).
A recent study, "Car Industry, Road Transport and an International Emission Trading Scheme" (CITIES), by our group at TU Berlin and PIK, commissioned by BMW*, concludes that including transportation in cap and trade increases flexibility in abatement, and thereby overall efficiency, without significantly increasing abatement costs for utilities.
A key finding of CITIES is that with the rise of electric cars and the like, the energy and transport sectors become more closely intertwined. CO2 emissions will no longer be determined by the end-of-pipe emissions of the automobile alone, but to a large degree by upstream fuel production. Including the emissions from the transport sector at the fuel-production level will provide for a more stringent macroeconomic treatment of greenhouse gases. The respective responsibilities of the actors will be more precisely defined with emission trading for fuel producers and complementary efficiency regulation for automobile manufacturers.
Including transport in cap and trade, hence, will leverage the effectiveness of cap and trade and increase efficiency. While the US Senate blocks any action on climate change and global warming, individual states like California make headway, and signal to the world that - at state level - the US takes action.
REFERENCE: Creutzig, F., Flachsland, C., McGlynn, E., Minx, J., Brunner, S. and Edenhofer, O. (2010) CITIES: Car Industry, Road Transport and an International Emission Trading Scheme - Policy Options. A report commissioned by BMW. (pdf)
* Find here a press release of BMW supporting the cap and trade scheme.
Plenty of evidence has emerged recently to show that the beds of glaciers can be complicated places, especially when we consider the liquid water down there and the fact that much of that water must have come from the surface.
In a paper just published in Nature, Christian Schoof explores this complexity and explains at least some of it.
One of Schoof's insights is that cavities and channels are two very different ways to store subglacial water. The size of a bed cavity, in the lee of a bump for example, is governed by the respective rates at which the basal ice continues to flow horizontally down-glacier, opening the cavity, and creeps downwards, lowering the cavity roof. The size of a channel is governed by the rate at which its wall expands by melting and the rate at which the wall shrinks by inward creep of the ice. Cavity or channel, the creep rate depends on the difference between the pressure of the ice on the void and the pressure of whatever is in the void, water or air, on the ice.
For a given thickness of ice overburden, this pressure difference depends on the fluid pressure. Air is useless at opposing the weight of the ice, so we are only interested in the water pressure, which requires that we acknowledge the importance of meltwater from the surface. Water melted at the bed, by geothermal heating and by friction between the ice and the bed, is insignificant, and the pressure of the void-filling water on the overlying ice is likely to depend entirely on how fast water is arriving at the bed from above.
So in fact three variables determine how the meltwater at the bed organizes itself: the melting rate at void walls, the opening rate due to down-glacier flow, and the closure rate due to the pressure difference at void walls, the latter depending in turn on the rate of delivery of surface meltwater. These variables are entangled with each other, but Schoof combines them ingeniously, and consistently, in a model that shows that this is a one-thing-or-the-other problem. A collection of linked cavities can be a stable way to organize the meltwater, and so can a tree-like network of channels, but any other arrangement of the voids will evolve into one of these two.
Linked cavities can be kept full, and can transfer meltwater not too inefficiently, if, or rather because, the water pressure is high. At high water pressure, the ice will flow faster because less of it is in contact with its solid bed, meaning that cavity opening will proceed faster. More of the ice will reach lower, warmer elevations sooner, increasing the production of surface meltwater.
But channels are different. The more meltwater in them, the faster their walls melt and the bigger they get, lowering the water pressure and so tending to drive pressurized cavity water towards and into them. In Schoof's simulations a few big channels end up discharging the meltwater. But because more of the water is in big channels, less is spread over the bed. More of the ice is in contact with the bed and not with water, and the ice will slow down.
But now comes an intriguing twist in the plot. Surface meltwater tends to reach the bed in pulses, once a day. Closure of the channels by creep is a slower process, requiring days or longer. So the daily pulses raise the water pressure in the channel network, driving water out of the channels, weakening the contact of the ice with the solid bed, and thus speeding the ice up. This speedup is not fully integrated into Schoof's analysis, but is clearly a way for the subglacial drainage network to have its cake and eat it. More meltwater implies channelization, reduced water pressure, and deceleration of the glacier. But more meltwater arriving in pulses means that a glacier can still slide rapidly over its bed even though the drainage network at its bed has become channelized.
If, over the next century or two, we lose a large fraction of the ice now in the Greenland Ice Sheet — or, perish the thought, the Antarctic Ice Sheet — then greenhouse gases will have a lot to answer for. But Christian Schoof's analysis shows that so will the Sun. Or, to be more accurate, so will the Earth, because it turns to meet the Sun once a day.
A recent story in the domain of the water-energy nexus caught my eye. The story describes how the Oyster Creek nuclear power plant in New Jersey will shut down in 2019, 10 years earlier than planned, because it would otherwise have had to retrofit cooling towers. Environmental groups seem mostly behind the decision, but the Sierra Club is one group that is far from satisfied. From the website of Exelon, the power plant's owner: "Oyster Creek began operating in December 1969 as the first large-scale commercial nuclear power plant in the United States. Its single boiling water reactor produces 645 net megawatts (MW), enough electricity to power 600,000 average American homes."
The decision to shut down the plant rather than retrofit it with cooling towers stems from a US Environmental Protection Agency (EPA) rule. The rule calls for existing power plants that use "once-through" or open-loop cooling to replace that design with wet cooling towers, which withdraw less water. The reason the rule exists is that once-through cooling systems withdraw water at very high rates (tens of thousands of liters per MWh, or roughly 100-200 liters per kWh) to cool the steam cycle, and then discharge that water, now heated, back to the water source. Cooling tower systems withdraw much less: 1-5 liters per kWh. In the case of Oyster Creek the water source is seawater from Barnegat Bay, and the plant has been blamed for depleting much of the bay's marine life.
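To get a feel for what those withdrawal intensities mean at the scale of a 645 MW plant, here is a rough sketch. The per-kWh figures are illustrative assumptions of my own (order-of-magnitude values for each cooling design), not numbers from the EPA rule:

```python
# Rough daily cooling-water withdrawal for a 645 MW plant running flat
# out, under assumed withdrawal intensities (illustrative values only).
PLANT_MW = 645
KWH_PER_DAY = PLANT_MW * 1000 * 24          # 15.48 million kWh/day

INTENSITY_L_PER_KWH = {
    "once-through": 150,   # assumed order-of-magnitude value
    "cooling tower": 3,    # assumed, within the 1-5 L/kWh range quoted
}

for system, intensity in INTENSITY_L_PER_KWH.items():
    litres_per_day = KWH_PER_DAY * intensity
    print(f"{system}: {litres_per_day / 1e9:.2f} billion litres/day")
```

Under these assumptions a once-through plant cycles on the order of a couple of billion litres of bay water through its condensers every day, roughly fifty times what a cooling tower would withdraw.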
This EPA ruling that demands conversion of cooling systems from once-through to cooling towers is meant to mitigate impacts upon marine life from sucking in marine animals into the water intake, impinging larger animals onto filter screens, and discharging warm water that disrupts the ecosystem's normal temperature balance. The drawbacks to this retrofitting are increased capital costs, slightly less net power output, and higher water consumption. Cooling towers are generally not used with intake of seawater because the cooling mechanism is via evaporation of the water. Thus, after the water evaporates, salt and other minerals deposit onto the cooling fins of the cooling tower creating a maintenance issue. The costs of chemicals and maintenance are generally not worth using cooling towers with seawater, although the use of cooling towers with freshwater is very common.
It is not clear whether the 10-year-early shutdown of Oyster Creek is the beginning of a trend or just one of a few cases to be heavily affected by the cooling tower ruling. Given that Oyster Creek was the first large nuclear power plant in the US, it perhaps was destined to be one of the first to be retired. Any power plant using once-through cooling with seawater that plans to operate more than 5 more years will have a difficult decision to make. For power plants using seawater for cooling, I think cooling tower retrofits are likely to benefit the environment twice over: directly, through reduced impacts on marine ecosystems, and indirectly, because lower profits for the operator and/or higher electricity costs passed on to consumers will tend to lower electricity consumption. Conversions from once-through cooling to cooling towers on rivers and freshwater lakes will have lower economic impacts, but the higher water consumption will affect water flows downstream. There the environmental benefits are less clear, though they still lean toward beneficial.
A Memorandum of Understanding on the EU supergrid was signed recently by ten European ministers from countries bordering the North Sea, covering the plan to develop an offshore electricity grid enabling interconnection between continental, offshore and British energy resources. In addition to allowing more trade in energy across the channel and North Sea, thus increasing energy security, it will also link up offshore wind projects (140GW are currently planned) and other variable renewables to pumped hydro storage facilities across the EU. That will help to balance the grid with power in and out, the wider geographical footprint averaging out local variations in renewable supply.
As the European Wind Energy Association put it in its new 'Powering Europe' report, 'The grid plays a crucial role in aggregating the various wind power plant outputs installed at a variety of geographical locations, with different weather patterns. The larger the integrated grid - especially beyond national borders - the more pronounced this effect'.
However, building supergrids will not be easy. Nature published a useful article on the supergrid (2 Dec 2010, Vol 468 pp 624-5) which highlighted some of the technical problems with High Voltage Direct Current (HVDC) links. They have to be used for long-distance undersea grid links since otherwise, with AC, the energy losses are too high. But, it noted, there is currently no such thing as a circuit breaker for high-voltage DC. Power on AC grids can be disconnected relatively easily using circuit breakers, which fire just at the point when the cyclic alternating current momentarily reaches zero. With DC, however, you need milli- or even micro-second disconnection. Nature reports that this sort of issue is being addressed in a three-year, €60m EU programme called TWENTIES, a consortium of 26 academic and industrial partners.
Ministers from the UK, Ireland, Belgium, Denmark, France, Germany, Luxembourg, the Netherlands, Norway and Sweden have agreed to start working on regulatory and technical issues, but of course cost is the big one - and who pays. Nature says that Germany is more interested in its Desertec solar project, and France is presumably more focused on its proposed HVDC Transgreen links to North Africa. See my earlier blog: http://environmentalresearchweb.org/blog/2010/08/only-connect--cspsupergrid-iss.html
However, these southern projects are still at the early planning stage, and given the spread of renewable capacity within the EU, there will be a growing need to balance variable renewables. So the more inter-connectors there are the better, and hopefully France and Germany will stay on board the North Sea project. Otherwise we may have to curtail, and waste, valuable output from some of our wind farms when the wind is high, but demand is low- especially if there is also a lot of inflexible nuclear on the grid.
In some locations this is already happening. For example, the Orkney Isles distribution company, supported by OFGEM, has introduced an active grid management system, which curtails output from their wind plants during high-wind, low-demand periods. This may be reasonable in isolated island areas with small local grids, where the cost of undersea grid links to the mainland, to export occasional excess power, is very high, and it does mean that more/new renewable capacity can be added to supply a larger contribution at other times. And it certainly makes sense to use local resources to meet local needs as far as is possible. But as a national strategy this decoupling has its limits. Local grids have their place, and, in some locations, local energy storage too, e.g. via batteries or even hydrogen production, despite its cost. See for example the PURE wind-hydrogen project on the Shetlands: http://www.pure.shetland.co.uk/html/pure_project1.html. But to help balance large contributions from variable renewables effectively there is also a need to link to national and international grids.
Some look to very large-scale integration. For example, there have been proposals for links between the grid systems of Europe and Asia, and even for a cross-Atlantic undersea grid link. The Nature article suggests that piecemeal, incremental development is more likely, and given the technical complexity and political difficulty of making cross-country links, that may well be how it plays out. But it does seem clear that we will be seeing supergrids stretching out around the world soon. Although organisations like GENI (http://geni.org) have been promoting world-wide links using HVDC, so far no one is proposing anything like the vast round-the-planet global power transfer system once famously outlined by Nikola Tesla, using the upper atmosphere and the earth itself as the paths. Round-the-planet links, of whatever sort, would of course mean that solar energy from the sunlit side could be fed to meet demand from those on the dark side, but so far ideas like this remain the province of science fiction.
For more on renewable energy-related developments and policies see www.natta-renew.org
When the weather is warm enough, meltwater is produced at the surface of the glacier. Some runs off directly. Some finds its way into the glacier interior and, although much of this englacial meltwater flows out again, some of it is left over at the end of summer.
The Phillips paper focusses on the thermodynamics of the leftover englacial meltwater. If the ice is at the melting point, or is temperate in glaciological jargon, it can't get any hotter without melting. But what if the ice is cold, which in the glaciological jargon means "below its melting point"? The meltwater can be no colder than the melting point, so we have a difference of temperature and therefore a flow of heat from the water to the cold ice.
If, or rather once, the meltwater is at the melting point, it freezes as the winter advances. The freezing releases about 335,000 Joules of heat for each kilogram of water that turns to ice, roughly equivalent to one 60-watt light bulb burning for an hour and a half (but of course we are talking about lots and lots of kilograms, not just one). This latent heat of fusion adds to the thermal contrast between the cold ice and the gradually freezing meltwater.
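The light-bulb equivalence is easy to verify from the latent heat alone, a one-line check using the 60 W bulb of the text:

```python
# How long must a 60 W bulb burn to release as much energy as the
# latent heat freed when 1 kg of water freezes?
LATENT_HEAT_J_PER_KG = 335_000   # latent heat of fusion of water
BULB_W = 60

seconds = LATENT_HEAT_J_PER_KG / BULB_W
print(f"{seconds / 3600:.1f} hours")   # 1.6 hours, i.e. about an hour and a half
```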
Phillips and his co-authors show that, far from being just an interesting curiosity, the whole phenomenon of cryohydrologic warming, heat transfer from meltwater to cold ice, might be highly significant.
Internal accumulation, by refreezing of meltwater, implies warming of the glacier interior. It explains why, in glaciers that are mostly cold, the ice at high altitude in the accumulation zone is usually warmer than the ice at lower altitude in the ablation zone. But Phillips and his co-authors are more interested in cryohydrologic warming of the ablation zone. In particular, they point out that when the equilibrium line rises in a warmer climate, the part of the glacier that was formerly above the equilibrium line switches from net gain of mass (more snowfall than melting) to net loss (more melting than snowfall). The warming climate produces more meltwater, and any of the meltwater that fails to get out of the glacier drainage system will add fast cryohydrologic warming to the slow climatic warming.
It is a matter of simple physics to work out what "slow" and "fast" mean. The warming proceeds by conduction, so divide the heat content per unit volume by the thermal conductivity, both of which can be looked up in a book. The resulting number is about 212,000 seconds per square metre. Then imagine that the ice is divided into a grid of square columns, every one of which has a meltwater conduit in the middle. Now multiply 212,000 by the cross-sectional area of each column. If the conduits are 20 m apart, the cross-sectional area is 400 square metres and the cryohydrologic warming happens on a time scale of 2.7 years. (There are 31,536,000 seconds in a year.)
The same kind of back-of-the-envelope calculation works for the slow climatic warming, but now all of the heat has to be conducted downwards from the surface. An appropriate number to substitute for the conduit spacing is the ice thickness, say 100 to 1000 m. The resulting time scale for the climatic warming is about 70 to 7,000 years.
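Both back-of-the-envelope timescales follow directly from the 212,000 s/m² figure quoted above; the 20 m conduit spacing and the 100-1000 m ice thicknesses are the values used in the text:

```python
# Conductive warming timescale: 212,000 s/m^2 (heat content per unit
# volume divided by thermal conductivity, as quoted in the post)
# multiplied by the cross-sectional area each heat source must warm.
S_PER_M2 = 212_000
SECONDS_PER_YEAR = 31_536_000

def warming_years(length_m):
    """Years to warm a square column of side length_m: the conduit
    spacing for cryohydrologic warming, the ice thickness for slow
    climatic warming from the surface."""
    return S_PER_M2 * length_m ** 2 / SECONDS_PER_YEAR

print(f"cryohydrologic, 20 m conduit spacing: {warming_years(20):.1f} yr")
print(f"climatic, 100 m ice thickness: {warming_years(100):.0f} yr")
print(f"climatic, 1000 m ice thickness: {warming_years(1000):.0f} yr")
```

The first line reproduces the 2.7-year figure, and the last two bracket the "about 70 to 7,000 years" quoted for the climatic warming.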
Bringing cold ice to its melting point in a few years, instead of a few centuries, implies that the ice suddenly becomes able to move a lot faster. Temperate ice is ten times less viscous (less stiff; runnier) than ice at -10°C.
Cryohydrologic warming has further implications for the response of cold glaciers to climatic change, but for the present there are loads of questions to be answered, starting with geometrical ones. What about the varying size and spacing of the meltwater conduits? Is 20 m a good representative number for the spacing? How thoroughly does the system of conduits permeate the bulk of the cold ice? What if no englacial meltwater remains at the end of summer? What if there is some, but not all of it freezes in the winter? Does the ice really speed up as expected, and if so does that mean more cracks for the meltwater to penetrate, and thus still faster cryohydrologic warming?
All this reminds me of the undergraduate essay I had to write on the subject 'Clever ideas, whether right or wrong, stimulate research. Discuss.'
DECC's new report on 'young people and energy', based on participative surveys, shows massive support for renewable energy among young people. 94% of those questioned said that offshore wind was the 'fairest' energy technology, 81% said onshore wind, and 94% supported solar energy. This compares with 2.2% for coal, and there were some very critical responses on nuclear: 19.8% of those taking part in the survey thought nuclear power was fair, 26.6% not so fair, 30.8% not fair and 22.8% a raw deal.
These figures are in a report presented by DECC's pioneering Youth Advisory Panel to energy and climate change minister Charles Hendry. The report calls for greater youth consultation on energy and climate change policy and for young people to get involved.
Based on DECC's 2050 Pathways project, the report looks at the UK's energy policies from the perspective of those people who will have to live with those decisions for their entire adult lives. The report was drafted by young people aged between 16 and 25 who visited power stations, nuclear plants and projects promoting renewable energy sources to investigate the issues at first hand and met with experts, industry, pressure groups and innovators, to look at how we can keep the lights on in 2050 while reducing carbon emissions.
The report says that while it is 'important that there is enough energy to go around', it would be 'irresponsible for us to only focus on providing energy to keep living the same way as we are today'. It calls for:
• a fair deal for young people in the decision-making process;
• work to ensure that Government does not lock young and future generations into ecological debt; and
• continued engagement in dialogue with the youth constituency and stakeholdership to ensure that the youth perspective is heard, and responded to, by Government.
Youth Panel member Tom Youngman, 17, from Bath, representing Eco-Schools and a Green Flag school, said: 'We do not want to inherit a diminished planet, as it often seems we are being asked to, and this is a huge step towards ensuring a sustainable and equitable future for our and subsequent generations.'
Nuclear power was the issue on which opinion was most divided, although a clear majority were against: under 20% were for it. This divide was evidently consistent across the surveys, at the face-to-face workshop and within the Panel itself. The Panel found that a critical issue for everyone, regardless of their position on nuclear power, was whether or not the waste can be transported and disposed of safely. They commented: "We are very concerned that short-term reasoning is being used to justify building a technology with substantial long-term impacts and responsibilities. The risks associated with nuclear cannot be ignored. Dangerous nuclear waste is a legacy we would rather not leave to future generations, and the heavy investment that will be required threatens to distract us from pursuing safer, cleaner and more future-friendly energy solutions."
Their recommendations on nuclear included:
• The government must develop a transparent and viable long-term strategy for dealing with our legacy of nuclear waste. This strategy must forecast beyond the current Parliamentary term, to at least 150 years ahead;
• The government must make sure that adequate funding for the decommissioning of current and any future nuclear power plants is assured in the long term, and that this financial burden is not unfairly placed upon future generations;
• Any funding or governmental support for further nuclear power development must not detract from funding or support for alternative, renewable forms of energy.
The Youth report emerged just after a Commons vote on the Justification process for nuclear power, focusing on the case for the European Pressurised Reactor (EPR) and the Westinghouse AP1000. Around 80% of MPs backed the Justification packages, with just 27 and 26 MPs respectively voting against them. In all, 520 backed the EPR and 517 the AP1000, out of 649 MPs eligible to vote, so there were significant abstentions or no-shows. Interestingly, though, Nick Clegg and Chris Huhne voted for both, despite the Lib Dem coalition agreement that the party would maintain an anti-nuclear stance but abstain from voting.
There had been no Commons debate preceding this vote, although the House of Lords Statutory Instruments Committee looked at the new nuclear legislation and review process. Greenpeace had told them that the EPR and AP1000 reactor designs were untried and untested anywhere in the world, and that the vendors and operators of the potential new reactors had not yet presented firm plans for the longer-term storage of spent fuel. The Generic Design Assessments have also not yet been completed.
With students already taking to the streets in large numbers over the university fees issue, and incensed by what they evidently see as a sell-out by the Lib Dems, it could be that, if the DECC Youth Panel's anti-nuclear views are representative, we might even see demonstrations on the nuclear issue of the sort already happening in Germany in response to their coalition's policies.
That's unlikely in the UK context perhaps, but a clash of views on generational lines does seem possible. Poll data is often misleading (it depends on the questions asked), but for what it's worth, a recent Ipsos MORI poll for the Nuclear Industry Association found 40% of the adults they asked were in favour of nuclear (up 7% from 2009) and 17% against (down 3%). 47% backed new nuclear build, while 19% did not. Only 25% of women were in favour of nuclear, though that was up 4% from 2009. The result for women apart, these adult figures, and the MPs' voting choices, are almost exactly the inverse of those reflected in the, admittedly small, Youth Panel survey, with around 80% against nuclear.
Given the evident concern about nuclear waste, it will be interesting to see if there is any reaction from young people to the government's recent admission that, on current NDA plans, the proposed Geological Disposal Facility (GDF) is not expected to be available to take spent fuel from new nuclear power stations until around 2130, which it notes 'is approximately 50 years after the likely end of electricity generation for the first new nuclear power station'. (From the Government Response to Parliamentary Scrutiny of the draft National Policy Statements for Energy Infrastructure.)
The point is that the government hopes that a final site for high-level waste will be found and ready by 2040, but it seems it will take 90 years to emplace the existing 'legacy' waste in it, so the accumulated wastes from the new plants will have to wait, somewhere, until 2130, long after the new plants have all closed. That's quite some wait: a few generations of students ahead, if there are any then!
DECC Youth Panel report: www.decc.gov.uk/en/content/cms/news/pn10121/pn10121.aspx
I was wondering about when the first measurement was made on a glacier. This is probably a diffuse thing to wonder about, because you can measure properties of the atmosphere and even of the solid Earth as a whole while standing on a glacier. You could, for example, measure the air temperature or the air pressure, the latter giving you a decent chance of estimating the surface altitude.
So I sharpened my focus slightly, to the first measurement of a glacier, but still on the glacier. A measurement of retreat or advance of the terminus might be very interesting, but it would not count because it would be made from in front of the terminus. But a measurement of the glacier's velocity would count.
The idea is fundamentally simple, and is conveyed well by one of the terms we use for it: feature tracking. Identify some feature on the glacier surface that is easy to see, such as a boulder, a particular crevasse, whatever. Measure its position accurately once, relative to an immobile reference point or baseline on land rather than on the ice. Measure it again at some later time. The distance between the two positions divided by the time between the measurements is the glacier speed. The velocity (that is, the speed and the direction, both together) requires only the simple trigonometry that you needed anyway to work out the distance.
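The arithmetic behind feature tracking is easy to sketch. The snippet below (a minimal illustration, not any particular surveyor's method) takes two surveyed positions of a stake, expressed as east/north coordinates in metres relative to a fixed point on land, and returns speed and direction; the sample numbers are hypothetical, loosely inspired by the ~4.5 m winter displacement mentioned later.

```python
import math

def glacier_velocity(p1, p2, days):
    """Speed and direction of a surface feature from two surveyed
    positions (east, north) in metres, measured `days` apart.

    Returns (speed in metres per day,
             bearing in degrees clockwise from north).
    """
    de = p2[0] - p1[0]                     # eastward displacement
    dn = p2[1] - p1[1]                     # northward displacement
    distance = math.hypot(de, dn)          # straight-line displacement
    bearing = math.degrees(math.atan2(de, dn)) % 360.0
    return distance / days, bearing

# Hypothetical example: a stake that moves 4.5 m due south over
# roughly 150 days of winter moves at 0.03 m per day.
speed, bearing = glacier_velocity((0.0, 0.0), (0.0, -4.5), 150)
print(round(speed, 3), round(bearing, 1))
```

Note that the speed is an average over the whole interval; a glacier that flows faster in summer than in winter would need more frequent resurveys to reveal it.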
The only problem might be lack of features that are easy to see. So make your own feature. As long as it is immobile relative to the ice, and you have a fixed point from which to observe it, any artificial object will do. For 240 years the object of choice has been a stake, jammed into the glacier by brute force or, much better, lowered into a hole drilled for the purpose.
Simple as it is, the idea had yet to occur to anyone at the time — the late 13th century — of the description in Marco Polo's Travels of the glacier on Mount Ararat.
Apparently the idea that the motion of glaciers presents a problem did not arise until the 18th century. One intellectual roadblock was the need to get clear on the two motions in question. First, the glacier can get longer or shorter. That is, its terminus can move forward or backward, but this is an intellectual trap. The motion of the terminus depends on the mass balance. If more ice arrives than melts or falls off, the terminus advances; if less ice arrives, the terminus retreats.
I smuggled the second kind of motion into that description of the first: the ice can only "arrive" if it is itself in motion. The velocity of the ice and velocity of the terminus are two different things.
Once in a while I come across someone who is surprised to learn that No, the ice itself does not go backwards up the valley. The ice is obeying the forces that are driving it — gravity, pressure and the frictional resistance of its bed. There is no force that can make it flow backwards. But I am not surprised at this surprise in someone who has never had to think about the problem before. It is a tough nut, and it seems to have taken some decades to crack it.
The first person to assert that glacier ice moves was Peter Martel, writing in 1742. His assertion did not go unchallenged. At least one critic thought that ice flow was impossible. On the other hand, at least one interested person thought that a good approach to the question was to look into it. In November 1772, at the instigation of Pierre-Michel Hennin, three stakes were placed in the Mer de Glace on the northeast flank of Mont Blanc. The next spring, it seems, they had advanced about 4.5 metres with respect to a fir tree on the valley side.
So that is our first serious record of a measurement of a glacier. It is a bit of a pity that it was probably flawed. In 1842 James Forbes, one of the giants of 19th-century glaciology, measured velocities more than ten times as great at nearly the same place.
But that Hennin got it wrong isn't really the point. The point, grasped by Hennin and made repeatedly, and forcibly, by Forbes, is that if you want the truth about a matter of fact, the best bet is a measurement.
The costs of offshore wind have been rising, due in part to the rising cost of energy, which has pushed up the costs of materials like steel. As a result, offshore wind capital costs have doubled in the last five years. The UK Energy Research Centre's new report on the issue, 'Great Expectations', estimates that costs will remain high for the next few years, but suggests that they will begin to fall by 2015, with a 'best-guess' reduction in costs of 20% by 2025, and continued reductions after that.
Some of the costs are linked to grid connections, which can be very expensive for marine cables. As I've reported before, there has been a debate over the merits of sharing grid links to shore between rival projects, with, given the competitive market framework evidently favoured by Ofgem, the risk being that we could end up with multiple links running close by in parallel. However, businessgreen.com says that research carried out by National Grid suggests an integrated approach, with shared grid and services, could cut the capital cost of grid connections by 25%, halving the number of onshore cable landing sites from 61 to 32 and reducing the number of offshore substations from 73 to 45 in the process. The report also claims that the approach would cut the number of onshore AC cables by 77%, and halve the length of offshore AC cables required, from 1206 km to 603 km.
Even so, offshore wind is much more expensive than on-land wind, costing between £157 and £186 per megawatt hour (MWh) depending on location, compared to £94 per MWh for on-land projects, according to a report published in June by the Department of Energy and Climate Change.
It commented: 'While offshore is projected to see a large reduction in costs, compared with onshore wind, it will still face much higher costs at £110–125/MWh for projects commissioned from 2020.' For comparison, new nuclear will, it's claimed, cost £99 per megawatt hour, while new coal- and gas-power generation will cost an estimated £105–115 per MWh, with carbon capture and storage attached. But the offshore-wind resource is very large, perhaps up to 200 GW in the North Sea, or much more if deep-sea floating wind turbines can be successfully deployed – up to 406 GW in all according to the Public Interest Research Centre's Offshore Valuation. And of course there are no emissions or wastes to deal with, or problems with fuel supply – it's free and everlasting.
The proof of the pudding though is in the eating. The EU has been at the forefront of offshore-wind development, with the UK now in the lead at 1.34 GW, and the UK, Norway, Denmark, Spain and Portugal are all developing floating turbine systems of 10 MW and above. However, the US has also decided to try to catch up, with a series of offshore-wind projects on the Atlantic seaboard.
A new report, 'Untapped Wealth: The Potential of Offshore Energy to Deliver Clean, Affordable Energy and Jobs,' by international ocean conservation organisation Oceana, puts the offshore-wind potential for the US Atlantic coastal region at 127 GW. It claims that harnessing offshore-wind power in Atlantic waters is a much more cost-effective way to generate energy than oil and gas drilling.
Although a five-turbine, 20 MW pilot wind farm 10 miles offshore in Lake Erie may actually be first, in terms of ocean locations, Cape Wind's 430 MW project in Massachusetts is closest to operation, with Deepwater Wind's 20 MW pilot project planned for Rhode Island following closely behind. But Maine recently entered the race, as a possible site for Norway's Hywind floating device, as part of a 30 MW offshore renewable programme, and New Jersey is looking to have 1.1 GW of offshore wind capacity.
There is strong interest in going further out to sea to avoid visual intrusion – with floating devices also being seen as a key potential breakthrough, since, unlike the UK and some other EU countries, the US does not have shallow offshore areas on its Atlantic coast. Moreover, the longer term potential may be very large. A new NREL report puts the total theoretical US offshore wind resource at 4,150 GW, nearly four times current US total installed generation capacity (1,010 GW in 2008). The potential generating capacity was calculated from the total offshore area within 50 nautical miles of shore, in areas where average annual wind speeds are at least 7 meters per second (approx. 16 mph) at a height of 90 meters, assuming 5 MW of wind turbines could be placed in every square km.
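The NREL resource estimate above is, at bottom, a simple product of qualifying area and assumed turbine packing density. The sketch below shows that arithmetic; the 830,000 km² figure is inferred here from the stated 4,150 GW and 5 MW/km², not taken from the report itself.

```python
# Back-of-envelope version of the NREL-style resource estimate:
# nameplate capacity = qualifying offshore area x turbine packing density.
MW_PER_KM2 = 5.0  # assumed density: 5 MW of turbines per square km

def offshore_capacity_gw(area_km2, mw_per_km2=MW_PER_KM2):
    """Total nameplate capacity in GW for a given qualifying area
    (within 50 nm of shore, average wind >= 7 m/s at 90 m height)."""
    return area_km2 * mw_per_km2 / 1000.0  # MW -> GW

# 4,150 GW implies roughly 830,000 km^2 of qualifying area:
print(offshore_capacity_gw(830_000))  # 4150.0
```

Such figures are theoretical upper bounds: they say nothing about capacity factors, exclusion zones or economic viability, which is why deployed capacity is orders of magnitude smaller.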
China is also developing some of its very large offshore wind potential, put at up to 750 GW in all. So far it's been relatively cautious, focussing on near-shore options, as happened initially in the EU. A new study by the Wind Energy and Solar Energy Resources Evaluation Centre, run by China's Meteorological Administration, put the near-shore wind resource at 200 GW, at depths of 2–25 meters, which are ideal for in-shore and inter-tidal wind farms, the latter being in coastal areas that are submerged during flood tides but exposed during ebb tides. But the study didn't look further out.
In May, according to reports in Windpower Monthly, China opened a public tender for 1,000 MW of intertidal and offshore wind farms in Yancheng, Jiangsu Province, in East China. Electric power company Huaneng has now announced that it will invest $82 m to install 100 of the 3 MW wind turbines developed by Sinovel, the largest wind turbine producer in China. Jiangsu Province already has the largest number of projects and an ambitious target to install 10.75 GW of offshore wind power by 2020. But it seems that China will experiment with four intertidal and offshore wind farms first, before considering whether to go further out, with costs being a key issue.
Deep-sea wind, as being pioneered in the EU and now possibly in the US, is certainly unproven so far, and the costs of installation are higher in deep water, but the new generation of floating wind turbines may yet make it possible to exploit the very large resource further out at lower costs.
It may not all be plain sailing, though. Fred Olsen Renewables (FOR) has pulled out of the Crown Estate's Scottish offshore wind programme to concentrate its efforts on land, since it evidently sees that as more commercially attractive in the short term. It has ceased working as the preferred developer for the 450 MW Forth Array wind farm off the east coast of Scotland. In addition, the UK's offshore wind programme is throwing up some conflicts, not involving objections from local communities, as in the case of some on-land projects, but from existing wind projects worried about new projects, in effect, stealing their wind. The developers of Boulfruich Windfarm near Dunbeath have evidently complained that plans by Caithness Power to build four larger wind turbines less than a mile away at Latheronwheel will cut their electricity production by a quarter – and have lodged an official objection to the new site. Lawyers for Boulfruich told Highland Council planners: 'It's too close and will impede performance of the wind turbines.' Conflicts over wind-access rights used to happen occasionally in the Middle Ages. It seems that, as ever, this will create more work for lawyers!
Unbeknown to most lay people and many energy insiders, Iran has more going on in its energy industry than the nuclear controversy. Iran's location provides it with some of the world's highest solar insolation, and there is actually solar research occurring. Much of the research involves concentrating solar-power (CSP) technologies, with Iranian scientists working to ensure that Iran has the domestic expertise to design and install CSP systems.
To support much of this solar research, Iran has a government-sponsored Renewable Energy Organization of Iran (SUNA) that is part of the Ministry of Energy. The objective of this organization is to develop applications for renewable energy. The staff numbers 300, with 150 of those engineers and scientists, and the budget is approximately $60 m. Iran even has a feed-in tariff for wind and biomass energy of approximately 13 cents/kWh. Additionally, Iran has a renewable-portfolio standard requiring that 10% of its electricity come from non-hydro renewable-electricity technologies.
I had the privilege of learning of some of these details while joining a workshop this November between Iranian and US energy and solar-energy scientists and engineers. Joining the Iranians was the head of their National Academy. This workshop was arranged by the US State Department with assistance of the US National Academies. As the US and Iranian governments are officially not on speaking terms, it is nice to know that some parts of the governments are finding ways to keep some communication channels open. Sharing ideas on solar- and renewable-energy technologies may help us find a way to share ideas and shed light on renewing cordial relations. Kudos to the US State Department for finding ways to have international relations using science.