A community website from IOP Publishing

# In from the cold: May 2010 Archives

## Giving glaciers the full-Stokes treatment

The full Stokes equation is a precise description of the flow of a deformable continuum. It says that, in an ice sheet, the pressure gradient force and the force of gravity are balanced by internal stresses, and that the temperature-dependent stiffness of the ice connects those stresses to the motion. Stated that way, it is simple arithmetic – but there is a devil of a lot of arithmetic to do.

At each point in the ice sheet, the pressures, or more accurately the shear stresses and the normal (compressive or extensional) stresses, are directed along each of three coordinate axes (pointing either way), and they can change from point to point along each of those axes. You want the best spatial resolution you can afford. But the resolution you need, if you are to maintain accuracy, is simply not affordable even on today's fastest computers. In a recent study, Gaël Durand and others remarked laconically that in going from a grid spacing of 20 km to 2.5 km one of their simulations ballooned from two days of supercomputer time to two weeks.
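To get a feel for why resolution is so expensive, here is a back-of-envelope sketch. It is my own illustration, using the textbook scaling for an explicit scheme on a two-dimensional horizontal grid – not numbers from Durand's study:

```python
# Back-of-envelope cost of refining a horizontal grid: halving the
# spacing quadruples the number of cells in 2-D, and an explicit
# time-stepping scheme also needs proportionally shorter time steps.
# Illustrative only; real ice-sheet solvers scale differently.

def relative_cost(old_spacing_km, new_spacing_km, dims=2, time_coupled=True):
    """Ratio of computational cost after refining the grid spacing."""
    ratio = old_spacing_km / new_spacing_km
    cost = ratio ** dims          # more cells to solve for
    if time_coupled:
        cost *= ratio             # shorter time steps (CFL-style limit)
    return cost
```

By this naive count, going from 20 km to 2.5 km spacing multiplies the cost by 8 × 8 × 8 = 512; that the actual run grew "only" from two days to two weeks suggests the real scaling is gentler than this crude estimate.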

Various simplifications of the full-Stokes treatment have been developed. If you ignore along-flow velocity gradients, tending to stretch or squeeze the ice in the horizontal direction, you get the so-called shallow-ice approximation, oddly named because it works better the thicker the ice. Ice sheets tend to feed floating ice shelves. Here the underlying water cannot support horizontal shearing, so the ice flows at the same speed throughout the thickness; the vertical gradients of the horizontal velocities are negligible. This is the shallow-shelf approximation. Both approximations are valuable time-savers.

Unfortunately they both break down near the grounding line that separates the ice sheet from the ice shelf. Faced with the impracticality of the full-Stokes treatment and of any one-size-fits-all approximation, the dynamicists have been working hard to make the problem tractable.

Rectangular arrays of grid cells are definitely old hat. Nowadays the favoured approach to the numerics is the finite-element method, in which you describe your ice sheet with cells of variable size and shape. This is laborious but reduces the computational burden later. You give the ice sheet the full-Stokes treatment everywhere, but spend little time where full Stokes isn't really necessary.

There is an obvious snag. The grounding line is the focus of interest because it might migrate unstably towards the interior of the ice sheet. But if it migrates away from where you have laboriously set up lots of little cells, you are sunk. Instead of migrating in little steps it gains the computational freedom to take great big ones. It can, and may well, end up in some entirely unrealistic location.

So Durand and co-authors created an adaptive grid consisting of cells that were small near the grounding line, growing larger progressively with distance from it. But they re-centred the grid on the grounding line after each model time step, such that the little cells kept company with the grounding line. More purely preliminary labour, but with the smallest cells only 200 m in size they were able to obtain consistent numerical behaviour and to confirm Christian Schoof's finding, from a different theoretical angle of attack, that grounding lines are indeed unstable when the bed slopes upwards.
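The adaptive-grid idea can be sketched in a few lines. Everything here – the function names, the growth rate, the one-kilometre scaling step – is illustrative rather than taken from Durand's implementation; only the 200 m finest cell size comes from the text above:

```python
# Cells are finest (200 m) at the grounding line and grow with
# distance from it; after each time step the grid is rebuilt so that
# the small cells keep company with the grounding line.

def cell_size(distance_m, finest_m=200.0, coarsest_m=20_000.0, growth=1.5):
    """Target cell size, growing by `growth` per kilometre of distance."""
    return min(finest_m * growth ** (distance_m / 1_000.0), coarsest_m)

def recentre(grounding_line_x, cell_centres):
    """Re-grid around the grounding line's new position."""
    return [(x, cell_size(abs(x - grounding_line_x))) for x in cell_centres]
```

So `recentre(5_000.0, [0.0, 5_000.0, 50_000.0])` assigns the smallest cell to the point at x = 5,000 m, wherever the grounding line has wandered to.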

There is irony in this greed for number-crunching power. Long before ice sheets became objects of scientific scrutiny, Stokes laid all of the conceptual groundwork with a pencil (or maybe a quill – did they have pencils in the 1840s?). Much of our understanding of how ice sheets work was developed on computers to which you would not give desk room (even if they would fit). Now, the glacier dynamicists are right up there with the astrophysicists, climate modellers and the like, baying for time on unimaginably fast computers that have trouble satisfying the demand.

The glaciological demand, though, is real and pressing. The full-Stokes treatment is getting attention because of the socioeconomic risks of grounding-line instability, which was identified in the Fourth Assessment by the Intergovernmental Panel on Climate Change as one of our biggest gaps in understanding of how the Earth works. My dynamicist colleagues have to have something to say about it in time for the IPCC's next assessment, due in 2014. They have made tremendous progress by working overtime, but if yet more time is what it takes to crack the problem then I hope they will resist this pressure to deliver.

## How hard is snow?

We are all familiar with the idea of hardness. Falling on your knees is more painful if you fall on a pavement than on a lawn. But most of us would be puzzled to make the idea precise and quantitative. The geologists, thanks to Friedrich Mohs (not Moh), have a good working scale for the hardness of minerals. More surprisingly, so do the snow scientists for the hardness of snow.

Hardness can indeed be defined precisely. All those with a serious interest in the hardness of substances agree that the right way to quantify the everyday concept is by the force required to produce an indentation in the surface of the substance.

In Mohs' hardness test, you press a series of test minerals into the mineral whose hardness is to be estimated. The softest one that produces an indentation, and powder when you drag it across the test surface, gives you the Mohs' hardness of the unknown mineral. Mohs' hardness is just a number on a scale from 1 to 10, but careful studies have shown that it is proportional to the logarithm of the force applied to the surface, measured in newtons (N) – or preferably of the pressure, in N m⁻² (newtons per square metre), because the size of the indenter makes a difference.

It is the same with snow, but the test indenters are even simpler. In the classical hardness test for snow, introduced by de Quervain in 1950 and explained in last year's new edition of The International Classification for Seasonal Snow on the Ground (search on "Fierz"), you use successively (1) your fist, (2) the ends of your four fingers, (3) just one finger, (4) the tip of a pencil and (5) the blade of a knife. You apply gentle force, and your hardness index is the number of the first list entry that penetrates the snow.
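As a procedure the test is almost algorithmic, and can be written down as such. This sketch is mine, not from the Classification; it simply encodes "the first indenter that penetrates gives the index":

```python
# Hand hardness test of de Quervain (1950): try each indenter in turn,
# applying gentle force; the hardness index is the list position of
# the first one that penetrates the snow.

INDENTERS = ["fist", "four fingers", "one finger", "pencil", "knife"]

def hand_hardness(penetrates):
    """`penetrates` maps an indenter name to True if it indents the snow.
    Returns the 1-based hardness index, or None if even the knife fails."""
    for index, tool in enumerate(INDENTERS, start=1):
        if penetrates.get(tool, False):
            return index
    return None
```

Soft snow that your fist sinks into scores 1; snow that resists everything but the knife scores 5.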

It sounds very fuzzy, doesn't it? Surely these indenters differ in size and shape? What does "gentle" mean? Are you allowed to wear gloves? (The rugged answer to that one, apparently, is No.) Do you sharpen the pencil? The International Classification says Yes, but in the only photo I have ever seen of a pencil in snow-science service it was unsharpened. Why would people bother with a procedure with so many question marks attached?

The answer to that one, of course, is that the procedure works. It is also fast, and above all cheap. Assuming that you borrowed the pencil and somebody gave you a penknife for your birthday, the cost is nil. And in a recent paper in Annals of Glaciology, Höller and Fromm show that the hand hardness index does reproduce, with acceptable accuracy, measurements using more advanced instruments. If your fist indents the snow, the implied force is about 20 N and the implied strength or resistance about 4,000–8,000 N m⁻². If you need a knife to do the job, the implied strength is between about 0.1 and 2 million N m⁻².

Why worry about the hardness of snow? There are plenty of reasons. Soft snow, especially a lot of soft snow, is a bore if you have to walk over, or rather through, it. Skiers have a fairly obvious interest in the hardness of snow on the surface. But probably the biggest justification for snow scientists who stick their fingers into snow is the risk of avalanches.

Snow can be jerked into catastrophic motion in a range of ways, and its hardness is only one of the factors to be considered. But a safe prediction is that abrupt failure is more likely where there is a sharp discontinuity of strength between adjacent layers. More precisely, the shear strength is the ability of the snow on either side of an interior plane to resist relative motion over that plane. The hardness test, applied at regular intervals in a snow pit, measures the shear strength in a roundabout way, by measuring the compressive strength of adjacent layers.

Predicting avalanches is difficult at best, but if it were expensive then the prediction might not happen at all. So the hand hardness index has won and held a place for itself in keeping the death rate down in cold, mountainous terrain. It also tends to confirm my hypothesis that science doesn't have to be expensive to be worthwhile.

## In praise of grey literature

In the aftermath of the fuss about Himalayan glaciers, I have noticed a tendency among my colleagues to hesitate about citing so-called "grey" literature – loosely, stuff that has not been reviewed by scientific peers and accepted for publication by an editor acting on recommendations from such reviewers.

Some have argued that we should stop citing any publication that has not appeared in a peer-reviewed journal. The snag about this idea is that it would make scientific studies of the climate in general, and glaciers in particular, almost impossible. Much of the raw data appears in documents, and nowadays files on the internet, published by governments or quangos. Sometimes there is a reviewed paper to document the work underlying the measurements, sometimes not.

In glaciology, some of our mass-balance measurements are superbly documented in high-profile journals. I don't know of any wrong numbers in this kind of source, but by definition the documentation is not superb if it doesn't include a thorough analysis of uncertainties. It is a pity, but hardly the fault of the authors, that readers in a hurry tend to read the name of the journal but to skip the thorough analysis.

Some measurements are mentioned only briefly in very obscure documents, with few or no accompanying details. You can only decide whether to accept this kind of measurement by reading critically and judging whether the measurers knew what they were doing. (I will come back to this idea of reading and judging.)

Most of the measurements lie between these extremes. You can find them in black and white, with some background information, but in a grey source. Your choices, in a context in which you are desperately short of hard facts, are to reject the measurements because they were not peer-reviewed, reject them because they do not stand up to judgement, or accept them with appropriate reservations.

It is not as if publication in the peer-reviewed literature is a guarantee of correctness. There are some appalling wrong results in the literature. Among the most famous examples is the 1989 claim by Fleischmann and Pons (in the Journal of Electroanalytical Chemistry, volume 261(2A), pp301–308) that they had observed cold fusion, that is, the fusion of atoms at room temperature. It would have altered our world forever, but it was a report better suited to the Journal of Irreproducible Results.

During the recent furore, a recurrent criticism of the Intergovernmental Panel on Climate Change was that it had cited grey literature, as if that were a mortal sin. Indeed, at the centre of the Himalayan maelstrom was a report by the World Wildlife Fund, usually berated by the ill-disposed or mischievous as a "green advocacy group". Somebody then counted other IPCC references to WWF reports and found 16. I haven't read the other 15, but the Himalayan one was in fact pretty good – thorough, reliable except for one howler and, ironically, reviewed. The WWF made the same regrettable error as the IPCC's Himalayan-glacier authors, namely swallowing nonsense uncritically from a popular science magazine. It is further ironic that the WWF and the IPCC can be shown to have made this error independently, but that the IPCC erred additionally by splicing in a reference to the WWF instead of to the popular magazine. So the WWF was a victim of friendly fire, but not an innocent victim.

The culminating irony, however, is that the IPCC's guidelines for the treatment of grey literature, in Appendix A of the Principles Governing IPCC Work, are a model of reasonableness. (See Annex 2 in particular.) Had they been followed with respect to the Himalayan glaciers the fiasco would never have happened. The IPCC's guidelines for handling work that has not been reviewed by peers boil down to "read the darn thing for yourself and do your own review".

It strikes me as good advice. If more people – preferably everybody – were to heed it, we would all be better off. And whether the source were grey or not wouldn't make any difference. The most basic error is accepting authority as a substitute for reasonableness.

## Glaciological classics

I was asked recently to make suggestions for a list of classic papers in the Journal of Glaciology and Annals of Glaciology, the two main publications of the International Glaciological Society (IGS). If you are keen, you can expect to see a feature on glaciological classics in the Journal's 200th issue later this year. It will be interesting to see what the community of glaciologists comes up with as its selection of the papers about which it is proudest.

My own little list begins with Anonymous 1969. We have got into the habit of calling it that, although it baffles people from neighbouring disciplines (and in fact most of us know who wrote it). It codified the thinking on which we have relied over the past 40 years for describing the components of glacier mass balance, enshrining for example bn as the symbol for net mass balance, and c and a for accumulation and ablation respectively. Even its author would not, I imagine, describe it as exciting, but that hundreds of glaciologists take it for granted every day shouldn't disqualify it from classic status to my mind.

Then I added Jay Zwally's 1977 paper about the emission of microwaves by cold snow. This work opened up a new part of the electromagnetic spectrum to glaciological investigation. We know what glaciers look like in the visible part of the spectrum, in which our eyes make pretty good sensors. But microwaves, with wavelengths of millimetres rather than nanometres, show us a new world. Not the least of their advantages is that they pay no attention to clouds and don't need sunlight. If you have a microwave radiometer, or better still a radar with which to make your own microwaves and bounce them off your target, you can look at your glaciers whenever you want. (Oh, you also have to have an orbiting satellite on which to mount your instrument.)

Zwally's particular contribution was to show that the strength of microwave emission from cold snow is proportional to temperature and grain size and therefore, by an ingenious and very productive analysis, to the rate of accumulation of the snow. This has become a leading way of estimating accumulation rates above the dry-snow line. (Things become a lot more complicated if the snow starts to melt).

My all-time most significant IGS paper is probably Geoff Boulton's 1979 work on the deformation of the glacier bed by the flowing ice. He showed that, by comparing the along-glacier stress due to the glacier's flow to the downward pressure due to its thickness (possibly offset by pressurized basal water), you can fashion any of a variety of intriguing and familiar shapes. For example you can make drumlins (by lodgement of the glacier's sediment load; steep end up-glacier) or roches moutonnées (by abrasion of the bed; steep end down-glacier) algebraically, both from the same equation.

Boulton's classic measurement at the bed of Breiðamerkurjökull.

The crucial insight for this study came from a "Why didn't I think of that?" measurement at the bed of Breiðamerkurjökull, a large outlet glacier in southern Iceland. Drill a hole in the sediment of the bed, and drop into it a metal rod with many metal rings fitted around it, one on top of another. Withdraw the central rod. Return ten days later, dig an access pit, and make the observations summarized in the diagram and its caption: "90% of the forward motion of the glacier sole is accounted for by deformation of the till".

Deformable glacier beds are now universally understood to be fundamental pieces of many glaciological puzzles, on scales ranging from the 50-metre tunnel dug by Boulton (or, if he had any sense, by his student assistants) to reach the bed of Breiðamerkurjökull up to the behaviour of whole ice sheets.

Boulton has gone on to a distinguished career as a glacial geologist, elaborating his early ideas about the interaction of glaciers with their beds and about the intellectual importance of coupling observation with thought. At about the time you read this you will be hearing from him as a member of the team commissioned by the University of East Anglia to look into the doings of its Climatic Research Unit. I notice that there is a surprising quantity of nonsense about Boulton on climate-denialist web sites. You can safely ignore it. Reading his classic paper would be a far more profitable investment of your time.

Come to think of it, reading classic papers is a profitable investment of time, period.

## Settled science with a vengeance

One of the more unseemly sideshows of the Climategate fuss has been the argument about editorial treatment of two papers in the International Journal of Climatology. In late 2007, Douglass, Pearson, Christy and Singer published, online, a paper about the mismatch between modelled and observed temperature trends in the lower atmosphere. The paper did not appear in print until October 2008, so I will call it D08. It came just before a paper, S08, by Santer and co-authors that criticized the Douglass paper.

You can find more than you may want to know about the editorial treatment of D08, as filtered through the minds of Douglass and Christy, in a blog contributed in December 2009 to the right-wing magazine American Thinker. But that is not what I want to discuss here. I am more interested in the golden opportunity offered by American Thinker, 14 months on, for Douglass and his co-authors to rebut the criticisms of S08. Golden as it was, they passed it up.

The core of the argument is the assertion in S08 that D08 used an incorrect statistical test. Although the connection to glaciers may seem tenuous, it is real and of broad interest: the dispute is about wiggle room. I know I said wiggle room was dull — but just look at how excited we get about it. This kind of thing is the essence of the search for expensive signals buried in distracting noise.

The aim of the test used in D08 is to decide whether two sets of numbers, in this case observed temperatures and modelled temperatures, are "different" in a sense that can be defined precisely and with a known amount of confidence. There comes a point during the test where you have to divide by the square root of a number called the "effective sample size". In general this number is smaller than the sample size, because correlations between the numbers in the sample reduce the amount of wiggle room you have while making your test decision. In the jargon of statistics, the wiggle room is called the "degrees of freedom".

If you use the sample size instead of the effective sample size, you get an insidiously wrong answer. Your error bars come out too small and you end up being too confident about your decision. This is precisely the trap into which D08 walked. S08 did the test properly, and concluded correctly that there is no reason to believe that, on average, the climate models are mis-modelling the observed temperatures.
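The adjustment at issue can be sketched numerically. A common correction for serially correlated data – the kind of correction applied in S08, as I understand it – shrinks the sample size by the lag-1 autocorrelation r1, giving n_eff = n(1 − r1)/(1 + r1). The code below illustrates that idea; it is not a reproduction of either paper's actual test:

```python
import math

def effective_sample_size(x):
    """n_eff = n (1 - r1) / (1 + r1), with r1 the lag-1 autocorrelation."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    if var == 0.0:
        return float(n)
    r1 = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1)) / (n * var)
    r1 = max(min(r1, 0.99), -0.99)   # keep the formula well behaved
    return max(1.0, n * (1.0 - r1) / (1.0 + r1))

def std_error_of_mean(x, adjust=True):
    """Standard error of the mean, using n_eff instead of n if adjust=True."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / (n - 1)
    n_used = effective_sample_size(x) if adjust else n
    return math.sqrt(var / n_used)
```

For a positively autocorrelated series the adjusted error bars come out wider. That is exactly the trap: divide by n instead of n_eff and your bars are too small, your confidence too great.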

The trap is not widely understood, even among scientists, but that is no excuse when, as did D08, you choose to play for high stakes. American Thinker gave them a chance to respond to the criticisms of S08, and all they produced was whingeing about the editorial process.

In an appendix to their blog, D08 offer a scientific discussion of their work. They say that S08 "strongly objected to the narrowness of our error bars. Their view was to allow models to have a very wide range of possibilities of trends (roughly the range from the coolest model to the warmest) no matter what their associated surface trends might be." Never mind the "roughly"; the parenthesis misrepresents the meaning and importance of wiggle room. The bit about surface trends is a red herring. The quotation, and the appendix as a whole, show that D08 still misunderstand S08 comprehensively.

Santer and his co-authors are right. Douglass, Pearson, Christy and Singer are wrong. These two points ought to be at the centre of this part of the climate-wars discussion. They cannot be stressed too often.

There is a further irony in this story of denialism and wiggle room. An important part of the denialists' weaponry is the term "settled science", repeated over and over again as a criticism of the conventional wisdom about the climate. Yet the settled science elucidated in S08 features enormous error bars. The denialists often leave out the error bars, but D08 did have error bars. The trouble is that they were tiny, and wrong. Settled science with a vengeance.
