In from the cold: April 2010 Archives

A cold snap that began about 6250 BC is attributed to catastrophic drainage of Lake Agassiz-Ojibway. Meltwater from the waning Laurentide Ice Sheet, ponded north of the Arctic-Gulf of Mexico drainage divide, was able to force open a channel beneath the ice and pour enough fresh water into the north Atlantic Ocean, via Hudson Strait, to weaken the meridional overturning circulation.

The palaeoclimatic records of the ice age tell us of other cold snaps lasting up to a millennium or two. The longest and best known, the Younger Dryas, began about 11,000 BC, lasted about 1,300 years, and is also thought to have been due to catastrophic drainage of North American meltwater.

The Younger Dryas is named after Dryas octopetala, a flower which is an indicator for the episode in records from Scandinavian peat bogs. And yes, there was an Older Dryas, and even an Oldest Dryas, but they are much less prominent and may not have been worldwide.

Dryas integrifolia, a cousin of the Younger Dryas's eponymous D. octopetala, growing on Ellesmere Island in northern Canada. Both are also referred to as the "mountain avens".

Now Julian Murton and co-authors describe what they believe to be evidence of the flood that triggered the Younger Dryas cooling. They worked on Richards Island, at the eastern edge of the Mackenzie River delta. The evidence consists, in essence, of an erosion surface truncating older glacial till and overlain by coarse gravel. The gravel is interpreted as a lag deposit, left behind when the flood waters could no longer carry it.

The interpretation is rather persuasive. The dates bracket the events, and match the onset date of the Younger Dryas, about as closely as could be asked for. The gravel is pebbly to bouldery in size, not at all what would be expected in the delta of one of the world's biggest rivers. The ice-sheet margin of the time lay 600 km to the south, ruling out a local source of glacial meltwater. And there is a possible candidate for the lake outlet from which the flood might have issued, near Fort McMurray in northern Alberta, where the ice sheet formed a dam by abutting on hills to the west.

Persuasive as it is, this may well prove controversial. I have some naive questions of my own. For example, even this putative flood might have had trouble carrying boulders the 2,000-plus km from Fort McMurray, at 370 m above modern sea level (lower at the time, because of differential rebound of the land surface), to Richards Island. And if the boulders didn't come from the lake outlet or the uplands, where did they come from? And I am not at all clear about why the authors reject the Saint Lawrence as a possible outlet. They don't seem to have evidence against it, just evidence in favour of a different outlet.

On a personal level, I can't decide whether to be happy or sad about the relocation of the Younger Dryas flood to the Mackenzie and away from the Saint Lawrence. The building that houses my office sits on the floor of a spillway that drained meltwater from the ancestral Great Lakes, and perhaps even Lake Agassiz further to the north, for a brief period at about the right time. But the spillway's cross-sectional area is only 50-100 times that of the modern river which occupies a small part of its floor. That is probably not big enough. The evidence offered by Murton and colleagues suggests a flood many kilometres wide and up to tens of metres deep. There are other spillways, to the north of mine, which also fed meltwater to the Saint Lawrence, but their carrying capacities, dates and durations of occupancy are not pinned down as well as they need to be.

A number of further questions arise. Is it just a coincidence that both this cold snap and that at 6250 BC show signs of having been double bursts? I can't think of a plausible mechanism that would require these floods to happen in exactly two stages. But perhaps there is a prosaic explanation in terms of fluctuations of the ice-sheet margin. Repeated brief advances might close multiple spillways, not just two, and therefore produce multiple incarnations of the lake with its surface at different elevations at different times. Second, when is somebody going to find evidence of catastrophic drainage of the Eurasian equivalents of Lake Agassiz? Were there such outbursts? Finally and most generally, why does caprice, in the form of unpredictable outbursts of meltwater, seem to play such a substantial role in the evolution of past climates?

Whether this blog is the dullest or the second-dullest blog you have ever read will depend on whether you think shortage of wiggle room on glaciers is duller than spatial interpolation. But the two are connected, and there is a bright side to lack of wiggle room. It is what makes mapping possible and useful.

Interpolation is what we have to do when we have a geographically broad problem and measurements that don't cover the ground densely enough. If the measurements were not correlated (the same correlation that reduces the wiggle room for estimating uncertainty), we couldn't interpolate between them. We are about equally uncertain in all of the measurements, but at least the correlation lets us look at the broad picture.

We tend to trust contoured topographic maps because the height of the terrain is highly correlated over distances that are smaller than the typical distance between adjacent contours, and because they are based on dense samples of heights from air photos or satellite images. However, interpolating variables that are sparsely sampled presents problems.

Glacier mass balances tend to be well correlated over distances up to about 600 kilometres, making spatial interpolation possible. In turn that makes it possible to judge whether our sample of measurements is spatially representative, but it isn't much help for estimating mass balance in all the regions where there are no measured glaciers within 600 km.

There are several methods of interpolating to a point where there is no measurement. I like to fit polynomials — equations in the two spatial coordinates, easting and northing — to obtain smooth surfaces centred on each interpolation point.
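Purely for illustration, here is a minimal sketch of that kind of fit in Python. The measurements, the choice of a first-order polynomial (a plane) and the Gaussian distance weighting are arbitrary assumptions for the example, not a prescription for how the job should actually be done.

```python
import numpy as np

def interpolate_point(easting, northing, values, x0, y0, scale=600.0):
    """Estimate a value at (x0, y0) by fitting a first-order polynomial
    (a plane) in easting and northing, weighted so that measurements
    within roughly `scale` km of the target dominate the fit.
    Illustrative only: real schemes differ in polynomial order,
    weighting function and treatment of measurement error."""
    dx = easting - x0                 # centre coordinates on the target point
    dy = northing - y0
    w = np.exp(-(np.hypot(dx, dy) / scale) ** 2)      # Gaussian distance weights
    A = np.column_stack([np.ones_like(dx), dx, dy])   # design matrix for b0 + b1*dx + b2*dy
    sw = np.sqrt(w)                                   # weighted least squares
    coef, *_ = np.linalg.lstsq(A * sw[:, None], values * sw, rcond=None)
    return coef[0]                    # the fitted surface evaluated at (x0, y0)

# Hypothetical measurements: easting/northing in km, balance in mm w.e.
x = np.array([100.0, 250.0, 400.0, 620.0, 700.0])
y = np.array([200.0, 180.0, 350.0, 300.0, 90.0])
b = np.array([-300.0, -450.0, -150.0, -600.0, -520.0])
est = interpolate_point(x, y, b, 300.0, 250.0)
print(f"{est:.0f} mm w.e.")
```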

All of the methods have one thing in common: adjustable parameters. Broadly speaking, you get a tradeoff. You can choose smoothness of the resulting map or fidelity to the raw measurements, or any combination. Some methods guarantee, mathematically, that your interpolated numbers will agree exactly with the measured numbers at the points where there are measurements. Most methods, however, acknowledge that the measurements themselves are uncertain, and allow a bit of wiggle room. To tell the truth, by twiddling the knobs on your interpolation algorithm you can seem to create as much wiggle room as you like — in other words, any shape of surface you like (almost).

There seems to be only one objective way of judging the merit of your interpolated surface: cross-validation. If you have n measurement points, redo the interpolation n times, leaving out one point each time, and calculate the typical (technically, the root-mean-square) disagreement between the omitted measurement and the corresponding interpolated estimate. The trouble with this is that it tells you nothing about how you are doing at places where there are no measurements, which is the aim of the exercise.
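In code, leave-one-out cross-validation is just a short loop around whatever interpolator you favour; the sketch below reuses the hypothetical interpolate_point function and arrays from the previous snippet.

```python
import numpy as np

def loo_rms(easting, northing, values, interpolator):
    """Leave-one-out cross-validation: interpolate at each measurement
    point with that point withheld, then report the root-mean-square
    disagreement between withheld values and interpolated estimates."""
    n = len(values)
    errors = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        errors[i] = values[i] - interpolator(easting[keep], northing[keep],
                                             values[keep], easting[i], northing[i])
    return np.sqrt(np.mean(errors ** 2))

# e.g. loo_rms(x, y, b, interpolate_point) with the hypothetical arrays above
```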

One thing that no interpolation algorithm can do is manufacture facts. This is an insidious problem, because it often looks as though that is what they are doing. But what are the alternatives?

The most obvious is more measurements. But we scientists always say that, don't we, and although technology keeps advancing we still have lots of gaps. Lately I have been wondering whether we need to be more brazen about this. Yes, more measurements mean more money for scientists to spend. But suppose it were more money on an economy-altering scale. Jobs in glacier monitoring, and environmental monitoring generally, would multiply. The diversion of financial resources would mean that jobs such as making, fuelling, driving and repairing motor vehicles would dwindle. That would be a good thing, wouldn't it?

Before it happens, though, the economists will have to work out how to fool the economy into thinking that accurate maps of glacier mass balance are worth more than motor cars.

A more practical alternative is to take the average of what you know as the best guide to what you don't know. In fact the average is just a polynomial of order zero, that is, a limiting case of the idea of spatial interpolation. It works beyond 600 km, and indeed it works anywhere. But best does not necessarily mean good. New measurements might not change the picture much. On the other hand, they might.

Data voids mean irreducible uncertainty. It may be uncertainty we can live with, but it is also uncertainty in the face of impending trillion-dollar decisions. From that angle, a billion-dollar decision to make better maps and fewer motor cars begins to look good. In the meantime, beware of smooth maps and of maps that are slavishly faithful to the measurements. There is more going on beneath the contours than meets the eye.

I haven't written all that many blogs, but this one could be the dullest I have ever written. How do you persuade readers to be interested in how confident the rules of statistics allow them to be?

Statistics is a minefield in large part because we call on it mainly when we have to work out an explanation of what is happening, given a few scraps of more or less reliable information. You can tiptoe through the minefield and hope to reach the other side without getting blown up, or you can turn yourself into a professional statistician. But the latter option is time-consuming.

One option that sometimes crops up is to start not with a small but with a large amount of information and work out what the statistics are. This happened to me a while ago when I wanted to work out how confident I ought to be about the mass balance of a glacier in the Canadian High Arctic, White Glacier.

We measure the mass balance by inserting stakes into the glacier surface. Once a year we measure the rise or fall of the surface with respect to the top of each stake. Mass is volume times density, so the calculation of mass change (per unit of surface area) is simple: rise or fall times density. We measure the density in snow pits. To get the mass balance of the whole glacier, we add up the stake balances, multiplying each by the fraction of total glacier area that it represents.
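Purely as a sketch, with invented numbers (the surface changes, densities and area fractions below are placeholders, not White Glacier data), the bookkeeping looks like this:

```python
import numpy as np

# Hypothetical stake data: surface rise (+) or fall (-) in mm against each stake
surface_change_mm = np.array([-1200.0, -800.0, -300.0, 150.0])
density = np.array([900.0, 900.0, 550.0, 400.0])      # kg/m3: ice low down, snow/firn higher up
area_fraction = np.array([0.15, 0.35, 0.30, 0.20])    # fraction of glacier area each stake represents

# Stake balance in mm water equivalent: height change times density, relative to water (1000 kg/m3)
stake_balance_mm_we = surface_change_mm * density / 1000.0

# Whole-glacier balance: area-weighted sum of the stake balances
glacier_balance = np.sum(stake_balance_mm_we * area_fraction)
print(f"Whole-glacier mass balance: {glacier_balance:.0f} mm w.e.")
```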

Now the statistical fun begins. People have been measuring several dozen or more stakes per year for several decades on White Glacier, so we have a large sample and a "forward" problem: what is the uncertainty in the annual average mass balance?

There is a simple answer from classical statistics. This uncertainty is equal to a measure of the uncertainty in each single stake measurement divided by the square root of the number of stake measurements. It is a pity that this simple answer is, for practical purposes, wrong.
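In symbols, with sigma_b the uncertainty of a single stake balance and n the number of stakes, the classical recipe is just the standard error of the mean:

```latex
\sigma_{\bar{b}} = \frac{\sigma_b}{\sqrt{n}}
```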

The uncertainty in the stake measurement itself is not the problem. You can measure changes of stake height to within a few millimetres, even when it is cold and windy. What you want is a measure of how representative your single stake is of the part of the glacier which it supposedly represents. Working this out is much trickier, but a typical number is a couple of hundred up to a few hundred millimetres.

Next, and fatally from the minefield standpoint, the number of stake measurements is not the number whose square root you need to divide into the stake uncertainty. The statisticians are not to blame for the prevalence of this crude error, although I think their predecessors of a few generations ago could have done a better job of discouraging its spread.

The early statisticians coined the term degrees of freedom for the correct divisor. It is the number of numbers that are free to vary in your formula. The trouble is that degrees of freedom doesn't suggest much, if anything, to normal people, who have grasped ill-advisedly at the fact that it may be equal to the sample size as a crutch on which to tiptoe through the minefield. Recently, the statisticians have tried the term effective sample size instead. It is better, because it suggests that the actual sample size may not be the right number, but "effective" is still mysterious jargon for most people, including many scientists.

I think wiggle room would be a still better alternative. The essential point is that you are only allowed to divide by the square root of your sample size if all your samples are independent, meaning that you cannot predict any one stake measurement from any other. The more dependent or correlated they are, the less your wiggle room, the smaller your divisor, and the bigger your uncertainty.

Suppose you measure the mass balance at 49 stakes, a convenient number because its square root is seven. If the stakes were independent you could divide your stake uncertainty by seven to get the uncertainty of your whole-glacier mass balance. If the stake uncertainty happens to be 210 mm, another convenient number, the whole-glacier uncertainty comes out as only 30 mm.
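Here is a sketch of how correlation shrinks the divisor. It uses the textbook approximation for the effective sample size of n equally correlated measurements, n_eff = n / (1 + (n - 1) * rho); the mean correlation of 0.8 is an invented value, chosen only to show the effect.

```python
import numpy as np

def whole_glacier_uncertainty(stake_sigma, n_stakes, mean_correlation):
    """Uncertainty of the glacier-average balance when the n stake balances
    share an average pairwise correlation.  Uses the approximation
    n_eff = n / (1 + (n - 1) * rho): rho = 0 recovers the classical
    divide-by-sqrt(n) answer, and rho -> 1 drives n_eff towards 1."""
    n_eff = n_stakes / (1.0 + (n_stakes - 1.0) * mean_correlation)
    return stake_sigma / np.sqrt(n_eff)

print(whole_glacier_uncertainty(210.0, 49, 0.0))   # independent stakes: 30 mm
print(whole_glacier_uncertainty(210.0, 49, 0.8))   # strongly correlated: about 188 mm
```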

Correlations between series of annual balances measured at thousands of pairs of stakes on White Glacier. The pairs are arranged by how far apart the two stakes are in altitude, and by their correlation (+1 being perfect correlation and 0 being complete lack of correlation). White means no stake pairs; otherwise, the paler the little box the more pairs fall into it. The green dots are the best estimates, according to statistical theory, of the real as opposed to the observed correlation at each value of vertical separation. (Notice that the observed correlation is sometimes negative even when the "real" correlation is quite large.) The red curve, again from theory, summarizes the green dots.

Too bad that, call it what you will, the wiggle room for the mass balance of White Glacier is about one. The large collection of correlations in the graph is strongly skewed towards predictability. If you know one stake balance, you can do a fair to excellent job of predicting others. That the correlations drop off as the stakes grow further apart turns out not to make much difference to the wiggle room. For the purpose of estimating uncertainty, it is as if we had only one stake, not dozens.

The stakes on White Glacier are so highly correlated that we have to live with the fact that we only know our mass balances to within a few hundred millimetres. This is a serious constraint, considering that typical annual mass balances these days are negative by only a few hundred millimetres. No wonder it has taken a long time for signals of expected change to emerge.

Writing in Geophysical Research Letters, Nick Barrand and Martin Sharp tell us about the evolution of glaciers in the Yukon Territory of Canada over the past 50 years. Yukon glaciers are less studied than their counterparts across the border in southern Alaska, and it is valuable to have a more complete picture. As with the glaciers of the Subantarctic, the message is not exactly headline material: "Yukon Glaciers Not Surprising". But repeating the message ad nauseam, in the face of public weariness with global change, seems to be the best we can do. After all, the changes are happening.

Barrand and Sharp measured glacier extents on air photos from 1958-1960 and Landsat images from 2006-2008. An important point, therefore, is that with information from only two dates there is no way to tell whether rates have changed. For practical reasons they truncated a small proportion of the glacier outlines at the territorial boundary, but in compensation they did include the glaciers of the eastern Yukon, which are less impressive than the big ones nearer the Pacific. They are often forgotten and have never been studied in detail.

The main result is robust. In the late 1950s the extent of glacier ice in the Yukon was 11,622 km2 and in the late 2000s it was 9,081 km2. The rate of shrinkage was therefore -0.44% per year, give or take about 0.03%/yr. Four glaciers grew, 10 remained the same size, and the other 1,388 shrank. Of the shrinking ones, about a dozen disappeared.

Barrand and Sharp also venture on a so-called volume-area scaling analysis. The name of this method is slightly confusing. It exploits the observation that glacier thickness "scales" with glacier area: the bigger the glacier, the thicker it is likely to be. Multiply the average thickness by the area and you get an estimate of the volume. The problem is that although measurements of area are comparatively easy, there are very few reliable measurements of thickness. So volume-area scaling is based on relationships fitted statistically to the small number of glaciers with good measurements of both area and thickness.
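In code the method amounts to a one-liner. The coefficients below (c = 0.034 and an exponent of 1.36, for volume in km3 and area in km2) are one commonly quoted illustrative pair, not necessarily the set Barrand and Sharp used, and the example areas are invented; different published coefficient sets give regional totals differing by tens of percent, which is where the uncertainty discussed next comes from.

```python
import numpy as np

def scaled_volume_km3(area_km2, c=0.034, gamma=1.36):
    """Volume-area scaling: bigger glaciers tend to be thicker, so
    volume grows faster than area, V = c * A**gamma.  The coefficients
    are illustrative; published sets differ."""
    return c * area_km2 ** gamma

# Hypothetical glacier areas in km2
areas = np.array([0.5, 2.0, 10.0, 120.0, 950.0])
volumes = scaled_volume_km3(areas)
print(volumes)        # km3, glacier by glacier
print(volumes.sum())  # regional total, km3
```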

Estimates of volume obtained in this way are wildly variable for single glaciers. But if you have 1,400 single glaciers, and if your scaling coefficients are not biased (systematically off — the evidence for this is pretty good), then the law of large numbers kicks in and your wild errors cancel each other out. Even so, Barrand and Sharp reckon, by comparing different published sets of scaling coefficients, that their Yukon-glacier total volumes are uncertain by about ±30%.

Ice volume is interesting, but the mass is even more interesting. Assuming a density of 900 kg/m3, the volumes translate to masses of 1832 gigatonnes in 1958-60 and 1497 Gt in 2006-08. The difference equates to a 50-year mass balance of -6.7 Gt/yr, or -650 mm/yr of water-equivalent depth spread uniformly over the glaciers (give or take 30% in both cases, remember). These numbers could be biased high by the assumed density being too great, but the bias is not likely to be large.
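The unit conversion behind those numbers, as a sketch: the figures are the ones quoted above, while the use of the mean of the two glacier extents as the reference area is my assumption for the example.

```python
# Converting a 50-year mass change in gigatonnes into mm/yr of water equivalent.
mass_1950s_gt = 1832.0
mass_2000s_gt = 1497.0
years = 50.0
area_km2 = (11622.0 + 9081.0) / 2.0   # mean of the two quoted extents (an assumption)

rate_gt_per_yr = (mass_2000s_gt - mass_1950s_gt) / years   # about -6.7 Gt/yr
# 1 Gt of water spread over 1 km2 makes a layer 1000 m (1e6 mm) deep,
# so dividing Gt/yr by the area in km2 and multiplying by 1e6 gives mm/yr w.e.
rate_mm_we_per_yr = rate_gt_per_yr / area_km2 * 1e6        # about -650 mm/yr
print(f"{rate_gt_per_yr:.1f} Gt/yr, {rate_mm_we_per_yr:.0f} mm/yr w.e.")
```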

Are there any exceptions to what looks like a global rule that glaciers are losing mass? Not in the regions with adequate measurements, for sure. Two other recent analyses agree broadly with the Yukon findings, and like them exemplify our modern ability to estimate long-term mass balance over regions of substantial size. Nuth and co-authors studied most of Svalbard and found a mass balance of -360 mm/yr water equivalent over varying periods of a few decades. Next door to the Yukon, Berthier and co-authors constructed digital elevation models of Alaskan glaciers and found multi-decadal rates that average to -480 mm/yr water equivalent for the whole state. This last number revises upwards an earlier estimate of -700 mm/yr water equivalent.

The number of regions like these three is increasing gradually, and the closer we get to complete coverage the less likely it becomes that our global estimates are seriously wrong. One region without adequate measurements is the Karakoram in Kashmir, where limited information suggests that some of the glaciers are not only advancing but thickening. But even if there is a surprise in store in the Karakoram, we need more than one such surprising region to make the global rule look debatable.