On the average temperature at the surface of the Earth

"Global warming" means that the surface temperature grows with time, on average. But how do we define the average temperature, and with what precision is that value defined?

For simplicity, let us first assume that we know how to average the temperature over the entire Earth to obtain a single value. After this averaging we obtain a value T(t) as a function of time. This function typically exhibits large oscillations, both regular (daily, seasonal) and irregular. We can certainly choose a particular time interval [a,b] and compute the average of T(t) over that interval. However, the result may depend quite sensitively on the chosen interval. Here is what happens when one takes different intervals, fits a line to the data, and computes the rate of increase:

> ... a 15-year period starting in 1996 shows a rate of increase of 0.14 [0.03 to 0.24] °C per decade, but taking 15 years from 1997 the rate reduces to 0.07 [–0.02 to 0.18] °C per decade.

https://en.wikipedia.org/wiki/Global_war

If we take yet other periods for analysis, we might even conclude that there is a global cooling.

https://en.wikipedia.org/wiki/Global_coo

So it is clear that, if we are to talk sensibly about the rate of increase of T, we first need to define the "time-averaged" value of T and estimate the inherent uncertainty in that definition.

Suppose that T(t) were a stationary random process with zero mean and a fixed correlation function C(t):

$$\mathbb{E}[T(t)] = 0, \qquad \mathbb{E}[T(t_1)\, T(t_2)] = C(t_1 - t_2).$$

Suppose further that we can observe only a single sample of T(t), and only for t within a fixed interval of duration D. Let us denote by A(D) the average of T over the observed interval. The value of A(D) is a random variable with zero mean and nonzero standard deviation. Can we estimate its standard deviation from the single available sample of T(t)? Calculation shows that

$$\mathrm{Var}\, A(D) = \mathbb{E}\big[A(D)^2\big] = \int_{-\infty}^{\infty} p(k)\, w(k)\, dk,$$

where p(k) is the power spectrum, i.e.
the Fourier transform of the correlation function:

$$p(k) = \frac{1}{2\pi} \int_{-\infty}^{\infty} C(t)\, e^{-ikt}\, dt .$$

Here w(k) is an auxiliary, non-negative window function that selects a region of width about 1/D around zero in frequency space; for the plain interval average it is

$$w(k) = \left( \frac{\sin(kD/2)}{kD/2} \right)^{2} .$$

Now, given a particular data set T(t), we can certainly estimate p(k) and perform this calculation numerically. Here I will only make a qualitative estimate of the result, under the assumption that T(t) oscillates stochastically throughout the observed interval. More precisely, assume that T(t) has n oscillations of typical amplitude M. If a function has n large oscillations over an interval of length D, its power spectrum p(k) has a maximum around k_0 = n/D. Multiplying by the window function w(k) suppresses most of this maximum, because k_0 lies far to the right of the window's width 1/D. At k = k_0 the window function is small, of order 1/n^2. If nothing were cut out, the integral would have given the variance of T at fixed t, which is M^2. With a window of order 1/n^2 at the maximum of p(k), the integral gives about M^2/n^2. Therefore, a rough estimate of the standard deviation of A(D) is M/n.

So, if we look at a temperature plot such as this one:

[Figure: annually averaged surface temperature over about 100 years]

we observe n = 10 large oscillations of typical amplitude 4 °C over 100 years of observation (the data is already annually averaged). Therefore, the mean temperature is defined only up to a minimum uncertainty of about half a degree. It is meaningless to talk about a 0.1 °C or 0.2 °C change in a quantity that is only defined up to 0.5 °C. Such small changes are statistically insignificant and should be ignored. If we nevertheless proceed with the computation, fitting a straight line to the curve in the graph, we will certainly obtain a "result", i.e. a number. However, this number will depend on chance to such an extent as to be meaningless. The Wikipedia quote above illustrates this: changing the reference point brings about large changes in the result.
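As a sanity check of the M/n estimate, here is a small Monte-Carlo sketch of my own (not part of the original argument). It generates random signals with about n oscillations of typical amplitude M over an interval of length D, computes the interval average A(D) for each realization, and measures the spread. The specific signal model, a cosine with randomized frequency and phase, is an assumption chosen only so that the power spectrum is peaked near k_0 = n/D.

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_spread(M, n, D=100.0, trials=4000, samples=512):
    """Monte-Carlo standard deviation of the interval average A(D)
    for random signals with ~n oscillations of amplitude M on [0, D]."""
    t = np.linspace(0.0, D, samples)
    # Frequencies clustered around n/D, so the spectrum is peaked
    # near k_0 ~ n/D (this signal model is an assumption).
    f = (n + rng.uniform(-0.5, 0.5, size=trials)) / D
    phi = rng.uniform(0.0, 2.0 * np.pi, size=trials)
    T = M * np.cos(2.0 * np.pi * f[:, None] * t + phi[:, None])
    return T.mean(axis=1).std()   # spread of A(D) across realizations

M, n = 4.0, 10        # ~4 degree swings, ~10 oscillations per century
s10 = avg_spread(M, n)
s40 = avg_spread(M, 4 * n)
print(s10, s40)
```

For this toy model the spread comes out as a definite fraction of M/n (the overall constant depends on the details of the signal model), and quadrupling n shrinks it by about a factor of four, in line with the qualitative 1/n scaling.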
It is important to understand that the 0.5 °C is not the precision of the measurement of T; nor is it the precision with which we have managed to determine the "true" value of A(D). It is the precision of the definition of A(D). There is no absolutely precise value of A(D) to which we could, even in principle, make successively more precise approximations. The value 0.5 °C is an inherent uncertainty in A(D).

If D grows, the inherent uncertainty in A(D) decreases, but at fixed D we cannot do anything to decrease it. (Gathering temperature data every microsecond won't help, because the uncertainty is determined by M and n, which do not change as we gather more data.)

As an analogy, consider the distance between Boston and Chicago. It is meaningless to measure this distance up to 1 centimeter: the concepts of "location of Boston" and "location of Chicago" are simply not defined with that kind of precision. We may certainly employ a precise distance-measuring device and publish a figure for the distance between Boston and Chicago with 8 significant digits. However, the last digits of that figure will be meaningless; they will depend on extraneous and random factors, such as which street corner we chose as the "center" of Boston, the soil under that street corner, or even the temperature on that day. So if our measurements were to show an increase of 2.1 cm per year in the distance between Boston and Chicago, this "increase" should simply be ignored.

The computation of the uncertainty of temperature above was performed under the assumption that T(t) is a well-defined scalar value at each time (as it was in the above graph). In that case, the longer we observe, the more precise the average becomes. In reality, T(t) is itself defined as an average of temperature over the entire Earth. So let us now consider the quantity T defined as the "average temperature over the Earth".
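The point that more data at fixed D does not help, while a longer D does, can be illustrated with the same kind of toy model (again my own sketch, assuming a narrow-band random signal oscillating at roughly a fixed physical frequency):

```python
import numpy as np

rng = np.random.default_rng(1)

def avg_spread(D, dt, M=4.0, freq=0.1, trials=3000):
    """Spread of the interval average of random signals oscillating at
    roughly `freq` cycles per unit time, sampled every `dt` on [0, D]."""
    t = np.arange(0.0, D, dt)
    # Assumed model: cosine with randomized frequency (near freq) and phase.
    f = freq * (1.0 + rng.uniform(-0.25, 0.25, size=trials))
    phi = rng.uniform(0.0, 2.0 * np.pi, size=trials)
    T = M * np.cos(2.0 * np.pi * f[:, None] * t + phi[:, None])
    return T.mean(axis=1).std()

coarse = avg_spread(D=100.0, dt=1.0)   # ~100 samples
fine = avg_spread(D=100.0, dt=0.1)     # 10x denser sampling: no improvement
longer = avg_spread(D=800.0, dt=1.0)   # longer interval: spread shrinks
print(coarse, fine, longer)
```

Sampling ten times more densely over the same interval leaves the spread of A(D) essentially unchanged, while observing for a longer D reduces it, exactly as the M/n argument predicts.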
Here we face the same task again: to define the average of a fluctuating quantity, given only a finite sample. At any given time, temperature differences across the Earth are about plus or minus M = 40 °C. There are several places on Earth where the temperature has a large maximum (the Sahara, mainland China and some parts of Siberia, continental Africa, etc.), while other places have minima (the polar regions and other glaciated areas, etc.). Treating the temperature across the Earth as an oscillating function, let us estimate that it has about n = 20 oscillations across the globe. Applying the same qualitative reasoning, we find that the average temperature across the Earth is defined only up to M/n = 40 °C / 20 = 2 °C.

Again, this figure is not a measurement error. Rather, it indicates that there is no quantity X that "precisely" denotes the "average temperature T over the entire Earth at time t". Our T is not an approximation to any such X, because no such X exists. Moreover, unlike the case of time sampling, there will never be more temperature variability to sample across the Earth at a given time: the temperature will always have about n = 20 oscillations of magnitude about M = 40 °C. Gathering more data about the temperature at various places on Earth will not define T(t) more precisely than with an uncertainty of plus or minus 2 °C!

We can conclude that any change in T smaller than a few degrees is a mere statistical fluctuation. In particular, standard statements about global warming are quite meaningless as written:

> Earth's average surface temperature rose by 0.74±0.18 °C over the period 1906–2005. The rate of warming almost doubled for the last half of that period (0.13±0.03 °C per decade, versus 0.07±0.02 °C per decade).

It is impossible to talk about a 0.8 °C rise in a quantity that is only defined up to 2 °C. One can obtain any numerical result (that the temperature is rising, falling, or constant) depending on the choice of averaging procedures, window functions, and so on.
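For comparison, here is the same rule-of-thumb estimate written out side by side for the time average and for the spatial average:

```latex
\sigma_{A} \sim \frac{M}{n}:
\qquad
\underbrace{\frac{4\ {}^{\circ}\mathrm{C}}{10} \approx 0.4\ {}^{\circ}\mathrm{C}}_{\text{time average over a century}}
\qquad
\underbrace{\frac{40\ {}^{\circ}\mathrm{C}}{20} = 2\ {}^{\circ}\mathrm{C}}_{\text{spatial average over the Earth}}
```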
The same arguments apply to the average water level and to the average concentration of CO2 in the atmosphere. These quantities are not defined with enough precision to support any mathematical model of a 0.2 °C change of temperature per century, or of a 3 mm per year rise of the ocean level.