Physics / Critical Essay

Vol. 1, No. 1 / October 2014

Physical Theories and Computer Simulations in Climate Science

William Kininmonth

In February 1979, the World Meteorological Organization (WMO), with the support of other United Nations agencies, responded to a series of widely reported climate-related humanitarian disasters by convening the First World Climate Conference.1 The conference was intended to assess the state of climate knowledge and to consider the effects of climate variability and climate change. The participants also discussed the potential impact of human activities on the climate, especially with respect to industrial activities that were increasing the atmospheric concentration of carbon dioxide.

A subsequent conference, convened in October 1985 in Villach, Austria, by the United Nations Environment Program (UNEP) and cosponsored by the WMO and the International Council for Science (ICSU), further assessed the role of increased levels of carbon dioxide and other radiatively active atmospheric constituents in climate change. In a statement released following the conference, the participants concluded, inter alia:

Many important economic and social decisions are being made today on long-term projects–major water resource management activities such as irrigation and hydro-power, drought relief, agricultural land use, structural designs and coastal engineering projects, and energy planning–all based on the assumption that past climatic data, without modification, are a reliable guide to the future. This is no longer a good assumption since the increasing concentrations of greenhouse gases are expected to cause a significant warming of the global climate in the next century. It is a matter of urgency to refine estimates of future climate conditions to improve these decisions.2

The Villach Conference Statement also noted that:

The most advanced experiments with general circulation models of the climatic system show increases of the global mean equilibrium surface temperature for a doubling of the atmospheric CO2 concentration, or equivalent, of between 1.5 and 4.5°C.3

In promoting anthropogenic climate change as an international issue, the Villach Statement revealed the extent to which projections of climate change had become dependent on computer models. Climate change projections continue to rest on the validity of their underlying computer models.

The purpose of this essay is to evaluate the ability of computer models to represent the naturally varying climate system and predict how rising concentrations of greenhouse gases, especially carbon dioxide, affect the system.

Throughout, I argue that the relatively simple representation of the climate captured in computer models is inadequate for the purposes of prediction. I shall argue, in addition, that our rudimentary and incomplete understanding of natural variations in ocean and atmosphere fluids has made it difficult to interpret recent climate trends. Moreover, the scale of the energy exchange processes associated with evaporation, precipitation, and cloud formation (the hydrological cycle) constrains the climate response to anthropogenic forcing. These processes are probably underestimated in climate models, leading to exaggerated projections of warming from carbon dioxide.

Global Temperature, Carbon Dioxide, and the Greenhouse Effect

Global average surface temperature is widely accepted as a meaningful proxy for the earth’s climate variations. But this simple index is both ambiguous and uncertain. The surface temperature of the earth, as measured, refers to air near the surface of the earth rather than the surface itself. The data for this index were largely derived from meteorological observations, which began systematically in the early nineteenth century, albeit initially from few locations. Observations from large areas of the earth’s land and oceans remain either unavailable or are collected using differing technologies. Nonetheless, researchers have constructed monthly and annual global average near-surface temperature histories dating from roughly 1850 to the present. These histories depict a rise in temperature of about 0.8°C. The rise was not uniform; it was confined mainly to the periods 1910–1945 and 1975–2000.

Since 1979, satellites have provided a more consistent level of global temperature coverage. The satellite data do not measure near-surface air temperature directly, but rather the mean temperature of various layers of the atmosphere. The global average temperature derived from these data is widely accepted as a climate monitoring index, although the space-based and surface histories do have important differences.

A linkage between greenhouse gases and surface temperature was first proposed by Jean-Baptiste-Joseph Fourier in 1827, who conjectured that naturally occurring gases, such as water vapor and carbon dioxide, were restricting the escape of radiant heat from the earth’s surface, thus leading to a greenhouse effect.4 In a series of experiments commencing in 1859, John Tyndall confirmed that otherwise colorless and invisible minor gases in the atmosphere, such as water vapor, carbon dioxide, and ozone, could absorb radiant heat with differing capacities.5

The Swedish chemist Svante Arrhenius drew an explicit connection, in 1896, between the presence of carbon dioxide in the earth’s atmosphere and its climate.6 His general conjecture was challenged in 1920 by the Serbian physicist Milutin Milankovitch, whose work suggested that cyclical changes in the earth’s axial tilt and orbit—what are now called Milankovitch cycles—played an additional and important causal role in climate change.7 Milankovitch identified the predominant cycles of the earth’s orbital variations and calculated their impact on the magnitude of solar heating over polar regions.8 In 1938, Guy Stewart Callendar, on the basis of the observed rise in global temperatures from 1888 to 1938, and with Arrhenius loitering in the background, argued that a doubling of carbon dioxide concentration would raise global temperatures by about 2°C. This was the first specific prediction about the future of global warming.9

Researchers made little further progress toward establishing the sensitivity of global temperature to atmospheric carbon dioxide concentrations until technology advanced in aeronautics and instrumentation during the 1950s. These advances led to an improved understanding of atmospheric radiation. Fourier had conjectured that greenhouse gases inhibited the escape of infrared radiation from the earth’s surface. Not so. Infrared radiation to space across the wavelengths associated with greenhouse gases emanated from within the atmosphere; greenhouse gases emitted more infrared radiation than they absorbed, thus cooling the atmosphere at a rate of 1–2°C per day.

Clearly, radiation processes alone could not account for the greenhouse effect. A consideration of the roles of clouds and the hydrological cycle was essential.

In a paper published in 1958, Herbert Riehl and Joanne Simpson Malkus clarified the importance of clouds in the exchange of energy.10 The buoyant ascent of air within deep convection clouds, they argued, took heat and latent energy from near the surface and distributed the energy through the atmosphere, thereby offsetting the cooling effect of greenhouse gases. An essential requirement for buoyant convection is thermodynamic instability, a sufficiently rapid decrease of temperature with altitude.

The hydrological cycle, with its uptake of latent energy from the surface of the earth during evaporation and its release of latent energy to the atmosphere by means of convection and cloud formation, is an integral part of the earth’s energy exchanges. The excess warmth of the surface compared to the average radiating temperature of the atmosphere (the greenhouse effect) is an outcome of the thermodynamic requirement of convection.

The radiation–convection model became the accepted explanation for the earth’s vertical energy exchanges. Solar radiation is largely absorbed by and warms the surface of the earth; net infrared radiation loss from the clouds and greenhouse gases cools the troposphere; and the hydrological cycle distributes excess heat from the surface through the troposphere.

The new understanding and assessments of surface exchanges led a number of researchers to adopt surface energy budget calculations in the early 1960s.11 Using these methods, they calculated the Equilibrium Climate Sensitivity (ECS)—the change in equilibrium global average surface temperature associated with a doubling of atmospheric carbon dioxide concentration—as only a few tenths of a degree Celsius.

Another innovation of the 1960s, based again on the work of Riehl and Malkus, was the construction of a one-dimensional radiation–convection model for vertical energy exchanges. The model suggested that as carbon dioxide concentration increased, radiation to space across the active carbon dioxide infrared wavelengths would decrease.12 In turn, radiation to space across other infrared wavelengths would increase, because the retention of radiation energy across the carbon dioxide wavelengths would warm the surface and atmosphere. A new equilibrium temperature would be established when the reduction to infrared radiation emission to space across the carbon dioxide wavelengths was offset by the increase across the other wavelengths.
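
The logic can be illustrated with a toy two-band calculation. The partitioning of emission between a surface window and a cold carbon dioxide band, and the assumed drop in the band emission temperature upon doubling, are illustrative numbers only, not the output of any radiation–convection model.

```python
SIGMA = 5.67e-8         # Stefan-Boltzmann constant, W m-2 K-4
ABSORBED_SOLAR = 240.0  # approximate global mean absorbed solar radiation, W m-2
WINDOW_FRACTION = 0.6   # assumed share of emission to space coming from near the surface
CO2_FRACTION = 1.0 - WINDOW_FRACTION  # share emitted from a cold upper level in the CO2 bands

def surface_temperature(t_co2_band):
    """Surface temperature that balances absorbed solar radiation for a given
    emission temperature in the carbon dioxide bands."""
    window_emission = ABSORBED_SOLAR - CO2_FRACTION * SIGMA * t_co2_band**4
    return (window_emission / (WINDOW_FRACTION * SIGMA)) ** 0.25

t_co2 = 225.0                       # K, assumed emission temperature in the CO2 bands
t_surf = surface_temperature(t_co2)

# Doubling CO2 raises the band emission level; with a ~6.5 K/km lapse rate and an
# assumed ~450 m rise, the band emission temperature falls by roughly 3 K.
t_co2_doubled = t_co2 - 3.0

# Instantaneous deficit: emission in the CO2 bands drops before the surface warms.
deficit = CO2_FRACTION * SIGMA * (t_co2**4 - t_co2_doubled**4)

# New equilibrium: window (surface) emission must rise to make up the deficit.
t_surf_new = surface_temperature(t_co2_doubled)

print(f"radiative deficit on doubling: {deficit:.1f} W/m2")
print(f"equilibrium surface warming:   {t_surf_new - t_surf:.1f} C")
```

The toy omits the water vapor and lapse-rate adjustments that radiation–convection models include; it illustrates only the compensation between wavelength bands, not the magnitude of the response.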

Radiation–convection models displayed an equilibrium surface-temperature sensitivity of about 2°C, significantly larger than indicated by surface-energy budget calculations. The magnitude was sufficient to rekindle interest in anthropogenic global warming.

A characteristic of the radiation–convection model, it should be noted, is that the energy exchanges associated with the hydrological cycle remain implicit. Specification of the vertical temperature profile defines the water vapor content for radiation processes; but for surface energy exchanges, there is no explicit inclusion of surface evaporation. In 1966, the work of C.H.B. Priestley, demonstrating the limiting effect of evaporation on surface temperature,13 provided a cautionary reminder that radiation–convection theories assessing the sensitivity of the earth’s surface temperature to rising carbon dioxide concentrations were inadequate.14 The earth’s surface is roughly 70 percent ocean, and transpiring vegetation covers much of the remainder. A proper evaluation requires an explicit representation of the hydrological cycle.

The Introduction of Climate Models

A climate model is a theoretical mathematical representation, in the form of a system of differential equations, of the earth’s climate system. That system includes interacting ocean and atmospheric circulations, as well as the energy exchanges that are internal to the system. A solution to these equations projects an initial state of the system into the future.

The roots of climate modeling can be traced to the mid-1950s and the development of Atmospheric General Circulation Models (AGCMs) for numerical weather prediction.15 The basic equations were derived from the Navier–Stokes equations, to this day not fully understood, and represented the atmosphere hydrodynamically as something very much like a two-dimensional fluid in motion around a spherical shell. The models assumed, as a governing idealization, that vertical atmospheric motions were small in comparison with horizontal atmospheric motions. Given this idealization, a hydrostatic assumption was virtually inevitable: gravity and the vertical pressure gradient were taken to balance exactly, thereby filtering vertically propagating acoustic waves from the solutions. Motion across the surface of the earth remained.

Since these equations are differential in their nature, they are local in their structure; they specify the behavior of a system, speaking loosely, in an infinitesimal region of space and time. Some differential equations may be solved globally, but not these. A scheme of simulation in which solutions are assessed step by step is therefore analytically inevitable. A feature of these equations is that they link temporal rates of change to existing spatial distributions. Put simply, if we know the local spatial distribution of its controlling variables, we can estimate the rate of change of a quantity at a location and so project its value forward in time.
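
A minimal sketch of this step-by-step procedure for a single quantity at a single grid point may make the idea concrete; the grid spacing, wind, and temperatures below are invented for illustration, and a real model carries many coupled variables in three dimensions.

```python
# Estimate a local rate of change from the neighbouring grid values, then
# project the quantity one time step forward.
dx = 400e3          # grid spacing, metres
dt = 1800.0         # time step, seconds
wind = 10.0         # westerly wind, m/s (assumed constant)

# temperature (K) at three neighbouring grid points: west, centre, east
t_west, t_centre, t_east = 288.0, 287.0, 285.0

# spatial gradient at the centre point, estimated across the two neighbours
# (note that the difference spans twice the grid spacing, i.e. 800 km here)
gradient = (t_east - t_west) / (2.0 * dx)      # K per metre

# advection gives the local rate of change: dT/dt = -wind * dT/dx
rate_of_change = -wind * gradient              # K per second

# project the centre value forward one time step
t_next = t_centre + rate_of_change * dt
print(f"projected temperature after one step: {t_next:.3f} K")
```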

AGCMs represented the controlling factors relating to changing atmospheric motion on a three-dimensional spatial grid. The evaluation of various parameters across the grid space for successive time steps to project a future state is referred to as a simulation. Any such evaluation faces an inherent difficulty because the discretized equations represent each wave with two solutions. The first, or physical mode, has a phase speed and amplitude similar to the real wave; the second, or computational mode, is generally of small amplitude, but has the same phase speed and travels in the opposite direction to the real wave. The magnitude of the computational mode is constrained by limiting the time step in relation to the grid spacing. For any grid spacing, there is a limiting time step interval beyond which the computational mode amplifies, and the simulation becomes unstable.

It is important that the grid spacing be as small as possible because spatial gradients at a grid point are computed as differences across adjacent grid points. This means that for a 400km grid spacing (typical of early climate models), the gradient is computed as the variation across an 800km distance. Errors due to such a crude approximation feed into the time step projection, accumulating over subsequent time steps. Thus, significant benefits are obtained by reducing the grid spacing.

When reducing the grid spacing to better define local gradients, it is also necessary to reduce the time step, otherwise the computational mode would amplify. For every halving of the grid spacing, it is necessary to halve the time step. As a result, the number of computations increases by approximately an order of magnitude.16 A judgment is thus required regarding the appropriate grid spacing for the available computing power and the duration of the intended simulation.
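
A back-of-envelope sketch of the cost argument, assuming the maximum stable time step shrinks in proportion to the grid spacing (the Courant–Friedrichs–Lewy condition) and counting only the horizontal grid points:

```python
# Halving the grid spacing gives four times as many horizontal grid points and,
# via the CFL condition, twice as many time steps for the same simulated period:
# roughly eight times the work, i.e. close to an order of magnitude.
def relative_cost(grid_spacing_km, reference_km=400.0):
    """Computational cost relative to a reference grid, for a fixed simulated period."""
    refinement = reference_km / grid_spacing_km
    columns = refinement ** 2          # horizontal grid points multiply in both directions
    time_steps = refinement            # CFL: the time step shrinks with the grid spacing
    return columns * time_steps

for spacing in (400.0, 200.0, 100.0):
    print(f"{spacing:>5.0f} km grid -> {relative_cost(spacing):>4.0f} x the work")
```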

The earliest AGCMs typically had a horizontal grid spacing of up to 500km and 9–12 vertical levels. The Global Climate Models (GCMs) currently in use have a horizontal grid spacing of about 100km and up to thirty vertical levels for the atmospheric component. The current configuration limits the time step interval for stable computation to about ten minutes. The ocean component of the GCMs has larger grid spacing and fewer vertical levels because the ocean characteristics are judged to change more slowly than those of the atmosphere. Contemporary GCMs contain several million grid points, which means that even with the fastest supercomputers, a fifty-year projection can take several months to complete.
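
Rough arithmetic with round numbers indicates the order of magnitude of these grid-point counts; the figures below cover only the atmospheric component, and the ocean component and the several variables carried at each point enlarge the total further.

```python
# Approximate grid-point counts for the atmospheric component alone.
EARTH_SURFACE_AREA_KM2 = 5.1e8

def atmospheric_grid_points(spacing_km, levels):
    columns = EARTH_SURFACE_AREA_KM2 / spacing_km**2   # horizontal grid columns
    return columns * levels

early = atmospheric_grid_points(500.0, 9)     # early AGCM: tens of thousands of points
modern = atmospheric_grid_points(100.0, 30)   # modern atmospheric component: over a million
print(f"early AGCM:                  ~{early:,.0f} grid points")
print(f"modern atmospheric component: ~{modern:,.0f} grid points")
# Adding the ocean component and the several prognostic variables carried at
# each point brings the total state to several million values.
```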

For weather forecasting purposes, the AGCM grid spacing must be as small as practically possible to ensure that important weather systems are adequately described. In contrast, it is generally accepted that climate models need not describe individual weather systems; it is their aggregate impact, and in particular their role in heat transport, that counts, and thus the grid spacing can be larger. A larger grid spacing and increased time step reduce the number of computations and thus the computing resources required to carry out a simulation.

However, it should be noted that a primary function of the atmospheric circulation is to transport heat from the tropics to higher latitudes, thus maintaining the overall global energy balance. Much of the transport is by way of the mean flow and the stationary waves (the system of permanent troughs and ridges of the middle latitude westerly winds). There is also significant transport by way of transient eddies in the weather systems. An inability to resolve the scale of weather systems in climate models leads to errors in the totality of poleward heat transport.

Early AGCMs required essential boundary constraints to maintain their stability. For weather forecasting purposes, the surface temperature was defined. To counter the natural cascading of eddy energy that accumulated at the grid scale, a form of mathematical filtering was introduced and rationalized as frictional loss. These models quickly became computationally unstable without ongoing adjustment of the vertical temperature profile. Known as convective adjustment and likened to the role of convection in the atmosphere, the adjustment returned a vertical temperature profile similar to that within convection clouds whenever the local model atmosphere became saturated.
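
A minimal sketch of the idea behind convective adjustment, assuming a fixed critical lapse rate and a crude equal redistribution of the excess between adjacent levels; published schemes, such as Manabe's, treat energy conservation and moisture more carefully.

```python
# Wherever the model's vertical temperature decrease exceeds a critical lapse
# rate, relax the profile back toward that lapse rate. Each pairwise adjustment
# conserves the sum of the two temperatures (equal layer weights assumed).
CRITICAL_LAPSE = 6.5e-3   # K per metre, assumed critical lapse rate

def convective_adjustment(temps, heights):
    """Adjust an unstable column (repeated bottom-up sweeps) toward the critical lapse rate."""
    t = list(temps)
    for _ in range(len(t)):              # a few sweeps suffice for a short column
        for k in range(len(t) - 1):
            dz = heights[k + 1] - heights[k]
            lapse = (t[k] - t[k + 1]) / dz
            if lapse > CRITICAL_LAPSE:   # unstable pair: redistribute heat upward
                excess = (lapse - CRITICAL_LAPSE) * dz
                t[k] -= excess / 2.0
                t[k + 1] += excess / 2.0
    return t

heights = [0.0, 1000.0, 2000.0, 3000.0]   # metres (illustrative levels)
temps = [302.0, 290.0, 284.0, 278.0]      # K; the lowest layer is unstable
print(convective_adjustment(temps, heights))
```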

Physical processes associated with energy exchange added to the AGCMs during the 1970s included:

  • solar radiation to introduce diurnal heating of the surface;
  • infrared radiation processes to shed energy to space;
  • climatological distributions of clouds to regulate the solar and infrared radiation exchange processes;
  • energy balance at the surface to compute temperature and regulate the exchanges of heat and latent energy between the surface and atmosphere;
  • improved representations of convection.

As a result, the AGCMs were better at representing the energy exchange processes of the atmospheric component of the climate system, but the additional calculations at each grid point created an increasing load on available computing resources.

Notwithstanding their simplicity, the early AGCMs provided atmospheric physicists, in a gross sense, with the ability to investigate the climate system. It was possible to evaluate the stability of the AGCM, and, by inference, the climate system, by changing a component value and observing how other climate features of the AGCM evolved and changed, often for reasons that could not be explicitly determined. It also became apparent that it was possible to tune the model to produce a desired outcome by a judicious representation of the physical processes, so long as the representation came within limits of uncertainty.

AGCMs subsequently grew in complexity to include the specification of infrared radiation by wavelength. Consequently, it became possible to perform experimental simulations in which the magnitude of the infrared radiation processes changed according to concentrations of carbon dioxide. This development made possible an assessment of the sensitivity of global surface temperatures to changing carbon dioxide concentrations. Such studies provided the first model estimates of Equilibrium Climate Sensitivity (ECS) to anthropogenic carbon dioxide emissions.

The earliest AGCMs returned an equilibrium sensitivity of about 2°C warming for a doubling of carbon dioxide concentration.17 The general similarity between climate model sensitivity, which varied between models, and the results of earlier radiation–convection models, strengthened confidence in these models’ validity. This was not a strong test given the limitations of the radiation–convection models, but it was surely of some significance.

During this period, the number of institutions developing and using AGCMs increased, but few models were truly independent because the complexity of the underlying computer programs inevitably led researchers to share their code. In turn, computing resources available at the time largely determined the grid scale in use. The representation of physical processes was generally of a common form, but with local variations to values associated with assumptions and approximations relevant to component processes.

By the time of the 1985 Villach Conference, the sensitivity range for a doubling of atmospheric carbon dioxide concentration derived from the available AGCMs was between 1.5°C and 4.5°C, with a most likely value of about 3.0°C.

However, there are fundamental problems surrounding the early AGCMs and their ability to link climate sensitivity to changing carbon dioxide levels. First, the benchmark index of global average surface temperature was poorly defined from observations; computer models and observational estimates often diverged. To circumvent this limitation, a common reference for model sensitivity to radiation forcing was used as a revised benchmark. A model’s sensitivity was expressed as the change in global average temperature between its unforced equilibrium state and its equilibrium state under the forcing scenario. Hence, the change to equilibrium temperature brought about in each model by a doubling of carbon dioxide concentration functioned to determine the equilibrium sensitivity of the AGCM. Second, early AGCMs omitted ocean circulations, the inertial and thermal flywheels of the climate system.18 The earth’s oceans were represented by a shallow swamp. These models were thus incapable of reproducing the year-to-year, decadal, and centennial variability that might result from changing ocean circulations. However, cycles of such periodicity are identified in the various records from which climate history is constructed.

These limitations notwithstanding, early AGCMs19 provided the basis for the Villach Conference Statement.20 Similar models were also the basis for the 1990 Intergovernmental Panel on Climate Change (IPCC) First Assessment Report (FAR).21 Scientists compiling the FAR expressed caution about predicting the timing, magnitude, and regional implications of anthropogenic global warming.22 These cautionary sentiments were largely filtered from the FAR’s Summary for Policymakers.

AGCM estimates of equilibrium sensitivity to atmospheric carbon dioxide were the basis for negotiating the UN’s 1992 Framework Convention on Climate Change (UNFCCC).23 The UNFCCC defines climate change as that resulting from human activities; by this definition, natural variations in climate are ignored, or at least deemed unimportant for mankind.

A New Generation of Climate Models

The FAR findings and the signing of the UNFCCC provided the impetus for an expansion of climate research and modeling efforts. International action to mitigate anthropogenic climate change obviously required a more substantial scientific base.

As I have already noted, a major limitation in early AGCMs was their limited representation of the earth’s oceans. The coupling of Ocean General Circulation Models (OGCMs) to AGCMs through physical processes representing the exchange of heat, moisture and momentum (or fluxes) across the ocean–atmosphere interface aimed to address this deficiency. These exchange processes occur on a scale below that of the computational grid and with inadequately understood seasonal and regional distributions. Small local errors accumulate anomalous energy and distort the temperature fields. Consequently, the early coupled Global Climate Models (GCMs) displayed a tendency toward climate drift: even without forcing, global temperature departed from the initial state over time, usually toward a warmer state.

To counteract this tendency, the models were initially stabilized by a mathematical artifice known as flux adjustment.24 Regionally varying adjustments regulated the exchange of heat and moisture across the ocean–atmosphere interface to ensure that the starting (present-day) climate was realistic and maintainable. In some models, the magnitude of regional flux adjustment was large.
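
The procedure can be sketched schematically as follows, on the assumption that the adjustment is diagnosed once, from the mismatch between the modeled and observed climatological fluxes, and then added unchanged at every time step; the regional values are invented placeholders.

```python
# Schematic flux adjustment: a fixed, regionally varying correction applied to
# the air-sea heat flux so that the coupled model's control climate does not drift.
observed_climatology = [120.0, 80.0, 20.0, -30.0]   # W/m2 by region (assumed values)
model_climatology    = [110.0, 95.0, 35.0, -20.0]   # W/m2 from the uncoupled model (assumed)

# diagnosed once, before the coupled run begins
flux_adjustment = [obs - mod for obs, mod in zip(observed_climatology, model_climatology)]

def corrected_surface_flux(model_flux_now):
    """Heat flux passed to the ocean: the model's value plus the fixed regional correction."""
    return [flux + adj for flux, adj in zip(model_flux_now, flux_adjustment)]

print(corrected_surface_flux([115.0, 90.0, 30.0, -25.0]))
```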

The use of flux adjustment was controversial because either surface temperature or local vertical temperature gradients regulate the magnitudes of many of the fluxes. Specification of surface temperature regulates the flux exchanges; but equally, regulation of the fluxes constrains surface temperature. The specification of a magnitude for surface flux introduces a bias to the calculations and detracts from the representativeness of the local surface temperature returned by the model. Faced with the dilemma of choosing between a GCM that contained inherent climate drift or flux adjustments, modelers ultimately chose the path of flux adjustment.

Another challenge that arose in the coupling of AGCMs and OGCMs involved the times needed for their circulations to arrive at a steady state.25 The atmospheric component reaches a steady state in weeks, the upper ocean circulation in seasons, and the deep ocean in centuries. A pragmatic strategy, considering the existing constraints on computing resources, was to initialize the AGCMs and OGCMs independently, with each linked to modern climate representations of ocean surface temperature and ocean–atmosphere fluxes. As a result, the problems associated with coupling restricted the ability of individual GCMs to represent a common benchmark climate. Even with surface flux adjustment, the benchmark equilibrium global average surface temperature ranged between 12°C and 16°C for different models.

For its 1995 Second Assessment Report (SAR),26 which concluded that climate sensitivity to carbon dioxide forcing was in the range of 1.5°C to 4.5°C,27 the IPCC predominantly utilized GCMs with flux adjustments. A range of scenarios for future climate was generated using forcing scenarios derived from varying assumptions about likely increases in carbon dioxide concentrations.28 These scenarios were, in turn, based on varying expectations of industrialization across the globe and potential future constraints on carbon dioxide emissions. The economic assumptions underpinning these scenarios proved controversial.29

The combination of temperature projections and forcing scenarios meant that the envelope of potential future global temperature diverged with time. The IPCC concluded that the best estimate of global average temperature rise was 2°C above 1990 levels by 2100. If carbon dioxide emissions were to be constrained, the global average temperature rise could be restricted to about 1°C. Conversely, an accelerating rate of carbon dioxide emissions could lead to a rise in global average temperature of about 3.5°C by 2100.30

The projection of relatively rapid warming under the unconstrained emissions scenario was a stimulus for further international negotiations. These led to the 1997 Kyoto Protocol to the UNFCCC, a first and binding step toward the regulation of anthropogenic activities.31

With time, techniques were developed to reduce and eventually eliminate the need for surface flux adjustment in GCMs. Although there were significant differences in the component transports between models, there was an improved ability to match the large-scale heat transports, especially the ocean component, to the observed top-of-atmosphere radiation fields.32 To the extent that observations of ocean and atmosphere energy transport were available, the various GCM transports were mostly within the uncertainty errors associated with observations.

An important additional resource available to the scientists compiling the 2001 IPCC Third Assessment Report (TAR) was a set of model-generated climate records spanning a period of a thousand years.33 Together with shorter simulations, these provided a basis for the analysis of natural variability within climate models and for a comparison of the computer-generated variability with that found in the climate record itself. The internal variability of these model-generated climate histories on interannual and longer timescales was small.34

The magnitude of internal variability in a climate model is crucial in assigning climate change over a given period to a particular factor. In the TAR Summary for Policymakers, the IPCC concluded: “The warming of the past 100 years is very unlikely to be due to internal variability alone, as estimated by current models.”35 This is a surprising conclusion because the scientists, in the bulk of the document, had suggested that the internal variability of the models was, in fact, less than that found in the climate record.36 Either the models were not capturing the internal variability of the climate system, leading to lowered estimates, or there were unrecognized sources of external forcing contributing to observed climate variability that needed to be better understood and included in the model simulations.

The standard used by the IPCC to assess model performance was the ability to reproduce the global temperature record of the twentieth century.37 Without external forcing, the existing models failed to produce a temperature trend. However, with a seemingly plausible pattern of anthropogenic and natural forcing, the models were able to replicate the observed temperature record. On this basis, the scientists compiling the TAR expressed confidence in both the models and in the linkage between carbon dioxide and global warming.38

Due to the uncertainties associated with the different forcing factors, a measure of expert judgment is required to quantify the magnitude of incremental climate forcing over different times. The ability of some models to meet the IPCC standard of reproducing twentieth-century global trends may have been due to a judicious choice of forcing factors. Well-tailored forcing meant that various GCMs were able to return a fair representation of the twentieth-century temperature history. However, when the models were projected forward into the twenty-first century, their temperature trajectories diverged. The TAR estimates of temperature rise from 1990 values to 2100 ranged from 1.4°C to 5.8°C.39

Established in 1995 and archived at the Lawrence Livermore National Laboratory, the Coupled Model Intercomparison Project (CMIP) database facilitates intermodel comparison and independent performance assessment. The organization of the database allows for the comparison of model outputs generated for each of the IPCC assessment reports: the CMIP2 database holds model outputs compiled for the 2001 TAR assessment, CMIP3 for the 2007 AR4 report, and CMIP5 for the 2013 AR5 report.

Studies conducted with the CMIP databases showed that in general, the models qualitatively reproduce the main climatological features of the twentieth century, with broad agreement on patterns of surface temperature and pressure distribution, wind fields, and precipitation. There are some important exceptions. In CMIP2 models (GCMs with surface flux adjustment), the global mean temperature varied over a range from 12°C to more than 16°C. These differences in ability to reproduce benchmark temperature are reflective of intermodel differences in the representation of the climate system.

With further additions to the range of chemical and biological processes included in the set of interacting processes, the models progressed from GCMs to Earth System Models (ESMs). This additional complexity aside, the question remains: Is the representation of the fundamental processes of the climate system adequate? A dynamic ocean circulation has replaced the shallow swamp representation, but a dearth of ocean observations limits confidence in the resulting representation. Similarly, the hydrological cycle, the representation of clouds and their interaction with radiation fields, aerosols, and the treatment of land surfaces are all sources of uncertainty.

Internal Climate Variability and Ocean Circulation

The denial of significant internal variability in the climate system is a crucial component in the IPCC’s argument attributing the temperature rise in the second half of the twentieth century to anthropogenic activity. If the climate system has only limited internal variability, the response of GCMs to applied natural and anthropogenic forcing patterns can largely explain the course of the twentieth-century global temperature record. If significant internal variability is absent, the IPCC can confidently make the claim that carbon dioxide is the dominant factor contributing to the global average temperature rise over the latter half of the twentieth century.

The recent 15-year hiatus in the global warming trend forced the scientists compiling the 2013 IPCC Fifth Assessment Report (AR5) to reassess earlier conclusions about limited internal variability:

The observed reduction in surface warming trend over the period 1998 to 2012 as compared to the period 1951 to 2012, is due in roughly equal measure to a reduced trend in radiative forcing and a cooling contribution from natural internal variability, which includes a possible redistribution of heat within the ocean (medium confidence). The reduced trend in radiative forcing is primarily due to volcanic eruptions and the timing of the downward phase of the 11-year solar cycle. However, there is low confidence in quantifying the role of changes in radiative forcing in causing the reduced warming trend. There is medium confidence that natural internal decadal variability causes to a substantial degree the difference between observations and the simulations; the latter are not expected to reproduce the timing of natural internal variability. There may also be a contribution from forcing inadequacies and, in some models, an overestimate of the response to increasing greenhouse gas and other anthropogenic forcing (dominated by the effects of aerosols).40

The IPCC was unable to provide an unambiguous explanation for the significant divergence between global average temperature and GCM projections, especially since carbon dioxide concentration continued to rise steadily during this period. The temperature hiatus, the IPCC argued, might be due to a cooling trend linked to some combination of internal variability, missing or incorrect radiation forcing, or GCM model response error. This conclusion is couched in a suggestion of confidence that has no basis in quantitative analysis.

The downward phase and reduced intensity of the solar cycle, and a series of small volcanic eruptions that ejected cooling aerosols into the stratosphere after 2000, the IPCC suggested, were factors that might contribute to a reduction in natural forcing. Satellite measurements cast doubt on the importance of volcanic eruptions because there was no upward trend in stratospheric aerosol loading, and therefore no evidence of cooling from increased volcanic activity over the period in question.

The suggestion, therefore, was that the recent hiatus in observed temperature might be a manifestation of internal variability not captured by the various models, with the attendant expectation that the warming trend was shortly to resume.41 The AR5 report noted that some models, whose initial state is close to the beginning of the hiatus, showed a reduction in projected warming over the near term. This might suggest that these models capture some of the system’s internal variability.42 Objective testing does not support this suggestion.

When examining the question of internal variability, a crucial element to be considered is the role played by ocean circulation. The oceans and atmosphere are interacting fluids in motion, and a degree of internal variability is to be expected. At the interface between the two, especially over the tropics, heat and latent energy (the evaporation of water vapor) are exchanged; this energy drives atmospheric circulation.

The impacts of changing ocean surface temperature can be qualitatively assessed from El Niño events, an interannual phenomenon associated with abnormal warming of the surface waters of the eastern equatorial Pacific Ocean. Significant El Niño events, such as that of 1997–1998, have produced a rise in global average surface temperature greater than 0.5°C and major disruption to seasonal weather patterns on a near-global scale. Climate fluctuations on decadal and longer timescales, including the Pacific Decadal Oscillation, North Atlantic Oscillation, and Arctic Oscillation, have been observed and possibly linked to ocean–atmosphere interactions.43

The thermohaline circulation, which is a meridional overturning circulation involving the deep oceans with a period on the order of one thousand years, is driven by the annual formation of winter sea ice over polar regions, where seasonal excess radiation to space cools the ocean surface. As sea ice forms, salt is ejected; the water below increases in density and salinity before sinking toward the ocean floor to form bottom water. Elsewhere, and away from the polar formation regions, cold interior water rises toward the surface in compensation. The ongoing formation of bottom water and its outward spread has maintained low temperatures within the ocean interior across millennia.

In contrast, solar radiation is continuously absorbed over the tropical oceans to a depth of ten or so meters. Some of this heat is mixed downward by wave action to a depth of several hundred meters. The ascent of bottom water creates a layer of sharp temperature gradient (the thermocline) at the boundary where the ascending cold interior water meets the warm, mixed surface layer.

Over tropical regions, the local temperature of the surface mixed layer remains relatively constant while absorbed solar radiation is exchanged with the atmosphere. A component of the solar energy is also used to warm the ascending bottom water as it mixes across the thermocline. The importance of this vertical mixing, known as upwelling, can be gauged by its impact on the ocean surface temperature gradient across the equatorial Pacific Ocean. Upwelling is strongest in the east, and as a result, temperatures are 5°C to 7°C cooler than the near-30°C of the western equatorial Pacific Ocean, even though the relatively cloud-free eastern Pacific receives more solar radiation than the cloudier western Pacific.

The magnitude of the tropical solar energy used to warm the water mixing through the thermocline can be estimated from the characteristics of the thermohaline circulation. The earth’s oceans have an average depth of 3,682.22 meters,44 and an overturning cycle on the order of a thousand years is needed for the bottom water to rise, warm by about 25°C, and once again assume the thermal characteristics of tropical surface water. Over equatorial regions, about 25W/m2 (watts per square meter) of absorbed solar energy are used to warm the cold interior water as it mixes into the surface layer.

The magnitude of solar energy consumed to maintain the thermohaline circulation indicates its importance to the natural variability of the climate system. A 15-percent range of the cycle period (7.5 percent each side of the nominal mean period) is equal to a 3.7W/m2 variation in the heat available to regulate the surface layer temperature. Such a variation is equivalent to that associated with radiation forcing from a doubling of atmospheric carbon dioxide concentration.
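
A rough check of these two figures, taking the stated warming, overturning period, and mean ocean depth as given, and assuming the upwelled water is warmed over roughly half of the ocean surface (the tropical oceans):

```python
# Energy required to warm the overturning water column, spread over the
# overturning period, expressed as a heat flux per unit area.
RHO = 1025.0                 # seawater density, kg/m3 (round number)
CP = 3990.0                  # specific heat of seawater, J/(kg K) (round number)
DEPTH = 3682.0               # mean ocean depth, m
WARMING = 25.0               # K gained before the water rejoins the surface layer
PERIOD = 1000.0 * 3.156e7    # overturning period, seconds (~1,000 years)

energy_per_m2 = RHO * CP * DEPTH * WARMING     # J per m2 of ocean
flux_ocean_average = energy_per_m2 / PERIOD    # ~12 W/m2 averaged over the whole ocean
flux_tropics = flux_ocean_average * 2.0        # ~24 W/m2 if concentrated over roughly
                                               # half the ocean area (the tropics)
variation = 0.15 * flux_tropics                # 15-percent range of the cycle period

print(f"ocean-average flux: {flux_ocean_average:.1f} W/m2")
print(f"tropical flux:      {flux_tropics:.1f} W/m2")
print(f"15% variation:      {variation:.1f} W/m2")
```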

Although the data are meager and not definitive, in 1999, Wallace Broecker et al. reported changes to the formation of bottom water around Antarctica over recent centuries, including a possible slowdown during the twentieth century.45 If their analysis is correct, the slowdown is consistent with a concurrent reduction in tropical upwelling and the observed warming of tropical surface waters. In 2002, Michael McPhaden and Dongxiao Zhang analyzed the changing circulation of upper layers of the Pacific Ocean, concluding that the surface warming from the mid-1970s to the mid-1990s could be attributed, at least in part, to a reduction in upwelling.46

The potential role of ocean variability as a source of global temperature change has long been ignored in IPCC assessments. This is surprising, because in the coupling of dynamic ocean circulation to atmospheric circulation during the initial evolution of GCMs, a mismatch between ocean and atmospheric heat transports led to climate drift, a tendency for the models to change temperature even without forcing.

The 2013 AR5 Technical Summary contains a detailed discussion of possible change in observed ocean heat content as a contributor to the aforementioned hiatus in the global warming trend.47

Ocean warming dominates that total heating rate, with full ocean depth warming accounting for about 93% (high confidence), and warming of the upper (0 to 700 m) ocean accounting for about 64%. Melting ice (including Arctic sea ice, ice sheets and glaciers) and warming of the continents each account for 3% of the total. Warming of the atmosphere makes up the remaining 1%. The 1971–2010 estimated rate of ocean energy gain is 199 × 10¹² W from a linear fit to data over that time period, equivalent to 0.42 W m⁻² heating applied continuously over the Earth’s entire surface, and 0.55 W m⁻² for the portion owing to ocean warming applied over the ocean’s entire surface area. The Earth’s estimated energy increase from 1993 to 2010 is 163 [127 to 201] × 10²¹ J with a trend estimate of 275 × 10¹² W. The ocean portion of the trend for 1993–2010 is 257 × 10¹² W, equivalent to a mean heat flux into the ocean of 0.71 W m⁻².48

The conjecture is that there has been an increase in ocean heat, especially in the ocean interior, and this can explain why the surface and atmosphere have not warmed as expected.

But the claimed heat retention in the oceans represents an extrapolation from an observational database of limited spatial density and frequency. Additionally, the claimed radiation retained in the earth’s system is not determined from observations, but is a theoretical calculation based on the increase in greenhouse gas concentrations. Satellite observations are not sufficiently accurate to verify the claimed heat retention, nor are the observations of ocean temperature on which the estimates of ocean heat retention are based.

There is no empirical data to support the AR5’s claim of a reduction in total infrared emission to space as atmospheric carbon dioxide concentration increased from 1971 through 2010. Conversely, the Earth Radiation Budget Experiment (ERBE) data suggest that total infrared emission to space increased by several W/m2 during the 1980s and 1990s, despite the increasing concentration of carbon dioxide.49

The evidence from El Niño events suggests that the rate of upwelling into tropical surface waters varies significantly over time at the regional level. The wider climatic impact of these events suggests that variations in the overturning associated with local, regional, and global ocean circulations have the potential to affect climate on similar scales.

Unfortunately, techniques for ocean observation are still in their infancy. The global network of Argo profiling buoys provides less than a decade of data. This relatively short period of data collection is insufficient to define the scales of ocean variability. Nonetheless, analyses of these limited data sets have led to the identification of previously unsuspected structure and variability in ocean circulation.

If the crucial ocean circulation is so poorly understood, GCMs cannot be expected to reproduce the natural internal variability of the climate system. The absence of significant decadal and long period cycles in GCMs does not necessarily mean such variability does not exist in the climate system.

The Hydrological Cycle

The hydrological cycle is important to climate modeling because evaporation and condensation are major elements of the energy exchange at the surface of the earth and of the distribution of energy through the atmosphere. Water vapor is also a major greenhouse gas, and along with clouds, regulates the transfer of radiation energy through the atmosphere and to space. Except for the transport of water vapor, hydrological cycle processes occur on a scale below that of the GCM computational grid. The equations that determine the atmospheric flow and transport cannot explicitly take into account these processes and their interactions with solar and infrared radiation. Many of the processes associated with the hydrological cycle are poorly quantified. Their inclusion in GCMs requires a variety of approximations and assumptions.

Satellite observations have provided a clearer understanding of the impact of clouds on the earth’s energy budget. Over the tropics, infrared emission to space can vary between about 300W/m2 in cloud-free regions and about 100W/m2 in regions of high-altitude cirrus clouds. These clouds tend to warm the surface and atmosphere: they allow the passage of solar radiation with some scattering but little attenuation, while heat is retained in the system because infrared emission from the cold cloud tops is severely constrained. Fractional changes in the amount and distribution of cloudy and cloud-free areas modify the earth’s energy budget. Because of the geographical distribution of cirrus clouds, the annual average outgoing radiation to space over the tropics varies from less than 200W/m2 over regions associated with deep convection to more than 300W/m2 in the mainly cloud-free regions of the subtropics and eastern equatorial Pacific Ocean.50

An analysis of satellite-derived data from the tropical band (latitude 20°N–20°S) has identified decadal scale variability in radiation to space caused by changes in tropical cloudiness.51 Local changes of up to 100W/m2 and anomalies over the tropical band averaging several watts per square meter suggest the possibility that cloud feedbacks are linked to climate shifts on decadal timescales. It should be noted that climate model simulations have failed to reproduce the observed variation in the tropical top of the atmosphere radiation budget, pointing to a need for improved cloud specification.

It is entirely possible that cloud feedback explains the apparently high sensitivity of GCMs to carbon dioxide forcing.52 Under this hypothesis, the initial warming due to increasing carbon dioxide expands the area occupied by high cloud, thus further warming the system. Cloud formation processes are complex, and their representation in GCMs requires simplifications, assumptions, and approximations. Inadequate specifications of cloud formation could readily bias surface temperature reports. This could lead to exaggerated warming under carbon dioxide forcing. Empirical evidence in support of potential cloud feedback is limited and inconclusive.

An alternative view is that clouds may act as an automatic correction mechanism against climate drift. In 2001, Richard Lindzen et al. published a study that linked tropical cloudiness and humidity to sea surface temperature, finding a tendency for the area of cirrus outflow to decrease with warmer temperatures.53 An expansion of the cloud-free area with relatively dry air would enlarge the area of strong radiation to space. This is a negative cloud feedback effect. As tropical surface temperature warms (for whatever reason), the iris, or cloud-free region, would expand to allow the shedding of more radiation energy to space, thus stabilizing surface temperature. Such an effect would reduce the sensitivity of the climate system to carbon dioxide forcing.

Convection is an integral component of the tropical hydrological cycle because of its role in distributing heat and latent energy from the surface through the troposphere. The horizontal scale of convection clouds is more than an order of magnitude less than the scale of GCM computational grids. Consequently, convection is represented by simplified schema that approximate how the convection clouds might affect the grid values of the larger scale flow.

A number of studies have used the CMIP database to investigate the way processes associated with convection were being represented. In 2006, Isaac Held and Brian Soden examined the response of the hydrological cycle of models in the 2007 CMIP3 (AR4, the Fourth Assessment Report) database to warming under given carbon dioxide forcing scenarios.54 A common feature identified was the systematic reduction in vertical convective mass transport as model temperature rose under carbon dioxide forcing. As model temperature rose, the column water-vapor specific humidity increased according to the Clausius–Clapeyron relation (about 7 percent per degree Celsius),55 but the surface evaporation rate increased more slowly, on average at less than 2 percent per degree Celsius. The different rates meant that as temperature rose, the necessary increase in vertical energy transport through the cloud base could be achieved more efficiently. That is, the same or greater vertical energy transport could be achieved with reduced vertical mass transport through the cloud base.
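
The argument can be sketched by assuming, in the spirit of Held and Soden's analysis, that the convective mass flux scales as precipitation divided by the boundary-layer specific humidity; the rates below are the approximate values quoted above.

```python
# If precipitation (and hence the latent energy carried through cloud base)
# rises by ~2% per degree while the water vapour carried per unit mass rises by
# ~7% per degree, the convective mass flux needed to do the job falls by
# roughly 5% per degree of warming.
CC_RATE = 0.07        # Clausius-Clapeyron increase in specific humidity per K
PRECIP_RATE = 0.02    # modelled increase in precipitation/evaporation per K

def mass_flux_change(warming_k):
    """Fractional change in convective mass flux, taking M ~ P / q."""
    precip_factor = (1.0 + PRECIP_RATE) ** warming_k
    humidity_factor = (1.0 + CC_RATE) ** warming_k
    return precip_factor / humidity_factor - 1.0

for dt in (1.0, 2.0, 3.0):
    print(f"{dt:.0f} K warming -> convective mass flux change {mass_flux_change(dt)*100:+.1f}%")
```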

In 2002, Junye Chen et al. analyzed atmospheric circulation data covering a period of the earth’s warming from 1985 to 2000.56 They demonstrated that the intensity of the Hadley cell circulation actually increased as surface temperatures warmed.57 This was not what the models predicted. Moreover, their study concluded that the upward motion of the circulation across equatorial convective regions intensified, and these regions moistened; by contrast, the equatorial and subtropical regions that were further from areas of active convection became drier, and cloud cover decreased.

The observations point to an increase in convective mass flow as tropical temperatures warmed. This is inconsistent with GCMs. One possible explanation for the different responses by GCMs and the atmosphere is that under global warming, the areas with active convection decreased, and those of subsiding air increased. This reduction in the overall area occupied by active convection may occur, as evidenced by the more intense regions of convection, but does not satisfactorily explain the observed increase in vertical mass transport by convection (the overall intensification of the Hadley cell circulation) occurring with warming.

In 2008, Ingo Richter and Shang-Ping Xie used the CMIP3 model data set to better understand the apparently low rate of increase of surface evaporation with warming.58 In GCMs, the rate of evaporation is commonly calculated by a bulk aerodynamic formula, where the rate of evaporation varies with surface wind speed, boundary layer stability, and the vertical gradient of specific humidity. A consistent pattern of decreasing surface wind speed, increasing boundary-layer stability, and a reduced vertical gradient of specific humidity was noted across the GCMs. Each of these factors has a tendency to suppress the evaporation response of the model as surface temperature increases.
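
A minimal sketch of a bulk aerodynamic formula of this kind is given below; the transfer coefficient and the meteorological values are typical illustrative numbers rather than those of any particular GCM.

```python
import math

RHO_AIR = 1.2        # air density near the surface, kg/m3
C_E = 1.3e-3         # dimensionless bulk transfer coefficient for moisture (assumed)
L_VAP = 2.5e6        # latent heat of vaporisation, J/kg

def saturation_specific_humidity(temp_c, pressure_hpa=1013.0):
    """Approximate saturation specific humidity over water (Tetens-type formula)."""
    e_sat = 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))   # hPa
    return 0.622 * e_sat / (pressure_hpa - 0.378 * e_sat)

def latent_heat_flux(wind_speed, sst_c, air_q):
    """Evaporative heat flux, W/m2: rho * Ce * U * (q_sat(SST) - q_air) * L."""
    q_sat = saturation_specific_humidity(sst_c)
    return RHO_AIR * C_E * wind_speed * (q_sat - air_q) * L_VAP

# Tropical example: 7 m/s wind, 29 C sea surface, air at 80% of saturation.
q_air = 0.8 * saturation_specific_humidity(29.0)
print(f"latent heat flux: {latent_heat_flux(7.0, 29.0, q_air):.0f} W/m2")
# Weaker winds, a more stable boundary layer, or a smaller humidity gradient all
# suppress the evaporation response, as the text notes.
```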

The characteristically low rate of increase of evaporation with warming exhibited by the GCMs thus cannot be accepted as typical of the climate system. However, data for identifying what might be a true value are not available. Empirical estimates of global evaporation rates have been made using precipitation as a proxy. The average residence time of water vapor in the atmosphere is about twelve days; over periods longer than a month, evaporation is equivalent to precipitation, and trends in both should be similar. However, precipitation is only measured over populated land areas.
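
A rough check of the residence-time figure, using assumed round values for global precipitable water and a plausible range of global mean precipitation:

```python
# Residence time ~ column water vapour / precipitation rate.
column_water_vapour = 25.0      # kg/m2, approximate global mean precipitable water
for precip_mm_per_day in (2.0, 2.5, 3.0):       # plausible global mean precipitation rates
    days = column_water_vapour / precip_mm_per_day   # 1 mm/day of rain = 1 kg/m2 per day
    print(f"{precip_mm_per_day:.1f} mm/day -> residence time ~{days:.0f} days")
```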

Algorithms have been developed for estimating precipitation rates based on satellite-derived data. Such satellites began functioning in 1979, but it remains uncertain whether these algorithms correctly identify precipitation in prevailing weather systems, and for this reason, it remains unclear whether such estimates are useful for climatological purposes. In 2007, Frank Wentz et al. used just such algorithms to relate the rate of global precipitation (and hence evaporation) to warming over the period since 1979.59 Their estimate was six percent per degree Celsius of temperature increase, about three times greater than the rate GCMs returned.

Resolving the rate of increase of evaporation with temperature is of fundamental importance in identifying the true response of global temperature to carbon dioxide forcing because, as noted previously, the partitioning of surface energy loss between evaporation and the other exchange processes regulates surface temperature.60 The greater the increase in the rate of evaporation with surface temperature, the less the surface temperature response to external forcing. If the GCMs are returning a rate of evaporation increase with temperature that is three times too low, then the sensitivity to carbon dioxide forcing could be commensurately too high.
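
A toy surface energy balance illustrates the direction of the effect; the forcing, the non-evaporative damping term, and the mean latent heat flux are assumed round numbers, not model output.

```python
# The warming needed to restore the surface balance shrinks as the evaporative
# response per degree of warming grows.
SURFACE_FORCING = 4.0        # W/m2, assumed perturbation to the surface budget
OTHER_DAMPING = 4.0          # W/m2 per K lost via radiation and sensible heat (assumed)
MEAN_LATENT_FLUX = 80.0      # W/m2, approximate global mean evaporative heat flux

def equilibrium_warming(evap_increase_per_k):
    """Warming at which extra evaporative plus other losses offset the forcing."""
    evap_damping = MEAN_LATENT_FLUX * evap_increase_per_k   # W/m2 per K
    return SURFACE_FORCING / (OTHER_DAMPING + evap_damping)

for rate in (0.02, 0.06):    # GCM-like ~2%/K versus the satellite-era ~6%/K estimate
    print(f"evaporation response {rate*100:.0f}%/K -> warming {equilibrium_warming(rate):.2f} K")
```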

Conclusion

For nearly half a century, computer models have been the basic tool used to predict the response of the earth’s climate system to forcing influences, both natural and anthropogenic. Despite the evolution in development from the limited AGCMs to the more encompassing GCMs, the apparent equilibrium climate sensitivity to changing atmospheric carbon dioxide concentration has changed little over time. Some view this consistency as a vindication of the computer models.

The theoretical and structural limitations of computer models, falling within three broad categories, are sufficient to raise serious doubts about such a conclusion. The grid spacings of the ocean and atmosphere components do not adequately resolve smaller scales of motion that are important for heat transport from the tropics to polar regions. The internal variability of the climate system has not been identified from model studies and may well be significant in relation to climate variability on longer timescales. And the representations of energy exchanges associated with processes occurring below the resolution of the grid spacing rely on approximations and assumptions of varying justification. Such limitations are particularly relevant in relation to clouds and their interactions with both the radiation fields and the hydrological cycle.

Nearly three decades after the Villach Conference, there is still no empirical evidence that climate sensitivity to carbon dioxide concentration is of the magnitude that computer models have consistently projected. The apparent ability of GCMs to represent twentieth-century global average temperature change is a poor benchmark, given the uncertainties in the magnitudes of natural and anthropogenic forcing parameters and the unknown contribution of internal variability. Without resolving the role of internal variability, scientists compiling the recent IPCC assessment reports are unable to sustain the claim that most of the warming of the second half of the twentieth century is due to human activities. Moreover, attribution and sensitivity should not be confused. Even if the IPCC’s claimed attribution were proven correct, this in itself would not imply high sensitivity.

The real sensitivity of climate to anthropogenic activity must remain an open question. The complexity of the climate system and the importance of energy exchange processes on scales below that of the computational grids in use, with their necessary approximations and assumptions, mean that in their present state of development, GCMs are an inadequate tool to resolve the question of sensitivity or to project future climate states. Empirical evidence suggests that the shortcomings of GCMs, particularly in relation to their representation of the hydrological cycle, have exaggerated the potential for climate change due to anthropogenic activities.


  1. In particular, the decade-long drought in the Sahel region of West Africa, but also increasing destruction and loss of life resulting from hurricanes, typhoons, and other severe meteorological events that occurred during the 1950s and 1960s. 
  2. WMO, Report of the International Conference on the Assessment of the Role of Carbon Dioxide and of Other Greenhouse Gases in Climate Variations and Associated Impacts, Villach, Austria, 9–15 October 1985 (Geneva, Switzerland: WMO, 1986). 
  3. WMO, Report of the International Conference on the Assessment of the Role of Carbon Dioxide and of Other Greenhouse Gases in Climate Variations and Associated Impacts, Villach, Austria, 9–15 October 1985 (Geneva, Switzerland: WMO, 1986). 
  4. Jean-Baptiste-Joseph Fourier, “Mémoire sur la température du globe terrestre et des espaces planétaires (Memoir on the Temperature of the Earth and Planetary Space),” Mémoires de l'Académie Royale des Sciences de l'Institut de France VII (1827): 570–604. 
  5. John Tyndall, “Journals of John Tyndall Vol. 3, 1855–1872,” (Unpublished Tyndall Collection, Journal 8a, Royal Institution, London): 36–55. 
  6. Svante Arrhenius, “On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground,” Philosophical Magazine and Journal of Science, series 5, vol. 41, no. 251 (1896): 237–76. 
  7. Milutin Milankovitch, Théorie mathématique des phénomènes thermiques produits par la radiation solaire (Mathematical Theory of Heat Phenomena Produced by Solar Radiation), (Paris: Gauthier-Villars, 1920). 
  8. The earth’s rotation axis has a precession cycle of about 26,000 years; the earth’s obliquity—the angle between rotational and orbital axes—varies between 22.1° and 24.5° over a cycle of about 40,000 years. The earth’s eccentricity—a measure of the departure from circularity of earth’s orbit around the sun—has a cycle of about 100,000 years. 
  9. Guy Stewart Callendar, “The Artificial Production of Carbon Dioxide and its Influence on Temperature,” Quarterly Journal of the Royal Meteorological Society 64, no. 275 (1938): 223–40. 
  10. Herbert Riehl and Joanne Simpson Malkus, “On the Heat Balance in the Equatorial Trough Zone,” Geophysica 6, nos. 3–4 (1958): 503–38. 
  11. The earth’s surface energy budget is the net flow of energy at the surface, and includes: energy received in the form of solar radiation; infrared radiation emitted back toward the surface by clouds and atmospheric greenhouse gases; energy lost in the form of outgoing emitted infrared radiation; and heat and latent energy of evaporation that are exchanged with the atmosphere at the surface-atmosphere interface. 
  12. Radiation to space is directly affected by changes to the concentration of atmospheric carbon dioxide and other greenhouse gases. When atmospheric concentration increases, the altitude from which radiation emanates rises; the temperature at the higher altitude is lower. The average temperature of the layer regulates the magnitude of emission to space. Thus higher concentrations reduce emissions to space. 
  13. C.H.B. Priestley, “The Limitation of Temperature by Evaporation in Hot Climates,” Agricultural Meteorology 3, Issues 3–4 (1966): 241–46. 
  14. Priestley noted that for tropical ocean surfaces with profuse evaporation, the surface temperature rarely exceeds 30°C; within equatorial rainforests and areas with more limited evapotranspiration, the temperature is regulated to about 35°C; and in open tropical grasslands and deserts, with relatively dry surfaces, temperatures can exceed 50°C. 
  15. The theoretical meteorologist Norman Phillips was the first to simulate the general circulation of the atmosphere using a numerical model. This type of model came to be called a General Circulation Model (GCM). 
  16. The Courant–Friedrichs–Lewy condition. 
  17. Syukuro Manabe, “Carbon Dioxide and Climatic Change,” in Theory of Climate, Advances in Geophysics 25, ed. Barry Saltzman (New York: Academic Press, 1983), 39–80. 
  18. The heat content of the surface mixed layer of the oceans dominates that of the atmosphere. 
  19. Modelers of the time claimed that the AGCMs were capable of reproducing the change to equilibrium climate (i.e., global average temperature) that would result from a doubling of atmospheric carbon dioxide. 
  20. WMO, Report of the International Conference on the Assessment of the Role of Carbon Dioxide and of Other Greenhouse Gases in Climate Variations and Associated Impacts, Villach, Austria, 9–15 October 1985 (Geneva, Switzerland: WMO, 1986). 
  21. John Houghton, Geoffrey Jenkins, and James Ephraums, eds., Climate Change: The IPCC Scientific Assessment (Cambridge: Cambridge University Press, 1990). 
  22. John Houghton, Geoffrey Jenkins, and James Ephraums, eds., Climate Change: The IPCC Scientific Assessment (Cambridge: Cambridge University Press, 1990), xii. 
  23. United Nations, “United Nations Framework Convention on Climate Change,” May 9, 1992. 
  24. W. Lawrence Gates et al., “Climate Models – Evaluation,” in Climate Change 1995: The Science of Climate Change, eds. John Houghton et al. (Cambridge: Cambridge University Press, 1996), 237. 
  25. W. Lawrence Gates et al., “Climate Models – Evaluation,” in Climate Change 1995: The Science of Climate Change, eds. John Houghton et al. (Cambridge: Cambridge University Press, 1996), 236. 
  26. John Houghton et al., eds., Climate Change 1995: The Science of Climate Change (Cambridge: Cambridge University Press, 1996). 
  27. John Houghton et al., eds., Climate Change 1995: The Science of Climate Change (Cambridge: Cambridge University Press, 1996), 35. 
  28. Arie Kattenburg et al., “Climate Models – Projections of Future Climate,” in Climate Change 1995: The Science of Climate Change, eds. John Houghton et al. (Cambridge: Cambridge University Press, 1996), 320. 
  29. Ian Castles and David Henderson, “Economics, Emissions Scenarios and the Work of the IPCC,” Energy and Environment 14, no. 4 (2003): 415–35. 
  30. Preface to John Houghton et al., eds., Climate Change 1995: The Science of Climate Change (Cambridge: Cambridge University Press, 1996), 6. 
  31. The Kyoto Protocol to the UN Framework Convention on Climate Change was adopted in Kyoto, Japan, on December 11, 1997, and entered into force on February 16, 2005. 
  32. For the climate system to be in a steady state it is necessary that, globally averaged at the top of the atmosphere, the net incoming solar radiation is offset by the emission of an equivalent amount of infrared energy to space. However, over the tropics, net incoming solar radiation greatly exceeds the infrared emission to space, while over higher latitudes, the emission of infrared radiation to space dominates. Continuous poleward transport of energy by the ocean and atmosphere circulations is necessary to achieve and maintain the global balance. 
  33. IPCC, Climate Change 2001: The Scientific Basis: Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change (Cambridge: Cambridge University Press, 2001). 
  34. Bryant McAvaney et al., “Model Evaluation,” in Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change, eds. John Houghton et al. (Cambridge: Cambridge University Press, 2001), 512. 
  35. Daniel Albritton et al., “Summary for Policymakers,” in Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change, eds. John Houghton et al. (Cambridge: Cambridge University Press, 2001), 10. 
  36. Bryant McAvaney et al., “Model Evaluation,” in Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change, eds. John Houghton et al. (Cambridge: Cambridge University Press, 2001), 499. 
  37. In broad terms, global temperature rose about 0.4°C between 1910 and 1940; the temperature then varied little until 1975; from then to the end of the century there was a further rise of about 0.4°C. 
  38. John Mitchell et al., “Detection of Climate Change and Attribution of Causes,” in Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change, eds. John Houghton et al. (Cambridge: Cambridge University Press, 2001), 730–731. 
  39. Daniel Albritton et al., “Summary for Policymakers,” in Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change, eds. John Houghton et al. (Cambridge: Cambridge University Press, 2001), 13. 
  40. IPCC, “Summary for Policymakers,” in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, eds. Thomas Stocker et al. (Cambridge: Cambridge University Press, 2013), 15. 
  41. Ten- to fifteen-year periods of temperature hiatus have been observed in long unforced GCM simulations. While discounted in earlier IPCC assessments, these hiatuses suggest a degree of internal variability. However, GCMs are not held to be capable of predicting the timing and magnitude of this internal variability. 
  42. Thomas Stocker et al., “Technical Summary,” in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, eds. Thomas Stocker et al. (Cambridge: Cambridge University Press, 2013), 61–63; Gregory Flato et al., “Evaluation of Climate Models,” in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, eds. Thomas Stocker et al. (Cambridge: Cambridge University Press, 2013), 769–771. 
  43. The occurrence of millennia-scale climate variability is controversial. Proxy records such as those based on tree rings, so it is claimed, suggest that for a thousand-year period in the northern hemisphere, temperatures varied little until human activity began altering atmospheric carbon dioxide concentrations in the twentieth century. 
  44. Matthew Charette and Walter Smith, “The Volume of Earth’s Ocean,” Oceanography 23, no. 2 (2010): 113. 
  45. Wallace Broecker, Stewart Sutherland, and Tsung-Hung Peng, “A Possible 20th-Century Slowdown of Southern Ocean Deep Water Formation,” Science 286, no. 5442 (1999): 1132–35. 
  46. Michael McPhaden and Dongxiao Zhang, “Slowdown of the Meridional Overturning Circulation in the Upper Pacific Ocean,” Nature 415 (2002): 603–608. 
  47. Monika Rhein et al., “Observations: Ocean,” in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, eds. Thomas Stocker et al. (Cambridge: Cambridge University Press, 2013), 264–265. 
  48. Thomas Stocker et al., “Technical Summary,” in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, eds. Thomas Stocker et al. (Cambridge: Cambridge University Press, 2013), 39. 
  49. Bruce Wielicki et al., “Evidence for Large Decadal Variability in the Tropical Mean Radiative Energy Budget,” Science 295, no. 5556 (2002): 841–44. 
  50. Deep tropical convection and accompanying cirrus clouds tend to be confined to preferred regions, including the Indonesian archipelago, southern and eastern Asia, the Amazon basin, and the Congo basin. 
  51. Bruce Wielicki et al., “Evidence for Large Decadal Variability in the Tropical Mean Radiative Energy Budget,” Science 295, no. 5556 (2002): 841–44. 
  52. Olivier Boucher et al., “Clouds and Aerosols,” in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, eds. Thomas Stocker et al. (Cambridge: Cambridge University Press, 2013), 588. 
  53. Richard Lindzen, Ming-Dah Chou, and Arthur Hou, “Does the Earth Have an Adaptive Infrared Iris?” Bulletin of the American Meteorological Society 82, no. 3 (2001): 417–32. 
  54. Isaac Held and Brian Soden, “Robust Responses of the Hydrological Cycle to Global Warming,” Journal of Climate 19 (2006): 5686–99. 
  55. The Clausius–Clapeyron relation links the increase in saturated specific humidity (the water vapor fraction in the air at 100-percent relative humidity) with air temperature. It is a near-exponential relationship increasing at about 7 percent per degree Celsius; a brief sketch of this scaling follows these notes. 
  56. Junye Chen, Barbara Carlson, and Anthony Del Genio, “Evidence for Strengthening the Tropical Circulation in the 1990s,” Science 295 (2002): 838–41. 
  57. The Hadley cell circulation is the mean meridional overturning circulation in both hemispheres of the tropical atmosphere. Mass ascent takes place in the deep convection clouds over equatorial regions and spreads poleward in the upper atmosphere. Compensating mass subsidence takes place over the subtropical regions. The circulation is completed by lower-atmosphere air moving equatorward: the trade winds. 
  58. Ingo Richter and Shang-Ping Xie, “Muted Precipitation Increase in Global Warming Simulations: A Surface Evaporation Perspective,” Journal of Geophysical Research 113, D24 (2008), doi:10.1029/2008JD010561. 
  59. Frank Wentz et al., “How Much More Rain Will Global Warming Bring?” Science 317, no. 5835 (2007): 233–35. 
  60. C.H.B. Priestley, “The Limitation of Temperature by Evaporation in Hot Climates,” Agricultural Meteorology 3 (1966): 241–46. 
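
The scaling of about 7 percent per degree Celsius cited in note 55 can be made explicit. The following is a minimal sketch, not drawn from the essay itself, assuming a latent heat of vaporization of roughly 2.5 × 10⁶ J kg⁻¹ and a specific gas constant for water vapor of roughly 461 J kg⁻¹ K⁻¹:

\[
\frac{1}{e_s}\frac{de_s}{dT} \;=\; \frac{L_v}{R_v T^2} \;\approx\; \frac{2.5\times 10^{6}}{461\times(280\ \mathrm{K})^{2}} \;\approx\; 0.07\ \mathrm{K^{-1}},
\]

where \(e_s\) is the saturation vapor pressure; at fixed total pressure, the saturated specific humidity increases at approximately the same fractional rate. Evaluated for surface temperatures between about 273 K and 300 K, the fraction ranges from roughly 0.06 to 0.074 per kelvin, consistent with the figure of about 7 percent per degree quoted in the note.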

William Kininmonth is the former head of Australia’s National Climate Center.


