Microwave Sounding Unit temperature measurements

Microwave Sounding Unit temperature measurement refers to temperature measurement using the Microwave Sounding Unit instrument and is one of several methods of measuring Earth's atmospheric temperature from satellites. Microwave measurements of the troposphere have been obtained since 1979, when they were included on NOAA weather satellites, starting with TIROS-N. By comparison, the usable balloon (radiosonde) record begins in 1958 but has less geographic coverage and is less uniform.

Microwave brightness measurements do not directly measure temperature. They measure radiances in various wavelength bands, which must then be mathematically inverted to obtain indirect inferences of temperature. The resulting temperature profiles depend on details of the methods that are used to obtain temperatures from radiances. As a result, different groups that have analyzed the satellite data have obtained different temperature trends. Among these groups are Remote Sensing Systems (RSS) and the University of Alabama in Huntsville (UAH). The satellite series is not fully homogeneous – the record is constructed from a series of satellites with similar but not identical instrumentation. The sensors deteriorate over time, and corrections are necessary for satellite drift in orbit. Particularly large differences between reconstructed temperature series occur at the few times when there is little temporal overlap between successive satellites, making intercalibration difficult.

Creation of the satellite temperature record
From 1979 to 2005 the microwave sounding units (MSUs) and since 1998 the Advanced Microwave Sounding Units on NOAA polar orbiting satellites have measured the intensity of upwelling microwave radiation from atmospheric oxygen. The intensity is proportional to the temperature of broad vertical layers of the atmosphere, as demonstrated by theory and direct comparisons with atmospheric temperatures from radiosonde (balloon) profiles.

Different frequencies sample different weighted ranges of the atmosphere, depending on the absorption depth (i.e., optical depth) of the microwaves through the atmosphere. To derive temperatures at lower altitudes and remove the stratospheric influence, researchers have developed synthetic products by subtracting signals at different altitudes and view angles, such as "2LT", which has a maximum at about 650 hPa. However, this process amplifies noise, increases inter-satellite calibration biases and enhances surface contamination.

Records have been created by merging data from nine different MSUs with AMSU data, each instrument having peculiarities that must be calculated and removed because they can have substantial impacts on the resulting trend. The process of constructing a temperature record from a radiance record is difficult, and some of the required corrections are as large as the trend itself.

Analysis technique


Upwelling radiance is measured at different frequencies; each frequency band samples a different weighted range of the atmosphere. Since the atmosphere is partially but not completely opaque, the brightness measured is an average across a band of atmosphere, depending on the penetration depth of the microwaves. The brightness temperature $$T_B$$ measured by satellite is given by:


 * $$T_B = W(0)T(0) + \int\limits_{0}^{TOA} W(z)T(z)\, dz $$

where $$W(0)$$ is the surface weight, $$T(0)$$ and $$T(z)$$ are the temperatures at the surface and at the atmospheric level $$z$$ and $$W(z)$$ is the atmospheric weighting function.

Both the surface and atmospheric weights are dependent on the surface emissivity $$e_S$$, the absorption coefficient $$\kappa(z)$$ and the Earth incidence angle $${\theta}$$; the surface weight is the product of $$e_S$$ and an attenuation factor:


 * $$ W(0) = e_Se^{-\tau (0, \infty) \sec\theta} $$

where the secant theta term accounts for the dependence of optical path length on the vertical angle, and $$\tau $$ is the optical depth:


 * $$\tau = \int\limits_{z_1}^{z_2} \kappa (z)\, dz $$

The atmospheric weighting functions $$W(z)$$ can be written as:


 * $$W(z) = \kappa(z)\sec\theta e^{-\tau (z, \infty) \sec\theta} + \kappa(z)\sec\theta e^{-\tau (0,z) \sec\theta}(1-e_S) e^{-\tau (0,\infty) \sec\theta}$$

The first term in this equation represents the radiation emitted upward from level $$z$$ and attenuated along the path to the top of the atmosphere (∞). The second term represents the radiation emitted downward from level $$z$$ to the surface (0) and then reflected by the surface (proportional to $$1-e_S$$) back toward the top of the atmosphere. The exact form of $$W(z)$$ depends on the temperature, water vapor and liquid water content of the atmosphere.
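As a purely illustrative check, the equations above can be evaluated numerically. The sketch below assumes an exponential absorption profile, a surface emissivity of 0.9 and a simple linear-lapse temperature profile; these are invented illustrative values, not actual MSU channel parameters. It verifies that the surface and atmospheric weights sum to roughly one and yields a brightness temperature between the stratospheric and surface temperatures.

```python
import math

# Illustrative sketch (not an operational retrieval): evaluate
# T_B = W(0)T(0) + integral of W(z)T(z) dz on a discrete grid.
# kappa, the scale height, e_S and the temperature profile are all
# ASSUMED values for illustration, not MSU channel parameters.

N, z_top = 3000, 30e3
dz = z_top / N
z = [i * dz for i in range(N)]
kappa = [2e-4 * math.exp(-zi / 8e3) for zi in z]   # absorption coeff [1/m]
e_S = 0.9                                          # assumed surface emissivity
sec = 1.0                                          # nadir view: sec(0) = 1

# tau(z, inf): optical depth from each level to the top of the atmosphere
tau_up = [0.0] * N
acc = 0.0
for i in range(N - 1, -1, -1):
    acc += kappa[i] * dz
    tau_up[i] = acc
tau_total = tau_up[0]                              # tau(0, inf)

# Surface weight and the two-term atmospheric weighting function above
W0 = e_S * math.exp(-tau_total * sec)
W = [kappa[i] * sec * math.exp(-tau_up[i] * sec)
     + kappa[i] * sec * math.exp(-(tau_total - tau_up[i]) * sec)
       * (1 - e_S) * math.exp(-tau_total * sec)
     for i in range(N)]

# Simple linear-lapse temperature profile capped at a 220 K stratosphere
T = [max(288.0 - 6.5e-3 * zi, 220.0) for zi in z]

T_B = W0 * T[0] + sum(Wi * Ti for Wi, Ti in zip(W, T)) * dz
total_weight = W0 + sum(W) * dz
print(f"total weight = {total_weight:.3f}, T_B = {T_B:.1f} K")
```

The weights sum to slightly less than one because a small fraction of the reflected downwelling radiation escapes the budget; the brightness temperature lands between the 220 K stratosphere and the 288 K surface, consistent with the weighted-average interpretation in the text.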

Channels
MSU Channel 1 is not used to monitor atmospheric temperature because it is too sensitive to emission from the surface; it is also heavily contaminated by water vapor and liquid water in the lowermost troposphere.

Channel 2 or TMT is broadly representative of the troposphere, albeit with a significant overlap with the lower stratosphere; the weighting function has its maximum at 350 hPa (corresponding to about 8 km altitude) and half-power at about 40 and 800 hPa (roughly 2–22 km).

Figure 3 (right) shows the atmospheric levels sampled by different wavelengths in the satellite measurements, where TLS, TTS, and TTT represent three different wavelengths. Note that the lowest measurement, TTT, includes brightness from both atmospheric and surface emission. TMT and TLT represent, respectively, the middle-troposphere channel and the lower-troposphere temperature calculated from it using an atmospheric model, as discussed below.

The T4 or TLS channel is representative of the temperature of the lower stratosphere, with a peak weighting function at around 17 km above the Earth's surface.


 * Calculation of lower troposphere temperature

In an attempt to derive data for lower altitudes and remove the stratospheric influence, several researchers have developed synthetic products that subtract the higher-altitude values from the lowest-altitude (TMT) measurement. Such a data-analysis technique depends on modeling the effect of altitude on temperature. However, this process amplifies noise, increases inter-satellite calibration biases and enhances surface contamination. Spencer and Christy developed the synthetic "2LT" (later renamed "TLT") product by subtracting signals at different view angles; this has a maximum at about 650 hPa. The 2LT product has gone through numerous versions as various corrections have been applied. Another such methodology was developed by Fu and Johanson: the TTT (Total Troposphere Temperature) channel is a linear combination of the TMT and TLS channels, TTT = 1.156·TMT − 0.153·TLS for the global average and TTT = 1.12·TMT − 0.11·TLS at tropical latitudes.
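The Fu and Johanson combination quoted above is simple enough to state directly in code. The sketch below hard-codes the coefficients from the text; the input brightness-temperature anomalies are invented purely for illustration.

```python
# Hedged sketch of the Fu and Johanson linear combination quoted above.
# The coefficients come from the text; the example inputs are invented.

def ttt_global(tmt: float, tls: float) -> float:
    """Total Troposphere Temperature, global-average coefficients."""
    return 1.156 * tmt - 0.153 * tls

def ttt_tropics(tmt: float, tls: float) -> float:
    """The same combination with the tropical-latitude coefficients."""
    return 1.12 * tmt - 0.11 * tls

# A warming TMT anomaly combined with a cooling TLS anomaly gives a
# larger tropospheric value, because the negative TLS coefficient
# removes the stratospheric (cooling) influence from the TMT channel.
print(round(ttt_global(0.20, -0.30), 4))   # 1.156*0.20 + 0.153*0.30 = 0.2771
```

Note that the coefficients sum to slightly more than one, compensating for the stratospheric weight subtracted from the TMT channel.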

Measurement corrections

 * Diurnal sampling

All the MSU instruments, and to a lesser extent the AMSUs, drift slowly from their Sun-synchronous equatorial crossing time, changing the local time observed by the instrument; the natural diurnal cycle may therefore be aliased into the long-term trend. The diurnal sampling correction is on the order of a few hundredths of a °C/decade for TLT and TMT.


 * Orbit decay

All polar orbiting satellites lose height after launch. The orbital decay is stronger during periods of elevated solar activity, when enhanced ultraviolet radiation warms the upper atmosphere and increases the frictional drag on the spacecraft.

The orbital decay changes the instrument's view angle relative to the surface and thus the observed microwave emissivity. Furthermore, because the long-term time series is constructed by sequential merging of the inter-calibrated satellite data, the error accumulates over time; the required correction is on the order of 0.1 °C/decade for TLT.


 * Calibration changes

Once every Earth scan, the MSU instruments use deep space (2.7 K) and on-board warm targets to make calibration measurements. However, as the spacecraft drifts through the diurnal cycle, the calibration target temperature may change due to varying solar shadowing effects; the correction is on the order of 0.1 °C/decade for TLT and TMT.
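Taken together, the correction magnitudes quoted in this section illustrate the earlier point that some required corrections are as large as the trend itself. The figures below are the approximate orders of magnitude stated in the text (plus the UAH TLT trend quoted later in the article), not a published error budget.

```python
# Approximate correction magnitudes quoted in this section (°C/decade).
# These are assumptions taken from the text, not a published error
# budget; the point is that their combined size is comparable to the
# measured trend itself.

corrections = {
    "diurnal sampling":    0.03,   # "a few hundredths" of a °C/decade
    "orbital decay":       0.10,   # "on the order of 0.1 °C/decade"
    "calibration changes": 0.10,   # "on the order of 0.1 °C/decade"
}

total = sum(corrections.values())
trend_tlt = 0.13   # UAH TLT trend quoted later in the article
print(f"combined correction magnitude = {total:.2f} °C/decade "
      f"(measured trend = {trend_tlt} °C/decade)")
```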

One widely reported satellite temperature record is that developed by Roy Spencer and John Christy at the University of Alabama in Huntsville (UAH). The record comes from a succession of different satellites, and problems with inter-calibration between the satellites are important, especially NOAA-9, which accounts for most of the difference between the RSS and UAH analyses. NOAA-11 played a significant role in a 2005 study by Mears et al. identifying an error in the diurnal correction that led to the 40% jump in Spencer and Christy's trend from version 5.1 to 5.2.

Trends
Records have been created by merging data from nine different MSUs, each with peculiarities (e.g., time drift of the spacecraft relative to the local solar time) that must be calculated and removed because they can have substantial impacts on the resulting trend.

The process of constructing a temperature record from a radiance record is difficult, and, as noted above, inter-calibration problems between successive satellites account for much of the difference between the various analyses. There are ongoing efforts to resolve the remaining differences between the satellite temperature datasets.

Comparison with surface trends
To compare the MSU retrievals to the trend from the surface temperature record it is most appropriate to derive trends for the part of the atmosphere nearest the surface, i.e., the lower troposphere. As discussed earlier, the lowest of the temperature retrievals, TLT, is not a direct measurement, but a value calculated by subtracting higher altitude brightness temperature from the lower measurements. The trends found from the UAH and the RSS groups, shown in the table below, are calculated by slightly different methods, and result in different values for the trends.

Using the T2 or TMT channel (which includes significant contributions from the stratosphere, which has cooled), Mears et al. of Remote Sensing Systems (RSS) find (through January 2017) a trend of +0.140 °C/decade. Spencer and Christy of the University of Alabama in Huntsville (UAH) find a smaller trend of +0.08 °C/decade.

In comparing these measurements to surface temperature models, it is important to note that the lower troposphere values retrieved from the MSU are a weighted average of temperatures over multiple altitudes (roughly 0 to 12 km), not a surface temperature (see TLT in figure 3 above). The results are thus not precisely comparable to surface temperature models.

Trends from the record
Another satellite temperature analysis is provided by the NOAA/NESDIS STAR Center for Satellite Application and Research, which uses simultaneous nadir overpasses (SNO) to remove satellite intercalibration biases, yielding more accurate temperature trends. The STAR-NOAA analysis finds a 1979–2016 trend of +0.129 °C/decade for the TMT channel.

Using an alternative adjustment to remove the stratospheric contamination, 1979–2011 trends of +0.14 °C/decade when applied to the RSS data set and +0.11 °C/decade when applied to the UAH data set were found.

A University of Washington analysis finds 1979–2012 trends of +0.13 °C/decade when applied to the RSS data set and +0.10 °C/decade when applied to the UAH data set.

Combined surface and satellite data
In 2013, Cowtan and Way suggested that global temperature averages based on surface temperature data had a possible source of bias due to incomplete global coverage, since the unsampled regions are not uniformly distributed over the planet's surface. They addressed this problem by combining the surface temperature measurements with satellite data to fill in the coverage. Over the period 1979–2016, combining the HadCRUT4 surface data with UAH satellite coverage, they found a global surface-warming trend of 0.188 °C/decade.

History of satellite temperature data interpretation
The early (1978 to early 2000s) disagreement between the surface temperature record and the satellite records was a subject of research and debate. A lack of warming then seen in the UAH retrieval trends over 1978–1998 was noted by Christy and Spencer and commented on in a 2000 report by the National Research Council and the 2001 IPCC Third Assessment Report.

Christy et al. (2007) claimed that tropical temperature trends from radiosondes matched most closely with their v5.2 UAH dataset. Furthermore, they asserted there was a discrepancy between RSS and sonde trends beginning in 1992, when the NOAA-12 satellite was launched.

In 1998 the UAH data had shown a cooling of 0.05 K per decade (at 3.5 km – mid to low troposphere). Wentz and Schabel at RSS, in their 1998 paper, showed this (along with other discrepancies) was due to the orbital decay of the NOAA satellites. Once the orbital changes had been allowed for, the data showed a 0.07 K per decade increase in temperature at this level of the atmosphere.

Another important critique of the early satellite record was its shortness—adding a few years on to the record or picking a particular time frame could change the trends considerably.

Through early 2005, even though they began with the same data, each of the major research groups had interpreted it with different results. Most notably, Mears et al. at RSS found +0.193 °C/decade for the lower troposphere up to July 2005, compared to the +0.123 °C/decade found by UAH for the same period.

There were ongoing efforts to resolve these differences. Much of the disparity in early results was resolved by the three papers in Science, 11 August 2005, which pointed out errors in the UAH 5.1 record and the radiosonde record in the tropics.

An alternative adjustment to remove the stratospheric contamination was introduced by Fu et al. (2004). After the correction, the vertical weighting function is nearly the same as that of the T2 (TMT) channel in the troposphere.

Another re-analysis, by Vinnikov et al. in 2006, found +0.20 °C per decade (1978–2005).

Analysis over a longer time period has resolved some, but not all, of the discrepancy in the data. The IPCC Fifth Assessment Report (2014) stated: "based on multiple independent analyses of measurements from radiosondes and satellite sensors it is virtually certain that globally the troposphere has warmed and the stratosphere has cooled since the mid-20th century. Despite unanimous agreement on the sign of the trends, substantial disagreement exists among available estimates as to the rate of temperature changes, particularly outside the NH extratropical troposphere, which has been well sampled by radiosondes", and concluded "Although there have been substantial methodological debates about the calculation of trends and their uncertainty, a 95% confidence interval of around ±0.1 °C per decade has been obtained consistently for both LT and MT (e.g., Section 2.4.4; McKitrick et al., 2010)."

Corrections to UAH data trends
As well as the correction by Wentz and Schabel, doubts had been raised as early as 2000 about the UAH analysis by the work of Prabhakara et al., which minimised errors due to satellite drift. They found a trend of 0.13 °C/decade, in reasonable agreement with surface trends.

Since the earliest release of results in the 1990s, a number of adjustments to the algorithm computing the UAH TLT dataset have been made. A table of the corrections can be found in the UAH satellite temperature dataset article.

Recent trend summary
To compare to the trend from the surface temperature record (+0.161±0.033 °C/decade from 1979 to 2012 according to NASA GISS) it is most appropriate to derive trends for the part of the atmosphere nearest the surface, i.e., the lower troposphere. Doing this, through December 2019:
 * the RSS reconstruction's linear temperature trend shows a warming of +0.208 °C/decade;
 * the UAH reconstruction's linear temperature trend (1979–2019) shows a warming of +0.13 °C/decade.

Comparison of data with climate models
For some time the only available satellite record was the UAH version, which (with early versions of the processing algorithm) showed a global cooling trend for its first decade. Since then, a longer record and a number of corrections to the processing have revised this picture, with both UAH and RSS measurements showing a warming trend.

A detailed analysis produced in 2005 by dozens of scientists as part of the US Climate Change Science Program (CCSP) identified and corrected errors in a variety of temperature observations, including the satellite data. Their report stated:
 * "Previously reported discrepancies between the amount of warming near the surface and higher in the atmosphere have been used to challenge the reliability of climate models and the reality of human induced global warming. Specifically, surface data showed substantial global-average warming, while early versions of satellite and radiosonde data showed little or no warming above the surface. This significant discrepancy no longer exists because errors in the satellite and radiosonde data have been identified and corrected. New data sets have also been developed that do not show such discrepancies."

The 2007 IPCC Fourth Assessment Report states:
 * "New analyses of balloon-borne and satellite measurements of lower- and mid-tropospheric temperature show warming rates that are similar to those of the surface temperature record and are consistent within their respective uncertainties, largely reconciling a discrepancy noted in the TAR."

Tropical Troposphere
Climate models predict that as the surface warms, so should the global troposphere. Globally, the troposphere (at the TLT altitudes measured by the MSU sounder) is predicted to warm about 1.2 times more than the surface; in the tropics, the troposphere should warm about 1.5 times more than the surface. However, the 2005 CCSP report noted that fingerprinting analyses of the data found that "Volcanic and human-caused fingerprints were not consistently identifiable in observed patterns of lapse rate change" (where "lapse rate" refers to the change in temperature with altitude). In particular, a possible inconsistency was noted in the tropics, the area in which tropospheric amplification should be most clearly seen. They stated:


 * "In the tropics, the agreement between models and observations depends on the time scale considered. For month-to-month and year-to-year variations, models and observations both show amplification (i.e., the month-to-month and year-to-year variations are larger aloft than at the surface). This is a consequence of relatively simple physics, the effects of the release of latent heat as air rises and condenses in clouds. The magnitude of this amplification is very similar in models and observations. On decadal and longer time scales, however, while almost all model simulations show greater warming aloft (reflecting the same physical processes that operate on the monthly and annual time scales), most observations show greater warming at the surface.


 * "These results could arise either because “real world” amplification effects on short and long time scales are controlled by different physical mechanisms, and models fail to capture such behavior; or because non-climatic influences remaining in some or all of the observed tropospheric data sets lead to biased long-term trends; or a combination of these factors. The new evidence in this Report favors the second explanation."

The most recent climate model simulations give a range of results for changes in global average temperature. Some models show more warming in the troposphere than at the surface, while a slightly smaller number of simulations show the opposite behavior. There is no fundamental inconsistency among these model results and observations at the global scale, with the trends now being similar.

Globally, most climate models used by the IPCC in preparation of the Fourth Assessment Report (2007) show slightly greater warming at the TLT level than at the surface (a difference of 0.03 °C/decade) for 1979–1999. By comparison, the GISS surface trend is +0.161 °C/decade for 1979 to 2012, while the lower troposphere trends calculated from satellite data by UAH and RSS are +0.130 °C/decade and +0.206 °C/decade, respectively.

The lower troposphere trend derived from UAH satellites (+0.128 °C/decade) is currently lower than both the GISS and Hadley Centre surface station network trends (+0.161 and +0.160 °C/decade respectively), while the RSS trend (+0.158 °C/decade) is similar. However, if the expected trend in the lower troposphere is indeed higher than the surface, then given the surface data, the troposphere trend would be around 0.194 °C/decade, making the UAH and RSS trends 66% and 81% of the expected value respectively.
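The amplification arithmetic in the paragraph above can be checked directly. Using the quoted surface trend and the expected global amplification factor of about 1.2, the expected lower-troposphere trend is 0.161 × 1.2 ≈ 0.193 °C/decade; the article rounds this to about 0.194, so the RSS fraction comes out at 81–82% depending on rounding.

```python
# Worked check of the amplification arithmetic quoted in the text above.
# All inputs are the values stated in the article.

surface_trend = 0.161   # GISS surface trend, °C/decade (1979–2012)
amplification = 1.2     # expected global TLT/surface warming ratio

expected_tlt = surface_trend * amplification   # 0.161 * 1.2 = 0.1932
uah, rss = 0.128, 0.158                        # quoted TLT trends

print(f"expected TLT trend = {expected_tlt:.3f} °C/decade")
print(f"UAH fraction of expected: {uah / expected_tlt:.0%}")
print(f"RSS fraction of expected: {rss / expected_tlt:.0%}")
```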

Reconciliation with climate models
While the satellite data now show global warming, there is still some difference between what climate models predict and what the satellite data show for warming of the lower troposphere, with the climate models predicting slightly more warming than what the satellites measure.

Both the UAH dataset and the RSS dataset have shown an overall warming trend since 1998, although the UAH retrieval shows slightly less warming than the RSS. In June 2017, RSS released v4 which significantly increased the trend seen in their data, increasing the difference between RSS and UAH trends.

Atmospheric measurements taken by a different satellite measurement technique, the Atmospheric Infrared Sounder on the Aqua satellite, show close agreement with surface data.