Luminosity

Luminosity is an absolute measure of radiated electromagnetic energy (light) per unit time, and is synonymous with the radiant power emitted by a light-emitting object. In astronomy, luminosity is the total amount of electromagnetic energy emitted per unit of time by a star, galaxy, or other astronomical object.

In SI units, luminosity is measured in joules per second, or watts. In astronomy, values for luminosity are often given in terms of the luminosity of the Sun, L⊙. Luminosity can also be given in terms of the astronomical magnitude system: the absolute bolometric magnitude (Mbol) of an object is a logarithmic measure of its total energy emission rate, while absolute magnitude is a logarithmic measure of the luminosity within some specific wavelength range or filter band.

In contrast, the term brightness in astronomy is generally used to refer to an object's apparent brightness: that is, how bright an object appears to an observer. Apparent brightness depends on both the luminosity of the object and the distance between the object and observer, and also on any absorption of light along the path from object to observer. Apparent magnitude is a logarithmic measure of apparent brightness. The distance determined by luminosity measures can be somewhat ambiguous, and is thus sometimes called the luminosity distance.

Measurement
When not qualified, the term "luminosity" means bolometric luminosity, which is measured either in SI units (watts) or in terms of solar luminosities. A bolometer is the instrument used to measure radiant energy over a wide band by absorption and measurement of heating. A star also radiates neutrinos, which carry off some energy (about 2% in the case of the Sun), contributing to the star's total luminosity. The IAU has defined a nominal solar luminosity of $3.828\times10^{26}$ W to promote publication of consistent and comparable values in units of the solar luminosity.

While bolometers do exist, they cannot be used to measure even the apparent brightness of a star because they are insufficiently sensitive across the electromagnetic spectrum and because most wavelengths do not reach the surface of the Earth. In practice bolometric magnitudes are measured by taking measurements at certain wavelengths and constructing a model of the total spectrum that is most likely to match those measurements. In some cases, the process of estimation is extreme, with luminosities being calculated when less than 1% of the energy output is observed, for example with a hot Wolf-Rayet star observed only in the infrared. Bolometric luminosities can also be calculated using a bolometric correction to a luminosity in a particular passband.

The term luminosity is also used in relation to particular passbands, such as a visual luminosity or K-band luminosity. These are not generally luminosities in the strict sense of an absolute measure of radiated power, but absolute magnitudes defined for a given filter in a photometric system. Several different photometric systems exist. Some, such as the UBV or Johnson system, are defined against photometric standard stars, while others, such as the AB system, are defined in terms of a spectral flux density.

Stellar luminosity
A star's luminosity can be determined from two stellar characteristics: size and effective temperature. The former is typically represented in terms of solar radii, R⊙, while the latter is represented in kelvins, but in most cases neither can be measured directly. To determine a star's radius, two other metrics are needed: the star's angular diameter and its distance from Earth. Both can be measured with great accuracy in certain cases, with cool supergiants often having large angular diameters, and some cool evolved stars having masers in their atmospheres that can be used to measure the parallax using VLBI. However, for most stars the angular diameter or parallax, or both, are far below our ability to measure with any certainty. Since the effective temperature is merely a number that represents the temperature of a black body that would reproduce the luminosity, it obviously cannot be measured directly, but it can be estimated from the spectrum.
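The radius step above is a small-angle calculation, R = θ·d/2. A minimal Python sketch (the function name and the Betelgeuse-like example values are illustrative, not measured figures from the text):

```python
import math

MAS_TO_RAD = math.pi / (180 * 3600 * 1000)  # milliarcseconds to radians
PC_TO_M = 3.0857e16                          # metres per parsec
R_SUN = 6.957e8                              # solar radius in metres

def stellar_radius_m(angular_diameter_mas, distance_pc):
    """Physical radius from angular diameter and distance (small angle: R = theta*d/2)."""
    theta = angular_diameter_mas * MAS_TO_RAD
    d = distance_pc * PC_TO_M
    return theta * d / 2

# Illustrative Betelgeuse-like values: ~42 mas angular diameter at ~150 pc
print(stellar_radius_m(42, 150) / R_SUN)  # several hundred solar radii
```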

An alternative way to measure stellar luminosity is to measure the star's apparent brightness and distance. A third component needed to derive the luminosity is the degree of interstellar extinction that is present, a condition that usually arises because of gas and dust present in the interstellar medium (ISM), the Earth's atmosphere, and circumstellar matter. Consequently, one of astronomy's central challenges in determining a star's luminosity is to derive accurate measurements for each of these components, without which an accurate luminosity figure remains elusive. Extinction can only be measured directly if the actual and observed luminosities are both known, but it can be estimated from the observed colour of a star, using models of the expected level of reddening from the interstellar medium.

In the current system of stellar classification, stars are grouped according to temperature, with the massive, very young and energetic Class O stars boasting temperatures in excess of 30,000 K while the less massive, typically older Class M stars exhibit temperatures less than 3,500 K. Because luminosity is proportional to temperature to the fourth power, the large variation in stellar temperatures produces an even vaster variation in stellar luminosity. Because the luminosity depends on a high power of the stellar mass, high mass luminous stars have much shorter lifetimes. The most luminous stars are always young stars, no more than a few million years for the most extreme. In the Hertzsprung–Russell diagram, the x-axis represents temperature or spectral type while the y-axis represents luminosity or magnitude. The vast majority of stars are found along the main sequence with blue Class O stars found at the top left of the chart while red Class M stars fall to the bottom right. Certain stars like Deneb and Betelgeuse are found above and to the right of the main sequence, more luminous or cooler than their equivalents on the main sequence. Increased luminosity at the same temperature, or alternatively cooler temperature at the same luminosity, indicates that these stars are larger than those on the main sequence and they are called giants or supergiants.

Blue and white supergiants are high luminosity stars somewhat cooler than the most luminous main sequence stars. A star like Deneb, for example, has a luminosity around 200,000 L⊙, a spectral type of A2, and an effective temperature around 8,500 K, meaning it has a radius around 203 R⊙. For comparison, the red supergiant Betelgeuse has a luminosity around 100,000 L⊙, a spectral type of M2, and a temperature around 3,500 K, meaning its radius is about 1,000 R⊙. Red supergiants are the largest type of star, but the most luminous are much smaller and hotter, with temperatures up to 50,000 K and more and luminosities of several million L⊙, meaning their radii are just a few tens of R⊙. For example, R136a1 has a temperature over 46,000 K and a luminosity of more than 6,100,000 L⊙ (mostly in the UV), yet its radius is only about 39 R⊙.
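The radii quoted above follow from inverting the Stefan–Boltzmann relation, L ∝ R²T⁴. A minimal Python sketch (the function name is my own; input values are taken from the examples above):

```python
import math

T_SUN = 5772.0  # K, nominal solar effective temperature

def radius_solar(lum_solar, teff_k):
    """Invert L = 4*pi*R^2*sigma*T^4 in solar units: R/R_sun = sqrt(L/L_sun) / (T/T_sun)^2."""
    return math.sqrt(lum_solar) / (teff_k / T_SUN) ** 2

print(round(radius_solar(200_000, 8_500)))     # Deneb-like: ~206 R_sun
print(round(radius_solar(100_000, 3_500)))     # Betelgeuse-like: ~860 R_sun (the rounder ~1,000 quoted above uses slightly different L and T)
print(round(radius_solar(6_100_000, 46_000)))  # R136a1-like: ~39 R_sun
```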

Radio luminosity
The luminosity of a radio source is measured in $W Hz^{−1}$, to avoid having to specify a bandwidth over which it is measured. The observed strength, or flux density, of a radio source is measured in janskys (Jy), where $1 Jy = 10^{−26} W m^{−2} Hz^{−1}$.

For example, consider a 10W transmitter at a distance of 1 million metres, radiating over a bandwidth of 1 MHz. By the time that power has reached the observer, the power is spread over the surface of a sphere with area $4πr^{2}$ or about $1.26×10^{13} m^{2}$, so its flux density is $10 / 10^{6} / (1.26×10^{13}) W m^{−2} Hz^{−1} = 8×10^{7} Jy$.
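This worked example can be checked numerically; a short Python sketch (the function name is illustrative):

```python
import math

JY = 1e-26  # W m^-2 Hz^-1

def flux_density_jy(power_w, bandwidth_hz, distance_m):
    """Flux density of an isotropic transmitter: S = P / B / (4*pi*r^2), in janskys."""
    return power_w / bandwidth_hz / (4 * math.pi * distance_m ** 2) / JY

# 10 W transmitter, 1 MHz bandwidth, 1 million metres away
print(f"{flux_density_jy(10, 1e6, 1e6):.1e} Jy")  # ~8.0e+07 Jy
```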

More generally, for sources at cosmological distances, a k-correction must be made for the spectral index α of the source, and a relativistic correction must be made for the fact that the frequency scale in the emitted rest frame is different from that in the observer's rest frame. So the full expression for radio luminosity, assuming isotropic emission, is $$L_{\nu} = \frac{S_{\mathrm{obs}} 4 \pi {D_{L}}^{2}}{(1+z)^{1+\alpha}}$$ where Lν is the luminosity in $W Hz^{−1}$, Sobs is the observed flux density in $W m^{−2} Hz^{−1}$, DL is the luminosity distance in metres, z is the redshift, and α is the spectral index (in the sense $$I \propto {\nu}^{\alpha}$$). In radio astronomy, assuming thermal emission, the spectral index is typically equal to 2.

For example, consider a 1 Jy signal from a radio source at a redshift of 1, at a frequency of 1.4 GHz. Ned Wright's cosmology calculator gives a luminosity distance for a redshift of 1 of 6701 Mpc = $2×10^{26}$ m, giving a radio luminosity of $10^{−26} × 4\pi(2×10^{26})^{2} / (1 + 1)^{(1 + 2)} = 6×10^{26} W Hz^{−1}$.

To calculate the total radio power, this luminosity must be integrated over the bandwidth of the emission. A common assumption is to set the bandwidth to the observing frequency, which effectively assumes the power radiated has uniform intensity from zero frequency up to the observing frequency. In the case above, the total power is $6×10^{26} × 1.4×10^{9} = 8.4×10^{35} W$. This is sometimes expressed in terms of the total (i.e. integrated over all wavelengths) luminosity of the Sun, which is $3.828×10^{26} W$, giving a radio power of $2.2×10^{9} L_{⊙}$.
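Putting the two steps together, a hedged Python sketch of the z = 1 example (constants and function names are my own; small differences from the rounded figures in the text are expected):

```python
import math

JY = 1e-26        # W m^-2 Hz^-1
L_SUN = 3.828e26  # W, nominal solar luminosity

def radio_luminosity(s_obs_jy, d_l_m, z, alpha):
    """L_nu = S_obs * 4*pi*D_L^2 / (1+z)^(1+alpha), assuming isotropic emission."""
    return s_obs_jy * JY * 4 * math.pi * d_l_m ** 2 / (1 + z) ** (1 + alpha)

# 1 Jy source at z = 1 (D_L ~ 2e26 m), thermal spectral index alpha = 2
l_nu = radio_luminosity(1.0, 2e26, 1, 2)
print(f"{l_nu:.1e} W/Hz")  # ~6.3e+26 W/Hz

# Total power, taking the bandwidth equal to the 1.4 GHz observing frequency
print(f"{l_nu * 1.4e9 / L_SUN:.1e} L_sun")  # roughly 2e9 solar luminosities
```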

Luminosity formulae


The Stefan–Boltzmann equation applied to a black body gives the value for luminosity for a black body, an idealized object which is perfectly opaque and non-reflecting: $$L = \sigma A T^4,$$ where A is the surface area, T is the temperature (in kelvins) and $σ$ is the Stefan–Boltzmann constant, with a value of $5.670374419\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}$.
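As a check, plugging solar values into this equation reproduces the solar luminosity; a minimal Python sketch:

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_luminosity(radius_m, teff_k):
    """L = sigma * A * T^4 for a sphere of the given radius."""
    return SIGMA * 4 * math.pi * radius_m ** 2 * teff_k ** 4

# Solar values: R = 6.957e8 m, T_eff = 5772 K -> close to 3.83e26 W
print(f"{blackbody_luminosity(6.957e8, 5772):.2e} W")
```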

Imagine a point source of light of luminosity $$L$$ that radiates equally in all directions. A hollow sphere centered on the point would have its entire interior surface illuminated. As the radius increases, the surface area will also increase, and the constant luminosity has more surface area to illuminate, leading to a decrease in observed brightness.

$$F = \frac{L}{A},$$ where
 * $$A$$ is the area of the illuminated surface.
 * $$F$$ is the flux at the illuminated surface.

The surface area of a sphere with radius r is $$A = 4\pi r^2$$, so for stars and other point sources of light: $$F = \frac{L}{4\pi r^2} \,,$$ where $$r$$ is the distance from the observer to the light source.
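Applying the inverse-square law with solar values reproduces the flux received at Earth, the "solar constant" of about 1361 W/m²; a short Python sketch:

```python
import math

L_SUN = 3.828e26  # W
AU = 1.496e11     # m, mean Earth-Sun distance

def flux(luminosity_w, distance_m):
    """F = L / (4*pi*r^2) for an isotropic point source."""
    return luminosity_w / (4 * math.pi * distance_m ** 2)

# Solar flux at Earth's distance, W/m^2
print(round(flux(L_SUN, AU)))  # 1361
```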

For stars on the main sequence, luminosity is also related to mass approximately as below: $$\frac{L}{L_{\odot}} \approx {\left ( \frac{M}{M_{\odot}} \right )}^{3.5}.$$
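A quick Python illustration of this approximate relation (exponent 3.5 as given above; real main-sequence exponents vary with mass):

```python
def main_sequence_luminosity(mass_solar):
    """Approximate mass-luminosity relation for main-sequence stars: L/L_sun ~ (M/M_sun)^3.5."""
    return mass_solar ** 3.5

print(round(main_sequence_luminosity(2.0), 1))  # 11.3: twice the Sun's mass, ~11x its luminosity
print(round(main_sequence_luminosity(0.5), 3))  # 0.088
```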

Relationship to magnitude
Luminosity is an intrinsic measurable property of a star independent of distance. The concept of magnitude, on the other hand, incorporates distance. The apparent magnitude is a measure of the diminishing flux of light as a result of distance, according to the inverse-square law. The Pogson logarithmic scale is used to measure both apparent and absolute magnitudes, the latter corresponding to the brightness of a star or other celestial body as it would appear if located at a distance of 10 parsecs. In addition to the brightness decrease from increased distance, there is an extra decrease of brightness due to extinction from intervening interstellar dust.

By measuring the width of certain absorption lines in the stellar spectrum, it is often possible to assign a certain luminosity class to a star without knowing its distance. Thus a fair measure of its absolute magnitude can be determined without knowing either its distance or the interstellar extinction.

In measuring star brightnesses, absolute magnitude, apparent magnitude, and distance are interrelated parameters—if two are known, the third can be determined. Since the Sun's luminosity is the standard, comparing these parameters with the Sun's apparent magnitude and distance is the easiest way to remember how to convert between them, although officially, zero point values are defined by the IAU.
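The link between the three is the standard distance-modulus relation, m − M = 5 log₁₀(d / 10 pc); a minimal Python sketch (ignoring extinction; the solar check values are approximate):

```python
import math

def absolute_magnitude(apparent_mag, distance_pc):
    """Distance modulus: M = m - 5*log10(d / 10 pc), neglecting extinction."""
    return apparent_mag - 5 * math.log10(distance_pc / 10)

# Sun check: apparent bolometric magnitude ~ -26.83 at 1 AU (~4.848e-6 pc)
print(round(absolute_magnitude(-26.83, 4.848e-6), 2))  # ~4.74
```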

The magnitude of a star, a unitless measure, is a logarithmic scale of observed visible brightness. The apparent magnitude is the observed visible brightness from Earth, which depends on the distance of the object. The absolute magnitude is the apparent magnitude at a distance of 10 parsecs; the absolute bolometric magnitude is therefore a logarithmic measure of the bolometric luminosity.

The difference in bolometric magnitude between two objects is related to their luminosity ratio according to: $$M_\text{bol1} - M_\text{bol2} = -2.5 \log_{10}\frac{L_\text{1}}{L_\text{2}}$$

where:
 * $$M_\text{bol1}$$ is the bolometric magnitude of the first object
 * $$M_\text{bol2}$$ is the bolometric magnitude of the second object
 * $$L_\text{1}$$ is the first object's bolometric luminosity
 * $$L_\text{2}$$ is the second object's bolometric luminosity
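A one-line check of the relation (a factor of 100 in luminosity corresponds to 5 magnitudes):

```python
import math

def bolometric_mag_difference(l1, l2):
    """M_bol1 - M_bol2 = -2.5 * log10(L1 / L2)."""
    return -2.5 * math.log10(l1 / l2)

# A source 100x more luminous is 5 magnitudes brighter (more negative M_bol)
print(bolometric_mag_difference(100, 1))  # -5.0
```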

The zero point of the absolute magnitude scale is actually defined as a fixed luminosity of $3.0128\times10^{28}$ W. Therefore, the absolute magnitude can be calculated from a luminosity in watts: $$M_\mathrm{bol} = -2.5 \log_{10} \frac{L_{*}}{L_0} \approx -2.5 \log_{10} L_{*} + 71.1974$$ where $L_{0}$ is the zero point luminosity $3.0128\times10^{28}$ W

and the luminosity in watts can be calculated from an absolute magnitude (although absolute magnitudes are often not measured relative to an absolute flux): $$L_{*} = L_0 \times 10^{-0.4 M_\mathrm{bol}}$$
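Both conversions can be sketched in a few lines of Python, using the IAU zero-point luminosity $L_0 = 3.0128\times10^{28}$ W (function names are my own):

```python
import math

L0 = 3.0128e28  # W, IAU zero-point luminosity of the absolute bolometric magnitude scale

def mbol_from_luminosity(l_watts):
    """M_bol = -2.5 * log10(L / L0)."""
    return -2.5 * math.log10(l_watts / L0)

def luminosity_from_mbol(mbol):
    """L = L0 * 10^(-0.4 * M_bol)."""
    return L0 * 10 ** (-0.4 * mbol)

L_SUN = 3.828e26
m = mbol_from_luminosity(L_SUN)
print(round(m, 2))                       # ~4.74 for the Sun
print(f"{luminosity_from_mbol(m):.3e}")  # round-trips to ~3.828e+26 W
```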