Measure problem (cosmology)

The measure problem in cosmology concerns how to compute the relative abundances of different types of universes within a multiverse. It typically arises in the context of eternal inflation. Different approaches to calculating these ratios yield different results, and it is not clear which approach (if any) is correct.

Measures can be evaluated by whether they predict observed physical constants, as well as whether they avoid counterintuitive implications, such as the youngness paradox or Boltzmann brains. While dozens of measures have been proposed, few physicists consider the problem to be solved.

The problem
Infinite multiverse theories are becoming increasingly popular, but because they involve infinitely many instances of different types of universes, it is unclear how to compute the fractions of each type of universe. Alan Guth put it this way:
 * In a single universe, cows born with two heads are rarer than cows born with one head. [But in an infinitely branching multiverse] there are an infinite number of one-headed cows and an infinite number of two-headed cows. What happens to the ratio?

Sean M. Carroll offered another informal example:
 * Say there are an infinite number of universes in which George W. Bush became President in 2000, and also an infinite number in which Al Gore became President in 2000. To calculate the fraction N(Bush)/N(Gore), we need to have a measure – a way of taming those infinities. Usually this is done by “regularization.” We start with a small piece of universe where all the numbers are finite, calculate the fraction, and then let our piece get bigger, and calculate the limit that our fraction approaches.

Different procedures for computing the limit of this fraction yield wildly different answers.

One way to illustrate how different regularization methods produce different answers is to calculate the limiting fraction of positive integers that are even. Suppose the integers are ordered the usual way,


 * 1, 2, 3, 4, 5, 6, 7, 8, ...

At a cutoff of "the first five elements of the list", the fraction is 2/5; at a cutoff of "the first six elements", it is 1/2; as the cutoff grows, the fraction converges to 1/2. However, if the integers are ordered such that each odd number is followed by two consecutive even numbers,


 * 1, 2, 4, 3, 6, 8, 5, 10, 12, 7, 14, 16, ...

the limit of the fraction of integers that are even converges to 2/3 rather than 1/2.
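The two orderings above can be checked numerically. The sketch below (illustrative; the generator names are my own) computes the even fraction at a large cutoff for each ordering:

```python
from itertools import count, islice

def even_fraction(sequence, cutoff):
    """Fraction of even numbers among the first `cutoff` terms of a sequence."""
    terms = list(islice(sequence, cutoff))
    return sum(1 for x in terms if x % 2 == 0) / len(terms)

def natural_order():
    """1, 2, 3, 4, 5, ..."""
    return count(1)

def odd_then_two_evens():
    """1, 2, 4, 3, 6, 8, 5, 10, 12, ...: each odd number followed by two evens."""
    odds, evens = count(1, 2), count(2, 2)
    while True:
        yield next(odds)
        yield next(evens)
        yield next(evens)

# The same set of integers, regularized two ways, gives two different answers:
print(even_fraction(natural_order(), 99999))       # ~0.5
print(even_fraction(odd_then_two_evens(), 99999))  # ~0.6667
```

Both generators enumerate every positive integer exactly once; only the order of enumeration differs, yet the regularized "fraction of evens" disagrees.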

A popular way to decide what ordering to use in regularization is to pick the simplest or most natural-seeming method of ordering. Most would agree that the first sequence, ordered by increasing size of the integers, is the more natural one. Similarly, many physicists agree that the "proper-time cutoff measure" (below) seems the simplest and most natural method of regularization. Unfortunately, the proper-time cutoff measure seems to produce incorrect results.

The measure problem is important in cosmology because in order to compare cosmological theories in an infinite multiverse, we need to know which types of universes they predict to be more common than others.

Proper-time cutoff
The proper-time cutoff measure considers the probability $$P(\phi, t)$$ of finding a given scalar field $$\phi$$ at a given proper time $$t$$. During inflation, the region around a point grows like $$e^{3 H \Delta t}$$ in a small proper-time interval $$\Delta t$$, where $$H$$ is the Hubble parameter.

This measure has the advantage of being stationary, in the sense that probabilities remain the same over time in the limit of large $$t$$. However, it suffers from the youngness paradox, which makes it exponentially more probable that we would find ourselves in regions of high temperature, in conflict with what we observe. This is because regions that exited inflation later than ours spent more time undergoing runaway inflationary exponential growth. For example, observers in a universe 13.8 billion years old (our observed age) are outnumbered by observers in a universe 13.0 billion years old by a factor of $$10^{10^{60}}$$. This lopsidedness continues until the most numerous observers resembling us are "Boltzmann babies" formed by improbable fluctuations in the hot, very early universe. For this reason, physicists reject the simple proper-time cutoff as a failed hypothesis.
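The size of the youngness bias can be sketched with order-of-magnitude arithmetic. Assuming (my illustrative inputs, not values from the source) a Planck-scale inflationary Hubble rate $$H \sim 1/t_{\text{Planck}}$$, the volume factor $$e^{3 H \Delta t}$$ for a delay of $$\Delta t = 0.8$$ billion years reproduces a double exponential of roughly the quoted size:

```python
import math

# Illustrative, order-of-magnitude inputs (assumptions, not from the text):
t_planck = 5.4e-44          # Planck time in seconds
H = 1.0 / t_planck          # assumed Planck-scale inflationary Hubble rate, s^-1
delta_t = 0.8e9 * 3.156e7   # 0.8 billion years, in seconds

# Inflating volume (and hence observer count) grows like e^{3 H dt}; work in
# log10 because the ratio itself is far too large to represent directly.
log10_ratio = 3 * H * delta_t / math.log(10)
print(f"younger regions outnumber ours by ~10^(10^{math.log10(log10_ratio):.0f})")
```

With these assumed numbers the exponent comes out near $$10^{60}$$, matching the scale of the factor quoted above; a lower inflationary energy scale would give a smaller (but still enormous) double exponential.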

Scale-factor cutoff
Time can be parameterized in different ways than proper time. One choice is to parameterize by the scale factor of space $$a$$, or more commonly by $$\eta \sim \log a$$. Then a given region of space expands as $$e^{3\Delta \eta}$$, independent of $$H$$.

This approach can be generalized to a family of measures in which a small region grows as $$e^{3 H^\beta \Delta t_\beta}$$ for some $$\beta$$ and corresponding time slicing $$t_\beta$$. Every choice of $$\beta$$ yields a measure that is stationary at large times.

The scale-factor cutoff measure takes $$\beta = 0$$, which avoids the youngness paradox by not giving greater weight to regions that retain high energy density for long periods.

The predictions of this family are very sensitive to the choice of $$\beta$$: any $$\beta > 0$$ yields the youngness paradox, while any $$\beta < 0$$ yields an "oldness paradox" in which most life is predicted to exist in cold, empty space as Boltzmann brains rather than as the evolved creatures with orderly experiences that we seem to be.
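The $$\beta$$-dependence can be made concrete with a toy comparison of the growth weights $$e^{3 H^\beta \Delta t}$$ assigned to a high-$$H$$ and a low-$$H$$ region (a sketch with made-up numbers, not a physical calculation):

```python
def log_growth_weight(H, delta_t, beta):
    """log of the volume factor e^{3 H^beta * delta_t} in the beta-family of measures."""
    return 3.0 * (H ** beta) * delta_t

# Toy numbers: one region still inflating fast, one inflating slowly.
H_high, H_low, dt = 10.0, 1.0, 5.0

# beta = 1 (proper-time cutoff): the fast-inflating region is exponentially favored.
bias_proper = log_growth_weight(H_high, dt, beta=1) - log_growth_weight(H_low, dt, beta=1)

# beta = 0 (scale-factor cutoff): growth is independent of H, so no bias.
bias_scale = log_growth_weight(H_high, dt, beta=0) - log_growth_weight(H_low, dt, beta=0)

print(bias_proper)  # 135.0 -> a weight ratio of e^135 toward the high-H region
print(bias_scale)   # 0.0   -> equal weight, avoiding the youngness paradox
```

Any positive $$\beta$$ leaves some exponential preference for high-$$H$$ regions (youngness), while any negative $$\beta$$ reverses the sign of the bias (oldness); only $$\beta = 0$$ makes the weight independent of $$H$$.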

De Simone et al. (2010) consider the scale-factor cutoff measure to be a promising solution to the measure problem. This measure has also been shown to produce good agreement with observational values of the cosmological constant.

Stationary
The stationary measure proceeds from the observation that different processes achieve stationarity of $$P(\phi, t)$$ at different times. Thus, rather than comparing processes at a given time since the beginning, the stationary measure compares them in terms of the time since each process individually became stationary. For instance, different regions of the universe can be compared based on the time since star formation began.

Andrei Linde and coauthors have suggested that the stationary measure avoids both the youngness paradox and Boltzmann brains. However, the stationary measure predicts extreme (either very large or very small) values of the primordial density contrast $$Q$$ and the gravitational constant $$G$$, inconsistent with observations.

Causal diamond
Reheating marks the end of inflation. The causal diamond is the finite four-volume formed by intersecting the future light cone of an observer crossing the reheating hypersurface with the past light cone of the point where the observer has exited a given vacuum. Put another way, the causal diamond is
 * the largest swath accessible to a single observer traveling from the beginning of time to the end of time. The finite boundaries of a causal diamond are formed by the intersection of two cones of light, like the dispersing rays from a pair of flashlights pointed toward each other in the dark. One cone points outward from the moment matter was created after a Big Bang—the earliest conceivable birth of an observer—and the other aims backward from the farthest reach of our future horizon, the moment when the causal diamond becomes an empty, timeless void and the observer can no longer access information linking cause to effect.

The causal diamond measure multiplies the following quantities:
 * the prior probability that a world line enters a given vacuum
 * the probability that observers emerge in that vacuum, approximated as the difference in entropy between exiting and entering the diamond. ("[T]he more free energy, the more likely it is that observers will emerge.")
Different prior probabilities of vacuum types yield different results, and entropy production can be approximated as the number of galaxies in the diamond.

Watcher
The watcher measure imagines the world line of an eternal "watcher" that passes through an infinite number of Big Crunch singularities.

Guth–Vanchurin paradox
In all "cutoff" schemes for an expanding infinite multiverse, a finite percentage of observers reach the cutoff during their lifetimes. Under most schemes, if a current observer is still alive five billion years from now, then the later stages of their life must somehow be "discounted" by a factor of around two relative to their current stages. For such an observer, Bayes' theorem may appear to break down over this timescale due to anthropic selection effects; this hypothetical breakdown is sometimes called the "Guth–Vanchurin paradox". One proposed resolution is to posit a physical "end of time" that has a fifty percent chance of occurring in the next few billion years. Another, overlapping, proposal is to posit that an observer no longer physically exists once it passes outside a given causal patch, similar to models in which a particle is destroyed or ceases to exist when it falls through a black hole's event horizon. Guth and Vanchurin have pushed back on such "end of time" proposals, stating that although "(later) stages of my life will contribute (less) to multiversal averages" than earlier stages, the paradox need not be interpreted as a physical "end of time". The literature proposes at least five possible resolutions:


 * 1) Accept a physical "end of time"
 * 2) Reject that probabilities in a finite universe are given by relative frequencies of events or histories
 * 3) Reject calculating probabilities via a geometric cutoff
 * 4) Reject standard probability theories, and instead posit that "relative probability" is, axiomatically, the limit of a certain geometric cutoff process
 * 5) Reject eternal inflation

Guth and Vanchurin hypothesize that standard probability theories might be incorrect, which would have counterintuitive consequences.