Large extra dimensions

In particle physics and string theory (M-theory), the ADD model, also known as the model with large extra dimensions (LED), is a model framework that attempts to solve the hierarchy problem (why is gravity so weak compared to the electromagnetic force and the other fundamental forces?). The model tries to explain this by postulating that our universe, with its four dimensions (three spatial ones plus time), exists on a membrane in a higher-dimensional space. It is then suggested that the other forces of nature (the electromagnetic force, strong interaction, and weak interaction) operate within this membrane and its four dimensions, while the hypothetical gravity-bearing particle, the graviton, can propagate across the extra dimensions. This would explain why gravity is very weak compared to the other fundamental forces. The fundamental scale in the ADD model is of the order of a TeV, which makes it experimentally testable at current colliders, unlike many exotic extra-dimensional hypotheses in which the relevant scale lies near the Planck scale.

The model was proposed by Nima Arkani-Hamed, Savas Dimopoulos, and Gia Dvali in 1998.

One way to test the theory is to collide two protons in the Large Hadron Collider so that they interact and produce particles. If a graviton were formed in the collision, it could propagate into the extra dimensions, resulting in an imbalance of transverse momentum. Searches at the Large Hadron Collider have so far been inconclusive. However, the operating range of the LHC (13 TeV collision energy) covers only a small part of the predicted range in which evidence for LED would appear (a few TeV to $$10^{16}$$ TeV). This suggests that the theory might be more thoroughly tested with more advanced technology.

Proponents' views
Traditionally, in theoretical physics, the Planck scale is the highest energy scale, and all dimensionful parameters are measured in terms of it. There is a great hierarchy between the weak scale and the Planck scale, and explaining the ratio of the strengths of the weak force and gravity, $$G_F/G_N \approx 10^{32}$$, is the focus of much of beyond-Standard-Model physics. In models of large extra dimensions, the fundamental scale is much lower than the Planck scale. This works because the power law of gravity changes: for example, when there are two extra dimensions of size $$d$$, gravity falls off as $$1/r^4$$ at distances $$r \ll d$$ and as $$1/r^2$$ at distances $$r \gg d$$. If we want the fundamental scale to equal the next accelerator energy (1 TeV), we should take $$d$$ to be approximately 1 mm. For larger numbers of extra dimensions, with the fundamental scale fixed at 1 TeV, the extra dimensions become smaller, down to about 1 femtometer for six extra dimensions.
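
These sizes can be checked numerically. The sketch below assumes the simplest relation $$M_{Pl}^2 \sim M_D^{n+2} R^n$$ between the four-dimensional Planck mass $$M_{Pl}$$, the fundamental scale $$M_D$$ (taken as 1 TeV), and the size $$R$$ of $$n$$ equal extra dimensions; conventions differ by factors of $$2\pi$$, so only the orders of magnitude are meaningful:

```python
# Rough estimate of the extra-dimension size R in the ADD model, assuming the
# simplest relation M_Pl^2 ~ M_D^(n+2) * R^n (conventions vary by factors of 2*pi).
HBAR_C_GEV_M = 1.97327e-16   # hbar*c in GeV*m, to convert 1/GeV to meters
M_PLANCK_GEV = 1.22e19       # four-dimensional Planck mass in GeV
M_D_GEV = 1e3                # assumed fundamental scale: 1 TeV

def extra_dimension_size(n, m_d=M_D_GEV):
    """Size R (in meters) of n equal extra dimensions for fundamental scale m_d."""
    r_inv_gev = (M_PLANCK_GEV / m_d) ** (2.0 / n) / m_d  # R in units of 1/GeV
    return r_inv_gev * HBAR_C_GEV_M

print(extra_dimension_size(2))  # of order a millimeter
print(extra_dimension_size(6))  # of order tens of femtometers in this convention
```

With these illustrative conventions, two extra dimensions come out at the millimeter scale, and six at the femtometer scale to within a couple of orders of magnitude.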

By reducing the fundamental scale to the weak scale, the fundamental theory of quantum gravity, such as string theory, might be accessible at colliders such as the Tevatron or the LHC. There has been recent progress in generating large volumes in the context of string theory. Having the fundamental scale accessible also allows the production of black holes at the LHC, though there are constraints on the viability of this possibility at LHC energies. There are other signatures of large extra dimensions at high-energy colliders.

Many of the mechanisms used to explain problems in the Standard Model invoked very high energies. In the years after the publication of ADD, much of the work of the beyond-the-Standard-Model physics community went into exploring how these problems could be solved with a low scale of quantum gravity. Almost immediately, there was an alternative explanation to the see-saw mechanism for the neutrino mass: using extra dimensions as a new source of small numbers allowed for new mechanisms for understanding the masses and mixings of the neutrinos.

Another problem with a low scale of quantum gravity is the possible existence of proton-decay, flavor-violating, and CP-violating operators suppressed only by the TeV scale. These would be phenomenologically disastrous. Physicists quickly realized that there are novel mechanisms for obtaining the small numbers necessary to explain these very rare processes.

Opponents' views
In the traditional view, the enormous gap between the mass scales of ordinary particles and the Planck mass is reflected in the fact that virtual processes involving black holes or gravity are strongly suppressed. The suppression of these terms follows from the principle of renormalizability: for an interaction to be visible at low energy, its coupling must change only logarithmically as a function of the Planck scale. Nonrenormalizable interactions are weak only to the extent that the Planck scale is large.

Virtual gravitational processes conserve nothing except gauge charges, because black holes decay into anything with the same charge. It is therefore difficult to suppress interactions at the gravitational scale. One way to do so is by postulating new gauge symmetries. Another way to suppress these interactions in the context of extra-dimensional models is the "split fermion scenario" proposed by Arkani-Hamed and Schmaltz in their paper "Hierarchies without Symmetries from Extra Dimensions". In this scenario, the wavefunctions of particles bound to the brane have a finite width significantly smaller than the extra dimension, but their centers (e.g. of a Gaussian wave packet) can be dislocated along the extra dimension, in what is known as a "fat brane". When the extra dimension(s) are integrated out to obtain the effective coupling of higher-dimensional operators on the brane, the result is suppressed by the exponential of the square of the distance between the centers of the wave functions; a dislocation of only a few times the typical width of a wave function already yields a suppression by many orders of magnitude.
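
The strength of this suppression is easy to illustrate. Assuming the Gaussian overlap falls as $$e^{-(d/\sigma)^2}$$ for centers separated by a distance $$d$$, with $$\sigma$$ the wavefunction width (the exact normalization is model-dependent and assumed here for illustration):

```python
import math

# Illustrative split-fermion suppression: the effective brane coupling falls
# roughly as exp(-(d/sigma)^2) for wavefunction centers separated by d,
# where sigma is the wavefunction width (normalization assumed for illustration).
def overlap_suppression(d_over_sigma):
    return math.exp(-d_over_sigma ** 2)

for k in (1, 3, 5):
    print(k, overlap_suppression(k))
```

A separation of five widths already gives $$e^{-25} \approx 10^{-11}$$, a suppression of eleven orders of magnitude.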

In electromagnetism, the electron magnetic moment is described by perturbative processes derived in the QED Lagrangian:

$$\int \bar{\psi} \gamma^\mu \partial_\mu \psi + \tfrac{1}{4}F^{\mu\nu}F_{\mu\nu} + \bar{\psi}\, e\gamma^\mu A_\mu \psi \,$$

which is calculated and measured to one part in a trillion. But it is also possible to include a Pauli term in the Lagrangian:

$$A\, \bar{\psi}\, F^{\mu\nu} \sigma_{\mu\nu}\, \psi \,$$

and the magnetic moment would shift by an amount proportional to $$A$$. The reason the magnetic moment is correctly calculated without this term is that the coefficient $$A$$ has the dimension of inverse mass. The mass scale is at most the Planck mass, so with the usual Planck scale the effect of $$A$$ would only appear at the 20th decimal place.

Since the electron magnetic moment is measured so accurately, and since the scale at which it is measured is the electron mass, a term of this kind would be visible even if the Planck scale were only about $$10^9$$ electron masses, roughly 1,000 TeV. This is much higher than the proposed Planck scale in the ADD model.
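
A quick dimensional check of these two claims, treating the Pauli-term contribution as simply $$m_e/M$$ for a suppression scale $$M$$ (illustrative numbers only):

```python
# Dimensional estimate: a Pauli term suppressed by a mass scale M shifts the
# electron magnetic moment by roughly m_e / M (numbers are illustrative).
M_E_GEV = 0.000511        # electron mass in GeV
M_PLANCK_GEV = 1.22e19    # traditional Planck scale in GeV

print(M_E_GEV / M_PLANCK_GEV)     # ~4e-23: around the 20th decimal place
print(M_E_GEV / (1e9 * M_E_GEV))  # 1e-9: well above the ~1e-12 measurement precision
print(1e9 * M_E_GEV / 1e3)        # 10^9 electron masses in TeV: ~500, of order 1,000 TeV
```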

QED is not the full theory, and the Standard Model does not allow many possible Pauli terms. A good rule of thumb is that a Pauli term is like a mass term: in order to generate it, the Higgs must enter. But in the ADD model, the Higgs vacuum expectation value is comparable to the Planck scale, so the Higgs field can contribute to any power without suppression. One coupling which generates a Pauli term is the same as the electron mass term, except with an extra $$Y^{\mu\nu}\sigma_{\mu\nu}$$, where $$Y$$ is the U(1) gauge field. This operator is dimension-six, contains one power of the Higgs expectation value, and is suppressed by two powers of the Planck mass. It should start contributing to the electron magnetic moment at the sixth decimal place, and a similar term should contribute to the muon magnetic moment at the third or fourth decimal place.
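
The decimal-place estimates follow from the same dimensional reasoning. Assuming the dimension-six operator's coefficient is $$v/M^2$$, so that the shift in a lepton's moment is roughly $$m_\ell\, v/M^2$$, with both the Higgs expectation value $$v$$ and the quantum-gravity scale $$M$$ taken near 1 TeV as in the text:

```python
# Dimensional estimate of the dimension-six Pauli term's effect on lepton
# magnetic moments: shift ~ m_lepton * v / M^2, with v (Higgs expectation
# value) and M (quantum-gravity scale) both taken as ~1 TeV for illustration.
V_GEV = 1e3
M_GEV = 1e3
delta_e = 0.000511 * V_GEV / M_GEV**2   # electron shift: ~5e-7, sixth decimal place
delta_mu = 0.10566 * V_GEV / M_GEV**2   # muon shift: ~1e-4, third-fourth decimal place
print(delta_e, delta_mu)
```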

The neutrinos are massless only because the dimension-five operator $$\bar{L} H H L$$ does not appear. But neutrinos have a mass scale of approximately $$10^{-2}$$ eV, 14 orders of magnitude smaller than the Higgs expectation value of 1 TeV. This means that the term is suppressed by a mass $$M$$ such that

$$\frac{H^2}{M} = 0.01\,\text{eV}. \,$$

Substituting $$H \simeq 1$$ TeV gives $$M \simeq 10^{26}$$ eV $$\simeq 10^{17}$$ GeV. So this is where the neutrino masses suggest new physics: close to the traditional Grand Unification Theory (GUT) scale, a few orders of magnitude below the traditional Planck scale. The same term in a large extra dimension model would give the neutrino a mass in the MeV-GeV range, comparable to the masses of the other particles.
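
The arithmetic can be checked directly, taking $$H \simeq 1$$ TeV as in the text (illustrative numbers only):

```python
# Solving H^2 / M = 0.01 eV for the suppression scale M, with H taken as 1 TeV.
H_EV = 1e12                # Higgs expectation value of 1 TeV, expressed in eV
M_EV = H_EV**2 / 0.01      # scale required for a 0.01 eV neutrino mass

print(M_EV)                # ~1e26 eV
print(M_EV / 1e9)          # ~1e17 GeV, near the traditional GUT scale
```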

In this view, models with large extra dimensions miscalculate the neutrino masses by inappropriately assuming that the mass is due to interactions with a hypothetical right-handed partner. The only reason to introduce a right-handed partner is to produce neutrino masses in a renormalizable GUT. If the Planck scale is small so that renormalizability is no longer an issue, there are many neutrino mass terms which do not require extra particles.

For example, at dimension six there is a Higgs-free term coupling the lepton doublets to the quark doublets, $$\bar{L}L\bar{q}q$$, which is a coupling to the strong-interaction quark condensate. Even with a relatively low-energy pion scale, this type of interaction could conceivably give the neutrino a mass of size $$f_\pi^3/\text{TeV}^2$$, only a factor of $$10^7$$ below the pion condensate itself at 200 MeV. This would be some 10 eV of mass, about a thousand times bigger than what is measured.
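
A quick check of this estimate, with $$f_\pi \simeq 200$$ MeV assumed:

```python
# Evaluating the dimensional estimate f_pi^3 / TeV^2 for the neutrino mass.
F_PI_GEV = 0.2             # pion scale f_pi, taken as 200 MeV
TEV_GEV = 1e3

m_nu_ev = F_PI_GEV**3 / TEV_GEV**2 * 1e9   # convert GeV to eV
ratio = F_PI_GEV * 1e9 / m_nu_ev           # how far below f_pi this sits

print(m_nu_ev)   # ~8 eV, of order 10 eV
print(ratio)     # ~2.5e7, a factor of order 10^7
```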

This term also allows for lepton number violating pion decays, and for proton decay. In fact, in all operators with dimension greater than four, there are CP, baryon, and lepton-number violations. The only way to suppress them is to deal with them term by term, which nobody has done.

The popularity, or at least prominence, of these models may have been enhanced because they allow the possibility of black hole production at the LHC, which has attracted significant attention.

Empirical tests
Analyses of results from the Large Hadron Collider severely constrain theories with large extra dimensions.

In 2012, the Fermi-LAT collaboration published limits on the ADD model of large extra dimensions from astrophysical observations of neutron stars. If the unification scale is at a TeV, then for $$n < 4$$ the results imply that the compactification topology is more complicated than a torus, i.e., than all large extra dimensions (LED) having the same size. For flat LED of the same size, the lower limits on the unification scale are consistent with $$n \geq 4$$. The details of the analysis are as follows: a sample of six gamma-ray-faint neutron-star sources not reported in the first Fermi gamma-ray source catalog were selected as good candidates for this analysis, based on age, surface magnetic field, distance, and galactic latitude. Based on 11 months of data from Fermi-LAT, 95% CL upper limits on the size of extra dimensions $$R$$ were obtained from each source, as well as 95% CL lower limits on the (n+4)-dimensional Planck scale $$M_D$$. In addition, the limits from all of the analyzed neutron stars were combined statistically using two likelihood-based methods. The results give more stringent limits on LED than previously quoted from individual neutron-star sources in gamma rays, and for $$n < 4$$ they are also more stringent than current collider limits from the LHC.