Loopholes in Bell tests

In experimental tests of Bell inequalities, known as Bell tests, there may be problems of experimental design or set-up that affect the validity of the experimental findings. These problems are often referred to as "loopholes". The purpose of such an experiment is to test whether nature can be described by a local hidden-variable theory, which would contradict the predictions of quantum mechanics.

The most prevalent loopholes in real experiments are the detection and locality loopholes. The detection loophole is opened when only a small fraction of the particles (usually photons) are detected in the experiment, making it possible to explain the data with local hidden variables by assuming that the detected particles are an unrepresentative sample. The locality loophole is opened when the detections are not made with a spacelike separation, making it possible for the result of one measurement to influence the other without contradicting relativity. In some experiments there may be additional defects that make "local realist" explanations of Bell test violations possible.

Although both the locality and detection loopholes had been closed in different experiments, a long-standing challenge was to close both simultaneously in the same experiment. This was finally achieved in three experiments in 2015. Regarding these results, Alain Aspect writes that "... no experiment ... can be said to be totally loophole-free," but he says the experiments "remove the last doubts that we should renounce local realism," and refers to examples of remaining loopholes as being "far fetched" and "foreign to the usual way of reasoning in physics."

Detection loophole
A common problem in optical Bell tests is that only a small fraction of the emitted photons are detected. It is then possible that the correlations of the detected photons are unrepresentative: although they show a violation of a Bell inequality, if all photons were detected the Bell inequality would actually be respected. This was first noted by Pearle in 1970, who devised a local hidden-variable model that faked a Bell violation by letting the photon be detected only if the measurement setting was favourable. The assumption that this does not happen, i.e., that the small sample is actually representative of the whole, is called the fair sampling assumption.
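How a setting-dependent detection rule can fake a Bell violation can be seen in a minimal simulation. The model below is an illustrative construction in the spirit of Pearle's, not his original one: the hidden variable carries a "target" setting pair and predetermined outcomes, and each station clicks only when its own (locally known) setting matches the target. Among the detected coincidences the CHSH value reaches 4, even though the model is fully local.

```python
import random

def sample_pair():
    # Hidden variable: a target setting pair (i, j) plus predetermined
    # outcomes chosen to give the CHSH-winning sign for that pair
    # (a*b = +1 except at settings (1, 1), where a*b = -1).
    i, j = random.randint(0, 1), random.randint(0, 1)
    a = random.choice([-1, 1])
    b = a if (i, j) != (1, 1) else -a
    return i, j, a, b

def chsh_of_detected(n=100_000):
    sums = {(x, y): [0, 0] for x in (0, 1) for y in (0, 1)}  # [sum of a*b, count]
    for _ in range(n):
        i, j, a, b = sample_pair()
        x, y = random.randint(0, 1), random.randint(0, 1)  # actual settings
        # Purely local detection rule: each station clicks only if its own
        # setting matches the target carried by the hidden variable, so a
        # coincidence is recorded only when both settings match.
        if x == i and y == j:
            sums[(x, y)][0] += a * b
            sums[(x, y)][1] += 1
    E = {k: s / c for k, (s, c) in sums.items()}
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

print(chsh_of_detected())  # 4.0 -- the detected subsample "violates" CHSH maximally
```

Each station detects only half of its particles, which is why this strategy fails once the detection efficiency is pushed above the bounds discussed below.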

To do away with this assumption it is necessary to detect a sufficiently large fraction of the photons. This is usually characterized in terms of the detection efficiency $$\eta$$, defined as the probability that a photodetector detects a photon that arrives at it. Garg and Mermin showed that, when using a maximally entangled state and the CHSH inequality, an efficiency of $$\eta > 2\sqrt2-2 \approx 0.83 $$ is required for a loophole-free violation. Later, Eberhard showed that, when using a partially entangled state, a loophole-free violation is possible for $$\eta > 2/3 \approx 0.67 $$, which is the optimal bound for the CHSH inequality. Other Bell inequalities allow for even lower bounds. For example, there exists a four-setting inequality which is violated for $$\eta > (\sqrt5-1)/2 \approx 0.62$$.
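One heuristic way to recover the Garg-Mermin number, sketched below under simplifying assumptions (symmetric efficiency $$\eta$$ on both sides, vanishing one-sided marginals, non-detections counted as the outcome +1), is to ask when the observed CHSH value still exceeds the local bound of 2. The functions are illustrative, not from any standard library.

```python
import math

S_Q = 2 * math.sqrt(2)  # Tsirelson bound, reached by a maximally entangled state

def S_observed(eta):
    # Assumed strategy: count every non-detection as the outcome +1.  With
    # vanishing marginals the mixed (one-click) terms average to zero, so each
    # correlator becomes eta^2 * E_quantum + (1 - eta)^2, and the CHSH sign
    # combination (+, +, +, -) turns the classical part into 2 * (1 - eta)^2.
    return eta**2 * S_Q + 2 * (1 - eta)**2

def threshold():
    # Bisect for the efficiency at which S_observed crosses the local bound 2.
    lo, hi = 0.5, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if S_observed(mid) > 2:
            hi = mid
        else:
            lo = mid
    return hi

print(threshold())           # ≈ 0.8284
print(2 * math.sqrt(2) - 2)  # the Garg-Mermin bound, same value
```

Solving $$\eta^2\, 2\sqrt2 + 2(1-\eta)^2 = 2$$ analytically gives $$\eta = 2/(\sqrt2+1) = 2\sqrt2-2$$, matching the numerical threshold.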

Historically, only experiments with non-optical systems, such as trapped ions, superconducting qubits, and NV centers, were able to reach high enough efficiencies to close this loophole. These experiments were not able to close the locality loophole, which is easy to close with photons. More recently, however, optical setups have managed to reach sufficiently high detection efficiencies by using superconducting photodetectors, and hybrid setups have managed to combine the high detection efficiency typical of matter systems with the ease of distributing entanglement at a distance typical of photonic systems.

Locality loophole
One of the assumptions of Bell's theorem is that of locality, namely that the choice of setting at one measurement site does not influence the result at the other. The motivation for this assumption is the theory of relativity, which prohibits communication faster than light. For this motivation to apply to an experiment, it needs to have spacelike separation between its measurement events. That is, the time that passes between the choice of measurement setting and the production of an outcome must be shorter than the time it takes for a light signal to travel between the measurement sites.
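The timing condition above amounts to a simple inequality, sketched here with illustrative numbers that are not taken from any particular experiment:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def spacelike_separated(distance_m, choice_to_outcome_s):
    # The whole local measurement -- from the random setting choice to the
    # recording of an outcome -- must finish before a light signal could
    # cross the distance to the other station.
    return choice_to_outcome_s < distance_m / C

# With stations 400 m apart, light needs about 1.33 microseconds to cross.
print(spacelike_separated(400, 1.0e-6))  # True: the measurement is fast enough
print(spacelike_separated(400, 2.0e-6))  # False: a light signal could arrive first
```

In practice the relevant interval runs from the output of the random number generator to the irreversible registration of the outcome, which is why fast switching and fast detectors are both needed.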

The first experiment that strove to respect this condition was Alain Aspect's 1982 experiment. In it the settings were changed fast enough, but deterministically. The first experiment to change the settings randomly, with the choices made by a quantum random number generator, was Weihs et al.'s 1998 experiment. Scheidl et al. improved on this further in 2010 by conducting an experiment between locations separated by 144 km.

Coincidence loophole
In many experiments, especially those based on photon polarization, pairs of events in the two wings of the experiment are only identified as belonging to a single pair after the experiment is performed, by judging whether or not their detection times are close enough to one another. This generates a new possibility for a local hidden-variable theory to "fake" quantum correlations: delay the detection time of each of the two particles by a larger or smaller amount, depending on some relationship between the hidden variables carried by the particles and the detector settings encountered at the measurement station.

The coincidence loophole can be ruled out entirely by working with a pre-fixed lattice of detection windows that are short enough that most pairs of events occurring in the same window originate from the same emission, and long enough that a true pair is not separated by a window boundary.
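A minimal sketch of such a pre-fixed lattice follows; the function name and timestamps are hypothetical. Because each event is assigned to the window $$\lfloor t/\tau \rfloor$$ independently of any other event, a hidden-variable strategy cannot shift events into or out of coincidence by nudging detection times relative to the partner event.

```python
from collections import defaultdict

def pair_by_fixed_windows(events, window_s):
    # Assign each event (wing, timestamp) to the window floor(t / window_s).
    # The lattice is fixed in advance, independent of the recorded data.
    buckets = defaultdict(list)
    for wing, t in events:
        buckets[int(t // window_s)].append((wing, t))
    # Keep only windows containing exactly one event from each wing.
    return [sorted(evs) for evs in buckets.values()
            if len(evs) == 2 and {w for w, _ in evs} == {"A", "B"}]

# Hypothetical timestamps in seconds:
events = [("A", 0.10), ("B", 0.12), ("A", 1.05), ("B", 2.30)]
print(pair_by_fixed_windows(events, window_s=0.5))
# [[('A', 0.1), ('B', 0.12)]] -- only the first two events share a window
```

Contrast this with the moving-window method, where an event is paired with whichever partner event lies within some distance of it, so that a setting-dependent delay can change which events count as coincidences.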

Memory loophole
In most experiments, measurements are repeatedly made at the same two locations. Under local realism, there could be effects of memory leading to statistical dependence between subsequent pairs of measurements. Moreover, physical parameters might be varying in time. It has been shown that, provided each new pair of measurements is done with a new random pair of measurement settings, neither memory nor time inhomogeneity has a serious effect on the experiment.

Superdeterminism
A necessary assumption to derive Bell's theorem is that the hidden variables are not correlated with the measurement settings. This assumption has been justified on the grounds that the experimenter has "free will" to choose the settings, and that such an assumption is necessary to do science in the first place. A (hypothetical) theory in which the choice of measurement is determined by the system being measured is known as superdeterministic.