Quantum limit

A quantum limit in physics is a limit on measurement accuracy at quantum scales. Depending on the context, the limit may be absolute (such as the Heisenberg limit), or it may only apply when the experiment is conducted with naturally occurring quantum states (e.g. the standard quantum limit in interferometry) and can be circumvented with advanced state preparation and measurement schemes.

The usage of the term standard quantum limit or SQL is, however, broader than just interferometry. In principle, any linear measurement of a quantum mechanical observable of a system under study that does not commute with itself at different times leads to such limits. In short, the cause is the Heisenberg uncertainty principle. In more detail, any measurement in quantum mechanics involves at least two parties, an Object and a Meter. The former is the system whose observable, say $$\hat x$$, we want to measure. The latter is the system we couple to the Object in order to infer the value of $$\hat x$$ of the Object by recording some chosen observable, $$\hat{\mathcal{O}}$$, of this system, e.g. the position of a pointer on a scale of the Meter. This, in a nutshell, is a model of most measurements in physics, known as indirect measurements. Any measurement is thus the result of an interaction, and an interaction acts both ways. Therefore, the Meter acts back on the Object during each measurement, usually via the quantity $$\hat{\mathcal{F}}$$ conjugate to the readout observable $$\hat{\mathcal{O}}$$, thus perturbing the value of the measured observable $$\hat x$$ and modifying the results of subsequent measurements. This is known as the quantum back action of the Meter on the system under measurement.

At the same time, quantum mechanics prescribes that the readout observable of the Meter has an inherent uncertainty, $$\delta\hat{\mathcal{O}}$$, additive to and independent of the value of the measured quantity $$\hat x$$. This is known as measurement imprecision or measurement noise. Because of the Heisenberg uncertainty principle, this imprecision cannot be chosen freely; it is linked to the back-action perturbation by the uncertainty relation:


$$\Delta\mathcal{O}\,\Delta\mathcal{F} \geqslant \hbar/2\,,$$

where $$\Delta a = \sqrt{\langle\hat a^2\rangle-\langle\hat a\rangle^2}$$ is the standard deviation of the observable $$\hat a$$ and $$\langle\hat a\rangle$$ stands for its expectation value in whatever quantum state the system is in. The equality is reached if the system is in a minimum uncertainty state. The consequence for our case is that the more precise the measurement, i.e. the smaller $$\Delta\mathcal{O}$$ is, the larger the perturbation the Meter exerts on the measured observable $$\hat x$$ will be. Therefore, the readout of the Meter will, in general, consist of three terms:


$$\hat{\mathcal{O}} = \hat x_{\mathrm{free}} + \delta \hat{\mathcal{O}} + \delta \hat x_{BA}[\hat{\mathcal{F}}]\,,$$

where $$\hat x_{\mathrm{free}}$$ is the value of $$\hat x$$ that the Object would have, were it not coupled to the Meter, and $$\delta\hat x_{BA}[\hat{\mathcal{F}}]$$ is the perturbation of $$\hat x$$ caused by the back-action force $$\hat{\mathcal{F}}$$. The uncertainty of the latter is proportional to $$\Delta\mathcal{F}\propto\Delta\mathcal{O}^{-1}$$. Thus, there is a limit to the precision one can reach in such a measurement, provided that $$\delta\hat{\mathcal{O}}$$ and $$\hat{\mathcal{F}}$$ are uncorrelated.
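This imprecision–back-action trade-off can be made concrete with a small numerical sketch in Python. The coupling constant and the scan range below are arbitrary illustrative choices, not tied to any specific measurement scheme: the total error, combining the readout imprecision with a back-action term scaling as $$\hbar/(2\Delta\mathcal{O})$$, always has a nonzero minimum.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def total_error(imprecision, coupling=1.0e15):
    """RMS sum of the readout imprecision and a back-action contribution
    that scales as hbar/(2*imprecision), per the uncertainty relation."""
    back_action = coupling * HBAR / (2.0 * imprecision)
    return math.hypot(imprecision, back_action)

# Scan the imprecision over several decades and locate the minimum error.
grid = [10 ** (-12 + 0.001 * n) for n in range(6000)]
best = min(grid, key=total_error)

# Analytically, the optimum sits at sqrt(coupling*hbar/2), where the
# minimal total error equals sqrt(coupling*hbar).
optimum = math.sqrt(1.0e15 * HBAR / 2.0)
```

Making the readout sharper than the optimum only inflates the back-action term, so the total error rises again on either side of the minimum.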

The terms "quantum limit" and "standard quantum limit" are sometimes used interchangeably. Usually, "quantum limit" is a general term which refers to any restriction on measurement due to quantum effects, while the "standard quantum limit" in any given context refers to a quantum limit which is ubiquitous in that context.

Displacement measurement
Consider a very simple measurement scheme which, nevertheless, embodies all the key features of a general position measurement. In the scheme shown in the Figure, a sequence of very short light pulses is used to monitor the displacement of a probe body $$M$$. The position $$x$$ of $$M$$ is probed periodically with time interval $$\vartheta$$. We assume the mass $$M$$ to be large enough to neglect the displacement inflicted by the pulses' regular (classical) radiation pressure in the course of the measurement process.



Then each $$j$$-th pulse, when reflected, carries a phase shift proportional to the value of the test-mass position $$x(t_j)$$ at the moment of reflection:

$$\hat{\phi}_j^{\mathrm{refl}} = \hat{\phi}_j - 2k_p\hat{x}(t_j)\,,$$
where $$k_p=\omega_p/c$$, $$\omega_p$$ is the light frequency, $$j=\dots,-1,0,1,\dots$$ is the pulse number and $$\hat{\phi}_j$$ is the initial (random) phase of the $$j$$-th pulse. We assume that the mean value of all these phases is equal to zero, $$\langle\hat{\phi}_j\rangle=0$$, and their root mean square (RMS) uncertainty $$(\langle\hat{\phi}_j^2\rangle-\langle\hat{\phi}_j\rangle^2)^{1/2}$$ is equal to $$\Delta\phi$$.

The reflected pulses are detected by a phase-sensitive device (the phase detector). An optical phase detector can be implemented using, e.g., homodyne or heterodyne detection schemes, or other such read-out techniques.

In this example, the light pulse phase $$\hat\phi_j$$ serves as the readout observable $$\hat{\mathcal{O}}$$ of the Meter. We suppose that the phase $$\hat{\phi}_j^{\mathrm{refl}}$$ measurement error introduced by the detector is much smaller than the initial uncertainty of the phases $$\Delta\phi$$. In this case, the initial uncertainty is the only source of the position measurement error:

$$\Delta x_{\mathrm{meas}} = \frac{\Delta\phi}{2k_p}\,.$$
For convenience, we renormalise the phase of the reflected pulse as the equivalent test-mass displacement:

$$\tilde{x}_j \equiv -\frac{\hat{\phi}_j^{\mathrm{refl}}}{2k_p} = \hat{x}(t_j) + \hat{x}_{\mathrm{fl}}(t_j)\,,$$

where

$$\hat{x}_{\mathrm{fl}}(t_j) = -\frac{\hat{\phi}_j}{2 k_p}$$

are the independent random values with RMS uncertainties $$\Delta x_{\mathrm{meas}} = \Delta\phi/(2k_p)$$.

Upon reflection, each light pulse kicks the test mass, transferring to it a back-action momentum equal to

$$\hat{p}_j^{\mathrm{b.a.}} = \hat{p}_j^{\mathrm{after}} - \hat{p}_j^{\mathrm{before}} = \frac{2}{c}\hat{\mathcal{W}}_j\,,$$
where $$\hat{p}_j^{\mathrm{before}}$$ and $$\hat{p}_j^{\mathrm{after}}$$ are the test-mass momentum values just before and just after the light pulse reflection, and $$\hat{\mathcal{W}}_j$$ is the energy of the $$j$$-th pulse, which plays the role of the back-action observable $$\hat{\mathcal{F}}$$ of the Meter. The major part of this perturbation is contributed by the classical radiation pressure:



$$\langle\hat{p}_j^{\mathrm{b.a.}}\rangle = \frac{2}{c}\mathcal{W} \,,$$

with $$\mathcal{W}$$ the mean energy of the pulses. This mean part can be neglected, for it can be either subtracted from the measurement result or compensated by an actuator. The random part, which cannot be compensated, is proportional to the deviation of the pulse energy:



$$\hat{p}^{\mathrm{b.a.}}(t_j) = \hat{p}_j^{\mathrm{b.a.}} - \langle\hat{p}_j^{\mathrm{b.a.}}\rangle = \frac{2}{c}\bigl(\hat{\mathcal{W}}_j - \mathcal{W}\bigr) \,,$$

and its RMS uncertainty is equal to

$$\Delta p_{\mathrm{b.a.}} = \frac{2}{c}\Delta\mathcal{W}\,,$$
with $$\Delta\mathcal{W}$$ the RMS uncertainty of the pulse energy.
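For a sense of scale, the radiation-pressure kick and its fluctuation follow directly from $$p = 2\mathcal{W}/c$$. In this small Python sketch the pulse energies (1 μJ mean, 1 nJ spread) are arbitrary illustrative values:

```python
C = 299792458.0  # speed of light, m/s

def radiation_pressure_kick(pulse_energy_j):
    """Momentum transferred to a perfectly reflecting mirror by a normally
    incident light pulse of the given energy: p = 2*W/c."""
    return 2.0 * pulse_energy_j / C

mean_kick = radiation_pressure_kick(1.0e-6)    # mean kick of a 1 uJ pulse
fluctuation = radiation_pressure_kick(1.0e-9)  # kick spread for dW = 1 nJ
```

The fluctuation scales linearly with the energy spread, which is why only $$\Delta\mathcal{W}$$, not the mean energy, enters the back-action uncertainty.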

Assuming the mirror is free (which is a fair approximation if the time interval between pulses is much shorter than the period of the suspended mirror's oscillations, $$\vartheta\ll T$$), one can estimate the additional displacement caused by the back action of the $$j$$-th pulse that will contribute to the uncertainty of the subsequent measurement by the $$(j+1)$$-th pulse a time $$\vartheta$$ later:



$$\hat x_{\mathrm{b.a.}}(t_j) = \frac{\hat{p}^{\mathrm{b.a.}}(t_j)\vartheta}{M} \,.$$

Its uncertainty will be simply



$$\Delta x_{\mathrm{b.a.}}(t_j) = \frac{\Delta {p}_{\mathrm{b.a.}}(t_j)\vartheta}{M} \,.$$

If we now want to estimate how much the mirror has moved between the $$j$$-th and $$(j+1)$$-th pulses, i.e. its displacement $$\delta\tilde x_{j+1,j} = \tilde x(t_{j+1}) - \tilde x(t_{j})$$, we will have to deal with three additional uncertainties that limit the precision of our estimate:



$$\Delta \tilde{x}_{j+1,j} = \Bigl[(\Delta x_{\rm meas}(t_{j+1}))^2+(\Delta x_{\rm meas}(t_{j}))^2+(\Delta x_{\rm b.a.}(t_{j}))^2\Bigr]^{1/2} \,,$$

where we assumed all contributions to our measurement uncertainty to be statistically independent, and thus obtained the total uncertainty by summing the variances (squared standard deviations). If we further assume that all light pulses are similar and have the same phase uncertainty, then $$\Delta x_{\rm meas}(t_{j+1}) = \Delta x_{\rm meas}(t_{j}) \equiv \Delta x_{\rm meas} = \Delta\phi/(2k_p)$$.
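The quadrature combination of independent errors used here can be sketched as a generic Python helper (not specific to this scheme); note that three equal contributions of size $$\sigma$$ combine to $$\sqrt{3}\,\sigma$$:

```python
import math

def combined_uncertainty(*sigmas):
    """Total RMS error of statistically independent contributions:
    variances add, so standard deviations combine in quadrature."""
    return math.sqrt(sum(s * s for s in sigmas))

# Two identical imprecision terms and one equal back-action term:
total = combined_uncertainty(1.0, 1.0, 1.0)  # equals sqrt(3)
```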

Now, what is the minimum of this sum, i.e. the smallest error one can get in this simple estimate? The answer follows from quantum mechanics, if we recall that the energy and the phase of each pulse are canonically conjugate observables and thus obey the uncertainty relation:



$$\Delta\mathcal{W}\Delta\phi \geqslant \frac{\hbar\omega_p}{2} \,.$$

Therefore, it follows from the above expressions for $$\Delta x_{\mathrm{meas}}$$ and $$\Delta p_{\mathrm{b.a.}}$$ that the position measurement error and the momentum perturbation due to back action also satisfy the uncertainty relation:



$$\Delta x_{\mathrm{meas}}\Delta p_{\mathrm{b.a.}} \geqslant \frac{\hbar}{2} \,.$$
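This chain of relations is easy to verify numerically. In the Python sketch below, the optical frequency and phase uncertainty are arbitrary illustrative values; a pulse that saturates the energy–phase relation is shown to saturate the position–momentum relation as well.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 299792458.0          # speed of light, m/s

omega_p = 1.77e15        # optical angular frequency, rad/s (illustrative)
k_p = omega_p / C        # wave number k_p = omega_p / c

delta_phi = 0.01                            # RMS phase uncertainty (illustrative)
delta_W = HBAR * omega_p / (2 * delta_phi)  # saturates dW * dphi = hbar*omega_p/2

delta_x_meas = delta_phi / (2 * k_p)        # measurement imprecision
delta_p_ba = 2 * delta_W / C                # back-action momentum uncertainty

product = delta_x_meas * delta_p_ba         # equals hbar/2
```

The product is independent of the chosen $$\Delta\phi$$: tightening the phase readout raises the energy spread, and hence the back action, in exact proportion.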

Taking this relation into account, the minimal uncertainty $$\Delta x_{\mathrm{meas}}$$ the light pulse should have, in order not to perturb the mirror too much, is the one that balances the two contributions, $$\Delta x_{\mathrm{meas}} = \Delta x_{\mathrm{b.a.}}$$, yielding for both $$\Delta x_{\mathrm{min}} = \sqrt{\frac{\hbar\vartheta}{2M}}$$. Thus the minimal displacement measurement error prescribed by quantum mechanics reads:



$$\Delta \tilde{x}_{j+1,j} \geqslant \Bigl[2(\Delta x_{\rm meas})^2+\Bigl(\frac{\hbar\vartheta}{2M\Delta x_{\rm meas}}\Bigr)^2\Bigr]^{1/2} \geqslant \sqrt{\frac{3\hbar\vartheta}{2M}}\,.$$
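The balancing step can be checked numerically; in the sketch below the mirror mass and pulse spacing are arbitrary illustrative values. With $$\Delta x_{\rm meas} = \sqrt{\hbar\vartheta/(2M)}$$, the imprecision and back-action terms become equal and the bracketed sum evaluates to $$3\hbar\vartheta/(2M)$$:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M = 1.0e-2              # probe mass, kg (illustrative)
theta = 1.0e-3          # interval between pulses, s (illustrative)

def two_pulse_error(dx_meas):
    """Total error of the two-pulse displacement estimate: two equal
    imprecision terms plus one back-action term, added in quadrature."""
    dx_ba = HBAR * theta / (2 * M * dx_meas)  # back-action displacement
    return math.sqrt(2 * dx_meas ** 2 + dx_ba ** 2)

balanced = math.sqrt(HBAR * theta / (2 * M))       # imprecision = back action
sql_two_pulse = math.sqrt(3 * HBAR * theta / (2 * M))
```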

This is the Standard Quantum Limit for such a 2-pulse procedure. In principle, if we limit our measurement to two pulses only and do not care about perturbing the mirror's position afterwards, the second pulse's measurement uncertainty, $$\Delta x_{\rm meas}(t_{j+1})$$, can, in theory, be reduced to zero (at the cost of $$\Delta p_{\rm b.a.}(t_{j+1})\to\infty$$), and the limit on the displacement measurement error reduces to:



$$\Delta \tilde{x}_{SQL} = \sqrt{\frac{\hbar\vartheta}{M}}\,,$$

which is known as the Standard Quantum Limit for the measurement of free mass displacement.
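As a numerical illustration, the free-mass SQL can be evaluated directly. The 40 kg mass and 10 ms timescale below are hypothetical round numbers, merely reminiscent of large interferometer mirrors:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def free_mass_sql(mass_kg, theta_s):
    """Standard quantum limit on measuring a free mass's displacement:
    Delta x_SQL = sqrt(hbar * theta / M)."""
    return math.sqrt(HBAR * theta_s / mass_kg)

dx = free_mass_sql(40.0, 0.01)  # of order 1e-19 m
```

The limit shrinks for heavier masses and shorter measurement intervals, which is one reason kilogram-scale test masses are attractive for precision displacement measurement.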

This example represents a simple particular case of a linear measurement. This class of measurement schemes can be fully described by two linear equations of the above form, provided that both the measurement uncertainty and the object's back-action perturbation ($$\hat{x}_{\mathrm{fl}}(t_j)$$ and $$\hat{p}^{\mathrm{b.a.}}(t_j)$$ in this case) are statistically independent of the test object's initial quantum state and satisfy the same uncertainty relation as the measured observable and its canonically conjugate counterpart (the object's position and momentum in this case).

Usage in quantum optics
In the context of interferometry or other optical measurements, the standard quantum limit usually refers to the minimum level of quantum noise which is obtainable without squeezed states.

There is additionally a quantum limit for phase noise, reachable only by a laser at high noise frequencies.

In spectroscopy, the shortest wavelength in an X-ray spectrum is called the quantum limit.

Misleading relation to the classical limit
Note that due to an overloading of the word "limit", the classical limit is not the opposite of the quantum limit. In "quantum limit", "limit" is being used in the sense of a physical limitation (e.g. the Armstrong limit). In "classical limit", "limit" is used in the sense of a limiting process. (Note that there is no simple rigorous mathematical limit which fully recovers classical mechanics from quantum mechanics, the Ehrenfest theorem notwithstanding. Nevertheless, in the phase space formulation of quantum mechanics, such limits are more systematic and practical.)