User:ItsTheEquations/sandbox

The Entire Discussion under "Uncertainty Principle...
...is nonideal and perhaps incomplete or erroneous in some respects. (Revised: I deleted these explanations on refactoring.)

I propose to replace that section's text and examples with the following (I am 99.999% done at this point):

Heisenberg's Uncertainty Principle is another core concept of Quantum Mechanics where Planck's constant plays a key role. Perhaps Fourier Analysis best demonstrates the idea of an indeterminacy, i.e., something inherently indeterminable for a reason other than apparatus error or the Observer effect. In Fourier Analysis, as in Quantum Mechanics, these indeterminacies are associated with observing wave phenomena, although Heisenberg himself may not have initially understood this.

Fourier Analysis involves transform / inverse transform pairs between the time and frequency domains, between time and energy, between position and momentum, and between other examples of what physicists call canonical conjugates. One may calculate a set of transformed frequency data from a set of measured time data and view either on a 2D plot – but not both at once; it takes all the time data to calculate the transform for each frequency. However, plotting a 3D surface of frequency data (a spectrum) vs time (in frames $$ \scriptstyle \Delta t_F $$ long) seems to reveal all of it: signal power vs frequency at each frame time. This 3D plot, sometimes called a waterfall or a spectrogram, is commonly used in speech, music, communications, and geology; it is how voiceprints are made. The spectrogram is a rich example of uncertainty and waves.
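The framed-FFT spectrogram described above can be sketched in a few lines of NumPy (a minimal illustration only; the non-overlapping 10ms frames, 20kSa/s rate, and 1kHz test tone are assumptions chosen for the example):

```python
import numpy as np

def spectrogram(x, n_frame):
    """Split x into non-overlapping frames of n_frame samples and
    return the power spectrum of each frame (one row per frame)."""
    n_frames = len(x) // n_frame
    frames = x[:n_frames * n_frame].reshape(n_frames, n_frame)
    # rfft keeps only the non-redundant half-spectrum of real data
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

# Example: a 1 kHz tone sampled at 20 kSa/s, cut into 10 ms frames
fs = 20_000
t = np.arange(4000) / fs                 # 0.2 s of data -> 20 frames
S = spectrogram(np.sin(2 * np.pi * 1000 * t), 200)
print(S.shape)                           # (20, 101): frame time x frequency bin
print(S[0].argmax() * fs / 200)          # 1000.0: the tone's frequency
```

Plotting `10 * np.log10(S)` as an image against frame time and bin frequency gives the familiar waterfall display.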

The Fourier Transform is defined with integrals from $$ \scriptstyle - \infty $$ to $$ \scriptstyle + \infty $$; however, in a real measurement, the apparatus turns on at $$ \scriptstyle t = 0 $$ and measures samples at intervals of $$ \scriptstyle \Delta t_S $$ until $$ \scriptstyle N $$ samples are collected. The data are analyzed by the Discrete Fourier Transform (DFT), typically with an FFT algorithm. Since the $$ \scriptstyle t $$ domain was sampled, the $$ \scriptstyle f $$ domain also becomes discrete, with increments of $$ \scriptstyle \Delta f \, = \, 1/ \Delta t_F \, = \, 1/(N \Delta t_S) $$ and values of $$ \scriptstyle f(k) = k \Delta f $$ for $$ \scriptstyle k = 0, \, \dots, \, N-1 $$. However, these measurement parameters cause aliasing in the frequency domain that compels one to drop the $$ \scriptstyle k = 0 $$ term, because time-framing under-samples frequencies $$ \scriptstyle f \, < \, 1 / \Delta t_F $$, and to drop the $$ \scriptstyle k > (N/2)-1 $$ terms, because the time data are real and $$ \scriptstyle f_H \, \thickapprox \, 1/(2 \Delta t_S) \, = \, f_S/2$$ is the Nyquist frequency. Thus, there are uncertainties limiting both ends of the calculated spectrum as well as its resolution: $$ \scriptstyle f_L \, \thickapprox \, \Delta f \, = \, 1/ \Delta t_F $$ and $$ \scriptstyle f_H \, \thickapprox \, 1/(2 \Delta t_S) $$.
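Under these conventions the bin layout follows directly from $$ \scriptstyle N $$ and $$ \scriptstyle \Delta t_S $$, which a quick NumPy check confirms (the 20kSa/s rate and $$ \scriptstyle N = 200 $$ are assumed values for illustration):

```python
import numpy as np

fs = 20_000                     # sampling rate, so dt_S = 1/fs = 50 us
N = 200                         # samples per frame; dt_F = N/fs = 10 ms
df = fs / N                     # resolution Delta_f = 1/dt_F = 100 Hz
f = np.fft.rfftfreq(N, d=1/fs)  # bin frequencies for real input data

print(df)                       # 100.0
print(f[1])                     # lowest usable bin, f_L ~ Delta_f
print(f[N // 2])                # highest bin, f_H = fs/2 (Nyquist)
```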

A voiceprint with 10ms frames and a 20kSa/s sampling rate will produce a series of spectra, each of which goes from 100Hz to 10kHz in 100Hz increments. Put another way, regardless of how accurately you measure, you cannot detect 100Hz (or a difference of 100Hz) in less than ~10ms, because it takes that long for one cycle of 100Hz to happen. Thus, the indeterminacy of frequency in time means two things:


 * 1) Within a given time frame $$ \scriptstyle \Delta t_i $$, if an event is detected, its time can be known no more precisely than that it occurred during that particular time frame $$ \scriptstyle \Delta t_i $$, and
 * 2) Within any time frame $$ \scriptstyle \Delta t $$, the actual frequency can be no more precisely known than to observe it is within a range $$ \scriptstyle \Delta f \,\, \gtrsim \,\, 1 / \Delta t $$, while frequencies $$ \scriptstyle f \,\, < \,\, 1 / \Delta t $$ are indeterminable.
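Both points can be seen numerically: with the voiceprint parameters above, a tone's frequency can be pinned down only to the nearest 100Hz bin in a 10ms frame, while a tenfold longer frame resolves it tenfold more finely (a sketch; `estimated_freq` is a hypothetical helper and the 1040Hz tone is an arbitrary test value):

```python
import numpy as np

def estimated_freq(f0, dt_frame, fs=20_000):
    """Frequency of the strongest DFT bin for a pure tone f0
    observed over a single frame dt_frame seconds long."""
    n = int(round(dt_frame * fs))
    t = np.arange(n) / fs
    spectrum = np.abs(np.fft.rfft(np.sin(2 * np.pi * f0 * t)))
    return spectrum.argmax() * fs / n

# 10 ms frame: Delta_f = 100 Hz, so 1040 Hz is indistinguishable from 1000 Hz
print(estimated_freq(1040, 0.010))   # 1000.0
# 100 ms frame: Delta_f = 10 Hz, and the same tone now resolves correctly
print(estimated_freq(1040, 0.100))   # 1040.0
```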

In actual lab practice, scientists often use window functions, overlapping frames, and other advanced processing to trade off some of these indeterminacy limitations. Technical choices of windows and strategies affect the $$ \scriptstyle \Delta t $$ and the $$ \scriptstyle \Delta f $$ in various ways to balance requirements for signal levels, noise levels, spectral leakage, etc. For more details and examples see the Short-time Fourier transform (STFT), the Discrete Fourier transform (DFT), the Discrete-time Fourier transform (DTFT), and the Fast Fourier transform (FFT).
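As one concrete illustration of such a trade-off, a Hann window suppresses spectral leakage from an off-bin tone at the cost of a wider main lobe (a sketch; the leakage measure and the 1050Hz worst-case tone are ad hoc choices for the demonstration):

```python
import numpy as np

fs, n = 20_000, 200
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 1050 * t)  # 1050 Hz falls midway between 100 Hz bins

rect = np.abs(np.fft.rfft(x))                  # no window (rectangular)
hann = np.abs(np.fft.rfft(x * np.hanning(n)))  # Hann window applied

def leakage(s):
    """Fraction of spectral magnitude far from the tone (an ad hoc
    measure: everything outside bins 5-16 around the tone at bin 10.5)."""
    far = np.concatenate([s[:5], s[17:]])
    return far.sum() / s.sum()

print(f"rectangular leakage: {leakage(rect):.3f}")   # substantial
print(f"Hann leakage:        {leakage(hann):.3f}")   # far smaller
```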

By introducing Planck's constant, one may obtain Heisenberg's Energy vs Time and Momentum vs Position Uncertainty Relations in one step:


 * The Fourier Analysis indeterminacies for temporal frequency $$ \scriptstyle f $$ and spatial frequency $$ \scriptstyle \xi $$:
 * $$ \scriptstyle f_o \,\, = \,\, 1 / T_o $$   $$ \scriptstyle \to $$    $$ \scriptstyle \Delta f \Delta t \,\, \gtrsim \,\, 1$$      and      $$ \scriptstyle \xi_o \,\, = \,\, 1 / \lambda_o $$   $$ \scriptstyle \to $$   $$ \scriptstyle \Delta \xi \Delta x \,\, \gtrsim \,\, 1$$


 * The de Broglie Relations introduce Planck's constant:
 * $$ \scriptstyle f \,\, = \,\, E/h $$    and     $$ \scriptstyle \lambda \,\, = \,\, h / p $$    (or $$ \scriptstyle \xi \,\, = \,\, p / h $$  where $$ \scriptstyle \lambda \,\, = \,\, 1 / \xi $$ )


 * Substitution reveals the familiar inequalities:
 * $$ \scriptstyle \Delta E \Delta t \,\, \gtrsim \,\, h$$     and      $$ \scriptstyle \Delta p \Delta x \,\, \gtrsim \,\, h$$
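Written out, the one-step substitution for the energy and time pair is as follows (the momentum and position pair is identical with $$ \scriptstyle \xi = p/h $$ and $$ \scriptstyle x $$ in place of $$ \scriptstyle f $$ and $$ \scriptstyle t $$):

```latex
\Delta f \, \Delta t \,\gtrsim\, 1
\quad \xrightarrow{\; \Delta f \,=\, \Delta E / h \;} \quad
\frac{\Delta E}{h} \, \Delta t \,\gtrsim\, 1
\quad \Longrightarrow \quad
\Delta E \, \Delta t \,\gtrsim\, h
```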

Comparing these results with $$ \scriptstyle \Delta f \Delta t \,\, \gtrsim \,\, 1$$, one would expect the same kinds of indeterminacy statements also to hold in Quantum Mechanics, since they arise from observing wave phenomena, here characterized by Planck's constant.

There are other arguments like this one that use Planck's constant to relate the Heisenberg Uncertainty Principle inequalities to the Fourier transforms familiar to a broader technical audience. Nevertheless, Heisenberg's first derivation of $$ \scriptstyle \Delta p \Delta x \,\, \gtrsim \,\, h$$ in 1927 was complex, utilizing the Matrix Mechanics published in 1925-26 by him, Max Born, and Pascual Jordan. Matrix Mechanics is regarded as the first conceptually autonomous and logically consistent formulation of quantum mechanics, comparable to the Wave Mechanics formulation based on the Schrödinger wave equation, published in 1926. The form $$ \scriptstyle \sigma_p \sigma_x \,\, \geq \,\, \hbar / 2 $$, where $$ \scriptstyle \sigma_p $$ and $$ \scriptstyle \sigma_x $$ are standard deviations and $$ \scriptstyle \hbar $$ is the reduced Planck constant, is often called the exact form of the Uncertainty Principle, and it has been shown valid for all wave functions – not just the Gaussian wave functions for which Heisenberg later demonstrated it. In the modern mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables is subject to similar uncertainty limits; for example, $$ \scriptstyle [\hat{p}_i, \hat{x}_j] = -i \hbar \delta_{ij}$$, where $$ \scriptstyle \hat{p}$$ is the momentum operator, $$ \scriptstyle \hat{x}$$ is the position operator, and $$ \scriptstyle \delta_{ij} $$ is the Kronecker delta.
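The Gaussian case has a direct Fourier analogue that can be checked numerically: a Gaussian packet's duration-bandwidth product $$ \scriptstyle \sigma_t \sigma_\omega $$ equals 1/2, the minimum the exact form allows (a sketch with assumed packet parameters; multiplying by $$ \scriptstyle \hbar $$ via $$ \scriptstyle E = \hbar \omega $$ gives $$ \scriptstyle \sigma_E \sigma_t = \hbar / 2 $$):

```python
import numpy as np

# A Gaussian packet saturates the exact bound: sigma_t * sigma_omega = 1/2
n, dt = 4096, 0.01
t = (np.arange(n) - n // 2) * dt
psi = np.exp(-t**2 / (2 * 0.5**2))          # Gaussian amplitude, width 0.5

def sigma(axis, density):
    """Standard deviation of a (possibly unnormalized) density."""
    p = density / density.sum()
    mean = (axis * p).sum()
    return np.sqrt(((axis - mean) ** 2 * p).sum())

sigma_t = sigma(t, np.abs(psi) ** 2)
omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)
sigma_w = sigma(omega, np.abs(np.fft.fft(psi)) ** 2)
print(sigma_t * sigma_w)                    # ~0.5, the minimum allowed
```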

The Uncertainty Principle enables an alternate way of looking at certain classical wave problems, for example, diffraction. Assume the width of the central bright spot is determined by the momentum uncertainty that a single slit introduces. The slit, of width $$ \scriptstyle a $$, causes a position uncertainty $$ \scriptstyle \Delta y = a $$ in both the positive and negative $$ \scriptstyle y $$-directions, which in turn causes a momentum uncertainty $$ \scriptstyle \Delta p_y = h / \Delta y $$ in each $$ \scriptstyle y $$-direction. The momentum in the $$ \scriptstyle x $$-direction, towards the screen, is given by $$ \scriptstyle p_x = h / \lambda $$ (de Broglie). If $$ \scriptstyle \theta $$ is the angle from the center of the central bright spot to the first minimum on one side, then $$ \scriptstyle \theta = \arctan( \Delta p_y / p_x ) = \arctan( \lambda / a ) $$. When the screen is very far from the slit and $$ \scriptstyle \theta $$ is small (Fraunhofer diffraction), then $$ \scriptstyle \theta \approx \sin \theta \approx \tan \theta $$, and this reduces to $$ \scriptstyle \theta = \lambda / a $$. This is the same result obtained from classical methods (e.g., Huygens' principle) for the minima of single-slit diffraction, $$ \scriptstyle a \sin \theta = n \lambda $$, when $$ \scriptstyle n = 1 $$ and $$ \scriptstyle \theta $$ is small, and for the envelope of the interference pattern in the multi-slit case where each slit has width $$ \scriptstyle a $$. The uncertainty approach, however, is applicable to matter waves and not only to light. Interestingly, diffraction and interference patterns are the Fourier transforms of the slit patterns that produce them.
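The closing remark can also be checked numerically: the Fourier transform of a single-slit aperture is a sinc whose intensity has its first zero at spatial frequency $$ \scriptstyle \xi = 1/a $$, reproducing $$ \scriptstyle \sin \theta = \lambda \xi = \lambda / a $$ (a sketch; the 100-micrometre slit and the sampling grid are assumed values):

```python
import numpy as np

a = 1e-4                               # slit width (assumed: 100 um)
n, dy = 1 << 16, 1e-7                  # fine sampling across the slit plane
y = (np.arange(n) - n // 2) * dy
aperture = (np.abs(y) <= a / 2).astype(float)

intensity = np.abs(np.fft.fft(aperture)) ** 2
xi = np.fft.fftfreq(n, d=dy)           # spatial frequency, cycles per metre

# First intensity minimum on the positive-xi side, expected at xi = 1/a
j = 1 + np.argmin(intensity[1:100])
print(xi[j] * a)                       # close to 1.0, i.e. sin(theta) = lambda/a
```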