Talk:Planck constant/Archive 4

Recent Changes
It is really confusing to add new material at the top of talk pages.

It looks like an old battle (the details of which have been removed from the top of this discussion page) has been reignited. What is the point of mention of the Hayward work unless it can be shown to relate to how Planck arrived at his constant? If there is a point, then it should be stated. The reader who does not have a very firm footing in this area will be completely thrown off by this pointless mention. "What am I missing?" will be his or her question.

The series of postings that occurred last year were the only mention I have ever seen that hinted at the idea that Planck took over somebody's work from 40 years before, made some kind of empirical determination of it, and got the Nobel Prize. P0M (talk) 02:29, 9 March 2012 (UTC)

All the groundless assertions now have been transplanted to Specific relative angular momentum. I think an administrator will probably have to give some people time out unless there is a good explanation for what is being done. P0M (talk) 02:36, 9 March 2012 (UTC)


 * One of the contributors to that page has already been blocked. P0M (talk) 02:40, 9 March 2012 (UTC)

Six identical references for the value of h
In the first table, at right, the reference [1] is repeated six times, which is useless (and confusing for the measurement units). Perhaps it is sufficient to give the reference just once, in the heading: "Values of h[1]".

Can someone make this edit for me (the page is protected)?--79.19.234.209 (talk) 14:42, 13 March 2012 (UTC)

Edit request on 15 March 2012
See last talk above (Six identical references for the value of h)

87.10.226.136 (talk) 18:07, 15 March 2012 (UTC)

It is quite usual in tables for the reference to be given on every line, even if it is the same reference. This makes it easy to update if just one value is being updated from another source. Also, if a new line is added, it is clear that it is unsourced or has a different source. But I won't close the request as I am the admin who protected the page. I will leave it to someone else to decide.  Spinning Spark  23:20, 15 March 2012 (UTC)
 * I do not see any problem with the table as it stands. The reader may have ignored the power-of-ten indications. P0M (talk) 23:29, 15 March 2012 (UTC)

P.S. I have not understood the point of "the power-of-ten indications".--79.19.234.110 (talk) 17:21, 16 March 2012 (UTC)
 * Yes, references may be given on every line, but in this table we have just one physical constant (two, if you include the joule-equivalent of 1 eV) expressed in various units, so six references to the same source appear to be redundant.--79.19.234.110 (talk) 17:21, 16 March 2012 (UTC)


 * It makes no difference that it is the same constant in different units; they are still different numbers on different lines of the table. I really do not understand your concern; it does not detract from the layout or readability, and it is simply clearer that all the numbers have a verification if each line is cited.  I don't understand POM's point either.  Spinning  Spark  17:48, 16 March 2012 (UTC)
 * I don't think we can implement SPERs where there isn't a clear consensus. FWIW, I don't think it is likely for the individual rows to have different sources and I would put the cite once after the column header, Values of h, to avoid anyone confusing the cite for some kind of exponent of the last unit, e.g. acceleration in m/s . Regards, Celestra (talk) 01:07, 17 March 2012 (UTC)

A possible compromise to avoid this confusion could be to add a separate column for the refs: | Values of h | units | ref. |.--87.17.219.56 (talk) 10:30, 17 March 2012 (UTC)
 * Ok, I have unlocked the page to allow you to do that. Please do not take too long, this page gets attacked quite frequently and it may well get protected again in the future.  Spinning  Spark  10:58, 17 March 2012 (UTC)
 * Done, thanks.--87.17.219.56 (talk) 11:19, 17 March 2012 (UTC) - I also made some aesthetic alignments --87.17.219.56 (talk) 11:53, 17 March 2012 (UTC)

Correction needed by someone with permission to edit
Sentence "One example is time vs. frequency." is incorrect. It should read:"One example is time vs. energy." Page is semi-locked, so someone with appropriate permissions please make a correction. Thanks. — Preceding unsigned comment added by 76.122.89.118 (talk) 23:57, 15 March 2012 (UTC)


 * Done  Spinning Spark  00:42, 16 March 2012 (UTC)


 * I'm confused. Time vs frequency makes sense when discussing a Fourier analysis. You either know the value at a specific time, or you compute a frequency spectrum - the power contained in each frequency. Why does "energy" make sense in that sentence? Q Science (talk) 05:17, 17 March 2012 (UTC)


 * Not really my subject, I just mechanically did what the IP asked. However, I do have Paul Davies Quantum Mechanics (ISBN 0710099622) which has at equation [8.12]
 * $$\Delta E \Delta t \sim \hbar $$
 * "which is known as the energy-time uncertainty relation and should be compared with the momentum-position uncertainty relation." From which I assumed the IP was correct.  Spinning  Spark  12:29, 17 March 2012 (UTC)

The Entire Discussion under "Uncertainty Principle...
...is nonideal and perhaps incomplete or erroneous in some respects. (Revised: I deleted these explanations on refactoring.)

I propose to replace this section text and examples with the following here (I am 99.999% done at this point.):

Heisenberg's Uncertainty Principle is another core concept of Quantum Mechanics where Planck's constant plays a key role. Perhaps Fourier Analysis best demonstrates the idea of an indeterminacy, i.e., something inherently indeterminable for a reason other than apparatus error or the Observer effect. In Fourier Analysis, as in Quantum Mechanics, these indeterminacies are associated with observing wave phenomena, although Heisenberg himself may not have initially understood this.

Fourier Analysis involves transform / inverse transform pairs between the time and frequency domains, between time and energy, between position and momentum, and between other examples of what physicists call canonical conjugates. One may calculate a set of transformed frequency data from a set of measured time data and view either on a 2D plot – but not both at once; it takes all the time data to calculate the transform for each frequency. However, plotting a 3D surface of frequency data (a spectrum) vs time (in frames $$ \scriptstyle \Delta t_F $$ long) seems to reveal all of it: signal power vs frequency and at each frame time. This 3D plot, sometimes called a waterfall or a spectrogram, is commonly found in Speech, Music, Communications, and Geology; and it is how voiceprints are made. The Spectrogram is a rich example of uncertainty and waves.
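The waterfall/spectrogram idea described above can be sketched in a few lines of NumPy. The signal and rates here are assumed example values (a 400 Hz tone, 1 s of data at 8 kSa/s, 10 ms frames), not anything from the article:

```python
import numpy as np

# A 1 s record at 8 kSa/s, chopped into 10 ms frames of 80 samples each;
# each frame is Fourier-transformed to give power vs frequency per frame.
fs = 8000
frame_len = 80                       # 10 ms frames -> delta_f = 100 Hz
t = np.arange(8000) / fs             # 1 s of sample times
x = np.sin(2 * np.pi * 400 * t)      # assumed signal: a 400 Hz tone

frames = x.reshape(-1, frame_len)                # 100 frames x 80 samples
spec = np.abs(np.fft.rfft(frames, axis=1))**2   # power: (frame, freq bin)
freqs = np.fft.rfftfreq(frame_len, d=1/fs)      # bins at 0, 100, ..., 4000 Hz

print(spec.shape)                        # (100, 41): the 3D "waterfall" grid
print(freqs[spec.sum(axis=0).argmax()])  # ~400.0: the tone's frequency bin
```

Plotting `spec` against frame index and `freqs` gives exactly the waterfall surface (signal power vs frequency, at each frame time) described above.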

The Fourier Transform is defined with integrals from $$ \scriptstyle - \infty $$ to $$ \scriptstyle + \infty $$; however, in real measurements, the apparatus turns on at $$ \scriptstyle t = 0 $$ and measures samples at intervals of $$ \scriptstyle \Delta t_S $$ until $$ \scriptstyle N $$ samples are collected. Data are analyzed by the Discrete Fourier Transform (DFT), typically with an FFT algorithm. Since the $$ \scriptstyle t $$ domain was sampled, the $$ \scriptstyle f $$ domain also becomes discrete, having increments of $$ \scriptstyle \Delta f \, = \, 1/ \Delta t_F \, = \, 1/(N \Delta t_S) $$ and values of $$ \scriptstyle f(k) = k \Delta f $$ for $$ \scriptstyle k = 0, \, \ldots, \, N-1 $$. However, these measurement parameters result in aliasing in the frequency domain that compels one to drop the $$ \scriptstyle k = 0 $$ terms, because time-framing under-samples frequencies $$ \scriptstyle f \, < \, 1 / \Delta t_F $$, and to drop the $$ \scriptstyle k > (N/2)-1 $$ terms, because the time data are real and $$ \scriptstyle f_H \, \thickapprox \, 1/(2 \Delta t_S) \, = \, f_S/2 $$ is the Nyquist frequency. Thus, there are uncertainties limiting both ends of the calculated spectrum as well as the resolution: $$ \scriptstyle f_L \, = \, \Delta f \, = \, 1/ \Delta t_F $$ and $$ \scriptstyle f_H \, \thickapprox \, 1/(2 \Delta t_S) $$.
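The increments and limits just derived can be checked directly against NumPy's FFT frequency grid. The sample interval and frame length here are assumed example values ($$ \scriptstyle \Delta t_S = 1/20000 $$ s, $$ \scriptstyle N = 200 $$):

```python
import numpy as np

dt_s = 1.0 / 20000      # sample interval delta_t_S (assumed value)
N = 200                 # samples per frame -> delta_t_F = N*dt_s = 10 ms
freqs = np.fft.rfftfreq(N, d=dt_s)   # the non-aliased bins for real data

delta_f = freqs[1] - freqs[0]   # ~100.0 Hz = 1/(N*dt_s): the resolution f_L
f_high = freqs[-1]              # ~10000.0 Hz = 1/(2*dt_s): the Nyquist f_H
print(delta_f, f_high, len(freqs))
```

The grid runs from 0 to the Nyquist frequency in steps of $$ \scriptstyle 1/(N \Delta t_S) $$, matching the $$ \scriptstyle f_L $$ and $$ \scriptstyle f_H $$ limits above.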

A voiceprint with 10 ms frames and a 20 kSa/s sampling rate will produce a series of spectra, each of which goes from 100 Hz to 10 kHz in 100 Hz increments. Put another way, regardless of how accurately you measure, you cannot detect 100 Hz (or a difference of 100 Hz) in less than ~10 ms, because it takes that long for one cycle of 100 Hz to happen. Thus, the indeterminacy of frequency in time means two things:


 * 1) Within a given time frame $$ \scriptstyle \Delta t_i $$, if an event were detected, there can be no more precise knowledge of when it happened than to observe it was during that particular time frame $$ \scriptstyle \Delta t_i $$, and
 * 2) Within any time frame $$ \scriptstyle \Delta t $$, the actual frequency can be no more precisely known than to observe it is within a range $$ \scriptstyle \Delta f \,\, \gtrsim \,\, 1 / \Delta t $$, while frequencies $$ \scriptstyle f \,\, < \,\, 1 / \Delta t $$ are indeterminable.
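Point 2 can be seen directly in a short NumPy sketch with assumed example values: a 50 Hz tone observed for only 10 ms (half of one period) simply cannot register as 50 Hz, because $$ \scriptstyle f \, < \, 1/\Delta t_F $$:

```python
import numpy as np

fs = 20000
N = 200                               # a 10 ms frame -> delta_f = 100 Hz
n = np.arange(N)
x = np.sin(2 * np.pi * 50 * n / fs)   # 50 Hz: only half a period fits

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(N, d=1.0/fs)  # bins at 0, 100, ..., 10000 Hz

# 50 Hz is below delta_f = 100 Hz, so the energy piles into the k = 0
# bin: the tone's frequency is indeterminable from this frame.
print(freqs[spec.argmax()])           # 0.0
```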

In actual lab practice, scientists often use window functions, overlapping data, and other advanced processing to overcome some indeterminacy limitations. Technical choices of windows and strategies affect the $$ \scriptstyle \Delta t $$ and the $$ \scriptstyle \Delta f $$ in various ways to balance requirements for signal levels, noise levels, spectral leakage, etc. For more details and examples see the Short-time Fourier transform (STFT), the Discrete Fourier transform (DFT), the Discrete-time Fourier transform (DTFT), and the Fast Fourier Transform (FFT).
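One such trade-off can be sketched numerically. The tone and rates are assumed example values; the Hann window here stands in for the many window choices mentioned above:

```python
import numpy as np

fs, N = 20000, 200
n = np.arange(N)
x = np.sin(2 * np.pi * 1050 * n / fs)   # 1050 Hz: halfway between 100 Hz bins

rect = np.abs(np.fft.rfft(x))                   # rectangular (no) window
hann = np.abs(np.fft.rfft(x * np.hanning(N)))   # Hann-windowed frame

# Far from the tone (e.g. the 5 kHz bin, k = 50) the Hann window leaks
# much less energy than the bare rectangular frame; the price is a
# wider main lobe, i.e. a slightly coarser effective delta_f.
print(rect[50] > hann[50])   # True
```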

By introducing Planck's constant, one may obtain Heisenberg's Energy vs Time and Momentum vs Position Uncertainty Relations in one step:


 * The Fourier Analysis indeterminacies for temporal frequency $$ \scriptstyle f $$ and spatial frequency $$ \scriptstyle \xi $$:
 * $$ \scriptstyle f_o \,\, = \,\, 1 / T_o $$   $$ \scriptstyle \to $$    $$ \scriptstyle \Delta f \Delta t \,\, \gtrsim \,\, 1$$      and      $$ \scriptstyle \xi_o \,\, = \,\, 1 / \lambda_o $$   $$ \scriptstyle \to $$   $$ \scriptstyle \Delta \xi \Delta x \,\, \gtrsim \,\, 1$$


 * The de Broglie Relations introduce Planck's constant:
 * $$ \scriptstyle f \,\, = \,\, E/h $$    and     $$ \scriptstyle \lambda \,\, = \,\, h / p $$    (or $$ \scriptstyle \xi \,\, = \,\, p / h $$  where $$ \scriptstyle \lambda \,\, = \,\, 1 / \xi $$ )


 * Substitution reveals the familiar inequalities:
 * $$ \scriptstyle \Delta E \Delta t \,\, \gtrsim \,\, h$$     and      $$ \scriptstyle \Delta p \Delta x \,\, \gtrsim \,\, h$$
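The substituted inequality can be put to numbers. The 1 ns window below is an arbitrary assumed value; the constants are the exact SI values:

```python
# delta_E * delta_t >~ h  =>  the minimum energy indeterminacy for a
# measurement confined to a 1 ns window:
h = 6.62607015e-34        # Planck constant, J*s (exact by SI definition)
eV = 1.602176634e-19      # joules per electronvolt (exact)

delta_t = 1e-9            # assumed observation window: 1 ns
delta_E_min = h / delta_t # ~6.6e-25 J

print(delta_E_min / eV)   # ~4.1e-6 eV: a few micro-eV of indeterminacy
```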

Comparing these results with $$ \scriptstyle \Delta f \Delta t \,\, \gtrsim \,\, 1$$, one would expect similar indeterminacies to hold between any pair of canonical conjugates in Quantum Mechanics, since they arise from observing wave phenomena characterized by Planck's constant.

There are other ways like this one that show how Planck's constant relates the Heisenberg Uncertainty Principle inequalities to the Fourier transforms familiar to a broader technical audience. Nevertheless, Heisenberg's first derivation of $$ \scriptstyle \Delta p \Delta x \,\, \gtrsim \,\, h$$ in 1927 was complex, utilizing the Matrix Mechanics published in 1925–26 by him, Max Born, and Pascual Jordan. Matrix Mechanics is regarded as the first conceptually autonomous and logically consistent formulation of quantum mechanics, comparable to the Wave Mechanics formulation based on the Schrödinger wave equation, published in 1926. The form $$ \scriptstyle \sigma_p \sigma_x \,\, \ge \,\, \hbar / 2 $$, where $$ \scriptstyle \sigma_p $$ and $$ \scriptstyle \sigma_x $$ are standard deviations and $$ \scriptstyle \hbar $$ is the reduced Planck constant, is often called the exact form of the Uncertainty Principle, and it has been shown valid for all wave functions – not just the Gaussian waves Heisenberg later treated. In the modern mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables is subject to similar uncertainty limits; for example, $$ \scriptstyle [\hat{p}_i, \hat{x}_j] = -i \hbar \delta_{ij}$$, where $$ \scriptstyle \hat{p}$$ is the momentum operator, $$ \scriptstyle \hat{x}$$ is the position operator, and $$ \scriptstyle \delta_{ij} $$ is the Kronecker delta.

The Uncertainty Principle enables an alternate way of looking at certain classical wave problems, for example, diffraction. Assume the width of the central bright spot is determined by uncertainties in the momentum wave in space introduced by a single slit. The slit, of width $$ \scriptstyle a $$, causes position uncertainty $$ \scriptstyle \Delta y \, = \, a $$ in both the positive and negative $$ \scriptstyle y $$-directions, which in turn causes momentum uncertainty $$ \scriptstyle \Delta p_y \, = \, h / \Delta y $$ in each $$ \scriptstyle y $$-direction. The momentum in the $$ \scriptstyle x $$-direction towards the screen is given by $$ \scriptstyle p_x \, = \, h / \lambda $$ (de Broglie). If $$ \scriptstyle \theta $$ is the angle from the center of the central bright spot to the first minimum on one side, then $$ \scriptstyle \theta \, = \, \arctan( \Delta p_y / p_x ) \, = \, \arctan( \lambda / a ) $$. When the screen is very far from the slit and $$ \scriptstyle \theta $$ is small (Fraunhofer diffraction), then $$ \scriptstyle \theta \, \thickapprox \, \sin \theta \, \thickapprox \, \tan \theta $$, and this reduces to $$ \scriptstyle \theta \, = \, \lambda / a $$. This is the same result obtained from classical methods for diffraction (e.g., Huygens' principle) for the minima of single-slit diffraction, $$ \scriptstyle a \sin \theta \, = \, n \lambda $$, when $$ \scriptstyle n = 1 $$ and $$ \scriptstyle \theta $$ is small, and for the envelope of the interference pattern in the multi-slit case where each slit has width $$ \scriptstyle a $$. The uncertainty approach, however, is applicable to matter waves and not only to light. Interestingly, diffraction and interference patterns are the Fourier transforms of the slit patterns that produce them.
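The single-slit estimate can be checked numerically. The wavelength and slit width below are assumed example values (500 nm light through a 5 μm slit):

```python
import math

lam = 500e-9    # wavelength (assumed example value)
a = 5e-6        # slit width (assumed example value)

# Uncertainty route: delta_p_y / p_x = (h/a) / (h/lam) = lam/a
theta_unc = math.atan(lam / a)
# Classical route: a*sin(theta) = lam at the first minimum
theta_classical = math.asin(lam / a)

print(theta_unc)        # ~0.0997 rad
print(theta_classical)  # ~0.1002 rad: the two agree in the small-angle limit
```

Note that Planck's constant cancels out of the angle, which is why the classical and uncertainty routes coincide here.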

1901 experiment
As an undergraduate, I was taught that a 1901 experiment by Planck led to the quantum theory. This experiment was a CRT with a photo-cathode and a grid with a voltage applied just sufficient to stop all emissions. Plotting V against 1/λ gives a straight line with a slope proportional to h. This seems to be missing from the article, and to my simplistic engineering mind is an interesting and understandable (not to say important) experiment.  Spinning Spark  08:39, 17 May 2012 (UTC)


 * A link would be useful. Since a CRT produces a stream of electrons, I don't know where the λ comes from. Q Science (talk) 13:25, 17 May 2012 (UTC)
 * Sorry, lecture notes were not put online in my day, we had to write it all down. I will e-mail you a scan if you like (presuming I can still find them).  The λ is the wavelength of the incident light on the photo-cathode, so the max energy that an emitted electron can possibly have is V=hf, and this is the voltage required on the grid to exactly stop emissions reaching the screen.  So h is the slope, and the intersection with the vertical axis is the work function of the material (ignoring unit issues).  Spinning  Spark  14:56, 17 May 2012 (UTC)
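The slope-and-intercept idea described in this thread can be sketched with synthetic (made-up) data. The work function and frequency range below are assumed values, not measurements:

```python
import numpy as np

# e*V_stop = h*f - W, so the slope of V_stop vs f is h/e and the
# intercept (in volts) is -W/e. Ideal synthetic data:
h = 6.62607015e-34     # Planck constant, J*s
e = 1.602176634e-19    # elementary charge, C
W = 2.3 * e            # assumed work function, 2.3 eV

f = np.linspace(6e14, 12e14, 7)   # incident light frequencies, Hz
V_stop = (h * f - W) / e          # ideal stopping voltages, V

slope, intercept = np.polyfit(f, V_stop, 1)
print(slope * e)       # ~6.63e-34: h recovered from the slope
print(-intercept)      # ~2.3: the work function in eV from the intercept
```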


 * Are you referring to something like slide 14 which describes the photoelectric effect? On slide 17, it shows how the data is used to compute h. I was confused because, in a crt, heat causes electrons to leave the cathode and photons are generated when the electrons hit a target. However, in this device, a photon knocks an electron off the emitter and the kinetic energy of the electron is related to the wavelength. Q Science (talk) 15:41, 17 May 2012 (UTC)
 * Yes, that's the experiment (although the link has just gone dead) except that the diagram does not show the grid. But the description in the slides talks about a "retarding potential" being applied which would require a grid to be in place.  I have now dug out my notes (amazing the junk one keeps) which say the experiment is actually due to Millikan, not Planck as I first thought.  There was also nothing about the 1901 date, so I might be conflating that with something else, but the date stuck in my mind because it was said that the quantum theory began on the first day of the 20th century (counting 1900 as the last year of the 19th).  That might possibly have come from a Paul Davies book, but I couldn't see it on a quick flip through.  Probably best to put it all down to a senior moment and I should go book myself into a home.  Spinning  Spark  17:00, 17 May 2012 (UTC)

Uncertainty principle examples
In the section "Uncertainty principle" we have: "A practical example is computational neurology trying to both measure the time effect and frequency of a neuron burst. fMRI (functional MRI), whose signal processing is based on Fourier transforms, can resolve frequency, but not time (a limit of Fourier analysis due to uncertainty). An EEG (a time series analysis measurement tool) can resolve time, but not frequency. Due to uncertainty, these are not problems with the design of the measuring instruments, but problems with the nature of quantum measurement and particle realities themselves."

Unless I am completely misunderstanding the author's point, this entire section seems based on misunderstandings of fMRI and EEG. Rather than pick it apart in detail, could I first ask for a citation that either of these have anything to do with "quantum measurement"? Since they are both measurements of bulk phenomena, the "quantum" aspect seems very suspect to me. Gwideman (talk) 13:59, 23 August 2012 (UTC)


 * You are correct, sir, but you are speaking to nobody. As you noted, the remarks about MRI and EEG are flat wrong. And then there is the bogus equation. It appears your editors prefer to keep it that way. A private club, it probably includes a beloved medical physician. Please refer to my [possibly too long] alternate contribution above; I included most of the references. It was rejected by the editors (whoever they are... I thought I was an editor, but...). At least I have the Physics and Engineering and Mathematics to know totally bad work when I see it. The MRI is a Fourier Transform between space and frequency, not time. An EEG is not a time series analysis; it is a presentation of the actual time signal. One can SEE frequency in a time display: it is wavy lines, like in epileptic seizures. This is common knowledge for engineers and mathematicians. And there is no reason why one could not do a Fourier Analysis on those data, especially in sleep studies. But what is perhaps even more important, this is NOT the Fourier Analysis chapter; it is about Planck and Uncertainty. "Uncertainty", however, is about Fourier Analysis – about "getting the whole wave" in time before calculating the frequency of it. ONE MUST INTRO FOURIER, BUT THE GOAL IS TO GET TO A USEFUL AND MEANINGFUL INSIGHT INTO THE ROLE OF THE PLANCK CONSTANT – which means Quantum Mechanics. This is not easy at all, hence the mention of spectral analysis. The current subchapter, however, does not even begin to go there. If you agree, then please start some discussions right here. Meanwhile, please start picking it apart... and mine, too (above) if you wish. I never got ANY content feedback except maybe that "they" did not feel like reading it or talking about it. Good luck, colleague. ItsTheEquations (talk) 23:58, 24 August 2012 (UTC)


 * I removed the section on MRI and EEG. I agree, it sounds suspicious. If its in fact a quantum effect, let the original editor restore it with citations. To anyone with any experience editing this particular page, the idea that there is a "private club" with anybody "beloved" is hilarious. PAR (talk) 07:23, 25 August 2012 (UTC)


 * Thank you for correcting my misimpression about the "private club." I am not likely ever to get the way people work here. It is not content-oriented, and there is no authority except for removing material. I don't know how you can succeed like that. How I feel about anything is irrelevant, and should not be expressed here. I am sorry. ItsTheEquations (talk) 01:29, 26 August 2012 (UTC)


 * Thanks for the responses. I think the removal of the fMRI and EEG "examples" is an improvement. That said, I see, again in the "Uncertainty principle" section, this sentence: "The either-or nature of uncertainty forces measurement attempts to choose between trade offs, and given that they are quanta, the trade offs often take the form of either-or (as in Fourier analysis), rather than the compromises and gray areas of time series analysis." The quanta => either-or point sounds plausible. But then Fourier analysis is given as an example of an "either-or" choice, and contrasted to time series analysis -- this seems bogus.  In signal analysis, one can choose from a continuum of precision (variance) in either the time domain or frequency domain... yes it's a tradeoff, but it's not a binary choice, as seems to be implied here. Gwideman (talk) 08:04, 25 August 2012 (UTC)


 * Note to ItsTheEquations: I've read your attempt above to provide more of a tutorial background relating to the Uncertainty principle in the context of Planck's constant, and very much applaud your effort and initiative. Inevitably it's difficult to illuminate the points you feel are problems for readers AND get expert contributors to concur that these are points that need elaborating... that's sort of the nature of the endeavor. I guess I'm urging to take this in stride rather than feel frustrated.  For my part, I found your tutorial approach helpful, but I thought that discussing the sampling aspect of practical Fourier processing is a bit of a red herring. I'm pretty sure that in principle, the same Fourier processing could be carried out in the analog domain (or continuous math), demonstrating the same tradeoffs between time-domain and frequency domain precision, and one need not deal with aliasing, Nyquist and so on. Perhaps I missed something though. Gwideman (talk) 08:22, 25 August 2012 (UTC)


 * I believe you will find there is NO continuous (or "analog") use of the Fourier Transform. In practice, one must always sample, and this sampling is therefore a fundamental part of the problem. The sense of a red herring, I believe, is the confusion that results from "the Fourier uncertainties" in sampling itself. You cannot get rid of it. It actually was a surprise to me that I used this thing for so many years and never realized this. My original purpose was to leverage the universal understanding of what an audio spectrum analyzer is. In Quantum Mechanics, it is there in a natural way, but not without plenty of confusion and misunderstanding. Like "is delta T the time the camera lens was open?" Going down the easy road first was intended to explain what it means to say "it takes delta t of time to encompass the whole wave of frequency f in order to measure the value of f." It is easy to understand f=1/delta_t. That is the Fourier anyone should get. The jump from here to canonical conjugates, even from here, is barely tolerable. Perhaps I failed completely. I would like to discuss this content this way, if you have the time. ItsTheEquations (talk) 01:29, 26 August 2012 (UTC)


 * The QM (quantum mechanical) wave function and the QM measurement problem is fundamentally different from the measurement of, say, a classical (i.e. non-QM) sound wave or a light wave. In the classical case, you have a wave that is presumably not affected by measurement. In QM, you have a wave that is totally altered by measurement. Therefore, there is no such thing as "sampling" or "measuring" a QM wave function. The QM wave function is an expression of what you know about the position of a particle as the result of a measurement, and its Fourier transform is an expression of what you know about the momentum of that particle as the result of that measurement. The wave function is used to predict the probabilities of future measurements, and the Schroedinger equation tells you how the wave function (and therefore your predictions) will change as time goes by. Every time you make a measurement, the wave function "collapses" and becomes a representation of your new knowledge as the result of that measurement. PAR (talk) 16:41, 26 August 2012 (UTC)


 * The reason the full, continuous Fourier transform is never calculated is because it goes from minus infinity to infinity. In theoretical work, you can do that. However, to say the "wave function collapses" is an opinion, one of the several views of Quantum Mechanics. I believe it is obviously wrong, and it is irrelevant anyway to this topic: Uncertainty. It is now a well-understood fact that Uncertainty has nothing to do with the Observer Effect. Simply, it is about how if you would attempt to measure Energy (because of Planck, this is a wave of some frequency in time) or momentum (which via Planck is a wave of some frequency in space), then you must SAMPLE enough of the time (or space, respectively) or else your measurement becomes a random number... undersampled and indeterminate. The Planck constant extends the Classical understanding between time and frequency directly into the Quantum Mechanical realm. The talk of wave function collapse and multiple universes and all that is simply off-topic to Heisenberg's Uncertainty principle. The formulae say you must include AT LEAST delta t to measure E or delta E; and you must include AT LEAST delta x to measure p or delta p. In these cases delta x and delta t are given by the famous equation with Planck's constant. These facts about Uncertainty (and what it is not) can be found elsewhere in Wikipedia. Heisenberg Uncertainty is a self-contained mathematical fact. Wave function collapse is a broader and far more controversial theory of Physics. You could mention it here, but I discovered it is quite difficult to write the "Uncertainty" subchapter for Planck without staying close to the road, as it were. I am so happy to have someone to talk to about this. Please let us continue. ItsTheEquations (talk) 19:39, 27 August 2012 (UTC)


 * What exactly do you mean by "sample"? I have been assuming it was making successive measurements in time of a wave amplitude, for example.PAR (talk) 21:59, 27 August 2012 (UTC)


 * Yes. Sampling implies taking successive snapshots of a target spread out in time or space. But samples need not be successive or sequential or even multiple. One sample, one picture, invokes uncertainty every time if it involves canonical conjugates. The well-understood freq vs. time (or freq vs. space) is to me the prototype of all conjugates. Taking groups of sequential samples in time creates a large multidimensional Fourier problem like the spectrum analyzer. My having mentioned spectrograms was evidently problematic. Since p=mv, I always was suspicious of HUP because I know I can derive a continuous smooth function of velocity whenever I have a continuous smooth function of the position. However, MASS IS THE WAVE, not velocity. You must include enough SPACE to include one cycle of the matter wave (according to the Planck spatial frequency) in order to catch the momentum in your measurement. In this sense, all measurements are samples. The term is at once helpful and confusing. Like referring to a point mass – which cannot exist. Making this clear should have been easier being in a Planck subchapter. But it requires revealing the HUP formula was (really) a hundred years old already – it is Fourier's accomplishment. Heisenberg's genius (and several others) was they did not use Fourier at all. They did not see the obvious. They plowed into a new math called Quantum Mechanics. Along with that brings the difficulty of bulk phenomena and real physical systems. By the time QM matured (30's? 50's?), the Fourier aspect became canonical conjugates. Fourier transforms of transforms that are physical observables. WHAT WAS I TRYING TO DO? "The HUP is simply Fourier with Planck." Which it is. But how to say it without seeming to go way off topic; that is the trick. And then one good example: derive the single slit formula in one step, applicable to both photons AND matter waves.
The Classical Optics (i.e., geometrical) approach does not apply to quanta, let alone to matter waves. ItsTheEquations (talk) 23:32, 27 August 2012 (UTC)


 * ItsTheEquations: Again I applaud your passion for communicating this subject. I have to admit that I'm not a heavy thinker on this subject. That said, I continue to feel that sampling, and even Fourier, are a little peripheral to the issue central to H's Uncertainty, which is that we have two phenomena which are conjugates, and whose precision is mutually exclusive. That is to say, more precision in one variable requires more "extent" in the space of the other. In the audio, radio etc, precision in the characterization of frequency (ie: lower variance in stating the frequency) requires more time, and vice versa. Similarly in position vs momentum etc.


 * Fourier analysis inevitably incorporates this physical reality, and is very familiar to engineers and scientists, but is not the underlying reality itself. So I feel that Fourier perhaps doesn't have to make an appearance at the core of the explanation, and the sampling aspect of actual Fourier-employing instruments is even more of a distraction for readers not already familiar. To put it another way, I think the Uncertainty arises not particularly in the measuring of the phenomena (time-freq, or position-momentum); it's in simply making a statement about them. Gwideman (talk) 01:07, 28 August 2012 (UTC)

"Since p=mv, I always was suspicious of HUP because I know I can derive continuous smooth function of velocity whenever I have a continuous smooth function of the position." - No, not in QM. Assuming you know what particle you are looking at, you know its mass, so measuring velocity is equivalent to measuring momentum. In QM, you cannot have a continuous smooth function of position, because that would imply multiple measurements of exact position, with momentum being little affected. That denies HUP. If you make an exact position measurement, the momentum is completely unknown after that measurement, and upon the next measurement, the particle can be anywhere in the universe (no relativity here). Measure its position exactly again, and again the momentum is completely indeterminate, the next position measurement can be anywhere. If you relax your constraints on position, measure to within delta x, then the uncertainty in momentum will decrease to delta p, but always in accordance with HUP. If you finally go to exact measurement of momentum, then you could predict where it would be next, but you now have no idea where it is to begin with. Again, you cannot assign a clear trajectory to any particle. Its always going to a fuzzy kind of trajectory, constrained by HUP.

Also, the HUP is only equivalent to the continuous Fourier uncertainty when the variables are continuous (e.g. position and momentum, or time and energy). There are other conjugate variables that are not continuous (e.g. angular momentum and angular position). Continuous Fourier uncertainty is an example of HUP, but not the whole story.

When you make a measurement of position, you are not sampling (or measuring) the wave function. For a pure state, you know the wave function to begin with and it tells you the probability of getting a particular result when you measure the position of a particle. Suppose that measurement was "exact". You now have a wave function whose absolute value squared is a Dirac delta function, reflecting the fact that the position has no uncertainty. The momentum wave function's absolute value is completely spread out over momentum space, since the momentum wave function is the Fourier transform of the position wave function and reflects the fact that after your measurement, the momentum of the particle is totally arbitrary. The wave function tells you what you already know and predicts the probabilities of what you will measure, given that knowledge. Measurement (or "preparation") yields the wave function. The fact that this knowledge must be expressed in terms of position and momentum wave functions, and the fact that the position and the momentum wave functions are Fourier transforms automatically constrains your knowledge such that it cannot violate HUP.

Please note that I am not being very mathematically rigorous in talking about this limiting case of "exact" position and "totally arbitrary" momentum. The entire discussion can be redone replacing "exact" with "very small variance" and "totally arbitrary" with "very large variance". PAR (talk) 03:28, 28 August 2012 (UTC)
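The point above, that a measurement yields one outcome drawn from |psi|^2 rather than a "sample of the wave function", can be illustrated with a hedged sketch (the Gaussian state, its width, and the sample count are my own hypothetical choices):

```python
import numpy as np

# Hypothetical 1-D Gaussian pure state: |psi(x)|^2 is a normal density
# with standard deviation sigma. Each "measurement" returns one number
# drawn from that density; only an ensemble of identically prepared
# states reveals the spread that the wave function predicted.
rng = np.random.default_rng(0)
sigma = 1.5
outcomes = rng.normal(0.0, sigma, size=200_000)

# The empirical standard deviation of many outcomes recovers sigma;
# any single outcome, by itself, says nothing about the wave function.
print(outcomes.std())  # ~1.5
```

A single draw gives no density information, which matches the remark below that you cannot measure a density for a single particle with a single measurement.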


 * Wow, you guys. I'm real tired, so let me touch only a couple of points now; then I will sleep. Okay. PAR said "you know its mass, so measuring velocity is equivalent to measuring momentum." We are there: the mass is distributed in a matter wave. THERE IS NO PARTICLE. Everything is a wave now. The extent to which one needs to embrace space to measure the momentum of a "particle" is the extent to which the matter wave is spread out in space. That's all. I heard that superconducting electrons can be thirty feet long. (no ref) In theory, probably all particles are infinite in size; you easily get that impression by looking at the sinc function or other self-transforms: they go on forever in small amounts. Now from this point of view, one could say the delta_x was a precision tradeoff of some sort. But not knowing that particular type of wave, consider a wave where a particle is well contained in space. In this case HUP is NOT a precision tradeoff. Rather, it simply says you must include the whole "wave" or you will get a random number. Maybe you can average it to the right answer; maybe not. It depends on how far short of the whole thing you are. However, the important thing is that the error BEGINS at the HUP ">" point. Why? Because it is a Fourier transform. Do not think of Fourier as an advanced topic here. It is as fundamental as adding and multiplying. If you want to know "f" you must measure the whole wave, sin(2πft). Wanting to know p(x) or E(t) is tantamount to knowing "f" of the "quantum wave." Substituting the de Broglie relations for "f" introduces Planck's constant, and THOSE are the HUP equations. The derivation of the "exact" HUP (which Heisenberg never accomplished) involves the acknowledgement of bulk phenomena, changing the deltas to sigmas. I would not try to go THERE in a subchapter under Planck. Besides, the delta form of HUP is the one intro physics texts usually quote and use for simple HUP examples. ItsTheEquations (talk) 05:07, 28 August 2012 (UTC)
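The classical side of the "you must measure the whole wave to know f" argument can be checked numerically. A hedged sketch (tone frequency, sample rate, and window lengths are all illustrative choices, not from the discussion): truncating sin(2πft) to a record of length T smears its spectral peak to a width of roughly 1/T.

```python
import numpy as np

# Estimate the spectral peak width of a pure tone observed through
# rectangular windows of increasing length T. The half-maximum width
# of the power spectrum shrinks as the observation window grows.
f0, fs = 5.3, 1000.0               # tone frequency (Hz) and sample rate
widths = []
for T in (0.5, 2.0, 8.0):          # observation windows, seconds
    t = np.arange(0.0, T, 1.0 / fs)
    spec = np.abs(np.fft.rfft(np.sin(2.0 * np.pi * f0 * t))) ** 2
    bin_hz = fs / len(t)           # DFT bin spacing = 1/T
    fwhm = (spec >= spec.max() / 2).sum() * bin_hz
    widths.append(fwhm)
    print(T, fwhm)                 # longer window -> narrower peak
```

This is the classical Fourier tradeoff only; it becomes the quantum HUP once the de Broglie relations tie f to momentum or energy, which is where Planck's constant enters.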


 * No, you are still talking about measuring the quantum wave, which is wrong. A superconducting electron is not thirty feet long. The standard deviation of the square of the absolute value of the positional wave function is thirty feet, and that is not the same thing. It tells you that if you do measure the position of a superconducting electron described by such a wave to within a delta x much smaller than thirty feet, you will not be able to predict that position (before the measurement) to better than about thirty feet. After you make the measurement, the wave function will be different: it will represent your new knowledge of the electron, and its absolute value squared will have a standard deviation of delta x. Also, matter is not distributed on the wave. If it were, you could measure a density for a single particle, and you cannot do that with a single measurement. PAR (talk) 09:49, 28 August 2012 (UTC)


 * You should read Wikipedia. http://en.wikipedia.org/wiki/Observer_effect_(physics)#Quantum_mechanics (Even that sounds like you wrote it: Many Worlds, etc.) You are wrong going and coming. There is no point in my wasting my time with your being so certain of your wrong "facts." I knew I should not have come back here. And YOU are the editor, eh? Good grief! I am done. This is all yours again. Enjoy looking at it. 71.22.238.46 (talk) 01:56, 29 August 2012 (UTC)


 * A few points:
 * I just entered this conversation a few days ago, and I find no entry by you (71.22.238.46) on this entire page, so perhaps you are addressing someone else? Or perhaps you forgot to log in?
 * I am a minor contributor to this article. The last technical contribution I made to this article, other than the one a few days ago, was four years ago. I have never even touched the Quantum mechanics article. You have me confused with someone else.
 * There is no one "editor" of this article.
 * I am not "certain" of anything I have written. If you can explain where and why you disagree and it seems reasonable to me, I will stop explaining and start learning.
 * PAR (talk) 08:04, 29 August 2012 (UTC)