
In physics, specifically quantum mechanics, the Schrödinger equation is an equation that describes how the quantum state of a physical system changes in time. It is as central to quantum mechanics as Newton's laws are to classical mechanics.

In the standard interpretation of quantum mechanics, the quantum state, also called a wavefunction or state vector, is the most complete description that can be given to a physical system. Solutions to Schrödinger's equation describe not only atomic and subatomic systems, atoms and electrons, but also macroscopic systems, possibly even the whole universe. The equation is named after Erwin Schrödinger, who constructed it in 1926.

The most general form is the time dependent Schrödinger equation, which gives a description of a system evolving with time. For systems in a stationary state, the time independent Schrödinger equation is sufficient. Approximate solutions to the time independent Schrödinger equation are commonly used to calculate the energy and other properties of isolated atoms and molecules, and the chemical structure of molecules.

Schrödinger's equation can be mathematically transformed into Werner Heisenberg's matrix mechanics, and into Richard Feynman's path integral formulation. The Schrödinger equation describes time in a way that is inconvenient for relativistic theories, a problem which is not as severe in Heisenberg's formulation and completely absent in the path integral.

The Schrödinger equation
The Schrödinger equation takes several different forms, depending on the physical situation. This section presents the equation for the general case and for the simple case encountered in many textbooks.

General quantum system
For a general quantum system:


 * $$i\hbar\frac{\partial}{\partial t} \Psi = \hat H \Psi$$

where
 * $$i$$ is the imaginary unit
 * $$\hbar$$ is the reduced Planck constant (sometimes normalized to 1 in natural units, in order to simplify the math)
 * $$\hat H$$ is the Hamiltonian operator.
 * $$\Psi$$ is the wave function; the probability amplitude for different configurations of the system at different times

Single particle in a potential
For a single particle with potential energy V, the Schrödinger equation takes the form:


 * $$i\hbar\frac{\partial}{\partial t} \Psi(\mathbf{r},\,t) = -\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{r},\,t) + V(\mathbf{r})\Psi(\mathbf{r},\,t)$$

where
 * $$-\frac{\hbar^2}{2m}\nabla^2$$ is the kinetic energy operator, where m is the mass of the particle.
 * $$\nabla^2$$ is the Laplace operator. In three dimensions, the Laplace operator is $$\frac{\partial^2}{{\partial x}^2} + \frac{\partial^2}{{\partial y}^2} + \frac{\partial^2}{{\partial z}^2}$$, where x, y, and z are the Cartesian coordinates of space.
 * $$V\left(\mathbf{r}\right)$$ is the time-independent potential energy at the position r.
 * $$\Psi(\mathbf{r},\,t)$$ is the probability amplitude for the particle to be found at position r at time t.

Time independent equation
The time independent equation, again for a single particle with potential energy V takes the form:


 * $${E}\psi(r) = - {\hbar^2 \over 2m} \nabla^2 \psi(r) + V(r) \psi(r)$$

This equation describes the standing wave solutions of the time-dependent equation, which are the states with definite energy.

Historical background and development
Albert Einstein interpreted Planck's quanta to be photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, a form of wave–particle duality. Since energy and momentum are related in the same way as frequency and wavenumber in special relativity, it followed that the momentum p of a photon is proportional to its wavenumber k.


 * $$p = \frac{h}{\lambda} = \hbar k$$

Louis de Broglie hypothesized that this is true for all particles, not just light. Assuming that the waves travel roughly along classical paths, he showed that they form standing waves for certain discrete frequencies. These correspond to discrete energy levels, which reproduced the old quantum condition.

Following up on these ideas, Schrödinger decided to find a proper wave equation for the electron. He was guided by William R. Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system: the trajectories of light rays become sharp tracks which obey Fermat's principle, an analog of the principle of least action.

Hamilton believed that mechanics was the zero-wavelength limit of wave propagation, but did not formulate an equation for those waves.

A modern version of his reasoning is reproduced in the next section. The equation he found is:


 * $$i\hbar \frac{\partial}{\partial t}\Psi(x,\,t)=-\frac{\hbar^2}{2m}\nabla^2\Psi(x,\,t) + V(x)\Psi(x,\,t).$$

Using this equation, Schrödinger computed the hydrogen spectral series by treating a hydrogen atom's electron as a wave Ψ(x, t) moving in a potential well V created by the proton. This computation accurately reproduced the energy levels of the Bohr model.

However, by that time, Arnold Sommerfeld had refined the Bohr model with relativistic corrections. Schrödinger used the relativistic energy momentum relation to find what is now known as the Klein–Gordon equation in a Coulomb potential (in natural units):


 * $$\left(E + {e^2\over r} \right)^2 \psi(x) = - \nabla^2\psi(x) + m^2 \psi(x).$$

He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself in an isolated mountain cabin with a lover.

While at the cabin, Schrödinger decided that his earlier non-relativistic calculations were novel enough to publish, and decided to leave off the problem of relativistic corrections for the future. He put together his wave equation and the spectral analysis of hydrogen in a paper in 1926. The paper was enthusiastically endorsed by Einstein, who saw the matter-waves as an intuitive depiction of nature, as opposed to Heisenberg's matrix mechanics, which he considered overly formal.

The Schrödinger equation details the behaviour of ψ but says nothing of its nature. Schrödinger tried to interpret it as a charge density in his fourth paper, but he was unsuccessful. In 1926, just a few days after Schrödinger's fourth and final paper was published, Max Born successfully interpreted ψ as a probability amplitude. Schrödinger, though, always opposed a statistical or probabilistic approach, with its associated discontinuities (much like Einstein, who believed that quantum mechanics was a statistical approximation to an underlying deterministic theory) and never reconciled with the Copenhagen interpretation.

Short heuristic derivation
Schrödinger's equation can be derived in the following short heuristic way.

Assumptions

 * 1) The total energy E of a particle is
 * $$E = T + V = \frac{p^2}{2m}+V.$$
 * This is the classical expression for a particle with mass m, where the total energy E is the sum of the kinetic energy T and the potential energy V (which can vary with position and time). p and m are respectively the momentum and the mass of the particle.
 * 2) Einstein's light quanta hypothesis of 1905, which asserts that the energy E of a photon is proportional to the frequency ν (or angular frequency, ω = 2πν) of the corresponding electromagnetic wave:
 * $$E = h\nu = \hbar \omega \;,$$
 * 3) The de Broglie hypothesis of 1924, which states that any particle can be associated with a wave, and that the momentum p of the particle is related to the wavelength λ (or wavenumber k) of such a wave by:
 * $$p = \frac{h}{\lambda} = \hbar k\;,$$
 * Expressing p and k as vectors, we have
 * $$\mathbf{p} =\hbar \mathbf{k}\;.$$
 * 4) The three assumptions above allow one to derive the equation for plane waves only. To conclude that it is true in general requires the superposition principle, and thus, one must separately postulate that the Schrödinger equation is linear.

Expressing the wave function as a complex plane wave
Schrödinger expressed the phase of a plane wave as a complex phase factor:


 * $$\Psi(\mathbf{x},t) = Ae^{i(\mathbf{k}\cdot\mathbf{x}- \omega t)}$$

and realized that since


 * $$ \frac{\partial}{\partial t} \Psi = -i\omega \Psi $$

then


 * $$ E \Psi = \hbar \omega \Psi = i\hbar\frac{\partial}{\partial t} \Psi $$

and similarly since


 * $$ \frac{\partial}{\partial x} \Psi = i k_x \Psi $$

and


 * $$ \frac{\partial^2}{\partial x^2} \Psi = - k_x^2 \Psi $$

we find:


 * $$ p_x^2 \Psi = (\hbar k_x)^2 \Psi = -\hbar^2\frac{\partial^2}{\partial x^2} \Psi $$

so that, again for a plane wave, he obtained:


 * $$ p^2 \Psi = (p_x^2 + p_y^2 + p_z^2) \Psi = -\hbar^2\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right) \Psi = -\hbar^2\nabla^2 \Psi $$

And by inserting these expressions for the energy and momentum into the classical formula we started with we get Schrödinger's famed equation for a single particle in the 3-dimensional case in the presence of a potential V:


 * $$i\hbar\frac{\partial}{\partial t}\Psi=-\frac{\hbar^2}{2m}\nabla^2\Psi + V\Psi$$
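
The derivation above can be checked symbolically. The following sketch (using sympy, and assuming the free case V = 0) confirms that a plane wave whose frequency obeys ħω = ħ²k²/2m satisfies the equation:

```python
# Symbolic check (sympy sketch) that a plane wave solves the free 1-D
# Schrödinger equation when the frequency obeys the dispersion relation
# hbar*omega = hbar**2 * k**2 / (2*m) derived above.
import sympy as sp

x, t, k, m, hbar = sp.symbols('x t k m hbar', positive=True)
omega = hbar * k**2 / (2 * m)        # dispersion relation from the derivation
Psi = sp.exp(sp.I * (k * x - omega * t))

lhs = sp.I * hbar * sp.diff(Psi, t)                # i hbar dPsi/dt
rhs = -hbar**2 / (2 * m) * sp.diff(Psi, x, 2)      # -(hbar^2/2m) d^2Psi/dx^2
residual = sp.simplify(lhs - rhs)                  # vanishes identically
```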

Longer discussion
The particle is described by a wave; the angular frequency ω is related to the energy E of the particle, while the momentum p is related to the wavenumber k. Because of special relativity, these are not two separate assumptions.


 * $$E= \hbar \omega \;\;\;\;p= \hbar k \,$$

The total energy is the same function of momentum and position as in classical mechanics:


 * $$E = T(p) + V(x) = {p^2\over 2m} + V(x)$$

where the first term T(p) is the kinetic energy and the second term V(x) is the potential energy.

Schrödinger required that a wave packet at position x with wavenumber k would move along the trajectory determined by Newton's laws in the limit that the wavelength is small.

Consider first the case without a potential, thus no potential energy (V = 0).


 * $$E = {1 \over 2m} (p_x^2+p_y^2 + p_z^2)$$


 * $$ \omega = {\hbar \over 2m} (k_x^2 + k_y^2 + k_z^2) $$

So that a plane wave with the right energy/frequency relationship obeys the free Schrödinger equation:


 * $$i \hbar {\partial \over \partial t} \Psi = -{\hbar^2 \over 2m} \left( {\partial^2 \Psi \over \partial x^2} + {\partial^2 \Psi \over \partial y^2} + {\partial^2 \Psi \over \partial z^2} \right)$$

and by adding together plane waves, you can make an arbitrary wave.

When there is no potential, a wavepacket should travel in a straight line at the classical velocity. The velocity v of a wavepacket is:


 * $$v = {\partial \omega \over \partial k } = {\partial \over \partial k} {\hbar k^2 \over 2m} = {\hbar k\over m}$$

which is the momentum over the mass as it should be. This is one of Hamilton's equations from mechanics:


 * $${dx \over dt} = {\partial H \over \partial p}$$

after identifying the energy and momentum of a wavepacket as the frequency and wavenumber.
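
This identification can be verified with a one-line symbolic computation (a sympy sketch of the free dispersion relation above):

```python
# Symbolic check of the group velocity: with omega(k) = hbar*k**2/(2*m),
# v = d(omega)/dk = hbar*k/m, the classical momentum over mass.
import sympy as sp

k, m, hbar = sp.symbols('k m hbar', positive=True)
omega = hbar * k**2 / (2 * m)    # free-particle dispersion relation
v_group = sp.diff(omega, k)      # group velocity d(omega)/dk
```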

To include a potential energy, consider that as a particle moves the energy is conserved, so that for a wavepacket with approximate wavenumber k at approximate position x the quantity


 * $$ {\hbar^2 k^2\over 2m } + V(x) $$

must be constant. The frequency doesn't change as a wave moves, but the wavenumber does. So where there is a potential energy, it must add in the same way:


 * $$i \hbar {\partial \over \partial t}\Psi = -{\hbar^2 \over 2m}\nabla^2\Psi + V(x)\Psi$$

This is the time dependent Schrödinger equation. It is the equation for the energy in classical mechanics, turned into a differential equation by substituting:


 * $$E\rightarrow i \hbar {\partial\over \partial t} \;\;\;\;\;\; p\rightarrow -i \hbar {\partial\over \partial x}$$

Schrödinger studied the standing wave solutions, since these were the energy levels. Standing waves have a complicated dependence on space, but vary in time in a simple way:


 * $$\Psi(x,t) = \psi(x) e^{- i\frac{E}{\hbar}t}\,$$

substituting, the time-dependent equation becomes the standing wave equation:


 * $${E}\psi(x) = - {\hbar^2 \over 2m} \nabla^2 \psi(x) + V(x) \psi(x)$$

which is the original time-independent Schrödinger equation.
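
The separation of variables can be checked symbolically. In this sympy sketch, ψ and V are left as arbitrary functions; dividing the phase out of the time-dependent equation leaves exactly the standing wave equation:

```python
# Symbolic check of separation of variables: substituting
# Psi(x,t) = psi(x) * exp(-i*E*t/hbar) into the time-dependent equation
# and removing the phase leaves E*psi = -(hbar^2/2m)*psi'' + V*psi.
import sympy as sp

x, t = sp.symbols('x t', real=True)
E, m, hbar = sp.symbols('E m hbar', positive=True)
psi = sp.Function('psi')(x)      # arbitrary spatial profile
V = sp.Function('V')(x)          # arbitrary potential
Psi = psi * sp.exp(-sp.I * E * t / hbar)

lhs = sp.I * hbar * sp.diff(Psi, t)
rhs = -hbar**2 / (2 * m) * sp.diff(Psi, x, 2) + V * Psi
residual = sp.simplify((lhs - rhs) * sp.exp(sp.I * E * t / hbar))
# residual equals E*psi + (hbar^2/2m)*psi'' - V*psi, which vanishes
# exactly when the time-independent equation holds
```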

In a potential gradient, the k-vector of a short-wavelength wave must vary from point to point, to keep the total energy constant. Sheets perpendicular to the k-vector are the wavefronts, and they gradually change direction, because the wavelength is not everywhere the same. A wavepacket follows the shifting wavefronts with the classical velocity, with the acceleration equal to the force divided by the mass.

An easy modern way to verify that Newton's second law holds for wavepackets is to take the Fourier transform of the time dependent Schrödinger equation. For an arbitrary polynomial potential this is called the Schrödinger equation in the momentum representation:


 * $$i \hbar {\partial \Psi(p,\,t) \over \partial t} = {p^2\over 2m} \Psi(p,\,t) + V(i \hbar {\partial\over \partial p}) \Psi(p,\,t)$$

The group-velocity relation for the Fourier-transformed wavepacket gives the second of Hamilton's equations:


 * $${dp \over dt} = -{\partial H \over \partial x}$$

Versions
There are several equations which go by Schrödinger's name:

Time dependent equation
This is the equation of motion for the quantum state. In the most general form, it is written:


 * $$i \hbar{\partial \over \partial t} \Psi(x,\,t) = \hat H \Psi(x,\,t)$$

where $$\hat H$$ is a linear operator acting on the wavefunction Ψ: $$\hat H$$ takes as input one Ψ and produces another in a linear way, a function-space version of a matrix multiplying a vector. For the specific case of a single particle in one dimension moving under the influence of a potential V,


 * $$i \hbar {\partial \over \partial t} \Psi(x,\,t)= -{\hbar^2 \over 2m} {\partial^2 \over \partial x^2} \Psi(x,\,t)+ V(x)\Psi(x,\,t)\,$$

and the operator $$\hat H$$ can be read off:


 * $$\hat H = -{\hbar^2 \over 2m} {\partial^2 \over \partial x^2} + V(x)\,$$

It is a combination of the operator which takes the second derivative, and the operator which pointwise multiplies Ψ by V(x). When acting on Ψ it reproduces the right hand side.

For a particle in three dimensions, the only difference is more derivatives:


 * $$i \hbar {\partial \over \partial t}\Psi(x,\,t)= -{\hbar^2 \over 2m} \nabla^2 \Psi(x,\,t)+ V(x)\Psi(x,\,t)\,$$

and for N particles, the difference is that the wavefunction is a function on 3N-dimensional configuration space, the space of all possible particle positions.


 * $$i \hbar {\partial \over \partial t} \Psi(x_1,...,x_N,t) = \left(-{\hbar^2\nabla_1^2\over 2m_1} - {\hbar^2\nabla_2^2 \over 2m_2} - \,...\, - {\hbar^2\nabla_N^2\over 2m_N} \right) \Psi(x_1,...,x_N,t) + V(x_1,...,x_N,t)\Psi(x_1,...,x_N,t)\,$$

This last equation is in a very high dimension, so that the solutions are not easy to visualize.

Time independent equation
This is the equation for the standing waves, the eigenvalue equation for $$\hat H$$. In abstract form, for a general quantum system, it is written:


 * $$\hat H \psi = E \psi\,$$

For a particle in one dimension,


 * $$E \psi = -\frac{\hbar^2}{2m}{\partial^2 \psi \over \partial x^2} + V(x)\psi\,$$

But there is a further restriction—the solution must not grow at infinity, so that it has either a finite L2-norm (if it is a bound state) or a slowly diverging norm (if it is part of a continuum):


 * $$\| \psi \|^2 = \int |\psi(x)|^2\, dx\,$$

For example, when there is no potential, the equation reads:


 * $$ - E \psi = \frac{\hbar^2}{2m}{\partial^2 \psi \over \partial x^2}\,$$

which has oscillatory solutions for E > 0 (the Cn are arbitrary constants):


 * $$\psi_E(x) = C_1 e^{i\sqrt{2mE/\hbar^2}\,x} + C_2 e^{-i\sqrt{2mE/\hbar^2}\,x}\,$$

and exponential solutions for E < 0


 * $$\psi_{-|E|}(x) = C_1 e^{\sqrt{2m|E|/\hbar^2}\,x} + C_2 e^{-\sqrt{2m|E|/\hbar^2}\,x}\,$$

The exponentially growing solutions have an infinite norm, and are not physical. They are not allowed in a finite volume with periodic or fixed boundary conditions.

For a constant potential V the solution is oscillatory for E > V and exponential for E < V, corresponding to energies which are allowed or disallowed in classical mechanics. Oscillatory solutions have a classically allowed energy and correspond to actual classical motions, while the exponential solutions have a disallowed energy and describe a small amount of quantum bleeding into the classically disallowed region, which leads to quantum tunneling. If the potential V grows at infinity, the motion is classically confined to a finite region, which means that in quantum mechanics every solution becomes an exponential far enough away. The condition that the exponential is decreasing restricts the energy levels to a discrete set, called the allowed energies.
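
The discreteness of the allowed energies can be seen numerically. The sketch below assumes ħ = m = 1 and the confining harmonic potential V(x) = x²/2 (whose exact levels are n + 1/2), and diagonalizes a finite-difference approximation of the Hamiltonian:

```python
# Finite-difference sketch: a confining potential V(x) = x**2/2 gives a
# discrete set of allowed energies (assumed units hbar = m = 1).
import numpy as np

N, L = 1000, 20.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + V(x), with the second derivative replaced by the
# standard three-point stencil (psi[j+1] - 2*psi[j] + psi[j-1]) / dx^2
diag = 1.0 / dx**2 + 0.5 * x**2
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)[:4]
# the lowest levels come out close to the exact 0.5, 1.5, 2.5, 3.5
```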

Energy eigenstates
A solution ψE(x) of the time independent equation is called an energy eigenstate with energy E:


 * $$\hat H \psi_E = E \psi_E\,$$

To find the time dependence of the state, consider starting the time-dependent equation with an initial condition ψE(x). The time derivative at t = 0 is everywhere proportional to the value:


 * $$\left.i\hbar {\partial \over \partial t} \Psi(x,\,t)\right|_{t=0}= \left.\left(\hat H \Psi(x,\,t)\right)\right|_{t=0} = \hat H \psi_E(x)=E \psi_E(x)\,$$

So at first the whole function is simply rescaled, and it maintains the property that its time derivative is proportional to itself, because $$\hat H$$ is linear. So for all times,


 * $$\Psi(x,\,t)= A(t) \psi_E(x)\,$$

substituting,


 * $$i\hbar {dA \over dt } = E A\,$$

So that the solution of the time-dependent equation with this initial condition is:


 * $$\Psi(x,\,t) = \psi_E(x) e^{-i{E t/\hbar}}\,$$

This is a restatement of the fact that solutions of the time-independent equation are the standing wave solutions of the time dependent equation. They only get multiplied by a phase as time goes by, and otherwise are unchanged. Since $$ |\psi_E(x) e^{-i{E t/\hbar}}|^2$$ is time-independent they are called stationary states.

Superpositions of energy eigenstates change their properties according to the relative phases between the energy levels.

Nonlinear equation
The nonlinear Schrödinger equation is the partial differential equation (in natural units)


 * $$i\partial_t\psi=-{1\over 2}\partial^2_x\psi+\kappa|\psi|^2 \psi$$

for the complex field ψ.

This equation arises from the Hamiltonian


 * $$H=\int \mathrm{d}x \left[{1\over 2}|\partial_x\psi|^2+{\kappa \over 2}|\psi|^4\right]$$

with the Poisson brackets


 * $$\{\psi(x),\psi(y)\}=\{\psi^*(x),\psi^*(y)\}=0 \, $$


 * $$\{\psi^*(x),\psi(y)\}=i\delta(x-y). \,$$

Note that this is a classical field equation. Unlike its linear counterpart, it never describes the time evolution of a quantum state.

First order in time
The Schrödinger equation describes the time evolution of a quantum state, and must determine the future value from the present value. A classical field equation can be second order in time derivatives, because the classical state can include the time derivative of the field. But a quantum state is a full description of a system, so the Schrödinger equation is always first order in time.

Linear
The Schrödinger equation is linear in the wavefunction: if $$\Psi_A(x, t)$$ and $$\Psi_B(x,t)$$ are solutions to the time dependent equation, then so is $$ a \Psi_A+ b \Psi_B$$, where a and b are any complex numbers.

In quantum mechanics, the time evolution of a quantum state is postulated to be always linear, and this has been confirmed in experiments to an astonishing precision. Although there are nonlinear versions of the Schrödinger equation, these are not equations which describe the evolution of a quantum state, but classical field equations like Maxwell's equations or the Klein–Gordon equation.

The Schrödinger equation itself can be applied to classical fields in some contexts, such as for a coherent matter wave of a Bose–Einstein condensate or a superfluid with a large indefinite number of (noninteracting or weakly interacting) particles and a definite phase and amplitude. This cannot be done with interacting systems, however, since this would violate the linearity postulate. Fields and wavefunctions are not the same thing.

Real eigenstates
The time-independent equation is also linear, but in this case linearity has a slightly different meaning. If two wavefunctions ψ1 and ψ2 are solutions to the time-independent equation with the same energy E, then any linear combination of the two is a solution with energy E. Two different solutions with the same energy are called degenerate.


 * $$\hat H (a\psi_1 + b \psi_2 ) = ( a \hat H \psi_1 + b \hat H \psi_2) = E (a \psi_1 + b\psi_2)\,$$

In an arbitrary potential, there is one obvious degeneracy: if a wavefunction $$\psi$$ solves the time-independent equation, so does ψ*. By taking linear combinations, the real and imaginary parts of ψ are each solutions. So restricting attention to real valued wavefunctions does not affect the time-independent eigenvalue problem.

In the time-dependent equation, complex conjugate waves move in opposite directions. Given a solution Ψ(x, t) to the time dependent equation, the replacement:


 * $$\Psi(x,\,t) \rightarrow \Psi^*(x,\, - t)\,$$

produces another solution, and is the extension of the complex conjugation symmetry to the time-dependent case. The symmetry of complex conjugation is called time-reversal.

Unitary time evolution
The Schrödinger equation is unitary, which means that the total norm of the wavefunction, the integral of its squared magnitude over all positions:


 * $$\int_x \Psi^*(x,\,t) \Psi(x,\,t) dx = \langle \Psi| \Psi\rangle\,$$

has zero time derivative.

The derivative of Ψ*(x, t) follows from the complex conjugate equation


 * $$-i \hbar {\partial \over \partial t} \Psi^*(x,\,t)= \hat H^\dagger \Psi^*(x,\,t)\,$$

where the operator $$H^\dagger$$ is defined as the continuous analog of the Hermitian conjugate,


 * $$\langle \hat H^{\dagger} \eta | \Psi\rangle = \langle \eta | \hat H \Psi\rangle\,$$

For a discrete basis, this just means that the matrix elements of the linear operator $$\hat H$$ obey:


 * $$\hat H_{ij} = \hat H^*_{ji}\,$$

The derivative of the inner product is:


 * $${d\over dt} \langle \Psi| \Psi\rangle = {i \over \hbar} \left( \langle \hat H \Psi | \Psi\rangle - \langle \Psi| \hat H \Psi\rangle \right)\,$$

and is proportional to the imaginary part of $$\langle \Psi | \hat H \Psi \rangle$$. If $$\hat H$$ is self-adjoint, this imaginary part vanishes and the probability is conserved. This is true not just for the Schrödinger equation as written, but for the Schrödinger equation with non-local hopping:


 * $$i\hbar{\partial \over \partial t} \Psi(x,\,t) = \int_y H(x,y) \Psi(y,\,t)\,$$

so long as:


 * $$H(x,y) = H(y,x)^*\,$$

the particular choice:


 * $$H(x,y) = - {\hbar^2 \over 2m} \nabla_x^2 \delta(x-y) + V(x) \delta(x-y)\,$$

reproduces the local hopping in the ordinary Schrödinger equation. On a discrete lattice approximation to a continuous space, H(x, y) has a simple form (in natural units):


 * $$H(x,y) = -{1 \over 2m} \,$$

whenever x and y are nearest neighbors. On the diagonal


 * $$H(x,x) = +{n \over 2m} + V(x)\,$$

where n is the number of nearest neighbors.
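
A sketch of this lattice Hamiltonian on a one-dimensional chain (numpy; natural units, unit lattice spacing, and an arbitrary sample potential chosen purely for illustration) confirms that it is Hermitian and that the evolution it generates conserves the norm:

```python
# Nearest-neighbour lattice Hamiltonian on a 1-D chain (natural units,
# lattice spacing 1): H(x,y) = -1/(2m) for nearest neighbours and
# H(x,x) = n/(2m) + V(x), where n is the number of neighbours of site x.
import numpy as np

N, m = 60, 1.0
V = 0.05 * (np.arange(N) - N / 2)**2       # arbitrary sample potential
H = np.zeros((N, N))
for i in range(N):
    n = (i > 0) + (i < N - 1)              # n = 1 at the ends, 2 inside
    H[i, i] = n / (2 * m) + V[i]
    if i + 1 < N:
        H[i, i + 1] = H[i + 1, i] = -1 / (2 * m)

hermitian = np.allclose(H, H.conj().T)     # H(x,y) = H(y,x)*

# A Hermitian H generates unitary evolution; evolve an arbitrary state by
# exp(-iHt) via the eigendecomposition and check that the norm is conserved.
w, U = np.linalg.eigh(H)
rng = np.random.default_rng(0)
psi0 = rng.normal(size=N) + 1j * rng.normal(size=N)
psi0 /= np.linalg.norm(psi0)
psi_t = U @ (np.exp(-1j * w * 0.7) * (U.conj().T @ psi0))
norm_t = np.linalg.norm(psi_t)             # stays 1 up to round-off
```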

Positivity of energy
If the potential is bounded from below, the eigenfunctions of the Schrödinger equation have energy which is also bounded from below. This can be seen most easily by using the variational principle, as follows.

For any self-adjoint linear operator A bounded from below, the eigenvector with the smallest eigenvalue is the vector |ψ⟩ that minimizes the quantity


 * $$\langle \psi |A|\psi \rangle$$

over all ψ which are normalized:


 * $$\|\psi\|^2 = \int_x |\psi(x)|^2 =\langle \psi | \psi \rangle = 1\,$$

In this way, the smallest eigenvalue is expressed through the variational principle.

For the Schrödinger Hamiltonian H bounded from below, the smallest eigenvalue is called the ground state energy. That energy is the minimum value of


 * $$\langle \psi|H|\psi\rangle = \int_x \psi^*(x) ( - \frac{\hbar^2}{2m} \nabla^2 \psi + V(x)\psi) = \int_x \frac{\hbar^2}{2m}|\nabla \psi|^2 + V(x) |\psi|^2 dx$$

(we used an integration by parts). The right hand side is never smaller than the smallest value of V(x); in particular, the ground state energy is positive when V(x) is everywhere positive.
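
The variational principle can be illustrated concretely. The following sympy sketch assumes ħ = m = 1 and V(x) = x²/2, and minimizes ⟨ψ|H|ψ⟩ over a Gaussian trial family; the minimum lies above the minimum of V, as the argument requires:

```python
# Variational sketch for V(x) = x**2/2 with hbar = m = 1 (assumed units):
# the Gaussian trial family psi_a(x) = (a/pi)**(1/4) * exp(-a*x**2/2)
# gives <H> = a/4 + 1/(4a), minimized at a = 1 with energy 1/2 >= min V = 0.
import sympy as sp

x = sp.symbols('x', real=True)
a = sp.symbols('a', positive=True)
psi = (a / sp.pi)**sp.Rational(1, 4) * sp.exp(-a * x**2 / 2)

# integrated-by-parts kinetic energy plus potential energy, as in the text
kinetic = sp.integrate(sp.diff(psi, x)**2 / 2, (x, -sp.oo, sp.oo))
potential = sp.integrate(x**2 / 2 * psi**2, (x, -sp.oo, sp.oo))
E = sp.simplify(kinetic + potential)                  # a/4 + 1/(4a)

a_min = [s for s in sp.solve(sp.diff(E, a), a) if s.is_positive][0]
E0 = sp.simplify(E.subs(a, a_min))                    # variational estimate
```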

Positive definite nondegenerate ground state
For potentials V(x) that are bounded below and are not infinite in a way that divides space into regions inaccessible by quantum tunneling, there is a ground state which minimizes the integral above. The lowest energy wavefunction is real and nondegenerate and has the same sign everywhere.

To prove this, let the ground state wavefunction be ψ. The real and imaginary parts are separately ground states, so it is no loss of generality to assume that ψ is real. Suppose now, for contradiction, that ψ changes sign. Define η(x) to be the absolute value of $$\psi$$.


 * $$\eta=|\psi|$$

The potential and kinetic energy integrals for η are equal to those for ψ, except that η has a kink wherever ψ changes sign. The integrated-by-parts expression for the kinetic energy is the integral of the squared magnitude of the gradient, and it is always possible to round out the kink in such a way that the gradient gets smaller at every point, so that the kinetic energy is reduced.

This also proves that the ground state is non-degenerate. If there were two ground states ψ1(x) and ψ2(x) not proportional to each other and both everywhere non-negative then a linear combination of the two is still a ground state, but it can be made to have a sign change.

For one-dimensional potentials, every eigenstate is non-degenerate, because the number of sign changes is equal to the level number.

Already in two dimensions, it is easy to get a degeneracy: for example, if a particle is moving in a separable potential V(x, y) = U(x) + W(y), then the energy levels are sums of the energies of the one-dimensional problems. It is easy to see that, by adjusting the overall scale of U and W, the levels can be made to collide.

For standard examples, the three-dimensional harmonic oscillator and the central potential, the degeneracies are a consequence of symmetry.

Completeness
The energy eigenstates form a basis—any wavefunction may be written as a sum over the discrete energy states or an integral over continuous energy states, or more generally as an integral over a measure. This is the spectral theorem in mathematics, and in a finite state space it is just a statement of the completeness of the eigenvectors of a Hermitian matrix.

Local conservation of probability
The probability density of a particle is $$\Psi^*(x,\,t)\Psi(x,\,t)$$. The probability flux is defined as [in units of (probability)/(area × time)]:


 * $$ \mathbf{j} = -{i\hbar \over 2m} \left( \Psi^{*} \nabla \Psi - \Psi\nabla \Psi^{*} \right)  = {\hbar \over m} \operatorname{Im} \left( \Psi ^{*} \nabla \Psi\right) $$

The probability flux satisfies the continuity equation:


 * $${ \partial \over \partial t} P\left(x,t\right) + \nabla \cdot \mathbf{j} = 0 $$

where $$P\left(x, t\right)$$ is the probability density [measured in units of (probability)/(volume)]. This equation is the mathematical equivalent of the probability conservation law.

For a plane wave:


 * $$ \Psi(x,t) = \, A e^{ \mathrm{i} (k x - \omega t)} $$


 * $$ j\left(x,t\right) = \left|A\right|^2 {\hbar k \over m}$$

So not only is the probability of finding the particle the same everywhere, but the probability flux is as expected for an object moving at the classical velocity p/m. The reason the Schrödinger equation admits a probability flux is that all the hopping is local and forward in time.
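
The plane wave flux can be reproduced symbolically (a sympy sketch, with all symbols taken real and positive for simplicity):

```python
# Symbolic check: the probability flux j = (hbar/m) Im(Psi* dPsi/dx)
# evaluated on the plane wave Psi = A exp(i(kx - wt)) gives |A|^2 hbar k / m.
import sympy as sp

x, t, k, w, m, hbar, A = sp.symbols('x t k omega m hbar A', positive=True)
Psi = A * sp.exp(sp.I * (k * x - w * t))

current = sp.simplify(Psi.conjugate() * sp.diff(Psi, x))   # = i*k*A**2
j = sp.simplify(hbar / m * sp.im(current))
# j equals A**2 * hbar * k / m, uniform in x and t
```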

Heisenberg observables
There are many linear operators which act on the wavefunction; each one defines a Heisenberg matrix when the energy eigenstates are discrete. For a single particle, the operator which takes the derivative of the wavefunction in a certain direction:


 * $$\hat p = -{i\hbar {\partial \over \partial x}}$$

is called the momentum operator. Multiplying operators is just like multiplying matrices: the product of A and B acting on ψ is A acting on the output of B acting on $$\psi$$.

An eigenstate of p obeys the equation:


 * $$\hat p \psi = k \psi\,$$

for a number k, and for a normalizable wavefunction this restricts k to be real; the momentum eigenstate is a wave with wavenumber k.


 * $$\psi(x) = e^{i kx}\,$$

The position operator x multiplies the value of the wavefunction at each position x by x:


 * $$\hat x(\psi) = x\psi$$

So that in order to be an eigenstate of x, a wavefunction must be entirely concentrated at one point:


 * $$\hat x \delta(x-x_0) = x_0 \delta(x-x_0)$$

In terms of p, the Hamiltonian is:


 * $$\hat H = {\hat p^2\over 2m} + V(x)$$

It is easy to verify that p acting on x acting on ψ gives:


 * $$\hat p (\hat x( \psi)) = -i \hbar {\partial \over \partial x}( x \psi) = -i \hbar x {\partial \over \partial x}\psi -i\hbar \psi$$

while x acting on p acting on ψ reproduces only the first term:


 * $$\hat x(\hat p (\psi)) = -i \hbar x{\partial \over \partial x} \psi$$

so that the difference of the two is not zero:


 * $$( x p - p x ) \psi = i \hbar \psi\,$$

or in terms of operators:


 * $$[x,p] = xp - px = i \hbar\,$$
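
The same computation can be repeated symbolically on an arbitrary function (a sympy sketch):

```python
# Symbolic check of the canonical commutator: for an arbitrary function f,
# (x p - p x) f = i*hbar*f.
import sympy as sp

x, hbar = sp.symbols('x hbar')
f = sp.Function('f')(x)

p_op = lambda g: -sp.I * hbar * sp.diff(g, x)    # momentum operator
x_op = lambda g: x * g                           # position operator

commutator = sp.simplify(x_op(p_op(f)) - p_op(x_op(f)))
# commutator == i*hbar*f(x)
```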

Since the time derivative of a state is:


 * $$i\hbar{\partial\over \partial t} \Psi = \hat H \Psi\,$$

while the complex conjugate is


 * $$- i\hbar{\partial\over \partial t} \Psi ^* = \hat H \Psi^*\,$$

The time derivative of a matrix element


 * $${d \over dt} \langle \eta | \hat A |\Psi\rangle = {i \over \hbar}\left(\langle \eta | \hat H \hat A |\Psi\rangle - \langle \eta | \hat A \hat H |\Psi\rangle\right) = {i \over \hbar}\langle \eta |[\hat H,\hat A]|\Psi\rangle\,$$

obeys the Heisenberg equation of motion. This establishes the equivalence of the Schrödinger and Heisenberg formalisms, ignoring the mathematical fine points of the limiting procedure for continuous space.

Correspondence principle
The Schrödinger equation satisfies the correspondence principle. In the limit of small wavelength wavepackets, it reproduces Newton's laws. This is easy to see from the equivalence to matrix mechanics.

All operators in Heisenberg's formalism obey the quantum analog of Hamilton's equations:


 * $${dA \over dt} = {i \over \hbar}(HA - AH) = {i \over \hbar}[H, A]$$

So that in particular, the equations of motion for the X and P operators are:


 * $${dX \over dt} = {P\over m}$$


 * $${dP \over dt} = - {\partial V \over \partial x}$$

In the Schrödinger picture, the interpretation of this equation is that it gives the time rate of change of the matrix element between two states when the states change with time. Taking the expectation value in any state shows that these equations of motion hold exactly for the quantities:


 * $$\langle X\rangle = \int_x \psi^*(x)\psi(x) x = \langle \psi|X|\psi \rangle \,$$


 * $$\langle P\rangle = \int_x \psi^*(x) \left(-i\hbar {\partial \psi \over \partial x}(x)\right) = \langle \psi |P|\psi\rangle \,$$

Relativity
The Schrödinger equation does not take into account relativistic effects; as a wave equation, it is invariant under a Galilean transformation, but not under a Lorentz transformation. But in order to include relativity, the physical picture must be altered.

The Klein–Gordon equation uses the relativistic energy–momentum relation (in natural units):


 * $$E^2 - p^2 = m^2\,$$

to produce the differential equation:


 * $$- {\partial^2 \over \partial t^2}\psi + \nabla^2 \psi = m^2 \psi\,$$

which is relativistically invariant, but second order in time, and so cannot be an equation for the quantum state. This equation also has the property that there are solutions with both positive and negative frequency; a plane wave solution obeys:


 * $$ \omega^2 - k^2 = m^2\,$$

which has two solutions, one with positive frequency, the other with negative frequency. This is a disaster for quantum mechanics, because it means that the energy is unbounded below.

A more sophisticated attempt to solve this problem uses a first order wave equation, the Dirac equation, but again there are negative energy solutions. In order to solve this problem, it is essential to go to a multiparticle picture, and to consider the wave equations as equations of motion for a quantum field, not for a wavefunction.

The reason is that relativity is incompatible with a single particle picture. A relativistic particle cannot be localized to a small region without the particle number becoming indefinite. When a particle is localized in a box of length L, the momentum is uncertain by an amount roughly proportional to $$\hbar/L$$ by the uncertainty principle. This leads to an energy uncertainty of $$\hbar c/L$$, when p is large enough so that the mass of the particle can be neglected. This uncertainty in energy is equal to the mass-energy of the particle when


 * $$L = {\hbar \over mc} \,$$

and this is called the Compton wavelength. Below this length, it is impossible to localize a particle and be sure that it stays a single particle, since the energy uncertainty is large enough to produce more particles from the vacuum by the same mechanism that localizes the original particle. In high energy physics one often uses natural units in which $$\hbar = c = 1$$. This not only simplifies the above equations, but also expresses the equivalence between mass and energy, and between mass and inverse length, as strict equalities.

There is another approach to relativistic quantum mechanics which does allow you to follow single particle paths, and it was discovered within the path-integral formulation. If the integration paths in the path integral include paths which move both backwards and forwards in time as a function of their own proper time, it is possible to construct a purely positive frequency wavefunction for a relativistic particle. This construction is appealing, because the equation of motion for the wavefunction is exactly the relativistic wave equation, but with a non-local constraint that separates the positive and negative frequency solutions. The positive frequency solutions travel forward in time, the negative frequency solutions travel backwards in time. In this way, they both analytically continue to a statistical field correlation function, which is also represented by a sum over paths. But in real space, they are the probability amplitudes for a particle to travel between two points, and can be used to generate the interaction of particles in a point-splitting and joining framework. The relativistic particle point of view is due to Richard Feynman.

Feynman's method also constructs the theory of quantized fields, but from a particle point of view. In this theory, the equations of motion for the field can be interpreted as the equations of motion for a wavefunction only with caution—the wavefunction is only defined globally, and in some way related to the particle's proper time. The notion of a localized particle is also delicate—a localized particle in the relativistic particle path integral corresponds to the state produced when a local field operator acts on the vacuum, and exactly which state is produced depends on the choice of field variables.

Solutions
Some general techniques are:
 * Perturbation theory
 * The variational method
 * Quantum Monte Carlo methods
 * Density functional theory
 * The WKB approximation and semi-classical expansion

In some special cases, special methods can be used:
 * List of quantum-mechanical systems with analytical solutions
 * Hartree-Fock method and post Hartree-Fock methods
 * Discrete delta-potential method

Free Schrödinger equation
When the potential is zero, the Schrödinger equation is linear with constant coefficients (in units where $$\hbar = 1$$):


 * $$i\frac{\partial \Psi}{\partial t}=-{1\over 2m}\nabla^2\Psi $$

The solution $$\Psi(x,\,t)$$ for any initial condition $$\psi_0(x)$$ can be found by Fourier transforms. Because the coefficients are constant, an initial plane wave stays a plane wave. Only the coefficient changes:


 * $$\Psi(x,\,t) = A(t) e^{i k x}\,$$

Substituting:


 * $${dA(t) \over dt} = -{i k^2 \over 2m} A(t)\,$$

So that A is also oscillating in time:


 * $$A(t) = A e^{- i {k^2 \over 2m} t}\,$$

and the solution is:


 * $$\Psi(x,\,t) = A e^{i k x - i \omega t}\,$$

where $$\omega= k^2/2m$$, a restatement of de Broglie's relations.

To find the general solution, write the initial condition as a sum of plane waves by taking its Fourier transform:


 * $$\psi_0(x) = \int_k \psi(k) e^{ikx}\,$$

The equation is linear, so each plane wave evolves independently:


 * $$\Psi(x,\,t) = \int_k \psi(k)e^{-i\omega t} e^{ikx}\,$$

Which is the general solution. When complemented by an effective method for taking Fourier transforms, it becomes an efficient algorithm for finding the wavefunction at any future time: Fourier transform the initial conditions, multiply by a phase, and transform back.
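This algorithm is a few lines of NumPy. The sketch below (illustrative, with ℏ = m = 1 on a periodic grid wide enough that the wavepacket never reaches the boundary) evolves a Gaussian initial condition and compares it against the closed-form spreading solution derived in the next section.

```python
import numpy as np

# Periodic grid (hbar = m = 1); L chosen large enough that the wavepacket
# never reaches the boundary over the evolution time.
N, L = 1024, 40.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

a, t = 1.0, 2.0
psi0 = np.exp(-x**2 / (2 * a))

# Fourier transform the initial condition, multiply by the phase
# e^{-i k^2 t / 2}, and transform back.
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * k**2 * t / 2))

# Closed-form spreading Gaussian for comparison.
psi_exact = np.sqrt(a / (a + 1j * t)) * np.exp(-x**2 / (2 * (a + 1j * t)))

print(np.max(np.abs(psi_t - psi_exact)))   # ~0: the two agree
print(np.sum(np.abs(psi_t)**2) * dx)       # norm sqrt(pi a), conserved
```

Because the evolution operator is diagonal in the Fourier basis, the cost per time step is just two FFTs and a pointwise multiplication.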

Gaussian wavepacket
An easy and instructive example is the Gaussian wavepacket:


 * $$\psi_0(x) = e^{-x^2 / 2a}\,$$

where a is a positive real number, the square of the width of the wavepacket. The total normalization of this wavefunction is:


 * $$\langle \psi|\psi\rangle = \int_x \psi^* \psi = \sqrt{\pi a}$$

The Fourier transform is a Gaussian again in terms of the wavenumber k:


 * $$\psi_0(k) = (2\pi a)^{d/2} e^{- a k^2/2}\,$$

with the physics convention which puts the factors of $$2\pi$$ into the k-measure of the Fourier transform:


 * $$\psi_0(x) = \int_k \psi_0(k) e^{ikx} = \int {d^dk \over (2\pi)^d} \psi_0(k) e^{ikx} $$

Each separate wave only phase-rotates in time, so that the time dependent Fourier-transformed solution is:


 * $$\psi_t(k) = (2\pi a)^{d/2} e^{- a { k^2\over 2} - it {k^2\over 2m}} = (2\pi a)^{d/2} e^{-(a+it/m){k^2\over 2}} \,$$

The inverse Fourier transform is still a Gaussian, but the parameter a has become complex, and there is an overall normalization factor.


 * $$\psi_t(x) = \left({a \over a + i t/m}\right)^{d/2} e^{- {x^2\over 2(a + i t/m)} }\,$$

The branch of the square root is determined by continuity in time—it is the value which is nearest to the positive square root of a. It is convenient to rescale time to absorb m, replacing t/m by t.

The integral of $$\psi$$ over all space is invariant, because it is the inner product of $$\psi$$ with the state of zero energy, which is a wave with infinite wavelength, a constant function of space. For any energy state, with wavefunction $$\eta(x)$$, the inner product:


 * $$\langle \eta | \psi \rangle = \int_x \eta(x) \psi_t(x)$$,

only changes in time in a simple way: its phase rotates with a frequency determined by the energy of $$\eta$$. When $$\eta$$ has zero energy, like the infinite wavelength wave, it doesn't change at all.

The integral of the absolute square of $$\psi$$ is also invariant, which is a statement of the conservation of probability. Explicitly in one dimension:


 * $$|\psi|^2 = \psi\psi^* = {a \over \sqrt{a^2+t^2} } e^{-{x^2 a \over a^2 + t^2}}$$

Which gives the norm:


 * $$\int |\psi|^2 = \sqrt{\pi a}$$

which has preserved its value, as it must.

The width of the Gaussian is the interesting quantity, and it can be read off from the form of $$|\psi|^2$$:


 * $$\sqrt{a^2 + t^2 \over a}\,$$.

The width eventually grows linearly in time, as $$\scriptstyle t/\sqrt{a}$$. This is wave-packet spreading—no matter how narrow the initial wavefunction, a Schrödinger wave eventually fills all of space. The linear growth is a reflection of the momentum uncertainty—the wavepacket is confined to a narrow width $$\scriptstyle \sqrt{a}$$ and so has a momentum which is uncertain by the reciprocal amount $$\scriptstyle 1/\sqrt{a}$$, a spread in velocity of $$\scriptstyle 1/m\sqrt{a}$$, and therefore in the future position by $$\scriptstyle t/m\sqrt{a}$$, where the factor of m has been restored by undoing the earlier rescaling of time.
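The conserved norm $$\sqrt{\pi a}$$ and the width formula can be verified directly from the closed-form solution. A minimal numerical sketch (m = 1, time rescaled as in the text; grid sizes are illustrative):

```python
import numpy as np

# Closed-form spreading Gaussian (m = 1, time rescaled):
# psi_t(x) = (a/(a+it))^{1/2} exp(-x^2 / 2(a+it))
a = 2.0
x = np.linspace(-40, 40, 4001)
dx = x[1] - x[0]

def psi(t):
    z = a + 1j * t
    return np.sqrt(a / z) * np.exp(-x**2 / (2 * z))

for t in [0.0, 1.0, 5.0]:
    p = np.abs(psi(t))**2
    norm = np.sum(p) * dx                       # should stay sqrt(pi a)
    # |psi|^2 ~ e^{-x^2/w^2} has <x^2> = w^2/2, so w = sqrt(2 <x^2>).
    width = np.sqrt(2 * np.sum(x**2 * p) * dx / norm)
    print(norm, width, np.sqrt((a**2 + t**2) / a))
```

At each time the measured width matches $$\sqrt{(a^2+t^2)/a}$$ while the norm stays fixed.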

Galilean invariance
Galilean boosts are transformations which look at the system from the point of view of an observer moving with a steady velocity −v. A boost must change the physical properties of a wavepacket in the same way as in classical mechanics:


 * $$p'= p - mv\,$$


 * $$p = p' + mv\,$$


 * $$x'= x - vt\,$$


 * $$x = x' + vt\,$$

So that the phase factor of a free Schrödinger plane wave:


 * $$p x - E t = (p' + mv)(x' + vt) - {(p'+mv)^2\over 2m} t = p' x' - E' t + m v x - {mv^2\over 2}t \,$$

is only different in the boosted coordinates by a phase which depends on x and t, but not on p.

An arbitrary superposition of plane wave solutions with different values of p is the same superposition of boosted plane waves, up to an overall x, t dependent phase factor. So any solution to the free Schrödinger equation, $$\psi_t(x)$$, can be boosted into other solutions:


 * $$\psi'_t(x) = \psi_t(x - vt) e^{ i mv x - i {mv^2\over 2}t}\,$$

Boosting a constant wavefunction produces a plane-wave. More generally, boosting a plane-wave:


 * $$\psi_t(x) = e^{ipx - i {p^2\over 2m} t}\,$$

produces a boosted wave:


 * $$\psi'_t(x) = e^{ i p(x - vt) - i{p^2\over 2m}t + imv x - i {mv^2\over 2}t} = e^{i(p+mv)x - i {(p+mv)^2\over 2m}t }\,$$

Boosting the spreading Gaussian wavepacket:


 * $$\psi_t(x) = {1\over \sqrt{a+it/m}} e^{ - {x^2\over 2(a + it/m)} }\,$$

produces the moving Gaussian:


 * $$\psi'_t(x) = {1\over \sqrt{a + it/m}} e^{ - {(x - vt)^2 \over 2(a + it/m)} + i m v x - i {mv^2\over 2} t } \,$$

which spreads in the same way.

Free propagator
The narrow-width limit of the Gaussian wavepacket solution is the propagator K. For other differential equations, this is sometimes called the Green's function, but in quantum mechanics it is traditional to reserve the name Green's function for the time-Fourier transform of K. When a is the infinitesimal quantity $$\epsilon$$, the Gaussian initial condition, rescaled so that its integral is one:


 * $$\psi_0(x) = {1\over \sqrt{2\pi \epsilon} } e^{-{x^2\over 2\epsilon}}\,$$

becomes a delta function, so that its time evolution:


 * $$K_t(x) = {1\over \sqrt{2\pi (i t + \epsilon)}} e^{ - {x^2 \over 2(it+\epsilon)} }\,$$

gives the propagator.

Note that a very narrow initial wavepacket instantly becomes infinitely wide, with a phase which is more rapidly oscillatory at large values of x. This might seem strange—the solution goes from being concentrated at one point to being everywhere at all later times, but it is a reflection of the momentum uncertainty of a localized particle. Also note that the norm of the wavefunction is infinite, but this is also correct since the square of a delta function is divergent in the same way.

The factor of $$\epsilon$$ is an infinitesimal quantity which is there to make sure that integrals over K are well defined. In the limit that $$\epsilon$$ becomes zero, K becomes purely oscillatory and integrals of K are not absolutely convergent. In the remainder of this section, it will be set to zero, but in order for all the integrations over intermediate states to be well defined, the limit $$\scriptstyle \epsilon\rightarrow 0$$ is only to be taken after the final state is calculated.

The propagator is the amplitude for reaching point x at time t, when starting at the origin, x = 0. By translation invariance, the amplitude for reaching a point x when starting at point y is the same function, only translated:


 * $$K_t(x,y) = K_t(x-y) = {1\over \sqrt{2\pi it}} e^{i(x-y)^2 \over 2t} \,$$

In the limit when t is small, the propagator converges to a delta function:


 * $$\lim_{t\rightarrow 0} K_t(x-y) = \delta(x-y)$$

but only in the sense of distributions. The integral of this quantity multiplied by an arbitrary differentiable test function gives the value of the test function at zero. To see this, note that the integral over all space of K is equal to 1 at all times:


 * $$\int_x K_t(x) = 1\,$$

since this integral is the inner-product of K with the uniform wavefunction. But the phase factor in the exponent has a nonzero spatial derivative everywhere except at the origin, and so when the time is small there are fast phase cancellations at all but one point. This is rigorously true when the limit $$\epsilon\rightarrow 0$$ is taken after everything else.

So the propagation kernel is the future time evolution of a delta function, and it is continuous in a sense: it converges to the initial delta function at small times. If the initial wavefunction is an infinitely narrow spike at position $$x_0$$:


 * $$\psi_0(x) = \delta(x - x_0)\,$$

it becomes the oscillatory wave:


 * $$\psi_t(x) = {1\over \sqrt{2\pi i t}} e^{ i (x-x_0) ^2 /2t}\,$$

Since every function can be written as a sum of narrow spikes:


 * $$\psi_0(x) = \int_y \psi_0(y) \delta(x-y)\,$$

the time evolution of every function is determined by the propagation kernel:


 * $$\psi_t(x) = \int_y \psi_0(y) {1\over \sqrt{2\pi it}} e^{i (x-y)^2 / 2t}\,$$

And this is an alternate way to express the general solution. The interpretation of this expression is that the amplitude for a particle to be found at point x at time t is the amplitude that it started at y, times the amplitude that it went from y to x, summed over all the possible starting points y. In other words, it is a convolution of the kernel K with the initial condition:


 * $$\psi_t = K * \psi_0\,$$

Since the amplitude to travel from x to y after a time t + t' can be considered in two steps, the propagator obeys the identity:


 * $$\int_y K(x-y;t)K(y-z;t') = K(x-z;t+t')\,$$

Which can be interpreted as follows: the amplitude to travel from x to z in time t + t' is the sum of the amplitude to travel from x to y in time t, multiplied by the amplitude to travel from y to z in time t', summed over all possible intermediate positions y. This is a property of an arbitrary quantum system, and by subdividing the time into many segments, it allows the time evolution to be expressed as a path integral.

Analytic continuation to diffusion
The spreading of wavepackets in quantum mechanics is directly related to the spreading of probability densities in diffusion. For a particle which is random walking, the probability density function at any point satisfies the diffusion equation:


 * $${\partial \over \partial t} \rho = {1\over 2} {\partial^2 \over \partial x^2 } \rho $$

where the factor of 2, which can be removed by rescaling either time or space, is only for convenience.

A solution of this equation is the spreading Gaussian:


 * $$\rho_t(x) = {1\over \sqrt{2\pi t}} e^{-x^2 \over 2t}$$

and since the integral of $$\rho_t$$ is constant while the width becomes narrow at small times, this function approaches a delta function at t = 0:


 * $$\lim_{t\rightarrow 0} \rho_t(x) = \delta(x)\,$$

again, only in the sense of distributions, so that


 * $$\lim_{t\rightarrow 0} \int_x f(x) \rho_t(x) = f(0)\,$$

for any smooth test function f.

The spreading Gaussian is the propagation kernel for the diffusion equation and it obeys the convolution identity:


 * $$K_{t+t'} = K_{t}*K_{t'}\,$$

Which allows diffusion to be expressed as a path integral. The propagator is the exponential of an operator H:


 * $$K_t(x) = e^{-tH}\,$$

which is the infinitesimal diffusion operator.


 * $$H= -{\nabla^2\over 2}\,$$

A matrix has two indices, which in continuous space makes it a function of x and x&prime;. In this case, because of translation invariance, the matrix element K depends only on the difference of the positions, and a convenient abuse of notation is to refer to the operator, the matrix elements, and the function of the difference by the same name:

 * $$K_t(x,x') = K_t(x-x')\,$$

Translation invariance means that continuous matrix multiplication:


 * $$C(x,y) = \int_{x'} A(x,x')B(x',y)\,$$

is really convolution:


 * $$C(\Delta) = C(x-y) = \int_{x'} A(x-x') B(x'-y) = \int_{z} A(\Delta-z)B(z)\,$$

The exponential can be defined over a range of t which includes complex values, so long as integrals over the propagation kernel stay convergent.


 * $$K_z(x) = e^{-zH}\,$$

As long as the real part of z is positive, for large values of x, K is exponentially decreasing and integrals over K are absolutely convergent.

The limit of this expression for z coming close to the pure imaginary axis is the Schrödinger propagator:


 * $$K_t^{\rm Schr} = K_{it+\epsilon} = e^{-(it+\epsilon)H}\,$$

and this gives a more conceptual explanation for the time evolution of Gaussians. The fundamental identity of exponentiation, or path integration:


 * $$K_z * K_{z'} = K_{z+z'}\,$$

holds for all complex z values where the integrals are absolutely convergent so that the operators are well defined.

So quantum evolution starting from a Gaussian, which is the diffusion kernel $$K_a$$:


 * $$\psi_0(x) = K_a(x) = K_a * \delta(x)\,$$

gives the time evolved state:


 * $$\psi_t = K_{it} * K_a = K_{a+it}.\,$$

This explains the diffusive form of the Gaussian solutions:


 * $$\psi_t(x) = {1\over \sqrt{2\pi (a+it)} } e^{- {x^2\over 2(a+it)} }.\,$$
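The identity $$K_z * K_{z'} = K_{z+z'}$$ for complex times can be spot-checked by direct numerical integration. A sketch (the particular z values and grid are illustrative), choosing complex z with positive real part so that all the integrals converge absolutely:

```python
import numpy as np

def K(z, x):
    """Kernel K_z(x) = e^{-x^2/2z} / sqrt(2 pi z), absolutely integrable
    for Re z > 0 (principal branch of the square root)."""
    return np.exp(-x**2 / (2 * z)) / np.sqrt(2 * np.pi * z)

# Complex "times" with positive real part: convergent, mildly oscillatory.
z1, z2 = 1.0 + 0.6j, 0.7 + 0.9j

y = np.linspace(-60, 60, 24001)
dy = y[1] - y[0]

x0 = 1.3  # evaluate the convolution at this point
lhs = np.sum(K(z1, x0 - y) * K(z2, y)) * dy   # (K_{z1} * K_{z2})(x0)
rhs = K(z1 + z2, x0)
print(abs(lhs - rhs))  # ~0: the semigroup identity holds
```

Taking z purely real recovers the diffusion kernel, and letting z approach the imaginary axis recovers the Schrödinger propagator, as described above.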

Variational principle
The variational principle asserts that for any Hermitian matrix A, the eigenvector corresponding to the lowest eigenvalue minimizes the quantity:


 * $$\langle v,Av \rangle = \sum_{ij} A_{ij} v^*_i v_j\,$$

on the unit sphere $$\langle v,v\rangle = 1$$. This follows by the method of Lagrange multipliers: at the minimum, the gradient of the function is parallel to the gradient of the constraint:


 * $${\partial\over \partial v_i} \langle v,Av\rangle = \lambda {\partial \over \partial v_i} \langle v,v\rangle \,$$

which is the eigenvalue condition


 * $$\sum_{j} A_{ij} v_j = \lambda v_i\,$$

so that the extreme values of a quadratic form A are the eigenvalues of A, and the value of the function at the extreme values is just the corresponding eigenvalue:


 * $$\langle v,Av\rangle = \lambda\langle v,v\rangle = \lambda.\,$$

When the hermitian matrix is the Hamiltonian, the minimum value is the lowest energy level.

In the space of all wavefunctions, the unit sphere is the space of all normalized wavefunctions $$\psi$$; the ground state minimizes


 * $$\langle \psi | H |\psi \rangle = \int \psi^* H \psi = \int \psi^* \left(-{\nabla^2 \over 2} + V(x)\right) \psi \,$$

or, after an integration by parts,


 * $$\langle \psi | H |\psi \rangle = \int {1\over 2}|\nabla \psi|^2 + V(x) |\psi|^2.\,$$

All the stationary points come in complex conjugate pairs, since the integrand is real: if $$\psi$$ is a stationary point, so is $$\psi^*$$, with the same eigenvalue. Since any linear combination of eigenfunctions with the same eigenvalue is again an eigenfunction, the real and imaginary parts of $$\psi$$ are each stationary points as well.
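The finite-dimensional version of the variational principle is easy to check numerically: for a random Hermitian matrix, every vector on the unit sphere gives a value of the quadratic form at least the lowest eigenvalue, and the lowest eigenvector attains it. A sketch with NumPy (the matrix and trial vectors are random, for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# A random Hermitian matrix standing in for the Hamiltonian.
n = 8
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (M + M.conj().T) / 2

evals, evecs = np.linalg.eigh(H)
ground_energy, ground_state = evals[0], evecs[:, 0]

def rayleigh(v):
    """The quadratic form <v, Hv> restricted to the unit sphere <v, v> = 1."""
    v = v / np.linalg.norm(v)
    return np.real(v.conj() @ H @ v)

# Every normalized vector gives an energy at least the lowest eigenvalue;
# the lowest eigenvector attains it.
trials = [rayleigh(rng.normal(size=n) + 1j * rng.normal(size=n))
          for _ in range(1000)]
print(min(trials) >= ground_energy - 1e-9)                   # True
print(abs(rayleigh(ground_state) - ground_energy) < 1e-9)    # True
```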

Potential and ground state
For a particle in a positive definite potential, the ground state wavefunction is real and positive, and has a dual interpretation as the probability density for a diffusion process. The analogy between diffusion and nonrelativistic quantum motion, originally discovered and exploited by Schrödinger, has led to many exact solutions.

A positive definite wavefunction:


 * $$\psi = e^{-W(x)}\,$$

is a solution to the time-independent Schrödinger equation with m = 1 and potential:


 * $$V(x) = {1\over 2} |\nabla W|^2 - {1\over 2} \nabla^2 W\,$$

with zero total energy. Here W is minus the logarithm of the ground state wavefunction. The second derivative term is higher order in $$\hbar$$, and ignoring it gives the semi-classical approximation.

The form of the ground state wavefunction is motivated by the observation that the ground state wavefunction is the Boltzmann probability for a different problem, the probability of finding a particle diffusing in space with the free energy at different points given by W. If the diffusion obeys detailed balance and the diffusion constant is everywhere the same, the Fokker–Planck equation for this diffusion is the Schrödinger equation when the time parameter is allowed to be imaginary. This analytic continuation gives the eigenstates a dual interpretation: either as the energy levels of a quantum system, or as the relaxation times for a stochastic equation.

Harmonic oscillator
W should grow at infinity, so that the wavefunction has a finite integral. The simplest analytic form is:


 * $$W(x) = {\omega x^2 \over 2}\,$$

with an arbitrary constant &omega;, which gives the potential:


 * $$V(x) = {1\over 2} \omega^2 x^2 - {\omega \over 2}\,$$

This potential describes a Harmonic oscillator, with the ground state wavefunction:


 * $$\psi(x) = e^{-\omega x^2 / 2}\,$$

The total energy is zero, but the potential is shifted by a constant. The ground state energy of the usual unshifted Harmonic oscillator potential:


 * $$V(x) = {\omega x^2 \over 2}\,$$

is then the additive constant:


 * $$E_0 = {\omega\over 2}\,$$

which is the zero point energy of the oscillator.
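A finite-difference sketch (illustrative, with ℏ = m = 1) confirms that $$e^{-\omega x^2/2}$$ is a zero-energy eigenstate of the shifted harmonic potential $$\scriptstyle {1\over 2}\omega^2 x^2 - {\omega\over 2}$$:

```python
import numpy as np

# Check (hbar = m = 1): psi = exp(-omega x^2 / 2) is annihilated by
# H = -(1/2) d^2/dx^2 + omega^2 x^2 / 2 - omega / 2.
omega = 1.5
x = np.linspace(-6, 6, 2001)
h = x[1] - x[0]
psi = np.exp(-omega * x**2 / 2)

# Second derivative by central differences (interior points only).
d2 = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / h**2
V = omega**2 * x**2 / 2 - omega / 2
Hpsi = -d2 / 2 + V[1:-1] * psi[1:-1]

print(np.max(np.abs(Hpsi)))  # ~0, up to discretization error
```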

Coulomb potential
Another simple but useful form is


 * $$W(x) = 2a|x|\,$$

where W is proportional to the radial coordinate. This is the ground state for two different potentials, depending on the dimension. In one dimension, the corresponding potential is singular at the origin, where it has an attractive delta function:


 * $$V(x) = 2a^2 - 2a \delta(x)\,$$

and, up to the constant shift $$2a^2$$, this is the ground state of the attractive delta function potential:


 * $$V(x) = -2a \delta(x)\,$$

with the ground state energy:


 * $$E_0 = - 2a^2\,$$

and the ground state wavefunction:


 * $$\psi = e^{-2a|x|}\,$$

In higher dimensions, the same form gives the potential:


 * $$V(x) = 2a^2 - { a (d-1) \over r}\,$$

which can be identified as the attractive Coulomb law, up to an additive constant which cancels the ground state energy. Here W plays the role of a superpotential; this form describes the lowest energy level of the hydrogen atom, once the mass is restored by dimensional analysis:


 * $$\psi_0 = e^{-r/r_0}\,$$

where r0 is the Bohr radius, with energy


 * $$E_0 = - 2a^2\,$$

The ansatz


 * $$W(x) = a r + b \log(r)\,$$

modifies the Coulomb potential to include a term proportional to $$1/r^2$$, which is useful for nonzero angular momentum.
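In d = 3 the hydrogen identification can be checked with a radial finite-difference sketch (ℏ = m = 1; grid values are illustrative): writing u(r) = rψ(r) with ψ = e^{-2ar}, the radial equation for the attractive Coulomb potential −2a/r should be satisfied with energy −2a².

```python
import numpy as np

# Radial check in d = 3 (hbar = m = 1): with u(r) = r psi(r) and
# psi = exp(-2 a r), the radial equation  -u''/2 - (2a/r) u = E0 u
# should hold with E0 = -2 a^2.
a = 0.7
r = np.linspace(0.05, 15, 3000)
h = r[1] - r[0]
u = r * np.exp(-2 * a * r)

# Second derivative by central differences (interior points only).
d2 = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
lhs = -d2 / 2 - (2 * a / r[1:-1]) * u[1:-1]
rhs = -2 * a**2 * u[1:-1]

print(np.max(np.abs(lhs - rhs)))  # ~0, up to discretization error
```

For d = 3 the Coulomb coupling is a(d−1) = 2a, so this reproduces the familiar $$E_0 = -Z^2/2$$ with Z = 2a in these units.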

Bra-ket notation
In the mathematical formulation of quantum mechanics, a physical system is fully described by a vector in a complex Hilbert space, the collection of all possible normalizable wavefunctions. The wavefunction is just an alternate name for the vector of complex amplitudes, and only in the case of a single particle in the position representation is it a wave in the usual sense, a wave in space time. For more complex systems, it is a wave in an enormous space of all possible worlds. Two nonzero vectors which are multiples of each other, two wavefunctions which are the same up to rescaling, represent the same physical state.

The wavefunction vector can be written in several ways:


 * 1) as an abstract ket vector:
 * $$|\psi\rangle$$
 * 2) as a list of complex numbers, the components relative to a discrete list of normalizable basis vectors $$|\eta_i\rangle$$:
 * $$ c_i = \langle \eta_i |\psi \rangle $$
 * 3) as a continuous superposition of non-normalizable basis vectors, like position states $$|x\rangle$$:
 * $$ |\psi\rangle = \int \psi(x) |x\rangle dx$$

The divide between the continuous basis and the discrete basis can be bridged by limiting arguments. The two can be formally unified by thinking of each as a measure on the real number line.

In the most abstract notation, the Schrödinger equation is written:


 * $$i\hbar {d\over dt} |\psi\rangle = H |\psi\rangle$$

which only says that the wavefunction evolves linearly in time, and names the linear operator which gives the time derivative the Hamiltonian H. In terms of the discrete list of coefficients:


 * $$i\hbar {d\over dt} C_i = \sum_j H_{ij} C_j$$

which just reaffirms that time evolution is linear, since the Hamiltonian acts by matrix multiplication.

In a continuous representation, the Hamiltonian is a linear operator, which acts by the continuous version of matrix multiplication:


 * $$\langle x| i\hbar {d\over dt} |\psi\rangle = \langle x|H|\psi\rangle = \hat{H} \psi (x)$$

Taking the complex conjugate:


 * $$-i\hbar {d\over dt} \langle \psi | = \langle \psi | H^\dagger$$

In order for the time-evolution to be unitary, to preserve the inner products, the time derivative of the inner product must be zero:


 * $$i\hbar {d\over dt} \langle \psi | \psi \rangle = \langle\psi | H | \psi\rangle- \langle \psi |H^\dagger |\psi\rangle = 0$$

for an arbitrary state $$|\psi\rangle$$, which requires that H is Hermitian. In a discrete representation this means that $$\scriptstyle H_{ij}= H_{ji}^*$$. When H is continuous, it should be self-adjoint, which adds the technical requirement that H does not mix up normalizable states with states which violate boundary conditions or which are grossly unnormalizable.

The formal solution of the equation is the matrix exponential (natural units):


 * $$|\psi(t)\rangle = e^{-i H t} |\psi(0)\rangle = U(t) |\psi(0)\rangle$$

For every time-independent Hamiltonian operator, $$\hat H$$, there exists a set of quantum states, $$\left|\psi_n\right\rangle$$, known as energy eigenstates, and corresponding real numbers $$E_n$$ satisfying the eigenvalue equation:


 * $$ H |\psi_n \rangle = E_n |\psi_n \rangle \,$$

This is the time-independent Schrödinger equation.

For the case of a single particle, the Hamiltonian is the following linear operator:


 * $$H = -{\hbar^2 \over 2m}\nabla^2 + V(x) = {p^2\over 2m} + V(x)$$

which is a self-adjoint operator when V is not too singular and does not grow too fast. Self-adjoint operators have real eigenvalues, and their eigenvectors form a complete basis, either discrete or continuous.

Expressed in a basis of eigenvectors of H, the Schrödinger equation becomes trivial:


 * $$\mathrm{i} \hbar \frac{\partial}{\partial t} \left| \psi_n \left(t\right) \right\rangle = E_n \left|\psi_n\left(t\right)\right\rangle. $$

which means that each energy eigenstate is only multiplied by a complex phase:


 * $$ \left| \psi \left(t\right) \right\rangle = \mathrm{e}^{-\mathrm{i} Et / \hbar} \left|\psi\left(0\right)\right\rangle. $$

which is what matrix exponentiation means—the time evolution acts to rotate the eigenfunctions of H.

When H is expressed as a matrix for wavefunctions in a discrete energy basis:


 * $$i\hbar {d\over dt} C_i = E_i C_i \,$$

so that:


 * $$C_n(t) = e^{-iE_n t/\hbar} C_n(0)\,$$

The physical properties of the $$C_n$$ are extracted by acting with operators, which in this basis are matrices. By redefining the basis so that it rotates with time, the matrices become time dependent; this is the Heisenberg picture.
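For a finite-dimensional system, evolution in the energy eigenbasis is a few lines of NumPy. The sketch below (illustrative, natural units with ℏ = 1, random Hermitian Hamiltonian) phase-rotates the energy-basis coefficients and checks that the evolution is unitary and satisfies the Schrödinger equation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random Hermitian Hamiltonian and a normalized initial state (hbar = 1).
n = 5
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (M + M.conj().T) / 2
psi0 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi0 = psi0 / np.linalg.norm(psi0)

# Expand in energy eigenstates, phase-rotate each coefficient,
# and transform back: C_n(t) = e^{-i E_n t} C_n(0).
E, V = np.linalg.eigh(H)
def evolve(psi, t):
    C = V.conj().T @ psi                  # coefficients in the energy basis
    return V @ (np.exp(-1j * E * t) * C)

psi_t = evolve(psi0, 3.7)
print(abs(np.linalg.norm(psi_t) - 1.0))   # ~0: evolution is unitary

# The evolved state satisfies i d/dt psi = H psi (finite-difference check).
dt = 1e-6
dpsi = (evolve(psi0, dt) - evolve(psi0, -dt)) / (2 * dt)
print(np.max(np.abs(1j * dpsi - H @ psi0)))  # ~0
```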

Galilean invariance
Galilean symmetry requires that H(p) is quadratic in p in both the classical and quantum Hamiltonian formalism. In order for Galilean boosts to produce a p-independent phase factor, px − Ht must have a very special form—translations in p need to be compensated by a shift in H. This is only true when H is quadratic.

The infinitesimal generator of boosts in both the classical and quantum case is:


 * $$B = \sum_i m_i x_i(t) - t \sum_i p_i\,$$

where the sum is over the different particles, and B, x, p are vectors.

The Poisson bracket / commutator of $$\scriptstyle B\cdot v$$ with x and p generates infinitesimal boosts, with v the infinitesimal boost velocity vector:


 * $$[B\cdot v ,x_i] = vt\,$$


 * $$[B\cdot v ,p_i] = v m_i\,$$

Iterating these relations is simple, since they add a constant amount at each step. Under iteration, the infinitesimal boosts dv sum up to the finite boost velocity V:


 * $$x_i \rightarrow x_i + Vt\,$$


 * $$p_i \rightarrow p_i + m_i V\,$$

B divided by the total mass is the current center of mass position minus the time times the center of mass velocity:


 * $$B = M X_\mathrm{cm} - t P_\mathrm{cm}\,$$

In other words, B/M is the current guess for the position that the center of mass had at time zero.

The statement that B doesn't change with time is the center of mass theorem. For a Galilean invariant system, the center of mass moves with a constant velocity, and the total kinetic energy is the sum of the center of mass kinetic energy and the kinetic energy measured relative to the center of mass.

Since B is explicitly time dependent, H does not commute with B, rather:


 * $${dB\over dt} = [H,B] + {\partial B \over \partial t} = 0\,$$

this gives the transformation law for H under infinitesimal boosts:


 * $$[B\cdot v,H] = - P_\mathrm{cm}\cdot v\,$$

the interpretation of this formula is that the change in H under an infinitesimal boost is entirely given by the change of the center of mass kinetic energy, which is the dot product of the total momentum with the infinitesimal boost velocity.

The two quantities (H, P) form a representation of the Galilean group with central charge M, where only H and P are classical functions on phase-space or quantum mechanical operators, while M is a parameter. The transformation law for infinitesimal v:


 * $$P' = P + M v\,$$


 * $$H' = H - P\cdot v\,$$

can be iterated as before: P goes from P to P + MV in infinitesimal increments of v, while H changes at each step by an amount proportional to P, which changes linearly. The total change in H is then $$-V$$ times the value of P halfway between the starting value and the ending value:


 * $$H' = H - (P+{MV\over 2})\cdot V = H - P\cdot V - {MV^2\over 2}.\,$$

The factors proportional to the central charge M are the extra wavefunction phases.

Boosts give too much information in the single-particle case, since Galilean symmetry completely determines the motion of a single free particle. Given a multiparticle time dependent solution:


 * $$\psi_t(x_1,x_2...,x_n)\,$$

with a potential that depends only on the relative positions of the particles, it can be used to generate the boosted solution:


 * $$\psi'_t = \psi_t(x_1 - vt, \dots, x_n - vt) e^{i M v\cdot X_\mathrm{cm} - i {Mv^2\over 2}t}\,$$

For the standing wave problem, the motion of the center of mass just adds an overall phase. When solving for the energy levels of multiparticle systems, Galilean invariance allows the center of mass motion to be ignored.