

In statistical mechanics and mathematics, a Boltzmann distribution (also called the Gibbs distribution) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form


 * $$p_i \propto e^{-\frac{\varepsilon_i}{kT}}$$

where $p_{i}$ is the probability of the system being in state $i$, $\varepsilon_{i}$ is the energy of that state, and the constant $kT$ of the distribution is the product of the Boltzmann constant $k$ and the thermodynamic temperature $T$. The symbol $\propto$ denotes proportionality (the proportionality constant is given below).

The term system here has a very wide meaning; it can range from a single atom to a macroscopic system such as a natural gas storage tank. Because of this the Boltzmann distribution can be used to solve a very wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied than the states with higher energy.

The ratio of probabilities of two states is known as the Boltzmann factor and characteristically only depends on the states' energy difference:


 * $$\frac{p_i}{p_j} = e^{\frac{\varepsilon_j - \varepsilon_i}{kT}}$$

The Boltzmann distribution is named after Ludwig Boltzmann, who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium. Boltzmann's statistical work is borne out in his paper “On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium”. The distribution was later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902.

The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution. The former gives the probability that a system will be in a certain state as a function of that state's energy; in contrast, the latter is used to describe particle speeds in idealized gases.

The distribution
The Boltzmann distribution is a probability distribution that gives the probability of a certain state as a function of that state's energy and the temperature of the system to which the distribution is applied. It is given as



 * $$p_i = \frac{1}{Q} e^{-\varepsilon_i / kT} = \frac{e^{-\varepsilon_i / kT}}{\sum_{j=1}^{M} e^{-\varepsilon_j / kT}}$$

where $p_i$ is the probability of state $i$, $\varepsilon_i$ the energy of state $i$, $k$ the Boltzmann constant, $T$ the temperature of the system, and $M$ the number of all states accessible to the system of interest. Implied parentheses around the denominator $kT$ are omitted for brevity. The normalization denominator $Q$ (denoted by some authors as $Z$) is the canonical partition function



 * $$Q = \sum_{i=1}^{M} e^{-\varepsilon_i / kT}$$

It results from the constraint that the probabilities of all accessible states must add up to 1.

The Boltzmann distribution is the distribution that maximizes the entropy


 * $$H(p_1,p_2,\cdots,p_M) = -\sum_{i=1}^{M} p_i\log_2 p_i$$

subject to the constraint that $\sum_i p_i \varepsilon_i$ equals a particular mean energy value (which can be proven using Lagrange multipliers).

The partition function can be calculated if we know the energies of the states accessible to the system of interest. For atoms the partition function values can be found in the NIST Atomic Spectra Database.
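The calculation can be sketched numerically. The following is a minimal Python illustration with hypothetical energy levels (not taken from any database), using the SI value of the Boltzmann constant:

```python
import math

def boltzmann_probabilities(energies_J, T):
    """Return p_i = exp(-e_i/kT) / Q for a list of state energies in joules."""
    k = 1.380649e-23  # Boltzmann constant, J/K
    factors = [math.exp(-e / (k * T)) for e in energies_J]
    Q = sum(factors)  # canonical partition function
    return [f / Q for f in factors]

# Hypothetical three-level system (0, 1e-21, 2e-21 J) at 300 K
probs = boltzmann_probabilities([0.0, 1e-21, 2e-21], 300.0)
print(probs)  # probabilities sum to 1; lower energy -> higher probability
```

The normalization by `Q` is exactly the constraint that all probabilities add up to 1.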

The distribution shows that states with lower energy will always have a higher probability of being occupied than the states with higher energy. It can also give us the quantitative relationship between the probabilities of the two states being occupied. The ratio of probabilities for states i and j is given as



 * $$\frac{p_i}{p_j} = e^{(\varepsilon_j - \varepsilon_i) / kT}$$

where $p_i$ is the probability of state $i$, $p_j$ the probability of state $j$, and $\varepsilon_i$ and $\varepsilon_j$ are the energies of states $i$ and $j$, respectively.

The Boltzmann distribution is often used to describe the distribution of particles, such as atoms or molecules, over energy states accessible to them. If we have a system consisting of many particles, the probability of a particle being in state i is practically the probability that, if we pick a random particle from that system and check what state it is in, we will find it is in state i. This probability is equal to the number of particles in state i divided by the total number of particles in the system, that is the fraction of particles that occupy state i.



 * $$p_i = \frac{N_i}{N}$$

where $N_i$ is the number of particles in state $i$ and $N$ is the total number of particles in the system. We may use the Boltzmann distribution to find this probability, which is, as we have seen, equal to the fraction of particles that are in state $i$. So the equation that gives the fraction of particles in state $i$ as a function of the energy of that state is



 * $$\frac{N_i}{N} = \frac{e^{-\varepsilon_i / kT}}{\sum_{j=1}^{M} e^{-\varepsilon_j / kT}}$$

This equation is of great importance to spectroscopy. In spectroscopy we observe a spectral line when atoms or molecules of interest go from one state to another. For this to be possible, there must be some particles in the first state to undergo the transition. Whether this condition is fulfilled can be checked by finding the fraction of particles in the first state. If it is negligible, the transition is very unlikely to be observed at the temperature for which the calculation was done. In general, a larger fraction of molecules in the first state means a higher number of transitions to the second state, and hence a stronger spectral line. However, other factors also influence the intensity of a spectral line, such as whether it is caused by an allowed or a forbidden transition.
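As a rough numerical illustration of this point, the sketch below uses a hypothetical two-level system whose energy gap (4×10⁻²⁰ J) is typical of a molecular vibration, and shows how the upper-state population grows with temperature:

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def fraction_in_state(i, energies_J, T):
    """Fraction N_i/N = exp(-e_i/kT) / sum_j exp(-e_j/kT)."""
    factors = [math.exp(-e / (k * T)) for e in energies_J]
    return factors[i] / sum(factors)

# Hypothetical two-level system with a vibrational-scale gap of 4e-20 J
levels = [0.0, 4e-20]
print(fraction_in_state(1, levels, 300.0))   # upper level barely populated
print(fraction_in_state(1, levels, 3000.0))  # hotter gas: noticeable population
```

At room temperature the upper-state fraction is negligible, so a line originating from that state would likely not be observed; at ten times the temperature the fraction is appreciable.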

The Boltzmann distribution is related to the softmax function commonly used in machine learning.
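That connection can be made concrete: the Boltzmann probabilities are the softmax of the negated energies divided by kT. A minimal sketch, assuming units in which kT = 1:

```python
import math

def softmax(xs):
    """Standard softmax, with max-subtraction for numerical stability."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Boltzmann probabilities are softmax applied to -energy/kT.
kT = 1.0                       # assumed unit system: kT = 1
energies = [0.0, 0.5, 2.0]     # hypothetical state energies
probs = softmax([-e / kT for e in energies])
print(probs)  # lower energy -> larger probability
```

In machine-learning usage the same kT plays the role of a "temperature" parameter that sharpens or flattens the output distribution.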


Empirical
The empirical laws that led to the derivation of the ideal gas law were discovered with experiments that changed only two state variables of the gas while keeping all others constant.

All the possible gas laws that could have been discovered with this kind of setup are:

$$PV=C_1 $$ or $$P_1V_1=P_2V_2 $$   (1)   known as Boyle's law

$$\frac{V}{T} = C_2 $$ or $$\frac{V_1}{T_1} = \frac{V_2}{T_2} $$   (2)   known as Charles's law

$$\frac{V}{N}=C_3 $$ or $$\frac{V_1}{N_1}=\frac{V_2}{N_2} $$   (3)   known as Avogadro's law

$$\frac{P}{T}=C_4 $$ or $$\frac{P_1}{T_1}=\frac{P_2}{T_2} $$   (4)   known as Gay-Lussac's law

$$NT=C_5 $$ or $$N_1T_1=N_2T_2 $$   (5)

$$\frac{P}{N}=C_6 $$ or $$\frac{P_1}{N_1}=\frac{P_2}{N_2} $$   (6)

where "P" stands for pressure, "V" for volume, "N" for the number of particles in the gas and "T" for temperature; and where $$C_1, C_2, C_3, C_4, C_5, C_6 $$ are not actual constants, but each is constant only in the context of its own experiment, in which the parameters explicitly noted in its equation are the only ones allowed to change.

To derive the ideal gas law one does not need to know all 6 formulas. From a suitable set of 3 one can derive the rest, or derive just the one additional formula needed, since the derivation of the ideal gas law uses 4.

Since each formula holds only when the state variables involved in it change while the others remain constant, we cannot simply combine them all directly by algebra. For example, Boyle performed his experiments while keeping N and T constant, and this must be taken into account.

Keeping this in mind, to carry out the derivation correctly, one must imagine the gas being altered by one process at a time. The derivation using 4 formulas can look like this:

At first the gas has parameters $$P_1,V_1,N_1,T_1 $$.

Suppose we start by changing only pressure and volume, according to Boyle's law; then:

$$P_1V_1=P_2V_2 $$   (7) After this process, the gas has parameters $$P_2,V_2,N_1,T_1 $$

Then, using Eq. (5) to change the number of particles in the gas and the temperature,

$$N_1T_1=N_2T_2 $$ (8) After this process, the gas has parameters $$P_2,V_2,N_2,T_2 $$

Then, using Eq. (6) to change the pressure and the number of particles,

$$\frac{P_2}{N_2}=\frac{P_3}{N_3} $$       (9) After this process, the gas has parameters $$P_3,V_2,N_3,T_2 $$

Then, using Charles's law to change the volume and temperature of the gas,

$$\frac{V_2}{T_2}=\frac{V_3}{T_3} $$       (10) After this process, the gas has parameters $$P_3,V_3,N_3,T_3 $$

Using simple algebra on equations (7), (8), (9) and (10) yields the result:

$$\frac{P_1V_1}{N_1T_1}=\frac{P_3V_3}{N_3T_3} $$ or $$\frac{PV}{NT}=k_B $$, where $$k_B $$ stands for Boltzmann's constant.

Another equivalent result, using the fact that $$nR=Nk_B $$, where "n" is the number of moles in the gas and "R" is the universal gas constant, is:

$$PV=nRT $$, which is known as the ideal gas law.
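The chain of processes above can be checked numerically. The sketch below uses arbitrary, hypothetical starting values (they are not data) and applies Eqs. (7)-(10) in turn, confirming that PV/(NT) is unchanged:

```python
# Hypothetical starting state: atmospheric pressure, 1 m^3, 2.5e25 particles, 300 K
P1, V1, N1, T1 = 101325.0, 1.0, 2.5e25, 300.0

# (7) Boyle's law: change P and V at fixed N, T
P2 = 2 * P1
V2 = P1 * V1 / P2
# (8) Eq. (5): change N and T at fixed P, V
N2 = 0.5 * N1
T2 = N1 * T1 / N2
# (9) Eq. (6): change P and N at fixed V, T
N3 = 3 * N2
P3 = P2 * N3 / N2
# (10) Charles's law: change V and T at fixed P, N
T3 = 2 * T2
V3 = V2 * T3 / T2

print(P1 * V1 / (N1 * T1))  # initial PV/(NT)
print(P3 * V3 / (N3 * T3))  # final PV/(NT): the same constant ratio
```

Each step preserves the ratio PV/(NT) because each law holds the other two variables fixed, which is the point of doing the processes one at a time.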

If you know, or have found by experiment, 3 of the 6 formulas, you can easily derive the rest using the same method explained above; however, because each equation involves only 2 variables, not every choice of 3 formulas works. For example, with Eqs. (1), (2) and (4) you could not get any further, because combining any two of them only gives the third. But with Eqs. (1), (2) and (3) you can obtain all 6 equations without doing the remaining experiments: combining (1) and (2) yields (4); (1) and (3) yield (6); then (4) and (6) yield (5), as does the combination of (2) and (3), as is shown in the following visual relation:

Where the numbers represent the gas laws numbered above.

If you were to use the same method as above on 2 of the 3 laws on the vertices of one triangle that has an "O" inside it, you would get the third.

For example:

Change only pressure and volume first: $$P_1V_1=P_2V_2 $$    (1´)

then only volume and temperature: $$\frac{V_2}{T_1}=\frac{V_3}{T_2} $$                 (2´)

then, since we can choose any value for $$V_3 $$, if we set $$V_1=V_3 $$, Eq. (2´) becomes: $$\frac{V_2}{T_1}=\frac{V_1}{T_2} $$ (3´)

combining equations (1´) and (3´) yields $$\frac{P_1}{T_1}=\frac{P_2}{T_2} $$, which is Eq. (4), of which we had no prior knowledge until this derivation.
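This two-step derivation can likewise be verified numerically with hypothetical values:

```python
# Check the derivation numerically: Boyle (N, T fixed) followed by
# Charles (P, N fixed), choosing V3 = V1, should give Gay-Lussac's
# law P1/T1 = P2/T2. All starting values are hypothetical.
P1, V1, T1 = 100000.0, 2.0, 250.0

P2 = 4 * P1
V2 = P1 * V1 / P2          # (1') Boyle's law at temperature T1
V3 = V1                    # free choice of final volume: set V3 = V1
T2 = T1 * V3 / V2          # (2') Charles's law at pressure P2

print(P1 / T1, P2 / T2)    # the two ratios agree: Gay-Lussac's law
```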

Kinetic theory
The ideal gas law can also be derived from first principles using the kinetic theory of gases, in which several simplifying assumptions are made. Chief among them are that the molecules, or atoms, of the gas are point masses, possessing mass but no significant volume, and that they undergo only elastic collisions with each other and with the sides of the container, in which both linear momentum and kinetic energy are conserved.

Statistical mechanics
Let q = (qx, qy, qz) and p = (px, py, pz) denote the position vector and momentum vector of a particle of an ideal gas, respectively. Let F denote the net force on that particle. Then the time average of q · F for that particle is:

 * $$\begin{align} \langle \mathbf{q} \cdot \mathbf{F} \rangle &= \Bigl\langle q_{x} \frac{dp_{x}}{dt} \Bigr\rangle + \Bigl\langle q_{y} \frac{dp_{y}}{dt} \Bigr\rangle + \Bigl\langle q_{z} \frac{dp_{z}}{dt} \Bigr\rangle\\ &=-\Bigl\langle q_{x} \frac{\partial H}{\partial q_x} \Bigr\rangle - \Bigl\langle q_{y} \frac{\partial H}{\partial q_y} \Bigr\rangle - \Bigl\langle q_{z} \frac{\partial H}{\partial q_z} \Bigr\rangle = -3k_{B} T, \end{align}$$

where the first equality is Newton's second law, and the second line uses Hamilton's equations and the equipartition theorem. Summing over a system of N particles yields



 * $$3Nk_{B} T = - \biggl\langle \sum_{k=1}^{N} \mathbf{q}_{k} \cdot \mathbf{F}_{k} \biggr\rangle.$$

By Newton's third law and the ideal gas assumption, the net force of the system is the force applied by the walls of the container, and this force is given by the pressure P of the gas. Hence



 * $$-\biggl\langle\sum_{k=1}^{N} \mathbf{q}_{k} \cdot \mathbf{F}_{k}\biggr\rangle = P \oint_{\mathrm{surface}} \mathbf{q} \cdot d\mathbf{S},$$

where dS is the infinitesimal area element along the walls of the container. Since the divergence of the position vector q is



 * $$\nabla \cdot \mathbf{q} = \frac{\partial q_{x}}{\partial q_{x}} + \frac{\partial q_{y}}{\partial q_{y}} + \frac{\partial q_{z}}{\partial q_{z}} = 3,$$

the divergence theorem implies that


 * $$P \oint_{\mathrm{surface}} \mathbf{q} \cdot d\mathbf{S} = P \int_{\mathrm{volume}} \left( \nabla \cdot \mathbf{q} \right) dV = 3PV,$$

where dV is an infinitesimal volume within the container and V is the total volume of the container.
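The equality between the surface integral and 3V can be checked explicitly for an axis-aligned box, evaluating the flux of q face by face (the box dimensions here are arbitrary):

```python
# For an axis-aligned box [0, Lx] x [0, Ly] x [0, Lz], evaluate the closed
# surface integral of q . dS face by face and compare with 3V.
Lx, Ly, Lz = 1.0, 2.0, 3.0

# On the face x = Lx the outward normal is +x and q . n = Lx (constant),
# so that face contributes Lx * (Ly * Lz); the face x = 0 contributes 0
# because q . n = -x = 0 there. Likewise for the y and z pairs of faces.
surface_integral = Lx * (Ly * Lz) + Ly * (Lx * Lz) + Lz * (Lx * Ly)
volume = Lx * Ly * Lz

print(surface_integral, 3 * volume)  # both are 18.0
```

This is just the divergence theorem with ∇·q = 3, worked out by hand for the simplest container shape.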

Putting these equalities together yields



 * $$3Nk_{B} T = -\biggl\langle \sum_{k=1}^{N} \mathbf{q}_{k} \cdot \mathbf{F}_{k} \biggr\rangle = 3PV,$$

which immediately implies the ideal gas law for N particles:



 * $$PV = Nk_{B} T = nRT,$$

where $$n = N/N_A$$ is the number of moles of gas and $$R = N_A k_B$$ is the gas constant.

Canonical ensemble
Assuming the particles making up the gas are non-interacting with each other, are confined to a fixed volume and are in thermal equilibrium, one can use the tools of the canonical ensemble to derive the ideal gas law. It is also possible to repeat similar derivations using the microcanonical and grand canonical ensembles. This derivation will follow generally the steps given by Kardar in "Statistical Physics of Particles."

The canonical partition function of a system of $$N$$ classical, identical particles with mass $$m$$ in a three dimensional box is given by

$$Z = \frac{1}{h^{3N}N!}\int\dots\int e^{-H/kT}\, d^3 p_1\dots d^3 p_N \, d^3 q_1\dots d^3 q_N$$,

where $$H = \sum_{i=1}^N\bigg(\frac{p_i^2}{2m} + U(q_i)\bigg)$$ is the Hamiltonian, $$h$$ is Planck's constant, and $$\frac{1}{h^{3N}N!}$$ serves as a normalization factor.

Since an ideal gas is assumed to have no interactions between particles, the potential $$U$$ is just the hard box potential,

$$U(q_i) = \begin{cases} \infty & \text{if } q_i \text{ is outside the box}\\0 & \text{if } q_i \text{ is inside the box}\end{cases}$$.

Thus the integral over all of phase space in the partition function reduces to

$$Z = \frac{V^N}{h^{3N}N!}\int\dots\int e^{\big(-\frac{1}{kT}\sum \frac{p_i^2}{2m}\big)} d^3 p_1\dots d^3 p_N$$

$$= \frac{V^N}{h^{3N}N!}\prod_{i=1}^N \int e^{\big(-\frac{1}{kT}\frac{p_i^2}{2m}\big)} d^3 p_i = \frac{V^N}{h^{3N}N!}\bigg[\int e^{\big(-\frac{1}{kT}\frac{p_1^2}{2m}\big)} d^3 p_1\bigg]^N $$,

where $$V$$ is the volume of the box.

The last equality comes from the fact that the particles are identical. Since the integral in the last equality is Gaussian, it is relatively easy to compute.

Defining $$\lambda = \sqrt{\frac{h^2}{2\pi mkT}}$$ to be the thermal de Broglie wavelength at temperature $$T$$, the partition function reduces further to

$$Z = \frac{V^N}{h^{3N}N!} \bigg[\big( 2\pi mkT \big)^{\frac{3}{2}}\bigg]^N = \frac{1}{N!}\bigg( \frac{V}{\lambda^3} \bigg)^N$$.
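As a numerical aside (a sketch, not part of the source derivation), λ can be evaluated for a helium-like atom to confirm that it is far smaller than typical interparticle spacings at room temperature, which is what justifies the classical treatment:

```python
import math

h = 6.62607015e-34   # Planck constant, J s
k = 1.380649e-23     # Boltzmann constant, J/K

def thermal_wavelength(m_kg, T):
    """lambda = sqrt(h^2 / (2 pi m k T))."""
    return math.sqrt(h**2 / (2 * math.pi * m_kg * k * T))

# Helium atom (about 6.646e-27 kg) at 300 K: lambda comes out around
# 5e-11 m, orders of magnitude below the interparticle spacing of a
# dilute gas, so the classical ideal gas treatment applies.
print(thermal_wavelength(6.646e-27, 300.0))
```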

Now that the canonical partition function of an ideal gas has been calculated, one can solve for the free energy (specifically the Helmholtz free energy) and then use thermodynamics to obtain the ideal gas law.

The free energy is given by

$$F = -kT\ln{Z}$$

$$ = -NkT\ln\bigg[{\frac{V}{\lambda^3}}\bigg] + kT\ln{\big[N!\big]}$$

$$ \approx -NkT\ln\bigg[{\frac{V}{\lambda^3}}\bigg] + NkT\ln{\big[ N \big]} - NkT$$,

where the approximation comes from using Stirling's approximation for factorials. Substituting in for $$ \lambda$$ and using simple logarithm rules yields the result

$$ F = -NkT\bigg( \ln{\bigg[\frac{V}{N}\bigg]} + \frac{3}{2}\ln{\bigg[ \frac{2\pi mkT}{h^2} \bigg]} + 1 \bigg)$$.

Now that the free energy has been calculated, one can use the main thermodynamic identity $$ dF = -SdT -PdV + \mu dN$$ and take partial derivatives to obtain the ideal gas law.

Starting with the identity for pressure,

$$ P = -\bigg( \frac{\partial F}{\partial V} \bigg)_{T,N} = NkT\bigg(\frac{N}{V}\bigg)\cdot \frac{1}{N} =\frac{NkT}{V} $$

which is the ideal gas law.
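As a sanity check on this last step, the pressure identity can be verified numerically by differentiating F with a central finite difference; all parameter values below are hypothetical:

```python
import math

k = 1.380649e-23     # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J s

def F(N, V, T, m):
    """Helmholtz free energy of the classical ideal gas (Stirling form)."""
    return -N * k * T * (math.log(V / N)
                         + 1.5 * math.log(2 * math.pi * m * k * T / h**2)
                         + 1.0)

# Hypothetical gas: 1e23 helium-mass particles in 1 m^3 at 300 K
N, V, T, m = 1e23, 1.0, 300.0, 6.646e-27

dV = 1e-6
P_numeric = -(F(N, V + dV, T, m) - F(N, V - dV, T, m)) / (2 * dV)
print(P_numeric, N * k * T / V)  # central difference matches NkT/V
```

Only the ln(V/N) term depends on V, so the derivative collapses to NkT/V regardless of the mass and temperature terms.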