Talk:Maxwell–Boltzmann statistics

Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 27 August 2021 and 19 December 2021. Further details are available on the course page. Student editor(s): 0.25cm. Peer reviewers: Mumtaziah, Ajp256, Leolsz.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 03:40, 17 January 2022 (UTC)

Derivation
I think this derivation is more complete than the one in the "derivation of the partition function" article. I also think that the derivation belongs here rather than in the latter article because the Bose-Einstein statistics and Fermi-Dirac statistics articles each contain their own derivation, with all three articles now being very similar in development. Eventually I would like to remove or reduce the derivation from the partition function article and link to this one instead. This would not affect the canonical and grand canonical material in the partition function article. PAR 07:16, 29 October 2005 (UTC)


 * more complete? in what sense? because it's more tedious? the longer derivations in the FD, BE, and MB statistics articles are of the physically un-illuminating and unnecessarily contorted variety. partition functions are, more or less, the multiplicity of the system. by exponentiating the entropy and using partition functions, dealing with multiplicities directly can be avoided entirely. no graduate textbook i've seen takes this unappealing approach. Mct mht 18:00, 27 April 2006 (UTC)


 * Calling the derivation "tedious", "contorted", and "unappealing" implies that you know of a derivation that is less so, without sacrificing rigor. Please modify this page accordingly. PAR 23:39, 13 May 2006 (UTC)

I agree with User:PAR


 * let me qualify my comment a bit. IMHO, the article derivation of the partition function needs to go or a rewrite. i did not mean to compare with that article. Mct mht 23:42, 18 May 2006 (UTC)

I really liked the derivation of the microcanonical ensemble in this article. It is just at the right level to be able to understand where the Boltzmann distribution and entropy come from. I had a little trouble going from ln W to dE, but the proof was very readable and helpful. 128.163.8.203 (talk) 14:17, 29 February 2016 (UTC)

disagree with combining Maxwell-Boltzmann distribution with this article
In general, probability distributions have their own pages. I'd say the Maxwell-Boltzmann distribution should be no different. Also, the theorem presented on this page CAN be applied to molecular motion, but it really applies to all chemical (physical) systems and is the basis for a heck of a lot more than the Maxwell-Boltzmann distribution. Pdbailey 02:35, 18 April 2006 (UTC)

It is a matter of audience and communication, not aesthetics of math. People who want to understand a simple introduction to this subject will read the first few paragraphs, e.g. my usage was for a 16-year-old who wanted a layer more depth than "collision theory". Communication is about context as well as content; therefore embedding a simple introduction in a more complex context will reduce the access and usefulness. (Stefan@wasilewski.com)
 * I disagree with combining Maxwell-Boltzmann distribution with this article
 * The issue isn't at all about "aesthetics of math". MB stats and MB distribution are completely different things. It's like merging carpentry into hammer. Capefeather (talk) 20:53, 23 April 2009 (UTC)


 * DISAGREE with combining. You can use MB statistics to derive the MB distribution, which is the special case $$E=mv^2/2$$. There are many other degrees of freedom, many other expressions for their energy. MB distribution is useless for them, but MB statistics will give you the answer. PAR (talk) 17:22, 5 March 2010 (UTC)

mistake in Boltzmann counting
the reasoning given in Boltzmann counting is not quite right. a direct calculation shows that the "corrected" multiplicity W still fails to give additive entropy. for example, multiply each $$N_i$$ by 2 and the entropy fails to double. something is wrong.

the context in which this problem is brought up also seems to be unusual. the common approach is to calculate the entropy of the ideal gas by computing directly the available volume of phase space, then seeing that an ad hoc reduction by a factor of $$N!$$ is required. it is called ad hoc precisely because it is exact only if the expected value of the distribution numbers $$N_i$$ is much less than 1. this defines the classical limit, and Gibbs's reduction factor only fixes the counting in the classical limit. this could be the problem here, as no such assumption, that the system is in the classical limit, was made in the first derivation in the article. therefore using the expression obtained in the derivation leads to the incorrectness in Boltzmann counting.

if no justification is given or changes made, that section should be deleted. Mct mht 05:09, 24 May 2006 (UTC)

in fact, it obviously doesn't make sense to use the Gibbs reduction factor here. let's assume there's no degeneracy and $$g_i = 1$$ for all i. if you divide by the Gibbs factor $$N!$$, this leads to an entropy that's always non-positive, clearly nonsense. section will be deleted. Mct mht 05:52, 24 May 2006 (UTC)
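The $$g_i = 1$$ point above is easy to verify numerically. With $$g_i = 1$$ the multiplicity is $$W = N!/\prod_i N_i!$$, so dividing by the Gibbs factor $$N!$$ leaves $$1/\prod_i N_i!$$, whose logarithm can never be positive. A minimal sketch (the occupation numbers here are made up for illustration):

```python
from math import lgamma

def log_w_corrected(ns):
    # ln(W/N!) with g_i = 1: W = N!/prod(N_i!), so W/N! = 1/prod(N_i!)
    # and ln(W/N!) = -sum_i ln(N_i!), which is never positive
    return -sum(lgamma(n + 1) for n in ns)

for ns in [(3, 2, 1), (10, 10), (1, 1, 1)]:
    print(ns, log_w_corrected(ns))   # always <= 0
```

So the "entropy" $$S = k \ln(W/N!)$$ is indeed non-positive for every choice of occupation numbers in this case.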

Question: is the final term in the W equation actually correct? Should it be $$(N - N_1 - \cdots - N_{k-1})!$$ in the numerator? [12:31, 11 July 2006 (AGS)]


 * In the "A derivation of the Maxwell–Boltzmann distribution" there is a "clarification needed" tag that I have been thinking about removing. $$\beta=1/kT$$ can be easily derived from W, $$S=k\ln W$$ and the second law, but the derivation of $$\alpha=\mu$$ requires correct Boltzmann counting, i.e. dividing the present equation for W by N! (i.e., removing N! from in front of the product). Correct Boltzmann counting assumes $$g_i \gg n_i$$ so the example for $$g_i = 1$$ is not a valid criticism. It also assumes $$n_i \ll N$$ as mentioned above. I don't understand why replacing $$n_i$$ by $$2n_i$$ should double the entropy. The bottom line is that dividing the present equation for W by N! amounts to correct Boltzmann counting under the specified conditions, and would allow an easy identification of $$\alpha=\mu$$, and the clarification tag could be removed. PAR (talk) 04:40, 3 November 2011 (UTC)
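The extensivity claim under correct Boltzmann counting can also be checked numerically. In the $$g_i \gg n_i$$ regime the corrected count is $$W/N! \approx \prod_i g_i^{n_i}/n_i!$$, and doubling the system (twice the particles in twice as many states) should roughly double $$\ln(W/N!)$$. A sketch with made-up occupations and degeneracies:

```python
from math import lgamma, log

def log_w_corrected(ns, gs):
    # ln(W/N!) = sum_i [ n_i ln g_i - ln(n_i!) ]  (exact factorials via lgamma)
    return sum(n * log(g) - lgamma(n + 1) for n, g in zip(ns, gs))

# made-up occupations and degeneracies with g_i >> n_i (classical limit)
ns = [1000, 2000, 500]
gs = [10**6, 10**6, 10**6]

s1 = log_w_corrected(ns, gs)
# "double the system": twice the particles in twice as many states
s2 = log_w_corrected([2 * n for n in ns], [2 * g for g in gs])
print(s2 / s1)   # close to 2, i.e. the entropy is (nearly) extensive
```

The ratio differs from 2 only by sub-leading Stirling corrections, consistent with the claim that the N! division gives extensive entropy in the classical limit.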

Main equation error?
Is the main equation really correct?



$$\frac{N_i}{N} = \frac {g_i} {e^{(\epsilon_i-\mu)/kT}} = \frac{g_i e^{-\epsilon_i/kT}}{Z} $$

Shouldn't it be:



$$N_i = \frac {g_i} {e^{(\epsilon_i-\mu)/kT}} = \frac{N}{Z} g_i e^{-\epsilon_i/kT} $$

In the first derivation one line reads:



$$N_i = \frac{g_i}{e^{(\epsilon_i-\mu)/kT}} $$

This line directly conflicts with the main equation.


 * Reply: the eqn you're referring to from the introduction seems ok. Ni is the occupation number for the i-th state, and N is the total number of particles. so Ni/N is essentially the probability that a member of the ensemble chosen at random would be in the i-th state, which should be $$g_i e^{-\epsilon_i/kT}$$ divided by the normalizing constant Z. on the other hand, as you pointed out, something doesn't quite make sense with the eqn $$N_i = \frac{g_i}{e^{(\epsilon_i-\mu)/kT}}$$. that section needs a careful look-over. perhaps a clean-up tag should be attached to it. Mct mht 15:29, 21 February 2007 (UTC)


 * Reply: I agree with the original post. $$ \frac{N_i}{N} = \frac{g_i e^{-\epsilon_i/kT}}{Z} $$ is correct, but $$ \frac{N_i}{N} = \frac {g_i} {e^{(\epsilon_i-\mu)/kT}} $$ is incorrect. It should be $$ N_i = \frac {g_i} {e^{(\epsilon_i-\mu)/kT}}$$ (note that there is no longer an $$N$$ in the denominator of the LHS), as was stated in the original post. If there are no rebuttals, I will make the change soon. 27 June 2009 —Preceding unsigned comment added by 18.100.0.121 (talk) 16:05, 27 June 2009 (UTC)

Not so fast
 I didn't like that after a lot of detail in the beginning of the presentation, the Maxwell-Boltzmann statistics is pulled out of a hat. So I propose to somehow work the derivation below into the article, replacing the short form:

Using Stirling's approximation for the factorials, taking the derivative with respect to $$N_i$$, setting the result to zero, and solving for $$N_i$$ yields the Maxwell-Boltzmann population numbers:

by the following (please revise):

With Stirling's approximation in the form


 * $$\ln N! \approx N \ln N - N $$

and neglecting some constants, we obtain



$$f(N_i) \approx N \ln N - \sum N_i \ln N_i + \sum N_i \ln g_i + \alpha (N-\sum N_i)+ \beta(E-\sum N_i E_i) $$

We seek now extrema by setting



$$\frac{\partial f(N_i)}{\partial N_i} = -\ln{N_i} - 1 + \ln{g_i} - \alpha - \beta E_i = 0 $$

For convenience we can replace the expression $$(-1 -\alpha)$$ by $$\alpha$$ and obtain solutions



$$N_i = g_i e^{\alpha-\beta E_i} $$

However, $$\alpha$$ and $$\beta$$ still need to be determined. Since we have



$$N = \sum N_i = e^{\alpha} \sum g_i e^{-\beta E_i} $$

it is easy to see that



$$e^{\alpha} = \frac{N}{\sum g_i e^{-\beta E_i}} $$

So one gets:



$$\frac{N_i}{N} = \frac{g_i e^{-\beta E_i}}{\sum_j g_j e^{-\beta E_j}} $$

end of insertion
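As a sanity check on the end result of the proposed derivation, here is a short numeric sketch (the three-level spectrum, degeneracies, $$\beta$$, and $$N$$ are all made up) confirming that the populations $$N_i = (N/Z)\, g_i e^{-\beta E_i}$$ sum back to $$N$$, as the $$\alpha$$ constraint requires:

```python
import numpy as np

# hypothetical three-level system; energies, degeneracies and beta are made up
E = np.array([0.0, 1.0, 2.0])
g = np.array([1.0, 3.0, 5.0])
beta, N = 1.0, 1000.0

Z = np.sum(g * np.exp(-beta * E))      # partition function, Z = sum_i g_i e^{-beta E_i}
Ni = N * g * np.exp(-beta * E) / Z     # Maxwell-Boltzmann populations

print(Ni.sum())   # recovers N, as the particle-number constraint demands
```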



Quantization?
Suppose we have a number of energy levels, labelled by index i, each level having energy εi and containing a total of Ni particles. To begin with, let's ignore the degeneracy problem. Assume that there is only one way to put Ni particles into energy level i.

Does "a number of energy levels" mean that energy isn't continuous, but rather quantized? For instance, restricting to energy levels of 1.2, 2.4, 3.6, 4.8, 6.0, ... would imply that a quantum of thermal energy carries 1.2 units of energy.

If it is continuous, it would mean I can always find another state between states. But wouldn't that cause the integral of the probability over all states to diverge?

And there's another problem: it states that we put distinguishable particles into energy levels.

But if the quantization of thermal energy is correct, why not put distinguishable energy quanta into distinguishable particles instead, and then count the number of ways and find the distribution with the highest value?

I was wondering if I've put this question in the right place. Do inform me where to post this question if there is a better venue. I remember there was a help desk, but couldn't find it. Would that be more appropriate? User:Tikai 16:22, 2 May 2008 (UTC)

Modeling of statistical thermodynamics: The Maxwell - Boltzmann distribution
Let an isolated system be composed of a large number $$N$$ of particles, each being able to assume several energy states $${{E}_{1}},\text{ }{{E}_{2}},...$$. If at a particular time the particles are distributed in such a way that $${{n}_{1}}$$ have energy $${{E}_{1}}$$, $${{n}_{2}}$$ have energy $${{E}_{2}}$$ and so on, then

$$\begin{align} & N=\sum\limits_{i}{{{n}_{i}}} \\ & U=\sum\limits_{i}{{{n}_{i}}{{E}_{i}}}\text{ (total energy of system)} \\ & {{E}_{\text{average}}}=\frac{U}{N} \\ \end{align}$$

The set $$\left\{ {{n}_{1}},{{n}_{2}},... \right\}$$ constitutes a partition which determines a microstate of the system. The macrostate determined by $$N,U$$ may correspond to a number of different partitions (microstates). However, it is logical to assert that for a specific macrostate there is a class of related microstates most probable to occur, in which case the system is said to be in statistical equilibrium. The system will not deviate from such stability (except for minor statistical fluctuations) unless disturbed by an external factor.

Under certain assumptions it can be shown that the most probable, or equilibrium, partition is

$$\begin{align} & {{n}_{i}}=\alpha {{g}_{i}}{{e}^{-\beta {{E}_{i}}}}=\underbrace{\frac{N}{Z}{{g}_{i}}{{e}^{-\beta {{E}_{i}}}}}_{\begin{smallmatrix} \text{Maxwell - Boltzmann} \\ \text{distribution law} \end{smallmatrix}} \\ & \underbrace{Z=\sum\limits_{i}{{{g}_{i}}{{e}^{-\beta {{E}_{i}}}}}}_{\begin{smallmatrix} \text{partition function:} \\ \text{indicates structure of system} \end{smallmatrix}} \\ \end{align}$$

where $$\beta $$ is a temperature – related parameter, $$\alpha $$ is a constant that depends on the structure of the system and $${{g}_{i}}\text{ }\left( {{\text{J}}^{-1}} \right)$$ is the intrinsic probability of the energy state $$i$$, also called degeneracy of the energy level.

Provided the system is in statistical equilibrium, the following relations hold:

$$\begin{align} & \beta =\frac{1}{kT} \\ & U=\frac{N}{Z}\sum\limits_{i}{{{g}_{i}}{{E}_{i}}{{e}^{-\beta {{E}_{i}}}}}=kN{{T}^{2}}\frac{d\ln \left( Z \right)}{dT} \\ & {{E}_{\text{average}}}=-\frac{d\ln \left( Z \right)}{d\beta }=k{{T}^{2}}\frac{d\ln \left( Z \right)}{dT} \\ \end{align}$$

where $$k$$ is the Boltzmann constant and $$T$$ is the temperature of the system. It can be proven that if two interacting systems of particles are in statistical equilibrium then they are in thermal equilibrium, i.e. they have the same temperature.
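The relation $${{E}_{\text{average}}}=-d\ln(Z)/d\beta$$ above is easy to verify numerically. This sketch uses a made-up four-level spectrum and compares the direct ensemble average with a central finite difference of $$\ln Z$$:

```python
import numpy as np

# made-up discrete spectrum and degeneracies
E = np.array([0.0, 0.5, 1.3, 2.0])
g = np.array([1.0, 2.0, 2.0, 1.0])

def lnZ(beta):
    return np.log(np.sum(g * np.exp(-beta * E)))

beta = 2.0
w = g * np.exp(-beta * E)
E_avg = np.sum(w * E) / np.sum(w)          # direct ensemble average

h = 1e-6                                   # central finite difference
E_from_Z = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)

print(E_avg, E_from_Z)   # the two expressions agree
```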

For an ideal gas the energy spectrum is effectively continuous, $${{E}_{i}}\to E$$, and

$$\begin{align} & \frac{dN}{dE}=\frac{2\pi N}{{{\left( \pi kT \right)}^{3/2}}}{{E}^{1/2}}{{e}^{-\frac{E}{kT}}} \\ & \frac{dN}{dv}=4\pi N{{\left( \frac{m}{2\pi kT} \right)}^{3/2}}{{v}^{2}}{{e}^{-\frac{m{{v}^{2}}}{2kT}}} \\ \end{align}$$
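As a check on the speed distribution, this sketch verifies numerically that $$dN/dv$$ integrates to $$N$$ (the mass and temperature are illustrative, roughly a nitrogen molecule at room temperature, with $$N$$ normalized to 1):

```python
import numpy as np

# illustrative values: nitrogen-like molecular mass, room temperature, N = 1
k = 1.380649e-23      # Boltzmann constant, J/K
m, T, N = 4.65e-26, 300.0, 1.0

v = np.linspace(0.0, 5000.0, 200_000)
dNdv = 4 * np.pi * N * (m / (2 * np.pi * k * T))**1.5 \
       * v**2 * np.exp(-m * v**2 / (2 * k * T))

total = float(np.sum(dNdv) * (v[1] - v[0]))   # simple Riemann sum
print(total)   # recovers N to numerical accuracy
```

The upper cutoff of 5000 m/s is far above the most probable speed (~420 m/s here), so the truncated tail is negligible.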

With the above notions, the first law of thermodynamics for a non - isolated system becomes

$$dU=d\left( \sum\limits_{i}{{{n}_{i}}{{E}_{i}}} \right)=\underbrace{\sum\limits_{i}{\left( d{{n}_{i}} \right){{E}_{i}}}}_{\begin{smallmatrix} \text{change due to } \\ \text{redistribution of} \\ \text{molecules} \end{smallmatrix}}+\underbrace{\sum\limits_{i}{{{n}_{i}}\left( d{{E}_{i}} \right)}}_{\begin{smallmatrix} \text{change due to reestablishment} \\ \text{of energy levels (structure -} \\  \text{size of system)} \end{smallmatrix}}=dQ+d{{W}_{\text{ext}}}$$

while the second law of thermodynamics is expressed as

$$S=k\ln \left( P \right)$$

where $$P$$ is the probability of the partition corresponding to the state of the system and $$S$$ is the entropy.

Is Bose-Einstein expression for W correct?
Is this expression below right?


 * $$W=\prod_i \frac{(N_i+g_i-1)!}{N_i!(g_i-1)!}$$

I suspected it was not, since assuming


 * $$g_i = 1$$

leads to


 * $$W = 1$$

On second thought, this may just be right: there is only one way to arrange $$N_i$$ indistinguishable particles within a single state! — Preceding unsigned comment added by Maajdl (talk • contribs) 06:56, 11 December 2017 (UTC)
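This can be confirmed with a short count using the binomial form of the Bose-Einstein multiplicity, $$(N_i+g_i-1)!/\big(N_i!(g_i-1)!\big) = \binom{N_i+g_i-1}{N_i}$$ (the occupation numbers below are arbitrary):

```python
from math import comb, prod

def W_BE(ns, gs):
    # Bose-Einstein multiplicity: prod_i (N_i + g_i - 1)! / (N_i! (g_i - 1)!)
    return prod(comb(n + g - 1, n) for n, g in zip(ns, gs))

print(W_BE([5, 3, 7], [1, 1, 1]))   # 1: a single arrangement when every g_i = 1
print(W_BE([2], [3]))               # C(4, 2) = 6 ways for 2 bosons in 3 states
```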

Edit in text between distinguishable/indistinguishable
Please evaluate whether my last edit is correct or not since those two letters change the meaning of the paragraph! Thank you. Bcpicao (talk) 22:45, 25 January 2023 (UTC)


 * Nope, the original version was correct. Evgeny (talk) 21:01, 30 January 2023 (UTC)

Oxygen particles?
In the figure caption ‘oxygen particles’ are mentioned. It should be specified whether atoms, molecules, ozone or pieces of frozen oxygen 😀 are meant. If these are not atoms, rotation and vibration come into play. If they are, what’s keeping them from forming dimers? Aoosten (talk) 11:59, 6 March 2024 (UTC)


 * Right, it was ambiguous. I've changed it to "molecules". As to your second comment, why would an internal degree of freedom (rotational or vibrational) make any impact on the velocity distribution? Evgeny (talk) 12:48, 6 March 2024 (UTC)