10 Feb 2014
How does a gas of particles with uniform speed reach the Maxwell-Boltzmann distribution?
Take an empty container and fill it with N gas particles (ideally a monoatomic gas), each having the same kinetic energy E, then isolate the container. Since initially the speeds don’t follow the Maxwell-Boltzmann distribution, such a system cannot be in thermodynamic equilibrium. On the other hand, assuming perfectly elastic collisions (and there is no reason to assume otherwise, since the only form of energy the particles can possibly have is kinetic), I see no way such a system could spontaneously evolve to equilibrium: elastic collisions among equal masses keep speeds unchanged! What gives?
Thinking about this question is quite enlightening. The following was my answer.
Bravo on some wonderfully clear and careful thoughts on your problem! I think this paper will help you:
A summary of my answer below is:
Statistical correlation between the states of a system's constituents, together with the use of the Gibbs entropy versus the Boltzmann entropy to account for that correlation, is the mathematical framework you need.
You are quite right that in the case of *perfectly elastic* collisions with the walls, the system can never reach ideal thermodynamic equilibrium, i.e. a state where all the particles have identical probability density distributions over their states and these states are perfectly statistically independent (i.e. wholly uncorrelated).
In the practical case, the collisions with the wall are never perfectly elastic. The wall is thermalized, which means that the thermal state of the wall's molecules is randomly varying. Sometimes a gas particle will pick up a little energy in interacting with the wall; other times it will lose a little: at thermodynamic equilibrium the long-term averages of the energy flows into and out of the box are equal. But if the gas molecules' states are highly correlated, as in the special example you cite, the information (i.e. Shannon entropy; in thermodynamics it corresponds to the *Gibbs entropy*) needed to define the complete microstate of all the molecules in the box will increase with time, because the particles are interacting with thermalized walls: their collisions are governed by parameters that evolve stochastically with the randomly evolving states of the wall's particles, and you would need to include a description of the history of these random collisions to fully define the states of the box's particles.
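This mechanism can be sketched numerically. The toy model below is my own illustration, not part of the original question or answer, and it makes the crudest possible assumption about the wall: every wall collision *fully* rethermalizes a particle, resampling its velocity components from a Gaussian at the wall temperature (a much stronger assumption than the small energy exchanges described above). In units with $m = k_B = 1$, a gas started with every particle at one common speed relaxes to Maxwell-Boltzmann statistics:

```python
import math
import random

random.seed(0)

N = 5000      # number of gas particles
E0 = 1.5      # initial kinetic energy per particle (units m = k_B = 1)
T_wall = 1.0  # wall temperature chosen so <E> = (3/2) k_B T matches E0

# Start with every particle at the same speed (directions are irrelevant
# here, since only speeds are tracked).
v0 = math.sqrt(2 * E0)
speeds = [v0] * N

def wall_collision(T):
    """A stochastic wall: resample all three velocity components from a
    Gaussian at the wall temperature and return the new speed."""
    vx, vy, vz = (random.gauss(0.0, math.sqrt(T)) for _ in range(3))
    return math.sqrt(vx * vx + vy * vy + vz * vz)

# Let each particle suffer ~20 wall collisions on average.
for _ in range(20 * N):
    i = random.randrange(N)
    speeds[i] = wall_collision(T_wall)

mean_E = sum(0.5 * s * s for s in speeds) / N
mean_v = sum(speeds) / N
print(mean_E)  # close to 1.5 = (3/2) k_B T
print(mean_v)  # close to sqrt(8 T / pi), the Maxwell mean speed
```

The initially degenerate speed distribution (all speeds equal) spreads out, and the sample mean energy and mean speed agree with the Maxwell-Boltzmann values; a real wall would do this far more gradually, one small energy exchange at a time.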
Actually, if the walls are a finite thermal reservoir, the initially perfectly uncorrelated states of the wall's molecules will themselves become weakly correlated with time as they interact with the correlated gas molecules: the Shannon entropy of a closed system always stays constant. If the walls are thought of as the boundary of a bigger and bigger thermodynamic reservoir, the correlation just spoken of becomes spread over more and more wall molecules, so in the limit of an infinite reservoir the correlation between the wall particles becomes nought and the interaction with the gas particles does not disturb the wall system's thermodynamic equilibrium. In classical thermodynamic problems, "infinite reservoirs" are essential thought-experiment tools, because they are perfect "dumping grounds" for the kind of statistical correlation just spoken of. This is why the non-equilibrium thermodynamics of small systems is so hard to analyse: the statistical correlations make the exact analysis of such systems intractable.
Given the above two paragraphs, we can now look at how you thought about the problem:
“At any rate, the process I imagined goes like this: initially, the particles bombarding the wall transfer some amount of heat to it while slowing down; in turn the wall, now heated up, will transfer back some heat to the gas; eventually, the system will reach the expected equilibrium.”
This is substantially sound; the only near miss is the last phrase, "eventually, the system will reach the expected equilibrium", which in general needs qualification. If the walls are a "small" thermodynamic system, with not many more particles than the gas itself, the correlation between the wall molecules' states that arises from their interaction with the at-first-highly-correlated gas molecules can only be "spread" over a limited number of particles, so the system will wind up with both gas and wall molecules having substantially correlated states. If, however, the walls are a big system, then the system "eventually reaches its expected equilibrium" for all practical purposes. The size of the total system determines how far towards equilibrium it can proceed.
You might find it helpful now to look at a kind of "backwards" version of your problem: an "irreversible" change in which a gas at first at thermodynamic equilibrium (i.e. all particle states have the same probability density function and are all statistically uncorrelated) has its container's volume suddenly doubled, with the collisions with the walls perfectly elastic. The gas's Gibbs entropy is unchanged by the volume doubling, because one can in principle compute all the future particle states from a knowledge of their states just before the doubling. The laws of physics at the molecular level are reversible: beginning states are mapped one-to-one and onto future states, so that either can always in principle be computed from the other, and therefore no information gets added to a truly closed (i.e. sundered from the rest of the World) thermodynamic system. However, the experimentally observed Boltzmann entropy increases by $k_B \log 2$ per particle: intuitively, this is because each particle's position now needs one more bit of information to specify which half of the newly doubled box volume it lies within, i.e. whether it lies in the former half volume or in the newly allowed volume. But now the particle states are statistically correlated, and so
$$S_B = S_G + k_B N M$$
where $S_B$ is the Boltzmann entropy, $S_G$ the initial, unchanged Gibbs entropy, $N$ the number of particles and $M$ the mutual information (see the Wikipedia page of this name) per particle. $M$ is an information-theoretic measure of how much of a particle's state can be foretold from knowledge of the other particles' states, i.e. an information-theoretic measure of statistical correlation. So the Boltzmann entropy, which can be shown to equal the macroscopic, experimentally observable Clausius entropy, coincides with the Gibbs (information) entropy only when the system's constituents are statistically uncorrelated.
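To make $M$ concrete, here is a deliberately tiny toy calculation of my own (in bits rather than nats, with $k_B$ set aside; it is an illustration of mutual information, not a gas simulation): two perfectly correlated "which half of the box" variables. Summing the marginal entropies, as the Boltzmann-style count does, gives 2 bits; the joint (Gibbs-style) entropy is only 1 bit; the 1-bit discrepancy is exactly the mutual information:

```python
import math

def H(probs):
    """Shannon entropy in bits of a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint distribution P(x, y) for two "which half" bits that
# are perfectly correlated: the two particles are always in the same half.
joint = {(0, 0): 0.5, (1, 1): 0.5}

# Marginal entropies (what a per-particle, Boltzmann-style count sees).
Hx = H([sum(p for (x, _), p in joint.items() if x == v) for v in (0, 1)])
Hy = H([sum(p for (_, y), p in joint.items() if y == v) for v in (0, 1)])

# Joint entropy (what the Gibbs entropy of the pair sees).
Hxy = H(list(joint.values()))

M = Hx + Hy - Hxy  # mutual information
print(Hx, Hy, Hxy, M)  # -> 1.0 1.0 1.0 1.0
```

Here the sum of marginal entropies (2 bits) exceeds the joint entropy (1 bit) by exactly $M = 1$ bit: knowing one particle's half tells you the other's for free, which is precisely the correlation the Gibbs entropy accounts for and the Boltzmann entropy ignores.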
We can think of this irreversible change in terms of the shape of phase-space volumes. Initially, the phase-space volume of the uncorrelated gas in thermodynamic equilibrium looks like a pretty ordinary, convex, simply connected set (if anything in such a fantastically high-dimensional space can be called "ordinary" with a straight face!). With the irreversible change it evolves into something with the same volume (i.e. the same Gibbs entropy: entropies can also be interpreted as phase-space volumes), but squashed and finely divided: it can end up as a fractal-like foam, something like a Sierpinski carpet. The true volume of this object does not change, but its practically observable volume does, i.e. its volume observed down to a given "coarse-graining" level: the smallest "unfoamy" set (simply connected at scales finer than the coarse graining in question) containing the object, just as foam, resolved only coarsely, seems to take up more space than it does. This change is the difference between the Boltzmann and Gibbs entropies. As we make more and more detailed measurements, we can experimentally "see" more and more of the "foamy" structure, but in doing so we depart further and further from the classical Clausius characterisation of entropy, since we must measure and bring in more and more information beyond the classical variables of pressure, temperature and so on to describe the foamy set more fully.
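This stretching-and-folding picture can be illustrated with a standard toy model (my choice of example, not from the original discussion): the baker's map, an area-preserving chaotic map of the unit square that stands in for volume-preserving phase-space flow. The true area of an initial blob never changes, but the number of coarse-grid cells it touches, a stand-in for the coarse-grained (Boltzmann-like) volume, grows as the blob is kneaded into ever finer stripes:

```python
import random

random.seed(1)

def baker(x, y):
    """The baker's map: stretch the unit square to double width, cut it
    in half, and stack; areas are exactly preserved."""
    if x < 0.5:
        return 2 * x, y / 2
    return 2 * x - 1, y / 2 + 0.5

def coarse_cells(points, n=16):
    """Number of occupied cells on an n-by-n coarse grid: a proxy for the
    coarse-grained phase-space volume."""
    return len({(int(x * n), int(y * n)) for x, y in points})

# A compact, "unfoamy" initial region: a small square of area 1/16.
pts = [(random.uniform(0, 0.25), random.uniform(0, 0.25))
       for _ in range(20000)]

occupied = [coarse_cells(pts)]
for _ in range(8):
    pts = [baker(x, y) for x, y in pts]
    occupied.append(coarse_cells(pts))

print(occupied)  # grows from 16 occupied cells toward all 256
```

The exact area of the point set's support never changes, yet the coarse-grained cell count climbs until the stripes are finer than the grid can resolve and the set "fills" the whole square at that resolution; refining the grid would recover more of the stripy structure, exactly as in the measurement discussion above.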