In thermodynamics, quantities such as energy, volume, amount of matter and the infamous entropy are so-called *extensive* quantities: if the system size is doubled, their value doubles too; if the system size is tripled, so does their value; and so on.

That entropy is extensive ensures that if two identical, initially separated systems are brought together to form a larger system of double the size without affecting “its nature” (in other words, its size-independent properties), then the entropy of the universe does not change. This means that if one were to perform the reverse operation (splitting a system into two identical parts), there would be no way to discriminate the past from the future.
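As a minimal numerical sketch of this point (my illustration, not taken from the paper), one can use the standard Sackur–Tetrode form of the ideal-gas entropy with all physical constants set to 1. With the usual indistinguishability correction the entropy is exactly extensive, while the uncorrected (“distinguishable”) counting produces a spurious entropy of `2 N ln 2` when two identical systems are merged:

```python
import math

def s_indist(N, V, E):
    # Sackur-Tetrode-like entropy per k_B, constants set to 1;
    # the 1/N! (Stirling) correction makes it depend on V/N and E/N only.
    return N * (math.log((V / N) * (E / N) ** 1.5) + 2.5)

def s_dist(N, V, E):
    # Same expression without the ln N! correction ("distinguishable" counting).
    return N * (math.log(V * (E / N) ** 1.5) + 1.5)

N, V, E = 1000, 50.0, 75.0

# Doubling the system doubles the corrected entropy exactly:
print(s_indist(2 * N, 2 * V, 2 * E) - 2 * s_indist(N, V, E))  # → 0.0

# Without the correction, merging two identical systems appears to
# create entropy out of nowhere, by exactly 2 N ln 2:
excess = s_dist(2 * N, 2 * V, 2 * E) - 2 * s_dist(N, V, E)
print(excess - 2 * N * math.log(2))  # → ~0.0
```

The non-zero `excess` is one face of the Gibbs paradox: with the uncorrected entropy, splitting and re-merging an ideal gas would *not* leave the entropy of the universe unchanged.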

This property is among the implicit postulates of classical thermodynamics, the standalone theory developed during the 19th century.

Later on, a theory was developed whose role was to rationalise thermodynamics through mechanics, the standard theory with which everything else was then described. This theory, pioneered by the likes of Boltzmann, Lorentz, Maxwell, Gibbs, and Paul and Tatyana Ehrenfest, is now known as *statistical mechanics*.

Although remarkably successful at combining a mechanistic view with a thermodynamic one, statistical mechanics still, to this day (more than 120 years later), lacks a set of fundamental postulates and rules on which everybody agrees: *people agree on the main equations that work for most practical purposes, but not on the fundamental assumptions from which these equations emerge*.

A particular bone of contention is precisely where the extensive character of entropy comes from in statistical mechanics. A very popular narrative (supported in more than 85% of the 40 reference textbooks I consulted on the subject) consists in saying that the extensive character of entropy fundamentally could not be satisfactorily explained in the times of the aforementioned pioneers, as they did not know quantum mechanics. The claim, in other words, is that the physics of the time was unequipped to answer this deep question, and that we now know extensivity to be a consequence of the so-called quantum indistinguishability of identical particles, which can only be either fermions or bosons.
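To make the textbook argument concrete (this sketch is mine, not from the paper): for a classical monatomic ideal gas, counting microstates without the indistinguishability factor, versus dividing the count by $N!$, gives

$$
\begin{aligned}
S_{\text{dist}} &= Nk\left[\ln\!\big(V\,\varphi(T)\big) + \tfrac{3}{2}\right],\\[4pt]
S_{\text{indist}} &= S_{\text{dist}} - k\ln N! \;\approx\; Nk\left[\ln\!\left(\frac{V}{N}\,\varphi(T)\right) + \tfrac{5}{2}\right],
\end{aligned}
$$

where $\varphi(T) = (2\pi m k T/h^2)^{3/2}$ and Stirling’s approximation $\ln N! \approx N\ln N - N$ has been used. Only the second expression depends on $V$ through the ratio $V/N$, and is therefore extensive; the $1/N!$ factor is what the popular narrative attributes to quantum indistinguishability.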

Following other authors such as Frenkel, Cates, Swendsen and Jaynes, I have just published a paper on this subject in the journal Molecular Physics.

In this paper, I argue that pre-quantum-revolution physics was conceptually rich enough to propose a satisfactory rationale for the extensive character of entropy in statistical mechanics, and that (a) it does not need to be opposed to the quantum rationale and (b) there seem to be cases where the quantum-mechanical explanation fails to be compatible with experiments. At the same time, I point out a problem within thermodynamics and statistical mechanics which I argue has been overlooked since the 1870s, and which does not seem to be solved by the quantum approach. Upon choosing a particular interpretation of the problem, I then propose a solution to it, based on ideas of the 19th-century physicist J.W. Gibbs.

The basic idea of the paper is summarised in the figure below.

For those interested, the paper can be accessed here: https://www.tandfonline.com/eprint/UPvQ3sGrgqGcniYEqC2M/full
