Thermodynamics is about the description of large systems, which is mostly about the following key points. (A Modern Course in Statistical Physics by L. E. Reichl)
In any case, thermodynamics is a theory that deals with black boxes: we manipulate whatever variables we like, observe the changes, and summarize the results into a set of laws.
The first thermodynamic potential we can think of is the (differential) internal energy, which by definition is a function of entropy \(S\), volume \(V\) and particle numbers \(N_i\),

\[dU = T dS - p dV + \mu_i dN_i,\]
where repeated indices are summed over. The simplest way to generate all the other thermodynamic potentials is the Legendre transform, which is explained in Legendre Transform.
Mathematically, a function of three variables generates seven additional functions through Legendre transforms. The differential forms of the most commonly used ones are

\[dH = T dS + V dp + \mu_i dN_i, \qquad H = U + pV,\]
\[dF = -S dT - p dV + \mu_i dN_i, \qquad F = U - TS,\]
\[dG = -S dT + V dp + \mu_i dN_i, \qquad G = U - TS + pV,\]
\[d\Omega = -S dT - p dV - N_i d\mu_i, \qquad \Omega = U - TS - \mu_i N_i,\]

with the remaining ones obtained by also transforming the chemical coupling.
Physically, there are three different kinds of couplings here:

- thermal coupling, through the term \(T dS\);
- mechanical coupling, through the term \(-p dV\);
- chemical coupling, through the term \(\mu_i dN_i\).
Legendre transform, intuitively understood, is about turning the switches of these three couplings on and off. Turning off the mechanical coupling of the (differential) internal energy \(dU(S,V,\{N_i\})\) leads to the (differential) enthalpy \(dH(S,p,\{N_i\})\). Indeed, at constant pressure the enthalpy change equals the heat absorbed, just as the internal-energy change does at constant volume.
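As a minimal worked example of switching off the mechanical coupling (standard textbook algebra, not specific to this text), define \(H = U + pV\); then

\[dH = dU + p dV + V dp = T dS + V dp + \mu_i dN_i,\]

so the mechanical variable carried by the potential changes from \(V\) to its conjugate \(p\).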
Let’s sum up.
The relations between them? All the potentials are Legendre transforms of each other. To sum up, let's draw it in Gliffy.
Fig. 3 (The Gliffy source file is here. Feel free to download it and create your own version.)
Fig. 3 needs some explanation.
The Sixth Potential?
Question: Mathematically we can construct a sixth potential, namely the one that should appear at the bottom right of the graph. Why don't people talk about it?
We can surely define a new potential \(Null(T,X,\{\mu_i\})\). However, the value of this function is identically zero, so its derivative is also zero. This is the Gibbs-Duhem equation.
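As a sketch of why this potential vanishes (a standard argument, assuming \(U\) is a first-order homogeneous function of its extensive variables), Euler's theorem gives

\[U = TS - pV + \mu_i N_i,\]

so the full Legendre transform \(U - TS + pV - \mu_i N_i\) is identically zero. Taking its differential and using \(dU = T dS - p dV + \mu_i dN_i\) leaves

\[S dT - V dp + N_i d\mu_i = 0,\]

which is the Gibbs-Duhem equation.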
The answer I want to hear is that this is an instance of \(\mathrm d \mathrm d f = 0\), applied to an exact form.
Hint
Question: Why is internal energy \(U\) a function of three extensive quantities, \(V\), \(S\), \(N\)?
There are three aspects to be considered.
It has always been confusing to work with so many differential potentials. The mathematical trick is to discuss them in the theory of differential forms.
What Are Forms
In simple words, 1-forms are linear maps from vectors to real numbers.
For illustration purposes, we take the simple case of a gas with a fixed number of particles, for which

\[\bar d Q = dU + p dV.\]
We know that \(dU\) is a 1-form and can serve as a basis 1-form, and so can \(dV\). Also notice that we can define a map from a point \((U,V)\) to a real number, which is how the pressure \(p(U,V)\) is understood. As a result, \(\bar dQ\) is also a 1-form. Rewriting the equation in the language of forms,

\[\underset{^\sim}{\omega} = \mathbf{d}U + p \mathbf{d}V,\]
where the under tilde denotes a 1-form. However, \(\underset{^\sim}{\omega}\) is not exact, which means that we cannot find a function \(Q(U,V)\) on the manifold such that \(\underset{^\sim}{\omega} = \mathbf{d}Q\). Following Bernard Schutz in his Geometrical Methods of Mathematical Physics, an exact \(\underset{^\sim}{\omega}\) would require

\[\mathbf{d}\underset{^\sim}{\omega} = \mathbf{d}p \wedge \mathbf{d}V = \left( \frac{\partial p}{\partial U} \right)_V \mathbf{d}U \wedge \mathbf{d}V = 0,\]
where we have used the condition that \(\mathbf{d}U\) is exact, i.e., \(\mathbf{d}\mathbf{d}U=0\). In order for this to be valid, we would have to require \(\left( \frac{\partial p}{\partial U} \right)_V=0\) at all points on the manifold, which is not the case.
Frobenius' theorem tells us that we can find functions on the manifold such that \(\underset{^\sim}{\omega}=T(U,V)\mathbf{d}S\), which gives us

\[\mathbf{d}U + p \mathbf{d}V = T \mathbf{d}S,\]
provided that \(\mathbf{d}\underset{^\sim}{\omega} \wedge \underset{^\sim}{\omega}=0\), which is easily proven to be true here, since writing it out produces repeated basis forms (there are no nonzero \(n+1\)-forms on an \(n\)-dimensional manifold).
Or, back in the language of functions,

\[dU = T dS - p dV.\]
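This non-exactness, and the role of \(1/T\) as an integrating factor, can be checked numerically. Below is a small sketch of my own (not from the text), assuming one mole of a monatomic ideal gas so that \(T = U/C_V\) and \(p = RU/(C_V V)\) on the \((U,V)\) manifold; a 1-form \(a\,\mathbf{d}U + b\,\mathbf{d}V\) is closed iff \(\partial b/\partial U - \partial a/\partial V = 0\).

```python
R_GAS = 8.314          # gas constant, J/(mol K)
CV = 1.5 * R_GAS       # monatomic ideal gas heat capacity at constant volume

def temperature(U, V):
    """T as a function on the (U, V) manifold: U = Cv * T."""
    return U / CV

def pressure(U, V):
    """Ideal gas law p = R T / V, rewritten in terms of U."""
    return R_GAS * temperature(U, V) / V

def curl(a, b, U, V, h=1e-6):
    """Coefficient of dU ^ dV in d(a dU + b dV), via central differences:
    d(a dU + b dV) = (db/dU - da/dV) dU ^ dV."""
    db_dU = (b(U + h, V) - b(U - h, V)) / (2 * h)
    da_dV = (a(U, V + h) - a(U, V - h)) / (2 * h)
    return db_dU - da_dV

# omega = dU + p dV (the heat one-form): NOT closed, hence not exact
c1 = curl(lambda U, V: 1.0, pressure, U=3000.0, V=1.0)

# omega / T = (1/T) dU + (p/T) dV: closed -- this is dS
c2 = curl(lambda U, V: 1.0 / temperature(U, V),
          lambda U, V: pressure(U, V) / temperature(U, V),
          U=3000.0, V=1.0)

print(c1, c2)  # c1 = R/(Cv * V) != 0, c2 ~ 0
```

The nonzero first number is exactly the \(\left(\partial p/\partial U\right)_V\) obstruction; dividing by \(T\) removes it, which is the Frobenius statement \(\underset{^\sim}{\omega} = T\,\mathbf{d}S\) in action.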
A list of references for differential geometry and thermodynamics:
With the help of differential forms, we can derive the Maxwell identities more easily by rewriting the functions as functions of other variables. The punch line is the exterior derivative of equation (1),

\[\mathbf{d}\mathbf{d}U = \mathbf{d}T \wedge \mathbf{d}S - \mathbf{d}p \wedge \mathbf{d}V = 0.\]
The Maxwell identities are obtained by writing the coefficient functions as functions of \((S,V)\) or \((T,V)\), etc.
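As a sketch of how one identity falls out, write \(T\) and \(p\) as functions of \((S,V)\) and expand:

\[\mathbf{d}T \wedge \mathbf{d}S = \left(\frac{\partial T}{\partial V}\right)_S \mathbf{d}V \wedge \mathbf{d}S, \qquad \mathbf{d}p \wedge \mathbf{d}V = \left(\frac{\partial p}{\partial S}\right)_V \mathbf{d}S \wedge \mathbf{d}V.\]

Since \(\mathbf{d}V \wedge \mathbf{d}S = -\mathbf{d}S \wedge \mathbf{d}V\), the condition \(\mathbf{d}T \wedge \mathbf{d}S - \mathbf{d}p \wedge \mathbf{d}V = 0\) (which is just \(\mathbf{d}\mathbf{d}U = 0\)) yields

\[\left(\frac{\partial T}{\partial V}\right)_S = -\left(\frac{\partial p}{\partial S}\right)_V.\]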
The question is how this formalism can help us understand more of the laws of thermodynamics. As an example, we examine the second law using differential forms. For a more general composite system, which has more dimensions or basis forms, we write down a one-form related to heat production,
In general, on an n-dimensional manifold, we can have nonzero \(\mathbf{d}\underset{^\sim}{\omega} \wedge \underset{^\sim}{\omega}\), since nonzero forms exist up to degree n. The meaning is that we may not find a global temperature and entropy on the whole manifold [BSchutz], i.e., no globally integrable heat-exchange form \(\underset{^\sim}{\omega_n}\).
Regarding the geometrical meaning of 1-forms, their level sets are surfaces of equal function value, just like equipotential lines; so for a system that has a global entropy and temperature we can think of such equi-entropy surfaces. One aspect of the second law is then the statement that a system with no heat exchange, \(\underset{^\sim}{\omega}=0\), is restricted to a certain part of the phase space, i.e., it can reach only a limited set of states compared to all possible states on the manifold. In the language of differential forms, the second law is all about the existence of entropy, by Carathéodory's theorem.
Zeroth Law of Thermodynamics
Zeroth Law: A first peek at temperature
Two bodies, each in thermodynamic equilibrium with a third system, are in thermodynamic equilibrium with each other.
This gives us the idea that there is a universal quantity that depends only on the state of a system, no matter what the system is made of.
First Law of Thermodynamics
First Law: Conservation of energy
Energy can be transferred or transformed, but cannot be created or destroyed.
In math,

\[\Delta U = Q + W,\]
where \(W\) is the work done on the system and \(Q\) is the heat given to the system. A better way to write this is to construct a one-form \(\underset{^\sim}{\omega}\) for the heat,

\[\underset{^\sim}{\omega} = \mathbf{d}U - \underset{^\sim}{W},\]
where in gas thermodynamics \(\underset{^\sim}{W}=-p\mathbf{d}V\).
Using Legendre transformations, we know that this one-form can be written in many different forms.
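For instance (a standard manipulation, using \(H = U + pV\)), the same heat one-form can be written through either potential:

\[\underset{^\sim}{\omega} = \mathbf{d}U + p \mathbf{d}V = \mathbf{d}H - V \mathbf{d}p,\]

since \(\mathbf{d}H = \mathbf{d}U + p \mathbf{d}V + V \mathbf{d}p\).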
Second Law of Thermodynamics
Second Law: Entropy change; Heat flow direction; Efficiency of heat engine
There are three different classic versions of the second law. Instead of statements, I would like to use two inequalities to demonstrate this law.
For isolated systems,

\[\mathrm d S \geq 0.\]
Combining the second law with the first law, for reversible processes \(Q = T \mathrm d S\), or \(\underset{^\sim}{\omega}=T\mathbf{d}S\); then for a gas

\[T \mathbf{d}S = \mathbf{d}U + p \mathbf{d}V.\]
Taking the exterior derivative of the whole one-form, and noticing that \(\mathbf{d}U\) is exact,

\[\mathbf{d}T \wedge \mathbf{d}S = \mathbf{d}p \wedge \mathbf{d}V.\]
Cleaning up this equation, we get one of the Maxwell relations. Using Legendre transformations, we can find all the Maxwell relations.
Second Definition of Temperature
The second definition of temperature comes out of the second law. By considering two reversible Carnot heat engines, we find a function that depends on only one parameter, which stands for the temperature-like property of the systems. This defines the thermodynamic temperature.
Third Law of Thermodynamics
Third Law: Absolute zero; Not an extrapolation; Quantum view
The difference in entropy between states connected by a reversible process goes to zero in the limit \(T\rightarrow 0\ \mathrm K\).
Due to this asymptotic behavior, one cannot reach absolute zero in a finite number of steps.
When talking about entropy, we need to understand the properties of cycles. The most important one is the Clausius inequality,

\[\sum_i \frac{Q_i}{T_i} \leq 0,\]
where the equality holds only if the cycle is reversible. In the limit of infinitesimal processes, the relation becomes

\[\oint \frac{\mathrm d Q}{T} \leq 0.\]
This is an elegant result. Intuitively, because the cycle closes, we can build a correspondence between any one path between two states and any other path. That is to say, for reversible processes the following integral

\[\int_A^B \frac{\mathrm d Q}{T}\]
is independent of the path in the state plane. We immediately promote \(\int_A^B \frac{\mathrm d Q}{T}\) to a new quantity, because we really like invariant quantities in physics, i.e.,

\[S_B - S_A = \int_A^B \frac{\mathrm d Q}{T},\]
which we call the entropy (difference). It is very important to realize that entropy is a quantity that depends only on the initial and final states and is independent of the path. Many significant results can be derived using only the fact that entropy is a function of state.
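This path independence can be checked numerically. Here is a small sketch of my own (not from the text), assuming one mole of a monatomic ideal gas, for which \(\mathrm dQ/T = C_V\,\mathrm dT/T + R\,\mathrm dV/V\) along reversible paths; two different paths between the same endpoints give the same entropy change.

```python
import math

R = 8.314      # gas constant, J/(mol K)
CV = 1.5 * R   # molar heat capacity at constant volume, monatomic ideal gas

def delta_S(path, steps=100_000):
    """Integrate dS = Cv dT/T + R dV/V along a parametrized path t -> (T, V).

    For a reversible process of one mole of ideal gas, dQ/T = Cv dT/T + R dV/V,
    so this returns the entropy change between path(0) and path(1)."""
    s = 0.0
    T_prev, V_prev = path(0.0)
    for i in range(1, steps + 1):
        T, V = path(i / steps)
        # midpoint rule on each small step
        s += CV * (T - T_prev) / (0.5 * (T + T_prev))
        s += R * (V - V_prev) / (0.5 * (V + V_prev))
        T_prev, V_prev = T, V
    return s

# two different paths from state A = (300 K, 1 m^3) to state B = (600 K, 2 m^3)
straight = lambda t: (300.0 + 300.0 * t, 1.0 + t)     # straight line in (T, V)
curved = lambda t: (300.0 * 2.0 ** t, 1.0 + t ** 2)   # exponential T, quadratic V

exact = CV * math.log(2.0) + R * math.log(2.0)
print(delta_S(straight), delta_S(curved), exact)  # all three agree
```

Both numerical results match the closed form \(\Delta S = C_V\ln(T_B/T_A) + R\ln(V_B/V_A)\), exactly as the state-function property demands.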
This can be understood through statistics. Suppose we have a box with N gas molecules inside. We divide it into two parts, a left (L) part and a right (R) part. At first all the particles are in the L part. As time passes, the molecules spread into the R part.
The question we would ask is: what is the probability that all the particles come back to the L part? By calculation we can show that the fraction

\[R = \frac{N_L}{N}\]
will with high probability be very close to 0.5, just as fascinating as the central limit theorem.
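A quick simulation of my own (not from the text) makes this concrete: each particle independently ends up on either side with probability 1/2, so the left-side fraction concentrates sharply around 0.5, while the probability of finding all \(N\) particles back in the L part is \(2^{-N}\).

```python
import random

def left_fraction(n_particles, rng):
    """Each particle independently sits in the left or right half with
    probability 1/2; return the fraction found in the left half."""
    n_left = sum(rng.random() < 0.5 for _ in range(n_particles))
    return n_left / n_particles

rng = random.Random(42)
n = 10_000
fractions = [left_fraction(n, rng) for _ in range(100)]
mean_fraction = sum(fractions) / len(fractions)
print(mean_fraction)  # very close to 0.5

# The chance of finding ALL n particles back in the left half is 2**(-n),
# which underflows to 0.0 in double precision for n = 10_000.
print(2.0 ** (-n))
```

Fluctuations of the fraction shrink like \(1/\sqrt{N}\), which is why the return of all particles to one side is never observed for macroscopic \(N\).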
© Copyright 2017, Lei Ma. | On GitHub | Index of Statistical Mechanics | Created with Sphinx 1.2b2.