Ensembles

The problem in statistical mechanics is that we can only obtain partial information about a system, so only partial results can be derived. However, we know that in equilibrium all accessible states appear with equal probability. Recall that a state means a point in phase space. We use a hydrodynamic picture to describe the phase space distribution, but this picture cannot tell us whether the individual phase space points are moving or not; the only thing the density shows is that the apparent density does not change. So it seems that we do not need to know the exact position of a phase space point: we can rely on average values.

The system is like a black box: even if we know all the macroscopic quantities, we still have no idea about the exact microscopic state of the system.

Ensemble

Gibbs’ idea of an ensemble is to create a large number of copies of the system, all with the same thermodynamic quantities.

The question is where to put them? In different time or in different space?

We can create a huge number of copies of the system and imagine that they sit at different places. Then we have all the possible states of the system.

We can also wait an infinitely long time, and all possible states will occur, at least for some systems, which are called ergodic. Ergodic means the system visits all possible states many times over a long time. This is a hypothesis rather than a theorem.

The problem is that not all systems are ergodic. For such systems, of course, we can only take the ensemble average.

Note

Cons

  1. It is not possible to prove that the ensemble average is the same as the time average; in fact, some systems do not obey this rule.
  2. It is not possible to visit all possible states on the constant-energy surface in finite time.
  3. Even complicated systems can exhibit almost exactly periodic behavior; one example is the Fermi–Pasta–Ulam (FPU) experiment.
  4. Even if the system is ergodic, how can we make sure each state occurs with the same probability?

Here is an example of a non-ergodic system:

An example of a non-ergodic system

This is a box with perfectly smooth walls, inside which balls collide with the walls perpendicularly. Such a system only visits a discrete set of phase-space points, with the momentum components keeping the same magnitudes.

Another image from Wikipedia:

ergodic system

Note

Pros

  1. Poincaré’s recurrence theorem proves that at least some systems come back to a state that is very close to the initial state after a long but finite time.
  2. Systems are often chaotic, so it is not really possible to have pictures like the first one under Cons.

We already have the Liouville equation for the density evolution,

\[\frac{\partial}{\partial t} \rho = \{ H, \rho \}\]

and the von Neumann equation,

\[i\hbar \frac{\partial}{\partial t} \hat\rho = [\hat H, \hat\rho ]\]

Both can be written in the unified form

\[i \frac{\partial\rho}{\partial t} = \hat L \rho\]

where \(\hat L\) is the Liouville operator.

We have mentioned that ensembles have the same thermodynamic quantities. In the language of math,

\[\avg{O} = \mathrm{Tr}\, \rho O\]

All we care about is the left-hand side. So as long as \(\rho\) is unchanged, we can stir the system as wildly as we like and the averages stay the same.

Hint

Only one trajectory in phase space is the true one. How can we use an ensemble to calculate the real observables?

Actually, what we calculate is not the real observable but the ensemble average. Since we are dealing with equilibrium, we need the time average, because for an equilibrium system the time average is the desired result. (Fluctuations? Yes, but later.) As discussed previously, for ergodic systems the ensemble average is the same as the time average.

Equilibrium

What does equilibrium mean exactly?

\[\frac{\partial \rho}{\partial t} = 0\]

or equivalently,

\[\{ H, \rho \} =0\]

Obviously, one possible solution is

\[\rho \propto e^{-\beta H}\]
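The stationarity of \(\rho \propto e^{-\beta H}\) can be checked numerically. Below is a minimal sketch (quantum version; the randomly generated 4-level Hermitian matrix and the value of \(\beta\) are my own illustrative choices) verifying that \([\hat H, \hat\rho] = 0\), so the von Neumann equation gives \(\partial_t \hat\rho = 0\):

```python
import numpy as np

rng = np.random.default_rng(0)
# A random Hermitian matrix standing in for a (hypothetical) 4-level Hamiltonian
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2

# Build rho ~ exp(-beta * H) via the eigendecomposition of H, then normalize
beta = 1.3
w, V = np.linalg.eigh(H)
rho = V @ np.diag(np.exp(-beta * w)) @ V.conj().T
rho /= np.trace(rho).real

# Equilibrium: the commutator [H, rho] vanishes, so d(rho)/dt = 0
comm = H @ rho - rho @ H
print(np.max(np.abs(comm)))  # zero up to round-off
```

Any density matrix that is a function of \(H\) alone would pass the same check; \(e^{-\beta H}\) is the particular solution singled out later by the canonical ensemble.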

Ensembles, Systems

Table 1 Ensembles and systems

System                 Ensemble          Geometry in phase space                   Key variables
Isolated               Microcanonical    Shell; \(\rho = c'\delta(E-H)\)           Energy \(E\)
Weakly interacting     Canonical         \(\rho \propto e^{-\beta H}\)             Temperature \(T\)
Exchanging particles   Grand canonical   \(\rho \propto e^{-\beta (H - \mu N)}\)   \(T\), chemical potential \(\mu\)

Isolated System - Micro-canonical Ensemble

UML of micro-canonical
\[\rho(p, q; 0) = \delta(H(p, q) - E)\]

That is, the system stays on the energy shell in phase space. Also, for an equilibrium system we have

\[H(p, q; t) = E\]

Hint

Is it true that ensemble average is equal to the actual value of the system?

Not for all classical systems. (But for ALL quantum systems? Not sure.)

Ergodic Hypothesis Revisited

For ergodic systems, ensemble average is equal to time average.

Important

What if the state of the system moves with changing speed on the shell? Then how can we say the system is ergodic and use the ensemble average as the time average?

Microcanonical ensembles are used for isolated systems.

\[\rho \propto \frac{1}{\text{No. of states on the energy surface}} \equiv \frac{1}{\Omega (E)}\]

To calculate the entropy

\[S = k_B \ln \Omega\]
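As a sketch of how \(S = k_B \ln \Omega\) is used in practice, consider a toy model (my own illustrative choice, not from the text): \(N\) two-level systems with exactly \(n\) excited, so every microstate on the shell \(E = n\epsilon\) is counted by a binomial coefficient:

```python
import math

# Toy microcanonical count: N two-level "spins" with exactly n excited,
# so every microstate on the shell E = n*eps is equally likely.
N, n = 20, 5
Omega = math.comb(N, n)          # number of microstates on the energy shell
k_B = 1.380649e-23               # Boltzmann constant, J/K
S = k_B * math.log(Omega)
print(Omega, S)
```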

Canonical Ensemble

Canonical Ensemble

For a system weakly interacting with a heat bath, the total energy is

\[E_T = E_S + E_R + E _{S,R}\]

where the interaction energy \(E_{S,R}\) is very small compared to both \(E_S\) and \(E_R\). So we can drop the interaction term,

\[E_T = E_S + E _R\]

A simple and intuitive derivation of the probability density is to use the theory of independent events.

  1. \(\rho_T d\Omega_T\): probability of states in phase space volume \(d\Omega_T\);
  2. \(\rho_S d \Omega_S\): probability of states in phase space volume \(d\Omega_S\);
  3. \(\rho_R d \Omega_R\): probability of states in phase space volume \(d\Omega_R\);

We assumed weak interaction between the system and the reservoir, so (approximately) the probabilities in the system phase space and in the reservoir phase space are independent of each other,

\[\rho _ T d\Omega_T = \rho _S d\Omega_S \cdot \rho _R d \Omega_R .\]

Since there is no particle exchange between the two systems, the overall phase space volume element is the product of the system and reservoir volume elements,

\[d\Omega_T = d\Omega _S \cdot d\Omega_R .\]

Dividing the first relation by the second gives the relation between the three probability densities,

\[\rho_T = \rho_R \rho_S .\]

Take the logarithm,

\[\ln\rho_T = \ln\rho_R + \ln\rho_S .\]

Key: \(\rho\) is a function of the energy \(E\), and \(\ln \rho\) is additive just as the energy is extensive. The only possible form of \(\ln \rho\) is therefore linear in \(E\).

Finally we reach the destination,

\[\ln \rho = - \alpha - \beta E\]

i.e.,

\[\rho = e^{-\alpha} e^{-\beta E}\]

which is called the canonical distribution.
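The constant \(\alpha\) is fixed by normalization: \(e^{\alpha} = Z = \sum e^{-\beta E}\). A minimal numerical sketch with hypothetical energy levels (the values of \(E\) and \(\beta\) below are arbitrary choices):

```python
import numpy as np

# Hypothetical discrete energy levels; e^{-alpha} = 1/Z is fixed by normalization
E = np.array([0.0, 1.0, 2.0, 3.0])
beta = 0.5

Z = np.sum(np.exp(-beta * E))     # partition function, i.e. e^{alpha} = Z
p = np.exp(-beta * E) / Z         # canonical probabilities
print(p, p.sum())                 # probabilities sum to 1, decreasing with E
```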

Warning

This is not a rigorous derivation. Read R. K. Su’s book for a more detailed and rigorous derivation.

Grand Canonical Ensemble

Systems with changing particle number are described by grand canonical ensemble.

Grand Canonical Ensemble

Note that the partition function of the grand canonical ensemble really is

\[\mathcal{Z} = \sum _ n \sum_N e^{-\beta (E_n - \mu N)} = \sum_N \left( \sum _ n e^{- \beta E _ n} \right)\left(e^{\beta\mu}\right)^N = \sum _ N Z_N z^N\]

where \(Z_N\) is the canonical partition function for \(N\) particles and \(z = e^{\beta\mu}\) is the fugacity.
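As a sanity check of the grand canonical construction, here is a sketch for the simplest possible case, a single fermionic orbital (my own illustrative choice, with arbitrary parameter values): the sum over \(N\) has only two terms, and the average occupation reduces to the Fermi–Dirac form:

```python
import math

# Single fermionic orbital of energy eps: particle number N is 0 or 1,
# so the grand canonical sum has only two terms (all values hypothetical).
beta, mu, eps = 2.0, 0.3, 1.0

Xi = sum(math.exp(-beta * (N * eps - mu * N)) for N in (0, 1))
n_avg = sum(N * math.exp(-beta * (N * eps - mu * N)) for N in (0, 1)) / Xi

# Compare against the Fermi-Dirac occupation 1 / (e^{beta(eps-mu)} + 1)
fermi = 1.0 / (math.exp(beta * (eps - mu)) + 1.0)
print(n_avg, fermi)  # the two expressions agree
```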

Identical Particles

If a system consists of \(N\) identical particles, with \(n_i^\xi\) particles in single-particle state \(i\) when the system is in state \(\xi\), the energy of the system in state \(\xi\) is

\[E^\xi = \sum_i \epsilon_i n_i^\xi\]

where the summation runs over all possible single-particle states.

The value of the energy is given by the ensemble average

\[\avg{E} = \frac{\sum _\xi e^{-\beta E^\xi} E^\xi}{\sum _\xi e^{-\beta E^\xi}}\]

where the summation over \(\xi\) runs over all states in the ensemble.
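The ensemble average above can also be obtained from the partition function as \(\avg{E} = -\partial \ln Z / \partial \beta\). A quick numerical check with a hypothetical set of system energies \(E^\xi\) (all values below are arbitrary choices):

```python
import numpy as np

# Hypothetical list of system energies E^xi and an arbitrary beta
E = np.array([0.0, 0.7, 1.5, 2.2])
beta = 1.1

w = np.exp(-beta * E)
E_avg = np.sum(w * E) / np.sum(w)        # direct ensemble average

# Same average from the partition function: <E> = -d ln Z / d beta
h = 1e-6
lnZ = lambda b: np.log(np.sum(np.exp(-b * E)))
E_from_Z = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)
print(E_avg, E_from_Z)  # the two agree
```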

Hint

How do we calculate the average number of particles in a single-particle state \(i\)?

Three Ensembles Cont’d

The three ensembles are the same when the particle number \(N\) is really large, \(N\rightarrow \infty\) .

The reason is that

  1. When \(N\) becomes really large, the interaction between the system and the reservoir becomes negligible. The relative width of the (Gaussian) energy distribution is proportional to \(1/\sqrt{N}\).
  2. \(dE_S+dE_R=dE\) and we know \(dE=0\), so when the energy of the system increases, that of the reservoir drops. Professor Kenkre has a way to prove that the energy of the system is peaked at some value. However, I didn’t get it.

Warning

Ask him why the value is peaked.

Most Probable Distribution

Quite different from Gibbs’ ensemble theory, Boltzmann’s theory is about the most probable distribution.

  1. Classical distinguishable particles \(a_l = w_l e^{-\alpha -\beta e_l}\);
  2. Bosons \(a_l = w_l \frac{1}{e^{\alpha+\beta e_l} - 1}\);
  3. Fermions \(a_l = w_l \frac{1}{e^{\alpha + \beta e_l} + 1}\).
most probable distribution

This image tells us that the three curves converge when the factor \(\alpha + \beta e_l\) becomes large. Also, Fermions have fewer microstates than classical particles because of the Pauli exclusion principle.

\(\alpha + \beta e_l\) being large can have several different physical meanings.

  1. Temperature low;
  2. Energy high;
  3. Chemical coupling coefficient \(\alpha\) large.

We have several equivalent conditions for the three distributions to be the same.

\[\alpha + \beta e_l \gg 1 \Leftrightarrow \alpha \gg 1 \Leftrightarrow 1/\exp(\alpha + \beta e_l) \ll 1 \Leftrightarrow a_l / w_l \ll 1\]

where the last statement is quite interesting: \(a_l/w_l \ll 1\) means we have many more states than particles, so quantum effects become very small.
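These equivalence conditions are easy to check numerically. The sketch below evaluates \(a_l/w_l\) for the three distributions at increasing \(x = \alpha + \beta e_l\) (the values of \(x\) are arbitrary choices):

```python
import math

def occupations(x):
    """Average occupation a_l/w_l at x = alpha + beta*e_l for each statistics."""
    mb = math.exp(-x)                  # classical (Maxwell-Boltzmann)
    be = 1.0 / (math.exp(x) - 1.0)     # Bose-Einstein
    fd = 1.0 / (math.exp(x) + 1.0)     # Fermi-Dirac
    return mb, be, fd

for x in (0.5, 2.0, 8.0):
    print(x, occupations(x))
# For every x: fd < mb < be; as x grows, all three collapse onto exp(-x).
```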

Warning

One should be careful: even when the above conditions are satisfied, the number of microstates for classical particles is very different from that for quantum particles,

\[\Omega_{B.E.} \approx \Omega_{F.D.} \approx \Omega_{M.B.}/N! .\]

This will eventually have effects on the entropy.
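The size of this effect can be estimated: the \(N!\) factor shifts the entropy by \(k_B \ln N!\), which by Stirling's formula is extensive. A quick numerical illustration (with \(N = 1000\) as an arbitrary choice):

```python
import math

# The factor N! shifts the entropy by k_B ln N!, which is extensive:
# by Stirling's formula, ln N! ~ N ln N - N.
N = 1000
k_B = 1.380649e-23                            # Boltzmann constant, J/K
dS = k_B * math.lgamma(N + 1)                 # k_B ln N!, exact
dS_stirling = k_B * (N * math.log(N) - N)     # Stirling approximation
print(dS, dS_stirling)
```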

Recall that the thermal wavelength \(\lambda_t\) is a useful quantity for analyzing quantum effects. At high temperature the thermal wavelength becomes small and the system is more classical.

Hint

  1. Massive particles \(\lambda_t = \frac{h}{p} = \frac{h}{\sqrt{2m K}} = \frac{h}{\sqrt{ 2\pi m k T }}\)
  2. Massless particles \(\lambda_t = \frac{c h}{2\pi^{1/3} k T}\)
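A quick numerical sketch of the massive-particle formula, using helium-4 atoms as an illustrative choice of particle (the temperatures are arbitrary):

```python
import math

h = 6.62607015e-34    # Planck constant, J s
k = 1.380649e-23      # Boltzmann constant, J/K
m = 6.6464731e-27     # mass of a helium-4 atom, kg (illustrative choice)

def thermal_wavelength(T):
    """lambda_t = h / sqrt(2 pi m k T) for a massive particle."""
    return h / math.sqrt(2 * math.pi * m * k * T)

for T in (300.0, 1.0):
    print(T, thermal_wavelength(T))
# The wavelength grows as T drops, so quantum effects dominate at low T.
```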

However, even at high temperature the three microstate counts are going to be very different. This is because the thermal wavelength considers the motion of the particles, and high temperature means large momentum, hence classical behavior; the number of microstates, on the other hand, comes from a discussion of the occupation of states.

Important

What’s the difference between the ensemble probability density and the most probable distribution? What makes the +1 or -1 in the occupation number?

The most probable distribution is the method used in Boltzmann’s theory, while the ensemble probability density belongs to ensemble theory. In ensemble theory, all copies (states) in a canonical ensemble appear with a probability density \(\exp(-\beta E)\), and all information about the type of particles is contained in the Hamiltonian.

Unlike ensemble theory, Boltzmann’s theory deals with the number of microstates, which is affected by the type of particles. Suppose we have \(N\) particles in a system occupying energy levels \(e_l\), with \(a_l\) particles on level \(l\). Note that each energy level has a degeneracy \(w_l\).

Boltzmann theory vs Gibbs theory


For Boltzmann’s theory, we need to

  1. Calculate the number of micro states of the system;
  2. Calculate most probable distribution using Lagrange multipliers;
  3. Calculate the average of an observable using the most probable distribution.

Calculation of number of micro states

Calculation of the number of micro states requires some basic knowledge of different types of particles.

For classical particles, we can distinguish each particle from the others, and there is no restriction on the number of particles per state. Each of the \(a_l\) particles on level \(l\) can sit in any of the \(w_l\) degenerate states, giving \(w_l^{a_l}\) possibilities per level. Since the particles are distinguishable, there are \(N!\) ways of ordering them, but permutations of particles within the same energy level do not produce a new distribution, which removes a factor \(\Pi_l a_l!\).

Finally we have

\[\Omega _{M.B.} = \frac{N!}{\Pi_l a_l !} \Pi_l w_l^{a_l}\]

as the number of possible states.

With similar techniques, which are explained explicitly in Wang’s book, we get the number of microstates for the other two types of particles.

\[\Omega _{B.E.} = \Pi_l \frac{(a_l+w_l-1)!}{a_l!(w_l -1)!}\]

is the number of micro states for a Boson system with a \(\{a_l\}\) distribution.

\[\Omega _ {F.D.} = \Pi _ {l} C_{w_l}^{a_l} = \Pi _ l \frac{w_l!}{a_l!(w_l - a_l)!}\]

is the number of microstates for a Fermion system with a distribution \(\{a_l\}\). We get this because we just need to pick out \(a_l\) states for the \(a_l\) particles on each energy level.
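All three counting formulas can be verified by brute-force enumeration on a tiny example (the degeneracies and occupations below are my own arbitrary choices):

```python
from itertools import product, combinations, combinations_with_replacement
from math import comb, factorial, prod

w = [2, 3]   # degeneracies w_l (tiny example chosen by hand)
a = [1, 2]   # occupation numbers a_l
N = sum(a)

# M.B.: distinguishable particles, any state; count assignments whose
# level occupancies match {a_l} by brute force.
states = [(l, s) for l, wl in enumerate(w) for s in range(wl)]
mb_brute = sum(
    1 for assign in product(states, repeat=N)
    if all(sum(1 for lev, _ in assign if lev == l) == a[l] for l in range(len(w)))
)
mb_formula = factorial(N) // prod(factorial(x) for x in a) * prod(
    wl ** al for wl, al in zip(w, a))

# B.E.: indistinguishable, unlimited occupancy -> multisets of states per level
be_brute = prod(
    sum(1 for _ in combinations_with_replacement(range(wl), al))
    for wl, al in zip(w, a))
be_formula = prod(comb(al + wl - 1, al) for wl, al in zip(w, a))

# F.D.: indistinguishable, at most one particle per state -> subsets per level
fd_brute = prod(sum(1 for _ in combinations(range(wl), al)) for wl, al in zip(w, a))
fd_formula = prod(comb(wl, al) for wl, al in zip(w, a))

print(mb_brute, mb_formula, be_brute, be_formula, fd_brute, fd_formula)
```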

The Exact Partition Function

DoS and partition function have already been discussed in previous notes.

Is There A Gap Between Fermion and Boson?

Suppose we know only the M.B. distribution. Applying it to harmonic oscillators, we find that

\[\avg{H} = (\bar n + 1/2)\hbar \omega\]

where \(\bar n\) is given by

\[\bar n = \frac{1}{e^{\beta \hbar \omega} - 1}\]

which is exactly the Bose–Einstein distribution with zero chemical potential, clearly indicating a new type of Boson-like particle.
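This identification can be checked numerically: the canonical average energy of a harmonic oscillator, computed directly from the Boltzmann factors, reproduces \((\bar n + 1/2)\hbar \omega\). A sketch in units where \(\hbar\omega = 1\) (the value of \(\beta\) is an arbitrary choice):

```python
import math

# Harmonic oscillator levels E_n = (n + 1/2) * hbar*omega, with hbar*omega = 1
beta, hw = 0.7, 1.0
nmax = 500   # truncation; the Boltzmann factors decay geometrically

Z = sum(math.exp(-beta * (n + 0.5) * hw) for n in range(nmax))
E_avg = sum((n + 0.5) * hw * math.exp(-beta * (n + 0.5) * hw)
            for n in range(nmax)) / Z

n_bar = 1.0 / (math.exp(beta * hw) - 1.0)
print(E_avg, (n_bar + 0.5) * hw)  # the two agree
```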

So classical statistical mechanics and quantum statistical mechanics are closely connected, not only in the microstate counts but also in a more fundamental way.

Hint

Note that this is possible because the energy difference between adjacent energy levels is the same for all levels. Adding one imagined particle of energy \(\hbar\omega\) is equivalent to exciting the oscillator to the next level, so we can treat the imagined particles as Bosons.


© Copyright 2017, Lei Ma.