One way of starting somewhere in between, instead of deriving macroscopic quantities from microscopic equations, is to coarse-grain the system and use the **Master Equation**.

\[\frac{dP_\xi}{d t} = \sum_{\mu} \left( T_{\xi\mu} P_\mu - T_{\mu\xi} P_\xi \right)\]

which means the rate of change of \(P_\xi(t)\) is determined by the gain and loss of probability.

Hint

**What’s the problem of this master equation?**

It’s linear. And it comes out of nowhere.

One way of deriving the master equation is to start from the Chapman-Kolmogorov equation, which is

\[P_\xi(t) = \sum_\mu Q_{\xi\mu} P_\mu(t-\tau) .\]

This equation describes a discrete random-walk process, i.e., a Markov process. In other words, the information about the motion is lost with every step. In this case, all information is lost after each time interval \(\tau\).

The form of this equation reminds us of the time derivative,

\[\partial_t P_\xi(t) = \lim_{\tau\rightarrow 0} \frac{P_\xi(t) - P_\xi(t-\tau)}{\tau} .\]

To achieve this, notice that

\[\sum_\mu Q_{\mu\xi} = 1.\]

Important

It’s very important to see this result clearly. Here we write this identity with the understanding that the system must jump out of state \(\xi\), because the summation does not include the case \(\mu=\xi\).

Then we can rewrite the Chapman-Kolmogorov equation,

\[P_\xi(t) - P_\xi(t-\tau) = \sum_\mu Q_{\xi\mu}P_\mu(t-\tau) - \sum_\mu Q_{\mu\xi} P_\xi(t-\tau) .\]

in which we used

\[P_\xi(t-\tau) = \sum_\mu Q_{\mu\xi} P_\xi(t-\tau) .\]

Then it seems we can simply divide each side by \(\tau\) and take the limit \(\tau\rightarrow 0\).

\[\lim_{\tau\rightarrow 0} \frac{P_\xi(t) - P_\xi(t-\tau)}{\tau} = \lim_{\tau\rightarrow 0} \frac{ \sum_\mu Q_{\xi\mu}P_\mu(t-\tau) - \sum_\mu Q_{\mu\xi} P_\xi(t-\tau) }{\tau}\]

Watch out: the right-hand side usually diverges. One way out is to assume that

\[Q_{\xi\mu} = R_{\xi\mu}\tau + O(\tau^n)\]

with \(n > 1\), so that the numerator is itself of order \(\tau\). This assumption introduces a weird property to the system.

Warning

By saying a system obeys the Chapman-Kolmogorov equation we admit that the system loses information after a time interval \(\tau\). Now we take the limit \(\tau\rightarrow 0\), which means the system has no memory of the past at all! How is this possible?

Or can we assume that \(P(t-\tau)\propto \tau\)?

Anyway, we reach our destination:

\[\partial_t P_\xi(t) = \sum_\mu \left( R_{\xi\mu}P_\mu(t) - R_{\mu\xi}P_\xi(t) \right) .\]
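A useful sanity check on this result: the gain/loss structure of the right-hand side conserves total probability, since every term that leaves one state enters another. Here is a minimal numerical sketch of this, with a hypothetical random rate matrix and a simple forward-Euler integrator (both chosen for illustration, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Hypothetical rates R[xi, mu] >= 0 for the jump mu -> xi (off-diagonal only).
R = rng.uniform(0.1, 1.0, size=(n, n))
np.fill_diagonal(R, 0.0)

def master_rhs(P):
    # dP_xi/dt = sum_mu ( R[xi, mu] P[mu] - R[mu, xi] P[xi] )
    gain = R @ P                  # probability flowing into each state
    loss = R.sum(axis=0) * P      # probability flowing out of each state
    return gain - loss

P = np.array([1.0, 0.0, 0.0, 0.0])   # start entirely in state 0
dt, steps = 1e-3, 5000
for _ in range(steps):
    P = P + dt * master_rhs(P)       # forward-Euler step
```

Because the gain and loss terms are exact transfers between states, `P.sum()` stays at 1 up to floating-point error for any choice of rates.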

Derivation of Master Equation “Rigorously”

The derivation of the master equation can be made more rigorous. [1] This note is a rephrasing of Reichl’s chapter 6 B. Also refer to Irwin Oppenheim and Kurt E. Shuler’s paper. [2]

To do this we need to use conditional probability,

\[P_1(y_1,t_1) P_{1|1}(y_1,t_1|y_2,t_2) = P_2(y_1,t_1;y_2,t_2)\]

which means the joint probability density that variable Y has value \(y_1\) at time \(t_1\) and \(y_2\) at time \(t_2\) is given by the probability density that Y has value \(y_1\) at time \(t_1\), times the conditional probability density that it has value \(y_2\) at time \(t_2\) given that it had value \(y_1\) at time \(t_1\).

Assume that the probability density at \(t_n\) only depends on that at \(t_{n-1}\), we have

\[P_{n-1|1}(y_1,t_1;\cdots;y_{n-1},t_{n-1}|y_n,t_n) = P_{1|1}(y_{n-1},t_{n-1}|y_n,t_n) ,\]

if we order the variables so that \(t_1<t_2< \cdots <t_n\).

**This assumption means that the system is chaotic enough.** This is called **Markov process**.

Like the transition coefficients \(T_{\xi\mu}\) we defined previously, this \(P_{1|1}(y_{n-1},t_{n-1}|y_n,t_n)\) is the **transition probability**.

To find the time derivative of \(P_1(y_2,t_2)\), we write down its time dependence,

\[P_2(y_1,t_1;y_2,t_2) = P_1(y_1,t_1) P_{1|1}(y_1,t_1|y_2,t_2) .\]

We integrate over \(y_1\),

\[P_1(y_2,t_2) = \int P_1(y_1,t_1)P_{1|1}(y_1,t_1|y_2,t_2)dy_1\]

As we can write \(t_2=t_1+\tau\),

\[P_1(y_2,t_1+\tau) = \int P_1(y_1,t_1) P_{1|1}(y_1,t_1|y_2,t_1+\tau) dy_1\]

Next we can construct time derivatives of these quantities.

\[\partial_{t_1} P_1(y_2,t_1) = \int \lim_{\tau\rightarrow 0} \frac{ P_1(y_1,t_1) P_{1|1}(y_1,t_1|y_2,t_1+\tau) - P_1(y_1,t_1) P_{1|1}(y_1,t_1|y_2,t_1) }{\tau} dy_1\]

The next step is the messy one. Expanding the right-hand side in a Taylor series, which one can find in Reichl’s book [1], we get the expression for this time derivative,

\[\partial_{t} P_1(y_2,t) = \int dy_1 \left( W(y_1,y_2)P_1(y_1,t) - W(y_2,y_1)P_1(y_2,t) \right) .\]

This is the master equation.

The reason that \(W(y_1,y_2)\) is a transition rate is that it represents “the probability density per unit time that the system changes from state \(y_1\) to \(y_2\) in the time interval \(t_1\rightarrow t_1 +\tau\)”. [1]

Important

Now we see that the Markov property is the hypothesis we need to obtain the master equation. However, do not simply identify the master equation with a Markov process; there are steps in the derivation that remain unclear.

Read Irwin Oppenheim and Kurt E. Shuler’s paper for more details. [2]

Note

We can recover the Chapman-Kolmogorov equation

\[P_{1|1}(y_1,t_1|y_3,t_3) = \int P_{1|1}(y_1,t_1|y_2,t_2)P_{1|1}(y_2,t_2|y_3,t_3)dy_2\]

by comparing the following three equations.

\[P_2(y_1,t_1;y_3,t_3) = \int P_3(y_1,t_1;y_2,t_2;y_3,t_3) dy_2\]

\[P_3(y_1,t_1;y_2,t_2;y_3,t_3) = P_1(y_1,t_1) P_{1|1}(y_1,t_1|y_2,t_2) P_{1|1}(y_2,t_2|y_3,t_3)\]

\[\frac{P_2(y_1,t_1;y_3,t_3)}{P_1(y_1,t_1)} = P_{1|1}(y_1,t_1|y_3,t_3)\]
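For a discrete state space, the integral over the intermediate value \(y_2\) becomes a sum, and the Chapman-Kolmogorov equation is simply matrix multiplication of one-step transition matrices. A minimal sketch with a hypothetical \(2\times 2\) stochastic matrix (the numbers are illustrative only):

```python
import numpy as np

# Hypothetical one-step transition matrix Q[mu, xi]: probability of the jump
# xi -> mu, with columns normalized so that sum_mu Q[mu, xi] = 1.
Q = np.array([[0.7, 0.2],
              [0.3, 0.8]])

# Two-step transition probabilities: the matrix product sums over the
# intermediate state, mirroring the integral over y_2 above.
Q2 = Q @ Q
```

The product `Q2` is again a stochastic matrix, as the Chapman-Kolmogorov equation requires.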

The master equation is

\[\partial_t P_\xi(t) = \sum_\mu \left( R_{\xi\mu}P_\mu(t) - R_{\mu\xi}P_\xi(t) \right) .\]

For a degenerate two-state system, where \(R_{12}=R_{21}=R\), the master equations are simply

\[\partial_t P_1 = R (P_2 - P_1)\]

and

\[\partial_t P_2 = R (P_1 - P_2) .\]

To solve the problem, we can choose “canonical coordinates”,

\[P_+ = P_1+P_2\]

and

\[P_- = P_1 - P_2 .\]

By adding the master equations, we have

\[\partial_t P_+ = 0\]

and

\[\partial_t P_- = -2R P_- .\]

Obviously, the solutions to these equations are

\[P_+(t) = P_+(0), \qquad P_-(t) = P_-(0)e^{-2Rt} .\]
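These analytic solutions are easy to check numerically. The sketch below integrates the two master equations with a forward-Euler scheme (the rate \(R\) and the initial condition are arbitrary choices for illustration) and compares against \(P_+(t)=P_+(0)\) and \(P_-(t)=P_-(0)e^{-2Rt}\):

```python
import numpy as np

R = 0.5
P1, P2 = 1.0, 0.0          # start entirely in state 1, so P_-(0) = 1
dt, T = 1e-4, 4.0
for _ in range(int(T / dt)):
    dP1 = R * (P2 - P1) * dt
    dP2 = R * (P1 - P2) * dt
    P1, P2 = P1 + dP1, P2 + dP2

P_plus, P_minus = P1 + P2, P1 - P2
# Analytic prediction: P_+ is conserved, P_- decays as e^{-2Rt}.
```

With this step size the Euler result agrees with \(e^{-2RT}\) to a few parts in \(10^{4}\).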

This result shows that whatever state the system was in initially, it will reach equilibrium eventually.

The term \(e^{-2R t}\) describes a decaying, or in other words a relaxation, process.

Hint

In QM, the solution of the von Neumann equation is

\[\hat \rho(t) = e^{-i \hat H t/\hbar}\, \hat\rho(0)\, e^{i \hat H t/\hbar},\]

which is very similar to the solution to the stat mech Liouville equation,

\[P(t) = P(0) e^{-A t},\]

where \(A\) is the matrix

\[A_{\xi\mu} = -R_{\xi\mu} \quad (\xi\neq\mu), \qquad A_{\xi\xi} = \sum_\mu R_{\mu\xi} .\]

The difference here is the \(i\) in the exponential: think about decay versus rotation.
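The matrix solution \(P(t) = e^{-At}P(0)\) can be evaluated directly by diagonalizing \(A\). A minimal sketch for the symmetric two-state system above, assuming \(A\) is diagonalizable (true here, and a hypothetical rate \(R=0.5\) is used for illustration):

```python
import numpy as np

R = 0.5
# A as defined above for the symmetric two-state system:
# off-diagonal entries -R, diagonal entries sum of outgoing rates.
A = np.array([[ R, -R],
              [-R,  R]])

def propagate(P0, t):
    """P(t) = exp(-A t) P(0), computed via eigendecomposition of A."""
    w, V = np.linalg.eig(A)
    return V @ (np.exp(-w * t) * np.linalg.solve(V, P0))

P0 = np.array([1.0, 0.0])
Pt = propagate(P0, 3.0)
```

The eigenvalues of this \(A\) are \(0\) and \(2R\), so the eigendecomposition reproduces exactly the conserved \(P_+\) and the \(e^{-2Rt}\) decay of \(P_-\) found earlier.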

In such a system, the transfer matrix is

\[\begin{split}A = \begin{pmatrix}A_{11} & A_{12} \\ A_{21} & A_{22}\end{pmatrix}\end{split}\]

Then the master equation for this kind of systems is

\[\begin{split}\partial_t \begin{pmatrix}P_1 \\ P_2 \end{pmatrix} = \begin{pmatrix}A_{11} & A_{12}\\ A_{21} & A_{22} \end{pmatrix} \begin{pmatrix}P_1 \\ P_2 \end{pmatrix}\end{split}\]

We will see similar exponentially decaying or growing behavior as in the degenerate system. The difference is the equilibrium point.

\[\partial_t P_1 = R_{12} P_2 - R_{21} P_1\]

shows us that at equilibrium when \(\partial_t P_1 = 0\),

\[\frac{R_{12}}{R_{21}} = \frac{P_1(\infty)}{P_2(\infty)}\]

which means the coefficients define the equilibrium point.
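This detailed-balance condition is easy to verify numerically. The sketch below integrates the asymmetric two-state master equation (with hypothetical rates \(R_{12}=0.2\), \(R_{21}=0.6\) chosen only for illustration) until it relaxes, then checks the ratio:

```python
R12, R21 = 0.2, 0.6        # hypothetical asymmetric transition rates
P1, P2 = 0.5, 0.5          # arbitrary initial condition
dt = 1e-3
for _ in range(200_000):   # integrate well past the relaxation time 1/(R12+R21)
    dP1 = (R12 * P2 - R21 * P1) * dt
    P1, P2 = P1 + dP1, P2 - dP1   # whatever leaves state 1 enters state 2

# At equilibrium the gain and loss terms balance: R12 / R21 = P1 / P2.
```

Here the stationary state is \(P_1=0.25\), \(P_2=0.75\), so the ratio \(P_1/P_2 = 1/3\) matches \(R_{12}/R_{21}\) regardless of the initial condition.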

© Copyright 2017, Lei Ma.