Solving Master Equations

The master equation, again, is

\[\frac{d P_m}{dt} = \sum_n R_{mn} P_n - \sum_n R_{nm} P_m\]

which can be written in a simpler form,

\[\frac{d P_m}{dt} + \sum_n A_{mn}P_n = 0 .\]

To figure out the elements of this new matrix \(\mathbf A\), we need to see how the two terms on the RHS of the master equation combine. Note that the coefficient \(\sum_n R_{nm}\) of the loss term depends on \(m\), so that term is a diagonal matrix rather than a multiple of the identity:

\[\begin{split}\partial_t \begin{pmatrix} P_1 \\ P_2 \\ \vdots \\ P_N \end{pmatrix} = \begin{pmatrix} 0 & R_{12} & \cdots & R_{1N} \\ R_{21} & 0 & \cdots & R_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ R_{N1} & R_{N2} & \cdots & 0 \end{pmatrix} \begin{pmatrix} P_1 \\ P_2 \\ \vdots \\ P_N \end{pmatrix} - \begin{pmatrix} \sum_n R_{n1} & 0 & \cdots & 0 \\ 0 & \sum_n R_{n2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sum_n R_{nN} \end{pmatrix} \begin{pmatrix} P_1 \\ P_2 \\ \vdots \\ P_N \end{pmatrix}\end{split}\]

Since each diagonal entry \(A_{mm} = \sum_n R_{nm}\) is just a real number, we can combine the two matrices on the RHS:

\[\begin{split}\partial_t \begin{pmatrix} P_1 \\ P_2 \\ \vdots \\ P_N \end{pmatrix} + \begin{pmatrix} A_{11} & -R_{12} & \cdots & -R_{1N} \\ -R_{21} & A_{22} & \cdots & -R_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ -R_{N1} & -R_{N2} & \cdots & A_{NN} \end{pmatrix} \begin{pmatrix} P_1 \\ P_2 \\ \vdots \\ P_N \end{pmatrix} = 0 .\end{split}\]

We can define the \(\mathbf A\) matrix now.

\[\begin{split}A_{mn} = \begin{cases} -R_{mn} &\quad n \neq m \\ \sum_{n'} R_{n'm} & \quad n=m \end{cases}\end{split}\]

The \(\mathbf A\) matrix is defined this way so that the master equation becomes

\[\partial_t \mathbf P + \mathbf A \mathbf P = 0 .\]
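To make the construction concrete, here is a minimal Python sketch (my own illustration; the function name build_A and the random 4-state example are assumptions, not from the notes) that builds \(\mathbf A\) from a given rate matrix and checks that its columns sum to zero, which is the statement of probability conservation:

    import numpy as np

    def build_A(R):
        """Build the matrix A of dP/dt + A P = 0 from a rate matrix R.

        R[m, n] is the transfer rate from state n to state m; the
        diagonal of R is assumed to be zero.
        """
        R = np.asarray(R, dtype=float)
        A = -R
        # A_mm = sum_n R_nm: the total rate of leaving state m,
        # i.e. the column sum of R.
        np.fill_diagonal(A, R.sum(axis=0))
        return A

    rng = np.random.default_rng(0)
    R = rng.random((4, 4))
    np.fill_diagonal(R, 0.0)      # no self-transfer
    A = build_A(R)
    print(A.sum(axis=0))          # ~ [0 0 0 0]: total probability conserved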

To solve such a matrix equation, we need to diagonalize \(\mathbf A\); in the diagonal frame each mode has the simple solution

\[\mathbf P_{\text{diag}}(t) = e^{- \mathbf A_{\text{diag}} t}\, \mathbf P_{\text{diag}}(0) .\]

The most general way is to diagonalize it using an invertible matrix \(\mathbf S\), i.e.,

\[\partial_t \mathbf S^{-1} \mathbf P + \mathbf S^{-1}\mathbf A \mathbf S\, \mathbf S^{-1} \mathbf P = 0\]

in which we’ll define \(\mathbf A_{\text{diag}} = \mathbf S^{-1} \mathbf A \mathbf S\) and \(\mathbf P_{\text{diag}} = \mathbf S^{-1} \mathbf P\). For simplicity, we won’t write down the indices from now on.
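For a generic system without any special symmetry, this diagonalization can be carried out numerically. Below is a minimal sketch, assuming \(\mathbf A\) is diagonalizable (see the warning below); the function name evolve is my own:

    import numpy as np

    def evolve(A, P0, t):
        """Solve dP/dt + A P = 0 by diagonalizing A = S Lambda S^{-1}.

        Assumes A is diagonalizable; a defective A would need the full
        matrix exponential (e.g. scipy.linalg.expm) instead.
        """
        lam, S = np.linalg.eig(A)             # columns of S are eigenvectors
        P0_diag = np.linalg.solve(S, P0)      # P_diag(0) = S^{-1} P(0)
        Pt_diag = np.exp(-lam * t) * P0_diag  # each mode decays independently
        return (S @ Pt_diag).real             # back to P(t) = S P_diag(t)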

Warning

Is there a mechanism that ensures such an \(\mathbf S\) exists, i.e., that \(\mathbf A\) is diagonalizable? If \(\mathbf A\) is defective, none of this can be done. Do all physical systems have diagonalizable \(\mathbf A\)?

Hint

Notice that the form of the master equation after this transformation is similar to the Schrödinger equation of quantum mechanics,

\[i\hbar \frac{\partial }{\partial t} \ket{\psi} = \hat H \ket{\psi}.\]

This form is also similar to the Liouville equation,

\[i\partial_t \rho^N = \hat L^N \rho^N .\]

Solving Master Equations: Fourier Transform

The Fourier transform is a quick and efficient way of diagonalizing the \(\mathbf A\) matrix.

We consider the case where the coarse-grained system has translational symmetry; then the elements of the \(\mathbf A\) matrix depend only on \(l := m-n\), i.e., \(A_{mn} = A_{m-n}\). In other words,

\[\partial_t P_m + \sum_n A_{m-n} P_n = 0.\]

For a translationally symmetric system, we can use the discrete Fourier transform to find the normal modes. Define the kth mode as

\[P^k = \sum_m P_m e^{ikm} .\]

Multiplying the master equation by \(e^{ikm}\) and summing over m, we get

\[\sum_m \partial_t P_m e^{ikm} + \sum_m \sum_n e^{ikm}A_{m-n} P_n = 0.\]

With the definition of the kth mode above, and inserting \(e^{-ikn} e^{ikn} = 1\), the master equation can be written as

\[\partial_t P^k + \sum_n\sum_m e^{ik(m-n)}A_{m-n} e^{ikn}P_n = 0\]

which “accidentally” diagonalizes the matrix \(\mathbf A\). So define the kth mode of \(\mathbf A\) as \(A^k = \sum_{l} e^{ikl}A_{l}\) with \(l = m-n\); then

\[\partial_t P^k + \sum_{l}e^{ikl}A_{l} \sum_n e^{ikn}P_n = 0\]

is reduced to

\[\partial_t P^k + A^k P^k = 0 \qquad \text{(no summation over } k\text{!)}\]

Note

Note that summing over n and m is equivalent to summing over n and l = m-n.

Finally we have the solution for the normal modes,

\[P^k(t) = P^k(0) e^{-A^k t} .\]

To find the final solution, apply the inverse Fourier transform to the kth modes,

\[P_m(t) = \frac{1}{N} \sum_k P^k(t) e^{-ikm} .\]

Important

Note that due to the Born-von Kármán boundary condition we chose,

\[e^{ikm} = e^{ik(m+N)}\]

which leads to the quantized values

\[k=\frac{2\pi}{N} n, \qquad n \in \mathbb{Z} .\]

The discrete transform becomes an integral when we deal with infinite or continuous systems, which is achieved by the replacement

\[\frac{1}{N}\sum_k \rightarrow \frac{1}{2\pi} \int dk\]

We write the transformation in this identity-like form because the discrete result always appears as \(\frac{1}{N}\sum_k\).

Finite Chain with Nearest Neighbor Interactions

In this simple case, the transfer rate is

\[R_{mn} = F(\delta_{m,n-1}+ \delta_{m,n+1}) .\]

So the master equation becomes

\[\partial_t P_m = F(P_{m+1} + P_{m-1}) -2F P_m .\]

Note

There is no need to use the general definition of the \(\mathbf A\) matrix in this simple case.

Applying the discrete Fourier transform,

\[\partial_t P^k = F\Big( \sum_m e^{ikm} P_{m+1} + \sum_m e^{ikm} P_{m-1} \Big) - 2F P^k = F\big( e^{-ik} + e^{ik} - 2 \big) P^k .\]

Combining terms,

\[\partial_t P^k = -4F\sin^2\frac{k}{2}\, P^k .\]

The solution for the kth mode is then

\[P^k(t) = P^k(0) e^{-4F \sin^2\frac{k}{2} t} .\]

The inverse Fourier transform gives us

\[P_m(t) = \frac{1}{N} \sum_{k} P^k(t) e^{-i km} .\]

The last thing is to find the allowed values of \(k\). Applying the Born-von Kármán boundary condition, we find that \(k\) is quantized,

\[k = \frac{2\pi}{N} n, \qquad n=0,1,2, \cdots, N-1 .\]
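As a quick numerical sketch of this procedure (my own illustration, not from the notes): numpy.fft implements the discrete Fourier transform, and since the dispersion \(4F\sin^2(k/2)\) is even in \(k\), the sign convention of the transform does not matter here.

    import numpy as np

    def chain_evolve(P0, F, t):
        """Evolve the periodic nearest-neighbor chain through Fourier modes.

        Uses P^k(t) = P^k(0) exp(-4 F sin^2(k/2) t) with k = 2 pi n / N.
        """
        N = len(P0)
        k = 2 * np.pi * np.arange(N) / N
        Pk = np.fft.fft(P0)                            # forward transform: P^k(0)
        Pk *= np.exp(-4 * F * np.sin(k / 2) ** 2 * t)  # decay of each mode
        return np.fft.ifft(Pk).real                    # inverse transform: P_m(t)

    P0 = np.zeros(6)
    P0[0] = 1.0                          # walker starts at site 0
    Pt = chain_evolve(P0, F=1.0, t=0.5)
    print(Pt, Pt.sum())                  # probability spreads; total stays 1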

Matrix Form

To make this clear, we’ll redo the problem in matrix form.

First of all, write down the master equation for this problem again,

\[\partial_t P_m = F(P_{m+1} + P_{m-1}) -2F P_m .\]

In matrix form, for a chain of six sites with periodic boundary conditions, we have

\[\begin{split}\partial_t \begin{pmatrix} P_1 \\ P_2 \\ P_3 \\ P_4 \\ P_5 \\ P_6 \end{pmatrix} + F \begin{pmatrix} 2 & -1 & 0 & 0 & 0 & -1 \\ -1 & 2 & -1 & 0 & 0 & 0 \\0 & -1 & 2 & -1 & 0 & 0 \\ 0 & 0 & -1 & 2 & -1 & 0 \\ 0 & 0 & 0 & -1 & 2 & -1 \\ -1 & 0 & 0 & 0 & -1 & 2 \end{pmatrix} \begin{pmatrix} P_1 \\ P_2 \\ P_3 \\ P_4 \\ P_5 \\ P_6 \end{pmatrix} = 0\end{split}\]

Hint

An easy way to get this matrix is to write down the \(\mathbf R\) matrix, which has zero diagonal elements, then construct the \(\mathbf A\) matrix by flipping the sign of every element and placing the sum of the original elements of each column on the corresponding diagonal. Pay attention to the signs.

Another property worth mentioning is the additivity of these matrices: the \(\mathbf A\) matrix of a more complicated system can be decomposed into a sum of \(\mathbf A\) matrices of several simple systems.

The technique used to solve this problem is to diagonalize the \(6\times 6\) matrix, because we can then just write down the solutions.

The way to do this is exactly what we did in the previous part, namely defining the Fourier modes. Then we have, in matrix form,

\[\begin{split}\partial_t \begin{pmatrix} P^{k_1} \\ P^{k_2}\\ P^{k_3}\\ P^{k_4}\\ P^{k_5}\\ P^{k_6} \end{pmatrix} + 4F \begin{pmatrix} \sin^2\frac{k_1}{2} & 0 & 0 & 0 & 0 & 0 \\ 0 & \sin^2\frac{k_2}{2} & 0 & 0 & 0 & 0 \\ 0 & 0 & \sin^2\frac{k_3}{2} & 0 & 0 & 0 \\ 0 & 0 & 0 & \sin^2\frac{k_4}{2} & 0 & 0 \\ 0 & 0 & 0 & 0 & \sin^2\frac{k_5}{2} & 0 \\ 0 & 0 & 0 & 0 & 0 & \sin^2\frac{k_6}{2} \end{pmatrix} \begin{pmatrix} P^{k_1} \\ P^{k_2}\\ P^{k_3}\\ P^{k_4}\\ P^{k_5}\\ P^{k_6} \end{pmatrix} = 0\end{split}\]

Written in this form, we immediately see why diagonalization is what we were eager to do. The solutions are just

\[P^k(t) = P^k(0) e^{-4F\sin^2(k/2) t} .\]

Hint

Recall that the diagonal elements of the diagonalized \(\mathbf A_{\text{diag}}\) matrix are just the eigenvalues of the \(\mathbf A\) matrix, with corresponding eigenvectors. In other words, the way to solve this kind of discrete master equation is to solve the eigenproblem of the \(\mathbf A\) matrix, find the eigenmodes, and finally transform back to \(P_m(t)\).
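The following sketch (my own check, with assumed parameters \(N=6\) and \(F=1\)) verifies numerically that the eigenvalues of the \(6\times 6\) circulant matrix above are exactly \(4F\sin^2(k_n/2)\):

    import numpy as np

    N, F = 6, 1.0
    # The circulant A matrix of the 6-site periodic chain written above:
    # 2F on the diagonal, -F on the (wrapped) sub- and superdiagonals.
    I = np.eye(N)
    A = F * (2 * I - np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1))

    k = 2 * np.pi * np.arange(N) / N
    print(np.sort(np.linalg.eigvalsh(A)))       # numerical eigenvalues
    print(np.sort(4 * F * np.sin(k / 2) ** 2))  # analytic 4F sin^2(k/2): they match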

Infinite Chain with Nearest Neighbor Interactions

We have exactly the same equations as for the finite chain. The difference lies in the boundary condition: \(N\rightarrow \infty\) and

\[\frac{1}{N}\sum_k \rightarrow \frac{1}{2\pi}\int dk .\]

Hint

One way to check this replacement is to verify the normalization of a uniform sum:

\[\frac{1}{N}\sum_{k} 1 = 1 \Leftrightarrow \frac{1}{2\pi}\int_{-\pi}^{\pi} dk = 1 .\]

So, using \(4F\sin^2(k/2) = 2F(1-\cos k)\), and noting that \(P^k(0)\) can be pulled out of the integral only when it is independent of \(k\) (e.g., for a walker starting at site 0, where \(P_m(0) = \delta_{m,0}\) and hence \(P^k(0) = 1\)), the solution is

\[\begin{split}P_m(t) &= \frac{1}{2\pi} \int _{-\pi}^{\pi} P^k(0) e^{-2F(1-\cos{k})t} e^{-ikm} dk \\ &= P^k(0) e^{-2Ft} \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{2Ft\cos{k}} e^{-ikm} dk \\ & = P^k(0) e^{-2Ft} \mathrm{I}_m(2Ft) .\end{split}\]

Important

Vocabulary: the modified Bessel function of the first kind has the integral representation

\[\mathrm{I}_m (z) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-ikm} e^{z\cos{k}} dk .\]

It is related to the ordinary Bessel function by \(\mathrm{I}_m(z) = i^{-m} J_m(iz)\), which is why it is also called the Bessel function of imaginary argument.

Check out more properties of this function in the vocabulary part.
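In SciPy this function is available as scipy.special.iv, or as the exponentially scaled scipy.special.ive, which returns \(e^{-|z|}\mathrm{I}_m(z)\) and is numerically safer for large arguments. A minimal sketch of the infinite-chain solution (my own illustration), assuming the walker starts at site 0:

    import numpy as np
    from scipy.special import ive   # ive(m, z) = exp(-|z|) * iv(m, z)

    F, t = 1.0, 2.0
    m = np.arange(-40, 41)
    # P_m(t) = exp(-2Ft) I_m(2Ft) for the initial condition P_m(0) = delta_m0.
    P = ive(m, 2 * F * t)
    print(P.sum())   # ~ 1: probability is conserved on the (truncated) chain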

2D Lattice

A 2D lattice is shown below (image credit: Jim Belk, public domain).

Fig. 9 An equilateral triangle lattice

Note that we have translational symmetry in both the x and y directions. So the solution is simply the product of the solutions in the two directions,

\[P_{m_x, m_y}(t) = e^{-2(F_x+F_y)t}\, \mathrm{I}_{m_x}(2F_x t)\, \mathrm{I}_{m_y}(2F_y t) .\]
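A minimal numerical check of this product form (my own sketch, with assumed rates \(F_x\) and \(F_y\)):

    import numpy as np
    from scipy.special import ive

    Fx, Fy, t = 1.0, 0.5, 2.0
    m = np.arange(-40, 41)
    Px = ive(m, 2 * Fx * t)    # exp(-2 Fx t) I_m(2 Fx t): 1D solution in x
    Py = ive(m, 2 * Fy * t)    # exp(-2 Fy t) I_m(2 Fy t): 1D solution in y
    P = np.outer(Px, Py)       # separable product over the two directions
    print(P.sum())             # ~ 1 on the (truncated) 2D lattice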

Continuum Limit

Suppose we take the continuum limit of the chain, with lattice spacing \(\epsilon\) between sites. Then

\[\begin{split}\partial_t P_m &= F(P_{m-1} + P_{m+1}) - 2FP_m \\ & = F\big( (P_{m+1}-P_m) -(P_m-P_{m-1}) \big) \\ & = F \epsilon^2\, \frac{ (P_{m+1}-P_m)/\epsilon -(P_m-P_{m-1})/\epsilon }{\epsilon} .\end{split}\]

We can identify the form of a second derivative here, but the RHS becomes zero in the limit \(\epsilon \rightarrow 0\) if \(F\) is a constant.

It makes physical sense that \(F\) should actually increase as the distance between two sites shrinks, and the way to reconcile the vanishing-RHS problem is to assume that

\[F = \frac{D}{\epsilon^2} .\]

Then we have the diffusion equation

\[\frac{\partial P(x,t)}{\partial t} = D\frac{\partial^2 P(x,t)}{\partial x^2} .\]
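As a consistency check (my own sketch, with assumed values of \(D\), \(\epsilon\), and \(t\)), the chain solution with \(F = D/\epsilon^2\) should approach the Gaussian solution of the diffusion equation, \(P(x,t) = e^{-x^2/4Dt}/\sqrt{4\pi D t}\), when \(\epsilon\) is small:

    import numpy as np
    from scipy.special import ive

    D, eps, t = 1.0, 0.1, 1.0
    F = D / eps**2                      # the rate grows as sites get closer
    m = np.arange(-50, 51)
    x = m * eps
    chain = ive(m, 2 * F * t) / eps     # P_m(t)/eps: probability density
    gauss = np.exp(-x**2 / (4 * D * t)) / np.sqrt(4 * np.pi * D * t)
    print(np.abs(chain - gauss).max())  # small for eps << sqrt(D t)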
