Complexity and Networks: Graph Laplacian and Master Stability Function by Konstantinos Efstathiou is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

Until now we have been considering oscillators that are connected all-to-all and in the same way. Now we will consider networks where each node is connected to some other nodes but possibly (and typically) not all.

In mathematics, a network is often called a **graph**. The network nodes are called **vertices** and the network connections are called **edges**.

The network structure is described through the **adjacency matrix** \(A = (a_{ij})\). This is defined by

\[ a_{ij} = \begin{cases} 1, & \text{ if node $i$ is connected to node $j,$ } \\ 0, & \text{ otherwise. } \end{cases} \]

Clearly, \(A\) is a real symmetric matrix, that is, \(A^t = A\).

For example, a network with five nodes, which we will use as a running example, can have the adjacency matrix

\[A = \begin{pmatrix} 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 1 & 0 \end{pmatrix}. \]

The **degree** \(k_i\) of the \(i\)-th node is defined as the number of nodes that are directly connected to the \(i\)-th node. It follows that

\[ k_i = \sum_{j=1}^n a_{ij}. \]

The average degree is

\[ \langle k \rangle = \frac{1}{n} \sum_{i=1}^n k_i = \frac{1}{n} \sum_{i=1}^n \sum_{j=1}^n a_{ij}. \]
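For the five-node network with the adjacency matrix shown above, the degrees and average degree can be computed directly (a minimal NumPy sketch):

```python
import numpy as np

# Adjacency matrix of the 5-node example network.
A = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [1, 0, 1, 1, 0],
])

k = A.sum(axis=1)   # degrees k_i = sum_j a_ij
k_mean = k.mean()   # average degree <k>

print(k)       # [3 1 2 3 3]
print(k_mean)  # 2.4
```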

The **degree matrix** is

\[ D = \mathrm{diag}(k_1,k_2,\dots,k_n). \]

For the example network above, the degree matrix is

\[D = \begin{pmatrix} 3 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 3 \end{pmatrix}. \]

The **graph Laplacian** is defined by

\[ L = D - A, \]

that is,

\[ L_{ij} = k_i \delta_{ij} - a_{ij}. \]

\(L\) is a real symmetric matrix and this implies that it has real eigenvalues.

Moreover, it is possible to show that \(L\) is positive semidefinite, that is,

\[ x^t L x \ge 0, \]

for all \(x \in \mathbb{R}^n\).

This implies that the eigenvalues of \(L\) are non-negative.

**Exercise.** Show that \[
x^t L x = \frac12 \sum_{i,j=1}^n a_{ij} (x_i-x_j)^2.
\]

For the example network, the graph Laplacian is

\[L = \begin{pmatrix} 3 & 0 & -1 & -1 & -1 \\ 0 & 1 & 0 & -1 & 0 \\ -1 & 0 & 2 & 0 & -1 \\ -1 & -1 & 0 & 3 & -1 \\ -1 & 0 & -1 & -1 & 3 \end{pmatrix}. \]
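The construction \(L = D - A\) and the quadratic-form identity from the exercise are easy to check numerically (a NumPy sketch for the example network; the random vector is arbitrary):

```python
import numpy as np

# Adjacency matrix of the 5-node example network.
A = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [1, 0, 1, 1, 0],
])

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # graph Laplacian

# The constant vector is always in the kernel of L.
assert np.allclose(L @ np.ones(5), 0)

# Numerical check of x^T L x = (1/2) sum_ij a_ij (x_i - x_j)^2; the
# right-hand side also makes positive semidefiniteness evident.
rng = np.random.default_rng(0)
x = rng.standard_normal(5)
lhs = x @ L @ x
rhs = 0.5 * sum(A[i, j] * (x[i] - x[j]) ** 2
                for i in range(5) for j in range(5))
assert np.isclose(lhs, rhs) and lhs >= -1e-12
```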

The eigenvalues of \(L\) are

\[ 0 = \lambda_1 \le \lambda_2 \le \dots \le \lambda_n \le n. \]

To show that \(0\) is always an eigenvalue consider the vector \(\mathbb{1} = (1,\dots,1)^t\).

Then

\[ (L \mathbb{1})_j = \sum_{i=1}^n L_{ji} \mathbb{1}_i = \sum_{i=1}^n L_{ji} = \sum_{i=1}^n (k_j \delta_{ji} - a_{ji}) = k_j - \sum_{i=1}^n a_{ji} = k_j - k_j = 0. \]

Therefore,

\[ L \mathbb{1} = 0 = 0 \, \mathbb{1}, \]

implying that \(0\) is an eigenvalue.

**Exercise.** Show that the number of zero eigenvalues of \(L\) gives the number of connected components of the network.

A **complete graph** is a graph in which all vertices are connected to all other vertices. In this case

\[ \lambda_2 = \dots = \lambda_n = n. \]

One way to see this is to write

\[ L = n I - C(1), \]

where \(C(1)\) is the \(n \times n\) matrix with all entries equal to \(1\).

Then notice that any vector \(v\) with \(\sum_i v_i = 0\) is an eigenvector of \(C(1)\) with eigenvalue \(0\). Let \(Z\) be the set of all such vectors \(v\). It can be easily shown that this is a vector subspace of \(\mathbb{R}^n\) with dimension \(n-1\).

Moreover, if \(v \in Z\) then

\[ L v = n v - C(1) v = n v. \]

Therefore, all \(v \in Z\) are eigenvectors of \(L\) with eigenvalue \(n\) and this implies that \(n\) must be an eigenvalue of \(L\) with multiplicity \(n-1\) (the same as the dimension of \(Z\)).

A basis of \(Z\) is given by the \(n-1\) vectors

\((1,-1,0,0,0,\dots,0)\),

\((1,0,-1,0,0,\dots,0)\),

\((1,0,0,-1,0,\dots,0)\),

\(\dots\),

\((1,0,0,0,0,\dots,-1)\).

**Exercise.** Construct an orthogonal basis of \(Z\).
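The spectrum of the complete graph Laplacian can be verified numerically (a sketch; the choice \(n = 6\) is arbitrary, for illustration):

```python
import numpy as np

# Laplacian of the complete graph K_n, written as L = n I - C(1).
n = 6
C1 = np.ones((n, n))          # matrix with all entries equal to 1
L = n * np.eye(n) - C1

# Spectrum: 0 with multiplicity 1 and n with multiplicity n - 1.
eig = np.sort(np.linalg.eigvalsh(L))
print(np.allclose(eig, [0] + [n] * (n - 1)))  # True
```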

For the example network, the eigenvalues of \(L\) are approximately \((0,\,0.83,\,2.69,\,4,\,4.48)\).
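These values are straightforward to reproduce numerically (a NumPy sketch for the example network):

```python
import numpy as np

A = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [1, 0, 1, 1, 0],
])
L = np.diag(A.sum(axis=1)) - A

# eigvalsh is the appropriate routine since L is real symmetric.
eig = np.sort(np.linalg.eigvalsh(L))

# Exactly one zero eigenvalue (the network is connected), and the
# eigenvalues sum to tr L, i.e. to the sum of the degrees (here 12).
assert abs(eig[0]) < 1e-10
assert np.isclose(eig.sum(), 12.0)
```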

Consider, on a connected network, the dynamical system

\[ \dot{x}_i = C \sum_{j=1}^n a_{ij} (x_j - x_i), \]

where \(A=(a_{ij})\) is the adjacency matrix. Such a system describes a **diffusion** process on the network.

Then

\[ \dot{x}_i = C \sum_{j=1}^n a_{ij} x_j - C \sum_{j=1}^n a_{ij} x_i = C \sum_{j=1}^n a_{ij} x_j - C k_i x_i. \]

Therefore,

\[ \dot{x} = C A x - C D x = - C L x. \]

This is a linear dynamical system. Its stability is determined by the eigenvalues of \(-CL\).

If we denote by \(0 = \lambda_1 < \lambda_2 \le \dots \le \lambda_n\) the eigenvalues of \(L\) then the eigenvalues of \(-CL\) are \(0 = -C\lambda_1 > -C\lambda_2 \ge \dots \ge -C\lambda_n\). In particular, all eigenvalues are strictly negative except for a single zero eigenvalue.

Let \(v_i\) denote the corresponding eigenvectors, with \(v_1 = \mathbb{1}\); since \(L\) is symmetric, these can be chosen orthogonal. We have \[ x(t) = \sum_{i=1}^n c_i(t) v_i \] and then \[ \dot{x}(t) = \sum_{i=1}^n \dot{c}_i(t) v_i = -C L \sum_{i=1}^n c_i(t) v_i = - C \sum_{i=1}^n c_i(t) \lambda_i v_i. \]

From here we get

\[ \dot{c}_i = -C \lambda_i c_i, \]

with solution

\[ c_i(t) = e^{-C\lambda_i t} c_i(0). \]

This means

\[ x(t) = \sum_{i=1}^n e^{-C\lambda_i t} c_i(0) v_i \overset{t \to \infty}{\longrightarrow} c_1(0) \mathbb{1}, \]

where we used that \(\lambda_1 = 0\) and \(v_1 = \mathbb{1}\).
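The spectral solution and its convergence to the average can be illustrated with a short computation (a NumPy sketch for the example network; \(C = 1\) and the initial condition are illustrative choices):

```python
import numpy as np

# Diffusion x' = -C L x on the 5-node example network.
A = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [1, 0, 1, 1, 0],
])
L = np.diag(A.sum(axis=1)) - A
C = 1.0

rng = np.random.default_rng(1)
x0 = rng.standard_normal(5)

# Spectral solution: x(t) = sum_i e^{-C lambda_i t} c_i(0) v_i,
# with c(0) = V^T x0 for the orthonormal eigenvectors V of L.
lam, V = np.linalg.eigh(L)

def x(t):
    return V @ (np.exp(-C * lam * t) * (V.T @ x0))

# For large t only the lambda_1 = 0 mode survives: every node tends
# to the average of the initial condition.
print(np.allclose(x(50.0), x0.mean() * np.ones(5)))  # True
```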

We consider now a network where each node has dynamics given by the equation

\[ \dot{x}_i = f(x_i) \]

where \(x_i \in \mathbb{R}^k\) and \(f : \mathbb{R}^k \to \mathbb{R}^k\). In particular, unlike the Kuramoto model, where the natural frequencies of the oscillators were not all the same, here we assume that each node has identical dynamics.

Then we introduce the coupling between nodes as

\[ \dot{x}_i = f(x_i) + \epsilon \sum_{j=1}^n a_{ij} ( h(x_j) - h(x_i) ) = f(x_i) - \epsilon \sum_{j=1}^n L_{ij} h(x_j), \]

where \(L\) is the graph Laplacian and \(h: \mathbb{R}^k \to \mathbb{R}^k\) is a given function.

It is easy to see that there exists a completely **synchronized solution**

\[ x_1(t) = x_2(t) = \cdots = x_n(t) = s(t), \]

since in that case the coupling terms vanish and all oscillators satisfy the equations

\[ \dot{s} = f(s). \]

Our aim is to characterize the linear stability of the synchronized solution.

As usual we consider a small deviation from the synchronized solution, that is, we write

\[ x_i(t) = s(t) + \xi_i(t). \]

Then we get the linearized equations of motion

\[ \dot{\xi}_i = Df(s) \xi_i - \epsilon \sum_{j=1}^n L_{ij} Dh(s) \xi_j = Df(s) \xi_i - \epsilon Dh(s) \sum_{j=1}^n L_{ij} \xi_j. \]

Notice that this is a time-dependent equation since \(s = s(t)\).

Write \(\xi\) for the \(k \times n\) matrix given by

\[ \xi = (\xi_1, \dots, \xi_n) \]

and write

\[ \xi = \zeta_1 v_1^t + \dots + \zeta_n v_n^t = \sum_{j=1}^n \zeta_j v_j^t, \]

where \(v_1,\dots,v_n\) are the eigenvectors of \(L\), and \(\zeta_1,\dots,\zeta_n \in \mathbb{R}^k\) are time-dependent vector coefficients.

The equations from the previous slide can be collectively written as

\[ \dot{\xi} = Df(s) \xi - \epsilon Dh(s) \xi L^t. \]

Then substitution gives

\[ \sum_{j=1}^n \dot{\zeta}_j v_j^t = \sum_{j=1}^n [Df(s) \zeta_j - \epsilon \lambda_j Dh(s) \zeta_j ] v_j^t, \]

where we used that \(v_j^t L^t = (L v_j)^t = \lambda_j v_j^t\).

From the last equation we obtain \[ \dot{\zeta}_j = [Df(s) - \epsilon \lambda_j Dh(s)] \zeta_j. \]

Therefore we see that along each eigenmode of \(L\), given by \(v_j\), the linearized equations have the same form \[ \dot{\zeta} = [Df(s) - \gamma Dh(s)] \zeta, \] where \(\gamma = \epsilon \lambda_j\) for \(\zeta = \zeta_j\).

We write the previous equation in the form

\[ \dot{\zeta} = A(t) \zeta, \]

where \(A(t) = Df(s(t)) - \gamma Dh(s(t))\).

Moreover, we will assume here that the synchronized solution \(s(t)\) is a periodic orbit in \(\mathbb{R}^k\), that is, there is some \(T > 0\) such that \(s(t+T) = s(t)\) for all \(t \in \mathbb{R}\).

We will now discuss (in general) the equation

\[ \dot{v} = A(t) v, \]

where, with the assumption that \(s\) has period \(T\), we have \(A(t+T) = A(t)\).

If at \(t=0\) we have \(v = v(0)\) then after time \(T\) we will have the vector \(v(T)\) by solving the equation \(\dot{v} = A(t) v\). This defines a linear mapping

\[ v(0) \mapsto v(T) \]

which can be expressed in terms of a matrix \(M\), called the **monodromy** matrix.

This allows us to describe the dynamics and its stability through the mapping \(v \mapsto M v\). In particular, the stability of \(\dot{v} = Av\) will correspond to stability of \(v \mapsto M v\).

The eigenvalues of \(M\) are called **characteristic multipliers**.

The **Floquet exponents** are defined by \(e^{\mu T} = \lambda\), where \(\lambda\) is a characteristic multiplier.

Finally, the **Lyapunov exponents** in this context are defined as \(\mathrm{Re}(\mu)\).

The reason that these are important is because of the following stability classification of \(v \mapsto M v\). Let \(\lambda_i\) be the eigenvalues of \(M\) (characteristic multipliers). Then

If all \(|\lambda_i| < 1\) then the map is **asymptotically stable**. This follows from the fact that a vector \(v = \sum_{i=1}^n c_i v_i\) is mapped after \(k\) iterations to \(M^k v = \sum_{i=1}^n \lambda_i^k c_i v_i\), and in this case \(\lambda_i^k \to 0\) as \(k \to \infty\).

If there is some \(|\lambda_j| > 1\) then the map is **unstable**. This follows from the observation that if \(v = v_j\) then \(M^k v = \lambda_j^k v_j\), and in this case \(|\lambda_j^k| \to \infty\) as \(k \to \infty\).

If we write as \(\alpha\) the Lyapunov exponent corresponding to \(\lambda\) then we have \[ e^{(\alpha+i\beta)T} = \lambda, \] so \[ |\lambda| = e^{\alpha T}. \]

Therefore, we can say that

If all \(\alpha_i < 0\) then the map is asymptotically stable.

If there is some \(\alpha_j > 0\) then the map is unstable.
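Computing a monodromy matrix and its multipliers is a small numerical exercise: integrate the fundamental matrix over one period. The sketch below uses SciPy and a made-up \(2 \times 2\) periodic matrix \(A(t)\) (not taken from the text), checked against Liouville's formula \(\det M = \exp\!\big(\int_0^T \operatorname{tr} A(t)\, dt\big)\):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical T-periodic coefficient matrix for v' = A(t) v.
T = 2 * np.pi

def A(t):
    return np.array([[-0.5 + np.cos(t), 1.0],
                     [-1.0, -0.5 - np.sin(t)]])

# Integrate the fundamental matrix V(t) with V(0) = I over one period;
# the monodromy matrix is M = V(T).
def rhs(t, y):
    return (A(t) @ y.reshape(2, 2)).ravel()

sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
M = sol.y[:, -1].reshape(2, 2)

multipliers = np.linalg.eigvals(M)          # characteristic multipliers
lyapunov = np.log(np.abs(multipliers)) / T  # Re(mu), the Lyapunov exponents

# Liouville: det M = exp(int_0^T tr A dt); here tr A = -1 + cos t - sin t,
# so the integral over one period is -T.
assert np.isclose(np.linalg.det(M), np.exp(-T), rtol=1e-6)
```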

Recall that here we consider the linear system

\[ \dot{\zeta} = A(t) \zeta = [Df(s(t)) - \gamma Dh(s(t))] \zeta \]

The **master stability function** is the function \(g(\gamma)\) that assigns to \(\gamma\) the corresponding maximal Lyapunov exponent of the system above.

Suppose that \(g(\gamma) < 0\) for \(\gamma_{\mathrm{min}} < \gamma < \gamma_{\mathrm{max}}\). Then the system is linearly stable if

\[ \gamma_{\mathrm{min}} < \epsilon \lambda_2 \le \epsilon \lambda_n < \gamma_{\mathrm{max}}, \]

that is, if all eigenmodes give negative maximal Lyapunov exponent.

The region of the coupling strength for which the synchronized solution is stable is then

\[ \frac{\gamma_{\mathrm{min}}}{\lambda_2} < \epsilon < \frac{\gamma_{\mathrm{max}}}{\lambda_n}. \]

This implies that in order for such an interval to be non-empty we must have

\[ R := \frac{\lambda_n}{\lambda_2} < \frac{\gamma_{\mathrm{max}}}{\gamma_{\mathrm{min}}}. \]

Notice that \(R\) depends only on the network topology (its graph Laplacian) while the master stability function (and subsequently \(\gamma_{\mathrm{min}}\) and \(\gamma_{\mathrm{max}}\)) depends only on the dynamics.

Therefore, the master stability formalism allows us to decouple the topology from the dynamics. After computing the master stability function for some coupling we can then find the eigenvalues of the graph Laplacian and determine which topologies are the most “synchronizable”.
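This decoupling reduces the stability question to simple arithmetic once \(\gamma_{\mathrm{min}}\), \(\gamma_{\mathrm{max}}\), and the Laplacian spectrum are known. The sketch below uses made-up values for \(\gamma_{\mathrm{min}}\) and \(\gamma_{\mathrm{max}}\) (in practice they come from computing \(g(\gamma)\)) together with the example network:

```python
import numpy as np

# Hypothetical master stability window: assume g(gamma) < 0 exactly for
# gamma_min < gamma < gamma_max (0.2 and 5.0 are illustrative numbers).
gamma_min, gamma_max = 0.2, 5.0

# Laplacian eigenvalues of the 5-node example network.
A = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [1, 0, 1, 1, 0],
])
lam = np.sort(np.linalg.eigvalsh(np.diag(A.sum(axis=1)) - A))
lam2, lamn = lam[1], lam[-1]

R = lamn / lam2                # depends only on the topology
eps_lo = gamma_min / lam2
eps_hi = gamma_max / lamn
if R < gamma_max / gamma_min:  # the interval (eps_lo, eps_hi) is non-empty
    print(f"synchronized solution stable for {eps_lo:.2f} < eps < {eps_hi:.2f}")
```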

Finally, suppose that \(g(\gamma) < 0\) for \(\gamma_{\mathrm{min}} < \gamma\). Then the system is linearly stable if

\[ \gamma_{\mathrm{min}} < \epsilon \lambda_2. \]

This means that there is always in this case a value

\[ \epsilon_c = \frac{\gamma_{\mathrm{min}}}{\lambda_2}, \]

such that if \(\epsilon > \epsilon_c\) then the synchronized solution is stable.