2  Ordinary Differential Equations

In this chapter, we review solution methods for linear differential equations and systems of linear differential equations, also called Ordinary Differential Equations (ODEs). These are essential when studying continuous-time problems in macroeconomic dynamics. The chapter is organized as follows: we first review some basic definitions and then examine solution methods for first-order and second-order differential equations. We finally discuss the generalization of these solution methods to a system of \(n\) first-order differential equations.

2.1 Definitions

A differential equation is a mathematical equation involving an unknown function of one or more variables, which connects the function itself and its derivatives of various orders.

The solution of a simple equation (or a system of equations) is a constant or a set of constants that satisfy these equations. In contrast, the solution of a differential equation (or a system of differential equations) is a function, or a set of functions, that, along with their derivatives, satisfy the differential equation or the system of differential equations. The equations we shall consider are all functions of time \(t\), which is assumed to be a continuous variable defined on the set of real numbers \(\mathbb{R}\).

Consider, for example, the differential equation:1 \[ \dot{y}(t)=\frac{dy}{dt}=a \] Its solution is a function \(y(t)\), the first derivative of which with respect to time is equal to \(a\). The general solution to this differential equation is the function

\[ y(t)=at+c \]

where \(c\) is an arbitrary constant. A particular solution can then be found using a boundary condition, such as an initial condition known at time \(t=0\), \(y_0\) taken as given.

Consider now the differential equation \(\dot{y}(t)=ay(t)\). To find its general solution, rewrite it as:

\[\begin{aligned} \frac{\dot{y}(t)}{y(t)}=a \Rightarrow \frac{1}{y(t)}\frac{dy(t)}{dt}=a \Rightarrow & \frac{dy}{y}=a dt \end{aligned}\]

which can be integrated:

\[\begin{aligned} \int \frac{dy}{y}= \int a\, dt \Rightarrow \ln(y(t))=at+c_0 \Rightarrow y(t)=ce^{at} \end{aligned}\]

with \(c \equiv e^{c_0}\) an arbitrary constant. To determine this constant and find the particular solution, use the given information \(y(0)=y_0\):

\[\begin{aligned} y(0)=y_0=ce^{a\cdot0}\Rightarrow c=y_0 \end{aligned}\]

The final solution then has the form:

\[ y(t)=y_0 e^{at} \] Note that we can differentiate the right-hand side of the solution with respect to \(t\): \(\dot{y}(t)=ay_0 e^{at} \equiv a y(t)\). We have indeed found the solution.
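As a quick check, this solution can also be recovered symbolically. Below is a minimal sketch using Python's sympy library (the symbol names a and y0 simply stand for the constant and the initial condition above):

```python
import sympy as sp

t, a, y0 = sp.symbols('t a y0')
y = sp.Function('y')

# Solve y'(t) = a*y(t) subject to y(0) = y0
ode = sp.Eq(y(t).diff(t), a * y(t))
print(sp.dsolve(ode, y(t), ics={y(0): y0}))  # Eq(y(t), y0*exp(a*t))
```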

We could also look at a differential equation of the form \(\ddot{y}(t)=\frac{d^2 y(t)}{dt^2}=a\), which is a second-order differential equation, and so on.

A solution of a differential equation is a function \(y(t)\) which, along with its derivatives, satisfies the differential equation. A particular solution requires the determination of the arbitrary constant or constants of integration.

Differential equations are classified by their order, which is none other than the order of the highest derivative that appears in the equation.

A differential equation is linear if it is linear in the unknown function \(y(t)\) and its derivatives. Otherwise, it is nonlinear.

A differential equation can be solved by a method known as separation of variables if it can be written as a term that contains a function of only \(y\), equated to a term that contains a function only of \(t\). For example, \(g(y(t))\dot{y}(t)=f(t)\) can be written as:

\[ g(y)dy=f(t)dt \] The solution can be found using the integration techniques used above.
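For instance, taking \(g(y)=y\) and \(f(t)=t\) (a choice of ours, purely for illustration), the equation \(y(t)\dot{y}(t)=t\) integrates to \(y^2/2=t^2/2+c\). A short sympy sketch of this separable case (sympy returns both sign branches):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Separable equation y(t)*y'(t) = t, i.e. y dy = t dt
ode = sp.Eq(y(t) * y(t).diff(t), t)
print(sp.dsolve(ode, y(t)))  # two branches: y = ±sqrt(C1 + t**2)
```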

2.2 First-Order Linear Differential Equations

We may encounter two types of linear differential equations: those with constant coefficients and those with variable coefficients.

2.2.1 Constant Coefficients

This category of linear ODE takes the form:

\[ \dot{y}(t)+ay(t)=b \tag{2.1}\]

with \(a\) and \(b\) some constants. To find the function that satisfies (2.1), note that

\[\frac{d\left(e^{a t} y(t)\right)}{d t}=a e^{a t} y(t)+e^{a t} \dot{y}(t)=e^{a t}[\dot{y}(t)+a y(t)] \tag{2.2}\]

If the differential equation (2.1) is multiplied by \(e^{a t}\), its left-hand side becomes an exact differential (i.e., the total differential of a function with respect to \(t\)). The function \(e^{a t}\) is called the integrating factor. Multiplying both sides of (2.2) by \(d t\), we get

\[d\left(e^{a t} y(t)\right)=b e^{a t} d t\]

whose integral is

\[e^{a t} y(t)=e^{a t} \frac{b}{a}+c\]

where \(c\) is the constant of integration. Multiplying both sides of this expression by \(e^{-a t}\), we get

\[y(t)=\frac{b}{a}+c e^{-a t} \tag{2.3}\]

as the family of functions that satisfy the differential equation (2.1). This family is called the general solution of (2.1).

To determine the constant of integration \(c\), we need to know the value of the function at some point in time. For example, if we know that at time \(t=0\), \(y(0)=y_{0}\), then

\[y_{0}=\frac{b}{a}+c\]

which implies

\[c=y_{0}-\frac{b}{a}\]

The particular solution of the differential equation (2.1) that satisfies \(y(0)=y_{0}\) is then given by

\[ y(t)=y_{0} e^{-a t}+\left(1-e^{-a t}\right) \frac{b}{a}=\frac{b}{a}+\left(y_{0}-\frac{b}{a}\right) e^{-a t} \]

In conclusion, to solve a linear first-order differential equation with constant coefficients, multiply it by the integrating factor and integrate it. To calculate the constant of integration, use the value of the function at some point. The point that is used is called an initial condition or a terminal condition, or more generally, a boundary condition.
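As an illustration, the closed form above can be checked against a direct numerical integration of (2.1); a minimal sketch with scipy (the values of a, b, and y0 are arbitrary test inputs):

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, y0 = 2.0, 4.0, 0.0  # arbitrary test values

# Equation (2.1) rearranged: y'(t) = -a*y(t) + b
rhs = lambda t, y: -a * y + b

ts = np.linspace(0.0, 3.0, 50)
numeric = solve_ivp(rhs, (0.0, 3.0), [y0], t_eval=ts).y[0]
closed = b / a + (y0 - b / a) * np.exp(-a * ts)
print(np.max(np.abs(numeric - closed)))  # small, within solver tolerance
```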

2.2.2 Variable Right-Hand Side

If the right-hand side of (2.1) is not constant but a known function of time, the solution method is similar. We multiply by the integrating factor and take the integral.

For example, for the differential equation

\[ \dot{y}(t)+a y(t)=b e^{\lambda t} \tag{2.4}\]

multiplying by the integrating factor and separating the variables results in

\[ d\left(e^{a t} y\right)=b e^{(a+\lambda) t} d t \tag{2.5}\]

Taking the integral of both sides of (2.5) (assuming \(a+\lambda \neq 0\)) yields

\[e^{a t} y(t)=\frac{b}{a+\lambda} e^{(a+\lambda) t}+c\]

Dividing both sides by the integrating factor, we get the solution

\[y(t)=\frac{b}{a+\lambda} e^{\lambda t}+c e^{-a t} \tag{2.6}\]

Equation (2.6) is the family of functions satisfying (2.4). The unknown constant \(c\) can again be determined by a boundary condition.

2.2.3 Variable Coefficients

The general form of a first-order linear differential equation is

\[\dot{y}(t)+a(t) y(t)=b(t) \tag{2.7}\]

where \(a(t)\) and \(b(t)\) are known functions, and we seek the function \(y(t)\). The function \(b(t)\) is often called a forcing term and is considered exogenous. The integrating factor in this case is \(e^{\int a(t) d t}\), because

\[\frac{d\left(y(t) e^{\int a(t) d t}\right)}{d t}=e^{\int a(t) d t}[\dot{y}(t)+a(t) y(t)]\]

Thus, multiplying (2.7) by this integrating factor and taking the integral, we get

\[y(t) e^{\int a(t) d t}=\int b(t) e^{\int a(t) d t} d t+c\]

Dividing this expression by the integrating factor, we finally get

\[y(t)=e^{-\int a(t) d t} \int b(t) e^{\int a(t) d t} d t+e^{-\int a(t) d t} c \tag{2.8}\]

where \(c\) is the constant of integration. (2.8) is the general solution of (2.7). A particular solution requires a boundary condition that will determine the unknown constant \(c\).

Note that it is not advisable to apply formula (2.8) mechanically to every equation. In many cases, it is simpler to multiply by the integrating factor and take the integral directly.
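To make (2.7) concrete, take \(a(t)=1/t\) and \(b(t)=t\) (our choice, for illustration); the integrating factor is \(e^{\int dt/t}=t\), and multiplying through and integrating gives \(y(t)=t^{2}/3+c/t\). A sympy sketch of this example:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')

# y'(t) + (1/t)*y(t) = t; the integrating factor exp(integral of dt/t) = t
ode = sp.Eq(y(t).diff(t) + y(t) / t, t)
print(sp.dsolve(ode, y(t)))  # Eq(y(t), C1/t + t**2/3)
```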

2.2.4 Homogeneous and Nonhomogeneous Differential Equations

If \(b=0\) in (2.1), the differential equation to be solved is called homogeneous. Otherwise, it is called nonhomogeneous.

The general solution of a differential equation consists of the sum of the general solution of the relevant homogeneous differential equation (i.e., the solution obtained by setting \(b=0\)) and a particular solution of the complete equation (2.1).

For example, the homogeneous equation derived from (2.1) is

\[\dot{y}(t)+a y(t)=0 \tag{2.9}\]

The general solution of (2.9) is

\[y(t)=c e^{-a t} \tag{2.10}\]

A particular solution (setting, for example, \(\dot{y}(t)=0\)) is

\[\bar{y}=\frac{b}{a} \tag{2.11}\]

Consequently, the general solution of the nonhomogeneous differential equation (2.1) is the sum of (2.10) and (2.11), that is, the general solution of the relevant homogeneous differential equation (sometimes called the complementary function) plus the particular solution for a constant \(y\) (otherwise known as the particular integral). The general solution is thus given by

\[y(t)=\frac{b}{a}+\left(y_{0}-\frac{b}{a}\right) e^{-a t}\]

This methodology is not generally necessary for solving first-order linear differential equations, but it becomes very useful for differential equations of order higher than one or for systems of first-order linear differential equations.

2.2.5 Convergence and Stability of First-Order Differential Equations

In many economic applications, we are interested in the behavior of the solution of a differential equation as the independent variable, usually time, tends to infinity. The value to which the solution converges is referred to as a stationary state, or steady state, or equilibrium state.

For example, from (2.3), which is the general solution of (2.1), for \(a>0\), we get

\[ \lim _{t \rightarrow \infty} y(t)=\lim _{t \rightarrow \infty}\left(\frac{b}{a}+c e^{-a t}\right)=\frac{b}{a} . \]

The particular integral of the differential equation (2.1) can therefore be interpreted economically as the equilibrium state (or the steady state), which is the state toward which the variable \(y\) converges as time goes to infinity. This equilibrium is called a stable node. It is a stable equilibrium if \(y\) is a predetermined variable and only changes gradually, as postulated by the law of motion (2.3).

Assume now that \(a<0\). In this case, if the boundary condition \(y_{0}\) is different from the steady state value \(\bar{y}\), then \(y(t)\) (as determined by (2.3)) moves to plus or minus infinity, farther and farther away from the steady state. The only case in which this does not happen is when the boundary condition \(y_{0}\) is equal to the steady state value \(\bar{y}=b / a\). Then \(y\) remains constant at \(\bar{y}\). However, this is an unstable equilibrium, called a saddle point. There is only one adjustment path that leads to it, and this is for \(y\) to jump immediately to the steady state. If \(y\) is a non-predetermined variable (for example, a financial variable or any variable that can change abruptly rather than gradually), then it can jump immediately to its steady state value.
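The two regimes can be illustrated by simply evaluating the general solution (2.3) for \(a>0\) and \(a<0\); a small numeric sketch (parameter values are arbitrary):

```python
import numpy as np

def path(a, b, y0, ts):
    # Solution (2.3) with c = y0 - b/a: y(t) = b/a + (y0 - b/a)*exp(-a*t)
    return b / a + (y0 - b / a) * np.exp(-a * ts)

ts = np.array([0.0, 1.0, 5.0, 10.0])
print(path(a=0.5, b=1.0, y0=5.0, ts=ts))   # a > 0: converges to b/a = 2
print(path(a=-0.5, b=1.0, y0=-1.0, ts=ts)) # a < 0: diverges unless y0 = b/a = -2
```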

2.3 Second-Order Linear Differential Equations

A second-order linear differential equation has the form

\[\ddot{y}(t)+a(t) \dot{y}(t)+b(t) y(t)=h(t) \tag{2.12}\]

where \(a(t), b(t)\), and \(h(t)\) are known functions, and what is sought is the function \(y(t)\). The forcing term in this case is the function \(h(t)\). Equation (2.12) is referred to as the complete equation and is nonhomogeneous. Related to (2.12) is a homogeneous differential equation in which \(h(t)=0\):

\[\ddot{y}(t)+a(t) \dot{y}(t)+b(t) y(t)=0 \tag{2.13}\]

which is called the reduced equation. The complete equation is nonhomogeneous, whereas the reduced equation is homogeneous. The reduced equation is of interest because of the following two theorems.

Theorem 1

The general solution of the complete equation (2.12) is the sum of any particular solution of the complete equation and the general solution of the reduced equation (2.13).

Theorem 2

Any solution \(y(t)\) of the reduced equation (2.13) on \(t_{0} \leq t \leq t_{1}\) can be expressed as a linear combination, \(y(t)=c_{1} y_{1}(t)+c_{2} y_{2}(t), t_{0} \leq t \leq t_{1}\), of any two particular solutions \(y_{1}, y_{2}\) that are linearly independent.

2.3.1 Homogeneous Equations with Constant Coefficients

We first examine the differential equation (2.12), with constant coefficients, that is, \(a(t)=a, b(t)=b\). Assume that \(h(t)=0\). The differential equation then takes the form

\[\ddot{y}(t)+a \dot{y}(t)+b y(t)=0 \tag{2.14}\]

Inspired by the general solution of the first-order linear differential equation with constant coefficients, let us try the general solution

\[y(t)=c e^{r t}\]

with unknown constants \(c\) and \(r\). This solution method is called the method of undetermined coefficients. This solution implies that

\[\dot{y}(t)=r c e^{r t}, \ddot{y}(t)=r^{2} c e^{r t}\]

Substituting these expressions in (2.14), we get

\[c e^{r t}\left(r^{2}+a r+b\right)=0 \tag{2.15}\]

For a nonzero \(c\), our trial solution satisfies (2.15) only if \(r\) is a root of the quadratic equation

\[r^{2}+a r+b=0 \tag{2.16}\]

Equation (2.16) is called the characteristic equation of (2.14). It has two roots:

\[ r_{1}, r_{2}=\frac{-a \pm \sqrt{a^{2}-4 b}}{2} \]

We distinguish three cases, depending on the value of the discriminant \(a^{2}-4 b\) of the characteristic equation (2.16).

  • Case 1: \(a^{2}>4 b\). The discriminant is positive, and the roots are real and distinct. The general solution of (2.14) takes the form

\[y(t)=c_{1} e^{r_{1} t}+c_{2} e^{r_{2} t}\]

where \(r_{1}, r_{2}\) are the roots of the characteristic equation (2.16), and \(c_{1}, c_{2}\) are arbitrary constants.

  • Case 2: \(a^{2}<4 b\). The discriminant is negative, and the roots are a pair of complex conjugates:

\[r_{1}, r_{2}=-\frac{a}{2} \pm i \frac{\sqrt{4 b-a^{2}}}{2}=\alpha \pm i \beta\]

where \(\alpha=-\frac{a}{2}\), and \(\beta=\frac{\sqrt{4 b-a^{2}}}{2}\). The general solution in this case is

\[y(t)=e^{\alpha t}\left(k_{1} \cos \beta t+k_{2} \sin \beta t\right)\]

where \(k_{1}, k_{2}\) are arbitrary constants.

  • Case 3: \(a^{2}=4 b\). The discriminant is equal to zero, and the two roots are the same and equal to \(-a / 2\). One can show that the general solution of (2.14) in this case takes the form

\[y(t)=c_{1} e^{r t}+c_{2} t e^{r t}=e^{r t}\left(c_{1}+c_{2} t\right)\]

where \(r=-a / 2\) is the double root of the characteristic equation (2.16), and \(c_{1}, c_{2}\) are arbitrary constants.
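The three cases can be bundled into a small routine that, given \(a\) and \(b\), returns the roots of the characteristic equation (2.16) and the matching form of the general solution; a sketch (the function name and test values are ours):

```python
import cmath

def solve_characteristic(a, b):
    """Roots of r^2 + a*r + b = 0 and the matching solution form of (2.14)."""
    disc = a * a - 4 * b
    r1 = (-a + cmath.sqrt(disc)) / 2
    r2 = (-a - cmath.sqrt(disc)) / 2
    if disc > 0:    # Case 1: real, distinct roots
        form = "c1*exp(r1*t) + c2*exp(r2*t)"
    elif disc < 0:  # Case 2: complex conjugate roots alpha ± i*beta
        form = "exp(alpha*t)*(k1*cos(beta*t) + k2*sin(beta*t))"
    else:           # Case 3: repeated root r = -a/2
        form = "exp(r*t)*(c1 + c2*t)"
    return r1, r2, form

print(solve_characteristic(a=1.0, b=-6.0))  # real roots 2 and -3
print(solve_characteristic(a=2.0, b=5.0))   # complex roots -1 ± 2i
```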

2.3.2 Nonhomogeneous Equations with Constant Coefficients

We have already derived the solution of any homogeneous second-order linear differential equation with constant coefficients. To find the solution of a nonhomogeneous equation, we need a particular solution of the complete equation. If the complete equation is of the form

\[\ddot{y}(t)+a \dot{y}(t)+b y(t)=h\]

then a particular solution (provided \(b \neq 0\)) is the constant function

\[\bar{y}=\frac{h}{b}\]

The full solution is thus the sum of the general solution to the homogeneous equation plus the particular solution to the complete equation.

For differential equations with variable coefficients, more advanced methods, such as the method of variation of parameters, can be utilized for their solution.

2.4 A Pair of First-Order Linear Differential Equations

We next examine a case with extensive applications in macroeconomics: a pair of first-order linear differential equations of the form

\[ \begin{aligned} & \dot{x}(t)=a_{1} x(t)+b_{1} y(t)+p(t) \\ & \dot{y}(t)=a_{2} x(t)+b_{2} y(t)+g(t) \end{aligned} \tag{2.17}\]

where \(a_{1}, a_{2}, b_{1}, b_{2}\) are given constants, and \(p(t), g(t)\) are given functions. The solution of the system of differential equations (2.17) will be two functions \(x(t)\) and \(y(t)\) that satisfy both differential equations.

The homogeneous system that corresponds to (2.17) is given by \[ \begin{aligned} \dot{x}(t)=a_{1} x(t)+b_{1} y(t) \\ \dot{y}(t)=a_{2} x(t)+b_{2} y(t) \end{aligned} \tag{2.18}\]

2.4.1 The Method of Substitution

One solution method is the method of substitution. Differentiating the first equation of (2.18) with respect to time, and substituting \(\dot{y}(t)\) from the second equation and \(y(t)\) from the first, we end up with a second-order linear differential equation that contains only \(x(t)\) and its derivatives:

\[\ddot{x}(t)-\left(a_{1}+b_{2}\right) \dot{x}(t)+\left(a_{1} b_{2}-b_{1} a_{2}\right) x(t)=0 \tag{2.19}\]

Equation (2.19) is a linear homogeneous second-order differential equation. It can be solved using the method of undetermined coefficients. Its characteristic equation is given by

\[r^{2}-\left(a_{1}+b_{2}\right) r+\left(a_{1} b_{2}-a_{2} b_{1}\right)=0 \tag{2.20}\]

If the roots of (2.20) are real and distinct, the solution of (2.19) is given by

\[x(t)=c_{1} e^{r_{1} t}+c_{2} e^{r_{2} t} \tag{2.21}\]

Solving the first equation of (2.18) for \(y(t)\) (assuming \(b_{1} \neq 0\)), we get

\[y(t)=\frac{1}{b_{1}}\left(\dot{x}(t)-a_{1} x(t)\right)\]

Substituting the solution (2.21) for \(x(t)\) and its first derivative, we get

\[y(t)=\frac{1}{b_{1}}\left(\left(r_{1}-a_{1}\right) c_{1} e^{r_{1} t}+\left(r_{2}-a_{1}\right) c_{2} e^{r_{2} t}\right) \tag{2.22}\]

Consequently, the solution of the system (2.18) consists of equations (2.21) and (2.22), if the roots of (2.20) are real and distinct. We can solve the system in an analogous way if we have complex or repeated roots.

However, there is a second and more direct solution method of the homogeneous system (2.18). This method also generalizes to higher-order systems of differential equations.

2.4.2 The Method of Eigenvalues

Our experience with first-order differential equations suggests that we try the pair of functions

\[x(t)=A e^{r t}, y(t)=B e^{r t}\]

as particular solutions for (2.18). Here \(A, B\), and \(r\) are undetermined coefficients. Substituting these solutions in (2.18), we get

\[\begin{aligned} r A e^{r t}=a_{1} A e^{r t}+b_{1} B e^{r t} \\ r B e^{r t}=a_{2} A e^{r t}+b_{2} B e^{r t}\end{aligned} \tag{2.23}\]

Dividing both equations by \(e^{r t}\), we can rewrite the system (2.23) in matrix form:

\[\left(\begin{array}{cc}a_{1}-r & b_{1} \\ a_{2} & b_{2}-r\end{array}\right)\binom{A}{B}=\binom{0}{0} \tag{2.24}\]

For (2.24) to hold for nonzero \(A\) and \(B\), the determinant of the matrix of coefficients must be zero:

\[\left|\begin{array}{cc}a_{1}-r & b_{1} \\ a_{2} & b_{2}-r\end{array}\right|=0\]

Calculating the determinant, we get a quadratic equation in \(r\) :

\[r^{2}-\left(a_{1}+b_{2}\right) r+\left(a_{1} b_{2}-a_{2} b_{1}\right)=0 \tag{2.25}\]

Equation (2.25) is the characteristic equation of the system (2.18). Equation (2.25) is identical to (2.20), the equation we ended up with when using the method of substitution. The solutions of the characteristic equation (2.25) are called the eigenvalues of the matrix of coefficients:

\[\left(\begin{array}{ll}a_{1} & b_{1} \\ a_{2} & b_{2}\end{array}\right)\]

The two roots are given by

\[ r_{1}, r_{2}=\frac{\left(a_{1}+b_{2}\right) \pm \sqrt{\left(a_{1}+b_{2}\right)^{2}-4\left(a_{1} b_{2}-a_{2} b_{1}\right)}}{2} \tag{2.26}\]

For future use, note that

\[ \begin{aligned} r_{1}+r_{2} & =a_{1}+b_{2} \\ r_{1} r_{2} & =a_{1} b_{2}-a_{2} b_{1} \end{aligned} \]

If the roots are real, and \(r_{1} \neq r_{2}\), then the general solution of the homogeneous system (2.18) is given by

\[ \begin{aligned} & x(t)=A_{1} e^{r_{1} t}+A_{2} e^{r_{2} t}, \\ & y(t)=B_{1} e^{r_{1} t}+B_{2} e^{r_{2} t}, \end{aligned} \]

where \(A_{1}, A_{2}\) are determined by boundary conditions; the roots are determined by (2.26); and \(B_{1}, B_{2}\) are determined by (2.23) as

\[B_{1}=\frac{r_{1}-a_{1}}{b_{1}} A_{1}\]

\[B_{2}=\frac{r_{2}-a_{1}}{b_{1}} A_{2}\]

The solution is identical to (2.21) and (2.22). In the case of complex or repeated roots, the solution is analogous.
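A numeric sketch of the eigenvalue method for a concrete instance of (2.18), using numpy (the coefficient values and the boundary condition are arbitrary):

```python
import numpy as np

# Coefficient matrix of (2.18); arbitrary test values
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
rho, V = np.linalg.eig(A)      # eigenvalues and eigenvectors (columns of V)

x0 = np.array([1.0, 0.0])      # boundary condition (x(0), y(0))
c = np.linalg.solve(V, x0)     # constants from V @ c = x(0)

def xy(t):
    # (x(t), y(t)) = sum_i c_i * exp(rho_i * t) * v_i
    return V @ (c * np.exp(rho * t))

print(rho)      # eigenvalues 3 and -1
print(xy(0.0))  # reproduces the boundary condition [1, 0]
```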

Having found the general solution to the homogeneous system (2.18), it remains to find a particular solution of (2.17), using, for example, the method of variation of parameters.

For the special case where \(p\) and \(g\) are constants, a particular solution with constant \(x\) and \(y\) can be found by solving the system of equations

\[\begin{aligned} a_{1} \bar{x}+b_{1} \bar{y}+p=0\\ a_{2} \bar{x}+b_{2} \bar{y}+g=0\end{aligned} \tag{2.27}\]

Expressing (2.27) in matrix form and solving for \(\bar{x}\) and \(\bar{y}\), we get

\[\left(\begin{array}{ll}a_{1} & b_{1} \\ a_{2} & b_{2}\end{array}\right)\binom{\bar{x}}{\bar{y}}=-\binom{p}{g}\]

The solution is then given by

\[ \binom{\bar{x}}{\bar{y}}=-\left(\begin{array}{ll} a_{1} & b_{1} \\ a_{2} & b_{2} \end{array}\right)^{-1}\binom{p}{g} \]

where \(\bar{x}\) and \(\bar{y}\) can be regarded as steady states, or equilibrium points. Whether the system converges globally to this equilibrium depends on whether both roots are real and smaller than zero. In this case, the equilibrium is a sink. If both variables are predetermined, this is a stable equilibrium.

If we have a positive and a negative root, the equilibrium is called a saddle point. There is only one unique path that leads to this equilibrium, and this path is called the saddle path. The economy will converge to equilibrium if one variable is predetermined and the other not predetermined. The non-predetermined variable will jump to the unique adjustment path leading to equilibrium. Technically, the negative root corresponds to the predetermined variable (for which we solve backward), and the positive root corresponds to the non-predetermined variable (for which we solve forward).

Thus, a system with one predetermined and one non-predetermined variable has an equilibrium (a saddle point) if the matrix of coefficients has one positive and one negative eigenvalue.
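The steady state computation and the eigenvalue classification above can be sketched together (the helper function and the numerical values are ours):

```python
import numpy as np

def steady_state_and_type(A, forcing):
    """Steady state of (2.17) with constant forcing (p, g), and its stability type."""
    xbar = -np.linalg.solve(A, forcing)  # solves A @ xbar = -(p, g)
    re = np.linalg.eigvals(A).real
    if np.all(re < 0):
        kind = "sink (stable)"
    elif np.all(re > 0):
        kind = "source (unstable)"
    else:
        kind = "saddle point"
    return xbar, kind

A = np.array([[-1.0, 0.5],
              [0.5, 1.0]])               # arbitrary coefficients with det(A) < 0
print(steady_state_and_type(A, np.array([1.0, -2.0])))  # saddle point
```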

2.4.3 A System of n First-Order Linear Differential Equations

Finally, we turn to a more general case with extensive applications in macroeconomics: a system of \(n\) first-order linear differential equations of the form

\[ \begin{aligned} & \dot{x}_{1}(t)=a_{11} x_{1}(t)+a_{12} x_{2}(t)+\cdots+a_{1 n} x_{n}(t)+g_{1}(t) \\ & \dot{x}_{2}(t)=a_{21} x_{1}(t)+a_{22} x_{2}(t)+\cdots+a_{2 n} x_{n}(t)+g_{2}(t) \\ & \vdots \\ & \dot{x}_{n}(t)=a_{n 1} x_{1}(t)+a_{n 2} x_{2}(t)+\cdots+a_{n n} x_{n}(t)+g_{n}(t) \end{aligned} \tag{2.28}\]

where \(a_{i j}\), for \(i, j=1,2, \ldots, n\), are given constant parameters; \(x_{1}, x_{2}, \ldots, x_{n}\) are the unknown functions; and \(g_{1}, g_{2}, \ldots, g_{n}\) are exogenous forcing functions of time.

In matrix form, this system can be written as

\[ \left(\begin{array}{c} \dot{x}_{1}(t) \\ \vdots \\ \dot{x}_{n}(t) \end{array}\right)=\left(\begin{array}{ccc} a_{11} & \ldots & a_{1 n} \\ \vdots & \ddots & \vdots \\ a_{n 1} & \cdots & a_{n n} \end{array}\right)\left(\begin{array}{c} x_{1}(t) \\ \vdots \\ x_{n}(t) \end{array}\right)+\left(\begin{array}{c} g_{1}(t) \\ \vdots \\ g_{n}(t) \end{array}\right) \]

or

\[\dot{\mathbf{x}}(t)=\mathbf{A x}(t)+\mathbf{g}(t) \tag{2.29}\]

where bold letters denote vectors and matrices; \(\mathbf{A}\) is the square \(n \times n\) coefficient matrix, which is assumed to be nonsingular.

2.4.4 Eigenvalues and Eigenvectors

Before we proceed to discuss the solution of the system of differential equations (2.28), it is worth delving a little more into linear algebra, and in particular, into the concepts of eigenvalues and eigenvectors.

Let \(\mathbf{A}\) be a square matrix, like the one multiplying the \(\mathbf{x}\) vector on the right-hand side of (2.29). An eigenvalue of \(\mathbf{A}\) is a number \(\rho\) which, when subtracted from each of the diagonal elements of \(\mathbf{A}\), converts \(\mathbf{A}\) into a singular (i.e., noninvertible) matrix. Subtracting a scalar \(\rho\) from each of the diagonal elements of \(\mathbf{A}\) is equivalent to subtracting \(\rho\) times the identity matrix \(\mathbf{I}\) from \(\mathbf{A}\). Therefore, \(\rho\) is an eigenvalue of \(\mathbf{A}\) if and only if \(\mathbf{A}-\rho \mathbf{I}\) is singular.

Because a matrix is singular if its determinant is equal to zero, \(\rho\) is an eigenvalue of \(\mathbf{A}\) if and only if

\(\operatorname{det}(\mathbf{A}-\rho \mathbf{I})=0\).

For an \(n \times n\) matrix \(\mathbf{A}\), the determinant of \(\mathbf{A}-\rho \mathbf{I}\) is an \(n\)th-order polynomial in \(\rho\), called the characteristic polynomial of \(\mathbf{A}\). An \(n\)th-order polynomial has at most \(n\) distinct roots. Therefore, an \(n \times n\) square matrix has at most \(n\) eigenvalues.

It follows from the above that the diagonal entries of a diagonal matrix \(\mathbf{D}\) are eigenvalues of \(\mathbf{D}\), and that a square matrix \(\mathbf{A}\) is singular if and only if 0 is an eigenvalue of \(\mathbf{A}\).

Recall from elementary linear algebra that a square matrix \(\mathbf{B}\) is nonsingular if and only if the only solution of \(\mathbf{B x}=\mathbf{0}\) is \(\mathbf{x}=\mathbf{0}\). Conversely, \(\mathbf{B}\) is singular if and only if the system \(\mathbf{B x}=\mathbf{0}\) has a nonzero solution.

The fact that \(\mathbf{A}-\rho \mathbf{I}\) is singular when \(\rho\) is an eigenvalue of \(\mathbf{A}\) means that the system of equations \((\mathbf{A}-\rho \mathbf{I}) \mathbf{v}=0\) has a solution other than \(\mathbf{v}=0\).

When \(\rho\) is an eigenvalue of \(\mathbf{A}\), a nonzero vector \(\mathbf{v}\) such that \((\mathbf{A}-\rho \mathbf{I}) \mathbf{v}=\mathbf{0}\), is called a (right) eigenvector of \(\mathbf{A}\), corresponding to the eigenvalue \(\rho\).

Thus, eigenvectors are nonzero vectors \(\mathbf{v}\) that satisfy

\((\mathbf{A}-\rho \mathbf{I}) \mathbf{v}=\mathbf{0}, \mathbf{A v}-\rho \mathbf{I v}=\mathbf{0}, \mathbf{A v}=\rho \mathbf{v}\).

These three statements are equivalent.

2.5 Solving the \(n\)th-Order System of Linear Differential Equations

Let us now turn to the solution of the \(n\)th-order system of linear differential equations represented by (2.28). The general solution of the nonhomogeneous system of differential equations (2.28) will be the sum of the general solution of the relevant homogeneous system of differential equations plus the particular solution for constant \(x_{i}\)s.

We shall concentrate on the solution of the homogeneous system

\[ \left(\begin{array}{c} \dot{x}_{1}(t) \\ \vdots \\ \dot{x}_{n}(t) \end{array}\right)=\left(\begin{array}{ccc} a_{11} & \ldots & a_{1 n} \\ \vdots & \ddots & \vdots \\ a_{n 1} & \cdots & a_{n n} \end{array}\right)\left(\begin{array}{c} x_{1}(t) \\ \vdots \\ x_{n}(t) \end{array}\right) \tag{2.30}\]

or simply, \(\dot{\mathbf{x}}=\mathbf{A x}\), where \(\mathbf{x}\) is the column vector on the right-hand side of (2.30).

Assume first that \(\mathbf{A}\) is a diagonal matrix, for which \(a_{i j}=0\) for \(i \neq j\). Then (2.30) becomes a system of \(n\) independent self-contained equations of the form

\[\dot{x}_{i}(t)=a_{i i} x_{i}(t)\]

We thus have a system of independent first-order linear differential equations that can be solved one by one:

\[x_{i}(t)=c_{i} e^{a_{i i} t}\]

If the off-diagonal elements \(a_{i j}\) differ from zero, so that the equations are linked to each other, we can use the eigenvalues and eigenvectors of the coefficient matrix \(\mathbf{A}\) to transform (2.30) into a system with a diagonal coefficient matrix, that is, into a system of \(n\) independent equations.

Let us assume that \(\mathbf{A}\) has \(n\) distinct real eigenvalues \(\rho_{1}, \rho_{2}, \ldots, \rho_{n}\), with corresponding eigenvectors \(\mathbf{v}_{1}, \mathbf{v}_{2}, \ldots, \mathbf{v}_{n}\). It then follows from the definition of eigenvalues and eigenvectors that

\[ \mathbf{A v}_{i}=\rho_{i} \mathbf{v}_{i}, i=1,2, \ldots, n \tag{2.31}\]

Let \(\mathbf{P}\) be the \(n \times n\) matrix whose columns are these \(n\) eigenvectors. Thus \(\mathbf{P}\) is defined as

\[ \mathbf{P}=\left[\mathbf{v}_{1} \cdots \mathbf{v}_{n}\right] \tag{2.32}\]

The system of equations (2.31) can thus be written as

\[ \mathbf{A P}=\mathbf{P J} \]

where

\[\mathbf{J}=\left(\begin{array}{lll}\rho_{1} & & 0 \\ & \ddots & \\ 0 & & \rho_{n}\end{array}\right)\]

Because eigenvectors for distinct eigenvalues are linearly independent, \(\mathbf{P}\) is nonsingular and therefore invertible. We can write

\[\mathbf{P}^{-1} \mathbf{A P}=\mathbf{J}\]

Thus, we can use (2.32) to transform the system (2.30), defined in the variables \(\mathbf{x}\), to a system in the variables \(\mathbf{y}=\mathbf{P}^{-1} \mathbf{x}\), which means that \(\mathbf{x}=\mathbf{P y}\). It follows that

\[\dot{\mathbf{y}}=\mathbf{P}^{-1} \dot{\mathbf{x}}=\mathbf{P}^{-1} \mathbf{A x}=\mathbf{P}^{-1} \mathbf{A P y}=\mathbf{J y}\]

Because \(\mathbf{J}\) is a diagonal matrix, the solution of this transformed system can be obtained very easily as the vector of solutions for each variable \(y_{i}\):

\(\left(\begin{array}{c}y_{1}(t) \\ \vdots \\ y_{n}(t)\end{array}\right)=\left(\begin{array}{c}c_{1} e^{\rho_{1} t} \\ \vdots \\ c_{n} e^{\rho_{n} t}\end{array}\right)\)

Finally, we can use the transformation \(\mathbf{x}=\mathbf{P y}\) to return to the original variables \(x_{1}, \ldots, x_{n}\) :

\[ \mathbf{x}(t)=\mathbf{P y}(t)=\left[\begin{array}{lll} \mathbf{v}_{1} & \cdots & \mathbf{v}_{n} \end{array}\right]\left(\begin{array}{c} c_{1} e^{\rho_{1} t} \\ \vdots \\ c_{n} e^{\rho_{n} t} \end{array}\right)=c_{1} e^{\rho_{1} t} \mathbf{v}_{1}+c_{2} e^{\rho_{2} t} \mathbf{v}_{2}+\cdots+c_{n} e^{\rho_{n} t} \mathbf{v}_{n} \]

Thus, under the assumption that the \(n \times n\) matrix \(\mathbf{A}\) has \(n\) distinct real eigenvalues \(\rho_{1}, \rho_{2}, \ldots, \rho_{n}\), with corresponding eigenvectors \(\mathbf{v}_{1}, \mathbf{v}_{2}, \ldots, \mathbf{v}_{n}\), the general solution of the homogeneous linear system (2.30) is given by

\(\mathbf{x}(t)=c_{1} e^{\rho_{1} t} \mathbf{v}_{1}+c_{2} e^{\rho_{2} t} \mathbf{v}_{2}+\cdots+c_{n} e^{\rho_{n} t} \mathbf{v}_{n}\).

The solution in the cases of complex eigenvalues or multiple eigenvalues without enough eigenvectors is analogous to the solution of the second-order homogeneous system analyzed in Section 2.4.2.
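A numeric sketch of the whole diagonalization route for a three-variable example (the matrix is arbitrary, chosen to have distinct real eigenvalues 1, 2, and 3):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [6.0, -11.0, 6.0]])   # eigenvalues 1, 2, 3
rho, P = np.linalg.eig(A)           # columns of P are eigenvectors

# Verify the diagonalization P^{-1} A P = J
print(np.round(np.linalg.inv(P) @ A @ P, 10))

# General solution with a boundary condition x(0): solve P @ c = x(0)
x0 = np.array([1.0, 0.0, 0.0])
c = np.linalg.solve(P, x0)
x = lambda t: P @ (c * np.exp(rho * t))  # x(t) = sum_i c_i e^{rho_i t} v_i
print(x(0.0))                            # reproduces x(0)
```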

Steady states and stability conditions are defined in a way analogous to those for first- and second-order differential equations. Assuming that the vector of \(g_{i}\)s consists of constants, one gets the nonhomogeneous system:

\[ \left(\begin{array}{c} \dot{x}_{1}(t) \\ \vdots \\ \dot{x}_{n}(t) \end{array}\right)=\left(\begin{array}{ccc} a_{11} & \ldots & a_{1 n} \\ \vdots & \ddots & \vdots \\ a_{n 1} & \cdots & a_{n n} \end{array}\right)\left(\begin{array}{c} x_{1}(t) \\ \vdots \\ x_{n}(t) \end{array}\right)+\left(\begin{array}{c} g_{1} \\ \vdots \\ g_{n} \end{array}\right) \]

The steady state, if it exists, can be derived by setting the change in the \(x_{i}\)s equal to zero. A steady state exists if \(\mathbf{A}\) is nonsingular and is given by

\[ \left(\begin{array}{c} \bar{x}_{1} \\ \vdots \\ \bar{x}_{n} \end{array}\right)=-\left(\begin{array}{ccc} a_{11} & \ldots & a_{1 n} \\ \vdots & \ddots & \vdots \\ a_{n 1} & \cdots & a_{n n} \end{array}\right)^{-1}\left(\begin{array}{c} g_{1} \\ \vdots \\ g_{n} \end{array}\right) \]

where \(\bar{x}_{i}\) denotes the steady state value of \(x_{i}\). These values can be regarded as equilibrium points. If the \(x_{i}\)s are predetermined variables, for the system to converge to equilibrium, all eigenvalues must be less than zero. In this case, the equilibrium is a stable node. When the \(x_{i}\)s consist of \(p\) predetermined and \(q\) non-predetermined variables, where \(p+q=n\), the equilibrium (if it exists) is a saddle point. For the system to converge to equilibrium, there must be \(p\) negative eigenvalues and \(q\) positive eigenvalues. The negative eigenvalues correspond to the predetermined variables, which are solved for backward, and the positive eigenvalues correspond to the non-predetermined variables, which are solved for forward.

Thus, a system with \(p\) predetermined and \(q\) non-predetermined variables has a stable equilibrium (a saddle point) if the matrix of coefficients has \(p\) negative and \(q\) positive eigenvalues. The adjustment path is unique and is called a saddle path.

2.6 Nonlinear Differential Equations

In this section, we consider problems of the form:

\[ \dot{x}(t)=f(x(t)) \tag{2.33}\]

where \(x(t)\) is a vector \(x(t)=(x_1(t),x_2(t),\ldots,x_n(t))\) and \(f\) is shorthand notation for \(n\) functions \(f_1,\ldots,f_n\). This kind of system may be difficult (or even impossible) to solve explicitly. However, it is possible to derive an approximation of the solution by taking a first-order (or, if needed, higher-order) expansion around a reference point. A good candidate is the steady state \(\bar{x}\) satisfying \(f(\bar{x})=0\). Hence, we get:

\[ \dot{x}(t)\approx\underbrace{f(\bar{x})}_{=0}+J(\bar{x})\cdot(x(t)-\bar{x}) \] where \(J(\bar{x})\) is the Jacobian matrix, with entries \(\partial f_i(\bar{x})/\partial x_j\), \(i,j=1,\ldots,n\).

Treating the deviation \(x(t)-\bar{x}\) as the new variable of interest (its time derivative is \(\dot{x}(t)\), since \(\bar{x}\) is constant), one can see that we obtain a system of \(n\) first-order linear differential equations and can apply the techniques from Section 2.5.
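A sketch of the linearization step for a two-equation example (the nonlinear system itself is an arbitrary illustration of ours):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
# Arbitrary nonlinear system: x1' = x2 - x1**2, x2' = x1 - x2
f = sp.Matrix([x2 - x1**2, x1 - x2])

steady_states = sp.solve(f, [x1, x2], dict=True)  # points with f(xbar) = 0
J = f.jacobian([x1, x2])                          # Jacobian matrix

for s in steady_states:
    # Eigenvalues of J(xbar) govern the local dynamics near each steady state
    print(s, J.subs(s).eigenvals())
```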

2.7 Qualitative Analysis

It is possible to obtain qualitative information about the behavior of such systems. We focus on a generic pair of linear differential equations of the form:

\[\begin{aligned} \dot{x}(t)=ax(t)+by(t)+c \\ \dot{y}(t)= d x(t)-ey(t) +f \end{aligned}\]

Without loss of generality, assume that all parameters \(a,b,c,d,e,f\) are positive. We can draw the isoclines \(\dot{x}(t)=0\) and \(\dot{y}(t)=0\) in the plane \(\{x(t),y(t)\}\), that is, the loci:

\[\begin{aligned} \dot{x}(t)=0 &:& \tilde{y}=\frac{-ax-c}{b} \\ \dot{y}(t)=0 &:& \hat{y}=\frac{dx+f}{e} \end{aligned}\]

The next step is to characterize the vector field, that is, the qualitative changes of \(x(t)\) and \(y(t)\) for every pair \(\{x,y\}\) outside the isoclines. We have:

\[\begin{aligned} \dot{x}(t) \geq 0 &\Leftrightarrow& y(t) \geq \frac{-ax(t)-c}{b} \equiv \tilde{y} \\ \dot{y}(t) \geq 0 &\Leftrightarrow& y(t) \leq \frac{dx(t)+f}{e} \equiv \hat{y} \end{aligned}\]

On the one hand, for every pair \(\{x(t),y(t)\}\) located to the right (left) of the locus \(\tilde{y}\), the value of \(x(t)\) increases (decreases), so the system moves to the right (left). On the other hand, for every pair \(\{x(t),y(t)\}\) located below (above) the locus \(\hat{y}\), the value of \(y(t)\) increases (decreases), so the system moves upward (downward).
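The isoclines and the vector field can be drawn directly; a matplotlib sketch (all parameters set to 1, consistent with the positivity assumption above):

```python
import numpy as np
import matplotlib.pyplot as plt

a, b, c, d, e, f = (1.0,) * 6  # arbitrary positive parameters

# Vector field of the system on a grid of (x, y) points
x, y = np.meshgrid(np.linspace(-3, 3, 15), np.linspace(-3, 3, 15))
plt.quiver(x, y, a * x + b * y + c, d * x - e * y + f, angles='xy')

# Isoclines: xdot = 0 and ydot = 0
xs = np.linspace(-3, 3, 100)
plt.plot(xs, (-a * xs - c) / b, label=r'$\dot{x}=0$')
plt.plot(xs, (d * xs + f) / e, label=r'$\dot{y}=0$')
plt.xlabel('x(t)'); plt.ylabel('y(t)'); plt.legend()
plt.show()
```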


  1. We use the conventional notation where \(t\) enters as an argument for each variable that depends on time, and the “dot” over a variable denotes the time derivative of that variable.↩︎