3  Difference Equations

In this chapter, we review the properties and solution methods of first- and second-order difference equations, as well as of systems of first-order difference equations.

Difference equations are the analog of differential equations when time is a discrete variable defined in terms of integers. They are an indispensable tool for the study of dynamic economic problems in discrete time. We shall thus assume that time is an integer \(t=\ldots,-2,-1,0,1,2, \ldots\), instead of \(t\) being a real continuous variable.

After defining lag operators, we then proceed to present solution methods for first- and second-order linear difference equations and for systems of interdependent linear difference equations.

3.1 Lag Operators and Difference Equations

To define and analyze difference equations, it is useful to first define lag operators. The value of a variable \(x\) in period \(t\) is denoted by \(x_{t}\). The lag operator \(L\) for a variable \(x_{t}\) is defined by

\[L^{n} x_{t}=x_{t-n}\]

for \(n=\ldots,-2,-1,0,1,2, \ldots\)

Thus, multiplying \(x_{t}\) by \(L\) gives the value of the variable in the previous period, and multiplying the variable by \(L^{n}\) gives the value of the variable in period \(t-n\). Note that if \(n\) is negative (i.e., \(n<0\)), the lag operator shifts the variable \(|n|\) periods into the future.

This definition is mathematically somewhat loose. More formally, let us assume a sequence

\[ \left\{x_{t}\right\}_{t=-\infty}^{\infty} \]

that associates a real number \(x_{t}\) with every integer \(t\). Applying the operator \(L^{n}\) to this sequence, we get a new sequence:

\[ \left\{y_{t}\right\}_{t=-\infty}^{\infty}=\left\{x_{t-n}\right\}_{t=-\infty}^{\infty} \]

Thus, the operator \(L^{n}\) maps one sequence into another.

Let us now examine a polynomial in the lag operator:

\[A(L)=a_{0}+a_{1} L+a_{2} L^{2}+\cdots=\sum_{j=0}^{\infty} a_{j} L^{j}\]

Applying the polynomial \(A(L)\) to the variable \(x_{t}\), we get a moving sum of the values of \(x\) in different time periods:

\[A(L) x_{t}=\sum_{j=0}^{\infty} a_{j} L^{j} x_{t}=\sum_{j=0}^{\infty} a_{j} x_{t-j}\]
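As a small computational sketch (not from the text; the function name and numbers are illustrative), a finite lag polynomial applied to a sequence is just a weighted moving sum:

```python
# Sketch: applying a finite lag polynomial A(L) = a0 + a1 L + a2 L^2
# to a sequence, so that A(L) x_t = sum_j a_j x_{t-j}.

def apply_lag_polynomial(a, x, t):
    """Return A(L) x_t = sum_j a[j] * x[t - j], assuming t - len(a) + 1 >= 0."""
    return sum(a_j * x[t - j] for j, a_j in enumerate(a))

a = [1.0, 0.5, 0.25]          # coefficients a0, a1, a2
x = [2.0, 3.0, 5.0, 7.0]      # x_0, x_1, x_2, x_3
# A(L) x_3 = 1*x_3 + 0.5*x_2 + 0.25*x_1 = 7 + 2.5 + 0.75
print(apply_lag_polynomial(a, x, 3))  # 10.25
```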

Let us confine ourselves to polynomials \(A(L)\) that are rational, that is, those that can be expressed as the ratio of two finite polynomials in \(L\). Assume that

\[A(L)=\frac{B(L)}{C(L)} \tag{3.1}\]

where

\[\begin{aligned} B(L)=\sum_{j=0}^{m} b_{j} L^{j}\\ C(L)=\sum_{j=0}^{n} c_{j} L^{j} \end{aligned} \tag{3.2}\]

and \(b_{j}\) and \(c_{j}\) are constants. The combination of (3.1) and (3.2) imposes a more economical and restrictive form on \(a_{j}\), without serious loss of generality.

A special case of (3.1) and (3.2) is the so-called geometric polynomial, which takes the form

\[A(L)=\frac{1}{1-\lambda L} \tag{3.3}\]

From the properties of geometric progressions, the geometric polynomial can be expanded in two ways:

\[A(L)=\frac{1}{1-\lambda L}=1+\lambda L+\lambda^{2} L^{2}+\cdots \tag{3.4}\]

\[A(L)=\frac{1}{1-\lambda L}=-\frac{1}{\lambda L}\left(1+\frac{1}{\lambda} L^{-1}+\frac{1}{\lambda^{2}} L^{-2}+\cdots\right) \tag{3.5}\]

The expansion (3.4) is used when \(|\lambda|<1\), and the expansion (3.5) when \(|\lambda|>1\).

If we multiply the geometric polynomial (3.3) by some variable \(x_{t}\), we get

\[A(L) x_{t}=\frac{1}{1-\lambda L} x_{t} \tag{3.6}\]

With the expansion (3.4) for \(A(L)\) we get

\[A(L) x_{t}=\frac{1}{1-\lambda L} x_{t}=\sum_{i=0}^{\infty} \lambda^{i} L^{i} x_{t}=\sum_{i=0}^{\infty} \lambda^{i} x_{t-i} \tag{3.7}\]

If \(|\lambda|<1\), and \(\left\{x_{t}\right\}_{t=-\infty}^{\infty}\) is a bounded sequence of real numbers, then (3.7) defines a bounded sequence of real numbers as well.

In contrast, we have the alternative expansion of (3.6). Using (3.5), we get

\[A(L) x_{t}=\frac{1}{1-\lambda L} x_{t}=-(\lambda L)^{-1} \sum_{i=0}^{\infty} \lambda^{-i} L^{-i} x_{t}=-\sum_{i=1}^{\infty} \lambda^{-i} x_{t+i} \tag{3.8}\]

If \(|\lambda|>1\), and \(\left\{x_{t}\right\}_{t=-\infty}^{\infty}\) is a bounded sequence of real numbers, then (3.8) defines a bounded sequence of real numbers as well, because \(\left|\lambda^{-1}\right|<1\).

In economics, because we usually seek convergence to some equilibrium, we focus on bounded sequences. Thus, we select the backward expansion when \(|\lambda|<1\) and the forward expansion when \(|\lambda|>1\).
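As a quick numerical check (a sketch with an assumed constant sequence \(x_t=1\) for all \(t\)), both the backward and the forward expansion of \(1/(1-\lambda L)\) applied to \(x_t\) recover the same value \(1/(1-\lambda)\), each in its own region of convergence:

```python
# Sketch: truncated backward expansion (valid for |lam| < 1) and forward
# expansion (valid for |lam| > 1) of 1/(1 - lam L), applied to x_t = 1.

def backward_sum(lam, n_terms=200):
    # sum_{i>=0} lam^i * x_{t-i}, with x = 1 everywhere
    return sum(lam**i for i in range(n_terms))

def forward_sum(lam, n_terms=200):
    # -sum_{i>=1} lam^(-i) * x_{t+i}, with x = 1 everywhere
    return -sum(lam**(-i) for i in range(1, n_terms + 1))

print(abs(backward_sum(0.5) - 1 / (1 - 0.5)) < 1e-9)   # True
print(abs(forward_sum(2.0) - 1 / (1 - 2.0)) < 1e-9)    # True
```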

A difference equation (or recurrence relation) equates a polynomial in the various iterates of a variable, that is, in the values of the elements of a sequence, to zero.

An \(n\) th-order linear difference equation with constant coefficients takes the form

\[ a_{0} x_{t}+a_{1} x_{t-1}+a_{2} x_{t-2}+\cdots+a_{n} x_{t-n}-b=\sum_{j=0}^{n} a_{j} L^{j} x_{t}-b=0 \]

where \(a_{j}, j=0,1,2, \ldots, n\) and \(b\) are constant coefficients.

By equating the right-hand side of (3.7) to zero, we get

\[\sum_{i=0}^{\infty} \lambda^{i} x_{t-i}=x_{t}+\lambda x_{t-1}+\lambda^{2} x_{t-2}+\cdots=0\]

This is an example of an infinite-order linear difference equation.

3.2 First-Order Linear Difference Equations

Let us first consider the first-order linear difference equation with constant coefficients:

\[x_{t}=a+b x_{t-1} \tag{3.9}\]

Using lag operators, (3.9) can be written as

\[(1-b L) x_{t}=a \tag{3.10}\]

Dividing both sides of (3.10) by \((1-b L)\) and adding \(c b^{t}\), we get

\[x_{t}=\frac{a}{1-b L}+c b^{t}=\frac{a}{1-b}+c b^{t} \tag{3.11}\]

where \(c\) is an arbitrary constant, and we have used the fact that \(L\) applied to the constant \(a\) leaves it unchanged, so \(a /(1-b L)=a /(1-b)\). We include the term \(c b^{t}\) because for any \(c\),

\[(1-b L) c b^{t}=c b^{t}-b c b^{t-1}=0\]

Hence, if we multiply (3.11) by \((1-b L)\), we get back (3.10). Equation (3.11) determines the general solution of the linear first-order difference equation (3.9).

To find a particular solution, we must determine \(c\). Assume that in period \(t=0\), \(x\) took the value \(x_{0}\). From (3.11), it follows that

\[c=x_{0}-\frac{a}{1-b}\]

Thus, the particular solution of (3.9) is given by

\[x_{t}=\frac{a}{1-b}+\left(x_{0}-\frac{a}{1-b}\right) b^{t} \tag{3.12}\]

If the boundary condition is such that \(x_{0}=a /(1-b)\), then (3.12) implies that

\[x_{t}=x_{0}, \forall t \geq 0\]

Thus, \(a /(1-b)\) can be seen as an equilibrium value. If \(x_{t}=a /(1-b)\), then \(x\) tends to stay at this level.

In addition, if \(|b|<1\), (3.12) implies that for any \(x_{0}\), we have

\[\lim _{t \rightarrow \infty} x_{t}=\frac{a}{1-b} \tag{3.13}\]

Equation (3.13) implies that the difference equation is stable, because \(x\) tends to approach its equilibrium value over time from any initial condition. In this case, the equilibrium value is a stable node.
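As a numerical illustration (a Python sketch with made-up values of \(a\), \(b\), and \(x_0\)), iterating (3.9) directly matches the particular solution (3.12) and converges to \(a/(1-b)\) when \(|b|<1\):

```python
# Sketch: iterate x_t = a + b x_{t-1} and compare with the particular
# solution (3.12), x_t = a/(1-b) + (x0 - a/(1-b)) b^t, for |b| < 1.

a, b, x0 = 2.0, 0.5, 10.0
x_bar = a / (1 - b)                      # equilibrium value, here 4.0

x = x0
for t in range(1, 21):                   # iterate 20 periods
    x = a + b * x
closed_form = x_bar + (x0 - x_bar) * b**20

print(abs(x - closed_form) < 1e-9)       # True: iteration matches (3.12)
print(abs(x - x_bar) < 1e-4)             # True: convergence toward a/(1-b)
```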

If \(|b|>1\), the only path that leads to the equilibrium value is an immediate jump of \(x\) to the equilibrium value \(a /(1-b)\). This solution requires \(c=0\) and \(x_{t}=a /(1-b)\), \(\forall t\). The equilibrium value in this case is a saddle point.

3.3 Second-Order Linear Difference Equations

We next turn to the second-order linear difference equation with constant coefficients, of the form

\[x_{t}=a+b x_{t-1}+c x_{t-2} \tag{3.14}\]

Using the lag operator, (3.14) can be written as

\[ \left(1-b L-c L^{2}\right) x_{t}=a \tag{3.15}\]

Equation (3.15) can be expressed as

\[\left(1-\lambda_{1} L\right)\left(1-\lambda_{2} L\right) x_{t}=a \tag{3.16}\]

where

\[\lambda_{1}+\lambda_{2}=b\]

\[\lambda_{1} \lambda_{2}=-c\]

and \(\lambda_{1}\) and \(\lambda_{2}\) are the two roots of the second-order linear difference equation (3.14).

There are three possible cases, depending on the discriminant of the characteristic equation of (3.14), \(\lambda^{2}-b \lambda-c=0\).

  • Case 1: \(b^{2}>-4 c\). The discriminant is positive, and the roots are real and distinct, taking the form

\[ \begin{aligned} & \lambda_{1}=\frac{b+\sqrt{b^{2}+4 c}}{2} \\ & \lambda_{2}=\frac{b-\sqrt{b^{2}+4 c}}{2} \end{aligned} \]

From (3.16), the general solution of (3.14) takes the form

\[ x_{t}=\frac{a}{\left(1-\lambda_{1}\right)\left(1-\lambda_{2}\right)}+d_{1} \lambda_{1}^{t}+d_{2} \lambda_{2}^{t}=\frac{a}{(1-b-c)}+d_{1} \lambda_{1}^{t}+d_{2} \lambda_{2}^{t} \]

where \(d_{1}\) and \(d_{2}\) are two arbitrary constants. To determine the arbitrary constants, one needs two boundary conditions, depending on the values of the two roots.

As in the case of a first-order difference equation, \(a /(1-b-c)\) can be seen as the equilibrium value of \(x\).

We have convergence to the equilibrium value if \(\left|\lambda_{1}\right|<1\) and \(\left|\lambda_{2}\right|<1\). In this case, the equilibrium value will be a stable node, and to determine the two arbitrary constants, \(d_{1}\) and \(d_{2}\), we need two initial conditions, such as \(x_{0}\) and \(x_{1}\).

If the two roots lie on either side of unity (i.e., if \(\left|\lambda_{1}\right|<1\) and \(\left|\lambda_{2}\right|>1\)), then the equilibrium value will be a saddle point. In this case, to determine the two arbitrary constants \(d_{1}\) and \(d_{2}\), we need one initial and one final condition. The final condition can be none other than the equilibrium value itself. As a result, we shall have convergence to the equilibrium value only if \(d_{2}=0\).

If both roots are greater than unity (i.e., if \(\left|\lambda_{1}\right|>1\) and \(\left|\lambda_{2}\right|>1\) ), then the only solution is the immediate jump of \(x\) to the equilibrium value \(a /(1-b-\) \(c)\). This solution requires \(d_{1}=0, d_{2}=0\), and \(x_{t}=a /(1-b-c)\) for all \(t\).

  • Case 2: \(b^{2}=-4 c\). The discriminant is equal to zero, and we have two equal real roots of the form

\[ \lambda_{1}=\lambda_{2}=\lambda=\frac{b}{2} . \]

The general solution takes the form

\[x_{t}=\frac{a}{(1-b-c)}+d_{1} \lambda^{t}+d_{2} t \lambda^{t}\]

If \(|\lambda|<1\), to determine the two arbitrary constants \(d_{1}\) and \(d_{2}\), we need two initial conditions.

If \(\lambda\) is greater than unity in absolute value (i.e., if \(|\lambda|>1\) ), then the only solution is the immediate jump of \(x\) to the equilibrium value \(a /(1-b-c)\). This solution requires \(d_{1}=0, d_{2}=0\), and \(x_{t}=a /(1-b-c)\) for all \(t\).

  • Case 3: \(b^{2}<-4 c\). The discriminant is negative, and we have two complex roots, which take the form of a pair of complex conjugates: \[ \lambda_{1}=\mu+\nu i \]

\[ \lambda_{2}=\mu-\nu i \]

where \(\mu=\frac{b}{2}\) and \(\nu=\frac{\sqrt{-4 c-b^{2}}}{2}\).

Using De Moivre’s theorem and the Pythagorean theorem, the solution takes the form

\[ x_{t}=\frac{a}{1-b-c}+R^{t}\left(\left(d_{1}+d_{2}\right) \cos (\theta t)+\left(d_{1}-d_{2}\right) \sin (\theta t)\right) \]

where \(R\) and \(\theta\) are defined by

\[ R=\sqrt{\mu^{2}+\nu^{2}}=\sqrt{\frac{b^{2}+\left(-4 c-b^{2}\right)}{4}}=\sqrt{-c} \]

and

\[\cos (\theta)=\frac{\mu}{\sqrt{-c}}=\frac{b}{2 \sqrt{-c}}, \quad \sin (\theta)=\frac{\nu}{\sqrt{-c}}=\frac{\sqrt{-4 c-b^{2}}}{2 \sqrt{-c}}\]

This solution produces oscillations of a periodic nature. The oscillations will be dampened if and only if \(R=\sqrt{-c}<1\), that is, \(|c|<1\). In such a case, there will be cyclical convergence to the equilibrium value. If \(|c|=1\), there will be continuous oscillations of constant amplitude. And if \(|c|>1\), there will be divergent oscillations, unless \(x\) jumps immediately to its equilibrium value.
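The closed forms for Cases 1 and 3 can be checked numerically. The sketch below uses made-up coefficients; in the complex case, the real constants \(e_1\) and \(e_2\) correspond to \(d_1+d_2\) and \((d_1-d_2)i\) in the notation above, and are recovered from the two initial conditions:

```python
import math

# Sketch: check the closed-form solutions of x_t = a + b x_{t-1} + c x_{t-2}
# against direct iteration, for the real-distinct-roots case (Case 1) and
# the complex-roots case (Case 3).

def iterate(a, b, c, x0, x1, T):
    xs = [x0, x1]
    for _ in range(2, T + 1):
        xs.append(a + b * xs[-1] + c * xs[-2])
    return xs

# Case 1: b^2 + 4c > 0, roots lam = (b +/- sqrt(b^2 + 4c)) / 2
a, b, c, x0, x1 = 1.0, 0.9, -0.2, 0.0, 1.0
disc = math.sqrt(b * b + 4 * c)
lam1, lam2 = (b + disc) / 2, (b - disc) / 2        # 0.5 and 0.4
x_bar = a / (1 - b - c)
# Solve x0 - x_bar = d1 + d2 and x1 - x_bar = d1*lam1 + d2*lam2
d2 = ((x1 - x_bar) - lam1 * (x0 - x_bar)) / (lam2 - lam1)
d1 = (x0 - x_bar) - d2
xs = iterate(a, b, c, x0, x1, 14)
case1_ok = abs(xs[14] - (x_bar + d1 * lam1**14 + d2 * lam2**14)) < 1e-9

# Case 3: b^2 + 4c < 0, R = sqrt(-c), theta from cos/sin given in the text
a, b, c, x0, x1 = 1.0, 1.0, -0.5, 0.0, 1.0
R = math.sqrt(-c)
theta = math.atan2(math.sqrt(-4 * c - b * b) / 2, b / 2)
x_bar = a / (1 - b - c)
e1 = x0 - x_bar
e2 = ((x1 - x_bar) / R - e1 * math.cos(theta)) / math.sin(theta)
xs = iterate(a, b, c, x0, x1, 19)
closed = x_bar + R**19 * (e1 * math.cos(19 * theta) + e2 * math.sin(19 * theta))
case3_ok = abs(xs[19] - closed) < 1e-9

print(case1_ok, case3_ok)   # True True
```

In the second example \(|c|=0.5<1\), so the oscillations are damped, consistent with \(R=\sqrt{-c}<1\).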

3.4 A Pair of First-Order Linear Difference Equations

We next turn to a system of two linear first-order difference equations, which together constitute a second-order system. The system is described by

\[ \begin{aligned} & x_{t}=a_{10}+a_{11} x_{t-1}+a_{12} y_{t-1} \\ & y_{t}=a_{20}+a_{21} x_{t-1}+a_{22} y_{t-1} \end{aligned} \tag{3.17}\]

As in the case of a system of two first-order differential equations, the first method of solving this system is the substitution method. We can use the second equation to substitute for \(y_{t-1}\) in the first equation, and thus obtain a second-order difference equation in \(x\) :

\[ x_{t}=a+b x_{t-1}+c x_{t-2} \tag{3.18}\]

where \(a=\left(a_{10}\left(1-a_{22}\right)+a_{12} a_{20}\right), b=a_{11}+a_{22}\), and \(c=-\left(a_{11} a_{22}-a_{12} a_{21}\right)\).

Equation (3.18) has the same form as (3.14) and can be solved as an ordinary second-order linear difference equation with constant coefficients. Making use of the lag operator, the homogeneous equation corresponding to (3.18) can be written as

\[ \left(L^{2}-\frac{a_{11}+a_{22}}{a_{11} a_{22}-a_{12} a_{21}} L+\frac{1}{a_{11} a_{22}-a_{12} a_{21}}\right) x_{t}=0 . \]

The two roots of the polynomial in the lag operator must satisfy the characteristic equation

\[\lambda^{2}-\frac{a_{11}+a_{22}}{a_{11} a_{22}-a_{12} a_{21}} \lambda+\frac{1}{a_{11} a_{22}-a_{12} a_{21}}=0 \tag{3.19}\]

Note that the roots of (3.19) are the reciprocals of the roots \(\lambda_{1}\) and \(\lambda_{2}\) defined in (3.16). By going through the alternative substitutions, a similar second-order difference equation can be obtained for the second variable \(y_{t}\).

Alternatively, one can rewrite the system (3.17) in matrix form as

\[ \binom{x_{t}}{y_{t}}=\left(\begin{array}{ll}a_{11} & a_{12} \\ a_{21} & a_{22}\end{array}\right)\binom{x_{t-1}}{y_{t-1}}+\binom{a_{10}}{a_{20}} \tag{3.20}\]

The homogeneous system corresponding to (3.20), with \(a_{10}=a_{20}=0\), takes the form

\[ \binom{x_{t}}{y_{t}}=\left(\begin{array}{ll}a_{11} & a_{12} \\ a_{21} & a_{22}\end{array}\right)\binom{x_{t-1}}{y_{t-1}} \tag{3.21}\]

Using the lag operator \(L\), (3.21) can be rewritten as

\[ \left(\begin{array}{cc}1-a_{11} L & -a_{12} L \\ -a_{21} L & 1-a_{22} L\end{array}\right)\binom{x_{t}}{y_{t}}=\binom{0}{0} \]

For (3.21) to have a nonzero solution, the matrix in the lag operator must be singular. Therefore, its determinant must be equal to zero. Thus, for such a solution to exist, we must have

\[\operatorname{det}\left(\begin{array}{cc}1-a_{11} L & -a_{12} L \\ -a_{21} L & 1-a_{22} L\end{array}\right)=0\]

This condition implies a polynomial in the lag operator with characteristic equation

\[ \lambda^{2}-\frac{a_{11}+a_{22}}{a_{11} a_{22}-a_{12} a_{21}} \lambda+\frac{1}{a_{11} a_{22}-a_{12} a_{21}}=0 \]

which is identical to (3.19), the characteristic equation of the second-order difference equation (3.18), and will of course give the same solution for the two roots.
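One can verify this numerically. The sketch below (with made-up coefficients) checks that the roots of (3.19) are the reciprocals of the eigenvalues of the coefficient matrix \(\mathbf{A}=\left(\begin{smallmatrix}a_{11} & a_{12}\\ a_{21} & a_{22}\end{smallmatrix}\right)\), which are themselves the roots \(\lambda_1,\lambda_2\) of the difference equation:

```python
import math

# Sketch: roots of the lag-operator characteristic equation (3.19) are the
# reciprocals of the eigenvalues of A = [[a11, a12], [a21, a22]].

a11, a12, a21, a22 = 0.5, 0.2, 0.1, 0.4
tr = a11 + a22                        # trace, = b
det = a11 * a22 - a12 * a21           # determinant, = -c

# Eigenvalues of A solve lam^2 - tr*lam + det = 0
d = math.sqrt(tr * tr - 4 * det)
eig1, eig2 = (tr + d) / 2, (tr - d) / 2       # 0.6 and 0.3

# Roots of (3.19): z^2 - (tr/det) z + 1/det = 0
d2 = math.sqrt((tr / det) ** 2 - 4 / det)
z1, z2 = (tr / det + d2) / 2, (tr / det - d2) / 2

print(abs(1 / z1 - eig2) < 1e-9 and abs(1 / z2 - eig1) < 1e-9)  # True
```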

However, even this solution method becomes unwieldy for higher-order systems when there are more than two variables. It is thus desirable to investigate other solution methods. To do so, it is worth generalizing the system to one of \(n\) first-order linear difference equations.

3.5 A System of n First-Order Linear Difference Equations

Let us consider the following system of \(n\) first-order difference equations. Such systems arise quite often in dynamic macroeconomics:

\[ \begin{aligned} & x_{1, t}=a_{10}+a_{11} x_{1, t-1}+a_{12} x_{2, t-1}+\cdots+a_{1 n} x_{n, t-1} \\ & x_{2, t}=a_{20}+a_{21} x_{1, t-1}+a_{22} x_{2, t-1}+\cdots+a_{2 n} x_{n, t-1} \\ & \vdots \\ & x_{n, t}=a_{n 0}+a_{n 1} x_{1, t-1}+a_{n 2} x_{2, t-1}+\cdots+a_{n n} x_{n, t-1} \end{aligned} \tag{3.22}\]

In matrix form, the system (3.22) can be written as

\[ \left(\begin{array}{c} x_{1, t} \\ \vdots \\ x_{n, t} \end{array}\right)=\left(\begin{array}{ccc} a_{11} & \cdots & a_{1 n} \\ \vdots & \ddots & \vdots \\ a_{n 1} & \cdots & a_{n n} \end{array}\right)\left(\begin{array}{c} x_{1, t-1} \\ \vdots \\ x_{n, t-1} \end{array}\right)+\left(\begin{array}{c} a_{10} \\ \vdots \\ a_{n 0} \end{array}\right) \tag{3.23}\]

By defining the vector of \(x\) as \(\mathbf{x}\), the matrix of multiplicative parameters as \(\mathbf{A}\), and the vector of the constants as \(\mathbf{a}_{0}\), the system can be written as

\[ \mathbf{x}_{t}=\mathbf{A x}_{t-1}+\mathbf{a}_{0} \tag{3.24}\]

If \(a_{i j}=0\) for \(i \neq j\) in the system (3.23), then the \(n\) equations are uncoupled, and the system can be solved as \(n\) independent first-order linear difference equations with solutions

\[ x_{i, t}=\frac{a_{i 0}}{1-a_{i i}}+\left(x_{i, 0}-\frac{a_{i 0}}{1-a_{i i}}\right)\left(a_{i i}\right)^{t} \]

where \(x_{i, 0}\) is a boundary value for \(x_{i}\).

Thus, if we could transform the system (3.23) into one with a coefficient matrix that is diagonal, we could easily calculate the solution. The question is how to transform the system into one with a diagonal coefficient matrix.

We know from the properties of matrices that the coefficient matrix \(\mathbf{A}\) can be transformed as

\[ \mathbf{A}=\mathbf{P J P}^{-1} \tag{3.25}\]

where \(\mathbf{J}\) is a diagonal matrix with the eigenvalues of \(\mathbf{A}\) on its diagonal, and \(\mathbf{P}\) is a matrix whose columns are the corresponding (right) eigenvectors, provided that \(\mathbf{A}\) has \(n\) linearly independent eigenvectors (i.e., is diagonalizable). Equation (3.25) implies that

\[\mathbf{A P}=\mathbf{P J}\]

We can use these properties to rewrite the system (3.24) as

\[ \mathbf{x}_{t}=\mathbf{A x}_{t-1}+\mathbf{a}_{0}=\mathbf{P J P}^{-1} \mathbf{x}_{t-1}+\mathbf{a}_{0} \tag{3.26}\]

Multiplying both sides of (3.26) by \(\mathbf{P}^{-1}\), we get

\[ \mathbf{P}^{-1} \mathbf{x}_{t}=\mathbf{J P}^{-1} \mathbf{x}_{t-1}+\mathbf{P}^{-1} \mathbf{a}_{0} \]

Defining a new vector of variables \(\mathbf{X}_{t}=\mathbf{P}^{-1} \mathbf{x}_{t}\), and a new vector of constants \(\mathbf{A}_{0}=\mathbf{P}^{-1} \mathbf{a}_{0}\), we can rewrite (3.24) as

\[\mathbf{X}_{t}=\mathbf{J} \mathbf{X}_{t-1}+\mathbf{A}_{0}\]

Thus, by using the matrix consisting of the eigenvectors of the original coefficient matrix \(\mathbf{A}\) to define new variables and new constants, we can transform the original coupled system of difference equations to a system of decoupled difference equations in the newly defined variables, as in the case of differential equations. The decoupled system can be solved for each of the transformed variables in \(\mathbf{X}\). We can then find the solutions for the original vector of variables by using the reverse transformation

\[\mathbf{x}_{t}=\mathbf{P} \mathbf{X}_{t}\]

By using the diagonal matrix of the eigenvalues of \(\mathbf{A}\) and the matrix of the corresponding (right) eigenvectors, we can solve the system of \(n\) first-order linear difference equations (3.22). The solution method is similar in spirit to the one for a system of \(n\) first-order differential equations discussed in Chapter 2.
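The decoupling procedure can be sketched numerically (made-up coefficients; this assumes \(\mathbf{A}\) is diagonalizable, as above): diagonalize \(\mathbf{A}\), iterate the decoupled system in the transformed variables, map back with \(\mathbf{P}\), and compare with direct iteration of (3.24):

```python
import numpy as np

# Sketch: solve x_t = A x_{t-1} + a0 by diagonalizing A = P J P^{-1},
# decoupling into X_t = J X_{t-1} + A0 with X_t = P^{-1} x_t,
# then mapping back via x_t = P X_t.

A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
a0 = np.array([1.0, 2.0])
x0 = np.array([0.0, 0.0])

eigvals, P = np.linalg.eig(A)          # J = diag(eigvals)
P_inv = np.linalg.inv(P)

X = P_inv @ x0                          # transformed variables X_0
A0 = P_inv @ a0                         # transformed constants
for _ in range(30):
    X = eigvals * X + A0                # decoupled: each row evolves alone

x_decoupled = P @ X                     # reverse transformation x_t = P X_t

x = x0.copy()
for _ in range(30):
    x = A @ x + a0                      # direct iteration, for comparison

print(np.allclose(x_decoupled, x))      # True
```

Since both eigenvalues of this \(\mathbf{A}\) lie inside the unit circle, both iterations also converge to the equilibrium \((\mathbf{I}-\mathbf{A})^{-1}\mathbf{a}_0\).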

Important

Keep in mind that the stability conditions for discrete- and continuous-time systems are different. In continuous time, the eigenvalues \(\lambda_i\) are compared to zero (i.e., a negative eigenvalue is stable, a positive eigenvalue is unstable). In contrast, in discrete-time systems, the eigenvalues \(\lambda_i\) are compared to unity in absolute value (i.e., a stable eigenvalue lies within the unit circle, an unstable eigenvalue lies outside it).

3.6 Nonlinear Difference Equations

We are now interested in problems of the form: \[ x_{t+1}=f(x_t) \] where \(x_t\) is a vector \(x_t=(x_{1,t},x_{2,t},\ldots,x_{n,t})\) and \(f\) is shorthand notation for \(n\) different functions. Just as in the case of differential equations, we can approximate the system in the neighborhood of a steady state \(\bar{x}\) satisfying \(\bar{x}=f(\bar{x})\). Taking a first-order expansion:

\[ x_{t+1}=\underbrace{f(\bar{x})}_{=\bar{x}}+J(x_t-\bar{x}) \]

where \(J\) is the Jacobian matrix with entries \(\partial f_{i} / \partial x_{j}\), \(i, j=1,\ldots,n\), evaluated at \(\bar{x}\).

Treating \(dx_t=x_t-\bar{x}\) and \(dx_{t+1}=x_{t+1}-\bar{x}\) as the new variables of interest, we obtain a system of \(n\) first-order linear difference equations and can use the techniques from Section 3.5.
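A one-dimensional sketch (a hypothetical map, not from the text): the map \(x_{t+1}=\sqrt{x_t}\) has steady state \(\bar{x}=1\) with \(f'(\bar{x})=1/2\), so the linearized deviation dynamics should track the nonlinear map near \(\bar{x}\):

```python
import math

# Sketch: x_{t+1} = f(x_t) = sqrt(x_t) has steady state x_bar = 1
# with f'(x_bar) = 1/2. Near x_bar, the linearized dynamics
# dx_{t+1} = f'(x_bar) dx_t should approximate the nonlinear map.

f = math.sqrt
x_bar = 1.0
fprime = 0.5                     # f'(x) = 1/(2 sqrt(x)) evaluated at x = 1

x = 1.1                          # start close to the steady state
dx = x - x_bar
for _ in range(5):
    x = f(x)                     # nonlinear iteration
    dx = fprime * dx             # linear approximation of the deviation

print(abs(fprime) < 1)                   # True: |f'(x_bar)| < 1, locally stable
print(abs((x - x_bar) - dx) < 1e-3)      # True: linearization tracks the map
```

Here \(|f'(\bar{x})|<1\), so the steady state is locally stable by the discrete-time criterion of the note above.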

3.7 Qualitative Analysis

It is possible to extract some qualitative information about such systems. We focus on a generic pair of linear difference equations of the form:

\[\begin{aligned} x_{t+1}=\tilde{a}x_t+by_t-c \\ y_{t+1}= d x_t-\tilde{e}y_t +f \end{aligned}\]

Defining \(\Delta x_{t+1}=x_{t+1}-x_t\) and \(\Delta y_{t+1}=y_{t+1}-y_t\), the above system can be rewritten as:

\[\begin{aligned} \Delta x_{t+1}=ax_t+by_t-c \\ \Delta y_{t+1}= d x_t-ey_t +f \end{aligned}\]

where \(a=\tilde{a}-1\) and \(e=1+\tilde{e}\). For concreteness, assume that all parameters \(a,b,c,d,e,f\) are positive. We can then draw the isoclines \(\Delta x_{t+1}=0\) and \(\Delta y_{t+1}=0\) in the \(\{x_t,y_t\}\) plane, that is, the loci:

\[\begin{aligned} \Delta x_{t+1}=0 &:& \tilde{y}=\frac{-a x+c}{b} \\ \Delta y_{t+1}=0&:& \hat{y}=\frac{ dx+f}{e} \end{aligned}\]

The next step is to characterize the vector field, that is, the qualitative changes of \(x_t\) and \(y_t\) for every pair \(\{x_t,y_t\}\) outside the isoclines. For \(\Delta x_{t+1}\) and \(\Delta y_{t+1}\), we have:

\[\begin{aligned} \Delta x_{t+1}\geq 0 \Leftrightarrow y_t \geq \frac{-a x_t+c}{b} \equiv \tilde{y} \\ \Delta y_{t+1} \geq 0 \Leftrightarrow y_t \leq \frac{ dx_t+f}{e} \equiv \hat{y} \end{aligned}\]

On the one hand, for every pair \(\{x_t,y_t \}\) located to the right of (left of) the locus \(\tilde{y}\), the value of \(x_t\) increases (decreases), moving the system to the right (left). On the other hand, for every pair \(\{x_t,y_t \}\) located below (above) the locus \(\hat{y}\), the value of \(y_t\) increases (decreases), moving the system upward (downward).
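This phase-diagram reading can be sketched in code (illustrative positive parameter values, using the sign conventions \(\Delta x_{t+1}=ax_t+by_t-c\) and \(\Delta y_{t+1}=dx_t-ey_t+f\)):

```python
# Sketch: direction of motion implied by the vector field at a given
# point of the (x, y) plane, as in a phase-diagram reading.

a, b, c, d, e, f = 1.0, 1.0, 1.0, 1.0, 2.0, 1.0

def direction(x, y):
    dx = a * x + b * y - c              # Delta x_{t+1}
    dy = d * x - e * y + f              # Delta y_{t+1}
    return ("right" if dx > 0 else "left"), ("up" if dy > 0 else "down")

# (2, 1) lies above y-tilde and below y-hat: x and y both increase
print(direction(2.0, 1.0))   # ('right', 'up')
# (0, 0) lies below y-tilde and below y-hat: x decreases, y increases
print(direction(0.0, 0.0))   # ('left', 'up')
```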