
An Introduction to Dynamical Systems - 2nd Edition


If ẋ = Ax₀ = 0 for some nonzero point x₀, then x₀ is an eigenvector for the eigenvalue 0.

2.2.1. Complex Eigenvalues. To be able to consider complex eigenvalues, we
need to understand the exponential of a complex number. By comparing power
series expansions, we can see that

e^{iβt} = cos(βt) + i sin(βt)   and

e^{(α+iβ)t} = e^{αt} e^{iβt} = e^{αt} ( cos(βt) + i sin(βt) ).

The next theorem shows how to use these formulas to find two real solutions for a complex eigenvalue.

Theorem 2.4. Let A be an n × n matrix with constant real entries.

a. Assume that z(t) = x(t) + i y(t) is a complex solution of ż = Az, where x(t) and y(t) are real. Then, x(t) and y(t) are each real solutions of the equation.

b. In particular, if λ = α + iβ is a complex eigenvalue with a complex eigenvector v = u + iw, where α and β are real numbers and u and w are real vectors, then

e^{αt} ( cos(βt) u − sin(βt) w )   and   e^{αt} ( sin(βt) u + cos(βt) w )

are two real solutions of ẋ = Ax.

Proof. Part (a) follows directly from the rules of differentiation and matrix multiplication:

ẋ(t) + i ẏ(t) = ż(t) = A z(t) = A ( x(t) + i y(t) ) = A x(t) + i A y(t).

By equating the real and imaginary parts, we get ẋ(t) = A x(t) and ẏ(t) = A y(t), which gives the first part of the theorem.

(b) The complex solution

e^{(α+iβ)t} ( u + i w ) = e^{αt} ( cos(βt) + i sin(βt) ) ( u + i w )
= e^{αt} ( cos(βt) u − sin(βt) w ) + i e^{αt} ( sin(βt) u + cos(βt) w )

can be written as the sum of a real function plus a purely imaginary function. By part (a), the real and the imaginary parts are each real solutions, giving the result claimed. □
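As a quick numerical illustration of the theorem (a sketch of my own using numpy, not code from the text), one can extract u and w from a complex eigenpair and check that e^{αt}(cos(βt)u − sin(βt)w) satisfies ẋ = Ax; the matrix below is the one from Example 2.12 that follows.

    import numpy as np

    A = np.array([[0.0, 4.0], [-1.0, 0.0]])    # matrix of Example 2.12 below
    lam, V = np.linalg.eig(A)                  # complex eigenvalues and eigenvectors
    alpha, beta = lam[0].real, lam[0].imag     # lambda = alpha + i beta
    u, w = V[:, 0].real, V[:, 0].imag          # eigenvector v = u + i w

    def x1(t):
        # real solution e^{alpha t} (cos(beta t) u - sin(beta t) w) of Theorem 2.4(b)
        return np.exp(alpha * t) * (np.cos(beta * t) * u - np.sin(beta * t) * w)

    # check dx/dt = A x with a centered difference at a sample time
    t, h = 0.7, 1e-6
    deriv = (x1(t + h) - x1(t - h)) / (2 * h)
    print(np.allclose(deriv, A @ x1(t), atol=1e-5))    # True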

Example 2.12 (Elliptic Center). Consider the differential equation

ẋ = \begin{pmatrix} 0 & 4 \\ -1 & 0 \end{pmatrix} x.

The characteristic equation is λ² + 4 = 0, and the eigenvalues are λ = ±2i. Using the eigenvalue 2i, we have

A − (2i)I = \begin{pmatrix} -2i & 4 \\ -1 & -2i \end{pmatrix}.


The two rows are (complex) multiples of each other, so the eigenvector satisfies the equation

−2i v₁ + 4 v₂ = 0,

and an eigenvector is v = \begin{pmatrix} 2 \\ i \end{pmatrix}. Using the eigenvector, we get the two real solutions

x¹(t) = cos(2t) \begin{pmatrix} 2 \\ 0 \end{pmatrix} − sin(2t) \begin{pmatrix} 0 \\ 1 \end{pmatrix}   and
x²(t) = sin(2t) \begin{pmatrix} 2 \\ 0 \end{pmatrix} + cos(2t) \begin{pmatrix} 0 \\ 1 \end{pmatrix},

which have initial conditions (2, 0)^T and (0, 1)^T at t = 0, respectively. Notice that

det \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} = 2 ≠ 0,

so the solutions are linearly independent. Both of these solutions are periodic with period T = 2π/2 = π, so the solution comes back to the same point after a time of π. The solution moves on ellipses with axes twice as long in the x₁-direction as in the x₂-direction. When x₁ = 0 and x₂ > 0, ẋ₁ = 4x₂ > 0, so the solution goes around in a clockwise direction. Such an example, with purely imaginary eigenvalues, is called an elliptic center. See Figure 6. The plot of (t, x¹₁(t)) for the solution x¹(t) is given in Figure 7. Notice that the component x¹₁(t) is a periodic function of t and that the period does not depend on the amplitude.

Figure 6. Phase portrait for Example 2.12 with an elliptic center
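A short check of the period (my own sketch using scipy, with the matrix of the example; not code from the text): integrating for time π should return the solution to its initial condition.

    import numpy as np
    from scipy.integrate import solve_ivp

    A = np.array([[0.0, 4.0], [-1.0, 0.0]])
    sol = solve_ivp(lambda t, x: A @ x, (0.0, np.pi), [2.0, 0.0],
                    rtol=1e-10, atol=1e-12)
    print(sol.y[:, -1])    # approximately (2, 0) again: the period is T = 2*pi/2 = pi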

Example 2.13 (Stable Focus). This example has complex eigenvalues with negative real parts. Consider

ẋ = \begin{pmatrix} -4 & 5 \\ -5 & 2 \end{pmatrix} x.

The characteristic equation is λ² + 2λ + 17 = 0, and the eigenvalues are λ = −1 ± 4i. Using the eigenvalue −1 + 4i, we have

A − (−1 + 4i)I = \begin{pmatrix} -3-4i & 5 \\ -5 & 3-4i \end{pmatrix}.


Figure 7. Plot of x₁ as a function of t, initial conditions (1, 0) and (2, 0): Example 2.12 with an elliptic center

Multiplying the first row by −3 + 4i, which is the complex conjugate of −3 − 4i, yields (25, −15 + 20i), which is a multiple of the second row. Therefore, the eigenvector satisfies the equation

5 v₁ + (−3 + 4i) v₂ = 0,

and an eigenvector is

v = \begin{pmatrix} 3-4i \\ 5 \end{pmatrix} = \begin{pmatrix} 3 \\ 5 \end{pmatrix} + i \begin{pmatrix} -4 \\ 0 \end{pmatrix}.

A complex solution of the system is given by

e^{(−1+4i)t} v = e^{-t} ( cos(4t) + i sin(4t) ) \left[ \begin{pmatrix} 3 \\ 5 \end{pmatrix} + i \begin{pmatrix} -4 \\ 0 \end{pmatrix} \right]
= e^{-t} \left[ cos(4t) \begin{pmatrix} 3 \\ 5 \end{pmatrix} − sin(4t) \begin{pmatrix} -4 \\ 0 \end{pmatrix} \right] + i e^{-t} \left[ sin(4t) \begin{pmatrix} 3 \\ 5 \end{pmatrix} + cos(4t) \begin{pmatrix} -4 \\ 0 \end{pmatrix} \right].

Taking the real and imaginary parts, the two real solutions are

x¹(t) = e^{-t} \left( cos(4t) \begin{pmatrix} 3 \\ 5 \end{pmatrix} − sin(4t) \begin{pmatrix} -4 \\ 0 \end{pmatrix} \right)   and
x²(t) = e^{-t} \left( sin(4t) \begin{pmatrix} 3 \\ 5 \end{pmatrix} + cos(4t) \begin{pmatrix} -4 \\ 0 \end{pmatrix} \right).

The initial conditions of the two solutions are

x¹(0) = \begin{pmatrix} 3 \\ 5 \end{pmatrix}   and   x²(0) = \begin{pmatrix} -4 \\ 0 \end{pmatrix},

and

det \begin{pmatrix} 3 & -4 \\ 5 & 0 \end{pmatrix} = 20 ≠ 0,


so the solutions are independent. The sine and cosine terms have period 2π/4 = π/2; but the exponential factor decreases for this example as t increases and contracts by e^{−π/2} every revolution around the origin. The solutions tend asymptotically toward the origin as t goes to infinity. When x₁ = 0 and x₂ > 0, ẋ₁ = 5x₂ > 0, so the solutions go around in a clockwise direction. This example, with a negative real part and nonzero imaginary part of the eigenvalue, is called a stable focus. It is stable because solutions tend to the origin as t goes to infinity, and it is a focus because the solutions spiral. See Figure 8. The plot of (t, x¹₁(t)) for the solution x¹(t) is given in Figure 9. Notice that the component x¹₁(t) oscillates as a function of t as it goes to zero.

Figure 8. Phase portrait for Example 2.13 with a stable focus

Table 3 summarizes the procedure for drawing the phase portrait for a linear system with complex eigenvalues in two dimensions.

Example 2.14. For an example in ℝ³, consider

ẋ = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 1 & -1 \\ -1 & 4 & -2 \end{pmatrix} x.

The characteristic equation is

0 = λ³ + λ² + 3λ − 5 = (λ − 1)(λ² + 2λ + 5),

and the eigenvalues are λ = 1, −1 ± 2i.

Using the eigenvalue 1, we have the row reduction

A − I = \begin{pmatrix} -1 & 0 & 1 \\ 1 & 0 & -1 \\ -1 & 4 & -3 \end{pmatrix} ∼ \begin{pmatrix} 1 & 0 & -1 \\ 0 & 4 & -4 \\ 0 & 0 & 0 \end{pmatrix} ∼ \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix},


Figure 9. Plot of x₁ versus t, initial condition (x₁(0), x₂(0)) = (0, 3): Example 2.13 with a stable focus

Phase portrait for a pair of complex eigenvalues

Assume the eigenvalues are λ = α ± iβ with β ≠ 0.

(1) If α = 0, then the origin is an elliptic center, with all the solutions periodic. The direction of motion can be either clockwise or counterclockwise.

(2) If α < 0, then the origin is a stable focus, which spirals either clockwise or counterclockwise.

(3) If α > 0, then the solutions spiral outward and the origin is an unstable focus, which spirals either clockwise or counterclockwise.

(4) In any of the three cases, the direction the solution goes around the origin can be determined by checking whether ẋ₁ is positive or negative when x₁ = 0. If it is positive, then the direction is clockwise, and if it is negative, then the direction is counterclockwise.

Table 3

and an eigenvector is (1, 1, 1)^T.

Using the eigenvalue −1 + 2i, we have

A − (−1 + 2i)I = \begin{pmatrix} 1-2i & 0 & 1 \\ 1 & 2-2i & -1 \\ -1 & 4 & -1-2i \end{pmatrix}.

Interchanging the first and third rows and multiplying the new third row by the complex conjugate of 1 − 2i (i.e., by 1 + 2i) to make the first entry of the third row


Figure 10. Three views of the phase portrait for Example 2.14: (a) a three-dimensional view of all the directions, (b) down the unstable direction onto the stable directions, and (c) one stable and the unstable direction. The initial conditions include points on the line spanned by (1, 1, 1), on the plane spanned by (1, −1, −5) and (2, −2, 0), and points slightly off the plane.

real, we get

A − (−1 + 2i)I ∼ \begin{pmatrix} -1 & 4 & -1-2i \\ 1 & 2-2i & -1 \\ 5 & 0 & 1+2i \end{pmatrix}.

Performing the row operations to make the entries of the first column, except for the top one, equal to zero, we get

\begin{pmatrix} -1 & 4 & -1-2i \\ 0 & 6-2i & -2-2i \\ 0 & 20 & -4-8i \end{pmatrix} ∼ \begin{pmatrix} 1 & -4 & 1+2i \\ 0 & 3-i & -1-i \\ 0 & 5 & -1-2i \end{pmatrix},


where in the second step we multiply the first row by −1, the second row by 1/2, and the third row by 1/4. To make the first entry in the second row real, we multiply the second row by the complex conjugate of that entry, 3 + i, resulting in

\begin{pmatrix} 1 & -4 & 1+2i \\ 0 & 10 & -2-4i \\ 0 & 5 & -1-2i \end{pmatrix}.

Performing further row operations, we get the following sequence of matrices:

\begin{pmatrix} 1 & -4 & 1+2i \\ 0 & 5 & -1-2i \\ 0 & 0 & 0 \end{pmatrix} ∼ \begin{pmatrix} 5 & 0 & 1+2i \\ 0 & 5 & -1-2i \\ 0 & 0 & 0 \end{pmatrix}.

One eigenvector is

\begin{pmatrix} 1+2i \\ -1-2i \\ -5 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \\ -5 \end{pmatrix} + i \begin{pmatrix} 2 \\ -2 \\ 0 \end{pmatrix}.

Combining, three independent solutions are

e^{t} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},   e^{-t} cos(2t) \begin{pmatrix} 1 \\ -1 \\ -5 \end{pmatrix} − e^{-t} sin(2t) \begin{pmatrix} 2 \\ -2 \\ 0 \end{pmatrix},   and
e^{-t} sin(2t) \begin{pmatrix} 1 \\ -1 \\ -5 \end{pmatrix} + e^{-t} cos(2t) \begin{pmatrix} 2 \\ -2 \\ 0 \end{pmatrix}.

The initial conditions of these three solutions are

\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},   \begin{pmatrix} 1 \\ -1 \\ -5 \end{pmatrix},   and   \begin{pmatrix} 2 \\ -2 \\ 0 \end{pmatrix}.

Since

det \begin{pmatrix} 1 & 1 & 2 \\ 1 & -1 & -2 \\ 1 & -5 & 0 \end{pmatrix} = −20 ≠ 0,

the solutions are independent.

See Figure 10 for three different views of the phase portrait.
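The classification in Table 3 is easy to automate. The following is a small sketch of my own in Python (not code from the text) that applies the table to the 2 × 2 examples above; the direction test checks the sign of ẋ₁ at the point (0, 1).

    import numpy as np

    def classify_complex(A):
        lam = np.linalg.eigvals(A)
        assert abs(lam[0].imag) > 1e-12, "expects a complex pair of eigenvalues"
        alpha = lam[0].real
        kind = ("elliptic center" if abs(alpha) < 1e-12
                else "stable focus" if alpha < 0 else "unstable focus")
        # at (x1, x2) = (0, 1), x1' = A[0, 1]; positive means clockwise motion
        direction = "clockwise" if A[0, 1] > 0 else "counterclockwise"
        return kind, direction

    print(classify_complex(np.array([[0.0, 4.0], [-1.0, 0.0]])))   # Example 2.12
    print(classify_complex(np.array([[-4.0, 5.0], [-5.0, 2.0]])))  # Example 2.13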

2.2.2. Repeated Real Eigenvalues.

Example 2.15 (Enough eigenvectors for a repeated eigenvalue). As a first example with repeated eigenvalues, consider

ẋ = \begin{pmatrix} -2 & -2 & -4 \\ 0 & 0 & 4 \\ 0 & 2 & 2 \end{pmatrix} x.

The characteristic equation is 0 = λ³ − 12λ − 16 = (λ − 4)(λ + 2)². Therefore, −2 is a repeated eigenvalue with multiplicity two. The matrix

A + 2I = \begin{pmatrix} 0 & -2 & -4 \\ 0 & 2 & 4 \\ 0 & 2 & 4 \end{pmatrix}


is row reducible to

\begin{pmatrix} 0 & 1 & 2 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.

This matrix has rank one and, therefore, 3 − 1 = 2 independent eigenvectors: v₂ = −2v₃, with v₁ and v₃ as free variables, or eigenvectors

\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}   and   \begin{pmatrix} 0 \\ -2 \\ 1 \end{pmatrix}.

For this example, λ = −2 has as many independent eigenvectors as the multiplicity from the characteristic equation.

The eigenvalue λ = 4 has an eigenvector \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}. Therefore, the general solution is

x(t) = c₁ e^{-2t} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} + c₂ e^{-2t} \begin{pmatrix} 0 \\ -2 \\ 1 \end{pmatrix} + c₃ e^{4t} \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}.

Notice that, in this case, the solution looks very much like the case for distinct
eigenvalues.

Example 2.16 (Not enough eigenvectors). As an example with not enough eigenvectors, consider

ẋ = \begin{pmatrix} 0 & -1 & 1 \\ 2 & -3 & 1 \\ 1 & -1 & -1 \end{pmatrix} x.

The characteristic equation is 0 = (λ + 1)²(λ + 2), so λ = −1 is a repeated eigenvalue.

For the eigenvalue λ = −2, the matrix is

A + 2I = \begin{pmatrix} 2 & -1 & 1 \\ 2 & -1 & 1 \\ 1 & -1 & 1 \end{pmatrix} ∼ \begin{pmatrix} 1 & -1 & 1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix},

and −2 has an eigenvector (0, 1, 1)^T.

For the eigenvalue λ = −1, the matrix is

A − (−1)I = \begin{pmatrix} 1 & -1 & 1 \\ 2 & -2 & 1 \\ 1 & -1 & 0 \end{pmatrix} ∼ \begin{pmatrix} 1 & -1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix},

which has rank two. Therefore, −1 has only one independent eigenvector, (1, 1, 0)^T. So far we have found only two independent solutions,

e^{-2t} \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}   and   e^{-t} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}.


We will next discuss how to find a third independent solution in a general situation, and then we will return to the preceding example.

Let λ be a multiple eigenvalue for A with an eigenvector v, Av = λv. As we discussed earlier, e^{At} w is a solution for any vector w, using the matrix exponential. We want to rewrite this solution somewhat for special choices of w. If two matrices A and B commute (AB = BA), then

e^{(A+B)t} = e^{At} e^{Bt}.

This can be shown by multiplying the series and rearranging terms. See Theorem 2.13 at the end of the chapter. In our situation, (A − λI)(λI) = (λI)(A − λI), since λI is a scalar multiple of the identity, so

e^{At} w = e^{(λI + (A−λI))t} w = e^{λIt} e^{(A−λI)t} w = e^{λt} e^{(A−λI)t} w
= e^{λt} ( I w + t (A − λI) w + (t²/2)(A − λI)² w + ··· ).

If we have w such that (A − λI)w = v, where v is the eigenvector for the eigenvalue λ, then

(A − λI)² w = (A − λI) v = 0,   so
(A − λI)ⁿ w = (A − λI)^{n−1} v = 0   for n ≥ 2.

Therefore, the infinite series for e^{At} w is actually finite, and we get a second solution,

x²(t) = e^{λt} ( w + t v ).

Note the similarity to the second solution t e^{rt} of a second-order scalar equation for a repeated root of the characteristic equation.

In fact, a direct check shows that x²(t) = e^{λt}(w + tv) is a solution. We use the facts that Av = λv and Aw = λw + v to obtain the following:

ẋ²(t) = λ e^{λt} ( w + t v ) + e^{λt} v
= e^{λt} ( λw + v ) + e^{λt} t λ v
= e^{λt} A w + e^{λt} t A v
= A e^{λt} ( w + t v )
= A x²(t).

Therefore, for an eigenvalue λ which has multiplicity two but only one independent eigenvector v, we solve (A − λI)v = 0 for an eigenvector v. Then, we solve the equation (A − λI)w = v for w. Such a vector w is called a generalized eigenvector for the eigenvalue λ. Two solutions of the linear differential equation are e^{λt} v and e^{λt}(w + tv). Note that, if the multiplicity is greater than two, then more complicated situations can arise, but the general idea is the same.
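Numerically, the equation (A − λI)w = v can be solved with a least-squares routine even though A − λI is singular; the following is a sketch of my own (using the matrix of Example 2.16, but not code from the text).

    import numpy as np

    A = np.array([[0.0, -1.0, 1.0], [2.0, -3.0, 1.0], [1.0, -1.0, -1.0]])
    lam = -1.0
    v = np.array([1.0, 1.0, 0.0])                  # eigenvector for lambda = -1
    B = A - lam * np.eye(3)
    w, *_ = np.linalg.lstsq(B, v, rcond=None)      # solves (A - lam I) w = v
    print(np.allclose(B @ w, v))                   # True: w is a generalized eigenvector

    def x2(t):
        # the second solution e^{lam t} (w + t v)
        return np.exp(lam * t) * (w + t * v)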

Returning to Example 2.16, for the repeated eigenvalue λ = −1,

A + I = \begin{pmatrix} 1 & -1 & 1 \\ 2 & -2 & 1 \\ 1 & -1 & 0 \end{pmatrix},


and an eigenvector is v = (1, 1, 0)^T. We also need to solve the nonhomogeneous equation (A + I)w = v. Separating the matrix A + I from the vector v by a vertical line, the augmented matrix is

\left( \begin{array}{ccc|c} 1 & -1 & 1 & 1 \\ 2 & -2 & 1 & 1 \\ 1 & -1 & 0 & 0 \end{array} \right) ∼ \left( \begin{array}{ccc|c} 1 & -1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{array} \right).

Therefore, a generalized eigenvector is w = (0, 0, 1)^T, and another solution of the differential equation is

e^{-t} \left[ \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} + t \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \right] = e^{-t} \begin{pmatrix} t \\ t \\ 1 \end{pmatrix}.
We have now found three solutions, and the general solution is

x(t) = c₁ e^{-2t} \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix} + c₂ e^{-t} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + c₃ e^{-t} \begin{pmatrix} t \\ t \\ 1 \end{pmatrix}.

The determinant of the matrix formed by putting the initial conditions of the three solutions at t = 0 in as columns is nonzero,

det \begin{pmatrix} 0 & 1 & 0 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix} = −1 ≠ 0,

so the three solutions are independent. ∎

Example 2.17 (Degenerate Stable Node). A two-dimensional example with a repeated eigenvalue is given by

ẋ = \begin{pmatrix} -2 & 1 \\ -1 & 0 \end{pmatrix} x.

This has characteristic equation 0 = λ² + 2λ + 1 = (λ + 1)², and so has the repeated eigenvalue −1. The matrix

A + I = \begin{pmatrix} -1 & 1 \\ -1 & 1 \end{pmatrix}

has rank one, and only one independent eigenvector, (1, 1)^T. To solve the equation

(A + I)w = v,

we consider the augmented matrix

\left( \begin{array}{cc|c} -1 & 1 & 1 \\ -1 & 1 & 1 \end{array} \right) ∼ \left( \begin{array}{cc|c} -1 & 1 & 1 \\ 0 & 0 & 0 \end{array} \right),

or −w₁ + w₂ = 1. Thus, a generalized eigenvector is given by w₁ = 0 and w₂ = 1, w = (0, 1)^T. The second solution is

x²(t) = e^{-t} \left[ \begin{pmatrix} 0 \\ 1 \end{pmatrix} + t \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right] = e^{-t} \begin{pmatrix} t \\ 1+t \end{pmatrix},


and the general solution is

x(t) = c₁ e^{-t} \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c₂ e^{-t} \begin{pmatrix} t \\ 1+t \end{pmatrix}.

The first component of the second solution is x²₁(t) = t e^{-t}. This term goes to zero as t goes to infinity, because the exponential goes to zero faster than t goes to infinity. Alternatively, apply l'Hôpital's rule to t/e^t and see that it goes to zero. In the same way, the second component x²₂(t) = e^{-t} + t e^{-t} goes to zero as t goes to infinity. Combining, x²(t) goes to the origin as t goes to infinity. Moreover,

(e^{t}/t) x²(t) = (1/t) \begin{pmatrix} 0 \\ 1 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}   approaches   \begin{pmatrix} 1 \\ 1 \end{pmatrix}

as t goes to infinity, so the solution comes in to the origin in a direction asymptotic to the line generated by the eigenvector.

Figure 11. Phase portrait for Example 2.17 with a degenerate stable node

Clearly, x¹(t) tends to the origin as t goes to infinity, so any solution, which is a linear combination of the two independent solutions, goes to the origin as t goes to infinity. However, in this case, there is only one solution that moves along a straight line. All other solutions approach the origin in a direction asymptotic to the line generated by the eigenvector. This system is called a degenerate stable node. See Figure 11. The plot of (t, x₁(t)) for c₁ = 2 and c₂ = −2 is given in Figure 12. This solution has initial conditions x₁(0) = 2 and x₂(0) = 0.

Table 4 summarizes the procedure for drawing the phase portrait for a stable
linear system with real equal eigenvalues in two dimensions. Table 5 summarizes
the process for drawing the phase portrait for any linear system in two dimensions.

Figure 12. Plot of x₁ versus t for c₁ = 2 and c₂ = −2 for Example 2.17 with a degenerate stable node

Figure 13. Phase portrait with a stable star

Example 2.18 (Multiplicity Three). For the situation with greater multiplicity in higher dimensions, there are many cases; we give just one example as an illustration:

ẋ = \begin{pmatrix} -2 & 1 & 0 \\ 0 & -2 & 1 \\ 0 & 0 & -2 \end{pmatrix} x.

This has characteristic equation 0 = (λ + 2)³, and so has eigenvalue λ = −2 with multiplicity three. The vector v = (1, 0, 0)^T is an eigenvector, (A + 2I)v = 0; the generalized eigenvector w = (0, 1, 0)^T satisfies

(A + 2I)w = v   and   (A + 2I)²w = (A + 2I)v = 0;

finally, the generalized eigenvector z = (0, 0, 1)^T satisfies

(A + 2I)z = w,   (A + 2I)²z = (A + 2I)w = v,   and   (A + 2I)³z = (A + 2I)²w = 0.


Phase portrait for two equal real eigenvalues

We consider the stable case; the case for unstable systems is similar, with obvious changes between t going to infinity and minus infinity. First, assume that there are two independent eigenvectors (and the matrix is diagonal).

(1) If there are two independent eigenvectors, then all solutions go straight in toward the origin. The origin of this system is called a stable star. See Figure 13.

Next, assume that there is only one independent eigenvector v and a second generalized eigenvector w, where (A − λI)w = v.

(1) Draw the two trajectories that move along straight lines toward the origin along the line generated by v. Mark each of these half-lines with the direction that the solution is moving as t increases.

(2) Next, draw the trajectory that has initial condition w and then comes in toward the origin along the half-line generated by positive multiples of the vector v (i.e., the trajectory e^{λt} w + t e^{λt} v is nearly equal to the curve t e^{λt} v).

(3) Draw the trajectory with initial condition −w, which should be just the reflection through the origin of the previous trajectory.

Table 4

A third solution, with initial condition z, is

e^{At} z = e^{-2t} ( I z + t (A + 2I) z + (t²/2)(A + 2I)² z + (t³/6)(A + 2I)³ z + ··· )
= e^{-2t} ( z + t w + (t²/2) v + 0 ).

Therefore, the three independent solutions are

e^{-2t} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},   e^{-2t} \left[ \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + t \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \right],   and   e^{-2t} \left[ \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} + t \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} + \frac{t²}{2} \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \right].
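Because N = A + 2I is nilpotent here, the series for e^{At} terminates, and this can be confirmed numerically; the following sketch (my own code, not from the text) compares the finite series with scipy's matrix exponential.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[-2.0, 1.0, 0.0], [0.0, -2.0, 1.0], [0.0, 0.0, -2.0]])
    N = A + 2.0 * np.eye(3)                      # nilpotent: N^3 = 0
    t = 1.3
    series = np.exp(-2.0 * t) * (np.eye(3) + t * N + 0.5 * t**2 * (N @ N))
    print(np.allclose(series, expm(A * t)))      # True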

2.2.3. Second-order Scalar Equations. Many of you have had a first course in differential equations in which the solution of second-order scalar linear differential equations was discussed. This section shows how those solutions are related to the solutions of the systems of linear differential equations that we have presented. This section can be skipped without loss of continuity.

Consider

(2.4) y″ + a y′ + b y = 0,

where a and b are constants. This equation is called second order since it involves derivatives up to order two. Assume that y(t) is a solution of (2.4), set x₁(t) = y(t),


Phase portrait for a linear system in two dimensions

(1) From the characteristic equation λ² − τλ + Δ = 0, where τ is the trace and Δ is the determinant of the coefficient matrix, determine the eigenvalues.

(2) Classify the origin as a stable node, unstable node, stable focus, unstable focus, center, repeated real eigenvalue, or zero eigenvalue case. In the case of a repeated real eigenvalue, classify the origin as a star system if it is diagonal and a degenerate node otherwise.

(3) Proceed to draw the phase portrait in each case as listed previously.

When using a computer program, such as Maple, Mathematica, or Matlab, to draw the phase portrait, certain steps are helpful.

(1) Since the size of the region in the phase space plotted does not affect the appearance of the phase portrait for a linear system, pick a region of any size centered about the origin (e.g., −5 ≤ x₁ ≤ 5 and −5 ≤ x₂ ≤ 5). (The region plotted is sometimes called the window plotted.)

(2) If you know what type of linear system it is, pick initial conditions that reveal the important behavior for that type of equation. Otherwise, experiment with initial conditions to determine the type of behavior of the linear system being drawn.

(3) For a system with real eigenvalues, try to take initial conditions near, but on either side of, each of the half-lines that are scalar multiples of the eigenvectors. For unstable directions, either follow the solutions for negative time or start with initial conditions very near the origin. For stable directions, either follow the solutions for positive time or start with initial conditions near the origin and follow the trajectory for negative time.

(4) For systems that oscillate (have complex eigenvalues), take enough initial conditions to reveal the phase portrait.

Table 5

x₂(t) = y′(t), and consider the vector x(t) = (x₁(t), x₂(t))^T = (y(t), y′(t))^T. Then,

(2.5) ẋ(t) = \begin{pmatrix} ẋ₁(t) \\ ẋ₂(t) \end{pmatrix} = \begin{pmatrix} y′(t) \\ y″(t) \end{pmatrix} = \begin{pmatrix} x₂(t) \\ -b x₁(t) - a x₂(t) \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -b & -a \end{pmatrix} \begin{pmatrix} x₁(t) \\ x₂(t) \end{pmatrix},

since y″(t) = −a y′(t) − b y(t) = −a x₂(t) − b x₁(t). We have shown that if y(t) is a solution of the equation (2.4), then x(t) = (x₁(t), x₂(t))^T = (y(t), y′(t))^T is a solution of ẋ = Ax, where

(2.6) A = \begin{pmatrix} 0 & 1 \\ -b & -a \end{pmatrix}.

Notice that the characteristic equation of (2.6) is λ² + aλ + b = 0, which is simply related to the original second-order equation (2.4). For the linear system ẋ = Ax with A given by (2.6), we have to specify initial conditions of both x₁(t₀) and


x₂(t₀). Therefore, for the second-order scalar equation y″ + ay′ + by = 0, we have to specify initial conditions of both y(t₀) = x₁(t₀) and y′(t₀) = x₂(t₀).

Starting with a solution x(t) = (x₁(t), x₂(t))^T of ẋ = Ax with A given in (2.6), the first coordinate y(t) = x₁(t) satisfies

ÿ = ẋ₂ = −b x₁ − a x₂ = −b y − a ẏ,

and y(t) is a solution of y″ + ay′ + by = 0.

Since second-order scalar equations can be solved by converting them to a first-order linear system, we do not need to give a separate method of solving them. However, the following result gives the more direct solution method that is discussed in elementary courses on differential equations.
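The conversion is easy to carry out numerically. Here is a brief sketch of my own (with an assumed choice a = 2, b = 17, so the roots are −1 ± 4i; this is not code from the text) that integrates the companion system and compares with the scalar solution from Theorem 2.5(d) below.

    import numpy as np
    from scipy.integrate import solve_ivp

    a, b = 2.0, 17.0                         # y'' + 2 y' + 17 y = 0
    A = np.array([[0.0, 1.0], [-b, -a]])     # the companion matrix (2.6)
    y0, yp0 = 1.0, 0.0                       # initial conditions y(0), y'(0)
    sol = solve_ivp(lambda t, x: A @ x, (0.0, 2.0), [y0, yp0],
                    rtol=1e-9, atol=1e-12, dense_output=True)
    t = 1.0
    exact = np.exp(-t) * (np.cos(4*t) + 0.25 * np.sin(4*t))
    print(np.isclose(sol.sol(t)[0], exact))  # True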

Theorem 2.5. a. The characteristic equation of (2.6) is λ² + aλ + b = 0.

b. A function y = e^{rt} is a solution of y″ + ay′ + by = 0 if and only if r is a root of λ² + aλ + b = 0.

c. If r is a double root of λ² + aλ + b = 0, then two independent solutions are e^{rt} and t e^{rt}.

d. If r = α ± βi are complex roots of λ² + aλ + b = 0, then two independent real solutions are e^{αt} cos(βt) and e^{αt} sin(βt).

Proof. The proof of part (a) is direct and left to the reader.

(b) Let y = e^{rt}. Then

y″ + ay′ + by = r² e^{rt} + a r e^{rt} + b e^{rt} = e^{rt} ( r² + a r + b ).

Since e^{rt} ≠ 0, e^{rt} is a solution if and only if r is a root of λ² + aλ + b = 0.

(c) This part is usually checked directly. However, if r is a double root, then the equation is 0 = (λ − r)² = λ² − 2rλ + r², and the equation is y″ − 2ry′ + r²y = 0. The matrix of the linear system is

A = \begin{pmatrix} 0 & 1 \\ -r² & 2r \end{pmatrix}   and   A − rI = \begin{pmatrix} -r & 1 \\ -r² & r \end{pmatrix}.

An eigenvector for λ = r is (1, r)^T and a generalized eigenvector is (0, 1)^T. Thus, the two independent solutions of the linear system are

e^{rt} \begin{pmatrix} 1 \\ r \end{pmatrix}   and   e^{rt} \left[ \begin{pmatrix} 0 \\ 1 \end{pmatrix} + t \begin{pmatrix} 1 \\ r \end{pmatrix} \right].

The first components of the two solutions are e^{rt} and t e^{rt}.

(d) The characteristic equation for the complex roots α ± βi is 0 = λ² − 2αλ + α² + β², and the matrix is

A = \begin{pmatrix} 0 & 1 \\ -α²-β² & 2α \end{pmatrix}   and   A − (α + iβ)I = \begin{pmatrix} -α-iβ & 1 \\ -α²-β² & α-iβ \end{pmatrix}.

A complex eigenvector for α + βi is (1, α + iβ)^T. The first component of the complex solution is

e^{αt} [ cos(βt) + i sin(βt) ],

which has real and imaginary parts as given in the statement of the theorem. □


2.2.4. Quasiperiodic Systems. This subsection considers linear systems in higher dimensions to introduce the idea of quasiperiodicity: a function with two or more frequencies that have no common period. The first example uses two harmonic oscillators that are not coupled (i.e., the equations of one oscillator do not contain the variables from the other oscillator). The second example again has two frequencies, but in the original variables the equations of motion involve both position variables (i.e., the harmonic oscillators are coupled).

Example 2.19 (Two uncoupled harmonic oscillators). Consider two uncoupled oscillators given by

ẍ₁ = −ω₁² x₁   and   ẍ₂ = −ω₂² x₂.

We can complete this to a first-order system of equations by letting ẋ₁ = x₃ and ẋ₂ = x₄. Then,

ẋ₁ = x₃,
ẋ₂ = x₄,
ẋ₃ = −ω₁² x₁,
ẋ₄ = −ω₂² x₂.

A direct calculation shows that this system has eigenvalues ±iω₁ and ±iω₂. The x₁- and x₂-components of a general solution are

\begin{pmatrix} x₁(t) \\ x₂(t) \end{pmatrix} = \begin{pmatrix} c₁ cos(ω₁t) + c₂ sin(ω₁t) \\ c₃ cos(ω₂t) + c₄ sin(ω₂t) \end{pmatrix} = R₁ cos(ω₁(t − δ₁)) \begin{pmatrix} 1 \\ 0 \end{pmatrix} + R₂ cos(ω₂(t − δ₂)) \begin{pmatrix} 0 \\ 1 \end{pmatrix},

where

c₁ = R₁ cos(ω₁δ₁),   c₂ = R₁ sin(ω₁δ₁),
c₃ = R₂ cos(ω₂δ₂),   c₄ = R₂ sin(ω₂δ₂).

Thus, the four arbitrary constants that give the general solution can be taken to be R₁, R₂, δ₁, and δ₂, rather than c₁, ..., c₄. If both R₁ and R₂ are nonzero, then the solution is periodic with period T if and only if there exist integers k and m such that

Tω₁ = k2π   and   Tω₂ = m2π,
T = 2πk/ω₁ = 2πm/ω₂,   so   ω₂/ω₁ = m/k.

Therefore, to be periodic, the ratio of the frequencies needs to be a rational number. When this ratio is irrational, the solution cannot be periodic, but is generated by two frequencies. Such a solution is called quasiperiodic. For the case in which ω₁ = 1 and ω₂ = √2, a plot of a solution in the (x₁, x₂)-space is given in Figure 14 for R₁ = R₂ = 1 and δ₁ = δ₂ = 0. Notice that the solution tends to fill up


the whole square −1 ≤ x₁, x₂ ≤ 1. The plot of x₁ + x₂ as a function of t for these constants is given in Figure 15.

Figure 14. Quasiperiodic solution for Example 2.19: plot of (x₁(t), x₂(t))

Figure 15. Quasiperiodic solution for Example 2.19: plot of x₁ + x₂ versus t
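Figures 14 and 15 are easy to reproduce; the following is a plotting sketch of my own using matplotlib (not the book's code), with R₁ = R₂ = 1 and δ₁ = δ₂ = 0.

    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0.0, 200.0, 20000)
    x1, x2 = np.cos(t), np.cos(np.sqrt(2.0) * t)   # omega_1 = 1, omega_2 = sqrt(2)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
    ax1.plot(x1, x2, lw=0.3)          # the orbit fills up the square [-1,1] x [-1,1]
    ax2.plot(t, x1 + x2, lw=0.5)      # x1 + x2 oscillates but never exactly repeats
    plt.show()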

31"; *1 5' Pf‘R, FT‘ >-I

it-ti-' 4-? -. ‘: 11:1 m zw..-.--q1:

._ (5 I2 ’* *5‘./‘K1v-AQ-
tr
-.,'./_._-' . _‘
§1J\‘.V*-,"
l A'
1 "5~\~‘~*l ‘("=‘§.~r_-*

Figure 16. Two coupled oscillators

Example 2.20 (Coupled harmonic oscillators). Consider two equal masses m attached by springs between two fixed walls and supported by a floor, so gravity


is not a factor. See Figure 16. We assume that there is no friction, but only forces induced by the springs. Let the first body be the one on the left. There is a linear spring attached to the first body and the left wall with spring constant k₁. The second body is attached to the right wall by a spring with the same spring constant k₁. Between the two bodies, there is a third spring with spring constant k₂. Let x₁ be the displacement of the first body from its equilibrium position and x₂ be the displacement of the second body from its equilibrium position. If the first body is moved to the right (a positive value of x₁), there is a restoring force of −k₁x₁ because the first spring is stretched, and a restoring force of −k₂x₁ because the middle spring is compressed. If the second body is now moved to the right, the middle spring is stretched and there is a force of k₂x₂ exerted on the first body. Altogether, the forces on the first body are −k₁x₁ − k₂x₁ + k₂x₂ = −(k₁ + k₂)x₁ + k₂x₂. This force has to equal the mass times the acceleration of the first body, so the equation of motion is

m ẍ₁ = −(k₁ + k₂) x₁ + k₂ x₂.

There are similar forces on the second body, and so it has a similar equation of motion with the indices 1 and 2 interchanged. The combined system of equations is

m ẍ₁ = −(k₁ + k₂) x₁ + k₂ x₂,
m ẍ₂ = k₂ x₁ − (k₁ + k₂) x₂.

This system can be made first order by setting ẋ₁ = x₃ and ẋ₂ = x₄. The system of equations then becomes

\begin{pmatrix} ẋ₁ \\ ẋ₂ \\ ẋ₃ \\ ẋ₄ \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -\frac{k₁+k₂}{m} & \frac{k₂}{m} & 0 & 0 \\ \frac{k₂}{m} & -\frac{k₁+k₂}{m} & 0 & 0 \end{pmatrix} \begin{pmatrix} x₁ \\ x₂ \\ x₃ \\ x₄ \end{pmatrix}.

Let

K = \begin{pmatrix} -\frac{k₁+k₂}{m} & \frac{k₂}{m} \\ \frac{k₂}{m} & -\frac{k₁+k₂}{m} \end{pmatrix}

be the 2 × 2 matrix in the bottom left. Let λ be an eigenvalue of the total matrix A with eigenvector \begin{pmatrix} v \\ w \end{pmatrix}, where v and w are each vectors with two components. Then

\begin{pmatrix} 0 & I \\ K & 0 \end{pmatrix} \begin{pmatrix} v \\ w \end{pmatrix} = \begin{pmatrix} w \\ Kv \end{pmatrix} = λ \begin{pmatrix} v \\ w \end{pmatrix};

setting the components equal, we get

w = λv   and   Kv = λw = λ²v.


Therefore, the squares of the eigenvalues of A are actually the eigenvalues of K, and the characteristic equation of A is

λ⁴ + \frac{2(k₁+k₂)}{m} λ² + \frac{(k₁+k₂)² − k₂²}{m²} = 0.

This equation has solutions

λ² = \frac{−(k₁+k₂) ± k₂}{m} = −\frac{k₁}{m}, −\frac{k₁+2k₂}{m},   or
λ = ±iω₁, ±iω₂,

where ω₁ = √(k₁/m) and ω₂ = √((k₁+2k₂)/m). The eigenvectors of K for the eigenvalues −k₁/m and −(k₁+2k₂)/m are, respectively,

\begin{pmatrix} 1 \\ 1 \end{pmatrix}   and   \begin{pmatrix} 1 \\ -1 \end{pmatrix}.

These vectors give the first two components of the eigenvectors of the total matrix A. Therefore, the first two components of the general solution are given by

\begin{pmatrix} x₁(t) \\ x₂(t) \end{pmatrix} = R₁ cos(ω₁(t − δ₁)) \begin{pmatrix} 1 \\ 1 \end{pmatrix} + R₂ cos(ω₂(t − δ₂)) \begin{pmatrix} 1 \\ -1 \end{pmatrix},

where R₁, R₂, δ₁, and δ₂ are arbitrary constants. Notice that the first solution in the sum corresponds to the two bodies moving in the same direction, with the middle spring remaining unstretched and uncompressed. The resulting frequency is determined by the two end springs with spring constants k₁. The second solution corresponds to the bodies moving symmetrically with respect to the center position, pulsating toward and away from each other. In this latter case, all three springs are stretched and compressed, so the frequency involves both k₁ and k₂.

If both R₁ and R₂ are nonzero, then the solution is periodic if and only if the ratio ω₂/ω₁ is rational. If the ratio is irrational, then the solution never repeats, but is generated by the two frequencies ω₁ and ω₂. In this latter case, the solution is quasiperiodic, just as in the case of the uncoupled oscillators of Example 2.19.
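The normal-mode frequencies can be checked directly from the matrix K; the sketch below is my own, with assumed values m = 1, k₁ = 1, and k₂ = 2 (not data from the text).

    import numpy as np

    m, k1, k2 = 1.0, 1.0, 2.0
    K = np.array([[-(k1 + k2) / m, k2 / m],
                  [k2 / m, -(k1 + k2) / m]])
    mu = np.linalg.eigvals(K)                          # mu = lambda^2
    print(np.sort(np.sqrt(-mu)))                       # the frequencies omega_1, omega_2
    print(np.sqrt(k1 / m), np.sqrt((k1 + 2*k2) / m))   # 1.0 and sqrt(5): they agree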

A general quasiperiodic function can involve more than two frequencies. A finite set of frequencies ω₁, ..., ωₙ is called rationally dependent provided that there are integers k₁, ..., kₙ, not all of which are zero, such that

k₁ω₁ + ··· + kₙωₙ = 0.

A finite set of frequencies ω₁, ..., ωₙ is called rationally independent provided that it is not rationally dependent; that is, if

k₁ω₁ + ··· + kₙωₙ = 0

for integers k₁, ..., kₙ, then all the k_j must equal zero. If the frequencies are rationally dependent, then one of them can be written as a rational linear combination of the other ones; for example, if kₙ ≠ 0, then

ωₙ = −(k₁/kₙ) ω₁ − ··· − (k_{n−1}/kₙ) ω_{n−1}.


If the frequencies are rationally independent, then none of the frequencies can be written as a rational combination of the other frequencies. A quasiperiodic function h(t) is a function generated by a finite number of such rationally independent frequencies ω₁, ..., ωₙ; it can be written as h(t) = g(ω₁t, ..., ωₙt), where g is periodic of period 1 in each of its arguments. For example, h(t) could be a linear combination of cos(ω_j t) and sin(ω_j t) for j = 1, ..., n.

Exercises 2.2

1. For each of the following linear systems of differential equations, (i) find the general real solution, (ii) show that the solutions are linearly independent, and (iii) draw the phase portrait.

a. ẋ = \begin{pmatrix} 6 & 2 \\ -1 & 3 \end{pmatrix} x,

b. ẋ = \begin{pmatrix} -2 & 4 \\ 3 & 1 \end{pmatrix} x,

c. ẋ = \begin{pmatrix} -1 & 4 \\ -1 & 2 \end{pmatrix} x,

d. ẋ = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} x,

e. ẋ = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix} x,

f. ẋ = \begin{pmatrix} 2 & 5 \\ -1 & -2 \end{pmatrix} x,

g. ẋ = \begin{pmatrix} 1 & -2 \\ 3 & -4 \end{pmatrix} x.

2. Consider the system of linear differential equations ẋ = Ax, where the matrix

A = \begin{pmatrix} 6 & 6 & 6 \\ 5 & 11 & -1 \\ 1 & -5 & 7 \end{pmatrix},

which has eigenvalues 0, 12, and 12. Find the general solution.

3. Consider the system of linear differential equations ẋ = Ax, where the matrix

A = \begin{pmatrix} 0 & 3 & 1 \\ 4 & 1 & -1 \\ 2 & 7 & -5 \end{pmatrix},

which has eigenvalues 4 and −4 ± 2i. Find the general solution.


4. The motion of a damped harmonic oscillator is determined by m ÿ + b ẏ + k y = 0, where m > 0 is the mass, b ≥ 0 is the damping constant or friction coefficient, and k > 0 is the spring constant.
a. Rewrite the differential equation as a first-order system of linear equations. (See Chapter 1.)
b. Classify the type of linear system depending on the size of b ≥ 0.

5. Find the Jordan canonical form for each of the matrices in Exercise 1.

6. Assume that λ₁ ≠ λ₂ are two real eigenvalues of A with corresponding eigenvectors v¹ and v². Prove that v¹ and v² are linearly independent. Hint: Assume that 0 = c₁v¹ + c₂v², and also consider the equation 0 = A0 = c₁Av¹ + c₂Av² = c₁λ₁v¹ + c₂λ₂v².

7. Assume that A is a 2 × 2 matrix with two real distinct negative eigenvalues λ₁ and λ₂.
a. Let v¹ and v² be the corresponding eigenvectors and P = (v¹ v²) the matrix with the eigenvectors as columns. Prove that AP = PD, where D = diag(λ₁, λ₂) is the diagonal matrix having the eigenvalues as entries (i.e., A = PDP⁻¹).
b. Let a = min{−λ₁, −λ₂} > 0. Prove that there is a positive constant K such that

‖e^{At} x‖ ≤ K e^{−at} ‖x‖

for all t ≥ 0 and all x in ℝ².

2.3. Nonhomogeneous Systems: Time-dependent Forcing

The last topic of this chapter, before considering applications, is linear systems with a time-dependent forcing term, or a time-dependent linear differential equation. The general nonhomogeneous linear system of differential equations we consider is

(2.7) ẋ = Ax + g(t).

Given such an equation, we associate with it the corresponding homogeneous linear system of differential equations,

(2.8) ẋ = Ax.

The next theorem indicates the relationship between the solutions of equations (2.7) and (2.8).

Theorem 2.6. a. Let x¹(t) and x²(t) be two solutions of the nonhomogeneous linear differential equation (2.7). Then, x¹(t) − x²(t) is a solution of the homogeneous linear differential equation (2.8).

b. Let x^p(t) be any solution of the nonhomogeneous linear differential equation (2.7) and x^h(t) be a solution of the homogeneous linear differential equation (2.8). Then, x^p(t) + x^h(t) is a solution of the nonhomogeneous linear differential equation (2.7).

c. Let x^p(t) be a solution of the nonhomogeneous linear differential equation (2.7) and M(t) be a fundamental matrix solution of the homogeneous linear differential equation (2.8). Then, any solution of the nonhomogeneous linear differential equation (2.7) can be written as x^p(t) + M(t)c for some vector c.


The preceding theorem says that it is enough to find one particular solution of the nonhomogeneous differential equation (2.7) and add to it the general solution of the homogeneous differential equation (2.8). Just as in the case of second-order scalar equations, sometimes it is possible to guess a solution. (This method is often called the method of undetermined coefficients.) A more general method is that of variation of parameters. This method is more cumbersome, but it always works. We state this method for nonhomogeneous linear systems with constant coefficients in the next theorem, using the exponential of a matrix. The more general form uses a fundamental matrix solution, but we do not state it that way because it looks messier. However, in an example, the matrix exponential is usually calculated using any fundamental matrix solution.

Theorem 2.7 (Variation of parameters). The solution x(t) of the nonhomogeneous linear differential equation (2.7) with initial condition x(0) = x₀ can be written as

x(t) = e^{At} x₀ + e^{At} ∫₀ᵗ e^{−As} g(s) ds.

If M(t) is a fundamental matrix solution, then

x(t) = M(t) M(0)⁻¹ x₀ + M(t) ∫₀ᵗ M(s)⁻¹ g(s) ds.

The proof is at the end of the chapter. The case for a fundamental matrix solution follows from the exponential case, since e^{At} = M(t) M(0)⁻¹ and e^{−As} = M(0) M(s)⁻¹. Notice that for the fundamental matrix solution found by the solution method, M(s)⁻¹ does not equal M(−s) unless M(0) = I.

Note that the first term is the solution of the homogeneous equation. In the integral, the integrand can be thought of as pulling back the effects of the nonhomogeneous term to t = 0 by means of the fundamental matrix of the homogeneous equation. The integral adds up these perturbations, and then the fundamental matrix transfers them to time t.
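The formula of Theorem 2.7 can be turned directly into a numerical routine. The following is a sketch of my own (not from the text) that approximates the integral with the trapezoid rule and uses scipy's matrix exponential; the matrix and forcing chosen here are hypothetical.

    import numpy as np
    from scipy.linalg import expm

    def vop_solution(A, g, x0, t, n=2000):
        # x(t) = e^{At} x0 + e^{At} * integral_0^t e^{-As} g(s) ds
        s = np.linspace(0.0, t, n)
        vals = np.array([expm(-A * si) @ g(si) for si in s])
        integral = np.trapz(vals, s, axis=0)
        return expm(A * t) @ (x0 + integral)

    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    g = lambda s: np.array([0.0, np.sin(2.0 * s)])
    x0 = np.array([1.0, 0.0])
    print(vop_solution(A, g, x0, 1.0))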

These calculations are usually very cumbersome. The following simple example is one that can be carried out.

Example 2.21. Consider

\begin{pmatrix} ẋ₁ \\ ẋ₂ \end{pmatrix} = \begin{pmatrix} 0 & ω₀ \\ -ω₀ & 0 \end{pmatrix} \begin{pmatrix} x₁ \\ x₂ \end{pmatrix} + B \begin{pmatrix} 0 \\ sin(ωt) \end{pmatrix},

with ω ≠ ω₀. The solution of the homogeneous equation is

e^{At} = \begin{pmatrix} cos(ω₀t) & sin(ω₀t) \\ -sin(ω₀t) & cos(ω₀t) \end{pmatrix}   and
e^{−As} = \begin{pmatrix} cos(ω₀s) & -sin(ω₀s) \\ sin(ω₀s) & cos(ω₀s) \end{pmatrix}.


The integral term given in the theorem is

x^p(t) = B e^{At} ∫₀ᵗ e^{−As} \begin{pmatrix} 0 \\ sin(ωs) \end{pmatrix} ds

= B e^{At} ∫₀ᵗ \begin{pmatrix} -sin(ω₀s) sin(ωs) \\ cos(ω₀s) sin(ωs) \end{pmatrix} ds

= \frac{B}{2} e^{At} ∫₀ᵗ \begin{pmatrix} cos((ω₀+ω)s) − cos((ω₀−ω)s) \\ sin((ω₀+ω)s) + sin((−ω₀+ω)s) \end{pmatrix} ds

= \frac{B}{2} e^{At} \begin{pmatrix} \frac{1}{ω₀+ω} sin((ω₀+ω)t) − \frac{1}{ω₀−ω} sin((ω₀−ω)t) \\ -\frac{1}{ω₀+ω} cos((ω₀+ω)t) − \frac{1}{ω−ω₀} cos((ω−ω₀)t) \end{pmatrix} + \frac{B}{2} e^{At} \begin{pmatrix} 0 \\ \frac{1}{ω₀+ω} + \frac{1}{ω−ω₀} \end{pmatrix}.

Using some algebra and identities from trigonometry, we get

x^p(t) = \frac{B}{ω₀² − ω²} \begin{pmatrix} ω₀ sin(ωt) \\ ω cos(ωt) \end{pmatrix} + \frac{B}{2} e^{At} \begin{pmatrix} 0 \\ \frac{1}{ω₀+ω} + \frac{1}{ω−ω₀} \end{pmatrix}.

The general solution is

x(t) = e^{At} x₀ + x^p(t)
= e^{At} \left[ x₀ + \frac{B}{2} \begin{pmatrix} 0 \\ \frac{1}{ω₀+ω} + \frac{1}{ω−ω₀} \end{pmatrix} \right] + \frac{B}{ω₀² − ω²} \begin{pmatrix} ω₀ sin(ωt) \\ ω cos(ωt) \end{pmatrix}.

4] 1'1

2
t

0
50

2-

‘*1

Figure 17. Plot of quasiperiodic solution for Example 2.21

52 2. Linear Systems

The first term has period 2π/ω₀ and the second term has period 2π/ω. For this sum to be periodic, it is necessary that there exist a common time T and integers k and m such that Tω₀ = k2π and Tω = m2π, so ω/ω₀ must be rational. Therefore, the particular solution is periodic if and only if the ratio of the frequencies is a rational number. When the ratio is irrational, the solution has two frequencies with no common period, and the frequencies are called rationally independent. Such combinations of rationally independent frequencies are called quasiperiodic, just as we discussed for coupled and uncoupled harmonic oscillators. See Figure 17.

Exercises 2.3

1. Find the general solution of the differential equation

ẋ = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} x + \begin{pmatrix} 0 \\ 1 \end{pmatrix}.

2. Find the general solution of the differential equation

ẋ = \begin{pmatrix} 1 & -2 \\ -1 & 2 \end{pmatrix} x + \begin{pmatrix} cos(t) \\ 0 \end{pmatrix}.

3. Find the general solution of the nonhomogeneous linear (scalar) equation

ẋ = −x + sin(t).

Note that the variation of parameters formula is equivalent to what is usually called "solving the nonhomogeneous linear (scalar) equation by means of an integrating factor."

4. Assume that μ is not an eigenvalue of the constant n × n matrix A and b is any constant vector. Show that the nonhomogeneous linear system of equations

ẋ = Ax + e^{μt} b

has a solution of the form φ(t) = e^{μt} a for some n-vector a.

5. Find the solution of the nonhomogeneous linear equation in Exercise 1 with initial condition x(0) = (1, 3)^T.

2.4. Applications

2.4.1. Model for Malignant Tumors. Kaplan and Glass [Kap95] present a model for the metastasis of malignant tumors, based on the research of Liotta and DeLisi. In the experiments of Liotta and DeLisi, tumor cells were added to the blood stream of laboratory mice. The blood carried these cells to the capillaries of the lungs, and then some entered the tissue of the lungs. By radioactively marking the cells, the levels of cancer cells could be measured over time. The rate of decay did not follow a simple exponential decay model, which would be the case if there


were a complete transfer. Therefore, they use two variables: the first variable x₁ measures the number of cancer cells in the capillaries, and the second variable x₂ measures the number of cancer cells in the lung tissue. The rate of transfer of cancer cells from the capillaries to the lung tissue was hypothesized to be a linear function of x₁, β₂x₁. The rate of loss of cancer cells from the capillaries, due to their being dislodged and carried away by the blood, was given as −β₁x₁. Finally, the loss of cancer cells in the lung tissue was given as −β₃x₂. Thus, the system of differential equations is linear and is given as

ẋ₁ = −(β₁ + β₂) x₁,
ẋ₂ = β₂ x₁ − β₃ x₂.

This description is a compartmental model, with the number of cancer cells divided between those in the capillaries and those in the lung tissue, and with a rate of transfer between these two "compartments". The cells are assumed to be homogeneous in each compartment, so the reaction rate depends only on the amount in each compartment and not on any distribution within the compartment. This assumption implies that the concentration instantaneously becomes uniformly distributed within each compartment as the amounts change.

The matrix of this system is

\begin{pmatrix} -(β₁+β₂) & 0 \\ β₂ & -β₃ \end{pmatrix},

which has eigenvalues −(β₁ + β₂) and −β₃, and eigenvectors

\begin{pmatrix} β₃ − (β₁+β₂) \\ β₂ \end{pmatrix}   and   \begin{pmatrix} 0 \\ 1 \end{pmatrix}.

If the initial conditions are x₁(0) = N and x₂(0) = 0, then the solution is

\begin{pmatrix} x₁(t) \\ x₂(t) \end{pmatrix} = \begin{pmatrix} N e^{−(β₁+β₂)t} \\ \frac{β₂ N}{β₃ − (β₁+β₂)} ( e^{−(β₁+β₂)t} − e^{−β₃t} ) \end{pmatrix}.

In the experiment, what could be measured was the total amount of radioactivity in both the capillaries and lung tissue, or

x₁(t) + x₂(t) = \frac{(β₃ − β₁) N}{β₃ − (β₁+β₂)} e^{−(β₁+β₂)t} − \frac{β₂ N}{β₃ − (β₁+β₂)} e^{−β₃t}.

This sum has two rates of decay, rather than just one, as a result of the two compartments. By matching the data of the laboratory experiments on mice, they found that the best fit was with β₁ = 0.32, β₂ = 0.072, and β₃ = 0.02, measured in units per hour.

Using this type of model, it is possible to fit the parameters with different treatments for the cancer and decide which treatments are most effective.
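With the fitted rates, the model is simple to evaluate; the following is my own sketch (the β values are the fitted ones quoted above, with N normalized to 1).

    import numpy as np

    b1, b2, b3, N = 0.32, 0.072, 0.02, 1.0           # rates in units per hour
    t = np.linspace(0.0, 48.0, 7)                    # hours
    x1 = N * np.exp(-(b1 + b2) * t)                  # cells in the capillaries
    x2 = (b2 * N / (b3 - (b1 + b2))) * (np.exp(-(b1 + b2) * t) - np.exp(-b3 * t))
    print(x1 + x2)    # total radioactivity: a fast decay followed by a slow one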


2.4.2. Detection of Diabetes. M. Braun presents a model for the detection of diabetes in the book [Bra78], based on the work in [Ack69]. The two variables g and h are the deviations of the levels of glucose and hormonal concentration from base level after several hours of fasting. When the patient enters the hospital, the blood glucose level is increased to a level g(0), and then the body's response is measured for positive time. The initial hormonal level is taken as h(0) = 0. Then the response is measured starting at t = 0 when the glucose is administered. If the responses are assumed linear (or the system is linearized near the equilibrium as in equation (4.1) in Section 4.5), the resulting system of differential equations is

ġ = −m₁ g − m₂ h,
ḣ = m₄ g − m₃ h,

where the m_j are parameters. If τ = m₁ + m₃ and Δ = m₁m₃ + m₂m₄, then the characteristic equation of the homogeneous part is λ² + τλ + Δ = 0. We assume that τ² − 4Δ = −4ω² < 0, so the eigenvalues are −τ/2 ± iω.

All solutions have a factor of e^{−τt/2} and either cos(ωt) or sin(ωt). Just as for second-order scalar equations, the g-component of an arbitrary solution can be written as

g(t) = A e^{−τt/2} cos(ω(t − δ)),

for some constants A and δ. For a given patient, the m_j constants, as well as the amplitude A and phase shift δ, are unknown. Rather than trying to determine the m_j, it is enough to determine the quantities τ and ω, in addition to A and δ.

By measuring the level of glucose when the patient arrives for a base level, and then again at later times, it is possible to determine the values g_j at times t_j. Since there are four unknown constants, we need at least four readings at times t_j to solve the equations

g_j = A e^{−τt_j/2} cos(ω(t_j − δ))

for the constants. Rather than using just four readings, it is better to take more and then use least squares to minimize the quantity

E = Σ_{j=1}^{n} [ g_j − A e^{−τt_j/2} cos(ω(t_j − δ)) ]².

(See [Lay01] for a discussion of least squares.) When this was carried out in a medical study, as reported in [Ack69], it was found that a slight error in the readings leads to large errors in the constant τ. However, the constant ω was much more reliable; thus, ω was a better quantity for determining whether a person had diabetes. The quantity ω essentially gives the period of oscillation of the levels of hormones and glucose in the blood. A person without diabetes had a period of T₀ = 2π/ω less than four hours, while a person with diabetes had T₀ greater than four hours.
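The least-squares fit itself is routine with scipy; below is a sketch of my own with synthetic readings g_j (hypothetical numbers, not clinical data from the study).

    import numpy as np
    from scipy.optimize import curve_fit

    def model(t, A, tau, omega, delta):
        return A * np.exp(-tau * t / 2.0) * np.cos(omega * (t - delta))

    rng = np.random.default_rng(0)
    t_j = np.linspace(0.0, 6.0, 12)                              # hours
    g_j = model(t_j, 2.0, 0.5, 1.8, 0.4) + 0.02 * rng.normal(size=t_j.size)
    p, _ = curve_fit(model, t_j, g_j, p0=[1.0, 0.3, 1.5, 0.0])
    A_fit, tau_fit, omega_fit, delta_fit = p
    print(2.0 * np.pi / omega_fit)    # the period T0; the study's cutoff was 4 hours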

2.4.3. Model for Inflation and Unemployment. A model for inflation and unemployment is given in [Chi84], based on a Phillips relation as applied by M. Friedman. The variables are the expected rate of inflation π and the rate of unemployment U. There are two other auxiliary variables: the actual rate of inflation p and the rate of growth of wages w. There are two quantities that are


fixed externally, exogenous variables or parameters: the rate of monetary expansion m > 0 and the increase in productivity T > 0. The model assumes the following:

w = α − βU,
p = hπ − βU + α − T   with 0 < h ≤ 1,
dπ/dt = j(p − π) = −j(1−h)π − jβU + j(α − T),
dU/dt = k(p − m) = khπ − kβU + k(α − T − m),

where α, β, h, j, and k are parameters of the modeling differential equation, so

\begin{pmatrix} π̇ \\ U̇ \end{pmatrix} = \begin{pmatrix} -j(1-h) & -jβ \\ kh & -kβ \end{pmatrix} \begin{pmatrix} π \\ U \end{pmatrix} + \begin{pmatrix} j(α−T) \\ k(α−T−m) \end{pmatrix}.

All the parameters are assumed to be positive.

The equilibrium is obtained by finding the values where the time derivatives are zero, dπ/dt = 0 = dU/dt. This yields a system of two linear equations that can be solved to give π* = m and U* = (1/β)[α − T − m(1 − h)]. At the equilibrium, both the expected rate of inflation π* and the actual rate of inflation p* equal the rate of monetary expansion m, and the unemployment is U* = (1/β)[α − T − m(1 − h)].

The variables x₁ = π − π* and x₂ = U − U*, giving the displacement from equilibrium, satisfy ẋ = Ax, where A is the coefficient matrix given above. A direct calculation shows that the trace and determinant are as follows:

tr(A) = −j(1 − h) − kβ < 0,
det(A) = jkβ > 0.

Since the eigenvalues r₁ and r₂ satisfy r₁ + r₂ = tr(A) < 0 and r₁r₂ = det(A) > 0, the real parts of both eigenvalues must be negative. Therefore, any solution of the linear equations in the x-variables goes to 0, and (π(t), U(t))^T goes to (π*, U*)^T as t goes to infinity.

2.4.4. Input-Output Economic Model. A model for the adjustment of output in a Leontief input-output model is given in [Chi84], where the rate of adjustment is given by the excess demand.

In the static Leontief input-output model with n sectors, the vector x is the production vector of the sectors, the final demand vector is d(t), which we allow to depend on time, and the intermediate demand needed for production is Cx, where C is a given n × n matrix. The excess demand is Cx + d(t) − x. If we assume that the rate of adjustment of production is equal to the excess demand, then we get the system of differential equations

x′ = (C − I)x + d(t).

As an example of an external final demand that grows exponentially with time, consider a two-sector economy with final demand d(t) = e^{rt}(1, 1)^T, with r > 0


a given parameter. If we guess at a solution x_p(t) = e^{rt}(b₁, b₂)^T, with b₁ and b₂ undetermined coefficients, then we need

r e^{rt} \begin{pmatrix} b₁ \\ b₂ \end{pmatrix} = (C − I) e^{rt} \begin{pmatrix} b₁ \\ b₂ \end{pmatrix} + e^{rt} \begin{pmatrix} 1 \\ 1 \end{pmatrix},

( (r+1)I − C ) \begin{pmatrix} b₁ \\ b₂ \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix},

\begin{pmatrix} b₁ \\ b₂ \end{pmatrix} = \frac{1}{Δ} \begin{pmatrix} r+1-c_{22} & c_{12} \\ c_{21} & r+1-c_{11} \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \frac{1}{Δ} \begin{pmatrix} r+1-c_{22}+c_{12} \\ r+1-c_{11}+c_{21} \end{pmatrix},

where Δ = det((r + 1)I − C) = (r + 1 − c₁₁)(r + 1 − c₂₂) − c₁₂c₂₁. Therefore, we have solved for the undetermined coefficients b₁ and b₂ in terms of the known parameters. For the particular solution, the output of the two sectors will grow at the same rate as the growth of the final demand.

The eigenvalues of the homogeneous system satisfy

λ² − [tr(C) − 2] λ + det(C − I) = 0.

If tr(C) − 2 < 0 and det(C − I) = det(C) − tr(C) + 1 > 0, then the eigenvalues both have negative real parts; if these inequalities hold, then any solution of the nonhomogeneous equation converges to the particular solution x_p(t) as t goes to infinity.
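Numerically, the particular solution reduces to one linear solve; this sketch is my own, with a hypothetical coefficient matrix C (not numbers from the text).

    import numpy as np

    C = np.array([[0.2, 0.3], [0.4, 0.1]])   # hypothetical input coefficients
    r = 0.05
    b = np.linalg.solve((r + 1.0) * np.eye(2) - C, np.ones(2))
    print(b)                                 # x_p(t) = e^{rt} (b1, b2)^T
    print(np.linalg.eigvals(C - np.eye(2)))  # both negative: solutions converge to x_p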

Exercises 2.4

1. (From Kaplan and Glass [Kap95]) An intravenous administration of a drug can be described by a two-compartment model, with compartment 1 representing the blood plasma and compartment 2 representing body tissue. The dynamics of evolution of the system are given by the system of differential equations

Ċ₁ = −(K₁ + K₂) C₁ + K₃ C₂,
Ċ₂ = K₁ C₁ − K₃ C₂,

with K₁, K₂, and K₃ all positive.
a. Draw a schematic diagram that shows the compartments and the flows into and out of them.
b. Solve this system of differential equations for the initial conditions C₁(0) = N and C₂(0) = 0 for the special case with K₁ = 0.5 and K₂ = K₃ = 1. What happens in the limit as t goes to infinity?
c. Sketch the phase plane for the case when K₁ = 0.5 and K₂ = K₃ = 1.

2. Consider the mixing of concentrations of salt in three different tanks, which are connected as shown in Figure 18. We assume that the volume V of each tank is the same. We assume that the substance is uniformly distributed within a given tank (due to rapid diffusion or mixing), so its concentration in the j-th tank at a given time is given by a single number C_j(t). Thus, C_j(t)V is the amount of salt in this tank at time t. Let K₁ be the constant rate of flow from


Figure 18. Flow between tanks for the closed system

the first tank to the second in gallons per minute. Let K₂ be the constant rate of flow from the second tank to the third tank and from the third tank to the first tank. Finally, let K₃ be the constant rate of flow back from the second tank to the first tank. We assume that K₁ = K₂ + K₃, so the amount of flow into and out of each tank balances. The system of differential equations for the amount of material in each tank is

(d/dt) V C₁ = −K₁ C₁ + (K₁ − K₂) C₂ + K₂ C₃,
(d/dt) V C₂ = K₁ C₁ − K₁ C₂,
(d/dt) V C₃ = K₂ C₂ − K₂ C₃.

Dividing by V and setting k = K₂/V and bk = K₃/V, so that K₁/V = k(1 + b), we get the equations

\begin{pmatrix} Ċ₁ \\ Ċ₂ \\ Ċ₃ \end{pmatrix} = k \begin{pmatrix} -(1+b) & b & 1 \\ 1+b & -(1+b) & 0 \\ 0 & 1 & -1 \end{pmatrix} \begin{pmatrix} C₁ \\ C₂ \\ C₃ \end{pmatrix}.

a. Show that the characteristic equation is

0 = −λ [ λ² + (3 + 2b) k λ + (3 + 3b) k² ].

b. Show that the eigenvalues are λ = 0 and λ± = (k/2) [ −(3 + 2b) ± √(4b² − 3) ].

c. Show that the eigenvector for λ = 0 is (1, 1, 1)^T. How does this relate to the fact that (d/dt)(C₁ + C₂ + C₃) = 0?

d. Show that the other two eigenvalues have negative real part. Interpret the properties of a solution which starts with C₁(0) + C₂(0) + C₃(0) > 0.

3. Consider the mixing of concentrations of a salt in two different tanks of equal volume V, which are connected as shown in Figure 19. The rate of flow into tank one from the outside is kV; the rate flowing out of the system from tank two is also kV; the rate of flow from tank two to tank one is bkV; and the rate of flow from tank one to tank two is (1 + b)kV. We assume that the concentration of the salt in the fluid coming into the first tank from the outside


Figure 19. Flow between tanks with input and output

is C₀(t), a given function of time. The system of differential equations for the change of concentrations in the two tanks is

Ċ₁ = −k(1+b) C₁ + kb C₂ + k C₀(t),
Ċ₂ = k(1+b) C₁ − k(1+b) C₂,

or

\begin{pmatrix} Ċ₁ \\ Ċ₂ \end{pmatrix} = k \begin{pmatrix} -(1+b) & b \\ 1+b & -(1+b) \end{pmatrix} \begin{pmatrix} C₁ \\ C₂ \end{pmatrix} + k \begin{pmatrix} C₀(t) \\ 0 \end{pmatrix}.

a. Show that the eigenvalues of the matrix are λ = −k(1 + b) ± k√(b + b²). Also show that they are real and negative.

b. Explain why any solution converges to a unique particular solution, depending on the rate of inflow of the salt.

c. If C₀(t) = 2 + sin(t), what are the limiting concentrations in the two tanks?

4. Consider an LRC electric circuit with R > 0, L > 0, and C > 0. Sketch the phase portrait for the three cases (a) R² > 4L/C, (b) R² = 4L/C, and (c) R² < 4L/C. What happens when R = 0 but L > 0 and C > 0?

5. Consider a Cournot model of a duopoly with profits π₁ = (6 − q₁ − q₂)q₁ and π₂ = (6 − q₁ − q₂)q₂. The levels of output that maximize the profits of each firm, fixing the output of the other firm, satisfy q₁ = 3 − q₂/2 and q₂ = 3 − q₁/2. (These are found by setting the partial derivatives, or marginal profits, equal to zero.) Let these amounts be x₁ = 3 − q₂/2 and x₂ = 3 − q₁/2. Assume that the dynamics are proportional to the displacement from equilibrium,

dq₁/dt = k₁ (x₁ − q₁) = 3k₁ − k₁ q₁ − (k₁/2) q₂,
dq₂/dt = k₂ (x₂ − q₂) = 3k₂ − (k₂/2) q₁ − k₂ q₂,

with k₁, k₂ > 0.
a. Find the equilibrium of the dynamic model. (Note that it is also the solution of the static problem of finding the simultaneous maximum.)
b. Show that the dynamic equilibrium is stable.

6. A simple model of dynamics for a Keynesian IS-LM continuous model for national income is given in Chapter 10 of [Sho02]. Let y be real income and


r the nominal interest rate. The assumptions are that (i) m₀ + ky − ur is the demand for money, with m₀, k, u > 0, and (ii) consumers' expenditure is a + b(1 − T)y − hr, where a is the autonomous expenditure, b is the marginal propensity to consume, T is the marginal tax rate, and h > 0 is the coefficient of investment in response to r. Let A = 1 − b(1 − T) with 0 < A < 1. Assuming that income responds linearly to the excess demand in the goods market and that the interest rate responds linearly to the excess demand in the money market, with proportionality constants γ > 0 and β > 0, the system of differential equations is given by

y′ = −γAy − γhr + γa,
r′ = βky − βur − βm₀.

a. Find the equilibrium for the system of differential equations.
b. Determine whether the equilibrium is stable or unstable.

2.5. Theory and Proofs

Fundamental set of solutions

The linear systems we study mainly have constant coefficients,

(2.9) ẋ = Ax,

where the entries in the matrix A are constants. However, there are times when we need to consider the case in which the entries of A depend on time, A(t). (We always assume that the entries of A(t) are bounded for all time.) The differential equation formed using the matrix A(t),

(2.10) ẋ = A(t)x,

is called a time-dependent linear differential equation. A linear combination of solutions of such an equation is still a solution of the equation. The method of solution given in Section 2.2 does not apply to such systems, but a fundamental matrix solution still exists, even if it is very difficult to find.
Uniqueness of solutions

As noted in Section 2.1, uniqueness of solutions follows from the general treatment of nonlinear equations given in Theorem 3.2. However, we can give a much more elementary proof in the linear case, as we do here.

We need a lemma covering the derivative of the inverse of a matrix.

Lemma 2.8. Let M(t) be a square matrix whose entries depend differentiably on t. Also assume that M(t) is invertible (i.e., det(M(t)) ≠ 0). Then, the derivative with respect to t of the inverse of M(t) is given as follows:

(d/dt) M(t)⁻¹ = −M(t)⁻¹ ( (d/dt) M(t) ) M(t)⁻¹.

In particular, if M(t) is the fundamental matrix solution of ẋ = A(t)x, then

(d/dt) M(t)⁻¹ = −M(t)⁻¹ A(t).


Proof. The product of M(t) with its inverse is the identity matrix, so

0 = (d/dt) I = (d/dt) [ M(t)⁻¹ M(t) ] = ( (d/dt) M(t)⁻¹ ) M(t) + M(t)⁻¹ ( (d/dt) M(t) ),

and

( (d/dt) M(t)⁻¹ ) M(t) = −M(t)⁻¹ ( (d/dt) M(t) ),
(d/dt) M(t)⁻¹ = −M(t)⁻¹ ( (d/dt) M(t) ) M(t)⁻¹.

The second statement about the fundamental matrix solution follows from the first by substituting A(t)M(t) for (d/dt)M(t):

(d/dt) M(t)⁻¹ = −M(t)⁻¹ A(t) M(t) M(t)⁻¹ = −M(t)⁻¹ A(t). □

Restatement of Theorem 2.1. Let A(t) be an n × n matrix whose entries depend continuously on t. Given x₀ in ℝⁿ, there is at most one solution x(t; x₀) of ẋ = A(t)x with x(0; x₀) = x₀.

Proof. Let x(t) be any solution with this initial condition, and let M(t) be a fundamental matrix solution. We want to understand how x(t) differs from M(t)M(0)⁻¹x₀, so we introduce the quantity y(t) = M(t)⁻¹x(t). Then,

ẏ(t) = ( (d/dt) M(t)⁻¹ ) x(t) + M(t)⁻¹ ẋ(t) = −M(t)⁻¹ A(t) x(t) + M(t)⁻¹ A(t) x(t) = 0.

The derivative of y(t) is zero, so it must be a constant, y(t) = y(0) = M(0)⁻¹x₀, and x(t) = M(t)y(t) = M(t)M(0)⁻¹x₀. This proves that any solution with the given initial conditions must equal M(t)M(0)⁻¹x₀. (In essence, this is just the proof of the variation of parameters formula for solutions to nonhomogeneous equations that we give later in Section 2.5.) □

Existence for all time

Solutions of linear equations exist for all time. This is not always the case for nonlinear equations. The proof uses the idea of the norm of a matrix. We introduce this first.

Given a matrix A, there is a number C such that

‖Av‖ ≤ C ‖v‖


for all vectors v. The smallest such number is written ‖A‖, so

‖Av‖ ≤ ‖A‖ ‖v‖.

This number is called the norm of the matrix. (See Appendix A.3.) Usually, we do not need to calculate it explicitly, but just know that it exists. However, from the definition, it follows that

‖A‖ = max_{v ≠ 0} ‖Av‖ / ‖v‖ = max_{‖w‖ = 1} ‖Aw‖.

This form of the norm does not provide an easy way to calculate it. We can consider the square of the length of the image of a unit vector by the matrix,

‖Aw‖² = wᵀ Aᵀ A w.

The product AᵀA is always a symmetric matrix, so it has real eigenvalues with eigenvectors that are perpendicular. It follows that the maximum of ‖Aw‖² for unit vectors w is the largest eigenvalue of AᵀA. Therefore, the norm of A is the square root of the largest eigenvalue of AᵀA. This largest eigenvalue is (at least theoretically) computable.
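For instance (a sketch of my own, not from the text), the eigenvalue recipe can be compared with numpy's built-in 2-norm:

    import numpy as np

    A = np.array([[1.0, 2.0], [0.0, 3.0]])
    w = np.linalg.eigvalsh(A.T @ A)     # real eigenvalues of the symmetric A^T A
    print(np.sqrt(w.max()))             # ||A|| as the square root of the largest one
    print(np.linalg.norm(A, 2))         # the same value computed by numpy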

Now, we return to the result dealing with differential equations.

Theorem 2.9. Assume A(t) is a real n × n matrix with bounded entries. Then solutions of ẋ = A(t)x exist for all time.

The proof of this theorem is more advanced than most other topics considered
in this chapter, and can easily be skipped.

Proof. We consider the square of the length of a solution x(t):

(d/dt) ‖x(t)‖² = (d/dt) ( x(t) · x(t) ) = 2 x(t) · ẋ(t) = 2 x(t) · A(t) x(t)
≤ 2 ‖x(t)‖ ‖A(t) x(t)‖ ≤ 2 ‖A(t)‖ ‖x(t)‖².

Because of the assumption on the entries of A(t), there is a constant C such that ‖A(t)‖ ≤ C for all t. (In the case of constant coefficients, we can take C = ‖A‖.) Therefore,

(d/dt) ‖x(t)‖² ≤ 2C ‖x(t)‖²

for some constant C. In this case, by dividing by ‖x(t)‖² and integrating, we see that

‖x(t)‖² ≤ ‖x(0)‖² e^{2Ct}   or
‖x(t)‖ ≤ ‖x(0)‖ e^{Ct}.


(This is a special case of Gronwall's inequality given in Lemma 3.8, which covers
the case when x(t) = 0 for some t.) This quantity does not go to infinity in finite
time, so if the solution is defined for 0 ≤ t < t^+ < ∞, then

||A(t)x(t)|| ≤ C ||x(0)|| e^{Ct^+} = K

and

||x(t_2) − x(t_1)|| = || ∫_{t_1}^{t_2} A(t)x(t) dt || ≤ ∫_{t_1}^{t_2} ||A(t)x(t)|| dt
                    ≤ ∫_{t_1}^{t_2} K dt = K |t_2 − t_1|.

Therefore, x(t) is Cauchy with respect to t and must converge to a limit as t
converges to t^+; the solution is defined at time t^+, and the time interval on which
the solution is defined is closed. It is also open, because we can always find a solution
on a short time interval. This contradicts the assumption that t^+ < ∞, so the
solution can be extended for all time. □
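
The exponential bound in the proof can be observed directly. The sketch below
(a Python illustration assuming numpy and scipy; the matrix and initial condition
are sample choices) checks that ||x(t)|| stays below ||x(0)||e^{Ct} along a
numerically computed solution:

    import numpy as np
    from scipy.integrate import solve_ivp

    A = np.array([[0.0, 1.0],
                  [-1.0, 0.5]])
    C = np.linalg.norm(A, 2)        # ||A(t)|| <= C; A is constant here
    x0 = np.array([1.0, 0.0])

    sol = solve_ivp(lambda t, x: A @ x, (0.0, 5.0), x0, dense_output=True)
    for t in np.linspace(0.0, 5.0, 6):
        bound = np.linalg.norm(x0) * np.exp(C * t)
        assert np.linalg.norm(sol.sol(t)) <= bound + 1e-6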

Vector space of solutions

Theorem 2.10. Consider the differential equation ẋ = A(t)x, where A(t) is a real
n × n matrix with bounded entries. Let

𝒮 = { x(·) : x(t) is a solution of the differential equation }.

Then 𝒮 is a vector space of dimension n.

Proof. Each solution in 𝒮 exists for all time, so we can take a linear combination
of solutions and get a solution by Theorem 2.2. Therefore, 𝒮 is a vector space.

If u^1, ..., u^n is the standard basis of ℝ^n (or any other basis, in fact), then there
are solutions x^j(t) with initial conditions u^j for j = 1, ..., n. For any solution x(t)
in 𝒮, its initial condition x_0 can be written as x_0 = a_1 u^1 + ··· + a_n u^n. Then,
a_1 x^1(t) + ··· + a_n x^n(t) is also a solution with initial condition x_0. By uniqueness,
x(t) = a_1 x^1(t) + ··· + a_n x^n(t) is a linear combination of the solutions x^j(t). This
shows that these solutions span 𝒮.

If a_1 x^1(t) + ··· + a_n x^n(t) = 0, then at time t = 0, a_1 u^1 + ··· + a_n u^n = 0. The
basis vectors {u^j} are independent, so all the a_j = 0. This shows that the {x^j(t)}
are linearly independent. Thus, they are a basis and 𝒮 has dimension n. □
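
Numerically, this basis of solutions is exactly how a fundamental matrix solution
is built: integrate once from each standard basis vector and use the results as
columns. A minimal Python sketch (assuming numpy and scipy; the helper
fundamental_matrix and the sample A are illustrative, not from the text):

    import numpy as np
    from scipy.integrate import solve_ivp

    def fundamental_matrix(A, t, n=2):
        # Column j is the solution x^j(t) with initial condition u^j.
        cols = []
        for j in range(n):
            u_j = np.eye(n)[:, j]
            sol = solve_ivp(lambda s, x: A(s) @ x, (0.0, t), u_j, rtol=1e-10)
            cols.append(sol.y[:, -1])
        return np.column_stack(cols)

    A = lambda t: np.array([[0.0, 1.0], [-1.0, 0.0]])   # constant sample
    M = fundamental_matrix(A, np.pi / 2)
    print(M)   # approximately [[0, 1], [-1, 0]], which is e^{A pi/2}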

Change of volume

We next give a proof of what is called the Liouville formula for the determinant
of a fundamental set of solutions in terms of an integral. This formula is used later
in the discussion of Lyapunov exponents for nonlinear systems. We give a proof
at that time using the divergence theorem. The proof given here is direct, but is
somewhat messy in n dimensions.


Restatement of Theorem 2.3. Let M(t) be a fundamental matrix solution for
a linear system of differential equations ẋ = A(t)x, and let W(t) = det(M(t)) be
the Wronskian. Then,

d/dt W(t) = tr(A(t)) W(t) and

W(t) = W(t_0) exp( ∫_{t_0}^{t} tr(A(s)) ds ),

where exp(z) = e^z is the exponential function and tr(A(s)) is the trace of the matrix
A(s). In particular, if W(t_0) ≠ 0 for some time t_0, then W(t) ≠ 0 for all times t.

For a constant coefficient equation, e^{At} is a fundamental matrix solution with

det(e^{At}) = e^{tr(A)t}.

Proof. The main property of determinants used is that we can add a scalar multiple
of one column to another column and not change the determinant. Therefore, if a
column is repeated, the determinant is zero. Also, the determinant is linear in the
columns, so the product rule applies.

Considering the derivative at time t = t_0 and using u^j for the standard basis
of ℝ^n,

d/dt det( M(t)M(t_0)^{-1} )|_{t=t_0}
  = d/dt det( M(t)M(t_0)^{-1}u^1, ..., M(t)M(t_0)^{-1}u^n )|_{t=t_0}
  = Σ_j det( M(t_0)M(t_0)^{-1}u^1, ..., M(t_0)M(t_0)^{-1}u^{j-1},
             M′(t_0)M(t_0)^{-1}u^j, M(t_0)M(t_0)^{-1}u^{j+1}, ..., M(t_0)M(t_0)^{-1}u^n )
  = Σ_j det( u^1, ..., u^{j-1}, A(t_0)M(t_0)M(t_0)^{-1}u^j, u^{j+1}, ..., u^n )
  = Σ_j det( u^1, ..., u^{j-1}, A(t_0)u^j, u^{j+1}, ..., u^n )
  = Σ_j det( u^1, ..., u^{j-1}, Σ_i a_{ij}(t_0)u^i, u^{j+1}, ..., u^n )
  = Σ_j a_{jj}(t_0)
  = tr(A(t_0)).

The next to the last equality holds because det( u^1, ..., u^{j-1}, u^i, u^{j+1}, ..., u^n ) = 0
unless i = j. Since det( M(t)M(t_0)^{-1} ) = det( M(t) ) det( M(t_0) )^{-1}, we get

d/dt det( M(t) )|_{t=t_0} = tr(A(t_0)) det( M(t_0) ).

This proves the differential equation given in the theorem. This scalar differential
equation has the solution for det(M(t)) given in the theorem.


For the second part of the theorem, det(e^{A·0}) = det(I) = 1. Also, tr(A) is a
constant, so ∫_0^t tr(A) ds = tr(A)t and

det(e^{At}) = det(e^{A·0}) exp( ∫_0^t tr(A) ds ) = e^{tr(A)t}.

□
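
For a concrete check of this formula in the constant coefficient case, the following
Python sketch (assuming numpy and scipy; A is a sample matrix) compares
det(e^{At}) with e^{tr(A)t}:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, 2.0],
                  [3.0, -4.0]])
    for t in (0.5, 1.0, 2.0):
        lhs = np.linalg.det(expm(A * t))
        rhs = np.exp(np.trace(A) * t)
        print(lhs, rhs)    # equal up to rounding error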

Convergence of the exponential

To show that the series for the matrix exponential converges, we first give a
lemma describing how large the entries of a power of a matrix can be. We use the
notation (A^m)_{ij} for the (i, j) entry of the power A^m.

Lemma 2.11. Let A be a constant n × n matrix with |A_{ij}| ≤ C for all 1 ≤ i, j ≤ n.
Then |(A^m)_{ij}| ≤ n^{m-1} C^m.

Proof. We prove the lemma by induction on m. It is true for m = 1 by assumption.
For clarity, we check the case m = 2:

|(A^2)_{ij}| = | Σ_{k=1}^{n} A_{ik} A_{kj} | ≤ Σ_{k=1}^{n} |A_{ik}| · |A_{kj}| ≤ Σ_{k=1}^{n} C · C = nC^2.

Next, assume the lemma is true for m. Then,

|(A^{m+1})_{ij}| = | Σ_{k=1}^{n} A_{ik} (A^m)_{kj} | ≤ Σ_{k=1}^{n} |A_{ik}| · |(A^m)_{kj}|
                 ≤ Σ_{k=1}^{n} C · n^{m-1} C^m = n^m C^{m+1}.

Thus, the lemma is true for m + 1. By induction, we are done. □

Theorem 2.12. Let A be a constant n × n matrix. For each pair (i, j), the series
for (e^{tA})_{ij} converges.

Proof. Let C > 0 be such that |A_{ij}| ≤ C for all 1 ≤ i, j ≤ n. For a fixed
(i, j) entry, the sum of the absolute values of the terms in the series for e^{tA} satisfies

Σ_{m=0}^{∞} (|t|^m / m!) |(A^m)_{ij}| ≤ Σ_{m=0}^{∞} (|t|^m / m!) n^{m-1} C^m
   = (1/n) Σ_{m=0}^{∞} (nC|t|)^m / m! = (1/n) e^{nC|t|}.

Thus, the series for this entry converges absolutely, and so converges. □

One of the important features of the exponential of real numbers is the fact
that e^{a+b} = e^a e^b. In general, the corresponding formula is not true for matrices.
As an example, consider

A = ( 0  1 )    and    B = (  0  0 ).
    ( 0  0 )               ( -1  0 )

Then,

e^{tA} = ( 1  t ),      e^{tB} = (  1  0 ),
         ( 0  1 )                ( -t  1 )

e^{tA} e^{tB} = ( 1 - t^2   t ),
                (   -t      1 )

e^{t(A+B)} = (  cos(t)   sin(t) ).
             ( -sin(t)   cos(t) )

Thus, e^{t(A+B)} ≠ e^{tA} e^{tB}.

Two matrices are said to commute provided that AB = BA. In the proof
that e^{t(A+B)} = e^{tA} e^{tB}, it is necessary for the matrices to commute, so that
like powers of A and B can be combined.

Theorem 2.13. Let A and B be two n × n matrices that commute. Then,

e^{A+B} = e^A e^B.

Proof. We need to check that the series for e^{A+B} is the same as the product of
the series for e^A and e^B. Because the matrices commute,

(A + B)^2 = A^2 + AB + BA + B^2
          = A^2 + 2AB + B^2.

Similarly, for higher powers,

(A + B)^n = A^n + (n!/((n-1)! 1!)) A^{n-1}B + ··· + (n!/((n-k)! k!)) A^{n-k}B^k + ··· + B^n.

Writing out the power series for the exponential of the sum, we get

e^{A+B} = I + (A + B) + (1/2!)(A + B)^2
        + (1/3!)(A + B)^3 + ··· + (1/n!)(A + B)^n + ···

 = I + (A + B) + (1/2!)(A^2 + 2AB + B^2)
   + (1/3!)(A^3 + 3A^2 B + 3AB^2 + B^3) + ···
   + (1/n!)( A^n + (n!/((n-1)! 1!)) A^{n-1}B
            + (n!/((n-2)! 2!)) A^{n-2}B^2 + ··· + B^n ) + ···

 = I + (A + B) + ( (1/2!)A^2 + AB + (1/2!)B^2 )
   + ( (1/3!)A^3 + (1/2!)A^2 B + (1/2!)AB^2 + (1/3!)B^3 ) + ···
   + ( (1/n!)A^n + (1/((n-1)! 1!)) A^{n-1}B
      + (1/((n-2)! 2!)) A^{n-2}B^2 + ··· + (1/n!)B^n ) + ···

 = ( I + A + (1/2!)A^2 + ··· )( I + B + (1/2!)B^2 + ··· )

 = e^A e^B.

□
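
Both the counterexample and the theorem are easy to test numerically. A short
Python sketch (assuming numpy and scipy; the commuting pair C, D is a sample
choice):

    import numpy as np
    from scipy.linalg import expm

    # The non-commuting pair from the example above: the product formula fails.
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0, 0.0], [-1.0, 0.0]])
    print(np.allclose(expm(A + B), expm(A) @ expm(B)))   # False

    # A commuting pair (both diagonal): the product formula holds.
    C = np.diag([1.0, 2.0])
    D = np.diag([-3.0, 0.5])
    print(np.allclose(expm(C + D), expm(C) @ expm(D)))   # True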

Constant coefficients

We first give results related to the independence of the eigenvectors.

Theorem 2.14. Let A be a real matrix.

a. Assume λ_1, ..., λ_k are distinct eigenvalues with eigenvectors v^1, ..., v^k.
Then, the eigenvectors v^1, ..., v^k are linearly independent.

b. Assume λ = α + iβ is a complex eigenvalue (β ≠ 0) with complex eigen-
vector v = u + iw. Then, (i) both u ≠ 0 and w ≠ 0, and (ii) u and w are linearly
independent.


Proof. (a) Assume the result is false and the vectors are linearly dependent. As-
sume that v^1, ..., v^m is the smallest set of linearly dependent vectors. Then,

(2.11)    0 = c_1 v^1 + ··· + c_m v^m,

where all the coefficients c_j are nonzero, since this is the smallest such set. Acting
on equation (2.11) by A, we get

(2.12)    0 = c_1 Av^1 + ··· + c_m Av^m
            = c_1 λ_1 v^1 + ··· + c_m λ_m v^m.

Multiplying equation (2.11) by λ_m and subtracting it from equation (2.12), we get

0 = c_1(λ_1 − λ_m) v^1 + ··· + c_{m-1}(λ_{m-1} − λ_m) v^{m-1}.

Because the eigenvalues are distinct, all the coefficients c_j(λ_j − λ_m) are nonzero,
so v^1, ..., v^{m-1} is a smaller set of linearly dependent vectors. This contradicts
the fact that v^1, ..., v^m is the smallest set of linearly dependent vectors. This
proves part (a).

(b.i)

Au + iAw = A(u + iw)
         = (α + iβ)(u + iw)
         = (αu − βw) + i(βu + αw).

If w = 0, then the left side of the equality is real, and the right side has nonzero
imaginary part iβu. This is a contradiction. Similarly, u = 0 leads to a contra-
diction.


(b.ii) Assume that

(2.13)    0 = c_1 u + c_2 w.

Acting on this equation by A gives

(2.14)    0 = c_1 Au + c_2 Aw
            = c_1(αu − βw) + c_2(βu + αw)
            = (c_1 α + c_2 β) u + (−c_1 β + c_2 α) w.

Multiplying equation (2.13) by (c_1 α + c_2 β) and equation (2.14) by c_1 and subtracting,
we get

0 = c_2(c_1 α + c_2 β) w − c_1(−c_1 β + c_2 α) w
  = (c_1^2 + c_2^2) β w.

Since w ≠ 0 and β ≠ 0, this implies that c_1^2 + c_2^2 = 0 and c_1 = c_2 = 0. Thus, we
have shown that u and w are linearly independent. □

Section 2.2 gives the solution method for various types of eigenvalues. From the
theory of matrices, it follows that we have considered all the cases except the case
of a repeated complex eigenvalue. This follows by showing that there is a basis that
puts the matrix in a special form called the Jordan canonical form of the matrix.

If the matrix A is real and symmetric, then all the eigenvalues are real and
there exists a basis of eigenvectors. Thus, in this case, there are real numbers λ_1,
..., λ_n (possibly repeated) and eigenvectors v^1, ..., v^n such that Av^j = λ_j v^j.
Letting V = (v^1, ..., v^n) and Λ = diag(λ_1, ..., λ_n), we have AV = VΛ and
V^{-1}AV = Λ, so A is linearly conjugate to a diagonal matrix. Thus, in terms of
the basis of vectors {v^1, ..., v^n}, the matrix is diagonal and the solutions are
given by

x(t) = Σ_{j=1}^{n} c_j e^{λ_j t} v^j.

In any case where the matrix is linearly conjugate to a diagonal matrix (i.e., has a
basis of eigenvectors), the solutions have this same form, even though the matrix
might not be symmetric.

If the eigenvalue λ_j = α_j + iβ_j is complex, then its eigenvector v^j = u^j + iw^j
must be complex with both u^j, w^j ≠ 0. Since

A(u^j + iw^j) = (α_j u^j − β_j w^j) + i(β_j u^j + α_j w^j),

equating the real and imaginary parts yields

Au^j = α_j u^j − β_j w^j and
Aw^j = β_j u^j + α_j w^j.

Using the vectors u^j and w^j as part of a basis yields a sub-block of the matrix of
the form

B_j = (  α_j   β_j )
      ( -β_j   α_j ).


Thus, if A has a basis of complex eigenvectors, then there is a real basis {z^1, ..., z^n}
in terms of which

A = diag(A_1, ..., A_q),

where each A_k is either (i) a 1 × 1 block with real entry λ_k, or (ii) of the form B_j
previously given. The 1 × 1 blocks give solutions of the form

e^{λ_k t} v^k,

and the 2 × 2 blocks give solutions of the form

e^{α_k t}( cos(β_k t) u^k − sin(β_k t) w^k ) and
e^{α_k t}( sin(β_k t) u^k + cos(β_k t) w^k ).

Next, we turn to the case of repeated eigenvalues in which there are fewer
eigenvectors than the multiplicity of the eigenvalue. The Cayley–Hamilton theorem
states that, if

p(x) = (−1)^n x^n + a_{n-1} x^{n-1} + ··· + a_0

is the characteristic polynomial for A, then

0 = (−1)^n A^n + a_{n-1} A^{n-1} + ··· + a_0 I.

In particular, if λ_1, ..., λ_q are the distinct eigenvalues of A with algebraic multi-
plicities m_1, ..., m_q,

p(x) = (x − λ_1)^{m_1} ··· (x − λ_q)^{m_q},

then

S_k = { v : (A − λ_k I)^{m_k} v = 0 }

is a vector subspace of dimension m_k. Thus, the geometric multiplicity, which is the
dimension of S_k, is the same as the algebraic multiplicity, which is the multiplicity
in the characteristic equation. Vectors in S_k are called generalized eigenvectors.

Take a real λ_k of multiplicity m_k > 1. Assume v^{(r)} is a vector with

(A − λ_k I)^r v^{(r)} = 0, but
(A − λ_k I)^{r-1} v^{(r)} ≠ 0.

If there is not a basis of eigenvectors of S_k, then the general theory says that there
is such an r with 1 < r ≤ m_k for which this is true. Setting

v^{(r-j)} = (A − λ_k I)^j v^{(r)},

we get

(A − λ_k I) v^{(r)} = v^{(r-1)},
(A − λ_k I) v^{(r-1)} = v^{(r-2)},
  ⋮
(A − λ_k I) v^{(2)} = v^{(1)},
(A − λ_k I) v^{(1)} = 0.


In terms of this partial basis, there is an r × r sub-block of the form

C_k = ( λ_k   1    0   ···   0    0 )
      (  0   λ_k   1   ···   0    0 )
      (  0    0   λ_k  ···   0    0 )
      (  ⋮                   ⋮      )
      (  0    0    0   ···  λ_k   1 )
      (  0    0    0   ···   0   λ_k ).

The general theory says that there are enough blocks of this form together with
eigenvectors to span the total subspace S_k. Therefore, in the case of a repeated
real eigenvalue, the matrix A on S_k can be represented by blocks of the form C_k
plus a diagonal matrix.

Summarizing the above discussion, we get the following theorem.

Theorem 2.15. a. Assume that A has a real eigenvalue r with multiplicity m
and a generalized eigenvector w. Then the solution of ẋ = Ax with x(0) = w is
given by

x(t) = e^{tA} w = e^{rt} [ w + t(A − rI)w + ··· + (t^{m-1}/(m−1)!) (A − rI)^{m-1} w ].

b. Assume A has a real eigenvalue r with algebraic multiplicity 2 but geometric
multiplicity 1. Assume that v is an eigenvector and w is a generalized eigenvector
solving (A − rI)w = v. Then ẋ = Ax has two independent solutions x_1(t) =
e^{tA} v = e^{rt} v and x_2(t) = e^{tA} w = e^{rt} w + t e^{rt} v. Notice that x_1(0) = v and
x_2(0) = w.
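
Part (b) can be checked against the matrix exponential directly. A small Python
sketch (assuming numpy and scipy; the 2 × 2 Jordan-block matrix and the vectors
here are sample choices):

    import numpy as np
    from scipy.linalg import expm

    r = 2.0
    A = np.array([[r, 1.0],
                  [0.0, r]])      # repeated eigenvalue r, single eigenvector
    v = np.array([1.0, 0.0])      # eigenvector
    w = np.array([0.0, 1.0])      # generalized eigenvector: (A - rI)w = v

    for t in (0.3, 1.0):
        x2 = expm(A * t) @ w
        closed_form = np.exp(r * t) * w + t * np.exp(r * t) * v
        print(np.allclose(x2, closed_form))   # True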

Next, we assume that λ_k = α_k + iβ_k is a complex eigenvalue of multiplicity
m_k > 1. If there are not as many eigenvectors as the dimension of the multiplicity,
then there are blocks of the form

D_k = ( B_k   I   ···   0    0 )
      (  0   B_k  ···   0    0 )
      (  ⋮              ⋮      )
      (  0    0   ···  B_k   I )
      (  0    0   ···   0   B_k ),

where B_k is the 2 × 2 block with entries α_k and ±β_k just given. If the multiplicity
is two, then the exponential of this matrix is given by

e^{D_k t} = ( e^{B_k t}   t e^{B_k t} )
            (     0        e^{B_k t} ).

Therefore, if the basis of vectors is u^1 + iw^1 and u^2 + iw^2, then there are solutions
of the form

e^{α_k t}( cos(β_k t) u^1 − sin(β_k t) w^1 ),
e^{α_k t}( sin(β_k t) u^1 + cos(β_k t) w^1 ),
e^{α_k t}( cos(β_k t) u^2 − sin(β_k t) w^2 ) + t e^{α_k t}( cos(β_k t) u^1 − sin(β_k t) w^1 ),
e^{α_k t}( sin(β_k t) u^2 + cos(β_k t) w^2 ) + t e^{α_k t}( sin(β_k t) u^1 + cos(β_k t) w^1 ).


Again, there are enough blocks of this form to give the total subspaces for α_k ± iβ_k.

The total matrix A can be decomposed into blocks of these four forms. This
shows the form of the solutions given in the next theorem.

Theorem 2.16. Given a real n × n constant matrix A and x_0 ∈ ℝ^n, there is a
solution φ(t; x_0) of the differential equation ẋ = Ax with φ(0; x_0) = x_0. Moreover,
each coordinate function of φ(t; x_0) is a linear combination of functions of the form

t^k e^{αt} cos(βt) and t^k e^{αt} sin(βt),

where α + iβ is an eigenvalue of A and k is less than or equal to the algebraic
multiplicity of the eigenvalue. In particular, we allow k = 0, so the solutions need
not have the term t^k, and β = 0, so the solutions need not have any terms involving
cos(βt) or sin(βt).

Quasiperiodic equations

For the solutions of Example 2.19, we could take a type of polar coordinates
defined by

x_j = ρ_j sin(2πτ_j) and ẋ_j = ω_j ρ_j cos(2πτ_j), or

tan(2πτ_j) = ω_j x_j / ẋ_j and ρ_j^2 = x_j^2 + ẋ_j^2/ω_j^2.

The variables τ_1 and τ_2 are taken modulo 1, so 2πτ_1 and 2πτ_2 are taken modulo 2π
(i.e., the variable τ_j is obtained by subtracting an integer so that the variable τ_j
satisfies 0 ≤ τ_j < 1). We write the variables as "τ_j (mod 1)" and "2πτ_j (mod 2π)".
Then,

ρ_j ρ̇_j = x_j ẋ_j + (1/ω_j^2) ẋ_j ẍ_j = x_j ẋ_j + (1/ω_j^2) ẋ_j (−ω_j^2 x_j)
        = x_j ẋ_j − ẋ_j x_j = 0 and

2π sec^2(2πτ_j) τ̇_j = ( ω_j ẋ_j ẋ_j − ω_j x_j ẍ_j ) / ẋ_j^2
                     = ( ω_j ẋ_j^2 + ω_j^3 x_j^2 ) / ẋ_j^2
                     = (ω_j / ẋ_j^2) ( ẋ_j^2 + ω_j^2 x_j^2 )
                     = ω_j sec^2(2πτ_j),

and so,

τ̇_j = ω_j / (2π).


Thus, the differential equations for the variables are

τ̇_1 = ω_1/(2π) = α_1 (mod 1),
τ̇_2 = ω_2/(2π) = α_2 (mod 1),
ρ̇_1 = 0,
ρ̇_2 = 0.

The solution of this system is clearly

τ_1(t) = τ_1(0) + α_1 t (mod 1),
τ_2(t) = τ_2(0) + α_2 t (mod 1),
ρ_1(t) = ρ_1(0),
ρ_2(t) = ρ_2(0).

The next theorem states that the trajectories are dense in the phase portrait if
ω_1/ω_2 = α_1/α_2 is irrational.

Theorem 2.17. Assume that ω_1/ω_2 = α_1/α_2 is irrational. Then, for any initial
condition (τ_1(0), τ_2(0)), the solution (τ_1(t), τ_2(t)) is dense in the phase portrait

{ (τ_1, τ_2) : where each τ_j is taken (mod 1) }.

Proof. For time T = 1/α_2,

τ_2(T) − τ_2(0) = 0 (mod 1) and
τ_1(T) − τ_1(0) = α_1/α_2 (mod 1).

Taking times equal to multiples of T,

τ_2(nT) − τ_2(0) = 0 (mod 1) and
τ_1(nT) − τ_1(0) = n α_1/α_2 (mod 1).

Since the ratio α_1/α_2 is irrational, n α_1/α_2 cannot be an integer. These distinct
points must get close in the interval [0, 1]. Given any ε > 0, there must be three
integers n > m > 0 and k such that

|τ_1(nT) − τ_1(mT) − k| < ε.

Then

|τ_1((n − m)T) − τ_1(0) − k| = |τ_1(nT) − τ_1(mT) − k| < ε.

Letting q = n − m, and taking multiples of qT,

|τ_1((j + 1)qT) − τ_1(jqT) − k| < ε

for all j. These points must fill up the interval [0, 1] to within ε of each point.
Because ε > 0 is arbitrary, the forward orbit {τ_1(nT)} is dense in the interval
[0, 1]. Taking the flow of these points for times 0 ≤ t ≤ T, we see that the orbit
(τ_1(t), τ_2(t)) is dense in the phase portrait

{ (τ_1, τ_2) : where each τ_j is taken (mod 1) },

as claimed. □
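
The density mechanism in the proof, an irrational rotation of the circle, is easy to
observe numerically. A Python sketch (assuming numpy; alpha here is a stand-in
for the irrational ratio α_1/α_2):

    import numpy as np

    # Return map tau_1 -> tau_1 + alpha (mod 1) with alpha irrational.
    alpha = np.sqrt(2.0)
    points = np.mod(np.arange(5000) * alpha, 1.0)

    # The largest gap between consecutive orbit points is small and keeps
    # shrinking as more iterates are taken, consistent with a dense orbit.
    gaps = np.diff(np.sort(points))
    print(gaps.max())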


Nonhomogeneous systems

Restatement of Theorem 2.6. a. Let x^1(t) and x^2(t) be two solutions of the
nonhomogeneous linear differential equation (2.7). Then, x^1(t) − x^2(t) is a solution
of the homogeneous differential equation (2.8).

b. Let x^{(p)}(t) be a solution of the nonhomogeneous linear differential equa-
tion (2.7) and x^{(h)}(t) a solution of the homogeneous linear differential equation
(2.8). Then, x^{(p)}(t) + x^{(h)}(t) is a solution of the nonhomogeneous linear differential
equation (2.7).

c. Let x^{(p)}(t) be a solution of the nonhomogeneous linear differential equation (2.7)
and M(t) a fundamental matrix solution of the homogeneous linear differential equation
(2.8). Then, any solution of the nonhomogeneous linear differential equation (2.7)
can be written as x^{(p)}(t) + M(t)c for some vector c.

Proof. (a) Taking x^1(t) and x^2(t) as in the statement of the theorem,

d/dt ( x^1(t) − x^2(t) ) = ( A x^1(t) + g(t) ) − ( A x^2(t) + g(t) )
                         = A ( x^1(t) − x^2(t) ),

which shows that x^1(t) − x^2(t) is a solution of the homogeneous equation (2.8), as
claimed.

(b) Let x^{(p)}(t) be a solution of the nonhomogeneous equation (2.7) and x^{(h)}(t)
be a solution of the homogeneous equation (2.8). Then,

d/dt ( x^{(p)}(t) + x^{(h)}(t) ) = ( A x^{(p)}(t) + g(t) ) + A x^{(h)}(t)
                                 = A ( x^{(p)}(t) + x^{(h)}(t) ) + g(t),

which shows that x^{(p)}(t) + x^{(h)}(t) is a solution of the nonhomogeneous equation (2.7),
as claimed.

(c) Let x^{(p)}(t) be a solution of the nonhomogeneous equation (2.7) and M(t)
a fundamental matrix solution of the homogeneous equation (2.8). Let x(t) be an
arbitrary solution of the nonhomogeneous equation (2.7). Then, x(t) − x^{(p)}(t) is a
solution of (2.8) by part (a). But any solution of equation (2.8) can be written as
M(t)c for some vector c. Therefore,

x(t) − x^{(p)}(t) = M(t)c and
x(t) = x^{(p)}(t) + M(t)c

for some vector c, as claimed. □

Restatement of Theorem 2.7. The solution x(t) of the nonhomogeneous linear
differential equation with initial condition x(0) = x_0 can be written as

x(t) = e^{At} ( x_0 + ∫_0^t e^{-As} g(s) ds ).

Proof. We derive the form of the solution x(t) of the nonhomogeneous equation
(2.7). A solution of the homogeneous equation (2.8) can be written as e^{At}c. For
a solution of the nonhomogeneous equation (2.7), we investigate the possibility of
writing it in this form, where c varies with t, by considering

x(t) = e^{At} y(t).

The extent to which y(t) varies measures how much the solution varies from a
solution of the homogeneous system. Solving for y(t) = e^{-At} x(t) and differentiating,
we get

ẏ(t) = −A e^{-At} x(t) + e^{-At} ẋ(t)
     = −A e^{-At} x(t) + e^{-At} A x(t) + e^{-At} g(t)
     = e^{-At} g(t).

Integrating from 0 to t gives

y(t) = y(0) + ∫_0^t e^{-As} g(s) ds, or

x(t) = e^{At} y(0) + e^{At} ∫_0^t e^{-As} g(s) ds.

The first term gives the general solution of the homogeneous equation, and the
integral gives one particular solution of the nonhomogeneous equation. Since x(0) =
y(0), we can set x_0 = y(0). Rearranging terms gives the form stated in the
theorem. □
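
The variation of parameters formula can be compared with direct numerical
integration. A Python sketch (assuming numpy and scipy; the matrix A and the
forcing g are sample choices):

    import numpy as np
    from scipy.integrate import quad, solve_ivp
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    g = lambda s: np.array([0.0, np.sin(s)])
    x0 = np.array([1.0, 0.0])
    t = 2.0

    # x(t) = e^{At} ( x0 + int_0^t e^{-As} g(s) ds ), computed componentwise.
    integral = np.array([
        quad(lambda s: (expm(-A * s) @ g(s))[i], 0.0, t)[0] for i in range(2)
    ])
    x_formula = expm(A * t) @ (x0 + integral)

    # Direct integration of x' = Ax + g(t) with the same initial condition.
    sol = solve_ivp(lambda s, x: A @ x + g(s), (0.0, t), x0, rtol=1e-10)
    print(np.allclose(x_formula, sol.y[:, -1], atol=1e-6))   # True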



Chapter 3

The Flow: Solutions of
Nonlinear Equations

Before we start the specific material on the phase portraits for nonlinear differ-
ential equations in the next chapter, we present the basic results on existence and
uniqueness of solutions in Section 3.1. Not only does this answer the question about
existence, but it emphasizes the dependence of the solution on the initial condition
and how the solution varies as the initial condition varies. These considerations
form the foundation for the analysis of the nonlinear equations treated in the rest
of the book.

In Section 3.2, we consider the standard numerical methods for solving differ-
ential equations. Although they are usually presented in a beginning differential
equations course for scalar equations, we emphasize the vector case. The reason
some understanding of numerical methods is important is that we use various pro-
grams to help draw the phase space for the nonlinear equations. Such programs
do not solve the differential equation, but implement some form of numerical inte-
gration of the type we discuss or a more complicated type. Since it is important
to have some idea of what the computer program is doing, we introduce at least
the simplest type of numerical methods for systems of equations. However, this
material on numerical methods can be skipped without any loss of continuity.

3.1. Solutions of Nonlinear Equations

For linear equations with constant coefficients, we gave a constructive proof that
solutions exist by showing the form of the solutions. For nonlinear equations,
there is no general explicit solution from which we can deduce the existence and
uniqueness of solutions. In this section, we discuss the condition which ensures
that the solution exists and is unique, and some properties that follow from the
uniqueness of solutions. We first consider scalar differential equations, and then
differential equations with more variables.


We will state a general theorem. However, first we present a solution method
utilizing integrals that can be used for many scalar equations. We start with an
example, which is discussed further in Section 4.3.

Example 3.1 (Logistic Equation). The differential equation

ẋ = r x ( 1 − x/K ),

with positive parameters r and K, is called the logistic equation. It can be solved
explicitly by separation of variables and integration by partial fractions as follows.
First, we separate all of the x variables to the left side and apply partial fractions:

ẋ / ( x (1 − x/K) ) = r,

ẋ/x + ẋ/(K − x) = r.

Then, we integrate with respect to t (on the left-hand side, the term ẋ dt changes
to an integral with respect to x):

∫ dx/x + ∫ dx/(K − x) = ∫ r dt,

ln(|x|) − ln(|K − x|) = rt + C_1,

|x| / |K − x| = C e^{rt}, where C = e^{C_1}.

Next, we solve for x by performing some algebra, while assuming that 0 < x < K,
so we can drop the absolute value signs:

x = C K e^{rt} − C e^{rt} x,
(1 + C e^{rt}) x = C K e^{rt},

x(t) = C K e^{rt} / (1 + C e^{rt}).

Using the initial condition x_0 when t = 0, we have

x_0 = C K / (1 + C),
C = x_0 / (K − x_0).

Substituting into the solution and doing some algebra yields

x(t) = x_0 K / ( x_0 + (K − x_0) e^{-rt} ).

Notice that for t = 0, this solution yields x_0. Also, once this form is derived, we
can check that it works for all values of x_0 and not just 0 < x_0 < K.
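
This closed form can be verified against a numerical solution. A Python sketch
(assuming numpy and scipy; the parameter values are sample choices):

    import numpy as np
    from scipy.integrate import solve_ivp

    r, K, x0 = 0.8, 10.0, 2.0

    closed_form = lambda t: x0 * K / (x0 + (K - x0) * np.exp(-r * t))
    sol = solve_ivp(lambda t, x: r * x * (1 - x / K), (0.0, 5.0), [x0],
                    dense_output=True, rtol=1e-10)

    for t in (0.0, 1.0, 5.0):
        print(closed_form(t), sol.sol(t)[0])   # the two values agree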


The solution method in the preceding example applies to any equation of the
form ẋ = f(x), reducing its solution to evaluating the integral

F(x) = ∫ dx / f(x) = ∫ dt = t + C.

This method is called separation of variables. In practice, the particular integral
could be difficult or impossible to evaluate in terms of known functions to determine
F(x). Even after the integral is evaluated, the result is an implicit solution, F(x) = t + C,
and it is often difficult to solve it for x as an explicit function of t. Therefore,
even for scalar equations, there might not be a practical method to find an explicit
solution.

We now state the general result about the existence and uniqueness of solutions
in one dimension.

Theorem 3.1 (Existence and uniqueness for scalar differential equations). Con-
sider the scalar differential equation ẋ = f(x), where f(x) is a function from an
open interval (a, b) to ℝ such that both f(x) and f′(x) are continuous.

a. For an initial condition a < x_0 < b, there exists a solution x(t) to ẋ = f(x)
defined for some time interval −τ < t < τ such that x(0) = x_0. Moreover, the
solution is unique in the sense that, if x(t) and y(t) are two such solutions with
x(0) = x_0 = y(0), then they must be equal on the largest interval of time about
t = 0 where both solutions are defined. Let φ(t; x_0) = x(t) be this unique solution
with φ(0; x_0) = x_0.

b. The solution φ(t; x_0) depends continuously on the initial condition x_0.
Moreover, let T > 0 be a time for which φ(t; x_0) is defined for −T ≤ t ≤ T.
Let ε > 0 be any bound on the distance between solutions. Then, there exists a
δ > 0, which measures the distance between allowable initial conditions, such that
if |y_0 − x_0| < δ, then φ(t; y_0) is defined for −T ≤ t ≤ T and

|φ(t; y_0) − φ(t; x_0)| < ε for −T ≤ t ≤ T.

c. In fact, the solution φ(t; x_0) depends differentiably on the initial condition
x_0.

Definition 3.2. The function φ(t; x_0), which gives the solution as a function of
the initial condition x_0, is called the flow of the differential equation. This notation
is used when the solutions exist and are unique (e.g., when the components of the
vector field defining the system of differential equations have continuous partial
derivatives).

The proof of the multidimensional version of the preceding theorem is given in
Section 3.3 at the end of the chapter. It uses what is called the Picard iteration
scheme. Starting with any curve y^0(t) (e.g., y^0(t) ≡ x_0), by induction, the next
curve is constructed from the previous one by the integral equation

y^{k+1}(t) = x_0 + ∫_0^t f(y^k(s)) ds.

The proof shows that this sequence of curves converges to a solution on some time
interval [−τ, τ].
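
The iteration is simple enough to carry out numerically. A Python sketch
(assuming numpy; the choice f(x) = x, whose solution is e^t, and the trapezoid
quadrature are illustrative):

    import numpy as np

    f = lambda x: x
    x0 = 1.0
    t = np.linspace(0.0, 1.0, 201)

    y = np.full_like(t, x0)              # y^0(t) = x0
    for _ in range(10):
        # y^{k+1}(t) = x0 + int_0^t f(y^k(s)) ds  (cumulative trapezoid rule)
        integrand = f(y)
        integral = np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
        y = x0 + integral

    print(np.max(np.abs(y - np.exp(t))))   # small after a few iterations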


It is possible to show that solutions exist provided f(x) is continuous. However,
without some additional condition, there are examples with nonunique solutions.
In the next example, f does not have a derivative at one point and the solutions
are nonunique.

Example 3.3 (Nonunique solutions). Consider the differential equation

ẋ = x^{1/3} with x_0 = 0.

One solution is x_1(t) ≡ 0 for all t. On the other hand, separation of variables yields
a different solution:

∫ x^{-1/3} dx = ∫ dt,

(3/2) x^{2/3} = t − t_0,

x = ( 2(t − t_0)/3 )^{3/2}.

In the preceding calculation, we took the constant of integration to be −t_0. This
solution can be extended to be equal to 0 for t ≤ t_0. Thus, for any t_0 ≥ 0, there is a
solution

x(t; t_0) = 0 for t ≤ t_0,
x(t; t_0) = ( 2(t − t_0)/3 )^{3/2} for t ≥ t_0.

These solutions are zero up to the time t_0 and then become positive for t > t_0.
Since t_0 can be made arbitrarily small, there can be branching as close as we like
to t = 0 (even at 0). In addition, there is the solution x_1(t), which is identically
equal to zero. Therefore, there are many solutions with the same initial condition.
See Figure 1. Notice that f′(x) = (1/3) x^{-2/3} and f′(0) is undefined (or equal to ∞).
Therefore, Theorem 3.1 does not apply to guarantee unique solutions.
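
That the nonzero branch really solves the equation can be confirmed symbolically.
A Python sketch (assuming sympy; the symbol s stands for t − t_0 on the branch
s > 0):

    import sympy as sp

    s = sp.symbols('s', positive=True)          # s = t - t_0 > 0
    x = (sp.Rational(2, 3) * s) ** sp.Rational(3, 2)

    # dx/ds - x^(1/3) should vanish on the branch; together with the zero
    # solution, this exhibits two solutions with the same initial condition.
    print(sp.simplify(sp.diff(x, s) - x ** sp.Rational(1, 3)))   # 0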

Figure 1. Nonunique solutions for Example 3.3

