SUMMARY

1. Matrix Addition, Matrix Multiplication and Scalar
Multiplication.

1.1. Properties.
(1) A + B = B + A
(2) (A + B) + C = A + (B + C)
(3) A + O = O + A = A where O is the zero matrix.
(4) A + (−A) = O = (−A) + A where −A = (−1)A
(5) (a + b)A = aA + bA
(6) a(A + B) = aA + aB
(7) a(AB) = (aA)B = A(aB)
(8) (ab)A = a(bA)
(9) IA = AI = A where I is the identity matrix.
(10) (A + B)C = AC + BC
(11) C(A + B) = CA + CB
(12) A(BC) = (AB)C
(13) 1A = A and 0A = O.
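A minimal NumPy sketch spot-checking a few of these identities numerically; the random 3 × 3 matrices and the scalar are illustrative choices, not part of the summary itself.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
a = 2.0
O, I = np.zeros((3, 3)), np.eye(3)

assert np.allclose((A + B) + C, A + (B + C))             # (2) associativity of addition
assert np.allclose(A + O, A)                             # (3) zero matrix
assert np.allclose(a * (A @ B), (a * A) @ B)             # (7) scalars slide through products
assert np.allclose(I @ A, A) and np.allclose(A @ I, A)   # (9) identity matrix
assert np.allclose((A + B) @ C, A @ C + B @ C)           # (10) right distributivity
```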

1.2. Matrix Inverses. An n × n matrix A is called invertible (or nonsingular)
provided A−1 exists. If A has no inverse, then A is called singular.

1.3. Properties. Let A and B be square nonsingular matrices. Then
(1) (A−1)−1 = A
(2) If c ≠ 0, then (cA)−1 = c−1A−1.
(3) (AB)−1 = B−1A−1.
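A quick numerical sanity check of property (3), again with illustrative random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# the inverse of a product reverses the order of the factors
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))
```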



2. The Transpose, Conjugate and Hermitian.

If A is an m × n matrix, then $A^T$ is defined by

$$\operatorname{ent}_{ij}(A^T) = \operatorname{ent}_{ji}(A)$$

If A is an m × n matrix, then $\overline{A}$ is defined by

$$\operatorname{ent}_{ij}(\overline{A}) = \overline{\operatorname{ent}_{ij}(A)}$$

If A is an m × n matrix, then $A^*$ is defined by

$$A^* = \overline{A^T} = (\overline{A})^T$$

2.1. Properties.
(1) (AT )T = A
(2) (A + B)T = AT + BT
(3) (aA)T = aAT
(4) (AB)T = BT AT
(5) (A∗)∗ = A
(6) (A + B)∗ = A∗ + B∗
(7) $(aA)^* = \overline{a}A^*$
(8) (AB)∗ = B∗A∗
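For the complex properties, a brief NumPy check, where `A.conj().T` plays the role of A∗; the random complex matrices and the scalar are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
a = 2.0 - 3.0j

assert np.allclose((a * A).conj().T, np.conj(a) * A.conj().T)   # (7): conjugate of the scalar comes out
assert np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T)   # (8): adjoint reverses the order
```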

2.2. Definitions.
• If A is an n×n real matrix, then A is called symmetric provided AT = A.
• If A is an n × n real matrix, then A is called skew-symmetric provided
AT = −A.
• If A is an n × n complex matrix, then A is called Hermitian provided
A∗ = A.
• If A is an n×n complex matrix, then A is called skew-Hermitian provided
A∗ = −A.


3. Trace and Determinant.
If A is an n × n matrix, then the trace of A, denoted by tr(A), is

tr(A) = a11 + a22 + ... + ann

3.1. Properties.

(1) tr(A + B) = tr(A) + tr(B)
(2) tr(aA) = a · tr(A)
(3) tr(AB) = tr(BA)
(4) det(AT ) = det(A)
(5) det(A) = a11 · a22 · ... · ann if A is triangular.
(6) det(AB) = det(A) · det(B)
(7) A matrix A is nonsingular if and only if det(A) ≠ 0.
(8) If A is a nonsingular matrix, then det(A−1) = 1/det(A).
(9) If B is obtained from A by adding a multiple of one row or column to
another row or column, then det(B) = det(A).
(10) If B is obtained from A by switching two rows, then det(B) = −det(A).
(11) If A has a row or column consisting of all zeros, then det(A) = 0.
(12) If A has two identical rows or columns, then det(A) = 0.
(13) If B is obtained from A by multiplying each entry of a single row (or
column) by c, then det(B) = c · det(A).
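Several of these trace and determinant properties are easy to spot-check numerically; the sketch below assumes NumPy and uses illustrative random matrices.

```python
import numpy as np

rng = np.random.default_rng(3)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

assert np.isclose(np.trace(A @ B), np.trace(B @ A))                            # (3)
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))                        # (4)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))   # (6)
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A))     # (8)
```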

3.2. Cramer’s Rule. Suppose that a system of n equations in n unknowns is
represented by AX = K. If det(A) ≠ 0, then the system has a unique solution
X, where each entry xj is given by

$$x_j = \frac{\det(A_j)}{\det(A)}$$

where Aj is obtained from A by replacing column j with K.
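A minimal implementation sketch of Cramer's Rule; the function name and the 2 × 2 test system are illustrative, and np.linalg.solve is used only as a cross-check.

```python
import numpy as np

def cramer_solve(A, K):
    """Solve AX = K by Cramer's Rule; assumes det(A) != 0."""
    A, K = np.asarray(A, dtype=float), np.asarray(K, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(K))
    for j in range(len(K)):
        Aj = A.copy()
        Aj[:, j] = K                     # replace column j with K
        x[j] = np.linalg.det(Aj) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
K = np.array([3.0, 5.0])
assert np.allclose(cramer_solve(A, K), np.linalg.solve(A, K))
```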

4. Eigenvalues and Eigenvectors

If A is an n × n matrix, then a complex number λ is called an eigenvalue of A
provided

det(λI − A) = 0

The characteristic polynomial of an n × n matrix A is c(λ) = det(λI − A). The
characteristic polynomial of an n × n matrix is a monic polynomial of degree n.


An eigenvector of A associated to λ is a non-zero column C such that

(λI − A)C = O

Note: Since λI − A is a noninvertible matrix, the matrix equation above has
infinitely many solutions. Thus, there are always infinitely many eigenvectors
associated to an eigenvalue.
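As a concrete illustration (the 2 × 2 matrix is an arbitrary choice), NumPy produces both the characteristic polynomial and the eigenpairs:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])

coeffs = np.poly(A)                   # coefficients of det(lambda*I - A)
assert np.allclose(coeffs, [1.0, -4.0, 3.0])   # lambda^2 - 4*lambda + 3

eigvals, eigvecs = np.linalg.eig(A)   # eigenvalues 3 and 1, eigenvectors in the columns
for lam, C in zip(eigvals, eigvecs.T):
    # each eigenvector C satisfies (lambda*I - A) C = O
    assert np.allclose((lam * np.eye(2) - A) @ C, 0.0)
```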

4.1. Similarity. Two n×n matrices A and B are called similar, written A ≈ B,
provided there exists an invertible matrix P such that

A = P −1BP

4.2. Theorem. If A and B are similar matrices, then A and B have the same
characteristic polynomial. Thus, similar matrices have the same eigenvalues.
(Their eigenvectors need not coincide: if A = P −1BP and AC = λC, then
B(P C) = λ(P C), so eigenvectors of A map to eigenvectors of B under P .)

Proof: Let A and B be similar matrices. Then for some invertible matrix P ,
A = P −1BP . Thus,

$$\lambda I - A = \lambda I - P^{-1}BP = \lambda P^{-1}P - P^{-1}BP = P^{-1}(\lambda P - BP) = P^{-1}(\lambda I - B)P$$

It follows that

$$\begin{aligned}
c(\lambda) = \det(\lambda I - A) &= \det[P^{-1}(\lambda I - B)P] \\
&= \det(P^{-1})\det(\lambda I - B)\det(P) \\
&= \frac{1}{\det(P)}\det(\lambda I - B)\det(P) \\
&= \det(\lambda I - B)
\end{aligned}$$

4.3. The Cayley-Hamilton Theorem. Let A be an n × n matrix with char-
acteristic polynomial given by

$$c(\lambda) = \lambda^n + a_1\lambda^{n-1} + a_2\lambda^{n-2} + \dots + a_{n-2}\lambda^2 + a_{n-1}\lambda + a_n$$

Then

$$c(A) = A^n + a_1A^{n-1} + a_2A^{n-2} + \dots + a_{n-2}A^2 + a_{n-1}A + a_nI = O$$
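A numerical spot-check of the Cayley-Hamilton Theorem on a random matrix (illustrative only; np.poly returns the coefficients of det(λI − A)):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))

c = np.poly(A)   # [1, a1, a2, a3, a4]
cA = sum(coef * np.linalg.matrix_power(A, len(c) - 1 - k) for k, coef in enumerate(c))
assert np.allclose(cA, np.zeros_like(A), atol=1e-8)
```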


4.4. Companion Matrices. Given the definition of the characteristic polyno-
mial of a matrix A, it is natural to ask whether every monic polynomial occurs
as the characteristic polynomial of some matrix; i.e., given a monic polynomial
p(λ) of degree n, does there exist an n × n matrix A such that

p(λ) = det(λI − A)?

Fortunately, the answer is 'yes', and it is provided by the companion matrix of a
polynomial, defined below.

Let $p(\lambda) = \lambda^n + a_1\lambda^{n-1} + a_2\lambda^{n-2} + \dots + a_{n-2}\lambda^2 + a_{n-1}\lambda + a_n$ be a monic
polynomial of degree n. The companion matrix of p(λ), denoted Comp p(λ), is
given by

0 

0 I 

 ... 
 
 

 0


−an −an−1 . . . −a2 −a1

where I denotes the n − 1 × n − 1 identity matrix.

4.5. Theorem. Let p(λ) be a monic polynomial of degree n and let C =
Comp p(λ). Then p(λ) = det(λI − C).
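A sketch of the construction in code; the helper name `companion` is illustrative, and the polynomial (λ − 1)(λ − 2)(λ − 3) is only a test case.

```python
import numpy as np

def companion(a):
    """Comp p(lambda) for monic p(lambda) = lambda^n + a1*lambda^(n-1) + ... + an,
    with a = [a1, ..., an]; the coefficients sit in the bottom row, as above."""
    n = len(a)
    C = np.zeros((n, n))
    C[:-1, 1:] = np.eye(n - 1)             # the (n-1) x (n-1) identity block
    C[-1, :] = -np.asarray(a)[::-1]        # bottom row: -a_n, ..., -a_1
    return C

a = [-6.0, 11.0, -6.0]                     # p(lambda) = (lambda-1)(lambda-2)(lambda-3)
C = companion(a)
assert np.allclose(np.poly(C), [1.0] + a)  # det(lambda*I - C) has the same coefficients
```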


5. First Degree Systems of Linear Differential Equations from
nth Order Linear Differential Equations

Consider a general nth order linear differential equation with constant coeffi-
cients:

$$y^{(n)} + a_1y^{(n-1)} + \dots + a_{n-1}y^{(1)} + a_ny = F(t)$$
Written in terms of a linear differential operator:

$$(D^{(n)} + a_1D^{(n-1)} + \dots + a_{n-1}D^{(1)} + a_n)y = F(t)$$
with characteristic polynomial

$$c(\lambda) = \lambda^n + a_1\lambda^{n-1} + a_2\lambda^{n-2} + \dots + a_{n-1}\lambda + a_n$$
Let
$$\begin{aligned}
y_1 &= y \\
y_2 &= y^{(1)} \\
y_3 &= y^{(2)} \\
&\;\;\vdots \\
y_n &= y^{(n-1)}
\end{aligned}$$
Differentiate both sides of each equality above to obtain:
$$\begin{aligned}
y_1' &= y^{(1)} = y_2 \\
y_2' &= y^{(2)} = y_3 \\
&\;\;\vdots \\
y_{n-1}' &= y^{(n-1)} = y_n \\
y_n' &= y^{(n)}
\end{aligned}$$


Solve $y^{(n)} + a_1y^{(n-1)} + \dots + a_{n-1}y^{(1)} + a_ny = F(t)$ for $y^{(n)}$:
$$\begin{aligned}
y_n' = y^{(n)} &= -a_1y^{(n-1)} - a_2y^{(n-2)} - \dots - a_{n-1}y^{(1)} - a_ny + F(t) \\
&= -a_1y_n - a_2y_{n-1} - \dots - a_{n-1}y_2 - a_ny_1 + F(t)
\end{aligned}$$

Let
$$Y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_{n-1} \\ y_n \end{bmatrix}$$

Then
$$Y' = \begin{bmatrix} y_1' \\ y_2' \\ \vdots \\ y_{n-1}' \\ y_n' \end{bmatrix}
= \begin{bmatrix} y_2 \\ y_3 \\ \vdots \\ y_n \\ y^{(n)} \end{bmatrix}
= \begin{bmatrix} y_2 \\ y_3 \\ \vdots \\ y_n \\ -a_1y_n - a_2y_{n-1} - \dots - a_{n-1}y_2 - a_ny_1 + F \end{bmatrix}
= \begin{bmatrix} y_2 \\ y_3 \\ \vdots \\ y_n \\ -a_1y_n - a_2y_{n-1} - \dots - a_{n-1}y_2 - a_ny_1 \end{bmatrix}
+ F \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}$$


0 1 0 0 0 ... 0 
0 1 0 0 ... 0
0 0 0 1 0 ... 0 0
... ...
 0 0 0 0 0 −a2  0
 −an−1 −an−2 ... 

= ...   ... 
 Y +F  
  


  0 
 
 0 1 
  1

−an −a1

Y = Comp(c(λ))Y + F Coln(I)
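As a concrete sketch of this reduction (the third-order equation, its coefficients, the forcing, and the initial data below are made up for illustration), the first-order system can be coded directly and handed to a standard ODE solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative equation: y''' + 2y'' - 5y' + 6y = cos(t), so a1, a2, a3 = 2, -5, 6
a = [2.0, -5.0, 6.0]
n = len(a)
C = np.zeros((n, n))
C[:-1, 1:] = np.eye(n - 1)        # Comp(c(lambda)): identity block, coefficients in bottom row
C[-1, :] = -np.array(a)[::-1]
e_n = np.zeros(n); e_n[-1] = 1.0  # Col_n(I)

def rhs(t, Y):                    # Y' = Comp(c(lambda)) Y + F(t) Col_n(I)
    return C @ Y + np.cos(t) * e_n

Y0 = np.array([1.0, 0.0, 0.0])    # y(0), y'(0), y''(0)
sol = solve_ivp(rhs, (0.0, 2.0), Y0, rtol=1e-8)
y_at_2 = sol.y[0, -1]             # the first component of Y recovers y itself
```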

6. Companion Matrices Revisited

Since every nth order linear differential equation with constant coefficients
is equivalent to a first-order linear system involving a companion matrix, it is
worth knowing when such a system can be uncoupled.

6.1. Theorem. Let p(λ) be a monic polynomial with factorization

p(λ) = (λ − λ1) . . . (λ − λn)
Then C = Comp(p(λ)) is similar to a diagonal matrix if and only if the λi are
distinct. In this case, V −1CV is diagonal where

 1 1 ... 1 

 λ1 λ2 . . . λn 

 λ21 λ22 ... λ2n 
V = 

 ... ... ... ... 
 
 

λ1n−1 λn2 −1 . . . λnn−1

In fact,

$$V^{-1}CV = \begin{bmatrix}
\lambda_1 & & & O \\
& \lambda_2 & & \\
& & \ddots & \\
O & & & \lambda_n
\end{bmatrix}$$
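A quick numerical check of the theorem for p(λ) = (λ − 1)(λ − 2)(λ − 3); the example is illustrative, and np.vander builds V up to a transpose.

```python
import numpy as np

roots = np.array([1.0, 2.0, 3.0])
a1, a2, a3 = -6.0, 11.0, -6.0                  # lambda^3 + a1*lambda^2 + a2*lambda + a3
C = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-a3, -a2, -a1]])                # Comp(p(lambda))

V = np.vander(roots, increasing=True).T        # columns (1, lambda_i, lambda_i^2)^T
D = np.linalg.inv(V) @ C @ V
assert np.allclose(D, np.diag(roots))          # V^{-1} C V is diagonal, as claimed
```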


7. The Matrix Exponential
Consider the general system

(∗) X′(t) = A(t)X(t) + F(t); a < t < b

where A(t) is an n×n matrix of functions, X(t) is an n-entry column of unknowns
and F (t) is an n-entry column of given functions (the forcing functions).
The associated homogeneous system is written as

(∗∗) X′(t) = A(t)X(t)

7.1. Theorem (EU). Let t0 be fixed, a < t0 < b, and let X0 be an n-entry
column of constants. If all the entries of both A(t) and F (t) are continuous
on the interval a < t < b, then there exists a unique solution X(t) to (*) with
X(t0) = X0.

7.2. Theorem (Nature of Solutions). Let Λ be the set of all solutions to (*).
Let Γ be the set of all solutions to (**). Let Xp be any one solution to (*). Then

Λ = {Z + Xp : Z ∈ Γ}

In this case, we simply write that the general solution to (*) is X = Z + Xp.

7.3. Theorem (Dimension and Basis). Let t0 be fixed, a < t0 < b, and for
each i = 1, ..., n, let Xi be the unique solution to (**) with Xi(t0) = Coli(I).
Then {X1, ..., Xn} is a basis for the vector space of all solutions to (**). In
particular, the solution set to (**) is a vector space with dimension equal to n.

7.4. Definition. The set {X1, ..., Xn} is called a fundamental system of solu-
tions to (**) at t = t0.

7.5. Definition. The n × n matrix G = [X1, ..., Xn] is called the fundamental
matrix of solutions to (**) at t = t0.

Note that since the set {X1, ..., Xn} is linearly independent, G is an invertible
matrix.


7.6. Theorem. If G is the fundamental matrix of solutions to (**) at t = t0,
then G′ = AG and G(t0) = I. Furthermore, if G is any n × n matrix such that
G′ = AG and G(t0) = I, then the columns of G form a fundamental system
of solutions to (**) at t = t0. In particular, G is the fundamental matrix of
solutions to (**) at t = t0.

7.7. Corollary. The unique solution to the system

X′ = AX

X(t0) = X0

is given by

X = GX0
where G is the fundamental matrix of solutions to (**) at t = t0.

7.8. Theorem. If A is an n × n real matrix, then the fundamental matrix of
solutions to (**) at t = 0 is given by

$$e^{tA} = \sum_{k=0}^{\infty} \frac{t^k A^k}{k!}$$

Furthermore, every entry of etA is a solution to c(D)y = 0 where c(λ) is the
characteristic polynomial of A.
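A sketch comparing a truncated partial sum of this series against SciPy's expm; the matrix, the value of t, and the number of terms are illustrative.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t = 0.7

S, term = np.zeros_like(A), np.eye(2)   # running sum and current term t^k A^k / k!
for k in range(25):
    S += term
    term = term @ (t * A) / (k + 1)

assert np.allclose(S, expm(t * A))
```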

7.9. Corollary. The unique solution to the system

$$X' = AX, \qquad X(0) = X_0$$

is given by

$$X = e^{tA}X_0$$


8. The Nonhomogeneous Case
Consider the initial-value problem

X′ = AX + F

X(0) = X0
where A is an n × n real matrix and F is an n-entry column of known functions.

8.1. Theorem. The general solution to the nonhomogeneous system above is
given by

$$X = e^{tA}X_0 + e^{tA}\int e^{-tA}F\,dt$$
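A numerical sketch of this variation-of-parameters formula, cross-checked against a general-purpose ODE solver; the matrix A, the forcing F, the time span, and the tolerances are illustrative, and the integral is approximated with a simple trapezoid rule.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
X0 = np.array([1.0, 0.0])
F = lambda t: np.array([0.0, np.sin(t)])

def x_formula(t, n=4000):
    """X(t) = e^{tA} X0 + e^{tA} * integral_0^t e^{-sA} F(s) ds (trapezoid rule)."""
    s = np.linspace(0.0, t, n)
    vals = np.array([expm(-si * A) @ F(si) for si in s])
    integral = np.zeros_like(X0)
    for k in range(n - 1):
        integral += 0.5 * (s[k + 1] - s[k]) * (vals[k] + vals[k + 1])
    return expm(t * A) @ (X0 + integral)

# cross-check against a direct numerical solution of X' = AX + F, X(0) = X0
sol = solve_ivp(lambda t, x: A @ x + F(t), (0.0, 2.0), X0, rtol=1e-9, atol=1e-12)
assert np.allclose(x_formula(2.0), sol.y[:, -1], atol=1e-3)
```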

