Published by laikhanhhoang30111, 2021-10-07 20:45:24

ANT-intro


B.4. EXERCISES 401

are all real. In other words, g(r, s) is non-negative if and only if r and s are. This implies that

0 ≥ lim_{s→0−} g(r, s) = g(r, 0) = lim_{s→0+} g(r, s) ≥ 0,

i.e. g(r, 0) = 0 for any non-negative r. But then, the polynomial g(R, 0) must be zero since it has infinitely many roots, so g(r, 0) is also zero (and in particular non-negative) for negative r, which is a contradiction.

Appendix C

Linear Algebra

C.1 Vector Spaces

Exercise C.1.1∗. Prove Proposition C.1.3.

Solution

Let u1, . . . , uk be linearly independent elements of the n-dimensional vector space V. Proceed as follows to complete the family into a basis: as long as it does not generate everything, add one element that is not generated (and is thus linearly independent from the previous ones). This process must stop since n + 1 vectors are always linearly dependent by Proposition C.1.2.

For the second part, let u1, . . . , uk be a generating family of elements and suppose without loss of generality that u1, . . . , um is a maximal subset of linearly independent elements. This is a basis, since every other uk can be represented as a linear combination of them (and thus all of V too). Indeed, since u1, . . . , um, uk are linearly dependent for k > m, we have

a uk + Σ_{i=1}^{m} ai ui = 0

for some a, ai ∈ K. Since u1, . . . , um are linearly independent, a must be non-zero and we get

uk = − Σ_{i=1}^{m} (ai/a) ui

as wanted.

C.2 Linear Maps and Matrices

Exercise C.2.1∗. Prove that [1 0; 0 0][1 1; 0 0] = [1 1; 0 0] but [1 1; 0 0][1 0; 0 0] = [1 0; 0 0].

Solution
We have

[1 0; 0 0][1 1; 0 0] = [1·1+0·0  1·1+0·0; 0·1+0·0  0·1+0·0] = [1 1; 0 0]

and

[1 1; 0 0][1 0; 0 0] = [1·1+1·0  1·0+1·0; 0·1+0·0  0·0+0·0] = [1 0; 0 0].


Exercise C.2.2∗. Prove that matrix multiplication is distributive over matrix addition, i.e. A(B +
C) = AB + AC and (A + B)C = AC + BC for any A, B, C of compatible dimensions.

Solution

Let A = (ai,j), B = (bi,j) and C = (ci,j). Then, the (i, j) coordinate of A(B + C) is

Σ_k ai,k(bk,j + ck,j),

which is equal to Σ_k ai,k bk,j + Σ_k ai,k ck,j, i.e. the (i, j) coordinate of AB + AC. The right-
distributivity is completely analogous by symmetry of the left and right multiplication.
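The coordinate computation above can be sanity-checked numerically; this is a minimal sketch with ad hoc helpers (`matmul`, `matadd` are my names, not the book's):

```python
import random

def matmul(A, B):
    # entrywise: (AB)_{i,j} = sum_k A_{i,k} B_{k,j}
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

random.seed(0)
rand = lambda m, n: [[random.randint(-9, 9) for _ in range(n)] for _ in range(m)]
A, B, C = rand(2, 3), rand(3, 4), rand(3, 4)
# both distributivity laws hold entrywise
assert matmul(A, matadd(B, C)) == matadd(matmul(A, B), matmul(A, C))
```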

C.3 Determinants

Exercise C.3.1. Prove that an m × n matrix can only have a right-inverse if m ≤ n, and only a left-inverse if m ≥ n. When does such an inverse exist?

Solution

By symmetry, it suffices to consider right-inverses. Suppose that a matrix A has dimensions
m × n and is right-invertible. Consider the surjective linear map from Kn×m to Km×m defined
by B → AB. By surjectivity, mn ≥ m2, i.e. n ≥ m as wanted.

For the converse, we shall refine our original argument a bit. Note that each column of AB is a linear combination of the columns of A: if A^1, . . . , A^n are the columns of A and B = (bi,j), then

Σ_i bi,k A^i

is the kth column of AB. Hence, A has a right-inverse if and only if n ≥ m and the columns of A span Km.

Exercise C.3.2∗. Prove that (AB)T = BTAT for any n × n matrices A, B.

Solution
Let A = (ai,j) and B = (bi,j). The (i, j) coordinate of AB is Σ_k ai,k bk,j, so the (i, j) coordinate of its transpose is Σ_k aj,k bk,i = Σ_k bk,i aj,k, which is also the (i, j) coordinate of BTAT.

Exercise C.3.3. Prove this identity.

Solution

We have

[a b; c d][d −b; −c a] = [ad + b(−c)  a(−b) + ba; cd + d(−c)  c(−b) + da] = [ad − bc  0; 0  ad − bc] = (ad − bc)I2.

Exercise C.3.4∗. Prove that det In = 1.


Solution
As always, by induction. From the definition of the determinant, we have det In = 1 · det In−1 +
0 · . . . = det In−1 and det I1 = 1.
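The inductive definition (expansion along the first column) translates directly into code; this is a sketch with my own helper names, not the book's notation:

```python
# Laplace expansion along the first column, mirroring the inductive
# definition of the determinant used here.
def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** i * M[i][0] * det([row[1:] for k, row in enumerate(M) if k != i])
               for i in range(n))

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

# det I_n = 1 for every n, exactly as the induction shows
assert all(det(identity(n)) == 1 for n in range(1, 6))
```

The same function also illustrates Exercise C.3.5∗: a matrix with a zero column has determinant 0.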

Exercise C.3.5∗. Prove that the determinant of a matrix with a zero column is zero.

Solution
Suppose that the kth column M^k of M is zero. By Proposition C.3.1 (with t = 0), we have

det M = det^k_{0·M^k}(M) = 0 · det^k_{M^k}(M) = 0.

Exercise C.3.6. Prove that the determinant of a non-invertible matrix is 0.

Solution

Suppose that the columns of M are linearly dependent, i.e. Σ_i ai M^i = 0 for some ai ∈ K with ak ≠ 0 for some k. Then,

0 = det^k_0(M) = Σ_i ai det^k_{M^i}(M)

by Exercise C.3.5∗ and Proposition C.3.1. Now, by Proposition C.3.3, all the determinants vanish except the one with i = k. Thus, we get 0 = ak det M, which means det M = 0 as wanted.

Exercise C.3.7∗. Prove that an upper triangular matrix is invertible if and only if its determinant
is non-zero, i.e. if the elements on its diagonal are non-zero.

Solution

Let αi denote the ith element of the diagonal (i.e. the (i, i) coordinate). First, suppose that αi ≠ 0 for every i and that

Σ_{i=1}^{n} ai M^i = 0

for some ai ∈ K not all zero. Consider the least k such that ak ≠ 0. The kth coordinate of Σ_{i=1}^{n} ai M^i is ak αk, since ai = 0 for i < k and the kth coordinate of M^i is zero for i > k. This means that ak = 0, which contradicts our assumption.

For the converse, let k be such that αk = 0. Then, the columns M^k, M^{k+1}, . . . , M^n all have their top k coordinates zero. Thus, we can view them as vectors with n − k coordinates. We have n − k + 1 vectors in a space of dimension n − k, so they must be linearly dependent.

Exercise C.3.8. Prove that the ring Z̄ of algebraic integers is integrally closed, meaning that, if f is a monic polynomial with algebraic integer coefficients, then any of its roots is also an algebraic integer. (This is also Exercise 1.5.22†.)


Solution
Let f = Xn + αn−1Xn−1 + . . . + α0 be a monic polynomial with algebraic integer coefficients
and let β be one of its roots. Then,

M = Z[αn−1, . . . , α0, β]
is a finitely generated Z-module such that βM ⊆ M , so β is an algebraic integer.

Exercise C.3.9∗. Prove Lemma C.3.1.

Solution

By induction:

det A = Σ_{i=1}^{n} (−1)^{i−1} ai,1 det Ai,1
      = Σ_{i=1}^{n} (−1)^{i−1} ai,1 Σ_σ ε(σ) aσ(2),2 · . . . · aσ(n),n
      = Σ_{i=1}^{n} Σ_σ (−1)^{i−1} ε(σ) ai,1 aσ(2),2 · . . . · aσ(n),n

where the inner sum is over the bijections σ : [n] \ {1} → [n] \ {i}. This has the desired form. Finally, note that the sign of a1,1 · . . . · an,n is (−1)^{1−1} = 1 times the sign of a2,2 · . . . · an,n, which is 1 as wanted.

Exercise C.3.10∗. Prove that the number of derangements of [m] is
Σ_{i=0}^{m} (−1)^i m!/i!

and that this number is odd if m is even and even if m is odd.

Solution

We shall instead count the number of permutations with at least one fixed point, using the principle of inclusion–exclusion. For a fixed k, there are (n − 1)! permutations σ satisfying σ(k) = k. Thus, we count n · (n − 1)! = n! permutations. However, we have counted some permutations twice: the ones which have at least two fixed points. Thus, we remove C(n, 2) · (n − 2)! from our count (we need to choose two fixed points and then permute the remaining n − 2 elements arbitrarily). But now, we have removed some permutations too many times: the ones with at least three fixed points, so we must add C(n, 3) · (n − 3)! to our count, etc. At the end, we get

Σ_{k=1}^{n} (−1)^{k−1} C(n, k) (n − k)! = Σ_{k=1}^{n} (−1)^{k−1} n!/k!.

If we subtract this from n! (the total number of permutations), we get exactly the wanted formula. For the parity, note that all the terms m!/i! with i ≤ m − 2 are even (they are products of at least two consecutive integers), so the sum is congruent to m + 1 modulo 2: odd when m is even and even when m is odd.
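A brute-force cross-check of the formula and the parity claim (helper names are mine, not the book's):

```python
from itertools import permutations
from math import factorial

def derangements_brute(m):
    # count permutations of [m] with no fixed point directly
    return sum(all(s[i] != i for i in range(m)) for s in permutations(range(m)))

def derangements_formula(m):
    # the inclusion-exclusion formula: sum of (-1)^i m!/i!
    return sum((-1) ** i * factorial(m) // factorial(i) for i in range(m + 1))

assert all(derangements_brute(m) == derangements_formula(m) for m in range(1, 8))
# parity: odd exactly when m is even
assert all(derangements_formula(m) % 2 == (m + 1) % 2 for m in range(1, 10))
```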

Exercise C.3.11∗. Prove that the signature is negated when one exchanges two values of σ (i.e.
compose a transposition with σ).


Solution

Say that we exchange σ(i) with σ(j), i.e. we apply the transposition τ = τ_{σ(i),σ(j)}. We have

ε(σ)/ε(τ ◦ σ) = [ (σ(i)−σ(j))/(i−j) · ∏_{k≠i,j} (σ(k)−σ(i))/(k−i) · (σ(k)−σ(j))/(k−j) ] / [ (σ(j)−σ(i))/(i−j) · ∏_{k≠i,j} (σ(k)−σ(j))/(k−i) · (σ(k)−σ(i))/(k−j) ]
             = (σ(i)−σ(j)) / (σ(j)−σ(i))
             = −1

as wanted.

Exercise C.3.12∗. Prove that transpositions τi,j : i ↔ j and k → k for k = i, j generate all
permutations (through composition).

Solution
By induction: we start with τ_{n,σ^{−1}(n)}, so that (σ ◦ τ_{n,σ^{−1}(n)})(n) = n. Then, ignoring the last element, σ ◦ τ_{n,σ^{−1}(n)} is a permutation of [n − 1], so a composition of transpositions. We get the wanted result by applying τ_{n,σ^{−1}(n)}^{−1} = τ_{n,σ^{−1}(n)} to both sides.
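The induction above is effectively an algorithm: swap n into place, recurse on [n − 1]. A sketch (0-indexed, with `perm[i] = σ(i)`; function names are mine):

```python
def to_transpositions(perm):
    # repeatedly compose with a transposition that puts n in place,
    # then recurse on the remaining [n-1] (here: the loop)
    work, swaps = list(perm), []
    for n in range(len(perm) - 1, 0, -1):
        j = work.index(n)
        if j != n:
            work[j], work[n] = work[n], work[j]
            swaps.append((j, n))
    return swaps  # sigma composed with these (in order) is the identity

def compose_transpositions(swaps, n):
    # rebuild sigma by applying the transpositions in reverse order
    lst = list(range(n))
    for a, b in reversed(swaps):
        lst[a], lst[b] = lst[b], lst[a]
    return lst

sigma = [2, 0, 3, 1]
assert compose_transpositions(to_transpositions(sigma), 4) == sigma
```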

Exercise C.3.13∗. Prove Theorem C.3.3.

Solution

We proceed as in Exercise C.3.9∗: we have

det(ai,j) = Σ_{i=1}^{n} Σ_σ (−1)^{i−1} ε(σ) ai,1 aσ(2),2 · . . . · aσ(n),n,

where the inner sum is over the bijections σ : [n] \ {1} → [n] \ {i}. It remains to prove that ε(σ) = (−1)^{σ(1)−1} ε(σ′), where σ′ denotes the bijection [n] \ {1} → [n] \ {σ(1)} obtained by forgetting the first element. This is easy: we want to count the number of inversions of σ which are not inversions of σ′. These are exactly the inversions of the form (k, 1), since 1 is the only difference between σ and σ′. Since k > 1 for each k ≠ 1, the number of such inversions is the number of k with σ(k) < σ(1), i.e. σ(1) − 1 as wanted.

Exercise C.3.14∗. Prove that det A = det AT for any square matrix A.


Solution

Our formula from Theorem C.3.3 is symmetric in rows and columns:

det A = Σ_{σ∈Sn} ε(σ) aσ(1),1 · . . . · aσ(n),n
      = Σ_{σ∈Sn} ε(σ^{−1}) a1,σ^{−1}(1) · . . . · an,σ^{−1}(n)
      = Σ_{σ∈Sn} ε(σ) a1,σ(1) · . . . · an,σ(n)
      = det AT.

Exercise C.3.15∗. Prove that Dk(In−1) = (−1)k−1.

Solution

Consider the n × n matrix I we defined Dk with and substitute A = In−1. We will exchange
k − 1 columns to transform it into In, thus getting a determinant of (−1)k−1 det In = (−1)k−1
as wanted.

Note that I is already almost equal to In: its first column should be the kth one, and the columns 2 to k should be shifted one position to the left. Here is how we do this shift using transpositions.

We first exchange the first column of I with the second one so that a1,1 = 1 becomes the (1, 1) coordinate of I, then we exchange the (new) second column with the third one so that a2,2 = 1 becomes the (2, 2) coordinate of I, etc., until we exchange the (k − 1)th column with the kth one so that ak,k = 1 becomes the (k, k) coordinate of I. Thus, after these k − 1 exchanges, the original first column ends up as the kth one as wanted.

Exercise C.3.16. Prove that the determinant is multiplicative by using the explicit formula of
Theorem C.3.3.

Solution

Write C = AB, so that

C^k = b1,k A^1 + . . . + bn,k A^n.

Then, by multilinearity of the determinant,

det C = det(b1,1 A^1 + . . . + bn,1 A^n, . . . , b1,n A^1 + . . . + bn,n A^n)
      = Σ_{σ∈Sn} bσ(1),1 · . . . · bσ(n),n det(A^{σ(1)}, . . . , A^{σ(n)})
      = Σ_{σ∈Sn} ε(σ) bσ(1),1 · . . . · bσ(n),n det(A^1, . . . , A^n)
      = det B det A,

where the second-to-last equality comes from Remark C.3.4.
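Multiplicativity can be checked numerically with the explicit Leibniz formula itself; this sketch computes the signature by counting inversions (all names are mine):

```python
from itertools import permutations
from math import prod

def sign(p):
    # signature via inversion count
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def det(M):
    # Leibniz formula: sum over permutations of sign * product of entries
    n = len(M)
    return sum(sign(p) * prod(M[p[i]][i] for i in range(n))
               for p in permutations(range(n)))

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2, 0], [3, 1, 4], [0, 2, 2]]
B = [[2, 0, 1], [1, 1, 0], [0, 3, 1]]
assert det(matmul(A, B)) == det(A) * det(B)
```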

Exercise C.3.17. Let L/K be a finite extension. Prove that the determinant of the K-linear map
L → L defined by x → xα is the norm of α defined in Definition 6.2.3.


Solution

We first treat the case where L = K(α). Consider the basis 1, α, . . . , αn−1 of L and let f =

Xn + an−1Xn−1 + . . . + a0 be the minimal polynomial of α. The matrix corresponding to x → xα

in this basis is

[ 0  0  0  · · ·  −a0 ]
[ 1  0  0  · · ·  −a1 ]
[ 0  1  0  · · ·  −a2 ]
[ 0  0  1  · · ·  −a3 ]
[ ...              ... ]
[ 0  0  0  · · ·  −an−1 ].

Using Theorem C.3.3, we see that this is −ε(σ)a0, where σ is the cycle (1, 2, . . . , n). Hence, we
need to prove that ε(σ) = (−1)n−1 since the product of the conjugates of α is (−1)na0 by Vieta’s

formulas. This follows from the fact that

σ = (1, n) · . . . · (1, 3)(1, 2)

is a product of n − 1 transpositions.

Now here is what we can do for the general case. Note that the norm is multiplicative. Indeed,
for any linear maps ϕ and ψ,

det ϕ · det ψ = det ϕ ◦ ψ

since the determinant is multiplicative. Since the composite of x → αx and x → βx is x → αβx,
the norm of αβ is the norm of α times the norm of β. Then, pick a primitive element γ of L/K
such that α/γ is also a primitive element. We can do this since

σ(α)/σ(γ) = α/γ ⟺ σ(γ) = γ · σ(α)/α

is false for γ = aδ + b for well-chosen a, b ∈ K and a fixed primitive element δ. Thus, from the
first observation, we get

NL/K(α) = NL/K(γ) NL/K(α/γ)
        = ∏_{σ∈EmbK(L)} σ(γ) · ∏_{σ∈EmbK(L)} σ(α/γ)
        = ∏_{σ∈EmbK(L)} σ(α)

as wanted.

Exercise C.3.18∗. Prove that adj(A) A = (det A)In.

Solution

Set (bi,j) = adj(A) A. This time, we have

bi,j = Σ_{k=1}^{n} (−1)^{i+k} det(Ak,i) ak,j.

When i = j, this is the column expansion of the determinant of A, which is det A; and when i ≠ j, this is the expansion along the ith column of the matrix obtained by replacing the ith column of A by its jth column. This matrix has two identical columns, so its determinant is zero as wanted.
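The identity adj(A) A = (det A)In can be verified directly from the cofactor definition; a minimal sketch (helper names are mine):

```python
def det(M):
    # Laplace expansion along the first column
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** i * M[i][0] * det([r[1:] for k, r in enumerate(M) if k != i])
               for i in range(n))

def minor(M, i, j):
    # remove row i and column j
    return [[M[r][c] for c in range(len(M)) if c != j]
            for r in range(len(M)) if r != i]

def adjugate(M):
    # (i, j) entry of adj(M) is the (j, i) cofactor of M
    n = len(M)
    return [[(-1) ** (i + j) * det(minor(M, j, i)) for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
d = det(A)  # -2
assert matmul(adjugate(A), A) == [[d, 0], [0, d]]
```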


C.4 Linear Recurrences

Exercise C.4.1. Prove that Theorem C.4.1 holds in a field K of characteristic p ≠ 0 as long as the multiplicities of the roots of the characteristic polynomial are at most p. In particular, for a fixed characteristic equation, it holds for sufficiently large p.

Solution

The only thing to check is that if fi(n) = 0 for all n ∈ Z and deg fi is less than the multiplicity of αi, so less than p, then fi = 0. This is true because Z reduces to p elements in a field of characteristic p ≠ 0, so fi has p roots, which is more than its degree, and fi is thus zero.

C.5 Exercises

Vector Spaces and Bases

Exercise C.5.1 (Grassmann’s Formula). Let U be a vector space and V, W be two finite-dimensional
subspaces of U . Prove that

dim(V + W ) = dim V + dim W − dim(V ∩ W ).

Solution

Let u1, . . . , uk be a basis of V ∩ W . Complete it to a basis u1, . . . , uk, v1, . . . , vm of V and a basis
u1, . . . , uk, w1, . . . , wn of W . We claim that

u1, . . . , uk, v1, . . . , vm, w1, . . . , wn

is a basis of V + W . It clearly spans all of V + W so it remains to check that it’s linearly

independent. If

Σ_{i=1}^{k} ai ui + Σ_{i=1}^{m} bi vi + Σ_{i=1}^{n} ci wi = 0,

then Σ_{i=1}^{m} bi vi = −Σ_{i=1}^{k} ai ui − Σ_{i=1}^{n} ci wi is both in V and in W, so it is in V ∩ W. This means that it is a linear combination of the ui, but by construction this implies b1 = . . . = bm = 0 since the ui and vi are linearly independent (they form a basis of V together). By symmetry, c1 = . . . = cn = 0. Finally, this forces a1 = . . . = ak = 0 too.

We conclude that

dim(V + W ) = k + m + n = (k + m) + (k + n) − k = dim V + dim W − dim V ∩ W.

Exercise C.5.3†. Given a vector space V of dimension n, we say a subspace H of V is a hyperplane
of V if it has dimension n − 1. Prove that H is a hyperplane of Kn if and only if there are elements
a1, . . . , an ∈ K not all zero such that

H = {(x1, . . . , xn) ∈ Kn | a1x1 + . . . + anxn = 0}.

Solution

Clearly, if H is defined as the zero set of a1X1 + . . . + anXn, then H has dimension n − 1 since, assuming without loss of generality that an ≠ 0, we get a bijective map Kn−1 → H given by

(x1, . . . , xn−1) → (x1, . . . , xn−1, −(a1x1 + . . . + an−1xn−1)/an).

For the converse, pick a linear map ϕ mapping H to 0 without being identically 0. Then,

x = (x1, . . . , xn) ∈ H ⟺ ϕ(x) = 0 ⟺ Σ_{i=1}^{n} xi ϕ(ei) = 0,

where (ei) is the canonical basis of Kn: ei is 0 everywhere except in its ith coordinate, where there is a 1.

Determinants

Exercise C.5.6. Let a0, . . . , an−1 be elements of K and ω a primitive nth root of unity. Prove that the circulant determinant

[ a0    a1    · · ·  an−1 ]
[ an−1  a0    · · ·  an−2 ]
[  ...   ...  . . .   ... ]
[ a1    a2    · · ·  a0   ]

is equal to

f(1) f(ω) f(ω²) · . . . · f(ω^{n−1})

where f = a0 + a1X + . . . + an−1X^{n−1}. Deduce that this determinant is congruent to a0 + . . . + ap−1 modulo p when n = p is prime and a0, . . . , ap−1 are integers.

Solution

Let A be the circulant matrix. We shall imitate the proof of Theorem C.3.2: we need to prove
that det A is zero when a0 = −(a1ω + . . . + an−1ωn−1) for any root of unity ω. This shows
that the product f (ω) divides the determinant as a polynomial in a0. Then, by looking at the
coefficient of a0n, we can conclude that they are in fact equal. To show that the determinant
vanishes for these a0, note that the following linear combination of the columns is zero

n

ωiAi = 0.

i=1

Alternatively, one could note that A = f (J), where is the circulant matrix with a1 = 1 and
a0 = a2 = a3 = . . . = an−1 = 0. We can check that all nth roots of unity are eigenvalues of J.
To finish, Exercise C.5.15 implies that the eigenvalues of f (J) are the f (ω), and the determinant
is their product as wanted.
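A numerical sanity check of the product formula for a sample vector (the `det` helper and the specific values a = (1, 2, 3, 4) are mine, chosen for illustration):

```python
import cmath

def det(M):
    # Laplace expansion along the first column
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** i * M[i][0] * det([r[1:] for k, r in enumerate(M) if k != i])
               for i in range(n))

a = [1, 2, 3, 4]
n = len(a)
# circulant matrix: row i is a shifted right by i
C = [[a[(j - i) % n] for j in range(n)] for i in range(n)]

f = lambda x: sum(a[k] * x ** k for k in range(n))
omega = cmath.exp(2j * cmath.pi / n)
product = 1
for j in range(n):
    product *= f(omega ** j)  # product over ALL nth roots of unity, including 1

assert det(C) == -160
assert abs(product - det(C)) < 1e-6
```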

Exercise C.5.7 (Cramer's Rule). Consider the system of equations M V = X where M is an n × n matrix and V = (vi)i∈[[1,n]] and X = (xi)i∈[[1,n]] are column vectors. Prove that, for any k ∈ [[1, n]], vk is equal to det Mk,X / det M, where Mk,X denotes the matrix [M^1, . . . , M^{k−1}, X, M^{k+1}, . . . , M^n] obtained from M by replacing the kth column by X.

Solution

Note that the formula det Mk,X / det M is linear in X since the determinant is multilinear. Since the kth coordinate of the solution V = M^{−1}X is also linear in X, we just need to check that both formulas agree on a basis of Kn. This is easy: when X = M^k we get vk = 1 and vi = 0 for i ≠ k, which is indeed the solution to M V = M^k. Since the columns M^k form a basis of Kn as M is invertible, we are done.
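Cramer's rule is easy to check with exact rational arithmetic; a minimal sketch (the 3 × 3 system is a made-up example, `det` is an ad hoc helper):

```python
from fractions import Fraction

def det(M):
    # Laplace expansion along the first column
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** i * M[i][0] * det([r[1:] for k, r in enumerate(M) if k != i])
               for i in range(n))

def cramer_solve(M, X):
    # v_k = det(M with column k replaced by X) / det(M)
    n, d = len(M), det(M)
    return [Fraction(det([[X[i] if j == k else M[i][j] for j in range(n)]
                          for i in range(n)]), d) for k in range(n)]

M = [[2, 1, -1], [1, 3, 2], [1, 0, 1]]
X = [3, 13, 4]
V = cramer_solve(M, X)
# verify M V = X exactly
assert all(sum(M[i][j] * V[j] for j in range(3)) == X[i] for i in range(3))
```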

Exercise C.5.8†. Let (un)n≥0 be a sequence of elements of a field K. Suppose that the (m+1)×(m+1)
determinant det(un+i+j)i,j∈[[0,m]] is 0 for all sufficiently large n. Prove that there is some N such that
(un)n≥N is a linear recurrence of order at most m.

Solution

We proceed by induction on m (it's trivial when m = 0). More precisely, we prove that, under the assumptions of the problem, if the m × m determinant det(un+i+j)i,j∈[[0,m−1]] vanishes for one value of n, then it vanishes for all the next ones, which means that there exists some N for which (un)n≥N is a linear recurrence of order at most m − 1 ≤ m. If it doesn't vanish for n ≥ N, then (un+i+j)i,j∈[[0,m]] has rank m by definition of the rank (see Exercise C.5.26†), and, more precisely, its first m rows as well as its last m rows are linearly independent and generate the same hyperplane H. Notice that the last m rows of (un+i+j)i,j∈[[0,m]] are the first m rows of (un+1+i+j)i,j∈[[0,m]], so this hyperplane H is always the same. Finally, with Exercise C.5.3†, we conclude that

a0un + a1un+1 + . . . + amun+m = 0

for all n ≥ N, i.e. (un)n≥N is a linear recurrence of order at most m as claimed.

It remains to prove that, if det(un+i+j)i,j∈[[0,m−1]] = 0, then det(un+1+i+j)i,j∈[[0,m−1]] = 0 as well. Hence, suppose that the first determinant is 0, i.e. that there is a linear dependence between the rows. If this dependence does not involve the first row, then it also yields a linear dependence between the rows of the second matrix, which implies that its determinant is 0 as wanted. Otherwise, the first row is a linear combination of the m − 1 next ones. This implies that the first row of (un+i+j)i,j∈[[0,m]] is a linear combination of the m − 1 next ones plus a vector of the form (0, . . . , 0, a) for some a ∈ K. Then, by performing row operations, we find

0 = det(un+i+j)i,j∈[[0,m]] = ± det [ 0      · · ·  0        a       ]
                                  [ un+1   · · ·  un+m     un+m+1  ]
                                  [  ...           ...      ...    ]
                                  [ un+m   · · ·  un+2m−1  un+2m   ]  = ±a det(un+1+i+j)i,j∈[[0,m−1]]

by expanding with respect to the first row. We are done.

Exercise C.5.9†. Let f1, . . . , fn : N → C be functions which grow at different rates, i.e.

f1(m)/f2(m), f2(m)/f3(m), . . . , fn−1(m)/fn(m) → 0 as m → ∞.

Prove that there exist n integers m1, . . . , mn such that the tuples

(f1(m1), . . . , fn(m1)), . . . , (f1(mn), . . . , fn(mn))

are linearly independent over C.

Solution
We proceed by induction on n. It is of course trivial when n = 1. Fix m1, . . . , mn−1 such that

(f1(m1), . . . , fn−1(m1)), . . . , (f1(mn−1), . . . , fn−1(mn−1))


are linearly independent, i.e. such that the determinant

C = det [ f1(m1)     · · ·  fn−1(m1)    ]
        [  ...               ...        ]
        [ f1(mn−1)   · · ·  fn−1(mn−1)  ]

is non-zero. We wish to show that there is some mn such that

det [ f1(m1)    · · ·  fn(m1)    ]
    [  ...              ...      ]
    [ f1(mn−1)  · · ·  fn(mn−1)  ]
    [ f1(mn)    · · ·  fn(mn)    ]

is non-zero. Expand it with respect to the last row to get

Σ_{i=1}^{n} Ci fi(mn)

where the Ci are constants and Cn = ±C. Since fn dominates all the other fi and Cn is non-zero by construction, this is non-zero for sufficiently large mn as wanted.

Algebraic Combinatorics

Exercise C.5.12†. Let A1, . . . , An+1 be non-empty subsets of [n]. Prove that there exist disjoint non-empty subsets I and J of [n + 1] such that

∪_{i∈I} Ai = ∪_{j∈J} Aj.

Solution

Identify each subset Ai ⊆ [n] with the vector vi ∈ Rn whose jth coordinate is 1 if j ∈ Ai and 0 otherwise. This makes the set of subsets of [n] into a subset of an R-vector space of dimension n. (It's not a vector space itself, though it would be if we chose F2 instead of R. F2 is usually very useful in algebraic combinatorics but doesn't work here.) We have n + 1 vectors in a space of dimension n, so they must be linearly dependent, say

Σ_{i=1}^{n+1} ci vi = 0

with the ci not all zero. Now consider the set I of indices of positive ci, and the set J of indices of negative ci; both are non-empty since the vi are non-zero with non-negative coordinates. We have

Σ_{i∈I} |ci| vi = Σ_{j∈J} |cj| vj,

and a coordinate of the left-hand side is non-zero if and only if the corresponding coordinate of the right-hand side is, which gives us ∪_{i∈I} Ai = ∪_{j∈J} Aj as wanted. In addition, I and J are disjoint by construction.

The Characteristic Polynomial and Eigenvalues

Exercise C.5.15 (Characteristic Polynomial). Let K be an algebraically closed field. Let M ∈ Kn×n be an n × n matrix. Define its characteristic polynomial as χM = det(M − XIn). Its roots (counted with multiplicity) are called the eigenvalues λ1, . . . , λn ∈ K of M. Prove that det M is the product of the eigenvalues of M, and that Tr M is the sum of the eigenvalues. In addition, prove that λ is an eigenvalue of M if and only if there is a non-zero column vector V such that M V = λV (in other words, M acts like a homothety on V). Conclude that, if f ∈ C[X] is a polynomial, the eigenvalues of f(M) are the f(λi) (with multiplicity). (We are interpreting 1 ∈ K as In for f(M) here, i.e., if f = X + 1, f(M) is M + In.) In particular, the eigenvalues of M + αIn are λ1 + α, . . . , λn + α, and the eigenvalues of M^k are λ1^k, . . . , λn^k.1

Solution

The first part follows simply from expanding the determinant and using Vieta's formulas. For the second one, there is a non-zero vector V such that M V = λV ⟺ (M − λIn)V = 0 if and only if ker(M − λIn) is non-trivial, which is equivalent to det(M − λIn) = 0. Now, note that M V = λV gives M^k V = λ^k V and thus, by taking linear combinations, f(M)V = f(λ)V. This shows that the f(λ) are eigenvalues of f(M). To account for the multiplicity, note that we have established the result when χM has simple roots, and that the general result follows by density, analytic or algebraic. We present the proof by algebraic density (more technically, Zariski density) since it works over any field, which is similar to Remark C.3.7.

Note that the equality χf(M) = ∏_{i=1}^{n} (f(λi) − X) is a polynomial equality in the coefficients of f and the coordinates of M by the fundamental theorem of symmetric polynomials. Let ∆f be the discriminant of ∏_{i=1}^{n} (f(λi) − X), i.e. ± ∏_{i≠j} (f(λi) − f(λj)), which is again polynomial in the coefficients of f and the coordinates of M. We have shown that

(χf(M) − ∏_{i=1}^{n} (f(λi) − X)) ∆f

always takes the value zero. Hence, it must be the zero polynomial. If we show that ∆f is not identically zero, we are hence done. Choosing f = X, this amounts to finding a matrix with distinct eigenvalues. This is not very hard: we can fix all coordinates of M except one and let it vary. The characteristic polynomial then varies affinely in this coordinate, say it is ua + v, where a is the varying coordinate. By induction, we can choose u to have simple roots. A multiple root of ua + v would also be a root of u, but this is impossible for large a unless this root is a common root of u and v, which must thus have multiplicity one: this is a contradiction. (Alternatively, we can consider the matrix J from Exercise C.5.6.)

Exercise C.5.16 (Cayley-Hamilton Theorem). Prove that, for any n × n matrix M , χM (M ) = 0

where χM is the characteristic polynomial of M and 0 = 0In. Conclude that, if every eigenvalue of M
is zero, M is nilpotent, i.e. M k = 0 for some k.2

Solution

Using Proposition C.3.7, we get

(M − XIn) adj(M − XIn) = adj(M − XIn)(M − XIn) = χM In. (*)

We wish to substitute M for X in M − XIn, but we can only do that if M commutes with the
coefficients of adj(M − XIn) (which are matrices). Note that this is the case since

XIn adj(M − XIn) = adj(M − XIn)XIn

(X is a formal variable so commutes with everything, and In does as well) so M adj(M − XIn) =

adj(M − XIn)M by (∗). The second part is obvious: if all eigenvalues of M are zero, then
χM = ±Xn so M n = 0.
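As a numeric cross-check (not the book's proof), one can compute det(XIn − M) = X^n − e1X^{n−1} + e2X^{n−2} − . . . , where ek is the sum of the k × k principal minors (this differs from χM = det(M − XIn) only by a factor (−1)^n), and evaluate this monic polynomial at M with Horner's scheme; Cayley–Hamilton says the result is the zero matrix. The helper names and the sample matrix are mine:

```python
from itertools import combinations

def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** i * M[i][0] * det([r[1:] for k, r in enumerate(M) if k != i])
               for i in range(n))

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def char_poly(M):
    # coefficients of det(XI - M), highest degree first
    n = len(M)
    coeffs = [1]
    for k in range(1, n + 1):
        e_k = sum(det([[M[i][j] for j in S] for i in S])
                  for S in combinations(range(n), k))
        coeffs.append((-1) ** k * e_k)
    return coeffs

def poly_at_matrix(coeffs, M):
    # Horner evaluation of the polynomial at the matrix M
    n = len(M)
    R = [[0] * n for _ in range(n)]
    for c in coeffs:
        R = matmul(R, M)
        for i in range(n):
            R[i][i] += c
    return R

M = [[2, 1, 0], [0, 1, -1], [3, 0, 1]]
assert poly_at_matrix(char_poly(M), M) == [[0] * 3 for _ in range(3)]
```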

1One of the advantages of the characteristic polynomial is that we are able to use algebraic number theory, or more

generally polynomial theory, to deduce linear algebra results, since the eigenvalues say a lot about a matrix (if we combine

this with the Cayley-Hamilton theorem). See for instance Exercise C.5.18 and the third solution of Exercise C.5.19.
2Note that if, in the definition of χM, we replace det by an arbitrary multilinear form in the coordinates of M, such as the permanent perm(A) = Σ_{σ∈Sn} a1,σ(1) · . . . · an,σ(n), the result becomes false, so we cannot just say that "χM(M) = det(M − M In) = det 0 = 0" (this "proof" is nonsense because the scalar 0 is not the matrix 0, but the point is that this intuition is fundamentally incorrect).


Exercise C.5.19. Let p be a prime number, and G be a finite (multiplicative) group of n × n matrices
with integer coordinates. Prove that two distinct elements of G stay distinct modulo p. What if the
elements of G only have algebraic integer coordinates and p is an algebraic integer with all conjugates
greater than 2 in absolute value?

Solution

We will present three solutions, in increasing order of non-elementariness, and of generality.

Suppose that the reduction modulo p (group) morphism ϕ is not injective, i.e. has non-trivial
kernel (see Exercise A.2.13∗ – this is really trivial, I’m only using this language so that you
familiarise with it), say I := In = A ⊆∈ ker ϕ, i.e. A = I + pB for some non-zero B. Since G
is a finite group, we have Am = I for m = |G| by Lagrange’s theorem Exercise A.3.17†. We will

show that this is impossible (if fact it is equivalent to the problem: if it were possible, the group

generated by M would be a counterexample). (Note that all solutions will use the assumption
that p = 2 somewhere, and for a good reason: −I ≡ I (mod 2).)

Here is the first solution, which works only for the first part of the problem. Let k be the greatest
integer such that B/pk has integer coordinates. Suppose first that p m. Then, modulo pk+2,

using the binomial expansion (which we can use since I and pB commute), we have

(I + pB)m ≡ I + mpB ≡ I (mod pk+2)

which is a contradiction. We will now replace B by C such that (I + pB)p = I + pC. That, way,
m gets replaced by m/p, and by iterating this process we will eventually reach a p m which is
a contradiction. However, we need to prove that C = 0 too. Thus, suppose that (I + pB)p = I.
We then have

(I + pB)m ≡ I + mpB + m(m − 1) p2B2equivI + p2B ≡ I (mod pk+3)
2

since p is odd, which is also a contradiction.

We now present the second solution. Let |M| denote the maximum of the absolute values of the coordinates of a matrix M. Since Am = I for some m, |Ak| is bounded when k varies, say by C. We have

|(pB)^r| = |(A − I)^r| ≤ Σ_{k=0}^{r} C(r, k) |A^k| ≤ C2^r.

However, the entries of the left-hand side are divisible by p^r, so are at least p^r unless they are 0. Since p > 2, by taking r sufficiently large, we get |(pB)^r| = 0, i.e. B^r = 0: B is nilpotent. When p is only an algebraic integer with all conjugates greater than 2 in absolute value, the same argument works: the coordinates of B^r are algebraic integers with absolute value less than C2^r/p^r, so less than 1 for large r. However, the same goes for their conjugates by assumption. Since the only algebraic integer whose conjugates are all strictly in the unit disk is 0 (by looking at the constant coefficient of its minimal polynomial), we get B^r = 0 for large r as wanted.

Consider k such that B^k ≠ 0 but B^{k+1} = 0. Since B ≠ 0, we have k ≥ 1. By expanding (I + pB)m = I, we get

pmB + p² (m(m−1)/2) B² + . . . = 0.

Finally, by multiplying this equation by B^{k−1}, we get pmB^k = 0, which implies m = 0 and is a contradiction.

Finally, the third solution uses more advanced linear algebra. Let β be an eigenvalue of B. Then, α = 1 + pβ is an eigenvalue of A which is congruent to 1 modulo p. Further, since Am = I, we have αm = 1, so it is a root of unity. This implies that β = (α − 1)/p has modulus less than 1 since p > 2. This is also true for all its conjugates. Thus, the constant coefficient of its minimal polynomial must be 0, i.e. β = 0. Hence, all eigenvalues of B are zero, which implies that B is nilpotent by Exercise C.5.16. We finish as in the previous solution.


Miscellaneous

Exercise C.5.20 (USA TST 2019). For which integers n does there exist a function f : Z/nZ → Z/nZ
such that

f, f + id, f + 2id, . . . , f + mid

are all bijections?

Solution

There exists such a function if and only if all prime factors of n are greater than m + 1. In that case, it is clear that f = id works. Now suppose that n has a prime factor p ≤ m + 1. Pick p to be minimal, and suppose without loss of generality that m = p − 1. For 1 ≤ k ≤ m, since f and f + k·id are both bijections, we have

Σ_{x=1}^{n} f(x)^m ≡ Σ_{x=1}^{n} (f(x) + kx)^m = Σ_{i=0}^{m} C(m, i) k^{m−i} Σ_{x=1}^{n} f(x)^i x^{m−i},

i.e.

Σ_{i=0}^{m−1} C(m, i) k^{m−i} Σ_{x=1}^{n} f(x)^i x^{m−i} ≡ 0.

Thus, we have a linear system in the k^{m−i} with solution xi = C(m, i) Σ_{x=1}^{n} f(x)^i x^{m−i}. By Vandermonde, the determinant of the matrix M = (k^{m−i})k,i is

∏_{1≤i<j≤m} (j − i),

which is invertible modulo n since p is the smallest prime factor of n by assumption and m = p − 1. Thus, by Exercise C.3.18∗, M is invertible modulo n, which implies that our system has exactly one solution. Since x0 ≡ . . . ≡ xm−1 ≡ 0 is of course a solution, this must in fact be the case. In particular,

x0 = Σ_{x=1}^{n} x^{p−1}

is zero modulo n. We will prove that this is impossible. If p = 2, this sum is congruent to n(n − 1)/2 modulo n, so n | n(n − 1)/2, which implies that n is odd and is a contradiction.

Now suppose that p is odd. Let k = vp(n). Since this sum is congruent to

(n/p^k) Σ_{x=1}^{p^k} x^{p−1}

modulo p^k, it suffices to prove that p^k ∤ Σ_{x=1}^{p^k} x^{p−1}. Let g be a primitive root modulo p^k; there exists one by Exercise 3.5.18†. Then,

Σ_{x∈(Z/p^kZ)×} x^{p−1} ≡ Σ_{j=1}^{p^{k−1}(p−1)} g^{j(p−1)} = g^{p−1} · (g^{p^{k−1}(p−1)²} − 1)/(g^{p−1} − 1).

By LTE (Theorem 3.4.3), the p-adic valuation of this is k − 1 < k. To take care of the terms of the sum which are divisible by p, simply note that, for ℓ ≥ 1,

vp( Σ_{vp(x)=ℓ, x∈Z/p^kZ} x^{p−1} ) = ℓ(p − 1) + vp( Σ_{x∈(Z/p^{k−ℓ}Z)×} x^{p−1} ) = ℓ(p − 1) + (k − ℓ − 1) ≥ k

by our previous computation.


Exercise C.5.21 (Finite Fields Kakeya Conjecture, Zeev Dvir). Let n ≥ 1 be an integer and F a finite field. We say a set S ⊆ Fn is a Kakeya set if it contains a line in every direction, i.e., for every y ∈ Fn, there exists an x ∈ Fn such that S contains the line x + yF. Prove that any polynomial of degree less than |F| vanishing on a Kakeya set must be zero. Deduce that there is a constant cn > 0 such that, for any finite field F, any Kakeya set of Fn has cardinality at least cn|F|n.

Solution

Let q be the cardinality of F. The proof will be in two steps. Suppose that f is a non-zero polynomial of degree d < q vanishing on a Kakeya set S. Fix any y ∈ Fn. Then, for some x ∈ Fn, f(x + ty) = 0 for any t ∈ F. The polynomial f(x + Ty) has more roots than its degree, so it is zero. Let g be the homogeneous part of f of degree d, i.e. the polynomial formed by the monomials of degree d of f. Notice that the coefficient of T^d in f(x + Ty) is exactly g(y). Hence, g(y) = 0 for any y ∈ Fn, which implies that g = 0 by Exercise A.1.7∗. This contradicts the assumption that f had degree d.

For the second step, note that the dimension of the vector space V of polynomials of degree at most q − 1 is C(n + q − 1, n). Indeed, the monomials X1^{d1} · . . . · Xn^{dn} for d1 + . . . + dn ≤ q − 1 form a basis of this space. However, the number of such tuples is the same as the number of ways to choose n elements from [n + q − 1]: choose a1 < . . . < an and decide that d1 + 1 = a1, d1 + d2 + 2 = a2, etc., until d1 + . . . + dn + n = an (this technique is usually called stars and bars because you have q − 1 stars and n bars used to separate them). Now consider the linear map T : V → F^{|S|} defined by T(f) = (f(s))s∈S. If |S| < dim V, it must have a non-trivial kernel by Proposition C.1.2 (or the rank–nullity theorem). This contradicts the first step.

We conclude that

$$|S| \geq \binom{n+q-1}{n} = \frac{q(q+1) \cdots (n+q-1)}{n!} \geq \frac{q^n}{n!},$$

so we can take c_n = 1/n!.
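For q = 3 and n = 2 the bound gives |S| ≥ C(4, 2) = 6, which is small enough to check against an exhaustive search. The following Python sketch of ours (not part of the solution) brute-forces the smallest Kakeya set of F_3^2:

```python
from itertools import combinations, product
from math import comb

q, n = 3, 2
points = list(product(range(q), repeat=n))
directions = [y for y in points if y != (0,) * n]

def line(x, y):
    """The line x + Fy in (Z/qZ)^n."""
    return frozenset(tuple((xi + t * yi) % q for xi, yi in zip(x, y))
                     for t in range(q))

# all lines with a given direction (the 9 choices of x collapse to 3 lines)
lines_by_dir = {y: {line(x, y) for x in points} for y in directions}

def is_kakeya(S):
    """S contains a full line in every direction."""
    S = set(S)
    return all(any(l <= S for l in lines_by_dir[y]) for y in directions)

# exhaustive search for the smallest Kakeya set
min_size = next(k for k in range(1, q**n + 1)
                if any(is_kakeya(S) for S in combinations(points, k)))
assert min_size >= comb(n + q - 1, n)  # the bound |S| >= 6 from the solution
```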

Exercise C.5.22 (Siegel's Lemma). Let a = (a_{i,j}) be an m × n matrix with integer entries. Prove that, if n > m, the system

$$\sum_{j=1}^{n} a_{i,j} x_j = 0, \qquad i = 1, \ldots, m,$$

always has a non-trivial solution in integers with

$$\max_i |x_i| \leq \left( n \max_{i,j} |a_{i,j}| \right)^{\frac{m}{n-m}}.$$

Solution

Let M = max_{i,j} |a_{i,j}|. Fix a constant N to be chosen later. First, a counting observation: suppose that the integers a_1, ..., a_k are negative while a_{k+1}, ..., a_n are non-negative. Then, for any (x_1, ..., x_n) ∈ [[0, N]]^n, we have

$$N(a_1 + \ldots + a_k) \leq a_1 x_1 + \ldots + a_n x_n \leq N(a_{k+1} + \ldots + a_n).$$

Thus, the expression a_1x_1 + ... + a_nx_n can take at most 1 + N(|a_1| + ... + |a_n|) values.

Now, we return to the problem. Set A = (a_{i,j}). We have shown that, when X ∈ [[0, N]]^n, each row of AX can take at most 1 + NnM values. Thus, when X ranges through [[0, N]]^n, AX takes at most (1 + NnM)^m ≤ (1 + N)^m(nM)^m values. Since X can take (1 + N)^n values, if

$$(1 + N)^n > (1 + N)^m (nM)^m \iff (1 + N)^{n-m} > (nM)^m \iff 1 + N > (nM)^{\frac{m}{n-m}},$$

then one value will be taken twice by the pigeonhole principle, say AX = AY with X ≠ Y. This yields AZ = 0 for the non-zero Z = X − Y ∈ [[−N, N]]^n, as wanted. It is clear that N = ⌊(nM)^{m/(n−m)}⌋ works and gives us what we want.
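The bound can be checked by brute force on small instances. Below is a sketch of ours (the helper name `siegel_solution` is hypothetical, and the search is exhaustive rather than following the pigeonhole argument); by the lemma, every 1 × 3 system with entries bounded by 2 must have a non-zero solution within the bound, and the loop confirms it:

```python
from itertools import product
from math import floor

def siegel_solution(A):
    """Search [[-B, B]]^n, B = floor((nM)^(m/(n-m))), for a nonzero x with Ax = 0."""
    m, n = len(A), len(A[0])
    M = max(abs(a) for row in A for a in row)
    # M = 0 is degenerate (any nonzero x works); use 1 to keep B positive
    B = floor((n * max(M, 1)) ** (m / (n - m)))
    for x in product(range(-B, B + 1), repeat=n):
        if any(x) and all(sum(a * xi for a, xi in zip(row, x)) == 0 for row in A):
            return x
    return None

# every 1x3 system with entries in [-2, 2] has a small nonzero solution
assert all(siegel_solution([list(row)]) is not None
           for row in product(range(-2, 3), repeat=3))
```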

Exercise C.5.24. How many invertible n × n matrices are there over F_p? Deduce the number of (additive) subgroups of cardinality p^m that (Z/pZ)^n has.

Solution

We proceed inductively to determine the number of tuples of k linearly independent vectors. At first, we can pick any non-zero vector in F_p^n; there are thus p^n − 1 choices. Then, we can pick any vector which is not a linear combination of the first one; there are thus p^n − p possible choices. Continuing like that, if we have picked k vectors, their linear combinations generate p^k elements, so we have p^k elements to avoid and thus p^n − p^k possibilities for the next vector. In conclusion, the number of invertible n × n matrices with coefficients in F_p is

$$(p^n - 1)(p^n - p) \cdots (p^n - p^{n-1}).$$

For the second part, note that a subgroup of (Z/pZ)^n is an F_p-vector space, and the fact that it has p^m elements means that its dimension is m. Thus, we want to count the subspaces of F_p^n of dimension m. Here is how we will proceed: we count the number of tuples of m linearly independent elements, and divide this by the number of such tuples which generate a fixed subspace. We have already computed the first one: it is

$$(p^n - 1) \cdots (p^n - p^{m-1}).$$

We have also determined the second: a fixed subspace of dimension m has

$$(p^m - 1) \cdots (p^m - p^{m-1})$$

bases. We conclude that (Z/pZ)^n has

$$\frac{(p^n - 1) \cdots (p^n - p^{m-1})}{(p^m - 1) \cdots (p^m - p^{m-1})}$$

subgroups of cardinality p^m.
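The quotient above is easy to cross-check by brute force for small parameters. A sketch of ours (the name `num_subspaces` is our own): enumerate the subsets of (Z/2Z)^3 of each size 2^m that are closed under addition, and compare with the formula.

```python
from itertools import combinations, product

def num_subspaces(p, n, m):
    """(p^n - 1)...(p^n - p^(m-1)) / ((p^m - 1)...(p^m - p^(m-1)))"""
    num = den = 1
    for k in range(m):
        num *= p**n - p**k
        den *= p**m - p**k
    return num // den

p, n = 2, 3
vecs = list(product(range(p), repeat=n))

def add(u, v):
    return tuple((a + b) % p for a, b in zip(u, v))

# subgroups of (Z/2Z)^3 of size 2^m = subsets containing 0 closed under addition
for m in range(n + 1):
    count = sum(1 for S in map(set, combinations(vecs, p**m))
                if (0,) * n in S and all(add(u, v) in S for u in S for v in S))
    assert count == num_subspaces(p, n, m)
```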

Exercise C.5.25†. Let K be a field, and let S ⊆ K² be a set of n points. Prove that there exists a nonzero polynomial f ∈ K[X, Y] of degree at most √2n such that f(x, y) = 0 for every (x, y) ∈ S.

Solution

Write $f = \sum_{i,j} a_{i,j} X^i Y^j$ where the sum ranges over the pairs (i, j) with $i + j \leq \lfloor\sqrt{2n}\rfloor$. By stars and bars, there are $\binom{2+\lfloor\sqrt{2n}\rfloor}{2}$ such pairs: we choose the two values i and i + j + 1 in $[[0, \lfloor\sqrt{2n}\rfloor + 1]]$. Since

$$\binom{2+\lfloor\sqrt{2n}\rfloor}{2} = \frac{(1+\lfloor\sqrt{2n}\rfloor)(2+\lfloor\sqrt{2n}\rfloor)}{2} > \frac{\sqrt{2n} \cdot \sqrt{2n}}{2} = n,$$

we have more unknowns than equations, so the homogeneous linear system f(x, y) = 0 for (x, y) ∈ S has a non-trivial solution.
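The key counting inequality, that degree ⌊√2n⌋ already gives strictly more monomials than points, can be verified numerically (a quick sketch of ours):

```python
from math import comb, isqrt

# number of monomials X^i Y^j with i + j <= floor(sqrt(2n)) is C(floor(sqrt(2n)) + 2, 2),
# and it exceeds n, so the linear system has more unknowns than equations
ok = all(comb(isqrt(2 * k) + 2, 2) > k for k in range(1, 100_000))
assert ok
```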

Exercise C.5.26†. Given an m×n matrix M , we define its row rank as the maximal number of linearly
independent rows of M . Similarly, its column rank is the maximal number of linearly independent
columns of M . Prove that these two numbers are the same, called the rank of M and denoted rank M .


Solution

Without loss of generality, by removing some rows if necessary, suppose that all m rows of M
are linearly independent. We will prove that M has at least m linearly independent columns,
which implies that the column rank is greater than or equal to the row rank (if we add rows the
linearly independent columns stay linearly independent). Taking the transpose then yields the
reverse inequality, so they are in fact both equal.

Suppose for the sake of a contradiction that M has at most m − 1 linearly independent columns, say M^1, ..., M^k. If we consider the m vectors corresponding to the rows of [M^1, ..., M^k], they are linearly dependent by Proposition C.1.2. Now, if we add a column M^{k+1} which is a linear combination of M^1, ..., M^k, the vectors corresponding to the rows of [M^1, ..., M^{k+1}] stay linearly dependent. Indeed, if

$$M^{k+1} = \sum_{i=1}^{k} a_i M^i$$

and the linear dependence of the rows of [M^1, ..., M^k] is

$$\sum_{i=1}^{m} b_i m_{i,j} = 0$$

for any j ∈ [k], then

$$\sum_{i=1}^{m} b_i m_{i,k+1} = \sum_{i=1}^{m} b_i \sum_{j=1}^{k} a_j m_{i,j} = \sum_{j=1}^{k} a_j \sum_{i=1}^{m} b_i m_{i,j} = 0.$$

In other words, a linear dependence between the rows extends, with the same coefficients, when we add a linearly dependent column. Continuing like that shows that, finally, the rows of [M^1, ..., M^n] = M are linearly dependent, contradicting our initial assumption.
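The equality can be checked exhaustively on small matrices. Here is a sketch of ours (the `rank` function implements textbook Gaussian elimination over the rationals, which counts pivots, i.e. the row rank; it is not the book's argument):

```python
from fractions import Fraction
from itertools import product

def rank(M):
    """Row rank of M: number of pivots after Gaussian elimination over Q."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c]:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# row rank equals column rank for every 2x3 matrix with entries in {-1, 0, 1}
for e in product((-1, 0, 1), repeat=6):
    M = [list(e[:3]), list(e[3:])]
    Mt = [list(col) for col in zip(*M)]
    assert rank(M) == rank(Mt)
```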

Exercise C.5.28 (Nakayama's Lemma). Let R be a commutative ring, I an ideal of R, i.e. an R-module inside R3, and M a finitely generated R-module. Suppose that IM = M, where IM does not mean the set of products of elements of I and M, but instead the R-module this set generates (i.e. the set of linear combinations of such products). Prove that there exists an element r ≡ 1 (mod I) of R such that rM = 0.

Solution

Let α_1, ..., α_n be generators of M. We have a system of equations as follows:

$$\alpha_i = \sum_{j=1}^{n} \beta_{i,j} \alpha_j$$

for i = 1, ..., n, with β_{i,j} ∈ I. Let B = (β_{i,j}) and A = (α_i) (as a column vector), so that BA = A, i.e. (I_n − B)A = 0. We claim that r = det(I_n − B) works. First, note that r ≡ 1 (mod I) since I_n − B ≡ I_n (mod I). By Proposition C.3.7, we have

$$r I_n = \operatorname{adj}(I_n - B)(I_n - B)$$

3See also Proposition C.3.5. A module is like a vector space but the underlying structure is not necessarily a field (in
this case it’s R).

so, after right-multiplying by A, we get rA = 0. This implies that rM = 0 as wanted.
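For concreteness, here is how the construction plays out in a tiny case (an illustrative example of ours, not from the book): take R = Z, I = 2Z and M = Z/3Z, which satisfies IM = M since 2 is invertible modulo 3. With the single generator α_1 = 1:

```latex
\alpha_1 = \beta_{1,1}\,\alpha_1 \quad\text{with } \beta_{1,1} = 4 \in I,
\qquad
r = \det(I_1 - B) = 1 - 4 = -3.
```

Indeed, r = −3 ≡ 1 (mod 2) and rM = −3 · (Z/3Z) = 0, exactly as the lemma predicts.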

Further Reading

Here are some books I like4. As said in the foreword, I particularly recommend Andreescu-Dospinescu
[1, 2], Ireland-Rosen [11], and Murty [19].

For classical algebraic number theory, I suggest Murty [19]. I’ll just define ideals here because they
are not defined in the book (but no prior exposure to them is assumed, so a Wikipedia search would
also do). An ideal I of a commutative ring R is simply a set which is closed under addition, and
closed under multiplication by elements of R. In number fields, ideals are all finitely generated, i.e.
I = a1R + . . . + anR for some a1, . . . , an (see chapter 5 of Murty). As an exercise you can prove that
the ideals of Z have the form nZ for some n. Milne [18] (pages 7–10) has good motivation on why to
consider ideals.

For p-adic analysis, I wholeheartedly recommend the addendum 3B of SFTB first, and then the
excellent book by Cassels [7], which, although a bit old5, is full of number-theoretic applications like
the one in Section 8.6. Robert [23] is also very good, but focuses a lot more on analysis than on number
theory, and assumes a fair amount of topology.6 Also, Borevich-Shafarevich [6] has a great proof of
Thue’s theorem 7.4.3 using local methods (p-adic methods).

The elementary theory of elliptic curves is also of similar flavour to the topics of the present book,
see Silverman-Tate [26] for a wonderful introduction.

For more on polynomials, see Prasolov [21] (things like Appendix A). See also Rosen [24] for number
theory in function fields, i.e. algebraic number theory but with polynomials over Fq instead of rational
integers!

For abstract algebra (including Galois theory), I recommend Lang [15]. For even more advanced
algebra, see his other book [13].

For linear algebra, I also recommend Lang [14] because I like everything I read by him. For applications of linear algebra to combinatorics, see chapter 12 of PFTB [1], which assumes approximately Appendix C as background, and Stanley [27], which assumes approximately Lang's book as background.

4Disclaimer: I haven’t finished reading all of them.
5It's not typeset in LaTeX!
6The first chapter is particularly topologically heavy but it gets better afterwards. I would suggest to skip it at first
since it just defines the p-adic numbers, and have a look at the other chapters if you’re interested in the analytical theory.


Further Reading

[1] T. Andreescu and G. Dospinescu. Problems from the Book. 2nd ed. XYZ Press, 2010.
[2] T. Andreescu and G. Dospinescu. Straight from the Book. XYZ Press, 2012.
[6] Z. I. Borevich and I. Shafarevich. Number Theory. Academic Press, 1964.
[7] J. W. S. Cassels. Local Fields. Cambridge University Press, 1986.
[11] K. Ireland and M. Rosen. A Classical Introduction to Modern Number Theory. 2nd ed. Vol. 84. Graduate Texts in Mathematics. Springer-Verlag, 1990.
[13] S. Lang. Algebra. 5th ed. Vol. 211. Graduate Texts in Mathematics. Springer-Verlag, 2002.
[14] S. Lang. Linear Algebra. 3rd ed. Undergraduate Texts in Mathematics. Springer-Verlag, 1987.
[15] S. Lang. Undergraduate Algebra. 3rd ed. Undergraduate Texts in Mathematics. Springer-Verlag, 1987.
[18] J. S. Milne. Algebraic Number Theory. url: https://www.jmilne.org/math/CourseNotes/ant.html (accessed: 26.09.2021).
[19] M. R. Murty and J. Esmonde. Problems in Algebraic Number Theory. 2nd ed. Vol. 190. Graduate Texts in Mathematics. Springer-Verlag, 2005.
[21] V. V. Prasolov. Polynomials. Vol. 11. Algorithms and Computation in Mathematics. Springer-Verlag, 2004.
[23] A. M. Robert. A Course in p-adic Analysis. Vol. 198. Graduate Texts in Mathematics. Springer-Verlag, 2000.
[24] M. Rosen. Number Theory in Function Fields. Vol. 210. Graduate Texts in Mathematics. Springer-Verlag, 2002.
[26] J. H. Silverman and J. T. Tate. Rational Points on Elliptic Curves. 2nd ed. Undergraduate Texts in Mathematics. Springer-Verlag, 2015.
[27] R. Stanley. Algebraic Combinatorics. 2nd ed. Undergraduate Texts in Mathematics. Springer-Verlag, 2018.


Bibliography

[3] G. M. Bergman. Lüroth's Theorem and some related results, developed as a series of exercises. url: https://math.berkeley.edu/~gbergman/grad.hndts/ (accessed: 26.09.2021).
[4] Y. Bilu, Y. Bugeaud, and M. Mignotte. The Problem of Catalan. Springer-Verlag, 2014.
[5] A. B. Block. The Skolem-Mahler-Lech Theorem. url: http://www.columbia.edu/~abb2190/Skolem-Mahler-Lech.pdf (accessed: 26.09.2021).
[8] E. Chen. A trailer for p-adic analysis, first half: USA TST 2003. url: https://blog.evanchen.cc/2018/10/10/a-trailer-for-p-adic-analysis-first-half-usa-tst-2003/ (accessed: 26.09.2021).
[9] E. Chen. Napkin. url: https://web.evanchen.cc/napkin.html (accessed: 26.09.2021).
[10] K. Conrad. Kummer's lemma. url: https://kconrad.math.uconn.edu/blurbs/gradnumthy/kummer.pdf (accessed: 26.09.2021).
[12] M. Klazar. Størmer's solution of the unit equation x − y = 1. url: https://kam.mff.cuni.cz/~klazar/stormer.pdf (accessed: 26.09.2021).
[16] P-S. Loh. Algebraic Methods in Combinatorics. url: https://www.math.cmu.edu/~ploh/docs/math/mop2009/alg-comb.pdf (accessed: 26.09.2021).
[17] D. Masser. Auxiliary Polynomials in Number Theory. Cambridge Tracts in Mathematics. Cambridge University Press, 2016.
[20] M. R. Murty and N. Thain. "Primes in Certain Arithmetic Progressions". In: Functiones et Approximatio 35 (2006), pp. 249–259. doi: 10.7169/facm/1229442627.
[22] P. Ribenboim. 13 Lectures on Fermat's Last Theorem. Springer-Verlag, 1979.
[25] A. Schinzel. "On Primitive Prime Factors of a^n − b^n". In: Mathematical Proceedings of the Cambridge Philosophical Society 58 (4 1962), pp. 556–. doi: 10.1017/s0305004100040561.
[28] C. L. Stewart. "On the greatest prime factor of terms of a linear recurrence sequence". In: Rocky Mountain Journal of Mathematics 15 (2 1985), pp. 599–608. doi: 10.1216/RMJ-1985-15-2-599.
[29] R. Thangadurai. On the Coefficients of Cyclotomic Polynomials. url: https://www.bprim.org/sites/default/files/th.pdf (accessed: 26.09.2021).
[30] N. Tsopanidis. "The Hurwitz and Lipschitz Integers and Some Applications". PhD thesis. Faculdade de Ciências da Universidade do Porto, 2020.
[31] S. Weintraub. Galois Theory. 2nd ed. Universitext. Springer-Verlag, 2009.


Index

Symbols expansion 51, 58, 78, 81, 115, 127

5/8 theorem 158, 383 series 126

binomial expansion 247, 252, 414

A BMO 1 120, 333

abelian Bolzano-Weierstrass theorem 132, 338

field extension 101, 155, 307 Brazil

group 98, 103, 155, 158, 384 MO 54, 65, 141, 251, 360

absolute convergence 263 Bulgaria

action MO 120

of a group 74, 158, 285, 385, 399

algebra C

closure 149, 166, 193, 394, 412 Capelli 106, 314

fundamental theorem of 149, 163 Carmichael’s theorem 74, 283

algebraic Cauchy

closure 64, 90, 240 equation 94, 172, 303

field extension 94 sequence 126

independence 166, 396 theorem 103

integer 14 Cauchy-Mirimanoff polynomials 381

number 14 Cauchy-Schwarz inequality 22

AMM 21, 23 Cayley

APMO 80, 149 theorem 105, 311

Archimedean 126, 140, 353 Cayley-Hamilton theorem 193, 413

arithmetic function 71, 271 center 383

multiplicative 71, 231, 271 centraliser 286, 383

Artin-Schreier theorem 106, 316 characteristic 147

associate 29 characteristic

left or right 38 of a ring 57, 58, 69, 93, 131, 144, 152, 162,

associative 37, 143, 173, 219 171, 189, 265

automorphism 28, 97, 211 polynomial

of a linear recurrence 138, 189

B of a matrix 193, 412

Bézout Chebotarev density theorem 73, 277

domain 30, 83, 114 Chebyshev polynomial 66, 255

left or right 38 Chevalley-Warning Theorem 72, 276

lemma 29, 145, 256, 332, 358, 372, 390 China

theorem 166, 395 MO 41

Bézout’s lemma 262 TST 72, 86, 87, 142, 157, 159, 273, 292,

BAMO 120 367, 386

basis 91, 169 Chinese remainder theorem 54, 80, 81, 231,

canonical 176, 186 242, 251, 254, 321

changes of bases 174 chinese remainder theorem 297

integral 108, 326 circulant determinant 192, 328, 410

transcendence 166, 397 class equation (of a group action) 74, 285

binomial class number

coefficient 74, 80, 86, 282 of a number field 55, 257



closure derangements 184

algebraic 64, 90, 240 derivative 147, 172

Galois 97 discrete 157, 377

integral 24, 181, 205, 404 determinant 177

column operations 179 circulant 192, 328, 410

comatrix 187 norm 187

commutative resultant 165, 393

group 155 Vandermonde 182

operation 37, 143, 173 dimension 143, 170

ring 18, 58, 153, 188 finite 170

compact transcendence 166, 397

sequentially 132 Dirichlet

complete homogeneous 165 theorem 276

completion 126 Dirichlet

complex field approximation theorem 110

cubic 116 convolution 71, 271

quadratic 109 L-function 226

composite field 100, 307 series 71, 271

compositum 100, 307 theorem 88, 297

congruence 16 theorem (on arithmetic progressions) 49,

conjugate 67, 86, 88, 298

complex 15, 17 unit theorem 119, 121, 337

of an algebraic number 17 discrete 343

quadratic 28 discrete derivative 157, 377

quaternion 37 discriminant 72, 108, 275

constructible discriminant 21

number 106, 317 distance 125, 140, 355

content (of a polynomial) 76 distributivity 143, 174, 403

convergence 22 divisibility 16, 30

absolute 263 left or right 38

p-adic 125 of polynomials 145

convex hull 159, 388 domain 153

coset 98, 305, 309 Bézout 30

cosine 23–25, 47, 56, 66, 202, 203, 207 Euclidean 31

law 202 integral 30, 153, 213

quadratic 47, 244 principal ideal 42

rational 15 unique factorisation 29

cyclic

field extension 105, 312 E

group 106, 155, 317 effective 116, 136
EGMO 120
cyclotomic Ehrenfeucht’s criterion
eigenvalues 159, 389
field 25, 55, 97, 99, 207, 255 Eisenstein 193, 412

quadratic subfield 107, 323 criterion 78, 92
integers 34
units 55, 256 ELMO
embedding 121, 340
polynomial 22, 34, 43, 65, 73, 78, 216, 282 complex 123
of a field extension 117
ring of integers 55, 255 real 94
equivalence relation 117
D 164 equivalent
107, 321 Euclid 38, 222
d’Alembert-Gauss theorem 105, 311 algorithm 141, 356
Dedekind lemma
226 Euclidean 67
lemma 293
zeta function 90
degree 144 29
of a field extension 16
of a polynomial 127
of an algebraic number
density


algorithm 93, 145 of algebra 149, 163
division 17, 31, 114, 144, 154
domain of finitely generated abelian groups 158,
function 31, 75, 83, 138
left or right 31, 38 384
Euler 38
criterion of Galois theory 99
extremal value theorem 41, 226
68 of symmetric polynomials 161

140, 355 symmetric polynomials 19

fundamental unit 109

G

F Galois

Fermat 92 closure 97

last theorem 34, 41, 55, 227, 257 correspondence 99

for polynomials 159, 387 field extension 97

little fundamental theorem 99

theorem 386 group 73, 97, 277

little theorem 60, 102, 103, 144, 147–149, inverse problem 105, 311

281, 380 Galois theory 244

two square theorem 34 Gauss

Fibonacci sequence 42, 54, 72, 74, 121, 235, formula 107, 324

273, 283, 339 integers 32

field 143, 153 primes 33

algebraically closed 149, 166, 193, 394, lemma 76

412 sum 70, 101

complex gcd 145

cubic 116 Gelfond-Schneider theorem 126

quadratic 109 generating functions 162

extension 90 generator 48, 93, 102, 105, 181, 245, 312

finite 57, 93, 99, 136, 171, 366 global 131, 136

fixed 99 field 126

of fractions 153 Grassmann’s Formula 191, 409

real greatest common divisor 30

quadratic 109 left or right 38

totally 117 group 49, 54, 97, 154, 155, 245, 251

field extension 90 abelian 98, 103, 155

abelian 101, 155, 307 action 74, 158, 285, 385, 399

algebraic 94 commutative 155

cyclic 105, 312 cyclic 106, 155, 317

finite 92 of units 60

Galois 97 quotient 158, 381

separable 94 simple 318

solvable solvable 106, 317

radicals 106, 317 symmetric 155

real radicals 107, 320 H

tower 91, 97 Hadamard quotient theorem 72, 275

finite height 312

field 366 Hensel’s lemma 81, 124, 350

finite field 57, 93, 99, 136, 171 Hermite

fixed field 99 matrix 193

Fleck’s congruences 56, 258 Hilbert

formal 123, 132, 144, 164 Theorem 90 105, 312

power series 144, 163 homogeneous 28, 45, 119, 165, 177, 239

France Hurwitz

TST 53, 248 integers 37

Frobenius hyperplane 191, 409

morphism 366

Frobenius morphism 47, 58, 71, 79, 101, 242 I

fundamental theorem ideal 194, 237, 262, 390, 418


prime 208 L

ideal factorisation 283 Lüroth’s theorem 105, 312

image 156, 174 Lagrange 167, 399

IMC 23, 56, 159, 262, 390 four square theorem 39

IMO 42, 54, 87, 236, 249 interpolation 150, 172, 182

SL 49, 54, 72, 83, 86, 88, 120, 249, 275, theorem 103, 158, 383

297, 300, 333 lattice 338

inclusion-exclusion principle 405 Legendre

inert prime 33, 217 formula 129

integer symbol 68, 277

algebraic 14 lifting the exponent lemma 50, 54, 297, 363

Eisenstein 34 linear

Gaussian 32 independence 168

Hurwitz 37 map 172

p-adic 122 multi 177

quadratic 18, 197 recurrence 189

rational 15 order 189

integral basis 108, 326 transformation 172

integral closure 24, 181, 205, 404 linear recurrence 59, 60, 72, 74, 274, 283

integral domain 30, 57, 76, 89, 122, 153, 213 Liouville’s theorem 121, 339

integral element 329 local 131, 136

intermediate value theorem 164, 247, 389 field 126

inverse Galois problem 105, 311 local-global 20

Iran localisation 123

MO 56, 88, 167, 261, 400 locally analytic function 130

TST 54, 86–88, 296 Lucas

Ireland 23, 199 formula 107, 324

irrationality measure 119 sequence 42

irreducible theorem 74, 282

element 30 M

polynomial 150 Möbius Function
Mahler’s theorem
ISL 108 Mann 55, 71, 272
Mason-Stothers theorem 139, 351
isolated 133 matrix 107, 323
159, 387
isomorphism 57, 62, 105, 154, 265 adjacency
adjugate 176
IZHO 24 change of bases 187
comatrix 174
J 42, 233 Hermitian 187
73, 278 identity 193
Jacobi 73, 277, 278 multiplication 174
four square theorem transpose 173
reciprocity 53 upper triangular 176
symbol mean value theorem 179
Mersenne 140, 352
Japan prime
MO Mersenne sequence 261
metric space 87, 293
K 193, 416 Miklós Schweitzer 140, 355
193, 416 minimal polynomial 23, 72
Kakeya 156, 174, 303 module
conjecture monic 16
set 116 polynomial 180, 194, 418
monoid
kernel 56, 72, 259 morphism 144
Kobayashi’s theorem 56, 259 143, 371
Korea 24, 206
99 155
MO 55, 257
winter program 55, 257
Kronecker’s theorem
Kronecker-Weber theorem
Kummer
lemma


multilinear 177 constant coefficient 144
multiplicative 68
multiplicity cyclotomic 43
147
of a root divisibility 145

elementary symmetric 18

irreducible 150

N leading coefficient 144

Nagell 105, 138, 309 monic 144
Nakayama’s lemma 194, 418
Newton power sum 161
358
method 22, 162 primitive 75
Newton’s formulas 193, 413
nilpotent root 145, 147
Noether’s Lemma 191
norm 92 symmetric 18
28
absolute 187 power mean inequality 200, 359
determinant 31, 32, 42
Euclidean 96 power series
of a field extension 28
quadratic 37 formal 144
quaternion 141, 356
norm (on a vector space) 158, 381 pre-periodic points 87, 296
normal subgroup 92
number field primary

Hurwitz integer 41, 229

prime

divisors of a polynomial 79

element 29, 30

Gaussian 33

ideal 58, 123, 208

inert 33, 217

ramified 33, 217

O rational 30

order split 33, 217
group
maximal 103 primitive
of a group 37
of a linear recurrence 158, 383 root (of finite fields) 346
189
Ostrowski 140, 141, 354, 359 element 93, 108

Hurwitz integer 41, 228

polynomial 75

P prime factor 50, 87, 293

root (modulo n) 54, 68, 251

p-adic 55, 257 root (of finite fields) 48, 68, 156, 245

absolute value 124 root of unity 17, 43, 65, 92, 98, 116, 134,

convergence 125 157, 192, 379, 410

exponential 140, 352 primitive element theorem 90, 93, 104, 108

integer 122 primorial 295

logarithm 140, 352 principal ideal domain 42

number 123

unit 123 Q

valuation 123 quadratic
complex field
partial fractions decomposition 190, 281 conjugate 109
cosine 28
Pell’s equation 109 field
integer 47, 244
Pell-type 113 norm 26, 90, 97
number
permanent 193, 413 real field 18, 197
reciprocity 28
permutation residue
unit 18, 197
even 185 109
quaternion
odd 185 conjugate 69, 102
norm 68, 83, 288
transposition 185 numbers
109
PFTB 22, 54, 249
37
pigeonhole principle 110, 111, 294, 355, 357, 37
36
387, 397, 416

Poland

MO 85

polynomial 144

chebyshev 66, 255


R 395 normal 101, 307
138 symmetric
Rabinowitsch’s trick 33, 217 155
Ramanujan 194, 417 group 18, 160
ramified prime 174 polynomial
rank 158, 384 165
174 complete homogeneous 18, 160
of a linear map 151 elementary 19, 161
of an abelian group 15, 196 fundamental theorem of
rank-nullity theorem power sum 161
rational function 109
rational root theorem 117 T 141
real field 106, 314
quadratic 165, 393 Taiwan 132
totally real TST 81
Redei 226, 250 263
resultant 152 Taylor
Riemann expansion 358
zeta function 27, 58, 92 formula 367
ring 18, 58, 153, 188 series 108, 328
of integers 116
commutative 60 Taylor’s formula 119
multiplicative group 121 Teichmüller character 384
of integers 120, 334 TFJM 117
RMM 86, 291 Thue’s equation 194
SL Thue-Siegel-Roth theorem 194
Romania 86, 292 torsion
TST 157, 379 totally real 166, 397
root of unity filter trace 166, 397
Russia 157
All-Russian Olympiad of a field extension 14, 95
transcendence 185
S
basis 87, 294
scalars 168 degree 73, 280
transcendental number
transposition
Tuymaada
Tuymadaa

series 125

Siegel’s Lemma 193, 416 U

signature 184 unique factorisation domain 29, 77

simple group 318 unit 24, 54, 109, 205, 206, 251

sine law 202 circle 159, 387

skew field 37, 153, 221 complex cubic 117

Skolem-Mahler-Lech theorem 118, 133, 142, complex quadratic 109

369 Dirichlet theorem 119, 121, 337

solvability fundamental 109, 117

by radicals 106, 317 group 60

by real radicals 107, 320 of a ring 29, 155

Sophie Germain’s identity 314 of cyclotomic fields 55, 256

Sophie-Germain p-adic 123

prime 55, 255 real quadratic 110

theorem 55, 255 S 114, 119, 121, 341

split USA

polynomial 57, 104 MO 23, 73, 87, 107, 159, 199, 292

prime 33, 217 TST 23, 24, 53, 82, 87, 88, 97, 128, 139,

splitting field 62, 73, 107, 277, 320, 321 157, 193, 204, 247, 294, 377, 415

squarefree 55, 71, 272 TSTST 74

Størmer’s theorem 114, 121, 341

stars and bars 416 V

Strassmann’s theorem 137 Vahlen 106, 314
Vandermonde 139, 351
Sturm’s theorem 159, 388
determinant 182
subgroup 99


Vandermonde determinant 415
vector space 156, 168
dimension 90
vectors 168
Vieta's formulas 18, 19, 47, 148, 164

W
Wedderburn's theorem 74, 285
Wilson's theorem 148, 300

Z
Zariski density 413
Zeev Dvir 193, 416
zeta function
Dedekind 226
Riemann 226, 250
Zsigmondy's theorem 50

