
Course: Information Theory and Coding

Teacher: Piotr Chołda

Group: Electronics and Telecommunications, M.Sc. studies, 1st sem., Spring 2014

Draft of a lecture with exercises

April 9, 2014

1 Lecture VI (April 11, 2014): Hamming coding
and modifications of linear codes

1.1 Binary Hamming codes

1. Basic parameters: n = 2^c − 1, k = 2^c − 1 − c, where c is the number of
parity-check symbols.

2. Hamming codes are perfect codes (they attain equality in the Hamming
bound) with dmin = 3; thus they correct only single errors.
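The perfect-code property is easy to verify numerically: a radius-1 sphere around each of the 2^k codewords must exactly fill the whole space of 2^n words. A minimal sketch (the function name is ours):

```python
# Check the Hamming bound with equality for single-error-correcting codes:
# 2^k * (size of a radius-1 sphere) = 2^n, with n = 2^c - 1, k = n - c.
def is_perfect_single_error(n: int, k: int) -> bool:
    sphere_size = 1 + n  # the word itself plus its n single-bit flips
    return (2 ** k) * sphere_size == 2 ** n

for c in range(2, 8):
    n = 2 ** c - 1
    k = n - c
    assert is_perfect_single_error(n, k)
    print(f"({n}, {k}) Hamming code is perfect")
```

For a non-Hamming choice of parameters, e.g. the shortened code (6, 3), the equality fails (cf. exercise 3b).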

3. The parity-check matrix of the Hamming code (2^c − 1, 2^c − 1 − c) is always
constructed from columns that form the binary representations of the
consecutive numbers from 1 to 2^c − 1. With this column ordering, the
Hamming code is not systematic (H is not in canonical form).

4. Syndrome decoding of Hamming codes: due to the specific form of the parity-
check matrices of Hamming codes, decoding consists in correcting the
position of the received word whose number's binary representation equals
the calculated syndrome (provided s ≠ 0).
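This decoding rule fits in a few lines. With the j-th column of H equal to the binary representation of j, the syndrome read as an integer is simply the XOR of the indices (counted from 1) of the 1-positions of the received word. A sketch, not the lecture's reference implementation:

```python
def syndrome_decode(y):
    """Single-error correction for a Hamming code whose parity-check
    matrix has column j = binary(j); y is a list of 0/1 bits."""
    s = 0
    for j, bit in enumerate(y, start=1):
        if bit:
            s ^= j                    # accumulates H * y^T as an integer
    y = y.copy()
    if s != 0:                        # s = 0 means "no error detected"
        y[s - 1] ^= 1                 # flip the position named by the syndrome
    return y, s

# 1110000 is a (7, 4) codeword (1 XOR 2 XOR 3 = 0); flip bit 5 and decode:
corrected, s = syndrome_decode([1, 1, 1, 0, 1, 0, 0])
print(s, corrected)                   # 5 [1, 1, 1, 0, 0, 0, 0]
```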

1.2 Reed-Muller (R-M) Codes

1. R-M codes are defined by two parameters [m, l]:

code order l,
number m > l.

2. When we use the standard description (n, k), we have:

n = 2^m,

k = Σ_{i=0}^{l} C(m, i),

n − k = Σ_{i=l+1}^{m} C(m, i) (since n = 2^m = (1 + 1)^m = Σ_{i=0}^{m} C(m, i)),

where C(m, i) denotes the binomial coefficient.
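A quick numeric check of these formulas (using Python's math.comb; the function name is ours):

```python
from math import comb

# Parameters of an [m, l] Reed-Muller code: n = 2^m, k = sum of C(m, i)
# for i = 0..l; the remaining C(m, i) for i = l+1..m give n - k.
def rm_params(m: int, l: int):
    n = 2 ** m
    k = sum(comb(m, i) for i in range(l + 1))
    return n, k

print(rm_params(3, 2))   # (8, 7)
print(rm_params(5, 1))   # (32, 6), the code of exercise 2a
```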

3. Minimum Hamming distance of [m, l]-RM codes: dmin = 2^(m−l).

4. R-M generator matrix:

Let v_i be the vector of length n = 2^m, i = 1, 2, . . . , m, consisting of
alternating runs of 2^(i−1) 0's and 2^(i−1) 1's. v_0 contains all 1's.


The inner product of p = (p1, p2, . . . , pn) and q = (q1, q2, . . . , qn) is
defined as follows:

p ∗ q = (p1q1, p2q2, . . . , pnqn).

The generator matrix of the l-th order Reed-Muller code is composed
of v_0, {v_i : i = 1, 2, . . . , m}, and all products of l or fewer of the
vectors v_i.

Example for m = 3 and l = 2:

 v0   1 1 1 1 1 1 1 1 

 v1   0 1 0 1 0 1 0 1 

 v2   0 0 1 1 0 0 1 1 
   

G =  v3  =  0 0 0 0 1 1 1 1  .
   

 v1 ∗ v2   0 0 0 1 0 0 0 1 
   

 v1 ∗ v3   0 0 0 0 0 1 0 1 
   

v2 ∗ v3 00000011
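The construction can be sketched programmatically: bit i−1 of the column index j reproduces exactly the run pattern of v_i, and the remaining rows are componentwise products (function names are ours):

```python
from itertools import combinations

def rm_generator(m: int, l: int):
    """Rows of G: v0 (all 1's), then componentwise products of r = 1..l
    of the vectors v1..vm, where v_i alternates runs of 2^(i-1) 0's and
    1's (equivalently, v_i[j] is bit i-1 of the column index j)."""
    n = 2 ** m
    v = {i: [(j >> (i - 1)) & 1 for j in range(n)] for i in range(1, m + 1)}
    rows = [[1] * n]                          # v0
    for r in range(1, l + 1):
        for idx in combinations(range(1, m + 1), r):
            prod = [1] * n
            for i in idx:
                prod = [a & b for a, b in zip(prod, v[i])]
            rows.append(prod)
    return rows

for row in rm_generator(3, 2):                # reproduces the example for m = 3, l = 2
    print(row)
```

The row count equals k = Σ_{i=0}^{l} C(m, i), matching the parameter formula above.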

1.3 Modifications of linear codes (n, k)

1. Lengthening (wydłużanie) of a code:

idea: an information symbol is added;
if G is in the canonical form: column and row of the same number
are added, and a column is added to H;
kLe = k + 1;
nLe = n + 1;
dminLe ≤ dmin;
the inverse operation: shortening (skracanie), used to save transmission
resources.

2. Extending (rozszerzanie) of a code:

idea: a parity-check symbol is added (to improve reliability);
a new column is added to G, and both a new row and a new column are
added to H;
nEt = n + 1;
kEt = k;
dminEt ≥ dmin;
the inverse operation: puncturing (przebijanie).

3. Augmenting (powiększanie) of a code:

idea: new codewords are added in a structured way (to save transmission
resources, at the price of reduced reliability); after the operation
there are twice as many codewords as before;
a new row is added to G and row i is removed from H with the
modification of column i in H (i.e. a parity check position starts to
be used as an information-carrying position);


Table 1: Changes related to code modifications

Modification    nmod      kmod      dminmod
Lengthening     nLe ↑     kLe ↑     dminLe ↓
Shortening      nSh ↓     kSh ↓     dminSh ↑
Extending       nEt ↑     kEt =     dminEt ↑
Puncturing      nPu ↓     kPu =     dminPu ↓
Augmenting      nAu =     kAu ↑     dminAu ↓
Expurgating     nEp =     kEp ↓     dminEp ↑

nAu = n.
kAu = k + 1.
dminAu ≤ dmin.
the inverse operation: expurgating (okrajanie).

4. Joined codes (kody łączone):

we must have m codes with the same parameter k, i.e. (ni, k) for code
i (not necessarily systematic);
a codeword of the joined code is made up by concatenating the m
codewords (each from a different code (ni, k)) encoding the same
information sequence u;
parameters of joined codes:

– n = Σ_i ni;
– k = ki, because k1 = k2 = . . . ;
– dmin = Σ_i dmin_i.
Example:

m = 2,
code A—(5, 2), code B—(6, 2),
n = 11, k = 2.

We ask: what is the codeword encoding u = 11 in the joined code obtained
from codes A and B? Answer:

in code A, information u = 11 is encoded as e.g. 11|000 (the first two
positions carry u),

in code B, information u = 11 is encoded as e.g. 101|11|0 (positions
four and five carry u),

thus, in the joined code u = 11 will be encoded as 11000101110.
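The joining step itself is plain concatenation. A toy sketch: since codes A and B are not fully specified in the example, the lookup tables below are hypothetical and tabulate only the codewords given above:

```python
# Hypothetical encoder tables for codes A (5, 2) and B (6, 2); only the
# codewords quoted in the example are known, so only u = "11" is tabulated.
encode_A = {"11": "11000"}
encode_B = {"11": "101110"}

def joined_encode(u):
    # the same information u is encoded by each code; codewords are joined
    return encode_A[u] + encode_B[u]

print(joined_encode("11"))   # 11000101110
```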

5. Iterated (product) codes (kody iterowane):

we must have m codes with not necessarily identical parameters
(ni, ki) for code i (the codes are systematic);


a codeword of the iterated code is made up by constructing a matrix
for m = 2, a cube for m = 3 or a hypercube of dimension m ≥ 4;

information symbols in the codeword are determined by the informa-
tion to be sent u (of length k1×k2×. . . ); the parity-check symbols are
generated according to the rule of constructing parity-check symbols
of code i on the basis of the information symbols subset ui related
only to dimension i of the matrix, cube or hypercube;

parameters:

– n = Π_i ni,
– k = Π_i ki,
– dmin = Π_i dmin_i.

Example:

m = 2,
code A—(3, 2) and code B—(4, 3) ⇒ n = 12, k = 6.

We ask: what is the codeword encoding u = 000110 in an iterated code
obtained from codes A and B? Answer:

in code A, information 00 is encoded as e.g. 00|0, 01 as e.g. 01|1, 10
as e.g. 10|1, and 11 as e.g. 11|0;

in code B, information 001 is encoded as e.g. 001|1, and 010 as e.g.
010|1;

thus, in the iterated code u = 000110 will be encoded as:

0 0 0

0 1 1  ⇒ 000|011|101|110
0 1 
 1
 

110
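Since both component codes in this example are single-parity-check codes, the whole encoder fits in a few lines (a sketch; function names are ours). Rows are encoded with code A, then every column gets the check bit of code B:

```python
def parity_encode(bits):
    # the component codes here are single-parity-check codes:
    # append an even-parity check bit
    return bits + [sum(bits) % 2]

def iterated_encode(u):
    """Encode u with the product of code A = (3, 2) on the rows and
    code B = (4, 3) on the columns, as in the example above."""
    # split u into k1 = 2-bit rows and encode each row with code A
    rows = [parity_encode(u[i:i + 2]) for i in range(0, len(u), 2)]
    # code B on every column: append a row of column parities
    rows.append([sum(col) % 2 for col in zip(*rows)])
    return rows

rows = iterated_encode([0, 0, 0, 1, 1, 0])
print("|".join("".join(map(str, r)) for r in rows))   # 000|011|101|110
```

Note that the appended parity row 110 is itself a codeword of code A, which is what makes the row/column construction consistent.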

1.4 Exercises

1. Hamming codes:

(a) Find the number of codewords of weight three in the Hamming code
that contains six redundant positions.

(b) Which of the following sequences are codewords of a Hamming code:

01101011011000 011010110110000 10000010000011
100000100000011 11010110111111 110010110111111

(c) Find the probability of a wrong decoding decision for a decoder of
the Hamming code (7, 4), if single bit errors are independent and their
probability equals 10^−3. What is the ratio of the probability of
the occurrence of errors correctable by the decoder to the probability
of the occurrence of a codeword error?


(d) Find the maximum Hamming distance dmax of the Hamming code
that contains seven redundant positions. dmax is defined analogously
to dmin, that is, for code C:

dmax(C) = max {d(xi, xj) : xi, xj ∈ C ∧ xi ≠ xj},

where d(xi, xj) is the Hamming distance between two codewords of
code C.

(e) A binary cyclic Hamming code (n, k) with five parity-check symbols
is given. Find the following sequences:

x1 and x2: two codewords such that the Hamming distance between
them is equal to five.
y1 and y2: two errored codewords such that the Hamming distance
between them is equal to nine.

2. Reed-Muller codes:

(a) In 1969 and 1976 NASA used (on its spacecraft) Reed-Muller (32, 6)
code. Find the minimum distance of this code and its generator
matrix. What is the probability that any given block is wrongly
decoded, if the transmission channel can be modeled as BSC with
BER = 0.05?

(b) What are the codewords of the shortest possible Reed-Muller code
for messages (i.e., information sequences):

u1 = 1101 u2 = 1001?

(c) Design the shortest possible Reed-Muller code to be used to carry at
least sixteen various messages and correcting all single and double
errors. Find the code’s parameters, its minimum Hamming distance,
and the generator matrix.

3. Modification of codes:

(a) Find the syndrome related to the binary sequence:

y = 1011110101,

obtained by a receiver of a shortened Hamming code. What will be
the decoder decision?

(b) Show that the shortened Hamming code (6, 3) is not a perfect code.

(c) Determine the minimum number of repetitions of a non-redundant
binary code that allows for correction of all single and double errors.

(d) Determine the improvement of correction properties of parity-check
code (4, 3), if it is replaced with the double iterated code. Find the
rate of this code and analyze the operation of the decoder when the
transmission is:

not disturbed by errors,
disturbed by a single error,
disturbed by two errors,


disturbed by more errors.

(e) Find the generator matrix of the double iterated code in which
columns and rows are encoded with the codes of the following
generator matrices:

G1 = G2 =  1 0 1
           0 1 1 .

Moreover, find the minimum Hamming distance on the basis of the
parity-check matrix for this iterated code.

1.5 Bibliography

The contents of this lecture are based on the following books:

Dominic Welsh. Codes and Cryptography. Clarendon Press, Oxford, UK,
1988: chapters 4.6, 4.8.

Todd K. Moon. Error Correction Coding. John Wiley & Sons, Inc., Hobo-
ken, NJ, 2005: chapters 1.9, 8.

Stefan M. Moser and Po-Ning Chen. A Student’s Guide to Coding and
Information Theory. Cambridge University Press, Cambridge, UK, 2012:
chapters 3.3, 8.1-8.2.
