Solution Manual — Error Control Coding, 2nd Edition, by Shu Lin and Daniel J. Costello
Chapter 2
2.3 Since m is not a prime, it can be factored as the product of two integers a and b,
m = a · b,
with 1 < a, b < m. It is clear that both a and b are in the set {1, 2, · · · , m − 1}. It follows from the definition of modulo-m multiplication that the modulo-m product of a and b is
a · b mod m = 0.
Since 0 is not an element of the set {1, 2, · · · , m − 1}, the set is not closed under modulo-m multiplication and hence cannot be a group.
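The argument can be checked numerically. The sketch below (an illustration, not part of the manual; the helper name is mine) finds the factor pair whose modulo-m product is 0:

```python
# For composite m, modulo-m multiplication is not closed on {1, ..., m-1}:
# the factors a and b are both in the set, yet their modulo-m product is 0.
def closure_violation(m):
    """Return (a, b) with 1 < a, b < m and a*b mod m == 0, or None if m is prime."""
    for a in range(2, m):
        if m % a == 0:
            return a, m // a
    return None

assert closure_violation(6) == (2, 3)    # 2 * 3 mod 6 = 0
assert closure_violation(15) == (3, 5)   # 3 * 5 mod 15 = 0
assert closure_violation(7) is None      # 7 is prime: no violation exists
```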
2.5 It follows from Problem 2.3 that, if m is not a prime, the set {1, 2, · · · , m − 1} cannot be a group under modulo-m multiplication. Consequently, the set {0, 1, 2, · · · , m − 1} cannot be a field under modulo-m addition and multiplication.
2.7 First we note that the set of sums of the unit element contains the zero element 0. For any 1 ≤ ℓ < λ,
∑_{i=1}^{ℓ} 1 + ∑_{i=1}^{λ−ℓ} 1 = ∑_{i=1}^{λ} 1 = 0.
Hence every sum has an inverse with respect to the addition operation of the field GF(q). Since the sums are elements in GF(q), they must satisfy the associative and commutative laws with respect to the addition operation of GF(q). Therefore, the sums form a commutative group under the addition of GF(q).
Next we note that the sums contain the unit element 1 of GF(q). For each nonzero sum
∑_{i=1}^{ℓ} 1, with 1 ≤ ℓ < λ,
we want to show that it has a multiplicative inverse with respect to the multiplication operation of GF(q). Since λ is prime, ℓ and λ are relatively prime and there exist two
integers a and b such that
a · ℓ + b · λ = 1, (1)
where a and λ are also relatively prime. Dividing a by λ, we obtain
a = kλ + r, with 0 ≤ r < λ. (2)
Since a and λ are relatively prime, r ≠ 0. Hence
1 ≤ r < λ.
Combining (1) and (2), we have
ℓ · r = −(b + kℓ) · λ + 1.
Consider
(∑_{i=1}^{ℓ} 1) · (∑_{i=1}^{r} 1) = ∑_{i=1}^{ℓ·r} 1 = ∑_{i=1}^{−(b+kℓ)·λ} 1 + 1 = (∑_{i=1}^{λ} 1)(∑_{i=1}^{−(b+kℓ)} 1) + 1 = 0 + 1 = 1.
Hence, every nonzero sum has an inverse with respect to the multiplication operation of GF(q). Since the nonzero sums are elements of GF(q), they obey the associative and commutative laws with respect to the multiplication of GF(q). Also the sums satisfy the distributive law. As a result, the sums form a field, a subfield of GF(q).
2.8 Consider the finite field GF(q). Let n be the maximum order of the nonzero elements of GF(q), and let α be an element of order n. It follows from Theorem 2.9 that n divides q − 1, i.e.,
q − 1 = k · n.
Thus n ≤ q − 1. Let β be any other nonzero element in GF(q), and let e be the order of β. Suppose that e does not divide n. Let (n, e) be the greatest common factor of n and e. Then e/(n, e) and n are relatively prime. Consider the element
β^{(n,e)}.
This element has order e/(n, e). The element
α · β^{(n,e)}
has order ne/(n, e), which is greater than n. This contradicts the fact that n is the maximum order of nonzero elements in GF(q). Hence e must divide n. Therefore, the order of each nonzero element of GF(q) is a factor of n. This implies that each nonzero element of GF(q) is a root of the polynomial
X^n − 1.
Consequently, q − 1 ≤ n. Since n ≤ q − 1 (by Theorem 2.9), we must have
n = q − 1.
Thus the maximum order of nonzero elements in GF(q) is q − 1. The elements of order q − 1 are then primitive elements.
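The conclusion can be illustrated in a prime field, where the field arithmetic is just integer arithmetic mod q. The sketch below (an illustration, not part of the original solution) computes the multiplicative orders of the nonzero elements of GF(11):

```python
# In GF(q) with q prime, every nonzero element's order divides q-1,
# and the maximum order is exactly q-1 (such elements are primitive).
def order(x, q):
    """Multiplicative order of x modulo the prime q."""
    e, y = 1, x % q
    while y != 1:
        y = (y * x) % q
        e += 1
    return e

q = 11
orders = [order(x, q) for x in range(1, q)]
assert all((q - 1) % e == 0 for e in orders)  # every order divides q - 1
assert max(orders) == q - 1                   # primitive elements exist
print(sorted(set(orders)))                    # the divisors of 10 that occur
```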
2.11 (a) Suppose that f(X) is irreducible but its reciprocal f*(X) is not. Then
f*(X) = a(X) · b(X),
where the degrees of a(X) and b(X) are nonzero. Let k and m be the degrees of a(X) and b(X), respectively. Clearly, k + m = n. Since the reciprocal of f*(X) is f(X),
f(X) = X^n f*(1/X) = X^k a(1/X) · X^m b(1/X).
This says that f(X) is not irreducible, which contradicts the hypothesis. Hence f*(X) must be irreducible. Similarly, we can prove that if f*(X) is irreducible, f(X) is also irreducible. Consequently, f*(X) is irreducible if and only if f(X) is irreducible.
(b) Suppose that f(X) is primitive but f*(X) is not. Then there exists a positive integer k less than 2^n − 1 such that f*(X) divides X^k + 1. Let
X^k + 1 = f*(X) q(X).
Taking the reciprocals of both sides of the above equality, we have
X^k + 1 = X^k f*(1/X) q(1/X) = X^n f*(1/X) · X^{k−n} q(1/X) = f(X) · X^{k−n} q(1/X).
This implies that f(X) divides X^k + 1 with k < 2^n − 1. This contradicts the hypothesis that f(X) is primitive. Hence f*(X) must also be primitive. Similarly, if f*(X) is primitive, f(X) must also be primitive. Consequently, f*(X) is primitive if and only if f(X) is primitive.
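A numerical check is possible for small n: a degree-n binary polynomial f is primitive iff the smallest k with f(X) | X^k + 1 is k = 2^n − 1. The sketch below (not from the manual; the bit-mask encoding and helper names are mine) verifies this for X^4 + X + 1 and its reciprocal:

```python
# GF(2) polynomials coded as bit masks: bit i is the coefficient of X^i,
# so X^4 + X + 1 is 0b10011 and its reciprocal X^4 + X^3 + 1 is 0b11001.
def polymulmod(a, b, f, n):
    """Multiply a*b in GF(2)[X] and reduce modulo f, where deg f = n."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> n & 1:       # degree reached n: subtract (XOR) f
            a ^= f
    return r

def order_of_x(f, n):
    """Smallest k with X^k = 1 (mod f), i.e. the smallest k with f | X^k + 1."""
    k, y = 1, 2              # y = X
    while y != 1:
        y = polymulmod(y, 2, f, n)
        k += 1
    return k

assert order_of_x(0b10011, 4) == 15   # X^4 + X + 1 is primitive (15 = 2^4 - 1)
assert order_of_x(0b11001, 4) == 15   # its reciprocal is primitive too
assert order_of_x(0b11111, 4) == 5    # X^4 + X^3 + X^2 + X + 1: irreducible, not primitive
```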
2.15 We only need to show that β, β^2, β^{2^2}, · · · , β^{2^{e−1}} are distinct. Suppose that
β^{2^i} = β^{2^j}
for 0 ≤ i, j < e and i < j. Then
(β^{2^{j−i}−1})^{2^i} = 1.
Since the order of β is a factor of 2^m − 1, it must be odd. For (β^{2^{j−i}−1})^{2^i} = 1, we must have
β^{2^{j−i}−1} = 1.
Since both i and j are less than e, j − i < e. This contradicts the fact that e is the smallest nonnegative integer such that
β^{2^e−1} = 1.
Hence β^{2^i} ≠ β^{2^j} for 0 ≤ i < j < e.
2.16 Let n′ be the order of β^{2^i}. Then
(β^{2^i})^{n′} = 1.
Hence
(β^{n′})^{2^i} = 1. (1)
Since the order n of β is odd, n and 2^i are relatively prime. From (1), we see that n divides n′, i.e.,
n′ = kn. (2)
Now consider
(β^{2^i})^n = (β^n)^{2^i} = 1.
This implies that n′ (the order of β^{2^i}) divides n. Hence
n = ℓn′. (3)
From (2) and (3), we conclude that
n′ = n.
2.20 Note that c · v = c · (0 + v) = c · 0 + c · v. Adding −(c · v) to both sides of the above equality, we have
c · v + [−(c · v)] = c · 0 + c · v + [−(c · v)],
0 = c · 0 + 0.
Since 0 is the additive identity of the vector space, we then have
c · 0 = 0.
2.21 Note that 0 · v = 0. Then for any c in F,
(−c + c) · v = 0,
(−c) · v + c · v = 0.
Hence (−c) · v is the additive inverse of c · v, i.e.,
−(c · v) = (−c) · v. (1)
Since c · 0 = 0 (Problem 2.20),
c · (−v + v) = 0,
c · (−v) + c · v = 0.
Hence c · (−v) is the additive inverse of c · v, i.e.,
−(c · v) = c · (−v). (2)
From (1) and (2), we obtain
−(c · v) = (−c) · v = c · (−v).
2.22 By Theorem 2.22, S is a subspace if (i) for any u and v in S, u + v is in S, and (ii) for any c in F and u in S, c · u is in S. The first condition is given; we only have to show that the second condition is implied by the first condition for F = GF(2). Let u be any element in S. It follows from the given condition that
u + u = 0
is also in S. Let c be an element in GF(2). Then, for any u in S,
c · u = 0 for c = 0, and c · u = u for c = 1.
Clearly c · u is also in S. Hence S is a subspace.
2.24 If the elements of GF(2^m) are represented by m-tuples over GF(2), the proof that GF(2^m) is a vector space over GF(2) is then straightforward.
2.27 Let u and v be any two elements in S1 ∩ S2. It is clear that u and v are elements in S1, and u and v are elements in S2. Since S1 and S2 are subspaces,
u + v ∈ S1
and
u + v ∈ S2.
Hence, u + v is in S1 ∩ S2. Now let x be any vector in S1 ∩ S2. Then x ∈ S1 and x ∈ S2. Again, since S1 and S2 are subspaces, for any c in the field F, c · x is in S1 and also in S2. Hence c · x is in the intersection, S1 ∩ S2. It follows from Theorem 2.22 that S1 ∩ S2 is a subspace.
Chapter 3
3.1 The generator and parity-check matrices are:
G =
[0 1 1 1 1 0 0 0]
[1 1 1 0 0 1 0 0]
[1 1 0 1 0 0 1 0]
[1 0 1 1 0 0 0 1]
H =
[1 0 0 0 0 1 1 1]
[0 1 0 0 1 1 1 0]
[0 0 1 0 1 1 0 1]
[0 0 0 1 1 0 1 1]
From the parity-check matrix we see that each column contains an odd number of ones, and no two columns are alike. Thus no two columns sum to zero, and any three columns sum to a 4-tuple with an odd number of ones, which is nonzero. However, the first, second, third, and sixth columns sum to zero. Therefore, the minimum distance of the code is 4.
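The minimum distance can also be confirmed by brute force, since the code has only 2^4 = 16 codewords. The check below is an illustration, not part of the original solution:

```python
# Enumerate all 16 codewords of the (8,4) code generated by G and confirm
# that the minimum nonzero codeword weight (= minimum distance) is 4.
from itertools import product

G = [
    [0, 1, 1, 1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 0, 1],
]

def encode(u):
    """Codeword u*G over GF(2)."""
    return [sum(ui * gi for ui, gi in zip(u, col)) % 2 for col in zip(*G)]

weights = {sum(encode(u)) for u in product([0, 1], repeat=4) if any(u)}
assert min(weights) == 4   # minimum distance of a linear code = minimum weight
assert weights == {4, 8}   # every nonzero codeword has weight 4 or 8
```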
3.4 (a) The matrix H1 is an (n − k + 1) × (n + 1) matrix. First we note that the n − k rows of H are linearly independent. It is clear that the first n − k rows of H1 are also linearly independent. The last row of H1 has a "1" at its first position, but the other rows of H1 have a "0" at their first position. Hence no linear combination including the last row of H1 can yield the zero vector. Thus all the rows of H1 are linearly independent, and the row space of H1 has dimension n − k + 1. The dimension of its null space, C1, is then equal to
dim(C1) = (n + 1) − (n − k + 1) = k.
Hence C1 is an (n + 1, k) linear code.
(b) Note that the last row of H1 is the all-one vector. The inner product of a vector with odd weight and the all-one vector is "1". Hence, for any odd-weight vector v,
v · H1^T ≠ 0,
and v cannot be a code word in C1. Therefore, C1 consists of only even-weight code words.
(c) Let v be a code word in C. Then v · H^T = 0. Extend v by adding a digit v∞ to its left.
This results in a vector of n + 1 digits,
v1 = (v∞, v) = (v∞, v0, v1, · · · , v_{n−1}).
For v1 to be a vector in C1, we must require that
v1 · H1^T = 0.
First we note that the inner product of v1 with any of the first n − k rows of H1 is 0. The inner product of v1 with the last row of H1 is
v∞ + v0 + v1 + · · · + v_{n−1}.
For this sum to be zero, we must require that v∞ = 1 if the vector v has odd weight and v∞ = 0 if the vector v has even weight. Therefore, any vector v1 formed as above is a code word in C1, and there are 2^k such code words. Since the dimension of C1 is k, these 2^k code words are all the code words of C1.
3.5 Let Ce be the set of code words in C with even weight, and let Co be the set of code words in C with odd weight. Let x be any odd-weight code vector from Co. Adding x to each vector in Co, we obtain a set C′e of even-weight vectors. The number of vectors in C′e is equal to the number of vectors in Co, i.e., |C′e| = |Co|. Also C′e ⊆ Ce. Thus,
|Co| ≤ |Ce|. (1)
Now adding x to each vector in Ce, we obtain a set C′o of odd-weight code words. The number of vectors in C′o is equal to the number of vectors in Ce, and
C′o ⊆ Co.
Hence
|Ce| ≤ |Co|. (2)
From (1) and (2), we conclude that |Co| = |Ce|.
3.6 (a) From the given condition on G, we see that, for any digit position, there is a row in G with a nonzero component at that position. This row is a code word in C. Hence, in the code array, each column contains at least one nonzero entry. Therefore no column in the code array contains only zeros.
(b) Consider the ℓ-th column of the code array. From part (a) we see that this column contains at least one "1". Let S0 be the set of code words with a "0" at the ℓ-th position and S1 be the set of code words with a "1" at the ℓ-th position. Let x be a code word from S1. Adding x to each vector in S0, we obtain a set S′1 of code words with a "1" at the ℓ-th position. Clearly,
|S′1| = |S0| (1)
and
S′1 ⊆ S1. (2)
Adding x to each vector in S1, we obtain a set S′0 of code words with a "0" at the ℓ-th location. We see that
|S′0| = |S1| (3)
and
S′0 ⊆ S0. (4)
From (1) and (2), we obtain
|S0| ≤ |S1|. (5)
From (3) and (4), we obtain
|S1| ≤ |S0|. (6)
From (5) and (6) we have |S0| = |S1|. This implies that the ℓ-th column of the code array consists of 2^{k−1} zeros and 2^{k−1} ones.
(c) Let S0 be the set of code words with a "0" at the ℓ-th position. From part (b), we see that S0 consists of 2^{k−1} code words. Let x and y be any two code words in S0. The sum x + y also has a zero at the ℓ-th location and hence is a code word in S0. Therefore S0 is a subspace of the vector space of all n-tuples over GF(2). Since S0 is a subset of C, it is a subspace of C. The dimension of S0 is k − 1.
3.7 Let x, y, and z be any three n-tuples over GF(2). Note that
d(x, y) = w(x + y), d(y, z) = w(y + z), d(x, z) = w(x + z).
It is easy to see that
w(u) + w(v) ≥ w(u + v). (1)
Let u = x + y and v = y + z. It follows from (1) that
w(x + y) + w(y + z) ≥ w(x + y + y + z) = w(x + z).
From the above inequality, we have
d(x, y) + d(y, z) ≥ d(x, z).
3.8 From the given condition, we see that λ ≤ ⌊(dmin − 1)/2⌋. It follows from Theorem 3.5 that all the error patterns of λ or fewer errors can be used as coset leaders in a standard array. Hence, they are correctable. In order to show that any error pattern of ℓ or fewer errors is detectable, we need to show that no error pattern x of ℓ or fewer errors can be in the same coset as an error pattern y of λ or fewer errors. Suppose that x and y are in the same coset. Then x + y is a nonzero code word. The weight of this code word is
w(x + y) ≤ w(x) + w(y) ≤ ℓ + λ < dmin.
This is impossible, since the minimum weight of the code is dmin. Hence x and y are in different cosets. As a result, when x occurs, it will not be mistaken for y. Therefore x is detectable.
3.11 In a systematic linear code, every nonzero code vector has at least one nonzero component in its information section (i.e., the rightmost k positions). Hence a nonzero vector that consists of only zeros in its rightmost k positions cannot be a code word in any of the systematic codes in Γ.
Now consider a nonzero vector v = (v0, v1, · · · , v_{n−1}) with at least one nonzero component in its k rightmost positions, say v_{n−k+i} = 1 for 0 ≤ i < k. Consider a matrix G of the following form, which has v as its i-th row:
[p00 p01 · · · p0,n−k−1 | 1 0 0 · · · 0]
[p10 p11 · · · p1,n−k−1 | 0 1 0 · · · 0]
...
[v0 v1 · · · v_{n−k−1} | v_{n−k} v_{n−k+1} · · · v_{n−1}]
...
[p_{k−1,0} p_{k−1,1} · · · p_{k−1,n−k−1} | 0 0 0 · · · 1]
By elementary row operations, we can put G into systematic form G1. The code generated by G1 contains v as a code word. Since each pij has two choices, 0 or 1, there are 2^{(k−1)(n−k)} matrices G with v as the i-th row. Each can be put into systematic form G1, and each G1 generates a systematic code containing v as a code word. Hence v is contained in 2^{(k−1)(n−k)} codes in Γ.
3.13 The generator matrix of the code is
G = [P1 Ik P2 Ik] = [G1 G2].
Hence a nonzero codeword in C is simply a cascade of a nonzero codeword v1 in C1 and a
11.6 (a) The controller canonical form encoder realization, requiring 6 delay elements, is shown below.
[Figure: controller canonical form encoder with inputs u(1), u(2) and outputs v(0), v(1), v(2).]
(b) The observer canonical form encoder realization, requiring only 3 delay elements, is shown below.
[Figure: observer canonical form encoder with inputs u(1), u(2) and outputs v(0), v(1), v(2).]
11.14 (a) The GCD of the generator polynomials is 1.
(b) Since the GCD is 1, the inverse transfer function matrix G^{-1}(D) must satisfy
G(D) G^{-1}(D) = [1 + D^2   1 + D + D^2] G^{-1}(D) = I.
By inspection,
G^{-1}(D) = [1 + D
             D].
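The inverse can be verified by expanding the product in GF(2)[D]: (1 + D^2)(1 + D) + (1 + D + D^2)·D = 1. The sketch below (an illustration; the list-based polynomial encoding and helper names are mine) performs that check:

```python
# Polynomials over GF(2) as coefficient lists, index = power of D.
def polymul(a, b):
    """Product of two GF(2) polynomials."""
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] ^= ai & bj
    return r

def polyadd(a, b):
    """Sum (XOR) of two GF(2) polynomials."""
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [x ^ y for x, y in zip(a, b)]

g0, g1 = [1, 0, 1], [1, 1, 1]   # generators 1 + D^2 and 1 + D + D^2
inv0, inv1 = [1, 1], [0, 1]     # inverse entries 1 + D and D
s = polyadd(polymul(g0, inv0), polymul(g1, inv1))
assert s == [1, 0, 0, 0]        # G(D) G^{-1}(D) = 1, a delay-0 inverse
```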
11.15 (a) The GCD of the generator polynomials is 1 + D^2, and a feedforward inverse does not exist.
(b) The encoder state diagram is shown below.
[Figure: encoder state diagram with states S0–S7 and branch labels of the form u/v(0)v(1).]
(c) The cycles S2S5S2 and S7S7 both have zero output weight.
(d) The infinite-weight information sequence
u(D) = 1/(1 + D^2) = 1 + D^2 + D^4 + D^6 + D^8 + · · ·
results in the output sequences
v(0)(D) = u(D)(1 + D^2) = 1,
v(1)(D) = u(D)(1 + D + D^2 + D^3) = 1 + D,
and hence a codeword of finite weight.
(e) This is a catastrophic encoder realization.
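The catastrophic behavior can be seen numerically by multiplying a long truncation of u(D) by each generator polynomial over GF(2); the products telescope to 1 and 1 + D. This sketch assumes the generators 1 + D^2 and 1 + D + D^2 + D^3 named in the problem:

```python
# Infinite-weight input u = 1 + D^2 + D^4 + ... (truncated to N terms),
# mapped through each generator polynomial over GF(2).
N = 64
u = [1 if i % 2 == 0 else 0 for i in range(N)]

def conv(u, g):
    """Multiply the truncated series u by the polynomial g over GF(2)."""
    out = [0] * (len(u) + len(g) - 1)
    for i, ui in enumerate(u):
        for j, gj in enumerate(g):
            out[i + j] ^= ui & gj
    return out[:len(u)]   # drop the truncation tail

v0 = conv(u, [1, 0, 1])       # (1 + D^2) u       -> 1
v1 = conv(u, [1, 1, 1, 1])    # (1 + D + D^2 + D^3) u -> 1 + D
assert v0 == [1] + [0] * (N - 1)
assert v1 == [1, 1] + [0] * (N - 2)
```

Both outputs have finite weight even though the input has infinite weight, which is exactly the catastrophic condition.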
11.16 For a systematic (n, k, ν) encoder, the generator matrix G(D) is a k × n matrix of the form
G(D) = [Ik | P(D)] =
[1 0 · · · 0 | g1^(k)(D) · · · g1^(n−1)(D)]
[0 1 · · · 0 | g2^(k)(D) · · · g2^(n−1)(D)]
...
[0 0 · · · 1 | gk^(k)(D) · · · gk^(n−1)(D)].
The transfer function matrix of a feedforward inverse G^{-1}(D) with delay l = 0 must be such that
G(D) G^{-1}(D) = Ik.
A matrix satisfying this condition is given by
G^{-1}(D) = [Ik
             0_{(n−k)×k}].
11.19 (a) The encoder state diagram is shown below.
[Figure: encoder state diagram with states S0–S3 and branch labels of the form u/v(0)v(1)v(2).]
(b) The modified state diagram is shown below.
[Figure: modified state diagram with states S0, S1, S2, S3 and branch gains X, X^2, and X^3.]
(c) The WEF is given by
A(X) = ∑_i F_i Δ_i / Δ.
There are 3 cycles in the graph:
Cycle 1: S1S2S1, C1 = X^3
Cycle 2: S1S3S2S1, C2 = X^4
Cycle 3: S3S3, C3 = X.
There is one pair of nontouching cycles:
Cycle pair 1: (Cycle 1, Cycle 3), C1C3 = X^4.
There are no more sets of nontouching cycles. Therefore,
Δ = 1 − ∑_i Ci + ∑_{i′,j′} Ci′Cj′ = 1 − (X + X^3 + X^4) + X^4 = 1 − X − X^3.
There are 2 forward paths:
Forward path 1: S0S1S2S0, F1 = X^7
Forward path 2: S0S1S3S2S0, F2 = X^8.
Only cycle 3 does not touch forward path 1, and hence
∆1 = 1 − X.
Forward path 2 touches all the cycles, and hence
∆2 = 1.
Finally, the WEF is given by
A(X) = [X^7(1 − X) + X^8] / (1 − X − X^3) = X^7 / (1 − X − X^3).
Carrying out the division,
A(X) = X^7 + X^8 + X^9 + 2X^10 + · · · ,
indicating that there is one codeword of weight 7, one codeword of weight 8, one codeword of weight 9, two codewords of weight 10, and so on.
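The division can be checked from the recursion implied by (1 − X − X^3)A(X) = X^7: the coefficients satisfy a_d = a_{d−1} + a_{d−3} for d > 7. The helper below is an illustration, not part of the manual:

```python
# Coefficients a_d of A(X) = X^7 / (1 - X - X^3),
# with a_7 = 1 and a_d = 0 for d < 7.
def wef_coeffs(nmax):
    a = [0] * (nmax + 1)
    a[7] = 1
    for d in range(8, nmax + 1):
        a[d] = a[d - 1] + a[d - 3]   # from (1 - X - X^3) A(X) = X^7
    return a

a = wef_coeffs(12)
assert a[7:11] == [1, 1, 1, 2]   # X^7 + X^8 + X^9 + 2 X^10 + ...
```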
(d) The augmented state diagram is shown below.
[Figure: augmented state diagram with states S0, S1, S2, S3 and branch gains WX^3L, X^2L, and WXL.]
(e) The IOWEF is given by
A(W, X, L) = ∑_i F_i Δ_i / Δ.
There are 3 cycles in the graph:
Cycle 1: S1S2S1, C1 = WX^3L^2
Cycle 2: S1S3S2S1, C2 = W^2X^4L^3
Cycle 3: S3S3, C3 = WXL.
There is one pair of nontouching cycles:
Cycle pair 1: (Cycle 1, Cycle 3), C1C3 = W^2X^4L^3.
There are no more sets of nontouching cycles. Therefore,
Δ = 1 − ∑_i Ci + ∑_{i′,j′} Ci′Cj′ = 1 − (WX^3L^2 + W^2X^4L^3 + WXL) + W^2X^4L^3 = 1 − WXL − WX^3L^2.
There are 2 forward paths:
Forward path 1: S0S1S2S0, F1 = WX^7L^3
Forward path 2: S0S1S3S2S0, F2 = W^2X^8L^4.
Only cycle 3 does not touch forward path 1, and hence
∆1 = 1 − WXL.
Forward path 2 touches all the cycles, and hence
∆2 = 1.
Finally, the IOWEF is given by
A(W, X, L) = [WX^7L^3(1 − WXL) + W^2X^8L^4] / [1 − (WX^3L^2 + W^2X^4L^3 + WXL) + W^2X^4L^3] = WX^7L^3 / (1 − WXL − WX^3L^2).
Carrying out the division,
A(W, X, L) = WX^7L^3 + W^2X^8L^4 + W^3X^9L^5 + · · · ,
indicating that there is one codeword of weight 7 with information weight 1 and length 3, one codeword of weight 8 with information weight 2 and length 4, and one codeword of weight 9 with information weight 3 and length 5.
11.20 Using the state variable method described on pp. 505–506, the WEF is found to indicate one codeword of weight 4, one codeword of weight 5, two codewords of weight 6, and so on.
11.28 (a) From Problem 11.19(c), the WEF of the code is
A(X) = X^7 + X^8 + X^9 + 2X^10 + · · · ,
and the free distance of the code is therefore d_free = 7, the lowest power of X in A(X).
(b) The complete CDF is shown below.
[Figure: column distance function d_l versus l, rising from d_min = 5 to d_free = 7.]
(c) The minimum distance is d_min = d_l|_{l=m=2} = 5.
11.29 (a) By examining the encoder state diagram in Problem 11.15 and considering only paths that begin and end in state S0 (see page 507), we find that the free distance of the code is d_free = 6. This corresponds to the path S0S1S2S4S0 and the input sequence u = (1000).
(b) The complete CDF is shown below.
[Figure: column distance function d_l versus l, rising from d_min = 3 to d_free = 6.]
(c) The minimum distance is d_min = d_l|_{l=m=3} = 3.
11.31 By definition, the free distance d_free is the minimum weight of any path that has diverged from and remerged with the all-zero state. Assume that [v]_j represents the shortest remerged path through the state diagram with weight d_free. Letting [d_l]_re be the minimum weight of all remerged paths of length l, it follows that [d_l]_re = d_free for all l ≥ j. Also, for a noncatastrophic encoder, any path that remains unmerged must accumulate weight. Letting [d_l]_un be the minimum weight of all unmerged paths of length l, it follows that
lim_{l→∞} [d_l]_un → ∞.
Therefore
lim_{l→∞} d_l = min{ lim_{l→∞} [d_l]_re, lim_{l→∞} [d_l]_un } = d_free.
Q.E.D.
Chapter 12
Optimum Decoding of Convolutional Codes
12.1 (Note: The problem should read "for the (3,2,2) encoder in Example 11.2" rather than "for the (3,2,2) code in Table 12.1(d)".) The state diagram of the encoder is given by:
[Figure: state diagram of the (3,2,2) encoder with states S0–S3; each state has four outgoing branches labeled u(1)u(2)/v(0)v(1)v(2).]
From the state diagram, we can draw a trellis diagram containing h + m + 1 = 3 + 1 + 1 = 5 levels, as shown below:
[Figure: 5-level trellis diagram with states S0–S3.]
Hence, for u = (11, 01, 10),
v(0) = (1001), v(1) = (1001), v(2) = (0011),
and
v = (110, 000, 001, 111),
agreeing with (11.16) in Example 11.2. The path through the trellis corresponding to this codeword is shown highlighted in the figure.
12.2 Note that
∑_{l=0}^{N−1} c2 [log P(rl|vl) + c1] = ∑_{l=0}^{N−1} [c2 log P(rl|vl) + c2c1] = c2 ∑_{l=0}^{N−1} log P(rl|vl) + N c2 c1.
Since
max_v { c2 ∑_{l=0}^{N−1} log P(rl|vl) + N c2 c1 } = c2 max_v { ∑_{l=0}^{N−1} log P(rl|vl) } + N c2 c1
when c2 is positive, any path that maximizes ∑_{l=0}^{N−1} log P(rl|vl) also maximizes ∑_{l=0}^{N−1} c2 [log P(rl|vl) + c1].
12.3 The integer metric table becomes:
       0₁   0₂   1₂   1₁
  0     6    5    3    0
  1     0    3    5    6
The received sequence is r = (1₁1₂0₁, 1₁1₁0₂, 1₁1₁0₁, 1₁1₁1₁, 0₁1₂0₁, 1₂0₂1₁, 1₂0₁1₁). The decoded sequence is shown in the figure below, and the final survivor is
v = (111, 010, 110, 011, 000, 000, 000),
which yields a decoded information sequence of
u = (11000).
This result agrees with Example 12.1.
[Figure: Viterbi decoding trellis with states S0–S3 and cumulative integer metrics at each level; the final survivor ends at S0 with metric 84.]
12.4 For the given channel transition probabilities, a metric table is first computed. To construct an integer metric table, choose c1 = 2.699 and c2 = 4.28. Then the integer metric table becomes:
       0₁   0₂   0₃   0₄   1₄   1₃   1₂   1₁
  0    10    9    8    7    6    5    3    0
  1     0    3    5    6    7    8    9   10
12.5 (a) Referring to the state diagram of Figure 11.13(a), the trellis diagram for an information sequence of length h = 4 is shown in the figure below.
(b) After Viterbi decoding, the final survivor is
v = (11, 10, 01, 00, 11, 00).
This corresponds to the information sequence
u = (1110).
[Figure: Viterbi decoding trellis for Problem 12.5 with states S0–S7 and cumulative metrics; the final survivor ends at S0.]
12.6 Combining the soft decision outputs yields the following transition probabilities:
        0      1
  0   0.909  0.091
  1   0.091  0.909
For hard decision decoding, the metric is simply Hamming distance. For the received sequence
r = (11, 10, 00, 01, 10, 01, 00),
the decoding trellis is as shown in the figure below, and the final survivor is
v = (11, 10, 01, 01, 00, 11, 00),
which corresponds to the information sequence
u = (1110).
This result matches the result obtained using soft decisions in Problem 12.5.
[Figure: hard-decision Viterbi decoding trellis for r = (11, 10, 00, 01, 10, 01, 00), with Hamming-distance metrics at each state.]
12.9 Proof: For d even,
P_d = (1/2) (d choose d/2) p^{d/2} (1 − p)^{d/2} + ∑_{e=d/2+1}^{d} (d choose e) p^e (1 − p)^{d−e}
    < ∑_{e=d/2}^{d} (d choose e) p^e (1 − p)^{d−e}
    < ∑_{e=d/2}^{d} (d choose e) p^{d/2} (1 − p)^{d/2}
    = p^{d/2} (1 − p)^{d/2} ∑_{e=d/2}^{d} (d choose e)
    < 2^d p^{d/2} (1 − p)^{d/2},
and thus (12.21) is an upper bound on P_d for d even. Q.E.D.
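The bound can be spot-checked numerically for several even d and crossover probabilities p < 1/2 (an illustration; the helper name is mine):

```python
from math import comb

# P_d as in the proof above: half of the central term plus the tail e > d/2.
def P_d(d, p):
    assert d % 2 == 0
    head = 0.5 * comb(d, d // 2) * p ** (d // 2) * (1 - p) ** (d // 2)
    tail = sum(comb(d, e) * p ** e * (1 - p) ** (d - e)
               for e in range(d // 2 + 1, d + 1))
    return head + tail

for d in (2, 4, 6, 10):
    for p in (0.01, 0.1, 0.4):
        # the claimed upper bound 2^d p^{d/2} (1-p)^{d/2}
        assert P_d(d, p) < 2 ** d * p ** (d / 2) * (1 - p) ** (d / 2)
```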
12.10 The event error probability is bounded by (12.25). The resulting coding threshold is Eb/N0 = 12.72 dB. The coding gain as a function of Pb(E) is plotted below.
[Figure: required SNR Eb/N0 (dB), coded and uncoded, and coding gain versus Pb(E), for Pb(E) from 10^-1 down to 10^-10.]
Note that in this example, a short constraint length code (ν = 2) with hard decision decoding, the approximate expressions for Pb(E) indicate that a positive coding gain is only achieved at very small values of Pb(E), and the asymptotic coding gain is only 0.7 dB.
12.13 The (3, 1, 2) encoder of Problem 12.1 has d_free = 7 and B_dfree = 1. Thus, expression (12.46) for the unquantized AWGN channel becomes
Pb(E) ≈ B_dfree e^{−R d_free Eb/N0} = e^{−(7/3)(Eb/N0)},
and (12.37) remains
Pb(E) ≈ (1/2) e^{−Eb/N0}.
These expressions are plotted versus Eb/N0 in the figure below.
[Figure: Pb(E) versus Eb/N0 (dB) for the coded (12.46) and uncoded (12.37) approximations.]
Equating the above expressions and solving for Eb/N0 yields
e^{−(7/3)(Eb/N0)} = (1/2) e^{−Eb/N0}
1 = (1/2) e^{(4/3)(Eb/N0)}
e^{(4/3)(Eb/N0)} = 2
(4/3)(Eb/N0) = ln 2
Eb/N0 = (3/4) ln 2 = 0.5199,
which is Eb/N0 = −2.84 dB, the coding threshold. (Note: If the slightly tighter bound on Q(x) from (1.5) is used to form the approximate expression for Pb(E), the coding threshold actually moves to −∞ dB. But this is just an artifact of the bounds, which are not tight for small values of Eb/N0.) The coding gain as a function of Pb(E) is plotted below. Note that in this example, a short constraint length code (ν = 2) with soft decision decoding, the approximate expressions for Pb(E) indicate that a coding gain above 3.0 dB is achieved at moderate values of Pb(E), and the asymptotic coding gain is 3.7 dB.
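The threshold computation above can be confirmed numerically:

```python
from math import exp, log, log10

# Claimed crossover of e^{-(7/3)x} and (1/2)e^{-x}: x = (3/4) ln 2.
x = 0.75 * log(2)
assert abs(exp(-(7 / 3) * x) - 0.5 * exp(-x)) < 1e-12  # the two sides agree
assert abs(x - 0.5199) < 1e-4                          # x = 0.5199
assert abs(10 * log10(x) - (-2.84)) < 1e-2             # i.e. -2.84 dB
```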
[Figure: required SNR Eb/N0 (dB), coded and uncoded, and coding gain versus Pb(E), for Pb(E) from 10^-1 down to 10^-10.]
12.14 The IOWEF of the (3, 1, 2) encoder of Problem 12.1 is
A(W, X) = WX^7 / (1 − WX − WX^3),
and thus (12.39b) becomes
Pb(E) < B(X)|_{X=D0} = (1/k) ∂A(W, X)/∂W |_{X=D0, W=1} = X^7 / (1 − WX − WX^3)^2 |_{X=D0, W=1}.
For the DMC of Problem 12.4, D0 = 0.42275 and the above expression becomes
Pb(E) < 9.5874 × 10^-3.
If the DMC is converted to a BSC, the resulting crossover probability is p = 0.091. Using (12.29) yields
Pb(E) < B(X)|_{X=2√(p(1−p))} = (1/k) ∂A(W, X)/∂W |_{X=2√(p(1−p)), W=1} = X^7 / (1 − WX − WX^3)^2 |_{X=2√(p(1−p)), W=1} = 3.7096 × 10^-1,
about a factor of 40 larger than in the soft decision case.
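These two evaluations can be restated numerically (an illustration, using the closed form of the derivative at W = 1; exact decimals depend on the precision carried for p and D0):

```python
# B(X) = X^7 / (1 - WX - WX^3)^2 evaluated at W = 1.
def B(X, W=1.0):
    return X ** 7 / (1 - W * X - W * X ** 3) ** 2

assert abs(B(0.42275) - 9.5874e-3) < 1e-4   # soft-decision DMC, X = D_0
p = 0.091
X = 2 * (p * (1 - p)) ** 0.5                # BSC, X = 2*sqrt(p(1-p))
ratio = B(X) / B(0.42275)
assert 35 < ratio < 45                      # hard decisions cost roughly a factor of 40
```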
12.16 For the optimum (2, 1, 7) encoder in Table 12.1(c), d_free = 10, A_dfree = 1, and B_dfree = 2.
(a) From Table 12.1(c), γ = 6.99 dB.
(b) Using (12.26) yields P(E) ≈ A_dfree 2^{d_free} p^{d_free/2} = 1.02 × 10^-7.
(c) Using (12.30) yields Pb(E) ≈ B_dfree 2^{d_free} p^{d_free/2} = 2.04 × 10^-7.
(d) For this encoder,
G^-1 = [D^2
        1 + D + D^2],
and the amplification factor is A = 4.
For the quick-look-in (2, 1, 7) encoder in Table 12.2, d_free = 9, A_dfree = 1, and B_dfree = 1.
(a) From Table 12.2, γ = 6.53 dB.
(b) Using (12.26) yields P(E) ≈ A_dfree 2^{d_free} p^{d_free/2} = 5.12 × 10^-7.
(c) Using (12.30) yields Pb(E) ≈ B_dfree 2^{d_free} p^{d_free/2} = 5.12 × 10^-7.
(d) For this encoder,
G^-1 = [1
        1],
and the amplification factor is A = 2.
12.17 The generator matrix of a rate R = 1/2 systematic feedforward encoder is of the form
G = [1  g^(1)(D)].
Letting g^(1)(D) = 1 + D + D^2 + D^5 + D^7 achieves d_free = 6 with B_dfree = 1 and A_dfree = 1.
(a) The soft-decision asymptotic coding gain is
γ = 4.77 dB.
(b) Using (12.26) yields P(E) ≈ A_dfree 2^{d_free} p^{d_free/2} = 6.4 × 10^-5.
(c) Using (12.30) yields Pb(E) ≈ B_dfree 2^{d_free} p^{d_free/2} = 6.4 × 10^-5.
(d) For this encoder (and all systematic encoders),
G^-1 = [1
        0],
and the amplification factor is A = 1.
12.18 The generator polynomial for the (15, 7) BCH code is
g(X) = 1 + X^4 + X^6 + X^7 + X^8,
and d_g = 5. The generator polynomial of the dual code is
h(X) = (X^15 + 1) / (X^8 + X^7 + X^6 + X^4 + 1) = X^7 + X^6 + X^4 + 1,
and hence d_h ≥ 4.
(a) The rate R = 1/2 code with composite generator polynomial g(D) = 1 + D^4 + D^6 + D^7 + D^8 has generator matrix
G(D) = [1 + D^2 + D^3 + D^4   D^3],
and d_free ≥ min(5, 8) = 5.
(b) The rate R = 1/4 code with composite generator polynomial g(D) = g(D^2) + Dh(D^2) = 1 + D + D^8 + D^9 + D^12 + D^13 + D^14 + D^15 + D^16 has generator matrix
12.20 (a) The augmented state diagram is shown below.
[Figure: augmented state diagram with states S0, S1, S2, S3 and branch gains WX^3L, X^2L, and WXL.]
The generating function is given by
A(W, X, L) = ∑_i F_i Δ_i / Δ.
There are 3 cycles in the graph:
Cycle 1: S1S2S1, C1 = WX^3L^2
Cycle 2: S1S3S2S1, C2 = W^2X^4L^3
Cycle 3: S3S3, C3 = WXL.
There is one pair of nontouching cycles:
Cycle pair 1: (Cycle 1, Cycle 3), C1C3 = W^2X^4L^3.
There are no more sets of nontouching cycles. Therefore,
Δ = 1 − ∑_i Ci + ∑_{i′,j′} Ci′Cj′ = 1 − (WX^3L^2 + W^2X^4L^3 + WXL) + W^2X^4L^3 = 1 − WXL − WX^3L^2.
There are 2 forward paths:
Forward path 1: S0S1S2S0, F1 = WX^7L^3
Forward path 2: S0S1S3S2S0, F2 = W^2X^8L^4.
Only cycle 3 does not touch forward path 1, and hence
∆1 = 1 − WXL.
Forward path 2 touches all the cycles, and hence
∆2 = 1.
Finally, the WEF A(W, X, L) is given by
A(W, X, L) = [WX^7L^3(1 − WXL) + W^2X^8L^4] / [1 − (WX^3L^2 + W^2X^4L^3 + WXL) + W^2X^4L^3] = WX^7L^3 / (1 − WXL − WX^3L^2),
and the generating WEFs Ai(W, X, L) are given by:
A1(W, X, L) = WX^3L(1 − WXL)/Δ = WX^3L(1 − WXL) / (1 − WXL − WX^3L^2)
= WX^3L + W^2X^6L^3 + W^3X^7L^4 + (W^3X^9 + W^4X^8)L^5 + (2W^4X^10 + W^5X^9)L^6 + · · ·
A2(W, X, L) = [WX^5L^2(1 − WXL) + W^2X^6L^3]/Δ = WX^5L^2 / (1 − WXL − WX^3L^2)
= WX^5L^2 + W^2X^6L^3 + (W^2X^8 + W^3X^7)L^4 + (2W^3X^9 + W^4X^8)L^5 + (W^3X^11 + 3W^4X^10 + W^5X^9)L^6 + · · ·
A3(W, X, L) = W^2X^4L^2/Δ = W^2X^4L^2 / (1 − WXL − WX^3L^2)
= W^2X^4L^2 + W^3X^5L^3 + (W^3X^7 + W^4X^6)L^4 + (2W^4X^8 + W^5X^7)L^5 + (W^4X^10 + 3W^5X^9 + W^6X^8)L^6 + · · ·
(b) This code has d_free = 7, so τ_min is the minimum value of τ for which d(τ) = d_free + 1 = 8. Examining the series expansions of A1(W, X, L), A2(W, X, L), and A3(W, X, L) above yields τ_min = 5.
(c) A table of d(τ) and A_d(τ) is given below.
  τ   d(τ)   A_d(τ)
  0    3      1
  1    4      1
  2    5      1
  3    6      1
  4    7      1
  5    8      1
(d) From part (c) and the series expansion of A3(W, X, L), it can be seen that, as τ → ∞,
d(τ) = τ + 3.
12.21 For a BSC, the trellis diagram of Figure 12.6 in the book may be used to decode the three possible 21-bit subsequences using the Hamming metric. The results are shown in the three figures below. Since the r used in the middle figure (b) has the smallest Hamming distance (2) of the three subsequences, it is the most likely to be correctly synchronized.
[Figure (a): decoding trellis for r = (011, 100, 110, 010, 110, 010, 001).]
[Figure (b): decoding trellis for r = (111, 001, 100, 101, 100, 100, 011); final Hamming distance 2.]
[Figure (c): decoding trellis for r = (110, 011, 001, 011, 001, 000, 111).]
Chapter 13
Suboptimum Decoding of Convolutional Codes
13.1 (a) Referring to the state diagram of Figure 11.13(a), the code tree for an information sequence of length h = 4 is shown below.
(b) For u = (1001), the corresponding codeword is v = (11, 01, 11, 00, 01, 11, 11), as shown below.
[Figure: code tree of depth h + m = 7 with branch labels; the path for u = (1001) traces the codeword v = (11, 01, 11, 00, 01, 11, 11).]
13.2 Proof: A binary-input, Q-ary-output DMC is symmetric if

P(r_ℓ = j | 0) = P(r_ℓ = Q − 1 − j | 1)    (a-1)

for j = 0, 1, . . . , Q − 1. The output probability distribution may be computed as

P(r_ℓ = j) = Σ_{i=0}^{1} P(r_ℓ = j | i) P(i),

where i is the binary input to the channel. For equally likely input signals, P(1) = P(0) = 0.5 and

P(r_ℓ = j) = P(r_ℓ = j | 0)P(0) + P(r_ℓ = j | 1)P(1)
           = 0.5 [P(r_ℓ = j | 0) + P(r_ℓ = j | 1)]
           = 0.5 [P(r_ℓ = Q − 1 − j | 1) + P(r_ℓ = Q − 1 − j | 0)]   [using (a-1)]
           = P(r_ℓ = Q − 1 − j).

Q.E.D.
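As a numerical sanity check, the symmetry claim can be verified for a small hypothetical channel (the Q = 4 transition probabilities below are invented for illustration):

```python
# Hypothetical Q = 4 transition probabilities chosen to satisfy the
# symmetry condition (a-1): P(r = j | 0) = P(r = Q-1-j | 1).
Q = 4
p_given_0 = [0.60, 0.25, 0.10, 0.05]           # P(r = j | 0), j = 0..3
p_given_1 = list(reversed(p_given_0))          # forced by condition (a-1)

# Equally likely inputs: P(r = j) = 0.5 * [P(j|0) + P(j|1)]
p_out = [0.5 * (p_given_0[j] + p_given_1[j]) for j in range(Q)]

# The claim proved above: the output distribution is symmetric.
for j in range(Q):
    assert abs(p_out[j] - p_out[Q - 1 - j]) < 1e-12
print(p_out)
```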
13.3 (a) Computing the Fano metric with p = 0.045 results in the metric table:

             0         1
    0      0.434    −3.974
    1     −3.974     0.434

Dividing by 0.434 results in the following integer metric table:

             0    1
    0        1   −9
    1       −9    1
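The table entries can be reproduced from the bit-level Fano metric M(r|v) = log2[P(r|v)/P(r)] − R; the sketch below assumes a BSC with equally likely inputs (so P(r) = 1/2) and code rate R = 1/2, which matches the tabulated values:

```python
from math import log2

# Bit-level Fano metric for a BSC with crossover probability p = 0.045.
p, R = 0.045, 0.5

def fano_bit_metric(match: bool) -> float:
    """M(r|v) = log2(P(r|v)/P(r)) - R, with P(r) = 1/2 on a BSC."""
    pr_given_v = (1 - p) if match else p
    return log2(pr_given_v / 0.5) - R

m_match, m_mismatch = fano_bit_metric(True), fano_bit_metric(False)
print(round(m_match, 3), round(m_mismatch, 3))   # 0.434 -3.974

# Integer table: divide by the smaller-magnitude entry and round.
print(round(m_match / m_match), round(m_mismatch / m_match))   # 1 -9
```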
(b) Decoding r = (11, 00, 11, 00, 01, 10, 11) using the stack algorithm results in the following eight steps:
The decoded sequence is v = (11, 10, 01, 01, 00, 11, 00) and u = (1110). This agrees with the result of Problem 12.5(b).
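For illustration, a minimal stack-algorithm decoder can be sketched. The (2,1,2) generators g1 = 1 + D + D^2 and g2 = 1 + D^2, the integer bit metrics (+1 match, −9 mismatch), and the noiseless received sequence are assumptions chosen to keep the example self-contained, not necessarily the exact code of Problem 13.3:

```python
def encode(u, m=2):
    """(2,1,2) encoder, g1 = 1 + D + D^2, g2 = 1 + D^2 (assumed code)."""
    s1 = s2 = 0
    out = []
    for bit in list(u) + [0] * m:                 # m termination zeros
        v1 = bit ^ s1 ^ s2                        # taps of g1
        v2 = bit ^ s2                             # taps of g2
        out.append((v1, v2))
        s1, s2 = bit, s1
    return out

def stack_decode(r, h, m=2):
    """Pop the best partial path, extend it, repeat until a full path wins."""
    stack = [(0, ())]                             # (metric, info bits so far)
    while True:
        best = max(stack, key=lambda e: e[0])
        stack.remove(best)
        metric, path = best
        if len(path) == h + m:
            return path[:h]
        choices = (0, 1) if len(path) < h else (0,)   # termination forces 0
        for bit in choices:
            ext = path + (bit,)
            branch = encode(ext, m=0)[-1]         # output pair on this branch
            step = sum(1 if a == b else -9
                       for a, b in zip(branch, r[len(path)]))
            stack.append((metric + step, ext))

u = (1, 1, 1, 0)
r = encode(u)                                     # noiseless channel
print(stack_decode(r, h=len(u)))                  # recovers (1, 1, 1, 0)
```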
13.6 As can be seen from the solution to Problem 13.3, the final decoded path never falls below the second stack entry (part (b)) or the fourth stack entry (part (c)). Thus, for a stack size of 10 entries, the final decoded path is not affected in either case.
13.7 From Example 13.5, the integer metric table is

             0    1
    0        1   −5
    1       −5    1
(a) Decoding the received sequence r = (010, 010, 001, 110, 100, 101, 011) using the stack bucket algorithm with an interval of 5 results in the following decoding steps:

    Interval:   1       2        3         4          5           6           7
    Range:    9 to 5  4 to 0  −1 to −5  −6 to −10  −11 to −15  −16 to −20  −21 to −25

The decoded sequence is v = (111, 010, 001, 110, 100, 101, 011) and u = (11101), which agrees with the result of Example 13.5.
(b) Decoding the received sequence r = (010, 010, 001, 110, 100, 101, 011) using the stack bucket algorithm with an interval of 9 results in the following decoding steps.

    Interval:   1       2         3           4
    Range:    8 to 0  −1 to −9  −10 to −18  −19 to −27
* Not the first visit. Therefore no tightening is done.
Note: The final decoded path agrees with the results of Examples 13.7 and 13.8 and Problem 13.7. The number of computations in both cases is reduced compared to Examples 13.7 and 13.8.
The orthogonal check sums on e_0^(0) are {s_0, s_2, s_7, s_13, s_16, s_17}.
(b) The block diagram of the feedback majority logic decoder for this code is:

[Figure: feedback majority-logic decoder with inputs r_17^(0) and r_17^(1); syndrome register s, . . . , s_17; the six orthogonal check sums feed a majority gate producing ê^(0), which corrects r^(0) to give û, with feedback to the syndrome register.]
13.37 (a) This orthogonalizable code with g^(1)(D) = 1 + D^6 + D^7 + D^9 + D^10 + D^11 yields the parity triangle:
100000110111
10000011011
1000001101
100000110
10000011
1000001
100000
10000
1000
100
10
1
The orthogonal check sums on e_0^(0) are {s_0, s_6, s_7, s_9, s_1 + s_3 + s_10, s_4 + s_8 + s_11}.
(b) The block diagram of the feedback majority logic decoder for this code is:

[Figure: feedback majority-logic decoder with inputs r_11^(0) and r_11^(1); syndrome register s, . . . , s_11; six check sums (three formed by modulo-2 sums of syndrome bits) feed a majority gate producing ê^(0), which corrects r^(0) to give û.]
Chapter 16
Turbo Coding
16.1 Let u = [u_0, u_1, . . . , u_{K−1}] be the input to encoder 1. Then

v^(1) = [v_0^(1), v_1^(1), . . . , v_{K−1}^(1)] = u G^(1)

is the output of encoder 1, where G^(1) is the K × K parity generator matrix for encoder 1, i.e., v^(1) = u G^(1) is the parity sequence produced by encoder 1. For example, for the (2,1,2) systematic feedback encoder with
16.5 Consider using an h-repeated (n, k, d_min) = (7, 4, 3) Hamming code as the constituent code, forming an (h[2n − k], k) = (10h, 4h) PCBC. The output consists of the original input bits u, the parity bits from the non-interleaved encoder v^(1), and the parity bits from the interleaved encoder v^(2). For the h = 4-repeated PCBC, the IRWEF of the (28,16) constituent code is

A(W, Z) = [1 + W(3Z^2 + Z^3) + W^2(3Z + 3Z^2) + W^3(1 + 3Z) + W^4 Z^3]^4 − 1
        = W(12Z^2 + 4Z^3) + · · ·

Using a 4 × 4 row-column (block) interleaver, the minimum distance of the PCBC is 5. This comes from a weight-1 input, resulting in w_H(u) = 1, w_H(v^(1)) = 2, and w_H(v^(2)) = 2, where w_H(x) is the Hamming weight of x.
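The quoted coefficient of W can be checked by expanding the fourth power directly; the sketch below uses a plain dictionary representation of bivariate polynomials, with the (7,4) Hamming IRWEF coefficients taken from the expression above:

```python
from itertools import product
from collections import defaultdict

# A(W,Z) = 1 + W(3Z^2+Z^3) + W^2(3Z+3Z^2) + W^3(1+3Z) + W^4 Z^3,
# stored as {(W_exponent, Z_exponent): coefficient}.
A = {(0, 0): 1, (1, 2): 3, (1, 3): 1, (2, 1): 3, (2, 2): 3,
     (3, 0): 1, (3, 1): 3, (4, 3): 1}

def poly_mul(p, q):
    r = defaultdict(int)
    for (ea, ca), (eb, cb) in product(p.items(), q.items()):
        r[(ea[0] + eb[0], ea[1] + eb[1])] += ca * cb
    return dict(r)

A4 = {(0, 0): 1}
for _ in range(4):            # h = 4 repetitions
    A4 = poly_mul(A4, A)
A4[(0, 0)] -= 1               # the "- 1" term (remove the all-zero codeword)

# Coefficient of W^1: should be 12Z^2 + 4Z^3 as stated above.
w1 = {z: c for (w, z), c in A4.items() if w == 1}
print(w1)
```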
16.7 On page 791 in the text, it is explained that “...any multiple error event in a codeword belonging to a terminated convolutional code can be viewed as a succession of single error events separated by sequences of 0’s.” Another way to look at it is that for convolutional codewords, we start in the zero state and end in the zero state, and any multiple error event can be seen as a series of deviations from the zero state. These deviations from the zero state can be separated by any amount of time spent in the zero state. For example, the codeword could have one deviation starting and another deviation ending at the same zero state, or the codeword could stay in the zero state for several input zeros. Likewise, the first input bit could take the codeword out of the zero state, or the last input bit could be the one to return the codeword to the zero state.
So, if there are h error events with total length λ in a block codeword of length K, then there must be K − λ times that state 0 occurs or, as explained in the text, K − λ 0’s. Now, in the codeword, there are K − λ + h places where an error event can start, since the remaining λ − h places are then determined by the error events. Since there are h events, the number of ways to choose h elements from K − λ + h places gives the multiplicity of block codewords for h-error events.
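The counting argument can be confirmed by brute force for small parameters; the sketch below assumes the error events are placed in a fixed order with non-negative gaps between them, as described above:

```python
from itertools import combinations
from math import comb

def count_placements(K, lengths):
    """Enumerate start positions of in-order, non-overlapping error events
    (adjacent events allowed) in a block of length K."""
    h = len(lengths)
    count = 0
    for starts in combinations(range(K), h):      # sorted start positions
        ok = all(starts[i] + lengths[i] <= starts[i + 1]
                 for i in range(h - 1))
        if ok and starts[-1] + lengths[-1] <= K:
            count += 1
    return count

# h = 2 events of total length lam = 5 in a block of K = 10:
K, lengths = 10, [3, 2]
h, lam = len(lengths), sum(lengths)
assert count_placements(K, lengths) == comb(K - lam + h, h)
print(comb(K - lam + h, h))   # C(K - lam + h, h) = C(7, 2) = 21
```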
16.8 The IRWEF and WEF for the PCCC in Example 16.6 are found in the same manner as for a PCBC. The following combines the equations in (16.49) for the CWEF’s:
16.10 Redrawing the encoder block diagram and the IRWEF state diagram for the reversed generators, we see that the state diagram is essentially the same except that W and Z are switched. Thus, we can use equation (16.41) for the new IRWEF with W and Z reversed.
Dropping all terms of order greater than L^9 gives the double-error event enumerators A^(2)_{w,ℓ}(Z):

A^(2)_{4,6} = Z^6      A^(2)_{4,8} = 2Z^7
A^(2)_{6,7} = 2Z^5     A^(2)_{6,8} = 2Z^6     A^(2)_{6,9} = 6Z^6 + 2Z^7
A^(2)_{8,8} = Z^4      A^(2)_{8,9} = 2Z^5

Following the same procedure, the triple-error event IRWEF is given by A^(3)(W, Z, L) = [A(W, Z, L)]^3 = L^9 W^6 Z^9 + . . ., and the triple-error event enumerator is given by A^(3)_{6,9}(Z) = Z^9.
Before finding the CWEF’s we need to calculate h_max. Notice that there are no double-error events of weight less than 4 and that the only triple-error event has weight 6. Also notice that, for the terminated encoder, odd input weights don’t appear. So for weight 2, h_max = 1. For weights 4 and 8, h_max = 2. For weight 6, h_max = 3. Now we can use equations (16.37) and (16.39) to find the CWEF’s as follows:
A_2(Z) = c[3,1] A^(1)_{2,3}(Z) + c[5,1] A^(1)_{2,5}(Z) + c[7,1] A^(1)_{2,7}(Z) + c[9,1] A^(1)_{2,9}(Z)
       = 3Z^2 + 7Z^3 + 5Z^4 + Z^6

A_4(Z) = c[4,1] A^(1)_{4,4}(Z) + c[5,1] A^(1)_{4,5}(Z) + c[6,1] A^(1)_{4,6}(Z) + c[7,1] A^(1)_{4,7}(Z) + c[8,1] A^(1)_{4,8}(Z) + c[9,1] A^(1)_{4,9}(Z) + c[6,2] A^(2)_{4,6}(Z) + c[8,2] A^(2)_{4,8}(Z)
       = 6Z^2 + 13Z^3 + 16Z^4 + 10Z^5 + 14Z^6 + 7Z^7

A_6(Z) = c[7,1] A^(1)_{6,7}(Z) + c[8,1] A^(1)_{6,8}(Z) + c[9,1] A^(1)_{6,9}(Z) + c[7,2] A^(2)_{6,7}(Z) + c[8,2] A^(2)_{6,8}(Z) + c[9,2] A^(2)_{6,9}(Z) + c[9,3] A^(3)_{6,9}(Z)
       = 3Z^2 + 7Z^3 + 3Z^4 + 12Z^5 + 12Z^6 + 2Z^7 + Z^9

A_8(Z) = c[8,2] A^(2)_{8,8}(Z) + c[9,2] A^(2)_{8,9}(Z)
       = 3Z^4 + 2Z^5

A quick calculation shows that the CWEF’s include a total of 127 nonzero codewords, the correct number for an (18,7) code.
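The 127-codeword count can be confirmed by summing the coefficients of the four CWEF’s listed above:

```python
# Coefficients of the CWEF's above, stored as {Z_exponent: coefficient}.
A2 = {2: 3, 3: 7, 4: 5, 6: 1}
A4 = {2: 6, 3: 13, 4: 16, 5: 10, 6: 14, 7: 7}
A6 = {2: 3, 3: 7, 4: 3, 5: 12, 6: 12, 7: 2, 9: 1}
A8 = {4: 3, 5: 2}

# Total nonzero codewords: should be 2^7 - 1 = 127 for an (18,7) code.
total = sum(sum(a.values()) for a in (A2, A4, A6, A8))
print(total)   # 127
```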
This convolutional code has minimum distance 4, one less than for the code of Example 16.6. As in the example, we must modify the concept of the uniform interleaver because we are using convolutional constituent codes. Therefore, note that for weight 2, there are 16 valid input sequences. For weight 4, there are 66. For weight 6, there are 40, and for weight 8, there are 5. For all other weights, there are no valid input sequences.
The CWEF’s of the (27,7) PCCC can be found as follows:
16.13 First constituent encoder: G^(1)_ff(D) = [1 + D + D^2    1 + D^2]
Second constituent encoder: G^(2)_ff(D) = [1    1 + D + D^2]

For the first constituent encoder, the IOWEF is given by

A^C1(W, X, L) = W X^5 L^3 + W^2 L^4 (1 + L) X^6 + W^3 L^5 (1 + L)^2 X^7 + . . . ,

and for the second one the IRWEF can be computed as

A^C2(W, Z, L) = W Z^3 L^3 + W^2 Z^2 L^4 + W^2 Z^4 L^5 + . . . .
The effect of uniform interleaving is represented by

A^PC_w(X, Z) = A^C1_w(X) A^C2_w(Z) / C(K, w),

and for large K, this can be approximated as

A^PC_w(X, Z) ≈ Σ_{1≤h1≤h_max} Σ_{1≤h2≤h_max} (c[h1] c[h2] / C(K, w)) A^(h1)_w(X) A^(h2)_w(Z),

where A^(h1)_w(X) and A^(h2)_w(Z) are the conditional IO and IR h-error event WEF’s of the nonsystematic and systematic constituent encoders, C1 and C2, respectively. With further approximations, we obtain

A^PC_w(X, Z) ≈ (w! / (h1_max! h2_max!)) K^(h1_max + h2_max − w) A^(h1_max)_w(X) A^(h2_max)_w(Z)

and

B^PC_w(X, Z) ≈ (w/K) A^PC_w(X, Z).
Going directly to the large-K case, for any input weight w, w ≥ 1, h_max = w, since the weight-1 single error event can be repeated w times. It follows that

A^(w)_w(X) = X^{5w},   w ≥ 1,

and

A^(w)_w(Z) = Z^{3w},   w ≥ 1.

Hence

A^PC_w(X, Z) ≈ (w! / (w! w!)) K^(w + w − w) X^{5w} Z^{3w} = (K^w / w!) X^{5w} Z^{3w}

and

B^PC_w(X, Z) ≈ (K^{w−1} / (w − 1)!) X^{5w} Z^{3w}.
Therefore

A^PC(W, X) ≈ Σ_{1≤w≤K} W^w (K^w / w!) X^{8w} = K W X^8 + (K^2/2) W^2 X^16 + (K^3/6) W^3 X^24 + . . . ,

B^PC(W, X) ≈ W X^8 + K W^2 X^16 + (K^2/2) W^3 X^24 + . . . ,

the average WEF’s are

A^PC(X) = K X^8 + (K^2/2) X^16 + (K^3/6) X^24 + . . . ,

B^PC(X) = X^8 + K X^16 + (K^2/2) X^24 + . . . ,

and the free distance of the code is 8.
16.14 The encoder and augmented modified state diagram for this problem are shown below:

[Figure: (a) encoder and (b) augmented modified state diagram for G(D) = (1 + D + D^2)/(1 + D), with states S_0, S_1, S_2, S_3 and branch gains LWZ, LW, LZ, and L.]
Note that in order to leave from and return to state S_0 (i.e., terminating a (K − 2)-bit input sequence), an even overall-input (including termination bits) weight is required. Examination of the state diagram reveals 6 paths that contain overall-input weight W < 6 (“· · ·” means an indefinite loop around S_3):
The multiple turbo code, as shown in Figure 16.2, contains 3 encoders, and it is assumed that the interleavers are uniformly distributed, the block size K is large, and the code uses the same constituent encoders. Assume that a particular input sequence of weight w enters the first encoder, generating the parity weight enumerator A^PC_w(Z). Then, by the definition of the uniform interleaver, the second and third encoders will generate any of the weights in A^PC_w(Z) with equal probability. The codeword CWEF of this (4K, K − 2) terminated multiple turbo code is given by
A^PC_w(Z) = [Σ_{h1=1}^{h_max} c[h1, w] A^(h1)_w(Z)] [Σ_{h2=1}^{h_max} c[h2, w] C(K, w)^{−1} A^(h2)_w(Z)] [Σ_{h3=1}^{h_max} c[h3, w] C(K, w)^{−1} A^(h3)_w(Z)]

          = Σ_{h1=1}^{h_max} Σ_{h2=1}^{h_max} Σ_{h3=1}^{h_max} C(K, w)^{−2} c[h1, w] c[h2, w] c[h3, w] A^(h1)_w(Z) A^(h2)_w(Z) A^(h3)_w(Z),
where h_max is the maximum number of error events for the particular choice of w, and c[h, w] = C(K − w + h, h). With K ≫ h and K ≫ w, c[h, w] ≈ C(K, h) ≈ K^h / h!, and h_max = ⌊w/2⌋. These approximations, along with keeping only the highest power of K, i.e., the term corresponding to h1 = h2 = h3 = h_max, result in the codeword CWEF

A^PC_w(Z) ≈ ((w!)^2 / (h_max!)^3) K^(3h_max − 2w) [A^(h_max)_w(Z)]^3

and the bit CWEF

B^PC_w(Z) = (w/K) A^PC_w(Z) ≈ (w (w!)^2 / (h_max!)^3) K^(3h_max − 2w − 1) [A^(h_max)_w(Z)]^3.
Then, for w < 6, the approximate IRWEF’s for this PCCC are

A^PC(W, Z) ≈ Σ_{w=2}^{5} W^w A^PC_w(Z)   and   B^PC(W, Z) ≈ Σ_{w=2}^{5} W^w B^PC_w(Z),

and the WEF’s are A^PC(X) = A^PC(X, X) and B^PC(X) = B^PC(X, X).
(a) Using the above, the approximate CWEF’s up to Z^14 are

A^PC_2(Z) ≈ (4/K) [A^(1)_2(Z)]^3 = (4/K) Z^6 + (24/K) Z^7 + (60/K) Z^8 + (92/K) Z^9 + (120/K) Z^10 + (156/K) Z^11 + (196/K) Z^12 + (240/K) Z^13 + (288/K) Z^14 + · · · + 2((n^2 − 3n − 10)/K) Z^n + · · ·

A^PC_4(Z) ≈ (72/K^2) [A^(2)_4(Z)]^3 ≈ (72/K^2) Z^12 + (864/K^2) Z^13 + (4752/K^2) Z^14 + · · ·

A^PC_3(Z) = A^PC_5(Z) = 0

B^PC_2(Z) = (2/K) A^PC_2(Z) ≈ (8/K^2) Z^6 + (48/K^2) Z^7 + (120/K^2) Z^8 + (184/K^2) Z^9 + (240/K^2) Z^10 + (312/K^2) Z^11 + (392/K^2) Z^12 + (480/K^2) Z^13 + (576/K^2) Z^14 + · · · + 4((n^2 − 3n − 10)/K^2) Z^n + · · ·

B^PC_4(Z) = (4/K) A^PC_4(Z) ≈ (288/K^3) Z^12 + (3456/K^3) Z^13 + (19008/K^3) Z^14 + · · ·

B^PC_3(Z) = B^PC_5(Z) = 0
(b) Using the above, the approximate IRWEF’s up to W^5 and Z^14 are

A^PC(W, Z) ≈ W^2 [ (4/K) Z^6 + (24/K) Z^7 + (60/K) Z^8 + (92/K) Z^9 + (120/K) Z^10 + (156/K) Z^11 + (196/K) Z^12 + (240/K) Z^13 + (288/K) Z^14 + · · · + 2((n^2 − 3n − 10)/K) Z^n + · · · ] + W^4 [ (72/K^2) Z^12 + (864/K^2) Z^13 + (4752/K^2) Z^14 + · · · ]

and

B^PC(W, Z) ≈ W^2 [ (8/K^2) Z^6 + (48/K^2) Z^7 + (120/K^2) Z^8 + (184/K^2) Z^9 + (240/K^2) Z^10 + (312/K^2) Z^11 + (392/K^2) Z^12 + (480/K^2) Z^13 + (576/K^2) Z^14 + · · · + 4((n^2 − 3n − 10)/K^2) Z^n + · · · ] + W^4 [ (288/K^3) Z^12 + (3456/K^3) Z^13 + (19008/K^3) Z^14 + · · · ]
(c) Using the above, the approximate WEF’s up to X^14, neglecting terms with higher powers of 1/K, are

A^PC(X) ≈ (4/K) X^8 + (24/K) X^9 + (60/K) X^10 + (92/K) X^11 + (120/K) X^12 + (156/K) X^13 + (196/K) X^14 + · · ·

and

B^PC(X) ≈ (8/K^2) X^8 + (48/K^2) X^9 + (120/K^2) X^10 + (184/K^2) X^11 + (240/K^2) X^12 + (312/K^2) X^13 + (392/K^2) X^14 + · · ·
(d) The union bounds on the BER and WER for K = 10^3 and K = 10^4, assuming a binary-input, unquantized-output AWGN channel, are shown below. The plots were created using the loose bounds

WER ≤ A^PC(X) |_{X = e^{−R Eb/N0}}   and   BER ≤ B^PC(X) |_{X = e^{−R Eb/N0}},

where R = 1/4 is the code rate and Eb/N0 is the SNR.
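Bound curves of this kind can be sketched from the truncated series in part (c); the coefficient lists below are taken from above, and since the series is truncated at X^14 the numbers are only indicative, especially at low SNR:

```python
from math import exp

# Coefficients of X^8 .. X^14 in A(X) (scaled by 1/K) and B(X) (by 1/K^2),
# from part (c) above.
a_coeffs = [4, 24, 60, 92, 120, 156, 196]
b_coeffs = [8, 48, 120, 184, 240, 312, 392]
R = 0.25                                   # rate of the (4K, K-2) code

def bounds(K, ebno_db):
    """Loose union bounds: WER <= A(X), BER <= B(X) at X = e^(-R*Eb/N0)."""
    x = exp(-R * 10 ** (ebno_db / 10))     # Eb/N0 converted from dB
    wer = sum(c * x ** (8 + i) for i, c in enumerate(a_coeffs)) / K
    ber = sum(c * x ** (8 + i) for i, c in enumerate(b_coeffs)) / K ** 2
    return wer, ber

for snr_db in (2, 4, 6):
    print(snr_db, bounds(1000, snr_db))
```

Both bounds fall with SNR, and for a given K the BER bound sits roughly a factor of 2/K below the WER bound, consistent with B_w = (w/K) A_w.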
[Plots: (a) word error rate and (b) bit error rate versus Eb/N0 (dB), from 1 to 7 dB, for N = 1000 (dashed) and N = 10000 (solid).]
16.15 (a) [1    1/(1 + D)]
i. Weight 2: Minimum-parity-weight generating information sequence = 1 + D. Corresponding parity weight = 1. In a PCCC (rate 1/3) we therefore have (assuming a similar input after interleaving): information weight = 2; parity weight = 1 + 1 = 2; total weight = 4.
ii. Weight 3: For this encoder a weight-3 input does not terminate the encoder and hence is not possible.
(b) [1    (1 + D^2)/(1 + D + D^2)]
i. Weight 2: Minimum-parity-weight generating information sequence = 1 + D^3. Corresponding parity weight = 4. In a PCCC we therefore have: information weight = 2; parity weight = 4 + 4 = 8; total weight = 10.
ii. Weight 3: Minimum-parity-weight generating information sequence = 1 + D + D^2. Corresponding parity weight = 2. In a PCCC we therefore have: information weight = 3; parity weight = 2 + 2 = 4; total weight = 7.
(c) [1    (1 + D^2 + D^3)/(1 + D + D^3)]
i. Weight 2: Minimum-parity-weight generating information sequence = 1 + D^7. Corresponding parity weight = 6. In a PCCC we therefore have: information weight = 2; parity weight = 6 + 6 = 12; total weight = 14.
ii. Weight 3: Minimum-parity-weight generating information sequence = 1 + D + D^3. Corresponding parity weight = 4. In a PCCC we therefore have: information weight = 3; parity weight = 4 + 4 = 8; total weight = 11.
(d) [1    (1 + D + D^2 + D^4)/(1 + D^3 + D^4)]
i. Weight 2: Minimum-parity-weight generating information sequence = 1 + D^15. Corresponding parity weight = 10. In a PCCC we therefore have: information weight = 2; parity weight = 10 + 10 = 20; total weight = 22.
ii. Weight 3: Minimum-parity-weight generating information sequence = 1 + D^3 + D^4. Corresponding parity weight = 4. In a PCCC we therefore have: information weight = 3; parity weight = 4 + 4 = 8; total weight = 11.
(e) [1    (1 + D^4)/(1 + D + D^2 + D^3 + D^4)]
i. Weight 2: Minimum-parity-weight generating information sequence = 1 + D^5. Corresponding parity weight = 4. In a PCCC we therefore have: information weight = 2; parity weight = 4 + 4 = 8; total weight = 10.
ii. Weight 3: For this encoder a weight-3 input does not terminate the encoder and hence is not possible.
In all cases the input weight two sequence gives the free distance codeword for large K.
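The parity-weight computations above reduce to polynomial multiplication and exact division over GF(2); the sketch below verifies case (b), where u(D) = 1 + D^3 gives parity u(D)(1 + D^2)/(1 + D + D^2) of weight 4:

```python
# GF(2) polynomials as integer bit-masks: bit i = coefficient of D^i.
def gf2_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_div(num, den):
    """Exact polynomial division over GF(2); returns the quotient."""
    q = 0
    while num and num.bit_length() >= den.bit_length():
        shift = num.bit_length() - den.bit_length()
        q ^= 1 << shift
        num ^= den << shift
    assert num == 0, "division is not exact"
    return q

# Case (b): u(D) = 1 + D^3, n(D) = 1 + D^2, d(D) = 1 + D + D^2.
parity = gf2_div(gf2_mul(0b1001, 0b101), 0b111)
print(bin(parity).count("1"))   # parity weight 4, as stated above
```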
16.16 For a systematic feedforward encoder, for input weight w, the single error event for input weight 1 can occur w times. Therefore h_max, the largest number of error events associated with a weight-w input sequence, is equal to w. That is, the maximum number of error events produced by a weight-w input sequence occurs by repeating the single error event for input weight 1 w times. Thus,

A^(w)_w(Z) = [A^(1)_1(Z)]^w.

This is not true for feedback encoders. Feedback encoders require at least a weight-2 input sequence to terminate. Thus, for an input sequence of weight 2w, h_max = w, i.e., we have a maximum of w error events. This occurs when an error event caused by a weight-2 input sequence is repeated w times (for a total input weight of 2w). Therefore

A^(w)_{2w}(Z) = [A^(1)_2(Z)]^w.
16.19 An (n, 1, v) systematic feedback encoder can be realized by the following diagram:

[Figure: shift register with memory cells m_0, m_1, . . . , m_{v−1}, input u, systematic output v^(0), and parity outputs v^(1), v^(2), . . . , v^(n−1), with dashed feedback connections.]
In the figure, the dashed lines indicate potential connections that depend on the encoder realization. It can be seen from the encoder structure that the register contents are updated as
m_{i,t} = m_{i−1,t−1},   i = 1, 2, . . . , v − 1,

and

m_{0,t} = m_{v−1,t−1} + u_t + Σ_{i=0}^{v−2} a_i m_{i,t−1},

where t is the time index, u_t represents the input bit at time t, and the coefficients a_i depend on the particular encoder realization. When the encoder is in state S_{2^{v−1}} = (0 0 . . . 1) at time t − 1, it follows from the given relations that

m_{i,t} = m_{i−1,t−1} = 0,   i = 1, 2, . . . , v − 1,

and

m_{0,t} = m_{v−1,t−1} + u_t + Σ_{i=0}^{v−2} a_i m_{i,t−1} = 1 + u_t.
Thus, if the input bit u_t = 1, the first memory position at time t is also 0, and the encoder is in the all-zero state.
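The termination argument can be checked by simulating the update equations; the v = 3 realization below is an arbitrary choice for illustration, and the conclusion holds for every setting of the feedback coefficients a_i:

```python
# One step of the register update from 16.19:
#   m_{i,t} = m_{i-1,t-1} for i >= 1,
#   m_{0,t} = m_{v-1,t-1} + u_t + sum_i a_i * m_{i,t-1}  (mod 2).
def step(state, u, a):
    """state = (m_0, ..., m_{v-1}); returns the state at the next time."""
    v = len(state)
    m0 = (state[v - 1] + u + sum(ai * mi for ai, mi in zip(a, state))) % 2
    return (m0,) + state[:v - 1]          # shift: m_i <- m_{i-1}

v = 3
for a in [(0, 0), (1, 0), (0, 1), (1, 1)]:    # all feedback choices a_0, a_1
    s = (0,) * (v - 1) + (1,)                 # state S_{2^{v-1}} = (0 0 1)
    assert step(s, u=1, a=a) == (0, 0, 0)     # u_t = 1 -> all-zero state
print("u_t = 1 returns the encoder to the all-zero state for every a")
```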
16.20 For the primitive encoder D with Gp(D) =[1 D4+D2+D+1