Near Shannon Limit Performance of Low Density Parity Check Codes



David J.C. MacKay and Radford M. Neal,

Electronics Letters, Vol. 32, No. 18, 29th August 1996.


Outline

- Features of LDPC Codes
- History of LDPC Codes
- Some Fundamentals
- A Simple Example
- Properties of LDPC Codes
- How to construct?
- Decoding
  - Concept of Message Passing
  - Sum Product Algorithm
  - Concept of Iterative Decoding
  - Channel Transmission
  - Decoding Algorithm
  - Decoding Example
- Performance
- Cost


Shannon Limit (1/2)

Shannon Limit:

Describes the theoretical maximum information transfer rate of a communication channel (the channel capacity) for a particular level of noise.

Given:
- a noisy channel with channel capacity C;
- information transmitted at a rate R.


Shannon Limit (2/2)

If R < C, there exist codes that allow the probability of error at the receiver to be made arbitrarily small: theoretically, it is possible to transmit information nearly error-free at any rate below the limiting rate C.

If R > C, all codes will have a probability of error greater than a certain positive minimal level, and this level increases as the rate increases: information cannot be guaranteed to be transmitted reliably across a channel at rates beyond the channel capacity.
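As a concrete reference point (standard background, not stated on the slides), the Shannon-Hartley theorem gives this capacity for a band-limited AWGN channel:

```latex
% Shannon-Hartley theorem: capacity of a band-limited AWGN channel
% B: bandwidth in Hz; S/N: signal-to-noise power ratio
C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{[bits/s]}
```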


Features of LDPC Codes

A Low-Density Parity-Check (LDPC) code is an error-correcting code: a method of transmitting a message over a noisy transmission channel.

LDPC codes approach the Shannon capacity:
- for example, 0.3 dB from the Shannon limit (1999);
- an even closer design (Chung et al., 2001) gets to within 0.0045 dB of capacity.

They have linear decoding complexity in time and are suitable for parallel implementation.


History of LDPC Codes

Also known as Gallager codes, after Robert Gallager, who developed the LDPC concept in his doctoral dissertation at MIT in 1960.

They were ignored for a long time because of the high computational complexity they require.

Rediscovered in the 1990s by MacKay and by Richardson/Urbanke.


Some Fundamentals

The structure of a linear block code is described by the generator matrix G or the parity-check matrix H.

H is sparse: very few 1's in each row and column.

Regular LDPC codes: each column of H has a small, fixed weight (e.g., 3), and the weight per row is also uniform; H is constructed at random subject to these constraints.

Irregular LDPC codes: the number of 1's per column or row is not constant. Irregular LDPC codes usually outperform regular ones.


A Simple Example (1/6)

Variable node: a box with an '=' sign. Check node: a box with a '+' sign.

Constraints: all lines connecting to a variable node carry the same value, and all values connecting to a check node must sum, modulo two, to zero (they must sum to an even number).

(Source: www.wikipedia.org)


A Simple Example (2/6)

There are 8 possible 6-bit strings corresponding to valid codewords: 000000, 011001, 110010, 111100, 101011, 100101, 001110, 010111.

This LDPC code fragment represents a 3-bit message encoded as 6 bits. The redundancy is used to aid in recovering from channel errors.


A Simple Example (3/6)

The parity-check matrix representing this graph fragment (the standard Wikipedia example, reconstructed here; it checks out against all eight codewords above) is:

H = [ 1 1 1 1 0 0 ]
    [ 0 0 1 1 0 1 ]
    [ 1 0 0 1 1 0 ]
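A quick way to check the reconstruction is to verify that this H annihilates all eight listed codewords; a minimal sketch, assuming numpy:

```python
import numpy as np

# Parity-check matrix of the 6-bit example (Wikipedia's LDPC example)
H = np.array([[1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]])

codewords = ["000000", "011001", "110010", "111100",
             "101011", "100101", "001110", "010111"]

for w in codewords:
    c = np.array([int(b) for b in w])
    syndrome = H @ c % 2              # all-zero syndrome <=> valid codeword
    print(w, "valid" if not syndrome.any() else "INVALID")
```

All eight strings print "valid"; flipping any single bit would produce a non-zero syndrome.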


A Simple Example (4/6) Consider the valid codeword: 101011

It has been transmitted across a binary erasure channel and received with the 1st and 4th bits erased: ?01?11.

Belief propagation is particularly simple for the binary erasure channel: it consists of iterative constraint satisfaction.


A Simple Example (5/6) Consider the erased codeword: ?01?11

In this case, the first step of belief propagation is to realize that the 4th bit must be 0 to satisfy the middle constraint. Having decoded the 4th bit, we see that the 1st bit must be 1 to satisfy the leftmost constraint.


A Simple Example (6/6) Thus we are able to iteratively decode the message encoded with our LDPC code. We can validate the result by multiplying the corrected codeword r by the parity-check matrix H: because the outcome z = Hr (mod 2), the syndrome, is the 3 x 1 zero vector, we have successfully validated the resulting codeword r.
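A minimal sketch of this iterative constraint satisfaction (the "peeling" decoder for the binary erasure channel); the function and variable names are mine, numpy assumed:

```python
import numpy as np

H = np.array([[1, 1, 1, 1, 0, 0],      # same H as above
              [0, 0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]])

def peel_decode(received):
    """received: list of 0, 1, or None (erased bit). Returns decoded bits."""
    r = list(received)
    progress = True
    while progress and None in r:
        progress = False
        for row in H:
            idx = [n for n in range(len(r)) if row[n]]   # bits in this check
            erased = [n for n in idx if r[n] is None]
            if len(erased) == 1:                         # exactly one unknown:
                known = sum(r[n] for n in idx if r[n] is not None)
                r[erased[0]] = known % 2                 # solve the parity
                progress = True
    return r

decoded = peel_decode([None, 0, 1, None, 1, 1])   # ?01?11
print(decoded)                                    # -> [1, 0, 1, 0, 1, 1]
print(H @ np.array(decoded) % 2)                  # zero syndrome: validated
```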


Properties of LDPC Codes

The structure of a linear block code is completely described by the generator matrix G or the parity-check matrix H:

r = Gm   (r: codeword, G: generator matrix, m: input message)

HG = 0, hence Hr = 0 for every codeword.

A Low-Density Parity-Check code is a code with a very sparse, random parity-check matrix H. Typically, the column weight is around 3 or 4.


How to construct? (1/4)

Construction 1A: an M by N matrix is created at random with:
- weight per column t (e.g., t = 3);
- weight per row as uniform as possible;
- overlap between any two columns no greater than 1, so the bipartite graph has no cycles of length 4 (see the sketch below).
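A rough illustration of Construction 1A by rejection sampling; this is my own sketch of the stated constraints, not the authors' actual procedure (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

def construct_1a(M, N, t=3, max_tries=10000):
    """Random M x N parity-check matrix: column weight t, row weights kept
    near-uniform, any two columns overlapping in at most one row."""
    H = np.zeros((M, N), dtype=int)
    for n in range(N):
        for _ in range(max_tries):
            # bias toward currently light rows to keep row weights uniform
            w = H.sum(axis=1).astype(float)
            p = np.exp(-w) / np.exp(-w).sum()
            rows = rng.choice(M, size=t, replace=False, p=p)
            col = np.zeros(M, dtype=int)
            col[rows] = 1
            # enforce pairwise column overlap <= 1 (no 4-cycles)
            if n == 0 or (H[:, :n].T @ col).max() <= 1:
                H[:, n] = col
                break
        else:
            raise RuntimeError("placement failed; retry with another seed")
    return H

H = construct_1a(M=15, N=20, t=3)
print(H.sum(axis=0))   # column weights: all equal to t
print(H.sum(axis=1))   # row weights: approximately uniform
```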


How to construct? (2/4)

Construction 2A:
- Up to M/2 of the columns are designated weight-2 columns, chosen so that there is zero overlap between any pair of them.
- The remaining columns are made at random with weight 3.
- The weight per row is kept as uniform as possible.
- The overlap between any two columns of the entire matrix is no greater than 1.


How to construct? (3/4)

Constructions 1B and 2B: a small number of columns is deleted from a matrix produced by Construction 1A or 2A, so that the bipartite graph has no short cycles of length less than some length l.


How to construct? (4/4)

The above constructions do not ensure that all rows of the matrix are linearly independent, so the M by N matrix created is the parity-check matrix of a linear code with rate at least R = K/N, where K = N - M.

The generator matrix of the code can be created by Gaussian elimination, as sketched below.
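A sketch of that Gaussian elimination over GF(2); this is my own minimal implementation, which assumes H has full row rank and permutes columns when necessary:

```python
import numpy as np

def generator_from_H(H):
    """Return (G, perm) with H[:, perm] @ G == 0 (mod 2).
    G is N x K (codeword c = G m); assumes rank(H) = M, K = N - M."""
    H = H.copy() % 2
    M, N = H.shape
    K = N - M
    perm = list(range(N))
    for i in range(M):                       # make last M columns the identity
        col = K + i
        pivot = next((r for r in range(i, M) if H[r, col]), None)
        if pivot is None:                    # no pivot here: swap in a column
            swap = next(c for c in range(N) if any(H[r, c] for r in range(i, M)))
            H[:, [col, swap]] = H[:, [swap, col]]
            perm[col], perm[swap] = perm[swap], perm[col]
            pivot = next(r for r in range(i, M) if H[r, col])
        H[[i, pivot]] = H[[pivot, i]]        # move pivot row into place
        for r in range(M):                   # clear the rest of the column
            if r != i and H[r, col]:
                H[r] = (H[r] + H[i]) % 2
    A = H[:, :K]                             # reduced H = [A | I_M]
    return np.vstack([np.eye(K, dtype=int), A]), perm

H = np.array([[1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]])
G, perm = generator_from_H(H)
print((H[:, perm] @ G) % 2)                  # all zeros: HG = 0
```

With H row-reduced to [A | I], a message m maps to the codeword c = [m; Am], so HG = A + A = 0 (mod 2).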


Decoding

The decoding problem is to find, iteratively, the most probable vector x such that:

Hx mod 2 = 0

Gallager's algorithm may be viewed as an approximate Belief Propagation (BP) algorithm, a form of Message Passing (MP).


Concept of Message Passing (1/4)

How do soldiers standing in a line learn how many they are? Each soldier adds 1 to the number heard from the neighbor on one side, then passes the result to the neighbor on the opposite side.


Concept of Message Passing (2/4)

In the beginning, every soldier knows that there exists at least one soldier (himself): this is the Intrinsic Information.

Figure 1. Each node represents a soldier; local rule: +1.


Concept of Message Passing (3/4)

Start from the leftmost and rightmost soldiers: this yields the Extrinsic Information.

Figure 2. Extrinsic Information Flow.


Concept of Message Passing (4/4)

The total number = [left + right] + [oneself], i.e.,

Overall Information = Extrinsic Information + Intrinsic Information.

Figure 3. Overall Information Flow.
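The soldier count as a two-pass message-passing sketch (illustrative code of mine):

```python
def count_soldiers(n):
    """Each soldier adds 1 to the count heard from one neighbor and passes
    it to the other; left- and right-going messages meet at every node."""
    left = [0] * n      # left[i]: message soldier i hears from the left
    right = [0] * n     # right[i]: message soldier i hears from the right
    for i in range(1, n):
        left[i] = left[i - 1] + 1            # extrinsic info flowing right
    for i in range(n - 2, -1, -1):
        right[i] = right[i + 1] + 1          # extrinsic info flowing left
    # overall = extrinsic (left + right) + intrinsic (oneself, +1)
    return [left[i] + right[i] + 1 for i in range(n)]

print(count_soldiers(5))   # -> [5, 5, 5, 5, 5]: everyone agrees on the total
```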


Sum Product Algorithm (1/4)

During decoding, the Sum Product Algorithm is applied to derive:
- Extrinsic Information (Extrinsic Probability)
- Overall Information (Overall Probability)

In this paper, channel transmission is BPSK (Binary Phase-Shift Keying) through AWGN (Additive White Gaussian Noise).


Sum Product Algorithm (2/4)

A simple example: a single check node over three bits m1, m2, m3, with the local rule m1 + m2 + m3 = 0 (mod 2).


Sum Product Algorithm (3/4) For this rule, the valid codewords are 000, 011, 101, 110.

Overall Probability (with m2 = 0):
P1(0)P2(0)P3(0) + P1(1)P2(0)P3(1)

Extrinsic Probability (with m2 = 0):
P1(0)P3(0) + P1(1)P3(1)


Sum Product Algorithm (4/4)

Likelihood Ratio for m2, the ratio of its two extrinsic probabilities:

LR(m2) = [P1(0)P3(0) + P1(1)P3(1)] / [P1(0)P3(1) + P1(1)P3(0)]

(the computation for the other bits is similar)
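The same computation in code, for a single parity check of any degree (my sketch; the closed form uses the identity that the product of the terms (2p - 1) equals P(even) - P(odd) ones among the other bits):

```python
def extrinsic_p0(p_others):
    """Extrinsic probability that a bit is 0 under a parity check
    m1 + m2 + ... = 0 (mod 2), given p = P(m_i = 0) for the OTHER bits."""
    d = 1.0
    for p in p_others:
        d *= 2.0 * p - 1.0       # P(even) - P(odd) over the other bits
    return (1.0 + d) / 2.0

p1, p3 = 0.9, 0.2                # made-up values for P(m1 = 0), P(m3 = 0)
p_ext = extrinsic_p0([p1, p3])   # = p1*p3 + (1 - p1)*(1 - p3) = 0.26
lr = p_ext / (1.0 - p_ext)       # likelihood ratio for m2 = 0 vs m2 = 1
print(p_ext, lr)
```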


Concept of Iterative Decoding (1/6) A simple example: suppose we receive a word whose bitwise likelihood ratios favor "10110" ("10110" is an invalid codeword).


Concept of Iterative Decoding (2/6) A simple example: calculate the extrinsic probabilities at the check nodes.


Concept of Iterative Decoding (3/6) A simple example: we then obtain the Overall Probability after the 1st round ("10010" is a valid codeword).


Concept of Iterative Decoding (4/6) A simple example: 2nd round.


Concept of Iterative Decoding (5/6) A simple example: 2nd round (continued).


Concept of Iterative Decoding (6/6) A simple example: we then obtain the Overall Probability after the 2nd round ("10010" is a valid codeword).


Channel Transmission

BPSK through AWGN: a Gaussian channel with binary input ±a and additive noise of variance σ² = 1. Here t denotes the BPSK-modulated signal; the channel model and the per-bit posterior probability are reconstructed below.
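The slide's formulas are not preserved in the transcript; the following is the standard reconstruction, assuming the usual mapping of bit 0 to +a and bit 1 to -a (the original slide's sign convention may differ):

```latex
% AWGN channel: y_n = t_n + \nu_n, with noise \nu_n \sim \mathcal{N}(0,\sigma^2)
P(y_n \mid t_n) = \frac{1}{\sqrt{2\pi\sigma^2}}
                  \exp\!\left(-\frac{(y_n - t_n)^2}{2\sigma^2}\right)

% Posterior probability of bit x_n (BPSK mapping: x_n = 0 \to +a, 1 \to -a)
f_n^1 = P(x_n = 1 \mid y_n) = \frac{1}{1 + \exp\!\left(2 a y_n / \sigma^2\right)},
\qquad f_n^0 = 1 - f_n^1
```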


Decoding Algorithm (1/6)

We refer to the elements of x as bits and to the rows of H as checks, and denote:
- the set of bits n that participate in check m by N(m);
- the set of checks in which bit n participates by M(n);
- the set N(m) with bit n excluded by N(m)\n.


Decoding Algorithm (2/6)


Decoding Algorithm (3/6)

N(1) = {1, 2, 3, 6, 7, 10}, N(2) = {1, 3, 5, 6, 8, 9}, etc.
M(1) = {1, 2, 5}, M(2) = {1, 4, 5}, etc.
N(1)\1 = {2, 3, 6, 7, 10}, N(2)\3 = {1, 5, 6, 8, 9}, etc.
M(1)\1 = {2, 5}, M(2)\4 = {1, 5}, etc.
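These index sets can be read directly off H; a tiny sketch (numpy assumed, shown with the 6-bit example matrix rather than the larger H of this slide):

```python
import numpy as np

def index_sets(H):
    """N[m]: bits participating in check m; M[n]: checks on bit n (1-based)."""
    Nsets = {m + 1: [n + 1 for n in np.flatnonzero(H[m])] for m in range(H.shape[0])}
    Msets = {n + 1: [m + 1 for m in np.flatnonzero(H[:, n])] for n in range(H.shape[1])}
    return Nsets, Msets

H = np.array([[1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]])
Nsets, Msets = index_sets(H)
print(Nsets[1])                            # N(1) -> [1, 2, 3, 4]
print([n for n in Nsets[1] if n != 1])     # N(1)\1 -> [2, 3, 4]
```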


Decoding Algorithm (4/6) The algorithm has two parts, in which quantities qmn and rmn associated with each non-zero element of the H matrix are iteratively updated:

qmn(x): the probability that bit n of x is x, given the information obtained via checks other than check m.

rmn(x): the probability of check m being satisfied if bit n of x is considered fixed at x and the other bits have a separable distribution given by the probabilities qmn'.

The algorithm would produce the exact posterior probabilities of all the bits if the bipartite graph defined by the matrix H contained no cycles.


Decoding Algorithm (5/6) Initialization:

qmn(0) and qmn(1) are initialized to the channel values fn(0) and fn(1).

Horizontal Step. Define δqmn = qmn(0) - qmn(1). For each m, n, compute:

δrmn = ∏ δqmn'   (product over n' ∈ N(m)\n)

and set rmn(0) = (1 + δrmn)/2, rmn(1) = (1 - δrmn)/2.


Decoding Algorithm (6/6) Vertical Step: for each n and m, and for x = 0, 1, we update:

qmn(x) = αmn fn(x) ∏ rm'n(x)   (product over m' ∈ M(n)\m)

where αmn is chosen such that qmn(0) + qmn(1) = 1.

We can also update the "pseudoposterior probabilities" qn(0) and qn(1), given by:

qn(x) = αn fn(x) ∏ rmn(x)   (product over m ∈ M(n))


Decoding Example (1/10)


Decoding Example (2/10)


Decoding Example (3/10) BPSK through AWGN: we simulate a Gaussian channel with binary input ±a and additive noise of variance σ² = 1, where t is the BPSK-modulated signal and the received value is t plus Gaussian noise.


Decoding Example (4/10) BPSK through AWGN: the posterior probabilities pn(x) are computed from the channel outputs, as in the Channel Transmission section above.


Recall: Decoding Algorithm
Input: the posterior probabilities pn(x).
Initialization: let qmn(x) = pn(x).
1. Horizontal Step:
(a) Form the δq matrix from qmn(0) - qmn(1) (at the sparse non-zero locations).
(b) For each non-zero location (m, n), let δrmn be the product of the δq matrix elements along its row, excluding the (m, n) position.
(c) Let rmn(1) = (1 - δrmn)/2 and rmn(0) = (1 + δrmn)/2.
2. Vertical Step:
For each non-zero location (m, n), let qmn(0) be the product along its column, excluding the (m, n) position, times pn(0). Similarly for qmn(1). Then normalize.
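Putting the recalled steps together, a compact sum-product decoder (my implementation of the steps as stated, assuming numpy; demonstrated on the 6-bit example):

```python
import numpy as np

def sp_decode(H, p1, max_iter=100):
    """Sum-product decoding sketch. H: (M, N) parity-check matrix;
    p1[n] = channel posterior P(x_n = 1). Returns (x_hat, success)."""
    M, N = H.shape
    checks = [np.flatnonzero(H[m]) for m in range(M)]    # N(m)
    bits = [np.flatnonzero(H[:, n]) for n in range(N)]   # M(n)
    q1 = {(m, n): p1[n] for m in range(M) for n in checks[m]}
    for _ in range(max_iter):
        # Horizontal step: delta-rule products over N(m)\n
        r0, r1 = {}, {}
        for m in range(M):
            for n in checks[m]:
                dr = 1.0
                for n2 in checks[m]:
                    if n2 != n:
                        dr *= 1.0 - 2.0 * q1[(m, n2)]    # delta-q of the others
                r0[(m, n)], r1[(m, n)] = (1 + dr) / 2, (1 - dr) / 2
        # Vertical step: q_mn(x) ~ p_n(x) * products over M(n)\m, normalized
        for n in range(N):
            for m in bits[n]:
                a0, a1 = 1.0 - p1[n], p1[n]
                for m2 in bits[n]:
                    if m2 != m:
                        a0, a1 = a0 * r0[(m2, n)], a1 * r1[(m2, n)]
                q1[(m, n)] = a1 / (a0 + a1)
        # Pseudoposteriors and tentative hard decision
        x = np.zeros(N, dtype=int)
        for n in range(N):
            a0, a1 = 1.0 - p1[n], p1[n]
            for m in bits[n]:
                a0, a1 = a0 * r0[(m, n)], a1 * r1[(m, n)]
            x[n] = int(a1 > a0)
        if not (H @ x % 2).any():        # zero syndrome: valid codeword
            return x, True
    return x, False                      # failure after max_iter iterations

H = np.array([[1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0, 1],
              [1, 0, 0, 1, 1, 0]])
p1 = np.array([0.6, 0.2, 0.9, 0.2, 0.8, 0.7])   # made-up soft channel inputs
print(sp_decode(H, p1))    # decodes to [1 0 1 0 1 1] with success flag True
```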


Decoding Example (5/10) Initialization: Let qmn(x) = pn(x).


Decoding Example (6/10) Iteration 1: Horizontal Step, parts (a) and (b).


Decoding Example (7/10) Iteration 1: Horizontal Step, part (c).


Decoding Example (8/10) Iteration 1: Vertical Step, parts (a) and (b).


Decoding Example (9/10) Iteration 1: after the Vertical Step, Hc mod 2 ≠ 0, so decoding has not yet succeeded; the pseudoposterior probabilities qn(0) and qn(1) are updated as given above.


Decoding Example (10/10) After two more iterations, Hc mod 2 = 0: successfully decoded. A failure would be declared if some maximum number of iterations (e.g., 100) passed without a valid decoding.


Performance (1/4) Compares the performance of LDPC codes with textbook codes and with state-of-the-art codes.

Textbook codes:
- The curve (7, 1/2) shows the performance of a rate-1/2 convolutional code with constraint length 7, the de facto standard for satellite communications.
- The curve (7, 1/2)C shows the performance of the concatenated code composed of the same convolutional code and a Reed-Solomon code.


Performance (2/4) State of the art:
- The curve (15, 1/4)C shows the performance of a concatenated code developed at JPL, based on a constraint-length-15, rate-1/4 convolutional code; it is extremely expensive and compute-intensive.
- The curve Turbo shows the performance of the rate-1/2 Turbo code.


Performance (3/4) LDPC codes: from left to right, the codes had the following parameters (N, K, R):
- (29507, 9507, 0.322) (Construction 2B)
- (15000, 5000, 0.333) (Construction 2A)
- (14971, 4971, 0.332) (Construction 2B)
- (65389, 32621, 0.499) (Construction 1B)
- (19839, 9839, 0.496) (Construction 1B)
- (13298, 3296, 0.248) (Construction 1B)
- (29331, 19331, 0.659) (Construction 1B)


Performance (4/4)

Figure: LDPC codes' performance over the Gaussian channel, compared with that of standard textbook codes and state-of-the-art codes.


Cost (1/2) In a brute-force approach, the time to create the code scales as N³, where N is the block size.

Encoding time scales as N², but encoding involves only binary arithmetic, so for the block lengths studied here it takes considerably less time than the simulation of the Gaussian channel.

It may be possible to reduce encoding time using sparse matrix techniques.


Cost (2/2) Decoding involves approximately 6Nt floating-point multiplies per iteration, so the total number of operations per decoded bit (assuming 20 iterations) is about 120t/R, independent of block length:

(6Nt × 20) / K = 120t × (N/K) = 120t / R

For the codes presented here, this is about 800 operations (illustratively, t = 3 at rate R = 0.45 gives 120 × 3 / 0.45 = 800).


Thank you

Sincere thanks for your attention.


How It Works


Probabilities Used


Vertical Step I: Single Tier Approximate Subset


Vertical Step II: Multi Tier Approximate Subset


Horizontal Step
