Page 1: Presentation

Presented by: K Swaraj Gowtham, G Srinivasa Rao, B Gopi Krishna

TURBO CODES

Page 2: Presentation

Turbo Code Concepts

Log Likelihood Algebra

Interleaving & Concatenated Codes

Encoding With RSC

Turbo Codes

Page 3: Presentation

Objectives:
Studying channel coding
Understanding channel capacity
Ways to increase the data rate
Providing a reliable communication link

Introduction

Page 4: Presentation

Communication System

A communication system takes a structured, modular approach built from various components: formatting/digitization, source coding, channel coding, modulation, and multiplexing/multiple-access techniques, on both the transmit (send) and receive sides.

(Block diagram of the transmit and receive chain.)

Page 5: Presentation

CHANNEL CODING

Channel coding can be categorized into two classes:

Waveform (signal design): M-ary signaling, antipodal signaling, orthogonal signaling, and trellis-coded modulation. These provide better-detectable signals.

Structured sequences: block, convolutional, and turbo codes. These add redundancy.

Page 6: Presentation

Structured Redundancy

The channel encoder maps each k-bit input word to an n-bit output word (the codeword), producing the transmitted code sequence.

Redundancy = (n - k)

Code rate = k/n

Page 7: Presentation

A turbo code is a refinement of the concatenated encoding structure plus an iterative algorithm for decoding the associated code sequence.

A concatenated coding scheme is a method for achieving large coding gains by combining two or more relatively simple building blocks, or component codes.

TURBO CODES

Page 8: Presentation

Likelihood Functions: The mathematical foundations of hypothesis testing rest on Bayes' theorem. The a posteriori probability (APP) of a decision, in terms of a continuous-valued random variable x, is

P(d = i | x) = p(x | d = i) P(d = i) / p(x),   i = 1, ..., M        (1)

where p(x | d = i) is the likelihood of x conditioned on the data d = i, P(d = i) is the a priori probability, and p(x) is the sum over i of p(x | d = i) P(d = i).

TURBO CODE CONCEPTS

Page 9: Presentation

Before the experiment, there generally exists an a priori probability P(d = i). The experiment consists of using Equation (1) for computing the APP, P(d = i|x), which can be thought of as a “refinement” of the prior knowledge about the data, brought about by examining the received signal x.
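As a concrete illustration (not part of the original slides), the following Python sketch refines a priori probabilities into APPs using Equation (1). The BPSK levels +1/-1, the unit-variance Gaussian likelihoods, and the sample value x = 0.3 are illustrative assumptions.

# Sketch: refining a priori probabilities into APPs via Bayes' theorem.
# Assumptions (not from the slides): BPSK levels d = +1/-1, AWGN with
# standard deviation sigma, and a single received sample x.
import math

def gaussian_pdf(x, mean, sigma):
    """Likelihood p(x | d = mean) for an AWGN channel."""
    return math.exp(-(x - mean) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def app(x, priors, sigma=1.0):
    """Return the a posteriori probabilities P(d = +1 | x) and P(d = -1 | x)."""
    lik = {+1: gaussian_pdf(x, +1.0, sigma), -1: gaussian_pdf(x, -1.0, sigma)}
    p_x = sum(lik[d] * priors[d] for d in (+1, -1))          # total probability p(x)
    return {d: lik[d] * priors[d] / p_x for d in (+1, -1)}   # Equation (1)

priors = {+1: 0.5, -1: 0.5}             # a priori knowledge before the experiment
posterior = app(x=0.3, priors=priors)   # received sample x = 0.3 (illustrative)
print(posterior)                        # the "refined" knowledge about d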

Page 10: Presentation

The Two-Signal Class Case:

Decide hypothesis H1 (d = +1) if P(d = +1 | x) > P(d = -1 | x); otherwise decide hypothesis H2 (d = -1).

Page 11: Presentation

Binary logical elements 1 and 0 are represented electronically by the voltages +1 and -1, and 'd' denotes this voltage value.

The rightmost function, p(x|d = +1), shows the pdf of the random variable x conditioned on d = +1 being transmitted. The leftmost function, p(x|d = -1), illustrates a similar pdf conditioned on d = -1 being transmitted.

A line subtended from xk, an arbitrary value taken from the full range of x, intercepts the two likelihood functions, yielding the two likelihood values ℓ1 = p(xk|dk = +1) and ℓ2 = p(xk|dk = -1).
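For instance, assuming unit-variance Gaussian likelihood functions centered at +1 and -1 (an illustrative assumption, not taken from the slides), a received sample xk = 0.5 gives ℓ1 = p(xk | dk = +1) ≈ 0.352 and ℓ2 = p(xk | dk = -1) ≈ 0.130, so this sample favors dk = +1.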

Page 12: Presentation

The general expression for the MAP rule in terms of APPs is: decide hypothesis H1 (d = +1) if P(d = +1 | x) > P(d = -1 | x), and decide hypothesis H2 (d = -1) otherwise. Substituting Equation (1) for each APP gives the equivalent form p(x | d = +1) P(d = +1) > p(x | d = -1) P(d = -1) for H1.

Page 13: Presentation

The previous equation can be expressed as a ratio, yielding the so-called likelihood ratio test: decide H1 if

p(x | d = +1) P(d = +1) / [ p(x | d = -1) P(d = -1) ] > 1

or, equivalently, if p(x | d = +1) / p(x | d = -1) > P(d = -1) / P(d = +1); otherwise decide H2.

Page 14: Presentation

Log-Likelihood Ratio

Taking the logarithm of the ratio of the APPs in the MAP rule yields the log-likelihood ratio (LLR):

L(d | x) = ln [ P(d = +1 | x) / P(d = -1 | x) ] = ln [ p(x | d = +1) / p(x | d = -1) ] + ln [ P(d = +1) / P(d = -1) ] = L(x | d) + L(d)

To simplify the notation, this is rewritten as

L(d̂) = Lc(x) + L(d)

where Lc(x) denotes the LLR of the channel measurement of x.

Page 15: Presentation

At the output of the decoder, the LLR becomes

L(d̂) = Lc(x) + L(d) + Le(d̂)

where Le(d̂) is the extrinsic LLR contributed by the decoder.

This equation shows that the output LLR of a systematic decoder consists of a channel measurement, a priori knowledge of the data, and an extrinsic LLR stemming solely from the decoder. This soft decoder output L(d̂) is a real number that provides both a hard decision and the reliability of that decision. The sign of L(d̂) denotes the hard decision: for positive values of L(d̂) decide d = +1, and for negative values decide d = -1. The magnitude of L(d̂) denotes the reliability of that decision.
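A minimal sketch of this decomposition in Python (the helper names are my own, and the assumption Lc(x) = 2x/sigma^2 for BPSK over AWGN, as well as the numeric values, are illustrative rather than taken from the slides):

# Sketch: soft decoder output L(d_hat) = Lc(x) + L(d) + Le(d_hat), and the
# hard decision / reliability it carries.
def channel_llr(x, sigma=1.0):
    """Assumed channel-measurement LLR for a BPSK symbol in AWGN: 2x / sigma^2."""
    return 2.0 * x / sigma ** 2

def soft_output(x, a_priori_llr, extrinsic_llr, sigma=1.0):
    """Combine channel, a priori, and extrinsic LLRs into the soft output."""
    return channel_llr(x, sigma) + a_priori_llr + extrinsic_llr

L = soft_output(x=0.3, a_priori_llr=0.0, extrinsic_llr=1.2)
hard_decision = +1 if L >= 0 else -1    # sign of L(d_hat): the hard decision
reliability = abs(L)                    # magnitude of L(d_hat): the reliability
print(L, hard_decision, reliability)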

Page 16: Presentation

Log Likelihood Algebra

For statistically independent data d, the sum of two log-likelihood ratios is defined as

L(d1) [+] L(d2) = L(d1 ⊕ d2) = ln [ (e^L(d1) + e^L(d2)) / (1 + e^(L(d1) + L(d2))) ] ≈ (-1) × sgn[L(d1)] × sgn[L(d2)] × min( |L(d1)|, |L(d2)| )

where ⊕ denotes modulo-2 addition of the underlying logic values and [+] denotes log-likelihood addition.
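A small sketch of this log-likelihood addition and its sign/min approximation (the convention assumed here, matching the earlier slides, is logic 1 <-> +1 volt, logic 0 <-> -1 volt, L(d) = ln[P(d = +1)/P(d = -1)]; function names are illustrative):

import math

def llr_add(l1, l2):
    """Exact L(d1) [+] L(d2) = L(d1 xor d2) for independent d1, d2."""
    return math.log((math.exp(l1) + math.exp(l2)) / (1.0 + math.exp(l1 + l2)))

def llr_add_approx(l1, l2):
    """Approximation: (-1) * sgn(L1) * sgn(L2) * min(|L1|, |L2|)."""
    sgn = lambda v: 1.0 if v >= 0 else -1.0
    return -sgn(l1) * sgn(l2) * min(abs(l1), abs(l2))

# The result is dominated by the less reliable (smaller-magnitude) term.
print(llr_add(2.5, -1.0))         # ~ +0.83
print(llr_add_approx(2.5, -1.0))  # +1.0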

Page 17: Presentation
Page 18: Presentation
Page 19: Presentation

INTERLEAVING

Interleaving is a concept that helps greatly in the case of channels with memory.

A channel with memory exhibits mutually dependent transmission impairments.

A channel with multipath fading is an example of a channel with memory.

Page 20: Presentation
Page 21: Presentation

Errors caused by disturbances in these types of channels are called burst errors.

Interleaving only requires knowledge of the span of the channel memory.

Interleaving at the transmitter side and de-interleaving at the receiver side allow the burst errors to be corrected.

Page 22: Presentation

The interleaver shuffles the code symbols over a span of several block lengths or constraint lengths.

It makes the memory channel look like a memoryless one to the decoder.

Two types of interleavers:
->Block interleavers
->Convolutional interleavers

Page 23: Presentation

Block Interleaving

A block interleaver accepts the coded symbols in blocks from the encoder, permutes the symbols, and then feeds the rearranged ones to the modulator.

The minimum end-to-end delay is (2MN - 2M + 2) symbol times, where the encoded sequence is written into an M × N array.

Page 24: Presentation

It needs a memory of 2MN symbols.

The choice of M is dependent on the coding scheme used.

The choice of N for t-error-correcting codes must overbound the expected burst length divided by t.
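A minimal sketch of the block interleaving described above (write an M × N array by rows, read it out by columns), with illustrative values M = 3, N = 4; the function names are my own:

def block_interleave(symbols, M, N):
    """Write M rows of N symbols row-by-row, read the array out column-by-column."""
    assert len(symbols) == M * N
    rows = [symbols[r * N:(r + 1) * N] for r in range(M)]
    return [rows[r][c] for c in range(N) for r in range(M)]

def block_deinterleave(symbols, M, N):
    """Inverse permutation: write by columns, read by rows."""
    assert len(symbols) == M * N
    cols = [symbols[c * M:(c + 1) * M] for c in range(N)]
    return [cols[c][r] for r in range(M) for c in range(N)]

data = list(range(12))                       # 12 code symbols, M = 3, N = 4
tx = block_interleave(data, 3, 4)
assert block_deinterleave(tx, 3, 4) == data  # round trip restores the order
print(tx)   # a burst hitting adjacent channel symbols is spread N positions apart after de-interleaving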

Page 25: Presentation
Page 26: Presentation

Convolutional Interleaving

In this type, the code symbols are sequentially shifted into the bank of N registers; each successive register contains J symbols more storage than the preceding one.

In this case, the end-to-end delay is M(N - 1) symbol times and the memory required is M(N - 1)/2 symbols, where M = NJ.
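A minimal sketch of such a convolutional interleaver (N register branches, branch i holding i*J storage stages, with a commutator cycling across the branches); the class name, the parameter values N = 4 and J = 1, and the zero fill for the initially empty registers are illustrative assumptions:

from collections import deque

class ConvolutionalInterleaver:
    """N branches; branch i contains i*J storage stages (J more than the preceding branch)."""
    def __init__(self, N, J, fill=0):
        self.branches = [deque([fill] * (i * J)) for i in range(N)]
        self.index = 0                      # which branch the commutator points at

    def push(self, symbol):
        branch = self.branches[self.index]
        branch.append(symbol)
        out = branch.popleft()              # branch 0 passes its symbol straight through
        self.index = (self.index + 1) % len(self.branches)
        return out

interleaver = ConvolutionalInterleaver(N=4, J=1)
tx = [interleaver.push(s) for s in range(16)]
print(tx)   # [0, 0, 0, 0, 4, 1, 0, 0, 8, 5, 2, 0, 12, 9, 6, 3]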

Page 27: Presentation
Page 28: Presentation
Page 29: Presentation

Concatenated Codes

A concatenated code uses two levels of coding: an inner code and an outer code (of higher rate).

Popular concatenated codes:

Convolutional codes with Viterbi decoding as the inner code and Reed-Solomon codes as the outer code.

Page 30: Presentation


Page 31: Presentation

The purpose is to reduce the overall complexity while still achieving the required error performance.

However, the concatenated system performance is severely degraded by correlated errors among successive symbols.

Page 32: Presentation

Encoding with Recursive Systematic Convolutional (RSC) Codes

Turbo codes are generated by parallel concatenation of component convolutional codes

Consider a convolutional encoder with data rate 1/2, constraint length K, and input dk to the encoder at time k. The corresponding codeword (uk, vk) is

uk = Σ (i = 0 to K-1) g1i d(k-i)   mod 2
vk = Σ (i = 0 to K-1) g2i d(k-i)   mod 2

Page 33: Presentation

G1 = { g1i } and G2 = { g2i } are the code generators, and dk is represented as a binary digit

This encoder can be visualized as a discrete-time finite impulse response (FIR) linear system, giving rise to the familiar nonsystematic convolutional (NSC) code.

Page 34: Presentation

An example of an NSC code: G1 = {111}, G2 = {101}, K = 3, rate = 1/2.
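A minimal sketch of this NSC encoder (a generic shift-register implementation of the equations above; the function name and the example input are illustrative):

def nsc_encode(data_bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 NSC encoder, K = 3: u_k = sum g1_i d(k-i) mod 2, v_k = sum g2_i d(k-i) mod 2."""
    K = len(g1)
    state = [0] * (K - 1)            # the K-1 most recent past input bits
    codeword = []
    for d in data_bits:
        window = [d] + state         # d_k, d_(k-1), ..., d_(k-K+1)
        u = sum(g * b for g, b in zip(g1, window)) % 2
        v = sum(g * b for g, b in zip(g2, window)) % 2
        codeword.append((u, v))
        state = [d] + state[:-1]     # shift the register
    return codeword

print(nsc_encode([1, 1, 1, 0]))      # [(1, 1), (0, 1), (1, 0), (0, 1)]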

Page 35: Presentation

At large Eb/N0 values, the error performance of an NSC is better than that of a systematic code

Infinite impulse response (IIR) convolutional codes [3], also referred to as recursive systematic convolutional (RSC) codes, have been proposed as building blocks for a turbo code.

For high code rates, RSC codes result in better error performance than the best NSC codes at any value of Eb/N0.

Page 36: Presentation

An example is an RSC code with K = 3, where ak is recursively calculated as

ak = dk + Σ (i = 1 to K-1) g′i a(k-i)   mod 2

where g′i is equal to g1i if uk = dk, and to g2i if vk = dk.

Page 37: Presentation

An example of a recursive encoder and its trellis diagram is shown in Figures 6(a) and 6(b).

Page 38: Presentation

Trellis diagram

Page 39: Presentation

Example: Recursive Encoders and Their Trellis Diagrams

a) Using the RSC encoder in Figure 6(a), verify the section of the trellis structure (diagram) shown in Figure 6(b).

b) For the encoder in part a), start with the input data sequence {dk} = 1 1 1 0, and show the step-by-step encoder procedure for finding the output codeword.
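A sketch of the step-by-step encoding for part b). Since Figure 6(a) is not reproduced in this transcript, the structure below is an assumption inferred from the generators G1 = {111} and G2 = {101}: the feedback taps come from G1 (ak = dk ⊕ a(k-1) ⊕ a(k-2)), the systematic output is uk = dk, and the parity taps come from G2 (vk = ak ⊕ a(k-2)).

def rsc_encode(data_bits):
    """Assumed RSC encoder: a_k = d xor a1 xor a2; u_k = d; v_k = a_k xor a2."""
    a1 = a2 = 0                        # register contents a_(k-1), a_(k-2)
    codeword = []
    for d in data_bits:
        a = d ^ a1 ^ a2                # recursive (feedback) bit
        u = d                          # systematic output
        v = a ^ a2                     # parity output
        codeword.append((u, v))
        a1, a2 = a, a1                 # shift the register
    return codeword

print(rsc_encode([1, 1, 1, 0]))        # [(1, 1), (1, 0), (1, 1), (0, 0)]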

Page 40: Presentation

Validation of trellis diagram

Page 41: Presentation

Encoding a bit sequence with RSC encoder

Page 42: Presentation

Concatenation of RSC Codes

Good turbo codes have been constructed from component codes having short constraint lengths (K = 3 to 5).

There is no limit to the number of encoders that may be concatenated.

We should avoid pairing low-weight codewords from one encoder with low-weight codewords from the other encoder. Many such pairings can be avoided by proper design of the interleaver.
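A minimal sketch of this parallel concatenation: two identical RSC encoders share the systematic bits, with the second encoder fed an interleaved copy of the data, giving a rate-1/3 output stream. The RSC tap structure and the tiny fixed permutation below are illustrative assumptions; practical turbo codes use carefully designed, much longer interleavers.

def rsc_parity(data_bits):
    """Parity stream of the assumed RSC encoder (a_k = d xor a1 xor a2, v = a_k xor a2)."""
    a1 = a2 = 0
    parity = []
    for d in data_bits:
        a = d ^ a1 ^ a2
        parity.append(a ^ a2)
        a1, a2 = a, a1
    return parity

def turbo_encode(data_bits, permutation):
    interleaved = [data_bits[i] for i in permutation]   # interleaver
    p1 = rsc_parity(data_bits)                          # parity from encoder 1
    p2 = rsc_parity(interleaved)                        # parity from encoder 2
    return list(zip(data_bits, p1, p2))                 # (systematic, parity1, parity2) per bit

data = [1, 1, 1, 0]
perm = [2, 0, 3, 1]                                     # toy interleaver (illustrative)
print(turbo_encode(data, perm))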

Page 43: Presentation

Figure: Parallel concatenation of RSC codes.

Page 44: Presentation

If the component encoders are not recursive, the unit weight input sequence 0 0 … 0 0 1 0 0 … 0 0 will always generate a low-weight codeword at the input of a second encoder for any interleaver design.

If the component codes are recursive, a weight-1 input sequence generates an infinite impulse response (an infinite-weight output).
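A short sketch contrasting the two cases: a single 1 followed by zeros drives a non-recursive (FIR) parity generator for only a constraint length, while the recursive (IIR) parity keeps producing nonzero symbols indefinitely, so the codeword weight keeps growing. The tap choices are the same illustrative assumptions used in the earlier sketches.

def fir_parity(data_bits):             # non-recursive parity, taps {1,0,1}: v_k = d_k xor d_(k-2)
    d1 = d2 = 0
    out = []
    for d in data_bits:
        out.append(d ^ d2)
        d1, d2 = d, d1
    return out

def rsc_parity(data_bits):             # recursive parity: a_k = d xor a1 xor a2, v = a_k xor a2
    a1 = a2 = 0
    out = []
    for d in data_bits:
        a = d ^ a1 ^ a2
        out.append(a ^ a2)
        a1, a2 = a, a1
    return out

impulse = [1] + [0] * 15               # weight-1 input sequence
print(sum(fir_parity(impulse)))        # small, fixed output weight (here 2)
print(sum(rsc_parity(impulse)))        # output weight keeps growing with the sequence length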

Page 45: Presentation

For the case of recursive codes, the weight-1 input sequence does not yield the minimum-weight codeword out of the encoder.

Turbo code performance is largely influenced by minimum-weight codewords

Page 46: Presentation