8/3/2019 Project Part II_16914285
Project Part II Sumanth Sridhar, 16914285
TABLE OF CONTENTS
Contents Page No
BPSK Modulation 3
Assignment Cover Sheet
School of Engineering

Student Name: Sumanth Sridhar
Student Number: 16914285
Unit Name and Number: 300513 Engineering Software Applications
Lecturer: Dr Upul Gunawardana
Title of Assignment: Simulation Model of a Coded Baseband BPSK Communication System in AWGN
Length: 21 Pages
Due Date: June 6 2011
Date Submitted: June 6 2011
Campus Enrolment: Kingswood
DECLARATION
I hold a copy of this assignment that I can produce if the original is lost or damaged. I hereby certify that no part of this assignment/product has been copied from any other student's work or from any other source except where due acknowledgement is made in the assignment. No part of this assignment/product has been written/produced for me by another person except where such collaboration has been authorised by the subject lecturer/tutor concerned.

Signature:

Note: An examiner or lecturer/tutor has the right not to mark this assignment if the above declaration has not been signed.
QPSK Modulation 4
Theoretical BER BPSK 6
Theoretical BER QPSK 7
Baseband System 8
Scatter Plot 9
Pulse Shaping 9
Eye Diagram 9
Description of the Communication System 11
Convolutional Encoder and Viterbi Decoder 12
Hard Decision & Soft Decision Decoding 13
Performance Gain in Hard & Soft Decision Decoding 15
Description of the Simulation Model 15
Simulation Results 21
SIMULATION MODEL OF A CODED BASEBAND BPSK COMMUNICATION SYSTEM IN AWGN

BPSK
Binary Phase Shift Keying (BPSK) is the simplest form of Phase Shift Keying. It uses two opposite phases (0° and 180°). The digital signal is broken up into individual bits (binary digits). The state of each bit is determined according to the state of the preceding bit. If the phase of the wave does not change, then the signal state stays the same (0 or 1). The signal state changes (from 0 to 1 or from 1 to 0) if the phase of the wave changes by 180° (phase reverses). Because there are two possible wave phases, BPSK is sometimes called biphase modulation.
In Binary Phase Shift Keying (BPSK) only one sinusoid is taken as the basis function. Modulation is achieved by varying the phase of the basis function depending on the message bits. A BPSK modulator can be implemented by NRZ coding the message bits (1 represented by a positive voltage and 0 represented by a negative voltage) and multiplying the output by a reference oscillator running at the carrier frequency. The following figure shows a BPSK transmitter and receiver.
Truth Table for BPSK

Binary Input    Output Phase
Logic 0         180°
Logic 1         0°

Implementation of BPSK:

The general form for BPSK is

s_n(t) = √(2E_b/T_b) · cos(2πf_c t + π(1 − n)),   n = 0, 1
This yields two phases, 0 and π. In the specific form, binary data is often conveyed with the following signals:

s_0(t) = −√(2E_b/T_b) · cos(2πf_c t)   for binary "0"
s_1(t) = +√(2E_b/T_b) · cos(2πf_c t)   for binary "1"

where f_c is the frequency of the carrier wave.

Hence, the signal space can be represented by the single basis function

φ(t) = √(2/T_b) · cos(2πf_c t)

where 1 is represented by +√(E_b)·φ(t) and 0 is represented by −√(E_b)·φ(t).
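As an illustrative sketch (in Python, not part of the report's MATLAB model), the mapping above sends each bit to an antipodal amplitude along the single basis function; `bpsk_map` is a hypothetical helper name:

```python
import math

def bpsk_map(bits, Eb=1.0):
    """Map bits onto the single BPSK basis function:
    1 -> +sqrt(Eb)*phi(t), 0 -> -sqrt(Eb)*phi(t)."""
    amp = math.sqrt(Eb)
    return [amp if b == 1 else -amp for b in bits]

print(bpsk_map([1, 0, 1, 1]))  # [1.0, -1.0, 1.0, 1.0]
```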
Pros & Cons:
BPSK modulation is the most robust of all the PSK modulation techniques, since it takes the highest level of noise or distortion to make the demodulator reach an incorrect decision. However, it is only able to modulate at 1 bit/symbol, so it is unsuitable for high data-rate applications when bandwidth is limited. With an arbitrary phase shift introduced by the communications channel, the demodulator is unable to identify the constellation points. As a result, the data is often differentially encoded before modulation.
QPSK
Quadrature Phase Shift Keying (QPSK) or Quadrature PSK is another form of angle-modulated, constant-amplitude digital modulation. QPSK uses four points on the constellation diagram, equispaced around a circle. With QPSK, four output phases are possible for a single carrier frequency. Because there are four different output phases, there must be four different input conditions. Because the digital input to a QPSK modulator is a binary (base 2) signal, to produce four different input conditions it
takes more than a single bit. With 2 bits, there are four possible conditions: 00, 01, 10, 11.
Therefore, with QPSK, the binary input data are combined into groups of 2 bits called dibits. Each dibit code generates one of the four possible output phases. Therefore, for each 2-bit dibit clocked into the modulator, a single output change occurs. Hence, the rate of change at the output (baud rate) is one-half of the input bit rate.
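The dibit grouping can be sketched in a few lines of Python (an illustration only; `to_dibits` is a hypothetical helper, not part of the simulation code):

```python
def to_dibits(bits):
    """Group a bit stream into 2-bit dibits; the output symbol (baud)
    rate is half the input bit rate."""
    if len(bits) % 2 != 0:
        raise ValueError("bit stream length must be even")
    return [(bits[i], bits[i + 1]) for i in range(0, len(bits), 2)]

print(to_dibits([0, 0, 0, 1, 1, 0, 1, 1]))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```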
QPSK can be used either to double the data rate compared with a BPSK system while maintaining the same signal bandwidth, or to maintain the data rate of BPSK while halving the bandwidth needed. The advantage of QPSK over BPSK is that QPSK transmits twice the data rate in a given bandwidth compared to BPSK, at the same BER. However, QPSK transmitters and receivers are more complicated than those for BPSK. As with BPSK, there are phase ambiguity problems at the receiving end, and differentially encoded QPSK is often used in practice.
Implementation of QPSK
The implementation of QPSK is more general than that of BPSK and also indicates the implementation of higher-order PSK. Writing the symbols in terms of the carrier:

s_n(t) = √(2E_s/T_s) · cos(2πf_c t + (2n − 1)π/4),   n = 1, 2, 3, 4

This yields the four phases π/4, 3π/4, 5π/4 and 7π/4. This results in a two-dimensional signal space with unit basis functions

φ_1(t) = √(2/T_s) · cos(2πf_c t)

and

φ_2(t) = √(2/T_s) · sin(2πf_c t)

The first basis function is used as the in-phase component of the signal and the second as the quadrature component of the signal. Hence, the signal constellation consists of the four signal-space points

(±√(E_s/2), ±√(E_s/2))

The factors of 1/2 indicate that the total power is split equally between the two carriers. By comparing the basis functions with those of BPSK, it can be inferred that QPSK can be considered as two independent BPSK signals.
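This view of QPSK as two independent BPSK streams can be sketched in Python (an illustration assuming the constellation above; `qpsk_map` is a hypothetical helper):

```python
import math

def qpsk_map(bits, Es=1.0):
    """Even-indexed bits modulate the in-phase component, odd-indexed
    bits the quadrature component; each carrier gets Es/2 (the 1/2
    factor from the power split between the two carriers)."""
    amp = math.sqrt(Es / 2)
    symbols = []
    for i in range(0, len(bits), 2):
        inphase = amp if bits[i] == 1 else -amp
        quadrature = amp if bits[i + 1] == 1 else -amp
        symbols.append(complex(inphase, quadrature))
    return symbols

print(qpsk_map([1, 1, 0, 0], Es=2.0))  # [(1+1j), (-1-1j)]
```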
A schematic diagram of the QPSK transmitter and receiver:
Theoretical BER for BPSK

For BPSK modulation the channel can be modelled as

y = a·x + n

where y is the received signal at the input of the BPSK receiver, x is the modulated signal transmitted through the channel, a is a channel amplitude scaling factor for the transmitted signal (usually 1), and n is the Additive White Gaussian Noise random variable with zero mean and variance σ².

For AWGN the noise variance in terms of the noise power spectral density (N₀) is given by

σ² = N₀/2

The symbol energy is given by

E_s = R_m · R_c · E_b
where E_s is the symbol energy per modulated bit, R_m = log₂(M) (for BPSK M = 2, for QPSK M = 4), and R_c is the code rate of the system if a coding scheme is used. In our case, since no coding scheme is used, R_c = 1. E_b is the energy per information bit.

Assuming E_s = 1 for BPSK (symbol energy normalised to 1), then E_b/N₀ can be represented as (using the above equations)

E_b/N₀ = E_s / (R_m · R_c · N₀) = 1 / (R_m · R_c · N₀)

From the above equation, the noise variance for a given E_b/N₀ can be calculated as

σ² = N₀/2 = 1 / (2 · R_m · R_c · (E_b/N₀))
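The relationship above can be checked numerically; the Python sketch below (illustrative, not the report's MATLAB code) computes the noise standard deviation for a given Eb/N0 in dB:

```python
import math

def noise_sigma(EbN0_dB, Rm=1, Rc=1.0):
    """sigma = sqrt(N0/2) with Es normalised to 1, where
    Eb/N0 = 1/(Rm*Rc*N0), so N0 = 1/(Rm*Rc*(Eb/N0))."""
    EbN0_lin = 10 ** (EbN0_dB / 10)
    N0 = 1.0 / (Rm * Rc * EbN0_lin)
    return math.sqrt(N0 / 2)

print(noise_sigma(0))  # Eb/N0 = 1 (0 dB) gives sigma = sqrt(0.5), about 0.7071
```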
Theoretical & simulated BER Curve-BPSK:
QPSK
Although QPSK can be viewed as a quaternary modulation, it is easier to consider it as two independently modulated quadrature carriers. Hence, the even (or odd) bits are used to modulate the in-phase component of the carrier, while the odd (or even) bits are used to modulate the quadrature-phase component of the carrier. Therefore BPSK is used on both carriers and they can be independently demodulated.

As a result, the probability of bit error for QPSK is the same as for BPSK:

P_b = Q(√(2E_b/N₀)) = ½ · erfc(√(E_b/N₀))
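As a quick numerical sketch (Python, illustrative), the standard closed form Pb = 0.5·erfc(√(Eb/N0)) for BPSK and QPSK in AWGN can be evaluated as:

```python
import math

def theoretical_ber(EbN0_dB):
    """Pb = 0.5*erfc(sqrt(Eb/N0)), with Eb/N0 given in dB."""
    EbN0_lin = 10 ** (EbN0_dB / 10)
    return 0.5 * math.erfc(math.sqrt(EbN0_lin))

# BER falls steeply as Eb/N0 grows
for snr_db in (0, 4, 8):
    print(snr_db, theoretical_ber(snr_db))
```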
Theoretical & Simulated BER Curve-QPSK:
Baseband System
A baseband system is a lowpass communication channel that can transfer
frequencies that are very near zero. Examples are serial cables and local
area networks, as opposed to passband channels such as radio frequency
channels and passband filtered wires of the analogue telephone network.
A Baseband signal is represented as
SIMULATION MODEL OF A CODED BASEBAND BPSK
COMMUNICATION SYSTEM IN
AWGN
BLOCK DIAGRAM OF THE COMMUNICATION SYSTEM
A BRIEF DESCRIPTION OF THE OVERALL COMMUNICATION SYSTEM
In the designed communication system, the data is sought from the user and encoded using the convolutional encoder. Here, the code rate k/n is set to 1/2 and the constraint length used is 5. The constraint length represents the number of bits present in the encoder memory. Redundant bits are added to the original input here. The encoded data is modulated in the BPSK modulator and transmitted through the communication channel, where additive white Gaussian noise (AWGN) is added to the transmitted data. At the receiver end, the received data is first demodulated with a BPSK demodulator and fed to the Viterbi decoder to remove the redundant bits added during encoding. There are three decoding algorithms used here. After decoding, the original data input is obtained. The bit error rate for unquantised input, hard decision decoding and soft
decision decoding is plotted against Eb/N0 to obtain the BER curve for
each decoding technique.
BPSK & QPSK Modulation
The explanation for these topics is given in pages 5 to 9 of this report.
CONVOLUTIONAL ENCODER AND VITERBI DECODER
The idea of channel coding is to improve the capacity of a channel by
adding some carefully designed redundant information to the data being
transmitted through the channel. Convolutional codes operate on serial
data, one or a few bits at a time. Convolutional codes accept a continuous
stream of bits and map them into an output stream introducing
redundancies in the process. The efficiency or data rate of a convolutional
code is measured by the ratio of the number of bits in the input, k, and
the number of bits in the output, n. In a convolutional code, there is some
`memory' that remembers the stream of bits that flow by. This
information is used to encode the following bits. The number of the
preceding bits used in the encoding process is called the constraint length
m (that is similar to the memory in the system). Typically the values of k, n, and m are 1-2, 2-3, and 4-7 respectively in commonly employed convolutional codes.
Convolutional coding with Viterbi decoding has been the predominant FEC
technique used in space communications, particularly in geostationary
satellite communication networks, such as VSAT (very small aperture
terminal) networks. The most common variant used in VSAT networks is
rate 1/2 convolutional coding using a code with a constraint length K = 7.
With this code, BPSK or QPSK signals can be transmitted with at least 5 dB
less actual power. This is very useful in reducing transmitter and/or
antenna cost or permitting increased data rates given the same
transmitter power and antenna sizes. But there is a trade-off: the same data rate with rate 1/2 convolutional coding takes twice the bandwidth of the same signal without it, given that the modulation technique is the same. That is because with rate 1/2 convolutional encoding, two channel symbols are transmitted per data bit. If the modulation technique stays the same, the bandwidth expansion factor of a convolutional code is n/k.
Convolutional codes are usually described using two parameters: the code
rate and the constraint length. The code rate, k/n, is expressed as a ratio
of the number of bits into the convolutional encoder (k) to the number of
channel symbols output by the convolutional encoder (n) in a given
encoder cycle. The constraint length parameter, K, denotes the "length" of
the convolutional encoder, i.e. how many k-bit stages are available to feed the combinatorial logic that produces the output symbols. Closely related
to K is the parameter m, which indicates how many encoder cycles an
input bit is retained and used for encoding after it first appears at the
input to the convolutional encoder. The m parameter can be thought of as
the memory length of the encoder. In this report, and in the example source code, the focus is on rate 1/2 convolutional codes. Viterbi decoding was developed by Andrew J. Viterbi, a founder of Qualcomm Corporation. The
Viterbi decoding algorithm is also used in decoding trellis-coded
modulation, the technique used in telephone-line modems to squeeze
high ratios of bits-per-second to Hertz out of 3 kHz-bandwidth analogue
telephone lines. Viterbi decoding is one of two types of decoding
algorithms used with convolutional encoding; the other type is sequential
decoding. Viterbi decoding has the advantage that it has a fixed decoding
time. It is well suited to hardware decoder implementation. But its
computational requirements grow exponentially as a function of the
constraint length, so it is usually limited in practice to constraint lengths
of K = 9 or less.
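To make the k, n and K parameters concrete, here is a minimal Python sketch of a rate 1/2, constraint-length-3 encoder using the classic (7, 5) octal generators (an illustration only; the report's own simulation uses MATLAB's convenc with a K = 5 code):

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate 1/2 convolutional encoder, K = 3: two channel symbols
    (one per generator polynomial) are emitted for every input bit."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111           # shift new bit into a 3-bit register
        out.append(bin(state & g1).count("1") % 2)   # parity under generator g1
        out.append(bin(state & g2).count("1") % 2)   # parity under generator g2
    return out

print(conv_encode([1, 0, 1]))  # [1, 1, 1, 0, 0, 0]
```

Note the output is always twice the input length, which is exactly the bandwidth expansion factor n/k = 2 discussed above.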
HARD DECISION DECODING AND SOFT DECISION DECODING
The output codeword is transmitted through the channel. For instance, "0" is transmitted as 0 V and "1" as 1 V. The channel attenuates the signal that is being transmitted and the receiver sees a distorted waveform (the red waveform in the figure). The hard decision decoder makes a decision based on a threshold voltage. At each sampling instant in the receiver (as shown in the figure above), the hard decision detector determines the state of the bit to be "0" if the voltage level falls below the threshold and "1" if the voltage level is above the threshold. Therefore,
the output of the hard decision block is "001". Perhaps this "001" output is not a valid codeword, which implies that the message bits cannot be recovered properly. The decoder compares the output of the hard decision block with all possible codewords and computes the minimum Hamming distance for each case. The decoder's job is to choose a valid codeword which has the minimum Hamming distance. When hard decision decoding is employed, the probability of recovering the data in this example is 1/3.
The difference between hard and soft decision decoding is as follows:

- In hard decision decoding, the received codeword is compared with all possible codewords and the codeword which gives the minimum Hamming distance is selected.
- In soft decision decoding, the received codeword is compared with all possible codewords and the codeword which gives the minimum Euclidean distance is selected.
- Thus, soft decision decoding improves the decision-making process by supplying additional reliability information (the calculated Euclidean distance or the calculated log-likelihood ratio).
- Soft decision decoders use all of the information (voltage levels in this case) in the process of decision making, whereas hard decision decoders do not fully utilise the information available in the received signal.
- Using a soft decision decoding scheme will improve the performance of the receiver by approximately 2 dB compared to the hard decision scheme.
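The two metrics can be contrasted with a toy Python example (illustrative only; the codebook and voltage levels are made up). With a repetition codebook {000, 111} and received voltages whose hard decisions are misleading, the Hamming rule and the Euclidean rule pick different codewords:

```python
def hamming_dist(a, b):
    """Number of differing bit positions."""
    return sum(x != y for x, y in zip(a, b))

def euclidean_dist(volts, bits):
    """Squared Euclidean distance between received voltages and a codeword."""
    return sum((v - b) ** 2 for v, b in zip(volts, bits))

codebook = [(0, 0, 0), (1, 1, 1)]
r_soft = [0.55, 0.55, 0.05]                      # received voltage levels
r_hard = [1 if v > 0.5 else 0 for v in r_soft]   # threshold at 0.5 gives [1, 1, 0]

best_hard = min(codebook, key=lambda c: hamming_dist(r_hard, c))
best_soft = min(codebook, key=lambda c: euclidean_dist(r_soft, c))
print(best_hard, best_soft)  # (1, 1, 1) (0, 0, 0)
```

The hard decoder sees two 1s and picks 111, but the soft decoder notices that both 0.55 V samples are barely above threshold while 0.05 V is a confident 0, and picks 000: the reliability information changes the decision.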
For the same encoder and channel combination, consider the effect of replacing the hard decision block with a soft decision block.
Voltage levels of the received signal at each sampling instant are shown
in the figure. The soft decision block calculates the Euclidean distance between the received signal and all possible codewords and selects a codeword as the output. Even though the encoder cannot correct errors, the soft decision scheme helped in recovering the data in this case. This fact delineates the improvement that will be seen when this soft decision scheme is used in combination with forward error correcting (FEC) schemes like convolutional codes, LDPC, etc. Soft decision decoders use all of the information (voltage levels in this case) in the process of decision making, whereas hard decision decoders do not fully utilise the information available in the received signal (evident from calculating the Hamming distance just by comparing the signal level with the threshold, thereby neglecting the actual voltage levels). Soft decision Viterbi decoders utilise the Soft Output Viterbi Algorithm (SOVA), which considers the probabilities of the input symbols, producing a soft output indicating the reliability of the decision.
PERFORMANCE GAIN COMPARISON WITH SOFT DECISION AND HARD DECISION DECODING
In the simulated communication system, it can be inferred from the plot that using soft decision decoding improves the performance of the receiver by approximately 2 dB compared to hard decision decoding.
SIMULATION MODEL
The convolutional encoder used in the communication system with code
rate k/n =1/2
Convolutional Encoder:
To generate random bits:

x = (sign(randn(1,k))+1)/2;   % randn, so sign() is +1/-1 with equal probability

Here encoding takes place:

y = convenc(x,trellis);
BPSK modulation before transmitting the signal:

z = -2*y+1;   % 0 -> +1, 1 -> -1
AWGN:
To add additive white Gaussian noise to the channel :
z=z+sqrt(Var)*randn(1,n);
Viterbi Decoder for unquantised input:
xhat = vitdec(z,trellis,tblength,'cont','unquant');
[errors,ratio] = biterr(xhat(tblength+1:end), x(1:end-tblength));
TotalNumError = TotalNumError + errors;
TotalN = TotalN + k;
Viterbi Decoder hard decision:
xhat = vitdec(z,trellis,tblength,'cont','hard');
[errors,ratio] = biterr(xhat(tblength+1:end), x(1:end-tblength));
TotalNumError = TotalNumError + errors;
TotalN = TotalN + k;
Viterbi decoder-soft decision:
clear
N = 10^6;                        % number of bits or symbols
Eb_N0_dB = [0:1:10];             % multiple Eb/N0 values
Ec_N0_dB = Eb_N0_dB - 10*log10(2);
refHard = [0 0; 0 1; 1 0; 1 1];
refSoft = -1*[-1 -1; -1 1; 1 -1; 1 1];
ipLUT = [0 0 0 0; ...
         0 0 0 0; ...
         1 1 0 0; ...
         0 0 1 1];

for yy = 1:length(Eb_N0_dB)

   % Transmitter
   ip = rand(1,N) > 0.5;              % generating 0,1 with equal probability
   cip1 = mod(conv(ip,[1 1 1]),2);    % encode ip itself, not a fresh random stream
   cip2 = mod(conv(ip,[1 0 1]),2);
   cip = [cip1; cip2];
   cip = cip(:).';

   s = 2*cip - 1;                     % BPSK modulation: 0 -> -1, 1 -> +1

   n = 1/sqrt(2)*[randn(size(cip)) + j*randn(size(cip))]; % white Gaussian noise, 0 dB variance

   % Noise addition
   y = s + 10^(-Ec_N0_dB(yy)/20)*n;   % additive white Gaussian noise

   % Receiver
   cipHard = real(y) > 0;             % hard decision
   cipSoft = real(y);                 % soft decision

   % Viterbi decoding
   pmHard   = zeros(4,1);             % hard path metrics
   svHard_v = zeros(4,length(y)/2);   % hard survivor path
   pmSoft   = zeros(4,1);             % soft path metrics
   svSoft_v = zeros(4,length(y)/2);   % soft survivor path

   for ii = 1:length(y)/2

      rHard = cipHard(2*ii-1:2*ii);   % taking 2 hard bits
      rSoft = cipSoft(2*ii-1:2*ii);   % taking 2 soft bits

      % computing the Hamming distance and Euclidean distance
      rHardv = kron(ones(4,1),rHard);
      rSoftv = kron(ones(4,1),rSoft);
      hammingDist   = sum(xor(rHardv,refHard),2);
      euclideanDist = sum(rSoftv.*refSoft,2);

      if (ii == 1) || (ii == 2)

         % branch metric and path metric for state 0
         bm1Hard = pmHard(1,1) + hammingDist(1);
         pmHard_n(1,1) = bm1Hard;
         svHard(1,1) = 1;
         bm1Soft = pmSoft(1,1) + euclideanDist(1);
         pmSoft_n(1,1) = bm1Soft;
         svSoft(1,1) = 1;

         % branch metric and path metric for state 1
         bm1Hard = pmHard(3,1) + hammingDist(3);
         pmHard_n(2,1) = bm1Hard;
         svHard(2,1) = 3;
         bm1Soft = pmSoft(3,1) + euclideanDist(3);
         pmSoft_n(2,1) = bm1Soft;
         svSoft(2,1) = 3;

         % branch metric and path metric for state 2
         bm1Hard = pmHard(1,1) + hammingDist(4);
         pmHard_n(3,1) = bm1Hard;
         svHard(3,1) = 1;
         bm1Soft = pmSoft(1,1) + euclideanDist(4);
         pmSoft_n(3,1) = bm1Soft;
         svSoft(3,1) = 1;

         % branch metric and path metric for state 3
         bm1Hard = pmHard(3,1) + hammingDist(2);
         pmHard_n(4,1) = bm1Hard;
         svHard(4,1) = 3;
         bm1Soft = pmSoft(3,1) + euclideanDist(2);
         pmSoft_n(4,1) = bm1Soft;
         svSoft(4,1) = 3;

      else

         % branch metric and path metric for state 0
         bm1Hard = pmHard(1,1) + hammingDist(1);
         bm2Hard = pmHard(2,1) + hammingDist(4);
         [pmHard_n(1,1) idx] = min([bm1Hard,bm2Hard]);
         svHard(1,1) = idx;
         bm1Soft = pmSoft(1,1) + euclideanDist(1);
         bm2Soft = pmSoft(2,1) + euclideanDist(4);
         [pmSoft_n(1,1) idx] = min([bm1Soft,bm2Soft]);
         svSoft(1,1) = idx;

         % branch metric and path metric for state 1
         bm1Hard = pmHard(3,1) + hammingDist(3);
         bm2Hard = pmHard(4,1) + hammingDist(2);
         [pmHard_n(2,1) idx] = min([bm1Hard,bm2Hard]);
         svHard(2,1) = idx+2;
         bm1Soft = pmSoft(3,1) + euclideanDist(3);
         bm2Soft = pmSoft(4,1) + euclideanDist(2);
         [pmSoft_n(2,1) idx] = min([bm1Soft,bm2Soft]);
         svSoft(2,1) = idx+2;

         % branch metric and path metric for state 2
         bm1Hard = pmHard(1,1) + hammingDist(4);
         bm2Hard = pmHard(2,1) + hammingDist(1);
         [pmHard_n(3,1) idx] = min([bm1Hard,bm2Hard]);
         svHard(3,1) = idx;
         bm1Soft = pmSoft(1,1) + euclideanDist(4);
         bm2Soft = pmSoft(2,1) + euclideanDist(1);
         [pmSoft_n(3,1) idx] = min([bm1Soft,bm2Soft]);
         svSoft(3,1) = idx;

         % branch metric and path metric for state 3
         bm1Hard = pmHard(3,1) + hammingDist(2);
         bm2Hard = pmHard(4,1) + hammingDist(3);
         [pmHard_n(4,1) idx] = min([bm1Hard,bm2Hard]);
         svHard(4,1) = idx+2;
         bm1Soft = pmSoft(3,1) + euclideanDist(2);
         bm2Soft = pmSoft(4,1) + euclideanDist(3);
         [pmSoft_n(4,1) idx] = min([bm1Soft,bm2Soft]);
         svSoft(4,1) = idx+2;

      end

      pmHard = pmHard_n;
      svHard_v(:,ii) = svHard;
      pmSoft = pmSoft_n;
      svSoft_v(:,ii) = svSoft;

   end

   % traceback unit
   currHardState = 1;
   currSoftState = 1;
   ipHatHard_v = zeros(1,length(y)/2);
   ipHatSoft_v = zeros(1,length(y)/2);
   for jj = length(y)/2:-1:1
      prevHardState = svHard_v(currHardState,jj);
      ipHatHard_v(jj) = ipLUT(currHardState,prevHardState);
      currHardState = prevHardState;
      prevSoftState = svSoft_v(currSoftState,jj);
      ipHatSoft_v(jj) = ipLUT(currSoftState,prevSoftState);
      currSoftState = prevSoftState;
   end

   % counting the errors
   nErrHardViterbi(yy) = size(find([ip - ipHatHard_v(1:N)]),2);
   nErrSoftViterbi(yy) = size(find([ip - ipHatSoft_v(1:N)]),2);

end

simBer_HardViterbi = nErrHardViterbi/N;  % simulated BER - hard decision Viterbi decoding
simBer_SoftViterbi = nErrSoftViterbi/N;  % simulated BER - soft decision Viterbi decoding
theoryBer = 0.5*erfc(sqrt(10.^(Eb_N0_dB/10)));  % theoretical BER, uncoded AWGN

close all
figure
semilogy(Eb_N0_dB,theoryBer,'bd-','LineWidth',2);
hold on
semilogy(Eb_N0_dB,simBer_HardViterbi,'mp-','LineWidth',2);
semilogy(Eb_N0_dB,simBer_SoftViterbi,'cd-','LineWidth',2);
axis([0 10 10^-5 0.5])
grid on
legend('theory - uncoded', 'simulation - hard Viterbi', 'simulation - soft Viterbi');
xlabel('Eb/No, dB');
ylabel('Bit Error Rate');
title('BER for BCC with Viterbi decoding for BPSK in AWGN');
To calculate BER:
P1 = TotalNumError/TotalN;
P2 = [P2 P1];
SNR = [SNR SNRdB];
fprintf('%5.2f\t%e\t%6.0f\t%6.0f\n', SNRdB, P1, TotalNumError, TotalN);
To plot the BER curve:
semilogy(SNR,P2,'-bo');
title('Eb/N0 vs BER Curve')
xlabel('Eb/N0');
ylabel('Bit Error Rate');
hold on
axis tight
grid on
To define the trellis:
trellis = poly2trellis(5, [32 25]);

Here, 5 is the constraint length k and [32 25] specifies the generator polynomials in octal.
To define the number of simulations:
nsim = 1000;
To generate SNR:
SNRinit = 0;
SNRinc = 1;
SNRfinal = 8;

Here, the initial value of SNR is 0 dB and it increases in steps of 1 dB up to 8 dB.
To define the length of the block:
n = 8000;
Here, k represents the number of message bits and the code rate is given by k/n, whose value is 1/2.

k = 4000;
rate = k/n;
tblength = 16;
To get the data:

seed = input('Please Enter Input:');
rand('state',seed);
randn('state',seed);
SNR Calculation (Eb/N0)
for SNRdB = SNRinit:SNRinc:SNRfinal
    TotalNumError = 0;
    TotalN = 0;
    SNRdBs = SNRdB + 10*log(rate)/log(10);
    No = (10.^(-SNRdBs/10));
    Var = No/2;
    sigma = sqrt(Var);
    for Nsimulation = 1:nsim
        fprintf('*');
        if mod(Nsimulation,50) == 0
            fprintf('\n');
        end
SIMULATION RESULTS
[Figure: Eb/N0 vs BER Curve, with Eb/N0 (dB) from 0 to 8 on the x-axis and Bit Error Rate from 10^-6 to 10^-1 on a logarithmic y-axis]