I
Modelling and Simulation of an Underwater Acoustic Communication Channel
Submitted by: Kalangi Pullarao Prasanth
A Thesis approved on
by the following committee:
Hochschule Bremen, University of Applied Sciences
Bremen, Germany.
Date
Kraus, Dieter, Prof. Dr.-Ing.
Wenke, Gerhard, Prof. Dr.-Ing.
ACKNOWLEDGEMENTS
I take this opportunity to thank all those magnanimous persons who stood behind me as
an inspiration and rendered their full service throughout my thesis. I am deeply indebted
to my thesis supervisor, Prof. Dr.-Ing. Dieter Kraus, for his timely and kind help, his guidance,
his valuable suggestions whenever I digressed from the aim of the project work, and
for the essential materials required for the completion of this report. He stood as an
inspiration throughout my project work and patiently explained to me even the minute
details at various stages of the project.
I would like to thank Prof. Dr.-Ing. Gerhard Wenke for his support and cooperation in
this project. Finally, I want to thank my parents and sisters for providing me with mental and
emotional support throughout my endeavour. I want to thank all my friends who have
distinguished themselves by giving me strength, encouragement, guidance, and support
to persevere throughout this project despite many difficult obstacles.
ABSTRACT
Underwater acoustic communication is a rapidly growing field of research and
engineering. Wave propagation in an underwater sound channel is mainly
affected by channel variations, multipath propagation and Doppler shift, which pose
many obstacles to achieving high data rates and transmission robustness. Furthermore, the
usable bandwidth of an underwater sound channel is typically only a few kHz at large
distances. In order to achieve high data rates it is natural to employ bandwidth-efficient
modulation.
Thus we present a reliable simulation environment for underwater acoustic
communication applications (reducing the need for sea trials) that models the sound
channel by incorporating multipath propagation, surface and bottom reflection
coefficients, attenuation, spreading and scattering losses as well as the
For lower speeds, the convective term $\frac{\partial}{\partial x}\left(\rho_g v \frac{\partial v}{\partial x}\right)$ in Eq. (3.14) can be neglected and, for $\rho_g \approx \rho_o$, the term $\frac{\partial}{\partial x}\left(v \frac{\partial \rho_g}{\partial t}\right)$ in Eq. (3.15) can be neglected. Now, Eqs. (3.14) and (3.15) can be written as

$$-\frac{\partial^2 p}{\partial x^2} = \frac{\partial}{\partial x}\left(\rho_g \frac{\partial v}{\partial t}\right) \qquad (3.16)$$

$$\frac{\partial}{\partial x}\left(\rho_g \frac{\partial v}{\partial t}\right) = -\frac{1}{c^2}\frac{\partial^2 p}{\partial t^2}. \qquad (3.17)$$

Combining Eqs. (3.16) and (3.17), we get the one-dimensional linear wave equation

$$\frac{\partial^2 p}{\partial x^2} = \frac{1}{c^2}\frac{\partial^2 p}{\partial t^2}. \qquad (3.18)$$

Extending it to three dimensions, we get

$$\Delta p = \frac{1}{c^2}\frac{\partial^2 p}{\partial t^2} \qquad (3.19)$$

where $\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}$ denotes the Laplacian operator.
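The wave equation (3.18) can be checked numerically. The following NumPy sketch (illustrative only, not part of the thesis) verifies by central finite differences that a rightward-travelling pulse $p(x,t) = f(x - ct)$ satisfies Eq. (3.18); the Gaussian pulse shape and evaluation point are arbitrary assumptions.

```python
import numpy as np

# Illustrative check: p(x, t) = f(x - c t) satisfies the 1-D wave
# equation (3.18), d^2p/dx^2 = (1/c^2) d^2p/dt^2.
c = 1480.0                            # sound speed in water [m/s]
f = lambda u: np.exp(-u**2 / 50.0)    # arbitrary smooth pulse shape

p = lambda x, t: f(x - c * t)         # travelling-wave solution

x, t = 30.0, 0.01                     # evaluation point [m], [s]
h = 1e-3                              # spatial step for finite differences

# central second differences in space and time (time step h/c)
d2p_dx2 = (p(x + h, t) - 2 * p(x, t) + p(x - h, t)) / h**2
d2p_dt2 = (p(x, t + h / c) - 2 * p(x, t) + p(x, t - h / c)) / (h / c)**2

lhs = d2p_dx2
rhs = d2p_dt2 / c**2
print(abs(lhs - rhs))                 # residual of Eq. (3.18)
```

The residual is at the level of floating-point noise, confirming that the travelling pulse is a solution of the one-dimensional wave equation.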
3.2 Helmholtz Equation
For $p(x,y,z,t) = P(x,y,z)\exp(j\omega t)$, we obtain

$$\Delta P + k^2 P = 0. \qquad (3.20)$$

In spherical coordinates the Laplacian can be expressed by $\Delta P = \partial^2 P/\partial R^2 + (2/R)\,\partial P/\partial R$, if it is taken into account that $P$ only depends upon $R$. The spherical wave solution of the Helmholtz equation is given by

$$P = \frac{A\exp(-jkR)}{4\pi R} \qquad (3.21)$$

with

$$R = \sqrt{(x-x_o)^2 + (y-y_o)^2 + (z-z_o)^2} \qquad (3.22)$$

where $x_o, y_o, z_o$ are the coordinates of an omnidirectional point source (pulsating sphere
of small radius). Another simple and important solution is given by the plane wave

$$P = A\exp\left[-j\left(k_x x + k_y y + k_z z\right)\right] \qquad (3.23)$$

where $k_x$, $k_y$ and $k_z$ denote the wave numbers that satisfy

$$k^2 = \mathbf{k}^T\mathbf{k} = k_x^2 + k_y^2 + k_z^2 \qquad (3.24)$$

with $\mathbf{k} = (k_x, k_y, k_z)^T$ the wave number vector.
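The plane wave (3.23) and the dispersion relation (3.24) can also be checked numerically. The sketch below (illustrative, not from the thesis; the frequency of 25 kHz and all coordinates are assumptions) verifies via finite differences that the plane wave satisfies the Helmholtz equation (3.20) when the wave numbers obey Eq. (3.24).

```python
import numpy as np

# Illustrative check: the plane wave of Eq. (3.23) satisfies the
# Helmholtz equation (3.20) provided k^2 = kx^2 + ky^2 + kz^2 holds.
k = 2 * np.pi * 25e3 / 1480.0          # wave number at f = 25 kHz, c = 1480 m/s
kx, ky = 0.6 * k, 0.5 * k
kz = np.sqrt(k**2 - kx**2 - ky**2)     # chosen so that Eq. (3.24) holds

P = lambda x, y, z: np.exp(-1j * (kx * x + ky * y + kz * z))

# Laplacian of P via central differences at an arbitrary point
x0, y0, z0, h = 0.3, -0.2, 0.5, 1e-4
lap = (P(x0 + h, y0, z0) + P(x0 - h, y0, z0)
       + P(x0, y0 + h, z0) + P(x0, y0 - h, z0)
       + P(x0, y0, z0 + h) + P(x0, y0, z0 - h)
       - 6 * P(x0, y0, z0)) / h**2

residual = lap + k**2 * P(x0, y0, z0)  # Helmholtz: Delta P + k^2 P = 0
print(abs(residual))
```

The residual is tiny compared with $k^2|P| \approx 1.1\cdot 10^4$, i.e. the plane wave solves Eq. (3.20) to within finite-difference truncation error.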
3.3 Sound Propagation in a Homogeneous Waveguide
A homogeneous water column within infinitely extended, perfectly reflecting boundaries,
as shown in Fig. 16, is considered in the sequel.
Fig. 16: Homogeneous waveguide with source S and receiver R

The field produced by a point source at $(0, z_S)$ in the absence of boundaries is given by

$$P(r,z) = A(\omega)\,\frac{e^{-jkR}}{4\pi R} \qquad (3.25)$$

where
$R = \sqrt{(z - z_S)^2 + r^2}$,
$A(\omega)$ = amplitude or source strength.
Next we need to add a solution of the homogeneous Helmholtz equation to satisfy the
boundary conditions of vanishing pressure at the surface and bottom of the waveguide.
The method used for this is the image (or mirror) method, which is explained in the
following section. Here, the ocean surface and bottom are treated as two mirrors.
Rays that hit the surface or the bottom then appear to start exactly at the images of the
actual sources. With this idea the whole image method is developed,
which makes it easy to provide the mathematics for multipath propagation.
(Fig. 16 labels: water column of depth D with sound speed c = 1480 m/s and density ρ = 1000 kg/m³, bounded by air (1st boundary) above and sediment (2nd boundary) below; source S at depth $z_S$, receiver R at range r and depth z.)
3.3.1 Image or Mirror Method
The image method superimposes the free-field solution with the fields produced by the
image sources. In the waveguide case, sound is multiply reflected between the two
boundaries, requiring an infinite number of image sources to be included; see [1,6]
for details.
Fig. 17: Reflections of a wave from the boundaries of a layer, and the image sources
Fig. 17 shows a schematic representation of the contributions from the physical source
at depth $z_S$ and the first three image sources, leading to the first four terms in the
expression for the total field,

$$P(r,z) \cong A(\omega)\left\{\frac{e^{-jkL_{01}}}{L_{01}} + \hat{R}_1(\phi_{02},\omega)\frac{e^{-jkL_{02}}}{L_{02}} + \hat{R}_2(\phi_{03},\omega)\frac{e^{-jkL_{03}}}{L_{03}} + \hat{R}_1(\phi_{04},\omega)\,\hat{R}_2(\phi_{04},\omega)\frac{e^{-jkL_{04}}}{L_{04}}\right\} \qquad (3.26)$$

with $\hat{R}_i(\phi,\omega) = R_i\!\left(\frac{\pi}{2}-\phi,\,\omega\right) = R_i(\theta,\omega)$, where $i = 1,2$.
(Fig. 17 labels: physical source at depth $z_S$ in the water column of depth D; image sources at $-z_S$ (image surface), $2D - z_S$ and $2D + z_S$ (image bottom); path lengths $L_{01}, \ldots, L_{04}$ with grazing angles $\phi_{02}, \phi_{03}, \phi_{04}$ towards the receiver at range r and depth z.)
$$L_{01}^2 = r^2 + (z - z_S)^2,$$
$$L_{02}^2 = r^2 + (z + z_S)^2,$$
$$L_{03}^2 = r^2 + (2D - z_S - z)^2,$$
$$L_{04}^2 = r^2 + (2D + z_S - z)^2.$$
The remaining terms are obtained by successive imaging of these sources to yield the
ray expansion for the total field,
$$P(r,z) = A(\omega)\sum_{m=0}^{\infty}\left\{\hat{R}_1^{m}(\phi_{m1},\omega)\,\hat{R}_2^{m}(\phi_{m1},\omega)\frac{e^{-jkL_{m1}}}{L_{m1}} + \hat{R}_1^{m+1}(\phi_{m2},\omega)\,\hat{R}_2^{m}(\phi_{m2},\omega)\frac{e^{-jkL_{m2}}}{L_{m2}} + \hat{R}_1^{m}(\phi_{m3},\omega)\,\hat{R}_2^{m+1}(\phi_{m3},\omega)\frac{e^{-jkL_{m3}}}{L_{m3}} + \hat{R}_1^{m+1}(\phi_{m4},\omega)\,\hat{R}_2^{m+1}(\phi_{m4},\omega)\frac{e^{-jkL_{m4}}}{L_{m4}}\right\} \qquad (3.27)$$
where
$A$ - amplitude of the signal,
$\hat{R}_1$ - surface reflection coefficient,
$\hat{R}_2$ - bottom reflection coefficient,
$k$ - complex wave number,
$L_{m1}, L_{m2}, L_{m3}, L_{m4}$ - lengths of all rays,
with
$$L_{m1}^2 = r^2 + (2Dm - z_S + z)^2,$$
$$L_{m2}^2 = r^2 + (2Dm + z_S + z)^2,$$
$$L_{m3}^2 = r^2 + \big(2D(m+1) - z_S - z\big)^2,$$
$$L_{m4}^2 = r^2 + \big(2D(m+1) + z_S - z\big)^2 \qquad (3.28)$$
and D being the vertical depth of the duct.
3.3.2 Grazing angles
The angle at which a ray grazes a boundary is usually termed the grazing angle.
It is quite important because of its influence on both the bottom and surface
reflection coefficients. With simple geometry, the grazing angles for the four path
families can be computed as
$$\phi_{m1} = \tan^{-1}\left(\frac{2Dm - z_S + z}{r}\right), \qquad (3.29)$$

$$\phi_{m2} = \tan^{-1}\left(\frac{2Dm + z_S + z}{r}\right), \qquad (3.30)$$

$$\phi_{m3} = \tan^{-1}\left(\frac{2D(m+1) - z_S - z}{r}\right), \qquad (3.31)$$

$$\phi_{m4} = \tan^{-1}\left(\frac{2D(m+1) + z_S - z}{r}\right), \qquad (3.32)$$
where
$\phi_{m1}, \phi_{m2}, \phi_{m3}, \phi_{m4}$ - grazing angles of all the rays,
$D$ - depth of the duct or channel,
$m = 0, 1, \ldots, \infty$,
$z_S$ - depth of the source in meters,
$z$ - depth of the receiver in meters,
$r$ - range of the receiver in meters.
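The path lengths of Eq. (3.28) and the grazing angles of Eqs. (3.29)-(3.32) can be sketched in a few lines of NumPy. The geometry values below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Sketch of Eqs. (3.28)-(3.32): path lengths L_mi and grazing angles
# phi_mi of the image-method rays for order m = 0, 1, ...
D, zS, z, r = 20.0, 5.0, 10.0, 100.0   # duct depth, source/receiver depth, range [m]

def ray_geometry(m):
    """Vertical offsets, path lengths and grazing angles of the four ray families."""
    dz = [2 * D * m - zS + z,            # ray family 1
          2 * D * m + zS + z,            # ray family 2 (first bounce: surface)
          2 * D * (m + 1) - zS - z,      # ray family 3 (first bounce: bottom)
          2 * D * (m + 1) + zS - z]      # ray family 4
    L   = [np.hypot(r, d) for d in dz]   # Eq. (3.28)
    phi = [np.arctan2(d, r) for d in dz] # Eqs. (3.29)-(3.32)
    return L, phi

L0, phi0 = ray_geometry(0)               # for m = 0 these are L01..L04
print([round(l, 3) for l in L0])
print([round(np.degrees(p), 2) for p in phi0])
```

For m = 0 the lengths reduce to the four first-order paths $L_{01}, \ldots, L_{04}$, and the grazing angles increase from the flattest (direct-like) path to the steepest one.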
Further, from Eq. (3.27), we can identify the influence of
• wind speed, frequency and grazing angle on the surface reflection coefficient $\hat{R}_1$,
• bottom type and grazing angle on the bottom reflection coefficient $\hat{R}_2$.
The dependence of the surface reflection coefficient on wind speed $v_w$, frequency $f$ and
grazing angle $\phi_m$ is illustrated in Fig. 18.
Fig. 18: Scattering loss of $\hat{R}_1$ as a function of grazing angle at f = 25 kHz for two wind speeds: a) $v_w$ = 5 kn, b) $v_w$ = 15 kn

From Fig. 18 it can be observed that the scattering loss increases with increasing
grazing angle; likewise, it increases with increasing wind speed.
Similarly, the dependence of the bottom reflection coefficient $\hat{R}_2$ on grazing angle $\phi_m$ and bottom type $b_t$ can be observed. This is illustrated in Fig. 19.
Fig. 19: Reflection loss of $\hat{R}_2$ in dB as a function of grazing angle for two bottom types: a) $b_t$ = coarse sand, b) $b_t$ = very fine sand
3.3.3 Travel Times
The travel time of each ray is the time it takes to reach the receiver. From the above
discussion it is evident that all other paths need more time than the direct
path. Travel times for all the rays can easily be computed, provided we know the
lengths of all rays and the velocity of each ray. From Eq. (3.28) we know
the lengths of all rays, and the velocity of each ray is the speed of sound c. Thereby
we can write
$$T_{m1} = L_{m1}/c,$$
$$T_{m2} = L_{m2}/c,$$
$$T_{m3} = L_{m3}/c,$$
$$T_{m4} = L_{m4}/c \qquad (3.33)$$

where
$c$ - sound velocity in meters per second,
$L_{m1}, L_{m2}, L_{m3}, L_{m4}$ - lengths of the rays in meters,
$T_{m1}, T_{m2}, T_{m3}, T_{m4}$ - travel times of the rays in seconds.
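Eq. (3.33) can be sketched directly; the geometry values are the same illustrative assumptions as before, with m = 0 only.

```python
import numpy as np

# Sketch of Eq. (3.33): the travel time of each ray is its path length
# divided by the sound speed c.  Illustrative geometry, m = 0 only.
c = 1480.0                               # sound speed [m/s]
D, zS, z, r = 20.0, 5.0, 10.0, 100.0     # duct depth, depths, range [m]

dz = [z - zS, z + zS, 2 * D - zS - z, 2 * D + zS - z]
L = [np.hypot(r, d) for d in dz]         # L01..L04 from Eq. (3.28), m = 0
T = [l / c for l in L]                   # T01..T04 [s]

# the direct path (ray 1) arrives first; the reflected paths are delayed
print([round(t * 1e3, 3) for t in T])    # travel times in ms
```
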
Fig. 20: Multipath propagation depicting delays in 2D-view.
Fig. 21: Multipath propagation depicting delays in 3D-view.
Fig. 20 and Fig. 21 show the delay of the rays in two- and three-dimensional views. The
delay of the sinc pulse from ray 1 to ray 8 can be clearly observed. The sinc pulse is
used here only as an example to illustrate the concept of delay.
3.3.4 Transmission loss for each ray
The transmission loss, sometimes referred to as propagation loss, is simply the sum
of all the losses a ray is affected by. As an example, the transmission loss of ray 4 can be written as

$$tl_{04} = \frac{1}{L_{04}}\,e^{-\alpha L_{04}}\,\hat{R}_1 \hat{R}_2. \qquad (3.34)$$

In terms of Eq. (3.27), the spreading loss is due to the factors $1/L_{m1}, 1/L_{m2}, 1/L_{m3}, 1/L_{m4}$ (as discussed in
Sec. 2.4.1). The attenuation or absorption stems from the imaginary part of the complex
wave number $k$, as discussed in Sec. 2.4.3; refer to Eq. (2.30). The total reflection loss
is the sum of
• reflection loss, caused when a ray passes from medium 1 to medium 2, due to
refraction and reflection of the ray;
• scattering loss, caused by the roughness of the boundary, i.e. rays
are scattered in a disorderly fashion.
In Eq. (3.27), the reflection loss is caused by the factors $\hat{R}_1^m\hat{R}_2^m$, $\hat{R}_1^{m+1}\hat{R}_2^m$, $\hat{R}_1^m\hat{R}_2^{m+1}$ and $\hat{R}_1^{m+1}\hat{R}_2^{m+1}$.
Fig. 22 illustrates the transmission-loss phenomenon.
As an example, a sinc pulse is again used; in Fig. 22 one can clearly observe the degradation in amplitude from ray 1 to
ray 8.
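The per-ray loss of Eq. (3.34) can be sketched as follows. The absorption coefficient and reflection coefficients below are illustrative assumed constants, not values computed from the thesis' surface and bottom models (which depend on wind speed, frequency, grazing angle and bottom type).

```python
import numpy as np

# Sketch of Eq. (3.34): a ray's transmission loss combines spreading (1/L),
# absorption (exp(-alpha*L)) and boundary reflection losses.
alpha = 0.005            # absorption coefficient [1/m] (assumed)
R1hat = 0.9              # surface reflection coefficient (assumed constant)
R2hat = 0.7              # bottom reflection coefficient (assumed constant)

def transmission_loss(L, n_surface, n_bottom):
    """Loss factor of a ray of length L with the given bounce counts."""
    return (1.0 / L) * np.exp(-alpha * L) * R1hat**n_surface * R2hat**n_bottom

tl_01 = transmission_loss(100.1, 0, 0)   # direct path: spreading + absorption only
tl_04 = transmission_loss(105.9, 1, 1)   # ray 4: one surface and one bottom bounce
print(tl_01, tl_04)
```

As expected, the reflected ray suffers more loss than the direct path, since it travels farther and additionally loses energy at each boundary interaction.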
4 Modulation
Ordinarily, the transmission of a message signal (be it in analog or digital form) over a
band-pass communication channel (e.g., telephone line, satellite channel) requires a
shift of frequencies contained in the signal into other frequency ranges suitable for
transmission, and a corresponding shift back to the original frequency range after
reception. A shift of the range of frequencies in a signal is accomplished by using
modulation, defined as the process by which some characteristic of a carrier is varied
in accordance with a message signal. The message signal is referred to as the
modulating signal, and the result of the modulation process is referred to as the
modulated signal. At the receiving end of the communication system, we usually
require the message signal to be recovered. This is accomplished by using the process
known as demodulation, which is the inverse of the modulation process [7].
4.1 Digital Modulation Techniques
With a binary modulation technique, the modulation process corresponds to switching
or keying the amplitude, frequency, or phase of the carrier between either of two
possible values corresponding to binary symbols 0 and 1. This results in three basic
signaling techniques, namely, amplitude shift-keying (ASK), frequency shift-keying
(FSK) and phase shift-keying (PSK), as described herein [7]:
4.1.1 ASK
In ASK the amplitude of the signal is changed in accordance with the information while all
else is kept fixed. A 1 is transmitted by a signal of a particular amplitude. To transmit a 0, we
change the amplitude, keeping the frequency constant. On-off keying (OOK) is a special
form of ASK, where one of the amplitudes is zero, as shown below:
Fig. 23: Baseband information sequence - 0010110010
$$ASK(t) = s(t)\sin(2\pi f t) \qquad (4.1)$$
Fig. 24: Binary ASK (OOK) carrier
4.1.2 FSK
In FSK, we change the frequency in response to the information: one particular
frequency for a 1 and another frequency for a 0, as shown below for the same bit
sequence as above. In this example, the frequency $f_1$ used for a 1 is higher than the frequency $f_2$ used
for a 0.

$$FSK(t) = \begin{cases} \sin(2\pi f_1 t) & \text{for } 1 \\ \sin(2\pi f_2 t) & \text{for } 0 \end{cases} \qquad (4.2)$$
Fig. 25: Binary FSK carrier
4.1.3 PSK
In PSK, we change the phase of the sinusoidal carrier to indicate the information. Phase
in this context is the starting angle of the sinusoid. Depending on the binary symbol to be
transmitted, 0 or 1, we shift the phase of the sinusoid by 180°; the phase
shift thus represents the change in the state of the information.
If $b(t)$ represents the binary sequence (as a ±1 waveform), then we can write

$$PSK(t) = \sin(2\pi f t)\cdot b(t). \qquad (4.3)$$
Fig. 26: Binary PSK carrier (note the 180° phase shifts at bit edges)

Remarks

ASK
• Pulse shaping can be employed to remove spectral spreading.
• One binary digit is represented by the presence of the carrier at constant amplitude,
the other binary digit by the absence of the carrier.
• ASK is susceptible to sudden gain changes and demonstrates poor performance.
FSK
• The bandwidth occupancy of FSK depends on the spacing of the two symbol frequencies. A
frequency spacing of 0.5 times the symbol rate is typically used.
• FSK can be expanded to an M-ary scheme, employing multiple frequencies as different
states.
PSK
• PSK can be expanded to an M-ary scheme, employing multiple phases as different
states.
• Filtering can be employed to avoid spectral spreading.
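The three keying schemes of Eqs. (4.1)-(4.3) can be sketched for a short bit sequence. Sample rate, bit duration and carrier frequencies below are illustrative assumptions.

```python
import numpy as np

# Sketch of Eqs. (4.1)-(4.3): ASK (OOK), FSK and PSK carriers for the
# bit sequence of Fig. 23.  All rates/frequencies are illustrative.
bits = np.array([0, 0, 1, 0, 1, 1, 0, 0, 1, 0])
fs, Tb = 10000.0, 0.01                 # sample rate [Hz], bit duration [s]
f, f1, f2 = 1000.0, 1500.0, 500.0      # carrier / FSK "1" / FSK "0" [Hz]

spb = int(fs * Tb)                     # samples per bit
t = np.arange(len(bits) * spb) / fs
s = np.repeat(bits, spb)               # rectangular baseband signal s(t)

ask = s * np.sin(2 * np.pi * f * t)                     # Eq. (4.1), OOK
fsk = np.sin(2 * np.pi * np.where(s == 1, f1, f2) * t)  # Eq. (4.2)
psk = np.sin(2 * np.pi * f * t) * (2 * s - 1)           # Eq. (4.3), b(t) = +/-1

print(ask.shape, fsk.shape, psk.shape)
```

Note that the PSK line maps the 0/1 bits to a ±1 waveform before multiplying, which is exactly the 180° phase flip of Eq. (4.3); the FSK line switches the instantaneous frequency per bit (without phase continuity, which suffices as a sketch).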
4.2 Bit rate and Symbol rate
Symbols, Bits and Bauds

A symbol is quite distinct from a bit in concept, although both can be represented by
sinusoidal wave functions: a bit is the unit of information, whereas a symbol is the unit
of transmitted energy. The definition of a symbol is always a little ambiguous. In
general, or according to communication aspects, it can be broadly defined as "a set of bits",
not just one bit. The size of the set depends upon the modulation
scheme in use. For example, in BPSK a symbol consists of a single bit, and in
QPSK two bits constitute a symbol. In the case considered here, the number of bits in a symbol
can be written as $\log_2 M$, where $M$ represents the number of phase states.
To understand and compare different modulation format efficiencies, it is important to
first understand the difference between bit rate and symbol rate. The signal bandwidth
needed for the communications channel depends on the symbol rate, not on the bit rate,
cf. [9].
$$\text{Symbol rate} = \frac{\text{bit rate}}{\text{number of bits transmitted with each symbol}}$$
Bit rate is the frequency of a system's bit stream. Take, for example, an 8-bit sampler,
sampling at 10 kHz. The bit rate, the basic bit stream rate, would be eight bits multiplied
by 10k samples per second, or 80 kbit/s. (For the moment we ignore the
extra bits required for synchronization, error correction, etc.)
The symbol rate is the bit rate divided by the number of bits that can be transmitted with
each symbol. If one bit is transmitted per symbol, as with BPSK, then the symbol rate
is the same as the bit rate, i.e. 80 ksymbols per second. If two bits are transmitted per
symbol, as in QPSK, then the symbol rate is half the bit rate, i.e. 40 ksymbols per
second. The baud rate is the same as the symbol rate. If more bits can be sent with each
symbol, then the same amount of data can be sent in a narrower spectrum. This is why
modulation formats that are more complex and use a higher number of states can send
the same information over a narrower piece of the RF spectrum.
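The relation above can be captured in a one-line helper; the 80 kbit/s figure reproduces the sampler example from the text.

```python
import math

# Sketch of the relation above:
# symbol rate = bit rate / bits per symbol, with bits per symbol = log2(M).
def symbol_rate(bit_rate, M):
    """Symbol (baud) rate of an M-ary scheme at the given bit rate."""
    return bit_rate / math.log2(M)

bit_rate = 8 * 10_000                 # 8-bit sampler at 10k samples/s -> 80 kbit/s
print(symbol_rate(bit_rate, 2))       # BPSK: 1 bit/symbol  -> 80 kBd
print(symbol_rate(bit_rate, 4))       # QPSK: 2 bits/symbol -> 40 kBd
```
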
4.3 Representation of Signals
4.3.1 Baseband and Bandpass Signals
In many communication systems the baseband signal that conveys the message to be
transmitted is up-converted (i.e., translated in frequency) in order to better suit the
characteristics of the channel. An example of such a system is a QPSK system. The
QPSK modulation process can be viewed as a two step procedure. First, a baseband
signal, consisting of a series of complex valued pulses, is formed, cf. section 4.5. This
signal is then up-converted to the desired carrier frequency. The result is a bandpass
signal which can be transmitted over a physical channel [10].
4.3.2 Baseband vs. Bandpass
In general, any amplitude or phase modulation technique can be described by the
relation

$$x(t) = A(t)\cos\left[2\pi f_c t + \phi(t)\right] \qquad (4.4)$$

where $f_c$ is the carrier frequency. The bandwidths of the phase function $\phi(t)$ and the
amplitude function $A(t)$ are in general much lower than the carrier frequency. Hence,
the rate of change in these signals is typically much lower than $f_c$. Consequently, $x(t)$
is a bandpass signal with its spectrum concentrated around the carrier frequency $f_c$.
Using trigonometric relations, it is possible to write the same function as

$$x(t) = x_I(t)\cos(2\pi f_c t) - x_Q(t)\sin(2\pi f_c t) \qquad (4.5)$$

where

$$x_I(t) = A(t)\cos\big(\phi(t)\big) \qquad (4.6)$$
$$x_Q(t) = A(t)\sin\big(\phi(t)\big) \qquad (4.7)$$

represent the quadrature components. Here, $x_I(t)$ and $x_Q(t)$ are the in-phase (I) and the
quadrature (Q) component, respectively. Eq. (4.5) can be rewritten using complex
numbers as

$$x(t) = \mathrm{Re}\left\{x_{bb}(t)\,e^{j2\pi f_c t}\right\} \qquad (4.8)$$

where

$$x_{bb}(t) = x_I(t) + j\,x_Q(t) \qquad (4.9)$$

is the baseband equivalent signal.
It is always instructive to study the baseband and bandpass equivalent signals in the
frequency domain. As illustrated in Fig. 27, the baseband equivalent signal is obtained
from the bandpass signal by removing the image of the signal on the negative frequency
axis, scaling the remaining spectrum by a factor of two and moving the result to
baseband by shifting the spectrum $f_c$ Hz to the left. This frequency-domain result, as
well as baseband and bandpass signals in general, is thoroughly discussed in chapter 4
of [4].
Fig. 27: The relation between a Bandpass signal and its Baseband equivalent signal
in the frequency domain.
A baseband equivalent signal can be obtained by using a down-converter. Such a device
can be implemented in several ways; one example is shown in
Fig. 28. The signal is first multiplied by $2e^{-j2\pi f_c t}$ in order to shift the spectrum and scale
it by a factor of two. The low-pass filter removes the negative frequency image. It is
assumed that the low-pass filter is ideal and has a sufficiently large bandwidth so as not
to alter the shape of the positive frequency image. A down-converter is found in most
bandpass communication receivers.
Fig. 28: Converting a bandpass signal into its baseband equivalent signal

(Fig. 28 structure: $x(t)$ is multiplied by $2\cos(2\pi f_c t)$ and $-2\sin(2\pi f_c t)$ and low-pass (LP) filtered to give $x_I(t)$ and $x_Q(t)$, respectively.)

Communication signals are usually represented using just the complex signal $x_{bb}(t)$ in
Eq. (4.9), which is called the baseband representation, as opposed to the bandpass
representation $x(t)$, which is a real-valued signal. The baseband representation is much
easier to work with than the bandpass representation, as will be illustrated below.
Advantages of the baseband representation
The use of a baseband representation simplifies communications system simulation and
analysis in a number of ways. A simulation or analysis of a baseband system is not tied
to any particular carrier frequency and can be reused if the carrier frequency is changed.
Certain very useful operations become extremely simple when using the complex
representation. For example, a frequency shift by $f_{shift}$ is done by multiplying the signal
by $e^{j2\pi f_{shift} t}$; a phase shift of $\varphi_{shift}$ is done by multiplying the signal by $e^{j\varphi_{shift}}$.
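Both operations can be demonstrated on a complex baseband tone. The sketch below (illustrative values; sample rate, tone and shift frequencies are assumptions chosen to fall exactly on FFT bins) shows that multiplying by a complex exponential moves the spectral peak, while a constant phase factor leaves the magnitude untouched.

```python
import numpy as np

# Sketch of the operations above on a complex baseband signal:
# frequency shift = multiply by exp(j*2*pi*f_shift*t),
# phase shift     = multiply by exp(j*phi_shift).
fs = 8000.0                                # sample rate [Hz]
t = np.arange(1024) / fs
x = np.exp(2j * np.pi * 500.0 * t)         # baseband tone at 500 Hz

f_shift, phi_shift = 250.0, np.pi / 4
y = x * np.exp(2j * np.pi * f_shift * t)   # tone moves to 750 Hz
z = x * np.exp(1j * phi_shift)             # same tone, rotated by 45 degrees

# locate the strongest FFT bin of each signal
bin_x = np.argmax(np.abs(np.fft.fft(x)))
bin_y = np.argmax(np.abs(np.fft.fft(y)))
print(bin_x * fs / len(t), bin_y * fs / len(t))   # 500.0 and 750.0 Hz
```
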
4.4 Modulation – QPSK
In binary data transmission, we send only one of two possible signals during each bit
interval $T_b$. In an M-ary data transmission system, on the other hand, we send any one
of $M$ possible signals during each signaling interval $T$. For almost all applications the
number of possible signals is $M = 2^n$, where $n$ is an integer, and the signaling interval is
$T = nT_b$. It is apparent that a binary data transmission system is a special case of an
M-ary data transmission system. Each of the $M$ signals is called a symbol. The rate at
which these symbols are transmitted through a communication channel is expressed in
bauds (as explained in the section above); for M-ary data transmission, each baud
corresponds to $\log_2 M$ bits per second.
Quadrature phase shift keying (QPSK) is an example of M-ary data transmission with
$M = 4$. In QPSK, one of four possible signaling elements is transmitted during each
signaling interval, with each signal uniquely related to a dibit (pairs of bits are termed
dibits).
For example, we may represent the four possible dibits 00, 10, 11, and 01 in Gray-
encoded form (further on Gray encoding cf. [4], p.170), by transmitting a sinusoidal
carrier with one of four possible values, as follows:
$$s(t) = \begin{cases} A_c\cos\!\left(2\pi f_c t + \dfrac{\pi}{4}\right), & \text{dibit } 00 \\[4pt] A_c\cos\!\left(2\pi f_c t + \dfrac{3\pi}{4}\right), & \text{dibit } 10 \\[4pt] A_c\cos\!\left(2\pi f_c t - \dfrac{\pi}{4}\right), & \text{dibit } 01 \\[4pt] A_c\cos\!\left(2\pi f_c t - \dfrac{3\pi}{4}\right), & \text{dibit } 11 \end{cases} \qquad (4.10)$$

where $0 \le t \le T$; we refer to $T$ as the symbol duration. Fig. 29 depicts the signal state
diagram of Eq. (4.10).
Fig. 29: QPSK state diagram

Clearly, QPSK represents a special form of phase modulation. This can be seen by
expressing $s(t)$ succinctly as

$$s(t) = A_c\cos\left[2\pi f_c t + \phi(t)\right] \qquad (4.11)$$

where the phase $\phi(t)$ assumes a constant value for each dibit of the incoming data
stream. Further insight into the representation of QPSK can be developed by
expanding the cosine term in Eq. (4.11) and rewriting the expression for $s(t)$ as

$$s(t) = A_c\cos\left[\phi(t)\right]\cos(2\pi f_c t) - A_c\sin\left[\phi(t)\right]\sin(2\pi f_c t). \qquad (4.12)$$

According to this representation, the QPSK wave $s(t)$ has an in-phase component equal
to $A_c\cos[\phi(t)]$ and a quadrature component equal to $A_c\sin[\phi(t)]$.
The representation of Eq. (4.12) provides the basis for the general block diagram of the
QPSK transmitter shown in Fig. 30. It consists of a serial-to-parallel converter, a pair of
product modulators, a supply of the two carrier waves (inphase, quadrature) and a
summer. The function of the serial-to-parallel converter is to represent each successive
pair of bits of the incoming binary data stream ( )m t as two separate bits, with one bit
applied to the in-phase channel of the transmitter and the other bit applied to the
quadrature channel.
Fig. 30: General block diagram QPSK transmitter
It is apparent that the signaling interval T in a QPSK system is twice as long as the bit
duration bT of the input binary data stream ( )m t . That is, for a given bit rate 1/ bT , a
QPSK system requires half of the transmission bandwidth of the corresponding binary
PSK system.
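The dibit-to-phase mapping of Eq. (4.10) can be sketched directly as a lookup table producing complex baseband symbols:

```python
import numpy as np

# Sketch of the Gray-coded dibit-to-phase mapping of Eq. (4.10):
# each dibit selects one of four carrier phases.
phase = {(0, 0):      np.pi / 4,
         (1, 0):  3 * np.pi / 4,
         (0, 1):     -np.pi / 4,
         (1, 1): -3 * np.pi / 4}

def qpsk_symbols(bits):
    """Map a bit sequence (of even length) to complex QPSK symbols."""
    pairs = zip(bits[0::2], bits[1::2])   # group bits into dibits
    return np.array([np.exp(1j * phase[p]) for p in pairs])

d = qpsk_symbols([0, 0, 1, 1, 0, 1])
print(np.angle(d, deg=True))              # 45, -135, -45 degrees
```

Note that neighbouring constellation points differ in exactly one bit, which is the Gray-encoding property mentioned above.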
Assuming the coding arrangement of Eq. (4.10), the signal waveforms for the following
data sequence are shown for the modulated I-channel carrier, the Q-channel carrier and
the final QPSK carrier.
Example to illustrate QPSK modulation
Assumed data sequence = [0 0 1 1 0 0 0 1 1 1 0 1 1 1 1 0 1 1] to be transmitted.
(Fig. 30 structure: the binary data stream $m(t)$ enters a serial-to-parallel converter; the I data (odd bits) modulates $A_c\cos(2\pi f_c t)$ and the Q data (even bits) modulates $A_c\sin(2\pi f_c t)$, generated by an oscillator with a 90° phase shifter; the two branches are summed to form the QPSK signal.)
Fig. 31: Data sequence transmitted
Fig. 32: Modulated carrier signal for I channel
Fig. 33: Modulated carrier signal for Q channel
Here the mapping of the bits is done according to the state diagram of QPSK, Fig. 29.
Fig. 34: QPSK signal for the given data sequence
4.5 Pulse shaping
The resulting QPSK symbols are passed through a pulse shaping filter. Rectangular
pulses are not practical to send and require a lot of bandwidth, so in their place we send
shaped pulses that convey the same information but use less bandwidth and have
other good properties such as intersymbol interference rejection. One of the most
common pulse shapes used with QPSK is the "root raised cosine" (RRC). This
pulse shape has a so-called roll-off parameter which controls the shape and the
bandwidth of the signal.
Some common pulse shaping methods are
• Root Raised cosine (used with QPSK)
• Half-sinusoid (used with MSK)
• Gaussian (used with GMSK)
The root raised cosine pulse shape is given by

$$\psi_T(t) = \frac{4\beta}{\pi\sqrt{T}}\;\frac{\cos\!\left[(1+\beta)\pi t/T\right] + \dfrac{\sin\!\left[(1-\beta)\pi t/T\right]}{4\beta t/T}}{1 - \left(4\beta t/T\right)^2} \qquad (4.13)$$

where $T$ is the symbol time and $\beta$ is the roll-off factor. The roll-off factor $\beta$ usually
lies between 0 and 1 and defines the excess bandwidth $100\beta\,\%$. Using a smaller $\beta$
results in a more compact power density spectrum, but the link performance becomes
more sensitive to errors in the symbol timing. A typical root raised cosine pulse with a roll-off
factor of $\beta = 0.5$ is shown in Fig. 35.
Fig. 35: Root raised cosine pulse with a roll-off factor β = 0.5
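Eq. (4.13) has two removable singularities (at $t = 0$ and $|t| = T/4\beta$) that any implementation must handle explicitly with the corresponding limit values; the sketch below does so (illustrative implementation, the limit expressions are the standard RRC limits, not stated in the thesis).

```python
import numpy as np

# Sketch of the root raised cosine pulse of Eq. (4.13), with the two
# removable singularities handled via their limit values.
def rrc_pulse(t, T=1.0, beta=0.5):
    t = np.asarray(t, dtype=float)
    out = np.empty_like(t)
    # limit value at t = 0
    out[t == 0] = (1 - beta + 4 * beta / np.pi) / np.sqrt(T)
    # limit value at |t| = T / (4 beta)
    sing = np.isclose(np.abs(t), T / (4 * beta))
    out[sing] = (beta / np.sqrt(2 * T)) * (
        (1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
        + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
    # Eq. (4.13) everywhere else
    ok = ~sing & (t != 0)
    x = t[ok] / T
    out[ok] = (4 * beta / (np.pi * np.sqrt(T))
               * (np.cos((1 + beta) * np.pi * x)
                  + np.sin((1 - beta) * np.pi * x) / (4 * beta * x))
               / (1 - (4 * beta * x) ** 2))
    return out

t = np.linspace(-5, 5, 1001)   # delay axis t/T as in Fig. 35
psi = rrc_pulse(t)
print(psi.max())               # peak at t = 0
```
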
5 System description
In Sec. 4.3 we dealt with bandpass and baseband descriptions. As we have seen,
the baseband representation has many advantages over the bandpass representation in terms
of simulation, so the baseband system representation is used here. A detailed
description of the simulation of a continuous-time baseband system is provided below.
Fig. 36: Underwater Acoustic simulation system.
The communication system considered is shown in Fig. 36. This is a typical setup
which can represent any kind of system using quadrature amplitude modulation (QAM);
this QPSK system is used in our investigations. A brief overview of the system now
follows. At the transmitting side, the sequence of symbols $d(n)$ is converted to a
continuous-time baseband signal $s_{bb}(t)$ by a pulse amplitude modulator (PAM). Note
that $d(n)$ takes its values from a discrete set of complex-valued symbols. Up-conversion
is performed by multiplying with $e^{j2\pi f_c t}$, resulting in a bandpass signal $s(t)$ that is
transmitted over the channel (refer to chapters 2 and 3). In order to remove the carrier, the
received signal $r(t)$ is processed by a down-converter which outputs the corresponding
baseband equivalent signal $r_{bb}(t)$. The down-converter is followed by a low-pass filter and
then by a matched filter. The detector gives the estimates of the transmitted symbols. As
already stated above, the baseband representation is useful in order to be able to simulate
the system using, for example, Matlab, where only time-discrete signals can be
represented. Fig. 37 shows the baseband equivalent system.
Fig. 37: The baseband equivalent system (UAC – underwater acoustic channel)

A simulation is often based on an oversampled system, i.e. the sampling rate is higher than
the symbol rate. In general, a higher sampling rate will more accurately reflect the
original system; however, this comes at the cost of a longer simulation time, since more
samples need to be processed. It is common to use an oversampling rate that is a
multiple of the symbol rate. The number of samples per symbol, here denoted by $Q$, is
then an integer.
In order to arrive at the desired discrete-time system, we take the continuous-time
baseband equivalent system, introduce an ideal anti-alias filter at the output of the
matched filter and then oversample its output. This is depicted in Fig. 38, where
down-sampling by a factor $Q$ is also shown. $Q$ is assumed to be chosen so large that the bandwidth of the
matched filter is smaller than the bandwidth $Q/2T$ of the anti-alias filter. Consequently,
the anti-alias filter does not change the signal output from the matched filter. The signal
at the input of the detector is the same as for the continuous-time system. Thus, this
oversampled system is equivalent to the original system.
Fig. 38: Oversampling the system

Below, a series of equivalent systems is presented in which the anti-alias filter and the
sampling device are moved, step by step, to the left until a completely discrete system
remains. As illustrated in Fig. 39, the first step is to switch the order of the matched
filter and the anti-alias filter and also to use a discrete-time matched filter $p(-n)$. With a
slight change of notation, $x(n)$ denotes the discrete-time signal $x(nT/Q)$. The
matched filter has a bandwidth smaller than the Nyquist frequency $Q/2T$, so this
reordering operation does not affect the signal. This can be further understood with a
simple mathematical motivation by considering the sampled output of the continuous-time
convolution
$$y(t) = h(t) * x(t)$$

and approximating the integral with a summation to yield

$$y(nT_s) = \int_{-\infty}^{\infty} h(\tau)\,x(nT_s - \tau)\,d\tau \approx T_s \sum_{k=-\infty}^{\infty} h(kT_s)\,x\big((n-k)T_s\big) = T_s\,(h * x)(n)$$

where

$$T_s = \frac{T}{Q}.$$

The relation holds exactly if both $h(t)$ and $x(t)$ have a bandwidth less than the Nyquist
frequency $1/2T_s$. As a result, sampled continuous-time convolutions can be computed
using discrete-time processing if the output is scaled by the sample period $T_s$.
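This scaling relation can be verified numerically. The sketch below (illustrative; Gaussians are used because they are smooth, rapidly decaying and essentially band-limited) compares a finely evaluated convolution integral at one output instant against the $T_s$-scaled discrete sum of the sampled signals.

```python
import numpy as np

# Sketch of the relation above: the sampled output of a continuous-time
# convolution equals Ts times the discrete convolution of the sampled
# signals, provided both signals are (essentially) band-limited.
h = lambda t: np.exp(-t**2)            # "continuous" impulse response
x = lambda t: np.exp(-(t - 1)**2)      # input signal

t0 = 0.7                               # output sampling instant
Ts = 0.05                              # sample period Ts = T/Q

# y(t0) = integral of h(tau) x(t0 - tau), evaluated on a very fine grid
tau = np.arange(-10, 10, 1e-4)
y_exact = np.sum(h(tau) * x(t0 - tau)) * 1e-4

# Ts-scaled discrete-time sum of the sampled signals
k = np.arange(-400, 400)
y_discrete = Ts * np.sum(h(k * Ts) * x(t0 - k * Ts))

print(y_exact, y_discrete)             # nearly identical
```
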
Fig. 39: Moving the anti-alias filter and the sampling device in front of the matched filter.

As a next step, it should be obvious that the anti-alias filter and the sampler can be
moved in front of the summation without changing the signal at the detector. The anti-alias
filter after the PAM can be removed (since the bandwidth of $p(t)$ is smaller), which
means that we can write the sampled transmitted signal as
$$s_{bb}(n) = \sum_{k=-\infty}^{\infty} d(k)\,p\big[(n - kQ)\,T/Q\big].$$
Clearly, this summation can be implemented by up-sampling $d(n)$ by a factor of $Q$,
followed by filtering the resultant sequence with $p(n)$.
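The upsample-and-filter implementation of this summation can be sketched as follows; the symbol values and the short triangular pulse standing in for $p(n)$ are illustrative assumptions.

```python
import numpy as np

# Sketch of the summation above: s_bb(n) = sum_k d(k) p[(n - kQ)T/Q]
# is implemented by inserting Q-1 zeros between the symbols and then
# filtering with the sampled pulse p(n).
Q = 4                                      # samples per symbol
d = np.array([1 + 1j, -1 + 1j, -1 - 1j])   # QPSK-like symbols (illustrative)
p = np.array([0.25, 0.5, 1.0, 0.5, 0.25])  # sampled pulse shape (assumed)

up = np.zeros(len(d) * Q, dtype=complex)   # up-sample by a factor Q
up[::Q] = d
s_bb = np.convolve(up, p)                  # pulse-shaped baseband signal

print(len(s_bb))
```

By the definition of discrete convolution, `s_bb[n]` equals $\sum_k d(k)\,p(n - kQ)$, which is exactly the summation above with $p$ given on integer sample indices.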
Fig. 40: The equivalent discrete-time baseband system.

It is finally possible to summarize the above development in the equivalent discrete-time
system shown in Fig. 40.
5.1 Simulation system
The simulation system is illustrated in Fig. 41. It consists of a bit source, transmitter,
channel, receiver and a bit sink. The bit source generates the random binary sequence
that is to be transmitted by the transmitter. Typically, a random bit source is employed in simulations, and this is the case in our simulation as well. The transmitter converts the bits into QPSK symbols, applies pulse shaping and performs up-conversion to the desired carrier frequency.
Fig. 41: The simulation system considered.

The output from the transmitter is fed through the underwater acoustic channel. The
receiver block takes the output from the channel, estimates phase and timing offset, and
demodulates the received QPSK symbols into information bits which are fed to the bit
sink. Here, the bit sink counts the number of errors that occurred to gather the statistics
used for investigating the performance of the system.
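The error counting at the bit sink can be sketched as follows (an illustrative Python fragment with made-up bit streams; the thesis's own implementation is in Matlab):

```python
import numpy as np

# Sketch of the bit sink: compare transmitted and detected bits and
# accumulate the bit error ratio (bit streams here are illustrative).
rng = np.random.default_rng(0)
tx_bits = rng.integers(0, 2, size=1000)    # random bit source
rx_bits = tx_bits.copy()
rx_bits[::100] ^= 1                        # flip every 100th bit to mimic errors

errors = int(np.sum(tx_bits != rx_bits))   # error count at the bit sink
ber = errors / len(tx_bits)                # bit error ratio
```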
5.2 Transmitter
The transmitter used in this report is illustrated in Fig. 42. It consists of blocks for training sequence generation, QPSK mapping and pulse shaping (together referred to as QPSK modulation), and a carrier modulation block.
Fig. 42: The transmitter
5.2.1 Training Sequence
The training sequence generator generates a known data sequence which is transmitted
prior to any data transmission. Its purpose is to provide the receiver with a known
sequence, which can be used for phase estimation and synchronization. The training
sequence is multiplexed with the data sequence before QPSK modulation as shown in
Fig. 42. In this report, multiplexing is done such that the whole training sequence is
transmitted before the data sequence, but any other scheme can also be used. Placing the training sequence in the middle of the data, i.e. half the data bits followed by the training sequence, followed by the other half of the data bits, is another common scheme.
The training sequence carries no information and is therefore to be seen as "useless" overhead. A shorter training sequence is preferred from an overhead point of view, while a longer one usually results in better performance of the synchronization and phase estimation algorithms in the receiver. The length of the training sequence is in general not fixed anywhere; it depends on the receiver design and the modulation scheme used. Later in this report, performance results are shown for shorter and longer training sequences.
5.2.2 QPSK mapping
The bits are mapped onto corresponding QPSK symbols using Gray coding, as shown in
Fig. 43. Each QPSK symbol is represented by d_I + j d_Q, corresponding to the real-valued I and Q channels, respectively. This is covered completely in sec. 4.4.
Fig. 43: Mapping of bits into QPSK symbols
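The mapping of Fig. 43 can be sketched as follows (illustrative Python; in the Fig. 43 convention the first bit of each pair selects the sign of the I channel and the second the sign of the Q channel, with 0 mapping to +1 and 1 to −1, consistent with the decision rule of sec. 5.4.7):

```python
import numpy as np

# Gray-coded QPSK mapping as in Fig. 43: first bit -> I sign, second -> Q sign,
# with bit 0 -> +1 and bit 1 -> -1.
def qpsk_map(bits):
    b = np.asarray(bits).reshape(-1, 2)
    d_i = 1 - 2 * b[:, 0]          # 0 -> +1, 1 -> -1
    d_q = 1 - 2 * b[:, 1]
    return d_i + 1j * d_q

# Adjacent constellation points differ in exactly one bit (Gray property).
symbols = qpsk_map([0, 0, 1, 0, 1, 1, 0, 1])
```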
5.2.3 Pulse shaping
The resulting QPSK symbols are passed through a pulse shaping filter. Often a rectangular pulse shape is used in simulations, although a root raised cosine pulse is a common choice in a real system. Here, we have used a root raised cosine pulse shape.
The complete description of the RRC pulse shaper is given in sec. 4.5.
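A root raised cosine pulse can be generated from its standard closed form as sketched below (Python with assumed roll-off and oversampling values; the thesis's own implementation is root_raised_cosine.m in the appendix):

```python
import numpy as np

# Standard closed-form root raised cosine impulse response; beta is the
# roll-off factor, sps the samples per symbol (both assumed values here).
def rrc_pulse(beta, sps, span):
    t = np.arange(-span * sps, span * sps + 1) / sps  # time in symbol periods
    p = np.empty_like(t, dtype=float)
    for i, ti in enumerate(t):
        if np.isclose(ti, 0.0):                       # singular point t = 0
            p[i] = 1.0 + beta * (4 / np.pi - 1)
        elif np.isclose(abs(ti), 1 / (4 * beta)):     # singular points t = +-T/(4*beta)
            p[i] = (beta / np.sqrt(2)) * (
                (1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
                + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
        else:
            num = (np.sin(np.pi * ti * (1 - beta))
                   + 4 * beta * ti * np.cos(np.pi * ti * (1 + beta)))
            p[i] = num / (np.pi * ti * (1 - (4 * beta * ti) ** 2))
    return p / np.sqrt(np.sum(p ** 2))                # normalize to unit energy

p = rrc_pulse(beta=0.35, sps=4, span=6)
```

The pulse is symmetric with its peak at the center; cascading it with its matched counterpart in the receiver yields a full raised cosine response.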
5.2.4 Carrier modulation
Subsequent to pulse shaping is carrier modulation: the complex-valued pulse-shaped QPSK symbols in the baseband are shifted in frequency, which completes the carrier modulation. Which carrier frequency one has to choose depends upon the channel. The underwater acoustic channel is a low-frequency channel and here the chosen carrier lies in the range of 20-30 kHz. The carrier frequency is, however, almost always substantially higher than the baseband bandwidth determined by the symbol rate.
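The up-conversion step can be sketched as follows (illustrative Python; the sample rate, carrier frequency and toy baseband signal are assumptions, with the carrier chosen inside the 20-30 kHz range mentioned above):

```python
import numpy as np

# Up-conversion sketch: shift the complex baseband signal to the carrier
# and keep the real part as the transmitted passband signal.
fs = 96_000                 # simulation sample rate [Hz] (assumed)
fc = 25_000                 # carrier frequency [Hz] (assumed)
t = np.arange(960) / fs     # 10 ms of signal

s_bb = (1 + 1j) / np.sqrt(2) * np.ones(len(t))      # toy baseband signal
s_pb = np.real(s_bb * np.exp(2j * np.pi * fc * t))  # real-valued passband signal
```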
5.3 Channel
The complete description of the channel can be understood from chapters 1, 2 and 3.
Nevertheless, a brief summary of it is again provided here. The main problems of this channel are its multipath propagation, which is a cause of interference, and the
channel variations, i.e. variations in the physical parameters of the ocean such as temperature, pH, salinity, pressure or water depth. All these are extensively discussed in the chapters mentioned above. This report takes almost all of these parameters into consideration while modelling the channel. The simulation block diagram of the channel can be found in the Appendix of the report. Fig. 44 represents the underwater acoustic channel model used in this simulation. h_1(t) represents the direct path or first ray with zero (relative) delay, and h_N(t) represents the Nth ray with a delay of τ_N with respect to the direct path.
Fig. 44: Underwater Acoustic Channel Model
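The ray structure of Fig. 44 can be sketched as a tapped delay line (illustrative Python; the amplitudes and delays below are made-up placeholders, not values computed from the environmental model):

```python
import numpy as np

# Tapped-delay-line sketch of the channel in Fig. 44: each ray contributes
# an attenuated, delayed copy of the transmit signal. Values are illustrative.
fs = 10_000                          # sample rate [Hz] (assumed)
amps = [1.0, 0.55, 0.25]             # ray amplitudes, direct path first
delays_s = [0.0, 0.012, 0.031]       # delays relative to the direct path [s]

tx = np.zeros(500)
tx[0] = 1.0                          # unit impulse probes the channel

rx = np.zeros_like(tx)
for a, tau in zip(amps, delays_s):
    d = int(round(tau * fs))         # delay in samples
    rx[d:] += a * tx[:len(tx) - d]   # add delayed, scaled copy of the signal
```

The received impulse response then directly shows the relative travel times and amplitudes of the rays, as in the plots of chapter 6.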
5.4 Receiver
The receiver design in any communication system is usually complicated compared to the transmitter and channel design. But here, as the channel is treated extensively, the complexity of the receiver design appears somewhat reduced compared to the channel design. Fig. 45 depicts the receiver block diagram used in this report. The following sections explain the functionality of every block used here.
5.4.1 Bandpass Filtering
The first block in the receiver is a bandpass filter with center frequency equal to the carrier frequency f_c and a bandwidth matching the bandwidth of the transmitted signal.
The purpose of the bandpass filter is to remove out-of-band noise. Some care should be taken when choosing the bandwidth of the bandpass filter. If the bandwidth is chosen too large, more noise than necessary passes on to the subsequent stages. On the other hand, if it is too narrow, the desired signal is distorted.
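A simple front-end bandpass filter can be sketched with a windowed-sinc FIR design (illustrative Python; the sample rate, band edges around the carrier and filter length are assumed values):

```python
import numpy as np

# Windowed-sinc band-pass filter sketch for the receiver front end.
fs = 96_000                       # sample rate [Hz] (assumed)
f_lo, f_hi = 20_000, 30_000       # passband edges [Hz] (assumed)
N = 201                           # filter length (odd for a symmetric FIR)

n = np.arange(N) - (N - 1) / 2
# Band-pass = difference of two low-pass sinc responses, Hamming-windowed.
h = (2 * f_hi / fs * np.sinc(2 * f_hi / fs * n)
     - 2 * f_lo / fs * np.sinc(2 * f_lo / fs * n)) * np.hamming(N)

# Frequency response magnitude for a quick check.
H = np.abs(np.fft.rfft(h, 4096))
freqs = np.fft.rfftfreq(4096, 1 / fs)
```

In-band components pass with gain close to one, while out-of-band noise well away from the band edges is strongly attenuated.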
Fig. 45: The receiver
5.4.2 Down conversion and Sampling
The down-conversion block down-converts the received bandpass signal, resulting in a complex-valued baseband signal. In the down-conversion operation the input signal is multiplied with the local oscillator signal. Here, a separate local oscillator is not used and the same carrier frequency is assumed in both receiver and transmitter, so there is no effect in terms of carrier frequency offset. One aspect of the local oscillator signal in the
down-conversion block is how to set the initial phase. In Fig. 45, a connection from the
optional phase estimation block to the down-conversion block is shown with dashed lines; the phase estimate obtained from the phase estimator is used for the initial phase
of the local oscillator. Another approach, which is common in practice, is not to lock the
phase of the local oscillator, but instead to do a phase compensation of the baseband
signal. This is done in our case after the matched filtering, where phase compensation is
simply a rotation of the signal constellation. The latter approach is shown with solid
lines in Fig. 45.
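The down-conversion and the constellation-rotation view of phase compensation can be sketched as follows (illustrative Python; the carrier, sample rate and phase offset are assumed values, and averaging over full carrier cycles stands in for the low-pass/matched filtering of the real system):

```python
import numpy as np

# Down-conversion sketch: multiply the real passband signal by the complex
# carrier; phase compensation afterwards is a rotation of the constellation.
fs, fc = 96_000, 25_000
phi = 0.6                                   # "unknown" channel phase offset [rad]
t = np.arange(960) / fs                     # an integer number of carrier cycles

s_pb = np.cos(2 * np.pi * fc * t + np.pi / 4 + phi)   # received passband signal
bb = s_pb * np.exp(-2j * np.pi * fc * t)              # mix down to baseband

# Averaging over full cycles removes the 2*fc mixing product for this toy
# signal (a low-pass/matched filter would normally do this job).
sym = 2 * np.mean(bb)                       # complex symbol, still rotated by phi
sym_corrected = sym * np.exp(-1j * phi)     # phase compensation = rotation
```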
5.4.3 Matched Filtering
The matched filtering block contains a filter matched to the transmitted pulse shape. The
matched filter operation can be done on a discrete time signal or a continuous time
signal. The two possibilities are equivalent but, from an implementation point of view, operating on the discrete-time signal is preferable.
In case of a rectangular pulse-shape, the matched filter is an integrate-and-dump filter.
In Fig. 46, the output signal from the matched filter (either I or Q channel) is shown for
the case of rectangular pulse shapes. The black dots represent the sampled signal in the
receiver, assuming four samples per symbol. The optimal sampling instants are
illustrated with small arrows. In the figure, the sampling of the matched filter happens
to be at one of the samples of the discrete signal, but this is typically not the case. If the matched filter output is to be sampled between two solid dots, interpolation can be used to find the value between two samples or, simpler but with a loss in performance, the closest sample can be chosen.
Fig. 46: Output from the matched filter for successive signaling in absence of noise.
The solid line is the resulting output signal from the matched filters and the dotted lines
are the contributions from the first two bits (the remaining bits each have a similar
contribution, but this is not shown). The small arrows illustrate the preferred sampling
instants.
5.4.4 Synchronization
The synchronization algorithm is crucial for the operation of the system. Its task is to
find the best sampling time for the sampling device. Ideally, the matched filter should
be implemented such that the signal to noise ratio for the decision variable is
maximized. For a rectangular pulse shape, the best sampling time t_samp is at the peak of
the triangles coming out from the matched filter, illustrated with small arrows in
Fig. 46.
The synchronization algorithm used in this report is based on the complex training
sequence. During the training sequence, it is known to the receiver what the transmitter
is transmitting. Hence, one possible way of recovering the symbol timing is to cross-correlate the complex-valued samples after the matched filter with a locally generated, time-shifted replica of the training sequence. By trying different time-shifts in steps of T/Q, where Q is the number of samples per symbol, the symbol timing can be found with a resolution of T/Q. In mathematical terms, if {c(n)}_{n=0}^{L−1} is the locally generated symbol-spaced replica of the QPSK training sequence of length L and r(n) denotes the output from the matched filter, the timing can be found as
t̂_samp = arg max_{t_samp} | ∑_{k=0}^{L−1} r(kQ + t_samp) c*(k) | .   (5.1)
Fig. 47: Example of cross-correlating the received sequence with the training sequence in order to find the timing.

In this example, the delay was estimated to 211 samples (corresponding to the
maximum) and, hence, the matched filter should be sampled at 211, 211 + Q , …. in
order to recover the QPSK symbols. The correlation properties of the training sequence
are important as they affect the estimation accuracy. Ideally the autocorrelation function
for the training sequence should be equal to a delta pulse, i.e. zero correlation
everywhere except at lag zero. Therefore, a training sequence should be carefully
designed.
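The timing search of Eq. (5.1) can be sketched as follows (illustrative Python with a made-up training sequence and delay; the thesis's own code is in Matlab):

```python
import numpy as np

# Sketch of Eq. (5.1): correlate the matched-filter output with the known
# training sequence at every candidate offset and pick the maximum.
rng = np.random.default_rng(1)
Q = 4                                    # samples per symbol (assumed)
L = 32                                   # training sequence length (assumed)
c = (1 - 2 * rng.integers(0, 2, L)) + 1j * (1 - 2 * rng.integers(0, 2, L))

true_delay = 11                          # unknown to the receiver
r = np.zeros(true_delay + L * Q + 20, dtype=complex)
r[true_delay::Q][:L] = c                 # received training symbols at offset

metric = [abs(np.sum(r[t0 + Q * np.arange(L)] * np.conj(c)))
          for t0 in range(len(r) - Q * L)]
t_hat = int(np.argmax(metric))           # estimated timing offset in samples
```

At the correct offset all L products align in phase, so the metric peaks there, exactly as in the correlation peak of Fig. 47.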
5.4.5 Sampling
The output from the matched filter is down-sampled by a factor of Q, i.e. every Qth sample in the output sequence is kept. The position of the sample (illustrated by arrows in Fig. 46) is controlled by the synchronization device previously described.
5.4.6 Phase Estimation
The phase estimator estimates the phase of the transmitted signal, which is necessary to
know in order to demodulate the signal. Phase estimation, especially in the low SNR region, is a hard problem and several different techniques are available. The phase
estimation algorithm used in this report is as follows. Using a complex baseband representation, the sub-sampled matched filter output is a sequence of the form

…, e^{j(Φ_{n−1}+φ)}, e^{j(Φ_n+φ)}, e^{j(Φ_{n+1}+φ)}, e^{j(Φ_{n+2}+φ)}, …   (5.2)

where Φ_n ∈ {±π/4, ±3π/4} is the information-bearing phase of the nth symbol and φ
is the unknown phase offset caused by the channel. If nΦ is known, which is the case
during the training sequence, the receiver can easily remove the influence from the
information in each received symbol by element-wise multiplication with complex
conjugate of a QPSK modulated training sequence replica, generated by the receiver.
The value of φ can then easily be obtained by averaging over the sequence. In other words, if {r(n)}_{n=0}^{L−1} denotes the L received QPSK symbols (i.e. the received signal after down-sampling) during the training sequence and {c(n)}_{n=0}^{L−1} is the local replica of the complex training sequence, an estimate of the unknown phase offset can be obtained as
φ̂ = arg( (1/L) ∑_{k=0}^{L−1} r(k) c*(k) ) .   (5.3)
The longer the training sequence, the better the phase estimate as the influence from
noise decreases. A longer training sequence, on the other hand, reduces the amount of
payload that can be transmitted during a given time.
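The estimator of Eq. (5.3) can be sketched as follows (illustrative Python with an assumed phase offset and noise level):

```python
import numpy as np

# Sketch of Eq. (5.3): strip the known training modulation, average over
# the training sequence, then take the argument.
rng = np.random.default_rng(2)
L = 64                                   # training length (assumed)
phi = 0.8                                # "unknown" channel phase offset [rad]

# Local replica of the QPSK training sequence (phases +-pi/4, +-3pi/4).
c = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, L)))
noise = 0.05 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
r = c * np.exp(1j * phi) + noise         # received training symbols

phi_hat = np.angle(np.mean(r * np.conj(c)))   # Eq. (5.3)
```

The averaging suppresses the noise, which is why a longer training sequence yields a better phase estimate.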
5.4.7 Decision
The decision device is a threshold device comparing the I and Q channels, respectively,
with threshold zero. If the decision variable is larger than zero, a logical “0” is decided
and if it is less than zero a logical “1” is decided.
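This threshold rule can be sketched as follows (illustrative Python; the input symbols are made-up noisy constellation points):

```python
import numpy as np

# Threshold decision of sec. 5.4.7: I and Q are compared with zero;
# a value above zero decides a logical 0, below zero a logical 1.
def decide(symbols):
    s = np.asarray(symbols)
    bits_i = (s.real < 0).astype(int)    # 1 if the I channel is below threshold
    bits_q = (s.imag < 0).astype(int)    # 1 if the Q channel is below threshold
    return np.column_stack([bits_i, bits_q]).reshape(-1)

bits = decide([0.9 + 1.1j, -1.2 + 0.8j, -0.7 - 0.9j, 1.1 - 1.0j])
```

This is exactly the inverse of the Gray mapping of Fig. 43.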
6 Observations and Results
This chapter presents some exemplary simulation results along with some interesting observations. First we look into the underwater acoustic channel, then we cover the communication part of the system.

As discussed in chapters 2 and 3 above, the major impairment in an underwater acoustic channel is its multipath propagation. Our desired goal is always to achieve high data rates at a decent geometry of the transmitter and receiver (a low BER is implied). Here, the term geometry means the physical positioning of a transmitter and receiver in an underwater acoustic channel of depth D and infinite length. At shorter distances the multipath arrives at the receiver much later relative to the direct path. This statement may appear somewhat counterintuitive, but it makes sense on closer inspection: we are not speaking about the absolute time taken for each ray to reach the receiver, but about the “relative time” of all the other rays compared to the direct path.
Fig. 48 presents the simulation results for a particular environmental scenario with varying receiver location. This figure illustrates the impact of the distances (indirectly, the grazing angles, which play a major role) on the time delays of the multipath propagation for the following environmental scenario. Here, wind speed and bottom type are not included, as we are representing only the time-delay concept without any transmission loss phenomenon.
Environmental Scenario (for Fig. 48)
Source location (r_S, z_S) = (0, 20) m
Receiver locations (r_R, z_R) = (x, 20) m, x = 10, 100, 500, 1000
Sound velocity c = 1500 m/s
Water depth D = 40 m
Salinity S = 35 ppt
Water temperature T = 14 °C
pH value pH = 8
Fig. 49: Simulation results showing relative travel times of a sinc pulse for various transmitter and receiver locations, including the transmission loss phenomenon.
Coming to Figs. 49c and 49d, we certainly see the impact of multipath growing as the separation between transmitter and receiver increases, i.e. to 1000 m. In Fig. 49c, the 4th ray hits the surface twice and the bottom once, i.e. S-B-S, and the 5th ray hits the surface once and the bottom twice, i.e. B-S-B. From the following results, it is observed that the 4th ray grazes at an angle of 3.1481° and the 5th ray at
5.9941°. This leads to lower reflection coefficients for the 5th ray compared to the 4th ray. Similarly, in Fig. 49d, the 5th ray hits the surface once and the bottom twice, i.e. B-S-B, and the 4th ray hits the surface twice and the bottom once, i.e. S-B-S. From the results provided in d), it is observed that the 5th ray grazes at an angle of 3.1481° and the 4th ray at 5.9941°. This leads to lower reflection coefficients for the 4th ray compared to the 5th ray.
Here we make another interesting observation: in Fig. 49c the 4th ray has a larger amplitude than the 5th ray, and exactly the opposite is seen in Fig. 49d. This is due to the swapping of the vertical placements of transmitter and receiver from 10 to 35 m.
The simulation results for grazing angles and reflection coefficients:
c) (r_S3, z_S3) = (0, 10) and (r_R3, z_R3) = (1000, 35)
R2 = [0.9772 0.5533 0.6306 0.2568 0.4858 0.2002 0.2266 0.0960]

Fig. 50 represents another simulation result showing the impact of multipath at a slightly lower wind speed of 6 knots and with a bottom type value of 4.
Environmental Scenario (for Fig. 50)
Source locations (r_S, z_S) = (0, x) m, x = 10, 35
Receiver locations (r_R, z_R) = (1000, x) m
Sound velocity c = 1500 m/s
Water depth D = 40 m
Salinity S = 35 ppt
Water temperature T = 14 °C
pH value pH = 8
Wind speed v_w = 6 knots
Bottom type bt = very fine sand
Fig. 50: Simulation results showing relative travel times for two different vertical depths of transmitter
and receiver of a sinc-pulse including the transmission loss phenomenon.
By now it is understood that multipath dominates when the separation between the transmitter and receiver increases, and that it also varies with the vertical positions of the transmitter and receiver. Added to this, when the wind speed is low and the bottom is soft, it becomes even worse. The difference can be clearly observed between Fig. 49c,d and Fig. 50. Here also, the same behaviour of the amplitude difference is observed for the 4th, 5th, 6th, 7th, etc. rays, as the geometry differs in the two cases.

Finally, there is the case of constructive and destructive interference of the multipath. When the multipath adds to the direct path in accordance with its phase, we have constructive interference, otherwise a destructive one. So, sometimes even if the multipath is not dominant, you may still have a poor BER.

Having discussed the multipath propagation in the underwater acoustic channel and all the channel effects, we now move to the communications part of the system. In communications the desired goal is always to achieve the maximum signal-to-noise ratio. In the underwater acoustic channel the noise appears in two forms: one is the ambient noise discussed in chapter 2, and the other is the multipath itself. We can also say that here the signal itself acts as noise, as the multipath is nothing but delayed versions of the direct path generated from our own signal. So, whenever we refer to SNR we mean the ratio between the signal strengths of the direct path and the multipath. The following
[Fig. 50, panels showing amplitude (×10⁻³) vs. relative travel time [s] per ray: a) (r_S1, z_S1) = (0, 35) and (r_R1, z_R1) = (1000, 10); b) (r_S1, z_S1) = (0, 10) and (r_R1, z_R1) = (1000, 35)]
are some simulation results which show the bit error ratio for the direct path only and for multipath, for two different wind speeds and bottom types.
DIRECT PATH
1. Environmental Scenario (for Fig. 51)
Source location (r_S, z_S) = (0, 10) m
Receiver location (r_R, z_R) = (1000, 35) m
Sound velocity c = 1500 m/s
Water depth D = 40 m
Salinity S = 35 ppt
Water temperature T = 14 °C
pH value pH = 8
Wind speed v_w = 6 knots
Bottom type bt = coarse silt
Fig. 51 represents the BER plot for the direct path only. As one can imagine, when only the direct path is transmitted, no multipath interference is present; the signal only experiences an attenuation of its strength. So, the BER of the direct path is 0.
Fig. 51: BER plot of the direct path for the above environmental scenario 1.
In the following, two cases of multipath propagation have been considered. One uses a mud-like bottom type and a lower wind speed, the other a somewhat higher wind speed and a sand bottom type. In case 1, the BER is much higher compared to case 2, as expected. This is due to stronger reflections at lower wind speeds and softer bottom types.
MULTI-PATH PROPAGATION
Case 1
1. Environmental Scenario (for Fig. 52)
Source location (r_S, z_S) = (0, 10) m
Receiver location (r_R, z_R) = (1000, 35) m
Sound velocity c = 1500 m/s
Water depth D = 40 m
Salinity S = 35 ppt
Water temperature T = 14 °C
pH value pH = 8
Wind speed v_w = 6 knots
Bottom type bt = coarse silt
Fig. 52: BER plots of multipath propagation for the above environmental scenario, case 1: a) linear scale, b) log scale.
Case 2
2. Environmental Scenario (for Fig. 53)
Source location (r_S, z_S) = (0, 10) m
Receiver location (r_R, z_R) = (1000, 35) m
Sound velocity c = 1500 m/s
Water depth D = 40 m
Salinity S = 35 ppt
Water temperature T = 14 °C
pH value pH = 8
Wind speed v_w = 8 knots
Bottom type bt = very fine sand
Fig. 53: BER plots of multipath propagation for the above environmental scenario 2, case 2: a) linear scale, b) log scale.

From Figs. 52 and 53 it can be observed that with higher wind speeds and rougher bottom types, the strengths of all the rays constituting the multipath propagation are reduced. In such situations the communication design becomes easier compared to the channel modelling. In practical applications, however, lower wind speeds are present, making the communications design harder. So, we should always consider the range of wind speeds between 0-20 knots for our desired underwater acoustic applications, and the communication system should be designed to be robust even at lower wind speeds.
Constellation Diagrams

From the following constellation diagrams, one can see an error-free transmission for the direct path, Fig. 54, and errors for the multipath case, Fig. 55.

Fig. 54: Received QPSK states for direct path
Fig. 55: Received QPSK states for multipath
7 Summary and Concluding Remarks
Some underwater acoustic applications, such as simple status reports or the transfer of time-position co-ordinates, may require a bit rate of 100 bit s⁻¹. But several other applications, such as seafloor mapping and some military applications, require bit rates of several kbit s⁻¹ due to the transfer of large images. As an initial step to explore communication systems that have the potential of transferring data at rates of multiple kbit s⁻¹ over distances of several kilometres underwater, we have developed this simulation tool.
This simulation tool is designed for communication using Quadrature Phase Shift
Keying (QPSK) modulation techniques in an Underwater Acoustic Channel (UAC). It
mainly consists of a transmitter, the UAC and a receiver. It provides a thorough insight into the various problems encountered in an underwater sound channel and also explains the degradation of the bit error rate (BER) due to channel variations and the presence of multipath propagation.
All the oceanographic acoustic fundamentals have been considered in depth while modelling the UAC. QPSK modulation techniques have been employed for the transmitter and receiver. This tool works with a very low BER for the direct path even at higher bit rates and is also robust against all channel variations. In short, this simulation model provides:
• a thorough insight into the complexity of an underwater acoustic channel.
• the ability to design and analyse time-invariant equalizers with sensitivity to equalizer mismatch.
• the flexibility to change the carrier frequency.
This tool shows the practically poor BER for multipath propagation and produces satisfactory results at bit rates ranging from 1-2 kbit/s. The robustness of the system against multipath propagation decreases drastically when the channel variations get worse. The simulation tool developed here assumes fixed transmitter and receiver locations. As explained in this report, the presence of multipath causes an intersymbol interference (ISI) that destroys the message, due to the different travel times of the different rays. Depending on the particular underwater sound channel in question, the ISI can extend over tens or even hundreds of symbols. A solution to this problem might be to
employ an adaptive equalizer in the simulation tool (here, adaptive refers to a moving transmitter and receiver). An equalizer can be viewed as an inverse filter to the channel. Nevertheless, in practical situations even the employment of an equalizer would not solve the problem of transferring high bit rates. This motivates the employment of modulation techniques like Orthogonal Frequency Division Multiplexing (OFDM). So, our future outlook for the extension of this simulation tool would be:
• Incorporation of a moving transmitter and receiver.
• Model validation with measurements.
• Investigation of adaptive single input multiple output (SIMO) equalization.
• Application of orthogonal frequency division multiplexing (OFDM) communication.
Appendix
The following schematic diagram for the simulation gives a complete idea of how the main program is structured into functions and sub-functions. Following the schematic diagram, the complete Matlab code is provided, organized according to each function as stated in the diagram.
Fig. 56: Schematic diagram for Simulation

Simulation code:
Main.m
tansmitter.m
root_raised_cosine.m
training_sequence.m
qpsk.m
random_data.m
underwater_acoustic_channel.m
channel.m
attenuation.m
loss.m
ambient_noise.m
SRC.m
BRC.m
receiver.m
phase_estimation.m
detect.m