Advanced Random Processes
Text: PROBABILITY AND RANDOM PROCESSES, by Davenport
Course Outline
- Basic Probability Theory; Random Variables and Vectors;
Conditional Probability and Densities; Expectation; Conditional Expectation
- Estimation with Static Models
- Random Process
- Stationarity; Power Spectral Density
- Mean-Square Calculus; Linear System
- Kalman Filter
- Wiener Integrals; Wiener Filter
Grade
Mid-term (40 %), Final (40 %), Homework (20 %)
2. SAMPLE POINTS AND SAMPLE SPACES
Sample point
A sample point is a representation of a possible outcome of an experiment.
Sample space
A sample space is the totality of all possible sample points, that is, the representation of all possible outcomes of an experiment.
Event
An event is an outcome or a collection of outcomes. It is also defined as the corresponding sample point or set of sample points, respectively.
Event defined by listing
A = {s1, s2, · · · , sn}    B = {s1, s2, s3, · · · }
Event defined by description
A = {s : prop(s) is true}
where prop(s) is some proposition about s: for example, |s| < 1.
Implication or inclusion
A ⊂ B ⇔ (s ∈ A ⇒ s ∈ B)
Equality
A = B ⇔ A ⊂ B and B ⊂ A
Union
A ∪ B ≜ {s : s ∈ A or s ∈ B or both}
A ⊂ A ∪B and B ⊂ A ∪B
A ⊂ B ⇔ A ∪B = B
Intersection
A ∩ B ≜ {s : s ∈ A and s ∈ B}
A ∩B ⊂ A and A ∩B ⊂ B
A ⊂ B ⇔ A ∩B = A
Distributive laws
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
and
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
Complement
Ac ≜ {s : s ∈ S and s ∉ A}
(Ac)c = A
A ⊂ B ⇒ Bc ⊂ Ac
Relative complement
B − A ≜ {s : s ∈ B and s ∉ A}
B −A = B ∩Ac
Null set φ
A ∩Ac = φ
Sc = φ
A ∪ φ = A and A ∩ φ = φ
φ ⊂ A for any A ⊂ S
Disjoint events
The events A and B are disjoint if and only if A ∩B = φ.
De Morgan’s rules
(A ∩B)c = Ac ∪Bc
(A ∪B)c = Ac ∩Bc
Partitions of S
A1, A2, · · · , An, where
Ai ∩ Aj = φ for i ≠ j and ∪_{i=1}^{n} Ai = S
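These set identities can be checked mechanically on a small finite sample space; the sketch below (not part of the original notes, with S and the events chosen arbitrarily) uses Python's built-in set type.

```python
# Minimal sketch: the set algebra above, checked on a small finite sample space.
S = set(range(10))                       # sample space S = {0, 1, ..., 9}
A = {s for s in S if s < 5}              # event defined by description
B = {0, 2, 4, 6, 8}                      # event defined by listing

Ac = S - A                               # complement Ac = S - A
assert A | B == {s for s in S if s in A or s in B}       # union
assert A & B <= A and A & B <= B                          # intersection lies in both events
assert S - (A & B) == Ac | (S - B)                        # De Morgan: (A ∩ B)c = Ac ∪ Bc
assert S - (A | B) == Ac & (S - B)                        # De Morgan: (A ∪ B)c = Ac ∩ Bc

# A partition of S: pairwise disjoint events whose union is S.
partition = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8, 9}]
assert set().union(*partition) == S
assert all(Ai.isdisjoint(Aj) for Ai in partition for Aj in partition if Ai is not Aj)
```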
3. PROBABILITY
Probability Space
A Probability Space is a triple (S,A, P ).
S = Sample space
A = σ-algebra on S
P = Probability measure
σ-Algebra
A is a nonempty class of subsets of S such that
(i) A ∈ A ⇒ Ac ∈ A
(ii) A, B ∈ A ⇒ A ∪ B ∈ A
(iii) A1, A2, A3, · · · ∈ A ⇒ ∪_{i=1}^{∞} Ai ∈ A
Probability
Probability is a set function P : A → R which satisfies the following axioms:
(i) P (A) ≥ 0
(ii) P (S) = 1
(iii) Ai ∩ Aj = φ, i ≠ j ⇒ P[∪_{i=1}^{∞} Ai] = ∑_{i=1}^{∞} P[Ai]   (countably additive)
Elementary properties of probability
P [Ac] = 1− P [A]
P [φ] = 0
P [A] ≤ 1
P [B −A] = P [B]− P [A ∩B]
A ⊂ B ⇒ P [B −A] = P [B]− P [A]
A ⊂ B ⇒ P [A] ≤ P [B]
P [A ∪B] = P [A] + P [B]− P [A ∩B]
P [A ∪B] ≤ P [A] + P [B]
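As a quick illustration (not from the text), the elementary properties can be verified for the equally likely measure P[A] = |A|/|S| on a small finite sample space; exact fractions avoid floating-point issues.

```python
from fractions import Fraction

# Sketch: elementary properties of probability for the equally likely measure.
S = set(range(12))
P = lambda A: Fraction(len(A), len(S))   # P[A] = |A| / |S|

A = {0, 1, 2, 3, 4}
B = {3, 4, 5, 6}

assert P(S - A) == 1 - P(A)                   # P[Ac] = 1 - P[A]
assert P(set()) == 0 and P(A) <= 1            # P[phi] = 0 and P[A] <= 1
assert P(B - A) == P(B) - P(A & B)            # P[B - A] = P[B] - P[A ∩ B]
assert P(A | B) == P(A) + P(B) - P(A & B)     # P[A ∪ B] = P[A] + P[B] - P[A ∩ B]
assert P(A | B) <= P(A) + P(B)                # union bound
```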
Joint probability
If the sample space S is partitioned by the collection of events A1, A2, · · · , Am, then
P[B] = P[B ∩ S] = P[B ∩ (∪_{j=1}^{m} Aj)] = P[∪_{j=1}^{m} (B ∩ Aj)]
     = ∑_{j=1}^{m} P[B ∩ Aj]
     = ∑_{j=1}^{m} P[B|Aj] P[Aj]
Conditional probability
P[B|A] ≜ P[A ∩ B] / P[A]
so long as P [A] > 0.
Bayes’ rule
Let B be an arbitrary event in a sample space S. Suppose that the events A1, A2, · · · , Am
partition S and that P [Ai] > 0 for all i. Then
P[Ai|B] = P[Ai ∩ B] / P[B] = P[B|Ai] P[Ai] / ∑_{j=1}^{m} P[B|Aj] P[Aj]
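A small numerical sketch of the total-probability and Bayes formulas above; the prior and likelihood values are made up for illustration.

```python
# Sketch: Bayes' rule with a three-event partition A1, A2, A3 of S.
prior = [0.5, 0.3, 0.2]          # P[Ai], must sum to 1
likelihood = [0.9, 0.5, 0.1]     # P[B | Ai]

# Total probability: P[B] = sum_j P[B|Aj] P[Aj]
p_B = sum(l * p for l, p in zip(likelihood, prior))

# Posterior: P[Ai | B] = P[B|Ai] P[Ai] / P[B]
posterior = [l * p / p_B for l, p in zip(likelihood, prior)]

print(p_B)         # 0.62
print(posterior)   # proportional to likelihood * prior, sums to 1
assert abs(sum(posterior) - 1.0) < 1e-12
```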
Independent events
The events A and B are said to be statistically independent if
P [A ∩B] = P [A] P [B]
The events A1, A2, · · · , An are said to be mutually independent if and only if the relations
P[Ai ∩ Aj] = P[Ai] P[Aj]
P[Ai ∩ Aj ∩ Ak] = P[Ai] P[Aj] P[Ak]
· · ·
P[A1 ∩ A2 ∩ · · · ∩ An] = P[A1] P[A2] · · · P[An]
hold for all combinations of the indices such that 1 ≤ i < j < k < · · · ≤ n.
Independent experiments
Suppose that we are concerned with the outcomes of n different experiments E1, E2, · · · , En. Suppose further that the sample space Sk of the kth of these n experiments is partitioned by the mk events Akik, ik = 1, 2, · · · , mk. The n given experiments are then said to be statistically independent if the equation
P[A1i1 ∩ A2i2 ∩ · · · ∩ Anin] = P[A1i1] P[A2i2] · · · P[Anin]
holds for every possible set of n integers i1, i2, · · · , in, where the index ik ranges from 1 to mk.
4. RANDOM VARIABLES
Set-indicator function
IA(s) = { 1, if s ∈ A
        { 0, if s ∉ A
Inverse image
X−1(A) = {s ∈ S | X(s) ∈ A}
A-measurable
A map X : S → R is A-measurable if
{s | X(s) < a} ∈ A, for all a ∈ R
Random variable
A random variable X is an A-measurable function from the sample space S to R.
Induced probability
P[X ∈ A] ≜ P[X−1(A)] = P[{s ∈ S | X(s) ∈ A}]
P[X < a] ≜ P[{s ∈ S | X(s) < a}]
Range
RX = range of X = {a ∈ R | a = X(s) for s ∈ S}
Probability distribution function
FX(x) ≜ P[X ≤ x] = P[{s ∈ S | X(s) ≤ x}]
Properties of probability distribution function
FX(+∞) = 1 and FX(−∞) = 0
b > a ⇒ FX(b) ≥ FX(a) (monotone non-decreasing)
FX(a) = FX(a + 0) = lim_{ε→+0} FX(a + ε)   (right continuous)
FX(a− 0) + P [X = a] = FX(a)
Decomposition of distribution functions
FX(x) = DX(x) + CX(x)
where DX is a step function and hence may be expressed as
DX(x) = ∑_i P[X = xi] U(x − xi)
and where CX is continuous everywhere.
Interval probability
P[X ∈ (a, b]] ≜ P[a < X ≤ b] = FX(b) − FX(a)
Probability density
fX(x) ≜ dFX(x)/dx
Properties of probability density function
fX(x) ≥ 0
FX(x) = ∫_{−∞}^{x} fX(ξ) dξ
∫_{−∞}^{∞} fX(ξ) dξ = 1
Calculation of probability
P[X ∈ A] = ∫_A fX(x) dx
Uniform density function
fX(x) = { 1/(b − a),   for a ≤ x ≤ b
        { 0,           otherwise
Exponential density function
fX(x) = { a e^{−ax},   for x ≥ 0
        { 0,           otherwise     (a > 0)
Normal density function
fX(x) = (1/√(2π)) e^{−x²/2}
Rayleigh density function
fX(x) = { (x/b) e^{−x²/2b},   for x ≥ 0
        { 0,                  otherwise     (b > 0)
Cauchy density function
fX(x) = (a/π) · 1/(a² + x²)     (a > 0)
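Each of the densities listed above should integrate to one; the following sketch (parameter values a and b chosen arbitrarily, not from the text) checks this numerically with scipy.

```python
# Sanity check: each listed density integrates to 1.
import numpy as np
from scipy.integrate import quad

a, b = 2.0, 3.0   # arbitrary parameter values

densities = {
    "uniform":     (lambda x: 1.0 / (b - a) if a <= x <= b else 0.0, a, b),
    "exponential": (lambda x: a * np.exp(-a * x), 0, np.inf),
    "normal":      (lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi), -np.inf, np.inf),
    "rayleigh":    (lambda x: (x / b) * np.exp(-x**2 / (2 * b)), 0, np.inf),
    "cauchy":      (lambda x: (a / np.pi) / (a**2 + x**2), -np.inf, np.inf),
}

for name, (f, lo, hi) in densities.items():
    total, _ = quad(f, lo, hi)
    print(f"{name:12s} integrates to {total:.6f}")   # each should be ~1.0
```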
5. RANDOM VECTORS
Random vector
X(s) = (X1(s), X2(s), · · · , Xn(s))
where Xi(s), i = 1, · · · , n, is a random variable defined on S. Thus, a random vector is a finite family of random variables.
Joint-probability distribution function
FX,Y(x, y) ≜ P[X ≤ x, Y ≤ y] ≜ P[{s ∈ S | X(s) ≤ x and Y(s) ≤ y}]
Properties of joint-probability distribution function
(7) Y1, Y2, Y3, · · · , Yk+1 are linearly related to Y1, Y2|1, Y3|2, · · · , Yk+1|k ("Gram-Schmidt orthogonalization").
Sample Mean
Estimator of E(X) = m
mn ≜ (1/n) ∑_{i=1}^{n} Xi
E[mn] = m
Xi, i = 1, 2, 3, ..., n independent ⇒ var(mn) = (1/n) var(X)
Assume independence. Then
lim_{n→∞} P[|mn − m| ≥ ε] = 0   (the weak law of large numbers)
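A brief simulation sketch (not in the original notes; the distribution of X and all parameter values are assumed) of the two facts above: var(mn) = var(X)/n for independent samples, and the probability of a deviation of at least ε shrinking as n grows.

```python
# Sketch: sample-mean variance and the weak law of large numbers by simulation.
import numpy as np

rng = np.random.default_rng(0)
m, var_X = 1.0, 4.0          # assumed true mean and variance of X
eps = 0.1
trials = 10_000

for n in (10, 100, 1000):
    X = rng.normal(m, np.sqrt(var_X), size=(trials, n))
    mn = X.mean(axis=1)                        # sample mean of each trial
    print(f"n={n:5d}  var(mn)={mn.var():.4f}  (theory {var_X/n:.4f})  "
          f"P[|mn-m|>=eps] ~ {np.mean(np.abs(mn - m) >= eps):.3f}")
```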
Relative frequency
- Suppose that we sample a random variable, say X, and that we determine for each sample whether or not some given event A occurs. The random variable characterizing the relative frequency of occurrence of the event A has the following statistical properties:
E[nA/n] = p and var(nA/n) = p(1 − p)/n
where p ≜ P[X ∈ A]
P[|nA/n − p| ≥ ε] ≤ 1/(4nε²)
lim_{n→∞} P[|nA/n − p| ≥ ε] = 0   (Bernoulli theorem)
9. Random Processes
Random Process
An indexed family of random variables, Xt, t ∈ T, where T denotes the set of possible values of the index t.
If T is a countably infinite set, then the process is called a discrete-parameter random process; if T is a continuum, then the process is called a continuous-parameter random process.
Bernoulli process
A random process Xn, n = 1, 2, 3, · · · in which the random variables Xn are Bernoulli random variables, for example, where
P [Xn = 1] = p and P [Xn = 0] = 1− p
and where the Xn are statistically independent random variables.
E[Xn] = p
var(Xn) = pq = p(1− p)
Binomial counting process
A random process Yn, n = 1, 2, 3, · · ·
Yn ≜ ∑_{i=1}^{n} Xi
where Xi, i = 1, 2, 3, · · · is an independent Bernoulli random process.
P[Yn = k] = (n choose k) p^k (1 − p)^{n−k},   for k = 0, 1, 2, · · · , n
E[Yn] = np
var(Yn) = npq
cov(Ym, Yn) = pq min(m,n)
var(Ym − Yn) = |m− n|pq
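The moment formulas for the binomial counting process can be checked by simulation; the sketch below (p and the time indices chosen arbitrarily, not from the text) builds Yn as a cumulative sum of independent Bernoulli variables.

```python
# Sketch: binomial counting process Yn = X1 + ... + Xn from i.i.d. Bernoulli(p).
import numpy as np

rng = np.random.default_rng(1)
p, n_steps, trials = 0.3, 50, 100_000
q = 1 - p

X = rng.random((trials, n_steps)) < p        # Bernoulli process, one row per realization
Y = np.cumsum(X, axis=1)                     # Yn = X1 + ... + Xn

m_idx, n_idx = 20, 40                        # compare Y_20 and Y_40
Ym, Yn = Y[:, m_idx - 1], Y[:, n_idx - 1]
print("E[Yn]  :", Yn.mean(), " theory", n_idx * p)
print("var(Yn):", Yn.var(), " theory", n_idx * p * q)
print("cov    :", np.cov(Ym, Yn)[0, 1], " theory", p * q * min(m_idx, n_idx))
```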
Sine wave process
Xt ≜ V sin(Ωt + Φ),   t ∈ R
where V, Ω, and Φ are r.v.’s.
Stationarity (strict sense)
A random process Xt, t ∈ T is stationary (in the strict sense) if and only if all of the finite-dimensional probability distribution functions are invariant under shifts of the time origin.
Let Xt,−∞ < t < +∞ be a strictly stationary real random process. It then follows that
mX(t) = E[Xt] = E[X0] = const
RX(t, t− τ) = RX(0,−τ)
We generally write in this case
E[Xt] = E[X] = mX
RX(t, t− τ) = RX(τ) , t ∈ R
RX(−τ) = RX(τ)
|RX(τ)| ≤ RX(0)
Wide sense stationarity (wss)
Let Xt,−∞ < t < +∞ be a real random process such that
E[Xt] = E[X0] , t ∈ R
RX(t, t − τ) = RX(0, 0 − τ),   t ∈ R, τ ∈ R
Then the given random process is said to be stationary in the wide sense.
Jointly wide sense stationary random processes
We say that the random processes Xt, −∞ < t < +∞ and Yt, −∞ < t < +∞ are jointly wss, if Xt, −∞ < t < +∞ and Yt, −∞ < t < +∞ are wss and
RXY (t, t− τ) = RXY (0,−τ) , t ∈ R, τ ∈ R
Sample mean
Consider the wide-sense stationary random process Xt, −∞ < t < +∞ whose second moment is finite. Suppose that we sample that process at the n time instants t1, t2, · · · , tn. The estimator
mn ≜ (1/n) ∑_{i=1}^{n} Xi
where Xi ≜ Xti
is called the sample mean.
E[mn] = mX
var(mn) = (1/n²) ∑_{i=1}^{n} ∑_{j=1}^{n} KX(ti, tj)
Special cases are:
Xi pairwise uncorrelated ⇒ var(mn) = KX(0, 0)/n = σ²/n
  ⇒ lim_{n→∞} P[|mn − m| > ε] = 0   (the weak law of large numbers)
Xi highly correlated ⇒ var(mn) = σ²
Periodic sampling
Let the wss random process Xt, −∞ < t < +∞ be sampled periodically throughout the interval 0 ≤ t ≤ T in such a way that there are n sampling instants equally spaced throughout that interval (the last at t = T). The variance of the sample mean is given in this case by the formula
var(mn) = σ²/n + (2/n) ∑_{k=1}^{n−1} (1 − k/n) KX(k∆t)
where ∆t ≜ T/n. It therefore follows that
lim_{n→∞} var(mn) = (2/T) ∫_0^T (1 − τ/T) KX(τ) dτ
if we pass to the limit n → ∞ while keeping T fixed.
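The periodic-sampling formula is just the double sum var(mn) = (1/n²) ΣΣ KX(ti − tj) rearranged, which the sketch below verifies numerically; the covariance function KX(τ) = σ² e^{−|τ|/τc} is an assumed example, not taken from the text.

```python
# Sketch: periodic sampling of a wss process with an assumed covariance function.
import numpy as np

sigma2, tau_c, T, n = 1.0, 0.5, 10.0, 200
dt = T / n
K = lambda tau: sigma2 * np.exp(-np.abs(tau) / tau_c)   # assumed KX(tau)

# direct double sum over the n equally spaced sampling instants
t = dt * np.arange(1, n + 1)
var_direct = K(t[:, None] - t[None, :]).sum() / n**2

# formula from the notes: sigma^2/n + (2/n) sum_{k=1}^{n-1} (1 - k/n) KX(k*dt)
k = np.arange(1, n)
var_formula = sigma2 / n + (2.0 / n) * np.sum((1 - k / n) * K(k * dt))

# limiting form (2/T) * integral_0^T (1 - tau/T) KX(tau) dtau (simple Riemann sum)
tau = np.linspace(0.0, T, 200_001)
var_limit = (2.0 / T) * np.sum((1 - tau / T) * K(tau)) * (tau[1] - tau[0])

print(var_direct, var_formula, var_limit)   # first two agree; third is the n -> infinity limit
```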
10. LINEAR TRANSFORMATIONS
n-dimensional case
Suppose that the m-dimensional real random vector
Y = (Y1, Y2, · · · , Ym)
is generated from the n-dimensional real random vector
X = (X1, X2, · · · , Xn)
by the transformation g, that is,
Y = g(X)
We say that g is a linear transformation if and only if it satisfies the relation
g(aW + bZ) = ag(W ) + bg(Z) , ∀a, b ∈ R
Y1 = g11X1 + g12X2 + · · ·+ g1nXn
Y2 = g21X1 + g22X2 + · · ·+ g2nXn
· · ·
Ym = gm1X1 + gm2X2 + · · · + gmnXn
Yi = ∑_{j=1}^{n} gij Xj,   i = 1, 2, · · · , m
E[Yi] = ∑_{j=1}^{n} gij E[Xj]
cov(Yi, Yk) = ∑_{j=1}^{n} ∑_{r=1}^{n} gij gkr cov(Xj, Xr)
Matrix formulation
X = ( X1 )     Y = ( Y1 )     G = ( g11  g12 )
    ( X2 )         ( Y2 )         ( g21  g22 )
                   ( Y3 )         ( g31  g32 )
Y = GX
E[Y] = G E[X],   ΣY = G ΣX Gᵀ
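A short numpy sketch of the matrix formulation (the particular G, mean vector, and covariance matrix are assumed for illustration): the theoretical moments E[Y] = G E[X] and ΣY = G ΣX Gᵀ are compared with sample moments of transformed draws.

```python
# Sketch: moment propagation through the linear transformation Y = GX.
import numpy as np

rng = np.random.default_rng(2)
G = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])            # 3x2: X is 2-dimensional, Y is 3-dimensional
m_X = np.array([1.0, -2.0])
Sigma_X = np.array([[2.0, 0.5],
                    [0.5, 1.0]])

m_Y = G @ m_X                          # E[Y] = G E[X]
Sigma_Y = G @ Sigma_X @ G.T            # Sigma_Y = G Sigma_X G^T

# Monte Carlo check: sample X, transform, and compare sample moments
X = rng.multivariate_normal(m_X, Sigma_X, size=200_000)
Y = X @ G.T
print("E[Y] theory :", m_Y, "  sampled:", Y.mean(axis=0).round(3))
print("Sigma_Y theory:\n", Sigma_Y)
print("Sigma_Y sampled:\n", np.cov(Y, rowvar=False).round(3))
```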
Time averages
Xt, −∞ < t < +∞ -- r.p.
Yt ≜ (1/T) ∫_{t−T}^{t} Xτ dτ
E[Yt] = (1/T) ∫_{t−T}^{t} E[Xτ] dτ
Output autocorrelation function
RY(t1, t2) = E[Yt1 Yt2] = E[ (1/T) ∫_{t1−T}^{t1} Xα1 dα1 · (1/T) ∫_{t2−T}^{t2} Xα2 dα2 ]
           = E[ (1/T²) ∫_{t1−T}^{t1} ∫_{t2−T}^{t2} Xα1 Xα2 dα1 dα2 ]
           = (1/T²) ∫_{t1−T}^{t1} ∫_{t2−T}^{t2} RX(α1, α2) dα1 dα2
           = (1/T²) ∫_0^T ∫_0^T RX(τ1 + t1 − T, τ2 + t2 − T) dτ1 dτ2

Xt, −∞ < t < +∞ wss ⇒
E[Yt] = mX ≜ E[Xt]
RY(t1, t2) = (1/T²) ∫_0^T ∫_0^T RX(t1 − t2 + τ1 − τ2) dτ1 dτ2
RY(t, t) = (1/T²) ∫_0^T ∫_0^T RX(τ1 − τ2) dτ1 dτ2 = (2/T²) ∫_0^T ∫_{α1}^{T} RX(α1) dα2 dα1
         = (2/T²) ∫_0^T (T − α1) RX(α1) dα1 = (2/T) ∫_0^T (1 − τ/T) RX(τ) dτ

var(Yt) = RY(t, t) − mX² = (2/T) ∫_0^T (1 − τ/T) [RX(τ) − mX²] dτ = (2/T) ∫_0^T (1 − τ/T) KX(τ) dτ

∫_{−∞}^{∞} |KX(τ)| dτ < C ⇒ var(Yt) < C/T
Weighting functions
Time-invariant linear system
y(t) = ∫_{−∞}^{+∞} h(τ) x(t − τ) dτ
h(t) -- system weighting function
Output moments
Xt ∼ random input
Yt = ∫_{−∞}^{+∞} h(τ) Xt−τ dτ
E[Yt] = ∫_{−∞}^{+∞} h(τ) E[Xt−τ] dτ
KY(t1, t2) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} h(τ1) h(τ2) KX(t1 − τ1, t2 − τ2) dτ1 dτ2
Xt ∼ wss
E[Yt] = mX ∫_{−∞}^{+∞} h(τ) dτ
KY(τ) = ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} h(τ1) h(τ2) KX(τ − τ1 + τ2) dτ1 dτ2
RYX(τ) = ∫_{−∞}^{+∞} h(t′) RX(τ − t′) dt′
System correlation function
Rh(τ) ≜ ∫_{−∞}^{+∞} h(t) h(t − τ) dt
RY(τ) = ∫_{−∞}^{+∞} Rh(t′) RX(τ − t′) dt′
var(Yt) = ∫_{−∞}^{+∞} Rh(t′) KX(t′) dt′
11. SPECTRAL ANALYSIS
Fourier transforms
X(f) ≜ ∫_{−∞}^{+∞} x(t) e^{−i2πft} dt
x(t) = ∫_{−∞}^{+∞} X(f) e^{i2πft} df
System functions
h(t) -- the weighting function of a stable, time-invariant linear system.
H(f) ≜ ∫_{−∞}^{+∞} h(τ) e^{−i2πfτ} dτ   (the system function)
h(τ) = ∫_{−∞}^{+∞} H(f) e^{i2πfτ} df
y(t) = ∫_{−∞}^{+∞} h(τ) x(t − τ) dτ
Y (f) = X(f)H(f)
Spectral density
SX(f) ≜ ∫_{−∞}^{+∞} RX(τ) e^{−i2πfτ} dτ
RX(τ) = ∫_{−∞}^{+∞} SX(f) e^{i2πfτ} df
SX(0) = ∫_{−∞}^{+∞} RX(τ) dτ
E[Xt²] = RX(0) = ∫_{−∞}^{+∞} SX(f) df
SX(f) ≥ 0, for all f
Xt real ⇒ SX(−f) = SX(f)
Spectral analysis of linear system
Xt ∼ wss input , Yt ∼ wss output
SY(f) = |H(f)|² SX(f)
E[Yt²] = ∫_{−∞}^{+∞} |H(f)|² SX(f) df
If H has the value unity over a narrow band of width ∆f centered about a frequency f1, then
E[Yt²] = 2 SX(f1) ∆f
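In discrete time the relation SY(f) = |H(f)|² SX(f) can be illustrated by filtering white noise and estimating both spectra; the filter and all parameters below are assumptions for the sketch, not taken from the text.

```python
# Sketch: output spectral density of an LTI system driven by (approximately) white noise.
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
fs = 1.0
x = rng.standard_normal(2**18)               # white, wss input

h = signal.firwin(65, 0.1, fs=fs)            # an arbitrary FIR weighting function h
y = signal.lfilter(h, 1.0, x)                # output of the linear time-invariant system

f, Sx = signal.welch(x, fs=fs, nperseg=4096)     # estimated input spectral density
_, Sy = signal.welch(y, fs=fs, nperseg=4096)     # estimated output spectral density
_, H = signal.freqz(h, worN=f, fs=fs)            # system function on the same frequency grid

mask = np.abs(H)**2 > 1e-3                       # compare only where |H| is not negligible
print("median of S_Y / (|H|^2 S_X):",
      np.median(Sy[mask] / (np.abs(H[mask])**2 * Sx[mask])))   # close to 1
```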
Cross-spectral density
SXY(f) ≜ ∫_{−∞}^{+∞} RXY(τ) e^{−i2πfτ} dτ
RXY(τ) = ∫_{−∞}^{+∞} SXY(f) e^{i2πfτ} df
12. SUMS OF INDEPENDENT RANDOM VARIABLES
Independent-increment process
The real random process Yt, t ≥ 0 is said to be an independent-increment process if, for every set of time instants t0 < t1 < · · · < tn, the increments Yt1 − Yt0, Yt2 − Yt1, · · · , Ytn − Ytn−1 are mutually independent random variables.
Let Yt, t ≥ 0 be a real random process with stationary and independent increments and let Y0 = 0. If, given the time instants
0 = t0 < t1 < t2 < · · · < tn
Xk ≜ Ytk − Ytk−1,   for k = 1, 2, · · · , n
Ytn = ∑_{k=1}^{n} Xk
φYtn(v) = ∏_{k=1}^{n} φXk(v)
φYt1,Yt2,...,Ytn(v1, v2, · · · , vn) = ∏_{k=1}^{n} φXk( ∑_{j=k}^{n} vj )
Probability generating function
Let X be a discrete random variable with nonnegative integer possible values.
ψX(z) ≜ E[z^X] = ∑_k P[X = k] z^k
E[X] = ψ′X(1)
E[X(X − 1) · · · (X − n + 1)] = ψX^(n)(1)
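As an illustration (the Poisson distribution is an assumed example, not from the text), the generating function and its factorial-moment property can be worked out symbolically:

```python
# Sketch: probability generating function of a Poisson(lam) random variable.
import sympy as sp

z, lam = sp.symbols("z lam", positive=True)
k = sp.symbols("k", integer=True, nonnegative=True)

pmf = sp.exp(-lam) * lam**k / sp.factorial(k)            # P[X = k]
psi = sp.summation(pmf * z**k, (k, 0, sp.oo)).simplify() # psi_X(z) = E[z^X]
print(psi)                                               # equals exp(lam*(z - 1))

print(sp.diff(psi, z).subs(z, 1))                        # E[X] = psi'(1) = lam
print(sp.diff(psi, z, 2).subs(z, 1))                     # E[X(X-1)] = psi''(1) = lam**2
```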
Joint-probability generating functions
Each of the Xk is a nonnegative, integer-valued random variable.
ψX(z) ≜ E[z1^X1 z2^X2 · · · zn^Xn] = E[ ∏_{k=1}^{n} zk^Xk ]
mutually independent ⇔ ψX(z) = ∏_{k=1}^{n} ψXk(zk)
13. The Poisson process
Poisson process
Let Nt , 0 ≤ t < +∞ be a counting random process such that:
a. Nt assumes only nonnegative integer values and N0 ≜ 0,
b. The process has stationary and independent increments,
c. P[Nt+∆t − Nt = 1] = λ∆t + o(∆t)   (λ > 0)
d. P[Nt+∆t − Nt > 1] = o(∆t)
where
lim_{∆t→0} o(∆t)/∆t = 0
It then follows that
P [Nt+∆t −Nt = 0] = 1− λ∆t + o(∆t)
and that
P[Nt = k] = e^{−λt} (λt)^k / k!,   k = 0, 1, 2, · · ·
That is, Nt is a Poisson random variable. The counting process Nt, 0 ≤ t < ∞ is then called a Poisson counting process.
E[Nt] = λt
var(Nt) = λt
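A simulation sketch (not in the notes; λ and t are assumed values): building Nt from exponential interarrival times and checking that its mean and variance both equal λt.

```python
# Sketch: Poisson counting process via exponential interarrival times.
import numpy as np

rng = np.random.default_rng(4)
lam, t, trials = 2.0, 5.0, 50_000
max_events = 100                               # comfortably more than lam*t = 10

Z = rng.exponential(1.0 / lam, size=(trials, max_events))   # interarrival times Zk
T = np.cumsum(Z, axis=1)                                    # arrival times Tk
Nt = np.sum(T <= t, axis=1)                                 # Nt = number of arrivals by time t

print("E[Nt]  :", Nt.mean(), " theory:", lam * t)
print("var(Nt):", Nt.var(), " theory:", lam * t)
```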
Arrival times
Let Tk be the random variable which describes the arrival time of the kth event counted by the counting random process Nt, 0 ≤ t < ∞. Then
FTk(t) = 1 − FNt(k − 1)
If Nt, 0 ≤ t < ∞ is a Poisson counting process, then
FTk(t) = { 1 − e^{−λt} ∑_{j=0}^{k−1} (λt)^j / j!,   t ≥ 0
         { 0,                                       t < 0
fTk(t) = { λ e^{−λt} (λt)^{k−1} / (k−1)!,   t ≥ 0
         { 0,                               t < 0
that is, Tk has an Erlang probability density. In this case:
E[Tk] = k/λ
var(Tk) = k/λ²
φTk(v) = 1 / (1 − iv/λ)^k
Interarrival times
Let Nt, 0 ≤ t < +∞ be a counting process with the arrival times Tk, k = 1, 2, 3, · · · . The durations
Z1 ≜ T1
Zk ≜ Tk − Tk−1,   k = 2, 3, 4, · · ·
are called the interarrival times of the counting process. We then have
FZk(τ) = 1− P [Ntk−1+τ −Ntk−1 = 0]
If the counting process has stationary increments, then, for all k,
FZk(τ) = 1− P [Nτ = 0]
Further, if the given counting process is Poisson, then
FZk(τ) = { 1 − e^{−λτ},   τ ≥ 0
         { 0,             τ < 0
fZk(τ) = { λ e^{−λτ},   τ ≥ 0
         { 0,           τ < 0
In this case
E[Zk] = 1/λ,   k = 1, 2, 3, · · ·
In any case
E[Tk] = kE[Zk]
Renewal counting processes
Let Nt, 0 ≤ t < ∞ be a counting process. If the interarrival times of this counting process are mutually independent random variables, all with the same probability distribution function, then the given process is called a renewal counting process. The renewal function m(t) of a renewal counting process is the expected value of that process; that is,
m(t) ≜ E[Nt]
and its derivative
λ(t) ≜ dm(t)/dt
is called the renewal intensity of the process. It then follows that
m(t) = ∑_{k=1}^{∞} FTk(t)
and, if the various derivatives exist,
λ(t) = ∑_{k=1}^{∞} fTk(t)
On defining Λ(v) to be the Fourier transform of the renewal intensity, that is,
Λ(v) ≜ ∫_{−∞}^{+∞} λ(t) e^{ivt} dt
it then follows that
Λ(v) = φZ(v) / (1 − φZ(v))
where φZ is the common characteristic function of the interarrival times.
Unordered arrival times
Let Nt, 0 ≤ t < ∞ be a Poisson counting process and suppose that Nt = k; that is, suppose that k events occur by time t. The unordered arrival times U1, U2, · · · , Uk of those k events are then mutually independent random variables, each of which is uniformly distributed over the interval (0, t]:
fUi(ui | Nt = k) = { 1/t,   0 < ui ≤ t
                   { 0,     otherwise
for all i = 1, 2, · · · , k.
Filtered Poisson processes
Let Nt, 0 ≤ t < ∞ be a Poisson counting process. The random process Xt, 0 ≤ t < ∞ in which
Xt ≜ ∑_{j=1}^{Nt} h(t − Uj)
where an event which occurs at time uj generates an outcome h(t − uj) at time t and where the random variables Uj are the unordered arrival times of the events which occur during the interval (0, t], is called a filtered Poisson process. The mean of a filtered Poisson process is
E[Xt] = λ ∫_0^t h(u) du
the variance is
var(Xt) = λ ∫_0^t h(u)² du
and the characteristic function of the random variable Xt is
φXt(v) = exp[ λ ∫_0^t (e^{ivh(u)} − 1) du ]
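The mean and variance formulas can be checked by simulating the shot-noise sum directly; the pulse shape h(u) = e^{−u/tc} and all parameter values below are assumptions for the sketch, not taken from the text.

```python
# Sketch: filtered Poisson (shot-noise) process Xt = sum_j h(t - Uj).
import numpy as np

rng = np.random.default_rng(5)
lam, t, tc, trials = 3.0, 10.0, 0.5, 20_000
h = lambda u: np.exp(-u / tc)                   # assumed pulse shape, u >= 0

Xt = np.empty(trials)
for i in range(trials):
    k = rng.poisson(lam * t)                    # Nt ~ Poisson(lam*t)
    U = rng.uniform(0.0, t, size=k)             # unordered arrival times, uniform on (0, t]
    Xt[i] = h(t - U).sum()

du = 1e-4
u = np.arange(0.0, t, du)                       # Riemann sums for the theoretical integrals
print("E[Xt]   :", Xt.mean(), " theory:", lam * np.sum(h(u)) * du)
print("var(Xt) :", Xt.var(),  " theory:", lam * np.sum(h(u)**2) * du)
```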
Random partitioning
Let Nt, 0 ≤ t < ∞ be a Poisson counting process and let Xt, 0 ≤ t < ∞ be the corresponding filtered Poisson process in which
Xt ≜ ∑_{j=1}^{Nt} h(t − Uj)
We say that the random process Zt, 0 ≤ t < ∞ is a randomly partitioned filtered Poisson random process if
Zt ≜ ∑_{j=1}^{Nt} Yj h(t − Uj)
where the partitioning random variables Yj are mutually independent random variables which are independent of the unordered arrival times Uj, and where each of the Yj has the same Bernoulli probability distribution
P[Yj = 1] = p and P[Yj = 0] = q ≜ 1 − p
where 0 < p < 1. In this case,
E[Zt] = pλ ∫_0^t h(u) du = p E[Xt]
The characteristic function of the randomly partitioned random variable Zt is
φZt(v) = exp[ pλ ∫_0^t (e^{ivh(u)} − 1) du ]
14. Gaussian random process
Gaussian random vectors
Y = (Y1, Y2, · · · , Ym)
φY(v) = exp( i mYᵀ v − (1/2) vᵀ ΣY v )
fY(y) = exp[ −(1/2) (y − mY)ᵀ ΣY⁻¹ (y − mY) ] / [ (2π)^{m/2} |ΣY|^{1/2} ]
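A short numpy sketch (the mean vector and covariance matrix are assumed for illustration): drawing samples of a gaussian random vector and comparing the sample moments with mY and ΣY.

```python
# Sketch: sampling a gaussian random vector Y ~ N(m_Y, Sigma_Y).
import numpy as np

rng = np.random.default_rng(6)
m_Y = np.array([1.0, -1.0, 0.5])
Sigma_Y = np.array([[2.0, 0.8, 0.2],
                    [0.8, 1.0, 0.3],
                    [0.2, 0.3, 0.5]])   # positive definite by construction

Y = rng.multivariate_normal(m_Y, Sigma_Y, size=200_000)
print("sample mean      :", Y.mean(axis=0).round(3))
print("sample covariance:\n", np.cov(Y, rowvar=False).round(3))
```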
Gaussian random processes
The real random process Yt, t ∈ T is said to be a gaussian random process if for every finite set of time instants tj ∈ T, the corresponding random variables Ytj are jointly gaussian random variables.
Narrowband random processes
The random process Xt, −∞ < t < ∞ is said to be a narrowband random process if it has a zero mean, is stationary in the wide sense, and if its spectral density SX differs from zero only in some narrow band of width ∆f centered about some frequency f0 where
f0 >> ∆f
A narrowband random process may be represented in terms of an envelope random process Vt, −∞ < t < ∞ and a phase random process Φt, −∞ < t < ∞ by using the relation
Xt = Vt cos(ω0t + Φt)
where ω0 = 2πf0. Alternatively, a narrowband random process may also be represented in terms of cosine and sine component random processes Xct, −∞ < t < ∞ and Xst, −∞ < t < ∞, respectively, by using the relation
Xt = Xct cos ω0t−Xst sin ω0t
The relations between these two representations are given by the formulas
Xct = Vt cosΦt and Xst = Vt sinΦt
which have the inverses
Vt = √(Xct² + Xst²) and Φt = tan⁻¹(Xst/Xct)
The random variables Xct, Xst, Xc(t+τ), and Xs(t+τ) have the covariance matrix
R(τ) = (  RX(0)      0         Rc(τ)     Rcs(τ)  )
       (  0          RX(0)    −Rcs(τ)    Rc(τ)   )
       (  Rc(τ)     −Rcs(τ)    RX(0)     0       )
       (  Rcs(τ)     Rc(τ)     0         RX(0)   )
where
Rc(τ) = 2 ∫_0^{+∞} SX(f) cos[2π(f − f0)τ] df
and
Rcs(τ) = 2 ∫_0^{+∞} SX(f) sin[2π(f − f0)τ] df
Narrowband gaussian processes
The cosine- and sine-component random variables Xct and Xst of a gaussian narrowband random process are independent random variables with zero means, each with a variance equal to RX(0), and a joint-probability density
fXctXst(x, y) = exp[ −(x² + y²)/(2RX(0)) ] / (2πRX(0))
The envelope and phase random variables Vt and Φt of a gaussian narrowband random process are also independent random variables. The envelope has the Rayleigh probability density
fVt(v) = { (v/RX(0)) exp[ −v²/(2RX(0)) ],   v ≥ 0
         { 0,                               otherwise
and the phase is uniformly distributed over [0, 2π]:
fΦt(φ) = 1/(2π),   0 ≤ φ ≤ 2π
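A simulation sketch (the filter design and all parameters are assumptions, not from the text): band-pass filtering white gaussian noise gives a narrowband gaussian process whose envelope, extracted via the analytic signal, satisfies E[Vt²] = 2RX(0), consistent with the Rayleigh density above.

```python
# Sketch: narrowband gaussian process and its Rayleigh-distributed envelope.
import numpy as np
from scipy import signal

rng = np.random.default_rng(7)
fs, f0, bw = 1000.0, 100.0, 10.0                  # sampling rate, center frequency, bandwidth
x = rng.standard_normal(2**18)

# narrow band-pass filter around f0 (bw << f0), applied to white gaussian noise
sos = signal.butter(4, [f0 - bw / 2, f0 + bw / 2], btype="bandpass", fs=fs, output="sos")
X = signal.sosfilt(sos, x)[5000:]                 # drop the filter transient

V = np.abs(signal.hilbert(X))                     # envelope Vt from the analytic signal
RX0 = X.var()                                     # RX(0) of the zero-mean narrowband process

print("E[Vt^2] (simulated):", np.mean(V**2))
print("2 RX(0) (Rayleigh) :", 2 * RX0)            # these should agree closely
```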