Page 1: ASP Lecture 2 ARMA Modelling

Linear Stochastic Models

Special Types of Random Processes: AR,MA, and ARMA

Advanced Signal Processing

Department of Electrical and Electronic Engineering, Imperial College

[email protected]

© Danilo P. Mandic, Advanced Signal Processing

Page 2: ASP Lecture 2 ARMA Modelling

Motivation:- Wold Decomposition Theorem

The most fundamental justification for time series analysis is due to Wold's decomposition theorem, where it is explicitly proved that any (stationary) time series can be decomposed into two different parts.

Therefore, a general random process can be written as a sum of two processes

x[n] = xp[n] + xr[n]

⇒ xr[n] – regular random process
⇒ xp[n] – predictable process, with xr[n] ⊥ xp[n],

E{xr[m]xp[n]} = 0

that is, we can separately treat the predictable process (i.e. a deterministic signal) and a random signal.


Page 3: ASP Lecture 2 ARMA Modelling

What do we actually mean?

(Figure panels: a) Periodic oscillations, b) Small nonlinearity, c) Route to chaos, d) Route to chaos, e) Small noise, f) HMM and others.)


Page 4: ASP Lecture 2 ARMA Modelling

Example from brain science

(Figures: electrode positions; top: raw EEG, bottom: EOG artefact.)


Page 5: ASP Lecture 2 ARMA Modelling

Linear Stochastic Processes

It therefore follows that the general form for the power spectrum of a WSS process is

Px(e^{ȷω}) = Pxr(e^{ȷω}) + Σ_{k=1}^{N} αk u0(ω − ωk)

We look at processes generated by filtering white noise with a linear shift–invariant filter that has a rational system function. These include the

• Autoregressive (AR) → all pole system

• Moving Average (MA) → all zero system

• Autoregressive Moving Average (ARMA) → poles and zeros

Notice the difference between shift–invariance and time–invariance


Page 6: ASP Lecture 2 ARMA Modelling

ACF and Spectrum of ARMA models

Of much interest are the autocorrelation function and power spectrum of these processes. (Recall that ACF ≡ PSD in terms of the available information.)

Suppose that we filter white noise w[n] with a causal linear shift–invariant filter having a rational system function with p poles and q zeros

H(z) = Bq(z)/Ap(z) = ( Σ_{k=0}^{q} bq(k) z^{−k} ) / ( 1 + Σ_{k=1}^{p} ap(k) z^{−k} )

Assuming that the filter is stable, the output process x[n] will be wide–sense stationary and, with Pw = σw², the power spectrum of x[n] will be

Px(z) = σw² Bq(z)Bq(z⁻¹) / ( Ap(z)Ap(z⁻¹) )

Recall that "(·)*" in analogue frequency corresponds to "z⁻¹" in "digital frequency".
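To see this spectral shaping numerically, here is a minimal Matlab sketch (the coefficients b and a are illustrative; pwelch and freqz from the Signal Processing Toolbox are assumed). White noise is filtered by H(z) = B(z)/A(z) and the Welch PSD estimate is compared with the theoretical σw²|H(e^{ȷω})|².

b = [1 0.5]; a = [1 -0.7]; % illustrative B(z) and A(z): one zero, one pole
w = randn(8192,1); % white noise, sigma_w^2 = 1
x = filter(b,a,w); % shaped (coloured) process
[Pest,f] = pwelch(x,[],[],[],2); % PSD estimate; fs = 2 puts f on [0,1]
[H,fh] = freqz(b,a,512,2); % frequency response on the same scale
plot(f,10*log10(Pest),fh,10*log10(abs(H).^2)) % estimate vs sigma_w^2*|H|^2, in dB
legend('Welch estimate','\sigma_w^2|H|^2')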


Page 7: ASP Lecture 2 ARMA Modelling

Frequency Domain

In terms of "digital" frequency θ (on the unit circle, e^{−ȷθ} = e^{−ȷωT})

• Bq(z)Bq(z⁻¹) → a "quadratic form" and real valued

• Ap(z)Ap(z⁻¹) → a "quadratic form" and real valued

Px(e^{ȷθ}) = σw² |Bq(e^{ȷθ})|² / |Ap(e^{ȷθ})|²

We are therefore using H(z) to shape the spectrum of white noise.

A process having a power spectrum of this form is known as an autoregressive moving average process of order (p, q) and is referred to as an

ARMA(p,q) process


Page 8: ASP Lecture 2 ARMA Modelling

Example

Plot the power spectrum of an ARMA(2,2) process for which

• the zeros of H(z) are z = 0.95e±ȷπ/2

• poles are at z = 0.9e±ȷ2π/5

Solution: The system function is (poles and zeros – resonance & sink)

H(z) = (1 + 0.9025 z⁻²) / (1 − 0.5562 z⁻¹ + 0.81 z⁻²)

(Figure: the power spectrum of this ARMA(2,2) process versus frequency, peaked near the pole angle 2π/5 and dipping near the zero angle π/2.)
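A short sketch to reproduce this plot (assuming freqz from the Signal Processing Toolbox), using the coefficients of H(z) above:

b = [1 0 0.9025]; % zeros at 0.95exp(+/-j*pi/2)
a = [1 -0.5562 0.81]; % poles at 0.9exp(+/-j*2*pi/5)
[H,w] = freqz(b,a,512); % frequency response on [0,pi] rad/sample
plot(w,abs(H).^2) % peak near 2*pi/5, dip near pi/2
xlabel('Frequency'), ylabel('Power Spectrum')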


Page 9: ASP Lecture 2 ARMA Modelling

Difference Equation Representation

Random processes x[n] and w[n] are related by the linear constant coefficient equation

x[n] − Σ_{l=1}^{p} ap(l) x[n−l] = Σ_{l=0}^{q} bq(l) w[n−l]

Notice that the autocorrelation function of x[n] and the crosscorrelation between x[n] and w[n] follow the same difference equation, i.e. if we multiply both sides of the above equation by x[n−k] and take the expected value, we have

rxx(k) − Σ_{l=1}^{p} ap(l) rxx(k−l) = Σ_{l=0}^{q} bq(l) rxw(k−l)

Since x is WSS, it follows that x[n] and w[n] are jointly WSS.


Page 10: ASP Lecture 2 ARMA Modelling

General Linear Processes: Stationarity and Invertibility

Consider a linear stochastic process → the output from a linear filter, driven by WGN w[n]

x[n] = w[n] + b1w[n−1] + b2w[n−2] + · · · = w[n] + Σ_{j=1}^{∞} bj w[n−j]

that is, a weighted sum of past inputs w[n].

For this process to be a valid stationary process, the coefficients must be absolutely summable, that is Σ_{j=0}^{∞} |bj| < ∞.

The model implies that, under suitable conditions, x[n] is also a weighted sum of past values of x, plus an added shock w[n], that is

x[n] = a1x[n− 1] + a2x[n− 2] + · · ·+ w[n]

• A linear process is stationary if Σ_{j=0}^{∞} |bj| < ∞

• A linear process is invertible if Σ_{j=0}^{∞} |aj| < ∞


Page 11: ASP Lecture 2 ARMA Modelling

Are these ARMA(p,q) processes?

• Unit step u[n] = { 0, n < 0;  1, n ≥ 0 }

– If w[n] = δ[n] then u[n] = u[n−1] + w[n], n ≥ 0

• Ramp function r[n] = { 0, n < 0;  n, n ≥ 0 }

– If w[n] = u[n] then r[n] = r[n−1] + w[n], n ≥ 0


Page 12: ASP Lecture 2 ARMA Modelling

Autoregressive Processes

A general AR(p) process (autoregressive of order p) is given by

x[n] = a1x[n−1] + · · · + apx[n−p] + w[n] = Σ_{i=1}^{p} ai x[n−i] + w[n]

Observe the auto–regression above

Duality between AR and MA processes:

For instance, the first-order autoregressive process

x[n] = a1x[n−1] + w[n] ⇔ x[n] = Σ_{j=0}^{∞} bj w[n−j],   with bj = a1^j

This "all–pole" nature reflects the duality between IIR and FIR filters.
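A minimal sketch of this duality (the truncation length K is an illustrative choice): the all-pole AR(1) recursion, realised as an IIR filter, is compared with its truncated MA(∞) representation bj = a1^j, realised as an FIR filter.

a1 = 0.8; w = randn(500,1); % AR(1) parameter and driving noise
x_ar = filter(1,[1 -a1],w); % IIR: x[n] = a1*x[n-1] + w[n]
K = 50; b = a1.^(0:K); % FIR weights b_j = a1^j, truncated at K
x_ma = filter(b,1,w); % truncated MA(infinity) representation
max(abs(x_ar - x_ma)) % tiny, since |a1| < 1 (error ~ a1^(K+1))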


Page 13: ASP Lecture 2 ARMA Modelling

ACF and Spectrum of AR Processes

To obtain the autocorrelation function of an AR process, multiply the above equation by x[n−k] to obtain

x[n−k]x[n] = a1x[n−k]x[n−1] + a2x[n−k]x[n−2] + · · · + apx[n−k]x[n−p] + x[n−k]w[n]

Notice that E{x[n− k]w[n]} vanishes when k > 0. Therefore we have

rxx(k) = a1rxx(k−1) + a2rxx(k−2) + · · · + aprxx(k−p),   k > 0

On dividing throughout by rxx(0) we obtain

ρ(k) = a1ρ(k−1) + a2ρ(k−2) + · · · + apρ(k−p),   k > 0

The quantities ρ(k) are called normalised correlation coefficients.


Page 14: ASP Lecture 2 ARMA Modelling

Variance and Spectrum of AR Processes

Variance: when k = 0 the contribution from the term E{x[n−k]w[n]} is σw², and

rxx(0) = a1rxx(−1) + a2rxx(−2) + · · · + aprxx(−p) + σw²

Divide by rxx(0) = σx² to obtain

σx² = σw² / ( 1 − ρ1a1 − ρ2a2 − · · · − ρpap )

Spectrum:

Pxx(f) = 2σw² / |1 − a1e^{−ȷ2πf} − · · · − ape^{−ȷ2πpf}|²,   0 ≤ f ≤ 1/2

Look into “Spectrum of Linear Systems” from Lecture 0: Course Introduction


Page 15: ASP Lecture 2 ARMA Modelling

Yule–Walker Equations

For k = 1, 2, . . . , p, from the general autocorrelation function we obtain a set of equations:-

rxx(1) = a1rxx(0) + a2rxx(1) + · · · + aprxx(p−1)

rxx(2) = a1rxx(1) + a2rxx(0) + · · · + aprxx(p−2)

⋮

rxx(p) = a1rxx(p−1) + a2rxx(p−2) + · · · + aprxx(0)

These equations are called the Yule–Walker or normal equations.

Their solution gives us the set of autoregressive parameters a = [a1, . . . , ap]^T. This can be expressed in vector–matrix form as

a = Rxx⁻¹ rxx

Due to the Toeplitz structure of Rxx, its positive definiteness guarantees that the matrix inversion exists (see the sketch below).
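A minimal numerical sketch of a = Rxx⁻¹rxx (toeplitz and the backslash solver assumed), using the ACF of a known AR(1), rxx(k) = 0.9^k, so that the solution can be checked:

p = 2; r = 0.9.^(0:p)'; % rxx(0..p) of an AR(1) with a1 = 0.9
Rxx = toeplitz(r(1:p)); % p-by-p Toeplitz autocorrelation matrix
rxx = r(2:p+1); % right-hand side [rxx(1),...,rxx(p)]'
a = Rxx \ rxx % returns [0.9; 0]: an AR(1) embedded in an AR(2) model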


Page 16: ASP Lecture 2 ARMA Modelling

ACF Coefficients

For the autocorrelation coefficients

ρk = rxx(k)/rxx(0)

we have

ρ1 = a1 + a2ρ1 + · · · + apρp−1

ρ2 = a1ρ1 + a2 + · · · + apρp−2

⋮

ρp = a1ρp−1 + a2ρp−2 + · · · + ap

When does the sequence {ρ0, ρ1, ρ2, . . .} vanish?

Homework:- Try command xcorr in Matlab


Page 17: ASP Lecture 2 ARMA Modelling

Example:- Yule–Walker modelling in Matlab

In Matlab – Power spectral density using Y–W method pyulear

Pxx = pyulear(x,p)

[Pxx,w] = pyulear(x,p,nfft)

[Pxx,f] = pyulear(x,p,nfft,fs)

[Pxx,f] = pyulear(x,p,nfft,fs,’range’)

[Pxx,w] = pyulear(x,p,nfft,’range’)

Description:-

Pxx = pyulear(x,p)

implements the Yule-Walker algorithm, and returns Pxx, an estimate of the power spectral density (PSD) of the vector x.

To remember for later → this estimate is also a maximum entropy estimate.

See also aryule, lpc, pburg, pcov, peig, periodogram


Page 18: ASP Lecture 2 ARMA Modelling

Example:- AR(p) signal generation

• Generate the input signal x by filtering white noise through the AR filter

• Estimate the PSD of x based on a fourth-order AR model

Solution:-

randn('state',1); % legacy seeding of the generator
x = filter(1,a,randn(256,1)); % AR system output (a is defined on the next slide)
pyulear(x,4) % Fourth-order estimate


Page 19: ASP Lecture 2 ARMA Modelling

Alternatively:- Yule–Walker modelling

AR(4) system given by

y[n] = 2.2137y[n−1] − 2.9403y[n−2] + 2.1697y[n−3] − 0.9606y[n−4] + w[n]

a = [1 -2.2137 2.9403 -2.1697 0.9606]; % AR filter coefficients

freqz(1,a) % AR filter frequency response

title(’AR System Frequency Response’)


Page 20: ASP Lecture 2 ARMA Modelling

From Data to AR(p) Model

So far, we assumed the model (AR, MA, or ARMA) and analysed the ACF and PSD based on known model coefficients.

In practice:- DATA → MODEL

This procedure is as follows:-

* record data x[n]
* find the autocorrelation of the data, ACF(x)
* divide by rxx(0) to obtain the correlation coefficients ρ(k)
* write down the Yule-Walker equations
* solve for the vector of AR parameters

The problem is that we do not know the model order p beforehand; we will deal with this problem later in Lecture 2. A sketch of the procedure follows.
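A hedged sketch of this procedure in Matlab (the AR(2) example of the next slide is used as data, and the trial order p = 2 is taken as known here):

w = randn(1000,1);
x = filter(1,[1 -1.2 0.8],w); % record data: the AR(2) example of the next slide
[r,lags] = xcorr(x,'biased'); % ACF of the data
r = r(lags >= 0); % keep lags 0,1,2,... so that r(1) = rxx(0)
p = 2; rho = r(2:p+1)/r(1); % correlation coefficients rho(1..p)
R = toeplitz([1; rho(1:p-1)]); % Yule-Walker matrix in normalised form
a_hat = R \ rho % solve: approximately [1.2; -0.8]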


Page 21: ASP Lecture 2 ARMA Modelling

Example:- Finding parameters of x[n] = 1.2x[n−1] − 0.8x[n−2] + w[n]

(Figures: the AR(2) signal x = filter([1],[1, −1.2, 0.8], w) versus sample number; its ACF over all correlation lags; and the same ACF zoomed to lags −20 to 20.)

Apply:- for i = 1:6; [a,e] = aryule(x,i); display(a); end

a(1) = [0.6689] a(2) = [1.2046,−0.8008]

a(3) = [1.1759,−0.7576,−0.0358]

a(4) = [1.1762,−0.7513,−0.0456, 0.0083]

a(5) = [1.1763,−0.7520,−0.0562, 0.0248,−0.0140]

a(6) = [1.1762,−0.7518,−0.0565, 0.0198,−0.0062,−0.0067]


Page 22: ASP Lecture 2 ARMA Modelling

Special case:- AR(1) Process (Markov)

Given below (recall the Markov property: p(x[n] | x[n−1], . . . , x[0]) = p(x[n] | x[n−1]))

x[n] = a1x[n−1] + w[n] = w[n] + a1w[n−1] + a1²w[n−2] + · · ·

i) For the process to be stationary, −1 < a1 < 1.

ii) Autocorrelation function:- from the Yule-Walker equations

rxx(k) = a1rxx(k−1),   k > 0

or, for the correlation coefficients, with ρ0 = 1

ρk = a1^k,   k > 0

Notice the difference in the behaviour of the ACF for a1 positive and negative


Page 23: ASP Lecture 2 ARMA Modelling

Variance and Spectrum of AR(1) process

These can be calculated directly from the general expressions for the variance and spectrum of AR(p) processes.

• Variance:- also from the general expression for the variance of linear processes from Lecture 1

σx² = σw² / (1 − ρ1a1) = σw² / (1 − a1²)

• Spectrum:- notice how the flat PSD of WGN is shaped according to the position of the pole of the AR(1) model (LP or HP)

Pxx(f) = 2σw² / |1 − a1e^{−ȷ2πf}|² = 2σw² / ( 1 + a1² − 2a1 cos(2πf) )


Page 24: ASP Lecture 2 ARMA Modelling

Example: ACF and Spectrum of AR(1) for a = ±0.8

(Figures: for x[n] = 0.8x[n−1] + w[n] and x[n] = −0.8x[n−1] + w[n]: the signal values versus sample number, the ACF versus correlation lag, and the Burg power spectral density estimate of each.)

a < 0 → High Pass a > 0 → Low Pass
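A sketch reproducing this comparison (parameters from the slide; pburg, as in the figure titles, is assumed available):

a1 = 0.8; % set a1 = -0.8 for the high-pass case
x = filter(1,[1 -a1],randn(1000,1)); % AR(1): x[n] = a1*x[n-1] + w[n]
subplot(2,1,1), stem(0:20,a1.^(0:20)) % theoretical ACF rho_k = a1^k
title('ACF'), xlabel('Correlation lag')
subplot(2,1,2), pburg(x,1) % Burg PSD estimate of order 1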


Page 25: ASP Lecture 2 ARMA Modelling

Special Case:- Second Order Autoregressive Processes AR(2)

The input–output functional relationship is given by

x[n] = a1x[n− 1] + a2x[n− 2] + w[n]

For stationarity (to be proven later):-

a1 + a2 < 1

a2 − a1 < 1

−1 < a2 < 1

This will be shown within the so–called “stability triangle”
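As a quick check (values taken from the AR(2) example on page 31), the three conditions agree with both roots of the characteristic equation z² − a1z − a2 = 0 lying inside the unit circle:

a1 = 0.75; a2 = -0.5; % the AR(2) example used on page 31
(a1+a2 < 1) & (a2-a1 < 1) & (abs(a2) < 1) % stability triangle conditions
all(abs(roots([1 -a1 -a2])) < 1) % equivalent: poles inside the unit circle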


Page 26: ASP Lecture 2 ARMA Modelling

Work by Yule – Modelling of sunspot numbers

Recorded for more than 300 years.

In 1927, Yule modelled them and invented the AR(2) model.

(Figures: the sunspot series versus sample number, and the ACF of the sunspot series versus correlation lag.)

Sunspot numbers and their autocorrelation function


Page 27: ASP Lecture 2 ARMA Modelling

Autocorrelation function of AR(2) processes

The ACF

ρk = a1ρk−1 + a2ρk−2 k > 0

• Real roots (a1² + 4a2 > 0): the ACF is a mixture of damped exponentials

• Complex roots (a1² + 4a2 < 0): the ACF exhibits a pseudo–periodic behaviour

ρk = D^k sin(2πf0k + F) / sin F

where D is the damping factor of a sine wave with frequency f0 and phase F:

D = √(−a2),   cos(2πf0) = a1 / ( 2√(−a2) ),   tan(F) = [ (1 + D²) / (1 − D²) ] tan(2πf0)
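A small sketch evaluating these quantities for the AR(2) example used on page 31 (a1 = 0.75, a2 = −0.5, so the roots are complex):

a1 = 0.75; a2 = -0.5; % a1^2 + 4*a2 = -1.4375 < 0: complex roots
D = sqrt(-a2) % damping factor, approx 0.71
f0 = acos(a1/(2*sqrt(-a2)))/(2*pi) % approx 0.161, i.e. period 1/f0 approx 6.2
F = atan((1+D^2)/(1-D^2)*tan(2*pi*f0)) % phase of the damped sinusoid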


Page 28: ASP Lecture 2 ARMA Modelling

Stability Triangle

(Figure: the stability triangle in the (a1, a2) plane, with vertices at (−2, −1), (2, −1) and (0, 1), divided by the curve a1² + 4a2 = 0 into real-root regions I and II and complex-root regions III and IV, with an inset ACF sketch for each region.)

i) Real roots, Region 1: monotonically decaying ACF
ii) Real roots, Region 2: decaying oscillating ACF
iii) Complex roots, Region 3: oscillating pseudoperiodic ACF
iv) Complex roots, Region 4: pseudoperiodic ACF


Page 29: ASP Lecture 2 ARMA Modelling

Yule–Walker Equations

Substituting p = 2 into the Y-W equations we have

ρ1 = a1 + a2ρ1

ρ2 = a1ρ1 + a2

which when solved for a1 and a2 gives

a1 = ρ1(1 − ρ2) / (1 − ρ1²),   a2 = (ρ2 − ρ1²) / (1 − ρ1²)

or, substituting back into the equations for ρ,

ρ1 = a1 / (1 − a2),   ρ2 = a2 + a1² / (1 − a2)
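A small numerical check of these two mappings (values from the AR(2) example of page 31):

a1 = 0.75; a2 = -0.5;
rho1 = a1/(1-a2); rho2 = a2 + a1^2/(1-a2); % parameters -> correlations
a1_rec = rho1*(1-rho2)/(1-rho1^2) % recovers a1 = 0.75
a2_rec = (rho2-rho1^2)/(1-rho1^2) % recovers a2 = -0.5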


Page 30: ASP Lecture 2 ARMA Modelling

Variance and Spectrum

More specifically, for the AR(2) process, we have:-

Variance

σx² = σw² / (1 − ρ1a1 − ρ2a2) = [ (1 − a2) / (1 + a2) ] · σw² / [ (1 − a2)² − a1² ]

Spectrum

Pxx(f) = 2σw² / |1 − a1e^{−ȷ2πf} − a2e^{−ȷ4πf}|²

       = 2σw² / [ 1 + a1² + a2² − 2a1(1 − a2) cos(2πf) − 2a2 cos(4πf) ],   0 ≤ f ≤ 1/2


Page 31: ASP Lecture 2 ARMA Modelling

Example AR(2): x[n] = 0.75x[n−1] − 0.5x[n−2] + w[n]

(Figures: the AR(2) signal x[n] = 0.75x[n−1] − 0.5x[n−2] + w[n] versus sample number; its ACF versus correlation lag; and the Burg power spectral density estimate.)

The damping factor D = √0.5 = 0.71 and the frequency f0 = cos⁻¹(0.5303)/(2π) = 1/6.2, so the fundamental period of the autocorrelation function is 6.2.


Page 32: ASP Lecture 2 ARMA Modelling

Partial Autocorrelation Function:- Motivation

Let us revisit the example from page 21 of the lecture slides.

(Figures: as on page 21, the AR(2) signal x = filter([1],[1, −1.2, 0.8], w), its ACF over all lags, and the ACF zoomed to lags −20 to 20.)

We do not know p, so let us re-write the coefficients of a fitted model of order p as [ap1, . . . , app]

p = 1 → [0.6689] = a11
p = 2 → [1.2046, −0.8008] = [a21, a22]
p = 3 → [1.1759, −0.7576, −0.0358] = [a31, a32, a33]
p = 4 → [1.1762, −0.7513, −0.0456, 0.0083] = [a41, a42, a43, a44]
p = 5 → [1.1763, −0.7520, −0.0562, 0.0248, −0.0140] = [a51, . . . , a55]
p = 6 → [1.1762, −0.7518, −0.0565, 0.0198, −0.0062, −0.0067] = [a61, . . . , a66]


Page 33: ASP Lecture 2 ARMA Modelling

Partial Autocorrelation Function

Notice: the ACF of an AR(p) process is infinite in duration, but it can be described in terms of p nonzero functions.

Denote by akj the jth coefficient in an autoregressive representation of order k, so that akk is the last coefficient. Then

ρj = ak1ρj−1 + · · · + ak(k−1)ρj−k+1 + akkρj−k,   j = 1, 2, . . . , k

leading to the Yule–Walker equations, which can be written as

⎡ 1     ρ1    ρ2    · · ·  ρk−1 ⎤ ⎡ ak1 ⎤   ⎡ ρ1 ⎤
⎢ ρ1    1     ρ1    · · ·  ρk−2 ⎥ ⎢ ak2 ⎥   ⎢ ρ2 ⎥
⎢ ⋮     ⋮     ⋮     ⋱      ⋮   ⎥ ⎢ ⋮  ⎥ = ⎢ ⋮ ⎥
⎣ ρk−1  ρk−2  ρk−3  · · ·  1    ⎦ ⎣ akk ⎦   ⎣ ρk ⎦


Page 34: ASP Lecture 2 ARMA Modelling

Partial ACF Coefficients:

Solving these equations for k = 1, 2, . . . successively, we obtain

a11 = ρ1,   a22 = (ρ2 − ρ1²) / (1 − ρ1²),

a33 = | 1  ρ1  ρ1 ;  ρ1  1  ρ2 ;  ρ2  ρ1  ρ3 | / | 1  ρ1  ρ2 ;  ρ1  1  ρ1 ;  ρ2  ρ1  1 |,   etc.

(the |·| are determinants, with rows separated by semicolons)

• The quantity akk, regarded as a function of lag k, is called the partial autocorrelation function.

• For an AR(p) process, the PAC akk will be nonzero for k ≤ p and zero for k > p ⇒ it tells us the order of an AR(p) process (see the sketch below).
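A minimal sketch of this order-detection idea, assuming aryule: fit models of increasing order k and keep the last coefficient akk of each fit (aryule returns [1, −ak1, . . . , −akk], hence the sign flip):

x = filter(1,[1 -1.2 0.8],randn(1000,1)); % the AR(2) example again
K = 10; pacf = zeros(K,1);
for k = 1:K
    a = aryule(x,k); % a = [1, -ak1, ..., -akk]
    pacf(k) = -a(end); % partial autocorrelation a_kk
end
stem(1:K,pacf) % approximately zero for k > 2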


Page 35: ASP Lecture 2 ARMA Modelling

Importance of Partial ACF

For a zero mean process x[n], the best linear predictor in the mean square error sense of x[n] based on x[n−1], x[n−2], . . . is

x̂[n] = ak−1,1x[n−1] + ak−1,2x[n−2] + · · · + ak−1,k−1x[n−k+1]

(apply the E{·} operator to the general AR(p) model expression, and recall that E{w[n]} = 0)

(Hint: E{x[n]} = x̂[n] = E{ak−1,1x[n−1] + · · · + ak−1,k−1x[n−k+1] + w[n]} = ak−1,1x[n−1] + · · · + ak−1,k−1x[n−k+1])

This holds whether the process is an AR process or not.

In MATLAB, check the function:

ARYULE

and functions

PYULEAR, ARMCOV, ARBURG, ARCOV, LPC, PRONY


Page 36: ASP Lecture 2 ARMA Modelling

Model order for Sunspot numbers

(Figures: the sunspot series; its ACF; its partial ACF; and the Burg power spectral density estimate.)

Sunspot numbers, their ACF and partial autocorrelation (PAC). After lag k = 2, the PAC becomes very small.


Page 37: ASP Lecture 2 ARMA Modelling

Model order for AR(2) generated process

(Figures: the AR(2) signal; its ACF; its partial ACF; and the Burg power spectral density estimate.)

AR(2) signal, its ACF and partial autocorrelation (PAC). After lag k = 2, the PAC becomes very small.


Page 38: ASP Lecture 2 ARMA Modelling

Model order for AR(3) generated process

(Figures: the AR(3) signal; its ACF; its partial ACF; and the Burg power spectral density estimate.)

AR(3) signal, its ACF and partial autocorrelation (PAC). After lag k = 3, the PAC becomes very small.


Page 39: ASP Lecture 2 ARMA Modelling

Model order for a financial time series

From:- http://finance.yahoo.com/q/ta?s=%5EIXIC&t=1d&l=on&z=m&q=b&p=v&a=&c=

(Figures: the Nasdaq composite from June 2003 to February 2007, plotted both ascending and, time-reversed, descending; below each, the ACF of the series versus correlation lag.)


Page 40: ASP Lecture 2 ARMA Modelling

Partial ACF for financial time series

a = 1.0000 -0.9994

a = 1.0000 -0.9982 -0.0011

a = 1.0000 -0.9982 0.0086 -0.0097

a = 1.0000 -0.9983 0.0086 -0.0128 0.0030

a = 1.0000 -0.9983 0.0086 -0.0128 0.0026 0.0005

a = 1.0000 -0.9983 0.0086 -0.0127 0.0026 0.0017 -0.0012


Page 41: ASP Lecture 2 ARMA Modelling

Model Order Selection – Practical issues

In practice – the greater the model order, the higher the accuracy

⇒ When do we stop?

To save on computational complexity, we introduce a "penalty" for a high model order. The criteria for model order selection are, for instance, MDL (minimum description length, Rissanen) and AIC (Akaike information criterion), given by

MDL = log(E) + p log(N)/N

AIC = log(E) + 2p/N

E = the loss function (typically the cumulative squared error), p = the number of estimated parameters, N = the number of data points.


Page 42: ASP Lecture 2 ARMA Modelling

Example:- Model order selection – MDL vs AIC

Let us have a look at the squared error and the MDL and AIC criteria for an AR(2) model with

a1 = 0.5 a2 = −0.3

(Figures: MDL for the AR(2) model and AIC for the AR(2) model, each overlaid with the cumulative squared error, against the AR(2) model order: the (model error)² versus the model order p, for p = 1, . . . , 10.)
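A hedged sketch reproducing curves of this kind, with aryule's estimated residual (driving noise) variance standing in for the loss E, which is an assumption:

x = filter(1,[1 -0.5 0.3],randn(512,1)); % AR(2) with a1 = 0.5, a2 = -0.3
N = length(x); pmax = 10;
MDL = zeros(pmax,1); AIC = zeros(pmax,1);
for p = 1:pmax
    [a,E] = aryule(x,p); % E: residual variance, used as the loss
    MDL(p) = log(E) + p*log(N)/N;
    AIC(p) = log(E) + 2*p/N;
end
plot(1:pmax,MDL,1:pmax,AIC), legend('MDL','AIC')
xlabel('AR(2) Model Order') % both criteria stop improving near p = 2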


Page 43: ASP Lecture 2 ARMA Modelling

Moving Average Processes

A general MA(q) process is given by

x[n] = w[n] + b1w[n− 1] + · · ·+ bqw[n− q]

Autocorrelation function: The autocovariance function of MA(q)

ck = E[ (w[n] + b1w[n−1] + · · · + bqw[n−q]) (w[n−k] + b1w[n−k−1] + · · · + bqw[n−k−q]) ]

Hence the variance of the process

c0 = (1 + b1² + · · · + bq²) σw²

The ACF of an MA process has a cutoff after lag q.

Spectrum: All zeros ⇒ struggles to model PSD with peaks

P(f) = 2σw² |1 + b1e^{−ȷ2πf} + b2e^{−ȷ4πf} + · · · + bqe^{−ȷ2πqf}|²

Page 44: ASP Lecture 2 ARMA Modelling

Example:- MA(3) process

(Figures: the MA(3) signal; its ACF; its partial ACF; and the Burg power spectral density estimate.)

MA(3) model, its ACF and partial autocorrelation (PAC). After lag k = 3, the ACF becomes very small.
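A minimal sketch of this ACF cutoff (the MA(3) coefficients here are illustrative):

b = [1 0.8 0.5 0.3]; % illustrative MA(3) coefficients
x = filter(b,1,randn(2000,1)); % all-zero (FIR) filtered white noise
[r,lags] = xcorr(x,20,'coeff'); % normalised ACF up to lag 20
stem(lags,r) % approximately zero beyond |lag| = 3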


Page 45: ASP Lecture 2 ARMA Modelling

Analysis of Nonstationary Signals

(Figures: a speech signal split into three windows W1, W2 and W3; for each window, the partial ACF versus correlation lag and the MDL versus model order. The calculated model orders for the three windows are > 50, 24 and 13.)

Different AR models for different segments of speech

To deal with nonstationarity we need short sliding windows


Page 46: ASP Lecture 2 ARMA Modelling

Duality Between AR and MA Processes

i) A stationary finite AR(p) process can be represented as an infinite order MA process. A finite MA process can be represented as an infinite AR process.

ii) The finite MA(q) process has an ACF that is zero beyond q. For an AR process, the ACF is infinite in extent and consists of a mixture of damped exponentials and/or damped sine waves.

iii) Parameters of a finite MA process are not required to satisfy any condition for stationarity. However, for invertibility, the roots of the characteristic equation must lie inside the unit circle.

ARMA modelling is a classic technique which has found a tremendous number of applications
