Stochastic Structural Dynamics Lecture-31 - NPTEL
nptel.ac.in/courses/105108080/module7/Lecture31.pdf


Stochastic Structural Dynamics

Lecture-31

Dr C S Manohar
Professor of Structural Engineering
Department of Civil Engineering
Indian Institute of Science
Bangalore 560 012 India

manohar@civil.iisc.ernet.in

Monte Carlo simulation approach-7

Probability of failure

$$P_F = \int I\left[g(x) \le 0\right] p_X(x)\,dx = \left\langle I\left[g(X) \le 0\right] \right\rangle;$$
$$\hat{P}_F = \frac{1}{n}\sum_{i=1}^{n} I\left[g(X_i) \le 0\right]; \qquad \operatorname{Var}\left(\hat{P}_F\right) = \frac{1}{n} P_F \left(1 - P_F\right).$$

Variance reduction

$$P_F = \int I\left[g(x) \le 0\right] \frac{p_X(x)}{h_V(x)}\, h_V(x)\,dx = \left\langle I\left[g(V) \le 0\right] \frac{p_X(V)}{h_V(V)} \right\rangle_{h_V}.$$
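As a concrete illustration of the two estimators above, here is a minimal sketch for a hypothetical scalar limit state $g(x) = \beta - x$ with $X \sim N(0,1)$, so the exact answer is $\Phi(-\beta)$. The limit state, the sampling density $h_V$ (a unit normal shifted towards the failure region) and the sample size are illustrative assumptions, not taken from the lecture.

```python
# Brute force Monte Carlo and importance sampling for g(x) = beta - x, X ~ N(0,1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
beta, n = 3.0, 100_000

# Brute force: P_F ~ (1/n) sum I[g(X_i) <= 0]
x = rng.standard_normal(n)
pf_mc = np.mean(beta - x <= 0.0)

# Importance sampling: V ~ h_V (normal centred at beta), weight p_X(V)/h_V(V)
v = rng.normal(loc=beta, scale=1.0, size=n)
w = stats.norm.pdf(v) / stats.norm.pdf(v, loc=beta, scale=1.0)
pf_is = np.mean((beta - v <= 0.0) * w)

print(pf_mc, pf_is, stats.norm.cdf(-beta))   # both estimates vs the exact value
```

The importance sampling estimate has a much smaller sampling variance here because most of the samples of $V$ fall in the failure region.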

Variance reduction

(a) Variance reduction can be viewed as a means to use known information about the problem.
(b) If nothing is known about the problem, variance reduction is not achievable.
(c) At the other extreme, that is, when everything about the problem is known, the variance reduces to zero, but then simulation itself is not needed.
(d) How do we get information about the problem? Perform a few cycles of brute force simulations and learn something about the problem.

Subset simulations using Markov Chain Monte Carlo (MCMC)

• S K Au and J L Beck, 2001, Estimation of small failure probabilities in high dimension by subset simulation, Probabilistic Engineering Mechanics, 16, 263-277

• J S Liu, 2001, Monte Carlo strategies in scientific computing, Springer, NY.

Basic idea

A small failure probability can be expressed as a product of larger conditional failure probabilities. These larger conditional failure probabilities can be estimated with less computational effort. The method is applicable to a wide class of problems.

Subset simulation: motivation

Consider a randomly excited oscillator
$$m\ddot{y} + c\dot{y} + ky + f\left(y, \dot{y}, t\right) = q(t); \quad y(0) = y_0,\ \dot{y}(0) = \dot{y}_0 \text{ specified},$$
where $q(t)$ is a zero mean, stationary Gaussian random process,
$$q(t) = \sum_{n=1}^{N} a_n \cos\omega_n t + b_n \sin\omega_n t,$$
with $a_n, b_n \sim N\left(0, \sigma_n^2\right)$, $\left\langle a_n a_k \right\rangle = \sigma_n^2 \delta_{nk}$, $\left\langle b_n b_k \right\rangle = \sigma_n^2 \delta_{nk}$, $\left\langle a_n b_k \right\rangle = 0$, $n, k = 1, \ldots, N$.

Here $\sigma_n^2$ is obtained from the power spectral density $S_{qq}(\omega)$ of $q(t)$ by integrating over the frequency band associated with $\omega_n$.
Let $z(t) = h\left[y(t), \dot{y}(t), t\right]$ be a metric of system performance.
We are interested in estimating $P\left[z(t) \ge z^* \text{ for some } t \in (0, T)\right]$.
Note: The system parameters could also be random.

$$P_F = P\left[z(t) \ge z^* \text{ for some } t \in (0, T)\right] = P\left[\max_{0 \le t \le T} z(t) \ge z^*\right].$$
Define $Z_m(X) = \max_{0 \le t \le T} z(t)$ with $X = \left\{a_n, b_n\right\}_{n=1}^{N}$, and the performance function $g(X) = z^* - Z_m(X)$, so that
$$P_F = P\left[g(X) \le 0\right] = \int I\left[g(x) \le 0\right] p_X(x)\,dx.$$
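To make the formulation concrete, a brief sketch of how a single realization of the performance function could be evaluated numerically is given below. The oscillator is taken as linear ($f \equiv 0$), the response metric is taken as $z(t) = |y(t)|$, and the parameter values, frequencies $\omega_n$ and variances $\sigma_n^2$ are illustrative assumptions, not taken from the lecture.

```python
# One realization of g(X) = z* - max_t |y(t)| for a linear randomly excited oscillator.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(4)
m, c, k, z_star, T = 1.0, 0.1, 40.0, 0.5, 10.0
w = np.arange(1.0, 11.0)                  # assumed omega_n
sig = 0.2 * np.ones_like(w)               # assumed sigma_n
a, b = rng.normal(0, sig), rng.normal(0, sig)

def q(t):                                  # one realization of the excitation
    return np.sum(a * np.cos(w * t) + b * np.sin(w * t))

def rhs(t, s):                             # state s = [y, ydot]
    y, ydot = s
    return [ydot, (q(t) - c * ydot - k * y) / m]

sol = solve_ivp(rhs, (0.0, T), [0.0, 0.0], max_step=0.01)
z = np.abs(sol.y[0])                       # z(t) = |y(t)| as the response metric
g = z_star - np.max(z)                     # g <= 0 <=> failure for this sample
print(np.max(z), g)
```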

Remark

$$P_F = \int I\left[g(x) \le 0\right] p_X(x)\,dx; \qquad \hat{P}_F = \frac{1}{N}\sum_{i=1}^{N} I\left[g(X_i) \le 0\right].$$
$\hat{P}_F$ is an unbiased and consistent estimator of $P_F$ with minimum variance. The optimal variance is given by
$$\operatorname{Var}\left(\hat{P}_F\right) = \frac{P_F\left(1 - P_F\right)}{N}.$$

Subset simulations

Failure event: $F = \left\{g(X) \le 0\right\}$.
Define intermediate failure events $F_1 \supset F_2 \supset \cdots \supset F_m = F$, such that $F_k = \bigcap_{i=1}^{k} F_i$, $k = 1, 2, \ldots, m$. Then
$$P_F = P\left(F_m\right) = P\left(\bigcap_{i=1}^{m} F_i\right) = P\left(F_m \,\middle|\, \bigcap_{i=1}^{m-1} F_i\right) P\left(\bigcap_{i=1}^{m-1} F_i\right) = \cdots = P\left(F_1\right) \prod_{i=1}^{m-1} P\left(F_{i+1} \mid F_i\right).$$

Remarks

$$P_F = P\left(F_1\right) \prod_{i=1}^{m-1} P\left(F_{i+1} \mid F_i\right)$$
If the $F_i$-s are configured such that $P\left(F_1\right)$ and $P\left(F_{i+1} \mid F_i\right)$ are much larger than $P_F$, then we will be able to estimate $P_F$ in terms of a product of "large" probabilities.
Suppose $P_F \sim 10^{-6}$; then we could obtain an estimate of $P_F$ as
$$10^{-6} \sim 10^{-1} \times 10^{-1} \times 10^{-1} \times 10^{-1} \times 10^{-1} \times 10^{-1}.$$
Estimation of probability of failure of the order of 0.1 can be easily done using MCS because the failure events here are more frequent.

Remarks (continued)

$$P_F = P\left(F_1\right) \prod_{i=1}^{m-1} P\left(F_{i+1} \mid F_i\right)$$
$P\left(F_1\right)$ can be estimated using a "brute force" Monte Carlo.
$P\left(F_{i+1} \mid F_i\right),\ i = 1, 2, \ldots, m-1$, can be estimated using MCMC.

Steps

1. Run a brute force Monte Carlo using, say, 200 samples. Evaluate the realization of the performance function $g(X)$ at these 200 points. Rank order these realizations, pick the 20th ranked member, and denote the corresponding value of the performance function by $g_1^*$. Define a new performance function $g_1(X) = g(X) - g_1^*$.
   Define $F_1 = \left\{g_1(X) \le 0\right\}$. Clearly, the estimate of $P\left[g_1(X) \le 0\right] \approx 0.1$.
2. Store the 20 members of $X$ which lie in the failure region of $g_1(X)$.
3. Run 20 episodes of MCMC, with each episode commencing from one of the 20 points in the failure region of $g_1(X)$. In each run continue the simulations till 9 points are obtained in the failure region of $g_1(X)$.

Steps (continued)

4. This leads to 200 points in the failure region of $g_1(X)$. Rank order the values of $g(X)$ at these 200 points, identify the 20th ranked member, and denote it by $g_2^*$. Define a new performance function $g_2(X) = g(X) - g_2^*$.
   Define $F_2 = \left\{g_2(X) \le 0\right\}$. Clearly, the estimate of $P\left[g_2(X) \le 0 \mid g_1(X) \le 0\right] \approx 0.1$.
5. Repeat this exercise till $F_m = F$ is reached.
6. Obtain the final probability of failure by using
$$P_F = P\left(F_1\right) \prod_{i=1}^{m-1} P\left(F_{i+1} \mid F_i\right).$$
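A compact sketch of steps 1-6 above is given below, written for a performance function $g(x)$ defined on i.i.d. standard normal inputs, so that the conditional samples can be generated with a plain Metropolis chain targeting the standard normal density restricted to the current intermediate failure region (Au and Beck use a component-wise "modified Metropolis" variant in high dimensions). The function name, the level cap and the proposal spread are illustrative assumptions.

```python
# Sketch of the subset simulation steps above (g defined on i.i.d. N(0,1) inputs).
import numpy as np

def subset_simulation(g, ndim, N=200, p0=0.1, sigma_prop=1.0, seed=0, max_levels=20):
    rng = np.random.default_rng(seed)
    nc = int(p0 * N)                      # seeds per level (20 for N=200, p0=0.1)
    ns = N // nc                          # chain states generated per seed (10)
    x = rng.standard_normal((N, ndim))    # step 1: brute force Monte Carlo
    gx = np.array([g(xi) for xi in x])
    pf = 1.0
    for _ in range(max_levels):
        order = np.argsort(gx)            # smallest g = deepest into failure
        g_star = gx[order[nc - 1]]        # 20th ranked value -> g_i^*
        if g_star <= 0.0:                 # final level F reached
            return pf * np.mean(gx <= 0.0)
        pf *= p0                          # P(F_1) or P(F_{i+1} | F_i) ~ p0
        seeds, seeds_g = x[order[:nc]], gx[order[:nc]]   # step 2
        x_new, g_new = [], []             # step 3: MCMC from each seed
        for xc, gc in zip(seeds, seeds_g):
            for _ in range(ns):
                cand = xc + sigma_prop * rng.standard_normal(ndim)
                # Metropolis accept/reject for the N(0,I) target, restricted
                # to the current failure region {g <= g_star}
                if np.log(rng.uniform()) < 0.5 * (xc @ xc - cand @ cand):
                    gcand = g(cand)
                    if gcand <= g_star:
                        xc, gc = cand, gcand
                x_new.append(xc)
                g_new.append(gc)
        x, gx = np.array(x_new), np.array(g_new)   # step 4: 200 conditional samples
    return pf * np.mean(gx <= 0.0)
```

The routine returns $p_0^{m-1}$ times the failure fraction at the final level, as in step 6.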

Remarks

The definition of the $F_i$-s (as in the present illustrative explanation) ensures that the conditional probabilities $P_i$ are all equal to 0.1.
Estimates for the sampling variance of $\hat{P}_F$ can be deduced.
Choice of proposal density function: in the standard normal space, typically a shifted normal pdf.

Example

Let $X_m = \max_{i=1,\ldots,10} X_i$, where $\left\{X_i\right\}_{i=1}^{10}$ are zero mean Gaussian random variables with covariance matrix given by
$$\left\langle X_i^2 \right\rangle = 1,\ i = 1, \ldots, 10; \qquad \left\langle X_i X_j \right\rangle = 0,\ i \ne j,$$
excepting
$$\left\langle X_1 X_2 \right\rangle = 0.3; \quad \left\langle X_4 X_5 \right\rangle = 0.4; \quad \left\langle X_6 X_{10} \right\rangle = 0.2.$$
Question: Estimate $P\left[X_m > 5\right]$ using subset simulations.
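A sketch of this example, reusing the subset_simulation routine sketched after the "Steps" slides: the i.i.d. standard normal inputs are mapped to the correlated $X$ through the Cholesky factor of the covariance matrix. The pairing of the nonzero covariances follows the reconstruction above and should be checked against the original slide.

```python
# Example: X in R^10, zero mean, unit variances, a few nonzero covariances;
# estimate P[max_i X_i > 5] with subset simulation.
import numpy as np

C = np.eye(10)
C[0, 1] = C[1, 0] = 0.3     # <X1 X2>
C[3, 4] = C[4, 3] = 0.4     # <X4 X5>
C[5, 9] = C[9, 5] = 0.2     # <X6 X10>
L = np.linalg.cholesky(C)

def g(u):                    # performance function in standard normal space
    x = L @ u
    return 5.0 - np.max(x)   # failure: max_i X_i >= 5

pf = subset_simulation(g, ndim=10, N=200, p0=0.1)
print(pf)                    # of the order of 1e-4, cf. the table two slides below
```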

Number of samples: 200 at each subset level.
Proposal pdf: $q\left(X \mid x_i\right) \sim N\left(x_i, \sigma^2 I\right)$.

[Figure: estimated failure probability versus subset level, plotted on a log scale from 10^-5 to 10^0. Blue line: brute force simulation with 10^5 samples.]

Run   g1*      g2*      g3*      g4*      g5*    PF
1     2.5388   1.5394   0.8291   0.1154   0.0    6.95E-05
2     2.4819   1.6062   0.8591   0.1662   0.0    5.75E-05
3     2.4454   1.4920   0.6616   0.0      -      1.00E-04
4     2.2659   1.2125   0.4420   0.0      -      2.65E-04

Example

$$X(t) = \sum_{n=1}^{25} a_n \cos\omega_n t + b_n \sin\omega_n t; \quad 0 \le t \le 10;$$
$$a_n \sim \text{iid } N\!\left(0, \tfrac{1}{2}\right); \quad b_n \sim \text{iid } N\!\left(0, \tfrac{1}{2}\right); \quad \left\langle a_n b_k \right\rangle = 0,\ n, k = 1, \ldots, 25.$$
Let $X_m = \max_{0 \le t \le 10} X(t)$.
Question: What is $P\left[X_m > 8\right]$?

Number of samples: 200 per subset level.
Proposal pdf: $q\left(X \mid x_i\right) \sim N\left(x_i, 0.4\, I\right)$.
Brute force Monte Carlo with $10^5$ samples.
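A sketch of this second example, again reusing the subset_simulation routine from the "Steps" slides. The frequencies are assumed to be $\omega_n = n$ rad/s and the maximum is taken over a discrete time grid on $[0, 10]$ (neither is recoverable from the extracted slide); the proposal spread 0.4 is taken from the proposal pdf stated above, interpreting the second argument as the spread of the shifted normal.

```python
# Second example: P[max_t X(t) > 8] for the random trigonometric series.
import numpy as np

t = np.linspace(0.0, 10.0, 501)
n = np.arange(1, 26)
cos_nt = np.cos(np.outer(t, n))      # assumed omega_n = n rad/s
sin_nt = np.sin(np.outer(t, n))

def g(u):                             # u: 50 i.i.d. N(0,1) variables
    a = u[:25] / np.sqrt(2.0)         # a_n ~ N(0, 1/2)
    b = u[25:] / np.sqrt(2.0)         # b_n ~ N(0, 1/2)
    x_t = cos_nt @ a + sin_nt @ b
    return 8.0 - np.max(x_t)          # failure: max_t X(t) >= 8

pf = subset_simulation(g, ndim=50, N=200, p0=0.1, sigma_prop=0.4)
print(pf)
```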

[Figure: estimated failure probability versus subset level, plotted on a log scale from 10^-5 to 10^0.]
$$g_i^*,\ i = 1, \ldots, 4: \quad 2.9187,\ 1.1528,\ 0.4760,\ 0.$$

Series representation for random processes revisited: Karhunen-Loeve expansion

Preliminaries

Let $f(t)$ be a deterministic function defined over $-\frac{T}{2} \le t \le \frac{T}{2}$. Let us assume that $f(t)$ is well behaved in a suitable sense.
Consider a sequence of functions $\left\{\varphi_n(t)\right\}_{n=1}^{\infty}$ which satisfy completeness requirements and the orthogonality conditions
$$\int_{-T/2}^{T/2} \varphi_n(t)\, \varphi_k(t)\,dt = \delta_{nk}.$$

$f(t)$ can be expressed in terms of the convergent series
$$f(t) = \sum_{n=1}^{\infty} b_n \varphi_n(t),$$
with a measure of the error of representation given by the total mean square error
$$\epsilon^2 = \int_{-T/2}^{T/2} \left[f(t) - \sum_{n=1}^{\infty} b_n \varphi_n(t)\right]^2 dt.$$
The constants $b_n$ can be determined using the conditions
$$\frac{\partial \epsilon^2}{\partial b_k} = 0; \quad k = 1, 2, \ldots$$

$$\frac{\partial \epsilon^2}{\partial b_k} = 0,\ k = 1, 2, \ldots \;\Rightarrow\; b_k = \int_{-T/2}^{T/2} f(t)\, \varphi_k(t)\,dt; \quad k = 1, 2, \ldots$$
Question: Can a similar formulation be developed for representing a random process $x(t)$?
Reference: H K Van Trees, 2001, Detection, estimation, and modulation theory, Vol. I, John Wiley, NY, pp. 178-198.

Recall: Fourier representation of a Gaussian random process

Let $X(t)$ be a zero mean, stationary, Gaussian random process defined as
$$X(t) = \sum_{n=1}^{\infty} a_n \cos\omega_n t + b_n \sin\omega_n t; \quad \omega_n = \frac{2\pi n}{T},$$
$$a_n \sim N\left(0, \sigma_n^2\right),\ b_n \sim N\left(0, \sigma_n^2\right),\ \left\langle a_n a_k \right\rangle = 0\ (n \ne k),\ \left\langle b_n b_k \right\rangle = 0\ (n \ne k),\ \left\langle a_n b_k \right\rangle = 0,\ n, k = 1, 2, \ldots$$
Then
$$\left\langle X(t) \right\rangle = \left\langle \sum_{n=1}^{\infty} a_n \cos\omega_n t + b_n \sin\omega_n t \right\rangle = 0;$$
$$R_{XX}(\tau) = \sum_{n=1}^{\infty} \sigma_n^2 \cos\omega_n \tau \;\Leftrightarrow\; S_{XX}(\omega) = \sum_{n=1}^{\infty} \frac{\sigma_n^2}{2}\left[\delta\left(\omega - \omega_n\right) + \delta\left(\omega + \omega_n\right)\right].$$
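A short sketch verifying the autocovariance relation above by ensemble averaging over realizations generated from the coefficient representation; the number of terms, the coefficient variances, the period and the evaluation points are illustrative assumptions.

```python
# Generate X(t) = sum a_n cos(w_n t) + b_n sin(w_n t) and check
# <X(t) X(t+tau)> against sum sigma_n^2 cos(w_n tau).
import numpy as np

rng = np.random.default_rng(1)
T = 2.0 * np.pi
w = 2.0 * np.pi * np.arange(1, 11) / T        # w_n = 2*pi*n/T
sig2 = 1.0 / np.arange(1, 11) ** 2            # assumed coefficient variances
nsamp, tau = 20_000, 0.3

a = rng.normal(scale=np.sqrt(sig2), size=(nsamp, 10))
b = rng.normal(scale=np.sqrt(sig2), size=(nsamp, 10))

def X(t):
    return a @ np.cos(w * t) + b @ np.sin(w * t)

t0 = 0.7
print(np.mean(X(t0) * X(t0 + tau)))           # ensemble estimate of R_XX(tau)
print(np.sum(sig2 * np.cos(w * tau)))         # sum sigma_n^2 cos(w_n tau)
```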

If $X(t)$ is mean square periodic, we can use the Fourier representation with uncorrelated coefficients:
$$X(t) = \sum_{n=1}^{\infty} a_n \cos\omega_n t + b_n \sin\omega_n t; \quad \omega_n = \frac{2\pi n}{T}.$$
Can we obtain series representations with uncorrelated coefficients when $X(t)$ is not mean square periodic? Or, more generally, when $X(t)$ is not even stationary? How can we proceed if $X(t)$ is non-Gaussian?

Consider $x(t)$ to be a zero mean Gaussian random process
- not necessarily stationary
- not necessarily mean square periodic.
Consider the series
$$x(t) = \sum_{n=1}^{\infty} a_n \varphi_n(t); \quad -\frac{T}{2} \le t \le \frac{T}{2}.$$
Here $\left\{a_n\right\}$ are a set of random variables and $\left\{\varphi_n(t)\right\}$ are a set of deterministic functions such that
$$\int_{-T/2}^{T/2} \varphi_n(t)\, \varphi_m(t)\,dt = \delta_{nm}; \qquad a_k = \int_{-T/2}^{T/2} x(t)\, \varphi_k(t)\,dt.$$
We would like to select $\varphi_n(t)$ such that $\left\langle a_n a_k \right\rangle = \lambda_n \delta_{nk}$.
Note that $\left\langle a_n \right\rangle = 0$, since $\left\langle x(t) \right\rangle = 0$.

$$x(t) = \sum_{n=1}^{\infty} a_n \varphi_n(t); \quad -\frac{T}{2} \le t \le \frac{T}{2};$$
$$\left\langle a_k\, x(t_1) \right\rangle = \left\langle a_k \sum_{n=1}^{\infty} a_n \varphi_n(t_1) \right\rangle = \sum_{n=1}^{\infty} \left\langle a_k a_n \right\rangle \varphi_n(t_1).$$
If we impose the requirement $\left\langle a_k a_n \right\rangle = \lambda_k \delta_{kn}$, we get
$$\left\langle a_k\, x(t_1) \right\rangle = \lambda_k\, \varphi_k(t_1).$$
Also,
$$\left\langle a_k\, x(t_1) \right\rangle = \left\langle \int_{-T/2}^{T/2} x(t)\, \varphi_k(t)\,dt\; x(t_1) \right\rangle = \int_{-T/2}^{T/2} \left\langle x(t)\, x(t_1) \right\rangle \varphi_k(t)\,dt = \lambda_k\, \varphi_k(t_1).$$

Remarks

$$\int_{-T/2}^{T/2} R_{xx}(t, \tau)\, \varphi(\tau)\,d\tau = \lambda\, \varphi(t); \quad -\frac{T}{2} \le t \le \frac{T}{2}.$$
This is an integral eigenvalue problem. The kernel $R_{xx}(t, \tau)$ is nonnegative definite.
$\varphi(t)$ = eigenfunction; $\lambda$ = eigenvalue.
Exact solutions are available for a few cases. Numerical solutions can be obtained by using Galerkin's method.
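A minimal numerical sketch for the integral eigenvalue problem above, using a simple Nystrom (quadrature) discretization rather than the Galerkin scheme mentioned in the slide; the kernel, interval length and grid size are illustrative assumptions.

```python
# Discretize  int_{-T/2}^{T/2} R(t,s) phi(s) ds = lam phi(t)  on a uniform grid.
import numpy as np

def kl_nystrom(R, T, n=400):
    t, h = np.linspace(-T / 2.0, T / 2.0, n, retstep=True)
    K = R(t[:, None], t[None, :]) * h              # kernel times quadrature weight
    lam, phi = np.linalg.eigh(K)                   # K is symmetric
    lam, phi = lam[::-1], phi[:, ::-1]             # sort eigenvalues descending
    phi = phi / np.sqrt(h)                         # normalise: int phi^2 dt = 1
    return t, lam, phi

# Example: exponential covariance R(t,s) = P exp(-alpha |t - s|) on [-1, 1]
P, alpha = 1.0, 1.0
t, lam, phi = kl_nystrom(lambda t, s: P * np.exp(-alpha * np.abs(t - s)), T=2.0)
print(lam[:5])     # leading eigenvalues of the kernel
```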

Example

$$R_{xx}(\tau) = P\, e^{-\alpha|\tau|} \;\Leftrightarrow\; S_{xx}(\omega) = \frac{\alpha P/\pi}{\alpha^2 + \omega^2}.$$
The eigenvalue problem reads
$$\int_{-T}^{T} P\, e^{-\alpha|t - u|}\, \varphi(u)\,du = \lambda\, \varphi(t); \quad -T \le t \le T,$$
that is,
$$P \int_{-T}^{t} e^{-\alpha(t - u)}\, \varphi(u)\,du + P \int_{t}^{T} e^{-\alpha(u - t)}\, \varphi(u)\,du = \lambda\, \varphi(t).$$
Differentiate with respect to $t$:
$$-\alpha P \int_{-T}^{t} e^{-\alpha(t - u)}\, \varphi(u)\,du + \alpha P \int_{t}^{T} e^{-\alpha(u - t)}\, \varphi(u)\,du = \lambda\, \varphi'(t).$$
Differentiate once more with respect to $t$:
$$\alpha^2 P \int_{-T}^{T} e^{-\alpha|t - u|}\, \varphi(u)\,du - 2\alpha P\, \varphi(t) = \lambda\, \varphi''(t),$$
so that
$$\varphi''(t) + b^2 \varphi(t) = 0 \quad \text{with} \quad b^2 = \frac{2\alpha P - \alpha^2 \lambda}{\lambda} \;\Leftrightarrow\; \lambda = \frac{2\alpha P}{\alpha^2 + b^2};$$
$$\varphi(t) = c_1\, e^{ibt} + c_2\, e^{-ibt}.$$
It can be shown that (Exercise) the $b$-s are roots of the equation
$$\left(\alpha - b \tan bT\right)\left(b + \alpha \tan bT\right) = 0.$$

Remark

$$\lambda_i = \frac{2\alpha P}{\alpha^2 + b_i^2}; \quad i = 1, 2, \ldots$$
$$\varphi_i(t) = \frac{\cos b_i t}{\left[T + \dfrac{\sin 2 b_i T}{2 b_i}\right]^{0.5}}\ (i \text{ odd}); \qquad \varphi_i(t) = \frac{\sin b_i t}{\left[T - \dfrac{\sin 2 b_i T}{2 b_i}\right]^{0.5}}\ (i \text{ even}); \quad -T \le t \le T.$$
The eigenfunctions are sinusoids (as in Fourier series) but the frequencies are not uniformly spaced.
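A sketch that computes the $b_i$ from the transcendental equations above by bracketing one root per branch of the tangent, and then the eigenvalues $\lambda_i = 2\alpha P/(\alpha^2 + b_i^2)$. The parameter values are illustrative; note that $T$ here is the half-length of the interval $[-T, T]$, so $T = 1$ corresponds to the $[-1, 1]$ grid (full length 2.0) used in the earlier Nystrom sketch.

```python
# Roots of (alpha - b tan(bT)) (b + alpha tan(bT)) = 0 and the resulting eigenvalues.
import numpy as np
from scipy.optimize import brentq

P, alpha, T = 1.0, 1.0, 1.0          # T = half-length of the interval [-T, T]
eps = 1e-9

b = []
for k in range(5):
    # "odd" modes: alpha - b tan(bT) = 0, one root in (k*pi/T, (k*pi + pi/2)/T)
    b.append(brentq(lambda x: alpha - x * np.tan(x * T),
                    k * np.pi / T + eps, (k * np.pi + np.pi / 2) / T - eps))
    # "even" modes: b + alpha tan(bT) = 0, one root in ((k*pi + pi/2)/T, (k+1)*pi/T)
    b.append(brentq(lambda x: x + alpha * np.tan(x * T),
                    (k * np.pi + np.pi / 2) / T + eps, (k + 1) * np.pi / T - eps))

lam = 2.0 * alpha * P / (alpha**2 + np.array(b) ** 2)
print(lam[:5])   # compare with the Nystrom eigenvalues from the earlier sketch
```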

Example: Bandlimited white noise process

$$R_{xx}(t - u) = \frac{P \sin\Omega(t - u)}{\pi (t - u)}, \quad \text{for } S_{xx}(\omega) = \frac{P}{2\pi},\ |\omega| \le \Omega;$$
$$\int_{-T/2}^{T/2} \frac{P \sin\Omega(t - u)}{\pi (t - u)}\, \varphi(u)\,du = \lambda\, \varphi(t).$$
Eigenfunctions: angular prolate spheroidal functions, which satisfy
$$\left(1 - t^2\right) f''(t) - 2t\, f'(t) + \left(\chi - c^2 t^2\right) f(t) = 0; \quad |t| \le 1; \quad c = \frac{\Omega T}{2},$$
with $\chi$ = eigenvalue of the differential equation.

Example

Consider $x(t)$ to be the Brownian motion process defined over $0 \le t \le T$:
$$\left\langle x(t) \right\rangle = 0; \qquad R_{xx}(t, u) = \left\langle x(t)\, x(u) \right\rangle = \sigma^2 \min(t, u).$$
The eigenvalue problem
$$\sigma^2 \int_{0}^{T} \min(t, u)\, \varphi_n(u)\,du = \sigma^2 \int_{0}^{t} u\, \varphi_n(u)\,du + \sigma^2 t \int_{t}^{T} \varphi_n(u)\,du = \lambda_n\, \varphi_n(t); \quad 0 \le t \le T,$$
has the solution
$$\lambda_n = \sigma^2 \left[\frac{T}{\left(n - 0.5\right)\pi}\right]^2; \qquad \varphi_n(t) = \sqrt{\frac{2}{T}} \sin\left[\frac{\left(n - 0.5\right)\pi t}{T}\right],\ 0 \le t \le T; \quad n = 1, 2, \ldots$$
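A minimal check of the eigenpair above via Mercer's theorem: the truncated sum $\sum_n \lambda_n \varphi_n(t) \varphi_n(u)$ should approach $\sigma^2 \min(t, u)$. The number of terms and the evaluation points are illustrative.

```python
# Mercer check for the Brownian-motion KL pair.
import numpy as np

sigma, T, nterms = 1.0, 1.0, 200
n = np.arange(1, nterms + 1)
lam = sigma**2 * (T / ((n - 0.5) * np.pi)) ** 2
t, u = 0.3, 0.7
phi_t = np.sqrt(2.0 / T) * np.sin((n - 0.5) * np.pi * t / T)
phi_u = np.sqrt(2.0 / T) * np.sin((n - 0.5) * np.pi * u / T)
print(np.sum(lam * phi_t * phi_u))   # approaches 0.3 as nterms grows
print(sigma**2 * min(t, u))          # 0.3
```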

Series representation of partially specified non-Gaussian random processes using Nataf's transformation

Let $X(t)$ be a random process whose first order pdf and ACF are available. No further information about the process is available. $X(t)$ need not be stationary. How to represent $X(t)$ in a series?

Define $Y(t) = \dfrac{X(t) - m_X(t)}{\sigma_X(t)}$ so that $\left\langle Y(t) \right\rangle = 0$ and $\left\langle Y^2(t) \right\rangle = 1$.
Introduce a new random process $Z(t)$ through the transformation
$$Z(t) = \Phi^{-1}\left[P_{Y(t)}\big(Y(t)\big)\right].$$
Here $\Phi(\cdot)$ = probability distribution function (CDF) of the N(0,1) random variable and $P_{Y(t)}(\cdot)$ = first order probability distribution function of $Y(t)$.
$Z(t)$ is a zero mean Gaussian random process with an unknown covariance function.

Remarks

$$Z(t) = \Phi^{-1}\left[P_{Y(t)}\big(Y(t)\big)\right] \;\Rightarrow\; Y(t) = P_{Y(t)}^{-1}\left[\Phi\big(Z(t)\big)\right];$$
$$\left\langle Y(t_1)\, Y(t_2) \right\rangle = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} P_{Y(t_1)}^{-1}\left[\Phi(z_1)\right] P_{Y(t_2)}^{-1}\left[\Phi(z_2)\right] \phi\big(z_1, z_2; 0, \rho^*(t_1, t_2)\big)\,dz_1\,dz_2,$$
that is,
$$\rho_{XX}(t_1, t_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} P_{Y(t_1)}^{-1}\left[\Phi(z_1)\right] P_{Y(t_2)}^{-1}\left[\Phi(z_2)\right] \phi\big(z_1, z_2; 0, \rho^*(t_1, t_2)\big)\,dz_1\,dz_2.$$
Here $\rho_{XX}(t_1, t_2)$ is known and $\rho^*(t_1, t_2)$ is not known; the above relation is solved for $\rho^*(t_1, t_2)$.
$\phi\big(z_1, z_2; 0, \rho^*(t_1, t_2)\big)$ = 2-dimensional Gaussian pdf with zero means, unit standard deviations and correlation coefficient $\rho^*(t_1, t_2)$; $\left\langle Z^2(t) \right\rangle = 1$ and $\left|\rho^*(t_1, t_2)\right| \le 1$.

Solve the eigenvalue problem
$$\int_{-T/2}^{T/2} R_{ZZ}(t, \tau)\, \varphi(\tau)\,d\tau = \lambda\, \varphi(t); \quad -\frac{T}{2} \le t \le \frac{T}{2},$$
by using numerical methods. Then
$$Z(t) = \sum_{n=1}^{\infty} a_n \varphi_n(t), \quad \left\langle a_n a_k \right\rangle = \lambda_n \delta_{nk},$$
and
$$X(t) = m_X(t) + \sigma_X(t)\, P_{Y(t)}^{-1}\left[\Phi\left(\sum_{n=1}^{\infty} a_n \varphi_n(t)\right)\right].$$
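A sketch of this last step. The covariance $R_{ZZ}$ is taken here directly as an exponential kernel (i.e., the $\rho^*$ identification step of the previous slide is skipped), the target first order pdf of $X(t)$ is taken as lognormal, and the mean, standard deviation, truncation level and grid are illustrative assumptions. The mapping is applied as $X = F_X^{-1}\left[\Phi(Z)\right]$, which is equivalent to the expression above.

```python
# Draw Gaussian KL samples Z(t) and translate them to a lognormal marginal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Eigenpairs of an assumed R_ZZ(t,s) = exp(-|t-s|) on [-1, 1] (Nystrom grid)
t, h = np.linspace(-1.0, 1.0, 400, retstep=True)
K = np.exp(-np.abs(t[:, None] - t[None, :])) * h
lam, phi = np.linalg.eigh(K)
lam, phi = lam[::-1], phi[:, ::-1] / np.sqrt(h)          # descending; int phi^2 dt = 1

m = 20                                                   # KL terms retained
a = rng.standard_normal((1000, m)) * np.sqrt(lam[:m])    # <a_n a_k> = lam_n delta_nk
Z = a @ phi[:, :m].T                                     # Gaussian samples of Z(t_j)

# Target first order pdf of X(t): lognormal with mean m_X and std sig_X
m_X, sig_X = 10.0, 2.0
zeta = np.sqrt(np.log(1.0 + (sig_X / m_X) ** 2))
X = stats.lognorm.ppf(stats.norm.cdf(Z), s=zeta,
                      scale=m_X / np.sqrt(1.0 + (sig_X / m_X) ** 2))
print(X.mean(), X.std())                                 # roughly m_X and sig_X
```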

Monte Carlo simulation of response of systems with spatially distributed random parameters

$$\frac{\partial^2}{\partial x^2}\left[EI(x)\, \frac{\partial^2 y}{\partial x^2}\right] + P(t)\, \frac{\partial^2 y}{\partial x^2} + m(x)\, \frac{\partial^2 y}{\partial t^2} + c(x)\, \frac{\partial y}{\partial t} = f(x, t); \quad 0 < x < l,$$
with initial conditions $y(x, 0)$ and $\dot{y}(x, 0)$ specified, and boundary conditions at $x = 0$ and $x = l$ representing elastic end restraints (springs $k_1$ and $k_2$): the bending moment $EI(x)\,\partial^2 y/\partial x^2$ is balanced by the end spring moments, and the shear force $\partial/\partial x\left[EI(x)\,\partial^2 y/\partial x^2\right]$ by the end spring forces.

Remarks

- 4th order, 2-point stochastic boundary value problem.
- Evolution of randomness in space and time.
- A Markovian property in space is not possible.
- Discretization of random fields is also essential.
- Natural frequencies, mode shapes, and Green's functions are all stochastic in nature.
- $EI(x)$, $m(x)$, and $c(x)$ cannot take negative values, so Gaussian models are not valid (especially when considering problems of reliability evaluation).

Approach: employ KL expansions for $EI(x)$, $m(x)$, and $c(x)$.
Note: These processes are non-Gaussian in nature. Assume that they are independent.
Discretization using the KL expansion and Nataf's transformation:
$$EI(x) = m_{EI}(x) + \sigma_{EI}(x)\, P_{Y_1}^{-1}\left[\Phi\left(\sum_{n=1}^{N_1} a_n\, \varphi_n^{(1)}(x)\right)\right],$$
$$m(x) = m_{m}(x) + \sigma_{m}(x)\, P_{Y_2}^{-1}\left[\Phi\left(\sum_{n=1}^{N_2} b_n\, \varphi_n^{(2)}(x)\right)\right],$$
$$c(x) = m_{c}(x) + \sigma_{c}(x)\, P_{Y_3}^{-1}\left[\Phi\left(\sum_{n=1}^{N_3} d_n\, \varphi_n^{(3)}(x)\right)\right],$$
with $\left\langle a_n a_k \right\rangle = \lambda_n^{(1)} \delta_{nk}$, $\left\langle b_n b_k \right\rangle = \lambda_n^{(2)} \delta_{nk}$, and $\left\langle d_n d_k \right\rangle = \lambda_n^{(3)} \delta_{nk}$.

$$y(x, t) \approx \sum_{n=1}^{N} q_n(t)\, \psi_n(x),$$
where $\left\{\psi_n(x)\right\}_{n=1}^{N}$ are the mode shapes of the system with deterministic properties.
Use the method of weighted residuals (e.g., Galerkin's method) to get
$$M\ddot{q} + C\dot{q} + Kq = F(t),$$
along with the associated initial conditions.
$M$, $C$, $K$ = random matrices (fully populated).
This is the starting point for the application of methods such as subset simulation.
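A sketch of the Galerkin projection above for one realization of the random fields, with $M_{ij} = \int_0^l m(x)\psi_i\psi_j\,dx$, $C_{ij} = \int_0^l c(x)\psi_i\psi_j\,dx$ and $K_{ij} = \int_0^l EI(x)\psi_i''\psi_j''\,dx$. Simply supported sinusoidal mode shapes and placeholder non-negative field realizations are assumptions (the actual fields would come from the Nataf/KL samples of the previous slide, and the spring and axial load contributions are omitted for brevity).

```python
# Assemble one realization of the random reduced-order matrices M, C, K.
import numpy as np

l, N, nx = 1.0, 4, 501
x, dx = np.linspace(0.0, l, nx, retstep=True)
n = np.arange(1, N + 1)
psi = np.sin(np.outer(x, n) * np.pi / l)            # assumed mode shapes (nx, N)
psi_xx = -(n * np.pi / l) ** 2 * psi                # their second derivatives

# Placeholder realizations of the (non-negative) random fields
rng = np.random.default_rng(3)
EI = 1.0 + 0.1 * np.abs(rng.standard_normal(nx))
m  = 1.0 + 0.1 * np.abs(rng.standard_normal(nx))
c  = 0.05 * (1.0 + 0.1 * np.abs(rng.standard_normal(nx)))

def project(field, basis):
    # int_0^l field(x) basis_i(x) basis_j(x) dx by a simple Riemann sum
    return (field[:, None, None] * basis[:, :, None] * basis[:, None, :]).sum(axis=0) * dx

M, C, K = project(m, psi), project(c, psi), project(EI, psi_xx)
print(M.shape, np.allclose(M, M.T))   # fully populated, symmetric random matrices
```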

Summary

• Simulations of random variables and random processes
• Fourier and KL expansions
• Introduction to statistical inference and estimation theory
• Introduction to calculus of Brownian motion and implications on numerical simulations
• Estimation of low probability of failure
• Variance reduction: adaptive procedures
• Discretization of spatially varying random quantities
