Practical Numerical Methods in Physics and Astronomy
Lecture 7 – Numerical Integration II
Pat Scott, Department of Physics, McGill University
February 6, 2013
Slides available from http://www.physics.mcgill.ca/~patscott
PHYS 606 Practical Numerical Methods, Winter 2013
ODE methods, Monte Carlo Integration, Improper Integrals
Recap: closed Newton-Cotes rules
- Direct samples of the integrand: summing up rectangly things with various choices of top, constant rectangly width h
- Trapezoidal rule: 2-point formula (simplest closed NC rule); ≡ linear interpolation across each interval
- Simpson's rule: 3-point formula; ≡ parabolic interpolation over (non-overlapping) pairs of intervals; nice error cancellation, can be constructed from trapezoidal
- Romberg integration: generalisation of Simpson's to higher orders
Integration via ODEs and Runge-Kutta / Monte Carlo Integration
Doing a definite integral is equivalent to solving the initial value problem (IVP)

dy/dx = f(x); y(a) = 0    (2)

for x = b, i.e. I ≡ y(b). We see this by

I ≡ ∫_a^b f(x) dx = ∫_a^b (dy/dx) dx = ∫_{y(a)}^{y(b)} dy = y(b) − y(a)

The choice of y(a) is arbitrary, as we only care about its derivative
→ choose y(a) = C =⇒ I = y(b) − C
→ with no loss of generality, can choose C = 0 =⇒ I = y(b)
In numerical integration the error control of Runge-Kutta is not much better than simply halving the stepsize, because dy/dx does not depend on y (as it does in general ODEs).
Often there is no need to use full Runge-Kutta – just use Euler's method with an appropriately adaptive stepsize; revert to Runge-Kutta if Euler's method seems to be unstable.
Stepsize adaptation can be achieved by e.g.
- trying two different stepsizes δ
- comparing the results
- adjusting to a smaller stepsize if the results differ significantly
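The steps above might be sketched as follows; the tolerance, minimum stepsize, and the grow/shrink factors are illustrative choices, not from the slides:

```python
import math

def integrate_euler_adaptive(f, a, b, tol=1e-8, h0=1e-2):
    """Treat I = integral of f from a to b as the IVP dy/dx = f(x), y(a) = 0,
    so I = y(b). Adapt the stepsize by trying two different step choices
    (one full step vs. two half steps) and shrinking h if they disagree."""
    x, y, h = a, 0.0, h0
    while x < b:
        h = min(h, b - x)
        full = y + h * f(x)                # one Euler step of size h
        half = y + 0.5 * h * f(x)          # two Euler steps of size h/2
        half += 0.5 * h * f(x + 0.5 * h)
        if abs(full - half) > tol and h > 1e-12:
            h *= 0.5                       # results differ: smaller stepsize
        else:
            x, y = x + h, half             # accept the finer estimate
            h *= 1.5                       # cautiously grow the step again
    return y

# Illustrative check: integral of cos(x) from 0 to pi/2 is 1
est = integrate_euler_adaptive(math.cos, 0.0, math.pi / 2)
```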
Synopsis

I = ∫_a^b f(x) dx    (3)

Estimate the total area under f(x) by taking random samples of f(x) between a and b.
The basic Riemann sum is to divide into equal-width rectangles and sum:

I = [(b − a)/M] ∑_{i=1}^{M} f(x_i)    (4)

with the x_i equally spaced. For large M, Eq. 4 is equally valid if the x_i are drawn at random between a and b. For Monte Carlo integration, the x_i are drawn randomly from some distribution W(x).
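A minimal sketch of Eq. 4 with randomly drawn abscissae (uniform sampling, no importance weighting yet); the test integrand and sample count are illustrative:

```python
import math
import random

def mc_integrate(f, a, b, M=100_000, seed=1):
    """Eq. 4 with random x_i: I ~ (b - a)/M * sum of f(x_i),
    where the x_i are drawn uniformly on [a, b]."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(M))
    return (b - a) * total / M

# Illustrative check: integral of sin(x) from 0 to pi is 2,
# with statistical error of order M^(-1/2)
est = mc_integrate(math.sin, 0.0, math.pi)
```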
- Properly importance-sampled MC integration will converge much faster than without
- Convergence tests are similar to those with uniform sampling
- Error proportional to M^{−1/2} for uniform sampling, better with a good sampling pdf
- Compare with error of order M^{−2} for trapezoidal, M^{−4} for Simpson's
=⇒ MC integration is slow to converge

So why ever use MC integration over other methods?
- Error is always of order M^{−1/2} or better, even in n dimensions
- Trapezoidal → M^{−2/n}, Simpson's → M^{−4/n}
Metropolis-style MC integration
- Can do importance sampling via a Markov Chain
- Draw samples one after another, using the previous one to get the next
- Run the typical Metropolis-Hastings step, using the sampling pdf as the objective function
- This forces the sampling density to track the sampling pdf (which is the whole point, right?)
3 important points to remember
1 Gotta have a fixed proposal density
2 Gotta have a fixed proposal density
3 Remember the difference between the sampling pdf and the proposal pdf!
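A sketch of the scheme above; the fixed Gaussian proposal width, the choice of sampling pdf W, and the test integrand are all illustrative assumptions (note that W must be normalised here for the weights f/W to estimate I directly):

```python
import math
import random

def metropolis_importance_integrate(f, W, x0, M=200_000, step=1.0, seed=2):
    """Importance-sampled MC: I = integral of f ~ (1/M) * sum of f(x_i)/W(x_i),
    with the x_i drawn from the sampling pdf W by a Metropolis chain.
    The proposal density (Gaussian of fixed width `step`) stays fixed for
    the whole run; W is the sampling pdf, NOT the proposal pdf."""
    rng = random.Random(seed)
    x, wx = x0, W(x0)
    total = 0.0
    for _ in range(M):
        xp = x + rng.gauss(0.0, step)           # fixed, symmetric proposal
        wp = W(xp)
        if wp >= wx or rng.random() < wp / wx:  # Metropolis accept/reject on W
            x, wx = xp, wp
        total += f(x) / wx                      # weight by 1/W at current point
    return total / M

# Illustrative check: integral of exp(-x^2) over the real line is sqrt(pi),
# sampled from a standard normal pdf W.
W = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
f = lambda x: math.exp(-x * x)
est = metropolis_importance_integrate(f, W, 0.0)
```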
Full-blown MCMC integration
- Can do full Markov Chain Monte Carlo integration too
- Run the typical Metropolis-Hastings step, using f(~x) as the objective function (instead of some W(~x))
- Use the fact that the density of points in the chain is proportional to the function itself
- Essentially importance sampling with the function itself as the sampling pdf
- Here W(~x) is equal to f(~x), up to a normalisation factor (which is actually I)
- Can calculate weights for different points after the fact, using the final density of points in that region
- This is how MCMCs and nested sampling estimate the Bayesian evidence (a multi-dimensional integral over the posterior pdf)
Importance sampling is important (!)
- Importance sampling can be used in non-MC integration too
- A falling integrand could be sampled more densely near a
- A strongly-peaked function could be sampled with Gaussian intervals
- More complicated integrals could be done on adaptive meshes (bit of col A, bit of col B)
- Convergence is optimal when f(~x) is sampled such that each (hyper-)rectangle contributes ∼equal volume
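As one illustrative way to sample a falling integrand more densely near a, here is the trapezoidal rule on a geometrically graded mesh; the grading factor r and panel count are assumptions, not from the slides:

```python
import math

def trapz_graded(f, a, b, M=200, r=1.05):
    """Trapezoidal rule on a geometrically graded mesh: panel widths start
    small near a and grow by a factor r per panel, so a falling integrand
    is sampled most densely where it is largest."""
    h0 = (b - a) * (r - 1.0) / (r**M - 1.0)   # widths h0, h0*r, ... sum to b - a
    total, x = 0.0, a
    for i in range(M):
        h = h0 * r**i
        total += 0.5 * h * (f(x) + f(x + h))  # trapezoid on this panel
        x += h
    return total

# Illustrative check: integral of exp(-5x) from 0 to 3 is (1 - exp(-15))/5
est = trapz_graded(lambda x: math.exp(-5.0 * x), 0.0, 3.0)
```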
Coping strategies: dealing with misbehaving integrals

2 When a = −∞ or b = ∞
Transform the asymptotic part of the integral via x → 1/t:

=⇒ ∫_a^b f(x) dx = ∫_{1/b}^{1/a} (1/t²) f(1/t) dt    (8)

Then use an open rule on the transformed integral
- Works for one infinite limit only, and only when a and b have the same sign
- Where both limits are infinite, or a and b have opposite signs, split the integral
- Split it and use standard rules where f(x) is not asymptotically decreasing faster than x^{−2}
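A minimal sketch of Eq. 8 for a single infinite upper limit with a > 0, using the open midpoint rule so the endpoint t = 0 is never evaluated (the function name and sample count are illustrative):

```python
def integrate_to_infinity(f, a, N=100_000):
    """Integral of f from a to infinity, for a > 0, via x -> 1/t (Eq. 8):
    it becomes the integral of (1/t^2) f(1/t) for t from 0 to 1/a,
    evaluated here with the open midpoint rule."""
    upper = 1.0 / a
    h = upper / N
    total = 0.0
    for i in range(N):
        t = (i + 0.5) * h             # midpoints only: an open rule
        total += f(1.0 / t) / (t * t)
    return total * h

# Illustrative check: integral of 1/x^2 from 1 to infinity is 1
est = integrate_to_infinity(lambda x: 1.0 / (x * x), 1.0)
```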