Page 1: Lecture1

Advanced Computational Methods in Statistics: Lecture 1

Monte Carlo Simulation &

Introduction to Parallel Computing

Axel Gandy

Department of Mathematics, Imperial College London

http://www2.imperial.ac.uk/~agandy

London Taught Course Centre for PhD Students in the Mathematical Sciences

Spring 2011

Page 2: Lecture1

Today’s Lecture

Part I Monte Carlo Simulation

Part II Introduction to Parallel Computing

Page 3: Lecture1

Part I

Monte Carlo Simulation

Random Number Generation

Computation of Integrals

Variance Reduction Techniques

Page 4: Lecture1

Uniform Random Number Generation

- Basic building block of simulation: a stream of independent rv U1, U2, . . . ∼ U(0, 1)

- “True” random number generators:
  - based on physical phenomena
  - Example: http://www.random.org/; R-package random: “The randomness comes from atmospheric noise”
  - Disadvantages of physical systems:
    - cumbersome to install and run
    - costly
    - slow
    - cannot reproduce the exact same sequence twice [verification, debugging, comparing algorithms with the same stream]

- Pseudo random number generators: deterministic algorithms
  - Example: linear congruential generators:

    u_n = s_n / M,   s_{n+1} = (a s_n + c) mod M
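A minimal sketch of such a generator in R (the modulus, multiplier and increment below are the classic “minimal standard” choices of Park & Miller, used here purely for illustration; in practice use R's built-in generators):

  # Linear congruential generator: u_n = s_n / M with s_{n+1} = (a * s_n + c) mod M
  lcg <- function(n, seed = 12345, a = 16807, c = 0, M = 2^31 - 1) {
    u <- numeric(n)
    s <- seed
    for (i in 1:n) {
      s <- (a * s + c) %% M   # update the state
      u[i] <- s / M           # map the state to (0, 1)
    }
    u
  }
  lcg(5)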

Page 5: Lecture1

General framework for Uniform RNG

(L’Ecuyer, 1994)

[Diagram: the seed s gives states s1, s2, s3, ... via the transition function T, each mapped by the output function G to u1, u2, u3, ...]

- s initial state (’seed’)
- S finite set of states
- T : S → S transition function
- U finite set of output symbols (often {0, . . . , m − 1} or a finite subset of [0, 1])
- G : S → U output function
- s_i := T(s_{i−1}) and u_i := G(s_i)
- output: u1, u2, . . .

Page 6: Lecture1

Some Notes for Uniform RNG

- S finite ⇒ the sequence (u_i) is periodic

- In practice: the seed s is often chosen from the clock time by default.

- Good practice to be able to reproduce simulations: save the seed!
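In R, for example, reproducibility is obtained by fixing the seed explicitly (a minimal sketch):

  set.seed(20110101)   # record this value alongside your results
  x <- rnorm(5)
  set.seed(20110101)   # re-running with the same seed ...
  y <- rnorm(5)
  identical(x, y)      # ... reproduces exactly the same stream: TRUE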

Page 7: Lecture1

Quality of Random Number Generators

- “Random numbers should not be generated with a method chosen at random” (Knuth, 1981, p. 5). Some old implementations were unreliable!

- Desirable properties of random number generators:
  - statistical uniformity and unpredictability
  - period length
  - efficiency
  - theoretical support
  - repeatability, portability, jumping ahead, ease of implementation
  (for more on this see e.g. Gentle (2003), L’Ecuyer (2004), L’Ecuyer (2006), Knuth (1998))

- Usually you will do well with the generators in modern software (e.g. the default generators in R). Don’t try to implement your own generator! (unless you have very good reasons)

Page 8: Lecture1

Nonuniform Random Number Generation

- How do we generate nonuniform random variables?

- Basic idea: apply transformations to a stream of iid U[0, 1] random variables.

Page 9: Lecture1

Inversion Method

- Let F be a cdf.

- Quantile function (essentially the inverse of the cdf):

  F⁻¹(u) = inf{x : F(x) ≥ u}

- If U ∼ U(0, 1) then F⁻¹(U) ∼ F. Indeed, with X := F⁻¹(U),

  P(X ≤ x) = P(F⁻¹(U) ≤ x) = P(U ≤ F(x)) = F(x)

- Only works if F⁻¹ (or a good approximation of it) is available.
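For instance, the Exponential(λ) distribution has F(x) = 1 − e^(−λx), so F⁻¹(u) = −log(1 − u)/λ; a minimal sketch in R:

  # Inversion method for Exponential(rate = lambda)
  rexp_inversion <- function(n, lambda = 1) {
    u <- runif(n)
    -log(1 - u) / lambda      # F^{-1}(u)
  }
  x <- rexp_inversion(1e5, lambda = 2)
  mean(x)                     # should be close to 1/lambda = 0.5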

Page 10: Lecture1

Acceptance-Rejection Method

[Plot: a target density f(x) and an envelope C·g(x) with f(x) ≤ C·g(x), for x between 0 and 20]

- Target density f
- Proposal density g (easy to generate from) such that for some C < ∞: f(x) ≤ C g(x) for all x
- Algorithm:
  1. Generate X from g.
  2. With probability f(X)/(C g(X)) return X; otherwise go to 1.
- 1/C = probability of acceptance; want it to be as close to 1 as possible.
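A minimal sketch in R, with Beta(2, 5) as the target and the U(0, 1) density as the proposal (the choice of f, g and the numerical bound C are purely illustrative):

  f <- function(x) dbeta(x, 2, 5)                       # target density
  C <- optimize(f, c(0, 1), maximum = TRUE)$objective   # f(x) <= C * 1 on [0, 1]
  raccept <- function(n) {
    out <- numeric(n); i <- 1
    while (i <= n) {
      x <- runif(1)                  # 1. generate X from g
      if (runif(1) <= f(x) / C) {    # 2. accept with probability f(X)/(C g(X))
        out[i] <- x; i <- i + 1
      }
    }
    out
  }
  x <- raccept(1e4)                  # acceptance probability is 1/C (about 0.41 here)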

Page 11: Lecture1

Further Algorithms

- Ratio-of-uniforms

- Use of the characteristic function

- MCMC

For many of these techniques, and for methods to simulate specific distributions, see e.g. Gentle (2003).

Page 12: Lecture1

Evaluation of an Integral

- Want to evaluate

  I := ∫_{[0,1]^d} g(x) dx

- Importance for statistics: computation of expected values (posterior means), probabilities (p-values), variances, normalising constants, ...

- Often, d is large. With a random sample, often d = sample size.

- How to solve it?
  - symbolically
  - numerical integration
  - quasi-Monte Carlo
  - Monte Carlo integration

Page 13: Lecture1

Numerical Integration/Quadrature

- Main idea: approximate the function locally with simple functions/polynomials

- Advantage: good convergence rate

- Not useful in high dimensions: curse of dimensionality

Page 14: Lecture1

Midpoint Formula

- Basic:

  ∫₀¹ f(x) dx ≈ f(1/2)

- Composite: apply the rule in n subintervals

  [Plot: composite midpoint rule for a function on [0, 2]]

  Error: O(1/n).

Page 15: Lecture1

Trapezoidal Formula

- Basic:

  ∫₀¹ f(x) dx ≈ (1/2)(f(0) + f(1))

- Composite:

  [Plot: composite trapezoidal rule for a function on [0, 2]]

  Error: O(1/n²).

Page 16: Lecture1


Simpson’s rule

- Approximate the integrand by a quadratic function:

  ∫₀¹ f(x) dx ≈ (1/6)[f(0) + 4 f(1/2) + f(1)]

- Composite Simpson:

  [Plot: composite Simpson's rule for a function on [0, 2]]

  Error: O(1/n⁴).
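As a quick check of these composite rules, a minimal sketch in R (the test integrand ∫₀¹ sin(πx) dx = 2/π is chosen purely for illustration):

  midpoint  <- function(f, n) { h <- 1/n; h * sum(f((1:n - 0.5) * h)) }
  trapezoid <- function(f, n) { x <- (0:n)/n; h <- 1/n; h * (sum(f(x)) - (f(0) + f(1))/2) }
  simpson   <- function(f, n) {               # n must be even
    x <- (0:n)/n; h <- 1/n
    w <- c(1, rep(c(4, 2), length.out = n - 1), 1)
    h/3 * sum(w * f(x))
  }
  f <- function(x) sin(pi * x)
  exact <- 2/pi
  sapply(c(10, 100, 1000), function(n)
    c(mid = midpoint(f, n) - exact,
      trap = trapezoid(f, n) - exact,
      simp = simpson(f, n) - exact))          # compare how fast the errors shrink with n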

Page 17: Lecture1


Advanced Numerical Integration Methods

- Newton–Cotes formulas

- Adaptive methods

- Unbounded integration interval: transformations

Page 18: Lecture1

Curse of dimensionality - Numerical Integration in Higher Dimensions

I := ∫_{[0,1]^d} g(x) dx

- Naive approach:
  - write as an iterated integral

    I := ∫₀¹ … ∫₀¹ g(x) dx_d … dx₁

  - use a 1D scheme for each of the d integrals with, say, g points each
  - n = g^d function evaluations needed;
    for d = 100 (a moderate sample size) and g = 10 (which is not a lot):
    n > estimated number of atoms in the universe!

- With the trapezoidal rule the error is O(1/n^(2/d)).

- More advanced schemes do not do much better!

Page 19: Lecture1

Monte Carlo Integration

∫_{[0,1]^d} g(x) dx ≈ (1/n) Σᵢ₌₁ⁿ g(Xᵢ),   where X₁, X₂, … ∼ U([0, 1]^d) iid.

- SLLN: (1/n) Σᵢ₌₁ⁿ g(Xᵢ) → ∫_{[0,1]^d} g(x) dx  (n → ∞)

- CLT: the error is O_P(1/√n), independent of d.

- Can easily compute asymptotic confidence intervals.
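A minimal sketch in R (the integrand ∫_{[0,1]²} exp(−x₁x₂) dx is just an illustration), including an asymptotic 95% confidence interval:

  set.seed(1)
  n <- 1e5; d <- 2
  g <- function(x) exp(-x[1] * x[2])        # example integrand on [0, 1]^2
  X <- matrix(runif(n * d), ncol = d)       # X_i ~ U([0, 1]^d) iid
  vals <- apply(X, 1, g)
  Ihat <- mean(vals)                        # Monte Carlo estimate
  se <- sd(vals) / sqrt(n)                  # estimated standard error, O(1/sqrt(n))
  c(estimate = Ihat, lower = Ihat - 1.96 * se, upper = Ihat + 1.96 * se)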

Page 20: Lecture1

Quasi-Monte-Carlo

- Similar to MC, but instead of random Xᵢ, use deterministic xᵢ that fill [0, 1]^d evenly: so-called “low-discrepancy sequences”.

- R-package: randtoolbox

Page 21: Lecture1

Comparison between Quasi-Monte-Carlo and Monte Carlo in 2D

1000 points in [0, 1]² generated by a quasi-RNG and a pseudo-RNG

[Two scatterplots of the unit square: left panel “Quasi-Random-Number-Generator”, right panel “Standard R Random Number Generator”]

Page 22: Lecture1

Comparison between Quasi-Monte-Carlo and Monte Carlo

∫_{[0,1]⁴} (x₁ + x₂)(x₂ + x₃)²(x₃ + x₄)³ dx

computed using Monte Carlo and quasi-MC.

[Plot: estimates against n (up to 10⁶) for quasi-MC and MC, y-axis from 2.55 to 2.75]
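A minimal sketch of this comparison in R, using Sobol points from the randtoolbox package for the quasi-random sequence (the choice of sequence and sample size is illustrative):

  library(randtoolbox)
  g <- function(x) (x[, 1] + x[, 2]) * (x[, 2] + x[, 3])^2 * (x[, 3] + x[, 4])^3
  n <- 1e5
  x_mc  <- matrix(runif(4 * n), ncol = 4)   # pseudo-random points in [0, 1]^4
  x_qmc <- sobol(n, dim = 4)                # quasi-random (Sobol) points
  c(mc = mean(g(x_mc)), qmc = mean(g(x_qmc)))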

Page 23: Lecture1

Bounds on the Quasi-MC error

- Koksma–Hlawka inequality (Niederreiter, 1992, Theorem 2.11):

  | (1/n) Σᵢ g(xᵢ) − ∫_{[0,1]^d} g(x) dx | ≤ V_d(g) D*_n,

  where V_d(g) is the so-called Hardy–Krause variation of g and D*_n is the discrepancy of the points x₁, …, xₙ in [0, 1]^d, given by

  D*_n = sup_{A ∈ 𝒜} | #{i = 1, …, n : xᵢ ∈ A}/n − λ(A) |,

  where
  - λ is Lebesgue measure
  - 𝒜 is the set of all subrectangles of [0, 1]^d of the form ∏ᵢ₌₁^d [0, aᵢ]

- Many sequences have been suggested, e.g. the Halton sequence (other sequences: Faure, Sobol, ...) with

  D*_n = O(log(n)^(d−1) / n)

  → better convergence rate than MC integration! However, it does depend on d.

- Conjecture: for all sets of points x₁, …, xₙ,

  D*_n ≥ O(log(n)^(d−1) / n)

  (Niederreiter, 1992, p. 32)

Page 24: Lecture1

Comparison

The consensus in the literature seems to be:

- use numerical integration for small d

- quasi-MC is useful for medium d

- use Monte Carlo integration for large d

Page 25: Lecture1

Importance Sampling

- Main idea: change the density we are sampling from.

- Interested in E(φ(X)) = ∫ φ(x) f(x) dx

- For any density g,

  E(φ(X)) = ∫ φ(x) (f(x)/g(x)) g(x) dx

- Thus an unbiased estimator of E(φ(X)) is

  Î = (1/n) Σᵢ₌₁ⁿ φ(Xᵢ) f(Xᵢ)/g(Xᵢ),   where X₁, …, Xₙ ∼ g iid.

- How to choose g?
  - Suppose g ∝ φf; then Var(Î) = 0. However, the corresponding normalising constant is E(φ(X)), the quantity we want to estimate!
  - A lot of theoretical work is based on large deviation theory.

Page 26: Lecture1

Importance Sampling and Rare Events

- Importance sampling can greatly reduce the variance for estimating the probability of rare events, i.e. φ(x) = I(x ∈ A) and E(φ(X)) = P(X ∈ A) small.
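A minimal sketch in R for the rare event P(Z > 4), Z ∼ N(0, 1), using a N(4, 1) proposal (this proposal is an illustration, not an optimal choice):

  set.seed(1)
  n <- 1e5
  z <- rnorm(n)
  naive <- mean(z > 4)                  # plain MC: almost all samples miss the event
  x <- rnorm(n, mean = 4)               # sample from the proposal g = N(4, 1)
  w <- dnorm(x) / dnorm(x, mean = 4)    # likelihood ratio f(x)/g(x)
  is_est <- mean((x > 4) * w)           # importance sampling estimate
  c(naive = naive, is = is_est, true = pnorm(4, lower.tail = FALSE))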

Page 27: Lecture1

Control Variates

- Interested in I = E(X).
- Suppose we can also observe Y and know E(Y).
- Consider T = X + a(Y − E(Y)). Then E(T) = I and

  Var(T) = Var(X) + 2a Cov(X, Y) + a² Var(Y),

  minimised for a = −Cov(X, Y)/Var(Y).

- Usually a is not known → estimate it.
- For Monte Carlo sampling:
  - generate an iid sample (X₁, Y₁), …, (Xₙ, Yₙ)
  - estimate Cov(X, Y) and Var(Y) from this sample → â
  - Î = (1/n) Σᵢ₌₁ⁿ [Xᵢ + â(Yᵢ − E(Y))]
- Î can be computed via standard regression analysis, hence the term “regression-adjusted control variates”.
- Can be easily generalised to several control variates.
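A minimal sketch in R: estimating I = E(exp(U)) for U ∼ U(0, 1), with the control variate Y = U and known E(Y) = 1/2 (all choices here are purely illustrative):

  set.seed(1)
  n <- 1e4
  u <- runif(n)
  x <- exp(u)                        # I = E(exp(U)) = e - 1
  y <- u                             # control variate with known mean 1/2
  a <- -cov(x, y) / var(y)           # estimated optimal coefficient
  tcv <- x + a * (y - 0.5)           # regression-adjusted estimator
  c(plain = mean(x), control = mean(tcv), true = exp(1) - 1)
  c(var_plain = var(x)/n, var_control = var(tcv)/n)   # variance is greatly reduced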

Page 28: Lecture1

Further Techniques

- Antithetic sampling: use X and −X.
- Conditional Monte Carlo: evaluate parts explicitly.
- Common random numbers: for comparing two procedures, use the same sequence of random numbers.
- Stratification (see the sketch below):
  - divide the sample space Ω into strata Ω₁, …, Ωₛ
  - in each stratum, generate Rᵢ replicates conditional on Ωᵢ and obtain an estimate Îᵢ
  - combine using the law of total probability:

    Î = p₁Î₁ + · · · + pₛÎₛ

  - need to know pᵢ = P(Ωᵢ) for all i
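A minimal sketch of stratified sampling in R, estimating E(exp(U)) for U ∼ U(0, 1) with s equal-width strata (so pᵢ = 1/s and sampling within each stratum is immediate; the integrand is illustrative):

  set.seed(1)
  s <- 10; R <- 1000
  p <- rep(1/s, s)                                  # known stratum probabilities
  Ihat <- numeric(s)
  for (i in 1:s) {
    u <- runif(R, min = (i - 1)/s, max = i/s)       # sample conditional on stratum i
    Ihat[i] <- mean(exp(u))
  }
  sum(p * Ihat)                                     # stratified estimate of e - 1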

Page 29: Lecture1

Part II

Parallel Computing

Introduction

Parallel RNG

Practical use of parallel computing (R)

Page 30: Lecture1

Moore’s Law

[Figure: Moore's law. Source: Wikipedia, Creative Commons Attribution ShareAlike 3.0 License]

Page 31: Lecture1

Growth of Data Storage

[Plot: “Growth PC Harddrive Capacity” - capacity in GB on a log scale against year, 1980-2010]

- Not only the computer speed but also the data size is increasing exponentially!

- The increase in available storage is at least as fast as the increase in computing power.

Page 32: Lecture1

Introduction

- Recently: less increase in CPU clock speed

- → multi-core CPUs are appearing (quad core available; 80 cores in labs)

- → software needs to be adapted to exploit this

- Traditional computing: the problem is broken into small steps that are executed sequentially

- Parallel computing: steps are executed in parallel

Page 33: Lecture1

von Neumann Architecture

- The CPU executes a stored program that specifies a sequence of read and write operations on the memory.

- Memory is used to store both program and data instructions.

- Program instructions are coded data which tell the computer to do something.

- Data is simply information to be used by the program.

- A central processing unit (CPU) gets instructions and/or data from memory, decodes the instructions and then performs them sequentially.

Page 34: Lecture1

Different Architectures

- Multicore computing

- Symmetric multiprocessing

- Distributed computing:
  - cluster computing
  - massively parallel processors
  - grid computing

List of the top 500 supercomputers at http://www.top500.org/

Page 35: Lecture1

Flynn’s taxonomy

                 Single Instruction   Multiple Instruction
  Single Data    SISD                 MISD
  Multiple Data  SIMD                 MIMD

Examples:

- SIMD: GPUs

Page 36: Lecture1

Memory Architectures of Parallel Computers

- Traditional system: a single CPU with its own memory

- Shared memory system: several CPUs attached to one shared memory

- Distributed memory system: several CPUs, each with its own memory

- Distributed shared memory system: several shared-memory nodes, each with its own memory and several CPUs

[Block diagrams of the four memory architectures]

Page 37: Lecture1

Embarrassingly Parallel Computations

Examples:

- Monte Carlo integration

- Bootstrap

- Cross-validation

Page 38: Lecture1

Speedup

- Ideally: computational time is reduced linearly in the number of CPUs.

- Suppose only a fraction p of the total task can be parallelised.

- With n parallel CPUs, the speedup is

  1 / ((1 − p) + p/n)     (Amdahl’s Law)

  → no infinite speedup possible: as n → ∞ the speedup tends to 1/(1 − p).
  Example: p = 90% gives a maximum speedup by a factor of 10.
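A quick check of Amdahl’s law in R:

  speedup <- function(p, n) 1 / ((1 - p) + p / n)
  speedup(0.9, c(2, 4, 8, 16, Inf))
  # approx. 1.82 3.08 4.71 6.40 10.00  (the limit for p = 0.9 is 1/(1 - p) = 10)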

Page 39: Lecture1

Communication between processes

- Forking

- Threading

- OpenMP (shared memory multiprocessing)

- PVM (Parallel Virtual Machine)

- MPI (Message Passing Interface)

How to divide the tasks? E.g. the master/slave concept.

Page 40: Lecture1

Parallel Random Number Generation

Problems with RNG on parallel computers

- Cannot use identical streams

- Sharing a single stream: a lot of overhead.

- Starting from different seeds: danger of overlapping streams (in particular if the seeding is not sophisticated or the simulation is large)

- Need independent streams on each processor...

Page 41: Lecture1

Parallel Random Number Generation - sketch of the general approach

[Diagram: the seed s is mapped by functions f1, f2, f3, ... to starting states s_{k1} of separate substreams; substream k then evolves as s_{k,i} = T(s_{k,i−1}) with outputs u_{k,i} = G(s_{k,i})]

Page 42: Lecture1

Packages in R for Parallel Random Number Generation

rsprng: interface to the scalable parallel random number generators library (SPRNG), http://sprng.cs.fsu.edu/

rlecuyer: essentially starts with one random stream and partitions it into long substreams by jumping ahead. L’Ecuyer et al. (2002)

Page 43: Lecture1

Profile

- Determine which part of the programme uses the most time with a profiler

- Improve the important parts (usually the innermost loop)

- R has a built-in profiler (see Rprof, summaryRprof, package profr)
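A minimal sketch of the built-in profiler (the file name and the profiled code are arbitrary):

  Rprof("profile.out")                    # start profiling
  x <- replicate(200, {
    a <- matrix(rnorm(4e4), 200)
    sum(solve(a %*% t(a)))                # some work to profile
  })
  Rprof(NULL)                             # stop profiling
  summaryRprof("profile.out")$by.self     # time spent per function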

Page 44: Lecture1

Use Vectorization instead of Loops

> a <- rnorm(1e+07)
> system.time({
+   x <- 0
+   for (i in 1:length(a)) x <- x + a[i]
+ })[3]
elapsed
  21.17
> system.time(sum(a))[3]
elapsed
   0.07

Page 45: Lecture1

Just-In-Time Compilation - Ra

- From the developer’s website (http://www.milbo.users.sonic.net/ra/): “Ra is functionally identical to R but provides just-in-time compilation of loops and arithmetic expressions in loops. This usually makes arithmetic in Ra much faster. Ra will also typically run a little faster than standard R even when just-in-time compilation is not enabled.”

- Not just a package: central parts of R are reimplemented.

- Bill Venables (on the R help archive): “if you really want to write R code as you might C code, then jit can help make it practical in terms of time. On the other hand, if you want to write R code using as much of the inbuilt operators as you have, then you can possibly still do things better.”

Page 46: Lecture1

Use Compiled Code

- R is an interpreted language.

- Can include C, C++ and Fortran code.

- Can dramatically speed up computationally intensive parts (a factor of 100 is possible).

- No speedup if the computationally intensive part is already a vector/matrix operation.

- Downside: decreased portability.
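One possible route (not covered on the slide) is the Rcpp package, which compiles a C++ function from within R; a minimal sketch, assuming Rcpp is installed:

  library(Rcpp)
  cppFunction("
    double sumC(NumericVector a) {       // C++ version of the running-sum loop
      double s = 0;
      for (int i = 0; i < a.size(); i++) s += a[i];
      return s;
    }")
  a <- rnorm(1e7)
  system.time(sumC(a))[3]   # comparable to the built-in sum(), far faster than an R loop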

Page 47: Lecture1

R-Package: snow

- Mostly for “embarrassingly parallel” computations

- Extends the “apply”-style functions to a cluster of machines:

  library(snow)
  params <- 1:10000
  cl <- makeCluster(8, "SOCK")                       # 8 worker processes via sockets
  res <- parSapply(cl, params, function(x) foo(x))   # foo: the user’s own function
  stopCluster(cl)

- Applies the function to each of the parameters using the cluster; will run 8 copies at once.

Page 48: Lecture1

snow - Hello World

> library(snow)

> cl <- makeCluster(2, type = "SOCK")

> str(clusterCall(cl, function() Sys.info()[c("nodename", "machine")]))

List of 2

$ : Named chr [1:2] "AG" "x86"

..- attr(*, "names")= chr [1:2] "nodename" "machine"

$ : Named chr [1:2] "AG" "x86"

..- attr(*, "names")= chr [1:2] "nodename" "machine"

> str(clusterApply(cl, 1:2, function(x) x + 3))

List of 2

$ : num 4

$ : num 5

> stopCluster(cl)

Page 49: Lecture1

snow - setting up the random number generator

Without setting up the RNG:

> cl <- makeCluster(2, type = "SOCK")

> clusterApply(cl, 1:2, function(i) rnorm(5))

[[1]]

[1] 0.1540537 -0.4584974 1.1320638 -1.4979826 1.1992120

[[2]]

[1] 0.1540537 -0.4584974 1.1320638 -1.4979826 1.1992120

> stopCluster(cl)

Now with proper setup of the RNG

> cl <- makeCluster(2, type = "SOCK")

> clusterSetupRNG(cl)

[1] "RNGstream"

> clusterApply(cl, 1:2, function(i) rnorm(5))

[[1]]

[1] -1.14063404 -0.49815892 -0.76670013 -0.04821059 -1.09852152

[[2]]

[1] 0.7049582 0.4821092 -1.2848088 0.7198440 0.7386390

> stopCluster(cl)

Page 50: Lecture1

snow - Another Simple Example

5 × 4 processors on several servers

> cl <- makeCluster(rep(c("localhost","euler","dirichlet","leibniz","riemann"),
+                       each=4), type="SOCK")

(may need to give a password if public/private keys have not been set up for ssh)

> system.time(sapply(1:1000,function(i) mean(rnorm(1e3))))[3]

0.156

> system.time(clusterApply(cl,1:1000,function(i) mean(rnorm(1e3))))[3]

0.161

> system.time(clusterApplyLB(cl,1:1000,function(i) mean(rnorm(1e3))))[3]

0.401

→ too much overhead - parallelisation does not lead to gains

> system.time(sapply(1:1000,function(i) mean(rnorm(1e5))))[3]

12.096

> system.time(clusterApply(cl,1:1000,function(i) mean(rnorm(1e5))))[3]

0.815

> system.time(clusterApplyLB(cl,1:1000,function(i) mean(rnorm(1e5))))[3]

0.648

> stopCluster(cl)

→ parallelisation leads to substantial gain in speed.

Page 51: Lecture1

Extensions of snow

snowfall: offers additional support for implicit sequential execution (e.g. for distributing packages using optional parallel support), additional calculation functions, extended error handling, and many functions for more comfortable programming.

snowFT: extension of the snow package supporting fault-tolerant and reproducible applications. It is written for the PVM communication layer.

Page 52: Lecture1

Rmpi

For more complicated parallel algorithms that are not embarrassingly parallel.
Tutorial at http://math.acadiau.ca/ACMMaC/Rmpi/
Hello world from this tutorial:

# Load the R MPI package if it is not already loaded.

if (!is.loaded("mpi_initialize"))

library("Rmpi")

# Spawn as many slaves as possible

mpi.spawn.Rslaves()

# In case R exits unexpectedly, have it automatically clean up

# resources taken up by Rmpi (slaves, memory, etc...)

.Last <- function() {
  if (is.loaded("mpi_initialize")) {
    if (mpi.comm.size(1) > 0) {
      print("Please use mpi.close.Rslaves() to close slaves.")
      mpi.close.Rslaves()
    }
    print("Please use mpi.quit() to quit R")
    .Call("mpi_finalize")
  }
}

# Tell all slaves to return a message identifying themselves

mpi.remote.exec(paste("I am",mpi.comm.rank(),"of",mpi.comm.size()))

# Tell all slaves to close down, and exit the program

mpi.close.Rslaves()

mpi.quit()

(not able to install under Windows from CRAN - install from http://www.stats.uwo.ca/faculty/yu/Rmpi/)

Page 53: Lecture1

Some other Packages

nws: Network of Workstations, http://nws-r.sourceforge.net/

multicore: parallel computing on a single machine via fork (Unix, MacOS); very fast and easy to use.

GridR: http://cran.r-project.org/web/packages/GridR/, Wegener et al. (2009, Future Generation Computer Systems)

papply: on CRAN. “Similar to apply and lapply, applies a function to all items of a list, and returns a list with the results. Uses Rmpi to distribute the processing evenly across a cluster.”

multiR: http://e-science.lancs.ac.uk/multiR/

rparallel: http://www.rparallel.org/

Page 54: Lecture1

GPUs

- graphics processing units - found in graphics cards

- very good at parallel processing

- need to tailor code to the specific GPU

- Packages in R:
  gputools: several basic routines
  cudaBayesreg: Bayesian multilevel modeling for fMRI

Page 55: Lecture1

Further Reading

- A tutorial on parallel computing: https://computing.llnl.gov/tutorials/parallel_comp/

- High Performance Computing task view on CRAN: http://cran.r-project.org/web/views/HighPerformanceComputing.html

- An up-to-date talk on high performance computing with R: http://dirk.eddelbuettel.com/papers/useR2010hpcTutorial.pdf

Page 56: Lecture1

Part III

Appendix

Page 57: Lecture1

Topics in the coming lectures:

- Optimisation

- MCMC methods

- Bootstrap

- Particle filtering

Page 58: Lecture1

References I

Gentle, J. (2003). Random Number Generation and Monte Carlo Methods. Springer.

Knuth, D. (1981). The Art of Computer Programming, Vol. 2: Seminumerical Algorithms. Addison-Wesley.

Knuth, D. (1998). The Art of Computer Programming, Vol. 2: Seminumerical Algorithms. 3rd ed. Addison-Wesley.

L’Ecuyer, P. (1994). Uniform random number generation. Annals of Operations Research 53, 77–120.

L’Ecuyer, P. (2004). Random number generation. In Handbook of Computational Statistics: Concepts and Methods. Springer.

L’Ecuyer, P. (2006). Uniform random number generation. In Handbooks in Operations Research and Management Science. Elsevier.

L’Ecuyer, P., Simard, R., Chen, E. J. & Kelton, W. D. (2002). An object-oriented random-number package with many long streams and substreams. Operations Research 50, 1073–1075. The code in C, C++, Java, and FORTRAN is available.

Page 59: Lecture1

References II

Niederreiter, H. (1992). Random Number Generation and Quasi-Monte Carlo Methods. SIAM.
