Page 1: Lecture 13: ODE-IVP and Numerical Integration

Lecture 13: ODE-IVP and Numerical Integration

10.34: Numerical Methods Applied to

Chemical Engineering

1

Page 2: Recap

Recap

• Constrained optimization

• Method of Lagrange multipliers

• Interior point methods

2

Page 3: Recap

Recap

• Example:

• minimize: $f(x) = \exp(-x_1^2 - x_2^2)$

• subject to: $c(x) = x_1^2 + x_2^2 - 1 = 0$

• Can you solve this problem?

Setting the gradient of the Lagrangian and the constraint to zero gives
$$\begin{pmatrix} \nabla f - \lambda \nabla c \\ c \end{pmatrix} = \begin{pmatrix} -2x_1 e^{-x_1^2 - x_2^2} - 2x_1\lambda \\ -2x_2 e^{-x_1^2 - x_2^2} - 2x_2\lambda \\ x_1^2 + x_2^2 - 1 \end{pmatrix} = 0,$$
which is satisfied on the constraint circle with $\lambda = -e^{-1}$.

3

Page 4: Dynamic Models

Dynamic Models

• Most physical processes are dynamic in nature. This means that first-principles models describing those processes can be written as differential equations:
$$\frac{d}{dt}x(t) = f(x(t), u(t), t; \theta), \qquad x(t_0) = x_0$$
• $x(t)$ is often called the state vector and is the set of dynamic variables for which we want to solve.

• $t$ is time.

• $u(t)$ is a time-dependent input that we specify.

• $\theta$ is a vector of time-independent parameters.

• $x_0$ is the initial value of the state vector at $t = t_0$.

4

Page 5: Dynamic Models

Dynamic Models

• Usually, the solution we are interested in is the set of values of the state vector within some time domain: $t \in [t_0, t_f]$.

• The initial value problem can be rewritten as:
$$\frac{d}{dt}x(t) = f(x(t), t) \quad \forall\, t \in [t_0, t_f], \qquad x(t_0) = x_0$$
• By convention, the initial time, $t_0$, is often set to be zero.

• Since $f$ can be an arbitrary nonlinear function of the state vector, a closed-form, analytical solution rarely exists.

• Numerically, we will solve this equation by finding the state vector at a finite number of points within the time domain.

• We will need to characterize the accuracy and stability of solution methods for these problems.

5

Page 6: Dynamic Models

Dynamic Models

• Higher order differential equations can always be rewritten as systems of first order equations.

• Consider the force balance on a driven mass-spring-damper:
$$m\frac{d^2x}{dt^2} + b\frac{dx}{dt} + kx = f(t)$$
• Let: $v = \dfrac{dx}{dt}$

• Then: $\dfrac{dx}{dt} = v$

• And: $m\dfrac{dv}{dt} + bv + kx = f(t)$

6

Page 7: Dynamic Models

Dynamic Models

• Higher order differential equations can always be rewritten as systems of first order equations.

• Consider the force balance on a driven mass-spring-damper:
$$\frac{dx}{dt} = v, \qquad m\frac{dv}{dt} + bv + kx = f(t)$$
• Collecting a state vector, $\mathbf{x} = (x, v)^{\mathsf{T}}$, gives:
$$\frac{d}{dt}\underbrace{\begin{pmatrix} x(t) \\ v(t) \end{pmatrix}}_{\mathbf{x}} = \underbrace{\begin{pmatrix} v(t) \\ \big(f(t) - b\,v(t) - k\,x(t)\big)/m \end{pmatrix}}_{\mathbf{f}(\mathbf{x}(t),\,u(t),\,t)\ \text{or}\ \mathbf{f}(\mathbf{x}(t),\,t)} \tag{9}$$

$$\mathbf{x}(t_0) = \begin{pmatrix} x(t_0) \\ \dfrac{dx}{dt}(t_0) \end{pmatrix}. \tag{10}$$

2 Existence and Uniqueness

The existence of a unique solution to (3) follows if f is Lipschitz continuous. Precisely speaking, a function f(x, t) is said to be Lipschitz continuous in x over some domain of x and t if there exists some constant m so that
$$\|f(x, t) - f(z, t)\| \le m\,\|x - z\| \tag{11}$$
for all x and z in the domain, for some vector norm $\|\cdot\|$. Lipschitz continuity is slightly stronger than continuity, which only requires that
$$\lim_{x \to z} \|f(x, t) - f(z, t)\| \to 0. \tag{12}$$
The function is uniformly Lipschitz continuous over some time period if there exists a single constant m that applies for all x and z and all times t in the time period. The mean value theorem can be applied to show that uniform Lipschitz continuity of f(x, t) is implied if f(x, t) is continuously differentiable with respect to x and the derivative $\frac{\partial}{\partial x} f(x, t)$ has bounded norm over the domain of x [1].¹

A unique solution of (3) exists for all initial conditions x(t₀) within the domain and all times $t \in [t_0, t_f]$ if f(x, t) is uniformly Lipschitz continuous in x over the time period.

3 Finite Difference Approximations of Derivatives

ODEs are solved by replacing the derivatives with finite difference approximations to generate a system of algebraic equations. To introduce finite differences, consider the simplest forward difference approximation
$$\frac{d}{dt}x(t) \approx \frac{1}{\Delta t}\big(x(t+\Delta t) - x(t)\big), \tag{13}$$
the backward difference approximation
$$\frac{d}{dt}x(t) \approx \frac{1}{\Delta t}\big(x(t) - x(t-\Delta t)\big), \tag{14}$$
and the central difference approximation
$$\frac{d}{dt}x(t) \approx \frac{1}{2\Delta t}\big(x(t+\Delta t) - x(t-\Delta t)\big), \tag{15}$$
where Δt is a small positive scalar. The first two approximations are one-sided and first-order accurate, meaning that the size of the error is proportional to Δt, and the third approximation is two-sided and second-order accurate, which means that the error is proportional to (Δt)².

¹ $\left[\frac{\partial}{\partial x} f(x, t)\right]_{ij} := \dfrac{\partial f_i(x, t)}{\partial x_j}$.



7
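The state-space form above maps directly onto code. Below is a minimal MATLAB sketch of the right-hand-side function f(x, t) for the mass-spring-damper; the parameter values m, b, k, the sinusoidal forcing, and the initial condition are assumptions chosen only for illustration. Any of the time-marching schemes discussed later in the lecture can call this function.

% Mass-spring-damper written as a first-order system dx/dt = f(x, t).
% Parameter values and forcing below are illustrative assumptions.
m = 1.0; b = 0.5; k = 2.0;          % mass, damping coefficient, spring constant
force = @(t) sin(t);                 % driving force f(t) (assumed)

% State vector x = [position; velocity]
f = @(x, t) [ x(2);
              (force(t) - b*x(2) - k*x(1)) / m ];

x0 = [1; 0];                         % assumed initial position and velocity
dxdt0 = f(x0, 0);                    % slope of the state vector at t = 0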

Page 8: Existence and Uniqueness

Existence and Uniqueness

• Example:

• Use separation of variables to solve:
$$\frac{dx}{dt} = x^2, \qquad x(0) = x_0$$
• Does a solution exist for all times? Is the solution unique?
$$\frac{dx}{x^2} = dt \;\Rightarrow\; \left(\frac{1}{x_0} - \frac{1}{x(t)}\right) = t \;\Rightarrow\; x(t) = \frac{1}{\tfrac{1}{x_0} - t}$$

8

Page 9: Existence and Uniqueness

Existence and Uniqueness

• Example:

• Use separation of variables to solve:
$$\frac{dx}{dt} = x^3, \qquad x(0) = x_0$$
• Does a solution exist for all times? Is the solution unique?
$$\frac{dx}{x^3} = dt \;\Rightarrow\; \frac{1}{2}\left(\frac{1}{x_0^2} - \frac{1}{x(t)^2}\right) = t \;\Rightarrow\; x(t)^2 = \frac{1}{\tfrac{1}{x_0^2} - 2t}$$

9
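The analytical solutions above blow up in finite time (at t = 1/x₀ for dx/dt = x², and at t = 1/(2x₀²) for dx/dt = x³), so a solution does not exist for all times. A quick numerical check is the minimal MATLAB sketch below; the initial condition x₀ = 1 and the time span are assumptions chosen for illustration. A general-purpose integrator such as ode45 cannot step past the blow-up time and warns that it failed to meet its tolerances there.

% Numerical illustration of finite-time blow-up for dx/dt = x^2, x(0) = 1.
% The analytical solution x(t) = 1/(1 - t) diverges at t = 1.
f = @(t, x) x.^2;                 % note: ode45 expects f(t, x)
x0 = 1;                           % assumed initial condition
tspan = [0, 2];                   % deliberately extends past the blow-up time

[t, x] = ode45(f, tspan, x0);     % integration stalls near t = 1
fprintf('Integration stopped at t = %.4f\n', t(end));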

Page 10: Existence and Uniqueness

Existence and Uniqueness

• A unique solution exists if $f(x, t)$ is Lipschitz continuous.

• Lipschitz continuity within some domain $x, z \in D$ means that:
$$\|f(x, t) - f(z, t)\|_p \le m\,\|x - z\|_p$$
• This is stronger than regular continuity:
$$\lim_{x \to z} \|f(x, t) - f(z, t)\|_p \to 0$$
• For existence and uniqueness to be guaranteed, $f(x, t)$ needs to be Lipschitz continuous over the whole domain of $x$ and in the time domain of interest.

• Examples:

• Is $f(x) = x^2$ continuous? Is it uniformly Lipschitz continuous? (A short check follows this slide.)

• Is $f(x) = x$ continuous? Is it uniformly Lipschitz continuous?

10
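A brief check of the first example, written out here as a sketch since the slide only poses the question: $f(x) = x^2$ is continuous everywhere, but it is not uniformly Lipschitz continuous over all of $\mathbb{R}$, because
$$|f(x) - f(z)| = |x^2 - z^2| = |x + z|\,|x - z|,$$
and no single constant $m$ bounds $|x + z|$ for all $x, z \in \mathbb{R}$; on a bounded domain $|x|, |z| \le R$, however, $m = 2R$ works. By contrast, $f(x) = x$ satisfies $|f(x) - f(z)| = |x - z|$, so it is uniformly Lipschitz continuous with $m = 1$.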

Page 11: Finite Differences

Finite Differences

• One way to solve differential equations numerically is to approximate the derivatives and turn the differential equation into a sequence of algebraic equations.

• Finite differences are a typical method for this approximation:

• Forward difference: $\dfrac{d}{dt}x(t) \approx \dfrac{1}{\Delta t}\big(x(t+\Delta t) - x(t)\big)$

• Backward difference: $\dfrac{d}{dt}x(t) \approx \dfrac{1}{\Delta t}\big(x(t) - x(t-\Delta t)\big)$

• Central difference: $\dfrac{d}{dt}x(t) \approx \dfrac{1}{2\Delta t}\big(x(t+\Delta t) - x(t-\Delta t)\big)$


[Figure: the points $x(t-\Delta t)$, $x(t)$, and $x(t+\Delta t)$ on the time axis used by the difference formulas.]

11
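The three formulas are easy to try numerically. Below is a minimal MATLAB sketch comparing the forward, backward, and central difference approximations of a known derivative; the test function $x(t) = \sin t$, the evaluation point, and the step size are assumptions chosen for illustration.

% Compare finite difference approximations of d/dt sin(t) at t = 1.
x  = @(t) sin(t);           % test function (assumed)
dx = @(t) cos(t);           % exact derivative for reference
t  = 1.0;
dt = 1e-3;

fwd = (x(t + dt) - x(t)) / dt;          % first-order accurate
bwd = (x(t) - x(t - dt)) / dt;          % first-order accurate
ctr = (x(t + dt) - x(t - dt)) / (2*dt); % second-order accurate

fprintf('forward  error: %.2e\n', abs(fwd - dx(t)));
fprintf('backward error: %.2e\n', abs(bwd - dx(t)));
fprintf('central  error: %.2e\n', abs(ctr - dx(t)));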

Page 12: Finite Differences


[Figure: the points $x(t-\Delta t)$, $x(t)$, and $x(t+\Delta t)$ on the time axis used by the difference formulas.]

12

Page 13: Finite Differences

The order of accuracy of these expressions can be derived analytically by expanding x(t + Δt) and x(t − Δt) in Taylor series:
$$x(t+\Delta t) = x(t) + \Delta t\,\frac{d}{dt}x(t) + \frac{(\Delta t)^2}{2!}\frac{d^2}{dt^2}x(t) + \frac{(\Delta t)^3}{3!}\frac{d^3}{dt^3}x(t) + O\big((\Delta t)^4\big), \tag{16}$$
$$x(t-\Delta t) = x(t) - \Delta t\,\frac{d}{dt}x(t) + \frac{(\Delta t)^2}{2!}\frac{d^2}{dt^2}x(t) - \frac{(\Delta t)^3}{3!}\frac{d^3}{dt^3}x(t) + O\big((\Delta t)^4\big), \tag{17}$$
which are valid provided that x is sufficiently smooth. Insertion of these Taylor series into the finite difference approximations (13)-(15) gives, for the forward difference approximation,
$$\frac{d}{dt}x(t) \approx \frac{1}{\Delta t}\big(x(t+\Delta t) - x(t)\big) \tag{18}$$
$$= \frac{1}{\Delta t}\left(x(t) + \Delta t\,\frac{d}{dt}x(t) + \frac{(\Delta t)^2}{2!}\frac{d^2}{dt^2}x(t) + \frac{(\Delta t)^3}{3!}\frac{d^3}{dt^3}x(t) + O\big((\Delta t)^4\big) - x(t)\right) \tag{19}$$
$$= \frac{d}{dt}x(t) + \frac{\Delta t}{2}\frac{d^2}{dt^2}x(t) + \frac{(\Delta t)^2}{6}\frac{d^3}{dt^3}x(t) + O\big((\Delta t)^3\big), \tag{20}$$
for the backward difference approximation,
$$\frac{d}{dt}x(t) \approx \frac{1}{\Delta t}\big(x(t) - x(t-\Delta t)\big) \tag{21}$$
$$= \frac{1}{\Delta t}\left(x(t) - \left(x(t) - \Delta t\,\frac{d}{dt}x(t) + \frac{(\Delta t)^2}{2!}\frac{d^2}{dt^2}x(t) - \frac{(\Delta t)^3}{3!}\frac{d^3}{dt^3}x(t) + O\big((\Delta t)^4\big)\right)\right) \tag{22}$$
$$= \frac{d}{dt}x(t) - \frac{\Delta t}{2}\frac{d^2}{dt^2}x(t) + \frac{(\Delta t)^2}{6}\frac{d^3}{dt^3}x(t) + O\big((\Delta t)^3\big), \tag{23}$$
and for the central difference approximation,
$$\frac{d}{dt}x(t) \approx \frac{1}{2\Delta t}\big(x(t+\Delta t) - x(t-\Delta t)\big) \tag{24}$$
$$= \frac{1}{2\Delta t}\left(x(t) + \Delta t\,\frac{d}{dt}x(t) + \frac{(\Delta t)^2}{2!}\frac{d^2}{dt^2}x(t) + \frac{(\Delta t)^3}{3!}\frac{d^3}{dt^3}x(t) + O\big((\Delta t)^4\big)\right. \tag{25}$$
$$\left. \quad - \left(x(t) - \Delta t\,\frac{d}{dt}x(t) + \frac{(\Delta t)^2}{2!}\frac{d^2}{dt^2}x(t) - \frac{(\Delta t)^3}{3!}\frac{d^3}{dt^3}x(t) + O\big((\Delta t)^4\big)\right)\right) \tag{26}$$
$$= \frac{d}{dt}x(t) + \frac{(\Delta t)^2}{6}\frac{d^3}{dt^3}x(t) + O\big((\Delta t)^3\big). \tag{27}$$
The above expressions confirm that the forward and backward difference approximations are first-order accurate and the central difference approximation is second-order accurate, that is, the error
$$E(\Delta t) \propto (\Delta t)^p, \tag{28}$$
where the order of accuracy p is equal to 1, 1, and 2, respectively.

Finite difference approximations for various orders of derivatives and orders of accuracy are summarized in handbooks on numerical methods such as [3, 4]. Especially popular is the second-order accurate central difference approximation of the second-order derivative
$$\frac{d^2}{dt^2}x(t) \approx \frac{1}{(\Delta t)^2}\big(x(t+\Delta t) - 2x(t) + x(t-\Delta t)\big). \tag{29}$$


Finite Differences

• Taylor expansions can be used to evaluate the accuracy of finite difference approximations; see the forward and central difference expansions in equations (16)-(27) above.

13

Page 14: Finite Differences

Finite Differences

• The order of accuracy of a finite difference approximation is given by the leading-order error in the Taylor expansion.

• For example: the forward difference $\dfrac{d}{dt}x(t) \approx \dfrac{1}{\Delta t}\big(x(t+\Delta t) - x(t)\big)$ is said to be a first-order accurate approximation.

• If the error in the approximation is $E(\Delta t) \sim (\Delta t)^p$, then the approximation is pth-order accurate.

• The order of accuracy can be determined by calculating the error in the solution method after one step and plotting:
$$\log|E(\Delta t)| \approx \log c + p\,\log\Delta t$$


14
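The log-log test above is easy to run in practice. The minimal MATLAB sketch below evaluates the forward and central difference errors for several step sizes and estimates the slope p with a least-squares fit, which should come out near 1 and 2, respectively; the test function and the particular set of step sizes are assumptions chosen for illustration.

% Verify the order of accuracy of finite differences on x(t) = exp(t) at t = 0.
x   = @(t) exp(t);                 % test function (assumed); exact derivative is exp(t)
t0  = 0;
dts = 10.^(-1:-1:-4);              % step sizes; not too small, so roundoff does not dominate

errF = abs((x(t0 + dts) - x(t0)) ./ dts           - exp(t0));  % forward difference error
errC = abs((x(t0 + dts) - x(t0 - dts)) ./ (2*dts) - exp(t0));  % central difference error

pF = polyfit(log(dts), log(errF), 1);   % fitted slope ~ 1
pC = polyfit(log(dts), log(errC), 1);   % fitted slope ~ 2
fprintf('forward difference order ~ %.2f\n', pF(1));
fprintf('central difference order ~ %.2f\n', pC(1));

loglog(dts, errF, 'o-', dts, errC, 's-');
xlabel('\Delta t'); ylabel('|E(\Delta t)|'); legend('forward', 'central');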

Page 15: Explicit Methods for IVPs

Explicit Methods for IVPs

• Explicit (or Forward) Euler method:

• Approximate the derivative with forward differences:

• This gives a sequence of approximations for the solution at different time points:

As discussed in Section 1, finite difference approximations are not really needed for higher order derivatives since they can always be written in terms of first-order derivatives by defining additional state variables. Deciding whether to insert finite difference approximations for higher order derivatives into an IVP directly or to first rewrite the ODEs in terms of only first-order derivatives is a matter of convenience. Generic software for the numerical solution and dynamic analysis of ODEs almost always assumes that the model contains only first-order derivatives, so most users will take this route rather than write their own ODE simulation code. If the full ODE simulation code is being written from scratch for a system whose first-principles model naturally has higher order derivatives, such as in the mass-spring-damper example, then it is common to insert finite difference approximations directly in place of the higher order derivatives.

Methods for deriving finite difference approximations are described in textbooks on numerical methods and will not be covered here, since any approximation that you would be interested in applying can be looked up in the tables.

A simple way to numerically evaluate the error in any approximation is to plot the numerical error versus the discretization size:
$$E(\Delta t) \approx c(\Delta t)^p \tag{30}$$
$$\ln|E(\Delta t)| \approx \ln c + p\ln\Delta t \tag{31}$$
where p is the order of accuracy and c is a constant. On a log-log plot, the error versus Δt is a line with slope equal to the order of accuracy, which is a very good validation test for the correctness of the implementation of a numerical algorithm.

4 Explicit Methods for the Simulation of IVPs

As mentioned in Section 3, IVPs are simulated by inserting finite difference approximations of the derivatives and solving the resulting set of algebraic equations.

4.1 Explicit Euler Method

The simplest numerical algorithm for solving an IVP is to insert the forward difference approximation (13) into the ODE (3) to give
$$\frac{d}{dt}x(t) = f(x(t), t) \tag{32}$$
$$\frac{1}{\Delta t}\big(x(t+\Delta t) - x(t)\big) \approx f(x(t), t) \tag{33}$$
$$x(t+\Delta t) \approx x(t) + (\Delta t)f(x(t), t), \tag{34}$$
where Δt is a positive number (presumably small). This recursion is called the explicit Euler method or just simply the Euler method. Starting at time t = t₀, the numerical estimates of x(t) are
$$x(t_0) = x_0 \tag{35}$$
$$x(t_0 + \Delta t) \approx x(t_0) + (\Delta t)f(x(t_0), t_0) = x_0 + (\Delta t)f(x_0, t_0) \tag{36}$$
$$x(t_0 + 2\Delta t) \approx x(t_0 + \Delta t) + (\Delta t)f(x(t_0 + \Delta t), t_0 + \Delta t) \tag{37}$$
$$x(t_0 + 3\Delta t) \approx x(t_0 + 2\Delta t) + (\Delta t)f(x(t_0 + 2\Delta t), t_0 + 2\Delta t) \tag{38}$$
$$\vdots$$
$$x(t_0 + (k+1)\Delta t) \approx x(t_0 + k\Delta t) + (\Delta t)f(x(t_0 + k\Delta t), t_0 + k\Delta t) \tag{39}$$
$$\vdots$$
These equations have implicitly employed a mesh along the time axis, and implicitly assumed that Δt is the same between each time step (this assumption is removed later). The notation can be simplified by defining tₖ = t₀ + kΔt, with integer k = 0, 1, 2, ...,
$$x(t_0) = x_0 \tag{40}$$
$$x(t_1) \approx x(t_0) + (\Delta t)f(x(t_0), t_0) \tag{41}$$
$$x(t_2) \approx x(t_1) + (\Delta t)f(x(t_1), t_1) \tag{42}$$
$$x(t_3) \approx x(t_2) + (\Delta t)f(x(t_2), t_2) \tag{43}$$
$$\vdots$$
$$x(t_{k+1}) \approx x(t_k) + (\Delta t)f(x(t_k), t_k) \tag{44}$$
$$\vdots$$
This algorithm is a time-marching method, as each iteration produces the next state based only on past and current values of the state. The numerical algorithm is said to be explicit because the algebraic equations can be solved explicitly, that is, without resorting to iterative methods. Put simply, the resulting set of algebraic equations can be solved at each time step without requiring the simultaneous solution of coupled algebraic equations.

The explicit Euler method is first-order accurate, which is a result of using a first-order accurate approximation for the derivative in the ODE. Higher-order explicit methods are obtained by inserting forward difference approximations of higher-order accuracy.

Runge-Kutta is a class of popular higher-order methods that compute estimates of the states at time points in between the time points of interest. For example, a second-order explicit Runge-Kutta method is
$$x(t+\Delta t/2) = x(t) + \frac{\Delta t}{2}f(x(t), t) \tag{45}$$
$$x(t+\Delta t) = x(t) + (\Delta t)f(x(t+\Delta t/2), t+\Delta t/2). \tag{46}$$
The first equation estimates the state at the midpoint by Euler's method, which is used to estimate the slope over the entire time interval. A drawback of Runge-Kutta methods is their requirement of multiple function evaluations within each time interval. The function evaluation is usually the most expensive step in solving an IVP, so Runge-Kutta methods are rarely used in the simulation of IVPs in which computational considerations are important.

15

Page 16: Explicit Methods for IVPs


16

Page 17: Explicit Methods for IVPs


17

Page 18: Explicit Methods for IVPs

Explicit Methods for IVPs

• Explicit Euler method:

% Explicit (forward) Euler time-marching loop.
f = @(x, t) -x;          % right-hand side f(x,t); filled in with the dy/dt = -y example from the later slides
t0 = 0; tf = 1; dt = 0.01;
x0 = 1;                  % initial condition; 1 matches y(0) = 1 in that example
t = t0:dt:tf;            % uniform time mesh
x = zeros(length(x0), length(t));
x(:, 1) = x0;
for i = 2:length(t)
    x(:, i) = x(:, i-1) + dt * f(x(:, i-1), t(i-1));
end

18

Page 19: Explicit Methods for IVPs

• Explicit methods are termed explicit because the algebraic approximation to the IVP does not require a complicated solution method.

• Higher order explicit methods can be derived by incorporating information about the solution at intermediate or past time points.

• There are innumerable different methods by which this can be done. Some are more accurate, others are more stable, others still require fewer function evaluations.

• Example: explicit Runge-Kutta method

• uses information at the midpoint of the step

• requires twice as many function evaluations (a code sketch of one such midpoint step follows this slide)

Explicit Methods for IVPs


19
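As a concrete illustration of equations (45)-(46), here is a minimal MATLAB sketch of the second-order explicit Runge-Kutta (midpoint) time-marching loop; the right-hand side, step size, and initial condition are assumptions reused from the earlier Euler example.

% Second-order explicit Runge-Kutta (midpoint) method, eqs. (45)-(46).
f = @(x, t) -x;                      % right-hand side f(x,t) (assumed example)
t0 = 0; tf = 1; dt = 0.01;
x0 = 1;
t = t0:dt:tf;
x = zeros(length(x0), length(t));
x(:, 1) = x0;
for i = 2:length(t)
    xm = x(:, i-1) + (dt/2) * f(x(:, i-1), t(i-1));     % Euler half step to the midpoint, eq. (45)
    x(:, i) = x(:, i-1) + dt * f(xm, t(i-1) + dt/2);    % full step using the midpoint slope, eq. (46)
end

Each pass through the loop calls f twice, which is the doubled function-evaluation cost noted in the bullets above.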

Page 20: Explicit Methods for IVPs

Explicit Methods for IVPs

• Example:
$$\frac{dy}{dt} = -y, \qquad y(0) = 1, \qquad y(t) = e^{-t}$$
• Forward Euler:
$$y(t+\Delta t) = y(t) - \Delta t\, y(t) = (1 - \Delta t)\,y(t), \qquad y(\Delta t) = 1 - \Delta t$$
• Midpoint:
$$y(t+\Delta t/2) = y(t) - \frac{\Delta t}{2}\,y(t) = \left(1 - \frac{\Delta t}{2}\right)y(t)$$
$$y(t+\Delta t) = y(t) - \Delta t\, y(t+\Delta t/2) = \left(1 - \Delta t\left(1 - \frac{\Delta t}{2}\right)\right)y(t), \qquad y(\Delta t) = 1 - \Delta t + \frac{(\Delta t)^2}{2}$$

20

Page 21: Explicit Methods for IVPs

Explicit Methods for IVPs

• Example:
$$\frac{dy}{dt} = -y, \qquad y(0) = 1, \qquad y(t) = e^{-t}$$

[Figure: log-log plot of the error $|y(1) - e^{-1}|$ versus $\Delta t$ (from $10^{-7}$ to $10^{-1}$) for the Forward Euler and Midpoint methods.]

21
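The error plot on this slide can be reproduced with a short script. The sketch below (the particular set of step sizes is an assumption) integrates dy/dt = -y to t = 1 with both methods and plots |y(1) - e^{-1}| against Δt on log-log axes; the forward Euler curve should appear with slope 1 and the midpoint curve with slope 2.

% Error at t = 1 versus step size for forward Euler and the midpoint method,
% applied to dy/dt = -y, y(0) = 1 (exact solution y(t) = exp(-t)).
f   = @(y, t) -y;
dts = 2.^(-(2:14));                  % assumed set of step sizes (powers of 2, so 1/dt is an integer)
errEuler = zeros(size(dts));
errMid   = zeros(size(dts));

for j = 1:length(dts)
    dt = dts(j);
    n  = round(1/dt);                % number of steps to reach t = 1
    yE = 1; yM = 1;
    for i = 1:n
        t  = (i-1)*dt;
        yE = yE + dt*f(yE, t);                   % forward Euler
        ym = yM + (dt/2)*f(yM, t);               % midpoint: half step
        yM = yM + dt*f(ym, t + dt/2);            %           full step
    end
    errEuler(j) = abs(yE - exp(-1));
    errMid(j)   = abs(yM - exp(-1));
end

loglog(dts, errEuler, 'o-', dts, errMid, 's-');
xlabel('\Delta t'); ylabel('|y(1) - e^{-1}|');
legend('Forward Euler', 'Midpoint');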

Page 22: MIT OpenCourseWare

MIT OpenCourseWare
https://ocw.mit.edu

10.34 Numerical Methods Applied to Chemical Engineering, Fall 2015

For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.