Dynamic Programming

Jan 03, 2016

Transcript
Page 1: Dynamic Programming

Dynamic Programming

Page 2: Dynamic Programming

Dynamic Programming

• Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to sub-problems.

• “Programming” in this context refers to a tabular method, not to writing computer code.

• We typically apply dynamic programming to optimization problems.

• Such problems can have many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value.

Page 3: Dynamic Programming

Dynamic Programming

• When developing a dynamic-programming algorithm, we follow a sequence of four steps:

1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from computed information.

• If we need only the value of an optimal solution, and not the solution itself, then we can omit step 4.

Page 4: Dynamic Programming

Given some items, pack the knapsack to get the maximum total value. Each item has some weight and some value. The total weight that we can carry is no more than some fixed number W, so we must consider the weights of items as well as their values.

Item #   Weight   Value
  1        1        8
  2        3        6
  3        5        5

Knapsack problem

Page 5: Dynamic Programming

Knapsack problem

There are two versions of the problem:

1. “0-1 knapsack problem”
• Items are indivisible; you either take an item or you don’t. Some special instances can be solved with dynamic programming.

2. “Fractional knapsack problem”
• Items are divisible: you can take any fraction of an item.

Page 6: Dynamic Programming

• Given a knapsack with maximum capacity W, and a set S consisting of n items

• Each item i has some weight wi and benefit value bi (all wi and W are integer values)

• Problem: How to pack the knapsack to achieve maximum total value of packed items?

0-1 Knapsack problem

Page 7: Dynamic Programming

• Problem, in other words, is to find a subset T of the items that maximizes the total benefit Σ_{i∈T} bi subject to the weight constraint Σ_{i∈T} wi ≤ W.

0-1 Knapsack problem

The problem is called a “0-1” problem, because each item must be entirely accepted or rejected.

Page 8: Dynamic Programming

Let’s first solve this problem with a straightforward algorithm

• Since there are n items, there are 2^n possible combinations of items.

• We go through all combinations and find the one with maximum value and total weight less than or equal to W.

• Running time will be O(2^n).

0-1 Knapsack problem: brute-force approach
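As a concrete illustration (not part of the original slides), here is a minimal Python sketch of this brute-force enumeration; the (weight, benefit) data is the example used later in the deck.

from itertools import combinations

def knapsack_brute_force(items, W):
    # Try all 2^n subsets; items is a list of (weight, benefit) pairs.
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for w, b in subset)
            benefit = sum(b for w, b in subset)
            if weight <= W and benefit > best:
                best = benefit
    return best

print(knapsack_brute_force([(2, 3), (3, 4), (4, 5), (5, 6)], 5))  # -> 7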

Page 9: Dynamic Programming

• We can do better with an algorithm based on dynamic programming

• We need to carefully identify the subproblems

0-1 Knapsack problem: dynamic programming approach

Page 10: Dynamic Programming

• Given a knapsack with maximum capacity W, and a set S consisting of n items

• Each item i has some weight wi and benefit value bi (all wi and W are integer values)

• Problem: How to pack the knapsack to achieve maximum total value of packed items?

Defining a Subproblem

Page 11: Dynamic Programming

• We can do better with an algorithm based on dynamic programming

• We need to carefully identify the subproblems

Let’s try this: If items are labeled 1..n, then a subproblem would be to find an optimal solution for Sk = {items labeled 1, 2, ..., k}

Defining a Subproblem

Page 12: Dynamic Programming

If items are labeled 1..n, then a subproblem would be to find an optimal solution for Sk = {items labeled 1, 2, ..., k}

• This is a reasonable subproblem definition.

• The question is: can we describe the final solution (Sn ) in terms of subproblems (Sk)?

• Unfortunately, we can’t do that.

Defining a Subproblem

Page 13: Dynamic Programming

Max weight: W = 20

Item #   Weight (wi)   Benefit (bi)
  1          2              3
  2          4              5
  3          5              8
  4          3              4
  5          9             10

For S4, the optimal solution is {items 1, 2, 3, 4}:
Total weight: 14
Maximum benefit: 20

For S5, the optimal solution is {items 1, 2, 3, 5}:
Total weight: 20
Maximum benefit: 26

Solution for S4 is not part of the solution for S5!!!

Defining a Subproblem

Page 14: Dynamic Programming

• As we have seen, the solution for S4 is not part of the solution for S5

• So our definition of a subproblem is flawed and we need another one!

Defining a Subproblem

Page 15: Dynamic Programming

• Given a knapsack with maximum capacity W, and a set S consisting of n items

• Each item i has some weight wi and benefit value bi (all wi and W are integer values)

• Problem: How to pack the knapsack to achieve maximum total value of packed items?

Defining a Subproblem

Page 16: Dynamic Programming

• Let’s add another parameter: w, which will represent the maximum weight for each subset of items

• The subproblem then will be to compute V[k,w], i.e., to find an optimal solution for Sk = {items labeled 1, 2, .. k} in a knapsack of size w

Defining a Subproblem

Page 17: Dynamic Programming

• The subproblem will then be to compute V[k,w], i.e., to find an optimal solution for Sk = {items labeled 1, 2, .. k} in a knapsack of size w

• Assuming we know V[i, j] for i = 0, 1, 2, …, k-1 and j = 0, 1, 2, …, w, how do we derive V[k,w]?

Recursive Formula for subproblems

Page 18: Dynamic Programming

It means that the best subset of Sk that has total weight w is either:

1) the best subset of Sk-1 that has total weight w, or
2) the best subset of Sk-1 that has total weight w-wk, plus item k.

Recursive formula for subproblems:

V[k,w] = V[k-1,w]                                if wk > w
V[k,w] = max{ V[k-1,w], V[k-1,w-wk] + bk }       otherwise

Recursive Formula for subproblems (continued)

Page 19: Dynamic Programming

Recursive Formula

• The best subset of Sk that has total weight w either contains item k or it does not.

• First case: wk > w. Item k can’t be part of the solution, since if it was, the total weight would be > w, which is unacceptable.

• Second case: wk ≤ w. Then item k can be in the solution, and we choose the case with the greater value:

V[k,w] = V[k-1,w]                                if wk > w
V[k,w] = max{ V[k-1,w], V[k-1,w-wk] + bk }       otherwise

Page 20: Dynamic Programming

for w = 0 to W
    V[0,w] = 0

for i = 1 to n
    V[i,0] = 0

for i = 1 to n
    for w = 0 to W
        if wi <= w                            // item i can be part of the solution
            if bi + V[i-1,w-wi] > V[i-1,w]
                V[i,w] = bi + V[i-1,w-wi]
            else
                V[i,w] = V[i-1,w]
        else
            V[i,w] = V[i-1,w]                 // wi > w

0-1 Knapsack Algorithm
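For reference, a runnable Python sketch of the same bottom-up table computation (the function name knapsack_dp and the unused index-0 placeholders are choices made for this sketch, not part of the slides):

def knapsack_dp(w, b, W):
    # Bottom-up 0-1 knapsack; w[i], b[i] are the weight and benefit of item i (1-indexed).
    n = len(w) - 1                      # w[0] and b[0] are unused placeholders
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for cap in range(W + 1):
            if w[i] <= cap:             # item i can be part of the solution
                V[i][cap] = max(V[i - 1][cap], b[i] + V[i - 1][cap - w[i]])
            else:                       # item i does not fit
                V[i][cap] = V[i - 1][cap]
    return V

# Example data used on the following slides: n = 4, W = 5
V = knapsack_dp([0, 2, 3, 4, 5], [0, 3, 4, 5, 6], 5)
print(V[4][5])  # -> 7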

Page 21: Dynamic Programming

for w = 0 to W                       // O(W)
    V[0,w] = 0

for i = 1 to n                       // O(n)
    V[i,0] = 0

for i = 1 to n                       // repeated n times
    for w = 0 to W                   // O(W)
        < the rest of the code >

What is the running time of this algorithm?

O(n*W)

Remember that the brute-force algorithm takes O(2^n).

Running time

Page 22: Dynamic Programming

Let’s run our algorithm on the following data:

n = 4 (# of elements)
W = 5 (max weight)
Elements (weight, benefit): (2,3), (3,4), (4,5), (5,6)

Example

Page 23: Dynamic Programming

for w = 0 to W
    V[0,w] = 0

The first loop fills row i = 0 of the table with zeros: V[0,w] = 0 for w = 0..5.

Example (2)

Page 24: Dynamic Programming

for i = 1 to n
    V[i,0] = 0

The second loop fills column w = 0 with zeros: V[i,0] = 0 for i = 0..4.

Example (3)

Page 25: Dynamic Programming

if wi <= w                            // item i can be part of the solution
    if bi + V[i-1,w-wi] > V[i-1,w]
        V[i,w] = bi + V[i-1,w-wi]
    else
        V[i,w] = V[i-1,w]
else
    V[i,w] = V[i-1,w]                 // wi > w

Items: 1: (2,3), 2: (3,4), 3: (4,5), 4: (5,6)

i=1 (bi=3, wi=2), w=1, w-wi = -1: item 1 does not fit, so V[1,1] = V[0,1] = 0.

Example (4)

Page 26: Dynamic Programming

i=1 (bi=3, wi=2), w=2, w-wi = 0: V[1,2] = max(V[0,2], 3 + V[0,0]) = 3.

Example (5)

Page 27: Dynamic Programming

i=1 (bi=3, wi=2), w=3, w-wi = 1: V[1,3] = max(V[0,3], 3 + V[0,1]) = 3.

Example (6)

Page 28: Dynamic Programming

i=1 (bi=3, wi=2), w=4, w-wi = 2: V[1,4] = max(V[0,4], 3 + V[0,2]) = 3.

Example (7)

Page 29: Dynamic Programming

i=1 (bi=3, wi=2), w=5, w-wi = 3: V[1,5] = max(V[0,5], 3 + V[0,3]) = 3.

Row 1 of the table is now: 0 0 3 3 3 3.

Example (8)

Page 30: Dynamic Programming

i=2 (bi=4, wi=3), w=1, w-wi = -2: item 2 does not fit, so V[2,1] = V[1,1] = 0.

Example (9)

Page 31: Dynamic Programming

i=2 (bi=4, wi=3), w=2, w-wi = -1: item 2 does not fit, so V[2,2] = V[1,2] = 3.

Example (10)

Page 32: Dynamic Programming

i=2 (bi=4, wi=3), w=3, w-wi = 0: V[2,3] = max(V[1,3], 4 + V[1,0]) = 4.

Example (11)

Page 33: Dynamic Programming

i=2 (bi=4, wi=3), w=4, w-wi = 1: V[2,4] = max(V[1,4], 4 + V[1,1]) = 4.

Example (12)

Page 34: Dynamic Programming

i=2 (bi=4, wi=3), w=5, w-wi = 2: V[2,5] = max(V[1,5], 4 + V[1,2]) = max(3, 7) = 7.

Row 2 of the table is now: 0 0 3 4 4 7.

Example (13)

Page 35: Dynamic Programming

i=3 (bi=5, wi=4), w = 1..3: item 3 does not fit, so these entries are copied from row 2: V[3,1..3] = 0, 3, 4.

Example (14)

Page 36: Dynamic Programming

i=3 (bi=5, wi=4), w=4, w-wi = 0: V[3,4] = max(V[2,4], 5 + V[2,0]) = max(4, 5) = 5.

Example (15)

Page 37: Dynamic Programming

i=3 (bi=5, wi=4), w=5, w-wi = 1: V[3,5] = max(V[2,5], 5 + V[2,1]) = max(7, 5) = 7.

Row 3 of the table is now: 0 0 3 4 5 7.

Example (16)

Page 38: Dynamic Programming

i=4 (bi=6, wi=5), w = 1..4: item 4 does not fit, so these entries are copied from row 3: V[4,1..4] = 0, 3, 4, 5.

Example (17)

Page 39: Dynamic Programming

i=4 (bi=6, wi=5), w=5, w-wi = 0: V[4,5] = max(V[3,5], 6 + V[3,0]) = max(7, 6) = 7.

The completed table:

i\W   0   1   2   3   4   5
 0    0   0   0   0   0   0
 1    0   0   3   3   3   3
 2    0   0   3   4   4   7
 3    0   0   3   4   5   7
 4    0   0   3   4   5   7

Example (18)

Page 40: Dynamic Programming

• All of the information we need is in the table.

• V[n,W] is the maximal value of items that can be placed in the knapsack.

• To find the actual items, let i = n and k = W:

if V[i,k] ≠ V[i-1,k] then
    mark the ith item as in the knapsack
    i = i-1, k = k-wi
else
    i = i-1        // Assume the ith item is not in the knapsack
                   // Could it be in the optimally packed knapsack?

How to find actual Knapsack Items
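A Python sketch of this backtracking step (it assumes the table V produced by the knapsack_dp sketch shown earlier):

def knapsack_items(V, w, W):
    # Walk the table from V[n][W] back towards row 0, collecting the chosen items.
    chosen = []
    i, k = len(V) - 1, W
    while i > 0 and k > 0:
        if V[i][k] != V[i - 1][k]:      # item i is in the knapsack
            chosen.append(i)
            k -= w[i]
        i -= 1                          # in either case, move up one row
    return sorted(chosen)

V = knapsack_dp([0, 2, 3, 4, 5], [0, 3, 4, 5, 6], 5)
print(knapsack_items(V, [0, 2, 3, 4, 5], 5))  # -> [1, 2]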

Page 41: Dynamic Programming

i = n, k = W
while i, k > 0
    if V[i,k] ≠ V[i-1,k] then
        mark the ith item as in the knapsack
        i = i-1, k = k-wi
    else
        i = i-1

Items: 1: (2,3), 2: (3,4), 3: (4,5), 4: (5,6)

Start with i = 4, k = 5 (bi=6, wi=5): V[4,5] = 7 and V[3,5] = 7.

Finding the Items

Page 42: Dynamic Programming

i = 4, k = 5: V[4,5] = 7 = V[3,5], so item 4 is not in the knapsack; set i = 3.

Finding the Items (2)

Page 43: Dynamic Programming

i = 3, k = 5 (bi=5, wi=4): V[3,5] = 7 = V[2,5], so item 3 is not in the knapsack; set i = 2.

Finding the Items (3)

Page 44: Dynamic Programming

i = 2, k = 5 (bi=4, wi=3): V[2,5] = 7 ≠ V[1,5] = 3, so item 2 is in the knapsack; set i = 1 and k = k-wi = 2.

Finding the Items (4)

Page 45: Dynamic Programming

i = 1, k = 2 (bi=3, wi=2): V[1,2] = 3 ≠ V[0,2] = 0, so item 1 is in the knapsack; set i = 0 and k = k-wi = 0.

Finding the Items (5)

Page 46: Dynamic Programming

i = 0, k = 0: the loop stops.

The optimal knapsack should contain {1, 2}

Finding the Items (6)

Page 47: Dynamic Programming

The optimal knapsack should contain {1, 2}: total weight 2 + 3 = 5 ≤ W and total benefit 3 + 4 = 7 = V[4,5].

Finding the Items (7)

Page 48: Dynamic Programming

Memoization (Memory Function Method)

• Goal: solve only the subproblems that are necessary, and solve each of them only once.

• Memoization is another way to deal with overlapping subproblems in dynamic programming.

• With memoization, we implement the algorithm recursively:
  • If we encounter a new subproblem, we compute and store the solution.
  • If we encounter a subproblem we have seen, we look up the answer.

• Most useful when the algorithm is easiest to implement recursively, especially if we do not need solutions to all subproblems.

Page 49: Dynamic Programming

for i = 1 to n
    for w = 1 to W
        V[i,w] = -1

for w = 0 to W
    V[0,w] = 0

for i = 1 to n
    V[i,0] = 0

MFKnapsack(i, w)
    if V[i,w] < 0
        if w < wi
            value = MFKnapsack(i-1, w)
        else
            value = max(MFKnapsack(i-1, w),
                        bi + MFKnapsack(i-1, w-wi))
        V[i,w] = value
    return V[i,w]

0-1 Knapsack Memory Function Algorithm
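A Python sketch of the same memory-function approach; it uses a dictionary as the memo instead of a table initialized to -1, which is an implementation choice for this sketch rather than the slide’s:

def knapsack_memo(w, b, W):
    # Top-down 0-1 knapsack: only the subproblems actually reached are computed.
    memo = {}

    def mf(i, cap):
        if i == 0 or cap == 0:
            return 0
        if (i, cap) not in memo:
            if w[i] > cap:                            # item i does not fit
                memo[(i, cap)] = mf(i - 1, cap)
            else:
                memo[(i, cap)] = max(mf(i - 1, cap),
                                     b[i] + mf(i - 1, cap - w[i]))
        return memo[(i, cap)]

    return mf(len(w) - 1, W)

print(knapsack_memo([0, 2, 3, 4, 5], [0, 3, 4, 5, 6], 5))  # -> 7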

Page 50: Dynamic Programming

Matrix-chain multiplication

• We are given a sequence (chain) (A1,A2,…,An) of n matrices to be multiplied, and we wish to compute the product

A1 * A2 * … * An

• We can solve this by using the standard algorithm for multiplying pairs of matrices as a subroutine once we have parenthesized it to resolve all ambiguities in how the matrices are multiplied together.

• A product of matrices is fully parenthesized if it is either a single matrix or the product of two fully parenthesized matrix products, surrounded by parentheses.

Page 51: Dynamic Programming

Matrix-chain multiplication

• If we have four matrices A1, A2, A3, A4, then the product A1A2A3A4 can be fully parenthesized in five distinct ways: (A1(A2(A3A4))), (A1((A2A3)A4)), ((A1A2)(A3A4)), ((A1(A2A3))A4), and (((A1A2)A3)A4).

Page 52: Dynamic Programming

Standard Algorithm for Multiplication
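The slide itself is a figure; as a hedged sketch of the standard approach it refers to, multiplying a p x q matrix by a q x r matrix in the straightforward way uses p·q·r scalar multiplications:

def matrix_multiply(A, B):
    # Multiply a p x q matrix A by a q x r matrix B (lists of lists): p*q*r scalar multiplications.
    p, q, r = len(A), len(B), len(B[0])
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # -> [[19, 22], [43, 50]]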

Page 53: Dynamic Programming

Order of Multiplication

• Suppose we have 3 matrices A1, A2, A3 of sizes 10 x 100, 100 x 5, and 5 x 50 respectively.

• We have two options: ((A1 A2) A3) and (A1 (A2 A3)).

• For option 1 we do 10*100*5 = 5,000 plus 10*5*50 = 2,500, i.e. 7,500 operations (see the check below).

• For option 2 we do 100*5*50 = 25,000 plus 10*100*50 = 50,000, i.e. 75,000 operations.

• Option 1 is 10 times faster than option 2.
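The same arithmetic as a quick Python check (the list p simply holds the matrix dimensions from the slide):

p = [10, 100, 5, 50]                        # A1 is 10x100, A2 is 100x5, A3 is 5x50
cost_1 = p[0]*p[1]*p[2] + p[0]*p[2]*p[3]    # ((A1 A2) A3): 5000 + 2500 = 7500
cost_2 = p[1]*p[2]*p[3] + p[0]*p[1]*p[3]    # (A1 (A2 A3)): 25000 + 50000 = 75000
print(cost_1, cost_2)                       # -> 7500 75000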

Page 54: Dynamic Programming

matrix-chain multiplication problem

• Given a chain (A1,A2,…,An) of n matrices, where for i = 1,2,…,n, matrix Ai has dimension pi-1 x pi , fully parenthesize the product A1A2 … An in a way that minimizes the number of scalar multiplications.

Counting the number of parenthesizations

Page 55: Dynamic Programming

Applying dynamic programming

• Recall:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from computed information.

Page 56: Dynamic Programming

Step 1: The structure of an optimal parenthesization

• Let us use the notation Ai..j for the matrix that results from the product Ai Ai+1 … Aj

• An optimal parenthesization of the product A1A2…An splits the product between Ak and Ak+1 for some integer k where 1 ≤ k < n.

• First compute matrices A1..k and Ak+1..n ; then multiply them to get the final matrix A1..n.

• Example, k = 4: (A1A2A3A4)(A5A6). Total cost of A1..6 = cost of A1..4 plus cost of A5..6 plus the cost of multiplying these two matrices together.

Page 57: Dynamic Programming

Step 1: The structure of an optimal parenthesization

• Key observation: parenthesizations of the subchains A1A2…Ak and Ak+1Ak+2…An must also be optimal if the parenthesization of the chain A1A2…An is optimal (why?)

• That is, the optimal solution to the problem contains within it the optimal solution to subproblems

• We must ensure that when we search for the correct place to split the product, we have considered all possible places, so that we are sure of having examined the optimal one.

Page 58: Dynamic Programming

Step 1: The structure of an optimal parenthesization

Page 59: Dynamic Programming

Step 2: A recursive solution

• We define the cost of an optimal solution recursively in terms of the optimal solutions to subproblems.

• We have to find the minimum cost of computing Ai Ai+1 … Aj for 1 <= i <= j <= n.

• Let m[i, j] be the minimum number of scalar multiplications necessary to compute Ai..j.

• The minimum cost to compute A1..n is m[1, n].

• Suppose the optimal parenthesization of Ai..j splits the product between Ak and Ak+1 for some integer k where i ≤ k < j.

Page 60: Dynamic Programming

Step 2: A recursive solution

• We can define m[i, j] recursively as follows. If i = j, the problem is trivial: the chain consists of a single matrix, so m[i, i] = 0.

• To compute m[i, j] for i < j, we observe that
  Ai..j = (Ai Ai+1…Ak) · (Ak+1 Ak+2…Aj) = Ai..k · Ak+1..j

• Cost of computing Ai..j = cost of computing Ai..k + cost of computing Ak+1..j + cost of multiplying Ai..k and Ak+1..j.

• Cost of multiplying Ai..k and Ak+1..j is pi-1 pk pj.

• So m[i, j] = m[i, k] + m[k+1, j] + pi-1 pk pj for some i ≤ k < j, and m[i, i] = 0 for i = 1, 2, …, n.

Page 61: Dynamic Programming

Step 2: A recursive solution

• But… optimal parenthesization occurs at one value of k among all possible i ≤ k < j

• Check all these and select the best one

m[i, j] = 0                                                            if i = j
m[i, j] = min over i ≤ k < j of { m[i, k] + m[k+1, j] + pi-1 pk pj }   if i < j

• To keep track of how to construct an optimal solution, we use a table s

• s[i, j ] = value of k at which Ai Ai+1 … Aj is split for optimal parenthesization

Page 62: Dynamic Programming

Step 3: Computing the optimal costs

Input: Array p[0…n] containing matrix dimensions and n
Result: Minimum-cost table m and split table s

MATRIX-CHAIN-ORDER(p[ ], n)
    for i ← 1 to n
        m[i, i] ← 0
    for l ← 2 to n
        for i ← 1 to n-l+1
            j ← i+l-1
            m[i, j] ← ∞
            for k ← i to j-1
                q ← m[i, k] + m[k+1, j] + p[i-1] p[k] p[j]
                if q < m[i, j]
                    m[i, j] ← q
                    s[i, j] ← k
    return m and s

Takes O(n^3) time
Requires O(n^2) space
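A Python sketch of MATRIX-CHAIN-ORDER; m and s are kept 1-indexed (row and column 0 unused) to mirror the pseudocode above:

import math

def matrix_chain_order(p):
    # p[0..n] holds the dimensions; matrix Ai is p[i-1] x p[i].
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: minimum scalar multiplications
    s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: split point k that achieves m[i][j]
    for l in range(2, n + 1):                   # l = chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

# The three matrices from the earlier slide: A1 (10x100), A2 (100x5), A3 (5x50)
m, s = matrix_chain_order([10, 100, 5, 50])
print(m[1][3], s[1][3])  # -> 7500 2, i.e. split after A2: ((A1 A2) A3)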

Page 63: Dynamic Programming

Step 3: Computing the optimal costs

• Computing the optimal costs

• How many subproblems are there in total?

• One for each choice of i and j satisfying 1 ≤ i ≤ j ≤ n

• Θ(n^2)

• MATRIX-CHAIN-ORDER(p)

• Input: a sequence p = <p0, p1, p2, …, pn> (length[p] = n+1)

• Try to fill in the table m in a manner that corresponds to solving the parenthesization problem on matrix chains of increasing length

• Lines 4-12: compute m[i, i+1], m[i, i+2], … each time

Page 64: Dynamic Programming

Step 3: Computing the optimal costs

Page 65: Dynamic Programming

Step 3: Computing the optimal costs

Page 66: Dynamic Programming

Step 3: Computing the optimal costs

Page 67: Dynamic Programming

Step 4: Constructing an optimal solution

• Constructing an optimal solution

• Each entry s[i, j] records the value of k such that the optimal parenthesization of Ai Ai+1…Aj splits the product between Ak and Ak+1

• A1..n → A1..s[1,n] · As[1,n]+1..n

• A1..s[1,n] → A1..s[1,s[1,n]] · As[1,s[1,n]]+1..s[1,n]

• Recursive…

Page 68: Dynamic Programming

Step 4: Constructing an optimal solution

• Constructing an optimal solution

• Example chain: A1 A2 A3 A4 A5 A6

Page 69: Dynamic Programming

Matrix Chain Multiply

Algorithm Matrix-Chain-Multiply(A, s, i, j)
    if j > i then
        x = Matrix-Chain-Multiply(A, s, i, s[i, j])
        y = Matrix-Chain-Multiply(A, s, s[i, j]+1, j)
        return Matrix-Multiply(x, y)
    else
        return Ai
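A small Python sketch that mirrors this recursive structure, using the split table s from the matrix_chain_order sketch above to print the optimal parenthesization (printing instead of actually multiplying, so the result is easy to inspect):

def print_optimal_parens(s, i, j):
    # Return the optimal parenthesization of Ai..Aj as a string, using split table s.
    if i == j:
        return f"A{i}"
    k = s[i][j]
    left = print_optimal_parens(s, i, k)
    right = print_optimal_parens(s, k + 1, j)
    return f"({left} {right})"

m, s = matrix_chain_order([10, 100, 5, 50])
print(print_optimal_parens(s, 1, 3))  # -> ((A1 A2) A3)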

Page 70: Dynamic Programming

Optimal binary search trees

• Given a sequence K = k1, k2, …, kn of n distinct keys, sorted (k1 < k2 < … < kn).

• We want to build a binary search tree from the keys.

• For each ki, we have a probability pi that a search is for ki.

• We want the BST with minimum expected search cost (a sketch of the standard DP follows below).
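The remaining slides in this section are figures; as a hedged illustration, here is a minimal Python sketch of the standard O(n^3) dynamic program for this problem, simplified to successful searches only (no dummy-key probabilities), with made-up probability values:

import math

def optimal_bst(p):
    # p[1..n] are the search probabilities of the sorted keys k1 < ... < kn; p[0] is unused.
    n = len(p) - 1
    e = [[0.0] * (n + 2) for _ in range(n + 2)]    # e[i][j]: minimum expected cost for keys ki..kj
    root = [[0] * (n + 1) for _ in range(n + 1)]   # root[i][j]: root chosen for keys ki..kj
    for length in range(1, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            e[i][j] = math.inf
            weight = sum(p[i:j + 1])               # total probability of keys ki..kj
            for r in range(i, j + 1):              # try each key kr as the root
                cost = e[i][r - 1] + e[r + 1][j] + weight
                if cost < e[i][j]:
                    e[i][j] = cost
                    root[i][j] = r
    return e, root

e, root = optimal_bst([0, 0.25, 0.2, 0.05, 0.2, 0.3])   # hypothetical probabilities
print(e[1][5], root[1][5])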

Page 71: Dynamic Programming

Optimal binary search trees: Example

Page 72: Dynamic Programming

Example

Page 73: Dynamic Programming

Optimal substructure

Page 74: Dynamic Programming

Optimal substructure (1)

Page 75: Dynamic Programming

Recursive solution

Page 76: Dynamic Programming

Recursive solution (1)

Page 77: Dynamic Programming

Computing an optimal solution

Page 78: Dynamic Programming

Computing an optimal solution(1)

Page 79: Dynamic Programming

Computing an optimal solution(2)

Page 80: Dynamic Programming

Computing an optimal solution(3)

Page 81: Dynamic Programming

Construct an optimal solution

Page 82: Dynamic Programming

Longest common subsequence

• Biological applications often need to compare the DNA of two (or more) different organisms.

• A strand of DNA consists of a string of molecules called bases, where the possible bases are adenine, guanine, cytosine, and thymine. Representing each of these bases by its initial letter, we can express a strand of DNA as a string over the finite set {A, C, G, T}.

• For example, the DNA of one organism may be S1 = ACCGGTCGAGTGCGCGGAAGCCGGCCGAA, and the DNA of another organism may be S2 = GTCGTTCGGAATGCCGTTGCTCTGTAAA.

• A longest common subsequence of these two strands is GTCGTCGGAAGCCGGCCGAA.

Page 83: Dynamic Programming

Longest common subsequence

• Formally, given a sequence X = {x1, x2, …, xm}, another sequence Z = {z1, z2, …, zk} is a subsequence of X if there exists a strictly increasing sequence {i1, i2, …, ik} of indices of X such that for all j = 1, 2, …, k, we have xij = zj.

• For example, Z = {B, C, D, B} is a subsequence of X = {A, B, C, B, D, A, B} with corresponding index sequence {2, 3, 5, 7}.

• Given two sequences X and Y, we say that a sequence Z is a common subsequence of X and Y if Z is a subsequence of both X and Y.

• For X = {A, B, C, B, D, A, B} and Y = {B, D, C, A, B, A}, a longest common subsequence (LCS) is {B, C, B, A}.

Page 84: Dynamic Programming

Longest-common-subsequence problem

• We are given two sequences X = {x1, x2, …, xm} and Y = {y1, y2, …, yn} and wish to find a maximum-length common subsequence of X and Y.

Page 85: Dynamic Programming

Step 1: Characterizing a longest common subsequence

To be precise, given a sequence X = {x1, x2, …, xm}, we define the ith prefix of X, for i = 0, 1, …, m, as Xi = {x1, x2, …, xi}. For example, if X = {A, B, C, B, D, A, B}, then X4 = {A, B, C, B} and X0 is the empty sequence.

Page 86: Dynamic Programming

Step 2: A recursive solution

• We can readily see the overlapping-subproblems property in the LCS problem.

• To find an LCS of X and Y, we may need to find the LCSs of X and Yn-1 and of Xm-1 and Y. But each of these subproblems has the sub-subproblem of finding an LCS of Xm-1 and Yn-1. Many other subproblems share sub-subproblems.

Page 87: Dynamic Programming

Step 3: Computing the length of an LCS

• Procedure LCS-LENGTH takes two sequences X = {x1, x2, …, xm} and Y = {y1, y2, …, yn} as inputs. It stores the c[i, j] values in a table c[0..m, 0..n].

• It computes the entries in row-major order.

• The procedure also maintains the table b[1..m, 1..n] to help us construct an optimal solution.
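The slides that follow show LCS-LENGTH as figures; here is a minimal Python sketch of the procedure described above (the string labels "diag", "up", and "left" stand in for the arrows stored in table b):

def lcs_length(X, Y):
    # c[i][j] = length of an LCS of X[:i] and Y[:j]; b[i][j] records which case produced it.
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[""] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):                    # row-major order
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
                b[i][j] = "diag"
            elif c[i - 1][j] >= c[i][j - 1]:
                c[i][j] = c[i - 1][j]
                b[i][j] = "up"
            else:
                c[i][j] = c[i][j - 1]
                b[i][j] = "left"
    return c, b

c, b = lcs_length("ABCBDAB", "BDCABA")
print(c[7][6])  # -> 4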

Page 88: Dynamic Programming

Step 3: Computing the length of an LCS

Page 89: Dynamic Programming

Step 3: Computing the length of an LCS

Page 90: Dynamic Programming

Step 4: Constructing an LCS
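The construction step is shown as a figure on the slides; a sketch of it, walking the b table produced by the lcs_length sketch under Step 3:

def lcs_from(b, X, i, j):
    # Walk the b table backwards from (i, j) to recover an LCS of X[:i] and Y[:j].
    if i == 0 or j == 0:
        return ""
    if b[i][j] == "diag":                        # X[i-1] is part of the LCS
        return lcs_from(b, X, i - 1, j - 1) + X[i - 1]
    if b[i][j] == "up":
        return lcs_from(b, X, i - 1, j)
    return lcs_from(b, X, i, j - 1)

c, b = lcs_length("ABCBDAB", "BDCABA")
print(lcs_from(b, "ABCBDAB", 7, 6))  # -> BCBA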