CSE 326: Data Structures Introduction & Part One: Complexity Henry Kautz Autumn Quarter 2002.



Overview of the Quarter

• Part One: Complexity
  – inductive proofs of program correctness
  – empirical and asymptotic complexity
  – order of magnitude notation; logs & series
  – analyzing recursive programs
• Part Two: List-like data structures
• Part Three: Sorting
• Part Four: Search Trees
• Part Five: Hash Tables
• Part Six: Heaps and Union/Find
• Part Seven: Graph Algorithms
• Part Eight: Advanced Topics

Material for Part One

• Weiss Chapters 1 and 2
• Additional material

– Graphical analysis

– Amortized analysis

– Stretchy arrays

• Any questions on course organization?

Program Analysis

• Correctness
  – Testing
  – Proofs of correctness
• Efficiency
  – How to define?
  – Asymptotic complexity - how running time scales as a function of size of input

Proving Programs Correct

• Often takes the form of an inductive proof
• Example: summing an array

int sum(int v[], int n)

{

if (n==0) return 0;

else return v[n-1]+sum(v,n-1);

}

What are the parts of an inductive proof?

Inductive Proof of Correctness

int sum(int v[], int n)

{

if (n==0) return 0;

else return v[n-1]+sum(v,n-1);

}

Theorem: sum(v,n) correctly returns sum of 1st n elements of array v for any n.

Basis Step: Program is correct for n=0; returns 0.

Inductive Hypothesis (n=k): Assume sum(v,k) returns sum of first k elements of v.

Inductive Step (n=k+1): sum(v,k+1) returns v[k]+sum(v,k), which by the inductive hypothesis is the sum of the first k+1 elements of v.

Proof by Contradiction

• Assume negation of goal, show this leads to a contradiction

• Example: there is no program that solves the “halting problem”
  – Determines if any other program runs forever or not

Alan Turing, 1937

Program NonConformist (Program P)
    If ( HALT(P) = “never halts” ) Then
        Halt
    Else
        Do While (1 > 0)
            Print “Hello!”
        End While
    End If
End Program

• Does NonConformist(NonConformist) halt?
• Yes? That means HALT(NonConformist) = “never halts”
• No? That means HALT(NonConformist) = “halts”

Contradiction!


Defining Efficiency

• Asymptotic Complexity - how running time scales as a function of size of input
• Why is this a reasonable definition?
  – Many kinds of small problems can be solved in practice by almost any approach
    • E.g., exhaustive enumeration of possible solutions
    • Want to focus efficiency concerns on larger problems
  – Definition is independent of any possible advances in computer technology

Technology-Dependent Efficiency

• Drum Computers: Popular technology from early 1960’s

• Transistors too costly to use for RAM, so memory was kept on a revolving magnetic drum

• An efficient program scattered instructions on the drum so that the next instruction to execute was under the read head just when it was needed
  – Minimized number of full revolutions of drum during execution

The Apocalyptic Laptop

Seth Lloyd, SCIENCE, 31 Aug 2000

Energy consumption: E = mc^2 = 25 million megawatt-hours

Speed, from quantum mechanics: switching speed = h / (2 × Energy)
(h is Planck’s constant)

→ 5.4 × 10^50 operations per second

[Chart (log/log axes; N from 1 to 1000, operation counts from 1 to 10^60): growth of 2^N, 1.2^N, N^5, N^3, and 5N, against reference lines for the Ultimate Laptop running 1 second and 1 year, and a 1000 MIPS machine running 1 day and since the Big Bang.]

Defining Efficiency

• Asymptotic Complexity - how running time scales as a function of size of input
• What is “size”?
  – Often: length (in characters) of input
  – Sometimes: value of input (if input is a number)
• Which inputs?
  – Worst case
    • Advantages / disadvantages?
  – Best case
    • Why?

Average Case Analysis

• More realistic analysis, first attempt:
  – Assume inputs are randomly distributed according to some “realistic” distribution
  – Compute expected running time

        E[T(n)] = Σ_{x ∈ Inputs(n)} Prob(x) · RunTime(x)

  – Drawbacks
    • Often hard to define realistic random distributions
    • Usually hard to perform math

Amortized Analysis

• Instead of a single input, consider a sequence of inputs
• Choose worst possible sequence
• Determine average running time on this sequence
• Advantages

– Often less pessimistic than simple worst-case analysis

– Guaranteed results - no assumed distribution

– Usually mathematically easier than average case analysis

Comparing Runtimes

• Program A is asymptotically less efficient than program B iff the runtime of A dominates the runtime of B, as the size of the input goes to infinity:

        RunTime(A, n) / RunTime(B, n) → ∞ as n → ∞

• Note: RunTime can be “worst case”, “best case”, “average case”, “amortized case”

Which Function Dominates?

    n^3 + 2n^2           100n^2 + 1000
    n^0.1                log n
    n + 100n^0.1         2n + 10 log n
    5n^5                 n!
    n^(-15) 2^n / 100    1000n^15
    8^(2 log n)          3n^7 + 7n

Race I:   n^3 + 2n^2        vs.  100n^2 + 1000
Race II:  n^0.1             vs.  log n
Race III: n + 100n^0.1      vs.  2n + 10 log n
Race IV:  5n^5              vs.  n!
Race V:   n^(-15) 2^n / 100 vs.  1000n^15
Race VI:  8^(2 log n)       vs.  3n^7 + 7n


Order of Magnitude Notation (big O)

• Asymptotic Complexity - how running time scales as a function of size of input
  – We usually only care about order of magnitude of scaling
• Why?
  – As we saw, some functions overwhelm other functions
    • So if running time is a sum of terms, can drop dominated terms
  – “True” constant factors depend on details of compiler and hardware
    • Might as well make constant factor 1

• Eliminate low order terms

• Eliminate constant coefficients

Example:
    16 n^3 log_8(10n) + 100 n^2
      → 16 n^3 log_8(10n)             (eliminate low-order term)
      → n^3 log_8(10n)                (eliminate constant coefficient)
      = n^3 (log_8(10) + log_8(n))    (log of a product)
      → n^3 log_8(n)                  (eliminate low-order term)
      = n^3 (log(n) / log(8))         (change of base)
      → n^3 log(n)                    (eliminate constant coefficient)

Thus: 16 n^3 log_8(10n) + 100 n^2 = O(n^3 log(n))

Common Names

Slowest Growth
    constant:    O(1)
    logarithmic: O(log n)
    linear:      O(n)
    log-linear:  O(n log n)
    quadratic:   O(n^2)
    exponential: O(c^n)   (c is a constant > 1)
Fastest Growth

    superlinear: O(n^c)   (c is a constant > 1)
    polynomial:  O(n^c)   (c is a constant > 0)

Summary

• Proofs by induction and contradiction
• Asymptotic complexity
• Worst case, best case, average case, and amortized asymptotic complexity
• Dominance of functions
• Order of magnitude notation
• Next:
  – Part One: Complexity, continued
  – Read Chapters 1 and 2

Part One: Complexity, continued

Friday, October 4th, 2002


Determining the Complexity of an Algorithm

• Empirical measurement
• Formal analysis (i.e. proofs)
• Question: what are likely advantages and drawbacks of each approach?
  – Empirical:
    • pro: discover if constant factors are significant
    • con: may be running on “wrong” inputs
  – Formal:
    • pro: no interference from implementation/hardware details
    • con: can make mistake in a proof!

In theory, theory is the same as practice, but not in practice.

Measuring Empirical Complexity: Linear vs. Binary Search

• Find an item in a sorted array of length N
• Binary search algorithm:

                        Linear Search    Binary Search
Time to find one item:
Time to find N items:

My C Code

void lfind(int x, int a[], int n)
{   for (i=0; i<n; i++)
        if (a[i] == x)
            return; }

void bfind(int x, int a[], int n)
{   m = n / 2;
    if (x == a[m]) return;
    if (x < a[m])
        bfind(x, a, m);
    else
        bfind(x, &a[m+1], n-m-1); }

for (i=0; i<n; i++) a[i] = i;
for (i=0; i<n; i++) lfind(i,a,n);    /* or bfind */

Graphical Analysis

[Figure: log/log plots of the measured running times; the iterated lfind curve has slope 2, the iterated bfind curve slope 1.]

Property of Log/Log Plots

• On a linear plot, a linear function is a straight line
• On a log/log plot, any polynomial function is a straight line!
  – The slope Δy/Δx is the same as the exponent

Proof: Suppose y = c x^k
Then    log y = log(c x^k)
        log y = log c + log x^k
        log y = log c + k log x

(log x on the horizontal axis, log y on the vertical axis: a straight line with slope k)

Why does O(n log n) look like a straight line?

Summary

• Empirical and formal analyses of runtime scaling are both important techniques in algorithm development

• Large data sets may be required to gain an accurate empirical picture

• Log/log plots provide a fast and simple visual tool for estimating the exponent of a polynomial function

Formal Asymptotic Analysis

• In order to prove complexity results, we must make the notion of “order of magnitude” more precise

• Asymptotic bounds on runtime
  – Upper bound

– Lower bound

Definition of Order Notation

• Upper bound: T(n) = O(f(n))        Big-O
  Exist constants c and n' such that
      T(n) ≤ c f(n) for all n ≥ n'

• Lower bound: T(n) = Ω(g(n))        Omega
  Exist constants c and n' such that
      T(n) ≥ c g(n) for all n ≥ n'

• Tight bound: T(n) = θ(f(n))        Theta
  When both hold:
      T(n) = O(f(n))
      T(n) = Ω(f(n))

Example: Upper Bound

Claim: n^2 + 100n = O(n^2)

Proof: Must find c, n' such that for all n ≥ n',
        n^2 + 100n ≤ c n^2
Let's try setting c = 2. Then
        n^2 + 100n ≤ 2n^2
        100n ≤ n^2
        100 ≤ n
So we can set n' = 100 and reverse the steps above.

Using a Different Pair of Constants

Claim: n^2 + 100n = O(n^2)

Proof: Must find c, n' such that for all n ≥ n',
        n^2 + 100n ≤ c n^2
Let's try setting c = 101. Then
        n^2 + 100n ≤ 101n^2
        100n ≤ 100n^2
        1 ≤ n          (divide both sides by 100n)
So we can set n' = 1 and reverse the steps above.

Example: Lower Bound

Claim: n^2 + 100n = Ω(n^2)

Proof: Must find c, n' such that for all n ≥ n',
        n^2 + 100n ≥ c n^2
Let's try setting c = 1. Then
        n^2 + 100n ≥ n^2
        100n ≥ 0
So we can set n' = 0 and reverse the steps above.
Thus we can also conclude n^2 + 100n = θ(n^2)

Conventions of Order Notation

Order notation is not symmetric: write
    n^2 + 2n = O(n^2)
but never
    O(n^2) = n^2 + 2n

The expression O(f(n)) = O(g(n)) is equivalent to
    f(n) = O(g(n))
The expression Ω(f(n)) = Ω(g(n)) is equivalent to
    f(n) = Ω(g(n))

The right-hand side is a "cruder" version of the left:
    18 n^2 log n = O(n^3) = O(2^n)
    18 n^2 log n = Ω(n^2) = Ω(n)

Upper/Lower vs. Worst/Best

• Worst case upper bound is f(n)
  – Guarantee that run time is no more than c f(n)
• Best case upper bound is f(n)
  – If you are lucky, run time is no more than c f(n)
• Worst case lower bound is g(n)
  – If you are unlucky, run time is at least c g(n)
• Best case lower bound is g(n)
  – Guarantee that run time is at least c g(n)

Analyzing Code

• primitive operations
• consecutive statements
• function calls
• conditionals
• loops
• recursive functions


Conditionals

• Conditional
    if C then S1 else S2
• Suppose you are doing a O( ) analysis?
    Time(C) + Max(Time(S1), Time(S2))
    or Time(C) + Time(S1) + Time(S2)
    or …
• Suppose you are doing a Ω( ) analysis?
    Time(C) + Min(Time(S1), Time(S2))
    or Time(C)
    or …


Nested Loops

for i = 1 to n do

for j = 1 to n do

sum = sum + 1

Σ_{i=1..n} Σ_{j=1..n} 1 = Σ_{i=1..n} n = n · n = n^2


Nested Dependent Loops

for i = 1 to n do

for j = i to n do

sum = sum + 1

Σ_{i=1..n} Σ_{j=i..n} 1 = ?

Summary

• Formal definition of order of magnitude notation
• Proving upper and lower asymptotic bounds on a function
• Formal analysis of conditionals and simple loops
• Next:
  – Analyzing complex loops
  – Mathematical series
  – Analyzing recursive functions

Part One: Complexity,Continued

Monday October 7, 2002

Today’s Material

• Running time of nested dependent loops
• Mathematical series
• Formal analysis of linear search
• Formal analysis of binary search
• Solving recursive equations
• Stretchy arrays and the Stack ADT
• Amortized analysis


Nested Dependent Loops

for i = 1 to n do

for j = i to n do

sum = sum + 1

Σ_{i=1..n} Σ_{j=i..n} 1 = Σ_{i=1..n} (n − i + 1) = Σ_{i=1..n} i = n(n+1)/2

Arithmetic Series

S(N) = Σ_{i=1..N} i = 1 + 2 + … + N = ?

• Note that: S(1) = 1, S(2) = 3, S(3) = 6, S(4) = 10, …
• Hypothesis: S(N) = N(N+1)/2

Prove by induction
  – Base case: for N = 1, S(N) = 1(2)/2 = 1
  – Assume true for N = k
  – Suppose N = k+1:
    S(k+1) = S(k) + (k+1) = k(k+1)/2 + (k+1) = (k+1)(k/2 + 1) = (k+1)(k+2)/2.

Other Important Series

• Sum of squares:
    Σ_{i=1..N} i^2 = N(N+1)(2N+1)/6 ≈ N^3/3 for large N
• Sum of exponents:
    Σ_{i=1..N} i^k ≈ N^(k+1)/|k+1| for large N and k ≠ -1
• Geometric series:
    Σ_{i=0..N} A^i = (A^(N+1) − 1)/(A − 1)
• Novel series:
  – Reduce to known series, or prove inductively

Nested Dependent Loops

for i = 1 to n do
    for j = i to n do
        sum = sum + 1

Σ_{i=1..n} Σ_{j=i..n} 1 = Σ_{i=1..n} (n − i + 1) = n(n+1)/2 = n^2/2 + n/2 = θ(n^2)

Linear Search Analysis

• Best case, tight analysis:

• Worst case, tight analysis:

void lfind(int x, int a[], int n)

{ for (i=0; i<n; i++)

if (a[i] == x)

return; }


Iterated Linear Search Analysis

• Easy worst-case upper-bound:
• Worst-case tight analysis:
  – Just multiplying worst case by n does not justify answer, since each time lfind is called i is specified

for (i=0; i<n; i++) a[i] = i;

for (i=0; i<n; i++) lfind(i,a,n);

Easy upper bound: n · O(n) = O(n^2)

Tight: Σ_{i=1..n} Σ_{j=1..i} 1 = Σ_{i=1..n} i = n(n+1)/2 = θ(n^2)

Analyzing Recursive Programs

1. Express the running time T(n) as a recursive equation
2. Solve the recursive equation
   • For an upper-bound analysis, you can optionally simplify the equation to something larger
   • For a lower-bound analysis, you can optionally simplify the equation to something smaller


Binary Search

void bfind(int x, int a[], int n)

{ m = n / 2;

if (x == a[m]) return;

if (x < a[m])

bfind(x, a, m);

else

bfind(x, &a[m+1], n-m-1); }

What is the worst-case upper bound?

Trick question: if x is not in the array, bfind as written never returns — it has no base case.

Binary Search

void bfind(int x, int a[], int n)

{ m = n / 2;

if (n <= 1) return;

if (x == a[m]) return;

if (x < a[m])

bfind(x, a, m);

else

bfind(x, &a[m+1], n-m-1); }

Okay, let’s prove it is O(log n)…

Binary Search

Introduce some constants…

b = time needed for base case

c = time needed to get ready to do a recursive call

Running time is thus:

void bfind(int x, int a[], int n)

{ m = n / 2;

if (n <= 1) return;

if (x == a[m]) return;

if (x < a[m])

bfind(x, a, m);

else

bfind(x, &a[m+1], n-m-1); }

T(1) = b
T(n) = T(n/2) + c

Binary Search Analysis

One sub-problem, half as large

Equation:  T(1) ≤ b
           T(n) ≤ T(n/2) + c        for n > 1

Solution:
  T(n) ≤ T(n/2) + c                 write equation
       ≤ T(n/4) + c + c             expand
       ≤ T(n/8) + c + c + c
       ≤ T(n/2^k) + kc              inductive leap
       ≤ T(1) + c log n             where k = log n
       ≤ b + c log n = O(log n)     simplify

Solving Recursive Equations by Repeated Substitution

• Somewhat “informal”, but intuitively clear and straightforward

T(n) = T(n/2) + c
     = (T(n/4) + c) + c             substitute for T(n/2)
     = ((T(n/8) + c) + c) + c       substitute for T(n/4)
     = T(n/2^k) + kc                "inductive leap"
     = T(1) + c log n               choose k = log n
     = b + c log n = θ(log n)

Solving Recursive Equations by Telescoping

• Create a set of equations, take their sum

T(n) = T(n/2) + c         initial equation
T(n/2) = T(n/4) + c       so this holds...
T(n/4) = T(n/8) + c       and this...
T(n/8) = T(n/16) + c      and this...
...
T(2) = T(1) + c           and eventually...

T(n) = T(1) + c log n     sum equations, cancelling terms that appear on both sides
T(n) = θ(log n)           look familiar?

Solving Recursive Equations by Induction

• Repeated substitution and telescoping construct the solution

• If you know the closed form solution, you can validate it by ordinary induction

• For the induction, may want to increase n by a multiple (2n) rather than by n+1

T(1) = b = b + c log 1                      base case
Assume T(n) = b + c log n                   hypothesis
T(2n) = T(n) + c                            definition of T(n)
T(2n) = (b + c log n) + c                   by induction hypothesis
T(2n) = b + c((log n) + 1)
T(2n) = b + c((log n) + (log 2))
T(2n) = b + c log(2n)                       Q.E.D.

Thus: T(n) = θ(log n)

Inductive Proof

Example: Sum of Integer Queue

sum_queue(Q){
    if (Q.length() == 0) return 0;
    else return Q.dequeue() + sum_queue(Q); }

– One subproblem
– Linear reduction in size (decrease by 1)

Equation: T(0) = b
          T(n) = c + T(n – 1) for n>0

Lower Bound Analysis: Recursive Fibonacci

int Fib(n){ if (n == 0 or n == 1) return 1 ;

else return Fib(n - 1) + Fib(n - 2); }

• Lower bound analysis: Ω(…)
• Instead of =, equations will use ≥
    T(n) ≥ some expression
• Will simplify math by throwing out terms on the right-hand side

Analysis by Repeated Substitution

T(0) = T(1) = a                                   base case
T(n) = b + T(n-1) + T(n-2)                        recursive case
T(n) ≥ b + 2T(n-2)                                simplify to smaller quantity
T(n) ≥ b + 2(b + 2T(n-2-2))                       substitute
T(n) ≥ 3b + 4T(n-4)                               simplify
T(n) ≥ 3b + 4(b + 2T(n-4-2))                      substitute
T(n) ≥ 7b + 8T(n-6)                               simplify
T(n) ≥ 7b + 8(b + 2T(n-6-2))                      substitute
T(n) ≥ 15b + 16T(n-8)                             simplify
T(n) ≥ (2^k − 1)b + 2^k T(n-2k)                   inductive leap
T(n) ≥ (2^(n/2) − 1)b + 2^(n/2) T(n − 2(n/2))     choose k = n/2
T(n) ≥ (2^(n/2) − 1)b + 2^(n/2) a                 simplify
T(n) = Ω(2^(n/2))

Note: this is not the same as Ω(2^n)!!!

Learning from Analysis

• To avoid recursive calls
  – store all basis values in a table
  – each time you calculate an answer, store it in the table
  – before performing any calculation for a value n
    • check if a valid answer for n is in the table
    • if so, return it
• Memoization
  – a form of dynamic programming

• How much time does memoized version take?

Amortized Analysis

• Consider any sequence of operations applied to a data structure

• Some operations may be fast, others slow
• Goal: show that the average time per operation is still good

      (total time for n operations) / n

Stack Abstract Data Type

• Stack operations
  – push

– pop

– is_empty

• Stack property: if x is on the stack before y is pushed, then x will be popped after y is popped

What is biggest problem with an array implementation?

[Figure: letters A through F pushed onto a stack; they pop off in reverse order F, E, D, C, B, A.]

Stretchy Stack Implementation

int[] data;
int maxsize;
int top;

Push(e){
    if (top == maxsize){
        temp = new int[2*maxsize];
        for (i=0; i<maxsize; i++) temp[i] = data[i];
        data = temp;
        maxsize = 2*maxsize; }
    data[++top] = e; }

Best case Push = O( )

Worst case Push = O( )


Stretchy Stack Amortized Analysis

• Consider sequence of n operations
    push(3); push(19); push(2); …
• What is the max number of stretches?  log n
• What is the total time?
  – let’s say a regular push takes time a, and stretching an array containing k elements takes time bk.

      an + b(1 + 2 + 4 + 8 + … + n) = an + b Σ_{i=0..log n} 2^i

• Amortized time =

Geometric Series

Σ_{i=0..N} A^i = (A^(N+1) − 1)/(A − 1)

Σ_{i=0..n} 2^i = (2^(n+1) − 1)/(2 − 1) = 2^(n+1) − 1

Σ_{i=0..log n} 2^i = 2^(log n + 1) − 1 = 2 · 2^(log n) − 1 = 2n − 1 = θ(n)

Stretchy Stack Amortized Analysis

• Consider sequence of n operations
    push(3); push(19); push(2); …
• What is the max number of stretches?  log n
• What is the total time?
  – let’s say a regular push takes time a, and stretching an array containing k elements takes time bk.

      an + b(1 + 2 + 4 + 8 + … + n) = an + b Σ_{i=0..log n} 2^i = an + b(2n − 1)

• Amortized time = (an + b(2n − 1)) / n = θ(1)

Surprise

• In an asymptotic sense, there is no overhead in using stretchy arrays rather than regular arrays!
