The Design & Analysis of the Algorithms

Page 1: The Design & Analysis of the Algorithms

The Design & Analysis of the Algorithms

Lecture 1, 2010. M. Sakalli

Page 2: The Design & Analysis of the Algorithms


Your attendance is mandatory, 70%. Your evaluation will be algorithmic.

Sources:
Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein.
Algorithm Design by Jon Kleinberg and Eva Tardos, Pearson Education, Inc. (2006). (Sample chapters)
Introduction to the Design & Analysis of Algorithms by A. Levitin; compact, with good enough examples.
Freedom of information, therefore: be Internet savvy; the final and ultimate resource is the Internet, courses around the world, MIT in particular, and Wikipedia.
Kozen.

Page 3: The Design & Analysis of the Algorithms


The emphasis will be on game algorithms, graph (network) algorithms, and NP-complexity.

In the following weeks, Part I:
Skip: more on asymptotic analysis, recurrences, substitution, master and (generating-function) annihilation methods.
Skip: Lect. 9, MST/BST and randomized QS.
Round-robin scheduling and round-robin tournaments, combinatorial games, impartial games, min-max, alpha-beta pruning.
MIT: red-black trees, amortized algorithms, competitive analysis, self-organizing lists, greedy MST (repeat Kruskal and Prim).
DP, optimality. Event scheduling, longest-common subsequence, evaluation of some multiplication algorithms, matrix multiplication.

Page 4: The Design & Analysis of the Algorithms


In the following weeks:

Part II
Graph algorithms: review of shortest-path algorithms, then all-pairs shortest paths, Bellman-Ford, LP, difference constraints.
Network flow: min-cut/max-flow, bipartite matching, stable marriage.
A (minimum) vertex cover of G is a (minimum) set S of vertices such that at least one end of every edge of G lies in S.
The maximum independent set problem is complementary to VC: a subset of vertices of G such that no two vertices in the subset are adjacent.
Maximum clique.
Determinism (FSM) vs. non-determinism, P, NP, NPC, reductions, NP-completeness, SAT, 3SAT, …

Page 5: The Design & Analysis of the Algorithms


An algorithm: a sequence of unambiguous, well-defined procedures (instructions) for solving a problem. For a given input size, execution must be completed in a finite amount of time.

Analysis means: evaluate the costs, time and space costs, and manage the resources and the methods.

A generic RAM model of computation in which instructions are executed consecutively, but not concurrently or in parallel. Computational model in terms of an abstract computer: a Turing machine.

Abstraction.

FSM, RAM, PRAM, uniform circuits

A problem relates input parameters to certain state parameters:

input → algorithm → output

Page 6: The Design & Analysis of the Algorithms


Time efficiency: estimation in the asymptotic sense, called Big O, Θ, Ω; comparing two functions to determine a constant c and a threshold n_0. Machine independence: bandwidth, and the amount of hardware involved, # of the gates.

Space efficiency: memory.

Theoretically:
• Prove its correctness.
• Efficiency: theoretical and empirical analysis.
• Its optimality.

The methods applied: iterative, recursive and parallel. Desired scalability: a wide range of inputs and of the size and dimension of the problem under consideration.

Page 7: The Design & Analysis of the Algorithms


Historical Perspective

… Muhammad ibn Musa al-Khwarizmi, 9th-century mathematician.
http://www.ms.uky.edu/~carl/ma330/project2/al-khwa21.html
http://en.wikipedia.org/wiki/Analysis_of_algorithms
…

Page 8: The Design & Analysis of the Algorithms


Euclid’s Algorithm
Problem definition: compute gcd(m, n) of two nonnegative integers m and n, not both zero, with m > n.

Examples: gcd(60,24) = 12, gcd(60,0) = 60, gcd(0,0) = ?

Euclid’s algorithm is based on repeated application of the equality
gcd(m, n) = gcd(n, m mod n)
until the second number reaches 0.

Example: gcd(60,24) = gcd(24,12) = gcd(12,0) = 12

r_0 ← m, r_1 ← n,
r_{i-1} = r_i·q_i + r_{i+1},   0 < r_{i+1} < r_i,   1 ≤ i < t,
…
r_{t-1} = r_t·q_t + 0

Page 9: The Design & Analysis of the Algorithms


Asymptotic order of growth

A way of comparing functions that ignores constant factors and small input sizes.

O(g(n)): class of functions f(n) that grow no faster than g(n).
Θ(g(n)): class of functions f(n) that grow at the same rate as g(n).
Ω(g(n)): class of functions f(n) that grow at least as fast as g(n).

Page 10: The Design & Analysis of the Algorithms


O, Ω, Θ

Page 11: The Design & Analysis of the Algorithms


Establishing order of growth using the definition
Definition: f(n) is in O(g(n)) if the order of growth of f(n) ≤ the order of growth of g(n) (within a constant multiple), i.e., there exist a positive constant c and a non-negative integer n_0 such that
    f(n) ≤ c·g(n) for every n ≥ n_0.

f(n) is o(g(n)) if f(n) ≤ (1/c)·g(n) for every n ≥ n_0 (and for every c), which means f grows strictly more slowly than any arbitrarily small constant multiple of g.

f(n) is Ω(g(n)) if there is a constant c such that f(n) ≥ c·g(n) for n ≥ n_0.

f(n) is Θ(g(n)) if f(n) is both O(g(n)) and Ω(g(n)).

Examples:
10n is O(n²) with c ≥ 10, since 10n ≤ 10n² for n ≥ 1; or, for smaller c values, i.e. c ≥ 1, 10n ≤ n² for n ≥ 10.
5n + 20 is O(n) with c ≥ 25 for all n > 0, since 5n + 20 ≤ 5n + 20n ≤ 25n; or with c ≥ 10 for n ≥ 4.

Page 12: The Design & Analysis of the Algorithms


Emphasis: the unit of efficiency analysis is the comparison.

Chapter 1, Kozen. A generic RAM model of computation in which instructions are executed consecutively, but not concurrently or in parallel.

Running time analysis: the number of primitive operations executed for every line of the code; asymptotic analysis ignores machine-dependent constants and looks at the computational growth of T(n) as n → ∞, where n is the input size that determines the number of iterations:

Relative speed (on the same machine)

Absolute speed (between computers). Parallel.

Page 13: The Design & Analysis of the Algorithms


Step 1
    if (n == 0 or m == n),
        return m and stop;
    otherwise go to Step 2

Step 2
    Divide m by n and assign the value of the remainder to r.
    Assign the value of n to m and the value of r to n.
    Go to Step 1.

while n ≠ 0 do
{   r ← m mod n
    m ← n
    n ← r   }
return m
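A minimal Python sketch of the loop above (the function name gcd and the sample calls are mine, for illustration):

def gcd(m, n):
    """Euclid's algorithm: repeatedly replace (m, n) by (n, m mod n) until n is 0."""
    while n != 0:
        m, n = n, m % n   # r <- m mod n; m <- n; n <- r
    return m

print(gcd(60, 24))  # 12
print(gcd(60, 0))   # 60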

Page 14: The Design & Analysis of the Algorithms



The upper bound on the number of iterations i: i ≤ log_φ(n) + 1, where φ = (1 + sqrt(5))/2.
i = O(log(max(m, n))), since r_{i+1} ≤ r_{i-1}/2.
The lower bound is Ω(log(max(m, n))); therefore the running time is Θ(log(max(m, n))).


Page 16: The Design & Analysis of the Algorithms


Proof of correctness of Euclid’s algorithm

Step 1: if n divides m, then gcd(m, n) = n.
Step 2: gcd(m, n) = gcd(n, m mod n).

• gcd(m, n) divides n, which implies that the gcd is at most n: gcd(m, n) ≤ n. And n divides both m and n, which implies that n cannot exceed the gcd of the pair {m, n}: n ≤ gcd(m, n). Hence gcd(m, n) = n.

• If m = n·b + r, for integers r and b, then gcd(m, n) = gcd(n, r): every common divisor of m and n also divides r.

• Proof: m = c·p, n = c·q, so c·(p − q·b) = r; therefore gcd(m, n) divides r, which yields gcd(m, n) ≤ gcd(n, r).

Page 17: The Design & Analysis of the Algorithms


Other methods for computing gcd(m, n)

Consecutive integer checking algorithm (not a good way; it checks all the candidates):
Step 1: Assign the value of min{m, n} to t.
Step 2: Divide m by t. If the remainder is 0, go to Step 3; otherwise, go to Step 4.
Step 3: Divide n by t. If the remainder is 0, return t and stop; otherwise, go to Step 4.
Step 4: Decrease t by 1 and go to Step 2.

Brute force!! Exhaustive??.. Very slow, even if zero inputs were checked for.
O(min(m, n)); Θ(min(m, n)) when gcd(m, n) = 1; Θ(1) for each operation, so the overall complexity is Θ(min(m, n)).
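A runnable sketch of the consecutive integer checking method (names and the guard for positive inputs are mine; as noted above, the method as stated breaks on zero inputs):

def gcd_consecutive(m, n):
    """Try t = min(m, n), min(m, n) - 1, ... until t divides both m and n. Assumes m, n > 0."""
    t = min(m, n)
    while t > 0:
        if m % t == 0 and n % t == 0:   # Steps 2 and 3: t divides both m and n
            return t
        t -= 1                          # Step 4: decrease t and try again
    return None

print(gcd_consecutive(60, 24))  # 12, found after trying t = 24, 23, ..., 12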

Page 18: The Design & Analysis of the Algorithms


Other methods for gcd(m, n) [cont.]

Middle-school procedure
Step 1: Find the prime factorization of m.
Step 2: Find the prime factorization of n.
Step 3: Find all the common prime factors.
Step 4: Compute the product of all the common prime factors and return it as gcd(m, n).

Is this an algorithm?

Page 19: The Design & Analysis of the Algorithms


Sieve of Eratosthenes, method applied

Input: integer n ≥ 2
Output: the list of primes less than or equal to n; sift out the numbers that are not.

for p ← 2 to n do A[p] ← p
for p ← 2 to n do
    if A[p] ≠ 0    // p hasn’t been previously eliminated from the list
        j ← p * p
        while j ≤ n do
            A[j] ← 0    // mark element as eliminated
            j ← j + p

Example, n = 25:
2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
2 3 5 7 9 11 13 15 17 19 21 23 25      (multiples of 2 removed)
2 3 5 7 11 13 17 19 23 25              (multiples of 3 removed)
2 3 5 7 11 13 17 19 23                 (multiples of 5 removed)
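A direct Python transcription of the pseudocode (a sketch; the function name sieve is mine):

def sieve(n):
    """Sieve of Eratosthenes: return the list of primes <= n, for n >= 2."""
    A = list(range(n + 1))              # A[p] = p
    for p in range(2, n + 1):
        if A[p] != 0:                   # p hasn't been previously eliminated
            j = p * p                   # smaller multiples were eliminated by smaller primes
            while j <= n:
                A[j] = 0                # mark element as eliminated
                j += p
    return [p for p in range(2, n + 1) if A[p] != 0]

print(sieve(25))  # [2, 3, 5, 7, 11, 13, 17, 19, 23]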

Page 20: The Design & Analysis of the Algorithms


Example of a computational problem: sorting

Statement of the problem:
• Input: a sequence of n numbers <a_1, a_2, …, a_n>
• Problem: reorder as <a′_1, a′_2, …, a′_n> in ascending or descending order.
• Output desired: a′_i ≤ a′_j for i < j (or for i > j, in descending order).

Instance: the sequence <5, 3, 2, 8, 3>

Algorithms:
• Selection sort
• Insertion sort
• Merge sort
• (many others)

Page 21: The Design & Analysis of the Algorithms


Selection Sort
Input: array indexed from 0 to n-1: a[0], …, a[n-1]
Output: the array sorted in non-decreasing order; scan the elements of the unsorted part, from i to n-1, and substitute the smallest into the i-th position; the starting position then shifts toward n-1.

The algorithm makes n-1 passes, and after i passes the first i+1 numbers in the array are sorted. Strategy: in pass i, swap the i-th element with the smallest of the rest.

Algorithm (selection in place):
for (i = 0; i < n; i++)
    swap a[i] with the smallest of a[i], …, a[n-1]

Comparisons: 1 + 2 + … + (n-1) = (n-1)·n/2 = Θ(n²) for all cases, sorted or unsorted, therefore the cost is independent of the input!
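A Python sketch of selection sort as described above (illustrative; the function name is mine):

def selection_sort(a):
    """In pass i, swap a[i] with the smallest element of a[i..n-1]."""
    n = len(a)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):       # scan the unsorted part for the minimum
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return a

print(selection_sort([34, 8, 64, 51, 32, 21]))  # [8, 21, 32, 34, 51, 64]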

Page 22: The Design & Analysis of the Algorithms


Insertion Sort
Input: array indexed from 0 to n-1: a[0], …, a[n-1]
Output: array a sorted in non-decreasing order; scan the elements of the unsorted part backward: n-1, n-2, …, 1.

The algorithm makes n-1 passes, and after i passes the first i+1 numbers in the array are sorted.
Strategy: in pass i, move the i-th item left to its proper place.

Algorithm (insertion in place):
for (i = 1; i < n; i++)
    swap a[i] leftward while it is smaller, through a[i-1], …, a[0]

Write the swapping part of this algorithm (a sketch follows below).

Swapping: up to 1 + 2 + … + (n-1) = (n-1)·n/2 = Θ(n²) in the worst case. It depends on the input statistics.
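One possible answer to the exercise above, as a Python sketch (the inner while loop is the swapping part the slide asks for):

def insertion_sort(a):
    """In pass i, move a[i] left to its proper place by swapping adjacent out-of-order pairs."""
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j] < a[j - 1]:   # each swap removes exactly one inversion
            a[j], a[j - 1] = a[j - 1], a[j]
            j -= 1
    return a

print(insertion_sort([34, 8, 64, 51, 32, 21]))  # [8, 21, 32, 34, 51, 64]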

Page 23: The Design & Analysis of the Algorithms


Example:
original:   34  8 64 51 32 21
after i=1:   8 34 64 51 32 21
after i=2:   8 34 64 51 32 21
after i=3:   8 34 51 64 32 21
after i=4:   8 32 34 51 64 21
after i=5:   8 21 32 34 51 64

In terms of the number of inversions: if sorted, there are no inversions; if reverse sorted, (n-1)·n/2 inversions; if randomly ordered, about (n-1)·n/4 inversions. Its performance therefore lies between roughly n and n², versus n·log n for the faster sorts.

Page 24: The Design & Analysis of the Algorithms


Shell Sort

Shell sort, o(n²), runs very fast in place; invented by Don Shell, the first algorithm to break the n² barrier.

It’s like insertion sort, but it swaps non-adjacent elements.

Strategy: pick an increment sequence h_1 < h_2 < h_3 < … < h_t, where h_1 = 1. After phase k, a[i] ≤ a[i + h_k] for all i. (Notice that in insertion sort the increment is 1 for every pass.)

index:          0  1  2  3  4  5  6  7  8  9 10 11 12
original:      81 94 11 96 12 35 17 95 28 58 41 75 15
after 5-sort:  35 17 11 28 12 41 75 15 96 58 81 94 95
after 1-sort:  all sorted, since the final pass behaves like insertion sort.

Shellsort is used in practice and is simple to code, though its performance is often not better than some O(n log n) algorithms.
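A Python sketch of Shell sort; the gap sequence (5, 3, 1) matches the example above, but any decreasing sequence ending in 1 works (the code itself is illustrative, not from the original notes):

def shell_sort(a, gaps=(5, 3, 1)):
    """For each increment h, do a gap-h insertion sort; afterwards a[i] <= a[i + h] for all valid i."""
    for h in gaps:
        for i in range(h, len(a)):
            j = i
            while j >= h and a[j] < a[j - h]:
                a[j], a[j - h] = a[j - h], a[j]
                j -= h
    return a

print(shell_sort([81, 94, 11, 96, 12, 35, 17, 95, 28, 58, 41, 75, 15]))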

Page 25: The Design & Analysis of the Algorithms


Merge Sort. A classic example of divide-and-conquer.
Divide A into left and right halves and recursively sort each.
When the array gets small (2 or 4 elements), use insertion sort.
Sort while merging the sorted left half and the sorted right half. That is,
A → LA and RA → Sorted(LA) and Sorted(RA) → merging, comparing against the key.

Assume that a new array C is used for the output.
Put pointers at the first elements of the L and R arrays. Copy the smaller element to the output array C, and advance the pointer of the smaller element. Repeat until one pointer reaches the end, then copy the second array over.

Merge example: 1 2 13 24 26 + 3 15 27 38 = 1 2 3 13 15 24 26 27 38

The time to merge two lists is LINEAR in their total size: we always advance one pointer.
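A compact Python sketch of the scheme above (it allocates a new output list for each merge rather than reusing one array C, and it skips the small-array insertion-sort cutoff; the names are mine):

def merge_sort(a):
    """Divide a into halves, sort each recursively, then merge the two sorted halves."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # copy the smaller front element, advance its pointer
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]         # one list is exhausted: copy the other over

print(merge_sort([5, 3, 2, 8, 3]))  # [2, 3, 3, 5, 8]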

Page 26: The Design & Analysis of the Algorithms


The runtime analysis through recursion: substitution, master, and annihilation methods.

T(n) = 1                 if n = 1 (base case)
T(n) = 2·T(n/2) + n      otherwise.

Solve the recurrence by substitution (unraveling it):
T(n) = 2·T(n/2) + n
     = 2·(2·T(n/4) + n/2) + n
     = 2²·T(n/2²) + 2n
     …
     = 2^i·T(n/2^i) + i·n
n/2^i = 1, so i = lg n, and T(n) = n + n·lg n = n·lg n + n.

Theoretically optimal, but rarely used in practice: extra memory is needed for merging, and there are algorithms that are easier to code with the same or better performance (e.g. quicksort).
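A small numeric check of the closed form (illustrative; assumes n is a power of two, as in the unrolling above):

import math

def T(n):
    """The merge-sort recurrence: T(1) = 1, T(n) = 2*T(n/2) + n."""
    return 1 if n == 1 else 2 * T(n // 2) + n

for n in (2, 8, 64, 1024):
    assert T(n) == n * math.log2(n) + n   # matches the closed form n*lg(n) + n
print("closed form verified for sample powers of two")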


Page 28: The Design & Analysis of the Algorithms


Quicksort. One of the fastest in practice.

The expected E[T(n)] is O(n log n); the worst case is O(n²), but it can be made exponentially unlikely with little effort.

General strategy:
if |A| == 0 or 1, return
p ← any element of A                                   // pivot
AL ← {x ∈ A\{p} : x ≤ p}; AR ← {x ∈ A\{p} : x ≥ p}     // partition into two disjoint groups
return (QS(AL) + p + QS(AR))

Example:
A = {13, 81, 65, 92, 43, 31, 57, 26, 75, 0}
Pivot p = 65
AL ← {13, 0, 43, 31, 26, 57} and AR ← {81, 92, 75}
…

Subarray sizes: approximately equal in merge sort; determined by the pivot in quicksort.

Strategies for choosing pivots, and the in-place partition:
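A Python sketch of the general strategy (random pivot; elements equal to the pivot are sent to the left group here, which is one way to resolve the ≤ / ≥ overlap in the partition above):

import random

def quicksort(A):
    """QS(A) = QS(AL) + [p] + QS(AR): pick a pivot p, split the remaining elements around it."""
    if len(A) <= 1:
        return A
    p = random.choice(A)                # pivot: any element of A
    rest = list(A)
    rest.remove(p)                      # A \ {p} (one copy of the pivot removed)
    AL = [x for x in rest if x <= p]    # left group
    AR = [x for x in rest if x > p]     # right group
    return quicksort(AL) + [p] + quicksort(AR)

print(quicksort([13, 81, 65, 92, 43, 31, 57, 26, 75, 0]))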

Page 29: The Design & Analysis of the Algorithms


Pivoting choices:

First or last array element. Doesn’t matter if A is random, but if the array is (partially) sorted, not recommended.

Random pivoting. Generally works very well; recommended.

Median of 3: pick 3 random elements and choose their median, or take the median of the left, right, and middle elements.
E.g. in {…, 8, 1, 4, 9, 0, 3, 5, 2, 6}: left = 8, right = 6, mid = 9, so the median is 8.

In-place partitioning:
Swap the pivot with the last array item.
i ← 0, and j ← n-2.
while i < j do
    i++   (as long as the element is ≤ pivot)
    j--   (as long as the element is ≥ pivot)
    swap these elements
    // i is pointing at an element > pivot and
    // j is pointing at an element < pivot
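A sketch of the in-place partition described above (it assumes distinct elements, as the next slide does; here the scans use strict comparisons, so both i and j stop at elements equal to the pivot):

def partition(a, pivot_index):
    """Swap the pivot to the end, scan i from the left and j from the right,
    swapping out-of-place pairs; finally put the pivot between the two groups.
    Returns the pivot's final position. Assumes distinct elements."""
    n = len(a)
    a[pivot_index], a[n - 1] = a[n - 1], a[pivot_index]
    pivot = a[n - 1]
    i, j = 0, n - 2
    while True:
        while a[i] < pivot:             # advance i over elements smaller than the pivot
            i += 1
        while j > 0 and a[j] > pivot:   # move j back over elements larger than the pivot
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]         # a[i] > pivot, a[j] < pivot: swap them
        i += 1
        j -= 1
    a[i], a[n - 1] = a[n - 1], a[i]     # put the pivot between the two groups
    return i

a = [8, 1, 4, 9, 0, 3, 5, 2, 7, 6]      # the array from the example on the next slide
print(partition(a, len(a) - 1), a)      # 6 [2, 1, 4, 5, 0, 3, 6, 8, 7, 9]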

Page 30: The Design & Analysis of the Algorithms


Example:

After the pivot and last element swap:
8 1 4 9 0 3 5 2 7 6     i stops at 8, j stops at 2; swap them
2 1 4 9 0 3 5 8 7 6     advance i, j: i stops at 9, j stops at 5; swap them
2 1 4 5 0 3 9 8 7 6     advance again: now j < i, so stop
Now swap the pivot element with the element at position i:
2 1 4 5 0 3 6 8 7 9     done.

Here we assumed that all elements were distinct. What if there are elements equal to the pivot? One suggested solution is to stop both i and j when an element equal to the pivot is encountered, swap, and continue.

Page 31: The Design & Analysis of the Algorithms


The methods

Brute force, heuristics
Divide and conquer
Decrease and conquer
Transform and conquer
Greedy approach
Dynamic programming
Iterative improvement
Backtracking
Branch and bound
Randomized algorithms