Fundamental Techniques
(week 4, Complexity of Algorithms)

There are some algorithmic tools that are quite specialised. They are good for the problems they are intended to solve, but they are not very versatile. There are also more fundamental (general) algorithmic tools that can be applied to a wide variety of data structure and algorithm design problems.

The Greedy Method

An optimisation problem (OP) is a problem that involves searching through a set of configurations to find one that minimises or maximises an objective function defined on these configurations. The greedy method solves a given OP by going through a sequence of (feasible) choices. The sequence starts from a well-understood starting configuration and then iteratively makes the decision that seems best from all those that are currently possible.

The greedy approach does not always lead to an optimal solution. The problems that have a greedy solution are said to possess the greedy-choice property. The greedy approach is also used in the context of hard (difficult to solve) problems in order to generate approximate solutions.
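
To see the first point concretely, consider greedy coin change: repeatedly taking the largest coin that still fits is optimal for some coin systems but not for others. A minimal Python sketch; the coin set {1, 3, 4} is our choice of counterexample, not from the slides:

def greedy_coin_change(coins, amount):
    """Greedy: repeatedly take the largest coin that still fits."""
    count = 0
    for c in sorted(coins, reverse=True):
        taken = amount // c          # how many of this coin fit
        count += taken
        amount -= taken * c
    return count if amount == 0 else None

# Greedy pays 6 as 4 + 1 + 1 (3 coins), but 3 + 3 (2 coins) is optimal:
# this coin system lacks the greedy-choice property.
print(greedy_coin_change([1, 3, 4], 6))  # 3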

Fractional Knapsack Problem

In the fractional knapsack problem we are given a set S of n items, s.t. each item i has a positive benefit b_i and a positive weight w_i, and we wish to find the maximum-benefit subset that doesn't exceed a given weight W. We are also allowed to take arbitrary fractions of each item.

I.e., we can take an amount x_i of each item i such that 0 ≤ x_i ≤ w_i for each i ∈ S, and the total weight Σ_{i∈S} x_i is at most W.

The total benefit of the items taken is determined by the objective function Σ_{i∈S} b_i (x_i / w_i), which we wish to maximise.

In the solution we use a heap-based priority queue (PQ) to store the items of S, where the key of each item is its value index b_i/w_i. With the PQ, each greedy choice, which removes an item with the greatest value index, takes O(log n) time. The fractional knapsack algorithm can therefore be implemented in O(n log n) time.
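
A Python sketch of this solution (the pseudocode itself appeared only as images on the slides), with heapq standing in for the heap-based PQ; keys are negated because heapq is a min-heap, and representing items as (benefit, weight) pairs is our assumption:

import heapq

def fractional_knapsack(items, W):
    """items: list of (benefit, weight) pairs; W: capacity. Returns max total benefit."""
    # Max-heap keyed on the value index b_i/w_i, simulated with negated keys.
    pq = [(-b / w, w) for (b, w) in items]
    heapq.heapify(pq)                          # O(n)
    total = 0.0
    while pq and W > 0:
        neg_ratio, w = heapq.heappop(pq)       # O(log n) per greedy choice
        amount = min(w, W)                     # take as much of the best item as fits
        total += -neg_ratio * amount           # benefit accrues at rate b_i/w_i
        W -= amount
    return total

# Ratios are 12, 5 and 7; with capacity 10 we take the first and third items
# whole and 3 units (a fraction) of the second: 60 + 14 + 15 = 89.
print(fractional_knapsack([(60, 5), (50, 10), (14, 2)], 10))  # 89.0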

The fractional knapsack problem satisfies the greedy-choice property, hence: Thm: Given an instance of the fractional knapsack problem with a set S of n items, we can construct a maximum-benefit subset of S, allowing for fractional amounts, that has total weight at most W in O(n log n) time.

Task Scheduling

Suppose we are given a set T of n tasks, s.t. each task i has a start time s_i and a completion time f_i. Each task has to be performed on a machine, and each machine can execute only one task at a time. Two tasks i and j are non-conflicting if f_i ≤ s_j or f_j ≤ s_i. Two tasks can be executed on the same machine only if they are non-conflicting.

The task scheduling problem is to schedule all the tasks in T on the fewest machines possible in a non-conflicting way.

Task Scheduling (algorithm)
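
The pseudocode for TaskSchedule was an image in the original; the Python sketch below implements the greedy procedure described on the next slide. Representing tasks as (start, finish) pairs and keeping a min-heap of machine finish times are our implementation choices:

import heapq

def task_schedule(tasks):
    """tasks: list of (start, finish) pairs. Returns the number of machines used."""
    machines = []                              # min-heap of machine finish times
    for s, f in sorted(tasks):                 # greedy order: by start time
        if machines and machines[0] <= s:      # some machine is already free
            heapq.heapreplace(machines, f)     # reuse it
        else:
            heapq.heappush(machines, f)        # allocate a new machine
    return len(machines)

# (1,4) and (5,7) are non-conflicting (4 <= 5) and share a machine;
# (2,6) conflicts with both, so a second machine is needed.
print(task_schedule([(1, 4), (2, 6), (5, 7)]))  # 2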

Task Scheduling (analysis)

In the algorithm TaskSchedule, we begin with no machines and we consider the tasks in a greedy fashion, ordered by their start times. For each task i, if we have a machine that can handle task i, then we schedule i on that machine. Otherwise, we allocate a new machine, schedule i on it, and repeat this greedy selection process until we have considered all the tasks in T.

The task scheduling problem satisfies the greedy-choice property, hence: Thm: Given an instance of the task scheduling problem with a set of n tasks, the algorithm TaskSchedule produces a schedule of the tasks with the minimum number of machines in O(n log n) time.

Divide and Conquer

Divide: if the input size is small, then solve the problem directly; otherwise divide the input data into two or more disjoint subsets.
Recur: recursively solve the sub-problems associated with the subsets.
Conquer: take the solutions to the sub-problems and merge them into a solution to the original problem.
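
Merge-sort is the canonical instance of this pattern (the choice of example is ours, not the slides'):

def merge_sort(a):
    # Divide: a small input is solved directly.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    # Recur: solve the sub-problems for the two disjoint halves.
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Conquer: merge the two sorted halves into a sorted whole.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]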

To analyse the running time of a divide-and-conquer algorithm we utilise a recurrence equation, where T(n) denotes the running time of the algorithm on an input of size n, and we characterise T(n) using an equation that relates T(n) to values of the function T for problem sizes smaller than n, e.g., T(n) = b for n < 2, and T(n) = 2T(n/2) + bn for n ≥ 2.

Substitution Method

One way to solve a divide-and-conquer recurrence equation is to use the iterative substitution method, a.k.a. the plug-and-chug method. E.g., having

T(n) = b for n < 2, and T(n) = 2T(n/2) + bn for n ≥ 2,

we get

T(n) = 2(2T(n/4) + b(n/2)) + bn = 4T(n/4) + 2bn,

and after i−1 substitutions we have

T(n) = 2^i T(n/2^i) + i·bn,

and for i = log n we get

T(n) = n·T(1) + bn·log n = bn + bn·log n, which is O(n log n).

Recursion Tree (visual approach)

In the recursion tree method, some overhead (forming a part of the recurrence equation) is associated with every node of the tree. E.g., having

T(n) = b for n < 2, and T(n) = 2T(n/2) + bn for n ≥ 2,

where the overhead corresponds to the summand +bn, we get a binary tree of depth log n in which the 2^i nodes at depth i carry overhead b(n/2^i) each, i.e., bn in total per level.

The value of T(n) corresponds to the sum of all overheads: in this example, the depth of the tree (log n) times the overhead at each level (bn), which is O(n log n).

Guess-and-Prove

In the guess-and-prove method, the solution to a recurrence equation is guessed and then proved by mathematical induction. E.g., for T(n) = 2T(n/2) + bn we guess that T(n) = O(n log n), i.e., we have to prove that T(n) < C·n·log n for some constant C and large enough n.

We use the inductive assumption that T(n/2) < C·(n/2)·log(n/2) = C·(n/2)·(log n − 1) = (C·n·log n)/2 − C·n/2. Then T(n) = 2T(n/2) + bn < 2((C·n·log n)/2 − C·n/2) + bn = C·n·log n + (b − C)·n < C·n·log n, for any C > b.

The Master Method

The master method applies to recurrences of the form T(n) = c for n < d, and T(n) = a·T(n/b) + f(n) for n ≥ d, where a ≥ 1, b > 1, c > 0 and d ≥ 1:
1. if f(n) is O(n^{log_b a − ε}) for some ε > 0, then T(n) is Θ(n^{log_b a});
2. if f(n) is Θ(n^{log_b a}·log^k n) for some k ≥ 0, then T(n) is Θ(n^{log_b a}·log^{k+1} n);
3. if f(n) is Ω(n^{log_b a + ε}) for some ε > 0, and a·f(n/b) ≤ δ·f(n) for some δ < 1, then T(n) is Θ(f(n)).
E.g., for T(n) = 2T(n/2) + bn, case 2 (with k = 0) immediately gives T(n) = Θ(n log n).

Matrix Multiplication

Suppose we are given two n x n matrices X and Y, and we wish to compute their product Z = X·Y, which is defined so that

Z[i,j] = Σ_{k=1..n} X[i,k]·Y[k,j],

which naturally leads to a simple O(n^3)-time algorithm.
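
The definition translates directly into three nested loops; a minimal sketch:

def mat_mul(X, Y):
    """Naive O(n^3) product of two n x n matrices given as lists of rows."""
    n = len(X)
    Z = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Z[i][j] is the dot product of row i of X and column j of Y.
            Z[i][j] = sum(X[i][k] * Y[k][j] for k in range(n))
    return Z

print(mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]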

Another way of viewing this product is in terms of sub-matrices:

[I J]   [A B]   [E F]
[K L] = [C D] · [G H]

where

I = A·E + B·G,  J = A·F + B·H,  K = C·E + D·G,  L = C·F + D·H.

However, this gives a divide-and-conquer algorithm with running time T(n), s.t. T(n) = 8T(n/2) + bn^2, which is O(n^3).

Strassen’s Algorithm

Define seven matrix products:

S1 = A·(F − H)
S2 = (A + B)·H
S3 = (C + D)·E
S4 = D·(G − E)
S5 = (A + D)·(E + H)
S6 = (B − D)·(G + H)
S7 = (A − C)·(E + F)

Having the S_i's, we can represent I, J, K, L:

I = S5 + S6 + S4 − S2
J = S1 + S2
K = S3 + S4
L = S1 − S7 − S3 + S5

Thus, we can compute Z = X·Y using seven recursive multiplications of matrices of size (n/2) x (n/2), where

T(n) = b for n < 2, and T(n) = 7T(n/2) + bn^2 for n ≥ 2.

One can prove, e.g., using the Master Theorem, that: Thm: We can multiply two n x n matrices in O(n^{log 7}) = O(n^{2.808}) time.
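
A compact sketch of the algorithm; using numpy for the block arithmetic and assuming n is a power of 2 are our simplifications (a production version would fall back to the naive product for small n):

import numpy as np

def strassen(X, Y):
    """Strassen product of two n x n numpy arrays, n a power of 2."""
    n = X.shape[0]
    if n == 1:
        return X * Y
    m = n // 2
    A, B, C, D = X[:m, :m], X[:m, m:], X[m:, :m], X[m:, m:]
    E, F, G, H = Y[:m, :m], Y[:m, m:], Y[m:, :m], Y[m:, m:]
    # Seven recursive products instead of eight.
    S1 = strassen(A, F - H)
    S2 = strassen(A + B, H)
    S3 = strassen(C + D, E)
    S4 = strassen(D, G - E)
    S5 = strassen(A + D, E + H)
    S6 = strassen(B - D, G + H)
    S7 = strassen(A - C, E + F)
    # Combine into the four quadrants of Z = XY.
    I = S5 + S6 + S4 - S2
    J = S1 + S2
    K = S3 + S4
    L = S1 - S7 - S3 + S5
    return np.vstack([np.hstack([I, J]), np.hstack([K, L])])

X = np.arange(16).reshape(4, 4)
Y = np.arange(16, 32).reshape(4, 4)
print(np.array_equal(strassen(X, Y), X @ Y))  # True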

Dynamic Programming

The dynamic programming (DP) algorithm-design technique is similar to the divide-and-conquer technique. The main difference is that (possibly) repetitive recursive calls are replaced by references to already computed values stored in a special table.
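
In miniature, with Fibonacci numbers as our (not the slides') example: the naive recursion recomputes the same sub-problems exponentially often, whereas caching each result in a table turns every repeated call into a lookup:

from functools import lru_cache

@lru_cache(maxsize=None)                # the "special table" of computed values
def fib(n):
    """Each distinct sub-problem is solved once; repeats are table lookups."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, in O(n) distinct calls rather than O(2^n)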

The DP technique is used primarily for optimisation problems. We very often apply DP where a brute-force search for the best solution is infeasible. However, DP is efficient only if the problem has a certain amount of structure that we can exploit, namely:

Simple sub-problems: there must be a way of breaking the whole optimisation problem into smaller pieces sharing a similar structure.
Sub-problem optimality: an optimal solution to the global problem must be a composition of optimal sub-problem solutions.
Sub-problem overlap: optimal solutions to unrelated sub-problems can contain sub-problems in common.

0-1 Knapsack Problem

The 0-1 knapsack problem is the knapsack problem where taking fractions of items is not allowed, i.e., each item s_i ∈ S, for 1 ≤ i ≤ n, must be entirely accepted or rejected. Item s_i has a positive benefit b_i and an integer weight w_i.

We have the following objective: maximise Σ_{s_i∈T} b_i subject to Σ_{s_i∈T} w_i ≤ W, where T ⊆ S.

Exponential solution: we can easily solve the 0-1 knapsack problem in O(2^n) time by testing all possible subsets of items. Unfortunately, exponential complexity is not acceptable for large n, and we instead have to look for a nice characterisation of sub-problems in order to use the DP approach.

Let S_k = {s_i : i = 1, 2, …, k}.

Let B[k,w] be the maximum total benefit of a subset of S_k from among all those subsets having total weight exactly w. We have B[0,w] = 0 for each w ≤ W, and

B[k,w] = B[k−1,w] if w_k > w, and B[k,w] = max{ B[k−1,w], B[k−1, w−w_k] + b_k } otherwise.

The running time of the 01Knapsack algorithm is dominated by the two nested for-loops, where the outer one iterates n times and the inner one iterates at most W times. Thm: The 01Knapsack algorithm finds the highest-benefit subset of S with total weight at most W in O(nW) time.
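
The 01Knapsack pseudocode itself was an image in the original; the sketch below fills the table B[k,w] of the previous slide with the two nested loops the analysis refers to, using the slightly more common "total weight at most w" reading of B[k,w]:

def knapsack_01(items, W):
    """items: list of (benefit, weight) pairs, integer weights.
    Returns the maximum benefit achievable with total weight at most W."""
    n = len(items)
    # B[k][w] = best benefit using only the first k items, weight limit w.
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for k in range(1, n + 1):              # outer loop: n iterations
        b_k, w_k = items[k - 1]
        for w in range(W + 1):             # inner loop: at most W+1 iterations
            B[k][w] = B[k - 1][w]          # reject item k
            if w_k <= w:                   # accept item k if it fits
                B[k][w] = max(B[k][w], B[k - 1][w - w_k] + b_k)
    return B[n][W]

# Capacity 5: items (4,3) and (5,2) together weigh 5 and give benefit 9.
print(knapsack_01([(4, 3), (5, 2), (3, 2)], 5))  # 9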