Oct 24, 2014



Assignment Set-1 (60 marks)

1) Describe the following:
o Well known Sorting Algorithms
o Divide and Conquer Techniques

Ans: Well Known Sorting Algorithms

In this section, we discuss the following well known algorithms for sorting a given list of numbers:
1. Insertion sort
2. Bubble sort
3. Selection sort
4. Shell sort
5. Heap sort
6. Merge sort
7. Quick sort

Ordered set: Any set S with a relation, say ≤, is said to be ordered if for any two elements x and y of S, either x ≤ y or y ≤ x is true. Then, we may also say that (S, ≤) is an ordered set.

Insertion sort

The insertion sort algorithm for sorting a list L of n numbers, represented by an array A[1..n], proceeds by picking up the numbers in the array from the left one by one; each newly picked up number is placed at its relative position, w.r.t. the sorting order, among the earlier ordered ones. The process is repeated till each element of the list is placed at its correct relative position, i.e., when the list is sorted.
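The insertion procedure described above can be sketched in Python as follows; the function and variable names are our own, not from the text:

```python
def insertion_sort(a):
    """Sort the list a in place, inserting each newly picked-up number
    at its relative position among the earlier ordered ones."""
    for i in range(1, len(a)):
        key = a[i]                      # the newly picked-up number
        j = i - 1
        # shift larger elements of the sorted prefix one position right
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                  # place key at its correct relative position
    return a
```

Each pass extends the sorted prefix by one element, so after the last pass the whole list is sorted.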

Bubble Sort

The Bubble Sort algorithm for sorting n numbers, represented by an array A[1..n], proceeds by scanning the array from left to right. At each stage, it compares the adjacent pair of numbers at positions A[i] and A[i+1], and whenever a pair of adjacent numbers is found to be out of order, the positions of the numbers are exchanged. The algorithm then repeats the process for the numbers at positions A[i+1] and A[i+2]. Thus in the first pass, after scanning all the numbers in the given list once, the largest number reaches its destination, but the other numbers in the array may not be in order. In each subsequent pass, one more number reaches its destination.
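A minimal Python sketch of the passes described above (names are ours):

```python
def bubble_sort(a):
    """Sort the list a in place by repeated passes of adjacent exchanges."""
    n = len(a)
    for p in range(n - 1):              # after pass p, the p-th largest is in place
        for i in range(n - 1 - p):
            if a[i] > a[i + 1]:         # adjacent pair out of order: exchange
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

Note that the inner scan shrinks by one position each pass, since one more number has already reached its destination.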

Selection sort

Selection Sort for sorting a list L of n numbers, represented by an array A[1..n], proceeds by finding the maximum element of the array and placing it in the last position of the array representing the list. The process is then repeated on the subarray representing the sublist obtained from the list by excluding the current maximum element. The following steps constitute the selection sort algorithm:

Step 1: Create a variable MAX to store the maximum of the values scanned up to a particular stage. Also create another variable, say MAX_POS, which keeps track of the position of such maximum values.

Step 2: In each iteration, the whole list/array under consideration is scanned once, to find out the current maximum value through the variable MAX and the position of the current maximum through MAX_POS.

Step 3: At the end of the iteration, the value in the last position of the current array and the (maximum) value in position MAX_POS are exchanged.

Step 4: For further consideration, replace the list L by L \ {MAX} (and the array A by the corresponding subarray) and go to Step 1.

Heap Sort

In order to discuss the Heap Sort algorithm, we recall the following definitions, where we assume that the concept of a tree is already known:

Binary Tree: A tree is called a binary tree if it is either empty, or it consists of a node called the root together with two binary trees called the left subtree and the right subtree.

In respect of the above definition, we make the following observations:
1. It may be noted that the above definition is a recursive definition, in the sense that the definition of a binary tree is given in its own terms (i.e., in terms of binary trees).
2. The following are all the distinct binary trees having two nodes.
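Returning to selection sort, Steps 1 to 4 listed earlier can be sketched in Python as follows (max_val and max_pos play the roles of MAX and MAX_POS; names are ours):

```python
def selection_sort(a):
    """Sort the list a in place: repeatedly find the maximum of the
    remaining sublist and exchange it into the last open position."""
    for last in range(len(a) - 1, 0, -1):
        max_val, max_pos = a[0], 0          # Step 1: MAX and MAX_POS
        for i in range(1, last + 1):        # Step 2: scan the current sublist once
            if a[i] > max_val:
                max_val, max_pos = a[i], i
        # Step 3: exchange the maximum with the value in the last position
        a[last], a[max_pos] = a[max_pos], a[last]
        # Step 4 happens implicitly: the loop shrinks the sublist by one
    return a
```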

The following are all the distinct binary trees having three nodes.

Heap: A heap is defined as a binary tree with keys assigned to its nodes (one key per node) such that the following conditions are satisfied:
i) The binary tree is essentially complete (or simply complete), i.e., all its levels are full except possibly the last, where only some rightmost leaves may be missing.
ii) The key at each node is greater than or equal to the keys at its children.

The following binary tree is a heap.

However, the following is not a heap, because the value 6 in a child node is more than the value 5 in the parent node. Also, the following is not a heap, because some leaves (e.g., the right child of 5) in between two other leaves (viz. 4 and 1) are missing.

Alternative definition of Heap: A heap is an array H[1..n] in which every element in position i (the parent) in the first half of the array is greater than or equal to the elements in positions 2i and 2i+1 (the children).

HEAP SORT is a three-step algorithm, as discussed below:
i) Heap construction for the given array.
ii) (Maximum deletion) Copy the root value (which is the maximum of all values in the heap) to the rightmost yet-to-be-occupied location of the array used to store the sorted values, and copy the value in the last node of the tree (or of the corresponding array) to the root.
iii) Consider the binary tree (which is not necessarily a heap now) obtained from the heap through the modification of step (ii) above, removing the current last node from further consideration, and convert this binary tree into a heap by suitable modifications.

Divide and Conquer Technique

Divide and Conquer is a technique for solving problems from various domains and will be discussed in detail later on. Here, we briefly discuss how to use the technique in solving sorting problems.
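Before moving on, the three heap sort steps above can be sketched in Python. This sketch uses 0-based indexing, so the children of position i are at 2i+1 and 2i+2 rather than 2i and 2i+1; all names are ours:

```python
def sift_down(h, i, n):
    """Restore the heap property for position i within h[0:n]."""
    while True:
        largest = i
        for c in (2 * i + 1, 2 * i + 2):    # children of i (0-based)
            if c < n and h[c] > h[largest]:
                largest = c
        if largest == i:
            return
        h[i], h[largest] = h[largest], h[i]
        i = largest

def heap_sort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):     # step (i): heap construction
        sift_down(a, i, n)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]         # step (ii): maximum deletion
        sift_down(a, 0, end)                # step (iii): re-convert into a heap
    return a
```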

A sorting algorithm based on the Divide and Conquer technique has the following outline:

Procedure Sort (list)
If the list has length 1 then return the list
Else {i.e., when the length of the list is greater than 1}
begin
  Partition the list into two sublists, say L and H
  Sort (L)
  Sort (H)
  Combine (Sort (L), Sort (H))
  {during the combine operation, the sublists are merged in sorted order}
end

There are two well known Divide and Conquer methods for sorting, viz:
i) Merge sort
ii) Quick sort

Merge Sort

In this method, we recursively chop the list into two sublists of almost equal sizes, and when we get lists of size one, we start the sorted merging of lists in the reverse order in which these lists were obtained through chopping. The following example clarifies the method.

Example of Merge Sort:
Given list: 4 6 7 5 2 1 3
Chop the list to get two sublists, viz.

((4, 6, 7, 5), (2, 1, 3))
where the nesting of parentheses separates the two sublists. Again chop each of the sublists to get two sublists for each, viz.
(((4, 6), (7, 5)), ((2), (1, 3)))
Again repeating the chopping operation on each of the lists of size two or more obtained in the previous round of chopping, we get lists of size 1 each, viz. 4 and 6, 7 and 5, 2, and 1 and 3. In terms of our notation, we get
((((4), (6)), ((7), (5))), ((2), ((1), (3))))
At this stage, we start merging the sublists in the reverse order in which chopping was applied; during merging, the lists are sorted. Merging after sorting, we get sorted lists of at most two elements, viz.
(((4, 6), (5, 7)), ((2), (1, 3)))
Merging consecutive lists of at most two elements each, we get the sorted lists
((4, 5, 6, 7), (1, 2, 3))
Finally, merging the two consecutive lists of at most four elements, we get the sorted list
(1, 2, 3, 4, 5, 6, 7)

Procedure mergesort (A[1..n])
begin {of the procedure}
If n > 1 then
  m ← ⌊n/2⌋
  L1 ← A[1..m]
  L2 ← A[m+1..n]
  L ← merge (mergesort (L1), mergesort (L2))
end
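The chopping and merging described above can be sketched in Python, including the merge helper discussed next (names are ours; popping from the front of a list is not the most efficient choice, only the clearest):

```python
def merge(l1, l2):
    """Merge two already sorted lists into one sorted list."""
    out = []
    while l1 and l2:
        # remove the smaller of the two first elements and place it in out
        out.append(l1.pop(0) if l1[0] <= l2[0] else l2.pop(0))
    # one list is empty: append the rest of the other, order intact
    return out + l1 + l2

def merge_sort(a):
    if len(a) <= 1:
        return a
    m = len(a) // 2                 # chop into two sublists of almost equal size
    return merge(merge_sort(a[:m]), merge_sort(a[m:]))
```

Running merge_sort on the example list reproduces the merges traced above.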

{L is now sorted, with elements in non-decreasing order}

Next, we discuss the merging of already sorted sublists.

Procedure merge (L1, L2: lists)
L ← empty list

While L1 and L2 are both nonempty do
begin
  Remove the smaller of the first elements of L1 and L2 from its list and place it in L, immediately to the right of the earlier elements in L.
  If the removal of this element makes one list empty
  then remove all elements from the other list and append them to L, keeping the relative order of the elements intact
  else repeat the process with the new lists L1 and L2
end

Quick Sort

Quick sort is also a divide and conquer method of sorting. It was designed by C. A. R. Hoare, one of the pioneers of Computer Science and the Turing Award winner for the year 1980. This method does more work in the first step, partitioning the list into two sublists; combining the two lists then becomes trivial. To partition the list, we first choose some value from the list for which, we hope, about half the values will be less than the chosen value and the remaining values will be more than it. Division into sublists is done through the choice and use of a pivot value, which is a value in the given list such that all values in the list less than the pivot are put in one list and the rest of the values in the other list. The process is applied recursively to the sublists till we get sublists of length one.
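A minimal Python sketch of the partition-and-recurse idea just described; taking the first element as the pivot is our simplifying assumption, not a choice prescribed by the text:

```python
def quick_sort(a):
    """Sort a list by partitioning around a pivot and recursing on the sublists."""
    if len(a) <= 1:
        return a
    pivot = a[0]                             # assumed pivot choice: first element
    less = [x for x in a[1:] if x < pivot]   # values less than the pivot
    rest = [x for x in a[1:] if x >= pivot]  # the remaining values
    return quick_sort(less) + [pivot] + quick_sort(rest)
```

With a good pivot the two sublists have about half the values each; combining them is trivial, since the sorted sublists and the pivot are simply concatenated.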

2) Explain in your own words the different Asymptotic functions and notations.

Ans: Well Known Asymptotic Functions & Notations

We often want to know a quantity only approximately, and not necessarily exactly, just to compare it with another quantity. And, in many situations, a correct comparison may be possible even with approximate values of the quantities. The advantage of the possibility of correct comparisons through even approximate values of quantities is that the time required to find approximate values may be much less than the time required to find exact values.

We will introduce five approximation functions and their notations. The purpose of these asymptotic growth rate functions is to facilitate the recognition of the essential character of a complexity function through some simpler functions delivered by these notations. For example, the complexity function f(n) = 5004 n^3 + 83 n^2 + 19 n + 408 has essentially the same behavior as g(n) = n^3 as the problem size n becomes larger and larger. But g(n) = n^3 is much more comprehensible, and its value is easier to compute, than the function f(n).

We enumerate the five well known approximation functions and how these are pronounced:
i) O(n^2) is pronounced as "big oh of n^2" or sometimes just as "oh of n^2"

ii) is pronounced as