
    July 2011

    Master of Computer Application (MCA) Semester 4

    MC0080 Analysis and Design of Algorithms 4 Credits

    (Book ID: B0891)

    Assignment Set 1

    Answer the following:

Q1. Describe the following:

o Fibonacci Heaps
o Binomial Heaps

    Fibonacci Heaps

    1) Structure of Fibonacci heaps

Like a binomial heap, a Fibonacci heap is a collection of min-heap-ordered trees. The trees in a Fibonacci heap are not constrained to be binomial trees, however. Figure (a) shows an example of a Fibonacci heap.

Unlike trees within binomial heaps, which are ordered, trees within Fibonacci heaps are rooted but unordered. As Figure (b) shows, each node x contains a pointer p[x] to its parent and a pointer child[x] to any one of its children. The children of x are linked together in a circular, doubly linked list, which we call the child list of x. Each child y in a child list has pointers left[y] and right[y] that point to y's left and right siblings, respectively. If node y is an only child, then left[y] = right[y] = y. The order in which siblings appear in a child list is arbitrary.


Figure (a): A Fibonacci heap consisting of five min-heap-ordered trees and 14 nodes. The dashed line indicates the root list. The minimum node of the heap is the node containing the key 3. The three marked nodes are blackened. The potential of this particular Fibonacci heap is 5 + 2·3 = 11. (b) A more complete representation showing pointers p (up arrows), child (down arrows), and left and right (sideways arrows).

Two other fields in each node will be of use. The number of children in the child list of node x is stored in degree[x]. The Boolean-valued field mark[x] indicates whether node x has lost a child since the last time x was made the child of another node. Newly created nodes are unmarked, and a node x becomes unmarked whenever it is made the child of another node.

A given Fibonacci heap H is accessed by a pointer min[H] to the root of a tree containing a minimum key; this node is called the minimum node of the Fibonacci heap. If a Fibonacci heap H is empty, then min[H] = NIL.

The roots of all the trees in a Fibonacci heap are linked together using their left and right pointers into a circular, doubly linked list called the root list of the Fibonacci heap. The pointer min[H] thus points to the node in the root list whose key is minimum. The order of the trees within a root list is arbitrary.

We rely on one other attribute for a Fibonacci heap H: the number of nodes currently in H is kept in n[H].
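The fields described above translate directly into a small data structure. The following Python sketch is an added illustration, not part of the original text: the class and attribute names are ours, they mirror p, child, left, right, degree, mark, min[H] and n[H], and only insertion into the root list is shown.

class FibNode:
    def __init__(self, key):
        self.key = key
        self.parent = None      # p[x]
        self.child = None       # child[x]: pointer to any one child
        self.left = self        # left[x] and right[x]: circular, doubly linked
        self.right = self       #   sibling list (a lone node points to itself)
        self.degree = 0         # degree[x]: number of children
        self.mark = False       # mark[x]: lost a child since last re-parenting?

class FibHeap:
    def __init__(self):
        self.min = None         # min[H]: node holding a minimum key, or None (NIL)
        self.n = 0              # n[H]: number of nodes currently in H

    def insert(self, key):
        x = FibNode(key)
        if self.min is None:
            self.min = x                      # x becomes the only root
        else:
            x.right = self.min.right          # splice x into the circular root list
            x.left = self.min
            self.min.right.left = x
            self.min.right = x
            if x.key < self.min.key:
                self.min = x                  # keep min[H] pointing at a minimum key
        self.n += 1
        return x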

2) Potential function

For a given Fibonacci heap H, we indicate by t(H) the number of trees in the root list of H and by m(H) the number of marked nodes in H. The potential of Fibonacci heap H is then defined by

Φ(H) = t(H) + 2 m(H)     (a)

For example, the potential of the Fibonacci heap shown in Figure (a) is 5 + 2·3 = 11. The potential of a set of Fibonacci heaps is the sum of the potentials of its constituent


    Fibonacci heaps. We shall assume that a unit of potential can pay for a constant amount of work,

    where the constant is sufficiently large to cover the cost of any of the specific constant-time

    pieces of work that we might encounter.

    We assume that a Fibonacci heap application begins with no heaps. The initial potential,

    therefore, is 0, and by equation (a), the potential is nonnegative at all subsequent times.

3) Maximum degree

The amortized analyses we shall perform in the remaining sections of this unit assume that there is a known upper bound D(n) on the maximum degree of any node in an n-node Fibonacci heap.

    Binomial Heaps

A binomial heap H is a set of binomial trees that satisfies the following binomial heap properties.

1. Each binomial tree in H obeys the min-heap property: the key of a node is greater than or equal to the key of its parent. We say that each such tree is min-heap-ordered.

2. For any nonnegative integer k, there is at most one binomial tree in H whose root has degree k.

The first property tells us that the root of a min-heap-ordered tree contains the smallest key in the tree.

The second property implies that an n-node binomial heap H consists of at most ⌊lg n⌋ + 1 binomial trees. To see why, observe that the binary representation of n has ⌊lg n⌋ + 1 bits, say ⟨b⌊lg n⌋, b⌊lg n⌋-1, ..., b0⟩, so that n = Σ_{i=0..⌊lg n⌋} bi · 2^i. Since the binomial tree Bi contains exactly 2^i nodes, binomial tree Bi appears in H if and only if bit bi = 1. Thus, binomial heap H contains at most ⌊lg n⌋ + 1 binomial trees.
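The bit-by-bit correspondence described above can be checked with a few lines of code. The Python sketch below is an added illustration (the function name binomial_trees_for is ours): for a given n it lists which binomial trees Bi would appear in an n-node binomial heap and confirms there are at most ⌊lg n⌋ + 1 of them.

import math

def binomial_trees_for(n):
    """Indices i such that binomial tree B_i (with 2**i nodes) appears
    in an n-node binomial heap; these are the 1-bits of n."""
    return [i for i in range(n.bit_length()) if (n >> i) & 1]

n = 13                                   # binary 1101, so trees B0, B2 and B3
trees = binomial_trees_for(n)
print(trees)                             # [0, 2, 3]
print(sum(2 ** i for i in trees) == n)   # True: the tree sizes add up to n
print(len(trees) <= math.floor(math.log2(n)) + 1)   # True: at most floor(lg n) + 1 trees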

Q2. If f(n) = 3n² and g(n) = 37n² + 120n + 17, then show that g = O(f) and f = O(g).

Solution:

(i) For g = O(f), there must exist constants C > 0 and k ≥ 0 such that g(n) ≤ C · f(n) for all n ≥ k.

Take C = 53 and k = 3. For n ≥ 3 we have 120n ≤ 40n² and 17 ≤ 2n², so

g(n) = 37n² + 120n + 17 ≤ 37n² + 40n² + 2n² = 79n² ≤ 53 · 3n² = C · f(n).

Hence g = O(f).

(ii) Similarly, for f = O(g): for all n ≥ 1,

f(n) = 3n² ≤ 37n² + 120n + 17 = g(n),

so the definition holds with C = 1 and k = 1. Hence f = O(g).
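As a quick numerical sanity check of the constants used above (and assuming the reconstruction f(n) = 3n², g(n) = 37n² + 120n + 17), the two inequalities can be tested over a range of n. This snippet is an added illustration, not part of the original answer.

def f(n):
    return 3 * n ** 2

def g(n):
    return 37 * n ** 2 + 120 * n + 17

C, k = 53, 3
# g(n) <= C * f(n) for all sampled n >= k, supporting g = O(f)
print(all(g(n) <= C * f(n) for n in range(k, 10000)))   # True
# f(n) <= 1 * g(n) for all sampled n >= 1, supporting f = O(g)
print(all(f(n) <= g(n) for n in range(1, 10000)))       # True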

    Q3. Explain the concept of bubble sort and also write the algorithm for bubble sort.

    Bubble sort:

If you are sorting content into an order, one of the simplest techniques that exists is the bubble sort technique. It works by repeatedly stepping through the list to be sorted, comparing each pair of adjacent items and swapping them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted.

Step-by-step example

Let us take the array of numbers "5 1 4 2 8" and sort the array from lowest number to greatest number using the bubble sort algorithm. In each step, the elements being compared are written in bold.

First Pass:

( 5 1 4 2 8 ) → ( 1 5 4 2 8 ), Here, the algorithm compares the first two elements and swaps them since 5 > 1.

( 1 5 4 2 8 ) → ( 1 4 5 2 8 ), Swap since 5 > 4

( 1 4 5 2 8 ) → ( 1 4 2 5 8 ), Swap since 5 > 2

( 1 4 2 5 8 ) → ( 1 4 2 5 8 ), Now, since these elements are already in order (8 > 5), the algorithm does not swap them.

Second Pass:

( 1 4 2 5 8 ) → ( 1 4 2 5 8 )

( 1 4 2 5 8 ) → ( 1 2 4 5 8 ), Swap since 4 > 2

( 1 2 4 5 8 ) → ( 1 2 4 5 8 )

( 1 2 4 5 8 ) → ( 1 2 4 5 8 )

Now, the array is already sorted, but our algorithm does not know if it is completed. The algorithm needs one whole pass without any swap to know it is sorted.

    Third Pass:


( 1 2 4 5 8 ) → ( 1 2 4 5 8 )

( 1 2 4 5 8 ) → ( 1 2 4 5 8 )

( 1 2 4 5 8 ) → ( 1 2 4 5 8 )

( 1 2 4 5 8 ) → ( 1 2 4 5 8 )

Finally, the array is sorted, and the algorithm can terminate.
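Since the question also asks for the algorithm itself, a straightforward version is given below. This Python sketch is ours (the function name bubble_sort is an assumption, not from the text); it includes the early exit described above, stopping after a whole pass with no swaps.

def bubble_sort(arr):
    """Sort the list arr in place in ascending order using bubble sort."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # After i passes, the last i elements are already in their final positions.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]   # swap the adjacent pair
                swapped = True
        if not swapped:      # a whole pass with no swaps: the list is sorted
            break
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]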

Q4. Prove that if n ≥ 1, then for any n-key B-tree T of height h and minimum degree t ≥ 2,

h ≤ log_t ((n + 1) / 2).

In computer science, a B-tree is a tree data structure that keeps data sorted and allows searches, sequential access, insertions, and deletions in logarithmic amortized time. The B-tree is a generalization of a binary search tree in that a node may have more than two children. Unlike self-balancing binary search trees, the B-tree is optimized for systems that read and write large blocks of data. It is commonly used in databases and file systems.

    Proof:

If a B-tree has height h, the root contains at least one key and all other nodes contain at least t - 1 keys. Thus, there are at least 2 nodes at depth 1, at least 2t nodes at depth 2, at least 2t² nodes at depth 3, and so on, until at depth h there are at least 2t^(h-1) nodes. Counting the keys, we get

n ≥ 1 + (t - 1) · Σ_{i=1..h} 2t^(i-1) = 1 + 2(t - 1) · (t^h - 1)/(t - 1) = 2t^h - 1.

By simple algebra, we get t^h ≤ (n + 1)/2. Taking the base-t logarithm of both sides gives h ≤ log_t ((n + 1)/2), which proves the result.
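The bound can also be checked numerically: a B-tree of height h and minimum degree t contains at least 2t^h - 1 keys, and at that minimum the bound h ≤ log_t((n + 1)/2) holds with equality. The short Python check below is an added illustration, not part of the original proof.

import math

def min_keys(t, h):
    """Minimum number of keys in a B-tree of height h and minimum degree t."""
    return 2 * t ** h - 1

for t in (2, 3, 5):
    for h in (1, 2, 3, 4):
        n = min_keys(t, h)
        bound = math.log((n + 1) / 2, t)    # log_t((n + 1) / 2)
        assert h <= bound + 1e-9            # the theorem holds (with equality here)
        print(f"t={t} h={h} n={n} bound={bound:.4f}")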


Q5. Explain the concept of breadth first search (BFS).

Given a graph G = (V, E) with vertex set V and edge set E and a particular source vertex s, breadth first search finds or discovers every vertex that is reachable from s. First it discovers every vertex adjacent to s, then systematically, for each of those vertices, it finds all the vertices adjacent to them, and so on. In doing so, it computes the distance, and the shortest path in terms of the fewest number of edges, from the source vertex s to each reachable vertex. Breadth first search also produces a breadth-first tree with root vertex s in the process of searching or traversing the graph.

To record the status of each vertex, that is, whether it is still unknown, whether it has been discovered (or found), and whether all of its adjacent vertices have also been discovered, the vertices are termed unknown, discovered and visited, respectively. So if (u, v) ∈ E and u is visited, then v will be either discovered or visited, i.e., either v has just been discovered or the vertices adjacent to v have also been found or visited.

As breadth first search forms a breadth-first tree, if in the edge (u, v) vertex v is discovered in the adjacency list of an already discovered vertex u, then we say that u is the parent or predecessor vertex of v. Each vertex is discovered only once.

The data structure we use in this algorithm is a queue to hold vertices. In this algorithm we assume that the graph is represented using an adjacency list representation. front[Q] is used to represent the element at the front of the queue. The procedure empty() returns true if the queue is empty, otherwise it returns false. The queue is represented as Q. The procedures enqueue() and dequeue() are used to insert and delete an element from the queue, respectively. The data structure status[ ] is used to store the status of each vertex as unknown, discovered or visited.

    1) Algorithm of Breadth First Search

1. for each vertex u ∈ V - {s}
2.     status[u] = unknown
3. status[s] = discovered
4. enqueue(Q, s)
5. while (empty(Q) = false)
6.     u = front[Q]
7.     for each vertex v adjacent to u
8.         if status[v] = unknown
9.             status[v] = discovered
10.            parent[v] = u
11.            enqueue(Q, v)
12.    end for
13.    dequeue(Q)
14.    status[u] = visited
15.    print u is visited
16. end while

The algorithm works as follows. Lines 1-2 initialize the status of each vertex to unknown. Because we have to start searching from vertex s, line 3 gives the status discovered to vertex s. Line 4 inserts the initial vertex s into the queue. The while loop contains the statements from line 5 to the end of the algorithm, and it runs as long as there remain discovered vertices in the queue; we can see that the queue will only ever contain discovered vertices. Line 6 takes the element u at the front of the queue, and in lines 7 to 12 the adjacency list of vertex u is traversed: each unknown vertex v in the adjacency list of u has its status marked as discovered and its parent marked as u, and is then inserted into the queue. In line 13, vertex u is removed from the queue. In lines 14-15, when there are no more elements in the adjacency list of u, the status of u is changed to visited and u is printed as visited.

    The algorithm given above can also be improved by storing the distance of each vertex u from the

    source vertex s using an array distance [ ] and also by permanently recording the predecessor or

    parent of each discovered vertex in the array parent[ ]. In fact, the distance of each reachable

    vertex from the source vertex as calculated by the BFS is the shortest distance in terms of the

    number of edges traversed. So next we present the modified algorithm for breadth first search.

    2) Modified Algorithm

    Program BFS (G, s)

1. for each vertex u ∈ V - {s}
2.     status[u] = unknown
3.     parent[u] = NULL
4.     distance[u] = infinity
5. status[s] = discovered
6. distance[s] = 0
7. parent[s] = NULL
8. enqueue(Q, s)
9. while (empty(Q) = false)
10.    u = front[Q]
11.    for each vertex v adjacent to u
12.        if status[v] = unknown
13.            status[v] = discovered
14.            parent[v] = u
15.            distance[v] = distance[u] + 1
16.            enqueue(Q, v)
17.    dequeue(Q)


    18. status[u] = visited

    19. print u is visited

In the above algorithm, the newly inserted line 3 initializes the parent of each vertex to NULL, line 4 initializes the distance of each vertex from the source vertex to infinity, line 6 initializes the distance of the source vertex s to 0, line 7 initializes the parent of the source vertex s to NULL, line 14 records the parent of v as u, and line 15 calculates the shortest distance of v from the source vertex s as the distance of u plus 1.
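For concreteness, the modified algorithm can be written as a short runnable routine. The Python sketch below is an added illustration (the original gives only pseudocode); it uses collections.deque as the queue Q and returns the status, parent and distance tables described above, and the example graph at the end is only an assumed stand-in for the figures.

from collections import deque

def bfs(graph, s):
    """Breadth-first search on a graph given as {vertex: [adjacent vertices]}.
    Returns (status, parent, distance) dictionaries, as in the pseudocode."""
    status = {u: "unknown" for u in graph}
    parent = {u: None for u in graph}
    distance = {u: float("inf") for u in graph}

    status[s] = "discovered"
    distance[s] = 0
    queue = deque([s])                      # enqueue(Q, s)

    while queue:                            # while the queue is not empty
        u = queue[0]                        # u = front[Q]
        for v in graph[u]:                  # for each vertex v adjacent to u
            if status[v] == "unknown":
                status[v] = "discovered"
                parent[v] = u
                distance[v] = distance[u] + 1
                queue.append(v)             # enqueue(Q, v)
        queue.popleft()                     # dequeue(Q)
        status[u] = "visited"
        print(u, "is visited")
    return status, parent, distance

# An assumed example graph in the spirit of the figures above.
graph = {"s": ["a", "b"], "a": ["s", "b", "c", "d"], "b": ["s", "a"],
         "c": ["a", "e", "f"], "d": ["a", "g"], "e": ["c"], "f": ["c"], "g": ["d"]}
bfs(graph, "s")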

Example:

In the figure given below, we can see the graph given initially, in which only the source s is discovered.

Figure (a): Initial Input Graph

Figure (b): After we visit s

We take the unknown (i.e., undiscovered) adjacent vertices of s and insert them in the queue, first a and then b. The values of the data structures are modified as given below:

Next, after completing the visit of a, we get the figure and the data structures as given below:

    Figure (c): After we visit a


    Figure (d): After we visit b

    Figure (e): After we visit c

    Figure (f): After we visit d

    Figure (g): After we visit e


    Figure (h): After we visit f

    Figure (i): After we visit g

Figure (a): Initial Input Graph

    Figure (b): We take unknown (i.e., undiscovered) adjacent vertices of s and insert them in the

    queue.

Figure (c): Now the gray (discovered) vertices are b, c and d, and we can visit any of them depending upon which vertex was inserted in the queue first. As in this example we inserted b first, it is now at the front of the queue, so next we will visit b.

Figure (d): As there is no undiscovered vertex adjacent to b, no new vertex is inserted in the queue; only the vertex b is removed from the queue.

    Figure (e): Vertices e and f are discovered as adjacent vertices of c, so they are inserted in the

    queue and then c is removed from the queue and is visited.

Figure (f): Vertex g is discovered as the adjacent vertex of d, and after that d is removed from the queue and its status is changed to visited.

Figure (g): No undiscovered vertex adjacent to e is found, so e is removed from the queue and its status is changed to visited.

Figure (h): No undiscovered vertex adjacent to f is found, so f is removed from the queue and its status is changed to visited.

Figure (i): No undiscovered vertex adjacent to g is found, so g is removed from the queue and its status is changed to visited. Now, as the queue becomes empty, the while loop stops.


Q6. Explain Kruskal's Algorithm.

Kruskal's Algorithm

Kruskal's algorithm finds a minimal spanning tree of a given weighted graph. In this method, we stress the choice of edges of minimum weight from amongst all the available edges, subject to the condition that the chosen edges do not form a cycle.

The connectivity of the chosen edges, at any stage, in the form of a subtree, which was emphasized in Prim's algorithm, is not essential.

We briefly describe Kruskal's algorithm to find a minimal spanning tree of a given weighted and connected graph, as follows:

    i) First of all, order all the weights of the edges in increasing order. Then repeat the following two

    steps till a set of edges is selected containing all the vertices of the given graph.

    ii) Choose an edge having the weight which is the minimum of the weights of the edges not

    selected so far.

    iii) If the new edge forms a cycle with any subset of the earlier selected edges, then drop it, else,

    add the edge to the set of selected edges.

We illustrate Kruskal's algorithm through the following:

    Example:

    Let us consider the following graph, for which the minimal spanning tree is required.

    Figure 1

Let Eg denote the set of edges of the graph that are chosen up to some stage.

    According to the step (i) above, the weights of the edges are arranged in increasing order as the

    set

    {1, 3, 4.2, 5, 6}

In the first iteration, the edge (a, b) is chosen, which is of weight 1, the minimum of all the weights of the edges of the graph.

As a single edge does not form a cycle, the edge (a, b) is selected, so that Eg = ((a, b))

    After first iteration, the graph with selected edges in bold is as shown below:

    Figure 2

    Second Iteration

Next, the edge (c, d) is of weight 3, the minimum among the remaining edges. Also, the edges (a, b) and (c, d) do not form a cycle, as shown below. Therefore, (c, d) is selected, so that

Eg = ((a, b), (c, d))

    Thus, after second iteration, the graph with selected edges in bold is as shown below:

    Figure 3


    It may be observed that the selected edges do not form a connected subgraph or subtree of the

    given graph.

    Third Iteration

Next, the edge (a, d) is of weight 4.2, the minimum for the remaining edges. Also, the edges in Eg along with the edge (a, d) do not form a cycle. Therefore, (a, d) is selected, so that the new Eg = ((a, b), (c, d), (a, d)). Thus, after the third iteration, the graph with selected edges in bold is as shown below:

    Figure 4

    Fourth Iteration

Next, the edge (a, c) is of weight 5, the minimum for the remaining edges. However, the edge (a, c) forms a cycle with two edges in Eg, viz., (a, d) and (c, d). Hence (a, c) is not selected and hence not considered as a part of the to-be-found spanning tree.

    Figure 5

    At the end of fourth iteration, the graph with selected edges in bold remains the same as at the end

    of the third iteration, as shown below:

    Figure 6

    Fifth Iteration

Next, the edge (e, d), the only remaining edge that can be considered, is considered. As (e, d) does not form a cycle with any of the edges in Eg, the edge (e, d) is put in Eg. The graph at this stage, with the selected edges in bold, is as follows:

Figure 7

At this stage we find that each of the vertices of the given graph is a vertex of some edge in Eg. Further, we observe that the edges in Eg form a tree and hence form the required spanning tree. Also, from the choice of the edges in Eg, it is clear that the spanning tree is of minimum weight.

Next, we consider a semi-formal definition of Kruskal's algorithm.

ALGORITHM Spanning-Kruskal (G)

// The algorithm constructs a minimum spanning tree by choosing successively edges
// of minimum weights out of the remaining edges.
// The input to the algorithm is a connected graph G = (V, E), in which V is the set of
// vertices and E the set of edges; the weight of each edge is also given.
// The output is the set of edges, denoted by ET, which constitutes a minimum
// spanning tree of G.
// The variable edge-counter is used to count the number of selected edges so far.
// The variable t is used to count the number of edges considered so far.

Arrange the edges in E in nondecreasing order of the weights of edges. After the arrangement, the edges in order are labeled as e1, e2, ..., e|E|

ET ← ∅                    // initialize the set of tree edges as empty
edge-counter ← 0          // initialize the edge counter to zero


t ← 0                     // initialize the number of processed edges as zero
// let n = number of vertices in V

while edge-counter < n - 1
    t ← t + 1             // increment the counter for the number of edges considered so far
    if the edge et does not form a cycle with any subset of edges in ET then
    begin
        // if et along with the edges earlier in ET does not form a cycle,
        // then add et to ET and increase the edge counter
        ET ← ET ∪ {et}
        edge-counter ← edge-counter + 1
    end if
return ET
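As an added illustration (not part of the original text), the same procedure can be written compactly in Python, using a union-find (disjoint-set) structure for the cycle test. The edge list reproduces the worked example above, with the vertex names a to e taken from the figures.

def kruskal(vertices, edges):
    """edges: list of (weight, u, v) tuples. Returns the list ET of chosen edges."""
    parent = {v: v for v in vertices}             # union-find forest, one tree per vertex

    def find(x):                                  # root of the component containing x
        while parent[x] != x:
            parent[x] = parent[parent[x]]         # path halving
            x = parent[x]
        return x

    ET = []
    for w, u, v in sorted(edges):                 # nondecreasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                              # no cycle with the edges chosen so far
            ET.append((u, v, w))
            parent[ru] = rv                       # union the two components
        if len(ET) == len(vertices) - 1:          # n - 1 edges: spanning tree complete
            break
    return ET

edges = [(1, "a", "b"), (3, "c", "d"), (4.2, "a", "d"), (5, "a", "c"), (6, "e", "d")]
print(kruskal(["a", "b", "c", "d", "e"], edges))
# [('a', 'b', 1), ('c', 'd', 3), ('a', 'd', 4.2), ('e', 'd', 6)]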