Literature Survey On A Load Balancing Model Based on Cloud Partitioning for the Public Cloud


What is Cloud Computing?

Cloud computing is a model in which many computers are interconnected through a real-time network such as the Internet.

Cloud computing is a form of distributed computing: it enables convenient, on-demand, dynamic, and reliable use of distributed computing resources.

The cloud computing model has five main characteristics: on-demand service, broad network access, resource pooling, flexibility, and measured service.

Challenges in Cloud Computing

Maintaining stability while processing many jobs is a very difficult problem in cloud computing.

The job arrival pattern cannot be predicted, and the capacities of the nodes in the cloud differ. Hence, to balance the load, it is crucial to control workloads so as to improve system performance and maintain stability.

The load on every cloud is variable and dependent on various factors.

To handle this problem of load imbalance on clouds and to increase their working efficiency, we implement a model for load balancing by partitioning the public cloud.

Existing System

A locally distributed system has various computers interconnected by a local communication network.

In cloud computing, controlling access to information is difficult, and the user does not know where exactly the data is stored. Data in the cloud is stored in a distributed manner; it is saved at remote or virtual locations chosen more or less at random.

If data is uploaded to the cloud at random, it leads to imbalance in cloud server storage. For example, some nodes may be heavily loaded while other nodes are idle or doing very little work, e.g. one server holds 10 GB of data while a second server has 0 GB uploaded to it.
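As a purely illustrative sketch (the server names, file sizes, and placement policies below are made up for this survey, not taken from the base paper), the following Python snippet contrasts random placement of uploads with always choosing the least-loaded server:

import random

# Four servers tracked by their current load in GB (illustrative values).
servers_random = {"s1": 0, "s2": 0, "s3": 0, "s4": 0}
servers_balanced = {"s1": 0, "s2": 0, "s3": 0, "s4": 0}

uploads_gb = [2, 3, 1, 4, 2, 3, 1, 4]  # made-up file sizes

for size in uploads_gb:
    # Random placement: files can pile up on a few servers.
    servers_random[random.choice(list(servers_random))] += size
    # Least-loaded placement: always pick the server with minimum load.
    least_loaded = min(servers_balanced, key=servers_balanced.get)
    servers_balanced[least_loaded] += size

print("random placement:      ", servers_random)
print("least-loaded placement:", servers_balanced)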


Amazon example

Challenges for Load Balancing

There are several qualitative metrics that can be improved for better load balancing in cloud computing:

Throughput: The total number of tasks that complete execution in a given span of time. High throughput is required for better performance of the system.

Associated Overhead: The amount of overhead incurred while executing the load balancing algorithm, made up of task movement, inter-process communication, and inter-processor communication. For a load balancing technique to work properly, the overhead should be minimal.

Fault tolerance: The ability of the algorithm to keep performing load balancing despite arbitrary link or node failures. Every load balancing algorithm should have a good fault tolerance approach.

Migration time: The amount of time required to transfer a process from one system node to another for execution. For better performance of the system, this time should always be low.

Response time: In a distributed system, the time taken by a particular load balancing technique to respond. This time should be minimized for better performance.

Resource Utilization: The parameter that indicates to what extent the resources are utilized. For efficient load balancing, the system's resources should be utilized optimally.

Scalability: The ability of the load balancing algorithm to work for a system with any finite number of processors and machines. This parameter can be improved for better system performance.

Performance: The overall efficiency of the system. If all of the above parameters are improved, the overall system performance improves.

Proposed System

Load balancing is simplified by partitioning the cloud.

A main controller in the cloud chooses a suitable partition for each arriving job; the best load balancing strategy helps select the appropriate partition.

All the status information is gathered and analyzed by the main controller and the balancers, which also perform the load balancing operations.
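A minimal sketch of how this status information might be represented, assuming simple in-memory records (the field names and load values are illustrative, not taken from the surveyed paper):

from dataclasses import dataclass, field
from typing import List

@dataclass
class NodeStatus:
    node_id: str
    load: float  # fraction of capacity in use, 0.0 to 1.0

@dataclass
class PartitionStatus:
    partition_id: str
    nodes: List[NodeStatus] = field(default_factory=list)

    def average_load(self) -> float:
        # Figure a balancer would report to the main controller.
        if not self.nodes:
            return 0.0
        return sum(n.load for n in self.nodes) / len(self.nodes)

# The main controller keeps one record per partition, refreshed by its balancer.
partitions = [
    PartitionStatus("part-1", [NodeStatus("n1", 0.2), NodeStatus("n2", 0.5)]),
    PartitionStatus("part-2", [NodeStatus("n3", 0.9)]),
]
for p in partitions:
    print(p.partition_id, round(p.average_load(), 2))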

Fig: Partition of cloud

We will use approximately four different servers, partitioned into small clouds called balancers (each balancer will have some servers).

The Cloud Service Provider (CSP) handles the main cloud (which is made up of the small clouds) through a component called the main controller. The client interacts with the cloud using a web application at the client site.

When a client uploads a file, it is stored on a server; the cloud takes care that it is loaded onto the server with the minimum load.

Fig: Load balancing architecture

Algorithm

When a job arrives at the public cloud, the first step is to choose the right partition. The cloud partition status can be divided into three types:

(1) Idle: when the percentage of idle nodes exceeds alpha, change to idle status.

(2) Normal: when the percentage of normal nodes exceeds beta, change to normal load status.

(3) Overload: when the percentage of overloaded nodes exceeds gamma, change to overloaded status.

The parameters alpha, beta, and gamma are set by the cloud partition balancers.
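As a hedged sketch (the threshold defaults and the fallback case below are illustrative assumptions, not prescribed by the surveyed paper), this classification could be expressed in Python as:

def classify_partition(node_statuses, alpha=0.5, beta=0.6, gamma=0.3):
    """Return 'idle', 'normal', or 'overload' following rules (1)-(3) above.

    node_statuses: per-node states, each 'idle', 'normal', or 'overload'.
    alpha, beta, gamma: fractions set by the cloud partition balancer
    (the defaults here are arbitrary example values).
    """
    total = len(node_statuses)
    if total == 0:
        return "idle"
    if node_statuses.count("idle") / total > alpha:
        return "idle"
    if node_statuses.count("normal") / total > beta:
        return "normal"
    if node_statuses.count("overload") / total > gamma:
        return "overload"
    return "normal"  # fallback when no rule fires (not specified in the source)

print(classify_partition(["idle", "idle", "idle", "normal"]))  # -> idle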

Best Partition Searching Algorithm:

begin
    while request do
        searchBestPart(request);
        if part_state == idle OR part_state == normal then
            send request to Part;
        else
            search for another Part;
        end if
    end while
end

We use a round robin algorithm to select the suitable node. The main controller checks the load of every balancer and then assigns the job to the balancer with the minimum load. The balancer then checks the load of each node, and the job is assigned to the node with the minimum load.
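The following Python sketch illustrates this two-level dispatch under simple assumptions: partition records, load values, and identifiers are illustrative, and minimum-load selection is used at both levels as described in the text above.

def choose_partition(partitions):
    """Main controller: pick the idle/normal partition with the lowest load."""
    candidates = [p for p in partitions if p["state"] in ("idle", "normal")]
    if not candidates:
        return None  # every partition is overloaded; the job must wait
    return min(candidates, key=lambda p: p["load"])

def choose_node(partition):
    """Balancer: within the chosen partition, pick the node with minimum load."""
    return min(partition["nodes"], key=lambda n: n["load"])

partitions = [
    {"id": "part-1", "state": "normal", "load": 0.6,
     "nodes": [{"id": "n1", "load": 0.7}, {"id": "n2", "load": 0.5}]},
    {"id": "part-2", "state": "idle", "load": 0.2,
     "nodes": [{"id": "n3", "load": 0.1}, {"id": "n4", "load": 0.3}]},
    {"id": "part-3", "state": "overload", "load": 0.95, "nodes": []},
]

best = choose_partition(partitions)
if best is not None:
    node = choose_node(best)
    print(f"job sent to partition {best['id']}, node {node['id']}")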

Fig: Process workflow

References

[1] Dongliang Zhang, Changjun Jiang, Shu Li, A fast adaptive load balancing method for parallel particle-based simulations, Simulation Modelling Practice and Theory 17 (2009) 1032-1042.

[2] Dhinesh Babu L.D., P. Venkata Krishna, Honey bee behaviour inspired load balancing of tasks in cloud computing environments, Applied Soft Computing 13 (2013) 2292-2303.

[3] Bin Dong, Xiuqiao Li, Qimeng Wu, Limin Xiao, Li Ruan, A dynamic and adaptive load balancing strategy for parallel file system with large-scale I/O servers, J. Parallel and Distributed Computing 72 (2012) 1254-1268.

[4] Yunhua Deng, Rynson W.H. Lau, Heat diffusion based dynamic load balancing for distributed virtual environments, in: Proceedings of the 17th ACM Symposium on Virtual Reality Software and Technology, ACM, 2010, pp. 203-210.

[5] Markus Esch, Eric Tobias, Decentralized scale-free network construction and load balancing in massive multiuser virtual environments, in: Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom), 6th International Conference on, IEEE, 2010, pp. 1-10.

[6] B. Godfrey, K. Lakshminarayanan, S. Surana, R. Karp, I. Stoica, Load balancing in dynamic structured P2P systems, in: INFOCOM 2004, Twenty-third Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 4, IEEE, 2004, pp. 2253-2262.