
PROJECT REPORT ON
WINDS OF CHANGE: FROM VENDOR LOCK-IN TO THE META CLOUD

Submitted to Jawaharlal Nehru Technological University Anantapuramu,
in partial fulfillment of the requirements for the award of the degree of

BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING

SUBMITTED BY
BATCH NO.: 3

N. NAWAZ KHAN       103P1A0548
M. GOWRI SANKAR     103P1A0547
K. SREENIVASULU     103P1A0532
T. MUKESH           103P1A0563

Under the Esteemed Guidance of
Mrs. R. ROOPA, M.Tech.
Assistant Professor
Department of Computer Science and Engineering

PRIYADARSHINI INSTITUTE OF TECHNOLOGY
(Approved by A.I.C.T.E., New Delhi, Affiliated to J N T U A, Anantapuramu)
Ramachandrapuram, Tirupathi – 517561, A.P.

2013-2014


PRIYADARSHINI INSTITUTE OF TECHNOLOGY
(Approved by A.I.C.T.E., New Delhi, Affiliated to J N T U A, Anantapuramu)
Ramachandrapuram, Tirupathi – 517561, A.P.

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Certificate

This is to certify that the project work entitled “WINDS OF CHANGE: FROM VENDOR LOCK-IN TO THE META CLOUD” is the bonafide work done by

N. NAWAZ KHAN       103P1A0548
M. GOWRI SANKAR     103P1A0547
K. SREENIVASULU     103P1A0532
T. MUKESH           103P1A0563

in partial fulfillment of the requirements for the award of BACHELOR OF TECHNOLOGY in COMPUTER SCIENCE AND ENGINEERING to the JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY ANANTAPURAMU. This record is a bonafide work carried out by them under my guidance and supervision. The results embodied in this project report have not been submitted to any other University or Institute for the award of any Degree or Diploma.

Guide:                                   Head of the Department:
Mrs. R. ROOPA, M.Tech.                   Mrs. B. GEETHAVANI, M.Tech., (Ph.D.)
Assistant Professor                      Professor

Submitted for the university examination (viva-voce) held on …………………………………….


INTERNAL EXAMINER EXTERNAL EXAMINER

ACKNOWLEDGEMENT

First we would like to thank our parents for their kind help, encouragement

and moral support.

We are also thankful to our beloved Chairman Late. Sri. P.SUBBIRAMI

REDDY and Secretary Sri. P.V.SREENADHA REDDY for permitting us to use

the facilities available to accomplish the project successfully.

Special thanks are due to our principal Dr. M.Murali Krishna for providing

us all the requirements for completion of the project successfully.

We express our heartfelt gratitude to Prof. B. GEETHAVANI, Head of the

Department of Computer Science and Engineering, for her kind attention and

valuable guidance to us throughout this course.

We are thankful with profound respect to our guide Mrs. R.ROOPA,

Assistant Professor for her invaluable guidance and constant encouragement given

to us during this work.

We also thank all the teaching and non-teaching staff of the Computer Science and Engineering Department, who have helped us complete the project in time.

Project Associates:

N. NAWAZ KHAN       103P1A0548
M. GOWRI SANKAR     103P1A0547
K. SREENIVASULU     103P1A0532
T. MUKESH           103P1A0563


ABSTRACT

The cloud computing paradigm has achieved widespread adoption in recent years. Its

success is due largely to customers’ ability to use services on demand with a pay-as-you-go pricing model, which has proved convenient in many respects. Low costs and high flexibility make migrating to the cloud compelling. Despite its obvious advantages, however, many companies hesitate to “move to the cloud,” mainly because of concerns related to service availability, data lock-in, and legal uncertainties. Lock-in is particularly problematic. For one

thing, even though public cloud availability is generally high, outages still occur. Businesses

locked into such a cloud are essentially at a standstill until the cloud is back online. Moreover,

public cloud providers generally don’t guarantee particular service level agreements (SLAs)—

that is, businesses locked into a cloud have no guarantees that it will continue to provide the

required quality of service (QoS). Finally, most public cloud providers’ terms of service let that

provider unilaterally change pricing at any time. Hence, a business locked into a cloud has no

mid- or long-term control over its own IT costs. At the core of all these problems, we can

identify a need for businesses to permanently monitor the cloud they’re using and be able to

rapidly “change horses” — that is, migrate to a different cloud if they discover problems or if

their estimates predict future issues.

Keywords: meta cloud, cloud computing, business, service-level agreements, cloud migration


CONTENTS

Chapter No Content Page No.

1 INTRODUCTION 1

2 SYSTEM ANALYSIS 5

2.1 SYSTEM STUDY 5

2.1.1 EXISTING SYSTEM 5

2.1.2 PROPOSED SYSTEM 8

2.2 LITERATURE SURVEY 10

2.3 FEASIBILITY STUDY 12

3 SYSTEM REQUIREMENTS 13

3.1 HARDWARE REQUIREMENTS 13

3.2 SOFTWARE REQUIREMENTS 13

4 SYSTEM DESIGN 14

4.1 INTRODUCTION 14

4.2 DESIGN OBJECTIVE 18

4.3 UNIFIED MODELING LANGUAGE 20

4.4 DESIGN OF UML DIAGRAMS 28

4.5 MODULES 43

4.5.1 REGISTRATION 43


4.5.2 LOGIN 43

4.5.3 FILE UPLOAD 43

4.5.4 MIGRATE CLOUD 44

4.5.5 SEND MAIL 44

5 IMPLEMENTATION 45

5.1 ABOUT SOFTWARE 45

5.1.1 FRONT END SOFTWARE 45

5.1.2 BACK END SOFTWARE 59

6 SYSTEM TESTING 61

6.1 TESTING FUNDAMENTALS 61

6.2 WHITE BOX TESTING 61

6.3 BLACK BOX TESTING 61

6.4 SAMPLE TEST CASES 61

7 RESULTS 64

8 CONCLUSION 83

REFERENCES 84


LIST OF FIGURES

Figure No Title Page No.

4.3.1 DATAFLOW DIAGRAM 28

4.4.1 CLASS DIAGRAM 30

4.4.2 USE CASE DIAGRAM 31

4.4.3 SEQUENCE DIAGRAM 36

4.4.4 COLLABORATION DIAGRAM 39

4.4.5 ACTIVITY DIAGRAM 41

4.4.6 STATE CHART DIAGRAM 42


LIST OF TABLES

Table No Table Name Page No.

1 REGISTRATION 32

2 LOGIN 32

3 VIEW FILE 32

4 SEE ALERT 33

5 SEND MAIL 33

6 MIGRATE CLOUD 33

7 UPLOAD FILE 34

8 DOWNLOAD FILE 34

9 UPLOAD INTO CLOUD 35


1. INTRODUCTION

The Greek myths tell of creatures plucked from the surface of the Earth and enshrined as

constellations in the night sky. Something similar is happening today in the world of computing.

Data and programs are being swept up from desktop PCs and corporate server rooms and

installed in “the compute cloud”. In general, there is a shift in the geography of computation.

What is cloud computing?

“An emerging computer paradigm where data and services reside in massively scalable data

centers in the cloud and can be accessed from any connected devices over the internet”

Like other definitions of topics like these, an understanding of the term cloud computing

requires an understanding of various other terms which are closely related to this. While there is

a lack of precise scientific definitions for many of these terms, general definitions can be given.

Cloud computing is an emerging paradigm in the computer industry where the computing is

moved to a cloud of computers. It has become one of the buzz words of the industry. The core

concept of cloud computing is, quite simply, that the vast computing resources that we need will

reside somewhere out there in the cloud of computers and we’ll connect to them and use them as

and when needed.

Computing can be described as any activity of using and/or developing computer hardware

and software. It includes everything that sits in the bottom layer, i.e. everything from raw

compute power to storage capabilities. Cloud computing ties together all these entities and

delivers them as a single integrated entity under its own sophisticated management.

Cloud is a term used as a metaphor for the wide area networks (like internet) or any such

large networked environment. It came partly from the cloud-like symbol used to represent the

complexities of the networks in the schematic diagrams. It represents all the complexities of the

network which may include everything from cables, routers, servers, data centers and all such

other devices.

Computing started off with the mainframe era. There were big mainframes and everyone

connected to them via “dumb” terminals. This old model of business computing was frustrating

for the people sitting at the dumb terminals because they could do only what they were

“authorized” to do. They were dependent on the computer administrators to give them

permission or to fix their problems. They had no way of staying up to date with the latest innovations.


The personal computer was a rebellion against the tyranny of centralized computing operations.

There was a kind of freedom in the use of personal computers.

This was later replaced by client-server architectures, with enterprise servers showing up across the industry. Computation was performed at the servers, so it no longer ate up the resources of the user's own machine. The Internet grew in the lap of these servers. With cloud computing we have come full circle: we are back to a centralized computing infrastructure, but this time it is something that can easily be accessed via the Internet and over which we retain control.

Cloud computing is a way of providing various services on virtual machines allocated on

top of a large physical machine pool which resides in the cloud. Cloud computing comes into

focus only when we think about what IT has always wanted - a way to increase capacity or add

different capabilities to the current setting on the fly without investing in new infrastructure,

training new personnel, or licensing new software. Here ‘on the fly’ and ‘without investing or training’ become the key phrases, and cloud computing offers exactly such a solution.

We have lots of compute power and storage capabilities residing in the distributed

environment of the cloud. What cloud computing does is to harness the capabilities of these

resources and make available these resources as a single entity which can be changed to meet the

current needs of the user.

The basis of cloud computing is to create a set of virtual servers on the available vast

resource pool and give it to the clients. Any web enabled device can be used to access the

resources through the virtual servers. Based on the computing needs of the client, the

infrastructure allotted to the client can be scaled up or down.

From a business point of view, cloud computing is a method to address the scalability

and availability concerns of large-scale applications with less overhead. Since the resources allocated to the client can be varied based on the client's needs without any fuss, the overhead is very low.


Characteristics of Cloud Computing:

1. Self-Healing:

Any application or any service running in a cloud computing environment has the

property of self-healing. In case of failure of the application, there is always a hot backup of the

application ready to take over without disruption. There are multiple copies of the same

application - each copy updating itself regularly so that at times of failure there is at least one

copy of the application which can take over without even the slightest change in its running state.

2. Multi-tenancy:

With cloud computing, any application supports multi-tenancy - that is multiple tenants

at the same instant of time. The system allows several customers to share the infrastructure

allotted to them without any of them being aware of the sharing. This is done by virtualizing the

servers on the available machine pool and then allotting the servers to multiple users. This is

done in such a way that the privacy of the users or the security of their data is not compromised.

3. Linearly Scalable:

Cloud computing services are linearly scalable. The system is able to break down the

workloads into pieces and service it across the infrastructure. An exact idea of linear scalability

can be obtained from the fact that if one server is able to process say 1000 transactions per

second, then two servers can process 2000 transactions per second.

4. Service-oriented:

Cloud computing systems are all service oriented - i.e. the systems are such that they are

created out of other discrete services. Many such discrete services which are independent of each

other are combined together to form this service. This allows re-use of the different services that

are available and that are being created. Using the services that were just created, other such

services can be created.


5. SLA Driven:

Usually businesses have agreements on the amount of service to be provided. Scalability and availability issues can cause these agreements to be broken. But cloud computing services are

SLA driven such that when the system experiences peaks of load, it will automatically adjust

itself so as to comply with the service-level agreements. The services will create additional

instances of the applications on more servers so that the load can be easily managed.
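To make the SLA-driven behaviour concrete, the sketch below (in Java, the language used elsewhere in this project) shows one possible scaling rule; the class name, the 200 ms target, and the proportional formula are illustrative assumptions only, not part of any provider's API.

// Minimal sketch of an SLA-driven scaling decision (hypothetical names and thresholds).
public class SlaDrivenScaler {

    // Assumed SLA target: mean response time must stay below this many milliseconds.
    private static final double SLA_RESPONSE_TIME_MS = 200.0;

    // Returns how many extra application instances to start so that the observed
    // load per instance keeps response times within the SLA.
    public static int additionalInstancesNeeded(double observedResponseTimeMs,
                                                int currentInstances) {
        if (observedResponseTimeMs <= SLA_RESPONSE_TIME_MS) {
            return 0; // SLA already met, no scaling needed
        }
        // Simple proportional rule: assume response time scales with load per instance.
        double factor = observedResponseTimeMs / SLA_RESPONSE_TIME_MS;
        int targetInstances = (int) Math.ceil(currentInstances * factor);
        return targetInstances - currentInstances;
    }

    public static void main(String[] args) {
        // A load peak pushes response time to 350 ms with 4 instances running.
        System.out.println(additionalInstancesNeeded(350.0, 4)); // prints 3
    }
}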

6. Virtualized:

The applications in cloud computing are fully decoupled from the underlying hardware.

The cloud computing environment is a fully virtualized environment.

7. Flexible:

Another feature of the cloud computing services is that they are flexible. They can be

used to serve a large variety of workload types - varying from small loads of a small consumer

application to very heavy loads of a commercial application.


2. SYSTEM ANALYSIS

2.1 SYSTEM STUDY

2.1.1 Existing system

Today, almost any business or major activity uses, or relies in some form, on IT and IT services.

These services need to be enabling and appliance-like, and there must be an economy of scale for the total cost of ownership to be better than it would be without cyber infrastructure. Technology needs to improve end-user productivity and reduce technology-driven overhead.

Cloud computing is the delivery of computing services over the Internet. Cloud services

allow individuals and businesses to use software and hardware that are managed by third parties

at remote locations. Examples of cloud services include online file storage, social networking

sites, webmail, and online business applications. The cloud computing model allows access to

information and computer resources from anywhere that a network connection is available.

Cloud computing provides a shared pool of resources, including data storage space, networks,

computer processing power, and specialized corporate and user applications.

Cloud architecture:

The systems architecture of the software systems involved in the delivery of cloud

computing, comprises hardware and software designed by a cloud architect who typically works

for a cloud integrator. It typically involves multiple cloud components communicating with each

other over application programming interfaces, usually web services.

This closely resembles the UNIX philosophy of having multiple programs doing one

thing well and working together over universal interfaces. Complexity is controlled and the

resulting systems are more manageable than their monolithic counterparts.

Cloud architecture extends to the client, where web browsers and/or software applications

access cloud applications. Cloud storage architecture is loosely coupled, where metadata

operations are centralized enabling the data nodes to scale into the hundreds, each independently

delivering data to applications or users.


Fig.2.1 Cloud architecture

Cloud –Types:

Public cloud:

Public cloud or external cloud describes cloud computing in the traditional mainstream.

Public clouds are run by third parties, and applications from different customers are likely to be

mixed together on the cloud’s servers, storage systems, and networks. A public cloud provides

services to multiple customers.

Hybrid cloud:

Hybrid clouds combine both public and private cloud models. This is most often seen

with the use of storage clouds to support Web 2.0 applications.

Private cloud:

Private clouds are built for the exclusive use of one client, providing the utmost control

over data, security, and quality of service. The company owns the infrastructure and

has control over how applications are deployed on it. Private clouds can be built and managed by

a company’s own IT organization or by a cloud provider.

Cloud computing products and services can be classified into 4 major

categories:

They are:

1. Application as a Service (AaaS)

2. Platform as a Service (PaaS)


3. Infrastructure as a service (IaaS)

4. Software as a Service (SaaS)

1. Application as a Service (AaaS):

These are the first kind of cloud computing services that came into being. Under this, a

service is made available to an end-user. The end-user is asked to create an account with the

service provider and start using the application. One of the first famous applications was the web-based email service Hotmail, started in 1996. Scores of such services are available now on the web.

2. Platform as a Service (PaaS):

Cloud vendors are companies that offer cloud computing services and products. One of

the services that they provide is called PaaS. Under this, a computing platform such as an operating system is provided to a customer or end user on a monthly rental basis. Some of the major cloud

computing vendors are Amazon, Microsoft, and Google etc.

3. Infrastructure as a Service (IaaS):

The cloud computing vendors offer infrastructure as a service. One may avail hardware

services such as processors, memory, and networks on an agreed basis for a specific duration and

price.

4. Software as a service (SaaS):

Software package such as CRM or CAD/CAM can be accessed under cloud computing

scheme. Here a customer upon registration is allowed to use software accessible through net and

use it for his or his business process. The related data and work may be stored on local machines

or with the service providers. SaaS services may be available on rental basis or on per use basis.

Deployment of cloud services:

Cloud services are typically made available via a private cloud, community cloud, public

cloud or hybrid cloud. Generally speaking, services provided by a public cloud are offered over

the Internet and are owned and operated by a cloud provider. Some examples include services

aimed at the general public, such as online photo storage services, e-mail services, or social

networking sites. However, services for enterprises can also be offered in a public cloud.

In a private cloud, the cloud infrastructure is operated solely for a specific organization,

and is managed by the organization or a third party. In a community cloud, the service is shared


by several organizations and made available only to those groups. The infrastructure may be

owned and operated by the organizations or by a cloud service provider.

Cloud providers are flooding the market with a confusing body of services, including

compute services such as the Amazon Elastic Compute Cloud (EC2) and VMware vCloud, or

key-value stores, such as the Amazon Simple Storage Service (S3).

Some of these services are conceptually comparable to each other, whereas others are

vastly different, but they’re all, ultimately, technically incompatible and follow no standards but

their own. To further complicate the situation, many companies do not (only) build on public

clouds for their cloud computing needs, but combine public offerings with their own private

clouds, leading to so-called hybrid clouds.

Even though public cloud availability is generally high, outages still occur, and businesses locked into such a cloud are essentially at a standstill until the cloud is back

online. Moreover, public cloud providers generally don’t guarantee particular service level

agreements (SLAs) — that is, businesses locked into a cloud have no guarantees that it will

continue to provide the required quality of service (QoS). Finally, most public cloud providers’

terms of service let that provider unilaterally change pricing at any time. Hence, a business

locked into a cloud has no mid- or long- term control over its own IT costs.

Disadvantages of existing system:

Businesses locked into a single cloud are essentially at a standstill whenever that cloud suffers an outage.

Public cloud providers generally don’t guarantee particular service-level agreements (SLAs), so there is no assurance that the required quality of service (QoS) will be maintained.

Most providers’ terms of service allow unilateral pricing changes, so a locked-in business has no mid- or long-term control over its own IT costs.

2.1.2 Proposed system

We propose the concept of a Meta cloud that incorporates design-time and runtime components. This

Meta cloud would abstract away from existing offerings’ technical incompatibilities, thus

mitigating vendor lock-in. It helps users find the right set of cloud services for a particular use

case and supports an application’s initial deployment and runtime migration.

To some extent, we can realize the Meta cloud based on a combination of existing tools

and concepts, part of which we just examined. We can


categorize these components based on whether they’re important mainly for cloud software

engineers during development time or whether they perform tasks during runtime.

The emergence of yet more cloud offerings from a multitude of service providers calls for

a Meta cloud to smoothen the edges of the jagged cloud landscape. This Meta cloud could solve

the vendor lock-in problems that current public and hybrid cloud users face.

Current Weather in the (Meta) Cloud:

First, standardized programming APIs must enable developers to create cloud-neutral

applications that aren’t hardwired to any single provider or cloud service. Cloud provider

abstraction libraries such as Libcloud (http://libcloud.apache.org), fog (http://fog.io), and jclouds (www.jclouds.org) provide unified APIs for accessing different vendors’ cloud products. Using these libraries, developers are relieved of technological vendor lock-in because they can switch cloud providers for their applications with relatively low overhead.
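As a sketch of how such an abstraction layer decouples application code from any one vendor, the following Java fragment defines a hypothetical provider-neutral compute interface; the names ComputeService, the adapter classes, and their methods are assumptions for illustration, while real libraries such as jclouds or Libcloud offer analogous but richer APIs.

// Hypothetical provider-neutral compute abstraction (illustrative only).
interface ComputeService {
    String startInstance(String imageId, String instanceSize);
    void stopInstance(String instanceId);
}

// One adapter per provider hides the vendor-specific API behind the same interface.
class AmazonEc2Compute implements ComputeService {
    public String startInstance(String imageId, String instanceSize) {
        // ... the EC2-specific client would be called here ...
        return "ec2-instance-id";
    }
    public void stopInstance(String instanceId) { /* EC2-specific call */ }
}

class RackspaceCompute implements ComputeService {
    public String startInstance(String imageId, String instanceSize) {
        // ... the Rackspace-specific client would be called here ...
        return "rackspace-instance-id";
    }
    public void stopInstance(String instanceId) { /* Rackspace-specific call */ }
}

public class PortableDeployment {
    public static void main(String[] args) {
        // Switching providers is a one-line change; the application code stays the same.
        ComputeService cloud = new AmazonEc2Compute();   // or: new RackspaceCompute();
        String id = cloud.startInstance("app-image-1", "medium");
        System.out.println("Started " + id);
        cloud.stopInstance(id);
    }
}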

As a second ingredient, the Meta cloud uses resource templates to define concrete

features that the application requires from the cloud. For instance, an application must be able to

specify that it requires a given number of computing resources, Internet access, and database

storage. Some current tools and initiatives — for example, Amazon’s CloudFormation (http://aws.amazon.com/cloudformation/) or the upcoming TOSCA specification (www.oasis-open.org/committees/tosca) — are working toward similar goals and can be adapted to provide these required features for the Meta cloud. In addition to resource templates, the automated formation and provisioning of cloud applications also depends on sophisticated features to actually deploy and install applications automatically. Predictable and controlled application

deployment is a central issue for cost-effective and efficient deployments in the cloud, and even

more so for the meta cloud. Several application provisioning solutions exist, enabling developers

and administrators to declaratively specify deployment artifacts and dependencies to allow for

repeatable and managed resource provisioning.

Notable examples include Opscode Chef (www.opscode.com/chef/), Puppet

(http://puppetlabs.com), and juju (http://juju.ubuntu.com). At runtime, an important aspect of the

Meta cloud is application monitoring, which enables the Meta cloud to decide whether it’s

necessary to provision new instances of the application or migrate parts of it. Various vendors

provide tools for cloud monitoring, ranging from system-level monitoring (such as CPU and

bandwidth) to application-level monitoring (Amazon’s CloudWatch;


http://aws.amazon.com/cloudwatch/) to SLA monitoring (as with monitis;

http://portal.monitis.com/index.php/cloud-monitoring). However, the Meta cloud requires more

sophisticated monitoring techniques and, in particular, approaches for making automated

provisioning decisions at runtime based on current application users’ context and location.

Advantages of proposed system:

The Meta cloud incorporates design-time and runtime components.

It abstracts away from existing offerings’ technical incompatibilities, thus mitigating vendor lock-in.

It helps users find the right set of cloud services for a particular use case and supports initial deployment and runtime migration.

2.2 LITERATURE SURVEY

Literature survey is the most important step in software development process. Before

developing the tool, it is necessary to determine the time factor, economy, and company strength.

Once these things are satisfied, then next steps are to determine which operating system and

language can be used for developing the tool.

Once the programmers start building the tool the programmers need lot of external

support. This support can be obtained from senior programmers, from book or from websites.

Before building the system, the above considerations are taken into account for developing the

proposed system.

A Meta Cloud Use Case:

Consider, as a use case, a sports betting portal application. A meta-cloud-compliant variant of this application accesses cloud services using the meta cloud API and doesn’t directly talk to the cloud-provider-specific service APIs. For our particular case, this means the application doesn’t

depend on Amazon EC2, SQS, or RDS service APIs, but rather on the meta cloud’s compute,

message queue, and relational database service APIs. For initial deployment, the developer

submits the application’s resource template to the meta cloud.

It specifies not only the three types of cloud services needed to run the sports

application, but also their necessary properties and how they depend on each other. For compute

resources, for instance, the developer can specify CPU, RAM, and disk space according to

terminology defined by the meta cloud resource template DSL. Each resource can be named in

the template, which allows for referencing during deployment, runtime, and migration. The


resource template specification should also contain interdependencies, such as the direct

connection between the Web service compute instances and the message queue service. The rich

information that resource templates provide helps the provisioning strategy component make

profound decisions about cloud service ranking.
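The resource template DSL itself is not specified in this report, so the following self-contained Java sketch only illustrates the kind of information such a template would carry for the sports application; the resource names, sizes, and record types are assumptions for illustration.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a resource template for the sports application.
// Field names and units are assumptions; the real meta cloud DSL is not defined here.
public class SportsAppTemplate {

    record Resource(String name, String type, int cpuCores, int memoryGb, int diskGb) {}
    record Dependency(String from, String to) {}

    public static void main(String[] args) {
        List<Resource> resources = new ArrayList<>();
        // Compute resources for the web tier, normalized independently of any provider.
        resources.add(new Resource("web-tier", "compute", 4, 8, 50));
        // A message queue and a relational database, as used by the sports application.
        resources.add(new Resource("bet-events", "message-queue", 0, 0, 0));
        resources.add(new Resource("bets-db", "relational-database", 2, 4, 100));

        // Interdependencies between named resources guide deployment and later migration.
        List<Dependency> dependencies = List.of(
            new Dependency("web-tier", "bet-events"),
            new Dependency("web-tier", "bets-db"));

        System.out.println(resources);
        System.out.println(dependencies);
    }
}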

We can explain the working principle for initial deployment with a Web search analogy,

in which resource templates are queries and cloud service provider QoS and pricing information

represent indexed documents. Algorithmic aspects of the actual ranking are beyond this article’s

scope. If some resources in the resource graph are only loosely coupled, then the meta cloud will

be more likely to select resources from different cloud providers for a single application. In our

use case, however, we assume that the provisioning strategy ranks the respective Amazon cloud

services first, and that the customer follows this recommendation.

After the resources are determined, the meta cloud deploys the application, together with

an instance of the meta cloud proxy, according to customer-provided recipes. During runtime,

the meta cloud proxy mediates between the application components and the Amazon cloud

resources and sends monitoring data to the resource monitoring component running within the

meta cloud.

Monitoring data helps refine the application’s resource template and the provider’s

overall QoS values, both stored in the knowledge base. The provisioning strategy component

regularly checks this updated information, which might trigger a migration. The meta cloud

could migrate front-end nodes to other providers to place them closer to the application’s users,

for example.

Another reason for a migration might be updated pricing data. After a price cut by

Rackspace, for example, services might migrate to its cloud offerings. To make these decisions,

the provisioning strategy component must consider potential migration costs regarding time and

money.

The actual migration is performed based on customer-provided migration recipes.

Working on the meta cloud, we face the following technical challenges. Resource monitoring

must collect and process data describing different cloud providers’ services such that the

provisioning strategy can compare and rank their QoS properties in a normalized, provider-independent fashion. Although solutions for deployment in the cloud are relatively mature,


application migration isn’t as well supported. Finding the balance between migration facilities

provided by the meta cloud and the application is particularly important.

2.3 Feasibility study

The feasibility of the project is analyzed in this phase, and a business proposal is put forth

with a very general plan for the project and some cost estimates. During system analysis the

feasibility study of the proposed system is to be carried out. This is to ensure that the proposed

system is not a burden to the company.

Three key considerations involved in the feasibility analysis are

Economic Feasibility

Technical Feasibility

Social Feasibility

Economic Feasibility:

This study is carried out to check the economic impact that the system will have on the

organization. The amount of funds that the company can pour into the research and development of the system is limited, and the expenditures must be justified. The developed system was well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

Technical Feasibility:

This study is carried out to check the technical feasibility, that is, the technical

requirements of the system. Any system developed must not place a high demand on the available technical resources; otherwise, high demands would be placed on the client.

Social Feasibility:

This aspect of the study is to check the level of acceptance of the system by the user. This

includes the process of training the user to use the system efficiently. The user must not feel

threatened by the system, instead must accept it as a necessity. The level of acceptance by the

users solely depends on the methods that are employed to educate the user.


3. SYSTEM REQUIREMENTS

As far as computer projects are concerned, two types of requirements are essential.

They are as follows

Hardware requirements

Software requirements

3.1 Hardware Requirements:-

Processor - Pentium –III

Speed - 1.1 GHz

RAM - 256 MB (min)

Hard Disk - 20 GB

Floppy Drive - 1.44 MB

Key Board - Standard Windows Keyboard

Mouse - Two or Three Button Mouse

Monitor - SVGA

3.2 Software Requirements:-

Operating System : Windows 95/98/2000/XP

Application Server : Tomcat 5.0/6.x

Front End : HTML, Java, JSP

Scripts : JavaScript

Server-side Script : Java Server Pages

Database : MySQL

Database Connectivity : JDBC


4. SYSTEM DESIGN

4.1 INTRODUCTION:

The concept of a Meta cloud incorporates design-time and runtime components. This

Meta cloud would abstract away from existing offerings’ technical incompatibilities, thus

mitigating vendor lock-in. It helps users find the right set of cloud services for a particular use

case and supports an application’s initial deployment and runtime migration.

ARCHITECTURE

THE META CLOUD:

The Meta cloud incorporates design-time and runtime components. It abstracts away from existing offerings’ technical incompatibilities, thus mitigating vendor lock-in.

It helps users find the right set of cloud services for a particular use case and supports an

application’s initial deployment and runtime migration.

Fig 4.1 Meta Cloud Architecture


Inside the Meta Cloud:

To some extent, we can realize the Meta cloud based on a combination of existing tools

and concepts, part of which we just examined. Figure 4.1 depicts the Meta cloud’s main

components. We can categorize these components based on whether they’re important mainly

for cloud software engineers during development time or whether they perform tasks during

runtime. We illustrate their interplay using the sports betting portal example.

Meta Cloud API:

The Meta cloud API provides a unified programming interface to abstract from the

differences among provider API implementations. For customers, using this API prevents their

application from being hard-wired to a specific cloud service offering. The Meta cloud API can

build on available cloud provider abstraction APIs, as previously mentioned. Although these deal

mostly with key- value stores and compute services, in principle, all services can be covered that

are abstract enough for more than one provider to offer and whose specific APIs don’t differ too

much, conceptually.
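As an illustration of the kind of unification the Meta cloud API aims for on the storage side, the following Java sketch abstracts a key-value store behind one interface. The interface and the in-memory stand-in are hypothetical; they simply mirror the pattern that abstraction libraries apply to real stores such as Amazon S3.

import java.util.HashMap;
import java.util.Map;

// Hypothetical provider-neutral key-value store interface (illustrative only).
interface KeyValueStore {
    void put(String bucket, String key, byte[] value);
    byte[] get(String bucket, String key);
}

// In a real abstraction library there would be one implementation per provider
// (e.g. one backed by Amazon S3, another by a different object store).
// Here an in-memory map stands in so the example runs on its own.
class InMemoryStore implements KeyValueStore {
    private final Map<String, byte[]> data = new HashMap<>();
    public void put(String bucket, String key, byte[] value) { data.put(bucket + "/" + key, value); }
    public byte[] get(String bucket, String key) { return data.get(bucket + "/" + key); }
}

public class MetaCloudStorageDemo {
    public static void main(String[] args) {
        KeyValueStore store = new InMemoryStore(); // swap implementations without touching callers
        store.put("match-results", "2014-09-12", "3:1".getBytes());
        System.out.println(new String(store.get("match-results", "2014-09-12")));
    }
}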

Resource Templates:

Developers describe the cloud services necessary to run an application using resource

templates. They can specify service types with additional properties, and a graph model

expresses the interrelation and functional dependencies between services. Developers create the

Meta cloud resource templates using a simple domain-specific language (DSL), letting them

concisely specify required resources. Resource definitions are based on a hierarchical

composition model; thus developers can create configurable and reusable template components,

which enable them and their teams to share and reuse common resource templates in different

projects. Using the DSL, developers model their application components and their basic runtime

requirements, such as (provider-independently normalized) CPU, memory, and I/O capacities,

as well as dependencies and weighted communication relations between these components. The

provisioning strategy uses the weighted component relations to determine the application’s

optimal deployment configuration. Moreover, resource templates allow developers to define

constraints based on costs, component proximity, and geographical distribution.
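Complementing the earlier template sketch, the small Java fragment below illustrates what weighted communication relations and constraints might look like in such a template; the weights, constraint kinds, and record names are assumptions, not part of a defined DSL.

import java.util.List;

// Illustrative sketch of constraints and weighted relations in a resource template.
public class TemplateConstraintsDemo {

    record WeightedRelation(String from, String to, double trafficWeight) {}
    record Constraint(String kind, String value) {}

    public static void main(String[] args) {
        // Heavily weighted relations tell the provisioning strategy which components
        // should be placed close together (e.g. in the same provider or region).
        List<WeightedRelation> relations = List.of(
            new WeightedRelation("web-tier", "bets-db", 0.9),
            new WeightedRelation("web-tier", "reporting", 0.1));

        // Constraints restrict the candidate deployments the strategy may rank.
        List<Constraint> constraints = List.of(
            new Constraint("max-monthly-cost-usd", "500"),
            new Constraint("region", "EU"));

        System.out.println(relations);
        System.out.println(constraints);
    }
}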


Migration and Deployment Recipes:

Deployment recipes are an important ingredient for automation in the Meta cloud

infrastructure. Such recipes allow for controlled deployment of the application, including

installing packages, starting required services, managing package and application parameters,

and establishing links between related components. Automation tools such as Opscode Chef

provide an extensive set of functionalities that are directly integrated into the Meta cloud

environment. Migration recipes go one step further and describe how to migrate an application

during runtime — for example, migrate storage functionality from one service provider to

another. Recipes only describe initial deployment and migration; the provisioning strategy and

the Meta cloud proxy execute the actual process using the aforementioned automation tools.
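In practice such recipes would be written for a tool like Opscode Chef; purely to illustrate what a recipe captures, the following Java sketch models a deployment recipe as an ordered list of steps. The step descriptions and the Step type are assumptions, not Chef or Puppet syntax.

import java.util.List;

// Illustrative model of a deployment recipe as an ordered list of steps.
public class RecipeDemo {

    record Step(String description, Runnable action) {}

    public static void main(String[] args) {
        List<Step> deployWebTier = List.of(
            new Step("install application server package", () -> System.out.println("installing tomcat")),
            new Step("copy application archive",           () -> System.out.println("copying sports-portal.war")),
            new Step("configure database connection",      () -> System.out.println("writing jdbc settings")),
            new Step("start service",                      () -> System.out.println("starting tomcat")));

        // The meta cloud proxy would execute such steps on the provisioned resources.
        for (Step step : deployWebTier) {
            System.out.println("==> " + step.description());
            step.action().run();
        }
    }
}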

Meta Cloud Proxy:

The Meta cloud provides proxy objects, which are deployed with the application and run

on the provisioned cloud resources. They serve as mediators between the application and the

cloud provider. These proxies expose the Meta cloud API to the application, transform

application requests into cloud-provider-specific requests, and forward them to the respective

cloud services.

Proxies provide a way to execute deployment and migration recipes triggered by the meta

cloud’s provisioning strategy. Moreover, proxy objects send QoS statistics to the resource

monitoring component running within the Meta cloud. The Meta cloud obtains the data by

intercepting the application’s calls to the underlying cloud services and measuring their

processing time, or by executing short benchmark programs.

Applications can also define and monitor custom QoS metrics that the proxy objects send

to the resource monitoring component to enable advanced, application-specific management

strategies. To avoid high load and computational bottlenecks, communication between proxies

and the Meta cloud is kept at a minimum. Proxies don’t run inside the Meta cloud, and regular

service calls from the application to the proxy aren’t routed through the Meta cloud, either.
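The following is a minimal sketch of the interception idea described above: a proxy wraps a cloud service call, measures its processing time, and hands the measurement to a monitoring callback. All names are hypothetical; the report does not prescribe a concrete proxy implementation.

import java.util.function.BiConsumer;
import java.util.function.Supplier;

// Minimal sketch of a meta cloud proxy that times cloud service calls
// and reports QoS data to the resource monitoring component (hypothetical names).
public class TimingProxy {

    private final BiConsumer<String, Long> monitor; // receives (operation, milliseconds)

    public TimingProxy(BiConsumer<String, Long> monitor) {
        this.monitor = monitor;
    }

    // Wraps an arbitrary provider call, forwards it, and records its latency.
    public <T> T call(String operation, Supplier<T> providerCall) {
        long start = System.nanoTime();
        try {
            return providerCall.get();
        } finally {
            long millis = (System.nanoTime() - start) / 1_000_000;
            monitor.accept(operation, millis); // in practice batched and sent to resource monitoring
        }
    }

    public static void main(String[] args) {
        TimingProxy proxy = new TimingProxy(
            (op, ms) -> System.out.println(op + " took " + ms + " ms"));
        String result = proxy.call("storage.get", () -> "simulated provider response");
        System.out.println(result);
    }
}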


Resource Monitoring:

On an application’s request, the resource monitoring component receives data collected

by Meta cloud proxies about the resources they’re using. The component filters and processes

these data and then stores them on the knowledge base for further processing. This helps

generate comprehensive QoS information about cloud service providers and the particular

services they provide, including response time, availability, and more service-specific quality

statements.
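As a small sketch of the filtering and processing step described above, the fragment below aggregates raw proxy measurements into an average response time and an availability figure per provider service; the record and field names are assumptions for illustration.

import java.util.List;

// Illustrative aggregation of proxy measurements into QoS figures for the knowledge base.
public class QosAggregator {

    record Measurement(String service, long responseTimeMs, boolean succeeded) {}
    record QosSummary(String service, double avgResponseTimeMs, double availability) {}

    static QosSummary summarize(String service, List<Measurement> samples) {
        double avg = samples.stream().mapToLong(Measurement::responseTimeMs).average().orElse(0);
        double availability = samples.stream().filter(Measurement::succeeded).count()
                / (double) samples.size();
        return new QosSummary(service, avg, availability);
    }

    public static void main(String[] args) {
        List<Measurement> samples = List.of(
            new Measurement("aws.s3.get", 42, true),
            new Measurement("aws.s3.get", 55, true),
            new Measurement("aws.s3.get", 900, false));
        // Average response time is about 332 ms and availability 2/3 for these samples.
        System.out.println(summarize("aws.s3.get", samples));
    }
}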

Provisioning Strategy:

The provisioning strategy component primarily matches an application’s cloud service

requirements to actual cloud service providers. It finds and ranks cloud services based on data in

the knowledge base. The initial deployment decision is based on the resource templates,

specifying the resource requirements of an application, together with QoS and pricing

information about service providers.

The result is a list of possible cloud service combinations ranked according to expected

QoS and costs. At runtime, the component can reason about whether migrating a resource to

another resource provider is beneficial based on new insights into the application’s behavior and

updated cloud provider QoS or pricing data. Reasoning about migrating also involves calculating

migration costs. Decisions about the provisioning strategy result in the component executing

customer-defined deployment or migration scripts.
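A minimal sketch of such a ranking and migration check is given below; the scoring formula, the equal weighting of QoS and cost, and the six-month planning horizon are assumptions chosen only to illustrate the trade-off, not a prescribed algorithm.

import java.util.Comparator;
import java.util.List;

// Illustrative ranking of provider offers and a simple migration check (assumed formula).
public class ProvisioningStrategyDemo {

    record Offer(String provider, double expectedResponseMs, double monthlyCostUsd) {}

    // Lower is better: weigh normalized response time and cost equally (assumption).
    static double score(Offer o) {
        return 0.5 * (o.expectedResponseMs() / 100.0) + 0.5 * (o.monthlyCostUsd() / 100.0);
    }

    public static void main(String[] args) {
        List<Offer> ranked = List.of(
                new Offer("ProviderA", 80, 120),
                new Offer("ProviderB", 120, 90),
                new Offer("ProviderC", 60, 200)).stream()
            .sorted(Comparator.comparingDouble(ProvisioningStrategyDemo::score))
            .toList();
        System.out.println("Ranking: " + ranked);

        // Migrate only if the monthly saving outweighs the one-time migration cost
        // within a given planning horizon (here: 6 months, an assumed policy).
        double currentCost = 120, newCost = 90, migrationCost = 150;
        boolean migrate = (currentCost - newCost) * 6 > migrationCost;
        System.out.println("Migrate to cheaper provider? " + migrate); // true: 180 > 150
    }
}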

Knowledge Base:

The knowledge base stores data about cloud provider services, their pricing and QoS, and

information necessary to estimate migration costs. It also stores customer-provided resource

templates and migration or deployment recipes. The knowledge base indicates which cloud

providers are eligible for a certain customer.

These usually comprise all providers the customer has an account with and providers that

offer possibilities for creating (sub) accounts on the fly. Several information sources contribute


to the knowledge base: Meta cloud proxies regularly send data about application behavior and

cloud service QoS.

Users can add cloud service providers’ pricing and capabilities manually or use crawling

techniques that can get this information automatically.

4.2 DESIGN OBJECTIVES:

INPUT DESIGN

The input design is the link between the information system and the user. It comprises developing specifications and procedures for data preparation, and the steps necessary to put transaction data into a usable form for processing. This can be achieved by having the computer read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy.

Input Design considered the following things:

What data should be given as input?

How the data should be arranged or coded?

The dialog to guide the operating personnel in providing input.

Methods for preparing input validations and steps to follow when error occur.

OBJECTIVES:

1. Input Design is the process of converting a user-oriented description of the input into a

computer-based system. This design is important to avoid errors in the data input process

and show the correct direction to the management for getting correct information from

the computerized system.

2. It is achieved by creating user-friendly screens for the data entry to handle large volume

of data. The goal of designing input is to make data entry easier and to be free from


errors. The data entry screen is designed in such a way that all the data manipulations can

be performed. It also provides record viewing facilities.

3. When the data is entered it will check for its validity. Data can be entered with the help of

screens. Appropriate messages are provided as and when needed so that the user is not left in a maze. Thus the objective of input design is to create an input layout that is easy to follow.

OUTPUT DESIGN

A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, results of processing are communicated to the users and to other systems through outputs. In output design it is determined how the information is to be displayed for immediate need, and also the hard-copy output. It is the most important and direct source of information to the user. Efficient and intelligent output design improves the system’s relationship with the user and helps decision-making. Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy to use effectively. When analysts design computer output, they should identify the specific output that is needed to meet the requirements, and then:

1. Select methods for presenting information.

2. Create document, report, or other formats that contain information produced by

the system.

The output form of an information system should accomplish one or more of the following

objectives.

Convey information about past activities, current status, or projections of the future.

Signal important events, opportunities, problems, or warnings.

Trigger an action.

Confirm an action.


4.3 UNIFIED MODELING LANGUAGE (UML):

An Overview of UML:

UML is a graphical language which provides a vocabulary and set of semantics and rules.

The UML focuses on the conceptual and physical representation of the system. It captures the

decisions and understandings about systems that must be constructed. It is used to understand,

design, configure, maintain and control information about the systems.

Definition:

UML is a general purpose visual modeling language that is used to specify, visualize,

construct and document the artifacts of the software system.

UML is a language:

It provides a vocabulary and rules for communication, and functions on conceptual and physical representations. So it is a modeling language.

UML Specification:

Specification means building models that are precise, unambiguous and complete. In

particular, the UML addresses the specification of all the important analysis, design and

implementation decisions that must be made in developing and displaying a software intensive

system.

UML Visualization:

The UML includes both graphical and textual representations. It makes it easy to visualize the system and promotes better understanding.

UML Constructing:

UML models can be directly connected to a variety of programming languages, and the UML is sufficiently expressive and free from ambiguity to permit the direct execution of models.


UML Documenting:

UML provides a variety of documents in addition to raw executable code. A UML system is represented using five different views that describe the system from distinctly different perspectives. Each view is defined by a set of diagrams, as follows.

User model view

This view represents the system from the user’s perspective. The analysis representation describes a usage scenario from the end-user’s perspective.

Structural model view

In this model, the data and functionality are viewed from inside the system. This view models the static structure of the system.

Behavioral model view

It represents the dynamic (behavioral) aspects of the system, depicting the interactions among the various structural elements described in the user model and structural model views.

Model Implementation view

In this view, the structural and behavioral aspects of the system are represented as they are to be built.

Environmental model view

In this view, the structural and behavioral aspects of the environment in which the system is to be implemented are represented.

Rules of UML:

The UML has semantic rules for


Names: what we can call things, relationships, and diagrams.

Scope: the context that gives specific meaning to a name.

Visibility: how those names can be seen and used by others.

Integrity: how things properly and consistently relate to one another.

Execution: what it means to run or simulate a dynamic model.

Goals of UML:

The primary goals in the design of the UML were:

Provides users with a ready-to-use, expressive visual modeling language so they can

develop and exchange meaningful models.

Provide extensibility and specialization mechanisms to extend the core concepts.

Be independent of particular programming languages and development processes.

Provide a formal basis for understanding the modeling language.

Encourage the growth of the OO tools market.

Support higher-level development concepts such as collaborations, frameworks, patterns

and components

Integrate best practices.

Uses of UML:

The UML is intended primarily for software intensive systems. It has been used

effectively for such domains as

Enterprise information system

A Conceptual Model of UML:

The three major elements of UML are

The UML’s basic building blocks.

The rules that dictate how those building blocks may be put together.

Some common mechanisms that apply throughout the UML.


Basic Building Blocks of UML:

The vocabulary of UML encompasses three kinds of building blocks:

Things

Relationships

Diagram

Things are the abstractions that are first-class citizens in a model.

Relationships tie these things together.

Diagrams group the interesting collection of things.

Things in UML:

There are four kinds of things in the UML

1. Structural things

2. Behavioral things

3. Grouping things

4. Annotational things

Structural things:

Structural things are the nouns of the UML models. These are mostly static parts of the

model, representing elements that are either conceptual or physical. In all, there are seven kinds

of Structural things.

Class:

Graphically a class is rendered as a rectangle, usually including its name, attributes and

operations, as shown below. Class diagrams are the most common diagrams found in modeling

object-oriented systems. A class diagram is a diagram that shows a set of classes, interfaces and

collaborations and their relationships. Graphically a class diagram is a collection of vertices and


arcs. A class is a description of a set of objects that share the same attributes, operations,

relationships, and semantics. A class implements one or more interfaces.

(Figure: a class “Window” with attributes origin and size and operations open(), close(), and display(), drawn as a rectangle.)

Interface:

An Interface is a collection of operation that specifies a service of a class or component.

An interface describes the externally visible behavior of that element. Graphically, the interface

is rendered as a circle together with its name.

Use case:

A use case is a description of a set of sequences of actions that a system performs that yields an observable result of value to a particular actor. Graphically, a use case is rendered as an ellipse with solid lines, usually including only its name, as shown below.

(Figure examples: use case “Place Order” drawn as an ellipse; collaboration “Chain of Responsibility”; interface “ISpelling” drawn as a circle.)

Component:

A component is a physical and replaceable part of a system that conforms to and provides

the realization of a set of interfaces. Graphically, a component is rendered as a rectangle with

tabs, usually including only its name, as shown below.

orderform.java

Node:

A Node is a physical element that exists at run time and represents a computational resource,

generally having at least some memory and often, processing capability. Graphically, a node is

rendered as a cube, usually including only its name, as shown below.

server

Behavioral things:

Behavioral Things are the dynamic parts of UML models. These are the verbs of a model,

representing behavior over time and space.

Interaction:

An interaction is a behavior that comprises a set of messages exchanged among a set of

objects within a particular context to accomplish a specific purpose. Graphically, a message is

rendered as a direct line, almost always including the name of its operation, as shown below.

Display


State Machine:

A state machine is a behavior that specifies the sequence of states an object or an

interaction goes through during its lifetime in response to events, together with its responses to

those events. Graphically, a state is rendered as a rounded rectangle usually including its name

and its sub-states, if any, as shown below.

Grouping things:

Grouping things are the organizational parts of the UML models. These are the boxes into

which a model can be decomposed.

Package:

A package is a general-purpose mechanism for organizing elements into groups.

Business Rules

Annotational things:

Annotational things are the explanatory parts of the UML models. It doesn’t correspond

to any object but serves as comment.

Relationships:

Relationships tie the things together. Relationships in the UML are

1. Dependency

2. Association


3. Generalization

4. Realization

Dependency:

Dependency is a semantic relationship between two things in which a change to one thing

may affect the semantics of the other thing.

- - - - - - >

Generalization:

Generalization is a specialization/generalization relationship in which objects of the

specialized element (child) are substitutable for objects of the generalized element (parent).

-----------

Association:

An Association is a structural relationship that describes a set of links, A link being a

connection among objects.

Aggregation is a special kind of association, representing a structural relationship

between a whole and its parts.

______________

Realization:

It is the relationship between classifiers, where one classifier specifies a contract and the other guarantees to carry it out.

Data flow diagram

The Data Flow Diagram is also called a bubble chart.

It is a simple graphical formalism that can be used to represent a system in terms of the

input data to the system, various processing carried out on these data, and the output data generated by the system.


Data flow diagram:

• A data flow diagram is a graphical representation of the "flow" of data through an

information system, modeling its process aspects.

FIG.NO 4.3.1 DATAFLOW DIAGRAM

4.4 DESIGN OF UML DIAGRAMS

A diagram is the graphical presentation of set of elements, most often rendered as a

connected graph of vertices (things) and arcs (relationships).

There are two types of diagrams. They are:


Structural Diagrams

Behavioral Diagram

Structural diagrams:

The UML‘s four structural diagrams exist to visualize, specify, construct and document

the static aspects of a system. Structural diagrams consist of

1. Class diagram

2. Object diagram

3. Component diagram

4. Deployment diagram

Behavioral diagrams:

The UML’s five behavioral diagrams are used to visualize, specify, construct and

document the dynamic aspects of a system. The UML’s behavioral diagrams are roughly

organized around the major ways one can model the dynamics of a system.

Behavioral diagrams consist of

1. Use case diagram

2. State chart diagram

3. Sequence diagram

4. Collaboration diagram

5. Activity diagram

CLASS DIAGRAM:

A class diagram is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, and the relationships between the classes.

Class


Interface: Interface is a collection of operations that specify a service of a class or component.

An interface describes the externally visible behavior of that element. An interface might

represent the complete behavior of a class or component.

Interface

Collaboration: A collaboration defines an interaction and is a society of elements that work together to provide some cooperative behavior. Collaborations therefore have structural as well as behavioral dimensions. Collaborations represent the implementation of patterns that make up a system.

Class diagrams are the most common diagrams found in modeling object-oriented systems. A class diagram shows a set of classes, interfaces, and collaborations and their relationships. Graphically, a class diagram is a collection of vertices and arcs. A class is a description of a set of objects that share the same attributes, operations, relationships, and semantics; graphically, a class is rendered as a rectangle.

The most important parts of a class are its name, attributes, and operations. Every class must have a name that distinguishes it from other classes. A name is a textual string; that name alone is known as a simple name. An attribute represents some property of the thing being modeled that is shared by all objects of that class. A class may have any number of attributes or no attributes at all. An operation is the implementation of a service that can be requested from any object of the class.

A responsibility is a contract or obligation of a class. When you create a class, you are making a statement that all objects of that class have the same kind of state and the same kind of behavior. At this level, a class's responsibilities are stated.


FIG. NO 4.4.1 CLASS DIAGRAM

USE CASE DIAGRAM:

Use case diagrams are one of the five diagrams in the UML for modeling the dynamic

aspects of systems (activity diagrams, sequence diagrams, state chart diagrams and collaboration

diagrams are the four other kinds of diagrams in the UML for modeling the dynamic aspects of

systems). Use case diagrams are central to modeling the behavior of the system, a sub-system, or

a class. Each one shows a set of use cases and actors and their relationships.

A use case diagram shows a set of use cases and actors and their relationships. Use case diagrams address the static use case view of a system. These diagrams are especially important in organizing and modeling the behavior of a system. Use case diagrams consist of use cases, actors, and the relationships between them.


FIG. NO 4.4.2 USECASE DIAGRAM

USECASE DESCRIPTION:

Table 1: Registration

Use case: Registration
Actors: User, Owner, CSP, TTP
Description: This use case is used by the actors to register into the system.


Table 2: Login

Use case: Login
Actors: User, Owner, CSP, TTP
Description: This use case is used to maintain the login.

Table 3: View file

Use case: View/Verify
Actors: User, Owner, CSP, TTP
Purpose: This use case is used to view a file.

Table 4: See Alert

Use case: See Alert
Actor: Owner
Purpose: This use case is used by the owner to see an alert.


Table 5: Send Mail

Use case: Send mail
Actor: Owner
Description: This use case is used to send mail to the users.

Table 6: Migrate Cloud

Use case: Migrate cloud
Actor: Owner
Description: This use case is used to migrate from one cloud to another.

Table 7: Upload File

Use case: Upload file
Actors: Owner, CSP, TTP


Description: This use case is used to upload a file.

Table 8: Download file

Use case: Download file
Actor: User
Description: This use case is used to download a file.

Table 9: Upload Into Cloud

Use case: Upload into cloud
Actor: CSP
Description: This use case is used to upload the file into the cloud.

INTERACTION DIAGRAM:


An Interaction diagram shows an interaction, consisting of a set of objects and their

relationships, including the messages that may be dispatched among them. Interaction diagrams

are used for modeling the dynamic aspects of the system.

A sequence diagram is an interaction diagram that emphasizes the time ordering of the messages. Graphically, a sequence diagram is a table that shows objects arranged along the X-axis and messages, ordered in increasing time, along the Y-axis.

Sequence diagrams establish the role of objects and provide essential information to

determine class responsibilities and interfaces.

Contents: Interaction diagrams commonly contain objects, links and messages. Like all other

diagrams, interaction diagrams may contain Notes and constraints.

SEQUENCE DIAGRAM:

The sequence diagram is an interaction diagram that emphasizes the time ordering of messages, used for modeling a real-time system. Graphically, a sequence diagram is a table that shows objects arranged along the X axis and messages, ordered in increasing time, along the Y axis. A sequence diagram consists of objects, links, lifelines, focus of control, and messages.

Object: objects are typically named or anonymous instances of a class, but may also represent instances of other things such as components, collaborations, and nodes.

Link: a link is a semantic connection among objects; that is, an instance of an association is called a link.

Lifeline: a lifeline is a vertical dashed line that represents the lifetime of an object.

Focus of control: a focus of control is a tall, thin rectangle that shows the period of time during which an object is performing an action.

Message: a message is a specification of a communication between objects that conveys information with the expectation that activity will ensue.

INTERACTION DIAGRAM-1:


FIG.NO 4.4.3: SEQUENCE DIAGRAM

INTERACTION DIAGRAM-2:


FIG.NO 4.4.4: SEQUENCE DIAGRAM

COLLABORATION DIAGRAM-1:


Collaboration diagrams are also interaction diagrams. They convey the same information

as sequence diagrams, but they focus on object roles instead of the times that messages are sent.

In a collaboration diagram, object roles are the vertices and messages are the connecting links.

Each message in a collaboration diagram has a sequence number. The top-level message

is numbered 1. Messages at the same level (sent during the same call) have the same decimal

prefix but suffixes of 1, 2, etc. according to when they occur.

FIG.NO 4.4.5: COLLABORATION DIAGRAM

COLLABORATION DIAGRAM-2:


FIG.NO 4.4.6: COLLABORATION DIAGRAM

ACTIVITY DIAGRAM:


An activity diagram is essentially a flowchart showing the flow of control from activity to activity. Activity diagrams are used to model the dynamic aspects of a system. They can also be used to model the flow of an object as it moves from state to state at different points in the flow of control.

Activity diagrams are closely related to state chart diagrams. The main difference between the two is that state chart diagrams are state-centric whereas activity diagrams are activity-centric. Once an activity is completed, the flow of control moves to the next activity or state through a transition.

FIG.NO 4.4.7 : ACTIVITY DIAGRAM

State Chart Diagram:


A state chart diagram shows a state machine consisting of states, transitions, events, and activities. State chart diagrams address the dynamic view of a system. They are especially important in modeling the behavior of an interface, class, or collaboration, and they emphasize the event-ordered behavior of an object.

FIG.NO 4.4.8 : STATE CHART DIAGRAM

4.5 MODULE DESCRIPTION


Modules:

1. Registration

2. Login

3. File Upload

4. Migrate Cloud

5. Send Mail

Modules Description:

4.5.1 Registration:

In this module, a User, Owner, TTP (trusted third party), or CSP (cloud service provider) has to register first; only then can he/she access the database.

4.5.2 Login:

In this module, any of the above-mentioned persons can log in by giving their username and password.

4.5.3 File Upload:

In this module the Owner uploads a file (along with its metadata) into the cloud. Before it is uploaded, the file is subjected to validation by the TTP. The TTP then sends the file to the CSP, and the CSP decrypts the file using the file key.

If the CSP tries to modify the data of the file, he cannot do so; if he makes an attempt, an alert is sent to the Owner of the file, which results in cloud migration.
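As an illustration only (not the project's actual code), the following minimal Java sketch shows one way such tampering could be detected: a hash recorded at upload time is compared with a hash recomputed over the stored file, and a mismatch triggers an alert to the owner. The class and method names here are hypothetical.

import java.security.MessageDigest;
import java.util.Arrays;

public class IntegrityCheck {

    // Recompute the SHA-256 hash of the stored file and compare it with the hash
    // recorded when the owner uploaded the file.
    static boolean isUnmodified(byte[] fileContent, byte[] storedHash) throws Exception {
        byte[] currentHash = MessageDigest.getInstance("SHA-256").digest(fileContent);
        return Arrays.equals(currentHash, storedHash);
    }

    // If the hashes differ, the file was changed after upload; alert the owner.
    static void checkAndAlert(byte[] fileContent, byte[] storedHash, String ownerMail) throws Exception {
        if (!isUnmodified(fileContent, storedHash)) {
            System.out.println("ALERT: file was modified, notify owner at " + ownerMail);
        }
    }
}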

4.5.4 Migrate Cloud:


The advantage of the Meta cloud is that if we are not satisfied with one CSP, we can switch over to the next cloud, so that we are effectively using two clouds at a time. In the second cloud, the provider cannot modify or corrupt the real data; if an attempt is made, it will fail.

4.5.5 Send Mail:

The mail is sent to the end user along with the file decryption key, so that the end user can download the file. The Owner sends this mail to the users who registered earlier, once the file has been uploaded into the correct cloud.
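A minimal sketch of how such a mail might be sent with the JavaMail API is given below; the report does not state which mail library is actually used, so the SMTP host, addresses, and key value are placeholders.

import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class SendKeyMail {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "localhost");            // placeholder SMTP server
        Session session = Session.getDefaultInstance(props);

        MimeMessage msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("owner@example.com"));
        msg.addRecipient(Message.RecipientType.TO, new InternetAddress("user@example.com"));
        msg.setSubject("File decryption key");
        msg.setText("Use this key to download the file: 12345");   // placeholder key

        Transport.send(msg);                                  // hand the message to the SMTP server
    }
}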

Working Of Cloud Computing:

A cloud computing system can be divided into two sections: the front end and the back

end. They connect to each other through a network, usually the Internet. The front end is the side

the computer user, or client, sees. The back end is the "cloud" section of the system. On the

back end there are various computers, servers and data storage systems that create the "cloud" of

computing services.

A central server administers the system, monitoring traffic and client demands to ensure

everything runs smoothly. It follows a set of rules called protocols.

Servers and remote computers do most of the work and store the data.

Implementation is the stage of the project when the theoretical design is turned out into a

working system. Thus it can be considered to be the most critical stage in achieving a successful

new system and in giving the user confidence that the new system will work and be effective.

The implementation stage involves careful planning, investigation of the existing system

and its constraints on implementation, the design of methods to achieve changeover, and the

evaluation of changeover methods.

The emergence of yet more cloud offerings from a multitude of service providers calls for

a Meta cloud to smoothen the edges of the jagged cloud landscape. This Meta cloud could solve

the vendor lock-in problems that current public and hybrid cloud users face.

5. IMPLEMENTATION


5.1 ABOUT SOFTWARE:

5.1.1 Frontend software

Java Technology

Java technology is both a programming language and a platform.

The Java Programming Language

The Java programming language is a high-level language that can be characterized by all

of the following buzzwords:

Simple

Architecture neutral

Object oriented

Portable

Distributed

High performance

Multithreaded

Robust

Dynamic

Secure

With most programming languages, you either compile or interpret a program so that you

can run it on your computer. The Java programming language is unusual in that a program is

both compiled and interpreted. With the compiler, first you translate a program into an

intermediate language called Java byte codes —the platform-independent codes interpreted by

the interpreter on the Java platform. The interpreter parses and runs each Java byte code

instruction on the computer. Compilation happens just once; interpretation occurs each time the

program is executed. The following figure illustrates how this works.

You can think of Java byte codes as the machine code instructions for the Java Virtual

Machine (Java VM). Every Java interpreter, whether it’s a development tool or a Web browser


that can run applets, is an implementation of the Java VM. Java byte codes help make “write

once, run anywhere” possible. You can compile your program into byte codes on any platform

that has a Java compiler. The byte codes can then be run on any implementation of the Java VM.

That means that as long as a computer has a Java VM, the same program written in the Java

programming language can run on Windows 2000, a Solaris workstation, or on an iMac.
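For instance, a trivial program such as the following is compiled once into byte codes and can then be run, unchanged, on any of those platforms (the commands in the comments assume a standard JDK installation).

// Compile once:  javac HelloWorld.java   (produces HelloWorld.class, the byte codes)
// Run anywhere:  java HelloWorld         (the Java VM interprets the byte codes)
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello from the Java platform");
    }
}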

The Java Platform

A platform is the hardware or software environment in which a program runs. We’ve

already mentioned some of the most popular platforms like Windows 2000, Linux, Solaris, and

MacOS. Most platforms can be described as a combination of the operating system and

hardware. The Java platform differs from most other platforms in that it’s a software-only

platform that runs on top of other hardware-based platforms.

The Java platform has two components:

The Java Virtual Machine (Java VM)

The Java Application Programming Interface (Java API)

The Java API is a large collection of ready-made software components that provide many

useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into

libraries of related classes and interfaces; these libraries are known as packages. The next

section, "What Can Java Technology Do?", highlights what functionality some of the packages in

the Java API provide.

The following figure depicts a program that’s running on the Java platform. As the figure

shows, the Java API and the virtual machine insulate the program from the hardware.


Native code is code that, after compilation, runs on a specific

hardware platform. As a platform-independent environment, the Java platform can be a bit

slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte

code compilers can bring performance close to that of native code without threatening

portability.

What Can Java Technology Do?

The most common types of programs written in the Java programming language are

applets and applications. If you’ve surfed the Web, you’re probably already familiar with

applets. An applet is a program that adheres to certain conventions that allow it to run within a

Java-enabled browser.

However, the Java programming language is not just for writing cute, entertaining applets

for the Web. The general-purpose, high-level Java programming language is also a powerful

software platform. Using the generous API, you can write many types of programs.

An application is a standalone program that runs directly on the Java platform. A special

kind of application known as a server serves and supports clients on a network.

Examples of servers are Web servers, proxy servers, mail servers, and print servers.

Another specialized program is a servlet. A servlet can almost be thought of as an applet that

runs on the server side. Java Servlets are a popular choice for building interactive web

applications, replacing the use of CGI scripts. Servlets are similar to applets in that they are

runtime extensions of applications. Instead of working in browsers, though, servlets run within

Java Web servers, configuring or tailoring the server.
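As a generic illustration (not the project's own servlet), a minimal servlet based on the standard javax.servlet API looks like the sketch below; the container, such as Tomcat, calls doGet for each HTTP GET request.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {
    // Invoked by the web server (servlet container) for each GET request.
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>Hello from a servlet</body></html>");
    }
}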

How does the API support all these kinds of programs? It does so with packages of

software components that provide a wide range of functionality. Every full implementation of

the Java platform gives you the following features:

The essentials:


Objects, strings, threads, numbers, input and output, data structures, system properties,

date and time, and so on.

Applets:

The set of conventions used by applets.

Networking:

URLs, TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) sockets,

and IP (Internet Protocol) addresses.

Internationalization:

Help for writing programs that can be localized for users worldwide. Programs can

automatically adapt to specific locales and be displayed in the appropriate language.

Security:

Both low level and high level, including electronic signatures, public and private key

management, access control, and certificates.

Software components:

Known as JavaBeansTM, these can plug into existing component architectures.

Object serialization:

Allows lightweight persistence and communication via Remote Method Invocation

(RMI).

Java Database Connectivity (JDBCTM):

Provides uniform access to a wide range of relational databases.

The Java platform also has APIs for 2D and 3D graphics, accessibility, servers, collaboration,

telephony, speech, animation, and more. The following figure depicts what is included in the


Java 2 SDK.

How Will Java Technology Change My Life?

We can’t promise you fame, fortune, or even a job if you learn the Java programming

language. Still, it is likely to make your programs better and requires less effort than other

languages. We believe that Java technology will help you do the following:

Get started quickly:

Although the Java programming language is a powerful object-oriented language, it’s

easy to learn, especially for programmers already familiar with C or C++.

Write less code:

Comparisons of program metrics (class counts, method counts, and so on) suggest that a

program written in the Java programming language can be four times smaller than the same

program in C++.

Write better code:

The Java programming language encourages good coding practices, and its garbage

collection helps you avoid memory leaks. Its object orientation, its JavaBeans component

architecture, and its wide-ranging, easily extendible API let you reuse other people’s tested code

and introduce fewer bugs.


Develop programs more quickly:

Your development time may be as much as twice as fast versus writing the same

program in C++. Why? You write fewer lines of code and it is a simpler programming language

than C++.

Avoid platform dependencies with 100% Pure Java:

You can keep your program portable by avoiding the use of libraries written in other

languages. The 100% Pure Java TM Product Certification Program has a repository of historical

process manuals, white papers, brochures, and similar materials online.

Write once, run anywhere:

Because 100% Pure Java programs are compiled into machine-independent byte codes,

they run consistently on any Java platform.

Distribute software more easily:

You can upgrade applets easily from a central server. Applets take advantage of the

feature of allowing new classes to be loaded “on the fly,” without recompiling the entire

program.

ODBC

Microsoft Open Database Connectivity (ODBC) is a standard programming interface for

application developers and database systems providers. Before ODBC became a de facto

standard for Windows programs to interface with database systems, programmers had to use

proprietary languages for each database they wanted to connect to. Now, ODBC has made the

choice of the database system almost irrelevant from a coding perspective, which is as it should

be. Application developers have much more important things to worry about than the syntax that

is needed to port their program from one database to another when business needs suddenly

change.

Through the ODBC Administrator in Control Panel, you can specify the particular

database that is associated with a data source that an ODBC application program is written to


use. Think of an ODBC data source as a door with a name on it. Each door will lead you to a

particular database. For example, the data source named Sales Figures might be a SQL Server

database, whereas the Accounts Payable data source could refer to an Access database. The

physical database referred to by a data source can reside anywhere on the LAN.

The ODBC system files are not installed on your system by Windows 95. Rather, they

are installed when you set up a separate database application, such as SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a file called ODBCINST.DLL. It is also possible to administer your ODBC data sources through a stand-alone program called ODBCADM.EXE.

From a programming perspective, the beauty of ODBC is that the application can be

written to use the same set of function calls to interface with any data source, regardless of the

database vendor. The source code of the application doesn’t change whether it talks to Oracle or

SQL Server. We only mention these two as an example. There are ODBC drivers available for

several dozen popular database systems. Even Excel spreadsheets and plain text files can be

turned into data sources. The operating system uses the Registry information written by ODBC

Administrator to determine which low-level ODBC drivers are needed to talk to the data source

(such as the interface to Oracle or SQL Server). The loading of the ODBC drivers is transparent

to the ODBC application program. In a client/server environment, the ODBC API even handles

many of the network issues for the application programmer.
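As a small, hedged illustration of this vendor independence, the sketch below connects to the "Sales Figures" data source mentioned earlier (written here without the space) through the legacy JDBC-ODBC bridge driver that shipped with older JDKs; only the data source name would change if the DSN pointed at a different database.

import java.sql.Connection;
import java.sql.DriverManager;

public class OdbcBridgeExample {
    public static void main(String[] args) throws Exception {
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");               // load the bridge driver
        Connection con = DriverManager.getConnection("jdbc:odbc:SalesFigures");
        System.out.println("Connected to: " + con.getMetaData().getDatabaseProductName());
        con.close();
    }
}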

The advantages of this scheme are so numerous that you are probably thinking there must

be some catch. The only disadvantage of ODBC is that it isn’t as efficient as talking directly to

the native database interface. ODBC has had many detractors make the charge that it is too slow.

Microsoft has always claimed that the critical factor in performance is the quality of the driver

software that is used. In our humble opinion, this is true. The availability of good ODBC drivers

has improved a great deal recently. And anyway, the criticism about performance is somewhat

analogous to those who said that compilers would never match the speed of pure assembly

language. Maybe not, but the compiler (or ODBC) gives you the opportunity to write cleaner

programs, which means you finish sooner. Meanwhile, computers get faster every year.

JDBC


In an effort to set an independent database standard API for Java, Sun Microsystems

developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access

mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface

is achieved through the use of “plug-in” database connectivity modules, or drivers. If a database

vendor wishes to have JDBC support, he or she must provide the driver for each platform that the

database and Java run on.

To gain a wider acceptance of JDBC, Sun based JDBC’s framework on ODBC. As you

discovered earlier in this chapter, ODBC has widespread support on a variety of platforms.

Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market much faster than

developing a completely new connectivity solution.

JDBC was announced in March of 1996. It was released for a 90 day public review that

ended June 8, 1996. Because of user input, the final JDBC v1.0 specification was released soon

after.

The remainder of this section will cover enough information about JDBC for you to know

what it is about and how to use it effectively. This is by no means a complete overview of JDBC.

That would fill an entire book.
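To make the discussion concrete, here is a minimal, hedged JDBC sketch: it obtains a connection, runs a query, and walks the result set. The connection URL, credentials, and table and column names are placeholders, not the project's actual schema.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; any database with a JDBC driver can be used.
        Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://localhost:1433;databaseName=metacloud", "user", "password");
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT filename, filesize FROM files");
        while (rs.next()) {
            System.out.println(rs.getString("filename") + "  " + rs.getInt("filesize"));
        }
        rs.close();
        stmt.close();
        con.close();
    }
}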

JDBC Goals

Few software packages are designed without goals in mind, and JDBC is no exception. Its goals, in conjunction with early

reviewer feedback, have finalized the JDBC class library into a solid framework for building

database applications in Java.

The goals that were set for JDBC are important. They will give you some insight as to why

certain classes and functionalities behave the way they do. The main design goals for JDBC are

as follows:

SQL Level API

The designers felt that their main goal was to define a SQL interface for Java. Although not

the lowest database interface level possible, it is at a low enough level for higher-level tools and

APIs to be created. Conversely, it is at a high enough level for application programmers to use it

confidently. Attaining this goal allows for future tool vendors to “generate” JDBC code and to

hide many of JDBC’s complexities from the end user.


SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In an effort to

support a wide variety of vendors, JDBC will allow any query statement to be passed through it

to the underlying database driver. This allows the connectivity module to handle non-standard

functionality in a manner that is suitable for its users.

JDBC must be implementable on top of common database interfaces

The JDBC SQL API must “sit” on top of other common SQL level APIs. This goal allows

JDBC to use existing ODBC level drivers by the use of a software interface. This interface

would translate JDBC calls to ODBC and vice versa.

Provide a Java interface that is consistent with the rest of the Java system

Because of Java’s acceptance in the user community thus far, the designers feel that they

should not stray from the current design of the core Java system.

Keep it simple

This goal probably appears in all software design goal listings. JDBC is no exception. Sun

felt that the design of JDBC should be very simple, allowing for only one method of completing

a task per mechanism.

Allowing duplicate functionality only serves to confuse the users of the API.

Use strong, static typing wherever possible

Strong typing allows for more error checking to be done at compile time; also, fewer errors

appear at runtime.

Java is also unusual in that each Java program is both compiled and interpreted.

With a compiler, you translate a Java program into an intermediate language called Java byte codes; this platform-independent code is then passed to and run on the


computer. Compilation happens just once; interpretation occurs each time the program

is executed. The figure illustrates how this works.

You can think of Java byte codes as the machine code instructions for the Java

Virtual Machine (Java VM). Every Java interpreter, whether it’s a Java development

tool or a Web browser that can run Java applets, is an implementation of the Java VM.

The Java VM can also be implemented in hardware.

Java byte codes help make "write once, run anywhere" possible. You can compile your Java program into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. For example, the same Java program can run on Windows NT, Solaris, and Macintosh.

Networking

TCP/IP stack

The TCP/IP stack is shorter than the OSI stack.


TCP is a connection-oriented protocol; UDP (User Datagram Protocol) is a

connectionless protocol.

IP datagrams

The IP layer provides a connectionless and unreliable delivery system. It considers each datagram independently of the others. Any association between datagrams must be supplied by the higher layers. The IP layer supplies a checksum that includes its own header. The header includes the source and destination addresses. The IP layer handles routing through an Internet. It is also responsible for breaking up large datagrams into smaller ones for transmission and reassembling them at the other end.

UDP

UDP is also connectionless and unreliable. What it adds to IP is a checksum for the

contents of the datagram and port numbers. These are used to give a client/server model - see

later.
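A short Java sketch of this connectionless style is shown below: a datagram is simply addressed and sent, with no connection set-up and no delivery guarantee. The host and port are arbitrary placeholders.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpSendExample {
    public static void main(String[] args) throws Exception {
        byte[] data = "hello".getBytes();
        DatagramSocket socket = new DatagramSocket();        // no connection is established
        DatagramPacket packet = new DatagramPacket(
                data, data.length, InetAddress.getByName("localhost"), 9876);
        socket.send(packet);   // best-effort delivery: no acknowledgement, no retransmission
        socket.close();
    }
}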

TCP

TCP supplies logic to give a reliable connection-oriented protocol above IP. It provides a

virtual circuit that two processes can use to communicate.


Internet addresses

In order to use a service, you must be able to find it. The Internet uses an address scheme

for machines so that they can be located. The address is a 32 bit integer which gives the IP

address.

Network address

Class A uses 8 bits for the network address, with 24 bits left over for other addressing. Class B uses 16-bit network addressing. Class C uses 24-bit network addressing, and Class D addresses are reserved for multicast.

Subnet address

Internally, a network can be divided into sub networks. For example, one sub network might use 10-bit addressing, allowing 1,024 different hosts.

Host address

8 bits are finally used for host addresses within the subnet. This places a limit of 256 machines that can be on the subnet.

Total address

The 32 bit address is usually written as 4 integers separated by dots.
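For example, the 32-bit value 0xC0A80001 corresponds to 192.168.0.1; the small sketch below shows the conversion.

public class DottedQuad {
    // Convert a 32-bit IP address (held in a long to avoid sign problems) to dotted form.
    static String toDotted(long address) {
        return ((address >> 24) & 0xFF) + "." + ((address >> 16) & 0xFF) + "."
             + ((address >> 8) & 0xFF) + "." + (address & 0xFF);
    }

    public static void main(String[] args) {
        System.out.println(toDotted(0xC0A80001L));   // prints 192.168.0.1
    }
}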


Port addresses

A service exists on a host, and is identified by its port. This is a 16 bit number. To send a

message to a server, you send it to the port for that service of the host that it is running on. This

is not location transparency! Certain of these ports are "well known".

Sockets

A socket is a data structure maintained by the system to handle network connections. A

socket is created using the call socket. It returns an integer that is like a file descriptor. In fact,

under Windows, this handle can be used with the ReadFile and WriteFile functions.

#include <sys/types.h>
#include <sys/socket.h>

int socket(int family, int type, int protocol);

Here "family" will be AF_INET for IP communications, protocol will be zero, and type

will depend on whether TCP or UDP is used. Two processes wishing to communicate over a

network create a socket each. These are similar to two ends of a pipe - but the actual pipe does

not yet exist.
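Since the project's front end is Java, the equivalent idea in Java uses the java.net.Socket class; the sketch below opens a TCP connection to a placeholder host and port and reads the first line of the reply (a server must already be listening there).

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class TcpClientExample {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 8080);       // placeholder host and port
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));

        out.println("hello server");                         // send one line of text
        System.out.println(in.readLine());                   // read one line of the reply
        socket.close();
    }
}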

JFree Chart

JFreeChart is a free 100% Java chart library that makes it easy for developers to display

professional quality charts in their applications. JFreeChart's extensive feature set includes:

A flexible design that is easy to extend, and targets both server-side and client-side

applications;

Support for many output types, including Swing components, image files (including PNG

and JPEG), and vector graphics file formats (including PDF, EPS and SVG);

JFreeChart is "open source" or, more specifically, free software. It is distributed under the

terms of the GNU Lesser General Public License (LGPL), which permits use in proprietary

applications.
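A minimal sketch of typical JFreeChart usage (assuming the 1.0.x API) is shown below: a dataset is filled, a chart is created from it, and the chart is written out as a PNG image. The chart title and data values are placeholders.

import java.io.File;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.data.general.DefaultPieDataset;

public class ChartExample {
    public static void main(String[] args) throws Exception {
        DefaultPieDataset dataset = new DefaultPieDataset();
        dataset.setValue("Cloud A", 60);                     // placeholder values
        dataset.setValue("Cloud B", 40);

        JFreeChart chart = ChartFactory.createPieChart(
                "Storage share", dataset, true, true, false);
        ChartUtilities.saveChartAsPNG(new File("chart.png"), chart, 500, 300);
    }
}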


1. Map Visualizations

Charts showing values that relate to geographical areas. Some examples include: (a)

population density in each state of the United States, (b) income per capita for each country in

Europe, (c) life expectancy in each country of the world. The tasks in this project include:

Sourcing freely redistributable vector outlines for the countries of the world,

states/provinces in particular countries (USA in particular, but also other areas);

Creating an appropriate dataset interface (plus default implementation), a renderer, and integrating this with the existing XYPlot class in JFreeChart;

Testing, documenting, testing some more, documenting some more.

2. Time Series Chart Interactivity

Implement a new (to JFreeChart) feature for interactive time series charts --- to display a

separate control that shows a small version of ALL the time series data, with a sliding "view"

rectangle that allows you to select the subset of the time series data to display in the main chart.

3. Dashboards

There is currently a lot of interest in dashboard displays. Create a flexible dashboard mechanism

that supports a subset of JFreeChart chart types (dials, pies, thermometers, bars, and lines/time

series) that can be delivered easily via both Java Web Start and an applet.

4. Property Editors

The property editor mechanism in JFreeChart only handles a small subset of the

properties that can be set for charts. Extend (or reimplement) this mechanism to provide greater

end-user control over the appearance of the charts.

Tomcat 6.0 web server

Tomcat is an open source web server developed by the Apache Group. Apache Tomcat is the

servlet container that is used in the official Reference Implementation for the Java Servlet and

Java Server Pages technologies. The Java Servlet and Java Server Pages specifications are


developed by Sun under the Java Community Process. Web Servers like Apache Tomcat support

only web components while an application server supports web components as well as business

components (BEA's WebLogic is one of the popular application servers). To develop a web application with JSP/servlets, install a web server such as JRun or Tomcat to run your application.

Fig.5.1 Tomcat Web server

5.1.2 Back end software:

Features of SQL-SERVER:

The OLAP Services feature available in SQL Server version 7.0 is now called SQL

Server 2000 Analysis Services. The term OLAP Services has been replaced with the term

Analysis Services. Analysis Services also includes a new data mining component. The

Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server

2000 Meta Data Services. References to the component now use the term Meta Data Services.

The term repository is used only in reference to the repository engine within Meta Data

Services

A SQL-SERVER database consists of the following types of objects,


They are,

1. TABLE

2. QUERY

3. FORM

4. REPORT

5. MACRO

Table: A table is a collection of data about a specific topic.

Views of a table: We can work with a table in two views,

1. Design View

2. Datasheet View

Design View: To build or modify the structure of a table, we work in the table design view. We can specify what kind of data the table will hold.

Datasheet View: To add, edit, or analyze the data itself, we work in the table's datasheet view mode.

Query: A query is a question that has to be asked of the data. Access gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you edit it) or a snapshot (which cannot be edited). Each time we run the query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.
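As a hedged illustration of asking such a question from Java, the sketch below runs a parameterized query with a PreparedStatement; each execution returns the latest rows, much as re-running a query returns a fresh dynaset. The connection URL, table, and column names are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class QueryExample {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://localhost:1433;databaseName=metacloud", "user", "password");
        // The "question": which files belong to a given owner?
        PreparedStatement ps = con.prepareStatement(
                "SELECT filename FROM files WHERE owner = ?");
        ps.setString(1, "owner1");                           // placeholder owner name
        ResultSet rs = ps.executeQuery();                    // re-running gives the latest data
        while (rs.next()) {
            System.out.println(rs.getString("filename"));
        }
        con.close();
    }
}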


6. SYSTEM TESTING

6.1 TESTING FUNDAMENTALS:

The purpose of testing is to discover errors. Testing is the process of trying to discover

every conceivable fault or weakness in a work product. It provides a way to check the

functionality of components, sub-assemblies, assemblies and/or a finished product. It is the

process of exercising software with the intent of ensuring that the Software system meets its

requirements and user expectations and does not fail in an unacceptable manner. There are

various types of test. Each test type addresses a specific testing requirement.

6.2 WHITE BOX TESTING:

White Box Testing is testing in which the software tester has knowledge of the inner workings, structure, and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.

6.3 BLACK BOX TESTING:

Black Box Testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.

6.4 SAMPLE TEST CASES:

Unit testing

Unit testing involves the design of test cases that validate that the internal program logic is

functioning properly, and that program inputs produce valid outputs. All decision branches and

internal code flow should be validated. It is the testing of individual software units of the

application. It is done after the completion of an individual unit before integration. This is a

structural testing, that relies on knowledge of its construction and is invasive. Unit tests perform


basic tests at component level and test a specific business process, application, and/or system

configuration. Unit tests ensure that each unique path of a business process performs accurately

to the documented specifications and contains clearly defined inputs and expected results.

Unit testing is usually conducted as part of a combined code and unit test phase of the

software lifecycle, although it is not uncommon for coding and unit testing to be conducted as

two distinct phases.
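As a small, generic example of what a unit test can look like in Java (using JUnit 4; the validation rule tested here is hypothetical and not taken from the project code):

import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class FieldValidatorTest {

    // A tiny hypothetical validation rule, defined here so the test is self-contained.
    static boolean isValidUserName(String name) {
        return name != null && !name.trim().isEmpty();
    }

    @Test
    public void acceptsNonEmptyUserName() {
        assertTrue(isValidUserName("owner1"));
    }

    @Test
    public void rejectsBlankUserName() {
        assertFalse(isValidUserName("   "));
    }
}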

Test strategy and approach

Field testing will be performed manually and functional tests will be written in detail.

Test objectives

All field entries must work properly.

Pages must be activated from the identified link.

The entry screen, messages and responses must not be delayed.

Features to be tested

Verify that the entries are of the correct format

No duplicate entries should be allowed

All links should take the user to the correct page.

Integration testing

Integration tests are designed to test integrated software components to determine if

they actually run as one program. Testing is event driven and is more concerned with the basic

outcome of screens or fields. Integration tests demonstrate that although the components were

individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components. Software integration testing is the incremental

integration testing of two or more integrated software components on a single platform to

produce failures caused by interface defects. The task of the integration test is to check that

components or software applications, e.g. components in a software system or – one step up –

software applications at the company level – interact without error.

Test Results:


All the test cases mentioned above passed successfully. No defects encountered.

Functional test

Functional tests provide systematic demonstrations that functions tested are available as

specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or

special test cases.

System Test

System testing ensures that the entire integrated software system meets requirements. It

tests a configuration to ensure known and predictable results. An example of system testing is the

configuration oriented system integration test. System testing is based on process descriptions

and flows, emphasizing pre-driven process links and integration points.

User acceptance Testing:

User Acceptance Testing is a critical phase of any project and requires significant

participation by the end user. It also ensures that the system meets the functional requirement.


7. RESULTS

Server login:


Fig 7.1: Server login

Here we open Apache Tomcat 7.0 and click on Manager Apps.


Browsing project:

Fig.7.2: Browsing project

Here we log in as the admin, enter the password, and then browse and select the project from the list of applications.


Homepage:

Fig.7.3: Home page

This is the homepage, which describes the Meta cloud and shows the architecture of the Meta cloud.


Owner Registration:

Fig.7.4: Owner registration page

In this webpage, the owner can register by filling in the form.


User Registration:

Fig.7.5: User registration page

In this webpage, the user can register by filling in the form.


TTP registration page:

Fig.7.6: TTP registration page

In this webpage, the TTP can register by filling in the form.


CSP Registration:

Fig.7.9 CSP Registration

In this webpage, the CSP can register by filling in the form.


Owner login page:

Fig.7.11 Owner login page

In this webpage, the owner can log in by entering his username and password.


File Upload:

Fig.7.12 File Upload

In this webpage, the owner can log in, fill in the form, and submit the details.


TTP login:

Fig. 7.13 TTP login

In this webpage, the TTP can log in, fill in the form, and submit the details.


File details:

Fig.7.14 File details

In this webpage we can see the files which were uploaded previously, containing file details such as the date created, file size, key number, etc.


File Upload:

Fig.7.15 File Upload

The file can be uploaded by choosing the file and then submitting it.


CSP login:

Fig. 7.16 CSP login

In this webpage, the CSP can log in, fill in the form, and submit the details.


CSP files details:

Fig.7.18 CSP files details

In this webpage CSP verifies the uploaded file details.


File key number:

Fig.7.20 File key number

By clicking on the Get Key button, the key is generated in this webpage.


Change cloud:

Fig.7.21 Change cloud

If the owner wants to change the cloud, he can do so by selecting from the list of cloud service providers available in the Meta cloud.


Send mail:

Fig.7.22 Send mail

The owner can send a mail to any of the users who have already registered, and the mail contains the file key.


Mail search:

Fig.7.24 Mail search

After the file is successfully uploaded, the user can check and download the file by using the file key which is sent to the user's mail.


8. CONCLUSION & FUTURE WORK

The Meta cloud can help mitigate vendor lock-in and promises transparent use of cloud

computing services. Most of the basic technologies necessary to realize the Meta cloud already

exist, yet lack integration. Thus, integrating these state-of-the-art tools promises a huge leap

toward the Meta cloud. To avoid Meta cloud lock-in, the community must drive the ideas and

create a truly open Meta cloud with added value for all customers and broad support for different

providers and implementation technologies.

To some extent, we can realize the Meta cloud based on a combination of existing tools and concepts, part of which we just examined. The Meta cloud's main components can be categorized based on whether they are important mainly for cloud software engineers during development time or whether they perform tasks during runtime.

The emergence of yet more cloud offerings from a multitude of service providers calls for

a Meta cloud to smoothen the edges of the jagged cloud landscape. This Meta cloud could solve

the vendor lock-in problems that current public and hybrid cloud users face.


REFERENCES

1. M. Armbrust et al., "A View of Cloud Computing," Comm. ACM, vol. 53, no. 4, 2010, pp. 50–58.

2. B.P. Rimal, E. Choi, and I. Lumb, "A Taxonomy and Survey of Cloud Computing Systems," Proc. Int'l Conf. Networked Computing and Advanced Information Management, IEEE CS Press, 2009, pp. 44–51.

3. J. Skene, D.D. Lamanna, and W. Emmerich, "Precise Service Level Agreements," Proc. 26th Int'l Conf. Software Eng. (ICSE 04), IEEE CS Press, 2004, pp. 179–188.

4. Q. Zhang, L. Cheng, and R. Boutaba, "Cloud Computing: State-of-the-Art and Research Challenges," J. Internet Services and Applications, vol. 1, no. 1, 2010, pp. 7–18.

5. M.D. Dikaiakos, A. Katsifodimos, and G. Pallis, "Minersoft: Software Retrieval in Grid and Cloud Computing Infrastructures," ACM Trans. Internet Technology.

6. "What Cloud Computing Really Means," http://www.infoworld.com/article/08/04/07/15FE-cloud-computing-reality_1.html

7. "Welcome to the New Era of Cloud Computing" (presentation), http://www.spinnakerlabs.com/CloudComputing.pdf

8. "Demystifying Clouds," http://www.johnmwillis.com/ - discusses many players in the cloud space.

9. M.A. Vouk, "Cloud Computing: Issues, Research and Implementations," Proc. Int'l Conf. Information Technology Interfaces, June 2008.
