Subject: CLOUD COMPUTING Code: 7CS1A Semester: VII Year: 4th
GLOBAL INSTITUTE OF TECHNOLOGY,
JAIPUR
RTU Paper Solution
Branch – CSE
Subject Name – CLOUD COMPUTING
Paper Code – 7CS1A
Date of Exam – 13/11/2019
UNIT-I
Q.1 a) What are the risks in migration into the cloud? Also explain the process steps used in migration into the cloud. (8)
ANSWER:
There are several risks in migration into the cloud, including:
Performance monitoring and tuning, essentially by identifying all possible production-level deviants
Business continuity and disaster recovery in the world of the cloud computing service
Compliance with standards and governance issues
IP and licensing issues
The quality of service (QoS) parameters as well as the corresponding SLAs committed to
The ownership, transfer, and storage of data in the application
Portability and interoperability issues
On the security front, cloud migration risks are visible at various levels of the enterprise application as deployed in the cloud, in addition to issues of trust and privacy. There are several legal compliances that a migration strategy and implementation have to fulfill, including obtaining the right execution logs as well as retaining rights to all audit trails at a detailed level, which currently may not be fully available.
Approaches to Migration into the Cloud:
The Seven-Step Model of Migration into the Cloud is used for understanding and leveraging cloud computing service offerings in the enterprise context. The seven steps in the model of migration into the cloud are:
Conduct Cloud Migration Assessments
Isolate the Dependencies
Map the Messaging & Environment
Re-architect & Implement the Lost Functionalities
Leverage Cloud Functionalities & Features
Test the Migration
Iterate and Optimize
The process of the seven-step migration into the cloud is given in the figure below:
Seven Step Model
The activities carried out within each of the seven steps are as follows:
Assess: Cloudonomics; Migration Costs; Recurring Costs; Database Data Segmentation; Database Migration; Functionality Migration; NFR Support
Isolate: Runtime Environment; Licensing; Libraries Dependency; Application Dependency; Latencies Bottlenecks; Performance Bottlenecks; Architectural Dependencies
Map: Messages Mapping; Environment Mapping; Libraries & Runtime Approximation
Re-Architect: Approximate lost functionality using cloud runtime support API; New Use Cases; Analysis; Design
Augment: Exploit additional cloud features; Seek low-cost extensions; Auto-scaling; Storage; Bandwidth; Security
Test: Expand test cases and test automation; Run proofs of concept; Test migration strategy; Test new test cases due to augmentation; Test for production loads
Optimize: Significantly satisfy cloudonomics of migration; Optimize compliance with standards and governance; Deliver best migration ROI; Develop roadmap for leveraging new cloud features
This is just a subset of the Seven-Step Migration Model; the detailed activities are specific and often proprietary to the cloud offerings of individual organizations in the market.
b) What are the ethical issues in cloud computing? (4)
ANSWER: Ethical Issues in Cloud Computing:
Cloud computing mainly raises the following ethical issues, based on market trends:
Control is given up to third-party services, which exposes users to risks such as unauthorized access, data corruption, infrastructure failure, and service unavailability.
The data is stored on multiple sites administered by several organizations.
Multiple services interoperate across the network.
The complex structure of cloud services makes it difficult to determine who is responsible in case something undesirable happens, and therefore no one can be held accountable for undesirable outcomes. This is called the problem of many hands.
Identity fraud and theft are made possible by unauthorized access to personal data in circulation. New forms of distribution using social networks also pose a danger to cloud computing.
Accountability is a necessary element of cloud computing. Suitable information about how data is handled within
the cloud and about allocation of responsibility are key elements for enforcing ethics rules in cloud computing.
c) Briefly describe the vision of the cloud computing towards its development in the market. (4)
ANSWER: Vision:
Cloud computing allows users to provision virtual hardware, runtime environments, and services.
The entire stack of a computing system is transformed into a collection of utilities, which can be provisioned and
composed together to deploy systems in hours rather than days and virtually with no maintenance costs.
Despite its evolution, the use of cloud computing is often limited to a single service at a time or, more commonly, a set of related services offered by the same vendor.
Previously the lack of effective standardization efforts made it difficult to move hosted services from one vendor to
another.
The long-term vision of cloud computing is that IT services are traded as utilities in an open market, without
technological and legal barriers. In this cloud marketplace, cloud service providers and consumers, trading cloud
services as utilities, play a vital role.
The need for ubiquitous storage and compute power on demand is the most common reason to consider cloud
computing. A scalable runtime for applications is an attractive option for application and system developers that do
not have infrastructure or cannot afford any further expansion of existing infrastructure.
The discovery of such services is done by human intervention: a person or a team of people who look over the Internet to identify offerings that meet their needs.
We imagine that in the near future it will be possible to find the solution that matches our needs by simply entering
our request in a global digital market that trades cloud computing services.
The existence of such a market will enable the automation of the discovery process and its integration into existing software systems, thus allowing users to transparently leverage cloud resources in their applications and systems.
(OR)
1. a) Describe the reasons for customers to prefer cloud computing over traditional computing on owned infrastructure. (8)
ANSWER: Cloud computing brings a number of new benefits compared to traditional computing. These are briefly described here:
Scalability and on-demand services
Cloud computing provides resources and services for users on demand. The resources are scalable over
several data centers.
User-centric interface
Cloud interfaces are location independent and can be accessed through well-established interfaces such as Web services and Internet browsers.
Guaranteed Quality of Service (QoS)
Cloud computing can guarantee QoS for users in terms of hardware/CPU performance, bandwidth, and memory
capacity.
Autonomous system
Cloud computing systems are autonomous systems managed transparently to users. Software and data inside clouds can be automatically reconfigured and consolidated to a simple platform depending on the user's needs.
Pricing
Cloud computing does not require up-front investment. No capital expenditure is required. Users pay for services and capacity as they need them.
b) What are the enabling technologies for cloud computing? Explain networking support for cloud computing. (8)
ANSWER: Enabling Technology:
Key technologies that enabled cloud computing are virtualization, Web service and service-oriented architecture,
service flows and workflows, and Web 2.0 and mashup. The brief discussion of all is given below:
Virtualization
An advantage of cloud computing is the ability to virtualize and share resources among different applications with the objective of better server utilization. In non-cloud computing, three independent platforms exist for three different applications, each running on its own server. In the cloud, servers can be shared, or virtualized, across operating systems and applications, resulting in fewer servers.
Web Service and Service Oriented Architecture
Web Services and Service Oriented Architecture (SOA) represent the base technologies for cloud computing. Cloud
services are typically designed as Web services, which follow industry standards including WSDL, SOAP, and
UDDI. A Service Oriented Architecture organizes and manages Web services inside clouds. A SOA also includes a
set of cloud services, which are available on various distributed platforms.
Service Flow and Workflows
The concept of service flow and workflow refers to an integrated view of service based activities provided in
clouds. Workflows have become one of the important areas of research in the field of database and information
systems.
Web 2.0 and Mashup
Web 2.0 is a new concept that refers to the use of Web technology and Web design to enhance creativity,
information sharing, and association among users. On the other hand, Mashup is a web application that combines
data from more than one source into a single integrated storage tool. Both technologies are very beneficial for cloud
computing. The components in this architecture are dynamic in nature. The components closer to the user are
smaller in nature and more reusable.
Enabling Technologies
UNIT II
2. a) Differentiate between public, private and hybrid cloud according to their functionality. (6)
ANSWER: Cloud types are defined by deployment models. According to NIST, a cloud can be classified as public, private, community, or hybrid based on its model of deployment.
Private cloud: The cloud infrastructure is operated exclusively for an organization. It may be managed by the
organization or a third party and may exist on premise or off premise.
Community cloud: The cloud infrastructure is shared by several organizations and supports a specific community
that has shared concerns like mission, security requirements, policy, and compliance considerations. It may be
managed by the organizations or a third party and may exist on premise or off premise.
Public cloud: The cloud infrastructure is made available to the general public or a large industry group and is
owned by an organization selling cloud services.
Hybrid cloud: The cloud infrastructure is a composition of two or more clouds (private, community, or public) that
remain unique entities but are bound together by standardized or proprietary technology that enables data and
application portability.
Types of Cloud
b) What is the need for data centres? Explain by providing a case study of any industry where a data centre is used. (6)
ANSWER: Despite the fact that hardware is constantly getting smaller, faster and more powerful, we are an
increasingly data-hungry species, and the demand for processing power, storage space and information in
general is growing and constantly threatening to outstrip companies' abilities to deliver.
Any entity that generates or uses data has the need for data centers on some level, including government
agencies, educational bodies, telecommunications companies, financial institutions, retailers of all sizes, and
the purveyors of online information and social networking services such as Google and Facebook. Lack of
fast and reliable access to data can mean an inability to provide vital services or loss of customer satisfaction
and revenue.
Examples: Facebook, YouTube, Google Drive, Dropbox, WeTransfer, etc.
c) Explain the features provided by Google App Engine. (4)
ANSWER: Google App engine:
Applications for the GAE programming model can be developed using the following two programming languages:
o Java
o Python.
A client environment that contains Eclipse and its plug-in for Java allows debugging of GAE applications on a local machine. The Google Web Toolkit (GWT) is available for Java web application developers. Developers can use this language with a JVM-based interpreter or compiler.
Python is generally used with frameworks such as Django and CherryPy but Google also provides ‘webapp’ Python
environment.
There are many concepts for storing and accessing data.
The data store used is a "NoSQL" data management system for entities that are at most 1 MB in size and are categorized by a set of schema-less properties.
GAE provides the facility to work on image data by using a dedicated Images service, which can resize, rotate, flip, crop, and enhance images.
Applications developed on GAE can also be executed on other platforms like Android, Windows, and iPhone by mapping the platform with GAE. GAE stores information about data usage on a daily and hourly basis.
(OR)
2. a) Explain the concept of parallel and distributed programming paradigms in cloud computing.(6)
ANSWER: Parallel and distributed programming Paradigms:
Consider a distributed computing system consisting of a set of networked nodes or workers. The issues the system must handle when running a typical parallel program are:
• Partitioning: This is applicable to both computation and data, as follows:
o Computation partitioning: Computation partitioning splits a given job or program into smaller tasks. It significantly depends on properly identifying portions of the job or program that can be performed concurrently. Once parallelism is identified in the structure of the program, it can be divided into parts to be run on different workers. Different parts may process different data or a copy of the same data.
o Data partitioning: Data partitioning splits the input or intermediate data into smaller pieces. Once parallelism is identified in the input data, it can also be divided into pieces to be processed on different workers. Data pieces may be processed by different parts of a program or by copies of the same program.
Mapping: Mapping assigns either the smaller parts of a program or the smaller pieces of data to underlying resources. This process aims to properly assign parts or pieces to be run simultaneously on different workers and is commonly handled by resource allocators in the system.
Synchronization: Because different workers may perform different tasks, synchronization and coordination between workers is necessary so that race conditions are prevented and data dependency between different workers is properly managed. Multiple accesses to a shared resource by different workers may raise race conditions, whereas data dependency occurs when a worker needs the processed data of other workers.
Communication: Data dependency is one of the main reasons for communication among workers, so communication is triggered whenever intermediate data needs to be sent to other workers.
Scheduling: When the number of computation parts (tasks) or data pieces of a job or program exceeds the number of available workers, the scheduler selects tasks to run on the workers. The resource allocator does the actual mapping of the computation, while the scheduler only picks the next part from the queue of unassigned tasks based on a set of rules called the scheduling policy. For multiple jobs or programs, the scheduler selects a sequence of jobs or programs to run over the distributed computing system. Scheduling is also necessary when system resources are not sufficient to run multiple jobs together.
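The steps above can be sketched in Python. This is an illustrative sketch only, not part of the original answer: the helper names (partition, square_sum, run_job) are invented, and ThreadPoolExecutor stands in for the system's mapper and scheduler, which assign pieces to workers.

```python
from concurrent.futures import ThreadPoolExecutor

def partition(data, n_parts):
    """Data partitioning: split the input into roughly equal pieces."""
    size = (len(data) + n_parts - 1) // n_parts  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def square_sum(chunk):
    """The computation part each worker runs on its piece of data."""
    return sum(x * x for x in chunk)

def run_job(data, n_workers=4):
    pieces = partition(data, n_workers)                # data partitioning
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # mapping + scheduling: the executor assigns pieces to workers
        partials = list(pool.map(square_sum, pieces))
    # communication: intermediate results are collected and combined
    return sum(partials)

print(run_job(list(range(10))))  # sum of squares of 0..9 -> 285
```

Because the pieces are independent, no explicit synchronization is needed here beyond the implicit join when the executor exits; a job with data dependencies between workers would also need locks or message passing.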
b) Explain various service layers in layered architecture of cloud with help of a neat and labeled diagram. (6)
ANSWER: Services Layers:
A cloud service model begins with the boundary between a client's network, management, and responsibilities and those of the cloud service provider. With the development of cloud computing, different vendors have come into the market, each with different services associated with them. The collection of services offered by a vendor is called its service model.
Layers of Cloud
Three service types have been accepted universally, as follows:
Infrastructure as a Service: IaaS provides virtual machines, virtual storage, virtual infrastructure, and other
hardware assets as resources that clients can run. The IaaS service provider manages all the infrastructure and the
client is responsible for all other aspects of the deployment. This can include the operating system, applications, and
user interactions with the system. Examples of IaaS service providers include:
o Amazon Elastic Compute Cloud (EC2)
o Eucalyptus
o GoGrid
o FlexiScale
o Linode
o RackSpace Cloud
o Terremark
Platform as a Service: PaaS provides virtual machines, operating systems, applications, services, development
frameworks, transactions, and control structures. The client can deploy its applications on the cloud infrastructure
and use applications that were programmed using languages and tools that are supported by the PaaS service
provider. The service provider manages the cloud infrastructure, the operating systems, and the enabling software.
The client is responsible for installing and managing the application that they deployed. Examples of PaaS services
are:
o Force.com
o GoGrid CloudCenter
o Google AppEngine
o Windows Azure Platform
Software as a Service: SaaS is a complete operating environment with applications, management, and the user
interface. In the SaaS model the application is provided to the client through a thin client interface generally a web
browser and the customer's responsibility begins and ends with entering and managing its data and user interaction.
Everything from the application down to the infrastructure is the vendor's responsibility. Examples of SaaS cloud
service providers are:
o GoogleApps
o Oracle On Demand
o SalesForce.com
o SQL Azure
When these three service models are taken together, they are known as the SPI model of cloud computing. Other service models include:
StaaS, Storage as a Service
IdaaS, Identity as a Service
CmaaS, Compliance as a Service
c) Explain the working of Map Reduce. (4)
ANSWER: MapReduce:
MapReduce is a software framework which supports parallel and distributed computing on large data sets.
The Map and Reduce functions of MapReduce are both defined with respect to data structured in (key, value) pairs. Map takes one pair of data with a type in one data domain and returns a list of pairs in a different domain:
Map(k1,v1) → list(k2,v2)
The Map function is applied in parallel to each pair in the input dataset and produces a list of pairs for each call. After that, the MapReduce framework collects all pairs with the same key from all lists, groups them together, and creates one group for each key.
Then the Reduce function is applied in parallel to each group to produce a collection of values in the same domain:
Reduce(k2, list (v2)) → list(v3)
Each Reduce call typically produces either one value v3 or an empty return, though one call is allowed to return more than one value. The returns of all calls are collected as the desired result list.
MapReduce framework transforms a list of (key, value) pairs into a list of values. This process is different from the
typical functional programming map and reduce combination which accepts a list of arbitrary values and returns one
single value that combines all the values returned by map.
Implementing the map and reduce abstractions is necessary but not sufficient for implementing MapReduce. Distributed implementations of MapReduce also require a means of connecting the processes performing the Map and Reduce phases.
The overall structure of a user’s program containing the MapReduce and the Main functions is given below. The Map
and Reduce are two major subroutines. They will be called to implement the desired function performed in the main
program.
Map Function (….)
{
… …
}
Reduce Function (….)
{
… …
}
Main Function (….)
{
Initialize Spec object
… …
MapReduce (Spec, & Results)
}
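The (key, value) transformations described above can be illustrated with a small word-count sketch in Python. This is illustrative only: the function names are invented, and a sequential group-by stands in for the framework's distributed shuffle between the Map and Reduce phases.

```python
from collections import defaultdict
from itertools import chain

def map_fn(_, line):
    """Map(k1, v1) -> list(k2, v2): emit (word, 1) for every word."""
    return [(word, 1) for word in line.split()]

def reduce_fn(word, counts):
    """Reduce(k2, list(v2)) -> list(v3): sum the counts for one key."""
    return [(word, sum(counts))]

def map_reduce(inputs):
    # Map phase: apply map_fn in turn to every (key, value) input pair.
    intermediate = chain.from_iterable(map_fn(k, v) for k, v in inputs)
    # Group phase: collect all values that share the same key.
    groups = defaultdict(list)
    for key, value in intermediate:
        groups[key].append(value)
    # Reduce phase: apply reduce_fn to each group and flatten the results.
    return dict(chain.from_iterable(
        reduce_fn(k, vs) for k, vs in sorted(groups.items())))

print(map_reduce([(0, "the cloud"), (1, "the cloud computing")]))
# -> {'cloud': 2, 'computing': 1, 'the': 2}
```

In a real framework the Map and Reduce calls would run in parallel on different workers, with the grouping done by a distributed shuffle; the data flow, however, is exactly the one sketched here.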
UNIT -III
3. a) Explain the virtualization of CPU in detail. (8)
ANSWER: CPU Virtualization: A virtual machine is a duplicate of an existing system, and its instructions are executed on the host processor in native mode. Unprivileged instructions of the virtual machine run directly on the host machine for higher efficiency.
A CPU is virtualized if it executes the virtual machine's privileged and unprivileged instructions in CPU user mode while the VMM runs in supervisor mode. Privileged instructions, such as the control- and behavior-sensitive instructions of a VM, are trapped and executed securely by the VMM. Here the VMM works as a unified mediator for hardware access from different VMs, to guarantee the correctness and stability of the whole system. However, not all CPUs support virtualization: RISC CPUs can be virtualized naturally, whereas the x86 architecture did not originally support virtualization.
CPU virtualization provides convenience and security. Since virtual machines can be accessed remotely, users are able to work from any location and on any PC, which provides a lot of flexibility for employees working from home or on the go. It also protects confidential data from being lost or stolen by keeping it on central servers.
b) What do you mean by hypervisor? Explain different types of hypervisors. (8)
ANSWER: In virtualization, a virtualization layer is inserted between the hardware and the operating system. The virtualization layer is responsible for converting the real hardware into virtual hardware, so that different operating systems such as Linux and Windows can run on the same physical machine simultaneously.
In this section we discuss virtualization software/tools such as the VMware hypervisor, Xen, and KVM, which are used to build the virtualization layer between the hardware and the operating system.
Hypervisor VMware:
The hypervisor works at the hardware level to virtualize devices such as the CPU, memory, disk, and network interfaces. The hypervisor software sits directly between the physical hardware and the OS; this virtualization layer developed on the hardware is known as the VMM or the hypervisor. The hypervisor provides hypercalls for the guest OSes and applications. A hypervisor can use a microkernel architecture, like Microsoft Hyper-V, or a monolithic architecture, like VMware ESX, for server virtualization.
A microkernel hypervisor leaves device drivers and other potentially unreliable components out of the hypervisor, whereas a monolithic hypervisor implements all functions, including device drivers. Hence, the hypervisor code of a microkernel hypervisor is smaller than that of a monolithic hypervisor.
Xen:
Xen is an open-source hypervisor program developed at Cambridge University. Xen is a microkernel hypervisor which implements only the mechanisms, while the policies are handled by Domain 0. Xen does not contain any device drivers; it just provides a mechanism by which a guest OS can get direct access to the physical devices. The size of the Xen hypervisor is therefore kept rather small. Xen provides a virtual environment between the hardware and the OS, and a number of commercial hypervisors have been developed on top of it.
Xen Architecture
The core components of a Xen system are the hypervisor, kernel and applications. In Xen system guest OS works on top
of the hypervisor.
KVM (kernel based VM):
This is a Linux para-virtualization system and has been part of the Linux kernel since version 2.6.20. Memory management and scheduling are done by the existing Linux kernel, and KVM does the remaining work. It is much simpler than a full hypervisor and also controls the complete machine. KVM improves the performance of the system.
(OR)
3. a) Explain the differences between emulation, native virtualization and host virtualization. (8)
ANSWER: For many, emulation and virtualization go hand in hand, but there are actually some key differences. When a device is being emulated, a software-based construct has replaced a hardware component, so it is possible to run a complete virtual machine on an emulated server. Virtualization, by contrast, makes it possible for the virtual machine to run directly on the underlying hardware, without needing to impose an emulation tax (the processing cycles needed to emulate the hardware).
With virtualization, the virtual machine uses hardware directly, although there is an overarching scheduler. As
such, no emulation is taking place, but this limits what can be run inside virtual machines to operating systems
that could otherwise run atop the underlying hardware. That said, this method provides the best overall
performance of the two solutions.
With emulation, since an entire machine can be created as a virtual construct, there is a wider variety of opportunities, but with the aforementioned emulation penalty. Emulation makes it possible, for example, to run programs designed for a completely different architecture on an x86 PC. This approach is common when it comes to running old games designed for obsolete platforms on today's modern systems. Because everything is emulated in software, there is a performance hit in this method, although today's massively powerful processors often compensate for it.
With host (hosted) virtualization, the hypervisor runs as an application on top of a conventional host operating system (as in VMware Workstation or VirtualBox). This makes installation easy and lets the VM reuse the host's device drivers, but it adds some overhead compared with native (bare-metal) virtualization.
b) What do you mean by virtualization of memory and I/O devices? Explain in details. (8)
ANSWER: Memory virtualization: Virtual memory virtualization involves a two-stage mapping: the guest OS maps virtual addresses to the guest's "physical" memory, and the VMM then maps that guest memory to the actual machine memory, typically using shadow page tables or hardware support such as nested/extended page tables.
I/O Virtualization: I/O virtualization manages the I/O requests between the virtual machines and the physical machine. There are three ways to implement I/O virtualization:
o Full device emulation: It provides emulation of well-known real-world devices.
o Para-virtualization: It is also known as the split driver model, consisting of a frontend driver and a backend driver. They work in different domains and interact with each other through a block of shared memory. The frontend driver manages the I/O requests of the guest OS, and the backend driver manages the real I/O devices. Para-virtualization therefore performs better than full device emulation.
o Direct I/O virtualization: It allows the virtual machine to access devices directly.
UNIT - IV
4. a) Explain the cloud security requirements and give fundamental model for cloud information security. (4)
ANSWER:
1. Data in transit protection
User data transiting networks should be adequately protected against tampering and eavesdropping.
2. Asset protection and resilience
User data, and the assets storing or processing it, should be protected against physical tampering, loss, damage
or seizure.
3. Separation between users
A malicious or compromised user of the service should not be able to affect the service or data of another.
4. Governance framework
The service provider should have a security governance framework which coordinates and directs its
management of the service and information within it. Any technical controls deployed outside of this
framework will be fundamentally undermined.
5. Operational security
The service needs to be operated and managed securely in order to impede, detect or prevent attacks. Good
operational security should not require complex, bureaucratic, time consuming or expensive processes.
6. Personnel security
Where service provider personnel have access to your data and systems you need a high degree of confidence
in their trustworthiness. Thorough screening, supported by adequate training, reduces the likelihood of
accidental or malicious compromise by service provider personnel.
7. Secure development
Services should be designed and developed to identify and mitigate threats to their security. Those which
aren’t may be vulnerable to security issues which could compromise your data, cause loss of service or enable
other malicious activity.
8. Supply chain security
The service provider should ensure that its supply chain satisfactorily supports all of the security principles
which the service claims to implement.
9. Secure user management
Your provider should make the tools available for you to securely manage your use of their service.
Management interfaces and procedures are a vital part of the security barrier, preventing unauthorised access
and alteration of your resources, applications and data.
10. Identity and authentication
All access to service interfaces should be constrained to authenticated and authorised individuals.
11. External interface protection
All external or less trusted interfaces of the service should be identified and appropriately defended.
12. Secure service administration
Systems used for administration of a cloud service will have highly privileged access to that service. Their
compromise would have significant impact, including the means to bypass security controls and steal or
manipulate large volumes of data.
13. Audit information for users
You should be provided with the audit records needed to monitor access to your service and the data held
within it. The type of audit information available to you will have a direct impact on your ability to detect and
respond to inappropriate or malicious activity within reasonable timescales.
14. Secure use of the service
The security of cloud services and the data held within them can be undermined if you use the service poorly.
Consequently, you will have certain responsibilities when using the service in order for your data to be
adequately protected.
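Requirement 1 above (data in transit protection) can be illustrated with a short Python sketch using the standard library's ssl module. The helper names are invented for this example; it is a minimal sketch of the idea, not a complete hardening guide.

```python
import socket
import ssl

def make_tls_context():
    """Context that protects data in transit: traffic is encrypted and
    the server certificate is verified against trusted CAs, including
    a hostname check, so tampering and eavesdropping are resisted."""
    context = ssl.create_default_context()            # CERT_REQUIRED + hostname check
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    return context

def open_secure_channel(host, port=443, timeout=5):
    """Return a socket whose reads and writes are TLS-protected."""
    raw = socket.create_connection((host, port), timeout=timeout)
    return make_tls_context().wrap_socket(raw, server_hostname=host)
```

Calling open_secure_channel("example.com") would return a socket on which all traffic is encrypted, and the connection fails if the server's certificate chain or hostname does not verify.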
b) Describe all the possible attacks that can be used to disrupt the cloud. (8)
ANSWER:
1: DDoS attacks
As more and more businesses and operations move to the cloud, cloud providers are becoming a bigger target
for malicious attacks. Distributed denial of service (DDoS) attacks are more common than ever before.
Verisign reported IT services, cloud and SaaS was the most frequently targeted industry during the first
quarter of 2015.
A DDoS attack is designed to overwhelm website servers so it can no longer respond to legitimate user
requests. If a DDoS attack is successful, it renders a website useless for hours, or even days. This can result in
a loss of revenue, customer trust and brand authority.
Complementing cloud services with DDoS protection is no longer just a good idea for the enterprise; it's a necessity. Websites and web-based applications are core components of 21st-century business and require state-of-the-art security.
2: Data breaches
Known data breaches in the U.S. hit a record-high of 738 in 2014, according to the Identity Theft Research
Center, and hacking was (by far) the number one cause. That’s an incredible statistic and only emphasizes the
growing challenge to secure sensitive data.
Traditionally, IT professionals have had great control over the network infrastructure and physical hardware
(firewalls, etc.) securing proprietary data. In the cloud (in private, public and hybrid scenarios), some of those
controls are relinquished to a trusted partner. Choosing the right vendor, with a strong record of security, is
vital to overcoming this challenge.
3: Data loss
When business-critical information is moved into the cloud, it's understandable to be concerned with its security. Losing data from the cloud, whether through accidental deletion, malicious tampering, or an act of nature that brings down a cloud service provider, could be disastrous for an enterprise business. Often a DDoS attack is only a diversion for a greater threat, such as an attempt to steal or delete data.
To face this challenge, it’s imperative to ensure there is a disaster recovery process in place, as well as an
integrated system to mitigate malicious attacks. In addition, protecting every network layer, including the
application layer (layer 7), should be built-in to a cloud security solution.
4: Insecure access points
One of the great benefits of the cloud is it can be accessed from anywhere and from any device. But, what if
the interfaces and APIs users interact with aren’t secure? Hackers can find these types of vulnerabilities and
exploit them.
A behavioral web application firewall examines HTTP requests to a website to ensure they are legitimate
traffic. This always-on device helps protect web applications from security breaches.
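To make the idea concrete, a toy request filter can be sketched in Python (the signature patterns and function name are illustrative only; a real behavioral WAF learns traffic baselines rather than matching a fixed pattern list):

```python
import re

# Hypothetical attack signatures; real WAFs use far richer, adaptive rules.
SUSPICIOUS = [
    re.compile(r"(?i)union\s+select"),  # SQL injection probe
    re.compile(r"(?i)<script\b"),       # reflected XSS attempt
    re.compile(r"\.\./"),               # path traversal
]

def is_legitimate(method, path, query):
    """Very rough check that a request looks like normal traffic."""
    if method not in {"GET", "POST", "PUT", "DELETE", "HEAD"}:
        return False
    payload = path + "?" + query
    return not any(p.search(payload) for p in SUSPICIOUS)

print(is_legitimate("GET", "/products", "id=42"))              # True
print(is_legitimate("GET", "/products", "id=1 UNION SELECT"))  # False
```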
5: Notifications and alerts
Awareness and proper communication of security threats is a cornerstone of network security and the same
goes for cloud security. Alerting the appropriate website or application managers as soon as a threat is
identified should be part of a thorough security plan. Speedy mitigation of a threat relies on clear and prompt
communication so steps can be taken by the proper entities and impact of the threat minimized.
c) Explain different legal issues in cloud computing. (4)
ANSWER: Legal Issues For The Cloud:
Service levels: It should go without saying that the starting point should be the business case and intended use
of the service, and not any legal document, such as a service level agreement (SLA). Understand what
business problem the service will be solving; the intended internal and external users; when, where and how
the service will be accessed; whether or not the service is business-critical; the practical consequences if the
service is down or degraded for any period of time; and how the use of the service may change over time.
Then, ensure the SLA reflects your needs.
Termination or suspension of service: The software application and/or the data running or housed in the
cloud may be critical to your business. Continuity of access and use (to both the application and data),
especially when both are on a third-party server, are of utmost importance. To that end, does the cloud vendor
in each instance notify you when any of the terms of the agreement may have been violated, and are you given
an opportunity to remedy each violation?
Representations and warranties; indemnities. While seemingly arcane, in terms of potential pitfalls, these
provisions may be the most important. A representation is a statement of fact, either past or present, while a
warranty may express a promise. Typical reps and warranties should confirm that there are no pending or
threatened claims of intellectual property right (IPR) infringement (after all, who wants legal problems on day
one?) and address continued no infringement, performance (as to the underlying app), and data security and
privacy.
Breach of a warranty will typically give rise to a limited remedy and thus will be to the exclusion of other
remedies, such as money damages. Therefore, be sure the limited remedy makes business sense and will
suffice. Note also that cloud providers typically request reps and warranties from the customer, including
those pertaining to the customer's data. To that end, the buyer must be careful about the sources of its data or
risk exposing itself to liability.
An indemnity is a contractual obligation to compensate a party for a loss. Thus, an indemnity would
compensate the cloud customer for any claims that its use of the service violated any third-party IP rights,
such as patent, copyright or trademark. These suits (especially patent) are costly, so care must be taken to
ensure that you are adequately covered.
Confidentiality. Cloud customers should be sure to get satisfactory promises regarding which vendor
personnel will have access to confidential information (including customer data) and what steps the vendor
will undertake to maintain the confidentiality of that information. Data is king, and this provision deserves
considerable attention.
Commercial/other. The considerations above are a good starting point but they are just the tip of the iceberg.
Here are a few more to consider: storage fees, if and when there are automatic upgrades; whether or not there
are multiple environments (e.g., development, test, and production) available to customer; how customization
works in a cloud setting; how many data recoveries does the vendor provide free of charge (and what are the
costs of additional backups); and how easy is it to move to another cloud and how will the vendor support the
transition?
(OR)
4. a) Explain business continuity planning and Disaster recovery Planning. (6)
ANSWER: Business Continuity and Disaster Recovery are the measures designed and implemented to
ensure operational resiliency in the event of any service interruptions. Business continuity and disaster
recovery provide flexible and reliable failover for required services in the event of service interruptions,
including those caused by natural or man-made disasters or disruptions. Cloud-centric business continuity and
disaster recovery makes use of the cloud's flexibility to minimize cost and maximize benefits.
The core of this concept is the business continuity plan — a defined strategy that includes every facet of your
organization and details procedures for maintaining business availability.
Start with a business continuity plan
Business continuity management starts with planning how to maintain your critical functions (e.g., IT, sales
and support) during and after a disruption.
A business continuity plan (BCP) should comprise the following elements:
1. Threat Analysis
The identification of potential disruptions, along with potential damage they can cause to affected resources.
Examples include:
Threat – Potential impact
Power outage – Inability to access servers
Natural disaster – Critical infrastructure damage
Illness – Widespread employee absences
Cyberattack – Data theft and network downtime
Vendor error – Inability to execute integrated business functions
2. Role assignment
Every organization needs a well-defined chain of command and substitute plan to deal with absence of staff in
a crisis scenario. Employees must be cross-trained on their responsibilities so as to be able to fill in for one
another.
3. Communications
A communications strategy details how information is disseminated immediately following and during a
disruptive event, as well as after it has been resolved.
4. Backups
From electrical power to communications and data, every critical business component must have an adequate
backup plan that includes:
Data backups to be stored in different locations. This prevents the destruction of both the original and
backup copies at the same time. If necessary, offline copies should be kept as well.
Backup power sources, such as generators and inverters that are provisioned to deal with power
outages.
Backup communications (e.g., mobile phones and text messaging to replace land lines) and backup
services (e.g., cloud email services to replace on-premise servers).
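The rule that data backups be stored in different locations can be sketched as a small Python helper that copies a backup to several destinations and verifies each copy against a checksum (the paths and function name are illustrative):

```python
import hashlib
import shutil
from pathlib import Path

def replicate_backup(source, destinations):
    """Copy a backup file to several locations, verifying each copy's checksum."""
    src = Path(source)
    digest = hashlib.sha256(src.read_bytes()).hexdigest()
    for dest_dir in destinations:
        dest = Path(dest_dir) / src.name
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)
        # A copy whose checksum differs is corrupt and must not count as a backup
        if hashlib.sha256(dest.read_bytes()).hexdigest() != digest:
            raise IOError(f"checksum mismatch for {dest}")
    return digest

# Usage: replicate_backup("db.bak", ["/mnt/site1", "/mnt/site2"])
```

In practice the destinations would be geographically separate sites or cloud regions, so no single event can destroy both the original and the backups.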
Disaster recovery plan (DRP) – Your second line of defense
As the name implies, a disaster recovery plan deals with the restoration of operations after a major disruption.
It’s defined by two factors: RTO and RPO.
Recovery time objective (RTO) – The acceptable downtime for critical functions and components,
i.e., the maximum time it should take to restore services. A different RTO should be assigned to each
of your business components according to their importance (e.g., ten minutes for network servers, an
hour for phone systems).
Recovery point objective (RPO) – The point to which your state of operations must be restored
following a disruption. In relation to backup data, this is the maximum age and level of staleness the
data can have. For example, network servers updated hourly should have a maximum RPO of 59 minutes to
avoid data loss.
Deciding on specific RTOs and RPOs helps clearly show the technical solutions needed to achieve your
recovery goals. In most cases the decision is going to boil down to choosing the right failover solution.
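The RTO and RPO checks described above can be expressed directly in Python (the timestamps and thresholds below are illustrative):

```python
from datetime import datetime, timedelta

def meets_rpo(last_backup, disruption_time, rpo):
    """True if the newest backup is recent enough to satisfy the RPO."""
    return disruption_time - last_backup <= rpo

def meets_rto(restore_started, restore_finished, rto):
    """True if the restore completed within the RTO."""
    return restore_finished - restore_started <= rto

disruption = datetime(2019, 11, 13, 12, 0)
# A backup taken 45 minutes before the disruption satisfies a 59-minute RPO...
assert meets_rpo(disruption - timedelta(minutes=45), disruption, timedelta(minutes=59))
# ...but one taken two hours before does not.
assert not meets_rpo(disruption - timedelta(hours=2), disruption, timedelta(minutes=59))
```

Running such checks continuously against actual backup timestamps is one simple way to verify that a DR plan is being met in practice, not just on paper.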
b) Explain security challenges and security architecture in cloud computing. (6)
ANSWER: SECURITY CHALLENGES IN CLOUD COMPUTING
1. Organizational Security
Organizational risks are categorized as the risks that may impact the structure of the
organization or the business as an entity. If a CSP goes out of business or gets acquired by another entity,
this may negatively affect its CSCs, since any Service Level Agreements (SLAs) they had may have changed
and they would then have to migrate to another CSP that more closely aligns with their needs. In addition to
this, there could be the threat of malicious insiders in the organization who could do harm using the data
provided by their CSCs.
2. Physical Security
The physical location of the cloud data center must be secured by the CSP in order to prevent unauthorized
on-site access of CSC data. Even firewalls and encryption cannot protect against the physical theft of data.
Since the CSP is in charge of the physical infrastructure, it should implement and operate appropriate
infrastructure controls, including staff training, physical location security, and network firewalls. It is also
important to note that the CSP is not only responsible for storing and processing data in specific jurisdictions but
is also responsible for obeying the privacy regulations of those jurisdictions.
3. Technological Security
These risks are the failures associated with the hardware, technologies and services provided by the CSP. In
the public cloud, with its multi-tenancy features, these include resource-sharing and isolation problems, and risks
related to changing CSPs, i.e. portability. Regular maintenance and audit of the infrastructure by the CSP is
recommended.
4. Compliance and Audit
These are risks related to the law. That is, risks related to lack of jurisdiction information, changes in
jurisdiction, illegal clauses in the contract and ongoing legal disputes. For example, depending on location,
some CSPs may be mandated by law to turn over sensitive information if demanded by the government.
5. Data Security
There are a variety of data security risks that we need to take into account. The three main properties that we
need to ensure are data integrity, confidentiality and availability. We will go more into depth on this in the
next subsection since this is the area most at risk of being compromised and hence where the bulk of cloud
security efforts are focused.
Cloud Computing Security Architecture:
Fig: Cloud Computing Security Architecture
c) Explain service level agreements in short. (4)
ANSWER: Service level agreements in Cloud computing
A Service Level Agreement (SLA) is the bond for performance negotiated between the cloud services
provider and the client. Earlier, in cloud computing all Service Level Agreements were negotiated between the
client and the service provider. Nowadays, with the advent of large utility-like cloud computing providers,
most Service Level Agreements are standardized until a client becomes a large consumer of cloud services.
Service level agreements are also defined at different levels which are mentioned below:
Customer-based SLA
Service-based SLA
Multilevel SLA
A few Service Level Agreements are enforceable as contracts, but most are agreements or contracts which are
more along the lines of an Operating Level Agreement (OLA) and may not have the force of law. It is
wise to have an attorney review the documents before making a major commitment to a cloud service provider.
Service Level Agreements usually specify some parameters which are mentioned below:
1. Availability of the Service (uptime)
2. Latency or the response time
3. Reliability of the service components
4. Accountability of each party
5. Warranties
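The availability parameter above translates directly into a downtime budget, which can be illustrated with a short Python sketch (the 30-day month is an assumption for the calculation):

```python
def allowed_downtime_minutes(availability_percent, days=30):
    """Maximum downtime per period implied by an SLA availability figure."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability_percent / 100)

for sla in (99.0, 99.9, 99.99):
    # e.g. "99.9% uptime -> 43.2 min/month"
    print(f"{sla}% uptime -> {allowed_downtime_minutes(sla):.1f} min/month")
```

This is why the difference between "two nines" and "four nines" in an SLA matters: the permitted monthly downtime drops from over seven hours to a few minutes.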
UNIT - V
5. a) Describe the amazon EC2 and its features. (6)
ANSWER: Amazon Elastic Compute Cloud (Amazon EC2)
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute
capacity in the cloud. It is designed to make web-scale computing easier for developers and system
administrators. Amazon EC2’s simple web service interface allows you to obtain and configure
capacity with minimal friction.
Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you
actually use. Amazon EC2 provides developers and system administrators the tools to build failure
resilient applications and isolate themselves from common failure scenarios.
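The pay-only-for-what-you-use model can be illustrated with a tiny Python sketch (the $0.10/hour rate below is an assumed figure for illustration, not an actual EC2 price):

```python
def ec2_on_demand_cost(hours_used, hourly_rate):
    """Pay-as-you-go: cost is hours actually used times the hourly rate.

    hourly_rate is illustrative; real EC2 prices vary by instance type and region.
    """
    return hours_used * hourly_rate

# 3 instances running 8 hours/day for 20 working days at an assumed $0.10/hour
print(ec2_on_demand_cost(3 * 8 * 20, 0.10))  # 48.0
```

Contrast this with owning servers: the same workload on purchased hardware would incur the full capital cost whether the machines were busy or idle.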
Auto Scaling
Auto Scaling allows you to scale your Amazon EC2 capacity up or down automatically according to
conditions.
Auto Scaling is particularly well suited for applications that experience hourly, daily, or weekly
variability in usage.
Auto Scaling is enabled by Amazon CloudWatch and available at no additional charge beyond
Amazon CloudWatch fees.
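A target-tracking scaling decision of the kind Auto Scaling performs can be sketched in Python (the function, target, and bounds are illustrative; real Auto Scaling policies are configured through CloudWatch alarms, not application code):

```python
def desired_capacity(current, avg_cpu, target_cpu=50.0,
                     min_instances=1, max_instances=10):
    """Target-tracking style scaling: keep average CPU near target_cpu."""
    if avg_cpu <= 0:
        return min_instances
    # Scale the fleet in proportion to how far CPU is from the target
    desired = round(current * avg_cpu / target_cpu)
    return max(min_instances, min(max_instances, desired))

print(desired_capacity(4, avg_cpu=80))  # 6  (scale out: 4 * 80/50 = 6.4)
print(desired_capacity(4, avg_cpu=20))  # 2  (scale in)
```

The min/max clamp mirrors the minimum and maximum group sizes of an Auto Scaling group, which keep costs bounded while guaranteeing a baseline of capacity.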
Elastic Load Balancing
Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon
EC2 instances. It enables you to achieve even greater fault tolerance in your applications, seamlessly
providing the amount of load balancing capacity needed in response to incoming application traffic.
Customers can enable Elastic Load Balancing within a single Availability Zone or across multiple
zones for even more consistent application performance.
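The round-robin distribution at the heart of basic load balancing can be sketched as follows (instance names are hypothetical; Elastic Load Balancing also performs health checks and connection draining, which this sketch omits):

```python
import itertools

class RoundRobinBalancer:
    """Distribute incoming requests evenly across a pool of instances."""
    def __init__(self, instances):
        self.instances = list(instances)
        self._cycle = itertools.cycle(self.instances)

    def route(self):
        # Each call hands the next request to the next instance in turn
        return next(self._cycle)

lb = RoundRobinBalancer(["ec2-a", "ec2-b", "ec2-c"])
print([lb.route() for _ in range(5)])
# ['ec2-a', 'ec2-b', 'ec2-c', 'ec2-a', 'ec2-b']
```

Placing the instances in different Availability Zones means the balancer can keep routing traffic even if one zone goes down, which is the fault-tolerance benefit the text describes.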
b) Illustrate the Aneka cloud application platform in respect of private cloud. (4)
ANSWER: Aneka: Cloud Application Platform -Integration of Private and Public Clouds
Aneka is Manjrasoft Pty. Ltd.’s solution for developing, deploying, and managing cloud applications.
Aneka is an Application Platform-as-a-Service (Aneka PaaS) for Cloud Computing. It acts as a
framework for building customized applications and deploying them on either public or private
Clouds. One of the key features of Aneka is its support for provisioning resources on different public
Cloud providers such as Amazon EC2, Windows Azure and GoGrid.
Aneka is a market oriented Cloud development and management platform with rapid application
development and workload distribution capabilities.
Aneka is an integrated middleware package which allows you to seamlessly build and manage an
interconnected network in addition to accelerating development, deployment and management of
distributed applications using Microsoft .NET frameworks on these networks.
It is market oriented since it allows you to build, schedule, provision and monitor results using pricing,
accounting, QoS/SLA services in private and/or public (leased) network environments.
One of Aneka’s key advantages is its extensible set of application programming interfaces (APIs)
associated with different types of programming models—such as Task, Thread, and MapReduce—
used for developing distributed applications, integrating new capabilities into the cloud, and
supporting different types of cloud deployment models: public, private, and hybrid.
These features differentiate Aneka from infrastructure management software and characterize it as a
platform for developing, deploying, and managing execution of applications on various types of
clouds.
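The MapReduce programming model mentioned above can be sketched in a few lines of Python (this is a single-process illustration of the model, not Aneka's .NET API):

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal single-process MapReduce: map, group by key, then reduce."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):  # map phase emits (key, value) pairs
            groups[key].append(value)
    # Reduce phase combines all values that share a key
    return {key: reducer(key, values) for key, values in groups.items()}

# Classic word count expressed in the model
lines = ["public cloud", "private cloud"]
counts = map_reduce(
    lines,
    mapper=lambda line: [(w, 1) for w in line.split()],
    reducer=lambda key, values: sum(values),
)
print(counts)  # {'public': 1, 'cloud': 2, 'private': 1}
```

In a platform like Aneka, the map and reduce tasks would be distributed across worker nodes in the cloud rather than run in one process, but the programming model the developer sees is the same.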
c) What are Dropbox and iCloud? Explain the applications of both. (6)
ANSWER:
Dropbox is a file hosting service operated by the American company Dropbox, Inc., headquartered in San
Francisco, California, that offers cloud storage, file synchronization, personal cloud, and client software.
Dropbox was founded in 2007 by MIT students Drew Houston and Arash Ferdowsi as a startup company,
with initial funding from seed accelerator Y Combinator.
Applications of Dropbox: Mailbox, Carousel, Dropbox Paper, and user-created projects such as syncing
instant messaging chat logs, BitTorrent management, password management, remote application launching and
system monitoring, and use as a free web hosting service.
iCloud is a cloud storage and cloud computing service from Apple Inc. launched on October 12, 2011. As of
2018, the service had an estimated 850 million users, up from 782 million users in 2016.
iCloud enables users to store data such as documents, photos, and music on remote servers for download
to iOS, macOS or Windows devices, to share and send data to other users, and to manage their Apple devices
if lost or stolen.
iCloud also provides the means to wirelessly back up iOS devices directly to iCloud, instead of being reliant
on manual backups to a host Mac or Windows computer using iTunes. Service users are also able to share
photos, music, and games instantly by linking accounts via AirDrop wireless.
Applications of iCloud: Backup and restore, Back to My Mac, Email, Find My Friends, Find My iPhone,
iCloud Keychain, iTunes, Match, iWork for iCloud, Photo Stream, iCloud Photos, Storage, iCloud Drive,
Messages on iCloud.
(OR)
5. a) Compare Amazon, Azure and Google app Engine. (6)
ANSWER: Here's the summary cloud comparison between AWS vs. Azure vs. Google:
Amazon Web Services – With a vast tool set that continues to grow exponentially, Amazon’s
capabilities are unmatched. Yet its cost structure can be confusing, and its singular focus on public
cloud rather than hybrid cloud or private cloud means that interoperating with your data center isn't
AWS's top priority.
Microsoft Azure – A close competitor to AWS with an exceptionally capable cloud infrastructure. If
you’re an enterprise customer, Azure speaks your language – few companies have the enterprise
background (and Windows support) as Microsoft. Azure knows you still run a data center, and the
Azure platform works hard to interoperate with data centers; hybrid cloud is a true strength.
Google Cloud – A well-funded underdog in the competition, Google entered the cloud market later
and doesn't have the enterprise focus that helps draw corporate customers. But its technical expertise is
profound, and its industry-leading tools in deep learning and artificial intelligence, machine
learning and data analytics are significant advantages.
b) Describe some example of CRM and ERP based on cloud computing. (6)
ANSWER:
RETAIL_ORG was established to address the needs of African immigrants who live in New York City or
its suburbs and seek authentic groceries, clothing, and crafts from Africa. The firm is a 100% family-owned
fair-trade business that imports directly from the African homeland and also exports commodities to other
parts of the continent and the Caribbean for the convenience of customers.
Workday: Workday is a cloud-based enterprise resource planning (ERP) suite suitable for global
businesses of all sizes in a variety of industry verticals. Workday delivers user and administrative tools across
financials, HR, and planning.
SAP Business ByDesign: SAP Business ByDesign is software as a service (SaaS) enterprise resource
planning (ERP) system. The software is designed to serve all the key needs of a business and offers
applications for customer relationship management (CRM).
Salesflare: Salesflare is a cloud-based CRM solution which helps startups and small businesses automate the
process of data entry. It gathers data from social profiles, emails and professional email signatures.
HubSpot CRM:With its cloud-based, customer relationship management platform, HubSpot helps companies
of all sizes track and nurture leads and analyze business metrics. HubSpot is suitable for any B2B or B2C
business in a variety of segments.
Zoho CRM: Zoho offers a cloud-based customer relationship management (CRM) solution tailored to the
needs of small and midsized businesses. Its interface includes CRM tools such as sales and marketing
automation, customer support, and a help desk.
c) Explain Cloud federation in details. (4)
ANSWER: Cloud federation is the practice of interconnecting the cloud computing environments of two or
more service providers for the purpose of load balancing traffic and accommodating spikes in demand.
Cloud federation requires one provider to wholesale or rent computing resources to another cloud provider.
Those resources become a temporary or permanent extension of the buyer's cloud computing environment,
depending on the specific federation agreement between providers.
Cloud federation offers two substantial benefits to cloud providers. First, it allows providers to earn revenue
from computing resources that would otherwise be idle or underutilized. Second, cloud federation enables
cloud providers to expand their geographic footprints and accommodate sudden spikes in demand without
having to build new points-of-presence (POPs).
Service providers strive to make all aspects of cloud federation—from cloud provisioning to billing support
systems (BSS) and customer support— transparent to customers. When federating cloud services with a
partner, cloud providers will also establish extensions of their customer-facing service-level agreements
(SLAs) into their partner provider's data centers.
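The load-spillover idea behind federation can be sketched as a simple routing decision (the function name and capacity figures are illustrative):

```python
def route_request(local_load, local_capacity, partner_available):
    """Serve from the home cloud unless it is full; then burst to a partner."""
    if local_load < local_capacity:
        return "local"
    if partner_available:
        return "partner"  # federated resources rented from another provider
    return "reject"

print(route_request(80, 100, partner_available=True))    # local
print(route_request(100, 100, partner_available=True))   # partner
print(route_request(100, 100, partner_available=False))  # reject
```

The "partner" branch is exactly the case the federation agreement governs: the partner's rented capacity behaves as a temporary extension of the home provider's cloud, under the extended SLA terms described above.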