IEEE Projects 2016-2017: Updated Top List of Cloud Computing Projects for ME/MTech, BE/BTech Final Year Students

Apr 15, 2017


Cloud computing is an Internet-based computing pattern through which shared resources are provided to devices on demand. It is an emerging but promising paradigm for integrating mobile devices into cloud computing, and the integration takes place in a cloud-based hierarchical multi-user data-sharing environment. When mobile devices are integrated into cloud computing, security issues such as data confidentiality and user authority may arise in the mobile cloud computing system, and these are regarded as the main constraints on the development of mobile cloud computing. In order to provide safe and secure operation, this paper proposes a hierarchical access control method using modified hierarchical attribute-based encryption (M-HABE) and a modified three-layer structure. In a specific mobile cloud computing model, enormous amounts of data from all kinds of mobile devices, such as smartphones, feature phones and PDAs, can be controlled and monitored by the system, and the data may be sensitive with respect to unauthorized third parties yet restricted even for legitimate users. The novel scheme mainly focuses on data processing, storage and access, and is designed to ensure that users with the proper authority obtain the corresponding classified data while preventing illegal users and unauthorized legitimate users from accessing it, which makes it highly suitable for mobile cloud computing paradigms.
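Independently of the M-HABE construction itself, the hierarchical access idea the abstract describes can be sketched as a clearance-level check; the role names and levels below are invented for illustration, and the paper enforces this cryptographically rather than with a lookup table:

```python
# Toy sketch of hierarchical access control: a user may read data
# classified at or below their own clearance level. The roles and
# levels are hypothetical; M-HABE enforces this model with encryption.

CLEARANCE = {"admin": 3, "doctor": 2, "nurse": 1, "guest": 0}  # invented roles

def can_access(user_role: str, data_level: int) -> bool:
    """A user may access data whose classification level does not
    exceed the user's clearance; unknown roles are denied everything."""
    return CLEARANCE.get(user_role, -1) >= data_level
```

In this model an unauthorized legitimate user (e.g. a "nurse" asking for level-3 data) is rejected exactly like an illegal user, matching the access behaviour the abstract describes.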

ETPL CLD-001: A Modified Hierarchical Attribute-based Encryption Access Control Method for Mobile Cloud Computing

Quality of cloud service (QoS) is one of the crucial factors for the success of cloud providers in mobile cloud computing. Context-awareness is a popular method for automatic awareness of the mobile environment and for choosing the most suitable cloud provider. Lack of context information may harm users' confidence in the application, rendering it useless. Thus, mobile devices need to be constantly aware of the environment and to test the performance of each cloud provider, which is inefficient and wastes energy. Crowdsourcing is a promising technique for discovering and selecting cloud services in order to provide intelligent, efficient, and stable discovery of services for mobile users based on group choice. This article introduces a crowdsourcing-based, QoS-supported mobile cloud service framework that achieves mobile users' satisfaction by sensing their context information and providing appropriate services to each user. Based on a user's activity context, social context, service context, and device context, our framework dynamically adapts cloud services to requests in different kinds of scenarios. The context-awareness-based management approach efficiently achieves a reliable cloud service platform that supplies quality of service to mobile devices.

ETPL CLD-002: Using Crowdsourcing to Provide QoS for Mobile Cloud Computing


Offering real-time data security for petabytes of data is important for cloud computing. A recent survey on cloud security states that the security of users' data has the highest priority as well as concern. We believe this can only be achieved with an approach that is systematic, adoptable and well-structured. Therefore, this paper develops a framework known as the Cloud Computing Adoption Framework (CCAF), customized for securing cloud data. The paper explains the overview, rationale and components of the CCAF for protecting data security. CCAF is illustrated by a system design based on the requirements, and the implementation is demonstrated by the CCAF multi-layered security. Since our Data Center holds 10 petabytes of data, providing real-time protection and quarantine is a huge task. We use Business Process Modeling Notation (BPMN) to simulate how data is in use. The BPMN simulation allows us to evaluate the chosen security performance before actual implementation. Results show that taking control of a security breach can take between 50 and 125 hours, which means additional security is required to ensure all data is well protected in the crucial 125 hours. The paper also demonstrates that CCAF multi-layered security can protect data in real time with three layers of security: 1) firewall and access control; 2) identity management and intrusion prevention; and 3) convergent encryption. To validate CCAF, we undertook two sets of ethical-hacking experiments involving penetration testing with 10,000 trojans and viruses. The CCAF multi-layered security blocked 9,919 viruses and trojans, which could be destroyed in seconds, and the remaining ones could be quarantined or isolated. The experiments show that although the blocking percentage decreases under continuous injection of viruses and trojans, 97.43 percent of them can be quarantined. Our CCAF multi-layered security performs on average 20 percent better than the single-layered approach, which could only block 7,438 viruses and trojans. CCAF is more effective when combined with BPMN simulation to evaluate security processes and penetration testing results.
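The multi-layered idea can be sketched as a pipeline of filters, each blocking a share of whatever threats reach it; the per-layer rates below are invented for illustration and are not the paper's measured figures (which report 9,919 of 10,000 samples blocked overall):

```python
# Sketch of multi-layered filtering in the spirit of CCAF's three layers
# (firewall/access control, identity management/intrusion prevention,
# convergent encryption). The per-layer detection rates are hypothetical.

def filter_through_layers(samples: int, layer_rates: list[float]) -> tuple[int, int]:
    """Return (blocked, remaining) after passing `samples` threats
    through successive layers, each blocking a fraction of the rest."""
    remaining, blocked = samples, 0
    for rate in layer_rates:
        caught = int(remaining * rate)
        blocked += caught
        remaining -= caught
    return blocked, remaining

# Three layers with illustrative 90%, 80% and 60% catch rates.
blocked, remaining = filter_through_layers(10_000, [0.90, 0.80, 0.60])
```

The point of the layering is visible in the numbers: even though no single layer is near-perfect, the residue shrinks multiplicatively, leaving only the small remainder to quarantine.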

ETPL CLD-003: Towards Achieving Data Security with the Cloud Computing Adoption Framework

Multiple resource procurement from several cloud vendors participating in bidding is addressed in this paper. This is done by assigning dynamic pricing for these resources. Since we consider multiple resources to be procured from several cloud vendors bidding in an auction, the problem turns out to be one of a combinatorial auction. We pre-process the user requests, analyze the auction and declare a set of vendors bidding for the auction as winners based on the Combinatorial Auction Branch on Bids (CABOB) model. Simulations using our approach with prices procured from several cloud vendors' datasets show its effectiveness in multiple resource procurement in the realm of cloud computing.
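The winner-determination problem that CABOB attacks with branch-and-bound on bids can be illustrated by brute-force search on a toy instance; the vendors, bundles and prices below are made up, and exhaustive search is only feasible at this scale:

```python
# Brute-force winner determination for a tiny combinatorial auction.
# CABOB solves the same problem with branch-and-bound; this exhaustive
# version only serves to show what "winner determination" means.
from itertools import combinations

# Each bid: (vendor, bundle of resources offered, price asked) - invented data.
bids = [
    ("vendorA", frozenset({"cpu", "ram"}), 5.0),
    ("vendorB", frozenset({"cpu"}), 3.0),
    ("vendorC", frozenset({"ram"}), 3.0),
    ("vendorD", frozenset({"cpu", "ram", "disk"}), 9.0),
    ("vendorE", frozenset({"disk"}), 2.0),
]
needed = frozenset({"cpu", "ram", "disk"})

def cheapest_cover(bids, needed):
    """Find the cheapest set of disjoint bundles covering every needed resource."""
    best_cost, best_set = float("inf"), None
    for k in range(1, len(bids) + 1):
        for combo in combinations(bids, k):
            bundles = [b[1] for b in combo]
            # bundles must be disjoint and together cover all needed resources
            if sum(len(s) for s in bundles) == len(needed) and \
               frozenset().union(*bundles) == needed:
                cost = sum(b[2] for b in combo)
                if cost < best_cost:
                    best_cost, best_set = cost, {b[0] for b in combo}
    return best_cost, best_set
```

Here the cheapest exact cover is vendorA's {cpu, ram} bundle plus vendorE's {disk} at a total of 7.0, beating both the single big bundle and the three single-resource bids.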

ETPL CLD-004: A Combinatorial Auction Mechanism for Multiple Resource Procurement in Cloud Computing


With the booming cloud computing industry, computational resources are readily and elastically available to customers. In order to attract customers with various demands, most Infrastructure-as-a-Service (IaaS) cloud service providers offer several pricing strategies, such as pay as you go, pay less per unit when you use more (the so-called volume discount), and pay even less when you reserve. The diverse pricing schemes among different IaaS service providers, or even within the same provider, form a complex economic landscape that nurtures the market of cloud brokers. By strategically scheduling multiple customers' resource requests, a cloud broker can fully take advantage of the discounts offered by cloud service providers. In this paper, we focus on how a broker can help a group of customers to fully utilize the volume discount pricing strategy offered by cloud service providers through cost-efficient online resource scheduling. We present a randomized online stack-centric scheduling algorithm (ROSA) and theoretically prove the lower bound of its competitive ratio. Three special cases of the offline concave cost scheduling problem and the corresponding optimal algorithms are introduced. Our simulation shows that ROSA achieves a competitive ratio close to the theoretical lower bound under the special cases. Trace-driven simulation using Google cluster data demonstrates that ROSA is superior to conventional online scheduling algorithms in terms of cost saving.
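Why a broker helps under volume discounts can be sketched with a hypothetical concave tariff: because the marginal price per unit never increases, buying aggregated demand never costs more than buying separately. The tier sizes and rates below are invented for illustration:

```python
# Illustrative concave "pay less per unit when you use more" tariff.
# Tiers are hypothetical: first 100 units at 1.0, next 400 at 0.8,
# anything beyond at 0.6 per unit.

def price(units: int) -> float:
    """Total cost of `units` under the marginal-tier volume discount."""
    tiers = [(100, 1.0), (400, 0.8), (float("inf"), 0.6)]
    cost, left = 0.0, units
    for size, rate in tiers:
        take = min(left, size)
        cost += take * rate
        left -= take
        if left <= 0:
            break
    return cost

# Two customers each needing 300 units: separately vs. via a broker.
separate = price(300) + price(300)   # each pays the early, expensive tiers
brokered = price(600)                # aggregated demand reaches cheaper tiers
```

Concavity of the total-cost curve is exactly what makes the brokered purchase cheaper, and it is the property ROSA's scheduling exploits.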

ETPL CLD-005: Online Resource Scheduling Under Concave Pricing for Cloud Computing

Never before has data sharing been more convenient, thanks to the rapid development and wide adoption of cloud computing. However, how to ensure the security of cloud users' data has become one of the main obstacles hindering cloud computing from extensive adoption. Proxy re-encryption serves as a promising solution for securing data sharing in cloud computing. It enables a data owner to encrypt shared data in the cloud under its own public key, which is then transformed by a semi-trusted cloud server into an encryption intended for the legitimate recipient, for access control. This paper gives a solid and inspiring survey of proxy re-encryption from different perspectives to offer a better understanding of this primitive. In particular, we review the state of the art of proxy re-encryption by investigating the design philosophy, examining the security models and comparing the efficiency and security proofs of existing schemes. Furthermore, the potential applications and extensions of proxy re-encryption are also discussed. Finally, the paper concludes with a summary of possible future work.

ETPL CLD-006: A Survey of Proxy Re-Encryption for Secure Data Sharing in Cloud Computing


With the popularity of cloud computing, there have been increasing concerns about its security and privacy. Since the cloud computing environment is distributed and untrusted, data owners have to encrypt outsourced data to enforce confidentiality. Therefore, how to achieve practicable access control of encrypted data in an untrusted environment is an urgent issue that needs to be solved. Attribute-Based Encryption (ABE) is a promising scheme suitable for access control in cloud storage systems. This paper proposes a hierarchical attribute-based access control scheme with constant-size ciphertext. The scheme is efficient because the ciphertext length and the number of bilinear pairing evaluations are fixed at a constant, and its computation cost in the encryption and decryption algorithms is low. Moreover, the hierarchical authorization structure of our scheme reduces the burden and risk of a single-authority scenario. We prove the scheme is CCA2 secure under the decisional q-Bilinear Diffie-Hellman Exponent assumption. In addition, we implement our scheme and analyse its performance. The analysis results show the proposed scheme is efficient, scalable, and fine-grained in dealing with access control for outsourced data in cloud computing.

ETPL CLD-007: Attribute-based Access Control with Constant-size Ciphertext in Cloud Computing

Mobile systems are gaining more and more importance, and new promising paradigms like Mobile Cloud Computing are emerging. Mobile Cloud Computing provides an infrastructure where data storage and processing can happen outside the mobile node. Specifically, there is major interest in the use of services obtained by taking advantage, in a transparent way, of the distributed resource pooling provided by nearby mobile nodes. This kind of system is useful in application domains such as emergencies, education and tourism. However, these systems are commonly based on dynamic network topologies, in which disconnections and network partitions can occur frequently, and thus the availability of the services is often compromised. Techniques and methods from Autonomic Computing can be applied to Mobile Cloud Computing to build dependable service models that take into account changes in the context. In this work, a context-aware software architecture is proposed to support the availability of services deployed in mobile and dynamic network environments. The proposal is based on a service replication scheme together with a self-configuration approach for the activation/hibernation of the replicas of the service, depending on relevant context information from the mobile system. To that end, an election algorithm has been designed and implemented.

ETPL CLD-008: A Context-Aware Architecture Supporting Service Availability in Mobile Cloud Computing


With the development of cloud computing, outsourcing data to cloud servers has attracted much attention. To guarantee security and achieve flexible, fine-grained file access control, attribute-based encryption (ABE) was proposed and used in cloud storage systems. However, user revocation is the primary issue in ABE schemes. In this article, we provide a ciphertext-policy attribute-based encryption (CP-ABE) scheme with efficient user revocation for cloud storage systems. The issue of user revocation can be solved efficiently by introducing the concept of a user group: when any user leaves, the group manager updates the private keys of all users except those who have been revoked. Additionally, CP-ABE schemes have a heavy computation cost, as it grows linearly with the complexity of the access structure. To reduce the computation cost, we outsource the high computation load to cloud service providers without leaking file contents or secret keys. Notably, our scheme can withstand collusion attacks performed by revoked users cooperating with existing users. We prove the security of our scheme under the divisible computation Diffie-Hellman (DCDH) assumption. Our experimental results show that the computation cost for local devices is relatively low and can be constant, making the scheme suitable for resource-constrained devices.
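The group-based revocation idea can be sketched with a toy group manager that rotates a version counter standing in for the group key; real CP-ABE revocation updates attribute private keys cryptographically, so everything below is an invented stand-in:

```python
# Toy sketch of revocation via a user group: when a member leaves, the
# group manager issues fresh keys to the remaining members only, so the
# revoked user's old key stops working. The integer "key" is a stand-in
# for cryptographic key material.
import itertools

class GroupManager:
    def __init__(self, members):
        self._version = itertools.count()
        self.current_key = next(self._version)   # stand-in group key
        self.keys = {m: self.current_key for m in members}

    def revoke(self, user):
        self.keys.pop(user, None)
        self.current_key = next(self._version)   # rotate the group key
        for m in self.keys:                      # re-key remaining members only
            self.keys[m] = self.current_key

    def can_decrypt(self, user):
        return self.keys.get(user) == self.current_key
```

The essential property is visible even in the toy: after revocation the departed user still holds a key, but it no longer matches the current group key, while every remaining member was transparently re-keyed.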

ETPL CLD-009: Flexible and Fine-Grained Attribute-Based Data Storage in Cloud Computing

To address the computing challenge of 'big data', a number of data-intensive computing frameworks (e.g., MapReduce, Dryad, Storm and Spark) have emerged and become popular. YARN is a de facto resource management platform that enables these frameworks to run together in a shared system. However, we observe that, in a cloud computing environment, the fair resource allocation policy implemented in YARN is not suitable, because its memoryless resource allocation fashion leads to violations of a number of good properties of shared computing systems. This paper attempts to address these problems for YARN. Both single-level and hierarchical resource allocations are considered. For single-level resource allocation, we propose a novel fair resource allocation mechanism called Long-Term Resource Fairness (LTRF). For hierarchical resource allocation, we propose Hierarchical Long-Term Resource Fairness (H-LTRF) by extending LTRF. We show that both LTRF and H-LTRF can address the fairness problems of the current resource allocation policy and are thus suitable for cloud computing. Finally, we have developed LTYARN by implementing LTRF and H-LTRF in YARN, and our experiments show that it achieves better resource fairness than the existing fair schedulers of YARN.
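The contrast with memoryless fairness can be sketched by an allocator that remembers cumulative usage and always favors the user who has received the least so far; the single-slot model and demand traces below are invented for illustration and are not LTRF's actual mechanism:

```python
# Sketch of the long-term idea: instead of equalizing each round in
# isolation (memoryless fairness), track cumulative allocations and
# grant the next slot to the demanding user with the smallest total.

def long_term_allocate(demands_per_round: list[dict[str, int]]) -> dict[str, int]:
    """One indivisible resource slot per round goes to the demanding
    user with the smallest cumulative allocation (ties broken by name)."""
    cumulative: dict[str, int] = {}
    for demands in demands_per_round:
        claimants = [u for u, d in demands.items() if d > 0]
        if not claimants:
            continue
        winner = min(claimants, key=lambda u: (cumulative.get(u, 0), u))
        cumulative[winner] = cumulative.get(winner, 0) + 1
    return cumulative

# User "b" sits out round 3; long-term accounting lets "a" use that slot
# without "a" being penalized later into permanent imbalance.
rounds = [{"a": 1, "b": 1}, {"a": 1, "b": 1}, {"a": 1, "b": 0}, {"a": 1, "b": 1}]
totals = long_term_allocate(rounds)
```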

ETPL CLD-010: Fair Resource Allocation for Data-Intensive Computing in the Cloud



ETPL CLD-011: Secure Data Sharing in Cloud Computing Using Revocable-Storage Identity-Based Encryption

Cloud computing technologies have enabled a new paradigm for advanced product development powered by the provision and subscription of computational services in a multi-tenant distributed simulation environment. The description of computational resources and their optimal allocation among tenants with different requirements holds the key to implementing effective software systems for such a paradigm. To address this issue, a systematic framework for monitoring, analyzing and improving system performance is proposed in this research. Specifically, a radial basis function neural network is established to transform simulation tasks with abstract descriptions into specific resource requirements in terms of their quantities and qualities. Additionally, a novel mathematical model is constructed to represent the complex resource allocation process in a multi-tenant computing environment by considering priority-based tenant satisfaction, total computational cost and multi-level load balance. To achieve optimal resource allocation, an improved multi-objective genetic algorithm is proposed based on the elitist archive and the K-means approaches. As demonstrated in a case study, the proposed framework and methods can effectively support the cloud simulation paradigm and efficiently meet tenants' computational requirements in a distributed environment.

ETPL CLD-012: Knowledge-Based Resource Allocation for Collaborative Simulation Development in a Multi-tenant Cloud Computing Environment


Cloud computing is becoming increasingly popular as data owners outsource their data to public cloud servers while allowing intended data users to retrieve the data stored in the cloud. This kind of computing model brings challenges to the security and privacy of data stored in the cloud. Attribute-based encryption (ABE) technology has been used to design fine-grained access control systems, providing one good method for solving the security issues in the cloud setting. However, the computation cost and ciphertext size in most ABE schemes grow with the complexity of the access policy. Outsourced ABE (OABE) with a fine-grained access control system can largely reduce the computation cost for users who want to access encrypted data stored in the cloud, by outsourcing the heavy computation to the cloud service provider (CSP). However, the amount of encrypted files stored in the cloud is becoming very large, which hinders efficient query processing. To deal with the above problem, we present a new cryptographic primitive called attribute-based encryption with outsourced key-issuing and outsourced decryption, which can implement a keyword search function (KSF-OABE). The proposed KSF-OABE scheme is proved secure against chosen-plaintext attack (CPA). The CSP performs the partial decryption task delegated by the data user without knowing anything about the plaintext. Moreover, the CSP can perform encrypted keyword search without knowing anything about the keywords embedded in the trapdoor.

ETPL CLD-013: KSF-OABE: Outsourced Attribute-Based Encryption with Keyword Search Function for Cloud Storage

Cloud computing is rapidly changing the digital service landscape. A proliferation of Cloud providers has emerged, increasing the difficulty of consumer decisions. Trust issues have been identified as a factor holding back Cloud adoption. The risks and challenges inherent in the adoption of Cloud services are well recognised in the computing literature. In conjunction with these risks, the relative novelty of the online environment as a context for the provision of business services can increase consumer perceptions of uncertainty. This uncertainty is worsened in a Cloud context by the lack of transparency, from the consumer perspective, into the service types, operational conditions and quality of service offered by the diverse providers. Previous approaches have failed to provide an appropriate medium for communicating trust and trustworthiness in Clouds. A new strategy is required to improve consumer confidence and trust in Cloud providers. This paper presents the operationalisation of a trust label system designed to communicate trust and trustworthiness in Cloud services. We describe the technical details and implementation of the trust label components. Based on a use case scenario, an initial evaluation was carried out to test its operation and its usefulness for increasing consumer trust in Cloud services.

ETPL CLD-014: A Trust Label System for Communicating Trust in Cloud Services


The prominence of cloud computing has led to an unprecedented proliferation in the number of Web services deployed in cloud data centres. In parallel, service communities have recently gained increasing interest due to their ability to facilitate discovery, composition, and resource scaling in large-scale service markets. The problem is that traditional community formation models may work well when all services reside in a single cloud but cannot support a multi-cloud environment. In particular, these models overlook the presence of malicious services that misbehave to illegally maximize their benefits, a risk that arises from grouping together services owned by different providers. Besides, they rely on a centralized architecture whereby a central entity regulates the community formation, which contradicts the distributed nature of cloud-based services. In this paper, we propose a three-fold solution that includes: a trust establishment framework that is resilient to collusion attacks designed to mislead trust results; a bootstrapping mechanism that capitalizes on the endorsement concept in online social networks to assign initial trust values; and a trust-based hedonic coalitional game that enables services to distributively form trustworthy multi-cloud communities. Experiments conducted on a real-life dataset demonstrate that our model minimizes the number of malicious services compared to three state-of-the-art cloud federation and service community models.

ETPL CLD-015: Towards Trustworthy Multi-Cloud Services Communities: A Trust-based Hedonic Coalitional Game

The significant growth in cloud computing has led to an increasing number of cloud providers, each offering their service under different conditions – one might be more secure whilst another might be less expensive or more reliable. At the same time, user applications have become more and more complex. Often, they consist of a diverse collection of software components, and need to handle variable workloads, which poses different requirements on the infrastructure. Therefore, many organisations are considering using a combination of different clouds to satisfy these needs. This raises, however, the non-trivial issue of how to select the best combination of clouds to meet the application requirements. This paper presents a novel algorithm to deploy workflow applications on federated clouds. Firstly, we introduce an entropy-based method to quantify the most reliable workflow deployments. Secondly, we apply an extension of the Bell-LaPadula Multi-Level security model to address application security requirements. Finally, we optimise deployment in terms of its entropy and also its monetary cost, taking into account the cost of computing power, data storage and inter-cloud communication. We implemented our new approach and compared it against two existing scheduling algorithms: the Extended Dynamic Constraint Algorithm (EDCA) and Extended Biobjective Dynamic Level Scheduling (EBDLS). We show that our algorithm can find deployments that are of equivalent reliability but are less expensive and meet security requirements. We have validated our solution through a set of realistic scientific workflows, using well-known cloud simulation tools (WorkflowSim and DynamicCloudSim) and a realistic cloud-based data analysis system (e-Science Central).
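One plausible reading of the entropy-based reliability idea is to score a candidate deployment by the Shannon entropy of its components' failure behaviour, preferring deployments whose components behave predictably; this mapping is an illustrative guess, not the paper's actual formulation:

```python
# Illustrative entropy score for a deployment: sum of binary entropies
# H(p) = -p*log2(p) - (1-p)*log2(1-p) over component failure
# probabilities. Components that almost never (or almost always) fail
# are predictable, giving low entropy. The failure probabilities below
# are invented.
import math

def deployment_entropy(failure_probs: list[float]) -> float:
    def h(p: float) -> float:
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return sum(h(p) for p in failure_probs)

stable = deployment_entropy([0.01, 0.02])   # predictable components
flaky = deployment_entropy([0.5, 0.5])      # maximally uncertain components
```

Under this score the predictable deployment would be preferred, which matches the intuition of "quantifying the most reliable workflow deployments", whatever the paper's exact entropy definition is.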

ETPL CLD-016: Cost Effective, Reliable and Secure Workflow Deployment over Federated Clouds


Search over encrypted data is a critically important enabling technique in cloud computing, where encryption-before-outsourcing is a fundamental solution to protecting user data privacy in the untrusted cloud server environment. Many secure search schemes have focused on the single-contributor scenario, where the outsourced dataset or the secure searchable index of the dataset is encrypted and managed by a single owner, typically based on symmetric cryptography. In this paper, we focus on a different yet more challenging scenario where the outsourced dataset can be contributed by multiple owners and searched by multiple users, i.e., the multi-user multi-contributor case. Inspired by attribute-based encryption (ABE), we present the first attribute-based keyword search scheme with efficient user revocation (ABKS-UR) that enables scalable fine-grained (i.e., file-level) search authorization. Our scheme allows multiple owners to encrypt and outsource their data to the cloud server independently. Users can generate their own search capabilities without relying on an always-online trusted authority. Fine-grained search authorization is also implemented via the owner-enforced access policy on the index of each file. Further, by incorporating proxy re-encryption and lazy re-encryption techniques, we are able to delegate the heavy system update workload during user revocation to the resourceful semi-trusted cloud server. We formalize the security definition and prove the proposed ABKS-UR scheme selectively secure against chosen-keyword attack. To build the confidence of data users in the proposed secure search system, we also design a search result verification scheme. Finally, performance evaluation shows the efficiency of our scheme.

ETPL CLD-017: Protecting Your Right: Verifiable Attribute-Based Keyword Search with Fine-Grained Owner-Enforced Search Authorization in the Cloud

Allocating service capacities in cloud computing is often based on the assumption that they are unlimited and can be used at any time. However, available service capacities change with workload and, from the cloud provider's perspective, cannot satisfy users' requests at any time, because cloud services can be shared by multiple tasks. Cloud service providers offer available time slots for new users' requests based on available capacities. In this paper, we consider workflow scheduling with deadlines and time slot availability in cloud computing. An iterated heuristic framework is presented for the problem under study, which mainly consists of initial solution construction, improvement, and perturbation. Three initial solution construction strategies, two greedy- and fair-based improvement strategies and a perturbation strategy are proposed. Different strategies in the three phases result in several heuristics. Experimental results show that different initial solution and improvement strategies have different effects on solution quality.
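The construct-improve-perturb loop of such an iterated heuristic framework can be sketched generically; the toy objective below (ordering tasks to minimize total completion time) is a stand-in for illustration, not the paper's deadline/time-slot workflow model:

```python
# Generic skeleton of an iterated heuristic: construct an initial
# solution, improve it, then repeatedly perturb and re-improve to escape
# local optima. Plugging in different construct/improve/perturb
# strategies yields different heuristics, as the abstract describes.
import random

def iterated_heuristic(construct, improve, perturb, cost, iterations=50):
    best = improve(construct())
    for _ in range(iterations):
        candidate = improve(perturb(best))
        if cost(candidate) < cost(best):
            best = candidate
    return best

# Stand-in instance: order three tasks to minimize total completion time.
durations = {"t1": 3, "t2": 1, "t3": 2}

def cost(order):
    t, total = 0, 0
    for task in order:
        t += durations[task]
        total += t                   # completion time of each task
    return total

construct = lambda: list(durations)  # arbitrary initial order
def improve(order):                  # greedy strategy: shortest task first
    return sorted(order, key=lambda task: durations[task])
def perturb(order):                  # random swap of two positions
    order = order[:]
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]
    return order

best = iterated_heuristic(construct, improve, perturb, cost)
```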

ETPL CLD-018: Cloud Workflow Scheduling with Deadlines and Time Slot Availability


In the cloud, to achieve access control and keep data confidential, data owners can adopt attribute-based encryption to encrypt stored data. Users with limited computing power, however, are likely to delegate most of the decryption task to the cloud servers to reduce the computing cost; as a result, attribute-based encryption with delegation has emerged. Still, there are caveats and open questions in the previous relevant works. For instance, during the delegation, the cloud servers could tamper with or replace the delegated ciphertext and return a forged computing result with malicious intent. They may also cheat eligible users by telling them they are ineligible, for the purpose of cost saving. Furthermore, the access policies used during encryption may not be flexible enough. Since a policy for general circuits achieves the strongest form of access control, our work considers a construction realizing circuit ciphertext-policy attribute-based hybrid encryption with verifiable delegation. In such a system, combined with verifiable computation and an encrypt-then-MAC mechanism, data confidentiality, fine-grained access control and the correctness of the delegated computing results are guaranteed at the same time. Besides, our scheme achieves security against chosen-plaintext attacks under the k-multilinear Decisional Diffie-Hellman assumption. Moreover, an extensive simulation campaign confirms the feasibility and efficiency of the proposed solution.

ETPL CLD-019: Circuit Ciphertext-Policy Attribute-Based Hybrid Encryption with Verifiable Delegation in Cloud Computing

Most current security solutions are based on perimeter security. However, Cloud computing breaks the organization's perimeter: when data resides in the Cloud, it resides outside the organizational bounds. This leads users to a loss of control over their data and raises reasonable security concerns that slow down the adoption of Cloud computing. Is the Cloud service provider accessing the data? Is it legitimately applying the access control policy defined by the user? This paper presents a data-centric access control solution with enriched role-based expressiveness, in which security is focused on protecting user data regardless of the Cloud service provider that holds it. Novel identity-based and proxy re-encryption techniques are used to protect the authorization model. Data is encrypted and authorization rules are cryptographically protected to preserve user data against service provider access or misbehavior. The authorization model provides high expressiveness with role hierarchy and resource hierarchy support. The solution takes advantage of the logic formalism provided by Semantic Web technologies, which enables advanced rule management such as semantic conflict detection. A proof-of-concept implementation has been developed and a working prototype deployment of the proposal has been integrated with Google services.
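The role-hierarchy expressiveness can be illustrated with a simple permission-inheritance check, where a senior role inherits the permissions of its juniors; the roles and permissions below are invented, and SecRBAC protects such rules cryptographically rather than in plain dictionaries:

```python
# Toy role hierarchy: a senior role inherits the permissions of every
# role beneath it. Roles, hierarchy and permissions are hypothetical.

JUNIORS = {"manager": ["engineer"], "engineer": ["intern"], "intern": []}
PERMITS = {"intern": {"read"}, "engineer": {"write"}, "manager": {"delete"}}

def effective_permissions(role: str) -> set:
    """Union of a role's own permissions and those of all junior roles."""
    perms = set(PERMITS.get(role, set()))
    for junior in JUNIORS.get(role, []):
        perms |= effective_permissions(junior)
    return perms
```

Hierarchy support means each permission is stated once at the level where it belongs, and seniors pick it up by inheritance instead of duplicated rules.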

ETPL CLD-020: SecRBAC: Secure Data in the Clouds


Cloud radio access network (C-RAN) has emerged as a potential candidate of the next generation

access network technology to address the increasing mobile traffic, while mobile cloud computing

(MCC) offers a prospective solution to the resource-limited mobile user in executing computation

intensive tasks. Taking full advantage of these two cloud-based techniques, C-RAN with MCC is

presented in this paper to enhance both performance and energy efficiency. In particular, this paper

studies the joint energy minimization and resource allocation in C-RAN with MCC under the time

constraints of the given tasks. We first review the energy and time model of the computation and

communication. Then, we formulate the joint energy minimization into a non-convex optimization

with the constraints of task executing time, transmitting power, computation capacity and fronthaul

data rates. This non-convex optimization is then reformulated into an equivalent convex problem based

on weighted minimum mean square error (WMMSE). The iterative algorithm is finally given to deal

with the joint resource allocation in C-RAN with mobile cloud. Simulation results confirm that the

proposed energy minimization and resource allocation solution can improve the system performance

and save energy.

ETPL

CLD - 021 Joint Energy Minimization and Resource Allocation in C-RAN with

Mobile Cloud

Due to the increasing popularity of cloud computing, more and more data owners are motivated to

outsource their data to cloud servers for great convenience and reduced cost in data management.

However, sensitive data should be encrypted before outsourcing for privacy requirements, which

makes data utilization such as keyword-based document retrieval obsolete. In this paper, we present a secure

multi-keyword ranked search scheme over encrypted cloud data, which simultaneously supports

dynamic update operations like deletion and insertion of documents. Specifically, the vector space

model and the widely-used TF x IDF model are combined in the index construction and query

generation. We construct a special tree-based index structure and propose a “Greedy Depth-first

Search” algorithm to provide efficient multi-keyword ranked search. The secure kNN algorithm is

utilized to encrypt the index and query vectors, and meanwhile ensure accurate relevance score

calculation between encrypted index and query vectors. In order to resist statistical attacks, phantom

terms are added to the index vector for blinding search results. Due to the use of our special tree-based

index structure, the proposed scheme can achieve sub-linear search time and deal with the deletion and

insertion of documents flexibly. Extensive experiments are conducted to demonstrate the efficiency of

the proposed scheme.

ETPL

CLD - 022 A Secure and Dynamic Multi-Keyword Ranked Search Scheme over

Encrypted Cloud Data
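
The TF x IDF vector space scoring used in the index and query construction above can be sketched in plaintext form (the encryption, tree index, and kNN steps are omitted; the function names are illustrative):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # Build one TF x IDF vector per document over whitespace-split terms.
    n = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = []
    for d in docs:
        tf = Counter(d.split())
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs

def rank(query, docs):
    # Score each document as the sum of TF x IDF weights of query terms,
    # then return document indices in decreasing relevance order.
    vecs = tfidf_vectors(docs)
    q = set(query.split())
    scores = [sum(v.get(t, 0.0) for t in q) for v in vecs]
    return sorted(range(len(docs)), key=lambda i: -scores[i])
```

In the paper the index and query vectors are encrypted with the secure kNN technique so the server computes the same relevance ordering without seeing the terms.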


In this paper, we develop a decentralized probabilistic method for performance optimization of cloud

services. We focus on Infrastructure-as-a-Service where the user is provided with the ability of

configuring virtual resources on demand in order to satisfy specific computational requirements. This

novel approach is strongly supported by a theoretical framework based on tail probabilities and sample

complexity analysis. It allows not only the inclusion of performance metrics for the cloud but the

incorporation of security metrics based on cryptographic algorithms for data storage. To the best of the

authors’ knowledge this is the first unified approach to provision performance and security on demand

subject to the Service Level Agreement between the client and the cloud service provider. The quality

of the service is guaranteed given certain values of accuracy and confidence. We present some

experimental results using the Amazon Web Services, Amazon Elastic Compute Cloud service to

validate our probabilistic optimization method.

ETPL

CLD - 023 Probabilistic Optimization of Resource Distribution and Encryption for

Data Storage in the Cloud
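
The "accuracy and confidence" guarantee above is the classic sample-complexity trade-off. As a sketch (using the Hoeffding bound as a plausible stand-in for the paper's tail-probability analysis), the number of performance samples needed to estimate a bounded metric's mean within epsilon at confidence 1 - delta is:

```python
import math

def required_samples(epsilon: float, delta: float) -> int:
    # Hoeffding bound: n >= ln(2/delta) / (2 * epsilon^2) samples suffice to
    # estimate the mean of a [0,1]-bounded metric to within +/- epsilon
    # with probability at least 1 - delta.
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))
```

Tighter accuracy or higher confidence both drive the required number of benchmark runs up, which is exactly the cost the decentralized method must budget for.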

Energy efficiency of data centers (DCs) has become a major concern as DCs continue to grow larger,

often hosting tens of thousands of servers or even hundreds of thousands of them. Clearly, DCs at such a

scale imply data center networks (DCNs) with a huge number of network nodes and

links. The energy consumption of this communication network has skyrocketed and is now in the same

league as the computing servers' costs. With the ever-increasing amount of data that needs to be stored and

processed in DCs, DCN traffic continues to soar drawing increasingly more power. In particular, more

than one-third of the total energy in DCs is consumed by communication links, switching and

aggregation elements. In this paper, we address the energy efficiency of data centers, explicitly taking

into account both servers and DCN. To this end, we present VPTCA, as a collective energy-efficiency

approach to data center network planning, which deals with virtual machine(VM) placement and

communication traffic configuration. VPTCA aims particularly to reduce the energy consumption of

DCN by assigning interrelated VMs into the same server or pod, which effectively helps reduce the

amount of transmission load. At the traffic-configuration layer, VPTCA optimally uses switch ports and

link bandwidth to balance the load and avoid congestion, enabling the DCN to increase its transmission

capacity and save a significant amount of network energy. In our evaluation via NS-2 simulations,

the performance of VPTCA is measured and compared with two well-known DCN management

algorithms, Global First Fit and Elastic Tree. Based on our experimental results, VPTCA outperforms

existing algorithms in providing DCN more transmission capacity with less energy consumption.

ETPL

CLD - 024 Collective Energy-Efficiency Approach to Data Center Networks

Planning
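
The core placement idea, putting the chattiest VM pairs in the same pod to keep their traffic off the network, can be sketched greedily. This is a hypothetical helper, not VPTCA's actual algorithm:

```python
def coplace(traffic, cap):
    """Greedy sketch of traffic-aware co-placement: walk VM pairs in
    decreasing traffic order and place each pair in the same pod while the
    pod's capacity (`cap` VM slots) allows. Returns a VM -> pod-index map."""
    pods, home = [], {}

    def pod_with_room(slots):
        # Index of a pod that can take `slots` more VMs; open one if needed.
        for i, p in enumerate(pods):
            if len(p) + slots <= cap:
                return i
        pods.append(set())
        return len(pods) - 1

    for (a, b), _ in sorted(traffic.items(), key=lambda kv: -kv[1]):
        if a in home and b in home:
            continue                      # both already placed
        if a in home or b in home:
            old, new = (a, b) if a in home else (b, a)
            # Join the partner's pod if it has room, else fall back.
            i = home[old] if len(pods[home[old]]) < cap else pod_with_room(1)
            pods[i].add(new)
            home[new] = i
        else:
            i = pod_with_room(2)
            pods[i].update((a, b))
            home[a] = home[b] = i
    return home
```

Pairs that exchange the most traffic are considered first, so the heaviest flows are the most likely to stay server- or pod-local.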


Fully automated provisioning and deployment of applications is one of the most essential prerequisites

to make use of the benefits of Cloud computing in order to reduce the costs for managing applications.

A huge variety of approaches, tools, and providers are available to automate the involved processes.

The DevOps community, for instance, provides tooling and reusable artifacts to implement deployment

automation in an application-oriented manner. Platform-as-a-Service frameworks are available for the

same purpose. In this work we systematically classify and characterize available deployment

approaches independently from the underlying technology used. For motivation and evaluation

purposes, we choose Web applications with different technology stacks and analyze their specific

deployment requirements. Afterwards, we provision these applications using each of the identified

types of deployment approaches in the Cloud to perform qualitative and quantitative measurements.

Finally, we discuss the evaluation results and derive recommendations to decide which deployment

approach to use based on the deployment requirements of an application. Our results show that

deployment approaches can also be efficiently combined if there is no ‘best fit’ for a particular

application.

ETPL

CLD - 025 Middleware-oriented Deployment Automation for Cloud Applications

Cloud computing is popularizing the computing paradigm in which data is outsourced to a third-party

service provider (server) for data mining. Outsourcing, however, raises a serious security issue: how

can the client of weak computational power verify that the server returned correct mining result? In

this paper, we focus on the specific task of frequent itemset mining. We consider the server that is

potentially untrusted and tries to escape from verification by using its prior knowledge of the

outsourced data. We propose efficient probabilistic and deterministic verification approaches to check

whether the server has returned correct and complete frequent itemsets. Our probabilistic approach can

catch incorrect results with high probability, while our deterministic approach measures the result

correctness with 100 percent certainty. We also design efficient verification methods for both cases

that the data and the mining setup are updated. We demonstrate the effectiveness and efficiency of our

methods using an extensive set of empirical results on real datasets.

ETPL

CLD - 026 Trust-but-Verify: Verifying Result Correctness of Outsourced Frequent

Itemset Mining in Data-Mining-As-a-Service Paradigm
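
A minimal flavor of the probabilistic verification above: the client re-derives the support of a few randomly sampled returned itemsets and rejects the result if any falls below the threshold. This is a simplified sketch, not the paper's scheme (which must also defend against a server with prior knowledge of the data):

```python
import random

def support(itemset, transactions):
    # Count transactions containing every item of the itemset.
    s = set(itemset)
    return sum(1 for t in transactions if s <= set(t))

def spot_check(result, transactions, minsup, k, seed=0):
    """Probabilistic correctness check: recompute the support of up to k
    randomly sampled returned itemsets; reject if any is below minsup."""
    rng = random.Random(seed)
    sample = rng.sample(list(result), min(k, len(result)))
    return all(support(s, transactions) >= minsup for s in sample)
```

Sampling catches an incorrect result with high probability at a fraction of the cost of full re-mining; completeness (nothing frequent was omitted) needs the separate machinery the paper develops.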


The infrastructure cloud (IaaS) service model offers improved resource flexibility and availability,

where tenants – insulated from the minutiae of hardware maintenance – rent computing resources to

deploy and operate complex systems. Large-scale services running on IaaS platforms demonstrate the

viability of this model; nevertheless, many organizations operating on sensitive data avoid migrating

operations to IaaS platforms due to security concerns. In this paper, we describe a framework for data

and operation security in IaaS, consisting of protocols for a trusted launch of virtual machines and

domain-based storage protection. We continue with an extensive theoretical analysis with proofs about

protocol resistance against attacks in the defined threat model. The protocols allow trust to be

established by remotely attesting host platform configuration prior to launching guest virtual machines

and ensure confidentiality of data in remote storage, with encryption keys maintained outside of the

IaaS domain. Presented experimental results demonstrate the validity and efficiency of the proposed

protocols. The framework prototype was implemented on a test bed operating a public electronic health

record system, showing that the proposed protocols can be integrated into existing cloud environments.

ETPL

CLD - 027 Providing User Security Guarantees in Public Infrastructure Clouds

The provisioning of real-time cloud services to Vehicular Clients (VCs) must cope with delay and delay-jitter

issues. Fog computing is an emerging paradigm that aims at distributing small-size self-powered data

centers (e.g., Fog nodes) between remote Clouds and VCs, in order to deliver data-dissemination real-

time services to the connected VCs. Motivated by these considerations, in this paper, we propose and

test an energy-efficient adaptive resource scheduler for Networked Fog Centers (NetFCs). They

operate at the edge of the vehicular network and are connected to the served VCs through

Infrastructure-to-Vehicular (I2V) TCP/IP-based single-hop mobile links. The goal is to exploit the

locally measured states of the TCP/IP connections, in order to maximize the overall communication-

plus-computing energy efficiency, while meeting the application-induced hard QoS requirements on

the minimum transmission rates, maximum delays and delay-jitters. The resulting energy-efficient

scheduler jointly performs: (i) admission control of the input traffic to be processed by the NetFCs; (ii)

minimum-energy dispatching of the admitted traffic; (iii) adaptive reconfiguration and consolidation

of the Virtual Machines (VMs) hosted by the NetFCs; and, (iv) adaptive control of the traffic injected

into the TCP/IP mobile connections. The salient features of the proposed scheduler are that: (i) it is

adaptive and admits a distributed and scalable implementation; and, (ii) it is capable of providing hard QoS

guarantees, in terms of minimum/maximum instantaneous rates of the traffic delivered to the vehicular

clients, instantaneous rate-jitters and total processing delays. Actual performance of the proposed

scheduler in the presence of: (i) client mobility; (ii) wireless fading; and, (iii) reconfiguration and

consolidation costs of the underlying NetFCs, is numerically tested and compared against the

corresponding ones of some state-of-the-art schedulers, under both synthetically generated and

measured real-world workload traces.

ETPL

CLD - 028 Energy-efficient Adaptive Resource Management for Real-time

Vehicular Cloud Services


With rapid adoption of the cloud computing model, many enterprises have begun deploying cloud-

based services. Failures of virtual machines (VMs) in clouds have caused serious quality assurance

issues for those services. VM replication is a commonly used technique for enhancing the reliability of

cloud services. However, when determining the VM redundancy strategy for a specific service, many

state-of-the-art methods ignore the huge network resource consumption issue that could be experienced

when the service is in failure recovery mode. This paper proposes a redundant VM placement

optimization approach to enhancing the reliability of cloud services. The approach employs three

algorithms. The first algorithm selects an appropriate set of VM-hosting servers from a potentially

large set of candidate host servers based upon the network topology. The second algorithm determines

an optimal strategy to place the primary and backup VMs on the selected host servers with k-fault-

tolerance assurance. Lastly, a heuristic is used to address the task-to-VM reassignment optimization

problem, which is formulated as finding a maximum weight matching in bipartite graphs. The

evaluation results show that the proposed approach outperforms four other representative methods in

network resource consumption in the service recovery stage.

ETPL

CLD - 029 Cloud Service Reliability Enhancement via Virtual Machine Placement

Optimization
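
The task-to-VM reassignment step above is formulated as maximum-weight matching in a bipartite graph. For a small square task x VM weight matrix this can be solved by exhaustive search, shown here as an illustrative stand-in for the paper's heuristic (real instances would use the Hungarian algorithm instead):

```python
from itertools import permutations

def max_weight_matching(weights):
    """Brute-force maximum-weight bipartite matching for a small n x n
    weight matrix weights[task][vm]; returns (best total weight, assignment
    tuple where assignment[task] = vm). Exponential, demo-sized inputs only."""
    n = len(weights)
    best, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        w = sum(weights[i][perm[i]] for i in range(n))
        if w > best:
            best, best_perm = w, perm
    return best, best_perm
```

The weights would encode, e.g., how much recovery-stage network traffic a given task-to-VM assignment avoids.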

This work presents a novel statistical cost model for applications that can be offloaded to cloud

computing environments. The model constructs a tree structure, referred to as the execution

dependency tree (EDT), to accurately represent various execution relations, or dependencies (e.g.,

sequential, parallel and conditional branching) among the application modules, along its different

execution paths. Contrary to existing models that assume fixed average offloading costs, each module’s

cost is modelled as a random variable described by its Cumulative Distribution Function (CDF) that is

statistically estimated through application profiling. Using this model, we generalize the offloading

cost optimization functions to those that use more user tailored statistical measures such as cost

percentiles. We employ these functions to propose an efficient offloading algorithm based on a

dynamic programming formulation. We also show that the proposed model can be used as an efficient

tool for application analysis by developers to gain insights on the applications’ statistical performance

under varying network conditions and user behaviours. Performance evaluation results show that the

achieved mean absolute percentage error between the model-based estimated cost and the measured

one for the application execution time can be as small as 5% for applications with sequential and

branching module dependencies.

ETPL

CLD - 030 A Novel Statistical Cost Model and an Algorithm for Efficient

Application Offloading to Clouds
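
The percentile-based cost functions above can be sketched by resampling from the profiled per-module cost distributions. This assumes a purely sequential execution path (the paper's EDT also handles parallel and conditional branches) and uses hypothetical function names:

```python
import random

def cost_percentile(module_samples, q, trials=10000, seed=1):
    """Monte Carlo sketch: module_samples holds profiled cost samples for
    each module on one sequential path; estimate the q-th percentile of the
    path's total cost by resampling from the empirical distributions."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.choice(s) for s in module_samples) for _ in range(trials)
    )
    return totals[min(int(q / 100 * trials), trials - 1)]
```

Optimizing, say, the 95th-percentile cost instead of the mean is exactly the kind of user-tailored statistical measure the generalized offloading objective supports.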


The Internet was designed with the end-to-end principle where the network layer provided merely the

best-effort forwarding service. This design makes it challenging to add new services into the Internet

infrastructure. However, as the Internet connectivity becomes a commodity, users and applications

increasingly demand new in-network services. This paper proposes PacketCloud, a cloudlet-based

open platform to host in-network services. Different from standalone, specialized middleboxes,

cloudlets can efficiently share a set of commodity servers among different services, and serve the

network traffic in an elastic way. PacketCloud can help both Internet Service Providers (ISPs) and

emerging application/content providers deploy their services at strategic network locations. We have

implemented a proof-of-concept prototype of PacketCloud. PacketCloud introduces a small additional

delay, and can scale well to handle high-throughput data traffic. We have evaluated PacketCloud in

both a fully functional emulated environment, and the real Internet.

ETPL

CLD - 031 Packet Cloud: A Cloudlet-Based Open Platform for In-Network Services

Load-balanced flow scheduling for big data centers in clouds, in which a large amount of data needs

to be transferred frequently among thousands of interconnected servers, is a key and challenging issue.

OpenFlow is a promising solution for balancing data flows in a data center network through its

programmatic traffic controller. Existing OpenFlow-based scheduling schemes, however, statically set

up routes only at the initialization stage of data transmissions, which suffers under dynamic flow

distribution and changing network states in data centers and often results in poor system performance.

In this paper, we propose a novel dynamical load-balanced scheduling (DLBS) approach for

maximizing the network throughput while balancing workload dynamically. We first formulate the

DLBS problem, and then develop a set of efficient heuristic scheduling algorithms for the two typical

OpenFlow network models, which balance data flows time slot by time slot. Experimental results

demonstrate that our DLBS approach significantly outperforms other representative load-balanced

scheduling algorithms, Round Robin and LOBUS; and the higher the imbalance degree the data flows in data

centers exhibit, the more improvement our DLBS approach brings to the data centers.

ETPL

CLD - 032 A Dynamical and Load-Balanced Flow Scheduling Approach for Big

Data Centers in Clouds
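
One slot of dynamic load-balanced scheduling can be sketched as a min-max path choice: each flow picks, among its candidate paths, the one whose most-loaded link is lightest. This greedy rule is an illustration of the slot-by-slot idea, not the paper's DLBS heuristics:

```python
def schedule_slot(flows, paths, load):
    """One scheduling slot. flows: list of (flow_id, demand); paths: maps a
    flow to its candidate paths (each a list of link ids); load: current
    per-link load, updated in place and returned."""
    for flow, demand in flows:
        # Min-max choice: the path whose heaviest link carries the least.
        best = min(paths[flow], key=lambda p: max(load[l] for l in p))
        for link in best:
            load[link] += demand
    return load
```

Re-running this every time slot is what lets the controller react to shifting flow distributions instead of fixing routes once at connection setup.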


Companies have fast-growing amounts of data to process and store; a data explosion is happening

around us. Currently, one of the most common approaches to handling these vast data quantities is based

on the MapReduce parallel programming paradigm. While its use is widespread in the industry,

ensuring performance constraints, while at the same time minimizing costs, still provides considerable

challenges. We propose a coarse grained control theoretical approach, based on techniques that have

already proved their usefulness in the control community. We introduce the first algorithm to create

dynamic models for Big Data MapReduce systems, running a concurrent workload. Furthermore, we

identify two important control use cases: relaxed performance with minimal resources, and strict

performance. For the first case we develop two feedback control mechanisms: a classical feedback

controller and an event-based feedback controller that also minimises the number of cluster reconfigurations.

Moreover, to address strict performance requirements a feedforward predictive controller that

efficiently suppresses the effects of large workload size variations is developed. All the controllers are

validated online in a benchmark running in a real 60 node MapReduce cluster, using a data intensive

Business Intelligence workload. Our experiments demonstrate the success of the control strategies

employed in assuring service time constraints.

ETPL

CLD - 033 Feedback Autonomic Provisioning for Guaranteeing Performance in

MapReduce Systems
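
The classical feedback controller mentioned above can be sketched as a PI loop that resizes the cluster toward a target service time. Gains and bounds here are hypothetical, not the paper's identified values:

```python
class PIController:
    """Minimal PI controller sketch: grow the cluster when the measured
    service time exceeds the target, shrink it when under (gains kp, ki
    are illustrative and would be tuned against the identified model)."""

    def __init__(self, kp=0.5, ki=0.1, min_nodes=1, max_nodes=60):
        self.kp, self.ki = kp, ki
        self.min_nodes, self.max_nodes = min_nodes, max_nodes
        self.integral = 0.0

    def step(self, target_s, measured_s, nodes):
        error = measured_s - target_s      # positive -> too slow, add nodes
        self.integral += error             # integral term removes steady-state error
        delta = self.kp * error + self.ki * self.integral
        return max(self.min_nodes, min(self.max_nodes, round(nodes + delta)))
```

An event-based variant would additionally suppress small `delta` values so the cluster is not reconfigured on every sampling period.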

Heterogeneity prevails not only among physical machines but also among workloads in real IaaS Cloud

data centers (CDCs). The heterogeneity makes performance modelling of large and complex IaaS

CDCs even more challenging. This paper considers the scenario where the number of virtual CPUs

requested by each customer job may be different. We propose a hierarchical stochastic modelling

approach applicable to IaaS CDC performance analysis under such a heterogeneous workload.

Numerical results obtained from the proposed analytic model are verified through discrete-event

simulations under various system parameter settings.

ETPL

CLD - 034 Effective Modelling Approach for IaaS Data Center Performance

Analysis under Heterogeneous Workload


We propose an integrated, energy-efficient, resource allocation framework for overcommitted clouds.

The framework makes great energy savings by 1) minimizing Physical Machine (PM) overload

occurrences via VM resource usage monitoring and prediction, and 2) reducing the number of active

PMs via efficient VM migration and placement. Using real Google data consisting of 29-day traces

collected from a cluster containing more than 12K PMs, we show that our proposed framework

outperforms existing overload avoidance techniques and prior VM migration strategies by reducing

the number of unpredicted overloads, minimizing migration overhead, increasing resource utilization,

and reducing cloud energy consumption.

ETPL

CLD - 035 An Energy-Efficient VM Prediction and Migration Framework for

Overcommitted Clouds

Identity-based encryption (IBE) is a public key cryptosystem and eliminates the demands of public key

infrastructure (PKI) and certificate administration in conventional public key settings. Due to the

absence of PKI, the revocation problem is a critical issue in IBE settings. Several revocable IBE

schemes have been proposed regarding this issue. Quite recently, by embedding an outsourcing

computation technique into IBE, Li et al. proposed a revocable IBE scheme with a key-update cloud

service provider (KU-CSP). However, their scheme has two shortcomings. One is that the computation

and communication costs are higher than previous revocable IBE schemes. The other shortcoming is

lack of scalability, in the sense that the KU-CSP must keep a secret value for each user. In this article,

we propose a new revocable IBE scheme with a cloud revocation authority (CRA) to solve the two

shortcomings, namely, the performance is significantly improved and the CRA holds only a system

secret for all the users. For security analysis, we demonstrate that the proposed scheme is semantically

secure under the decisional bilinear Diffie-Hellman (DBDH) assumption. Finally, we extend the

proposed revocable IBE scheme to present a CRA-aided authentication scheme with period-limited

privileges for managing a large number of various cloud services.

ETPL

CLD - 036 Identity-Based Encryption with Cloud Revocation Authority and Its

Applications


Many believe the future of gaming lies in the cloud, namely Cloud Gaming, which renders an

interactive gaming application in the cloud and streams the scenes as a video sequence to the player

over Internet. This paper proposes GCloud, a GPU/CPU hybrid cluster for cloud gaming based on the

user-level virtualization technology. Specially, we present a performance model to analyze the server-

capacity and games' resource consumption, which categorizes games into two types: CPU-critical and

memory-critical. Consequently, several scheduling strategies have been proposed to improve the

resource utilization and compared with others. Simulation tests show that both the First-Fit-like and

the Best-Fit-like strategies outperform the others; in particular, they are near optimal in the batch

processing mode. Other test results indicate that GCloud is efficient: An off-the-shelf PC can support

five high-end video games running at the same time. In addition, the average per-frame processing delay

is 8~19 ms under different image-resolutions, which outperforms other similar solutions.

ETPL

CLD - 037 A Cloud Gaming System Based on User-Level Virtualization and Its

Resource Scheduling
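
The First-Fit-like and Best-Fit-like strategies compared above can be illustrated with one-dimensional bin packing of per-game resource demands onto servers of fixed capacity. This collapses the paper's CPU/memory model to a single dimension for clarity:

```python
def first_fit(demands, capacity):
    # Place each game on the first server with room; open a new one if none.
    servers = []
    for d in demands:
        for s in servers:
            if s["used"] + d <= capacity:
                s["used"] += d
                break
        else:
            servers.append({"used": d})
    return len(servers)

def best_fit(demands, capacity):
    # Place each game on the fullest server that still has room, keeping
    # servers as packed as possible.
    servers = []
    for d in demands:
        fits = [s for s in servers if s["used"] + d <= capacity]
        if fits:
            max(fits, key=lambda s: s["used"])["used"] += d
        else:
            servers.append({"used": d})
    return len(servers)
```

Fewer opened servers means better resource utilization, which is the metric on which both strategies are reported to be near optimal in batch mode.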

Cloud offloading is an indispensable solution to supporting computationally demanding applications

on resource constrained mobile devices. In this paper, we introduce the concept of wireless aware joint

scheduling and computation offloading (JSCO) for multicomponent applications, where an optimal

decision is made on which components need to be offloaded as well as the scheduling order of these

components. The JSCO approach allows for more degrees of freedom in the solution by moving away

from a compiler predetermined scheduling order for the components towards a more wireless aware

scheduling order. For some component dependency graph structures, the proposed algorithm can

shorten execution times by parallel processing appropriate components in the mobile and cloud. We

define a net utility that trades-off the energy saved by the mobile, subject to constraints on the

communication delay, overall application execution time, and component precedence ordering. The

linear optimization problem is solved using real data measurements obtained from running multi-

component applications on an HTC smartphone and the Amazon EC2, using WiFi for cloud offloading.

The performance is further analyzed using various component dependency graph topologies and sizes.

Results show that the energy saved increases with longer application runtime deadline, higher wireless

rates, and smaller offload data sizes.

ETPL

CLD - 038 Optimal Joint Scheduling and Cloud Offloading for Mobile Applications


Cloud data owners prefer to outsource documents in an encrypted form for the purpose of privacy

preserving. Therefore it is essential to develop efficient and reliable ciphertext search techniques. One

challenge is that the relationship between documents will be normally concealed in the process of

encryption, which will lead to significant search accuracy performance degradation. Also the volume

of data in data centers has experienced a dramatic growth. This will make it even more challenging to

design ciphertext search schemes that can provide efficient and reliable online information retrieval on

large volume of encrypted data. In this paper, a hierarchical clustering method is proposed to support

more search semantics and also to meet the demand for fast ciphertext search within a big data

environment. The proposed hierarchical approach clusters the documents based on the minimum

relevance threshold, and then partitions the resulting clusters into sub-clusters until the constraint on

the maximum size of cluster is reached. In the search phase, this approach can reach a linear

computational complexity against an exponential size increase of document collection. In order to

verify the authenticity of search results, a structure called minimum hash sub-tree is designed in this

paper. Experiments have been conducted using the collection set built from the IEEE Xplore. The

results show that with a sharp increase of documents in the dataset the search time of the proposed

method increases linearly whereas the search time of the traditional method increases exponentially.

Furthermore, the proposed method has an advantage over the traditional method in the rank privacy

and relevance of retrieved documents.

ETPL

CLD - 039 An Efficient Privacy-Preserving Ranked Keyword Search Method
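
The threshold-then-split clustering described above can be sketched over documents represented as keyword sets: join a cluster when overlap with its seed meets the minimum relevance, then recursively halve any cluster exceeding the size limit. This is illustrative, not the paper's exact method, and the overlap-count relevance measure is an assumption:

```python
def cluster(docs, min_rel, max_size):
    """docs: list of keyword sets. Group each document with the first
    cluster whose seed shares at least min_rel keywords, then enforce the
    maximum cluster size by recursive halving."""
    clusters = []
    for doc in docs:
        for c in clusters:
            if len(doc & c[0]) >= min_rel:   # relevance = keywords shared with seed
                c.append(doc)
                break
        else:
            clusters.append([doc])           # start a new cluster
    out, stack = [], clusters
    while stack:
        c = stack.pop()
        if len(c) <= max_size:
            out.append(c)
        else:
            mid = len(c) // 2
            stack += [c[:mid], c[mid:]]      # split oversized clusters
    return out
```

Bounding cluster size is what keeps search cost linear: a query only descends into relevant clusters instead of scanning the whole collection.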

Hundreds of papers on job scheduling for distributed systems are published every year and it becomes

increasingly difficult to classify them. Our analysis revealed that half of these papers are barely cited.

This paper presents a general taxonomy for scheduling problems and solutions in distributed systems.

This taxonomy was used to classify and make publicly available the classification of 109 scheduling

problems and their solutions. These 109 problems were further clustered into ten groups based on the

features of the taxonomy. The proposed taxonomy will facilitate researchers to build on prior art,

increase new research visibility, and minimize redundant effort.

ETPL

CLD - 040 A Taxonomy of Job Scheduling on Distributed Computing Systems


The advent of software defined networking enables flexible, reliable and feature-rich control planes

for data center networks. However, the tight coupling of centralized control and complete visibility

leads to a wide range of issues among which scalability has risen to prominence due to the excessive

workload on the central controller. By analyzing the traffic patterns from a couple of production data

centers, we observe that data center traffic is usually highly skewed and thus edge switches can be

clustered into a set of communication-intensive groups according to traffic locality. Motivated by this

observation, we present LazyCtrl, a novel hybrid control plane design for data center networks where

network control is carried out by distributed control mechanisms inside independent groups of switches

while complemented with a global controller. LazyCtrl aims at bringing laziness to the global controller

by dynamically devolving most of the control tasks to independent switch groups to process frequent

intra-group events near the datapath while handling rare inter-group or other specified events by the

controller. We implement LazyCtrl and build a prototype based on Open vSwitch and Floodlight.

Trace-driven experiments on our prototype show that an effective switch grouping is easy to maintain

in multi-tenant clouds and the central controller can be significantly shielded by staying “lazy”, with

its workload reduced by up to 82%.

ETPL

CLD - 041 LazyCtrl: A Scalable Hybrid Network Control Plane Design for Cloud

Data Centers

We introduce Ensemble, a runtime framework and associated tools for building application

performance models on-the-fly. These dynamic performance models can be used to support complex,

highly dimensional resource allocation, and/or what-if performance inquiry in modern heterogeneous

environments, such as data centers and Clouds. Ensemble combines simple, partially specified, and

lower-dimensionality models to provide good initial approximations for higher-dimensionality

application performance models. We evaluated Ensemble on industry-standard and scientific

applications. The results show that Ensemble provides accurate, fast, and flexible performance models

even in the presence of significant environment variability.

ETPL

CLD - 042 Ensemble: A Tool for Performance Modeling of Applications in Cloud

Data Centers


Elasticity is undoubtedly one of the most striking characteristics of cloud computing. Especially in the

area of high performance computing (HPC), elasticity can be used to execute irregular and CPU-

intensive applications. However, the on-the-fly increase/decrease in resources is more widespread in

Web systems, which have their own IaaS-level load balancer. Considering the HPC area, current

approaches usually focus on batch jobs or assumptions such as previous knowledge of application

phases, source code rewriting or the stop-reconfigure-and-go approach for elasticity. In this context,

this article presents AutoElastic, a PaaS-level elasticity model for HPC in the cloud. Its differential

approach consists of providing elasticity for high performance applications without user intervention

or source code modification. The scientific contributions of AutoElastic are twofold: (i) an Aging-

based approach to resource allocation and deallocation actions to avoid unnecessary virtual machine

(VM) reconfigurations (thrashing) and (ii) asynchronism in creating and terminating VMs in such a

way that the application does not need to wait for these procedures to complete. The prototype

evaluation using OpenNebula middleware showed performance gains of up to 26 percent in the

execution time of an application with the AutoElastic manager. Moreover, we obtained low

intrusiveness for AutoElastic when reconfigurations do not occur.
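
Contribution (i) above, the aging-based approach to avoid VM-reconfiguration thrashing, can be sketched as a counter that must see several consecutive out-of-range load samples before an elasticity action fires. The thresholds, window size, and class name below are our illustrative choices, not values from the paper.

```python
# Hedged sketch of aging-based elasticity decisions: an action fires
# only after the load stays past a threshold for `patience` consecutive
# observations, damping thrashing. All parameters are illustrative.

class AgingElasticityManager:
    def __init__(self, upper=0.8, lower=0.3, patience=3):
        self.upper, self.lower, self.patience = upper, lower, patience
        self.age_up = self.age_down = 0

    def observe(self, load):
        """Return 'scale_out', 'scale_in', or None for one load sample."""
        self.age_up = self.age_up + 1 if load > self.upper else 0
        self.age_down = self.age_down + 1 if load < self.lower else 0
        if self.age_up >= self.patience:
            self.age_up = 0
            return "scale_out"
        if self.age_down >= self.patience:
            self.age_down = 0
            return "scale_in"
        return None

mgr = AgingElasticityManager()
samples = [0.9, 0.85, 0.5, 0.9, 0.9, 0.95]   # one dip resets the age
print([mgr.observe(s) for s in samples])
# [None, None, None, None, None, 'scale_out']
```

Contribution (ii), asynchronous VM creation, would then launch the new VM in the background so the running application never blocks on this decision.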

ETPL

CLD - 043 AutoElastic: Automatic Resource Elasticity for High Performance

Applications in the Cloud

The production of huge amount of data and the emergence of cloud computing have introduced new

requirements for data management. Many applications need to interact with several heterogeneous data

stores depending on the type of data they have to manage: traditional data types, documents, graph data

from social networks, simple key-value data, etc. Interacting with heterogeneous data models via

different APIs, and multiple data store applications imposes challenging tasks to their developers.

Indeed, programmers have to be familiar with different APIs. In addition, the execution of complex

queries over heterogeneous data models cannot currently be achieved in a declarative way as it used

to be with mono-data-store applications, and therefore requires extra implementation effort. Moreover,

developers need to master and deal with the complex processes of cloud discovery, and application

deployment and execution. In this paper we propose an integrated set of models, algorithms and tools

aiming at alleviating developers' tasks in developing, deploying and migrating multiple-data-store

applications in cloud environments. Our approach focuses mainly on three points. First, we provide a

unifying data model used by applications developers to interact with heterogeneous relational and

NoSQL data stores. Based on that, they express queries using OPEN-PaaS-DataBase API (ODBAPI),

a unique REST API allowing programmers to write their applications code independently of the target

data stores. Second, we propose virtual data stores, which act as a mediator and interact with integrated

data stores wrapped by ODBAPI. This run-time component supports the execution of single and

complex queries over heterogeneous data stores. Finally, we present a declarative approach that lightens

the burden of the tedious and non-standard tasks of (1) discovering relevant cloud

environments and (2) deploying applications on them, while letting developers simply focus on

specifying their storage and computing requirements. A prototype of the proposed solution has been

developed and is currently used to implement use cases from the OpenPaaS project.
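
The mediator idea in the second point, one uniform interface routing calls to each store's native API, can be illustrated with a toy in-memory version. ODBAPI itself is a REST API; the class names and method signatures here are our own stand-ins, not the project's actual interface.

```python
# Toy sketch of the mediator/virtual-data-store idea: application code
# calls one uniform get/put API and never touches store-specific APIs.
# All names here are hypothetical stand-ins for ODBAPI's REST calls.

class KeyValueStore:                      # e.g. a Redis-like store
    def __init__(self): self._d = {}
    def put(self, k, v): self._d[k] = v
    def get(self, k): return self._d.get(k)

class DocumentStore:                      # e.g. a MongoDB-like store
    def __init__(self): self._docs = {}
    def insert(self, doc_id, doc): self._docs[doc_id] = doc
    def find(self, doc_id): return self._docs.get(doc_id)

class Mediator:
    """Routes uniform calls to each wrapped store's native API."""
    def __init__(self):
        self.stores = {"kv": KeyValueStore(), "doc": DocumentStore()}
    def put(self, store, key, value):
        s = self.stores[store]
        s.put(key, value) if store == "kv" else s.insert(key, value)
    def get(self, store, key):
        s = self.stores[store]
        return s.get(key) if store == "kv" else s.find(key)

m = Mediator()
m.put("kv", "session", "abc123")
m.put("doc", "u1", {"name": "Ada"})
print(m.get("kv", "session"), m.get("doc", "u1")["name"])  # abc123 Ada
```

The run-time component described in the abstract additionally decomposes complex queries across several such wrapped stores; the routing indirection shown here is the common core.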

ETPL

CLD - 044 Supporting Multi Data Stores Applications in Cloud Environments

With simple access interfaces and flexible billing models, cloud storage has become an attractive

solution to simplify the storage management for both enterprises and individual users. However,

traditional file systems with extensive optimizations for a local disk-based storage backend cannot fully

exploit the inherent features of the cloud to obtain desirable performance. In this paper, we present the

design, implementation, and evaluation of Coral, a cloud based file system that strikes a balance

between performance and monetary cost. Unlike previous studies that treat cloud storage as just a

normal backend of existing networked file systems, Coral is designed to address several key issues in

optimizing cloud-based file systems such as the data layout, block management, and billing model.

With carefully designed data structures and algorithms, such as identifying semantically correlated data

blocks, kd-tree based caching policy with self-adaptive thrashing prevention, effective data layout, and

optimal garbage collection, Coral achieves good performance and cost savings under various

workloads as demonstrated by extensive evaluations.

ETPL

CLD - 045 Coral: A Cloud-Backed Frugal File System

Cloud users no longer physically possess their data, so how to ensure the integrity of their outsourced

data becomes a challenging task. Recently proposed schemes such as “provable data possession” and

“proofs of retrievability” are designed to address this problem, but they target static

archived data and therefore lack support for data dynamics. Moreover, threat models in these schemes

usually assume an honest data owner and focus on detecting a dishonest cloud service provider despite

the fact that clients may also misbehave. This paper proposes a public auditing scheme with data

dynamics support and fairness arbitration of potential disputes. In particular, we design an index

switcher to eliminate the limitation of index usage in tag computation in current schemes and achieve

efficient handling of data dynamics. To address the fairness problem so that no party can misbehave

without being detected, we further extend existing threat models and adopt the signature-exchange idea to

design fair arbitration protocols, so that any possible dispute can be fairly settled. The security analysis

shows that our scheme is provably secure, and the performance evaluation demonstrates that the overheads of

data dynamics and dispute arbitration are reasonable.
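
The index switcher described above decouples a block's current position from the index its tag was computed over, so dynamic updates do not force recomputing every tag. The toy table below is our own illustration of that indirection, not the paper's construction.

```python
# Toy index switcher: tags are bound to stable tag indices, while a
# switcher table maps current block position -> tag index. Inserting
# or deleting a block then edits only the table, leaving all existing
# tags valid. Entirely our illustrative construction.

class IndexSwitcher:
    def __init__(self, n_blocks):
        self.table = list(range(n_blocks))   # position -> tag index
        self.next_tag = n_blocks             # next unused tag index

    def tag_index(self, position):
        return self.table[position]

    def insert(self, position):
        """Insert a block; only one fresh tag index is assigned."""
        self.table.insert(position, self.next_tag)
        self.next_tag += 1

    def delete(self, position):
        self.table.pop(position)

sw = IndexSwitcher(4)        # blocks tagged with indices [0, 1, 2, 3]
sw.insert(1)                 # new block at position 1 gets tag index 4
print(sw.table)              # [0, 4, 1, 2, 3] -- old tags untouched
```

Without the indirection, inserting at position 1 would shift the indices of every later block, invalidating their position-bound tags.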

ETPL

CLD - 046 Dynamic and Public Auditing with Fair Arbitration for Cloud Data

The explosive growth of data brings new challenges to the data storage and management in cloud

environment. These data usually have to be processed in a timely fashion in the cloud. Thus, any

increased latency may cause a massive loss to the enterprises. Similarity detection plays a very

important role in data management. Many typical algorithms such as Shingle, Simhash, Traits and

Traditional Sampling Algorithm (TSA) are extensively used. The Shingle, Simhash and Traits

algorithms read entire source file to calculate the corresponding similarity characteristic value, thus

requiring lots of CPU cycles and memory space and incurring tremendous disk accesses. In addition,

the overhead increases with the growth of data set volume and results in a long delay. Instead of reading

entire file, TSA samples some data blocks to calculate the fingerprints as similarity characteristics

value. The overhead of TSA is fixed and negligible. However, a slight modification of source files will

trigger the bit positions of file content shifting. Therefore, a failure of similarity identification is

inevitable due to the slight modifications. This paper proposes an Enhanced Position-Aware Sampling

algorithm (EPAS) to identify file similarity for the cloud by modulo file length. EPAS concurrently

samples data blocks from the head and the tail of the modulated file to avoid the position shift incurred

by the modifications. Meanwhile, an improved metric is proposed to measure the similarity between

different files and make the possible detection probability close to the actual probability. Furthermore,

this paper describes a query algorithm to reduce the time overhead of similarity detection. Our

experimental results demonstrate that the EPAS significantly outperforms the existing well known

algorithms in terms of time overhead, CPU and memory occupation. Moreover, EPAS makes a more

preferable tradeoff between precision and recall than that of other similarity detection algorithms.

Therefore, it is an effective approach to similarity identification for the cloud.
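
EPAS's core trick, sampling symmetrically from the head and the tail of the length-modulated file so a small mid-file edit does not shift the sampled positions, can be sketched as follows. The block size, sample count, hash choice, and Jaccard-style similarity metric are our illustrative assumptions.

```python
# Sketch of position-aware sampling in the spirit of EPAS: drop the
# file-length remainder modulo a block size, then fingerprint blocks
# taken from both the head and the tail. Parameters are illustrative.

import hashlib

def epas_fingerprints(data: bytes, block=4, samples=2):
    # "Modulate" the file length so slightly edited copies align on
    # whole blocks from both ends.
    usable = len(data) - (len(data) % block)
    fps = []
    for i in range(samples):
        head = data[i * block:(i + 1) * block]
        tail = data[usable - (i + 1) * block:usable - i * block]
        fps.append(hashlib.sha1(head).hexdigest()[:8])
        fps.append(hashlib.sha1(tail).hexdigest()[:8])
    return set(fps)

def similarity(a: bytes, b: bytes):
    fa, fb = epas_fingerprints(a), epas_fingerprints(b)
    return len(fa & fb) / len(fa | fb)   # Jaccard over fingerprints

a = b"AAAABBBBCCCCDDDDEEEE"
b = b"AAAABBBBXXCCDDDDEEEE"   # small edit in the middle
print(similarity(a, b) > 0.5)  # head/tail samples still match -> True
```

A whole-file scheme like Shingle would read and hash both files in full; here the fixed number of sampled blocks keeps the cost constant regardless of file size, which is the overhead advantage the abstract claims.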

ETPL

CLD - 047 EPAS: A Sampling Based Similarity Identification Algorithm for the

Cloud

Attribute-based Encryption (ABE) is regarded as a promising cryptographic conducting tool to

guarantee data owners’ direct control over their data in public cloud storage. The earlier ABE schemes

involve only one authority to maintain the whole attribute set, which can bring a single-point bottleneck

on both security and performance. Subsequently, some multi-authority schemes are proposed, in which

multiple authorities separately maintain disjoint attribute subsets. However, the single-point bottleneck

problem remains unsolved. In this paper, from another perspective, we construct a threshold multi-

authority CP-ABE access control scheme for public cloud storage, named TMACS, in which multiple

authorities jointly manage a uniform attribute set. In TMACS, taking advantage of threshold secret

sharing, the master key can be shared among multiple authorities, and a legal user can generate his/her

secret key by interacting with any threshold number of authorities. Security and performance analysis results show that

TMACS is not only verifiably secure when fewer than the threshold number of authorities are compromised, but also robust when

no fewer than the threshold number of authorities are alive in the system. Furthermore, by efficiently combining the traditional

multi-authority scheme with TMACS, we construct a hybrid one, which satisfies the scenario of

attributes coming from different authorities as well as achieving security and system-level robustness.
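
The threshold secret sharing that TMACS builds on can be demonstrated with a toy Shamir-style (t, n) scheme: the master key is split among n authorities and any t shares reconstruct it. The field size, key value, and code below are a textbook illustration, not TMACS's actual group arithmetic.

```python
# Toy (t, n) Shamir secret sharing over a prime field: the primitive
# behind sharing a master key among multiple authorities. Real ABE
# schemes work over large cryptographic groups; this is a sketch.

import random

P = 2**61 - 1  # a Mersenne prime field

def share(secret, t, n):
    # Random degree-(t-1) polynomial with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x): return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(points):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

master_key = 123456789
shares = share(master_key, t=3, n=5)          # five authorities
print(reconstruct(shares[:3]) == master_key)  # any three suffice -> True
```

With fewer than t shares the polynomial is information-theoretically undetermined, which is why compromising fewer than the threshold number of authorities reveals nothing about the master key.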

ETPL

CLD - 048 TMACS: A Robust and Verifiable Threshold Multi-Authority Access

Control System in Public Cloud Storage

Security enhancements to the emerging IaaS (Infrastructure as a Service) cloud computing systems

have become the focus of much research, but little of this targets the underlying infrastructure. Trusted

Cloud systems are proposed to integrate Trusted Computing infrastructure with cloud systems. With

remote attestations, cloud customers are able to determine the genuine behaviors of their applications’

hosts; and therefore they establish trust to the cloud. However, the current Trusted Clouds have

difficulties in effectively attesting to the cloud service dependency for customers’ applications, due to

the cloud’s complexity, heterogeneity and dynamism. In this paper, we present RepCloud, a

decentralized cloud trust management framework, inspired by the reputation systems from the research

in peer-to-peer systems. With RepCloud, cloud customers are able to determine the properties of the

exact nodes that may affect the genuine functionalities of their applications, without obtaining much

internal information of the cloud. Experiments showed that besides achieving fine-grained cloud

service dependency attestation, RepCloud incurred lower trust management overhead than the existing

trusted cloud systems.

ETPL

CLD - 050 RepCloud: Attesting to Cloud Service Dependency

A sensor cloud consists of various heterogeneous wireless sensor networks (WSNs). These WSNs may

have different owners and run a wide variety of user applications on demand in a wireless

communication medium. Hence, they are susceptible to various security attacks. Thus, a need exists to

formulate effective and efficient security measures that safeguard applications in the sensor cloud from

attack. However, analyzing the impact of different attacks and their

cause-consequence relationships is a prerequisite before security measures can be either developed or

deployed. In this paper, we propose a risk assessment framework for WSNs in a sensor cloud that

utilizes attack graphs. We use Bayesian networks to not only assess but also to analyze attacks on

WSNs. The risk assessment framework will first review the impact of attacks on a WSN and estimate

reasonable time frames that predict the degradation of WSN security parameters like confidentiality,

integrity and availability. Using our proposed risk assessment framework allows the security

administrator to better understand the threats present and take necessary actions against them. The

framework is validated by comparing the assessment results with those obtained from

different simulated attack scenarios.
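
Attack-graph risk propagation of the kind described can be sketched as local exploit probabilities combined along parent paths. The tiny graph, the probabilities, and the simple OR combiner below are our simplification of a full Bayesian network, purely for illustration.

```python
# Tiny attack-graph sketch: each node is an exploit with a local
# success probability; the chance of reaching a goal node combines
# its parents' reachability (here a noisy-OR-style combiner, our own
# simplification of a full Bayesian network).

def p_or(probs):
    """Probability that at least one parent path succeeds."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

# node -> (local exploit probability, parent nodes, combiner)
graph = {
    "phish_gateway":   (0.6, [], None),
    "weak_firmware":   (0.5, [], None),
    "node_compromise": (0.9, ["phish_gateway", "weak_firmware"], p_or),
}

def reach(node):
    local, parents, comb = graph[node]
    if not parents:
        return local
    return local * comb([reach(p) for p in parents])

print(round(reach("node_compromise"), 3))  # 0.9 * (1 - 0.4*0.5) = 0.72
```

A security administrator would read such numbers as the relative likelihood of each WSN security parameter (confidentiality, integrity, availability) degrading, which is what the framework's time-frame estimates build on.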

ETPL

CLD - 049 Risk Assessment in a Sensor Cloud Framework Using Attack Graphs

Due to the increasing usage of cloud computing applications, it is important to minimize energy cost

consumed by a data center, and simultaneously, to improve quality of service via data center

management. One promising approach is to switch some servers in a data center to the idle mode for

saving energy while to keep a suitable number of servers in the active mode for providing timely

service. In this paper, we design both online and offline algorithms for this problem. For the offline

algorithm, we formulate data center management as a cost minimization problem by considering

energy cost, delay cost (to measure service quality), and switching cost (to change servers’

mode). Then, we analyze certain properties of an optimal solution which lead to a dynamic

programming based algorithm. Moreover, by revising the solution procedure, we successfully

eliminate the recursive procedure and achieve an optimal offline algorithm with a polynomial

complexity. For the online algorithm, we design it by considering the worst case scenario for future

workload. In simulation, we show this online algorithm can always provide near-optimal solutions.

ETPL

CLD - 052 Cost Minimization Algorithms for Data Center Management

With the prevalence of cloud computing and virtualization, more and more cloud services including

parallel soft real-time applications (PSRT applications) are running in virtualized data centers.

However, current hypervisors do not provide adequate support for them because of soft real-time

constraints and synchronization problems, which result in frequent deadline misses and serious

performance degradation. CPU schedulers in underlying hypervisors are central to these issues. In this

paper, we identify and analyze CPU scheduling problems in hypervisors. Then, we design and

implement a parallel soft real-time scheduler according to the analysis, named Poris, based on Xen. It

addresses both soft real-time constraints and synchronization problems simultaneously. In our

proposed method, priority promotion and dynamic time slice mechanisms are introduced to determine

when to schedule virtual CPUs (VCPUs) according to the characteristics of soft real-time applications.

Besides, considering that PSRT applications may run in a virtual machine (VM) or multiple VMs, we

present parallel scheduling, group scheduling and communication-driven group scheduling to

accelerate synchronizations of these applications and make sure that tasks are finished before their

deadlines under different scenarios. Our evaluation shows Poris can significantly improve the

performance of PSRT applications whether they run in a single VM or multiple VMs. For example,

compared to the Credit scheduler, Poris decreases the response time of a web search benchmark by up

to 91.6 percent.
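
The priority-promotion and dynamic-time-slice mechanisms attributed to Poris can be sketched as a priority queue in which a VCPU hosting a soft real-time task jumps ahead of ordinary VCPUs and receives a shorter slice so it runs promptly. The queue discipline and slice lengths are our illustrative choices, not Xen's or Poris's actual values.

```python
# Sketch of priority promotion + dynamic time slices: a real-time
# VCPU is scheduled before ordinary ones and gets a short slice.
# Priorities and slice lengths are illustrative assumptions.

import heapq

RT, NORMAL = 0, 1          # lower value = higher priority

class Scheduler:
    def __init__(self):
        self.queue, self.seq = [], 0
    def enqueue(self, vcpu, realtime=False):
        prio = RT if realtime else NORMAL
        slice_ms = 5 if realtime else 30   # dynamic time slice
        heapq.heappush(self.queue, (prio, self.seq, vcpu, slice_ms))
        self.seq += 1                      # FIFO within a priority
    def pick(self):
        prio, _, vcpu, slice_ms = heapq.heappop(self.queue)
        return vcpu, slice_ms

s = Scheduler()
s.enqueue("web-vm")                    # arrives first, but ordinary
s.enqueue("audio-vm", realtime=True)   # promoted past it
print(s.pick())  # ('audio-vm', 5)
```

Group and communication-driven group scheduling would extend this by promoting all VCPUs of one PSRT application together so their synchronization points line up.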

ETPL

CLD - 051 Poris: A Scheduler for Parallel Soft Real-Time Applications in

Virtualized Environments

Cloud computing has emerged as a very flexible service paradigm by allowing users to require virtual

machine (VM) resources on-demand and allowing cloud service providers (CSPs) to provide VM

resources via a pay-as-you-go model. This paper addresses the CSP's problem of efficiently allocating

VM resources to physical machines (PMs) with the aim of minimizing the energy consumption.

Traditional energy-aware VM allocations either allocate VMs to PMs in a centralized manner or

implement VM migrations for energy reduction without considering the migration cost in cloud

computing systems. We address these two issues by introducing a decentralized multiagent (MA)-

based VM allocation approach. The proposed MA works by first dispatching a cooperative agent to

each PM to assist the PM in managing VM resources. Then, an auction-based VM allocation

mechanism is designed for these agents to decide the allocations of VMs to PMs. Moreover, to tackle

system dynamics and avoid incurring prohibitive VM migration overhead, a local negotiation-based

VM consolidation mechanism is devised for the agents to exchange their assigned VMs for energy cost

saving. We evaluate the efficiency of the MA approach by using both static and dynamic simulations.

The static experimental results demonstrate that the MA can incur acceptable computation time to

reduce system energy cost compared with traditional bin packing and genetic algorithm-based

centralized approaches. In the dynamic setting, the energy cost of the MA is similar to that of

benchmark global-based VM consolidation approaches, but the MA largely reduces the migration cost.
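
One auction round of the kind the MA approach describes can be sketched as each PM agent bidding the incremental energy of hosting a VM, with the VM going to the lowest bidder. The linear power model (idle draw plus per-unit-load draw) is a common assumption we adopt here, not a detail taken from the paper.

```python
# Toy auction-based VM allocation round: each PM agent bids its
# incremental energy cost for hosting a VM; the lowest bid wins.
# The linear power model and all numbers are illustrative.

class PMAgent:
    def __init__(self, name, idle_w, per_load_w, load=0.0):
        self.name, self.idle_w = name, idle_w
        self.per_load_w, self.load = per_load_w, load
    def power(self, load):
        # A powered-off PM (zero load) draws nothing.
        return 0.0 if load == 0 else self.idle_w + self.per_load_w * load
    def bid(self, vm_load):
        # Incremental energy: waking an idle PM pays its idle draw too.
        return self.power(self.load + vm_load) - self.power(self.load)

def auction(agents, vm_load):
    winner = min(agents, key=lambda a: a.bid(vm_load))
    winner.load += vm_load
    return winner.name

agents = [PMAgent("pm1", 100, 50, load=0.4),   # already powered on
          PMAgent("pm2", 100, 50, load=0.0)]   # would need powering on
print(auction(agents, 0.3))  # pm1: 15 W extra beats pm2's 115 W -> pm1
```

The consolidation mechanism in the abstract works the other way around: neighboring agents negotiate VM swaps that let lightly loaded PMs drain and power off, without paying global-migration overhead.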

ETPL

CLD - 054 Multiagent-Based Resource Allocation for Energy Minimization in

Cloud Computing Systems

Energy efficiency of data centers (DCs) has become a major concern as DCs continue to grow large,

often hosting tens of thousands of servers or even hundreds of thousands of them. Clearly, such a

volume of DCs implies large-scale data center networks (DCNs) with a huge number of network nodes and

links. The energy consumption of this communication network has skyrocketed and reached the same

league as the computing servers’ costs. With the ever-increasing amount of data that need to be stored and

processed in DCs, DCN traffic continues to soar drawing increasingly more power. In particular, more

than one-third of the total energy in DCs is consumed by communication links, switching and

aggregation elements. In this paper, we address the energy efficiency of the data center, explicitly taking

into account both servers and DCN. To this end, we present VPTCA, as a collective energy-efficiency

approach to data center network planning, which deals with virtual machine (VM) placement and

communication traffic configuration. VPTCA aims particularly to reduce the energy consumption of

DCN by assigning interrelated VMs into the same server or pod, which effectively helps reduce the

amount of transmission load. At the traffic configuration layer, VPTCA optimally uses switch ports and

link bandwidth to balance the load and avoid congestion, enabling the DCN to increase its transmission

capacity and save a significant amount of network energy. In our evaluation via NS-2 simulations,

the performance of VPTCA is measured and compared with two well-known DCN management

algorithms, Global First Fit and Elastic Tree. Based on our experimental results, VPTCA outperforms

existing algorithms in providing the DCN more transmission capacity with less energy consumption.

ETPL

CLD - 053 Collective Energy-Efficiency Approach to Data Center Networks

Planning

Key-exposure resistance has always been an important issue for in-depth cyber defence in many

security applications. Recently, how to deal with the key exposure problem in the settings of cloud

storage auditing has been proposed and studied. To address the challenge, existing solutions all require

the client to update his secret keys in every time period, which may inevitably bring in new local

burdens to the client, especially those with limited computation resources, such as mobile phones. In

this paper, we focus on how to make the key updates as transparent as possible for the client and

propose a new paradigm called cloud storage auditing with verifiable outsourcing of key updates. In

this paradigm, key updates can be safely outsourced to some authorized party, and thus the key-update

burden on the client will be kept minimal. In particular, we leverage the third party auditor (TPA) in

many existing public auditing designs, let it play the role of authorized party in our case, and make it

in charge of both the storage auditing and the secure key updates for key-exposure resistance. In our

design, TPA only needs to hold an encrypted version of the client's secret key while doing all these

burdensome tasks on behalf of the client. The client only needs to download the encrypted secret key

from the TPA when uploading new files to the cloud. Besides, our design also equips the client with

capability to further verify the validity of the encrypted secret keys provided by the TPA. All these

salient features are carefully designed to make the whole auditing procedure with key exposure

resistance as transparent as possible for the client. We formalize the definition and the security model

of this paradigm. The security proof and the performance simulation show that our detailed design

instantiations are secure and efficient.
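
The protocol flow described, where the TPA holds only an encrypted secret key and the client downloads, verifies, and decrypts it before use, can be sketched as below. The XOR-pad cipher and HMAC tag are our toy stand-ins for the paper's actual cryptography, chosen only to make the flow runnable.

```python
# Protocol-flow sketch (toy crypto, not the paper's construction):
# the TPA stores only an encrypted key; the client fetches it,
# verifies validity, and decrypts with locally kept key material.

import hashlib, hmac, secrets

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy cipher for short payloads only (pad derived from the key).
    pad = hashlib.sha256(key).digest()[:len(data)]
    return bytes(a ^ b for a, b in zip(data, pad))

class TPA:
    """Holds only the encrypted key while doing update duties blindly."""
    def __init__(self, enc_key, tag):
        self.enc_key, self.tag = enc_key, tag
    def fetch(self):
        return self.enc_key, self.tag

class Client:
    def __init__(self):
        self.dec_key = secrets.token_bytes(16)
        self.mac_key = secrets.token_bytes(16)
    def outsource(self, secret_key: bytes) -> TPA:
        enc = xor_encrypt(self.dec_key, secret_key)
        tag = hmac.new(self.mac_key, enc, hashlib.sha256).digest()
        return TPA(enc, tag)
    def retrieve(self, tpa: TPA) -> bytes:
        enc, tag = tpa.fetch()
        # Verify the TPA's response before trusting it.
        expected = hmac.new(self.mac_key, enc, hashlib.sha256).digest()
        assert hmac.compare_digest(tag, expected)
        return xor_encrypt(self.dec_key, enc)

c = Client()
sk = b"period-7-signing-key"
tpa = c.outsource(sk)
print(c.retrieve(tpa) == sk)  # True: decrypted and verified
```

The verification step is the "verifiable outsourcing" part: a misbehaving TPA that returns a wrong or stale encrypted key fails the check before the client ever uses it.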

ETPL

CLD - 056 Enabling Cloud Storage Auditing with Verifiable Outsourcing of Key

Updates

Ciphertext-policy attribute-based encryption (CP-ABE) has been a preferred encryption technology to

solve the challenging problem of secure data sharing in cloud computing. The shared data files

generally have the characteristic of multilevel hierarchy, particularly in the area of healthcare and the

military. However, the hierarchy structure of shared files has not been explored in CP-ABE. In this

paper, an efficient file hierarchy attribute-based encryption scheme is proposed in cloud computing.

The layered access structures are integrated into a single access structure, and then, the hierarchical

files are encrypted with the integrated access structure. The ciphertext components related to attributes

could be shared by the files. Therefore, both ciphertext storage and time cost of encryption are saved.

Moreover, the proposed scheme is proved to be secure under the standard assumption. Experimental

simulation shows that the proposed scheme is highly efficient in terms of encryption and decryption.

As the number of files increases, the advantages of our scheme become more and more

conspicuous.

ETPL

CLD - 055 An Efficient File Hierarchy Attribute-Based Encryption Scheme in

Cloud Computing

Cloud computing is an Internet-based computing paradigm through which shared resources are provided

to devices on demand. Integrating mobile devices into cloud computing is an emerging but promising

paradigm, and the integration takes place in a cloud-based hierarchical multi-user data-sharing

environment. With this integration, security issues such as data confidentiality and

user authority may arise in the mobile cloud computing system, and they are regarded as the main

constraints on the development of mobile cloud computing. In order to provide safe and secure

operation, this paper proposes a hierarchical access control method using modified hierarchical

attribute-based encryption (M-HABE) and a modified three-layer structure. In a specific mobile cloud

computing model, enormous data from all kinds of mobile devices, such as smart phones, feature

phones, PDAs and so on, can be controlled and monitored by the system, and the data must be kept

from unauthorized third parties and access-restricted even for legal users. The novel scheme

mainly focuses on data processing, storage and access, and is designed to ensure that users with

legal authority obtain the corresponding classified data while illegal users and unauthorized legal

users are prevented from accessing the data, which makes it extremely suitable for mobile cloud

computing paradigms.

ETPL

CLD - 058 A Modified Hierarchical Attribute-based Encryption Access Control

Method for Mobile Cloud Computing

With the rapid increase of monitoring devices and controllable facilities in the demand side of

electricity networks, more solid information and communication technology (ICT) resources are

required to support the development of demand side management (DSM). Different from traditional

computation in power systems which customizes ICT resources for mapping applications separately,

DSM especially asks for scalability and economic efficiency, because there are more and more

stakeholders participating in the computation process. This paper proposes a novel cost-oriented

optimization model for a cloud-based ICT infrastructure to allocate cloud computing resources in a

flexible and cost-efficient way. Uncertain factors including imprecise computation load prediction and

unavailability of computing instances can also be considered in the proposed model. A modified

priority list algorithm is specially developed in order to efficiently solve the proposed optimization

model and compared with a mature simulated-annealing-based algorithm. Comprehensive numerical

studies are carried out to demonstrate the effectiveness of the proposed cost-oriented model in reducing

the operation cost of the cloud platform in DSM.

ETPL

CLD - 057 Optimal Cloud Computing Resource Allocation for Demand Side

Management

Due to the increasing popularity of cloud computing, more and more data owners are motivated to

outsource their data to cloud servers for great convenience and reduced cost in data management.

However, sensitive data should be encrypted before outsourcing for privacy requirements, which

impedes data utilization such as keyword-based document retrieval. In this paper, we present a secure

multi-keyword ranked search scheme over encrypted cloud data, which simultaneously supports

dynamic update operations like deletion and insertion of documents. Specifically, the vector space

model and the widely used TF × IDF model are combined in the index construction and query

generation. We construct a special tree-based index structure and propose a “Greedy Depth-first

Search” algorithm to provide efficient multi-keyword ranked search. The secure kNN algorithm is

utilized to encrypt the index and query vectors, and meanwhile ensure accurate relevance score

calculation between encrypted index and query vectors. In order to resist statistical attacks, phantom

terms are added to the index vector for blinding search results. Due to the use of our special tree-based

index structure, the proposed scheme can achieve sub-linear search time and deal with the deletion and

insertion of documents flexibly. Extensive experiments are conducted to demonstrate the efficiency of

the proposed scheme.
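
The "Greedy Depth-first Search" over the tree index can be sketched in the clear (without the kNN encryption layer): if each internal node's vector is the element-wise maximum of its children's TF-IDF vectors, its score upper-bounds every document below it, so subtrees that cannot beat the current top-k are pruned. The tree layout, vectors, and query below are illustrative.

```python
# Sketch of greedy depth-first search over a tree index: internal
# nodes hold element-wise maxima of their children's TF-IDF vectors
# (an upper bound on any leaf score below), enabling top-k pruning.
# Data and layout are illustrative; encryption is omitted.

import heapq

class Node:
    def __init__(self, vec, children=(), doc=None):
        self.vec, self.children, self.doc = vec, children, doc

def score(vec, query):
    return sum(v * q for v, q in zip(vec, query))

def gdfs(node, query, k, heap):
    if node.doc is not None:                       # leaf: a document
        heapq.heappush(heap, (score(node.vec, query), node.doc))
        if len(heap) > k:
            heapq.heappop(heap)                    # keep top-k only
        return
    if len(heap) == k and score(node.vec, query) <= heap[0][0]:
        return                                     # prune the subtree
    for child in sorted(node.children,             # best child first
                        key=lambda c: -score(c.vec, query)):
        gdfs(child, query, k, heap)

d1 = Node([0.9, 0.0], doc="d1")
d2 = Node([0.1, 0.8], doc="d2")
d3 = Node([0.4, 0.4], doc="d3")
root = Node([0.9, 0.8], children=[Node([0.9, 0.8], [d1, d2]), d3])
heap = []
gdfs(root, [1.0, 0.0], k=1, heap=heap)   # query weights keyword 1 only
print(heap)  # [(0.9, 'd1')]
```

In the actual scheme both index and query vectors are encrypted with the secure kNN technique so the server computes these relevance scores without learning the vectors, and phantom terms blind the result ordering against statistical attacks.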

ETPL

CLD - 060 A Secure and Dynamic Multi-Keyword Ranked Search Scheme over

Encrypted Cloud Data

Cloud service certifications (CSC) attempt to assure a high level of security and compliance. However,

considering that cloud services are part of an ever-changing environment, multi-year validity periods

may cast doubt on the reliability of such certifications. We argue that continuous auditing (CA) of selected

certification criteria is required to assure continuously reliable and secure cloud services, and thereby

increase trustworthiness of certifications. CA of cloud services is still in its infancy, thus, we conducted

a thorough literature review, interviews, and workshops with practitioners to conceptualize an

architecture for continuous cloud service auditing. Our study shows that various criteria should be

continuously audited. Yet, we reveal that most existing methodologies are not applicable for third

party auditing purposes. Therefore, we propose a conceptual CA architecture, and highlight important

components and processes that have to be implemented. Finally, we discuss benefits and challenges

that have to be tackled to diffuse the concept of continuous cloud service auditing. We contribute to

knowledge and practice by providing applicable internal and third party auditing methodologies for

auditors and providers, linked together in a conceptual architecture. Further on, we provide groundings

for future research to implement CA in cloud service contexts.

ETPL

CLD - 059 Trust is Good, Control is better: Creating Secure Clouds by Continuous

Auditing
