TTA FINAL YEAR PROJECT TITLES WITH ABSTRACTS (IEEE 2013, 2012, 2011, 2010, etc.). Projects for B.E/B.Tech/M.E/MCA/B.Sc/M.Sc. For the complete base paper, call now and talk to our expert: 89396 41060 | 89396 41061 | 044 4218 1385
Jun 20, 2015
DOMAIN : NETWORKING
CODE | PROJECT TITLE | DESCRIPTION | REFERENCE
TTASTJ01 BloomCast: Efficient and Effective Full-Text Retrieval in Unstructured P2P Networks
Efficient and effective full-text retrieval in unstructured peer-to-peer networks remains a challenge in the research community. First, it is difficult, if not impossible, for unstructured P2P systems to effectively locate items with guaranteed recall. Second, existing schemes to improve the search success rate often rely on replicating a large number of item replicas across the wide area network, incurring large communication and storage costs. In this paper, we propose BloomCast, an efficient and effective full-text retrieval scheme for unstructured P2P networks. By leveraging a hybrid P2P protocol, BloomCast replicates the items uniformly at random across the P2P network, achieving guaranteed recall at a communication cost of O(√N), where N is the size of the network. Furthermore, by casting Bloom filters instead of the raw documents across the network, BloomCast significantly reduces the communication and storage costs of replication. We demonstrate the power of the BloomCast design through both mathematical proof and comprehensive simulations based on query logs from a major commercial search engine and the NIST TREC WT10G data collection. Results show that BloomCast achieves an average query recall of 91 percent, outperforming the existing WP algorithm by 18 percent, while greatly reducing the search latency of query processing by 57 percent.
IEEE 2012
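BloomCast's own protocol is not reproduced here; as background for the abstract above, a Bloom filter summarizes a set of terms so that membership can be tested without shipping the raw documents. A minimal sketch, assuming nothing beyond the standard library (the class name and the parameters m and k are illustrative, not from the paper):

```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item):
        # Derive k independent positions by salting one strong hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        # No false negatives; a small, tunable false-positive rate.
        return all(self.bits[p] for p in self._positions(item))

index = BloomFilter()
for term in ["peer", "retrieval", "bloom"]:
    index.add(term)
print("peer" in index)   # True
```

A query for a term absent from the filter returns False except with small probability, which is why casting filters instead of documents cuts replication cost.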
TTAECJ02 Cooperative Density Estimation in Random Wireless Ad Hoc Networks
Density estimation is crucial for adequate capacity planning in wireless ad hoc networks. Protocols have to adapt their operation to the density, since the throughput of an ad hoc network approaches zero asymptotically as the density increases. A wireless node can estimate the global density by using local information such as the received power from neighbors. In this paper, we propose a cross-layer protocol to compute the density estimate. The accuracy of the estimate can be enhanced and its variance reduced through cooperation among the nodes. Nodes share the received power measurements with each other. Based on the collected observations, the maximum likelihood estimate is computed. It is shown that cooperative density estimation has better accuracy and lower variance than individual estimation. When nodes share received power measurements from further-away neighbors, the variance of the estimate is reduced further.
IEEE 2012
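The abstract does not spell out the estimator, but the general idea of inferring density from received powers can be sketched as follows: invert a path-loss model to get neighbor distances, then apply the standard maximum likelihood estimate for a homogeneous Poisson process, λ̂ = k / (π r_k²), where r_k is the distance to the farthest of the k observed neighbors. All names, the path-loss exponent, and the Poisson assumption are illustrative, not the paper's protocol:

```python
import math

def distance_from_power(p_rx, p_tx=1.0, alpha=4.0):
    """Invert a simple path-loss model p_rx = p_tx * d**(-alpha)."""
    return (p_tx / p_rx) ** (1.0 / alpha)

def density_estimate(received_powers):
    """ML density estimate for a homogeneous Poisson process:
    lambda_hat = k / (pi * r_k**2), with r_k the largest inferred distance."""
    dists = sorted(distance_from_power(p) for p in received_powers)
    k, r_k = len(dists), dists[-1]
    return k / (math.pi * r_k ** 2)

# Neighbors at distances 1, 2, and 3 from the observer (alpha = 4):
powers = [1.0, 2.0 ** -4, 3.0 ** -4]
print(density_estimate(powers))  # 3 / (9 * pi), about 0.106 nodes per unit area
```

Cooperation as described in the abstract would amount to pooling such measurements from several nodes before computing the estimate, which shrinks its variance.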
TTAECJ03 FireCol: A Collaborative Protection Network for the Detection of Flooding DDoS Attacks
Distributed denial-of-service (DDoS) attacks remain a major security problem whose mitigation is very hard, especially when it comes to highly distributed botnet-based attacks. The early discovery of these attacks, although challenging, is necessary to protect end users as well as expensive network infrastructure resources. In this paper, we address the problem of DDoS attacks and present the theoretical foundation, architecture, and algorithms of FireCol. The core of FireCol is composed of intrusion prevention systems (IPSs) located at the Internet service provider (ISP) level. The IPSs form virtual protection rings around the hosts to defend, and collaborate by exchanging selected traffic information. An evaluation of FireCol using extensive simulations and a real dataset is presented, showing FireCol's effectiveness and low overhead, as well as its support for incremental deployment in real networks.
IEEE 2012
TTAECJ04 Game-Theoretic Pricing for Video Streaming in Mobile Networks
Mobile phones are among the most popular consumer devices, and recent developments in 3G networks and smartphones enable users to watch video programs by subscribing to data plans from service providers. Owing to the ubiquity of mobile phones and phone-to-phone communication technologies, data-plan subscribers can redistribute the video content to nonsubscribers. Such a redistribution mechanism is a potential competitor to the mobile service provider and is very difficult to trace given users' high mobility. The service provider has to set a reasonable price for the data plan to prevent such unauthorized redistribution behavior and to protect or maximize his or her own profit. In this paper, we analyze the optimal price setting for the service provider by investigating the equilibrium between the subscribers and the secondary buyers in the content-redistribution network. We model the behavior between the subscribers and the secondary buyers as a noncooperative game and find the optimal price and quantity for both groups of users. Based on the behavior of users in the redistribution network, we investigate the evolutionarily stable ratio of mobile users who decide to subscribe to the data plan. Such an analysis can help the service provider preserve his or her profit under the threat of the redistribution networks and can improve the quality of service for end users.
IEEE 2012
TTAECJ5, TTAECD5 Throughput and Energy Efficiency in Wireless Ad Hoc Networks With Gaussian Channels
This paper studies the bottleneck link capacity under the Gaussian channel model in strongly connected random wireless ad hoc networks, with n nodes independently and uniformly distributed in a unit square. We assume that each node is equipped with two transceivers (one for transmission and one for reception) and allow all nodes to transmit simultaneously. We derive lower and upper bounds on the bottleneck link capacity for homogeneous networks (all nodes have the same transmission power level) and propose an energy-efficient power assignment algorithm (CBPA) for heterogeneous networks (nodes may have different power levels), with a provable bottleneck link capacity guarantee of Ω(B log(1 + 1/√(n log² n))), where B is the channel bandwidth. In addition, we develop a distributed implementation of CBPA with O(n²) message complexity and provide extensive simulation results.
IEEE 2012
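The bounds above are stated in terms of Gaussian channel capacities. As background, the Shannon capacity of a single Gaussian link is C = B log₂(1 + SINR); a minimal sketch (the function name and the example numbers are illustrative, not from the paper):

```python
import math

def link_capacity(bandwidth_hz, signal, interference, noise):
    """Shannon capacity of a Gaussian link: C = B * log2(1 + SINR)."""
    sinr = signal / (interference + noise)
    return bandwidth_hz * math.log2(1 + sinr)

# A 1 MHz channel with SINR = 1.0 / (0.2 + 0.05) = 4:
print(link_capacity(1e6, signal=1.0, interference=0.2, noise=0.05))
# about 2.32e6 bit/s
```

The bottleneck link capacity of a network is the smallest such per-link capacity along the chosen links, which is why power assignment (and hence interference) drives the Ω-bound quoted above.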
TTAECJ06 Packet-Hiding Methods for Preventing Selective Jamming Attacks
The open nature of the wireless medium leaves it vulnerable to intentional interference attacks, typically referred to as jamming. This intentional interference with wireless transmissions can be used as a launch pad for mounting denial-of-service attacks on wireless networks. Typically, jamming has been addressed under an external threat model. However, adversaries with internal knowledge of protocol specifications and network secrets can launch low-effort jamming attacks that are difficult to detect and counter. In this work, we address the problem of selective jamming attacks in wireless networks. In these attacks, the adversary is active only for a short period of time, selectively targeting messages of high importance. We illustrate the advantages of selective jamming in terms of network performance degradation and adversary effort by presenting two case studies: a selective attack on TCP and one on routing. We show that selective jamming attacks can be launched by performing real-time packet classification at the physical layer. To mitigate these attacks, we develop three schemes that prevent real-time packet classification by combining cryptographic primitives with physical-layer attributes. We analyze the security of our methods and evaluate their computational and communication overhead.
IEEE 2012
TTAECJ07 Optimizing Cloud Resources for Delivering IPTV Services through Virtualization
Virtualized cloud-based services can take advantage of statistical multiplexing across applications to yield significant cost savings. However, achieving similar savings with real-time services can be a challenge. In this paper, we seek to lower a provider's costs for real-time IPTV services through a virtualized IPTV architecture and through intelligent time-shifting of selected services. Using Live TV and Video-on-Demand (VoD) as examples, we show that we can take advantage of the different deadlines associated with each service to effectively multiplex these services. We provide a generalized framework for computing the amount of resources needed to support multiple services without missing the deadline for any service. We cast the problem as an optimization formulation that uses a generic cost function. We consider multiple forms of the cost function (e.g., maximum, convex, and concave functions) reflecting the cost of providing the service. The solution to this formulation gives the number of servers needed at different time instants to support these services. We implement a simple mechanism for time-shifting scheduled jobs in a simulator and study the reduction in server load using real traces from an operational IPTV network. Our results show that we are able to reduce the load by about 24% (compared to a possible 31%). We also show that there are interesting open problems in designing mechanisms that allow time-shifting of load in such environments.
IEEE 2012
TTAECJ08 Maximal Scheduling in Wireless Ad Hoc Networks With Hypergraph Interference Models
This paper proposes a hypergraph interference model for the scheduling problem in wireless ad hoc networks. The proposed hypergraph model can take the sum interference into account and is therefore more accurate than the traditional binary graph model. Further, unlike the global signal-to-interference-plus-noise ratio (SINR) model, the hypergraph model preserves a localized graph-theoretic structure and therefore allows existing graph-based efficient scheduling algorithms to be extended to the cumulative interference case. Finally, by adjusting certain parameters, the hypergraph can achieve a systematic tradeoff between interference approximation accuracy and user node coordination complexity during scheduling. As an application of the hypergraph model, we consider the performance of a simple distributed scheduling algorithm, maximal scheduling, in wireless networks. We propose a lower-bound stability region for any maximal scheduler and show that it achieves a fixed fraction of the optimal stability region, which depends on the interference degree of the underlying hypergraph. We also demonstrate the interference approximation accuracy of hypergraphs in random networks and show that hypergraphs with small hyperedge sizes can model the interference quite accurately. Finally, the analytical performance is verified by simulation results.
IEEE 2012
TTAECD09 Load Balancing Multipath Switching System with Flow Slice
Multipath switching systems (MPS) are intensely used in state-of-the-art core routers to provide terabit or even petabit switching capacity. One of the most intractable issues in designing an MPS is how to load-balance traffic across its multiple paths while not disturbing intra-flow packet order. Previous packet-based solutions either suffer from delay penalties or lead to O(N²) hardware complexity, and hence do not scale. Flow-based hashing algorithms also perform badly due to the heavy-tailed flow-size distribution. In this paper, we develop a novel scheme, Flow Slice (FS), that cuts each flow into flow slices at every intra-flow interval larger than a slicing threshold and balances the load at this finer granularity. Based on studies of tens of real Internet traces, we show that with a slicing threshold of 1-4 ms, the FS scheme achieves load-balancing performance comparable to the optimal one. It also limits the probability of out-of-order packets to a negligible level (10⁻⁶) on three popular MPSes at the cost of little hardware complexity and an internal speedup of up to two. These results are proven by theoretical analyses and also validated through trace-driven prototype simulations.
IEEE 2012
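The slicing idea described above can be sketched in a few lines: start a new slice whenever the gap since a flow's previous packet exceeds the threshold, and hash the (flow, slice) pair to a path, so packets inside a slice never reorder. This is a sketch of the concept under simplifying assumptions (per-flow state in a dict, Python's built-in hash as the path selector), not the paper's hardware design:

```python
def assign_paths(packets, num_paths=4, threshold_ms=2.0):
    """Split each flow into 'flow slices' at inter-packet gaps larger than
    threshold_ms, then map every slice to a path by hashing the
    (flow, slice index) pair. Packets within a slice share one path."""
    last_seen = {}   # flow_id -> (last_timestamp_ms, slice_index)
    out = []
    for flow_id, t_ms in packets:
        prev = last_seen.get(flow_id)
        if prev is None:
            slice_idx = 0
        else:
            last_t, slice_idx = prev
            if t_ms - last_t > threshold_ms:
                slice_idx += 1           # gap exceeds threshold: new slice
        last_seen[flow_id] = (t_ms, slice_idx)
        out.append(hash((flow_id, slice_idx)) % num_paths)
    return out

# Flow "a": packets at 0, 1, and 10 ms; the 10 ms packet starts a new slice.
paths = assign_paths([("a", 0.0), ("a", 1.0), ("a", 10.0)])
print(paths[0] == paths[1])  # True: same slice, hence same path
```

Because a new slice only begins after a gap longer than the worst-case path-delay difference, the earlier slice has drained before the next one may take a different path, which is what keeps the reordering probability negligible.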
TTAECD10 Distributed Throughput Maximization in Wireless Networks via Random Power Allocation
We develop a distributed throughput-optimal power allocation algorithm for wireless networks. The study of this problem has been limited by the nonconvexity of the underlying optimization problems, which prohibits an efficient solution even in a centralized setting (an NP-hard problem). By generalizing the randomization framework originally proposed for input-queued switches to an SINR rate-based interference model, we characterize throughput-optimality conditions that enable efficient and distributed implementation. Using a gossiping algorithm, we develop a distributed power allocation algorithm that satisfies the optimality conditions, thereby achieving (nearly) 100 percent throughput. We illustrate the performance of our power allocation solution through numerical simulation.
IEEE 2012
TTAECD11 Automatic Reconfiguration for Large-Scale Reliable Storage Systems
Byzantine-fault-tolerant replication enhances the availability and reliability of Internet services that store critical state and preserve it despite attacks or software errors. However, existing Byzantine-fault-tolerant storage systems either assume a static set of replicas or have limitations in how they handle reconfigurations (e.g., in terms of the scalability of the solutions or the consistency levels they provide). This can be problematic in long-lived, large-scale systems where system membership is likely to change during the system lifetime. In this paper, we present a complete solution for dynamically changing system membership in a large-scale Byzantine-fault-tolerant system. We present a service that tracks system membership and periodically notifies other system nodes of membership changes. The membership service runs mostly automatically, to avoid human configuration errors; is itself Byzantine-fault-tolerant and reconfigurable; and provides applications with a sequence of consistent views of the system membership. We demonstrate the utility of this membership service by using it in a novel distributed hash table called dBQS that provides atomic semantics even across changes in replica sets. dBQS is interesting in its own right because its storage algorithms extend existing Byzantine quorum protocols to handle changes in the replica set, and because it differs from previous DHTs by providing Byzantine fault tolerance and offering strong semantics. We implemented the membership service and dBQS. Our results show that the approach works well in practice: the membership service is able to manage a large system, and the cost to change the system membership is low.
IEEE 2012
TTAECD12 Connectivity of Multiple Cooperative Cognitive Radio Ad Hoc Networks
In cognitive radio networks, the signal reception quality of a secondary user degrades due to interference from multiple heterogeneous primary networks, and the transmission activity of a secondary user is constrained by its interference to the primary networks. It is therefore difficult to ensure the connectivity of the secondary network. However, since there may exist multiple heterogeneous secondary networks with different radio access technologies, such secondary networks may be treated as one secondary network via proper cooperation, to improve connectivity. In this paper, we investigate the connectivity of such a cooperative secondary network from a percolation-based perspective, in which each secondary network's users may have other secondary networks' users acting as relays. The connectivity of this cooperative secondary network is characterized in terms of the percolation threshold, from which the benefit of cooperation is justified. For example, while a noncooperative secondary network does not percolate, percolation may occur in the cooperative secondary network; or, when a noncooperative secondary network percolates, less power is required to sustain the same level of connectivity in the cooperative secondary network.
IEEE 2012
DOMAIN : WIRELESS COMMUNICATION / WIRELESS NETWORK
TTAECJ13 An Adaptive Opportunistic Routing Scheme for Wireless Ad Hoc Networks
A distributed adaptive opportunistic routing scheme for multi-hop wireless ad hoc networks is proposed. The proposed scheme utilizes a reinforcement learning framework to opportunistically route packets even in the absence of reliable knowledge about channel statistics and the network model. This scheme is shown to be optimal with respect to an expected average per-packet reward criterion. The proposed routing scheme jointly addresses the issues of learning and routing in an opportunistic context, where the network structure is characterized by the transmission success probabilities. In particular, this learning framework leads to a stochastic routing scheme that optimally "explores" and "exploits" the opportunities in the network.
IEEE 2012
TTAECJ14 AMPLE: An Adaptive Traffic Engineering System Based on Virtual Routing Topologies
Handling traffic dynamics in order to avoid network congestion and subsequent service disruptions is one of the key tasks performed by contemporary network management systems. Given the simple but rigid routing and forwarding functionalities in IP-based environments, efficient resource management and control solutions for dynamic traffic conditions have yet to be obtained. In this article, we introduce AMPLE, an efficient traffic engineering and management system that performs adaptive traffic control by using multiple virtualized routing topologies. The proposed system consists of two complementary components: offline link weight optimization, which takes the physical network topology as input and tries to produce maximum routing path diversity across multiple virtual routing topologies for long-term operation through the optimized setting of link weights; and adaptive traffic control, which, based on these diverse paths, performs intelligent traffic splitting across individual routing topologies in reaction to monitored network dynamics at short timescales. According to our evaluation with real network topologies and traffic traces, the proposed system is able to cope almost optimally with unpredicted traffic dynamics and, as such, constitutes a new proposal for achieving better quality of service and overall network performance in IP networks.
IEEE 2012
DOMAIN : NETWORK SECURITY
TTAECJ15 Distributed Private Key Generation for Identity-Based Cryptosystems in Ad Hoc Networks
Identity-based cryptography (IBC) has the advantage that no public key certification is needed when it is used in a mobile ad hoc network (MANET). This is especially useful when bidirectional channels do not exist in a MANET. However, IBC normally needs a centralized server for issuing private keys for different identities. We give a protocol distributing this task among all users, thus eliminating the need for a centralized server in IBC for use in MANETs.
IEEE 2012
TTAECJ16 Joint Relay and Jammer Selection for Secure Two-Way Relay Networks
In this paper, we investigate joint relay and jammer selection in two-way cooperative networks consisting of two sources, a number of intermediate nodes, and one eavesdropper, under physical-layer security constraints. Specifically, the proposed algorithms select two or three intermediate nodes to enhance security against the malicious eavesdropper. The first selected node operates in the conventional relay mode and assists the sources in delivering their data to the corresponding destinations using an amplify-and-forward protocol. The second and third nodes are used in different communication phases as jammers in order to create intentional interference at the malicious eavesdropper. First, we find that in a topology where the intermediate nodes are randomly and sparsely distributed, the proposed schemes with cooperative jamming outperform the conventional non-jamming schemes within a certain transmitted power regime. We also find that, in the scenario where the intermediate nodes gather in a close cluster, the jamming schemes may be less effective than their non-jamming counterparts. Therefore, we introduce a hybrid scheme to switch between jamming and non-jamming modes. Simulation results validate our theoretical analysis and show that the hybrid switching scheme further improves the secrecy rate.
IEEE 2012
TTAECJ17 A Secure Single Sign-On Mechanism for Distributed Computer Networks
User identification is an important access control mechanism for client-server networking architectures. The concept of single sign-on allows legal users to use a unitary token to access different service providers in distributed computer networks. Recently, some user identification schemes have been proposed for distributed computer networks. Unfortunately, most existing schemes cannot preserve user anonymity when possible attacks occur. Also, the additional time-synchronized mechanisms they use may cause extensive overhead costs. To overcome these drawbacks, we propose a single sign-on mechanism that is efficient, secure, and suitable for mobile devices in distributed computer networks.
IEEE 2012
TTAECD18 A Novel Data Embedding Method Using Adaptive Pixel Pair Matching
This paper proposes a new data-hiding method based on pixel pair matching (PPM). The basic idea of PPM is to use the values of a pixel pair as a reference coordinate, and to search for a coordinate in the neighborhood set of this pixel pair according to a given message digit. The pixel pair is then replaced by the searched coordinate to conceal the digit. Exploiting modification direction (EMD) and diamond encoding (DE) are two data-hiding methods proposed recently that are based on PPM. The maximum capacity of EMD is 1.161 bpp, and DE extends the payload of EMD by embedding digits in a larger notational system. The proposed method offers lower distortion than DE by providing more compact neighborhood sets and allowing embedded digits in any notational system. Compared with the optimal pixel adjustment process (OPAP) method, the proposed method always has lower distortion for various payloads. Experimental results reveal that the proposed method not only provides better performance than OPAP and DE, but is also secure under detection by some well-known steganalysis techniques.
IEEE 2012
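To make the PPM idea concrete, here is a sketch of EMD, the baseline scheme the abstract compares against (not the proposed APPM method). EMD hides one base-5 digit per pixel pair by changing at most one pixel by ±1, using the extraction function f(p1, p2) = (p1 + 2·p2) mod 5; since one digit of a 5-ary system is carried by two pixels, the capacity is log2(5)/2 ≈ 1.161 bpp, matching the abstract's figure:

```python
def emd_extract(p1, p2):
    """Extraction function of the EMD scheme for one pixel pair."""
    return (p1 + 2 * p2) % 5

def emd_embed(p1, p2, digit):
    """Embed one base-5 digit by changing at most one pixel by +/-1.
    The five candidate modifications cover all residues mod 5."""
    for dp1, dp2 in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]:
        q1, q2 = p1 + dp1, p2 + dp2
        if emd_extract(q1, q2) == digit:
            return q1, q2

pair = emd_embed(100, 100, 3)
print(pair, emd_extract(*pair))  # (100, 99) 3
```

DE and the paper's APPM generalize this pattern to larger notational systems with different (in APPM's case, more compact) neighborhood sets, which is where the distortion advantage comes from.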
TTAECD19 Characterizing the Efficacy of the NRL Network Pump in Mitigating Covert Timing Channels
The Naval Research Laboratory (NRL) Network Pump, or Pump, is a standard for mitigating covert channels that arise in a multilevel secure (MLS) system when a high user (HU) sends acknowledgements to a low user (LU). The issue here is that HU can encode information in the "timings" of the acknowledgements. The Pump aims at mitigating the covert timing channel by introducing buffering between HU and LU, as well as by adding noise to the acknowledgment timings. We model the working of the Pump in certain situations as a communication system with feedback and then use this perspective to derive an upper bound on the capacity of the covert channel between HU and LU in the Pump. This upper bound is presented in terms of a directed information flow over the dynamics of the system. We also present an achievable scheme that can transmit information over this channel. When the support of the noise added by the Pump to acknowledgment timings is finite, the achievable rate is nonzero, i.e., an infinite number of bits can be reliably communicated. If the support of the noise is infinite, the achievable rate is zero, and hence only a finite number of bits can be communicated.
IEEE 2012
TTAECD20 Design and Implementation of TARF: A Trust-Aware Routing Framework for WSNs
Multi-hop routing in wireless sensor networks (WSNs) offers little protection against identity deception through replayed routing information. An adversary can exploit this defect to launch various harmful, even devastating, attacks against the routing protocols, including sinkhole attacks, wormhole attacks, and Sybil attacks. The situation is further aggravated by mobile and harsh network conditions. Traditional cryptographic techniques and efforts at developing trust-aware routing protocols do not effectively address this severe problem. To secure WSNs against adversaries misdirecting the multi-hop routing, we have designed and implemented TARF, a robust trust-aware routing framework for dynamic WSNs. Without tight time synchronization or known geographic information, TARF provides trustworthy and energy-efficient routes. Most importantly, TARF proves effective against those harmful attacks developed out of identity deception; the resilience of TARF is verified through extensive evaluation with both simulation and empirical experiments on large-scale WSNs under various scenarios, including mobile and RF-shielding network conditions. Further, we have implemented a low-overhead TARF module in TinyOS; as demonstrated, this implementation can be incorporated into existing routing protocols with the least effort. Based on TARF, we also demonstrate a proof-of-concept mobile target detection application that functions well against an anti-detection mechanism.
IEEE 2012
TTAECD21 Risk-Aware Mitigation for MANET Routing Attacks
Mobile ad hoc networks (MANETs) are highly vulnerable to attacks due to the dynamic nature of their network infrastructure. Among these attacks, routing attacks have received considerable attention, since they can cause the most devastating damage to a MANET. Even though there exist several intrusion response techniques to mitigate such critical attacks, existing solutions typically attempt to isolate malicious nodes based on binary or naive fuzzy response decisions. However, binary responses may result in unexpected network partitions, causing additional damage to the network infrastructure, and naive fuzzy responses can lead to uncertainty in countering routing attacks in a MANET. In this paper, we propose a risk-aware response mechanism to systematically cope with identified routing attacks. Our risk-aware approach is based on an extended Dempster-Shafer mathematical theory of evidence, introducing a notion of importance factors. In addition, our experiments demonstrate the effectiveness of our approach with consideration of several performance metrics.
IEEE 2012
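The Dempster-Shafer machinery the abstract invokes can be illustrated with the classical rule of combination (the paper's extension with importance factors is not reproduced here; the hypothesis names and mass values below are invented for illustration):

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets; mass on conflicting (empty-intersection)
    pairs is discarded and the rest renormalized."""
    out, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            out[inter] = out.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    scale = 1.0 - conflict
    return {k: v / scale for k, v in out.items()}

M = frozenset({"malicious"})
U = M | frozenset({"benign"})   # total uncertainty: could be either
m1 = {M: 0.6, U: 0.4}           # e.g., evidence from route monitoring
m2 = {M: 0.5, U: 0.5}           # e.g., evidence from observed packet drops
fused = combine(m1, m2)
print(round(fused[M], 2))  # 0.8
```

Fusing the two bodies of evidence raises the belief committed to "malicious" from 0.6 to 0.8, which is the kind of graded (rather than binary) judgment a risk-aware response can act on.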
DOMAIN : CLOUD COMPUTING
TTASTD22, TTASTJ22, TTASTA22 Payments for Outsourced Computations
With the recent advent of cloud computing, the concept of outsourcing computations, initiated by volunteer computing efforts, is being revamped. While the two paradigms differ in several dimensions, they also share challenges stemming from the lack of trust between outsourcers and workers. In this work, we propose a unifying trust framework where correct participation is financially rewarded: neither participant is trusted, yet outsourced computations are efficiently verified and validly remunerated. We propose three solutions for this problem, relying on an offline bank to generate and redeem payments; the bank is oblivious to interactions between outsourcers and workers. We propose several attacks that can be launched against our framework and study the effectiveness of our solutions. We implemented our most secure solution, and our experiments show that it is efficient: the bank can perform hundreds of payment transactions per second, and the overheads imposed on outsourcers and workers are negligible.
IEEE 2012
TTASTJ23 In Cloud, Can Scientific Communities Benefit from the Economies of Scale?
The basic idea behind cloud computing is that resource providers offer elastic resources to end users. In this paper, we intend to answer one key question for the success of cloud computing: in the cloud, can small-to-medium-scale scientific communities benefit from the economies of scale? Our research contributions are threefold. First, we propose an innovative public cloud usage model for small-to-medium-scale scientific communities to utilize elastic resources on a public cloud site while maintaining flexible system control, i.e., to create, activate, suspend, resume, deactivate, and destroy their high-level management entities (service management layers) without knowing the details of management. Second, we design and implement an innovative system, Dawning Cloud, at the core of which are lightweight service management layers running on top of a common management service framework. The common management service framework of Dawning Cloud not only facilitates building lightweight service management layers for heterogeneous workloads, but also makes their management tasks simple. Third, we evaluate the systems comprehensively using both emulation and real experiments. We found that, for four traces of two typical scientific workloads, High-Throughput Computing (HTC) and Many-Task Computing (MTC), Dawning Cloud saves resource consumption by up to 59.5 and 72.6 percent for HTC and MTC service providers, respectively, and saves total resource consumption by up to 54 percent for the resource provider, with respect to two previous public cloud solutions. We conclude that small-to-medium-scale scientific communities can indeed benefit from the economies of scale of public clouds with the support of the enabling system.
IEEE 2012
TTASTJ24 Secure Erasure Code-Based Cloud Storage System with Secure Data Forwarding
A cloud storage system, consisting of a collection of storage servers, provides long-term storage services over the Internet. Storing data in a third party's cloud system causes serious concern over data confidentiality. General encryption schemes protect data confidentiality but also limit the functionality of the storage system, because few operations are supported over encrypted data. Constructing a secure storage system that supports multiple functions is challenging when the storage system is distributed and has no central authority. We propose a threshold proxy re-encryption scheme and integrate it with a decentralized erasure code such that a secure distributed storage system is formulated. The distributed storage system not only supports secure and robust data storage and retrieval, but also lets a user forward his data in the storage servers to another user without retrieving the data back. The main technical contribution is that the proxy re-encryption scheme supports encoding operations over encrypted messages as well as forwarding operations over encoded and encrypted messages. Our method fully integrates encrypting, encoding, and forwarding. We analyze and suggest suitable parameters for the number of copies of a message dispatched to storage servers and the number of storage servers queried by a key server. These parameters allow more flexible adjustment between the number of storage servers and robustness.
IEEE 2012
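The erasure-coding idea underlying the system above can be shown with the simplest possible instance: a (3, 2) code built from XOR parity, where any two of the three stored blocks suffice to recover the data. This is only a background sketch; the paper uses a decentralized erasure code combined with threshold proxy re-encryption, neither of which is reproduced here:

```python
def encode(block_a, block_b):
    """(3,2) erasure code: store a, b, and their XOR parity on
    three servers; any two surviving blocks recover the data."""
    parity = bytes(x ^ y for x, y in zip(block_a, block_b))
    return [block_a, block_b, parity]

def recover(blocks):
    """blocks: list of (server_index, data) from any two survivors."""
    d = dict(blocks)
    if 0 in d and 1 in d:
        return d[0], d[1]
    if 0 in d:  # lost block b: b = a XOR parity
        return d[0], bytes(x ^ y for x, y in zip(d[0], d[2]))
    return bytes(x ^ y for x, y in zip(d[1], d[2])), d[1]  # lost block a

stored = encode(b"hell", b"o!!!")
a, b = recover([(1, stored[1]), (2, stored[2])])  # server 0 is down
print(a + b)  # b'hello!!!'
```

Real deployments use codes tolerating more failures (e.g., Reed-Solomon), and the paper's contribution is making such encoding work over encrypted data; the XOR example only illustrates why redundancy, not replication, is what provides robustness here.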
DOMAIN : MOBILE COMPUTING
TTAECJ25 Improving QoS in High-
Speed Mobility Using
Bandwidth Maps
It is widely evidenced that location has a
significant influence on the actual
bandwidth that can be expected from
Wireless Wide Area Networks (WWANs),
e.g., 3G. Because a fast-moving vehicle
continuously changes its location,
vehicular mobile computing is confronted
with the possibility of significant
variations in available network
bandwidth. While it is difficult for
providers to eliminate bandwidth
disparity over a large service area, it
may be possible to map network
bandwidth to the road network through
repeated measurements. In this paper,
we report results of an extensive
measurement campaign to demonstrate
the viability of such bandwidth maps. We
show how bandwidth maps can be
interfaced with adaptive multimedia
servers and the emerging vehicular
communication systems that use
on-board mobile routers to deliver
Internet services to the passengers.
Using simulation experiments driven by
our measurement data, we quantify the
improvement in Quality of Service (QoS)
that can be achieved by taking
advantage of the geographical
knowledge of bandwidth provided by the
bandwidth maps. We find that our
approach reduces the frequency of
disruptions in perceived QoS for both
audio and video applications in
high-speed vehicular mobility by several
orders of magnitude.
IEEE 2012
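A bandwidth map of the kind described can be pictured as a lookup from road-segment ID to measured bandwidth statistics, which an adaptive media server consults to pick a sustainable bitrate before the vehicle enters a segment. All segment names and numbers below are hypothetical.

```python
# Hypothetical sketch of a bandwidth map: measured downlink bandwidth
# (kbit/s) per road segment, queried to pick a media bitrate ahead of
# time. Segment IDs and figures are invented for illustration.

BANDWIDTH_MAP = {  # segment_id -> conservative measured bandwidth (kbit/s)
    "A2-north-km12": 1800,
    "A2-north-km13": 350,   # known coverage hole
    "A2-north-km14": 2400,
}

BITRATES = [2000, 1000, 500, 250]  # available encodings, kbit/s

def pick_bitrate(segment_id: str, safety: float = 0.8) -> int:
    """Choose the highest bitrate below a safety fraction of expected bandwidth."""
    expected = BANDWIDTH_MAP.get(segment_id, min(BANDWIDTH_MAP.values()))
    for rate in BITRATES:
        if rate <= safety * expected:
            return rate
    return BITRATES[-1]  # fall back to the lowest encoding

print(pick_bitrate("A2-north-km12"))  # 1000
print(pick_bitrate("A2-north-km13"))  # 250
```

Adapting the bitrate before a coverage hole, rather than reacting inside it, is what reduces the QoS disruptions the abstract reports.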
TTAECJ26 Energy-Efficient
Strategies for Cooperative
Multichannel MAC
Protocols
Distributed Information Sharing (DISH)
is a new cooperative approach to
designing multichannel MAC protocols. It
aids nodes in their decision making
processes by compensating for their
missing information via information
sharing through neighboring nodes. This
approach was recently shown to
significantly boost the throughput of
multichannel MAC protocols. However, a
critical issue for ad hoc communication
devices, viz. energy efficiency, has yet to
be addressed. In this paper, we address
this issue by developing simple solutions
that reduce the energy consumption
without compromising the throughput
performance and meanwhile maximize
cost efficiency. We propose two energy-
efficient strategies: in-situ energy
conscious DISH, which uses existing
nodes only, and altruistic DISH, which
requires additional nodes called altruists.
We compare five protocols with respect
to these strategies and identify altruistic
DISH to be the right choice in general: it
1) conserves 40-80 percent of energy, 2)
maintains the throughput advantage,
and 3) more than doubles the cost
efficiency compared to protocols without
this strategy. On the other hand, our
study also shows that in-situ energy
conscious DISH is suitable only in certain
limited scenarios.
IEEE 2012
TTAECJ27 FESCIM: Fair, Efficient,
and Secure Cooperation
Incentive Mechanism for
Multi-hop Cellular
Networks
In multi-hop cellular networks, the
mobile nodes usually relay others'
packets for enhancing the network
performance and deployment. However,
selfish nodes usually do not cooperate
but make use of the cooperative nodes
to relay their packets, which has a
negative effect on the network fairness
and performance. In this paper, we
propose a fair and efficient incentive
mechanism to stimulate the node
cooperation. Our mechanism applies a
fair charging policy by charging the
source and destination nodes when both
of them benefit from the communication.
To implement this charging policy
efficiently, hashing operations are used
in the ACK packets to reduce the number
of public-key-cryptography operations.
Moreover, reducing the overhead of the
payment checks is essential for the
efficient implementation of the incentive
mechanism due to the large number of
payment transactions. Instead of
generating a check per message, a
small-size check can be generated per
route, and a check submission scheme is
proposed to reduce the number of
submitted checks and protect against
collusion attacks. Extensive analysis and
simulations demonstrate that our
mechanism can secure the payment and
significantly reduce the checks'
overhead, and the fair charging policy
can be implemented almost
computationally free by using hashing
operations.
IEEE 2012
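The "hashing operations in the ACK packets" idea resembles a standard one-way hash chain: sign a single anchor once, then release preimages one per ACK, so each hop can verify payment tokens with cheap hashes instead of public-key operations. This is a generic hash-chain sketch, not FESCIM's exact construction.

```python
import hashlib

# Generic one-way hash chain, the standard trick for replacing per-packet
# signatures with cheap hash checks (a sketch, not FESCIM's exact scheme).

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int) -> list[bytes]:
    """Build the chain backwards so that chain[0] = H^n(seed) is the anchor."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    chain.reverse()
    return chain

chain = make_chain(b"secret-seed", 5)
anchor = chain[0]  # distributed once, e.g. inside a signed route setup

def verify(anchor: bytes, token: bytes, i: int) -> bool:
    """Verify the i-th released token by hashing it back to the anchor."""
    for _ in range(i):
        token = H(token)
    return token == anchor

print(verify(anchor, chain[3], 3))   # True
print(verify(anchor, b"forged", 3))  # False
```

One signature amortized over many hash-verified tokens is why the abstract can claim the charging policy runs "almost computationally free".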
TTAECJ28 Topology Control in
Mobile Ad Hoc Networks
with Cooperative
Communications
Cooperative communication has received
tremendous interest for wireless
networks. Most existing works on
cooperative communications are focused
on link-level physical layer issues.
Consequently, the impacts of cooperative
communications on network-level upper
layer issues, such as topology control,
routing and network capacity, are largely
ignored. In this article, we propose a
Capacity-Optimized Cooperative (COCO)
topology control scheme to improve the
network capacity in MANETs by jointly
considering both upper layer network
capacity and physical layer cooperative
communications. Through simulations,
we show that physical layer cooperative
communications have significant impacts
on the network capacity, and the
proposed topology control scheme can
substantially improve the network
capacity in MANETs with cooperative
communications.
IEEE 2012
TTAECD29 Cooperative download in
vehicular environments
We consider a complex (i.e., nonlinear)
road scenario where users aboard
vehicles equipped with communication
interfaces are interested in downloading
large files from road-side Access Points
(APs). We investigate the possibility of
exploiting opportunistic encounters
among mobile nodes so to augment the
transfer rate experienced by vehicular
downloaders. To that end, we devise
solutions for the selection of carriers and
data chunks at the APs, and evaluate
them in real-world road topologies,
under different AP deployment
strategies. Through extensive
simulations, we show that carry &
forward transfers can significantly
increase the download rate of vehicular
users in urban/suburban environments,
and that such a result holds throughout
diverse mobility scenarios, AP
placements and network loads.
IEEE 2012
TTAECD30 Network Assisted Mobile
Computing with Optimal
Uplink Query Processing
Many mobile applications retrieve
content from remote servers via user
generated queries. Processing these
queries is often needed before the
desired content can be identified.
Processing the request on the mobile
devices can quickly sap the limited
battery resources. Conversely,
processing user-queries at remote
servers can have slow response times
due to communication latency incurred
during transmission of the potentially
large query. We evaluate a network-
assisted mobile computing scenario
where mid-network nodes with "leasing"
capabilities are deployed by a service
provider. Leasing computation power can
reduce battery usage on the mobile
devices and improve response times.
However, borrowing processing power
from mid-network nodes comes at a
leasing cost which must be accounted for
when making the decision of where
processing should occur. We study the
tradeoff between battery usage,
processing and transmission latency, and
mid-network leasing. We use the
dynamic programming framework to
solve for the optimal processing policies
that suggest the amount of processing to
be done at each mid-network node in
order to minimize the processing and
communication latency and processing
costs. Through numerical studies, we
examine the properties of the optimal
processing policy and the core tradeoffs
in such systems.
IEEE 2012
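A much-simplified stand-in for the paper's dynamic program: assume each mid-network node can either finish the processing for a node-specific leasing cost or forward the raw query onward at a transmission cost, and walk backwards from the server to find the cheapest processing point. The cost model here is invented for illustration.

```python
# Simplified stand-in for the optimal-placement dynamic program
# (assumed all-or-nothing processing and additive costs; the paper
# optimizes the *amount* of processing per node, which is richer).

def optimal_policy(lease, tx, server_cost):
    """lease[i]: cost to process fully at node i; tx[i]: cost to forward
    the raw query from node i to node i+1; server_cost: processing at the
    far server. Returns (total_cost, processing_node), where a node index
    of len(lease) means 'process at the server'."""
    n = len(lease)
    best_cost, best_node = server_cost, n
    # backward pass: best_cost = cheapest way to finish from node i+1 on
    for i in range(n - 1, -1, -1):
        forward = tx[i] + best_cost
        if lease[i] <= forward:
            best_cost, best_node = lease[i], i
        else:
            best_cost = forward
    return best_cost, best_node

cost, node = optimal_policy(lease=[9.0, 4.0, 7.0], tx=[2.0, 2.0, 2.0],
                            server_cost=8.0)
print(cost, node)  # 6.0 1  (forward once, then lease node 1)
```

The backward pass is the one-dimensional skeleton of the policy the abstract describes; battery cost can be folded into the per-node terms the same way.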
TTASTJ31 Smooth Trade-Offs
between Throughput and
Delay in Mobile Ad Hoc
Networks
Throughput capacity in mobile ad hoc
networks has been studied extensively
under many different mobility models.
However, most previous research
assumes global mobility, and the results
show that a constant per-node
throughput can be achieved at the cost
of very high delay. This leaves a large
gap: either low throughput with low delay
in static networks, or high throughput
with high delay in mobile networks. In this paper,
employing a practical restricted random
mobility model, we try to fill this gap.
Specifically, we assume that a network of
unit area with n nodes is evenly divided
into cells with an area of n^(-2α), each of
which is further evenly divided into
squares with an area of n^(-2β)
(0 ≤ α ≤ β ≤ 1/2). All nodes can only move inside
the cell which they are initially
distributed in, and at the beginning of
each time slot, every node moves from
its current square to a uniformly chosen
point in a uniformly chosen adjacent
square. By proposing a new multihop
relay scheme, we present smooth trade-
offs between throughput and delay by
controlling nodes' mobility. We also
consider a network of area n^γ (0 ≤ γ ≤ 1)
and find that network size does not
affect the results obtained before.
IEEE 2012
TTASTJ32 Stateless Multicast
Protocol for Ad Hoc
Networks
Multicast routing protocols typically rely
on the a priori creation of a multicast
tree (or mesh), which requires the
individual nodes to maintain state
information. In dynamic networks with
bursty traffic, where long periods of
silence are expected between the bursts
of data, this multicast state maintenance
adds a large amount of communication,
processing, and memory overhead for no
benefit to the application. Thus, we have
developed a stateless receiver-based
multicast (RBMulticast) protocol that
simply uses a list of the multicast
members' (e.g., sinks') addresses,
embedded in packet headers, to enable
receivers to decide the best way to
forward the multicast traffic. This
protocol, called Receiver-Based Multicast,
exploits the knowledge of the geographic
locations of the nodes to remove the
need for costly state maintenance (e.g.,
tree/mesh/neighbor table maintenance),
making it ideally suited for multicasting
in dynamic networks. RBMulticast was
implemented in the OPNET simulator and
tested using a sensor network
implementation. Both simulation and
experimental results confirm that
RBMulticast provides high success rates
and low delay without the burden of
state maintenance.
IEEE 2012
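The receiver-based idea can be reduced to a small sketch: the packet header carries the multicast members' coordinates, and each hop assigns every destination to the neighbor geographically closest to it, creating one sub-packet per chosen neighbor. RBMulticast itself splits destinations by geographic region around virtual nodes; this is a deliberately reduced illustration of stateless geographic splitting.

```python
import math

# Reduced sketch of receiver-based geographic multicast: destinations'
# coordinates travel in the packet header, so no tree/mesh state is kept.
# Each hop splits the destination list among its neighbors by distance.

def split_destinations(neighbors, destinations):
    """neighbors/destinations: {name: (x, y)}. Returns
    {neighbor_name: [destination names it should carry]}."""
    out = {}
    for dname, dpos in destinations.items():
        nearest = min(neighbors,
                      key=lambda n: math.dist(neighbors[n], dpos))
        out.setdefault(nearest, []).append(dname)
    return out

neighbors = {"n1": (1.0, 0.0), "n2": (0.0, 1.0)}
sinks = {"s1": (5.0, 0.5), "s2": (0.5, 5.0), "s3": (4.0, 0.0)}
print(split_destinations(neighbors, sinks))
# {'n1': ['s1', 's3'], 'n2': ['s2']}
```

Because the split is recomputed at every hop from header contents alone, no node ever stores tree, mesh, or neighbor-table state, which is the protocol's central claim.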
TTASTJ33 Handling Selfishness in
Replica Allocation over a
Mobile Ad Hoc Network
In a mobile ad hoc network, the mobility
and resource constraints of mobile nodes
may lead to network partitioning or
performance degradation. Several data
replication techniques have been
proposed to minimize performance
degradation. Most of them assume that
all mobile nodes collaborate fully in
terms of sharing their memory space. In
reality, however, some nodes may
selfishly decide only to cooperate
partially, or not at all, with other nodes.
These selfish nodes could then reduce
the overall data accessibility in the
network. In this paper, we examine the
impact of selfish nodes in a mobile ad
hoc network from the perspective of
replica allocation. We term this selfish
replica allocation. In particular, we
develop a selfish node detection
algorithm that considers partial
selfishness and novel replica allocation
techniques to properly cope with selfish
replica allocation. The conducted
simulations demonstrate that the
proposed approach outperforms
traditional cooperative replica allocation
techniques in terms of data accessibility,
communication cost, and average query
delay.
IEEE 2012
TTASTNS34 Secure High-Throughput
Multicast Routing in
Wireless Mesh Networks
Recent work in multicast routing for
wireless mesh networks has focused on
metrics that estimate link quality to
maximize throughput. Nodes must
collaborate in order to compute the path
metric and forward data. The assumption
that all nodes are honest and behave
correctly during metric computation,
propagation, and aggregation, as well as
during data forwarding, leads to
unexpected consequences in adversarial
networks where compromised nodes act
maliciously. In this work, we identify
novel attacks against high-throughput
multicast protocols in wireless mesh
networks. The attacks exploit the local
estimation and global aggregation of the
metric to allow attackers to attract a
large amount of traffic. We show that
these attacks are very effective against
multicast protocols based on high-
throughput metrics. We conclude that
aggressive path selection is a double-
edged sword: While it maximizes
throughput, it also increases attack
effectiveness in the absence of defense
mechanisms. Our approach to defend
against the identified attacks combines
measurement-based detection and
accusation-based reaction techniques.
The solution accommodates transient
network variations and is resilient
against attempts to exploit the defense
mechanism itself. A detailed security
analysis of our defense scheme
establishes bounds on the impact of
attacks. We demonstrate both the
attacks and our defense using ODMRP, a
representative multicast protocol for
wireless mesh networks, and SPP, an
adaptation of the well-known ETX unicast
metric to the multicast setting.
IEEE 2012
DOMAIN : ANDROID
TTASTA35, TTASTJ35
Ubisoap: A Service-
Oriented Middleware for
Ubiquitous Networking
The computing and networking capacities
of today's wireless portable devices allow
for ubiquitous services, which are
seamlessly networked. Indeed, wireless
handheld devices now embed the
necessary resources to act as both
service clients and providers. However,
the ubiquitous networking of services
remains challenged by the inherent
mobility and resource constraints of the
devices, which make services a priori
highly volatile. This paper discusses the
design, implementation, and
experimentation of the ubiSOAP service-
oriented middleware, which leverages
wireless networking capacities to
effectively enable the ubiquitous
networking of services. ubiSOAP
specifically defines a layered
communication middleware that
underlies standard SOAP-based
middleware, hence supporting legacy
Web Services while exploiting nowadays
ubiquitous connectivity.
IEEE 2012
TTASTA36, TTASTJ36
Ensuring Distributed
Accountability for Data
Sharing in the Cloud
Cloud computing enables highly scalable
services to be easily consumed over the
Internet on an as-needed basis. A major
feature of the cloud services is that
users' data are usually processed
remotely in unknown machines that
users do not own or operate. While
enjoying the convenience brought by this
new emerging technology, users' fears of
losing control of their own data
(particularly, financial and health data)
can become a significant barrier to the
wide adoption of cloud services. To
address this problem, in this paper, we
propose a novel highly decentralized
information accountability framework to
keep track of the actual usage of the
users' data in the cloud. In particular, we
propose an object-centered approach
that enables enclosing our logging
mechanism together with users' data and
policies. We leverage the JAR
programmable capabilities to both create
a dynamic and traveling object, and to
ensure that any access to users' data will
trigger authentication and automated
logging local to the JARs. To strengthen
users' control, we also provide
distributed auditing mechanisms. We
provide extensive experimental studies
that demonstrate the efficiency and
effectiveness of the proposed
approaches.
IEEE 2012
TTASTA37, TTASTJ37
Who, When, Where:
Timeslot Assignment to
Mobile Clients
We consider variations of a problem in
which data must be delivered to mobile
clients en route, as they travel toward
their destinations. The data can only be
delivered to the mobile clients as they
pass within range of wireless base
stations. Example scenarios include the
delivery of building maps to firefighters
responding to multiple alarms. We cast
this scenario as a parallel-machine
scheduling problem with the little-studied
property that jobs may have different
release times and deadlines when
assigned to different machines. We
present new algorithms and also adapt
existing algorithms, for both online and
offline settings. We evaluate these
algorithms on a variety of problem
instance types, using both synthetic and
real-world data, including several
geographical scenarios, and show that
our algorithms produce schedules
achieving near-optimal throughput.
IEEE 2012
TTASTA38 Characterizing the
Security Implications of
Third-Party Emergency
Alert Systems over
Cellular Text Messaging
Services
Cellular text messaging services are
increasingly being relied upon to
disseminate critical information during
emergencies. Accordingly, a wide
range of organizations including colleges
and universities now partner with third-
party providers that promise to improve
physical security by rapidly delivering
such messages. Unfortunately, these
products do not work as advertised due
to limitations of cellular infrastructure
and therefore provide a false
sense of security to their users. In this
paper, we perform the first extensive
investigation and
characterization of the limitations of an
Emergency Alert System (EAS) using text
messages as a security incident response
mechanism. We show that emergency
alert systems built on text messaging not
only cannot meet the 10 minute delivery
requirement mandated by the WARN Act,
but also potentially cause other voice and
SMS traffic to be blocked at rates upward
of 80 percent. We then show that our
results are representative of reality by
comparing them to a number of
documented but not previously
understood failures. Finally, we analyze a
targeted messaging mechanism as a
means of efficiently using currently
deployed infrastructure and third-party
EAS. In so doing, we demonstrate that
this increasingly deployed security
infrastructure does not achieve its stated
requirements for large populations.
IEEE 2012
TTASTA39 Design and
Implementation of
Improved Authentication
System for Android
Smartphone Users
The devices most often used for IT
services are changing from PCs and
laptops to smart phones and tablets.
These devices need to be small for
increased portability. These technologies
are convenient, but as the devices start
to contain increasing amounts of
important personal information, better
security is required. Security systems are
rapidly being developed, as well as
solutions such as remote control
systems. However, even with these
solutions, major problems could still
result after a mobile device is lost. In
this thesis, we present our upgraded
Lock Screen system, which is able to
support authentication for the user's
convenience and provide a good security
system for smart phones. We also
suggest an upgraded authentication
system for Android smart phones.
IEEE 2012
TTASTA40 Android Application for
Spiral Analysis in
Parkinson’s Disease
The paper presents an application for
spiral analysis in Parkinson's Disease
(PD). PD is one of the most common
degenerative disorders of the central
nervous system that affects the elderly.
Four cardinal symptoms of the disease
are tremor, rigidity, slowness of
movement, and postural instability. The
current diagnosis is based on clinical
observation, which relies on the skills
and experience of a trained specialist.
Thus, an additional method is desirable
to help in the diagnosis process and
possibly improve the detection of early
PD as well as the measurement of
disease severity. Many studies have
reported that spiral analysis may be
useful in the diagnosis of motor
dysfunction in PD patients. We therefore
implement a mobile, safe, easy-to-use,
inexpensive, and online application for
detection of movement disorders with a
comprehensive test analysis according to
the indices from Archimedean and
octagon spiral tracing tasks. We
introduce the octagon tracing task along
with the conventional Archimedean spiral
task because a shape tracing task with
clear sequential components may
increase the likelihood of detecting
tremors and other cardinal features of
PD. The widely used Android mobile
operating system, with the fastest
market-share growth among smartphone
platforms, is chosen as our development
platform. We also show that the
preliminary results of selected indices in
the application could potentially be used
to distinguish between PD patients and
healthy controls.
IEEE 2012
TTASTA41 Android Suburban Railway
Ticketing with GPS as
Ticket Checker
One of the biggest challenges in the
current ticketing facility is the queue for
buying suburban railway tickets. In this
fast-growing world of technology we still
stand in line, or buy with Oyster and
Octopus cards, for our suburban tickets,
which is frustrating when the queue is
long or when we forget our cards. This
paper presents Android Suburban
Railway (ASR) ticketing, aimed mainly at
buying suburban tickets, which is more
challenging than booking long-journey
tickets: existing 'M-ticket' services fail
for suburban (local travel) tickets. An
ASR ticket can be bought with just a
smartphone application, and the
suburban railway ticket is carried in the
smartphone as a QR (Quick Response)
code. The application uses the
smartphone's GPS facility to validate and
delete the ticket automatically, a specific
interval of time after the user reaches
the destination. The user's ticket
information is stored in a cloud database
for security, which is missing in the
present suburban system. The ticket
checker is also provided with a checker
application to look up the user's ticket by
ticket number in the cloud database.
IEEE 2012
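The GPS-based auto-invalidation could be sketched as a haversine distance check against the ticket's destination station; the coordinates and the 500 m radius below are assumptions for illustration, not details from the paper.

```python
import math

# Sketch of GPS-based ticket invalidation (assumed threshold): the app
# compares the phone's position with the ticket's destination station and
# marks the ticket used once the holder is within range.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in kilometres."""
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def ticket_expired(phone, destination, radius_km=0.5):
    """True once the phone is within radius_km of the destination station."""
    return haversine_km(*phone, *destination) <= radius_km

chennai_central = (13.0827, 80.2757)  # illustrative station coordinates
print(ticket_expired((13.0830, 80.2760), chennai_central))  # True: at station
print(ticket_expired((13.0067, 80.2206), chennai_central))  # False: far away
```

In practice the check would run against a timer as well, matching the abstract's "after a specific interval of time" once the destination is reached.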
TTASTA42 On the Use of Mobile
Phones and Biometrics for
Accessing Restricted Web
Services
In this study, an application that allows a
mobile phone to be used as a biometric-
capture device is shown. The main
contribution of our proposal is that this
capture, and later recognition, can be
performed during a standard web
session, using the same architecture that
is used in a personal computer (PC), thus
allowing a multiplatform (PC, personal
digital assistant (PDA), mobile phone,
etc.) biometric web access. The review,
which is from both an academic and
commercial point of view, of the
biometry and mobile device state of the
art shows that in other related works,
the biometric capture and recognition is
either performed locally in the mobile or
remotely but using special
communication protocols and/or
connection ports with the server. The
second main contribution of this study is
an in-depth analysis of the present
mobile web-browser limitations; thus, it
is concluded that, in general, it is
impossible to use the same technologies
that can be used to capture biometrics in
PC platforms (i.e., Applet Java, ActiveX
Control, JavaScript, or Flash); therefore,
new solutions, as shown here, are
needed.
IEEE 2012
TTASTA43 Android-Based Mobile
Payment Service
Protected by 3-Factor
Authentication and Virtual
Private Ad Hoc
Networking
This work develops a pair of mobile
payment devices, a counter reader and a
paying client, on Android-based smart
phone platforms for emerging mobile
payment or electronic wallet services.
These two devices featuring 3-factor
authentication and virtual private Ad Hoc
networking can make transactions easier
and more secure than traditional credit
cards or electronic payment cards. The
3-factor authentication feature combines
PIN code authentication, USIM card
authentication, and facial biometric
authentication. In particular, this work
proposes and implements a simple but
practical method, Fast Semi-3D Face
Vertical Pose Recovery, to cope with the
vertical pose variation issue that has
long troubled face recognition systems.
Experimental results show the proposed
method can significantly raise the
recognition accuracy and enlarge the
operating angle range of a face
recognition system under various vertical
pose conditions. In addition, the virtual
private ad hoc networking feature, based
on the OpenSSL and i-Jetty open-source
libraries, is integrated seamlessly.
IEEE 2012
TTASTA44 Research and Design of
Chatting Room System
based on Android
Bluetooth
Bluetooth provides a low-power and low-
cost wireless connection among mobile
devices and their accessories, which is
an open standard for implementing a
short-range wireless communication.
Bluetooth is integrated into Android, a
mainstream smart phone platform, as a
means of mobile communication. Android
has attracted a large number of
developers because it is open source and
offers a powerful application API. This
article takes the design of a Bluetooth
chat room as an example to study
Bluetooth and its architecture on the
Android platform, and introduces the
process of realizing Bluetooth
communication in detail. We then design
and implement a chat room based on
Bluetooth using the APIs of the Android
platform. Finally, further prospects for
the system's functionality are discussed.
IEEE 2012
TTASTA45 Android application for
sending SMS messages
with speech recognition
interface
Voice SMS is an application developed in
this work that allows a user to record
and convert spoken messages into SMS
text message. User can send messages
to the entered phone number or the
number of contact from the phonebook.
Speech recognition is done via the
Internet, connecting to Google's server.
The application is adapted to input
messages in English. The tool used is the
Android SDK, and installation is done on
a mobile phone running the Android
operating system. In this article we give
the basic features of the speech
recognition and the algorithm used.
Speech recognition for Voice SMS uses a
technique based on hidden Markov
models (HMMs), currently the most
successful and most flexible approach to
speech recognition.
IEEE 2012
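The HMM decoding idea behind such recognizers can be shown in miniature with Viterbi decoding over a toy two-state model; the states, probabilities, and observation symbols below are invented, and real recognizers use phoneme-level models with far larger state spaces.

```python
# Toy Viterbi decoding over a 2-state HMM, illustrating the HMM technique
# the abstract refers to. All model parameters are invented.

states = ("S", "T")                       # hypothetical phonetic states
start = {"S": 0.6, "T": 0.4}              # initial state probabilities
trans = {"S": {"S": 0.7, "T": 0.3},       # state transition probabilities
         "T": {"S": 0.4, "T": 0.6}}
emit = {"S": {"a": 0.5, "b": 0.5},        # observation probabilities
        "T": {"a": 0.1, "b": 0.9}}

def viterbi(obs):
    """Return the most likely state sequence for the observations."""
    V = [{s: (start[s] * emit[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        layer = {}
        for s in states:
            prob, path = max(
                (V[-1][p][0] * trans[p][s] * emit[s][o], V[-1][p][1] + [s])
                for p in states)
            layer[s] = (prob, path)
        V.append(layer)
    return max(V[-1].values())[1]

print(viterbi(["a", "b", "b"]))  # ['S', 'T', 'T']
```

The dynamic-programming recurrence, keeping only the best path into each state at each step, is what makes HMM decoding tractable for long utterances.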
TTASTA46 Android Mobile
Augmented Reality
Application Based on
Different Learning
Theories for Primary
School Children
Due to advancements in the mobile
technology and the presence of strong
mobile platforms, it is now possible to
use the revolutionizing augmented
reality technology in mobiles. This
research work is based on the
understanding of different types of
learning theories, concept of mobile
learning and mobile augmented reality
and discusses how applications using
these advanced technologies can shape
today's education systems.
IEEE 2012
TTASTA47 Android Botnets on the
Rise: Trends and
Characteristics
Smart phones are the latest technology
trend of the 21st century. Today's social
expectation of always staying connected
and the need for an increase in
productivity are the reasons for the
increase in smart phone usage. One of
the leaders of the smart phone evolution
is Google's Android Operating System
(OS). The openness of the design and
the ease of customizing are the aspects
that are placing Android ahead of the
other smart phone OSs. Such popularity
has not only led to an increase in
Android usage but also to the rise of
Android malware. Although such
malware is not having a significant
impact on the popularity of Android
smart phones, it is however creating new
possibilities for threats. One such threat
is the impact of botnets on Android
smart phones. Recently, malware has
surfaced that revealed specific
characteristics relating to traditional
botnet activities. Malware such as
Geinimi, Pjapps, DroidDream, and
RootSmart all display traditional botnet
functionalities. These malicious
applications show that Android botnets
are a reality. From a security perspective
it is important to understand the
underlying structure of an Android
botnet. This paper evaluates Android
malware with the purpose of identifying
specific trends and characteristics
relating to botnet behavior. The botnet
trends and characteristics are detected
by a comprehensive literature study of
well-known Android malware
applications. The identified
characteristics are then further explored
in terms of the Android Botnet
Development Model and the Android
Botnet Discovery Process. The common
identified trends and characteristics aid
the understanding of Android botnet
activities as well as the possible
discovery of an Android bot.
IEEE 2012
DOMAIN : WEB MINING
CODE
PROJECT TITLE
DESCRIPTION
REFERENCE
TTAECJ48 Learn to Personalized
Image Search from the
Photo Sharing Websites
Increasingly developed social sharing
websites like Flickr and YouTube allow
users to create, share, annotate, and
comment on media. The large-scale user-
generated metadata not only facilitate
users in sharing and organizing
multimedia content, but provide useful
information to improve media retrieval
and management. Personalized search
serves as one of such examples where
the web search experience is improved
by generating the returned list according
to the modified user search intents. In
this paper, we exploit the social
annotations and propose a novel
framework simultaneously considering
the user and query relevance to learn to
personalized image search. The basic
premise is to embed the user preference
and query-related search intent into
user-specific topic spaces. Since the
users' original annotation is too sparse
for topic modeling, we need to enrich
users' annotation pool before user-
specific topic spaces construction. The
proposed framework contains two
components: (1) a ranking-based multi-
correlation tensor factorization model is
proposed to perform annotation
prediction, which is considered as users'
potential annotations for the images; (2)
we introduce user-specific topic modeling
to map the query relevance and user
preference into the same user-specific
topic space. For performance evaluation,
two resources involved with users' social
activities are employed. Experiments on
a large-scale Flickr dataset demonstrate
the effectiveness of the proposed
method.
IEEE 2012
DOMAIN : DATA MINING
TTASTD49 A Genetic Programming
Approach to Record De-
duplication
Several systems that rely on consistent
data to offer high-quality services, such
as digital libraries and e-commerce
brokers, may be affected by the
existence of duplicates, quasi replicas, or
near-duplicate entries in their
repositories. Because of that, there have
been significant investments from private
and government organizations for
developing methods for removing
replicas from their data repositories. This is
due to the fact that clean and replica-
free repositories not only allow the
retrieval of higher quality information but
also lead to more concise data and to
potential savings in computational time
and resources to process this data. In
this paper, we propose a genetic
programming approach to record de-
duplication that combines several
different pieces of evidence extracted
from the data content to find a de-
duplication function that is able to
identify whether two entries in a
repository are replicas or not. As shown
by our experiments, our approach
outperforms an existing state-of-the-art
method found in the literature.
Moreover, the suggested functions are
computationally less demanding since
they use fewer pieces of evidence. In
addition, our genetic programming
approach is capable of automatically
adapting these functions to a given fixed
replica identification boundary, freeing
the user from the burden of having to
choose and tune this parameter.
IEEE 2012
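The "combine several pieces of evidence" step can be illustrated with per-field similarity features feeding a scoring function. In the paper that function is evolved by genetic programming; here a hand-written weighted sum stands in for one candidate individual of that search, and the records and weights are invented.

```python
# Illustration of combining per-field evidence for record deduplication.
# The paper *evolves* the combination function with genetic programming;
# this fixed weighted sum represents one candidate individual only.

def jaccard(a: str, b: str) -> float:
    """Token-set similarity between two field values."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def score(rec1, rec2, weights=(0.6, 0.4)):
    """One candidate dedup function: weighted title + author evidence."""
    return (weights[0] * jaccard(rec1["title"], rec2["title"])
            + weights[1] * jaccard(rec1["author"], rec2["author"]))

r1 = {"title": "a genetic programming approach", "author": "de carvalho"}
r2 = {"title": "genetic programming approach", "author": "de carvalho"}
r3 = {"title": "discover dependencies from data", "author": "liu"}

print(score(r1, r2) > 0.8)   # True: likely replicas
print(score(r1, r3) < 0.2)   # True: distinct records
```

GP searches over expression trees built from such features and operators, comparing each candidate function against labeled replica pairs.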
TTASTD50 Discover Dependencies
from Data—A Review
Functional and inclusion dependency
discovery is important to knowledge
discovery, database semantics analysis,
database design, and data quality
assessment. Motivated by the
importance of dependency discovery,
this paper reviews the methods for
functional dependency, conditional
functional dependency, approximate
functional dependency, and inclusion
dependency discovery in relational
databases and a method for discovering
XML functional dependencies.
IEEE 2012
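The core test behind functional-dependency discovery is small: X → Y holds in a relation iff no two rows agree on X but differ on Y. Discovery algorithms such as TANE organize many of these checks over a lattice of attribute sets; the sketch below shows only the single check, on invented data.

```python
# Single functional-dependency check: X -> Y holds iff each X-value maps
# to exactly one Y-value. (Discovery algorithms run many such checks over
# candidate attribute sets; this shows only one.)

def fd_holds(rows, lhs, rhs):
    """rows: list of dicts; lhs/rhs: tuples of attribute names."""
    seen = {}
    for row in rows:
        x = tuple(row[a] for a in lhs)
        y = tuple(row[a] for a in rhs)
        if seen.setdefault(x, y) != y:
            return False  # same X, different Y: FD violated
    return True

emp = [
    {"dept": "net", "city": "Chennai", "name": "Asha"},
    {"dept": "net", "city": "Chennai", "name": "Ravi"},
    {"dept": "db",  "city": "Pune",    "name": "Mala"},
]
print(fd_holds(emp, ("dept",), ("city",)))  # True: dept -> city
print(fd_holds(emp, ("city",), ("name",)))  # False: Chennai has two names
```

Approximate and conditional variants, also covered by the review, relax this check to tolerate a bounded number of violating rows or restrict it to rows matching a pattern.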
TTASTD51
Tree-Based Mining for Discovering Patterns of
Human Interaction in Meetings
Discovering semantic knowledge is
significant for understanding and
interpreting how people interact in a
meeting discussion. In this paper, we
propose a mining method to extract
frequent patterns of human interaction
based on the captured content of
face-to-face meetings. Human
interactions, such as proposing an idea,
giving comments, and expressing a
positive opinion, indicate user intention
toward a topic or role in a discussion.
Human interaction flow in a discussion
session is represented as a tree.
Tree-based interaction mining algorithms
are designed to analyze the structures of
the trees and to extract interaction flow
patterns. The experimental results show
that we can successfully extract several
interesting patterns that are useful for
the interpretation of human behavior in
meeting discussions, such as determining
frequent interactions, typical interaction
flows, and relationships between
different types of interactions.
IEEE 2012
TTASTD52
Automatic Discovery of
Personal Name Aliases from the Web
An individual is typically referred to by numerous name aliases on
the web. Accurate identification of aliases of a given person name is
useful in various web-related tasks such as information retrieval,
sentiment analysis, personal name disambiguation, and relation
extraction. We propose a method to extract aliases of a given
personal name from the web. Given a personal name, the proposed
method first extracts a set of candidate aliases. Second, we rank the
extracted candidates according to the likelihood of a candidate being
a correct alias of the given name. We propose a novel, automatically
extracted lexical pattern-based approach to efficiently extract a
large set of candidate aliases from snippets retrieved from a web
search engine. We define numerous ranking scores to evaluate
candidate aliases using three approaches: lexical pattern frequency,
word co-occurrences in an anchor text graph, and page counts on the
web. To construct a robust alias detection system, we integrate the
different ranking scores into a single ranking function using ranking
support vector machines. We evaluate the proposed method on three
data sets: an English personal names data set, an English place names
data set, and a Japanese personal names data set. The proposed method
outperforms numerous baselines and previously proposed name alias
extraction methods, achieving a statistically significant mean
reciprocal rank (MRR) of 0.67. Experiments carried out using location
names and Japanese personal names suggest the possibility of
extending the proposed method to extract aliases for different types
of named entities and for different languages. Moreover, the aliases
extracted using the proposed method are successfully utilized in an
information retrieval task and improve recall by 20 percent in a
relation-detection task.
IEEE 2012
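The candidate-extraction step above can be sketched with a few hand-written lexical patterns applied to search-engine snippets. In the paper such patterns are learned automatically; the patterns, names, and snippets below are illustrative assumptions only.

```python
import re

# Hypothetical search-engine snippets (illustrative, not the paper's data).
snippets = [
    "Will Smith, also known as The Fresh Prince, starred in ...",
    "The Fresh Prince, aka Will Smith, released ...",
    "Will Smith (nicknamed The Fresh Prince) ...",
]

# A few hand-written lexical patterns; the paper extracts such patterns
# automatically from the web.
PATTERNS = [
    r"{name}, also known as ([A-Z][\w ]+?)[,.(]",
    r"{name}, aka ([A-Z][\w ]+?)[,.(]",
    r"{name} \(nicknamed ([A-Z][\w ]+?)\)",
]

def candidate_aliases(name, snippets):
    """Collect alias candidates matched by any lexical pattern."""
    cands = set()
    for snip in snippets:
        for pat in PATTERNS:
            m = re.search(pat.format(name=re.escape(name)), snip)
            if m:
                cands.add(m.group(1).strip())
    return cands
```

The ranking stage (pattern frequency, anchor-text co-occurrence, page counts, combined with ranking SVMs) would then score each candidate; it is not sketched here.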
TTASTD53
Horizontal Aggregations in SQL to Prepare Data Sets
for Data Mining Analysis
Preparing a data set for analysis is generally the most
time-consuming task in a data mining project, requiring many complex
SQL queries, joining tables, and aggregating columns. Existing SQL
aggregations have limitations to prepare data sets because they
return one column per aggregated group. In general, a significant
manual effort is required to build data sets where a horizontal
layout is required. We propose simple, yet powerful, methods to
generate SQL code to return aggregated columns in a horizontal
tabular layout, returning a set of numbers instead of one number per
row. This new class of functions is called horizontal aggregations.
Horizontal aggregations build data sets with a horizontal
denormalized layout (e.g., point-dimension, observation-variable,
instance-feature), which is the standard layout required by most data
mining algorithms. We propose three fundamental methods to evaluate
horizontal aggregations: CASE, exploiting the programming CASE
construct; SPJ, based on standard relational algebra operators (SPJ
queries); and PIVOT, using the PIVOT operator, which is offered by
some DBMSs. Experiments with large tables compare the proposed query
evaluation methods. Our CASE method has similar speed to the PIVOT
operator and is much faster than the SPJ method. In general, the CASE
and PIVOT methods exhibit linear scalability, whereas the SPJ method
does not.
IEEE 2012
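The CASE-based horizontal aggregation described above can be sketched by generating one CASE expression per pivoted column. The toy `sales` table and month list below are made-up illustrations, run against the stdlib `sqlite3` module rather than a production DBMS.

```python
import sqlite3

# Toy transactions table; we pivot amounts by month into one row per store,
# mimicking the CASE-based horizontal aggregation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (store TEXT, month TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("A", "Jan", 10), ("A", "Feb", 5), ("B", "Jan", 7), ("A", "Jan", 3)],
)

months = ["Jan", "Feb"]
# Generate one CASE expression per pivoted column.
cols = ", ".join(
    f"SUM(CASE WHEN month = '{m}' THEN amount ELSE 0.0 END) AS {m}"
    for m in months
)
query = f"SELECT store, {cols} FROM sales GROUP BY store ORDER BY store"
rows = conn.execute(query).fetchall()
# rows -> [('A', 13.0, 5.0), ('B', 7.0, 0.0)]
```

Each distinct month value becomes its own column, so every store ends up on one row in the horizontal layout that most mining algorithms expect.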
TTASTD54
Outsourced Similarity Search on Metric Data
Assets
This paper considers a cloud computing setting in which similarity
querying of metric data is outsourced to a service provider. The data
is to be revealed only to trusted users, not to the service provider
or anyone else. Users query the server for the data objects most
similar to a query example. Outsourcing offers the data owner
scalability and a low initial investment. The need for privacy may be
due to the data being sensitive (e.g., in medicine), valuable (e.g.,
in astronomy), or otherwise confidential. Given this setting, the
paper presents techniques that transform the data prior to supplying
it to the service provider for similarity queries on the transformed
data. Our techniques provide interesting trade-offs between query
cost and accuracy. They are then further extended to offer an
intuitive privacy guarantee. Empirical studies with real data
demonstrate that the techniques are capable of offering privacy while
enabling efficient and accurate processing of similarity queries.
IEEE 2012
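One generic way to query transformed metric data, sketched below under stated assumptions (this is a standard pivot-embedding idea, not the paper's exact transformation): the owner keeps secret pivot objects and uploads only each object's distances to those pivots, and the server prunes candidates via the triangle inequality without ever seeing raw values.

```python
# Sketch: outsourced similarity search over transformed data.
# The owner's metric, pivots, and objects below are all hypothetical.

def dist(a, b):
    """The owner's metric (here, 1-D absolute difference)."""
    return abs(a - b)

pivots = [0.0, 100.0]                  # secret to the owner
objects = {"x": 12.0, "y": 55.0, "z": 90.0}

# What the server stores: object id -> distances to pivots only.
server_index = {k: [dist(v, p) for p in pivots] for k, v in objects.items()}

def lower_bound(q_vec, o_vec):
    """Triangle-inequality lower bound on dist(query, object)."""
    return max(abs(q - o) for q, o in zip(q_vec, o_vec))

query = 50.0
q_vec = [dist(query, p) for p in pivots]   # computed by a trusted user
radius = 10.0
# Server-side pruning: drop objects whose lower bound exceeds the radius.
candidates = [k for k, v in server_index.items()
              if lower_bound(q_vec, v) <= radius]
```

Only surviving candidates need a (user-side) exact distance check, which is the query cost vs. accuracy trade-off the abstract alludes to.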
TTASTD55
On the Spectral Characterization and Scalable Mining of Network
Communities
Network communities refer to groups of vertices within which
connecting links are dense but between which they are sparse. A
network community mining problem (NCMP for short) is concerned with
the problem of finding all such communities from a given network. A
wide variety of applications can be formulated as NCMPs, ranging from
social and/or biological network analysis to web mining and
searching. So far, many algorithms addressing NCMPs have been
developed, and most of them fall into the categories of either
optimization-based or heuristic methods. Distinct from the existing
studies, the work presented in this paper explores the notion of
network communities and their properties based on the dynamics of a
naturally introduced stochastic model. In the paper, a relationship
between the hierarchical community structure of a network and the
local mixing properties of such a stochastic model is established
with large-deviation theory. Topological information regarding the
community structures hidden in networks can be inferred from their
spectral signatures. Based on the above-mentioned relationship, this
work proposes a general framework for characterizing, analyzing, and
mining network communities. Utilizing the two basic properties of
metastability, i.e., being locally uniform and temporarily fixed, an
efficient implementation of the framework, called the LM algorithm,
has been developed that can scalably mine communities hidden in
large-scale networks. The effectiveness and efficiency of the LM
algorithm have been theoretically analyzed as well as experimentally
validated.
IEEE 2012
TTASTD56
Mining Web Graphs for Recommendations
With the exponential explosion of various contents generated on the
Web, recommendation techniques have become increasingly
indispensable. Innumerable different kinds of recommendations are
made on the Web every day, including movie, music, image, and book
recommendations, query suggestions, tag recommendations, etc. No
matter what types of data sources are used for the recommendations,
essentially these data sources can be modeled in the form of various
types of graphs. In this paper, aiming at providing a general
framework on mining Web graphs for recommendations, (1) we first
propose a novel diffusion method which propagates similarities
between different nodes and generates recommendations; (2) then we
illustrate how to generalize different recommendation problems into
our graph diffusion framework. The proposed framework can be utilized
in many recommendation tasks on the World Wide Web, including query
suggestions, tag recommendations, expert finding, image
recommendations, image annotations, etc. The experimental analysis on
large data sets shows the promising future of our work.
IEEE 2012
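The similarity-propagation idea can be illustrated on a tiny query-URL bipartite graph: queries sharing clicked URLs become similar, so one query can be suggested for another. The click log and the one-step Jaccard overlap below are made-up simplifications, not the paper's exact diffusion rule.

```python
# Hypothetical click log: query -> set of clicked URLs.
edges = {
    "apple pie": {"recipes.com", "baking.org"},
    "apple tart": {"recipes.com"},
    "car rental": {"rent.com"},
}

def one_step_similarity(q1, q2):
    """Jaccard overlap of clicked URLs = one diffusion step between queries."""
    n1, n2 = edges[q1], edges[q2]
    return len(n1 & n2) / len(n1 | n2)

def suggest(query):
    """Recommend the other query most similar to the given one."""
    scores = {q: one_step_similarity(query, q) for q in edges if q != query}
    return max(scores, key=scores.get)
```

Repeating such steps propagates similarity further through the graph, which is the general diffusion mechanism the framework builds on.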
TTAECJ57
Ranking Model Adaptation for Domain-Specific Search
With the explosive emergence of vertical search domains, applying the
broad-based ranking model directly to different domains is no longer
desirable due to domain differences, while building a unique ranking
model for each domain is both laborious for labeling data and
time-consuming for training models. In this paper, we address these
difficulties by proposing a regularization-based algorithm called
ranking adaptation SVM (RA-SVM), through which we can adapt an
existing ranking model to a new domain, so that the amount of labeled
data and the training cost is reduced while the performance is still
guaranteed. Our algorithm only requires the predictions from the
existing ranking models, rather than their internal representations
or the data from auxiliary domains. In addition, we assume that
documents similar in the domain-specific feature space should have
consistent rankings, and add some constraints to control the margin
and slack variables of RA-SVM adaptively. Finally, ranking
adaptability measurement is proposed to quantitatively estimate
whether an existing ranking model can be adapted to a new domain.
Experiments performed over Letor and two large-scale data sets
crawled from a commercial search engine demonstrate the applicability
of the proposed ranking adaptation algorithms and the ranking
adaptability measurement.
IEEE 2012
TTAECD58
Segmentation and
Sampling of Moving Object Trajectories Based on
Representativeness
Moving Object Databases (MODs), although ubiquitous, still call for
methods that will be able to understand, search, analyze, and browse
their spatiotemporal content. In this paper, we propose a method for
trajectory segmentation and sampling based on the representativeness
of the (sub)trajectories in the MOD. In order to find the most
representative subtrajectories, the following methodology is
proposed. First, a novel global voting algorithm is performed, based
on local density and trajectory similarity information. This method
is applied for each segment of the trajectory, forming a local
trajectory descriptor that represents line segment
representativeness. The sequence of this descriptor over a trajectory
gives the voting signal of the trajectory, where high values
correspond to the most representative parts. Then, a novel
segmentation algorithm is applied on this signal that automatically
estimates the number of partitions and the partition borders,
identifying homogeneous partitions with respect to their
representativeness. Finally, a sampling method over the resulting
segments yields the most representative subtrajectories in the MOD.
Our experimental results on synthetic and real MODs verify the
effectiveness of the proposed scheme, also in comparison with other
sampling techniques.
IEEE 2012
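The per-segment voting signal can be sketched with a toy rule: each line segment of a trajectory gets one vote from every other trajectory that passes near its midpoint. The trajectories and the epsilon box-test below are illustrative simplifications of the density/similarity voting described above.

```python
# Hypothetical 2-D trajectories as point sequences.
trajectories = {
    "t1": [(0, 0), (1, 0), (2, 0), (3, 5)],
    "t2": [(0, 0.1), (1, 0.1), (2, 0.1)],
    "t3": [(0, -0.1), (1, -0.1), (2, -0.1)],
}

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def votes_for(tid, eps=0.5):
    """Voting signal: per segment, how many other trajectories pass nearby."""
    pts = trajectories[tid]
    signal = []
    for seg in zip(pts, pts[1:]):
        m = midpoint(*seg)
        v = sum(
            1
            for other, opts in trajectories.items()
            if other != tid
            and any(abs(m[0] - x) <= eps and abs(m[1] - y) <= eps
                    for x, y in opts)
        )
        signal.append(v)
    return signal
```

High values mark well-supported (representative) parts; in this toy example the first two segments of `t1` are shared by both neighbors while the final detour gets no votes, which is exactly the kind of signal the segmentation step would then partition.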
TTASTJ59
Effective Pattern Discovery
for Text Mining
Many data mining techniques have been proposed for mining useful
patterns in text documents. However, how to effectively use and
update discovered patterns is still an open research issue,
especially in the domain of text mining. Since most existing text
mining methods adopted term-based approaches, they all suffer from
the problems of polysemy and synonymy. Over the years, people have
often held the hypothesis that pattern (or phrase)-based approaches
should perform better than the term-based ones, but many experiments
do not support this hypothesis. This paper presents an innovative and
effective pattern discovery technique, which includes the processes
of pattern deploying and pattern evolving, to improve the
effectiveness of using and updating discovered patterns for finding
relevant and interesting information. Substantial experiments on the
RCV1 data collection and TREC topics demonstrate that the proposed
solution achieves encouraging performance.
IEEE 2012
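The discovery step can be illustrated with a minimal frequent-termset miner: count term pairs that co-occur in enough documents. The mini-corpus and support threshold are made up, and the paper's pattern deploying/evolving stages are not sketched here.

```python
from itertools import combinations
from collections import Counter

# Hypothetical mini-corpus; each document reduced to its set of terms.
docs = [
    {"data", "mining", "pattern"},
    {"data", "mining", "text"},
    {"pattern", "mining", "text"},
    {"sports", "news"},
]

def frequent_termsets(docs, size=2, min_support=2):
    """Count co-occurring term tuples and keep those meeting min_support."""
    counts = Counter()
    for d in docs:
        for combo in combinations(sorted(d), size):
            counts[combo] += 1
    return {s: n for s, n in counts.items() if n >= min_support}

fs = frequent_termsets(docs)
```

Pattern-based methods then weight and deploy such termsets (rather than single terms) to sidestep polysemy and synonymy.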
TTASTJ60
Incremental Information Extraction Using Relational Databases
Information extraction systems are traditionally implemented as a
pipeline of special-purpose processing modules targeting the
extraction of a particular kind of information. A major drawback of
such an approach is that whenever a new extraction goal emerges or a
module is improved, extraction has to be reapplied from scratch to
the entire text corpus, even though only a small part of the corpus
might be affected. In this paper, we describe a novel approach for
information extraction in which extraction needs are expressed in the
form of database queries, which are evaluated and optimized by
database systems. Using database queries for information extraction
enables generic extraction and minimizes reprocessing of data by
performing incremental extraction to identify which part of the data
is affected by the change of components or goals. Furthermore, our
approach provides automated query generation components so that
casual users do not have to learn the query language in order to
perform extraction. To demonstrate the feasibility of our incremental
extraction approach, we performed experiments to highlight two
important aspects of an information extraction system: efficiency and
quality of extraction results. Our experiments show that in the event
of deployment of a new module, our incremental extraction approach
reduces the processing time by 89.64 percent as compared to a
traditional pipeline approach. By applying our methods to a corpus of
17 million biomedical abstracts, our experiments show that the query
performance is efficient for real-time applications. Our experiments
also revealed that our approach achieves high-quality extraction
results.
IEEE 2012
TTASTJ61
Learning Bregman Distance Functions for
Semi-Supervised Clustering
Learning distance functions with side information plays a key role in
many data mining applications. Conventional distance metric learning
approaches often assume that the target distance function is
represented in some form of Mahalanobis distance. These approaches
usually work well when data are in low dimensionality, but often
become computationally expensive or even infeasible when handling
high-dimensional data. In this paper, we propose a novel scheme of
learning nonlinear distance functions with side information. It aims
to learn a Bregman distance function using a nonparametric approach
that is similar to Support Vector Machines. We emphasize that the
proposed scheme is more general than the conventional approach for
distance metric learning, and is able to handle high-dimensional data
efficiently. We verify the efficacy of the proposed distance learning
method with extensive experiments on semi-supervised clustering. The
comparison with state-of-the-art approaches for learning distance
functions with side information reveals clear advantages of the
proposed technique.
IEEE 2012
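A Bregman distance is defined from a differentiable convex function phi as D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>. The sketch below fixes phi to the squared L2 norm, which recovers squared Euclidean distance; the paper instead learns phi from side information.

```python
# Bregman divergence for a differentiable convex function phi:
#   D_phi(x, y) = phi(x) - phi(y) - <grad_phi(y), x - y>

def bregman(x, y, phi, grad):
    inner = sum(g * (xi - yi) for g, xi, yi in zip(grad(y), x, y))
    return phi(x) - phi(y) - inner

phi = lambda v: sum(vi * vi for vi in v)   # squared L2 norm
grad = lambda v: [2 * vi for vi in v]

d = bregman([3.0, 0.0], [0.0, 4.0], phi, grad)
# For this phi, D(x, y) = ||x - y||^2 = 9 + 16 = 25
```

Choosing other convex phi (e.g. negative entropy, which yields KL divergence) gives the family of nonlinear, generally asymmetric distances the scheme learns over.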
TTASTJ62
Resilient Identity Crime Detection
Identity crime is well known, prevalent, and costly, and credit
application fraud is a specific case of identity crime. The existing
non-data-mining detection systems of business rules and scorecards,
and known fraud matching, have limitations. To address these
limitations and combat identity crime in real time, this paper
proposes a new multilayered detection system complemented with two
additional layers: communal detection (CD) and spike detection (SD).
CD finds real social relationships to reduce the suspicion score, and
is tamper-resistant to synthetic social relationships. It is the
whitelist-oriented approach on a fixed set of attributes. SD finds
spikes in duplicates to increase the suspicion score, and is
probe-resistant for attributes. It is the attribute-oriented approach
on a variable-size set of attributes. Together, CD and SD can detect
more types of attacks, better account for changing legal behavior,
and remove the redundant attributes. Experiments were carried out on
CD and SD with several million real credit applications. Results on
the data support the hypothesis that successful credit application
fraud patterns are sudden and exhibit sharp spikes in duplicates.
Although this research is specific to credit application fraud
detection, the concepts of resilience, adaptivity, and quality data
discussed in the paper are general to the design, implementation, and
evaluation of all detection systems.
IEEE 2012
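The SD layer's core intuition can be sketched with a toy detector: flag a new application when one of its attribute values spikes in a recent window of duplicates. The stream, window size, and threshold below are made-up illustrations, not the paper's tuned parameters.

```python
from collections import deque

def spike_scores(values, window=5, threshold=2):
    """Toy spike detector: score 1.0 when the incoming value already has
    >= threshold duplicates inside the sliding window of recent values."""
    recent = deque(maxlen=window)
    scores = []
    for v in values:
        dups = sum(1 for r in recent if r == v)
        scores.append(1.0 if dups >= threshold else 0.0)
        recent.append(v)
    return scores

# Hypothetical stream of phone-number attributes from credit applications.
stream = ["555-1", "555-2", "555-9", "555-9", "555-9", "555-3"]
scores = spike_scores(stream)
# Only the third occurrence of "555-9" sees two duplicates in-window.
```

A full SD layer would track many attributes with variable windows and fold the spike evidence into the overall suspicion score alongside CD.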
TTASTJ63
TSCAN: A Content
Anatomy Approach to
Temporal Topic
Summarization
A topic is defined as a seminal event or activity along with all
directly related events and activities. It is represented by a
chronological sequence of documents published by different authors on
the Internet. In this study, we define a task called topic anatomy,
which summarizes and associates the core parts of a topic temporally
so that readers can understand the content easily. The proposed topic
anatomy model, called TSCAN, derives the major themes of a topic from
the eigenvectors of a temporal block association matrix. Then, the
significant events of the themes and their summaries are extracted by
examining the constitution of the eigenvectors. Finally, the
extracted events are associated through their temporal closeness and
context similarity to form an evolution graph of the topic.
Experiments based on the official TDT4 corpus demonstrate that the
generated temporal summaries present the storylines of topics in a
comprehensible form. Moreover, in terms of content coverage,
coherence, and consistency, the summaries are superior to those
derived by existing summarization methods, based on human-composed
reference summaries.
IEEE 2012
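The theme-extraction step relies on eigenvectors of a block association matrix; a minimal way to obtain the dominant one is power iteration, sketched below on a made-up 3x3 symmetric matrix (illustrative only, not TSCAN's actual matrix construction).

```python
# Power iteration for the dominant eigenvector of a symmetric matrix.
# The 3x3 association matrix below is invented for illustration: the
# first two blocks associate strongly, the third is isolated.
A = [
    [4.0, 1.0, 0.0],
    [1.0, 3.0, 0.0],
    [0.0, 0.0, 1.0],
]

def power_iteration(A, iters=200):
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)       # scale by the largest entry
        v = [x / norm for x in w]
    return v

v = power_iteration(A)
# Mass concentrates on the first two (associated) blocks; the isolated
# third block's weight decays toward zero.
```

In TSCAN the large entries of such an eigenvector indicate which temporal blocks constitute a major theme.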
TTASTJ64
Privacy Preserving Decision Tree
Learning Using Unrealized Data
Sets
Privacy preservation is important for machine learning and data
mining, but measures designed to protect private information often
result in a trade-off: reduced utility of the training samples. This
paper introduces a privacy-preserving approach that can be applied to
decision tree learning, without concomitant loss of accuracy. It
describes an approach to the preservation of the privacy of collected
data samples in cases where information from the sample database has
been partially lost. This approach converts the original sample data
sets into a group of unreal data sets, from which the original
samples cannot be reconstructed without the entire group of unreal
data sets. Meanwhile, an accurate decision tree can be built directly
from those unreal data sets. This novel approach can be applied
directly to the data storage as soon as the first sample is
collected. The approach is compatible with other privacy-preserving
approaches, such as cryptography, for extra protection.
IEEE 2012
TTASTJ65
Mining Online Reviews for Predicting
Sales Performance: A Case Study in the
Movie Domain
Posting reviews online has become an increasingly popular way for
people to express opinions and sentiments toward the products bought
or services received. Analyzing the large volume of online reviews
available would produce useful actionable knowledge that could be of
economic value to vendors and other interested parties. In this
paper, we conduct a case study in the movie domain, and tackle the
problem of mining reviews for predicting product sales performance.
Our analysis shows that both the sentiments expressed in the reviews
and the quality of the reviews have a significant impact on the
future sales performance of the products in question. For the
sentiment factor, we propose Sentiment PLSA (S-PLSA), in which a
review is considered as a document generated by a number of hidden
sentiment factors, in order to capture the complex nature of
sentiments. Training an S-PLSA model enables us to obtain a succinct
summary of the sentiment information embedded in the reviews. Based
on S-PLSA, we propose ARSA, an Autoregressive Sentiment-Aware model
for sales prediction. We then seek to further improve the accuracy of
prediction by considering the quality factor, with a focus on
predicting the quality of a review in the absence of user-supplied
indicators, and present ARSQA, an Autoregressive Sentiment and
Quality Aware model, to utilize sentiments and quality for predicting
product sales performance. Extensive experiments conducted on a large
movie data set confirm the effectiveness of the proposed approach.
IEEE 2012
TTASTJ66
Ranking Model Adaptation for
Domain-Specific Search
With the explosive emergence of vertical search domains, applying the
broad-based ranking model directly to different domains is no longer
desirable due to domain differences, while building a unique ranking
model for each domain is both laborious for labeling data and
time-consuming for training models. In this paper, we address these
difficulties by proposing a regularization-based algorithm called
ranking adaptation SVM (RA-SVM), through which we can adapt an
existing ranking model to a new domain, so that the amount of labeled
data and the training cost is reduced while the performance is still
guaranteed. Our algorithm only requires the predictions from the
existing ranking models, rather than their internal representations
or the data from auxiliary domains. In addition, we assume that
documents similar in the domain-specific feature space should have
consistent rankings, and add some constraints to control the margin
and slack variables of RA-SVM adaptively. Finally, ranking
adaptability measurement is proposed to quantitatively estimate
whether an existing ranking model can be adapted to a new domain.
Experiments performed over Letor and two large-scale data sets
crawled from a commercial search engine demonstrate the applicability
of the proposed ranking adaptation algorithms and the ranking
adaptability measurement.
IEEE 2012
DOMAIN : IMAGE PROCESSING
CODE
PROJECT TITLE
DESCRIPTION
REFERENCE
TTAECD67 A Fusion Approach for Efficient Human
Skin Detection
A reliable human skin detection method that is adaptable to different
human skin colors and illumination conditions is essential for better
human skin segmentation. Even though different human skin-color
detection solutions have been successfully applied, they are prone to
false skin detection and are not able to cope with the variety of
human skin colors across different ethnicities. Moreover, existing
methods require high computational cost. In this paper, we propose a
novel human skin detection approach that combines a smoothed 2-D
histogram and a Gaussian model, for automatic human skin detection in
color images. In our approach, an eye detector is used to refine the
skin model for a specific person. The proposed approach reduces
computational costs as no training is required, and it improves the
accuracy of skin detection despite wide variation in ethnicity and
illumination. To the best of our knowledge, this is the first method
to employ a fusion strategy for this purpose. Qualitative and
quantitative results on three standard public datasets and a
comparison with state-of-the-art methods have shown the effectiveness
and robustness of the proposed approach.
IEEE 2012
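The histogram side of the fusion can be illustrated with a minimal sketch: bin labeled skin pixels in normalized r-g chromaticity space and classify new pixels by bin count. The sample pixels, bin count, and threshold below are invented; the paper additionally smooths the histogram, fuses it with a Gaussian model, and refines it with an eye detector.

```python
# Minimal histogram-based skin classification in normalized r-g
# chromaticity space (illustrative assumptions throughout).
BINS = 4

def rg_bin(pixel):
    """Map an (R, G, B) pixel to a bin over normalized (r, g)."""
    r, g, b = pixel
    s = r + g + b or 1
    return (min(int(r / s * BINS), BINS - 1),
            min(int(g / s * BINS), BINS - 1))

# Hypothetical labeled skin pixels used to populate the histogram.
skin_samples = [(200, 120, 90), (210, 130, 100), (190, 115, 85)]
hist = {}
for p in skin_samples:
    hist[rg_bin(p)] = hist.get(rg_bin(p), 0) + 1

def is_skin(pixel, min_count=1):
    """Classify a pixel as skin if its chromaticity bin is populated."""
    return hist.get(rg_bin(pixel), 0) >= min_count
```

Normalizing by intensity (r = R/(R+G+B), g = G/(R+G+B)) is what gives the histogram some robustness to illumination changes.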
TTAECD68 PDE-Based Enhancement
of Color Images in RGB
Space
A novel method for color image enhancement is proposed as an
extension of the scalar-diffusion-shock-filter coupling model, where
noisy and blurred images are denoised and sharpened. The proposed
model is based on using the single vectors of the gradient magnitude
and the second derivatives as a manner to relate different color
components of the image. This model can be viewed as a generalization
of the Bettahar-Stambouli filter to multivalued images. The proposed
algorithm is more efficient than the mentioned filter and some
previous works at color image denoising and deblurring without
creating false colors.
IEEE 2012
TTASTD69 Improving Various Reversible Data
Hiding Schemes Via Optimal Codes for
Binary Covers
In reversible data hiding (RDH), the original cover can be losslessly
restored after the embedded information is extracted. Kalker and
Willems established a rate-distortion model for RDH, in which they
proved the rate-distortion bound and proposed a recursive code
construction. In our previous paper, we improved the recursive
construction to approach the rate-distortion bound. In this paper, we
generalize the method in our previous paper using a decompression
algorithm as the coding scheme for embedding data, and prove that the
generalized codes can reach the rate-distortion bound as long as the
compression algorithm reaches entropy. With the proposed binary
codes, we improve three RDH schemes that use binary feature sequences
as covers, i.e., an RS scheme for spatial images, one scheme for JPEG
images, and a pattern substitution scheme for binary images. The
experimental results show that the novel codes can significantly
reduce the embedding distortion. Furthermore, by modifying the
histogram shift (HS) manner, we also apply this coding method to one
scheme that uses HS, showing that the proposed codes can also be
exploited to improve integer-operation-based schemes.
IEEE 2012
TTASTD70 Impact of the Lips for Biometrics
In this paper, the impact of the lips for identity recognition is
investigated. In fact, identity recognition solely from the lips is a
challenging issue. In the first stage of the proposed system, fast
box filtering is proposed to generate a noise-free source with high
processing efficiency. Afterward, five various mouth corners are
detected through the proposed system, which is also able to resist
shadow, beard, and rotation problems. For the feature extraction, two
geometric ratios and ten parabolic-related parameters are adopted for
further recognition through the support vector machine. Experimental
results demonstrate that, when the number of subjects is fewer than
or equal to 29, the correct accept rate (CAR) is greater than 98%,
and the false accept rate (FAR) is smaller than 0.066%. Moreover, the
processing speed of the overall system achieves 34.43 frames per
second, which meets the real-time requirement. Thus, the proposed
system can be an effective candidate for facial biometrics
applications when other facial organs are covered or when it is
applied to an access control system.
IEEE 2012
TTASTJ71 Image Editing With
Spectrogram Transfer
This paper presents a unified model for image editing in terms of
Sparse Matrix-Vector (SpMV) multiplication. In our framework, we cast
image editing as a linear energy minimization problem and address it
by solving a sparse linear system, which is able to yield a globally
optimal solution. First, three classical image editing operations,
including linear filtering, resizing, and selecting, are reformulated
in the SpMV multiplication form. The SpMV form helps us set up a
straightforward mechanism to flexibly and naturally combine various
image features (low-level visual features or geometrical features)
and constraints together into an integrated energy minimization
function under the L2 norm. Then, we apply our model to implement the
tasks of pan-sharpening, image cloning, image mixed editing, and
texture transfer, which are now popularly used in the field of
digital art. Comparative experiments are reported to validate the
effectiveness and efficiency of our model.
IEEE 2012
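The SpMV primitive itself is easy to sketch in compressed sparse row (CSR) form, which stores only the nonzeros of an operator matrix. The 3x3 operator below is invented for illustration; real image-editing operators are the same structure at image scale.

```python
# Sparse matrix-vector multiplication in CSR form.
# Dense equivalent of the toy operator:
#   [[2, 0, 0],
#    [0, 3, 1],
#    [0, 0, 4]]
data    = [2.0, 3.0, 1.0, 4.0]   # nonzero values, row by row
indices = [0, 1, 2, 2]           # column index of each value
indptr  = [0, 1, 3, 4]           # row i spans data[indptr[i]:indptr[i+1]]

def spmv(data, indices, indptr, x):
    """y = A @ x for a CSR matrix A."""
    y = []
    for i in range(len(indptr) - 1):
        y.append(sum(data[k] * x[indices[k]]
                     for k in range(indptr[i], indptr[i + 1])))
    return y

y = spmv(data, indices, indptr, [1.0, 1.0, 1.0])
# [2*1, 3*1 + 1*1, 4*1] = [2.0, 4.0, 4.0]
```

Solving the paper's L2 energy minimization then amounts to repeated SpMV inside an iterative solver (e.g. conjugate gradients), which is why casting editing operators into this form pays off.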
DOMAIN : PARALLEL & DISTRIBUTED COMPUTING
TTAECD72 An Efficient Adaptive
Deadlock-Free Routing
Algorithm for Torus
Networks
A deadlock-free minimal routing algorithm called clue is first
proposed for VCT (virtual cut-through)-switched tori. Only two
virtual channels are required. One channel is applied in the
deadlock-free routing algorithm for the mesh subnetwork based on a
known base routing scheme, such as negative-first or dimension-order
routing. The other channel is similar to an adaptive channel. This
combination presents a novel fully adaptive minimal routing scheme
because the first channel does not supply routing paths for every
source-destination pair. Two other algorithms, named flow controlled
clue and wormhole clue, are proposed. Flow controlled clue is
proposed for VCT-switched tori; it is fully adaptive, minimal, and
deadlock-free with no virtual channel. Each input port requires at
least two buffers, each of which is able to keep a packet. A simple
but well-designed flow control function is used in the proposed flow
controlled clue routing algorithm to avoid deadlocks. Wormhole clue
is proposed for wormhole-switched tori. It is partially adaptive
because we add some constraints to the adaptive channels for deadlock
avoidance. It is shown that clue and flow controlled clue work better
than the bubble flow control scheme under several popular traffic
patterns in a 3-dimensional (3D) torus. In wormhole-switched tori,
the advantage of wormhole clue over Duato's protocol is also very
apparent.
IEEE 2012
TTAECD73 Scalable and Secure
Sharing of Personal
Health Records in
Cloud Computing Using
Attribute-Based
Encryption
Personal health record (PHR) is an
emerging patient-centric
model of health information exchange,
which is often outsourced to be stored at
a third party, such as cloud providers.
However, there have been wide privacy
concerns as personal health information
could be exposed to those third party
servers and to unauthorized parties. To
assure the patients' control over access
to their own PHRs, it is a promising
method to encrypt the PHRs before
outsourcing. Yet, issues such as
risks of privacy exposure,
scalability in key management, flexible
access, and efficient user revocation,
have remained the most important
challenges toward achieving fine-
grained, cryptographically enforced data
access control. In this paper, we propose
a novel patient-centric framework and a
suite of mechanisms for data access
control to PHRs stored in semitrusted
servers. To achieve fine-
grained and scalable data access control
for PHRs, we leverage attribute-
based encryption (ABE) techniques to
encrypt each patient's PHR file. Different
from previous works in secure data
outsourcing, we focus on the multiple
data owner scenario, and divide the
users in the PHR system into multiple
security domains that greatly reduces
the key management complexity for
owners and users. A high
degree of patient privacy is guaranteed
simultaneously by exploiting
multiauthority ABE. Our scheme also
enables dynamic modification of access
policies or file attributes, supports
efficient on-demand user/attribute
revocation, and provides break-glass
access under emergency scenarios.
Extensive analytical and experimental
results are presented which show the
security, scalability, and efficiency of our
proposed scheme.
IEEE 2012
TTAECD74 SPOC: A Secure and
Privacy-Preserving
Opportunistic Computing
Framework for Mobile
Healthcare Emergency
With the pervasiveness of smart phones and the advance of
wireless body sensor networks
(BSNs), mobile Healthcare (m-Healthcare), which extends the
operation of Healthcare provider into a pervasive environment for
better health monitoring, has attracted considerable interest
recently. However, the flourish of m-Healthcare still faces many
challenges including information security and privacy preservation.
In this paper, we propose a secure and privacy-preserving
opportunistic computing framework, called SPOC, for m-
Healthcare emergency. With
SPOC, smart phone resources including computing power and
energy can be opportunistically gathered to process the
computing-intensive personal health information (PHI) during
m-Healthcare emergency with minimal privacy disclosure.
Specifically, to leverage the PHI privacy disclosure and the high
reliability of PHI processing and transmission in m-Healthcare
emergency, we introduce an efficient user-centric privacy access
control in the SPOC framework, which is based on an attribute-based
access control and a new privacy-preserving scalar product
computation (PPSPC) technique, and allows a medical user to decide
who can participate in the opportunistic computing to assist in
processing his overwhelming PHI data. Detailed security analysis
shows that the proposed SPOC framework can efficiently achieve
user-centric privacy access control in m-Healthcare emergency. In
addition, performance evaluations via extensive simulations
demonstrate the SPOC's effectiveness in terms of providing highly
reliable PHI processing and transmission while minimizing the privacy
disclosure during m-Healthcare emergency.
IEEE 2012
TTAECD75 The Three-Tier Security
Scheme in Wireless
Sensor Networks with
Mobile Sinks
Mobile sinks (MSs) are
vital in many wireless sensor network (WSN) applications for
efficient data accumulation, localized sensor reprogramming,
and for distinguishing and revoking compromised sensors.
However, in sensor networks that make use of the existing key
redistribution schemes for pair wise key establishment and
authentication between sensor nodes
and mobile sinks, the employment of mobile sinks for data collection
elevates a
new security challenge: in the basic probabilistic and q-composite
key redistribution schemes, an attacker can easily obtain a large
number of keys by capturing a small fraction of nodes, and
hence, can gain control of the network by deploying a
replicated mobile sink preloaded with some compromised keys.
This article describes a three-tier general framework that
permits the use of any pairwise key predistribution scheme as its
basic component. The new
IEEE 2012
framework requires two separate
key pools, one for the mobile sink to access the
network, and one for pairwise key establishment
between the sensors. To further reduce the damage caused by
stationary access node replication attacks, we have
strengthened the authentication mechanism between the
sensors and the stationary access nodes.
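The two-key-pool idea above can be sketched as follows (an illustrative simplification; pool sizes, ring sizes, and names are hypothetical). Because the mobile-sink pool is separate from the sensor pool, capturing sensors reveals keys only from the sensor pool, so a replicated sink preloaded with captured sensor keys gains no sink-access credentials.

```python
import random

def predistribute(pool, ring_size, rng):
    """Draw a node's key ring: a random subset of one key pool."""
    return set(rng.sample(sorted(pool), ring_size))

def shares_key(ring_a, ring_b):
    """Two parties can establish a pairwise key if their rings intersect."""
    return bool(ring_a & ring_b)

rng = random.Random(7)
sensor_pool = set(range(1000))        # pool for sensor-to-sensor pairwise keys
mobile_pool = set(range(1000, 1100))  # separate pool guarding mobile-sink access

sensor_ring = predistribute(sensor_pool, 50, rng)  # ordinary sensor node
access_ring = predistribute(mobile_pool, 20, rng)  # stationary access node
sink_ring = predistribute(mobile_pool, 20, rng)    # mobile sink
```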
TTASTJ76 User-Level
Implementations of Read-
Copy Update
Read-copy update (RCU) is a synchronization technique that
often replaces reader-writer locking because RCU's read-side
primitives are both wait-free and an order of magnitude faster than
uncontended locking. Although RCU updates are relatively
heavyweight, the importance of read-
side performance is increasing as computing systems become more
responsive to changes in their environments. RCU is heavily
used in several kernel-level environments. Unfortunately,
kernel-level implementations use facilities that are often unavailable
to user applications. The few prior user-
level RCU implementations either provided inefficient read-side
primitives or restricted the application architecture. This
paper fills this gap by describing
efficient and flexible RCU implementations based on
primitives commonly available to user-level applications. Finally,
this paper compares these RCU implementations with each
other and with standard locking, which enables choosing the best
IEEE 2012
mechanism for a given workload.
This work opens the door to widespread user-application
use of RCU.
TTASTJ77 Aho-Corasick String
Matching on Shared and
Distributed-Memory
Parallel Architectures
String matching requires a combination
of (sometimes all) the following
characteristics: high and/or predictable
performance, support for large data
sets and flexibility of
integration and customization. This paper
compares several software-based
implementations of the Aho-
Corasick algorithm for high-performance
systems. We focus on the matching of
unknown inputs streamed from a single
source, typical of security
applications and difficult to manage since
the input cannot be preprocessed to
obtain locality. We consider shared-
memory architectures (Niagara 2, x86
multiprocessors, and Cray
XMT) and distributed-
memory architectures with homogeneous
(InfiniBand cluster of x86 multicore nodes) or
heterogeneous processing elements
(InfiniBand cluster of x86 multicore nodes
with NVIDIA Tesla C1060 GPUs). We
describe how each solution achieves the
objectives of supporting large
dictionaries, sustaining high
performance, and enabling
customization and flexibility using
various data sets.
IEEE 2012
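For readers unfamiliar with the algorithm being parallelized, a minimal single-threaded Aho-Corasick matcher looks roughly like this (a sketch in Python for clarity; the paper's implementations target C/C++-level HPC platforms and much larger dictionaries):

```python
from collections import deque

def build_automaton(patterns):
    """Build Aho-Corasick goto/fail/output tables for a fixed dictionary."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:                      # phase 1: trie of all patterns
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({}); fail.append(0); out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(pat)
    queue = deque(goto[0].values())           # phase 2: BFS failure links
    while queue:
        state = queue.popleft()
        for ch, nxt in goto[state].items():
            queue.append(nxt)
            f = fail[state]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            out[nxt] |= out[fail[nxt]]        # inherit matches ending here
    return goto, fail, out

def search(text, automaton):
    """Stream `text` through the automaton; collect (start_index, pattern)."""
    goto, fail, out = automaton
    state, matches = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for pat in out[state]:
            matches.append((i - len(pat) + 1, pat))
    return matches
```

Because the automaton consumes the input one symbol at a time, it suits exactly the streamed, unpreprocessable inputs the abstract describes.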
TTASTJ78 Semantic-Aware Metadata
Organization Paradigm in
Next-Generation File
Systems
Existing data
storage systems based on the hierarchical directory-
tree organization do not meet the scalability and functionality
requirements for exponentially growing data sets and
increasingly
complex metadata queries in large-scale, Exabyte-
level file systems with billions of files. This paper proposes a
novel decentralized semantic-aware metadata organization,
IEEE 2012
called SmartStore, which exploits
semantics of files' metadata to judiciously aggregate
correlated files into semantic-aware groups by using
information retrieval tools. The key idea of SmartStore is to limit
the search scope of a complex metadata query to a
single or a minimal number of semantically correlated groups
and avoid or alleviate brute-force
search in the entire system. The decentralized design of
SmartStore can improve system scalability and
reduce query latency for complex queries.
TTASTJ79 Enabling Secure and
Efficient Ranked Keyword
Search over Outsourced
Cloud Data
Cloud computing economically enables the
paradigm
of data service outsourcing. However, to protect data privacy,
sensitive cloud data have to be encrypted before outsourced to
the commercial public cloud, which makes
effective data utilization service a very challenging task. Although
traditional searchable encryption techniques allow users to
securely search over encrypted data through keywords, they
support only Boolean search and are not yet sufficient
to meet the effective data utilization needs inherently demanded by
the large number of users and huge volume of data files in the cloud. In
this paper, we define and solve the problem
of secure ranked keyword search over encrypted cloud data. Ranked
IEEE 2012
search greatly enhances system
usability by enabling search result relevance ranking instead of
sending undifferentiated results, and further ensures the
file retrieval accuracy. Specifically, we explore the statistical measure
approach, i.e., relevance score, from information retrieval to build
a secure searchable index, and develop a one-to-many
order-preserving mapping
technique to properly protect the sensitive score information.
The resulting design is able to facilitate efficient server-side
ranking without losing keyword privacy. Thorough
analysis shows that our proposed solution enjoys “as-strong-as-
possible” security guarantee compared to previous searchable
encryption schemes, while correctly realizing the goal
of ranked keyword search. Extensive experimental results
demonstrate the efficiency of the
proposed solution.
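The relevance-score idea can be sketched with a plain TF-IDF-style index. This is illustrative only: in the scheme above, the stored scores are additionally protected with a one-to-many order-preserving mapping so the server can rank results without learning the scores themselves.

```python
import math

def relevance_scores(keyword, docs):
    """TF-IDF-style relevance of one keyword for every file containing it."""
    df = sum(1 for words in docs.values() if keyword in words)
    if df == 0:
        return {}
    idf = math.log(1 + len(docs) / df)
    return {name: words.count(keyword) * idf
            for name, words in docs.items() if keyword in words}

def ranked_index(vocabulary, docs):
    """Searchable index: keyword -> file names sorted by descending relevance."""
    return {t: [name for name, _ in
                sorted(relevance_scores(t, docs).items(), key=lambda kv: -kv[1])]
            for t in vocabulary}

# Hypothetical toy corpus: file id -> list of words.
docs = {"f1": ["cloud", "cloud", "data"], "f2": ["cloud"], "f3": ["data", "data"]}
index = ranked_index({"cloud", "data"}, docs)
```

A query for one keyword then returns the top entries of its ranked list instead of every undifferentiated match.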
TTASTD80 Bloom Cast: Efficient and
Effective Full-Text
Retrieval in Unstructured
P2P Networks
Efficient and effective full-
text retrieval in unstructured peer-to-peer networks remains a
challenge in the research community. First, it is difficult, if
not impossible, for unstructured P2P systems to
effectively locate items with
guaranteed recall. Second, existing schemes to improve
search success rate often rely on replicating a large number of item
replicas across the wide area network, incurring a large
amount of communication and storage costs. In this paper, we
IEEE 2012
propose BloomCast,
an efficient and effective full-text retrieval scheme in unstructured
P2P networks. By leveraging a hybrid P2P protocol, BloomCast
replicates the items uniformly at random across the P2P networks,
achieving a guaranteed recall at a communication cost of O(√N),
where N is the size of the network. Furthermore,
by casting Bloom Filters instead of
the raw documents across the network, BloomCast
significantly reduces the communication and storage costs
for replication. We demonstrate the power of BloomCast design
through both mathematical proof and comprehensive
simulations based on the query logs from a major commercial
search engine and NIST TREC WT10G data collection. Results
show that BloomCast achieves an average query recall of 91
percent, which outperforms the
existing WP algorithm by 18 percent, while BloomCast greatly
reduces the search latency for query processing by 57 percent.
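The Bloom filter that gives BloomCast its name can be sketched minimally (the parameters m and k below are hypothetical; the scheme above casts such filters across the network in place of the raw documents):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: each item sets k hash-derived bits in an
    m-bit array. Lookups may yield rare false positives but never
    false negatives, which is why a compact filter can safely stand
    in for a full document set."""
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter()
for term in ("peer", "retrieval", "bloom"):
    bf.add(term)
```

The replication saving comes from the size gap: the m-bit array is far smaller than the documents it summarizes.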
TTASTD81 A Systematic Approach
toward Automated
Performance Analysis and
Tuning
High productivity is critical in
harnessing the power of high-performance computing systems
to solve science and engineering
problems. It is a challenge to bridge the gap between the
hardware complexity and the software limitations. Despite
significant progress in
IEEE 2012
programming language,
compiler, and performance tools, tuning an application remains
largely a manual task, and is done mostly by experts. In this paper,
we propose a systematic approach toward automated
performance analysis and tuning that we expect to improve
the productivity of performance debugging
significantly. Our approach seeks
to build a framework that facilitates the combination of
expert knowledge, compiler techniques, and performance
research for performance diagnosis and solution
discovery. With our framework, once a
diagnosis and tuning strategy has been developed, it can be stored
in an open and extensible database and thus be reused in
the future. We demonstrate the effectiveness of
our approach through
the automated performance analysis and tuning of two scientific
applications. We show that the tuning process is
highly automated, and the performance improvement is significant.
TTASTJ82 ST-CDP: Snapshots in
TRAP for Continuous
Data Protection
Continuous Data Protection (CDP)
has become increasingly important as digitization
continues. This paper presents a new architecture and an
implementation of CDP in Linux kernel. The new architecture
takes advantage of both traditional snapshot technology
and recent Timely Recovery to
IEEE 2012
Any Point-in-Time (TRAP)
architecture. The idea is to
periodically insert snapshots within the parity
logs of changed data blocks in order to
ensure fast and reliable data recovery in case of
failures. A mathematical model is developed as a guide to designers
to determine when and how to
insert snapshots to optimize performance in terms of space
usage and recovery time. Based on the mathematical model, we
have designed and implemented a CDP module in the Linux
system. Our implementation is at block level as a device driver that
is capable of recovering data to any point-in-time in case of
various failures. Extensive experiments have been carried
out to show that the implementation is fairly robust
and numerical results
demonstrate that the implementation is efficient.
DOMAIN: DEPENDABLE AND SECURE COMPUTING
TTASTJ83 Detecting and Resolving
Firewall Policy Anomalies
The advent of emerging computing technologies such as
service-oriented architecture and cloud computing
has enabled us to perform business services more
efficiently and effectively.
However, we still suffer from unintended security leakages by
unauthorized actions in business services. Firewalls are the most
widely deployed security mechanism to ensure the security
IEEE 2012
of private networks in most
businesses and institutions. The effectiveness of security
protection provided by a firewall mainly depends on the
quality of policy configured in the firewall. Unfortunately,
designing and managing firewall policies are often error prone due
to the complex nature of firewall configurations as well as the lack
of systematic analysis
mechanisms and tools. In this paper, we present an
innovative policy anomaly management framework for firewalls,
adopting a rule-based segmentation technique to
identify policy anomalies and derive effective anomaly resolutions.
In particular, we articulate a grid-based representation technique,
providing an intuitive cognitive sense about policy anomaly. We
also discuss a proof-of-concept implementation of a visualization-
based firewall policy analysis tool
called Firewall Anomaly Management Environment
(FAME). In addition, we demonstrate how efficiently our
approach can discover and resolve anomalies in firewall
policies through rigorous experiments.
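A toy version of pairwise anomaly discovery over single-field (port-range) rules might look like this. It is a deliberate simplification: FAME's rule-based segmentation handles multiple fields and distinguishes shadowing, correlation, generalization, and redundancy, which this sketch collapses into just two kinds.

```python
def overlap(r1, r2):
    """Overlapping port segment of two rules, or None if disjoint."""
    lo, hi = max(r1["lo"], r2["lo"]), min(r1["hi"], r2["hi"])
    return (lo, hi) if lo <= hi else None

def find_anomalies(rules):
    """Pairwise scan of an ordered rule list: an overlap with conflicting
    actions is reported as a conflict; one with identical actions as a
    possible redundancy."""
    findings = []
    for i in range(len(rules)):
        for j in range(i + 1, len(rules)):
            seg = overlap(rules[i], rules[j])
            if seg is not None:
                kind = ("conflict" if rules[i]["action"] != rules[j]["action"]
                        else "redundancy")
                findings.append((i, j, kind, seg))
    return findings

# Hypothetical single-field rules for illustration.
rules = [
    {"lo": 80, "hi": 90, "action": "deny"},
    {"lo": 85, "hi": 100, "action": "allow"},
    {"lo": 200, "hi": 300, "action": "allow"},
]
```

Reporting the exact overlapping segment, rather than just the rule pair, is what makes a resolution (splitting or reordering the rules) actionable.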
TTASTJ84 Double Guard: Detecting
Intrusions in Multitier
Web Applications
Internet services and applications have become an
inextricable part of daily life, enabling communication and the
management of personal information from anywhere. To
accommodate this increase in application and data
complexity, web services have
IEEE 2012
moved to a multitier design
wherein the web server runs the application front-end logic and
data are outsourced to a database or file server. In this paper, we
present DoubleGuard, an intrusion detection system (IDS) that models the network
behavior of user sessions across both the front-end web server and
the back-end database. By monitoring both web and
subsequent database requests, we
are able to ferret out attacks that an independent IDS would not be
able to identify. Furthermore, we quantify the limitations of
any multitier IDS in terms of training sessions and functionality
coverage. We implemented DoubleGuard using an Apache
web server with MySQL and lightweight virtualization. We then
collected and processed real-world traffic over a 15-day period of
system deployment in both dynamic and
static web applications. Finally,
using DoubleGuard, we were able to expose a wide range of attacks
with 100 percent accuracy while maintaining 0 percent false
positives for static web services and 0.6 percent false positives for
dynamic web services.
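The core mapping-model idea, learning which back-end queries each front-end request may legitimately trigger, can be sketched as follows. The request and query strings are hypothetical, and the real system models session containers and query templates rather than raw strings.

```python
def build_mapping(training_sessions):
    """Training phase: record which SQL queries each web request
    triggered across clean training sessions."""
    mapping = {}
    for web_request, sql_queries in training_sessions:
        mapping.setdefault(web_request, set()).update(sql_queries)
    return mapping

def check_session(mapping, web_request, observed_sql):
    """Detection phase: report back-end queries not explained by the
    front-end request (e.g. injected or out-of-band queries)."""
    return sorted(set(observed_sql) - mapping.get(web_request, set()))

# Hypothetical training data.
mapping = build_mapping([
    ("GET /list", ["SELECT * FROM items"]),
    ("GET /list", ["SELECT * FROM items WHERE id=?"]),
])
```

Correlating both tiers is what exposes, say, a query issued with no matching front-end request, which a database-only IDS would accept as well-formed.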
TTASTD85 Automatic
Reconfiguration for
Large-Scale Reliable
Storage Systems
Byzantine-fault-tolerant replication enhances the
availability and reliability of Internet services that store critical
state and preserve it despite attacks or software errors.
However, existing Byzantine-fault-tolerant storage systems either
assume a static set of replicas, or
IEEE 2012
have limitations in how they
handle reconfigurations (e.g., in terms of the scalability of the
solutions or the consistency levels they provide). This can be
problematic in long-lived, large-scale systems where system
membership is likely to change during the system lifetime. In this paper,
we present a complete solution for dynamically
changing system membership in
a large-scale Byzantine-fault-tolerant system. We present a
service that tracks system membership and
periodically notifies other system nodes of membership changes.
The membership service runs mostly automatically, to avoid
human configuration errors; is itself Byzantine-fault-tolerant and
reconfigurable; and provides applications with a sequence of
consistent views of the system membership. We
demonstrate the utility of this
membership service by using it in a novel distributed hash table
called dBQS that provides atomic semantics even across changes in
replica sets. dBQS is interesting in its own right because
its storage algorithms extend existing Byzantine quorum
protocols to handle changes in the replica set, and because it differs
from previous DHTs by providing Byzantine fault tolerance and
offering strong semantics. We implemented the membership
service and dBQS. Our results
show that the approach works well in practice: the membership
service is able to manage
a large system and the cost to change the system membership is
low.
DOMAIN: SERVICES COMPUTING
TTASTD86 Dynamic Authentication
for Cross-Realm SOA-
Based Business Processes
Modern distributed applications
are embedding an increasing degree of dynamism,
from dynamic supply-chain management, enterprise
federations, and virtual collaborations
to dynamic resource acquisitions and service interactions across
organizations. Such dynamism leads to new challenges in
security and dependability. Collaborating services in a system
with a Service-Oriented Architecture (SOA) may belong to
different security realms but often
need to be engaged dynamically at runtime. If their security
realms do not have a direct cross-realm authentication relationship,
it is technically difficult to enable any secure collaboration between
the services. A potential solution to this would be to locate
intermediate realms at runtime, which serve as an
authentication path between the two separate realms. However,
the process of generating an authentication path for two
distributed services can be highly
complicated. It could involve a large number of extra operations
for credential conversion and require a long chain of invocations
to intermediate services. In this paper, we address this problem
by designing and implementing a
IEEE 2012
new cross-
realm authentication protocol for dynamic service
interactions, based on the notion of service-oriented
multiparty business sessions. Our protocol requires neither
credential conversion nor establishment of
any authentication path between the participating services in
a business session. The
correctness of the protocol is formally analyzed and proven,
and an empirical study is performed using two production-
quality Grid systems, Globus 4 and CROWN. The experimental
results indicate that the proposed protocol and its implementation
have a sound level of scalability and impose only a limited degree
of performance overhead, which is, for example, comparable with
the security-related overheads in Globus 4.
TTASTD87 A Proxy-Based
Architecture for Dynamic
Discovery and Invocation
of Web Services from
Mobile Devices
Mobile devices are getting more
pervasive, and it is becoming
increasingly necessary to integrate
web services into applications that
run on these devices. We introduce a
novel approach for dynamically
invoking web service methods from
mobile devices with minimal user
intervention that only involves
entering a search phrase and values
for the method parameters.
The architecture overcomes technical
challenges that involve consuming
IEEE 2012
discovered services dynamically by
introducing a man-in-the-middle
(MIM) server that provides a web
service whose responsibility is to
discover the needed services and
build the client-side proxies at
runtime. The architecture moves to
the MIM server energy-consuming
tasks that would otherwise run on
the mobile device. Such tasks involve
communication with servers over the
Internet, XML parsing of files, and
on-the-fly compilation of source
code. We perform extensive
evaluations of the system
performance to measure scalability
as it relates to the capacity of the
MIM server in handling mobile client
requests, and the device battery
power savings resulting from
delegating the service discovery
tasks to the server.
DOMAIN: SOFTWARE ENGINEERING
TTASTJ88 Automatically
Generating Test Cases for
Specification Mining
Dynamic specification mining observes
program executions to infer models of normal program
behavior. What makes us believe that we have seen sufficiently
many executions? The TAUTOKO (“Tautoko” is the Maori word
for “enhance, enrich.”) typestate miner generates test cases that
cover previously unobserved
behavior, systematically extending the execution space,
and enriching the specification. To our knowledge, this is the first
combination of
IEEE 2012
systematic test case generation
and typestate mining, a combination with clear benefits:
On a sample of 800 defects seeded into six Java subjects, a
static type state verifier fed with enriched models would report
significantly more true positives and significantly fewer false
positives than the initial models.
TTASTD89 Automatic Detection of
Unsafe Dynamic
Component Loadings
Dynamic loading of software components (e.g., libraries or modules)
is a widely used mechanism for
improving system modularity and flexibility.
Correct component resolution is critical for reliable and secure
software execution. However, programming mistakes may lead
to unintended or even malicious components being
resolved and loaded. In particular, dynamic loading can be
hijacked by placing an arbitrary file with the specified name in a
directory searched before resolving the target component.
Although this issue has been
known for quite some time, it was not considered serious because
exploiting it requires access to the local file system on the vulnerable
host. Recently, such vulnerabilities have started to
receive considerable attention as their remote exploitation became
realistic. It is now important to detect and fix these
vulnerabilities. In this paper, we present the first automated
technique to detect vulnerable and unsafe dynamic component
loadings. Our analysis has two
phases: 1) apply dynamic binary
instrumentation to collect runtime information
on component loading (online phase), and 2) analyze the
collected information to detect vulnerable component loadings
(offline phase). For evaluation, we implemented our technique to
detect vulnerable and unsafe component loadings in
popular software on Microsoft
Windows and Linux. Our evaluation results show
that unsafe component loading is prevalent in software on both OS
platforms, and it is more severe on Microsoft Windows. In
particular, our tool detected more than
4,000 unsafe component loadings in our evaluation, and some can
lead to remote code execution on Microsoft Windows.
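The offline-phase check can be illustrated abstractly: given a component name, an ordered search path, and each directory's contents, flag untrusted directories that the loader would probe before reaching the intended component. The directory names below are hypothetical; the actual tool works from dynamic binary instrumentation traces rather than a static filesystem model.

```python
def resolve(component, search_path, filesystem):
    """Directories (in search order) that provide `component`;
    a loader would pick the first."""
    return [d for d in search_path if component in filesystem.get(d, set())]

def unsafe_dirs(component, search_path, filesystem, untrusted):
    """Untrusted directories probed before the one actually providing the
    component: planting a file with the right name there hijacks the load.
    If resolution fails entirely, every probed untrusted directory is a risk."""
    hits = resolve(component, search_path, filesystem)
    if not hits:
        return [d for d in search_path if d in untrusted]
    first = search_path.index(hits[0])
    return [d for d in search_path[:first] if d in untrusted]

# Hypothetical layout: directory -> set of files it contains.
filesystem = {"/app": {"helper.dll"}, "/system": {"core.dll"}}
search_path = ["/cwd", "/app", "/system"]
```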
DOMAIN: PERVASIVE COMPUTING
TTASTJ90 Advertising on Public
Display Networks
For advertising-
based public display networks to
become truly pervasive, they must
provide a tangible social benefit and
be engaging without being obtrusive,
blending advertisements with
informative content.
IEEE 2012
DOMAIN: IT IN BIOMEDICINE / FORENSICS AND SECURITY
TTASTJ91 SparkMed: A Framework
for Dynamic Integration of
Multimedia Medical Data
Into Distributed m-Health
Systems
With the advent of 4G and other long-term evolution (LTE) wireless
networks, the traditional
boundaries of patient record propagation are diminishing as
networking technologies extend the reach of hospital
IEEE 2012
infrastructure and provide on-
demand mobile access to medical multimedia data.
However, due to legacy and proprietary software, storage and
decommissioning costs, and the price of centralization and
redevelopment, it remains complex, expensive, and often
unfeasible for hospitals to deploy their infrastructure for online and
mobile use. This paper proposes
the SparkMed data integration framework for mobile healthcare (m-
Health), which significantly benefits from the enhanced
network capabilities of LTE wireless technologies, by
enabling a wide range of heterogeneous medical software
and database systems (such as the
picture archiving and communication systems, hospital
information system, and reporting systems) to be
dynamically
integrated into a cloud-like peer-to-peer multimedia data store.
Our framework allows medical data applications to share data with
mobile hosts over a wireless network (such as Wi-Fi and 3G),
by binding to existing software systems and deploying
them as m-Health applications. SparkMed integrates
techniques from multimedia streaming, rich
Internet applications (RIA), and remote procedure call
(RPC) frameworks to
construct a Self-managing, Pervasive, Automated
network for Medical Enterprise Data
(SparkMed). Further, it is resilient to failure, and able to use
mobile and handheld devices to maintain its network, even in the
absence of dedicated server devices. We have
developed a prototype of the SparkMed framework for evaluation
on a radiological workflow simulation, which
uses SparkMed to
deploy a radiological image viewer as an m-Health application for
telemedical use by radiologists and stakeholders. We have evaluated
our prototype using ten devices over Wi-Fi and 3G, verifying that
our framework meets its two main objectives: 1) interactive
delivery of medical multimedia data to mobile devices; and 2)
attaching to non-networked medical software
processes without significantly impacting their performance.
Consistent response
times of under 500 ms and graphical frame rates of over 5
frames per second were observed under intended usage conditions.
Further, overhead measurements displayed linear scalability and low
resource requirements.
TTASTJ92 Data Fusion and Cost
Minimization for
Intrusion Detection
Statistical pattern recognition techniques have recently been
shown to provide a finer balance between misdetections and false
alarms than the more conventional intrusion detection
approaches, namely misuse detection and anomaly detection.
A variety of classical machine
IEEE 2011/2012
learning and pattern recognition
algorithms have been applied to intrusion detection with varying
levels of success. We make two observations about intrusion
detection. One is that intrusion detection is
significantly more effective by using multiple sources of
information in an intelligent way, which is precisely what human
experts rely on. Second, different
errors in intrusion detection have different costs associated with
them: a simplified example being that a false alarm may be more
expensive than a misdetection and, hence, the true
objective function to be minimized is the cost of errors and not the
error rate itself. We present a pattern recognition approach that
addresses both of these issues. It utilizes an ensemble-of-
classifiers approach to intelligently combine information from multiple
sources and is explicitly tuned
toward minimizing the cost of the errors as opposed to the error
rate itself. The information fusion approach alone is
shown to achieve state-of-the-art performance, better than results
reported in the literature so far, and the cost minimization
strategy, dCMS, further reduces the cost by a significant margin.
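The two observations above, fusing multiple sources and minimizing cost rather than error rate, can be sketched together. The numbers and the plain averaging rule are illustrative only; the paper's dCMS strategy is more elaborate than this averaging-plus-expected-cost rule.

```python
def fuse(posteriors):
    """Combine class posteriors from several detectors (here: averaging)."""
    classes = posteriors[0].keys()
    return {c: sum(p[c] for p in posteriors) / len(posteriors) for c in classes}

def min_cost_decision(fused, cost):
    """Choose the label minimizing expected cost, not maximizing probability.
    cost[decided][actual] is the penalty for deciding `decided` when the
    truth is `actual`."""
    def expected(decided):
        return sum(cost[decided][actual] * p for actual, p in fused.items())
    return min(fused, key=expected)

# Hypothetical numbers: a miss is ten times costlier than a false alarm.
fused = fuse([{"attack": 0.4, "normal": 0.6}, {"attack": 0.2, "normal": 0.8}])
cost = {"attack": {"attack": 0, "normal": 1},
        "normal": {"attack": 10, "normal": 0}}
```

With this cost matrix, the minimum-cost decision is "attack" even though "normal" is the more probable label, which is exactly the gap between minimizing cost and minimizing error rate.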
DOMAIN: GRID COMPUTING
TTASTJ93 Leveraging a Compound
Graph-Based DHT for
Multi-Attribute Range
Queries with
Performance Analysis
Resource discovery is critical to the usability and accessibility of
grid computing systems. Distributed Hash Table (DHT) has
been applied to grid systems
IEEE 2012
as a distributed
mechanism for providing scalable range-query and multi-
attribute resource discovery. Multi-DHT-
based approaches depend on multiple DHT networks with each
network responsible for a single attribute.
Single-DHT-based approaches keep the resource information of
all attributes in a single node.
Both classes of approaches lead to high overhead. In this paper, we
propose a Low-Overhead Range-query Multi-
attribute (LORM) DHT-based resource discovery
approach. Unlike other DHT-based approaches, LORM relies
on a single compound graph-based DHT network and
distributes resource information among nodes in balance by taking
advantage of the compound graph structure.
Moreover, it has high capability to
handle the large-scale and dynamic characteristics of
resources in grids. Experimental results demonstrate the efficiency
of LORM in comparison with other resource discovery approaches.
LORM dramatically reduces maintenance and resource
discovery overhead. In addition, it yields significant improvements in
resource location efficiency. We also analyze the performance of
the LORM approach rigorously by comparing it with other
multi-DHT-based and single-DHT-
based approaches with respect to their overhead and efficiency. The
analytical results are
consistent with experimental results, and prove the superiority
of the LORM approach in theory.
TTASTJ94 Locality-Preserving
Clustering and Discovery
of Resources in Wide-Area
Distributed
Computational Grids
In large-
scale computational Grids, discovery of heterogeneous resources as
a working group is crucial to achieving scalable performance.
This paper presents a resource management scheme
including a hierarchical cycloid overlay
architecture, resource clustering and discovery algorithms for wide-
area distributed Grid systems. We establish
program/data locality by clustering resources based on their
physical proximity and functional
matching with user applications. We further develop a dynamism-
resilient resource management algorithm, a cluster-token
forwarding algorithm, and deadline-
driven resource management algorithms. The advantage of the
proposed scheme lies in its low overhead and fast, dynamism-
resilient multiresource discovery. The paper presents the scheme,
new performance metrics, and experimental
simulation results. This scheme
compares favorably with other resource discovery methods
in static and dynamic Grid applications. In particular, it supports
efficient resource clustering, reduces communications
cost, and enhances resource discovery success rate in promoting
IEEE 2012
large-
scale distributed supercomputing applications.
TTASTJ95 Online System for Grid
Resource Monitoring and
Machine Learning-Based
Prediction
Resource allocation and job scheduling
are the core functions of grid
computing. These functions are based
on adequate information about
available resources. Timely acquisition
of resource status information is of
great importance in ensuring the
overall performance of grid
computing. This work aims at
building a distributed system for grid
resource monitoring and prediction.
In this paper, we present the
design and evaluation of a system
architecture for grid resource
monitoring and prediction. We
discuss the key issues for system
implementation, including machine
learning-based methodologies for
modeling and optimization of
resource prediction models.
Evaluations are performed on a
prototype system. Our experimental
results indicate that the efficiency
and accuracy of our system meet the
demand of an online system for grid
resource monitoring and prediction.
IEEE 2012
TTASTJ96 Real-Time Head
and Hand Tracking Based
on 2.5D Data
A novel real-time algorithm
for head and hand tracking is proposed in this paper. This
approach is based on 2.5D data from a
range camera, which is exploited to resolve
IEEE 2012
ambiguities and overlaps.
Experimental results show high robustness against partial
occlusions and fast movements. The estimated positions are fairly
stable, allowing the extraction of accurate trajectories which may
be used for gesture classification purposes.
DOMAIN: MATLAB
TTASTM97 Semantic Image Retrieval
in Magnetic Resonance
Brain Volumes
Practitioners in the area of neurology
often need to retrieve
multimodal magnetic resonance (MR)
images of the brain to study disease
progression and to correlate
observations across multiple
subjects. In this paper, a novel
technique for retrieving 2-D
MR images (slices) in 3-
D brain volumes is proposed. Given a
2-D MR query slice, the technique
identifies the 3-D volume among
multiple subjects in the database,
associates the query slice with a
specific region of the brain, and
retrieves the matching slice within
this region in the identified volumes.
The proposed technique is capable of
retrieving an image in multimodal
and noisy scenarios. In this study,
support vector machines (SVM) are
used for identifying 3-D
MR volume and for
performing semantic classification of
the human brain into
various semantic regions. In order to
achieve reliable image retrieval
performance in the presence of
misalignments, an image
registration-based retrieval
framework is
IEEE 2012
developed. The proposed
retrieval technique is tested
on various modalities. The test
results reveal superior robustness
performance with respect to
accuracy, speed, and multimodality.
TTASTM98
(2005)
Drowsiness Detection
based on Eye Movement,
Yawn Detection and
Head Rotation
IEEE 2012
TTASTM99
(2010)
Automatic Detection of
Geospatial objects using
Taxonomic Semantics
In this letter, we propose a novel
method to solve the problem of detecting geospatial
objects present in high-resolution remote sensing images
automatically. Each image is represented as a segmentation
tree by first applying a multiscale segmentation algorithm,
and all of the tree nodes are
described as coherent groups instead of binary classified values.
The trees are matched to select the maximally matched subtrees,
denoted as common subcategories. Then, we organize
these subcategories to learn the embedded taxonomic semantics
of object categories, which allow categories to be defined
recursively, and express both explicit and implicit spatial
configuration of categories. Detection, recognition, and
segmentation of the geospatial
IEEE 2012
objects in a new image can be
simultaneously conducted by using the
learned taxonomic semantics. This procedure also provides a
meaningful explanation for image understanding. Experiments for
complex and compound objects demonstrate
the precision, robustness, and effectiveness of the proposed
method.
TTASTM100
Improved Iris recognition
through fusion of
hamming distance &
fragile bit distance
The most common iris biometric algorithm represents the
texture of an iris using a binary iris code. Not all bits in
an iris code are equally consistent. A bit is
deemed fragile if its value
changes across iris codes created from different images of the
same iris. Previous research has shown
that iris recognition performance can be improved by masking
these fragile bits. Rather than ignoring fragile bits completely,
we consider what beneficial information can be obtained from
the fragile bits. We find that the locations of fragile bits tend to be
consistent across different iris codes of the same
eye. We present a metric, called
the fragile bit distance, which quantitatively measures the
coincidence of the fragile bit patterns in two iris codes. We find that
score fusion of fragile bit distance and Hamming distance works
better than Hamming distance alone.
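The two distances and their fusion can be sketched as below. The bit lists are toy data; the FBD formula (fragility-pattern disagreement over the union of fragile positions) and the weighted-sum fusion are illustrative assumptions, not the paper's exact definitions:

```python
def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance over bit positions that are
    valid (non-fragile, mask == 1) in both iris codes."""
    valid = [i for i in range(len(code_a)) if mask_a[i] and mask_b[i]]
    if not valid:
        return 1.0  # no comparable bits: treat as maximally distant
    return sum(code_a[i] != code_b[i] for i in valid) / len(valid)

def fragile_bit_distance(frag_a, frag_b):
    """Illustrative FBD: disagreement of the two fragility patterns,
    normalized by the number of positions fragile in either code."""
    union = sum(1 for a, b in zip(frag_a, frag_b) if a or b)
    if union == 0:
        return 0.0
    return sum(a != b for a, b in zip(frag_a, frag_b)) / union

def fused_score(hd, fbd, alpha=0.3):
    """Hypothetical score-level fusion: a simple weighted sum;
    the weight alpha is an assumed parameter."""
    return (1 - alpha) * hd + alpha * fbd
```

Two images of the same eye would then yield both a low Hamming distance and a low FBD, since their fragile-bit locations tend to coincide.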
TTASTM101
Iris Recognition Using
Possibility Fuzzy
Matching on Local
Features
In this paper, we propose a novel
possibilistic fuzzy matching strategy with invariant properties,
which can provide a robust and effective matching scheme for two
sets of iris feature points. In addition, the nonlinear
normalization model is adopted to provide more accurate position
before matching. Moreover, an effective iris segmentation method
is proposed to refine the detected
inner and outer boundaries to smooth curves.
For feature extraction, Gabor filters are adopted to detect
the local feature points from the segmented iris image in the
Cartesian coordinate system and to generate a rotation-invariant
descriptor for each detected point. After that, the
proposed matching algorithm is used to compute a similarity
score for two sets of feature points from a pair
of iris images. The experimental
results show that the performance of our system is better than those
of the systems based on the local features and is
comparable to those of the typical systems.
IEEE 2012
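As a rough illustration of matching two sets of iris feature points, the sketch below assigns each point pair a possibilistic membership in [0, 1] that decays with descriptor distance; the Gaussian membership function, the greedy best-match rule, and the threshold are assumptions for illustration, not the paper's scheme:

```python
import math

def membership(desc_a, desc_b, sigma=1.0):
    """Possibilistic membership: 1 for identical descriptors,
    decaying toward 0 as the (toy) Euclidean distance grows."""
    d2 = sum((x - y) ** 2 for x, y in zip(desc_a, desc_b))
    return math.exp(-d2 / (2 * sigma ** 2))

def match_score(points_a, points_b, threshold=0.5):
    """Similarity of two point sets: fraction of points in A whose
    best membership against B exceeds the threshold (illustrative)."""
    if not points_a:
        return 0.0
    matched = 0
    for pa in points_a:
        best = max((membership(pa, pb) for pb in points_b), default=0.0)
        if best >= threshold:
            matched += 1
    return matched / len(points_a)
```

With rotation-invariant descriptors, as in the abstract, such a score would be insensitive to in-plane rotation of the iris image.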
TTASTM102
A Change Information
Based Fast Algorithm
for Video Object
Detection and Tracking
In this paper, we
present a novel algorithm for moving object detection and tracking.
The proposed algorithm includes two schemes: one for spatio-temporal
spatial segmentation and the other for temporal segmentation. A
combination of these schemes is used to identify moving objects and
to track them. A compound Markov random field (MRF) model is used as
the prior image attribute model, which takes care of the spatial
distribution of color, temporal color coherence, and the edge map in
the temporal frames to obtain a spatio-temporal spatial segmentation.
In this scheme, segmentation is considered as a pixel labeling
problem and is solved using the maximum a posteriori probability
(MAP) estimation technique. The MRF-MAP framework is computation
intensive due to random initialization. To reduce this burden, we
propose a change-information-based heuristic initialization
technique. The scheme requires an initially segmented frame. For
initial frame segmentation, a compound MRF model is used to model the
attributes, and the MAP estimate is obtained by a hybrid algorithm (a
combination of simulated annealing (SA) and iterative conditional
mode (ICM)) that converges fast. For temporal segmentation, instead
of using a gray-level-difference-based change detection mask (CDM),
we propose a CDM based on the label difference of two frames. The
proposed scheme results in less silhouette effect. Further, a
combination of both the spatial and temporal segmentation processes
is used to detect the moving objects. Results of the proposed spatial
segmentation approach are compared with those of the JSEG method and
with edgeless and edge-based approaches to segmentation. It is
noticed that the proposed approach provides a better spatial
segmentation.
IEEE 2012
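The label-difference CDM described above can be sketched as follows; the frames are toy label maps, and the paper's exact handling may differ:

```python
def label_difference_cdm(labels_prev, labels_curr):
    """Change detection mask from the segmentation labels of two
    frames: a pixel is marked changed (1) when its region label
    differs between the frames, unchanged (0) otherwise."""
    return [
        [int(a != b) for a, b in zip(row_prev, row_curr)]
        for row_prev, row_curr in zip(labels_prev, labels_curr)
    ]
```

Unlike a gray-level CDM, this mask only reacts when a pixel's region label changes, so small intensity fluctuations inside an unchanged region do not trigger false changes.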