Computers, Environment and Urban Systems xxx (2015) xxx–xxx
Contents lists available at ScienceDirect
Computers, Environment and Urban Systems
journal homepage: www.elsevier.com/locate/compenvurbsys
Performance improvement techniques for geospatial web services in a cyberinfrastructure environment – A case study with a disaster management portal
http://dx.doi.org/10.1016/j.compenvurbsys.2015.04.003
0198-9715/© 2015 Elsevier Ltd. All rights reserved.
Please cite this article in press as: Li, W., et al. Performance improvement techniques for geospatial web services in a cyberinfrastructure environment – A case study with a disaster management portal. Computers, Environment and Urban Systems (2015), http://dx.doi.org/10.1016/j.compenvurbsys.2015.04.003
Wenwen Li a,*, Miaomiao Song a,b, Bin Zhou b, Kai Cao c, Song Gao d
a GeoDa Center for Geospatial Analysis and Computation, School of Geographical Sciences and Urban Planning, Arizona State University, Tempe, AZ 85287-5302, United States
b Institute of Oceanographic Instrument, Shandong Academy of Sciences, Qingdao, Shandong 266001, China
c Department of Geography, National University of Singapore, 117570, Singapore
d Department of Geography, University of California, Santa Barbara, CA 93117, United States
Article history: Received 24 June 2014; Received in revised form 28 April 2015; Accepted 30 April 2015; Available online xxxx
Keywords: Geospatial cyberinfrastructure (GCI); Rapid response; Disaster management; GeoJSON; GML; WMS; WFS; Service-oriented architecture (SOA)

Abstract
High population growth, urbanization, and global climate change drive up the frequency of disasters, affecting the safety of people's lives and property worldwide. Because of the inherent big-data nature of this disaster-related information, the processes of data exchange and transfer among physically distributed locations are increasingly challenging. This paper presents our proposed efficient network transmission model for interoperating heterogeneous geospatial data in a cyberinfrastructure environment. This transmission model supports multiple data encoding methods, such as GML (Geography Markup Language) and GeoJSON, as well as data compression/decompression techniques, including LZMA and DEFLATE. Our goal is to tackle fundamental performance issues that impact the efficient retrieval of remote data. Systematic experiments were conducted to demonstrate the superiority of the proposed transmission model over the traditional OGC Web Feature Service (WFS) transmission model. The experiments also identified the optimized configuration of data encoding and compression techniques in different network environments. To represent a real-world user request scenario, the Amazon EC2 cloud platform was utilized to deploy multiple client nodes for the experiments. A web portal was developed to integrate the geospatial web services with real-time earthquake-related information for spatial policy analysis and collaborative decision-making.
1. Introduction
High population growth, urbanization, and global climate change drive up the frequency of disasters, affecting the safety of people's lives and property worldwide. For example, in China, 74% of state capitals and over 62% of counties are located in earthquake risk zones with the potential for earthquakes larger than magnitude 7. Additionally, regions with a high risk of natural disasters contain half of China's population, who live within 70% of the urban centers where 75% of the national gross domestic product is distributed. Disaster management aims at alleviating the effects of disasters by supporting the timely collection of disaster-related data, estimation of damage, evacuation route planning, and effective resource scheduling (Auf der Heide, 2006; McEntire, 2002; Goodchild, 2006; Alinia and Delavar, 2011). More specifically, a management system should be able to coordinate disaster-related data, most of which may be heterogeneous across geographically dispersed government agencies. Also, the system should provide an efficient transmission model for rapid response to end users' spatial information requests. Lastly, the disaster management system should provide a user-friendly and responsive web portal to facilitate human–computer interaction for successful decision-making.
The emerging geospatial cyberinfrastructure (GCI; Yang et al., 2011) is a promising instrument for building a disaster management system by harnessing tremendous advances in computer hardware, GIS middleware, networks, and sharable geospatial web services. GCI is a descendant of the Spatial Data Infrastructure (SDI). It focuses on providing better organization, integration, computation, and visualization of institutionally scattered geospatial resources through the development of computationally efficient middleware. Within the context of GCI or SDI, service orientation is a well-accepted strategy to improve the integration and exchange of heterogeneous geospatial data (Li et al., 2011). Using an earthquake study as an example: to study the correlation between the location and magnitude of earthquake events and
the mortality rate caused by earthquake-induced disasters, location data may come from a vector data model, such as an ESRI shapefile, whereas the mortality data may cover a continuous surface in a raster format. A service-oriented approach enables the conversion of various raw data types into a commonly understandable format to improve geospatial interoperability. Some web service solutions, such as the Open Geospatial Consortium (OGC) Web Map Service (WMS), enhance remote interoperation by converting raw data into static images. The conversion comes at the cost of losing substantial attribute information from the original data. The OGC Web Feature Service (WFS), in comparison, is capable of preserving the actual data, but it generates very large files while serializing the geospatial and attribute data. This leads to a long delay in data transfer in a client–server model. In this paper, we introduce a network transmission model that improves the performance of remote data transfer in a cyberinfrastructure environment by combining multiple data encoding and compression techniques. This model is successfully integrated into a GCI portal for efficient disaster data management.
The rest of the paper is organized as follows: Section 2 reviews recent literature on cyberinfrastructure and geospatial interoperability. Section 3 describes the architecture of a disaster response system. Section 4 discusses the solution techniques to accelerate geospatial processing in terms of vector data encoding and transmission in a service-oriented cyberinfrastructure environment. Section 5 demonstrates the performance of the proposed methods through a series of experiments. Section 6 demonstrates a Graphical User Interface (GUI) for real-time disaster analysis. Finally, Section 7 concludes the work and proposes future research directions.
2. Related work
2.1. Service-oriented geospatial cyberinfrastructure
In a disaster management scenario, the required data (e.g. satellite imagery showing the change before and after a disaster) are often geographically separated from (1) the web server portal on which the data are processed, and (2) where the decision-making takes place. This scenario requires the adoption of a decentralized and interconnected architectural design: distributed geoprocessing capabilities need to be supported, and distributed resources should be reused and integrated easily. A service-oriented GCI fits right into this vision (Foster, 2005). However, existing research mostly focuses on a single aspect of technological advancement in a GCI portal, such as service access, integration, or high-performance geospatial computing. For example, Li, Yang, and Yang (2010) and Lopez-Pellicer, Florczyk, Béjar, Muro-Medrano, and Zarazaga-Soria (2011) adopted large-scale web crawling to discover scattered geospatial web services to enhance accessibility and to foster better geospatial data usage. Mansourian, Rajabifard, Valadan Zoej, and Williamson (2006), Wei, Santhana-Vannan, and Cook (2009), and Li et al. (2011) proposed implementations of service-based spatial web portals to integrate distributed web services and visualize composite maps from data hosted through these services. Wang, Armstrong, Ni, and Liu (2005), Wei et al. (2006), Yang, Li, Xie, and Zhou (2008), and Zhang and Tsou (2009) proposed grid-enabled cyberinfrastructures with a geoportal to speed up computation-intensive tasks. Though local processing performance is greatly improved, these processes are implemented in standalone applications rather than in widely adopted OGC web services; therefore their reusability is limited. To address this issue, Wang (2010) proposed a CyberGIS framework to synergize advancements in both cyberinfrastructure and geospatial sciences. Accordingly, this project seeks to provide parallel data processing through standardized geospatial web services.
2.2. Geospatial web services and their performance issues
Geospatial web services, which foster interoperability among heterogeneous data and computing resources, are a key component of a service-oriented GCI (Li, Li, Goodchild, & Anselin, 2013; Li, Goodchild, Anselin, & Weber, 2015). Since the 1990s, a number of government agencies, research institutes, and non-profit organizations have been collaborating to foster interoperability for geospatial data. For example, the standards organization OGC has released a number of specifications allowing uniform requests and exchange of geographic features over the Internet. These service standards have been intensively used to search, analyze, and update crucial disaster-related information for disaster management (Weiser & Zipf, 2007). Among over 60 standards, the most widely adopted services are the OGC Web Map Service (WMS; de La Beaujardière, 2002) and the Web Feature Service (WFS; Vretanos, 2005a). A WMS allows the request of geo-referenced raster imagery over the Internet. When a GetMap request arrives at the server side through HTTP (Hypertext Transfer Protocol), a dynamic rendering process is triggered to generate a static image from vector feature data. This process is easier when hosting raw raster data, since raster data can be tiled and cached in advance. Once completed, the map image is returned to a client for visualization with other resources, e.g. base maps or image layers. A WFS service, in comparison, allows the retrieval of actual feature data. When a GetFeature request is received by a WFS server, the vector feature geometries are selected (Vretanos, 2005b), encoded, usually in the OGC Geography Markup Language (GML; Cox et al., 2002), and returned to the client. Hence, WFS delivers actual data to the client and allows users to perform spatial analyses in addition to the visual display.
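To make the request/response flow above concrete, the sketch below assembles a WFS GetFeature URL with Python's standard library. The endpoint and layer name are illustrative assumptions, not services used in this study; the query parameters themselves (service, version, request, typeName) follow the OGC WFS specification.

```python
from urllib.parse import urlencode

# Hypothetical WFS endpoint and layer name (illustrative only).
BASE_URL = "http://example.org/geoserver/wfs"

def build_getfeature_url(type_name, output_format=None, max_features=None):
    """Assemble a standard OGC WFS GetFeature request URL."""
    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": type_name,
    }
    if output_format is not None:
        params["outputFormat"] = output_format  # server-dependent, e.g. GML or JSON
    if max_features is not None:
        params["maxFeatures"] = max_features
    return BASE_URL + "?" + urlencode(params)

url = build_getfeature_url("quakes:epicenters", output_format="json", max_features=100)
print(url)
```

Passing an outputFormat parameter is how servers are commonly asked for an alternative encoding such as GeoJSON instead of the default GML; the supported format names vary by server implementation.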
Despite their popularity, WMS and WFS suffer from performance bottlenecks. Performance improvement techniques for OGC WMS have been widely discussed, such as those in Yang, Wong, Yang, Kafatos, and Li (2005), Mikula, Trotts, Stone, and Jones (2007), Baumann (2001), and Hu et al. (2010), primarily due to its easy implementation and visualization. However, in a WFS environment, the transmission of actual data poses big challenges to making data exchange among WFS servers efficient. Server-side encoding, data transmission, client-side decoding, and client-side rendering remain the primary WFS performance bottlenecks. This may be the reason why WFSs have not yet been adopted as widely as WMSs. Zhang, Zhao, and Li (2013) describe a technique to improve the query performance of WFS using Voronoi diagram indexing and parallel task scheduling. Yang et al. (2011) conducted a preliminary study on encoding data in a binary XML with some compression techniques, which forms the basis of this work.
To overcome the WFS performance bottleneck, three research questions arise concerning the implementation of a high-performance disaster management system: (1) how can spatial data be efficiently encoded and transmitted to support near real-time, remote data retrieval? (2) is a single pre-defined encoding and transmission strategy suitable for the diverse hardware and network environments encountered by users? and (3) what form does an extendable architecture take such that distributed and heterogeneous geospatial services can be seamlessly integrated? In this paper, we discuss our solutions to the above questions toward building a service-oriented, high-performance cyberinfrastructure for rapid disaster response and decision-making.
3. Architecture
Fig. 1. Service-oriented GCI for efficient disaster decision-making.

Fig. 1 demonstrates the service-oriented architecture for disaster management. From bottom to top, a disaster processing system
is divided into five tiers. Tier 1 is the data layer, in which large amounts of geospatial data are located and managed in either a database management system (DBMS) or a file system. These data sources directly interact with a spatial data engine that contains a series of data adapters to operate on data embedded in different DBMSs and their spatial query servers through supported APIs (Application Programming Interfaces). On top of the spatial data engine lies Tier 2, the web service engine, composed of three components: the filter engine, the encoding engine, and the rendering engine. The filter engine is responsible for sending spatial filtering requests to the database; the encoding engine is in charge of wrapping spatial vector data into an intermediate format, e.g. GML, for easy data exchange; and the rendering engine is responsible for rendering the spatial data from its original format into static images for a vector-based WMS. Once data are generated according to OGC web service standards, they are pushed up to the web service container at Tier 3 for standardized spatial data handling. These services interact with clients and handle clients' requests through HTTP (HyperText Transfer Protocol). The web service container also supports the configuration of map styles for returned images, including symbolization, color ramps, and line width, through SLD (Styled Layer Descriptor). Tiers 1–3 contain nested encapsulations of data, processing, and service interfaces, each based upon the tier below. Tier 4 is the frontend application. In a browser, it provides a user interface for web service query, integration, and visual display of maps from a remote server. For system administrators, it also provides a management interface to upload or update spatial data in real time and to monitor the running status of web services. The uppermost layer, Tier 5, is the application layer. It contains the client tools customized to meet the requirements of specific decision-making purposes, e.g. early-warning systems or disaster assessment systems.
As discussed earlier, the system has bottlenecks, shown with red arrows. In Tier 2, the bottleneck is due to expensive data encoding in the web service engine. Between Tier 3 and Tier 4, vector data
transmission acts as a second bottleneck, which needs to be resolved in order to further improve system performance. The next section discusses the proposed solutions to address these challenges. The vector-based environment, the WFS, will be our focus.
4. An efficient network transmission model for WFS
In general, the WFS network transmission model adopted to handle an incoming request involves the processes of pulling data from the backend database, filtering them according to the given spatial and temporal constraints, encoding them into an intermediate format, and then sending the data back to the client for visualization (grey modules in Fig. 2). To resolve the performance bottleneck when large data is being transferred, we extend the regular model by integrating multiple encoding and compression techniques. Red modules in Fig. 2 illustrate the proposed modules.
4.1. Vector data encoding for network transmission
A WFS facilitates data exchange across different GIS platforms by encoding raw spatial data into a common file format. The GML Simple Feature Profile is used as the primary encoding format for a WFS (Burggraf, 2006). GML is based upon XML (Extensible Markup Language), a well-known text-based markup language with rich expression capability for complex data. GML defines the encoding formats for simple points, lines, polygons, and other complex spatial data structures, such as multi-points, multi-part polygons, etc. Using GML to encode spatial geometries has the following advantages: first, GML is a text-based data format and is machine-processable in a cross-platform environment; second, GML is carried over HTTP, so it can easily cross firewalls, making GML one of the ideal candidates to carry, share, and interoperate spatial data among disparate GIS systems. Despite its advantages, GML also receives criticism for its redundant tagging strategy. Also, due to its text-based nature, encoding a large number of features can result in huge GML files.

Fig. 2. Network transmission model to handle a WFS request.
To enable a lightweight encoding method in this WFS request/response workflow, we also enable GeoJSON encoding. GeoJSON (Butler et al., 2008) is built upon JSON (JavaScript Object Notation), which uses key:value pairs rather than open and close tags to encode geospatial data. JSON is both computer parsable and human readable, and is capable of describing complex data structures. GeoJSON extends the JSON format to include the geometric feature types Point, LineString, Polygon, MultiPoint, MultiLineString, MultiPolygon, and GeometryCollection. Each feature in GeoJSON has two parts, vector and attribute data, describing the spatial extent and the non-spatial properties of the feature.
Table 1 illustrates GML and GeoJSON representations of point, line, and polygon features. As shown, a GML point feature is enclosed by embedded open and close tags (<gml:Point> and <gml:coordinates>)

Table 1. An example of using GML and GeoJSON to encode geometric features.

Feature type: Point
GML:
<gml:Point>
  <gml:coordinates>-44.371525,-22.172413</gml:coordinates>
</gml:Point>
GeoJSON:
{ "type": "Point",
  "coordinates": [-44.371525, -22.172413] }

Feature type: LineString
GML:
<gml:LineString>
  <gml:coordinates>-44.371525,-22.172413 -43.860966,-21.677410 -43.979008,-21.417412</gml:coordinates>
</gml:LineString>
GeoJSON:
{ "type": "LineString",
  "coordinates": [[-44.371525, -22.172413],
                  [-43.860966, -21.677410],
                  [-43.979008, -21.417412]] }

Feature type: Polygon
GML:
<gml:Polygon>
  <gml:outerBoundaryIs>
    <gml:LinearRing>
      <gml:coordinates>-44.371525,-22.172413 -43.860966,-21.677410 -43.979008,-21.417412 -44.371525,-22.172413</gml:coordinates>
    </gml:LinearRing>
  </gml:outerBoundaryIs>
</gml:Polygon>
GeoJSON:
{ "type": "Polygon",
  "coordinates": [[[-44.371525, -22.172413],
                   [-43.860966, -21.677410],
                   [-43.979008, -21.417412],
                   [-44.371525, -22.172413]]] }
, while GeoJSON uses a flat structure with keys of "type" and "coordinates". In the GML format, a line or polygon feature's coordinates are separated by commas, and each vertex is separated by a space. In the GeoJSON format, coordinates are also separated by commas, but each vertex is represented as a collection enclosed by "[" and "]". The geometry information is stored in the "geom" section of a WFS request, together with any other feature attributes. As shown in Table 1, encoding vertex-based data with GeoJSON generates more compact files than using GML, because the GeoJSON structure does not require long tags (such as <gml:coordinates> and </gml:coordinates>). However, GML is more efficient than GeoJSON at representing line and polygon data when the features contain multiple vertices. This observation is validated in the experiment section. (Note that we only consider WFS GetFeature responses that contain features of a single type, either point, line, or polygon. A hybrid geometry collection is not considered.)
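The compactness argument in Table 1 is easy to check directly. The following sketch (ours, not from the paper) encodes a single point once as a GML fragment and once as GeoJSON and compares byte counts; the coordinate values follow Table 1.

```python
import json

# One point encoded as a GML fragment (GML 2 simple-feature style tags).
gml_point = (
    "<gml:Point>"
    "<gml:coordinates>-44.371525,-22.172413</gml:coordinates>"
    "</gml:Point>"
)

# The same point encoded as GeoJSON, in compact form without extra whitespace.
geojson_point = json.dumps(
    {"type": "Point", "coordinates": [-44.371525, -22.172413]},
    separators=(",", ":"),
)

print(len(gml_point), len(geojson_point))
```

The flat key/value structure needs fewer bytes than the paired GML tags for a point; for long vertex lists the per-vertex brackets and commas of GeoJSON gradually erode that advantage, as the text notes.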
4.2. Introducing a compression strategy to the network transmission model of a WFS
Despite GML and GeoJSON's popularity in a variety of GIS applications, the text-based nature of GML and GeoJSON results in the storage of redundant information during the data encoding process. Therefore, large text-based files are created when large, complex datasets are requested. To reduce the amount of data transferred over the Internet and to reduce client waiting time, our framework introduces a compression/decompression module (red module in Fig. 2; see footnote 1). After a dataset is encoded, its size is reduced in the compression module before being transmitted to the client. The compression module supports two compression algorithms: the Lempel–Ziv–Markov-chain Algorithm (LZMA; Morse, 2005) and the DEFLATE algorithm (Deutsch, 1996). These two algorithms were chosen for integration into the transmission model because of their popularity (Dorward & Quinlan, 2000), because both are lossless compression algorithms, and because they represent two typical compression techniques: DEFLATE achieves a faster compression speed but a relatively lower compression rate; LZMA is the opposite.
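Both algorithms ship with Python's standard library (zlib implements DEFLATE; the lzma module wraps LZMA), so the speed/ratio trade-off can be sketched directly. The payload below is a synthetic, tag-heavy string imitating a repetitive GML response; it is our illustration, not data from the experiments.

```python
import lzma
import time
import zlib

# Synthetic, tag-heavy payload imitating a repetitive GML response.
payload = b"<gml:coordinates>-44.371525,-22.172413</gml:coordinates>" * 2000

for name, compress, decompress in (
    ("DEFLATE", zlib.compress, zlib.decompress),
    ("LZMA", lzma.compress, lzma.decompress),
):
    start = time.perf_counter()
    packed = compress(payload)
    elapsed = time.perf_counter() - start
    assert decompress(packed) == payload  # both algorithms are lossless
    print(f"{name}: {len(payload):,} -> {len(packed):,} bytes in {elapsed:.4f} s")
```

On typical hardware, LZMA tends to shrink such redundant text further than DEFLATE while spending noticeably more CPU time, which is exactly the trade-off Section 5 measures end to end.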
Given a sequence of characters in a string X, the LZMA algorithm (Fig. 3) is capable of completing the compression in only one pass over the data. As the string X is read, it is partitioned into non-overlapping substrings {z_r}. For any z_r1 and z_r2 in X, if r1 < r2, z_r2 must not be the same as any z_r1. Each z_r is then encoded into a pair (i_r, y_r), where i_r is the index of the dictionary entry that appears as the prefix of the current substring and y_r is the remaining character, a single element of the alphabet of X. In the final step of the encoding process, the pairs are converted into binary numbers. In a practical implementation, the dictionary storing {z_r} can be limited to save memory space: when the maximum number of dictionary entries is reached, old and infrequently used entries can be removed. As a lossless compression technique, LZMA is very effective. In the worst-case scenario, the compressed size will be at most r(⌈log2 r⌉ + ⌈log2 a⌉) bits, in which r is the size of {z_r} and a is the size of the alphabet of X. An LZMA decoder reconstructs {z_r} first and then uses a lookup table to convert the elements of {z_r} into the original sequence. This lookup operation is computationally intensive, so despite the benefit of a high compression ratio, the LZMA algorithm is disadvantaged by a relatively long compression time.

Fig. 3. LZMA Algorithm.

1 For interpretation of color in Fig. 2, the reader is referred to the web version of this article.
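The one-pass dictionary parse described above is essentially the classic LZ78 scheme of (i_r, y_r) pairs; production LZMA implementations add a sliding window and range coding on top. A minimal sketch of the parse and its decoder (ours, for illustration):

```python
def lz78_parse(s: str):
    """One-pass parse of s into (index, next_char) pairs, as described in the text."""
    dictionary = {"": 0}  # phrase -> index; index 0 is the empty phrase
    phrases = []
    current = ""
    for ch in s:
        if current + ch in dictionary:
            current += ch  # keep extending the longest known phrase
        else:
            phrases.append((dictionary[current], ch))    # emit (i_r, y_r)
            dictionary[current + ch] = len(dictionary)   # register new phrase
            current = ""
    if current:  # leftover input that is already a known phrase
        phrases.append((dictionary[current], ""))
    return phrases

def lz78_decode(phrases):
    """Rebuild the original string from (index, next_char) pairs via a lookup table."""
    table = [""]
    out = []
    for idx, ch in phrases:
        phrase = table[idx] + ch
        table.append(phrase)
        out.append(phrase)
    return "".join(out)
```

Each emitted pair references an earlier phrase by index, which is why the decoder's lookup table grows as it reads, mirroring the dictionary built during encoding.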
The DEFLATE algorithm, in comparison, combines two stages, duplicate string elimination and bit reduction, to compress a data stream. Duplicate string elimination (stage 1) tries to use a shorter code to represent recurring patterns in the data. Two moving windows of the same size are maintained during the compression process: w1 stores n characters backward from the current location and w2 stores n characters forward from the current location. If a substring in w2 is contained in w1, this substring is replaced by a pair of numbers: the length of the substring and the offset indicating how far back this substring appears in w1. This first stage is the classic LZ77 lossless compression algorithm.
To further reduce the size of the data, Huffman encoding is introduced as the second stage of the DEFLATE algorithm. This encoding reduces the average code length by using the shortest code for the most frequently occurring character and the longest code for the least frequently occurring character. A binary coding tree is constructed in which each character of the original data flow is represented as a leaf node. The left child node is encoded as 0 and the right child node is encoded as 1. The final code of a character is the sequence of 0s and 1s on the path from the root to its leaf node. To generate such a Huffman tree, the original data flow first needs to be scanned to count the number of occurrences of each unique character. Then the characters with minimal frequencies are selected to construct a subtree, and this process continues until all characters have been encoded in the Huffman tree. During the decoding process, the compressed data flow must first be de-serialized from its Huffman codes back to the original characters, and then the repeated occurrence patterns are replaced by the original strings to recover the data stream before compression.
Huffman encoding is efficient at runtime: it takes only O(n log n) operations to construct the tree. It also provides a good compression ratio of (∑_i p(x_i) log2(1/p(x_i))) / log2 k, where p(x_i) is the probability of a character x_i occurring in the data stream and k is the number of unique characters in the data stream to be compressed.
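The tree construction just described can be sketched with a heap. This minimal version (ours, for illustration) returns only the code lengths, which is enough to verify that the most frequent character gets the shortest code:

```python
import heapq
from collections import Counter

def huffman_code_lengths(text: str) -> dict:
    """Return the Huffman code length (tree depth) of each character in text."""
    freq = Counter(text)
    if len(freq) == 1:  # degenerate case: a single symbol still needs one bit
        return {ch: 1 for ch in freq}
    # Heap entries: (frequency, tie-breaker, {char: depth-so-far}).
    heap = [(f, i, {ch: 0}) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)  # two least frequent subtrees
        f2, _, d2 = heapq.heappop(heap)
        # Merging pushes every leaf in both subtrees one level deeper.
        merged = {ch: depth + 1 for ch, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]
```

For the string "aaaaabbcd", for example, 'a' (frequency 5) receives a 1-bit code while the rare 'c' and 'd' receive 3-bit codes, and the lengths satisfy the Kraft equality for a full binary tree.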
Though the above compression techniques substantially reduce the amount of data to be transferred over the Internet, they also introduce extra time for server compression and client decompression. For example, in a traditional WFS request/response flow, the wall time t, a.k.a. the total response time, is composed of the time used for handling an HTTP request and encoding vector data into an intermediate format such as GML (t1), result transfer or client download (t3), and client decoding (t5):

t = t1 + t3 + t5    (1)

After introducing the compression module, the total response time t' also includes the time for result compression on the server side (t2) and client decompression (t4):

t' = t1 + t2 + t'3 + t4 + t5    (2)

Although compression reduces the amount of data and therefore the transfer time (t'3 < t3), it also introduces the extra time t2 + t4. Therefore, only when

t' - t = t2 + t4 + (t'3 - t3) < 0    (3)

are the advantages of compression realized. It is known that LZMA has a better compression ratio than DEFLATE; however, DEFLATE runs faster than LZMA, so when handling data with different repeat patterns they will behave differently. Section 5 will compare their performance under different transmission and network conditions.
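Inequality (3) can be evaluated numerically once t2 and t4 are measured. The sketch below (our illustration) times DEFLATE via Python's zlib and estimates the transfer terms from the payload sizes under an assumed link bandwidth:

```python
import time
import zlib

def compression_pays_off(payload: bytes, bandwidth_bytes_per_s: float) -> bool:
    """Evaluate inequality (3): t2 + t4 + (t3' - t3) < 0."""
    start = time.perf_counter()
    packed = zlib.compress(payload)      # server-side compression -> t2
    t2 = time.perf_counter() - start

    start = time.perf_counter()
    restored = zlib.decompress(packed)   # client-side decompression -> t4
    t4 = time.perf_counter() - start
    assert restored == payload           # lossless round trip

    t3 = len(payload) / bandwidth_bytes_per_s       # uncompressed transfer time
    t3_prime = len(packed) / bandwidth_bytes_per_s  # compressed transfer time
    return t2 + t4 + (t3_prime - t3) < 0

# A repetitive ~1 MB payload over an assumed 1 MB/s link: compression should win.
payload = b"<gml:coordinates>-44.371525,-22.172413</gml:coordinates>" * 20000
print(compression_pays_off(payload, bandwidth_bytes_per_s=1_000_000))
```

The same payload on a very fast link flips the outcome, since the transfer-time savings shrink below the fixed compression and decompression cost, which is exactly why a single pre-defined strategy cannot suit every network environment.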
5. Experiment
In this section, we present systematic experiments to test the performance of WFSs with the proposed network transmission model. To simulate a real-world scenario, we set up the following experimental environment: (1) the server hosting the web services and the client application hosting the CI web portal (client) were deployed at physically distributed locations; (2) the web service engine, with the support of a geospatial database (PostgreSQL 9.4 + PostGIS 1.5), was deployed on a Windows server located at UCSB; (3) a main client node, which is also the permanent host of the CI portal, was deployed at ASU; and (4) we also deployed two additional client nodes on the cloud through the Amazon EC2 platform: one cloud node located in Singapore and another located in Ireland.
Both of the cloud nodes were deployed as EC2 micro instances with the same hardware and network conditions. The configuration of the cloud nodes simulates users who have lower Internet speed or unstable network conditions, as might occur after major disasters. As the speed of EC2 instances is variable, each result is the averaged value over six runs. Fig. 4 displays the locations and
the connection speed between the server node and the client nodes.

Fig. 4. Cloud deployment of server and client test nodes.

Fig. 5. Comparison of WFS GetFeature responses using different encoding and compression methods for point data: (a) comparison in data size after GeoJSON and GML encoding with no compression; (b) comparison in data size after GeoJSON and GML encoding with DEFLATE compression; (c) comparison in data size after GeoJSON and GML encoding with LZMA compression.

Sections 5.1 and 5.2 focus on comparing WFS performance with and without our proposed techniques using point and polygon data on the main client node. Section 5.3 compares the performance of WFS with the proposed model under different network conditions.
5.1. WFS performance for point data on main client node
We used a 12-year earthquake location dataset to test the network transmission efficiency of the proposed model. The compression methods, encoding methods, and data size all varied within the tests. Fig. 5(a)–(c) compares the differences in the size of data transferred back to the client side using combinations of encoding (GML or GeoJSON) and compression techniques (LZMA or DEFLATE). The x-axis shows the number of years of earthquake data used for the test. For the four leftmost encodings (1–8 years), as the time interval doubles, the amount of data transferred almost
also doubles.

Fig. 6. Comparison of total response time for a WFS GetFeature request of earthquake point data: (a) response time for GeoJSON encoding with DEFLATE and LZMA compression; (b) response time for GML encoding with DEFLATE and LZMA compression.

Fig. 5(a) shows that GeoJSON is much more efficient than GML in encoding point data. The data size after GeoJSON encoding is less than half of that in GML. This can be attributed to GML's redundant open- and close-tag encoding. Fig. 5(b) and (c) demonstrate the data sizes after employing DEFLATE and LZMA compression. It can be clearly observed that: (1) no matter which compression method is used or the size of the dataset, data encoded in GeoJSON are always smaller than the data in GML. This is not surprising, since the raw GeoJSON file is much smaller than the GML file, according to Fig. 5(a). It is also clear in Fig. 5(b) and (c) that the differences between GML and GeoJSON
encoded files are small after either compression method, because the majority of the content (the coordinates) is the same for both files. (2) LZMA yields a better compression rate than DEFLATE. When compressing the GML file, LZMA achieves on average a 29-fold compression rate, while DEFLATE reaches only 24-fold. Similarly, LZMA compresses a GeoJSON file at a compression ratio of 16:1, whereas DEFLATE only achieves 10:1. (3) GML has more redundancy than GeoJSON when encoding point data, reflected by its higher compression ratio than GeoJSON's using either DEFLATE or LZMA.
In addition to data size, we also compared WFS efficiency in terms of the total response time t0 of a WFS GetFeature request using the proposed transmission model. Fig. 6 compares the response times for GML and GeoJSON encoding combined with the different compression methods. As the amount of requested data increases, the proposed transmission model exhibits different performance. First, regardless of the text encoding, GeoJSON or GML, DEFLATE compression yields the fastest response time. With DEFLATE, the transmission model can be more than twice as fast as compression-free transmission, and the speedup grows as more data is requested: with the 12-year dataset, the speedup in total response time reaches 2.9 for GeoJSON encoding (Fig. 6a) and 3.7 for GML encoding (Fig. 6b). In contrast, LZMA does not perform as well as DEFLATE. For GeoJSON data, the WFS response time with LZMA compression is slower than with no compression at all. For GML data, LZMA compression is faster than compression-free transmission but still slower than DEFLATE. This is because, although
Fig. 7. Comparison of WFS GetFeature response using different encoding and compression methods for polygon data: (a) data size after GeoJSON and GML encoding with no compression; (b) data size after GeoJSON and GML encoding with DEFLATE compression; (c) data size after GeoJSON and GML encoding with LZMA compression.
LZMA compression achieves a better compression ratio, it takes longer for server-side compression and client-side decompression. This extra time is significantly longer than transmitting the uncompressed data over the Internet would be, given the high-speed connection between the ASU client and the UCSB server. For the GML case, because the GML file is much larger than the GeoJSON file encoding the same amount of data, the transmission time over the Internet is long enough for the LZMA approach to outperform the regular WFS transmission model.

These experiments show that when the client-server connection speed is high, in other words, when network transmission is not a bottleneck: (1) the transmission model using DEFLATE compression is the most efficient in all cases; (2) GeoJSON is more effective than GML in encoding point data; and (3) LZMA achieves a better compression ratio than DEFLATE for both GeoJSON and GML files, and this compression technique outperforms the regular WFS model when the requested data is in GML and the requested data size is large.
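The trade-off underlying these findings, a higher compression ratio versus longer compression and decompression time, can be sketched as a back-of-the-envelope model of the total response time t0. The payload and the two link speeds below are synthetic assumptions; only the compression and decompression stages are actually timed:

```python
import lzma
import time
import zlib

def total_time(payload: bytes, bandwidth_bps: float, codec: str) -> float:
    """Estimate t0 = compression time + modeled transfer time + decompression time.

    The link is modeled by `bandwidth_bps` (idealized, latency-free);
    compression and decompression are actually timed locally.
    """
    t = time.perf_counter()
    if codec == "deflate":
        blob = zlib.compress(payload, 6)
    elif codec == "lzma":
        blob = lzma.compress(payload)
    else:  # "none": the regular, compression-free WFS transmission
        blob = payload
    compress_s = time.perf_counter() - t

    transfer_s = len(blob) * 8 / bandwidth_bps

    t = time.perf_counter()
    if codec == "deflate":
        zlib.decompress(blob)
    elif codec == "lzma":
        lzma.decompress(blob)
    decompress_s = time.perf_counter() - t

    return compress_s + transfer_s + decompress_s

# Synthetic GML-flavored payload: ~0.8 MB of highly repetitive text.
payload = b"<gml:pos>12.345678 45.678901</gml:pos>" * 20000
for codec in ("none", "deflate", "lzma"):
    fast = total_time(payload, 1e9, codec)   # ~1 Gbps link
    slow = total_time(payload, 1e6, codec)   # ~1 Mbps link
    print(f"{codec:7s} fast-link {fast:.3f}s  slow-link {slow:.3f}s")
```

On the modeled slow link, both compressed models beat the compression-free one because transfer dominates; on the fast link, the (de)compression cost itself becomes the deciding factor, mirroring the experiments above.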
5.2. WFS performance for polygon data on main client node
This section compares the performance of the proposed techniques for polygon data, using the US 2010 census tract dataset. The same set of experiments as in Section 5.1 was conducted. Fig. 7 shows the data sizes for five increasingly large samples of data after LZMA and DEFLATE compression. Fig. 7(a) shows the raw data sizes for GML and GeoJSON; this is the data volume
transferred in a regular WFS transmission model. It can be seen that GML is more effective than GeoJSON in encoding polygon data. This is not surprising: analysis of the GML and GeoJSON data structures shows that GeoJSON uses an extra pair of brackets and a comma to separate coordinates. For a polyline or polygon feature, which usually has many vertices, the redundancy introduced by these extra separators substantially increases the final data size.
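A rough sense of this separator overhead can be obtained by encoding the same (hypothetical) polygon ring both ways and comparing byte counts. The GML fragment below uses a posList-style flat coordinate stream and is not a complete GML document:

```python
import json

# A hypothetical polygon ring with many vertices.
ring = [[round(-112.0 + i * 1e-4, 6), round(33.4 + i * 1e-4, 6)]
        for i in range(5000)]
ring.append(ring[0])  # close the ring

# GeoJSON: every vertex wrapped in [] and joined by commas.
geojson = json.dumps({"type": "Polygon", "coordinates": [ring]},
                     separators=(",", ":"))

# GML posList: one flat, space-separated coordinate stream.
gml = ("<gml:Polygon><gml:exterior><gml:LinearRing><gml:posList>"
       + " ".join(f"{x} {y}" for x, y in ring)
       + "</gml:posList></gml:LinearRing></gml:exterior></gml:Polygon>")

print(len(geojson), len(gml))
```

With thousands of vertices, the roughly two extra separator bytes GeoJSON spends per vertex outweigh GML's fixed tag overhead, so the GeoJSON string comes out larger, consistent with Fig. 7(a).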
Fig. 7(b) and (c) present the GML and GeoJSON data sizes after applying the proposed compression techniques. For both GeoJSON and GML files, the LZMA output is half the size of the DEFLATE output, consistent with the point-data experiment. There is, however, a notable difference: the compression ratios are only about 3:1 for DEFLATE and 6:1 for LZMA, both much lower than the point-data
Fig. 8. Comparison of total response time for a WFS GetFeature request of census tract data: (a) response time for GeoJSON encoding with LZMA and DEFLATE compression; (b) response time for GML encoding with LZMA and DEFLATE compression. Data used in each column group doubles the size of the one on its left.
compression ratios of 10:1 and 16:1. This means that the point-data encoding structures are less efficient than the polygon encodings in both GML and GeoJSON. Another observable difference in Fig. 7(b) and (c) is that the GeoJSON and GML files remain very different in size after the same compression technique is applied, indicating that the current GeoJSON structure for encoding polygon data is not as compressible as GML's.

Fig. 8 illustrates the total client wait time as the sum of the request processing time on the server end and the data decompression and decoding time on the client side. It shows that under the current network conditions, the total response time with DEFLATE compression is faster than with LZMA by about 3:1, whether the encoding is GeoJSON (Fig. 8a) or GML (Fig. 8b). The stacked bars show that the majority of the request-response cycle is spent by the client on on-the-fly decompression of the streaming data.
For a regular WFS (the "none" category), this is simply the data transfer time. In contrast to the earlier point-geometry experiment, polygon transmission with LZMA is slower than the regular WFS model, suggesting that LZMA is not suitable for polygon data requests in a fast network environment.
The results suggest that: (1) GML has a better encoding structure for polygon datasets than GeoJSON; because polygon and polyline encodings are very similar, this conclusion applies to polyline data as well; (2) transmission models using DEFLATE are more efficient than both the regular WFS model and the LZMA model in a fast client-server network; and (3) the compression ratios for both GML and GeoJSON are smaller for polygon data than for point data, suggesting that both formats' polygon encoding structures are more efficient than their point encoding structures.
5.3. Comparison of response time for multiple, world-wide cloud
nodes
This section tests the performance of the proposed network transmission model with the client application running on distributed cloud nodes. After a major disaster and the resulting infrastructure failure, remote users' network stability may be greatly affected. We therefore deployed two micro-instances with identical hardware and network configurations to represent this disrupted scenario. For consistency, we used the same census tract data as the test dataset. Because GML was validated as the more effective encoding method for polygon data, we tested only the model combining GML with LZMA and DEFLATE compression. The four column groups (from left to right) in Fig. 9 correspond to data sizes of 7 MB, 14 MB, 28 MB, and 56 MB. The figure shows that: (1) as the amount of requested data doubles, the total response time almost doubles as well; the web server introduces no extra overhead when processing larger datasets, reflecting its scalability. (2) For the same request, the Singapore node takes longer to receive the response than the Ireland node. Given that both ping the same server, this result shows that location matters in network transmission: the server is located in North America, so the Singapore node's traffic may traverse more physical routers than the Ireland node's, resulting in a longer waiting period. (3) Because of this geographic inequity, users at different locations must choose the WFS transmission model best suited to their network conditions. As
Fig. 9. Total response time on cloud nodes: Ireland node (IE) and Singapore node (SG).
shown, the total response times using the different compression techniques are close to each other. Although DEFLATE decompression is faster than LZMA decompression, in a network with lower speed and higher latency decompression is no longer the bottleneck; what matters instead is the speed of transferring the data stream. More importantly, once the data volume grows beyond a certain amount (>28 MB), the LZMA model shows a faster response time than the DEFLATE model on the Singapore node, while the DEFLATE model still works slightly better than the LZMA model on the Ireland node.
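One way a client could act on these observations is a simple rule of thumb for picking a transmission model per user. The bandwidth and payload thresholds below are illustrative assumptions distilled from the experiments, not values prescribed by the system:

```python
def choose_codec(bandwidth_mbps: float, payload_mb: float) -> str:
    """Pick a transmission model following the experiments' rule of thumb:
    DEFLATE on fast links; LZMA once the link is slow and the payload is
    large enough that its higher ratio pays for the slower (de)compression.
    Thresholds are illustrative, not part of the described system."""
    if bandwidth_mbps >= 100:      # fast client-server network
        return "deflate"
    if payload_mb > 28:            # slow link, large request
        return "lzma"
    return "deflate"

assert choose_codec(1000, 56) == "deflate"  # main client node
assert choose_codec(2, 56) == "lzma"        # e.g. a remote micro-instance
assert choose_codec(2, 7) == "deflate"
```

In the portal described in Section 6, this choice is left to the user via the configuration window; automating it would require the client to estimate its own bandwidth first.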
These experiments verify that the proposed transmission model outperforms the regular WFS model. In the next section we describe the integration of the proposed model into an operational CI system for disaster management.
6. Graphic user interface for the disaster management portal
Fig. 10 shows the Graphic User Interface (GUI) of the disaster management CI portal that integrates the proposed techniques. The portal integrates data services for disaster data visualization and analysis. A backend data crawler retrieves real-time earthquake data from the USGS website (http://earthquake.usgs.gov/earthquakes/map/) and stores it in the backend geospatial database, an essential component shown in Figs. 1 and 2. A web service engine, enabled by GeoServer and deployed in the service pool of the cluster framework, pulls spatial data directly from the database; thanks to this dynamic invocation mechanism, a new earthquake record is immediately reflected in the portal. The earthquake data are published as a WFS backed by the proposed network transmission model. The window in the lower-left corner of the screenshot is a configuration interface that allows end users to select their preferred encoding and compression methods; choosing a compression method that matches the user's network conditions maximizes WFS performance. Once the earthquake data reach the client, they can be further analyzed with statistical or spatial analytical functions. In Fig. 10, for example, a bar chart displays the number of earthquakes within a space-time frame. In addition to WFS, the portal can also add distributed WMSs for correlated analysis. A WMS can be added directly by an end user, or discovered through a CSW (Catalog Service for the Web; Senkler, Voges, & Remke, 2004) interface provided by a central repository
Fig. 10. GUI of the disaster management portal (http://polar.geodacenter.org/gci).
located at ASU. In this way, all modules in the portal, including the data discovery, data integration, and visualization modules, are web service-based.
As an example disaster management scenario, consider a researcher who wants to analyze the correlation between earthquake locations and landslide mortality (landslides are often triggered by earthquakes) across different geographic regions. Fig. 10 presents how this example looks in the disaster management CI portal. The researcher first retrieves the available data through the cataloged web services, including 2013 earthquake data from an ASU WFS server, US state boundaries from a UCSB WMS server, and landslide mortality data from the NASA SEDAC (Socioeconomic Data and Applications Center) WMS server. These data are seamlessly integrated and visualized in the CI portal. Although California had a high number of earthquakes in 2013 (reflected by the size of the orange circles), the mortality caused indirectly by the induced disasters is low (indicated by the very sparse red color on the mortality distribution map). In contrast, places such as Northern Africa and Northwest China have suffered seriously from earthquake and landslide disasters, as indicated by the large orange circles and dark red hues on the map. Using the results of this initial investigation, further studies can be conducted in these areas to improve disaster management, for example by comparing their geology and urban disaster-preparedness infrastructure.
7. Conclusion and discussion
This paper presents our proposed network transmission model for enhancing the performance of geospatial web services, in particular WFS, in a cyberinfrastructure environment. The model enables efficient delivery of geospatial data across different network environments. Systematic experiments revealed several interesting findings. First, the proposed transmission model presents significant advantages over the regular WFS transmission model, and the two compression techniques suit different network environments: in a network with high speed and low latency, the DEFLATE model works better than LZMA; in contrast,
in an environment with a data transmission bottleneck, the LZMA model performs better. Second, GML is more efficient than GeoJSON at encoding polyline/polygon data, while GeoJSON is more efficient than GML at encoding point data. Both formats encode point data redundantly, as reflected by their high compression ratios, which suggests that a renovation of the encoding structures of both GML and GeoJSON may be necessary. Third, location matters in the selection of the transmission model. Even with the same hardware, software, and network environment, different users may experience the same web server differently. Service quality, in terms of request-response latency, therefore needs to incorporate this location-awareness and be tailored to users in different places and at different times. These findings contribute to a comprehensive understanding of performance issues in vector-based data transmission in a service-oriented cyberinfrastructure environment, and may serve as a guide for refining existing encoding standards and data transmission middleware.

We have successfully integrated the proposed techniques into a service-oriented CI portal for disaster management. The portal provides a central access point where the general public or researchers can interact with physically distributed data services, configure the network transmission model best suited to their hardware and network environment, view statistical information, and conduct space-time queries against remote datasets. To the best of our knowledge, this is the first time such a transmission model has been integrated into an operational system. We expect the disaster response community to benefit greatly from this service-oriented cyberinfrastructure for efficient spatial decision-making. Moreover, this performance-boosting model for WFS and the portal techniques are not limited to disaster data; they can also serve other applications, such as emergency response, whose real-time requirements benefit from the efficient compilation of distributed data.

Last but not least, according to recent studies (Li, 2014; Li, Wang, & Bhatia, 2015), although interoperability in GIScience has advanced greatly over the past few years (indicated by a more than twenty-fold increase in the number of WMSs on the Web in 2014 compared with 2010), the support for
WFS remains insufficient within the community, likely due in large part to the challenges of WFS's heavy data transfer process. By providing timely solutions that improve WFS performance, we expect this work to provide momentum for widening the use of WFS, and eventually to broaden the adoption of open geospatial services and science. The solution will also be open sourced to benefit other GIScience researchers and users. In the future, we will also deploy the proposed transmission model on GeoDa's high-performance cluster to achieve high system throughput.
Acknowledgements
This work is supported in part by the National Science Foundation (PLR-1349259, BCS-1455349, and PLR-1504432), as well as by the Open Geospatial Consortium.
References
Alinia, H. S., & Delavar, M. R. (2011). Tehran's seismic vulnerability classification using granular computing approach. Applied Geomatics, 3(4), 229–240.
Baumann, P. (2001). Web-enabled raster GIS services for large image and map databases. In Proceedings of the 12th international workshop on database and expert systems applications.
Burggraf, D. S. (2006). Geography markup language. Data Science Journal, 5, 178–204.
Butler, H., Daly, M., Doyle, A., Gillies, S., Schaub, T., & Schmidt, C. (2008). The GeoJSON format specification.
Cox, S., Cuthbert, A., Daisey, P., Davidson, J., Johnson, S., Keighan, E., et al. (2002). OpenGIS Geography Markup Language (GML) implementation specification.
de La Beaujardière, J. (2002). Web map service implementation specification. OpenGIS Consortium, 82.
der Heide, E. A. (2006). The importance of evidence-based disaster planning. Annals of Emergency Medicine, 47(1), 34–49.
Deutsch, L. P. (1996). DEFLATE compressed data format specification version 1.3.
Dorward, S., & Quinlan, S. (2000). Robust data compression of network packets. Bell Labs, Lucent Technologies.
Foster, I. (2005). Service-oriented science. Science, 308(5723), 814–817.
Goodchild, M. F. (2006). GIS and disasters: Planning for catastrophe. Computers, Environment and Urban Systems, 30(3), 227–229.
Hu, C., Zhao, Y., Li, J., Liu, M., Ma, D., & Li, X. (2010). OGC-compatible high-performance web map service for remote sensing data visualization. In Proceedings of the 12th international conference on information integration and web-based applications & services.
Li, W., Goodchild, M. F., Anselin, L., & Weber, K. (2015). A service-oriented smart CyberGIS framework for data-intensive geospatial problems. In S. Wang & M. F. Goodchild (Eds.), CyberGIS: Fostering a new wave of geospatial discovery and innovation. Springer (in press).
Li, W., Li, L., Goodchild, M. F., & Anselin, L. (2013). A geospatial cyberinfrastructure for urban economic analysis and spatial decision-making. ISPRS International Journal of Geo-Information, 2(2), 413–431.
Li, W., Yang, C., Nebert, D., Raskin, R., Houser, P., Wu, H., et al. (2011). Semantic-based web service discovery and chaining for building an Arctic spatial data infrastructure. Computers & Geosciences, 37(11), 1752–1762.
Li, W., Yang, C., & Yang, C. (2010). An active crawler for discovering geospatial web services and their distribution pattern: A case study of OGC Web Map Service. International Journal of Geographical Information Science, 24(8), 1127–1147.
Li, W. (2014). PolarHub. (accessed 20.03.15).
Li, W., Wang, S., & Bhatia, V. (2015). PolarHub: A large-scale cybersearch engine for geospatial data discoverability and accessibility. GeoDa Center Working Paper (23pp).
Lopez-Pellicer, F. J., Florczyk, A. J., Béjar, R., Muro-Medrano, P. R., & Zarazaga-Soria, F. J. (2011). Discovering geographic web services in search engines. Online Information Review, 35(6), 909–927.
Mansourian, A., Rajabifard, A., Valadan Zoej, M., & Williamson, I. (2006). Using SDI and web-based system to facilitate disaster management. Computers & Geosciences, 32(3), 303–315.
McEntire, D. A. (2002). Coordinating multi-organisational responses to disaster: Lessons from the March 28, 2000, Fort Worth tornado. Disaster Prevention and Management: An International Journal, 11(5), 369–379.
Mikula, S., Trotts, I., Stone, J. M., & Jones, E. G. (2007). Internet-enabled high-resolution brain mapping and virtual microscopy. Neuroimage, 35(1), 9–15.
Morse, K. G., Jr. (2005). Compression tools compared. Linux Journal, 2005(137), 3.
Senkler, K., Voges, U., & Remke, A. (2004). An ISO 19115/19119 profile for OGC catalogue services CSW 2.0. In Paper presented at the 10th EC-GI & GIS workshop, Warsaw, Poland.
Vretanos, P. (2005a). Web feature service implementation specification. Open Geospatial Consortium Specification, 04-094.
Vretanos, P. A. (2005b). OpenGIS Filter Encoding implementation specification. OGC.
Wang, S. (2010). A CyberGIS framework for the synthesis of cyberinfrastructure, GIS, and spatial analysis. Annals of the Association of American Geographers, 100(3), 535–557.
Wang, S., Armstrong, M. P., Ni, J., & Liu, Y. (2005). GISolve: A grid-based problem solving environment for computationally intensive geographic information analysis. In Proceedings of challenges of large applications in distributed environments (CLADE 2005).
Wei, Y., Santhana-Vannan, S.-K., & Cook, R. B. (2009). Discover, visualize, and deliver geospatial data through OGC standards-based WebGIS system. In Proceedings of the 17th international conference on geoinformatics.
Wei, Y., Yue, P., Dadi, U., Min, M., Hu, C., & Di, L. (2006). Effective acquisition of geospatial data products in a collaborative grid environment. In Proceedings of the IEEE international conference on services computing (SCC'06).
Weiser, A., & Zipf, A. (2007). Web service orchestration of OGC web services for disaster management. In Geomatics solutions for disaster management (pp. 239–254). Berlin Heidelberg: Springer.
Yang, C., Li, W., Xie, J., & Zhou, B. (2008). Distributed geospatial information processing: Sharing distributed geospatial resources to support Digital Earth. International Journal of Digital Earth, 1(3), 259–278.
Yang, C., Wong, D. W., Yang, R., Kafatos, M., & Li, Q. (2005). Performance-improving techniques in web-based GIS. International Journal of Geographical Information Science, 19(3), 319–342.
Yang, C., Wu, H., Huang, Q., Li, Z., Li, J., Li, W., et al. (2011). WebGIS performance issues and solutions. Advances in Web-based GIS, Mapping Services and Applications, 121–138.
Zhang, C., Zhao, T., & Li, W. (2013). Towards improving query performance of Web Feature Services (WFS) for disaster response. ISPRS International Journal of Geo-Information, 2(1), 67–81.
Zhang, T., & Tsou, M.-H. (2009). Developing a grid-enabled spatial Web portal for Internet GIServices and geospatial cyberinfrastructure. International Journal of Geographical Information Science, 23(5), 605–630.