Abstract— Supporting transmission of stream media over wireless mobile networks is often difficult
because packets may be lost due to the rerouting of packets during handoff, and also because bursts of
packet loss may occur during handoff due to the disparity in the amount of available bandwidth among
different cells. In this paper, we propose an end-to-end multi-path handoff scheme that provides smooth
handoff for stream media in wireless networks with different bandwidth from cell to cell. In the proposed
scheme, multiple paths are established during handoff to reach a mobile destination node. The stream
media sources are equipped with an adaptive multi-layer encoder, and important layers in the encoded video stream are duplicated and transmitted over multiple paths during handoff. The effectiveness of the
proposed multi-path handoff scheme is verified and compared with existing schemes through extensive
simulations. The simulation results show that the proposed scheme provides higher throughput and better
quality for stream media.
Index Terms— Wireless networks, handoff, stream media, multi-layer video encoder, slow start,
congestion.
Manuscript received on Feb 14, 2003, revised on Sept 30, 2003. This work was supported by the National Science Foundation through grants ANI-0083074 and ANI-9903427, by DARPA through Grant MDA972-99-1-0007, by AFOSR through Grant MURI F49620-00-1-0330, and by grants from the University of California MICRO Program, California State University of Northridge Research and Sponsored Projects, Hitachi, Hitachi America, Novell, Nippon Telegraph and Telephone Corporation (NTT), NTT Docomo, Fujitsu, Korea Research Foundation through BK21, and NS-Solutions. This work has been presented in part at WMASH’03, San Diego, CA, Sept. 2003.
*Yi Pan and Tatsuya Suda are with the School of Information and Computer Science, University of California, Irvine, CA 92697 USA (e-mail: [email protected]; [email protected]. Phone: +1-949-824-4105. Fax: +1-949-824-2886)
Meejeong Lee is with the Dept. of Computer Science and Engineering, Ewha Woman’s University, 11-1 Daehyun-Dong, Seoul, Korea (120-750) (e-mail: [email protected]. Phone: +822-3277-2388).
Jaime B. Kim is with the Computer Science Department, California State University, Northridge, CA 91330 USA (e-mail: [email protected], phone: +1 818-677-3892).
*Yi Pan is the correspondence author.
An End-to-End Multi-Path Smooth Handoff Scheme for Stream Media
I. INTRODUCTION
Applications using stream media are becoming popular in wireless mobile networks. Providing smooth
handoffs for stream media without any transmission disruption and drastic quality degradation is a
challenging problem due to the following two factors: 1) rerouting of packets during handoff may result in
burst loss of packets (referred to as rerouting packet loss in this paper), and 2) disparity of available
bandwidths in cells may also result in bursts of packet loss during handoff (referred to as congestion
packet loss in this paper). In wireless mobile networks, when the data path is disconnected and the change
of the point-of-attachment to the network is required as a result of user movement, packets on the path
through the old point-of-attachment may be lost, degrading the quality of service for the stream media. In
addition, the amount of bandwidth available at the new point-of-attachment may be smaller than the
previous one, and the mobile node may experience congestion. This bandwidth disparity problem is
exacerbated as different wireless access techniques with different link speeds, such as WLAN,
Bluetooth, and W-CDMA, are being deployed to provide ubiquitous connectivity. This mismatch in
available bandwidths may result in bursts of packet loss, if the stream media transmission continues
without appropriate rate adjustment in a new cell.
A number of mobility management techniques have been proposed in the literature [1, 2, 3, 4, 5, 6, 7,
8, 9, 10, 11, 12, 13, 14, 15, 16]. Many of them address the smooth handoff of stream media in a QoS
enabled network [1, 2, 3, 4, 5], such as wireless ATM, but very few techniques have been proposed to
provide smooth handoff for stream media in a highly heterogeneous best effort network, such as the
Internet.
In this paper, we propose a novel scheme for smooth handoff of stream media in heterogeneous best-
effort wireless mobile networks. In order to avoid the drastic quality degradation, the proposed scheme
reduces the rerouting and congestion packet loss during handoff (1) by establishing multiple paths to the
mobile node and transmitting duplicate packets over the multiple paths to reduce the negative impact of
packet loss and (2) by probing the available bandwidth in the new cell and allowing stream media
applications to gradually adapt their transmission rates to the available bandwidth in a new cell.
The proposed scheme assumes that the stream media sources employ an adaptive multi-layer video
encoder (such as MPEG-1, 2 and 4 [17, 18]) that produces multiple encoded video streams (layers) of
differing importance from a video input. The adaptive multi-layer encoder adjusts the number of video
layers and the corresponding encoding rate of each layer based on the input from the lower layer
protocols. The proposed scheme also exploits the fact that cells in wireless networks overlap with each
other. In the proposed scheme, as a mobile node (receiving stream media) moves from an old cell to a
new cell and enters a cell overlapping area, paths from the mobile node to the stream media source
through the new cell are established, while maintaining the already existing paths through the old cell.
With the proposed scheme, more important (video encoded) layers of the stream media are transmitted redundantly over a larger number of paths, increasing the robustness to packet loss during handoff, while the higher available bandwidths on some of the paths are exploited to transmit enhancement layers for higher quality. For instance, the base layer of the stream media, which has the highest importance, is replicated and transmitted over all new and existing paths during handoff. In the proposed scheme, separate end-to-end
rate control is applied onto each separate path in order to adjust to a different amount of available
bandwidth on each path. The number of layers and encoding rate of each layer at the stream media
source are adjusted accordingly.
The overhead of the proposed scheme includes bandwidth required for the redundant transmission of
important layers of the stream media during handoff and computation of encoding rates at the stream
media source. The performance of the proposed scheme is investigated through extensive simulations.
Obtained performance measures include QoS related metrics for the stream media (such as throughput and
packet loss ratio), overheads of the proposed scheme such as reduced transmission efficiency due to
redundant transmission, and the scalability with respect to the number of paths used during handoff.
The paper is organized as follows. Section II surveys related work. Section III describes the proposed
multi-path smooth handoff scheme in detail. In Section IV, simulation models are described, and in
Section V, numerical results are presented to investigate the performance of the proposed scheme.
Concluding remarks and possible future work are described in Section VI.
II. RELATED WORK
In the literature, different mobility management techniques have been proposed for QoS enabled
networks (e.g., wireless ATM and IntServ supported networks) and best effort networks (e.g., the
Internet).
Many existing mobility management techniques proposed for QoS enabled networks use resource
reservation protocols to reserve some amount of bandwidth on the new data path for mobile nodes. These
techniques [1, 2, 3, 4, 5] all require extensive QoS support in the network to keep soft states of resource
reservation for each active flow at network devices. Given that end-to-end QoS support is hardly
available in a highly heterogeneous, large-scale network such as the Internet, these techniques may not be
applicable to such networks.
There are few mobility techniques proposed for stream media in the Internet. Existing mobility
management techniques proposed for the Internet may be classified into two categories: network layer
mobility management techniques [6, 7, 8, 9, 10, 11, 12] and transport layer mobility management
techniques [13, 14, 15, 16]. None of these existing technologies, however, considers the requirements of
the stream media transmission specifically. For instance, most of the existing network layer mobility
management techniques only remove rerouting packet loss during handoff, while the bandwidth disparity
among the cells is not considered. Thus, with these schemes, congestion loss may still occur in the new
cell during handoff. Most of the existing transport layer mobility management techniques are proposed for
window-based congestion control and use retransmission to recover packet loss. The transmission rate of
the stream media is adjusted to the available bandwidth in a new cell either through a slow start or a
multiplicative decrease procedure. This rate adjustment process is usually slow and causes rate
disruption or significant packet loss. Because of this downside, together with the high fluctuation of the
transmission rate caused by oscillation of the window size in a window-based congestion control
algorithm, existing transport layer mobility management techniques based on TCP extensions may not be
directly applied to stream media.
Recently, some researchers have investigated stream media transmission in best-effort CDNs (Content
Distribution Networks) and proposed alternative schemes where mobility support is provided at the
session layer. A common design of stream CDNs includes an overlay network of proxies deployed at the
edges of a network and a data center to distribute the contents and redirect the user requests [20, 21, 22].
Requested streams are cached at local proxies. The transmission rate of the stream media is adjusted at the
proxies through filtering or transcoding the incoming streams based on the locally available resource. In
these schemes, mobility is handled at a session level. A mobile node moving into a new cell is redirected
to a new proxy, and a new session is established between the mobile node and the server through the new
proxy. Then, a mobile node starts receiving packets from the new proxy [22]. The deployment of an
overlay network of proxies can be costly and may only be justified for popular content. Session set-up
and rate adjustment (e.g., transcoding) delays during handoff may also cause interruptions in the
transmission of stream media.
The scheme proposed in this paper maintains multiple paths between the sender of the stream media and
the receiver (i.e., a mobile node) during handoff. Recently, some new transport layer protocols that
maintain multiple connections between the sender and the receiver have been proposed in the literature.
They are SCTP (Stream Control Transmission Protocol) [23], multi-path TCP [24], and p-TCP [25]. Unlike
the proposed scheme that prevents packet loss through redundant transmission and applies rate-based
congestion control, all existing schemes are mainly proposed for reliable data transmission and have one
or both of the following drawbacks for the stream media: 1) packet loss is recovered through
retransmission, which introduces unpredictable delays in packet delivery and may violate the real-time requirements of the stream media transmission, and 2) window-based congestion control is used, which causes
high fluctuation in transmission rate and may not achieve smooth rate adjustment for stream media.
III. SCHEME DESCRIPTION
In this section, the proposed multi-path handoff scheme for stream media is described in detail. In
Section III.A, the overall system architecture of the proposed scheme is presented, and detailed descriptions of the architectural components are given in Sections III.B through III.E. The overhead of the proposed scheme is discussed in Section III.F.
A. System Architecture
The proposed scheme acquires multiple paths from the sender of the stream media to the receiver (i.e., a
mobile node) during handoff, estimates the available bandwidth on each path through an end-to-end rate
control algorithm, calculates the number of video layers and the target encoding rate of each layer, and
assigns different layers to different paths for transmission.
In the proposed scheme, unlike in many existing handoff schemes, new paths are established as soon as a mobile node enters an overlapping area (i.e., before the mobile node reaches the midpoint of the overlapping area), making it possible to start using the new paths earlier.
The proposed scheme inserts four components in the transport protocol layer at the sender and the
receiver: a path management module at both the sender and the receiver, a multi-path distributor module
at the sender, a pair of rate control modules at both ends for each path (at the sender and at the receiver),
and a multi-path collector module at the receiver. Figure 1 illustrates the overall system architecture of the
proposed scheme.
The path management module at each end of the transport protocol manages the currently available
paths, using COA binding update messages in the mobile IP protocol [6]. (COA binding update messages
indicate the existence of multiple paths during handoff.) The path management modules report the
existence of multiple paths to the multi-path distributor (at the sender) and the multi-path collector (at the
receiver). The path management module also assigns a rate control module to a new path and removes a
rate control module from an old path during handoff. The rate control modules for each path perform end-
to-end feedback-based rate control. This end-to-end feedback-based rate control includes an initial
probing phase using a slow start algorithm to estimate the available bandwidth on a new path. Based on
the estimated available bandwidth on a new path provided by the rate control modules and the notification
of existence of multiple paths from the path management module at the sender, the multi-path distributor
module calculates and reports the number of video layers and the target encoding rate for each layer to the
multi-layer encoder at the sender. The multi-path distributor module also assigns different video layers to
appropriate paths based on the differing importance of video layers. Note that the multi-path distributor
may assign the same video layer to multiple paths, depending on the importance of the video layer. The
multi-path collector module at the receiver accepts incoming packets from multiple paths, filters and
reorders them before passing them to the decoder.
B. Path Management
During handoff, to allow the path management module at the sender to maintain multiple paths
simultaneously, the mobile IP simultaneous binding [6] and route optimization [7] options are used. The
simultaneous binding option allows the receiver (i.e., a mobile node) to simultaneously register multiple
COAs, and the route optimization option allows the sender to be informed of the current COA
registrations.
COA binding update messages report the existence or loss of a COA. The COAs that a mobile node (i.e., the receiver) is registered with are used to identify the paths from the sender to the receiver. When a new COA is
reported, the path management module assigns a rate control module to the new path and notifies the local
multi-path distributor/collector of the new path; when a loss of a COA is reported, the path management
module removes a rate control module and notifies the local multi-path distributor/collector of a loss of a
path.
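The path management behavior described above can be sketched roughly as follows. This is a minimal illustration: the class name, the callback names, and the factory function are ours, and the Mobile IP binding machinery is abstracted into a single event handler.

```python
class PathManager:
    """Tracks paths identified by care-of addresses (COAs) and attaches a
    rate control module to each new path (illustrative sketch; the paper
    builds this on Mobile IP simultaneous binding and route optimization)."""

    def __init__(self, make_rate_controller, notify):
        self.paths = {}                       # COA -> rate control module
        self.make_rc = make_rate_controller   # factory creating an RC for a path
        self.notify = notify                  # informs multi-path distributor/collector

    def on_binding_update(self, coa, active):
        """Handle a COA binding update reporting a path's existence or loss."""
        if active and coa not in self.paths:
            self.paths[coa] = self.make_rc(coa)   # new path: assign a rate control module
            self.notify("path_added", coa)
        elif not active and coa in self.paths:
            del self.paths[coa]                   # lost path: remove its rate control module
            self.notify("path_removed", coa)
```

The distributor at the sender and the collector at the receiver would each hold one such manager, fed by the local Mobile IP stack's binding updates.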
C. Rate Control Module
A rate control module is installed at the sender and the receiver immediately after the path management module reports a new path. We choose the TCP-Friendly Rate Control (TFRC) algorithm [26] for the rate control modules to estimate the available bandwidth on a path. Rate control on a
path consists of two phases. In the first phase (called the probing phase), a slow start algorithm is used to
detect the available bandwidth on a new path. In the second phase (called the congestion avoidance
phase), once the initial estimation of the available bandwidth is done in the first phase, the available
bandwidth on the path is estimated through a predefined equation to avoid congestion.
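For reference, the "predefined equation" in TFRC's congestion avoidance phase is the TCP throughput equation of RFC 3448, which the TFRC work [26] is based on. A simplified sketch (with s the packet size, rtt the round trip time, p the loss event rate, and the retransmission timeout defaulted to 4*rtt per the RFC's recommendation):

```python
from math import sqrt

def tfrc_rate(s, rtt, p, rto=None):
    """TCP-friendly allowed sending rate in bytes/s from the TFRC
    throughput equation (RFC 3448): s = packet size in bytes,
    rtt = round trip time in seconds, p = loss event rate (0 < p <= 1)."""
    if p <= 0:
        raise ValueError("equation applies only when the loss event rate p > 0")
    t_rto = rto if rto is not None else 4 * rtt
    denom = (rtt * sqrt(2 * p / 3)
             + t_rto * (3 * sqrt(3 * p / 8)) * p * (1 + 32 * p * p))
    return s / denom
```

As expected of an equation-based controller, the allowed rate decreases smoothly as the reported loss event rate grows, which is what makes it attractive for stream media compared to window-based control.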
The rate control module at the sender sets the transmission rate on each path at the estimated available
bandwidth of the path, and applies a token bucket algorithm to enforce that the actual transmission rate
conforms to the estimated available bandwidth. Note that in the congestion avoidance phase, base layer
packets are always allowed to be transmitted, while enhancement layer packets are subject to token bucket
control. However, since the estimation of the available bandwidth on the path in the probing phase may
not be accurate, all packets (including base layer packets) are subject to token bucket control to avoid
significant congestion loss on a new path. Packets that are in excess of the available bandwidth are simply
discarded. The rate control module at the receiver receives the packets and notifies the multi-path collector
of packet arrival on the corresponding path. The rate control module at the receiver also sends a packet
loss ratio report to the rate control module at the sender periodically with the time interval of the round
trip time (RTT) for the path.
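The token-bucket policing described above might look roughly like this. It is an illustrative sketch: the class shape, the burst-size parameter, and the explicit clock argument are our assumptions, not details from the paper.

```python
import time

class TokenBucket:
    """Token bucket pacing one path to its estimated available bandwidth.
    Per the scheme's description: in the congestion avoidance phase,
    base-layer packets bypass the bucket; in the probing phase, every
    packet (including base-layer packets) is policed, and packets in
    excess of the rate are simply discarded."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0           # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def _refill(self, now):
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def admit(self, size, is_base_layer, probing, now=None):
        """Return True to transmit the packet, False to discard it."""
        now = time.monotonic() if now is None else now
        self._refill(now)
        if is_base_layer and not probing:
            return True                      # base layer always sent in CA phase
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False                         # excess packet: discarded, not queued
```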
D. Multi-path Distributor Module
In order to calculate the number of video layers and target encoding rates of different layers for the
multi-layer video encoder, the multi-path distributor module performs a VLPA (Video Layer-Path
Adaptation) algorithm to decide 1) the number of video layers and the target encoding rate for each layer
and 2) the paths to transmit the packets for each layer. In the following description, ri represents the cumulative video stream rate from layer 0 (the base layer) up to (and including) layer i, and li (= ri - ri-1) represents the encoding rate of layer i.
The VLPA algorithm decides the number of video layers based on the number of paths in the congestion
avoidance phase and the available bandwidths on the paths. Note that the VLPA algorithm only considers
the paths in the congestion avoidance phase, not in the probing phase, because the slow start algorithm in
the probing phase can potentially cause highly dynamic changes in estimation of the available bandwidth
on a path, and thus, in the number of video layers and the encoding rates to use. This may make the system unstable. The number of video layers is set to the number of distinct transmission rates on the paths in the
congestion avoidance phase. (For instance, if the transmission rates on the paths are 1Mbps, 2 Mbps,
2Mbps, and 3Mbps, the number of video layers is set to 3). The cumulative rate up to and including layer i
(ri) is set to the ith transmission rate among the paths in the congestion avoidance phase, sorted in
ascending order. (For instance, if the transmission rates on the paths are 1Mbps, 2 Mbps, 2Mbps, and
3Mbps, the cumulative rates are r0 = 1Mbps, r1 = 2Mbps, and r2 = 3Mbps.) The target encoding rate of each layer is calculated as the difference between the two consecutive cumulative rates, li = ri - ri-1. In an exceptional case, when the path with the highest transmission rate is in the probing phase,
the VLPA algorithm also considers that new path in calculation of video layers. An additional video layer
is added and the cumulative rate of the entire stream is set to the current transmission rate on that new
path, allowing utilization of extra bandwidth on the new path. This video encoding rate calculation is
executed periodically, and the updated number of video layers and the encoding rates are reported to the
multi-layer video encoder. (In the simulations in Section IV, the time interval between two consecutive
video encoding rate calculations is set to be one video frame time (i.e. 40ms).)
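The layer-count and cumulative-rate calculation above can be sketched as follows. The function name and the bits-per-second units are our choices; the logic follows the text's worked example.

```python
def vlpa_rates(path_rates_bps):
    """Given the transmission rates of the paths in the congestion
    avoidance phase, return (cumulative, encoding), where cumulative[i]
    is the cumulative rate ri (one per distinct path rate, ascending)
    and encoding[i] is the per-layer rate li = ri - r(i-1)."""
    cumulative = sorted(set(path_rates_bps))       # distinct rates, ascending
    encoding = [cumulative[0]] + [
        cumulative[i] - cumulative[i - 1] for i in range(1, len(cumulative))
    ]
    return cumulative, encoding

# Worked example from the text: paths at 1, 2, 2, and 3 Mbps yield 3 layers
r, l = vlpa_rates([1_000_000, 2_000_000, 2_000_000, 3_000_000])
# r == [1_000_000, 2_000_000, 3_000_000]
# l == [1_000_000, 1_000_000, 1_000_000]
```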
The VLPA algorithm also determines the paths to transmit each layer through the following policy:
1. Video layer i is transmitted redundantly through all the paths whose transmission rates are greater than or equal to the cumulative rate ri.
2. If a path is in the probing phase (i.e., a new path), its transmission rate is not used to determine a cumulative rate. As a result, the new path may have a transmission rate between two consecutive cumulative rates ri and ri+1. In this case, both layers (layer i and layer i+1) are assigned to this path to ensure that the available bandwidth on this new path will be fully used in the probing phase.
Note that with the above path assignment policy, the base layer (i.e., layer 0) is assigned to all paths, and the more important a video layer is, the more paths it is transmitted over. A more detailed
algorithm can be found in [28].
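The path-assignment policy can be sketched as follows. This is a simplified illustration, not the detailed algorithm of [28]: identifiers are ours, and the exceptional case of a highest-rate probing path is folded into the straddling rule of policy 2.

```python
def assign_layers(ca_rates, probing_rates):
    """Assign video layers to paths per the VLPA policy (illustrative).
    ca_rates: rates of paths in the congestion avoidance phase.
    probing_rates: rates of new paths still in the probing phase.
    Returns a dict mapping a path id to the set of layer indices it carries."""
    cumulative = sorted(set(ca_rates))             # r0 < r1 < ... < r(n-1)
    assignment = {}
    # Policy 1: layer i goes to every CA path whose rate >= ri, so the
    # base layer (layer 0) ends up on all paths.
    for pid, rate in enumerate(ca_rates):
        assignment[("ca", pid)] = {
            i for i, r in enumerate(cumulative) if rate >= r
        }
    # Policy 2: a probing path whose rate falls between ri and r(i+1)
    # also carries layer i+1, so its bandwidth is fully used.
    for pid, rate in enumerate(probing_rates):
        layers = {i for i, r in enumerate(cumulative) if rate >= r}
        if layers and max(layers) + 1 < len(cumulative):
            layers.add(max(layers) + 1)
        assignment[("probe", pid)] = layers or {0}
    return assignment
```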
E. Multi-path Collector Module
The proposed scheme transmits duplicate packets through multiple paths simultaneously, and thus, out-
of-order and/or duplicate arrivals of packets may occur. The multi-path collector module at the receiver is
responsible for buffering and reordering of packets from multiple paths. It also filters out the redundant
packets before delivering packets to the application.
For stream media applications, receiving the video stream on a real time basis is critical. Therefore,
when a packet is missing, the multi-path collector only waits for the packet for a predetermined time
interval. If the missing packet does not arrive within the time interval, the multi-path collector delivers its
buffer content to the application with the unfilled holes in packet sequence.
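A rough sketch of the collector's reorder-and-filter logic follows. It is illustrative only: the paper's wait is a predetermined time interval, which is approximated here by a bounded reorder queue, and all names are ours.

```python
import heapq

class MultiPathCollector:
    """Reorders packets arriving over multiple paths, filters duplicates,
    and gives up on a missing packet once too many later packets are
    queued behind it (a stand-in for the paper's timed wait)."""

    def __init__(self, max_wait_packets=16):
        self.heap = []            # min-heap of (seq, payload) awaiting delivery
        self.seen = set()         # sequence numbers already accepted
        self.next_seq = 0         # next sequence number owed to the application
        self.max_wait = max_wait_packets

    def receive(self, seq, payload):
        """Accept a packet from any path; return payloads now deliverable."""
        if seq in self.seen or seq < self.next_seq:
            return []                         # duplicate arrival from another path
        self.seen.add(seq)
        heapq.heappush(self.heap, (seq, payload))
        out = []
        # Deliver in order; skip a hole once the wait budget is exhausted,
        # handing the buffer content up with the hole unfilled.
        while self.heap and (self.heap[0][0] == self.next_seq
                             or len(self.heap) > self.max_wait):
            seq0, p = heapq.heappop(self.heap)
            self.next_seq = seq0 + 1          # any hole before seq0 is abandoned
            out.append(p)
        return out
```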
F. Overhead of proposed scheme
The overhead of the proposed scheme is twofold: a reduction in transmission efficiency, due to the transmission of duplicated video packets and of control packets associated with the proposed scheme, and additional processing at the sender and the receiver.
Reduction in transmission efficiency impacts the number of users supported in the network, and this
aspect is evaluated in detail in Section V.
Processing of the proposed scheme occurs mostly at the multi-path distributor module running the VLPA algorithm and the rate control modules running the TFRC algorithm. The dominating computation in the VLPA algorithm is assigning paths to each video layer. This requires checking each path against each layer i and takes O(m*n) steps, where m is the number of paths (available during handoff) and n is the number of video layers. Since n is always less than or equal to m in the proposed scheme, the VLPA computation for multi-path cases requires O(m²) more steps than for single-path cases. The dominating computation in the
TFRC algorithm is to identify packet loss at the receiver and to compute the transmission rate at the
sender. Each computation takes a constant number of steps per RTT (round trip time) for a given path [26]. Therefore, the computational cost for m paths is approximately O(m) times the computational cost for a single path with the shortest RTT among all paths. The number of paths during handoff (m) is
usually very limited, and thus, the increase of processing in the proposed scheme is limited.
IV. SIMULATION MODEL
The performance of the proposed scheme is evaluated through extensive simulation using OPNET [27].
The purpose of the simulation is two-fold: first to investigate the performance of the proposed scheme
with various network parameters, and second to compare the proposed scheme with other existing
schemes.
Existing schemes chosen for comparison against the proposed scheme use a single path from the stream
media source to the mobile node (i.e., the receiver). With only one path reaching the mobile node during
handoff, stream media handoff may be performed in an end-to-end manner or using proxies. Hereinafter,
we will refer to them as a single-path scheme and a proxy-based scheme, respectively. In the single path
scheme, upon receiving a new COA binding update message from mobile IP during handoff, the stream
media sender stops sending video packets to the base station of the cell that the mobile node is leaving
(old base station) and starts sending video packets to the base station of the cell the mobile node is
entering (new base station). An adaptive multi-layer encoder and TFRC rate control
are applied at the sender in a single path scheme to provide a fair comparison with the proposed scheme.
In addition, another version of the single-path scheme using the mobile IP fast-forwarding option is simulated to
represent single path end-to-end handoff with network layer mobility management techniques. In this
version, the video packets on the fly toward the old base station will be forwarded to the new base station
during the handoff, in order to avoid rerouting loss. The above two versions of the single-path scheme will be referred to as the single path without forwarding scheme and the single path with forwarding scheme, respectively, in the rest of the paper.
The proxy-based scheme models stream media transmission in a generic CDN. In the proxy-based scheme, mobility is handled at the session level, as mentioned for stream CDNs in Section II. We assume a perfect
scenario in which the stream media is always available in the buffer of the new proxy and the proxy
transcodes the stream media according to available bandwidth. The stream media transmission is resumed
by re-transmission of the last incomplete video frame through the new proxy in a handoff. Session set-up
and transcoding delays in the new proxy are ignored. In the proxy-based scheme, the proxy is typically located very close to the mobile node and has a very short round trip time to it. Under such a
situation, TFRC is not applicable, since the TFRC rate control algorithm does not scale to very short round trip times due to the heavy overhead incurred by per-round-trip-time feedback. Therefore, we assume that a centralized resource controller lets the proxy know the available bandwidth in the new cell immediately.
Our simulation experiments concentrate on analyzing a single handoff instance since a more general
scenario with a sequence of handoffs consists of individual handoffs with different parameter settings. In
most experiments, a handoff occurs between two overlapping cells since it is the most common case.
Scenarios with more than two overlapping cells are used to illustrate the growth of overhead with the
increasing number of paths used in handoff.
Figure 2 shows the network configuration used in our simulations. The wireless domain includes base
stations for two neighboring cells, physical links connecting base stations with the wireless gateway, and a
wireless gateway connecting the wireless domain to the wired backbone network. For the proxy-based
scheme, a proxy is assumed to be located at each base station. Wireless links are 802.11b WLAN 11Mbps
links with a bit error rate between 1×10^-3 and 1×10^-5 (average 2×10^-5). The coverage radius of the base
station for each cell is 300 meters, and the distance between two neighboring base stations is 500 meters.
In each cell, a background node is placed to simulate the background traffic. All wired links in the
simulation have 155Mbps link capacity, a 1×10^-12 bit error rate, and 10 µs of propagation delay as default
settings.
In order to evaluate the overhead of the proposed scheme with respect to the number of paths used
during a handoff, network configurations with multiple cells are used (See Figure 3.) All parameter values
are the same as those assumed in Figure 2, except for the distance between the wireless base stations, which is adjusted to allow a larger number of overlapping cells. In one of the configurations (Figure 3 (a)), base stations are placed on
a grid, and the distance between the two adjacent base stations is assumed to be 500 meters. This allows a
maximum of 4 paths in the overlapping area. In Figure 3 (b), every three neighboring base stations form
a triangle with equal-length edges of 500 meters.
The video source traffic is generated using a source adaptive multi-layer encoder model presented in
[19]. The video packet size is fixed to 1024 bytes.
To simulate cells with different amount of available bandwidth in the wireless network, background
traffic is introduced. Background traffic in each cell is generated by a Poisson process, and it is
transmitted from the base stations to the background nodes.
To investigate the impact of various network parameters on the performance of the proposed scheme,
the following network parameters are varied:
available bandwidth in the cell the mobile node is entering,
round trip time from the source to the destination,
speed of mobile node movement, and
number of overlapping cells.
We summarize the parameters we use in the different sets of simulations in Tables 1, 2, and 3 below. In the tables, old cell refers to the cell that the mobile node is leaving, and new cell refers to the cell the mobile node is entering. As shown in Tables 2 and 3, we run the simulation for two cases when changing the round trip time
and speed of mobile node movement. In Case I, the mobile node moves from a cell with lower available
bandwidth to a cell with higher available bandwidth. In Case II, the mobile node moves from a cell with
higher available bandwidth into a cell with lower available bandwidth, in which congestion happens.
In the simulation, the available bandwidth is varied by changing the mean of the Poisson background
traffic volume in the new cell. The round trip time is varied by tuning the propagation delay of the first
link from the Correspondent Node to the Backbone Router in the simulation network model (Figure 2). The
speed of a mobile node is varied from 4.5mph to 140mph during handoff in the simulation. The trajectory
of a mobile node is simulated using the trajectory model provided in the OPNET simulator [27]. The number of overlapping cells is varied from 2 to 4, depending on the configuration of the wireless network.
We measure the performance during the handoff period to show how the proposed scheme differs from other schemes. The handoff period is defined as the time period during which a mobile node stays in
the cell overlapping area.
The performance of handoff schemes is evaluated with respect to two aspects: the quality of service and
the overhead. To measure the quality of service of the video stream, throughput, packet loss ratio, and
goodput in terms of video frame rate per second are measured.
The overhead of the proposed scheme is measured as reduction in transmission efficiency and network
capacity. The transmission efficiency is defined as the ratio of the number of unique application packets
received to the total number of packets transmitted during the handoff period. The network capacity is
defined as the number of users supported in a fixed-bandwidth multi-cell wireless network. (In the simulation, the available bandwidth in the new cell is calculated as the link speed minus the volume of background traffic. The actual throughput, however, is much lower due to the overhead of the MAC layer and link errors.) A reduction ratio of the network capacity using multi-path handoff is measured in comparison to the network capacity using
the proxy-based scheme.
V. NUMERICAL RESULTS
In this section, simulation results are presented to evaluate the performance of the proposed scheme. In
all the figures presented in this section, SP_NF denotes the single-path handoff scheme with no forwarding;