CHAPTER 9
MULTIMEDIA STREAMING IN MOBILE WIRELESS NETWORKS
SANJEEV VERMA, Nokia Research Center, Tampere, Finland
MUHAMMAD MUKARRAM BIN TARIQ, DoCoMo Communication Laboratories USA, Inc., San Jose, California
TAKESHI YOSHIMURA, Multimedia Laboratories, NTT DoCoMo, Inc., Yokosuka, Kanagawa, Japan
TAO WU, Nokia Research Center, Burlington, Massachusetts
9.1 INTRODUCTION
Multimedia services, such as streaming applications, are growing in popularity with
advances in compression technology, high-bandwidth storage devices, and high-
speed access networks. Streaming services are generally used in applications like
multimedia information and message retrieval, video on demand, and pay TV.
Also, portable devices such as notebook computers, PDAs, and mobile phones have grown in popularity in recent years. Now it is possible to provide very
high-speed access to portable devices with emerging technologies like WLAN and
3G networks. For instance, emerging 3G wireless technologies provide data rates of
144 kbps for vehicular, 384 kbps for pedestrian, and 2 Mbps for indoor environ-
ments [1,2]. Hence, it is now possible to enrich the end user’s experience by com-
bining multimedia services [3,4] with mobile-specific services such as geographic
positioning, user profiling, and mobile payment. One example of such a service
is "mobile cinema ticketing," which uses geographic positioning and user-defined
Content Networking in the Mobile Internet, Edited by Sudhir Dixit and Tao Wu. ISBN 0-471-46618-2. Copyright © 2004 John Wiley & Sons, Inc.
preferences to offer a mobile user a selection of movies from nearby movie theatres.
A user views corresponding movie trailers through a streaming service before select-
ing a movie and purchasing a ticket.
Streaming services are services in which continuous video and audio data are
delivered to an end user. A multimedia streaming service consists of one or more
media streams. A multimedia streaming application may have both audio and
video components (e.g., news reviews, movie trailers), or it may combine audio streaming with a visual presentation comprising still images and/or graphics animations, such as a quarterly earnings Webcast by a corporation. These applications are generally stored at a Web-based server and streamed to clients on request. Streaming
audio/video clips are typically large, so their transmission time (several minutes or more) exceeds the acceptable playback latency. Hence, downloading the entire audio/video content before playback is not an option.
The streaming audio/video clips are played out while parts of the clips are being
received and decoded. This is the biggest advantage of streaming service, since a
user is able to see video soon after downloading begins.
Figure 9.1 illustrates a general architecture for providing streaming services [5].
The multimedia content for streaming services is created from one or more media
sources (video camera, microphone, etc.). It can also be created synthetically
without using any natural media source. Examples of synthetically generated multi-
media contents are computer-generated graphics and digitally generated music.
Typically, the storage space required for raw multimedia content can be huge.
The multimedia content is digitally edited and compressed in order to provide attrac-
tive multimedia retrieval services over low-speed modem connections. The edited
Figure 9.1 A general architecture designed to provide streaming services.
and compressed multimedia clips are then stored in storage devices at the server. On
receiving a request from the client, the streaming server retrieves the compressed
multimedia clip from storage devices and the application layer QoS module
adapts the multimedia stream based on the QoS feedback at the application layer.
After adaptation at the application layer, transport protocols packetize the com-
pressed multimedia clips and send them over the Internet. The packets may suffer
losses and accumulate delay jitter while traversing the Internet. To further
improve the QoS, continuous media distribution services (e.g., caching) may be
deployed in the Internet. The successfully delivered media packets are decom-
pressed and decoded at the client end. Compensation or playout buffers are deployed
at the terminal end to mitigate the impact of delay jitter in the Internet and to achieve
seamless QoS. Clients also use media synchronization mechanisms to achieve syn-
chronization across different media streams, for example, between audio and video
streams.
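The playout-buffer compensation described above can be sketched in a few lines. This is a simplified, hypothetical model: it assumes a fixed 100 ms playout delay and a 20 ms audio frame interval, both illustrative values rather than anything mandated by a standard.

```python
# Hypothetical sketch of a fixed-delay playout buffer, illustrating how a
# client can mask network delay jitter.  The 100 ms delay budget and 20 ms
# frame interval are illustrative assumptions.

def playout_times(arrivals_ms, playout_delay_ms=100):
    """Schedule each packet relative to the first packet's arrival.

    arrivals_ms[i] is the arrival time of packet i, where packet i was
    sent i * 20 ms after the first one (a typical audio frame spacing).
    Returns (schedule, late), where late lists the packets that missed
    their playout deadline and would effectively be treated as lost.
    """
    base = arrivals_ms[0]
    schedule, late = [], []
    for i, arrival in enumerate(arrivals_ms):
        deadline = base + playout_delay_ms + i * 20  # 20 ms per frame
        schedule.append(deadline)
        if arrival > deadline:
            late.append(i)
    return schedule, late

# Packets 0-4: packet 3 suffers about 150 ms of jitter and misses its slot.
schedule, late = playout_times([0, 22, 41, 210, 81])
```

A larger playout delay absorbs more jitter at the cost of a longer startup latency, which is exactly the tradeoff a streaming client must make.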
There are several challenges in providing streaming services in wireless environ-
ments due to some issues that are specific to these environments (see Fig. 9.2). For
example, wireless terminals typically have power constraints due to battery power.
Also, they have limited buffering and processing power available due to size and
power constraints. In addition, wireless environments are very harsh. The character-
istics of a wireless channel have a very unpredictable time-varying behavior due to
several factors such as interference, multipath fading, and atmospheric conditions.
This results in more delay jitter, longer delays, and higher error rates than in
wired networks. Moreover, the movement of a mobile user
from one cell to another cell introduces additional uncertainty. The movement trig-
gers a handoff mechanism to minimize interruption to an ongoing session. The wire-
less channel characteristics may be entirely different in a new cell after handoff. The
access point (typically a basestation) of the mobile host to the wired network also
changes after the handoff. This results in the establishment of an entirely new route
in the wired network. The new route in the fixed network may have very different
path characteristics. This problem becomes even more severe as wireless networks
Composite Capabilities/Preferences Profile (CC/PP)
RTSP does not provide a
very good capability exchange mechanism. In most cases the server decides on the
type of media and its other properties without first consulting the client about its
capabilities. The client may have several capabilities or limitations, which, if com-
municated to the server, would allow the server to customize the presentation and
media based on client capabilities.
The client device may have limited bandwidth, or a constrained display, software
constraints such as support for some SMIL features and not other features, or some
user preferences that may impact the presentation of media at the user agent. CC/PP
can be used to express all these scenarios and more.
CC/PP Overview A CC/PP description is a statement of capabilities and profiles
of a device or a user agent. CC/PP is based on resource description framework
(RDF1) and can be expressed using an XML document or some other structured rep-
resentation format. A CC/PP description is structured such that each profile has a
number of components and each component has one or more related attribute–
value pairs, which are sometimes also referred to as properties. Figure 9.14 shows
CC/PP structure for a hypothetical profile. Two components, HardwarePlatform
and Streaming, and some of their respective attributes are shown. The HardwarePlatform component groups together the BitsPerPixel, ColorCapable, and PixelAspectRatio properties, which are presumably related to the hardware of the device.
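The two-level structure just described (a profile groups components, and each component groups attribute-value pairs) can be sketched with plain data structures. The component and attribute names below follow the hypothetical profile of Figure 9.14; this is an illustrative model in Python, not an RDF serialization.

```python
# Minimal sketch of the CC/PP structure: profile -> components ->
# attribute-value pairs (properties).  Names follow the hypothetical
# profile of Figure 9.14.

profile = {
    "HardwarePlatform": {
        "BitsPerPixel": 16,
        "ColorCapable": "Yes",
        "PixelAspectRatio": "1x2",
    },
    "Streaming": {
        "AudioChannels": "Mono",
        "MaxPolyphony": 8,
        "PssVersion": "3GPP-R5",
    },
}

def lookup(profile, component, attribute):
    """Resolve one property of a profile, or None if it is absent."""
    return profile.get(component, {}).get(attribute)
```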
As with all the languages and description formats, we must have a set of mutually
understood vocabulary and rules for their interpretation. CC/PP is no exception.
With CC/PP any operational environment may define its own vocabulary and
schema that specify the allowable attributes and values, along with their syntax
and semantics. This vocabulary and schema may be understood only by the relevant
applications. For instance, W3C [32] defines a core vocabulary for print and display,
and WAP forum’s user-agent profile (UAProf) specification WAP [33] defines a
vocabulary that can be used to express different capabilities and preferences
related to the hardware, software, and networking available at the device. A discus-
sion on CC/PP attribute vocabularies can be found in Ref. 34.
CC/PP allows specification of default attributes and values in the schema corre-
sponding to each component. If a user agent’s capabilities and preferences related to
a particular component match the default, it can just specify so without actually
giving details of all the attributes and their values. If the values of some attributes
differ from the defaults, a device can create a profile containing only the
differing attribute-value pairs while referring to the defaults for the other attributes. This
mechanism shortens the profile descriptions and saves precious wireless bandwidth.
Other methods of reducing size of profile description include using binary encoding
such as WAP binary XML.
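The default-and-difference mechanism can be illustrated with a small sketch: the device sends only the attribute-value pairs that deviate from the component's default schema. The default and device values here are invented for the example.

```python
# Sketch of CC/PP's default-and-difference mechanism: a device publishes
# only the attributes whose values deviate from the default schema of a
# component.  The values below are illustrative assumptions.

def profile_diff(defaults, device):
    """Return only the attributes whose values differ from the defaults."""
    return {k: v for k, v in device.items() if defaults.get(k) != v}

defaults = {"BitsPerPixel": 8, "ColorCapable": "No", "PixelAspectRatio": "1x1"}
device = {"BitsPerPixel": 16, "ColorCapable": "No", "PixelAspectRatio": "1x1"}

diff = profile_diff(defaults, device)  # only BitsPerPixel differs
```

Transmitting `diff` instead of the full component is what shortens the profile description and saves wireless bandwidth.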
Figure 9.14 An example CC/PP profile: a Profile with HardwarePlatform and Streaming components (and possibly more), where HardwarePlatform carries BitsPerPixel = 16, ColorCapable = yes, and PixelAspectRatio = 1x2, and Streaming carries AudioChannels = Mono, MaxPolyphony = 8, and PssVersion = 3GPP-R5.
1If you are not familiar with RDF, an excellent primer can be found in Ref. 68.
9.5.1.5 UAProf Specification
UAProf [33] is worth mentioning here because the capability exchange framework
and vocabulary defined in this specification is used, with modifications in some
cases, in many mobile content delivery systems, including 3GPP-PSS. UAProf spe-
cifies (1) end-to-end capability exchange architecture; (2) a vocabulary and schema
comprising six components, namely, HardwarePlatform, SoftwarePlatform, Brow-
serUA, NetworkCharacteristics, WapCharacteristics, and PushCharacteristics; (3)
encoding methods for the profiles; and (4) methods for transport of profiles.
UAProf also outlines usage scenarios for user-agent profiles and behavior of differ-
ent entities involved in the capability exchange process. A brief description of the
six components described in Ref. 33 follows in Table 9.2.
CC/PP Exchange HTTP is typically used as the transport protocol for CC/PP
description from client to server. However, potentially tens of components and hun-
dreds of properties may be required to fully express the capabilities and preferences
profile of a user device. A profile description can therefore be very large and trans-
port of such description between the user device and the server can entail significant
overhead.
TABLE 9.2 UAProf Component Descriptions

HardwarePlatform: A set of attributes that describe the hardware characteristics of the user-agent device, such as its type, model, and input/output capabilities.

SoftwarePlatform: A set of attributes related to the software environment on the device, such as the operating system, the available audio/video encoding/decoding components, and user language preferences.

BrowserUA: Properties related to the HTML browser at the user agent.

NetworkCharacteristics: Attributes describing the characteristics of the network to which the user device is connected.

WapCharacteristics: Attributes concerning Wireless Application Protocol (WAP) capabilities.

PushCharacteristics: Attributes specific to the push capabilities of the device. The push model differs from the traditional request/response model used for most content: content can be "pushed" to the client without an explicit request from the client (see Ref. 69 for details).
We have already seen that CC/PP allows a profile to refer to default attribute values, which may reduce the size of the description, but what about the properties that deviate from the defaults? The CC/PP exchange protocol [35] has been designed with precisely these constraints in mind. It allows a user agent to specify only the attributes that differ from the defaults or from the last capability exchange, which reduces the size of descriptions significantly. Because of the dependency between different descriptions sent by a client, the network must maintain state information about previous CC/PP exchanges. For this purpose a new logical entity called a CC/PP repository is introduced; it stores the default and predefined profiles.
The CC/PP exchange protocol [35] extends HTTP by defining three new HTTP headers: two request headers, profile and profile-diff, and one response header, profile-warning. The profile header contains a list of references to (predefined) profiles or to profile descriptions carried in the profile-diff header of the same message. The profile-diff header contains the actual profile description. The profile-warning header conveys warning information to the requestor, such as when the server fails to fully resolve a profile description. Ref. 33 defines similar headers for use with wireless-profiled HTTP, named x-wap-profile, x-wap-profile-diff, and x-wap-profile-warning, with meanings similar to those of the corresponding CC/PP exchange protocol headers.
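As a rough illustration, the following sketch assembles the request headers a WAP-style client might send. The repository URL and the profile-diff body are made-up examples, and the real header syntax (reference numbering and quoting rules) is defined by the CC/PP exchange and UAProf specifications and is simplified here.

```python
# Hedged sketch of the UAProf-style capability headers on an HTTP request.
# The repository URL and diff body are invented examples; real header
# syntax is simplified.

def ccpp_request_headers(profile_refs, profile_diff=None):
    """Assemble request headers carrying a capability profile."""
    headers = {"x-wap-profile": ", ".join(f'"{r}"' for r in profile_refs)}
    if profile_diff is not None:
        # The diff carries the attributes that deviate from the
        # referenced (default or predefined) profiles.
        headers["x-wap-profile-diff"] = profile_diff
    return headers

headers = ccpp_request_headers(
    ["http://repository.example.com/profiles/phone-x"],
    profile_diff='1;<?xml version="1.0"?>...',
)
```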
A simple example of the content delivery process based on CC/PP is shown in
Figure 9.15. The client includes the CC/PP description in the request for the
content. The server resolves the profile and selects or creates appropriate content
and sends it back to the client. In reality this same model may include intermediaries
such as proxies and gateways, which may manipulate the user request and its capa-
bility profile before forwarding the request to the server.
Figure 9.15 Capability exchange with CC/PP: (1) the client sends an HTTP or RTSP request for content with references to its profile; (2) the server retrieves the referenced pieces of the profile from a profile repository; (3) appropriate content is selected or created; (4) the response delivers content matched to the user's capability and preference profile.
Needless to say, CC/PP is a generic mechanism for expressing capabilities and
profiles and can be used in a variety of situations besides the classical client-server scenario depicted in Figure 9.15. It should also be noted that, although currently HTTP is mostly used to carry CC/PP descriptions, RTSP may become more widely used in the future.
9.5.2 The Streaming Media Transport Protocols
For the application to render the media while they are still being transmitted over the
data network, some care must be taken in media transport. The media transport
mechanisms must provide means through which the media are transported in a
sequential manner, and with all the relevant information about how and when
they must be rendered (e.g., the media format types and the timestamps). Currently
the hypertext transport protocol (HTTP) [36] TCP [37], UDP [38], and real-time
transport protocol (RTP) [39] [coupled with the real-time transport control protocol
(RTCP)] are used for multimedia streaming over the Internet. Among these proto-
cols, only RTP can be regarded as a true real-time transport protocol, but presence
of firewalls that do not understand the streaming protocols and block UDP-based
traffic can sometimes make use of HTTP and TCP unavoidable.
In many scenarios a multimedia session consists of many different streams, each
with its own unique requirements with respect to media transport, thus necessitating the use of more than one media transport protocol. One such scenario is the 3GPP-PSS architecture, which we will describe later in this chapter.
9.5.2.1 The Real-Time Transport Protocol
This protocol has emerged as the dominant streaming media transport protocol. The
basic protocol is defined in IETF RFC 1889 [39]. The RFC defines two protocols that
are meant to work in tandem, namely, the RTP for media transport and the accom-
panying protocol called real-time transport control protocol (RTCP) for transport
feedback to the senders from the receivers. While RFC 1889 provides the base spe-
cification, several additional specifications have been developed for packetization
and use with individual media types such as H.263 [40] and GSM-AMR [41]. In
the following text we will briefly overview functionality provided by RTP and
RTCP and their use in streaming media environment.
Figure 9.16 shows the RTP packet format. RTP provides payload type identifi-
cation, fragmentation (M-bit), sequencing, and timing information in each individ-
ual packet. The payload type field allows the application to determine the correct
codec type to use with the media. Fragmentation information allows the appli-
cations to reassemble protocol data units correctly. Timing and sequence infor-
mation allows the applications to recognize any out of sequence packets and
compensate for delay-jitter variations incurred on the network. All of these com-
bined allow an application to render the multimedia stream correctly and
smoothly. RTP also provides synchronization source (SSRC) and contributing
source (CSRC) identifiers to identify the packets belonging to same stream inde-
pendent of the transport layer address. This is especially helpful in multiparty
streaming scenarios but is rarely used in contemporary streaming multimedia
delivery. RTP is also capable of transporting encrypted media; however, the key
generation and distribution is out of scope of RTP.
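The fixed-header fields discussed above can be made concrete with a short parser sketch for the 12-byte RTP header defined in RFC 1889. The sample packet values (payload type 34, an arbitrary SSRC) are invented for the example, and CSRC entries and header extensions are ignored for brevity.

```python
import struct

# Sketch parser for the 12-byte fixed RTP header (RFC 1889/3550),
# extracting the fields discussed above: version, marker (M) bit,
# payload type, sequence number, timestamp, and SSRC.

def parse_rtp_header(packet):
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),   # M bit, used for framing/fragmentation
        "payload_type": b1 & 0x7F,   # identifies the codec in use
        "sequence": seq,             # detects loss and reordering
        "timestamp": ts,             # drives playout timing
        "ssrc": ssrc,                # identifies the stream's source
    }

# A made-up packet: version 2, marker set, payload type 34, sequence 1.
pkt = struct.pack("!BBHII", 0x80, 0x80 | 34, 1, 3000, 0xDEADBEEF)
fields = parse_rtp_header(pkt)
```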
RTCP specifies periodic transmission of control packets to all the participants in a
session. It serves four main functions:
1. Feedback on quality of reception of data through RTCP sender and receiver
reports.
2. Carrying a persistent transport-level identifier for an RTP source, called the
canonical name (CNAME). This is very helpful in multimedia scenarios
where an RTP source may contribute more than one stream, such as when
transmitting the audio and video streams of a conversation; the common CNAME
for the individual SSRCs allows the receiver to recognize these streams as
associated, indicating the need for synchronization (e.g., lip synchronization).
3. Rate control for RTCP messages. The number of RTCP messages generated
can quickly get out of control in a conference with a large number of participants;
this functionality allows the participants to control the rate of RTCP
reports.
4. Session control information for loosely controlled sessions, where participants
may join and leave without strict membership control. Streaming
multimedia sessions, however, are often tightly controlled, with complete session
control established via separate session control protocols such as RTSP and
SIP; RTCP then allows only loose control within the parameters
established by the session control protocol.
Figure 9.17 shows the format of RTCP senders report. Receiver reports are similar,
except that the header does not contain the NTP timestamp and there is no sender
information block. The payload type for receiver reports is 201.
Figure 9.16 RTP packet format.
In addition to sender and receiver reports, RTCP also provides for source
description (SDES) packets (see Fig. 9.18). These packets carry information
about the synchronization and contributing sources, such as name, email address,
phone number, and geographic location.
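A small sketch shows how a receiver can tell these RTCP packet types apart from the common header: sender reports use payload type 200, receiver reports 201, and SDES packets 202, with the length field counting 32-bit words minus one.

```python
import struct

# Sketch of RTCP common-header inspection, distinguishing the packet
# types discussed above by payload type: 200 = SR, 201 = RR, 202 = SDES.

RTCP_TYPES = {200: "SR", 201: "RR", 202: "SDES"}

def rtcp_packet_type(packet):
    b0, pt, length = struct.unpack("!BBH", packet[:4])
    assert b0 >> 6 == 2, "not RTCP version 2"
    # The length field is the packet size in 32-bit words minus one.
    return RTCP_TYPES.get(pt, "unknown"), (length + 1) * 4

hdr = struct.pack("!BBH", 0x80, 202, 5)  # a made-up SDES packet of 6 words
kind, size_bytes = rtcp_packet_type(hdr)
```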
Although RTP is transport-independent as long as the transport protocol provides
multiplexing and correct delivery, because of the stringent delay requirements of
most real-time traffic and high acceptance of IP, UDP is primarily used as transport
Figure 9.17 RTCP sender report packet format.
Figure 9.18 RTCP source description (SDES) packet format: a header carrying the version (V=2), padding bit, source count (SC), payload type (SDES = 202), and length, followed by one or more SSRC/CSRC entries, each with its own SDES items.
for RTP. Although RFC 1889 states that RTP relies on the checksum and multiplexing capabilities of UDP, most media codecs are either insensitive to bit errors or protected by error correction codes, so it is not wise to discard an entire packet when the checksum fails. In such cases it may be better to disable the UDP checksum or to use a protocol such as "UDP-lite" [42,43].
RTP and RTCP are usually used in tandem and multiplexed onto the same
network layer address; for instance, if UDP/IP is used, they will typically share
the IP address. By convention, the RTP stream uses an even-numbered port and the corresponding RTCP channel uses the immediately following odd-numbered port.
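The port-pairing convention can be captured in a trivial helper; the port 5004 in the usage line is an arbitrary example.

```python
# The even/odd pairing convention as a helper: RTP takes an even-numbered
# UDP port and RTCP the next (odd) port on the same address.

def rtp_rtcp_ports(rtp_port):
    if rtp_port % 2 != 0:
        raise ValueError("RTP conventionally uses an even-numbered port")
    return rtp_port, rtp_port + 1

ports = rtp_rtcp_ports(5004)  # an arbitrary example port
```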
As stated earlier, individual profiles for specific media types have been defined.
These profiles specify the payload type, any modifications to the semantics of differ-
ent fields in the header and payload, and any new header types if necessary.
Examples of such media-specific profiles include Ref. 44 for H.263 and Ref. 41
for AMR. These profiles sometimes provide functionality for rate adaptation and
other in-band signaling; for example, Sjoberg et al. [41] allow the receiver to
specify one of several AMR codec rates or modes of operation. Applications
using these media types must conform to the corresponding profiles to ensure
compatibility.
9.5.2.2 Other Media Transport Protocols
HTTP and RTSP tunneling, or plain UDP or TCP, are sometimes used for media transport. HTTP and RTSP tunneling is useful in cases where a firewall blocks RTP/UDP
traffic. With HTTP and RTSP tunneling, the streaming media are sent embedded or
interleaved in the body of the HTTP or RTSP messages; this approach, however, can
be highly inefficient in terms of the amount of bandwidth used. But as streaming mul-
timedia gains wider deployment and acceptability, there are more firewalls that
understand the streaming media protocols and can therefore open the desired ports
to allow streaming media. So we will likely see less use of tunneling in the future.
9.6 3GPP PACKET-SWITCHED STREAMING SERVICE
As discussed in previous sections, a basic streaming service consists of streaming
control protocols, transport protocols, media codecs, and scene description proto-
cols. 3GPP has formulated a set of 3G PSS standards to provide mobile packet-
switched streaming service (PSS). The 3GPP standard specifies the protocols, codecs, and architecture used to provide mobile streaming service. The 3GPP codecs and media types were discussed in Section 9.3 of this chapter. Figure 9.19 depicts the 3GPP protocols and the applications that use them in a PSS client. The protocols and their applications are
. RTSP and SDP for session setup and description
. SMIL for session layout description
. HTTP for capability exchange and transporting static media such as session
layout description (SMIL files), text, graphics, and so on
. RTP for transporting real-time media such as audio, video, and speech
Providing end-to-end streaming service implies harmonized interworking between
protocols and mechanisms specified by IETF and 3GPP. Both 3GPP and IETF
Figure 9.19 3GPP streaming protocols and their applications.
have their own sets of protocols and mechanisms to provide QoS and connectivity in
3G access network and external IP-PDN (Internet), respectively. External IP-PDN
can deploy either IntServ or DiffServ QoS model to provide QoS.
3GPP release 4 already includes support for streaming services in its QoS model.
3GPP release 5 has an upgraded packet-switched core network by adding an “Inter-
net multimedia subsystem (IMS)” that consists of network elements used in session
initiation protocol (SIP)-based session control. Release 5 has also upgraded network
elements GSNs (GPRS support nodes) to support delay-sensitive real-time services.
In addition, the radio access network (UTRAN) has been upgraded to support real-
time handover of PS (packet-switched) traffic. The main purpose of release 5 is to
enable an operator to offer new services like multimedia, gaming, and location-
based services. The Internet multimedia domain is mainly concerned with new ser-
vices—their access, creation, and payment—but in a way that gives an operator full
control over the content and revenue.
9.6.1 3GPP Packet-Switched Domain Architecture
Figure 9.20 depicts the network architecture of an end-to-end 3GPP packet-switched
streaming service. We need at least a streaming client and a content server to
implement the streaming service. Content servers may be either hosted in the
UMTS architecture itself or accessed externally through an IP-PDN. A proxy
server may be needed in the UMTS architecture to provide sufficient QoS if the content servers are accessed externally through an IP-PDN. The end-to-end streaming architecture has the following network elements that are specific to streaming:
. Content Servers. They can be either hosted in the UMTS architecture (added to
the IMS) or can be accessed externally. Content servers consist of streaming
servers that store streaming content and Web servers that hold SMIL pages,
images, and other static content.
. Proxy Server. This may be included in the IMS (especially when the streaming
server is external) to provide enhanced QoS streaming service. The proxy
server’s [45,46] main role is to smooth (eliminate delay jitter) incoming
streaming traffic from the external IP-PDN. During transmission of the stream-
ing content to the client, the proxy dynamically adapts the delivered QoS in
accordance with the available bandwidth. The proxy server uses the feedback
from the client application, radio network, and IP network. The proxy server
can also implement an appropriate quality adaptation scheme by switching
on the fly to lower-quality streaming when the available bandwidth is not sufficient. Moreover, it can perform the additional function of transcoding. Transcoding may be needed for several reasons, such as when a user moves from a high-bandwidth wireless LAN to a GPRS or 3G network, or when the mobile node is unable to handle high-bandwidth streaming traffic.
. User and Profile Servers. These servers store user preferences and device capa-
bilities. This information can be used to control presentation of streamed media
to a mobile user.
. Content Cache. Content cache can be optionally used to improve the overall
service quality.
. Portals. Portals are servers that allow convenient access to streamed media
content. For example, a portal might offer content browse and search facilities.
In the simplest case, it can be a Webpage with a list of links to streaming content.
Apart from the abovementioned network elements that are specific to streaming
service, other network elements in the 3GPP UMTS architecture play a significant
role in the QoS management of streaming service. The UMTS radio access
network (UTRAN) ensures seamless handover between basestations with minimal
disruption to ongoing real-time services. The radio resource control (RRC) protocol
[1] (3GPP-TS-25.331) is used for controlling resources on the UTRAN (universal ter-
restrial radio access network). The radio access network application part (RANAP)
protocol [1] (TS-25.413) is used between the UTRAN and core network entities. The
serving GPRS support node (SGSN) acts as the gateway for the entire packet-based
communications between user equipments (UEs) within its serving area. The
SGSN is responsible for packet routing and transfer, mobility management (attach/detach and location management), logical link management, authentication, and charging functions. The gateway GPRS support node (GGSN) acts as a gateway between
UMTS core network and external IP-PDN. There is an active PDP context for every
active packet-switched bearer or session. The PDP context is stored in UE, SGSN, and
GGSN. With an active PDP context, the UE is visible for the external IP-PDN and is
able to send and receive data packets. The PDP context describes the characteristics of
the session. It contains a PDP type (e.g., IPv4), the IP address assigned to the UE,
requested QoS, and the address of the GGSN that serves as the access point
to the IP-PDN. Table 9.3 shows the different QoS classes supported in the UMTS
architecture [1].
The PDP activation procedure (see Fig. 9.21) in the UMTS architecture works as follows.
The UE first sends an “Activate PDP context request” message to the SGSN through
the session management (SM) protocol. SGSN contacts the home location register
TABLE 9.3 UMTS QoS Classes
Conversational: very delay-sensitive; example: traditional voice, VoIP.
Streaming: delay-sensitive but tolerant of some jitter (absorbed by client buffering); example: audio/video streaming.
Interactive: request/response traffic with moderate delay requirements; example: Web browsing.
Background: delay-insensitive; example: email and file download.
(HLR) and performs authentication and authorization functions. The SGSN then performs local admission control and initiates the radio access bearer (RAB) assignment procedure in the RAN/GERAN through RANAP. Local call admission is performed based on the availability of radio resources, and the UMTS QoS attributes are mapped onto the radio bearer (RB) parameters used in the physical and link layers. After the establishment of the RB, the SGSN sends a "Create PDP context request" message to the GGSN.
The GGSN performs local admission control and creates a new entry in the PDP
context table that enables the GGSN to route data between SGSN and external
IP-PDN. Afterward, the GGSN returns a confirmation message, "Create PDP context response," containing the PDP address, to the SGSN. The SGSN updates its local PDP context table and sends an "Activate PDP context accept"
message to the UE.
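The per-session state described above can be sketched as a simple record. The field names and sample values are illustrative assumptions; the actual information elements of a PDP context are defined by 3GPP.

```python
from dataclasses import dataclass

# Sketch of the PDP context state described above, as stored in the UE,
# SGSN, and GGSN.  Field names and sample values are illustrative.

@dataclass
class PDPContext:
    pdp_type: str      # e.g., "IPv4"
    ue_address: str    # IP address assigned to the UE
    qos_class: str     # requested UMTS QoS class
    ggsn_address: str  # access point toward the external IP-PDN

ctx = PDPContext("IPv4", "10.0.0.7", "Streaming", "ggsn1.operator.example")
```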
9.6.2 The 3GPP PSS Framework
The 3GPP PSS specifications consist of three 3GPP technical specifications: 3GPP
TS 22.233, 3GPP TS 26.233, and 3GPP TS 26.234. PSS provides a framework for
IP-based streaming applications in 3G networks. This framework is very much in
line with what we have discussed so far in this chapter. This framework uses CC/PP for capability exchange (see Fig. 9.22), SMIL for presentation description, and
Figure 9.21 PDP context activation procedure: the UE sends an "Activate PDP context request" to the SGSN; security functions are performed; the SGSN exchanges "RAB assignment request/response" messages with the UTRAN/GERAN to establish the radio bearer; the SGSN and GGSN exchange "Create PDP context request/response" messages; and the SGSN returns an "Activate PDP context accept" to the UE.
RTSP for session control, and SDP for session description. However, there are minor differences here and there; let us go over them one by one.
9.6.2.1 Streaming Media Session Setup Procedures for PSS
Figure 9.23 shows an example of a simple session establishment. The first step is to
know what content to get and where to start. The client can obtain the URI of the
content from an SMIL presentation document, a simple Webpage, or an email, or
just by word of mouth. Once the URI is known, the client application requests a primary PDP context, which is opened to allocate the IP address for the UE as well as the access point. The primary PDP context is used to access content servers in either the IMS domain or an external IP-PDN. Since the primary PDP context is used for RTSP signaling, it is created with the UMTS interactive QoS profile. A socket is opened for RTSP signaling and is tied to the primary PDP
context. The client can now query the content server to learn more about the content
using RTSP DESCRIBE request.2 The client may include its CC/PP description in
the request. The client does not need to include the profile description if it is sure that
the URI that it is using in the RTSP request already points to a resource that is com-
patible with its profile. Such would be the case if the URI were obtained from an
SMIL document, which was obtained after presenting a valid CC/PP description.
If the profile is included, it is carried using the x-wap-profile and the x-wap-
profile-diff headers for CC/PP exchange protocol that we discussed earlier.
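The DESCRIBE step with an attached capability profile can be sketched as follows; the content URI, repository address, and exact header layout are made-up examples of what a PSS client might send.

```python
# Hedged sketch of an RTSP DESCRIBE request carrying the client's
# capability profile in the x-wap-profile header.  The URI, repository
# address, and CSeq value are invented examples.

def rtsp_describe(uri, profile_ref, cseq=1):
    lines = [
        f"DESCRIBE {uri} RTSP/1.0",
        f"CSeq: {cseq}",
        "Accept: application/sdp",
        f'x-wap-profile: "{profile_ref}"',
        "", "",  # blank line terminates the request
    ]
    return "\r\n".join(lines)

req = rtsp_describe(
    "rtsp://media.example.com/trailer.3gp",
    "http://repository.example.com/profiles/phone-x",
)
```

The server would resolve the referenced profile and return an SDP description of a presentation compatible with the device's capabilities.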