
Robust Multilayer Control for Enhanced Wireless Telemedical Video Streaming

Maria G. Martini, Senior Member, IEEE, Robert S.H. Istepanian, Senior Member, IEEE, Matteo Mazzotti, Member, IEEE, and Nada Y. Philip, Member, IEEE

Abstract: M-health is an emerging area of research, and one of the key challenges for future research in this area is medical video streaming over wireless channels. The contrasting requirements of almost lossless compression and low available bandwidth have to be tackled in medical-quality video streaming for ultrasound and radiology applications. On one side, compression techniques need to be conservative, in order to avoid removing perceptively important information; on the other side, error resilience and correction should be provided, under the constraint of a limited bandwidth. A quality-driven, network-aware approach for joint source and channel coding, based on a controller structure specifically designed for enhanced video streaming in a robotic teleultrasonography system, is presented. The designed application based on robotic teleultrasonography is described, and the proposed method is simulated in a wireless environment in two different scenarios; the video quality improvement achievable through the proposed scheme in such an application is remarkable, resulting in a peak signal-to-noise ratio (PSNR) improvement of more than 4 dB in both scenarios.

Index Terms: Medical video streaming, wireless telemedicine, m-health, cross-layer design, robotic ultrasonography.

    1 INTRODUCTION

CURRENT and emerging developments in wireless communications, integrated with developments in pervasive and wearable technologies, will have a radical impact on future healthcare delivery systems. M-health can be defined as mobile computing, medical sensor, and communication technologies for healthcare [1], [2]. This emerging concept represents the evolution of e-health systems from traditional desktop telemedicine platforms to wireless and mobile configurations.

In this paper, we present an advanced mobile healthcare application example (a mobile robotic tele-echography system) requiring demanding medical data and video streaming traffic in a heterogeneous network topology that combines 3G and WLAN environments.

Since, in this case, medical video streaming is the most demanding application, it represents the main focus of this work. Medical video compression techniques for telemedical applications have requirements of high fidelity, in order to avoid the loss of information that could help diagnosis. In order to preserve diagnostic accuracy, lossless compression techniques are thus often considered when medical video sequences are involved. However, when transmission is over band-limited, error-prone wireless channels, a compromise must be made between compression fidelity and protection/resilience to channel errors and packet loss. From the medical imaging perspective, it has been observed that when lossy compression is limited to ratios from 1:5 to 1:29, compression can be achieved with no loss in diagnostic accuracy [3]. Furthermore, even if the final diagnosis should be made using an image that has been reversibly compressed, irreversible compression still plays a critical role when quick access to data stored at a remote location is needed. For these reasons, lossy compression techniques have been considered for medical images and ultrasound medical video [4].

Recently, joint source and channel coding and decoding (JSCC/D) [5] techniques, which include coordination between source and channel encoders, were investigated [9], e.g., for the transmission of audio data [10], images [11], and video [12]. It was shown that, for wireless audio and video transmission, separate design of source and channel coding does not necessarily lead to the optimal solution [5], nor is it always applicable, in particular when transmitting data with real-time constraints or operating on sources where the bit error sensitivity of the encoded data varies significantly. In some of these works, transmission is adapted to the source characteristics (unequal error protection (UEP)), either at the channel coding level or through source-adaptive modulation [10], [13]. JSCC/D techniques may also require the use of rate/distortion curves or models of the source in order to find the optimal compromise between source compression and channel protection [11]. Joint source and channel coding involves the joint design of the source encoder, at the application layer, and of the channel encoder/modulator, at the physical layer. The most recent video source codecs (such as MPEG-4 [7], H.264 [8], and SVC [14]) have built-in error resilience tools and allow the decoder to cope with errors and conceal them at the receiver side [15], [16], [17]. Such tools are taken into account in the considered framework.

Cross-layer design is a recent further evolution of the concept of joint source and channel coding, aiming at jointly designing the classically separated OSI layers. Characteristics and limits of such an approach are described, e.g., in [18], [19].


. M.G. Martini, R.S.H. Istepanian, and N.Y. Philip are with the Faculty of Computing, Information Systems and Mathematics, Kingston University, Penrhyn Road, Kingston-upon-Thames, KT1 2EE, London, UK. E-mail: {m.martini, r.istepanian, n.philip}@kingston.ac.uk.

. M. Mazzotti is with CNIT/DEIS, University of Bologna, Italy. E-mail: [email protected].

Manuscript received 7 Mar. 2007; revised 29 Apr. 2008; accepted 3 Mar. 2009; published online 2 Apr. 2009. For information on obtaining reprints of this article, please send e-mail to: [email protected], and reference IEEECS Log Number TMC-2007-03-0068.

Digital Object Identifier no. 10.1109/TMC.2009.78.


Despite the recent interest in such techniques, no study to date addresses the application of JSCC/D and cross-layer approaches to advanced mobile telemedical applications ("m-JSCC" in the following).

In this paper, we present the application and the performance analysis of our cross-layer approach [6] for a robotic ultrasonography system with high (diagnostic) quality medical video streaming requirements. Among the several assessment metrics for medical video quality that have been proposed (examples are those described in [20], [21], [22], [23], [24], [25], [26]), we will focus on the classic peak signal-to-noise ratio (PSNR), for comparison purposes, and on metrics that evaluate structural distortion [22] and better represent diagnostic accuracy. A quality-driven approach is considered here in the sense that the received quality is monitored and this information is used for the selection of system parameters.

The paper is organized as follows: In Section 2, the robotic mobile tele-echography system is presented, together with the requirements for ultrasonography video transmission. In Section 3, the management of the information to be exchanged among the system component blocks is addressed, and the logical units responsible for system optimization, in the following referred to as multilayer (or JSCC/D) controllers, which play a key role in the system, are analyzed. Results and discussion are provided in Section 4.

2 THE TELEMEDICAL PLATFORM: A MOBILE TELE-ECHOGRAPHY ROBOT SYSTEM (OTELO)

Teleultrasound systems for remote diagnosis have been proposed over the last 10 years [27], [28], [29], [30], [31], [32], [33], [34], given the need to allow teleconsultation when access of the medical specialist to the sonographer is not possible. An advanced medical robotic system was developed in the mObile Tele-Echography using an ultra-Light rObot (OTELO) European IST project. The project resulted in a fully integrated end-to-end mobile tele-echography system for population groups that are not served locally, either temporarily or permanently, by medical ultrasound experts [35]. The system comprises a fully portable teleoperated robot allowing a specialist sonographer to perform real-time robotized tele-echography on remote patients. OTELO is a remotely controlled system designed to achieve reliable ultrasound imaging at an isolated site, distant from a specialist clinician. Fig. 1 shows the main operational blocks. This tele-echography system is composed of the following:

1. An expert site, where the medical expert interacts with a dedicated, patented, pseudohaptic fictive probe, instrumented to control the positioning of the remote robot and emulating the ultrasound probe that medical experts are used to handling, thus providing better ergonomics.

2. The communication media. We developed communication software based upon the IP protocol to adapt to different communication links (wired and wireless).

3. A patient site, made up of the six-degrees-of-freedom (DoF) lightweight robotic system and its control unit.

Further details on this system are described in [2], [35]. With recent advances in mobile technologies, WLAN, PDAs, and other hand-held devices with different software and hardware capabilities are used by physicians, nurses, and other paramedical staff needing to be updated with the medical reports of patients over the air interface. In this paper, OTELO is integrated in a 3G/WLAN environment so that healthcare professionals who are on the move might have continuous access to patient information.

2.1 The OTELO System: Functional Modalities and 3G/WLAN Wireless Connectivity

It is well known that 3G cellular technology is characterized by wide area coverage, which is its biggest advantage. On the other hand, 802.11 WLAN offers high-bandwidth connections at low cost, but over a limited range. These two mainstream wireless access methods have dominated the wireless broadband Internet market; however, the most probable application scenario is the coexistence of both. Telemedicine is one of the multimedia applications that will benefit from this scenario.

OTELO can be considered a bandwidth-demanding m-health system with challenging classes of QoS requirements, since several medical ultrasound images, robotic data, and other data have to be transmitted simultaneously.

Fig. 2 shows the proposed 3G/WLAN connectivity of the OTELO system and the interface requirements. In this scenario, we assume that the OTELO Expert Station is connected to the OTELO system via the specialist hospital WLAN network.

Fig. 1. The OTELO mobile robotic system.

The detailed medical and nonmedical OTELO data traffic characteristics are shown in Table 1. As ultrasound images are mostly transferred from the robot probe to the OTELO Expert Station, the air interface between the OTELO Patient Station and the Radio Network Controller (RNC) bearer is characterized by an asymmetric traffic load. Still ultrasound images, streamed ultrasound images, ambient video, sound, and robot control data are sent over the uplink channel, while only robot control, ambient video, and sound need to be downloaded to the patient side (i.e., uploading from the Expert Station).

From Table 1, it can be seen that, for the OTELO system, the most bandwidth-demanding traffic consists of medical ultrasound (US) streaming data. For this reason, the focus of this work is on the transmission of US data. According to the communication link limitations, various scenarios can be identified with respect to the data traffic that should be sent simultaneously in order to enable the medical examination to be performed. For the current study, we consider the following options:

1. When the expert is searching for a specific organ (liver, kidney, etc.), high-quality images may not be required, and simple compression methods or lossy techniques can be applied. The lowest data rate acceptable to medical experts in this scenario is approximately 210 Kbits/s with a frame rate of 15 fps.

2. When the organ of interest is found and small displacements of the robot take place, it may be necessary to consider lossless compression techniques that would bring higher image quality to the expert. This lossless compression can be applied to the whole image or to a region of interest (ROI). From the medical perspective, and in order to provide real-time virtual interactivity between the remote consultant and the manipulated robot, the round-trip delay between the robot position commanded from the expert station and the received corresponding image should not exceed 300 ms.


Fig. 2. 3G/WLAN wireless OTELO connectivity system. The hospital site is represented on the left in the ellipse and the patient station is depicted on the right.

TABLE 1. OTELO Medical Data Requirements and Corresponding Data Rates [35]


3. There is a need for multisite specialist wireless connectivity in the hospital, to provide a second diagnostic opinion on the received ultrasound images. Hence, in this study, we assume an additional multispecialist WLAN connectivity system to provide such a service.

2.2 WLAN Connectivity for Expert Diagnosis

It is well known that the IEEE 802.11e WLAN standard adds quality-of-service (QoS) features and multimedia support to the existing IEEE 802.11b and IEEE 802.11a wireless standards, while maintaining full backward compatibility with them. An orthogonal frequency division multiplexing (OFDM) encoding scheme, rather than FHSS or DSSS, is used in the 802.11a standard. 802.11b, often called Wi-Fi, uses complementary code keying (CCK) as its modulation method, which allows higher data speeds and is less susceptible to multipath propagation interference.

Although WLAN connectivity offers higher bandwidth, we need to consider that data transmitted in the hospital WLAN have possibly been received from the UMTS link. In this case, the UMTS link represents the bottleneck, and the source bit rate considered in the WLAN section is limited to the one received from the UMTS link. However, more error protection can be provided to data in the WLAN section.

As shown in Fig. 2, the extended use of WLAN connectivity is configured for second-opinion and multidiagnosis services on the expert side of the OTELO system configuration.

3 ROBUST MULTILAYER CONTROLLER STRUCTURE FOR ENHANCED MEDICAL VIDEO STREAMING: ARCHITECTURE AND ALGORITHMIC APPROACH

The proposed architecture for ultrasound video transmission over 3G/WLAN systems is described in this section, focusing, in particular, on the system controller structure.

Fig. 3 illustrates the overall proposed system architecture, from the transmitter side (patient side) in the upper part of the figure to the receiver side (expert side) in the lower part, including the signaling used for transmitting the JSCC/D control information in the system. We focus, in fact, on the transmission of ultrasound video from patient to specialist. Besides the traditional tasks performed at the application level (source encoding, application processing such as ciphering), at the network level (including real-time transport protocol (RTP)/UDP-Lite/IPv6 packetization, the impact of the IPv6 wired network, and robust header compression (RoHC)), at medium access (including enhanced mechanisms for WiFi), and at radio access (channel encoding, interleaving, modulation), the architecture includes two controller units, at the physical and application layers. These controllers are introduced to supervise the different (de)coders, (de)modulation, and (de)compression modules, and to adapt the parameters of these modules to changing conditions, through the sharing of information about the source, network and channel conditions, and user requirements. For this controlling purpose, a signaling mechanism has been defined, detailed in the following section.

3.1 Side Information Exchange Mechanisms in the System

System optimization is performed according to information about the different system blocks, which is collected and managed by the system controllers. In particular, the information taken into account by the system for optimization is composed of: source significance information (SSI), i.e., information on the sensitivity of the source (encoded medical video) bitstream to channel errors; channel state information (CSI); decoder reliability information (DRI), i.e., soft values output by the channel decoder; source a priori information (SRI), e.g., statistical information on the source; source a posteriori information (SAI), i.e., information only available after source decoding; network state information (NSI), represented, e.g., by packet loss rate and delay; and, finally, the ultrasonography video quality measure, output from the source decoder (at the expert site) and used as feedback information for system optimization. This last measure is critical, as the target of the overall system optimization is the maximization of the received ultrasound video quality, which is the ultimate goal, since it corresponds to the possibility of performing a correct diagnosis.
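As an illustration of how this side information can be organized, the sketch below models a few of the feedback records as plain data structures. The field names and units are assumptions introduced for exposition only; they are not the signaling format defined in the paper.

```python
from dataclasses import dataclass

@dataclass
class NetworkStateInfo:
    """NSI reported by the network path (illustrative fields)."""
    packet_loss_rate: float   # fraction of packets lost in the last cycle
    avg_delay_ms: float       # average delay
    avg_jitter_ms: float      # average delay jitter

@dataclass
class ReducedChannelStateInfo:
    """Reduced CSI made available to the application layer."""
    avg_snr_db: float         # average Eb/N0 over one controller step
    coherence_time_s: float   # channel coherence time

@dataclass
class QualityFeedback:
    """Per-cycle ultrasound video quality measured at the expert side."""
    cycle_index: int
    avg_quality: float        # e.g., average PSNR (dB) or SSIM of decoded frames
```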


Fig. 3. m-JSCC/D architecture for the OTELO system.


See also [23], [26] for the relevance of perceptual optimization. Since this quality measure is regularly sent back to the patient side for action, the evaluation should be performed on the fly and without reference (or with only reduced reference) to the transmitted frame (see, e.g., [25]).

Clearly, when considering real-time diagnostic systems, this control information needs to be transferred through the network and system layers in a timely and bandwidth-efficient manner. The impact of the network and protocol layers is quite often neglected when studying joint source and channel coding, and only minimal effort is made in finding solutions that provide efficient interlayer signaling mechanisms for JSCC/D. Different mechanisms have been identified that could allow information exchange transparently to the network layers (see, e.g., [36], [37]). Besides these novel solutions, several transport protocols allow carrying, together with the payload, some control information. In particular, the UDP, UDP-Lite, and datagram congestion control protocol (DCCP) protocols are considered at the transport layer. For further information on such solutions, the reader may refer to [37].

Finally, it should be noted that additional information is required by the system for the setup phase, where information on available options (e.g., available channel encoders and channel coding rates, available modulators, . . . ) and a priori information on the transmitted ultrasound video (e.g., statistical characterization of the video sequence) are exchanged, the session is negotiated, and default parameters are set (e.g., authentication key, default module settings).

3.2 Principle of the m-JSCC/D Controller Structure

Fig. 4 shows a schematization of the multilayer controller structure, representing the core of the proposed m-health transmission system, which aims at a global system optimization in terms of received medical video quality by collecting, step by step, information on the system and providing up-to-date control parameters to the relevant system blocks.

The system controller is composed of two distinct units, namely the physical layer (PHY) controller and the application layer (APP) controller. The latter collects information from the network (NSI: packet loss rate, delay, and delay jitter) and from the source (e.g., SSI), and has access to reduced channel state information and to the quality metric of the previously decoded frame (or group of frames) of ultrasound video. According to this information, it produces controls for the source encoder block (e.g., quantization parameters, frame rate, error resilience tools to activate) and for the network.

The task of the PHY controller unit is to provide controls to the physical layer blocks, i.e., the channel encoder, modulator, and interleaver.

A more detailed description of the controller component units, with examples of their functionality, is presented below.

3.3 Application Layer Controller Unit

Given the amount of information that can be exploited and the number of parameters to be set, the application controller has been modeled as a finite state machine, so that the controller switches among different states according to the collected information. At the beginning of each iteration cycle, the controller decides the next operating state, which is defined by a fixed set of configuration parameters for the different blocks of the chain. The choice of the new state is based on the history and on the feedback information coming from the blocks at the receiver (expert) side, relevant to the previous cycle.

The algorithm dynamically performs adaptation to channel conditions, network conditions, and source characteristics by considering the available feedback information.

The feedback information collected at the expert side and used at the patient side for the choice of medical video coding parameters and transmission parameters is summarized as follows:

. Medical video quality at the expert end: PSNR or another quality metric (e.g., based on structural distortion [22], or without reference to the original sequence [25]). A no-reference or reduced-reference metric should in fact be considered in a realistic implementation: at the expert side, the original frame transmitted from the patient side is not available for comparison; furthermore, attention should be paid to the choice of a metric that represents diagnostic accuracy, which is the final goal of medical image and video transmission.


    Fig. 4. Controller structure (patient side).


. Reduced CSI, composed, for example, of the average signal-to-noise ratio (SNR) over one controller step and of the channel coherence time.

. NSI: number of lost packets, average jitter, and average round-trip time (RTT).

The main configuration parameters set by the APP m-JSCC/D controller, modifiable at each simulation step, are:

    . frame rate of encoded medical video,

    . quantization parameters of encoded medical video,

. group of pictures (GOP) size (i.e., intraframe refresh rate) of the encoded medical video, and

. average code-rate channel protection, as a consequence of the choice of the source encoding parameters and of the knowledge of the available bandwidth.

In order to reduce the number of possible configurations and to avoid continuously switching from one given set of parameters to another, only a limited set of possibilities for these parameters is considered, resulting in a limited number N of possible states. Examples are provided in Section 4 and in Table 2 for medical video encoded according to the MPEG-4 standard.
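For concreteness, the snippet below encodes one hypothetical set of N = 5 states as tuples of (frame rate, quantization parameter, GOP size, channel code rate). The tuple values are invented placeholders standing in for the entries of Table 2, which are not reproduced in this text.

```python
from collections import namedtuple

# One operating point of the APP controller's finite state machine.
# The values below are invented placeholders, not the actual Table 2 entries.
AppState = namedtuple("AppState",
                      ["frame_rate_fps", "quant_param", "gop_size", "code_rate"])

APP_STATES = {
    1: AppState(10, 16, 4, 1 / 2),   # most robust: low source rate, strong protection
    2: AppState(10, 12, 8, 1 / 2),
    3: AppState(15, 10, 8, 2 / 3),
    4: AppState(15, 8, 16, 2 / 3),
    5: AppState(15, 6, 16, 3 / 4),   # least robust: highest error-free quality
}
```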

The adaptive algorithm that has been tested takes into account the trend of the medical video quality feedback from the source decoder and the average $E_b/N_0$ experienced on the wireless link in the previous controlling cycle, where $E_b$ is the average energy per coded bit and $N_0$ is the one-sided noise spectral density.

Typically, a low ultrasonography video quality value associated with a negative video quality trend will cause a transition to a state characterized by higher robustness, i.e., higher source compression, more error resilience tools [16], [17], and higher channel protection.

The APP controller algorithm is run every cycle of length T (e.g., one second). States are numbered from the most robust (State 1) to the least robust (State N), the latter corresponding to the highest error-free quality. An example is reported in Table 2. At the end of each cycle, the network condition is checked. When there is network congestion, indicated by a high value of the packet loss rate (PLR) feedback in the NSI, the controller immediately sets the state to the first one, characterized by the lowest source bit rate (corresponding to the minimum requirements for OTELO medical video), in order to reduce as much as possible the amount of data that has to flow through the IPv6 network.

Otherwise, the state is selected according to the ultrasound video quality $Q_i, Q_{i-1}, \ldots$ achieved in previous cycles.

The state number is decreased (a more robust state is selected for the following cycle) if a reduction in US video quality is observed, i.e., $Q_i - Q_{i-1} < 0$, and increased (a less robust state, corresponding to a higher error-free quality, is selected) if a US video quality improvement is observed, i.e., $Q_i - Q_{i-1} > 0$. In order to avoid too many oscillations, proper controls checking previous states are considered.

The state number can be increased or decreased by one step ($\mathrm{state}_{i+1} = \mathrm{state}_i \pm 1$) or by two steps ($\mathrm{state}_{i+1} = \mathrm{state}_i \pm 2$), according to the observed US video quality value $Q_i$, which is compared with proper thresholds. The temporal average of the quality of the video frames in the cycle just terminated is considered as the US video quality:

$$Q_i = \frac{1}{N_f} \sum_{k=1}^{N_f} Q_{k,i}, \qquad (1)$$

where $Q_{k,i}$ is the quality of the $k$th video frame in controller cycle $i$ and $N_f$ is the number of frames transmitted in the cycle $i$ just terminated.

As anticipated above, the video quality metric considered for closed-loop control should be evaluated with no reference, or with only partial reference, to the original US video frames, which are not available at the expert side. An example of such a metric is the one defined in [24]. However, for the final assessment and the results shown in Section 4, the following metrics are considered:

1. PSNR:

$$\mathrm{PSNR} = 20 \log_{10} \frac{255}{\mathrm{RMSE}}, \qquad (2)$$

where RMSE is the square root of the mean square error

$$\mathrm{MSE} = \frac{1}{W \cdot H} \sum_{i=1}^{W} \sum_{j=1}^{H} \left[ \hat{f}(i,j) - f(i,j) \right]^2, \qquad (3)$$

where $\hat{f}(i,j)$ and $f(i,j)$ are the luminance values of pixel $(i,j)$ in the reconstructed and in the original frame, respectively, of dimensions $W \times H$.

2. Structural similarity metric (SSIM) [22]. The SSIM index presented in [22], as shown in (4), can be written as the product of three independent contributions, representing the luminance information, the contrast information, and the structural information. With $x$ and $y$ indicating the reference and the received image:

$$\mathrm{SSIM}(x, y) = l(x, y) \cdot c(x, y) \cdot s(x, y), \qquad (4)$$


TABLE 2. Sets of Parameter Values Used by the APP Controller Unit


where the luminance comparison is represented by the term

$$l(x, y) = \frac{2 \mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}, \qquad (5)$$

and the contrast comparison by

$$c(x, y) = \frac{2 \sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}. \qquad (6)$$

The structural comparison term $s(x, y)$ is

$$s(x, y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}. \qquad (7)$$

In the expressions above,

$$\mu_x = \frac{1}{N} \sum_{i=1}^{N} x_i, \qquad (8)$$

$$\sigma_x = \left( \frac{1}{N - 1} \sum_{i=1}^{N} (x_i - \mu_x)^2 \right)^{1/2}, \qquad (9)$$

$$\sigma_{xy} = \frac{1}{N - 1} \sum_{i=1}^{N} (x_i - \mu_x)(y_i - \mu_y), \qquad (10)$$

and $C_1$, $C_2$, and $C_3$ are proper constants.

These metrics have been considered since they are the most commonly used, thus allowing easier evaluation and comparison of results.
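As a concrete reference for (2)-(10), below is a small numpy sketch of both metrics. The PSNR function is the standard computation; the SSIM function evaluates (4)-(10) over whole frames as a single window, whereas the SSIM of [22] is normally computed over local windows and then averaged. The constants C1 = (0.01*255)^2, C2 = (0.03*255)^2, and C3 = C2/2 are common conventions assumed here, not values given in the paper.

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """PSNR in dB between two 8-bit luminance frames, per (2) and (3)."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)                 # (3): squared error averaged over W*H
    if mse == 0:
        return float("inf")                  # identical frames
    return 20.0 * np.log10(255.0 / np.sqrt(mse))

def ssim_single_window(x: np.ndarray, y: np.ndarray,
                       c1: float = (0.01 * 255) ** 2,
                       c2: float = (0.03 * 255) ** 2) -> float:
    """SSIM per (4)-(10), computed over the whole frame as one window."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    n = x.size
    mu_x, mu_y = x.mean(), y.mean()                       # (8)
    sigma_x, sigma_y = x.std(ddof=1), y.std(ddof=1)       # (9)
    sigma_xy = np.dot(x - mu_x, y - mu_y) / (n - 1)       # (10)
    c3 = c2 / 2.0                                         # common convention
    lum = (2 * mu_x * mu_y + c1) / (mu_x ** 2 + mu_y ** 2 + c1)              # (5)
    con = (2 * sigma_x * sigma_y + c2) / (sigma_x ** 2 + sigma_y ** 2 + c2)  # (6)
    struct = (sigma_xy + c3) / (sigma_x * sigma_y + c3)   # (7)
    return lum * con * struct                             # (4)
```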

The GOP length, i.e., the intraframe refresh rate, is selected according to the reduced CSI available at the application layer: longer GOPs are selected for better channel conditions.

If the indication is to increase the state number and the current state corresponds to the maximum value, the state is left unchanged. Similarly, the state is left unchanged if the indication is to decrease it and the current state corresponds to the minimum.

Given the average bit rate associated with the chosen state, the code rate available for signal protection is evaluated considering the constraint on the total rate $R_{max} = R_s / R_c$, where $R_s$ is the average source coding rate and $R_c$ is the target average protection rate. If physical layer UEP is adopted, given the available total coded bit rate ($R_{max}$), the average channel coding rate ($R_c$) is derived by the application m-JSCC/D controller and proposed to the PHY controller (see Fig. 4). The knowledge of the bit rate is of course approximate, being based on rate/source parameter models developed by the authors or on average values evaluated in previous controller steps.
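Gathering the rules of this subsection into one place, the following sketch shows one plausible per-cycle update of the APP controller state, together with the derivation of the average channel code rate from $R_{max} = R_s / R_c$. The congestion and quality-trend thresholds are hypothetical values introduced only for illustration; the paper does not report its actual thresholds.

```python
def next_state(state: int, q: list, plr: float, n_states: int = 5,
               plr_congested: float = 0.05, big_change: float = 3.0) -> int:
    """One APP controller cycle: choose the state for cycle i+1.
    q holds the per-cycle average qualities Q_1..Q_i (most recent last);
    the thresholds are hypothetical, not taken from the paper."""
    if plr > plr_congested:              # congestion: force the most robust state
        return 1
    if len(q) < 2:
        return state                     # not enough history yet
    trend = q[-1] - q[-2]                # Q_i - Q_{i-1}
    step = 2 if abs(trend) > big_change else 1   # one- or two-step transition
    if trend < 0:
        state -= step                    # quality dropping: more robust state
    elif trend > 0:
        state += step                    # quality improving: less robust state
    return max(1, min(n_states, state))  # left unchanged at the extremes

def avg_channel_code_rate(r_source: float, r_max: float) -> float:
    """From R_max = R_s / R_c: the protection rate proposed to the PHY layer."""
    return r_source / r_max
```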

The length of the controller time step has to be chosen according to the requirements of the application. The controller time step has to be short enough to allow quick adaptation to channel conditions, but long enough not to reduce compression efficiency too much.

The information about the system is acquired by the controllers in the first time steps, where, e.g., the motion and statistical characteristics of the medical video sequence, to be exploited in the following time steps, are acquired by the APP controller subunit.

Furthermore, as mentioned above, the OTELO system requires a maximum delay of 300 ms, and attention should be paid to complying with this constraint. If preencoding is considered for the medical video and frames are encoded group by group, the group of frames coded together should correspond to a portion of video shorter than 300 ms. It is in fact important that the medical expert experience only a small delay between the positioning of the probe and the visualized ultrasonography video frame.

3.4 Physical Layer Controller Unit

The compressed medical video bitstream may be separated into partitions or layers with different sensitivities to channel errors. A different protection can thus be used for the different partitions when allocating the available average channel coding rate. As an example, in the case of MPEG-4 video, the bitstream can be separated into packets, and each packet can be separated into a header and two data partitions with different error sensitivities. Packets from I (intra) frames can be separated into a first class related to DC DCT coefficients and a second class related to AC DCT coefficients, whereas packets from P (predicted) frames can be separated into two partitions relevant to motion and texture data, respectively. This different sensitivity can be exploited to perform UEP, either at the application or at the physical level. The video stream sensitivity can be modeled similarly to [12] in order to simplify the UEP policy. Likewise, in the case of H.264-based compression, the data partitioning tool can be exploited, while in SVC the granularity offered by scalable video coding may be invoked. Unequal protection based on an ROI can also be considered, by exploiting the possibility offered by the MPEG-4 standard to separate any video sequence into video objects that can be managed differently. In this view, the identification of regions of interest allows dedicating higher protection to the region of interest, resulting in an increase in diagnostic accuracy for a fixed available bandwidth.

The task of the physical layer controller subunit is to decide on the unequal error protection strategy, i.e., on the channel coding rate for the different source layers, each with a different sensitivity to errors, with the goal of minimizing the total distortion $D_{S+C}$ due to compression and channel errors, under the constraint of the average channel coding rate $R_c$ selected by the application controller. The general procedure for channel coding rate selection is described in detail in [12].
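A brute-force sketch of such a selection is given below: given per-partition error sensitivities and a small menu of available code rates, it enumerates the assignments whose average code rate respects the budget $R_c$ and keeps the one minimizing a modeled distortion. The distortion model (sensitivity times code rate) and the equal-partition-size assumption are deliberately crude stand-ins for the rate-distortion models of [12].

```python
from itertools import product

def select_uep_rates(sensitivities, rate_menu, r_c_target):
    """Choose one code rate per partition (equal partition sizes assumed)
    so that the average code rate stays at or above the budget r_c_target,
    minimizing a toy distortion model."""
    best, best_d = None, float("inf")
    for rates in product(rate_menu, repeat=len(sensitivities)):
        if sum(rates) / len(rates) < r_c_target:
            continue  # a lower average code rate would exceed the bit budget
        # toy model: distortion grows with error sensitivity and with code rate
        d = sum(s * r for s, r in zip(sensitivities, rates))
        if d < best_d:
            best, best_d = rates, d
    return best

# e.g., header / motion / texture partitions of an MPEG-4 P-frame packet:
print(select_uep_rates([1.0, 0.7, 0.3], [1/3, 1/2, 2/3], r_c_target=1/2))
# -> (1/3, 1/2, 2/3): the most sensitive partition gets the strongest protection
```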

Furthermore, the PHY controller subunit sets the parameters for bit-loading in multicarrier modulation and the interleaver characteristics, and it performs a trade-off with receiver complexity. Again, the metric chosen to represent distortion should be representative of diagnostic accuracy.

4 SIMULATION RESULTS AND DISCUSSION

In order to demonstrate the feasibility of the system and to evaluate the achievable performance, the proposed controlled system has been implemented with its different subblocks, namely: the application layer controller; the source encoder/decoder (three possible codecs: MPEG-4, H.264/AVC, and SVC); the cipher/decipher unit; RTP header insertion/removal; transport protocol header (e.g., UDP-Lite, UDP, or DCCP) insertion/removal; IPv6 header insertion/removal; IPv6 mobility modeling; IPv6 network simulation; RoHC; DLL header insertion/removal; and the radio link, including the physical layer controller, the channel encoder/decoder (convolutional, rate-compatible punctured convolutional (RCPC), and low-density parity check (LDPC) codes, with soft and iterative decoding allowed), the interleaver, the modulator (also OFDM, TCM, TTCM, STTC, with soft and iterative demodulation allowed), and the channel (e.g., additive white Gaussian noise (AWGN), Rayleigh fading, shadowing, frequency-selective channels). The implementation of some of the blocks above was performed in the framework of the PHOENIX EU project and is described in [37].

The proposed structure is implemented in a simulated laboratory environment with images and video streams acquired from the real OTELO system. The video sequences acquired by the robotic sonographer are fed to the source codec, which performs source (MPEG-4/H.264) encoding (according to the parameters suggested by the APP controller) at every controller time step. The encoded bitstream is then processed by the lower layers and finally transmitted over the wireless channel. The parameters of the upper layers, down to the network, are determined by the application layer controller unit at every APP controller time step. The parameters of the lower layers, in particular of the physical layer, are determined at runtime by the PHY controller unit with its own time step (lower than or equal to that of the APP controller). The application controller unit adapts the source bit rate, while the physical layer one provides UEP, according to the average bit rate suggested by the APP layer controller, and drives adaptive bit-loading for multicarrier modulation. A default parameter setting is considered in the initialization phase.

4.1 Simulation Setup

The 802.11e WLAN support was added at the radio link level, with a coded bit rate of 12 Mbit/s. The ultrasonography video stream is coded according to the MPEG-4 standard and is assumed to be multiplexed with other real-time transmissions, so that it occupies only an average portion of the available bandwidth corresponding to a coded bit rate of 650 Kbit/s. CIF resolution has been selected. The MoMuSys MPEG-4 reference video codec is considered, with some modifications in the decoder to improve bit error resilience. The modified decoder is used both in the adapted and in the nonadapted system.

RoHC is applied in order to compress the transport and network headers by transmitting only nonredundant information.

The channel codes are irregular repeat-accumulate (IRA) LDPC codes with a (3,500, 10,500) mother code, properly punctured and shortened in order to obtain different code rates. The resulting codewords are always 4,200 bits long. The code rate is 2/3 for the nonadapted system (EEP); in the adapted case, the code rate can change according to the SSI in order to perform UEP. The average coded bit rate is the same in both cases considered.
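As a worked example of how a single mother code yields several rates: under the usual conventions (shortening removes information bits, puncturing removes parity bits, and neither is transmitted), which the paper does not spell out, the bit counts for the rate-2/3 configuration can be derived as follows.

```python
def shorten_puncture(rate, k_mother=3500, n_mother=10500, n_out=4200):
    """Bits to shorten (s) and puncture (p) so that a (k_mother, n_mother)
    mother code emits n_out-bit codewords at the requested rate.
    Conventions assumed: shortening removes information bits, puncturing
    removes parity bits, and neither is transmitted."""
    k_out = round(n_out * rate)          # information bits per output codeword
    s = k_mother - k_out                 # shortened information bits
    p = (n_mother - s) - n_out           # punctured parity bits
    assert 0 <= s <= k_mother and p >= 0, "rate not reachable from this mother code"
    return s, p

print(shorten_puncture(2 / 3))           # EEP rate of the nonadapted system -> (700, 5600)
```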

In the first case, the modulation is classical OFDM with 48 carriers for data transmission and a frame duration of 4 μs; margin-adaptive bit-loading techniques, managed by the PHY joint source and channel coding (JSCC) controller, are considered in the adapted system.

The channel is obtained according to the ETSI channel A model, representing the conditions of a typical office (hospital) environment. It also takes into account a log-normal flat fading component with a channel coherence time of 5 s, to model the fading effects due to large obstacles. A median signal-to-noise ratio of $E_b/N_0 = 13.2$ dB has been considered.

For the scenario considered, the number of possible states controlled at the APP layer is 5. Each state is characterized by a different set of values for the above-mentioned parameters. State 1 corresponds to the lowest source data rate (the lowest video quality) and the highest robustness, whereas State 5 corresponds to the highest source data rate (the highest video quality) and the lowest robustness. Thus, decreasing the state number corresponds to increasing the robustness of transmission at the cost of a loss in the error-free received video quality. Table 2 summarizes the states considered in the example.

The source bit rate after MPEG-4 compression depends on the APP controller status and ranges from 210 Kbit/s (State 1) to 384 Kbit/s (State 5), taking into account also the overhead due to the various network headers; it is thus in good accordance with the OTELO requirements reported in Table 1 and Section 2. Note that, in some cases, to keep an acceptable video quality in very deep fades or network congestion, the most robust states have to consider lower frame rates than those required by the OTELO system. The simulation setup is summarized in Table 3.

A second scenario is described in the following: the case of transmission over a nonfrequency-selective channel affected by Rayleigh fading and log-normal shadowing with median SNR = 5 dB. The channel codes considered are RCPC codes with mother code rate 1/3, constraint length $K = 5$, and puncturing period $P = 8$. Simple BPSK modulation is considered here. Robust header compression is applied. The APP controller states correspond to a minimum source bit rate after MPEG-4 coding of 210 Kbps and a maximum of 384 Kbps (thus respecting the OTELO specifications in Table 1). The combinations of parameters considered for MPEG-4 video are the same as above.

State 4 is the reference one, i.e., the one considered in the nonadapted case, whereas in the adapted case the controller switches among the states in Table 2. The maximum bit rate over the channel is 450 Kbps.

4.2 Numerical Results

Fig. 5 reports comparative results in terms of PSNR and SSIM [22] versus time, in the first setup described above. The quality curves reported in the graph have been obtained by averaging four distinct simulations run with different noise seeds. Quality values averaged over 1 s are reported in the curves. The quality values have been normalized with respect to the maximum value achieved, in order to allow the comparison of different metrics in the same figure. An average gain of 4.4 dB in terms of PSNR is provided by the adapted system, allowing a diagnosis with much higher accuracy than in the nonadapted case, as visual results confirm. It is evident how, in deep channel fades, the medical video quality in the proposed system is kept at acceptable levels. In particular, if such fades happen in the first part of the ultrasonography, when the medical doctor is searching for the specific organ, this allows a reduction of the search time, avoiding the periods in which the quality of the communication is not acceptable.



Fig. 6 reports the complementary cumulative distribution function of the received medical video quality expressed in terms of SSIM, for the proposed and the reference system. The graph allows visualizing the comparison in terms of the percentage of time the video quality is above a prefixed threshold.

Since PSNR is a more commonly used quality metric, results in terms of PSNR over 30 seconds are reported in Table 4 for an easier evaluation of the results.

Fig. 7 shows the comparative visual results for the echocardiography sequence acquired on the expert side, in the same setup. The original frame (no. 422) is reported in Fig. 7a. The corresponding received video frame with the nonadapted system is reported in Fig. 7b; this figure clearly shows evident artifacts, in the form of light stripes, affecting the accuracy of the diagnosis. Fig. 7c shows the corresponding received video frame with the adapted system, presenting a much higher visual quality, also reflected in very good diagnostic accuracy.

Fig. 8 reports comparative results showing the performance with (ROAM) and without the proposed controller structure, in the second setup described in the previous section, i.e., in the case of a time-correlated, nonfrequency-selective channel.


TABLE 3. Summary of the Main Simulation Parameters Considered (First Scenario)

Fig. 5. Comparative performance of the proposed and reference architectures: normalized PSNR and SSIM versus time.

Fig. 6. Complementary cumulative distribution function of ultrasound video quality in terms of SSIM.


The results are obtained by averaging five simulations of 30 seconds each. Again, the video quality of subsequent frames within one second is averaged to provide a single value. The average gain observed in terms of PSNR is 4 dB in this case.

Fig. 9 shows the complementary cumulative distribution function of the video quality at the expert side, in terms of SSIM, in the same setup. The gain with the adapted (ROAM) system is evident also with this different visualization, which highlights the percentage of time the ultrasound video quality is above a prefixed quality value.

5 CONCLUSIONS

A new multilayer controller structure for enhanced medical video streaming in robotic teleultrasonography applications (m-JSCC/D) has been introduced in this paper. In particular, an application controller unit, driving the source encoder parameters with knowledge of the channel and network state information and of the medical video quality at the expert site, and a physical controller unit, allowing the adaptation of the medical video bitstream to channel conditions by exploiting knowledge of the characteristics of the ultrasonography video stream, have been described in detail. The proposed structure is implemented in a simulated laboratory environment with images and video streams acquired from the real robotic ultrasonography system (OTELO). Comparative simulation results for ultrasonography video transmission over a WLAN link show that a considerable improvement in terms of both objective and subjective medical video quality is achieved with the proposed system.

Work is currently underway to test the performance of the proposed system in real telemedical and clinical settings, in order to verify the performance of the robotic diagnostic system in hospital and emergency situations.


Fig. 7. Comparative visual results of the acquired medical video images. (a) Frame no. 422, original. (b) Frame no. 422, MPEG-4 without m-JSCC/D. (c) Frame no. 422, MPEG-4 with m-JSCC/D.

TABLE 4. PSNR Results (in dB), 30 s Ultrasound Video Sequence

Fig. 8. Normalized PSNR and SSIM versus time with the adapted (ROAM) and reference systems. Second scenario.


ACKNOWLEDGMENTS

The work of M.G. Martini and M. Mazzotti was partially supported by the European Commission in the framework of the PHOENIX IST project under contract FP6-IST-1-001812. The PHOENIX project partners are also acknowledged. R.S.H. Istepanian and N. Philip are grateful to the European Union for supporting the project EU IST-32516 OTELO: Integrated, end-to-end, mobile tele-echography system. Maria G. Martini was with CNIT, University of Bologna, Italy, at the time of this work.

    REFERENCES

[1] R.S.H. Istepanian, S. Laxminarayan, and C.C. Pattichis, M-Health: Emerging Mobile Health Systems. Springer, 2006.

[2] R.S.H. Istepanian, E. Jovanov, and Y.T. Zhang, "M-Health: Beyond Seamless Mobility for Global Wireless Healthcare Connectivity (Editorial)," IEEE Trans. IT in Biomedicine, vol. 8, no. 4, pp. 405-414, Dec. 2004.

[3] P.C. Cosman et al., "Thoracic CT Images: Effect of Lossy Image Compression on Diagnostic Accuracy," Radiology, vol. 190, pp. 517-524, 1994.

[4] H. Yu, Z. Lin, and F. Pan, "Applications and Improvement of H.264 in Medical Video Compression," IEEE Trans. Circuits and Systems I, special issue on biomedical circuits and systems, vol. 52, no. 12, pp. 2707-2716, Dec. 2005.

[5] J.L. Massey, "Joint Source and Channel Coding," Communications Systems and Random Process Theory, pp. 279-293, Sijthoff & Noordhoff, 1978.

[6] M.G. Martini, M. Mazzotti, C. Lamy-Bergot, J. Huusko, and P. Amon, "Content Adaptive Network Aware Joint Optimization for Wireless Video Transmission," IEEE Comm. Magazine, vol. 45, no. 1, pp. 84-90, Jan. 2007.

[7] F. Pereira and T. Ebrahimi, The MPEG-4 Book. Prentice Hall, 2002.

[8] T. Wiegand, G.J. Sullivan, G. Bjontegaard, and A. Luthra, "Overview of the H.264/AVC Video Coding Standard," IEEE Trans. Circuits and Systems for Video Technology, vol. 13, no. 7, pp. 560-576, July 2003.

[9] J. Hagenauer and T. Stockhammer, "Channel Coding and Transmission Aspects for Wireless Multimedia," Proc. IEEE, vol. 87, no. 10, pp. 1764-1777, Oct. 1999.

[10] J. Hagenauer, N. Seshadri, and C.E. Sundberg, "The Performance of Rate-Compatible Punctured Convolutional Codes for Digital Mobile Radio," IEEE Trans. Comm., vol. 38, no. 7, pp. 966-980, July 1990.

[11] J. Modestino and D. Daut, "Combined Source-Channel Coding of Images," IEEE Trans. Comm., vol. 27, no. 11, pp. 1644-1659, Nov. 1979.

[12] M.G. Martini and M. Chiani, "Rate-Distortion Models for Unequal Error Protection for Wireless Video Transmission," Proc. IEEE Vehicular Technology Conf. (VTC '04), May 2004.

[13] D. Dardari, M.G. Martini, M. Mazzotti, and M. Chiani, "Layered Video Transmission on Adaptive OFDM Wireless Systems," EURASIP J. Applied Signal Processing, vol. 2004, no. 10, pp. 1557-1567, Aug. 2004.

[14] H. Schwarz, D. Marpe, and T. Wiegand, "Overview of the Scalable Video Coding Extension of the H.264/AVC Standard," IEEE Trans. Circuits and Systems for Video Technology, vol. 17, no. 9, pp. 1103-1120, Sept. 2007.

[15] Y. Wang, S. Wenger, J. Wen, and A.K. Katsaggelos, "Error-Resilient Video Coding Techniques," IEEE Signal Processing Magazine, vol. 17, no. 4, pp. 61-82, July 2000.

[16] R. Talluri, "Error-Resilient Video Coding in the ISO MPEG-4 Standard," IEEE Comm. Magazine, vol. 36, no. 6, pp. 112-119, June 1998.

[17] Y. Wang and Q.F. Zhu, "Error Control and Concealment for Video Communication: A Review," Proc. IEEE, vol. 86, no. 5, pp. 974-997, May 1998.

[18] V. Srivastava and M. Motani, "Cross-Layer Design: A Survey and the Road Ahead," IEEE Comm. Magazine, vol. 43, no. 12, pp. 112-119, Dec. 2005.

[19] V. Kawadia and P.R. Kumar, "A Cautionary Perspective on Cross-Layer Design," IEEE Wireless Comm., vol. 12, no. 1, pp. 3-11, Feb. 2005.

[20] J.O. Limb, "Distortion Criteria of the Human Viewer," IEEE Trans. Systems, Man and Cybernetics (SMC), vol. 9, no. 12, pp. 778-793, Dec. 1979.

[21] C.J. van den Branden Lambrecht and O. Verscheure, "Perceptual Quality Measure Using a Spatio-Temporal Model of the Human Visual System," Proc. Soc. Photo-Optical Instrumentation Engineers (SPIE) Conf., pp. 450-461, 1996.

[22] Z. Wang, L. Lu, and A.C. Bovik, "Video Quality Assessment Based on Structural Distortion Measurement," Signal Processing: Image Comm., vol. 29, no. 1, Jan. 2004.

[23] I. Cheng and A. Basu, "Perceptually Optimized 3D Transmission over Wireless Networks," IEEE Trans. Multimedia, vol. 9, no. 2, pp. 386-396, Feb. 2007.

[24] M. Zanotti, M.G. Martini, and M. Chiani, "Reduced Reference Image and Video Quality Assessment Based on Structural Similarity," Technical Report IEIIT-002-06, 2006.

[25] Z. Wang, H.R. Sheikh, and A.C. Bovik, "No-Reference Perceptual Quality Assessment of JPEG Compressed Images," Proc. IEEE Int'l Conf. Image Processing, Sept. 2002.

[26] J.L. Mannos and D.J. Sakrison, "The Effects of a Visual Fidelity Criterion on the Encoding of Images," IEEE Trans. Information Theory, vol. 20, no. 4, pp. 525-536, July 1974.

[27] J. Sublett, B. Dempsey, and A.C. Weaver, "Design and Implementation of a Digital Teleultrasound System for Real-Time Remote Diagnosis," Proc. IEEE Ann. Symp. Computer-Based Medical Systems, pp. 292-299, June 1995.

[28] R. Ribeiro, R. Conceicao, J.A. Rafael, A.S. Pereira, M. Martins, and R. Lourenco, "Teleconsultation for Cooperative Acquisition, Analysis and Reporting of Ultrasound Studies," Proc. Conf. Telemedicine (TeleMed '98), Nov. 1998.

[29] G. Kontaxakis, S. Walter, and G. Sakas, "EU-TeleInViVo: An Integrated Portable Telemedicine Workstation Featuring Acquisition, Processing and Transmission over Low-Bandwidth Lines of 3D Ultrasound Volume Images," Proc. Int'l Conf. Information Technology Applications in Biomedicine, Nov. 2000.

[30] A. Vilchis, J. Troccaz, P. Cinquin, F. Courreges, G. Poisson, and B. Tondu, "Robotic Tele-Ultrasound System (TER): Slave Robot Control," Proc. First Int'l Fed. Automatic Control (IFAC) Conf. Telematics Application in Automation and Robotics, pp. 95-100, July 2001.

[31] K. Masuda, E. Kimura, N. Tateishi, and K. Ishihara, "Development of Remote Echographic Diagnosis System by Using Probe Movable Mechanism and Transferring Echogram via High Speed Digital Network," Proc. Ninth Mediterranean Conf. Medical and Biological Eng. and Computing (MEDICON '01), pp. 96-98, June 2001.

[32] F. Courreges, P. Vieyres, R.S.H. Istepanian, P. Arbeille, and C. Bru, "Clinical Trials and Evaluation of a Mobile, Robotic Tele-Ultrasound System," J. Telemedicine and Telecare (JTT), vol. 2005, no. 1, pp. 46-49, 2005.


Fig. 9. Complementary cumulative distribution function of ultrasound video quality in terms of SSIM. Adapted (ROAM) versus reference system. Second scenario.


[33] A. Gourdon, P. Poignet, G. Poisson, P. Vieyres, and P. Marche, "A New Robotic Mechanism for Medical Application," Proc. IEEE/ASME Conf. Advanced Intelligent Mechatronics, pp. 33-38, Sept. 1999.

[34] S.A. Garawi, F. Courreges, R.S.H. Istepanian, H. Zisimopoulus, and P. Gosset, "Performance Analysis of a Compact Robotic Tele-Echography e-Health System over Terrestrial and Mobile Communication Links," Proc. Fifth IEE Int'l Conf. 3G Mobile Comm. Technologies (3G 2004), pp. 118-122, Oct. 2004.

[35] S.A. Garawi, R.S.H. Istepanian, and M.A. Abu-Rgheff, "3G Wireless Communications for Mobile Robotic Tele-Ultrasonography Systems," IEEE Comm. Magazine, vol. 44, no. 4, pp. 91-96, Apr. 2006.

[36] M.G. Martini and M. Chiani, "Proportional Unequal Error Protection for MPEG-4 Video Transmission," Proc. IEEE Int'l Conf. Comm. (ICC '01), June 2001.

[37] M.G. Martini, M. Mazzotti, C. Lamy-Bergot, P. Amon, G. Panza, J. Huusko, J. Peltola, G. Jeney, G. Feher, and S.X. Ng, "A Demonstration Platform for Network Aware Joint Optimization of Wireless Video Transmission," Proc. IST Mobile Summit, June 2006.

Maria G. Martini received the Laurea degree in electronic engineering (summa cum laude) from the University of Perugia, Italy, in July 1998, and the PhD degree in electronics and computer science from the University of Bologna, Italy, in March 2002. She is currently a senior lecturer in the Faculty of Computing, Information Systems and Mathematics at Kingston University, London, where she is also coordinating the Wireless Multimedia Networking Research Group and the participation of the group in the OPTIMIX European project. After a collaboration with the University Hospital of Perugia, Italy, and with the University of Rome, Italy, she joined the Dipartimento di Elettronica, Informatica e Sistemistica (DEIS), University of Bologna, in February 1999. In 2004-2007, she was with the National Inter-University Consortium for Telecommunications (CNIT), Italy. She has worked as a key person for several national and international projects, such as the JSCC project with Philips Research, the Joint Source and Channel Coding-Driven Digital Baseband Design for 4G Multimedia Streaming (JOCO) EU IST project, and the PHOENIX (Jointly Optimizing Multimedia Transmission in IP-Based Wireless Networks) European IST project, leading in particular the activity on the cross-layer system controller. She serves as a reviewer for international journals and conferences and has participated or is participating in the organizing committees and technical program committees of several international conferences (recently, MOBIMEDIA 2008, IEEE PIMRC 2008, IEEE WCNC 2009, and IEEE Pervasive Healthcare 2009). She was the general chair of the EUMOB 2008 Symposium in Oulu, Finland, and is currently the general chair of the Fifth International Mobile Multimedia Communications Conference (MOBIMEDIA 2009) in London. She is coordinating the edition of the Strategic Applications Agenda (SAA) on mobile health and inclusion applications in the framework of the eMobility European Technology Platform. Her research interests are mainly in joint source and channel coding, error-resilient video transmission, wireless multimedia networks, cross-layer design, decision theory, frame synchronization, and the application of knowledge from the communications field to the medical field. She holds several international patents on wireless video transmission. She is a senior member of the IEEE.

Robert S.H. Istepanian received the PhD degree from the Electronic and Electrical Engineering Department at Loughborough University, United Kingdom, in 1994. He is currently a professor of data communications at Kingston University, London, and a visiting professor in the Division of Cellular and Molecular Medicine at St. George's University of London. He is the founder and director of the Mobile Information and Network Technologies Research Centre (MINT) at Kingston University. He has held several academic and research posts in the UK and Canada, including senior lectureships at the University of Portsmouth and Brunel University in the UK; he was also an associate professor at Ryerson University, Toronto, and an adjunct professor at the University of Western Ontario in Canada. He is currently the 2008 Leverhulme distinguished visiting fellow at the Centre for Global e-Health Innovation at the University of Toronto and the University's Health Network. He is an investigator and coinvestigator of several EPSRC and EU research grants on wireless telemedicine and of other research/visiting grants from the British Council, the Royal Society, the Royal Academy of Engineering, and the Leverhulme Trust. He was also the UK lead investigator of several EU-IST and e-Ten projects in the areas of mobile healthcare. He is also a member of several experts and grants review committees and, more recently, was a member of the Canada Foundation for Innovation's experts panel and their strategic healthcare projects. He currently serves on several IEEE Transactions and international journal editorial boards, including the IEEE Transactions on Information Technology in Biomedicine (since 1997), the IEEE Transactions on NanoBioScience, the IEEE Transactions on Mobile Computing, the International Journal of Telemedicine and Applications, and the Journal of Mobile Multimedia. He has also served as guest editor of several special issues of IEEE Transactions. He was the cochairman of the UK/RI chapter of IEEE Engineering in Medicine and Biology in 2002. He has also served as an expert and reviewer for numerous funding bodies in the UK and Canada, as an invited keynote speaker at several international conferences, and as a technical committee member or chair of several national and international conferences. He has published more than 170 refereed journal and conference papers and edited three books, including chapters in the areas of mobile communications for healthcare, m-health technologies, and biomedical signal processing. He is a fellow of the Institution of Engineering and Technology (IET) (formerly the IEE) and a senior member of the IEEE.

Matteo Mazzotti received a degree in telecommunications engineering (with the highest honors) and the PhD degree in electronic engineering, computer science and telecommunications from the University of Bologna in July 2002 and May 2007, respectively. Currently, he is working with the National Research Council (CNR), Italy, and the National Inter-University Consortium for Telecommunications (CNIT), Italy. His main research interests include multimedia communications, joint source and channel coding, broadcast technologies, and wireless communication systems. He is a member of the IEEE.

Nada Y. Philip received the PhD degree for her thesis titled "Medical Quality of Service for Optimized Ultrasound Streaming in Wireless Robotic Teleultrasonography System" from the Faculty of Computing, Information Systems and Mathematics at Kingston University, United Kingdom, in 2008. Currently, she is a lecturer at Kingston University and an honorary tutor at St. George's University of London, United Kingdom. She is a member of the Mobile Information and Network Technologies Research Centre (MINT) at Kingston University. Her research interests include data communication, networking, and information technology in healthcare and medical applications. She is a member of the Institution of Engineering and Technology (IET) and the IEEE.
