
    Performance Characteristics of Convergence Layers in Delay Tolerant Networks

    A thesis presented to

    the faculty of

    the Russ College of Engineering and Technology of Ohio University

    In partial fulfillment

    of the requirements for the degree

    Master of Science

    Mithun Roy Rajan

    August 2011

    2011 Mithun Roy Rajan. All Rights Reserved.


    This thesis titled

    Performance Characteristics of Convergence Layers in Delay Tolerant Networks

    by

    MITHUN ROY RAJAN

    has been approved for

    the Electrical Engineering and Computer Science

    and the Russ College of Engineering and Technology by

    Shawn D. Ostermann

    Associate Professor of Engineering and Technology

    Dennis Irwin

    Dean, Russ College of Engineering and Technology


    Abstract

    RAJAN, MITHUN ROY, M.S., August 2011, Computer Science Engineering

    Performance Characteristics of Convergence Layers in Delay Tolerant Networks (131 pp.)

    Director of Thesis: Shawn D. Ostermann

    Delay Tolerant Networks (DTNs) are designed to operate in environments with high

    delays, significant losses and intermittent connectivity. Internet protocols like TCP/IP and

    UDP/IP are not designed to perform effectively in challenging environments. DTN uses

    the Bundle Protocol which is an overlay protocol to store and forward data units. This

    Bundle Protocol works in cooperation with convergence layers to function in extreme

    environments. The convergence layers augment the underlying communication layer to

    provide services like reliable delivery and message boundaries. This research focuses on

    the kind of performance that can be expected from two such convergence layers - the TCP

    Convergence Layer and the Licklider Transmission Protocol Convergence Layer - under

    various realistic conditions. Tests were conducted to calculate the throughput using these

    convergence layers under different losses and delays. The throughput that was obtained

    using different convergence layers was compared and the performance patterns were

    analyzed to determine which of these convergence layers has a higher performance under

    the various scenarios.

    Approved:

    Shawn D. Ostermann

    Associate Professor of Engineering and Technology


    Acknowledgments

    I would like to thank my advisor, Dr. Ostermann, for his endless supply of ideas and

    suggestions in helping me complete my thesis. I am grateful to him for giving me the

    opportunity to work in the IRG Lab. A very special thanks to Dr. Kruse for his guidance

    and for always having answers to all my questions.

    Thanks to Gilbert Clark, Josh Schendel, James Swaro, Kevin Janowiecki, Samuel

    Jero and David Young for all their help in the lab. It was always great to have someone to

    discuss research issues and world issues with.

    I appreciate all the love and support from my family. Thanks for being patient and

    having faith in me.


    Table of Contents

    Page

    Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

    Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

    List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

    List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

    1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

    2 DTN Architecture and Convergence Layers . . . . . . . . . . . . . . . . . . . . 14

    2.1 DTN Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

    2.2 Bundle Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

    2.3 Convergence Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

    2.3.1 TCP Convergence Layer . . . . . . . . . . . . . . . . . . . . . . . 22

    2.3.2 LTP Convergence Layer . . . . . . . . . . . . . . . . . . . . . . . 25

    3 Experiment Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

    3.1 Hardware Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

    3.2 Software Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

    4 Experiments, Result and Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 33

    4.1 TCP Convergence Layer Tests . . . . . . . . . . . . . . . . . . . . . . . . 33

    4.1.1 TCPCL Without Custody Transfer . . . . . . . . . . . . . . . . . . 33

    4.1.2 TCPCL With Custody Transfer . . . . . . . . . . . . . . . . . . . 44

    4.2 LTP Convergence Layer Tests . . . . . . . . . . . . . . . . . . . . . . . . 49

    4.2.1 LTPCL Without custody transfer . . . . . . . . . . . . . . . . . . . 50

    4.2.2 LTPCL With custody transfer . . . . . . . . . . . . . . . . . . . . 60

    5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

    References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

    Appendix A: ION Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

    Appendix B: Supporting programs . . . . . . . . . . . . . . . . . . . . . . . . . . . 94


    List of Tables

    Page

    3.1 Testbed Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

    3.2 Experimental Settings and Parameters . . . . . . . . . . . . . . . . . . . . . . 31

    4.1 Number of sessions used for various RTTs . . . . . . . . . . . . . . . . . . . . 53


    List of Figures

    Page

    2.1 Bundle Protocol in Protocol Stack . . . . . . . . . . . . . . . . . . . . . . . . 15

    2.2 Bundle Protocol Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

    2.3 Convergence Layer Protocol in Protocol Stack . . . . . . . . . . . . . . . . . . 21

    2.4 TCPCL Connection Lifecycle . . . . . . . . . . . . . . . . . . . . . . . . . . 24

    2.5 LTPCL Connection Lifecycle . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

    2.6 LTPCL Error Connection Lifecycle . . . . . . . . . . . . . . . . . . . . . . . . 28

    3.1 Physical Testbed Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

    3.2 Logical Testbed Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

    4.1 TCPCL Throughput vs RTT . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

    4.2 Outstanding Window Graph for TCP Reno with 2 sec RTT . . . . . . . . . . . 34

    4.3 Outstanding Window Graph for TCP Cubic with 2 sec RTT . . . . . . . . . . . 35

    4.4 Time Sequence Graph of TCP Link with 20sec RTT . . . . . . . . . . . . . . . 37

    4.5 TCPCL Throughput vs Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

    4.6 TCPCL(w/o custody) Throughput vs RTT & Loss for 200 Byte Bundles . . . . 41

    4.7 TCPCL(w/o custody) Throughput vs RTT & Loss for 5000 Byte Bundles . . . 42

    4.8 TCPCL(w/o custody) Throughput vs RTT & Loss for 15000 Byte Bundles . . . 43

    4.9 TCPCL(w/custody) Throughput vs RTT . . . . . . . . . . . . . . . . . . . . 46

    4.10 Comparing TCPCL(w/custody) and TCPCL(w/o custody) Throughput vs RTT 47

    4.11 TCPCL(w/custody) Throughput vs Loss . . . . . . . . . . . . . . . . . . . . . 48

    4.12 Comparing TCPCL(w/custody) and TCPCL(w/o custody) Throughput vs Loss 49

    4.13 TCPCL(w/custody) Throughput vs RTT & Loss for 200 Byte Bundles . . . . . 50

    4.14 TCPCL(w/custody) Throughput vs RTT & Loss for 5000 Byte Bundles . . . . 51

    4.15 TCPCL(w/custody) Throughput vs RTT & Loss for 15000 Byte Bundles . . . 52

    4.16 TCPCL(w/custody) Throughput vs RTT & Loss for 40000 Byte Bundles . . . 53

    4.17 LTPCL Throughput vs RTT . . . . . . . . . . . . . . . . . . . . . . . . . . 54

    4.18 Comparing LTPCL and TCPCL Throughput vs RTT . . . . . . . . . . . . . . 55

    4.19 LTPCL Throughput vs Loss . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

    4.20 Comparing LTPCL and TCPCL Throughput vs Loss . . . . . . . . . . . . . . 59

    4.21 LTPCL Throughput vs RTT & Loss for 5000 Byte Bundles . . . . . . . . . . . 60

    4.22 LTPCL Throughput vs RTT & Loss for 15000 Byte Bundles . . . . . . . . . . 61

    4.23 Comparing LTPCL(w/o custody) and LTPCL(w/custody) Throughput vs Delay 62

    4.24 Comparing LTPCL(w/o custody) and LTPCL(w/custody) Throughput vs Loss . 63

    4.25 LTPCL(w/custody) Throughput vs RTT & Loss for 5000 Byte Bundles . . . . 64

    4.26 LTPCL(w/custody) Throughput vs RTT & Loss for 15000 Byte Bundles . . . . 65


    1 Introduction

    Environments with limited network connectivity, high losses and delays require

    special networks for communication. Terrestrial networks which generally use TCP/IP do

    not perform well under such conditions. Deep space exploration, wireless and sensor

    networks, etc. operate in such challenging environments, so it was important to develop a

    solution for them. Terrestrial networks can be described as

    having the following characteristics:

    • delay in the order of milliseconds

    • low error rates

    • continuous connectivity

    • high bandwidth

    The protocols that have been developed for terrestrial networks exploit the above

    mentioned benefits. For example, TCP performs a three-way handshake between two

    communicating entities to establish connection between them. In a high bandwidth

    environment with negligible delay, the additional time required to establish a connection

    will be insignificant. Furthermore, during a TCP transmission with packet loss, TCP will

    block the delivery of subsequent packets until the lost packet is retransmitted and this

    would lead to under-utilization of the bandwidth. Since terrestrial networks have

    continuous connectivity, under-utilization of the bandwidth will not affect data delivery.

    The sender will store the data until it receives an acknowledgement from the receiver

    because if any packets are lost during transmission, the lost packets can be retransmitted.

    Since the round trip time is in the order of milliseconds, the sender's retransmission buffer

    will only need to store the sent data for a few milliseconds or seconds.


    On the other end of the spectrum, a challenged network can be defined by the following

    characteristics:

    • delay in the order of minutes/hours

    • high error rates depending on the environmental conditions

    • disrupted connectivity

    • limited bandwidth

    • asymmetric bandwidth

    Burleigh et al. [2003] explains why using an Internet protocol on a delayed/disrupted

    link is not advisable. When TCP/IP is used in disrupted networks, the connection

    establishment by the 3-way handshake will be an overhead. The high round-trip time for

    the 3-way handshake will delay the flow of application data. This is unfavorable in an

    environment where connection can be frequently disrupted. Moreover, if there is limited

    opportunity to send data, using that limited connectivity to establish a connection will be a

    misuse of the available bandwidth. Similarly, waiting on lost packets will lead to

    under-utilization of the bandwidth. It is important to maximize the usage of bandwidth

    when there is connectivity instead of waiting for lost packets. End-to-end TCP

    retransmissions will cause the retransmission buffer at the sender's end to retain the data it

    sends for long periods of time. These periods of time can range in length from a few

    minutes to hours [Burleigh et al. 2003].

    There are a few modified versions of TCP with extensions specifically for deep space

    communication, like Space Communications Protocol Standards-Transport Protocol

    (SCPS-TP) [Durst et al. 1996], and TCP peach [Akyildiz et al. 2001]. In addition, there

    are also some protocols/architectures which were developed for interplanetary

    communication, like Satellite Transport Protocol (STP) [Henderson and Katz 1999], and


    Performance enhancing transport architecture (PETRA) [Marchese et al. 2004]. Most of

    these protocols and architectures solve the high delay, loss, and asymmetric bandwidth

    problems. Delay/Disruption Tolerant Networks [Fall 2003; Burleigh et al. 2003] is one

    such architecture that has been designed to solve the problems posed by such

    environments. Delay tolerant networks are deployed for deep space communication

    [Wood et al. 2008, p.1], networks for mobile devices and military networks [Fall 2003].

    Delay and Disruption Tolerant networks are generally deployed in environments

    where the source and destination nodes do not have end-to-end connectivity. In areas with

    limited networking infrastructure or areas where communication networks are difficult to

    build, it is easier and cheaper to set up Delay Tolerant Networks. Farrell and Cahill [2006]

    explains a number of interesting applications of DTNs. Some of these applications are

    detailed in the following paragraphs.

    Deep-space communication is an excellent example of an environment that deploys

    Delay/Disruption Tolerant networks. The round-trip times for space links are on the order

    of minutes (Mars is anywhere between 9 to 50 minutes depending on the location of Earth

    and Mars on its orbit [Akan et al. 2002]). The communication link between nodes/entities

    in space is intermittent because there may be points in time when the nodes are not in line

    of sight of each other. Losses are also common in deep space communication because of

    low signal strength over large distances.

    A good example for DTN is the communication between Mars rovers and ground

    stations. A Mars rover could transmit data directly to earth stations, but doing so would be

    energy-intensive and could drain the power of the rover. Thus, it would be reasonable to

    use a relay to transmit the data, so the rover can conserve power. Rovers on Mars collect

    data and pictures which need to be transmitted to a ground station on earth. After the rover

    collects the data, it will wait until it can make contact with the Mars orbiter. Therefore, the

    rover stores the data until it can forward it on to the orbiter. The rover transfers the data to


    the orbiter when it can make contact. The Mars orbiter will wait until it is in line of sight

    of the Deep Space Network antenna on earth, so it can relay that data from the rover.

    Juang et al. [2002] explains another interesting application of Delay Tolerant

    Networks called Zebranet. Zebranet tracks the migration of zebras in central Kenya.

    There are a number of options available to track zebras. One option is to have collars for

    the zebras that use sophisticated GPS to track their positions and upload the data directly

    to a satellite. Since these collars will be powered by solar cells and a satellite upload will

    drain the cell quickly and make the collar useless, this option is not viable. Another

    alternative is to build a network infrastructure, but building a network infrastructure in the

    wilderness would be an expensive alternative.

    The cost effective way of doing this is to use Delay Tolerant Networks. In this case,

    the zebras will be fitted with collars that have a GPS, a short range radio and solar cells to

    power the collar. The GPS wakes up periodically to record the position of the zebra.

    When a collar is in close range with another collar, the radio will transmit all the data of

    one zebra to the other zebra and vice versa. When a node comes within close range of

    multiple nodes, it will aggregate data from multiple zebras. Finally, when a researcher

    drives by and comes within range of any of these collars, it will transfer the aggregated

    data to its final destination, which is the base station. The collars will delete the

    accumulated data once they have transferred it to the base station [Juang et al. 2002].

    DTN Architecture (explained in Chapter 2) is an overlay architecture which can

    function on top of multiple transport layers; thus making DTN deployable in many

    different environments that support various transport layer protocols. DTN uses a standard

    message format called Bundle Protocol (BP)[Scott and Burleigh 2007] for transferring

    data. An extra layer of reliability can be added to BP by using custody transfer, which is

    explained in Chapter 2. Multiple transport layers can be used under BP by using the

    corresponding convergence layers above the transport layer. These convergence layers


    enhance the underlying transport layer protocol and use the transport protocol to send and

    receive bundles. Some of the Convergence layer protocols that DTN supports are TCP

    [Demmer and Ott 2008], UDP [Kruse and Ostermann 2008], LTP [Burleigh et al. 2008]

    and Saratoga [Wood et al. 2008]. The TCP convergence layer (TCPCL) uses TCP, but the

    other three convergence layers use UDP to send and receive bundles. LTP is also designed

    to use Consultive Committee for Space Data Systems (CCSDS) link layer

    protocol[CCSDS 2006] and is currently being developed to use DCCP [Kohler et al.

    2006].

    Interplanetary Overlay Network (ION) implements DTN architecture and it was

    developed by the NASA Jet Propulsion Lab. ION was intended to be used on embedded

    systems such as space flight mission systems, but it can also be used to develop DTN

    protocols and test the performance of DTN protocols. DTN2[DTNRG 2010], DTN1

    [DTNRG 2010], IBR-DTN [Doering et al. 2008] are some of the other implementations of

    DTN architecture.

    There has been earlier work that evaluates the performance of TCPCL implemented

    by DTN2, LTP implemented by ION, and TCPCL-LTPCL hybrid for long-delay cislunar

    communication in Wang et al. [2010]. This thesis, unlike previous works, evaluates the

    maximum performance that can be expected from these convergence layers under various

    conditions of delay and losses. The tests are conducted with a 4 node setup, with the

    capability of modifying the link characteristics of the middle link. The throughput of

    TCPCL and LTPCL is compared for different delays, losses, combination of delays and

    losses, and bundle sizes. Performance is evaluated for BP with custody transfer and

    without custody transfer.

    Some of the main contributions of this thesis are an LTP dissector for Wireshark and

    SBP Iperf. The LTP dissector decodes LTP data in Wireshark. This has proved to be a very

    useful tool for debugging and analyzing LTP behavior. SBP Iperf is a network performance tool


    that is used with DTN protocols. All the performance tests in this thesis were performed

    using SBP Iperf.

    Chapter 2 gives an insight into DTN architecture, BP and the convergence layers that

    were evaluated. Chapter 3 discusses the experimental setup, test parameters, and testing

    tools. Chapter 4 presents the results of the performance tests and analyzes the result of the

    tests. Chapter 5 concludes with some recommendations for future work.


    2 DTN Architecture and Convergence Layers

    2.1 DTN Architecture

    Fall [2003] proposes an overlay architecture that will function above transport layer

    protocols like TCP, UDP, and sensor network transport protocols. The architecture

    suggests that networks be divided into regions, which can be connected by DTN gateways.

    The nodes within a region can send and receive data without using the DTN gateway. The

    DTN gateway will act as a bridge between the 2 regions; therefore, the DTN gateway

    should understand the transport protocols of both regions. The naming convention

    consists of 2 parts: region name and entity name. The region name should be unique, but

    the entity name needs to be unique only within the region. Hence, while routing packets in

    delay tolerant networks, the DTN gateways use the region names for routing. Only the

    DTN gateway at the end of the destination region needs to resolve the entity name.

    Selecting a path and scheduling a packet transfer in DTN is performed by the use of a

    list of contact information of the nodes. The contact information consists of the relative or

    absolute time when contact can be made with other nodes, the delay between nodes, and

    the data rate that the link will support. This method of routing is contact graph routing.

    Other than contact graph routing, there are other routing techniques that can be used

    depending on the environment. The zebranet explained in the previous chapter uses flood

    routing, where each node transfers all data to every node it comes in contact with.
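
    As a toy illustration of such a contact list (the names and fields below are hypothetical, not ION's actual contact plan format, which appears in Appendix A), each entry records when a neighbor is reachable, at what rate, and with what delay, and forwarding simply picks the earliest contact toward the chosen next hop:

        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class Contact:
            """One contact plan entry: a window in which `neighbor` is reachable."""
            neighbor: str
            start: float          # contact start time (seconds, relative or absolute)
            end: float            # contact end time
            rate_bps: int         # data rate the link supports during the contact
            one_way_delay: float  # propagation delay to the neighbor (seconds)

        def next_contact(plan: List[Contact], neighbor: str, now: float) -> Optional[Contact]:
            """Earliest contact with `neighbor` that is still open at or after `now`."""
            usable = [c for c in plan if c.neighbor == neighbor and c.end > now]
            return min(usable, key=lambda c: max(c.start, now), default=None)

        plan = [Contact("orbiter", 600, 1200, 256_000, 0.02),
                Contact("orbiter", 4200, 4800, 256_000, 0.02)]
        print(next_contact(plan, "orbiter", now=1300))  # second window, starting at 4200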

    The routing nodes can either have persistent or non-persistent storage. The persistent

    routing nodes will support custody transfer, which implies that the routing node will store

    a packet/bundle that it receives in persistent storage and send an acknowledgement to the

    sending node. The routing node will store the bundle until it has successfully transferred it

    to the next node that accepts custody. Custody transfer of the overlay architecture is an

    important concept of DTN because it gives reliability to the architecture amidst losses and


    Figure 2.1: Bundle Protocol in Protocol Stack

    delays. Some of the transport layer protocols like TCP, LTP, etc. also provide reliability.

    Therefore, custody transfer will add a second layer of reliability. Since this overlay

    architecture functions over transport layer protocols, this architecture would need to

    enhance the underlying transport protocols that will be used by using a convergence layer

    above the transport protocol. If the transport layer protocol is a connection oriented

    protocol, then it is the responsibility of the convergence layer to restart connectivity when

    the connection is lost. Convergence layers are explained more in depth later in this chapter

    [Fall 2003, p.30 - 33].


    2.2 Bundle Protocol

    The BP is the overlay protocol that is used in the DTN architecture. This protocol

    gives the DTN architecture the capability to store a bundle and then forward it to the next

    node. A Bundle is a collection of data blocks that is preceded by a common header. As

    shown in Figure 2.1, BP operates on top of the transport layer protocol, and it is designed

    to function over a wide range of networks including deep space networks, sensor

    networks, and ad-hoc networks. Wood et al. [2009] closely studies the BP and examines

    some of the BP's design decisions. Every delay tolerant network differs vastly in

    characteristics, and it is difficult to find a general networking solution to the problem.

    Therefore, DTN architecture decided to implement a generalized message format known

    as BP, which could be used over a variety of networks. Hence the protocol by itself is not

    the solution to delay tolerant/disrupted networks; however, it needs to work in cooperation

    with convergence transport layer protocols or transport layer protocols designed specially

    for delay tolerant networks. This protocol only provides a standard format for data but

    most of the support for a disrupted network is provided by the convergence layers

    implemented in the architecture [Wood et al. 2009].

    Figure 2.2 shows the bundle structure as specified in Scott and Burleigh [2007]. A

    bundle consists of a primary block followed by one or more payload blocks. Most of the

    fields in the BP structure are Self-Delimiting Numeric Values (SDNVs) [Eddy and Davies

    2011], which helps to reduce the size of the bundle. Using SDNVs also helps to make the

    design scalable for the various underlying network layers that the bundle protocol might

    use.
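
    As a sketch of how this encoding works (an illustration of the SDNV scheme, not code from the thesis), each byte carries seven value bits and the high bit marks that another byte follows, so small values cost a single byte while larger values simply use more bytes:

        def sdnv_encode(value: int) -> bytes:
            """Encode a non-negative integer as an SDNV: 7 value bits per byte,
            high bit set on every byte except the last."""
            if value < 0:
                raise ValueError("SDNVs encode non-negative integers only")
            chunks = [value & 0x7F]
            value >>= 7
            while value:
                chunks.append((value & 0x7F) | 0x80)
                value >>= 7
            return bytes(reversed(chunks))

        def sdnv_decode(data: bytes, offset: int = 0):
            """Decode one SDNV starting at `offset`; return (value, next_offset)."""
            value = 0
            while True:
                byte = data[offset]
                offset += 1
                value = (value << 7) | (byte & 0x7F)
                if not byte & 0x80:      # high bit clear => last byte of the SDNV
                    return value, offset

        # Small values stay small on the wire; larger ones grow one byte at a time.
        assert sdnv_encode(127) == b"\x7f"
        assert sdnv_encode(128) == b"\x81\x00"
        assert sdnv_decode(sdnv_encode(123456))[0] == 123456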

    The following are the fields in the bundle structure:

    The bundle processing flags are sub-divided into general flags, class of service and

    status report.


    Figure 2.2: Bundle Protocol Structure


    The block length is an SDNV of the total remaining length of the block.

    The dictionary offsets represent the offsets of specified fields in the dictionary. It

    contains the scheme offset and scheme specific part (SSP) offset of the destination,

    source, report-to, and custodian. Hence, the destination scheme offset is the position

    within the dictionary which contains the scheme name of the endpoint. Similarly,

    the destination SSP offset holds the offset within the dictionary which contains

    scheme-specific part of the destination endpoint.

    The next fields in the primary bundle block are the creation timestamp and creation

    timestamp sequence number, both of which are SDNVs. The creation timestamp is the

    UTC when the bundle was created. If more than one bundle is created in the same

    second, then the bundles are differentiated by the incrementing bundle timestamp

    sequence number.

    Dictionary information contains the dictionary length which is the SDNV of the size

    of the dictionary and the dictionary itself. The dictionary accommodates the scheme

    and the scheme specific parts which are referenced by the offsets earlier in the block.

    The last two fields in the primary bundle block (fragment offset, payload size) are

    included in the block only if the bundle processing flags signal that the bundle is a

    fragment. The fragment offset is the offset from the beginning of the actual

    application data, and payload size is an SDNV of the total application data of the

    bundle.

    The primary bundle block is usually followed by one or more canonical bundle

    blocks. The general format of the bundle block is also represented in Figure 2.2. The

    bundle block has a 1 byte type field, and if the bundle block is a payload block then type

    field will be 1. The type field is followed by bundle processing flags which can set certain


    special bundle features, like custody transfer, etc. The length field, which is an SDNV,

    represents the size of payload field, and the payload field contains the application data

    [Scott and Burleigh 2007].

    In the DTN Architecture, the application hands over data to a local bundle agent, and

    the bundle agent is responsible for getting the data to the bundle agent at the destination.

    Bundling of the data is done by these bundle agents. The bundle agent applies routing

    algorithms to calculate the next hop to forward the bundle. The bundle agent then

    advances the bundle to the corresponding convergence layer adapter to forward the bundle

    to its next hop. In environments where DTNs are applied, there is a high probability that

    the forwarding will fail because of loss or lack of connectivity. If the convergence layer

    adapters return with failure due to lack of connectivity, the bundle agent will wait until

    connection is restored and then reforward the bundle. However, if a bundle is lost due to

    network losses, then it will depend on the reliability of the transport layer protocol and/or

    the reliability of the bundle protocol to ensure the bundle gets from one hop to the next.

    On reception of a bundle, the bundle agent needs to compute the next hop and continue

    the same steps mentioned above. However, if a bundle is destined for a local endpoint, the

    application data will be sent to the application after reassembly.

    BP can add a layer of reliability by using an optional service called custody transfer.

    Custody transfer requests that a node with persistent storage store the bundle if it has

    storage capacity. The node that accepts custody of a bundle will transmit a custody signal

    to the node that previously had custody of the bundle. On reception of the custody signal,

    the previous custodian will delete the bundle from the custodian's storage and/or

    retransmission buffer. The previous custodian is not required to wait until the bundle

    reaches the destination to delete the bundle from its storage. The responsibility of reliably

    transferring the bundle to the destination lies with the node that accepted custody of the

    bundle. Therefore, if the bundle is lost or corrupted at some point of time, it is not the


    responsibility of the sender to resend the bundle, but the responsibility of the node that

    accepted custody or the custodian of that bundle to resend the bundle.

    Custody transfer not only ensures that the bundle reaches the destination from the

    source, but it also shifts the accountability of the bundle in the direction of the destination.

    Every time a node accepts custody of a bundle, it will free the previous custodian's

    resources. On the other hand, bundle transmission without custody transfer relies on the

    reliability of the transport protocol.

    BP is a relatively new protocol, so there are still some undecided issues. Wood

    et al. [2008] and Wood et al. [2009] list the problems with the current design of the BP.

    The following are some of the significant concerns:

    Synchronizing time on all the nodes is important for BP because a bundle can be

    rejected if the time stamp check claims that the bundle has expired due to

    unsynchronized clocks on the sender and receiver.

    More than one naming scheme is currently used in different BP implementations

    and each of the naming schemes has separate rules for creating endpoint identifiers

    (EIDs).

    There is no accepted routing protocol, essentially because there is more than one

    naming scheme. Dynamic routing protocols will help to improve the scalability of

    the network.

    2.3 Convergence Layers

    The convergence layer acts as an interface between the BP and the transport layer

    protocol. Figure 2.3 shows the position of the convergence layer protocol in the protocol

    stack. Since BP is an overlay protocol that can be used above different transport layer

    protocols, the corresponding convergence layer protocol needs to be used to allow data


    flow from BP to the transport protocol and vice versa. The main function of this layer is to

    aid the underlying transport layer. As mentioned before, TCPCL, UDPCL, LTP, and

    Saratoga are some of the DTN convergence layer protocols.

    Figure 2.3: Convergence Layer Protocol in Protocol Stack

    A DTN that uses TCP for communication will require a TCP Convergence layer

    between the BP and TCP. The convergence layer enhances the underlying transport

    protocol by adding some additional functionality to the transport layer protocol which

    might make it suitable in extreme environments. For example, the convergence layer for a

    connection oriented protocol will have to maintain the connection state. Thus, if

    connection is lost for any reason, the convergence layer will try to re-establish connection.


    Additionally, if there are transport protocols that do not provide congestion control, then

    the convergence layer adds this functionality to the stack.

    2.3.1 TCP Convergence Layer

    DTN uses TCPCL when the transport layer protocol being used is TCP. TCP is a

    reliable protocol, and it ensures that a packet reaches from one node to another. Every

    packet that a node sends has to be acknowledged by the receiver. If a packet is lost in

    transmission and the sender does not receive an acknowledgement for it, then the sender

    will retransmit the packet. Timers on the sender side will keep track of the time when the

    acknowledgment is expected. When the timer runs out, the packet is retransmitted.

    Similarly when an acknowledgment is lost, the sender will have to retransmit the packet.

    TCP limits the amount of outstanding data using a congestion window. The congestion window size

    dictates the amount of data that is in the network. TCP also uses certain congestion

    control algorithms, when there is loss of packets. TCP is designed to conclude that packet

    losses are because of congestion. Slow start, congestion avoidance, fast retransmit, and

    fast recovery [Allman et al. 2009] are used for congestion control. A TCP connection

    starts with slow start and it is again applied when there is a retransmission timeout. The

    congestion window size starts at 1 segment size and during slow start this congestion window

    size is increased by 1 segment size for every acknowledgment received; thus, increasing

    the congestion window size exponentially. Furthermore, TCP performs congestion

    avoidance when the congestion window size either reaches the slow start threshold or

    when there is data loss. The initial size of the slow start threshold is fixed high, which is

    usually the receiver window size.

    In the congestion avoidance phase, every round trip time, the congestion window size

    increases by 1 segment size. However, the congestion window cannot exceed the receiver

    window size. When a packet is lost, the receiver will send duplicate ACKs for the last


    packet in sequence that it received. On receiving 3 duplicate ACKs, the sender will

    retransmit the missing segments. This algorithm is called fast retransmit because the

    sender does not wait for the retransmission timer. During fast retransmit, the congestion

    window size is reduced to half its previous value, as is the slow start threshold. Fast

    recovery phase follows fast retransmit. During fast recovery the lost packet is

    retransmitted and the congestion window size is advanced by 1 segment size each time a

    duplicate acknowledgement is received. When the receiver acknowledges all the missing

    data, the congestion window size drops to the value of the slow start threshold and returns

    to the congestion avoidance phase. On the other hand, if it does not receive an

    acknowledgment there will be retransmission timeout. This timeout causes the congestion

    window to drop to 1 segment size and the connection returns to slow start.
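
    As a rough illustration of the window dynamics described above, the sketch below simulates an idealized congestion window per round trip (window counted in segments, no receiver-window limit, every loss recovered by fast retransmit). It models only the textbook behavior, not any particular TCP implementation:

        def simulate_cwnd(rtts, ssthresh=64, loss_at=frozenset()):
            """Idealized per-RTT congestion window trace (in segments).

            Slow start doubles cwnd each RTT until ssthresh, congestion avoidance
            adds one segment per RTT, and a loss handled by fast retransmit halves
            both ssthresh and cwnd.
            """
            cwnd = 1
            trace = []
            for rtt in range(rtts):
                if rtt in loss_at:              # triple-duplicate-ACK recovery
                    ssthresh = max(cwnd // 2, 2)
                    cwnd = ssthresh
                elif cwnd < ssthresh:           # slow start
                    cwnd *= 2
                else:                           # congestion avoidance
                    cwnd += 1
                trace.append(cwnd)
            return trace

        # Doubling until ssthresh=8, then +1 per RTT, halved by a loss at RTT 6.
        print(simulate_cwnd(10, ssthresh=8, loss_at={6}))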

    The transport layer implements both congestion control and flow control. Hence

    TCPCL does not have to perform any congestion or flow control. TCP also ensures

    reliable data delivery; therefore, TCPCL can assign the responsibility of reliable data

    transfer to TCP. On the other hand, the convergence layer's responsibilities include

    re-initiating connection and managing message boundaries for the TCP stream.

    Demmer and Ott [2008] proposes a TCP-based convergence layer protocol. Similar

    to TCP's three-way handshake, TCPCL establishes a connection by exchanging contact

    headers. This exchange sets up connection parameters, like keep alive period,

    acknowledgements, etc. Figure 2.4 shows an example TCPCL message exchange. When

    NodeA needs to send a bundle to NodeB using TCPCL, NodeA first needs to establish a

    TCPCL connection with NodeB. To establish a TCPCL connection, NodeA sends a

    contact header to NodeB. NodeB responds with a contact header and the connection

    parameters are negotiated. Following connection establishment, both the nodes can

    exchange data. TCPCL has an option to add an extra layer of reliability by using bundle

    acknowledgements. If the 2 communicating nodes decide during the contact header


    exchange that they will support acknowledgements, then each bundle that is transmitted

    has to be acknowledged by the receiver. The nodes also send keep-alives back and forth;

    the keep-alives are a method to ensure that the nodes are still connected.

    Figure 2.4: TCPCL Connection Lifecycle

    TCP/IP is already considered to be a chatty protocol due to the three-way handshake

    and acknowledgements. Exchange of contact headers and TCPCL acknowledgements

    adds an extra layer of chattiness. In environments with limited connectivity and

    bandwidth the additional packets and bundles that are exchanged for connection setup can

    affect the performance of the convergence layer negatively. In theory, there is a possibility


    of 3 layers of reliability using TCPCL over TCP - TCP reliability, TCPCL reliability using

    TCPCL acknowledgements, which is optional, and BP reliability using custody transfer.

    Akan et al. [2002] presents the performance of different versions of TCP on deep

    space links. The performance of TCPCL can be expected to be very similar to TCP.

    According to the test results in Akan et al. [2002], TCP recorded a throughput high of

    120KB/sec on a 1 Mbps link with no delay, and the throughput drops down to 10 bytes/sec

    when the round trip time (RTT) is 40 minutes.

    2.3.2 LTP Convergence Layer

    LTP was specially designed for links with high delays and intermittent connectivity.

    LTP is used over space links or links with long round-trip time as a convergence layer.

    Similar to TCP, LTP acknowledges bundles that it receives and retransmits any lost

    bundles. However, unlike TCP, LTP does not perform a 3-way handshake for connection

    setup. LTP is generally used above the data link layer on space links, but for testing

    purposes on terrestrial networks it is also used over UDP.

    Burleigh et al. [2008] explains the objective behind the design of this protocol and

    the reasoning behind the design decisions. Some of the important design decisions are

    listed below:

    To utilize the intermittent link's bandwidth efficiently, LTP uses the concept of

    sessions. Each LTP data block is sent using a session and LTP will open as many

    sessions as the link permits. This will allow multiple data blocks to be sent over the

    link in parallel.

    LTP retransmits unacknowledged data, which is what makes LTP a reliable

    protocol. The LTP receiver uses report segments to let the sender know all the data

    segments the receiver received successfully.


    Connection setup has been omitted in LTP because in an environment with long

    delay and limited communication opportunities, a protocol that needs connection

    setup will under-utilize the bandwidth.

    Unlike TCP, LTP connections are unidirectional; therefore, communication between

    nodes talking to each other on LTP will have two unidirectional connections open.

    Ramadas [2007] and Burleigh et al. [2008] describe the operation of the LTP

    protocol. A basic LTP operation without link errors or losses is depicted in Figure 2.5.

    The sender opens up a session to send an LTP data block. As mentioned earlier, a node can

    open as many sessions as the link will permit. This is also a flow control mechanism

    because bandwidth is limited by the number of sessions that can be opened

    simultaneously. Each time a node sends an LTP data block, it will use a different session

    number. The last data segment of a data block is a checkpoint/End of Block (EOB)

    segment. When the receiver receives a checkpoint, it will respond with a report segment.

    The report segment is an acknowledgement of all the data segments of the data block that

    it received. On receiving the report acknowledging all the segments of the bundle block,

    the sender closes the session and sends a report acknowledgement. The receiver will close

    the import session on reception of the report acknowledgement.

    The following figure, Figure 2.6, shows an LTP session on an error-prone network.

    The sender will send all the data segments of the block. If some of the data segments are

    lost during transmission, the receiver will report all the segments it received in its report

    segment. This will let the sender know which data segments it needs to retransmit. The

    sender responds with a report acknowledgement, and then it retransmits the lost segments.

    After the sender retransmits all the lost segments, it will send a checkpoint segment to the

    receiver. On reception of the checkpoint, the receiver will respond with another report

    segment with a report of all the segments it received. If the report claims to have received


    Figure 2.5: LTPCL Connection Lifecycle

    all the segments of the block, the sender will close the session and send a report

    acknowledgement. When the receiver receives the report acknowledgement, it will close

    the LTP session [Ramadas 2007; Burleigh et al. 2008].
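
    The retransmission decision described above amounts to a simple set difference: whatever the report segment does not claim must be sent again before the next checkpoint. The sketch below is a simplification (real LTP reports describe received byte ranges within a block rather than segment indices):

        def segments_to_retransmit(sent, report_claims):
            """Segments of a block not covered by the receiver's report segment."""
            return set(sent) - set(report_claims)

        # A block sent as segments 0..9; the report claims everything except 3 and 7.
        sent = range(10)
        report = set(sent) - {3, 7}
        print(sorted(segments_to_retransmit(sent, report)))  # [3, 7]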

    Sessions are the flow control mechanism in LTP. LTP can be configured to open a

    certain number of sessions, so when all the sessions are being used, data blocks that need

    to be sent have to wait until a previous session closes. LTP can technically transmit at the

    link's bandwidth, limited only by the number of sessions that LTP can open

    simultaneously.


    Figure 2.6: LTPCL Error Connection Lifecycle


    3 Experiment Setup

    This chapter explains the hardware and software configuration of the test setup.

    3.1 Hardware Configuration

    Figure 3.1: Physical Testbed Setup

    Figure 3.1 shows the physical setup of the testbeds. The 4 testbeds are connected to

    each other using a 100Mbps Fast Ethernet Switch. The logical testbed setup for the tests is

    shown in Figure 3.2. Node1 is assigned as the source node and Node4 is assigned as the

    destination node. All the nodes have host-specific routes. Node1 has to make 2 hops to get

    to Node4 through Node2 and Node3. Similarly Node4 also needs to make 2 hops to reach

    Node1 through Node3 and Node2. The routing information is present in the ION

    configuration files presented in Appendix A.


    Figure 3.2: Logical Testbed Setup

    Table 3.1 provides details about the individual testbeds. The tests are done on Linux

    machines, running Linux kernel version 2.6.24. All the testbeds are 32-bit machines

    except for Node2. Node2 is a 64-bit machine, so that ION will allocate more small pool

    memory. This will ensure that it doesn't run out of small pool memory during the tests.¹

    Table 3.1: Testbed Configuration

    Configuration          Node1                       Node2              Node3                       Node4
    Operating System       Ubuntu 8.04                 Ubuntu 8.04        Ubuntu 8.04                 Ubuntu 8.04
    Linux Kernel Version   2.6.24                      2.6.24             2.6.24                      2.6.24
    CPU                    Intel Pentium 4, 2.80GHz    AMD Turion 64 X2   Intel Pentium 4, 3.00GHz    Intel Pentium 4, 2.80GHz
    Memory                 1GB                         2GB                1GB                         1GB
    Architecture           32-bit                      64-bit             32-bit                      32-bit

    ¹ ION can allocate a maximum of 16MB of small pool memory on a 32-bit machine and a maximum of

    256MB on a 64-bit machine. The small pool memory is used to store bundle-related information within ION. Since Node2 might

    have to retain bundles in the system when there are losses and delays, it is important for Node2 to have more

    small pool memory. Otherwise Node2 will run out of small pool memory and crash ION.


    3.2 Software Configuration

    Table 3.2 lists some of the important experimental factors used in the performance

    tests. All the testbeds run an open-source version of Interplanetary Overlay Network, which

    is an implementation of the DTN architecture from NASA's Jet Propulsion Lab. The number

    of SYNs/SYN-ACKs that TCP will send is increased from the default value of 5 to 255 by setting

    the kernel parameters net.ipv4.tcp_syn_retries = 255 and net.ipv4.tcp_synack_retries =

    255. This ensures that TCP does not give up on connection establishment over high delay

    links. The bundle lifetime within ION is set to 10 hours. This helps to ensure that the

    bundles do not expire during the duration of the test. The tests are performed over 2 sets

    of DTN protocol stacks - BP over TCPCL over TCP/IP and BP over LTP over UDP.

    Table 3.2: Experimental Settings and Parameters

    Parameters                Values
    DTN implementation        ION Opensource
    DTN Convergence Layers    LTP (ION v2.4.0), TCP (ION v2.3.0)
    Bundle Size (bytes)       200 - 40000
    Channel Bandwidth         100 Mbits/sec
    RTT (sec)                 0 - 20
    Loss (%)                  0 - 10

    The throughput of the 2 protocols (TCP, LTP) is measured for bundle sizes varying

    from 200 bytes to 40000 bytes. The behavior of these protocols in extreme environments

    is modeled by measuring the throughput for various values of round-trip time and packet

    loss. We also measure the throughput in environments with some delay and loss. The


    delay or loss on this multi-hop link between Node1 and Node4 is simulated on Link2

    between Node2 and Node3. Netem [Hemminger 2005] is used on Node2 and Node3 to

    emulate delay and loss. The netem rules have been set up such that the delay or loss is

    applied to incoming traffic on Node2 and Node3. The network emulation rules are applied

    to incoming traffic rather than outgoing traffic because if loss is applied to outgoing traffic,

    the loss is reported to the higher layers, and protocols like TCP will retransmit the lost

    packets, which is not the desired effect.
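
    For reference, netem rules of this kind are attached with the standard tc tool. The sketch below (the interface name and values are examples, not the exact rules used in the tests) shells out to tc from Python; note that it configures the egress side, whereas applying rules to incoming traffic, as done here, is usually achieved by first redirecting ingress traffic through an ifb device:

        import subprocess

        def apply_netem(interface: str, delay_ms: int = 0, loss_pct: float = 0.0) -> None:
            """Attach a netem qdisc with the given delay and loss to an interface."""
            cmd = ["tc", "qdisc", "replace", "dev", interface, "root", "netem"]
            if delay_ms:
                cmd += ["delay", f"{delay_ms}ms"]
            if loss_pct:
                cmd += ["loss", f"{loss_pct}%"]
            subprocess.run(cmd, check=True)      # requires root privileges

        def clear_netem(interface: str) -> None:
            """Remove any qdisc previously attached to the interface."""
            subprocess.run(["tc", "qdisc", "del", "dev", interface, "root"], check=True)

        if __name__ == "__main__":
            # Hypothetical example: 1 s one-way delay and 2% loss on eth1, which gives
            # a 2 s RTT when the same rule is mirrored on the peer node.
            apply_netem("eth1", delay_ms=1000, loss_pct=2.0)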

    All the tests use a tool called SBP Iperf, which is a variation of the commonly used

    Iperf. SBP Iperf is similar to Iperf except that it runs over DTN protocols. The SBP Iperf

    client runs on Node1, and the SBP Iperf server runs on Node4. The SBP Iperf client

    transmits data and SBP Iperf server waits until it receives all the bundles and responds

    back to the client with a server report. The SBP Iperf client and server code is presented

    in Appendix B.3.

    The client will use an incrementing sequence number in each of the data segments it

    sends. The last data segment will have a negative sequence number, which helps the

    server to determine that the client has stopped sending data. The SBP Iperf server will

    wait for a few seconds before it sends the server report. This will ensure that none of the

    out of order bundles are reported as lost bundles. This wait period can be adjusted by

    changing the timeout parameter on the server side. The server report contains the

    following information: throughput, loss, jitter, bundles transmitted, and transmission time.
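
    The sequence-number convention described above can be sketched as follows (a hypothetical helper, not the actual SBP Iperf code, which is listed in Appendix B.3): the client numbers its payloads 0, 1, 2, ... and negates the final number, and after its timeout the server derives the report from the sequence numbers it saw:

        def build_server_report(received_seqs, payload_bytes, elapsed_sec):
            """Summarize a run from the sequence numbers the server saw.

            A negative sequence number marks the final bundle; its absolute value
            tells the server how many bundles the client sent in total.
            """
            last = next((abs(s) for s in received_seqs if s < 0), None)
            if last is None:
                raise ValueError("final (negative) sequence number never arrived")
            total_sent = last + 1
            unique = {abs(s) for s in received_seqs}
            lost = total_sent - len(unique)
            throughput_kbps = len(unique) * payload_bytes * 8 / elapsed_sec / 1000
            return {"bundles": total_sent, "lost": lost,
                    "throughput_kbps": round(throughput_kbps, 1)}

        # Bundles 0..9 sent, bundle 4 lost, bundle 9 carries the negative end marker.
        print(build_server_report([0, 1, 2, 3, 5, 6, 7, 8, -9], 5000, 30.0))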


    4 Experiments, Result and Analysis

    4.1 TCP Convergence Layer Tests

    This section discusses the results of the performance tests with the TCPCL.

    4.1.1 TCPCL Without Custody Transfer

    The first set of tests uses Bundle Protocol over TCP Convergence Layer over TCP/IP.

    We measure the throughput for varying bundle sizes and round-trip times. In this test the

    SBP Iperf client transmits data for 30 seconds, and when the server receives all the data

    the server sends a report back to the client. As explained in the previous chapter, the

    round-trip time between Node2 and Node3 is varied using netem. This test does not use

    the custody transfer option in BP and the loss between the links is set to 0. Figure 4.1

    shows the TCPCL throughput in Kbits/sec for various round-trip times (seconds).

    [Figure: TCPCL throughput (Kbits/sec, log scale) plotted against RTT (sec) for bundle sizes of 200 to 15000 bytes]

    Figure 4.1: TCPCL Throughput vs RTT


    As we can see in Figure 4.1, throughput decreases with increase in round-trip time.

    This is because the TCP congestion window grows more slowly in real time when the

    RTT is high: the congestion window is doubled every round trip during slow start, so a

    longer round trip means the window takes longer to grow.

    Figure 4.2: Outstanding Window Graph for TCP Reno with 2 sec RTT

    The tests where RTT is greater than 0.4 seconds and less than 10 seconds use TCP

    Cubic instead of TCP Reno, which is used for the other tests. TCP Cubic yields better

    throughput for this range of RTT values since TCP Cubic is able to increase the


    Figure 4.3: Outstanding Window Graph for TCP Cubic with 2 sec RTT

    congestion window higher than TCP Reno. Figure 4.2 and Figure 4.3 show the difference

    in outstanding window when using TCP Reno and TCP Cubic on a link with a 2 second

    RTT. The outstanding window in the figures is depicted by the red line.

    Additionally, when the RTT is greater than 9 seconds (when the SYN is retransmitted

    for the third time), the sender will assume that the SYN packet is lost since it has not

    received the SYN/ACK from the receiver within 9 seconds. This behavior will trigger a

    retransmission timeout. Under normal circumstances, the slow start threshold (ssthresh) is


    set to the size of the advertised window. Contrary to this, when TCP detects loss due to

    congestion, TCP calculates the ssthresh as [Allman et al. 2009]

    ssthresh = max(FlightSize / 2, 2 × Maximum Segment Size)

    Since no data has been sent as a part of this connection, the flight size will be zero,

    thus setting ssthresh to twice the maximum segment size (MSS). The ssthresh and the

    size of the congestion window (cwnd) determine whether a TCP connection is in slow

    start or in congestion avoidance. If ssthresh is greater than cwnd, TCP will be in slow start

    phase and in slow start the congestion window will double every RTT. The TCP

    connection will go into congestion avoidance when the congestion window increases to a

    value greater than the ssthresh. The congestion window will increase by 1 segment size

    every RTT during congestion avoidance phase as mentioned earlier in chapter 2. Allman

    et al. [2009] also specifies that the initial window size is set to 1 maximum segment size

    when either a SYN or SYN/ACK is lost.

    Hence in the above mentioned case where the RTT is greater than 9 seconds, the

    initial window size is set to 1 MSS and the ssthresh will be set to twice the MSS. This low

    value of ssthresh will cause TCP to go into congestion avoidance after one roundtrip time.

    This behavior of TCP is shown in the TCP time sequence graph in Figure 4.4. Therefore, a

    connection with high RTT will take longer to reach its maximum congestion window size.

    This drop in the ssthresh value due to a retransmission timeout event during the TCP

    handshake period could be prevented if we could configure the initial timeout depending on

    the RTT of the link. However, the TCP version in Linux 2.6.24 does not allow us to set this

    value without recompiling the kernel, hence causing any connection with an RTT greater

    than 3 seconds to start in the congestion avoidance phase rather than slow start.
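
    A back-of-the-envelope check of the effect described here, using the rule from Allman et al. [2009] quoted above (the 1460-byte MSS is an assumed typical value, not a figure from the thesis):

        MSS = 1460                     # bytes, typical Ethernet segment size (assumed)

        def ssthresh_after_timeout(flight_size_bytes: int, mss: int = MSS) -> int:
            """RFC 5681 rule: ssthresh = max(FlightSize / 2, 2 * SMSS)."""
            return max(flight_size_bytes // 2, 2 * mss)

        # During the handshake no data has been sent, so FlightSize is 0.
        print(ssthresh_after_timeout(0))   # 2920 bytes, i.e. only 2 segments
        # With ssthresh capped this low, cwnd exceeds ssthresh after roughly one
        # RTT, and the window then grows by only ~1 MSS per RTT thereafter.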


    Figure 4.4: Time Sequence Graph of TCP Link with 20sec RTT

    We also notice that the throughput varies with bundle size. When bundle size is

    200 bytes, the throughput is approximately 2500Kbits/sec, and the throughput is 40 times

    higher when bundle size is 15000 bytes. This is so because ION makes approximately 64

    semop² calls for every bundle that is sent. We use a simple program that performs a

    million semop operations and calculate the time it takes to run the program. By running

    this program on one of the testbeds, we calculated that the time taken to perform one

    semop function call is 1.07 × 10^-6 seconds (sec).

    ² semop is a system call that performs semaphore operations. Semaphores are used for inter-process

    communication to regulate access to a common resource by multiple processes.


    1 bundle => 64 semop calls

    1 semop call => 1.07 × 10^-6 sec

    Semop overhead / bundle => 64 × 1.07 × 10^-6 sec

    Semop overhead / bundle => 68.87 × 10^-6 sec

    Hence there is a semop overhead per bundle that is sent. Other than the semop

    overhead, there is also a semaphore contention overhead within ION. Semaphores are used

    within ION for inter-process communication. The ION application will add the bundle to

    the forward queue and depending on the addressing scheme, it will release the scheme

    specific semaphore. The scheme specific forwarder will wait on the scheme specific

    semaphore. The forwarder, on grabbing this semaphore, will enqueue the bundle into the

    outduct queue and release the outduct semaphore. The outduct will wait on the outduct

    semaphore, and on receiving the semaphore, the outduct process will dequeue the bundle

    and send it out. Additionally, ION also uses semaphores when it makes changes to shared

    memory. The combination of all these semaphores will cause a contention overhead.

    We use a program called SystemTap, which can be used to get information about the

    Linux system. We tap ipc_lock() to record the amount of time ION waits on this function to

    get an approximate contention overhead time. Refer to Appendix B.2 for the SystemTap script.

    Semaphore contention overhead / bundle => 149.38 × 10^-6 sec

    Adding the semop overhead and the contention overhead for each bundle we get

    Total overhead / bundle => 218.25 × 10^-6 sec

    No. of bundles transmitted per second => 1 / (218.25 × 10^-6 sec)


    No. of bundles transmitted per second => 4582 bundles

    Hence for a bundle of size 200 bytes, the expected maximum throughput is

    Throughput of 200 byte bundles => 4582 × 200 bytes/sec

    Throughput of 200 byte bundles => 6.99 Mbps
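
    The same arithmetic can be reproduced directly from the two measured per-bundle overheads quoted above. Note that reproducing the 6.99 Mbps figure appears to assume 2^20 bits per megabit; the thesis does not state its unit convention, so that divisor is an inference:

        SEMOP_OVERHEAD_SEC = 68.87e-6          # semop overhead per bundle (from above)
        CONTENTION_OVERHEAD_SEC = 149.38e-6    # ipc_lock() contention per bundle

        total_overhead = SEMOP_OVERHEAD_SEC + CONTENTION_OVERHEAD_SEC   # 218.25e-6 s
        bundles_per_sec = 1 / total_overhead                            # ~4582
        throughput_bits = bundles_per_sec * 200 * 8                     # 200-byte bundles

        print(f"{bundles_per_sec:.0f} bundles/sec")
        print(f"{throughput_bits / 2**20:.2f} Mbit/s")   # ~6.99, using 2^20 bits/Mbit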

    The client reports the transmission rate at which the data left the client. The server,

    on receiving all the data, sends a server report back to the client reporting the rate at which

    it received the data. The client node reports a transmission rate of approximately 6.3Mbps

    for 200 byte bundles, which reduces to 3Mbps in the server report. The semop

    and contention overhead is incurred again at each of the 3 hops from client to

    server, which explains the difference in throughput between the client report and the server

    report. Since the number of bundles that can be transmitted is limited by semop and

    contention overhead, throughput will increase with increase in size of bundles.

    The next set of tests measures TCPCL throughput for different percentages of packet

    loss without custody transfer. In the tests, X% packet loss is defined as X packets out of

    100 packets are dropped from Node2 to Node3, and X packets out of 100 packets are

    dropped from Node3 to Node2. Like the previous set of tests, the SBP Iperf client in this

    case also sends data for 30 seconds and waits for the SBP Iperf server to respond back

    with the report. Figure 4.5 shows the throughput in Kbits/sec of TCPCL for different

    losses and the maximum theoretical throughput. We use the Mathis equation [Mathis et al.

    1997] to model the the maximum theoretical throughput for a TCP connection with packet

    loss. The Mathis equation is given by


    [Figure 4.5: TCPCL Throughput vs Loss. Log-scale throughput (Kbits/sec) vs loss (0-10%) for the theoretical maximum and bundle sizes of 200, 500, 1000, 1500, 2000, 5000, 7500, 10000 and 15000 bytes.]

    The Mathis equation is given by

        Bandwidth = (Maximum Segment Size × C) / (RTT × √p)        (4.1)

    where C is the Mathis constant and p is the loss probability.
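
    For reference, the bound in Equation 4.1 can be evaluated with a small helper like the one below. The value used for the constant C (about 1.22) and the example MSS, RTT and loss rate are our own assumptions, not values taken from the tests.

        /* mathis.c - evaluate the Mathis bound of Equation 4.1 */
        #include <math.h>
        #include <stdio.h>

        /* mss in bytes, rtt in seconds, p as a fraction; returns bits per second */
        static double mathis_bandwidth(double mss_bytes, double rtt_sec, double p)
        {
            const double C = 1.22;              /* assumed value of the Mathis constant */
            return (mss_bytes * 8.0 * C) / (rtt_sec * sqrt(p));
        }

        int main(void)
        {
            /* example only: 1448 byte MSS, 1 second RTT, 2% loss */
            printf("%.1f Kbits/sec\n", mathis_bandwidth(1448, 1.0, 0.02) / 1000.0);
            return 0;
        }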

    As Figure 4.5 shows, the theoretical throughput and the actual throughput are almost the same up to 2% loss, and the curves start to diverge from each other after 3% loss. As the loss increases, the chance of retransmission timeouts also increases.


    [Figure 4.6: TCPCL(w/o custody) Throughput vs RTT & Loss for 200 Byte Bundles. Log-scale throughput (Kbits/sec) vs RTT (0-5 sec) for 0%, .05%, 2% and 5% loss, with the theoretical maximum for each loss rate.]

    Unfortunately, the Mathis equation does not model retransmission timeouts, which is why the maximum theoretical throughput diverges from the actual throughput.

    TCP goes into its congestion avoidance phase when there is packet loss. When the TCP sender gets 3 duplicate acknowledgements from the receiver, it reduces the congestion window to half its previous size and retransmits the next unacknowledged segment. However, when one of the 3 duplicate acknowledgements is lost, or the retransmission itself is lost, a timeout event occurs. On a timeout, TCP reduces the congestion window to 1 maximum segment size and goes back into slow start. For smaller loss percentages, losing one of the triple duplicate acknowledgements or losing a retransmission is rare.


    [Figure 4.7: TCPCL(w/o custody) Throughput vs RTT & Loss for 5000 Byte Bundles. Log-scale throughput (Kbits/sec) vs RTT (0-5 sec) for 0%, .05%, 2% and 5% loss, with the theoretical maximum for each loss rate.]

    The occasional packet loss will halve the congestion window from time to time, but since timeout events are rare, the congestion window size gradually increases with time. On the other hand, a higher loss percentage increases the chance of timeout events due to the loss of one of the triple duplicate acknowledgements or the loss of the retransmitted packet, causing the connection to drop its congestion window to 1 MSS from time to time. This explains the drop in throughput after 3% packet loss.

    The last set of tests for TCPCL without custody transfer measures the throughput for combinations of loss and delay. The throughput is measured for 0, .05, 2 and 5% loss with delays ranging from 0 to 5 seconds. Unlike the previous 2 sets of tests, in this set we only transfer 25 MB of data from the SBP Iperf client to the SBP Iperf server.


    [Figure 4.8: TCPCL(w/o custody) Throughput vs RTT & Loss for 15000 Byte Bundles. Log-scale throughput (Kbits/sec) vs RTT (0-5 sec) for 0%, .05%, 2% and 5% loss, with the theoretical maximum for each loss rate.]

    This keeps the run time of the tests manageable: since the throughput in some of the extreme test cases is very low, their run time can otherwise be very long.

    Figure 4.6, Figure 4.7 and Figure 4.8 present the throughput of TCPCL without custody transfer for combinations of RTT and loss for 200 byte, 5000 byte and 15000 byte bundles respectively. The combination of loss and delay has a much more significant impact on throughput than delay or loss alone. The figures also show the maximum theoretical throughput obtained using Equation 4.1; the observed throughput values are very close to the maximum theoretical throughput given by the Mathis equation [Mathis et al. 1997].


    4.1.2 TCPCL With Custody Transfer

    This section presents the results of using the TCP Convergence Layer with custody transfer enabled in the BP layer. The same set of tests performed in Section 4.1.1 is repeated with custody transfer, and we also compare the throughput of TCPCL with and without custody transfer. In all the TCPCL with custody transfer test cases, the SBP Iperf client sends data for 10 seconds. The sending time is reduced when custody transfer is enabled because the ION nodes have to store bundles for a longer time: a node must wait for a custody signal from its neighboring node before destroying a bundle. Storing a large number of bundles can cause an ION node to run out of memory; hence, to ensure the tests do not fail, we reduce the amount of data the client sends. In this test scenario, the SBP Iperf client sends approximately 52000 bundles in 10 seconds when the bundle size is 200 bytes.

    The behavior of ION when custody transfer is enabled is as follows. When a node receives a bundle requesting custody transfer, a custody signal is sent to the previous custodian of the bundle. If the receiving node decides to accept custody of the bundle, it notifies the custodian with a custody accepted signal, and the custodian then deletes the bundle from its persistent storage. If, instead, the receiver discards the bundle, for reasons such as a redundant bundle, lack of memory, or an erroneous block, the receiver sends a custody error signal with a reason code to the custodian. The custodian takes the appropriate action depending on the error code; for example, if the receiver discarded the bundle because it had no memory, the custodian reforwards the bundle to another node that can transmit it to the desired destination.
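
    The receiver-side decision just described can be summarized by the sketch below. This is purely illustrative C and not ION code; the types, helper functions and reason codes are hypothetical stand-ins for the behavior described above.

        /* custody_sketch.c - illustrative custody decision at a receiving node */
        #include <stdbool.h>
        #include <stdio.h>

        typedef enum { CUSTODY_ACCEPTED, CUSTODY_REFUSED_NO_MEMORY } custody_signal_t;

        typedef struct {
            int id;
            int previous_custodian;   /* node number of the current custodian */
            int size_bytes;
        } bundle_t;

        static bool have_storage_for(const bundle_t *b) { return b->size_bytes < 65536; }

        static void send_custody_signal(int custodian, custody_signal_t sig)
        {
            printf("custody signal %d sent to node %d\n", sig, custodian);
        }

        static void handle_custodial_bundle(bundle_t *b)
        {
            if (have_storage_for(b)) {
                /* Accept custody: the previous custodian may now delete its copy;
                 * this node keeps the bundle until the next hop accepts custody. */
                send_custody_signal(b->previous_custodian, CUSTODY_ACCEPTED);
            } else {
                /* Refuse custody with a reason code; the previous custodian keeps
                 * the bundle and may reforward it along another route.           */
                send_custody_signal(b->previous_custodian, CUSTODY_REFUSED_NO_MEMORY);
            }
        }

        int main(void)
        {
            bundle_t b = { .id = 1, .previous_custodian = 2, .size_bytes = 200 };
            handle_custodial_bundle(&b);
            return 0;
        }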

    We also explore the behavior of ION when a custody signal is lost, since these nodes operate on links with losses. Custody signals will not be lost when TCP is used as the transport layer, but there is a possibility of losing a custody signal when running LTP over UDP.


    Custody acceptance signal - When the custody acceptance signal is lost, the receiver

    (custodian2) will accept custody for the bundle, but the previous custodian

    (custodian1) will not release custody for the bundle. Technically, there will be 2

    custodians for the bundle in the network. Custodian2 will continue to forward the

    bundle to its destination. Custodian1 will keep the bundle in persistent storage until

    its TTL expires. On expiration of TTL, custodian1 will delete the bundle and send a

    status report to the source.

    Custody error signal - If the custody error signal from the receiver is lost, the

    custodian retains the bundle in persistent storage until the TTL expires. When the

    TTL expires, the custodian will delete the bundle and send a status report to the

    source.

    The TCPCL performance for different RTTs is shown in Figure 4.9, and Figure 4.10 compares the performance of TCPCL with and without custody transfer for varying RTT. The throughput of TCPCL with custody is lower than the throughput of TCPCL without custody. When custody transfer is enabled, a bundle is not destroyed until the node receives a custody signal from a node that has accepted custody of the bundle. On receiving a custody signal, the node must find the bundle corresponding to the custody signal and delete it from ION memory. This additional processing time reduces the number of bundles ION can transmit, which lowers the performance of TCPCL with custody.

    The difference in throughput is more pronounced for smaller bundles because more bundles have to be transmitted when the bundle size is smaller. As mentioned earlier, when the bundle size is 200 bytes the SBP Iperf client sends approximately 52000 bundles in 10 seconds, whereas it sends approximately 8000 bundles in 10 seconds when the bundle size is 15000 bytes. Hence far fewer custody signals are generated for 15000 byte bundles than for 200 byte bundles.


    [Figure 4.9: TCPCL(w/custody) Throughput vs RTT. Log-scale throughput (Kbits/sec) vs RTT (0-20 sec) for 200, 5000, 15000 and 40000 byte bundles.]

    This implies that ION spends less time processing custody signals when the bundle size is 15000 bytes.

    The performance of TCPCL with custody for different loss rates is presented in Figure 4.11, together with the maximum theoretical throughput given by the Mathis equation [Mathis et al. 1997]. The performance comparison between TCPCL with custody and without custody for different loss rates is illustrated in Figure 4.12.


    [Figure 4.10: Comparing TCPCL(w/custody) and TCPCL(w/o custody) Throughput vs RTT. Log-scale throughput (Kbits/sec) vs RTT (0-20 sec) for 200 and 15000 byte bundles, with and without custody transfer.]

    The performance of TCPCL with custody at higher loss rates is similar to that of TCPCL without custody. A combination of small congestion window sizes and retransmission timeouts throttles the number of TCP segments that can be transmitted, so the additional time the ION nodes spend processing custody signals does not have an impact on the throughput. Even with the custody signal processing overhead, the number of bundles ION can release for transmission is still larger than what TCP can actually send at higher loss rates.


    [Figure 4.11: TCPCL(w/custody) Throughput vs Loss. Log-scale throughput (Kbits/sec) vs loss (0-10%) for the theoretical maximum and 200, 5000, 15000 and 40000 byte bundles.]

    The last set of tests measures the performance of TCPCL with custody for combinations of RTT and loss rates. The throughput measurements for 200, 5000, 15000 and 40000 byte bundles are shown in Figure 4.13, Figure 4.14, Figure 4.15 and Figure 4.16 respectively. The performance of TCPCL with custody transfer for a combination of delay and loss is similar to the performance of TCPCL without custody transfer.


    [Figure 4.12: Comparing TCPCL(w/custody) and TCPCL(w/o custody) Throughput vs Loss. Log-scale throughput (Kbits/sec) vs loss (0-10%) for 200 and 15000 byte bundles, with and without custody transfer, plus the theoretical maximum.]

    As with TCPCL without custody, the throughput curve for TCPCL with custody closely follows the maximum theoretical throughput curve given by the Mathis equation.

    4.2 LTP Convergence Layer Tests

    This section discusses the results of the performance tests using the LTP convergence layer over UDP/IP. The first subsection presents the results for LTPCL without custody transfer, followed by the results for LTPCL with custody transfer.


    [Figure 4.13: TCPCL(w/custody) Throughput vs RTT & Loss for 200 Byte Bundles. Log-scale throughput (Kbits/sec) vs RTT (0-5 sec) for 0%, .05%, 2% and 5% loss, with the theoretical maximum for each loss rate.]

    4.2.1 LTPCL Without Custody Transfer

    The first set of tests measures the throughput of LTPCL without custody transfer for different delays between Node2 and Node3. The SBP Iperf client sends 25 MB of data to the SBP Iperf server. Refer to Section A.2 for the LTPCL configuration files used in ION for these tests.

    The performance of LTPCL without custody transfer for various RTTs is shown in Figure 4.17. The LTP configuration file uses a command called span to control the transmission rate; the main span parameters that regulate transmission are the aggregation size, the LTP segment size, and the number of export sessions.


    [Figure 4.14: TCPCL(w/custody) Throughput vs RTT & Loss for 5000 Byte Bundles. Log-scale throughput (Kbits/sec) vs RTT (0-5 sec) for 0%, .05%, 2% and 5% loss, with the theoretical maximum for each loss rate.]

    The span parameters between Node1-Node2 and Node3-Node4 always remain the same, because delay and loss are applied only between Node2 and Node3; thus only the Node2-Node3 span has to be tweaked for different delays and losses.

    Aggregation size is the amount of data that ION aggregates into an LTP block before using an LTP session to transmit the data. As soon as ION accumulates data equal to or greater than the aggregation size, it breaks the LTP block into LTP segments and hands them down to the UDP layer for transmission. However, if the internal clock within ION triggers before the aggregation size is reached, ION segments the LTP block and sends it anyway. LTP divides the LTP block into LTP segments of the size specified by the LTP segment size parameter within the span. Export session configuration, as explained earlier, is a method of flow control in LTP.


    [Figure 4.15: TCPCL(w/custody) Throughput vs RTT & Loss for 15000 Byte Bundles. Log-scale throughput (Kbits/sec) vs RTT (0-5 sec) for 0%, .05%, 2% and 5% loss, with the theoretical maximum for each loss rate.]

    A link with no artificial delay uses a smaller number of export sessions (2 in this case) and a larger LTP segment size (5600 bytes). Increasing the number of export sessions increases the amount of data transmitted, but it also increases the chance of packets being dropped because of buffer overflow: the intermediate nodes do twice as much work as the end nodes, so increasing the number of export sessions can send more bundles to an intermediate node than it can process. Packet loss causes a drop in throughput because the lost packets have to be retransmitted. Additionally, using a larger LTP segment size reduces the time ION spends segmenting an LTP block; the IP layer instead fragments the larger LTP segment. The advantage of this approach is that IP fragmentation takes less time than LTP segmentation, so throughput is higher.


    [Figure 4.16: TCPCL(w/custody) Throughput vs RTT & Loss for 40000 Byte Bundles. Log-scale throughput (Kbits/sec) vs RTT (0-5 sec) for 0%, .05%, 2% and 5% loss, with the theoretical maximum for each loss rate.]

    On the other hand, losing one IP fragment of a larger LTP segment means that LTP has to resend the whole LTP segment. Since a link with no delay uses a smaller number of export sessions and losses are minimal, we can use IP fragmentation of the LTP segment to our advantage.
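
    The trade-off can be illustrated with a small calculation; the 1400 byte per-fragment payload below is an assumption chosen to match a typical Ethernet MTU, not a value taken from the configuration.

        /* fragment_tradeoff.c - bytes resent when one IP fragment is lost */
        #include <stdio.h>

        int main(void)
        {
            int fragment_bytes    = 1400;   /* assumed IP fragment payload             */
            int big_segment_bytes = 5600;   /* LTP segment size used on lossless links */
            int fragments         = big_segment_bytes / fragment_bytes;

            printf("IP fragments per 5600 byte LTP segment: %d\n", fragments);
            printf("bytes resent if one fragment is lost: %d (vs %d with 1400 byte LTP segments)\n",
                   big_segment_bytes, fragment_bytes);
            return 0;
        }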

    Table 4.1: Number of sessions used for various RTTs

        No. of Sessions    RTT (sec)
        2                  0
        50                 0.2
        200                1
        400                >1


    [Figure 4.17: LTPCL Throughput vs RTT. Log-scale throughput (Kbits/sec) vs RTT (0-20 sec) for 5000, 15000 and 40000 byte bundles.]

    For a link with delay, we use a larger number of export sessions and a smaller LTP segment size (1400 bytes) on the nodes connected to the delayed link. Table 4.1 shows the number of export sessions that we configure LTP to use for various RTTs. The extra export sessions guarantee that there is a constant dialogue between the sender and the receiver. Consider a case where the number of export sessions is 10 and the delay between the nodes is 5 seconds. If ION can fill 10 sessions' worth of data at a time, then once we send 10 sessions of data, ION has to wait 10 seconds to receive the corresponding reports from the receiver before it can clear out the sessions, and only after a session is cleared can it be re-used to send data. However, if we have 400 export sessions, we can use the other 390 unused sessions to transmit data while waiting for the report segments from the first 10 sessions.
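
    A rough way to estimate how many sessions are needed to keep the link busy is to divide the bandwidth-delay product by the LTP block size. The sketch below is our own back-of-the-envelope calculation with assumed values for the link rate and block size; it is not a formula from the ION documentation.

        /* session_estimate.c - rough lower bound on concurrent LTP export sessions */
        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double link_rate_bps = 10e6;    /* assumed link rate in bits per second */
            double rtt_sec       = 5.0;     /* round-trip time on the delayed link  */
            double block_bytes   = 64000;   /* assumed LTP block (aggregation) size */

            /* bytes in flight needed to fill the pipe, divided by the block size */
            double sessions = ceil((link_rate_bps / 8.0) * rtt_sec / block_bytes);
            printf("export sessions needed: %.0f\n", sessions);
            return 0;
        }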


    [Figure 4.18: Comparing LTPCL and TCPCL Throughput vs RTT. Log-scale throughput (Kbits/sec) vs RTT (0-20 sec) for 5000 and 15000 byte bundles with LTPCL and with TCPCL.]

    Figure 4.18 compares the performance of LTPCL and TCPCL on links with delay. TCP is optimized to perform well on links with minimal delay, but on links with higher delays LTP can be fine-tuned to obtain higher performance than TCP. Unlike TCP, LTP is designed to use the available bandwidth at all times: TCP spends a considerable amount of time in slow start and congestion avoidance on high-delay links, whereas LTP is always transmitting at the constant configured transmission rate. As mentioned earlier, the transmission rate is controlled by the number of export sessions, the aggregation size, and the LTP segment size. The heap memory reserved for LTP depends on the number of sessions.


    LTP space reservation = (Max export bundle size × No. of export sessions) +
                            (Max import bundle size × No. of import sessions)

    If ION is configured such that the heap memory allocated for LTP is less than the actual LTP space reservation, ION stores the extra LTP blocks in files instead of on the heap. However, writing to and reading from a file is more expensive than writing to and reading from the heap. Hence reserving the right amount of space for LTP blocks is important for the performance of LTP.
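
    As a concrete illustration of this sizing rule, the short program below evaluates the reservation for assumed block sizes and session counts; the numbers are examples only and are not the values used in our configuration.

        /* ltp_reservation.c - evaluate the LTP space reservation rule above */
        #include <stdio.h>

        int main(void)
        {
            long max_export_block = 64000;   /* assumed max export bundle size (bytes) */
            long export_sessions  = 400;
            long max_import_block = 64000;   /* assumed max import bundle size (bytes) */
            long import_sessions  = 400;

            long reservation = max_export_block * export_sessions
                             + max_import_block * import_sessions;

            printf("LTP space reservation: %ld bytes (about %.1f MB)\n",
                   reservation, reservation / 1e6);
            return 0;
        }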

    The amount of heap reserved for LTP should also not exceed the total amount of heap memory allocated to ION, so the number of sessions that can be configured for LTP indirectly depends on the available memory. The LTP test results are therefore constrained by the amount of memory allocated to ION and by the OS buffer size for UDP connections. The maximum number of sessions that we use in our tests is 400; theoretically we could open more sessions on links with high delay to get better throughput, but in practice we are constrained by the memory.

    The LTPCL throughput for various amounts of packet loss is shown in Figure 4.19. When an LTP segment is lost, the LTP sender and receiver keep the session for that block open until the segment is retransmitted. The drop in throughput is mainly due to the retransmission of lost segments and the under-utilization of sessions. A session with a lost segment has to complete the retransmission of that LTP segment before it can be re-used to send other LTP blocks. Not only does this prevent other LTP blocks from using the session, but the session is also under-utilized during retransmission. Take, for example, an LTP block of 40000 bytes and an LTP segment size of 1500 bytes. This requires a session to send approximately 25 segments. If 2 segments in this session are lost, the retransmission sends only those 2 segments.


    [Figure 4.19: LTPCL Throughput vs Loss. Log-scale throughput (Kbits/sec) vs loss (0-10%) for 5000, 15000 and 40000 byte bundles.]

    Thus a session that would normally carry 40000 bytes sends only 3000 bytes during the retransmission phase. The situation worsens as the loss increases, because more sessions have to retransmit lost segments; hence the performance decreases with loss.

    The behavior of ION on losing each type of segment is explained below.

    Data segment - When a data segment is lost, the loss is reported by the receiver when it sends a report segment. The sender, on receiving the report, acknowledges the report segment and retransmits the lost segment.

    Checkpoint segment - After sending a checkpoint segment, the sender starts a checkpoint retransmission timer. If the report segment doesn't arrive before the timer expires, the sender resends the checkpoint segment.


    Consequently, when a checkpoint segment is lost, the retransmission timer expires, forcing the sender to retransmit the checkpoint segment.

    Report segment - Loss of a report segment causes both the checkpoint retransmission timer on the sender side and the report retransmission timer on the receiver side to expire. This results in both a checkpoint retransmission and a report retransmission.

    Report acknowledgement - The receiver sends a report segment when it receives a checkpoint segment from the sender, and the sender responds to the report segment with a report acknowledgement. If the receiver reports that all the segments were received successfully, the sender sends the report acknowledgement and closes the export session. However, if that report acknowledgement is lost, the report retransmission timer on the receiver side expires and the receiver retransmits the report segment. Since the sender has closed the session, it has no record of the session the report segment refers to, so it discards the report segment. The receiver keeps retransmitting the report segment until the number of retransmissions reaches a preset retransmission maximum, at which point the receiver cancels the session. The sender still has no record of the session the receiver is trying to cancel, so it discards the cancel segment as well, and the receiver retransmits the cancel segment until it, too, reaches the retransmission maximum.

    Figure 4.20 compares the performance of LTPCL and TCPCL versus loss. As mentioned earlier, LTP under-utilizes sessions during the retransmission of lost LTP segments: the session transmits less data than it would send during regular transmission. TCP, on losing a packet, halves its congestion window, and so transmits half as much as it was previously transmitting.


    [Figure 4.20: Comparing LTPCL and TCPCL Throughput vs Loss. Log-scale throughput (Kbits/sec) vs loss (0-10%) for 5000 and 15000 byte bundles with LTPCL and with TCPCL.]

    The congestion window of the TCP sender then increases by one segment each round-trip time until it reaches the advertised receive window size or there is another packet loss. When the loss rate is high, however, the congestion window constantly remains small, and the throughput degrades steadily with loss. LTP's behavior, on the other hand, remains approximately the same for low and high loss. Therefore, the performance of LTPCL is better than that of TCPCL at higher losses.

    The performance of LTPCL without custody for combinations of delay and loss is shown in Figure 4.21 and Figure 4.22. The performance of LTPCL on a link with both delay and loss is similar to the performance of LTPCL with only loss.


    [Figure 4.21: LTPCL Throughput vs RTT & Loss for 5000 Byte Bundles. Log-scale throughput (Kbits/sec) vs RTT (0-5 sec) for 0%, .05%, 2% and 5% loss.]

    The number of export sessions available for transmitting LTP blocks keeps a constant dialogue between the sender and the receiver even on links with some delay. Hence the performance of LTP on links with an RTT of less than 5 seconds is determined by the loss on the link.

    4.2.2 LTPCL With Custody Transfer

    This section presents the results of LTPCL with custody transfer on links with different delays and losses. Figure 4.23 displays the throughput of LTPCL with custody for various RTTs and compares the throughput of LTP with and without custody. As explained in Section 4.1.2, when custody transfer is enabled the nodes need to spend extra time handling custody signals. When custody transfer is not enabled, the intermediate nodes in the setup only have to receive incoming bundles, route bundles and transmit bundles.


    [Figure 4.22: LTPCL Throughput vs RTT & Loss for 15000 Byte Bundles. Log-scale throughput (Kbits/sec) vs RTT (0-5 sec) for 0%, .05%, 2% and 5% loss.]

    When custody transfer is enabled, these nodes additionally need to send custody signals to the previous custodian and handle incoming custody signals from the next custodian. This additional processing reduces the transmission rate for LTPCL with custody transfer.

    As Figure 4.24 illustrates, the performance of LTPCL with custody is better than that of LTPCL without custody when losses are high. A link with lower loss behaves similarly to TCPCL, where the throughput of LTPCL without custody is higher than that of LTPCL with custody; the difference in throughput on low-loss links can be attributed to the custody signal processing overhead. In contrast, links with high loss have a high probability of losing custody signals. Custody signals are transferred between administrative endpoints on the nodes and do not use the sessions that carry data. Losing a data segment or a report segment requires the session to remain open until the segment is retransmitted.


    [Figure 4.23: Comparing LTPCL(w/o custody) and LTPCL(w/custody) Throughput vs Delay. Log-scale throughput (Kbits/sec) vs RTT (0-20 sec) for 5000 and 15000 byte bundles, with and without custody transfer.]

    Therefore, the session has to stay open longer, which prevents it from being reused for other bundles. By contrast, losing a custody signal does not affect session behavior and consequently does not affect the transmission rate. Losing custody signals has bigger consequences in the long run, as mentioned earlier, but in this test scenario it results in better performance.


    [Figure 4.24: Comparing LTPCL(w/o custody) and LTPCL(w/custody) Throughput vs Loss. Log-scale throughput (Kbits/sec) vs loss (0-10%) for 5000 and 15000 byte bundles, with and without custody transfer.]

    Figure 4.25 and Figure 4.26 illustrate the throughput of LTPCL with custody over links with both delay and loss.


    [Figure 4.25: LTPCL(w/custody) Throughput vs RTT & Loss for 5000 Byte Bundles. Log-scale throughput (Kbits/sec) vs RTT (0-5 sec) for 0%, .05%, 2% and 5% loss.]