Experimental Evaluation of DOCSIS 1.1 Upstream
Performance
BY
Chaitanya K. Godsay
B.E., University of Pune, India (1999)
THESIS
Submitted to the University of New Hampshire in partial fulfillment of
the requirements for the degree of
Master of Science
in
Computer Science
December 2003
This thesis has been examined and approved.
Thesis director, Dr. Radim Bartos, Associate Professor of Computer Science
Dr. Robert D. Russell, Associate Professor of Computer Science
Mr. Steven Fulton, DOCSIS Consortium Manager, InterOperability Laboratory
Date
Dedication
To my family.
Acknowledgments
I would like to express my sincere gratitude to Dr. Radim Bartos without whose guidance
this thesis would not have been possible. It has been a pleasure to work with him. I would
also like to thank Dr. Robert Russell and Mr. Steve Fulton for their guidance. I am also
grateful to the InterOperability Laboratory for providing me the required resources for the
research and supporting me for the past three years. Finally, I would like to thank Swapnil
Bhatia and Mugdha Kulkarni for the numerous helpful discussions.
Contents
Dedication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
1 Introduction 1
2 Background 4
2.1 Ranging and registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 Upstream data transmission . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 Performance enhancers and QoS . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3.1 Performance enhancers . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3.2 QoS services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3 Motivation and Goals 12
3.1 Performance Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1.1 PHY parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.1.2 MAC parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.1.3 Traffic parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2 Parameters considered in this thesis . . . . . . . . . . . . . . . . . . . . . . 17
4 Methodology 20
5 Experimental evaluation 23
5.1 Channel rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.2 Performance enhancers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
5.3 Modulation format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.4 CM chipset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.4.1 Number of MAPs used for data transmission . . . . . . . . . . . . . 29
5.4.2 Concatenation factor . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
5.4.3 Piggybacking factor . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.5 Number of modems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
5.6 Traffic distributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.7 Load test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5.7.1 Comparison between load test and optimal throughput test . . . . . 40
6 Conclusions 42
Bibliography 45
A Complete set of results 47
A.1 Constant Packet Length - Constant Bit Rate (CPL-CBR) . . . . . . . . . . 47
A.1.1 Broadcom-based CM . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
A.1.2 TI-based CM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
A.1.3 QPSK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
A.1.4 Number of CMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
A.2 Distributed Packet Length - Constant Bit Rate (DPL-CBR) . . . . . . . . . 52
A.3 Distributed Packet Length - Distributed Bit Rate (DPL-DBR) . . . . . . . 53
A.4 Constant Packet Length - Distributed Bit Rate (CPL-DBR) . . . . . . . . . 54
A.5 Load Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
A.5.1 Concatenation and Piggybacking disabled . . . . . . . . . . . . . . . 55
A.5.2 Piggybacking enabled . . . . . . . . . . . . . . . . . . . . . . . . . . 58
A.5.3 Concatenation enabled . . . . . . . . . . . . . . . . . . . . . . . . . . 61
A.5.4 Concatenation and Piggybacking enabled . . . . . . . . . . . . . . . 64
B CM configuration 67
C CMTS configuration 68
List of Tables
5.1 DPL-CBR traffic parameters for the packet length distribution. . . . . . 35
C.1 CMTS downstream parameters. . . . . . . . . . . . . . . . . . . . . . . . 68
C.2 Physical layer parameters for REQ. . . . . . . . . . . . . . . . . . . . . . 68
C.3 Physical layer parameters for short and long data grant with QPSK . . 69
C.4 Physical layer parameters for short and long data with 16QAM . . . . . 69
C.5 Example of CMTS upstream channel configuration used for the channel
width of 1600 KHz. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
List of Figures
1-1 A simple DOCSIS RF network. . . . . . . . . . . . . . . . . . . . . . . . 2
2-1 Message flows in ranging and registration. . . . . . . . . . . . . . . . . . 6
2-2 Typical upstream data transmission. . . . . . . . . . . . . . . . . . . . . 8
4-1 Experimental setup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
5-1 Maximum data rate (single Broadcom-based CM, modulation format
16QAM, piggybacking and concatenation off). . . . . . . . . . . . . . . . 23
5-2 Channel utilization (single Broadcom-based CM, modulation format 16QAM,
piggybacking and concatenation off). . . . . . . . . . . . . . . . . . . . . 24
5-3 Performance enhancers (single Broadcom-based CM, channel rate 5.12Mbps,
modulation format 16QAM). . . . . . . . . . . . . . . . . . . . . . . . . 26
5-4 Comparison of 16QAM and QPSK (single Broadcom-based CM, piggy-
backing and concatenation on). . . . . . . . . . . . . . . . . . . . . . . . 27
5-5 Comparison of CMs based on different chipsets (single CM, channel rate
5.12 Mbps, modulation format 16QAM, piggybacking and concatenation
on). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5-6 Comparison of CMs based on number of packets transmitted (single CM,
modulation format 16QAM, piggybacking and concatenation off). . . . . 30
5-7 Comparison of CMs based on concatenation factor (single CM, channel
rate 5.12 Mbps, modulation format 16QAM, concatenation on, piggy-
backing off). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5-8 Comparison of CMs based on piggybacking factor (single CM, channel
rate 5.12 Mbps, modulation format 16QAM, concatenation off, piggy-
backing on). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5-9 Per-modem throughput for one and two Broadcom-based CMs (modula-
tion format 16QAM, piggybacking and concatenation on). . . . . . . . . 33
5-10 Effect of number of TI-based CMs on throughput for different packet
lengths (modulation format 16QAM, piggybacking and concatenation on). 34
5-11 DPL-CBR for Broadcom-based CM for all channel rates. . . . . . . . . . 35
5-12 DPL-DBR for Broadcom-based CM for all channel rates. . . . . . . . . . 36
5-13 CPL-DBR for Broadcom-based CM for all channel rates (packet length
512 bytes). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5-14 CPL-CBR for Broadcom-based CM for all channel rates (packet length
512 bytes). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5-15 Latency (single Broadcom-based CM, channel rate 0.64 Mbps, modulation
format 16QAM). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5-16 Throughput (single Broadcom-based CM, channel rate 0.64 Mbps, mod-
ulation format 16QAM). . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5-17 Effect of performance enhancers on Latency (single Broadcom-based CM,
channel rate 2.56Mbps, 128 packet length, modulation format 16QAM). 39
5-18 Normalized throughput (single Broadcom-based CM, channel rate 2.56Mbps,
128 bytes packet length, modulation format 16QAM). . . . . . . . . . . 40
5-19 Normalized throughput (single Broadcom-based CM, 1024 bytes packet
length, modulation format 16QAM). . . . . . . . . . . . . . . . . . . . . 41
5-20 Comparison of throughput from load tests and optimal throughput (single
Broadcom-based CM, modulation format 16QAM). . . . . . . . . . . . . 41
A-1 CPL-CBR for 16QAM on Broadcom-based CM . . . . . . . . . . . . . . 47
A-2 CPL-CBR for 16QAM on Texas Instruments-based CM . . . . . . . . . 48
A-3 CPL-CBR for QPSK on Broadcom and TI-based CMs . . . . . . . . . . 49
A-4 CPL-CBR with 16QAM for multiple Broadcom-based CMs . . . . . . . 50
A-5 CPL-CBR for 16QAM for multiple Texas Instruments-based CMs . . . . 51
A-6 DPL-CBR with 16QAM for Broadcom and Texas Instruments-based CMs 52
A-7 DPL-DBR with 16QAM for Broadcom and Texas Instruments-based CMs 53
A-8 CPL-DBR for 16QAM on Broadcom-based CM . . . . . . . . . . . . . . 54
A-9 Load test: Latency across different channel widths for Broadcom-based
CM, concatenation and piggybacking disabled . . . . . . . . . . . . . . . 55
A-10 Load test: Throughput across different channel widths for Broadcom-
based CM, concatenation and piggybacking disabled . . . . . . . . . . . 56
A-11 Load test: Normalized Throughput across different packet lengths for
Broadcom-based CM, concatenation and piggybacking disabled . . . . . 57
A-12 Load test: Latency across different channel widths for Broadcom-based
CM, piggybacking enabled . . . . . . . . . . . . . . . . . . . . . . . . . . 58
A-13 Load test: Throughput across different channel widths for Broadcom-
based CM, piggybacking enabled . . . . . . . . . . . . . . . . . . . . . . 59
A-14 Load test: Normalized Throughput across different packet lengths for
Broadcom-based CM, piggybacking enabled . . . . . . . . . . . . . . . . 60
A-15 Load test: Latency across different channel widths for Broadcom-based
CM, concatenation enabled . . . . . . . . . . . . . . . . . . . . . . . . . 61
A-16 Load test: Throughput across different channel widths for Broadcom-
based CM, concatenation enabled . . . . . . . . . . . . . . . . . . . . . . 62
A-17 Load test: Normalized Throughput across different packet lengths for
Broadcom-based CM, concatenation enabled . . . . . . . . . . . . . . . . 63
A-18 Load test: Latency across different channel widths for Broadcom-based
CM, concatenation and piggybacking enabled . . . . . . . . . . . . . . . 64
A-19 Load test: Throughput across different channel widths for Broadcom-
based CM, concatenation and piggybacking enabled . . . . . . . . . . . 65
A-20 Load test: Normalized Throughput across different packet lengths for
Broadcom-based CM, concatenation and piggybacking enabled . . . . . 66
ABSTRACT
Experimental Evaluation of DOCSIS 1.1 Upstream Performance
by
Chaitanya K. Godsay, University of New Hampshire, December 2003
Data-Over-Cable Service Interface Specification (DOCSIS) is one of the many last mile
technologies intended to provide Internet access and multimedia services. DOCSIS uses the
widely deployed hybrid fiber/coax (HFC) network as the physical link between multiple
cable modems (CMs) and the cable modem termination system (CMTS).
This thesis presents the upstream performance of DOCSIS 1.1 for different parameters in
the physical layer and MAC layer, across various traffic patterns, for best-effort scheduling.
In addition to protocol-specific parameters, we also examine the influence of the CM
manufacturer and of the number of CMs on the network. The performance metrics used are
upstream data rate, latency, and channel utilization.
This is the first study in the field of DOCSIS that experiments with real devices for
performance evaluation. All previous research employed simulators for experimentation
with different aspects of the protocol. The use of real devices allows us to capture the
complete complexity of the protocol and gives realistic results, but it also limits our
control over individual parameters. However, the extent of control is equivalent to that of
a cable service provider, and thus the results are directly applicable in practice.
The results provide useful insights into different aspects of the protocol and can be
used by CM manufacturers to improve their products and by cable operators to set
operational parameters for their networks. The methodology developed to capture the
complexities of performance evaluation under different data traffic patterns can serve
as a model for further research.
Chapter 1
Introduction
Cable operators, in the early nineties, envisioned the growth of cable networks and were
driven to explore possibilities for transmitting data from the residential user to the service
provider. By providing this capability, packet-based services, such as high-speed Internet
access, cheaper telephone connections, and video-conferencing could be deployed easily.
This led to the formation of many research groups; Multimedia Cable Network System
(MCNS), a collaboration of cable companies, was the first to come up with a specification.
MCNS released the set of standards known as DOCSIS 1.0 (Data Over Cable Service
Interface Specification) in March 1997. CableLabs, a non-profit research and development
consortium, worked in collaboration with MCNS, and is now responsible for developing new
specifications and product certification.
The DOCSIS specification [1] describes a DOCSIS network as a tree based network
with the Cable Modem Termination System (CMTS) as the root of the tree and the Cable
Modems (CMs) as the leaves of the tree. The CMTS is at the service provider facility and
the CMs are at the residential users home. The transmission of data from the CMTS to
CM, termed as “downstream”, is a point to multipoint broadcast, whereas the transmission
from the CM to CMTS, termed as “upstream”, is controlled by the CMTS and is multipoint
to point TDMA (Time Division Multiple Access). DOCSIS defines an asymmetric network
in terms of upstream and downstream data rate, with downstream rates (up to 30Mbps)
being substantially larger than the upstream rates(up to 10.24 Mbps). Data transmission
in DOCSIS is full duplex. A simple DOCSIS network is shown in Figure 1-1. The res-
idential user has the Customer Premise Equipment (CPE), such as computer, telephone,
etc., connected to the CM. Upstream data goes from the CM to the CMTS, which forwards
it appropriately to the outside network. Similarly, downstream data passes through the
CMTS to the CM, which forwards it to the CPE. Typically 1500 to 2000 CMs are connected
to a CMTS, with the distance between the CMTS and a CM reaching up to 50 miles.
The DOCSIS network is also known as the Radio Frequency (RF) network and the Hybrid
Fiber Coax (HFC) network.
Figure 1-1: A simple DOCSIS RF network (the CMTS at the cable service provider facility,
connected to a WAN on one side and through the cable plant to multiple CMs, each serving
Customer Premise Equipment such as a PC or TV at a residential user).
After the DOCSIS 1.0 specification, CableLabs released two more specifications known as
DOCSIS 1.1 and DOCSIS 2.0 respectively. DOCSIS 1.1, released in 1999, enhanced security
and added QoS to support real-time applications, such as telephony and video conferencing.
DOCSIS 2.0, released in 2001, significantly increased upstream capacity in response to the
growing number of users on cable networks and the introduction of new high-speed
applications such as peer-to-peer file sharing and online gaming. The upgrade also aims to
provide better noise immunity and to support even more users than before. DOCSIS thus
brings within reach the vision of a single wire running into a residence that carries
high-quality digital audio and video, real-time and interactive services such as telephony,
video conferencing, and online gaming, and high-speed Internet connectivity for browsing
and peer-to-peer file sharing. By concentrating on a foundation for various kinds of QoS,
DOCSIS also opens the door to ever more innovative applications that harness its full
potential.
The next chapter discusses the DOCSIS 1.1 protocol for upstream transmission, followed
by a chapter on motivation and a listing of all the performance parameters involved. Then
we will discuss the test methodology and present the experimental evaluation of different
parameters with analysis. The final chapter gives the conclusions and outlines the future
work.
Chapter 2
Background
Before a CM can start transmitting data upstream (from the CM to the CMTS), it has
to join the network and register with the CMTS. Once the CM has been authorized by
the CMTS, it can request upstream transmission opportunities and then transmit user
data. The process a CM goes through to join the network is called “ranging and
registration”. It configures the operational parameters that are used later, and describing
it also introduces the important message flows in a DOCSIS network that are required for
understanding the operation of the protocol.
2.1 Ranging and registration
When a CM has to join a DOCSIS network after power on, it first searches for a valid
downstream signal in the frequency range of 88-860 MHz. A downstream signal is valid
when the CM has recognized and synchronized with the PHY layer (QAM symbol timing,
FEC framing) and with the MAC layer (MPEG packetization and SYNC messages). A CM
is said to be MAC synchronized when it receives at least two SYNC messages within the
clock tolerance limits. The CM uses the SYNC messages to synchronize in time with the
CMTS. All the message flows between the CM, CMTS and other servers are depicted in
Figure 2-1.
After MAC synchronization, the CM tries to range with the CMTS. The CM learns
about the upstream channel characteristics using an upstream channel descriptor (UCD)
sent by the CMTS at regular intervals. The UCD describes the upstream channel charac-
teristics for data transmission, i.e., modulation formats, modulation rates, etc. The CM
now knows how to transmit upstream data, but it does not know when to transmit. The
CMTS periodically sends information regarding when each CM is allowed to transmit in a
MAC message termed the MAP. Thus, the CM waits for MAP messages from the CMTS.
The time allocated to a CM by the CMTS to send data upstream is called a grant. A MAP message has grants
for: initial maintenance (IM), station maintenance (SM), request (REQ), request/data
(REQ/DATA), short data grant and long data grant. A CM uses the initial maintenance
region to join the network. The initial maintenance (IM), request (REQ), and request/data
(REQ/DATA) regions are broadcast regions; multiple CMs may try to transmit in them,
so transmissions there are subject to collisions. The station maintenance (SM), short data
grant, and long data grant regions are unicast opportunities provided to a specific CM by
the CMTS and are not subject to collisions.
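The distinction between contention and unicast regions can be illustrated with a small model. This is a simplified abstraction for exposition, not the on-the-wire MAP encoding defined by the specification; the mini-slot offsets and the SID value are made up for the example.

```python
# Illustrative model of the interval types carried in a MAP message.
# Simplified for exposition; not the DOCSIS wire format.
from dataclasses import dataclass

# Region types and whether they are broadcast (contention) regions,
# i.e., subject to collisions among CMs.
CONTENTION = {
    "initial maintenance": True,   # IM: used by CMs joining the network
    "request": True,               # REQ: bandwidth requests
    "request/data": True,          # REQ/DATA: request plus small data
    "station maintenance": False,  # SM: unicast, per-CM ranging
    "short data grant": False,     # unicast data transmission
    "long data grant": False,      # unicast data transmission
}

@dataclass
class MapRegion:
    kind: str       # one of the keys in CONTENTION
    start: int      # offset of the region (mini-slots; illustrative)
    length: int     # size of the region (mini-slots; illustrative)
    sid: int = 0    # service identifier; 0 stands for broadcast here

    @property
    def contention(self) -> bool:
        return CONTENTION[self.kind]

# A toy MAP: one broadcast request region followed by a unicast grant.
map_msg = [
    MapRegion("request", start=0, length=8),
    MapRegion("long data grant", start=8, length=64, sid=17),
]
assert map_msg[0].contention and not map_msg[1].contention
```

Only the broadcast regions need the collision-resolution machinery described below; grants addressed to a specific SID are collision-free by construction.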
The CM then transmits a request to join the network (RNG-REQ) in the initial main-
tenance region. There is a possibility of collision in this region if multiple CMs want to join
the network. To resolve upstream collisions an exponential back-off scheme is used. When
the CMTS receives the RNG-REQ successfully, it assigns a unique identifier known as the
service identifier (SID) to the CM. The SID is sent by the CMTS in a unicast message called
the range response (RNG-RSP) to the CM. The CM then uses this SID to communicate
with the CMTS. The RNG-RSP also gives physical layer adjustment information, such as
timing, power, and frequency adjustments, to the CM. The CM also gets unicast station
maintenance opportunities in the MAP after the first RNG-RSP. The CM then keeps
fine-tuning the physical layer adjustments by transmitting RNG-REQ until the CMTS
sends a ranging-complete status in RNG-RSP. This fine-tuning process, known as station
maintenance, continues at regular intervals because physical devices tend to drift out of
synchronization for various reasons.
Figure 2-1: Message flows in ranging and registration (SYNC, UCD, and MAP messages
from the CMTS; the RNG-REQ/RNG-RSP ranging exchange; the REG-REQ/REG-RSP
registration exchange; and the DHCP, TFTP, and TOD exchanges with the respective
servers).

At this point, the CM has partially joined the network. It then goes on to request an
IP address from the DHCP (Dynamic Host Configuration Protocol) server, establish the
time of day with the TOD (Time of Day) server, and download the configuration file from
the TFTP (Trivial File Transfer Protocol) server. The configuration file specifies all the
functionality the CM can support. After getting the configuration file the CM must send
a registration request (REG-REQ) to the CMTS. The REG-REQ message contains all the
functionality the CM wishes to support as specified by the configuration file. The CMTS
responds with a registration response (REG-RSP) message indicating which functionality
the CM is allowed to support; the CMTS may not be able to support everything the CM
requested, and it makes the final decision on what functionality will be operational in the
CM. The CM can then enable baseline privacy (security), if specified in the configuration
file, after which it is in operational mode, ready to transmit data upstream. At this point
the CM has successfully ranged and registered with the CMTS.
2.2 Upstream data transmission
Once the CM has registered with the CMTS it is ready to send data in the upstream
direction. It is synchronized with the CMTS in time using the SYNC messages and ranging.
It knows how to send data upstream as specified by the UCD sent by the CMTS at regular
intervals. However, it does not know when to transmit data.
Transmitting data upstream is a three-step process as shown in Figure 2-2. When the
CPE sends some data to the CM, the CM looks in the most recent MAP for the REQ or
REQ/DATA region. It then makes a data grant request message indicating the grant size
and tries to transmit the request in the time specified for the REQ or REQ/DATA region.
As mentioned before, these regions are subject to collisions as many CMs could be trying to
send the data grant request message. If the request reaches the CMTS, the CMTS sends
either a short or a long data grant to the CM in a following MAP, depending on the grant
size the CM requested. When the CMTS has received the data grant request but cannot
allocate a grant, it sends a data grant pending message in the MAP.

Figure 2-2: Typical upstream data transmission (the CM waits for a MAP with a REQ or
REQ/DATA region, sends a data grant request, receives a short or long data grant in a
subsequent MAP, and transmits the data at the allocated time).

The CM detects a collision if it does not see a short data grant, long data grant, or a data
grant pending in the next MAP, and increases the window size of the exponential backoff
algorithm to resolve the collision. The CM will then defer a certain number of request
opportunities before requesting again.
If the CM gets a short/long data grant successfully from the CMTS, then it extracts
the time to send the data from the MAP. Finally, at the time to send data (specified by
the MAP) the CM transmits the data to the CMTS. This is termed the RDS (Request-
Data grant-Send) cycle. The CM may have to go through one or many RDS cycles (in
case of collisions) to transmit data upstream.
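The contention and backoff behavior of the RDS cycle can be sketched with a small Monte Carlo model. The window sizes, the per-attempt collision probability, and the function names here are illustrative assumptions, not values or algorithms taken from the specification.

```python
# Toy Monte Carlo model of the RDS (Request-Data grant-Send) cycle
# under contention. Window sizes and the collision probability are
# illustrative assumptions only.
import random

def rds_cycles(p_collision, w_start=4, w_max=64, rng=random):
    """Return how many request attempts a CM makes before its
    bandwidth request gets through (truncated exponential backoff)."""
    window, attempts = w_start, 0
    while True:
        attempts += 1
        # Defer a random number of request opportunities in [0, window).
        _defer = rng.randrange(window)
        if rng.random() >= p_collision:     # request heard by the CMTS
            return attempts
        window = min(window * 2, w_max)     # collision: widen the window

def mean_attempts(p_collision, trials=10_000, seed=1):
    rng = random.Random(seed)
    return sum(rds_cycles(p_collision, rng=rng)
               for _ in range(trials)) / trials

# More contention means more RDS cycles per packet, hence higher
# latency and lower throughput, as argued above.
light, heavy = mean_attempts(0.1), mean_attempts(0.6)
assert light < heavy
```

The model captures the qualitative point of this section: each extra RDS cycle adds at least one MAP interval of delay, so latency grows quickly as the collision probability rises.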
2.3 Performance enhancers and QoS
A typical DOCSIS network has 1500-2000 CMs per CMTS, with distances ranging up to
50 miles and with the CMs clustered quite close to one another. All these factors lead to
a high probability of collisions in the REQ or REQ/DATA region. The delay of the RDS
cycle is thus the real bottleneck in upstream
throughput performance.
More delay in the RDS cycle means longer queuing for new packets and more packets
being dropped (the CM cannot request another data grant while a previous request is
pending), leading to lower throughput and higher latency.
To improve upstream throughput and latency, DOCSIS provides several performance
enhancers, such as concatenation, piggybacking, and fragmentation, and a set of QoS
services, such as unsolicited grant service (UGS), unsolicited grant service with activity
detection (UGS-AD), real-time polling service (rtPS), and non-real-time polling service
(nrtPS), that aim at reducing or bypassing the delay of the RDS cycle. These features are
enabled through the configuration file that the CM downloads during registration.
2.3.1 Performance enhancers
Concatenation: If the CM has several packets to send, it can save the time of multiple
RDS cycles and the per-packet overhead by concatenating many packets and sending
them together. The CM must, however, know the final concatenated packet size and
request a data grant accordingly.
Piggybacking: While sending data upstream at the time allocated by the CMTS, the CM
can add an extended header to the data packet to request an additional data grant.
In this way the CM bypasses the collision phase of the RDS cycle entirely and can
keep requesting additional data grants as long as it has data to send.
Fragmentation: At times the CMTS may give a smaller data grant than the CM
requested. In this case, the CM can either leave the grant unused and request again,
or fragment the packet and send as much as fits in the grant. The CMTS sends
smaller grants when it is overloaded with data grant requests from many CMs.
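A back-of-the-envelope calculation shows why concatenation helps most for small packets. The flat per-burst overhead figure below is an illustrative assumption standing in for the request, preamble, and FEC costs of a burst; it is not a DOCSIS constant.

```python
# Rough estimate of how concatenation amortizes per-burst overhead.
# The 30-byte overhead figure is an illustrative assumption, not a
# DOCSIS PHY/MAC constant.
def effective_payload_fraction(pkt_len, n_concat, per_burst_overhead=30):
    """Fraction of transmitted bytes that is payload when n_concat
    packets of pkt_len bytes share one burst (one RDS cycle, one set
    of per-burst overhead, modeled as a flat byte cost)."""
    payload = pkt_len * n_concat
    return payload / (payload + per_burst_overhead)

# Small packets benefit most: one 64-byte packet per burst wastes far
# more of the channel than ten concatenated ones.
single = effective_payload_fraction(64, 1)    # ~0.68
concat = effective_payload_fraction(64, 10)   # ~0.96
assert concat > single
```

Under this model the savings shrink as packets grow, which matches the intuition that fixed overhead matters less when each burst already carries a large payload.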
All the performance enhancers can be controlled by the cable operator, but we considered
only concatenation and piggybacking in this thesis. To send fragmented data, the CM has
to get a fragmented grant from the CMTS, which requires a setup where the CMTS is
overloaded enough to start issuing fragmented grants. That would require a large number
of certified CMs transmitting data upstream, which were not available. It would also make
the analysis of fragmentation extremely complicated, since it is difficult to control the
allocation of a fragmented data grant to a particular CM. Thus we decided not to consider
fragmentation in this study.
2.3.2 QoS services
The DOCSIS 1.1 specification [1] supports a series of scheduling services that provide the
guarantees required by different applications.
Unsolicited grant service (UGS): Supports real-time traffic characterized by fixed-size
packets at fixed intervals, such as VoIP telephone calls. The CMTS gives data grants
to the CM directly at regular intervals, completely bypassing the RDS cycle.
Real-time polling service (rtPS): Supports real-time traffic characterized by variable-size
packets at fixed intervals, such as MPEG video. The CMTS sends unicast REQ
opportunities to the CM, bypassing the collision phase of the RDS cycle. The CM
then requests the data grant size it needs and transmits the data within the specified
interval.
Unsolicited grant service with activity detection (UGS-AD): Supports real-time traffic
characterized by substantial inactive periods interleaved with active periods of
fixed-size packets at fixed intervals. A VoIP call with silence suppression is a good
example. UGS-AD can be viewed as a combination of rtPS and UGS: the CMTS
sends unicast poll/request opportunities to the CM during the inactive state and
fixed-size data grants at regular intervals during the active state.
Non-real-time polling service (nrtPS): Supports non-real-time traffic characterized by
variable-size packets at regular intervals, such as high-bandwidth FTP. The CMTS
provides unicast request opportunities to the CM at regular intervals, ensuring that
the CM gets at least some request opportunities even during network congestion.
Best effort (BE): This is not a guaranteed service: the CM has to go through the RDS
cycle, and the CMTS does its best to provide as many data grants as it can. Internet
browsing applications are generally assigned best-effort scheduling.
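The five services can be summarized by which phases of the RDS cycle they avoid. The table below is a condensed restatement of the descriptions above, phrased in our own words rather than quoted from the specification.

```python
# Condensed summary of the DOCSIS 1.1 upstream scheduling services
# described above: (traffic pattern, what the CMTS provides,
# relation to the RDS cycle).
SERVICES = {
    "UGS":    ("fixed-size packets at fixed intervals (e.g., VoIP)",
               "unsolicited data grants",
               "bypasses the entire RDS cycle"),
    "rtPS":   ("variable-size packets at fixed intervals (e.g., MPEG video)",
               "unicast request opportunities",
               "bypasses the contention phase"),
    "UGS-AD": ("on/off traffic (e.g., VoIP with silence suppression)",
               "polls when idle, unsolicited grants when active",
               "bypasses the RDS cycle while active"),
    "nrtPS":  ("variable-size packets at regular intervals (e.g., bulk FTP)",
               "periodic unicast request opportunities",
               "guarantees request opportunities under congestion"),
    "BE":     ("any traffic (e.g., web browsing)",
               "contention request opportunities",
               "full RDS cycle, no guarantees"),
}

# Everything except best effort carries a service guarantee.
guaranteed = [name for name in SERVICES if name != "BE"]
assert len(guaranteed) == 4
```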
All the services mentioned above except best effort are guaranteed services, and thus
conformance to the protocol implies guaranteed performance. We consider only best-effort
scheduling in this study, as it is widely used and there has been no study of its
performance in DOCSIS.
Chapter 3
Motivation and Goals
In the early nineties many research groups were formed to develop a specification for delivery
of data in the last mile using the widely deployed cable networks. Organizations involved
in this effort were Data Over Cable Service Interface Specification group (MCNS-DOCSIS),
IEEE 802.14 working group, Society of Cable Telecommunications Engineers (SCTE),
Digital Video Broadcasting (DVB), Digital Audio Video Council (DAVIC), and the ATM
Forum's Residential Broadband Working Group (RBWG). The initial research on DOCSIS
compared different aspects of the specifications developed by these groups, mainly IEEE
802.14 and MCNS-DOCSIS [2]. The effect of different upstream allocation and scheduling
algorithms, MAP rates, MAP lengths, etc., was studied in [3, 4, 5, 6, 7]. Efforts to
statistically predict upstream requests and allocate data grants accordingly were presented
in [8]. Considerations for mapping IntServ and DiffServ onto DOCSIS were outlined in
[9, 10, 11].
Fragmentation scheduling was studied in [12], and the behavior of the DOCSIS network
after an area-wide power failure was reported in [13, 14]. Recently there has been a
simulation study of the performance of DOCSIS 1.1 [15]. All of the above research
employed simulators, most of them using Opnet as the DOCSIS simulator.
The goal of this thesis is to evaluate the impact of different upstream parameters on
upstream performance using real devices. There are performance enhancers, such as
concatenation, piggybacking, and fragmentation, that can improve performance. However,
there has been no research studying the behavior of these enhancers. How much performance
improvement do they provide? Is there a situation where using these enhancers might be
detrimental? Are there bottlenecks in using a combination of these enhancers? There has
been no study on these aspects of the protocol.
Moreover, there are also many physical layer parameters that add overhead and redun-
dancy. Can these overheads be optimized?
Another aspect of this thesis is to study the behavior of the DOCSIS network under
different kinds of traffic loads. Device behavior is generally tested only with a
constant bit rate stream. However, a constant bit rate stream is hardly ever generated in
the real world, so we experimented with traffic distributions that are present in the
real world. Since the Internet is the biggest application driving the cable modem market,
we decided to concentrate on generating Internet-like traffic. Some advanced testing devices
support traffic distributions for packet length and inter-packet gap. We have developed
test scripts to control the above features and generate realistic traffic to test the DOCSIS
network.
We also studied the effect of multiple CMs on upstream performance, using CMs from
different manufacturers.
We ran all the experiments on real, deployed devices. Unlike the previous research, we
did not employ simulators, and we controlled only those parameters that cable operators
are able to control in the real world.
It should be noted that we did not test for conformance to the DOCSIS 1.1 protocol1.
We use a conformant CM and a CMTS to get statistical results, and analyze the impact of
different protocol parameters. We expect that the experimental results will be used to set
operational parameters of deployed cable networks.
3.1 Performance Parameters
We conducted a study to find all the parameters in DOCSIS that affect upstream
performance, organized by the different layers of DOCSIS: PHY, MAC, and incoming traffic
parameters for the CM. All the parameters that affect the upstream throughput, channel
utilization, and latency are listed below. The parameters considered in this thesis are a
subset of the following parameters and are presented in the next section.
1 CableLabs, a non-profit research and development consortium, is responsible for conformance
certification.
3.1.1 PHY parameters
• Upstream modulation rates: 160, 320, 640, 1280, 2560 ksym/sec
• Upstream modulation formats: QPSK (Quadrature Phase Shift Keying) and 16QAM
(Quadrature Amplitude Modulation)
• Forward Error Correction (FEC): two modes are supported, fixed codeword length mode
and shortened last codeword mode. The number of redundancy bytes per block and
information bytes per block are configurable.
• Scrambler (Randomizer): time required to scramble
• Preamble pre-pend: pre-pended to every packet/burst transmitted upstream.
• Transmit pre-equalizer: number of taps used for pre-equalization and processing time
• Differential encoding: for 16QAM can be set to on or off.
• Guard Time: 5 to 255 symbols. The minimum time between the last bit of the first
burst and the first bit of the following burst must be at least 5 symbol times. This
must also include timing errors of CM and CMTS.
• Ramp up and ramp down times: required by the transmitter before and after trans-
mitting the data. The accepted value is 10 symbols.
Most of the PHY parameters involve a tradeoff between noise immunity and overhead
bytes. For example, FEC can add 18 bytes of overhead per block of data, but in case of
errors introduced by noisy conditions this data can be recovered, so adding this redundancy
actually saves a retransmission. Moreover, to study the PHY parameters
exhaustively we would need to emulate miles of coaxial cable, thousands of CMs, and would
also need devices that are capable of adding noise in a controlled way to the cable plant
to come close to a real world scenario. Since an experimental setup for creating such an
environment was not available, we experimented only with modulation rate and modulation
formats because they are least affected by distance and interference in the PHY layer.
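As a rough illustration of the FEC tradeoff mentioned above, the overhead fraction of a codeword can be computed as follows (a sketch; the 220-byte information-block size is a hypothetical example, not a value mandated by the specification):

```python
def fec_overhead(info_bytes: int, parity_bytes: int) -> float:
    """Fraction of each FEC codeword spent on redundancy bytes."""
    return parity_bytes / (info_bytes + parity_bytes)

# Example: 18 parity bytes protecting a hypothetical 220-byte information block.
print(f"{fec_overhead(220, 18):.1%} of the codeword is overhead")
```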
3.1.2 MAC parameters
• Concatenation: The CM can concatenate many packets into one big concatenated
frame and send it upstream. This mechanism saves the per-packet overhead for each
packet and also increases the channel utilization.
• Piggybacking: The CM can piggyback a request for an additional data grant on the
data packet it is transmitting. The CM would decide to piggyback a request if the
CM already has many more packets to send. This mechanism effectively saves the
CM from requesting in the REQ region that is subject to collisions. The avoidance of
the collision region leads to better throughput and channel utilization.
• Fragmentation: If the CMTS grants a smaller data grant than requested by the CM,
the CM can fragment the packet and send as much as possible in the smaller data
grant. This mechanism is very useful when the CMTS is overloaded and is unable to
handle all data grant requests.
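To make the saving from concatenation concrete, a small sketch follows (the 30-byte per-packet overhead is a hypothetical placeholder standing in for MAC header, preamble, FEC, and guard time, not a value taken from the specification):

```python
def concat_overhead_saving(n_packets: int, per_packet_overhead_bytes: int) -> int:
    """Overhead bytes avoided by sending n packets as one concatenated frame.

    A concatenated frame pays the per-burst overhead (preamble, FEC, guard
    time, etc.) once instead of once per packet, so roughly (n - 1) copies
    of that overhead are saved.
    """
    return (n_packets - 1) * per_packet_overhead_bytes

# Hypothetical example: 30 bytes of per-packet overhead, 6 packets concatenated.
print(concat_overhead_saving(6, 30))  # 150 bytes saved
```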
3.1.3 Traffic parameters
• Packet length
• Packet length distribution
• Inter-packet gap (Packet arrival rate)
• Inter-packet gap distribution (Packet arrival rate distribution)
All the devices (CMs and CMTSes) are generally tested with constant packet length
constant bit rate traffic (CPL-CBR) to study their behavior. However, in the real world the
devices are rarely subjected to CPL-CBR traffic. Traffic is generated by applications, and
very few applications generate CPL-CBR; the traffic profiles generated by Internet browsing,
telephone conversations, video streaming, etc., all differ. Since the biggest application driving the
cable modem market is high-speed Internet connectivity, we decided to include a study of
the behavior of a DOCSIS network with the source transmitting Internet-like traffic. We
focused our literature search on finding the packet length distribution model of Internet
traffic and a mathematical function that would describe the profile. We found that there
is no widely accepted way to model Internet traffic. Internet traffic is known to be self-
similar [18, 19], but there is no accepted mathematical distribution function. However, it
was clear that we needed to experiment with different packet length and inter-packet gap
distributions since Internet traffic is a mixture of many applications that generate many
different packet sizes with different inter-packet gaps.
We define “Distributed Packet Length-Constant Bit Rate” (DPL-CBR) as traffic that
would transmit different packet sizes at the micro level but would maintain a constant bit
rate at the macro level. We define “Constant Packet Length-Distributed Bit Rate” (CPL-
DBR) as traffic that would transmit constant packet length and distribute the inter-packet
gap using exponential distribution at the micro level, but maintain the same bit rate at a
macro level. We also define “Distributed Packet Length-Distributed Bit Rate” (DPL-DBR)
as traffic that would transmit different packet sizes with distributed inter-packet gap at the
micro level but would maintain a constant bit rate at the macro level.
However, we are testing real-world devices, and thus support for generating this kind
of traffic was required. We therefore experimented with only those traffic distributions
that are programmable into the available devices and are considered close to Internet-like
traffic [16, 17]. A few advanced traffic generators now support quad-modal packet length distribution
(four Gaussian distributions superimposed, each with adjustable center length and standard
deviation) and exponentially distributed inter-packet gaps. However, we conduct most of our
experiments using Constant Packet Length-Constant Bit Rate (CPL-CBR) traffic, as it
makes the behavior of the CM easier to analyze.
Parameters for distributed packet length were chosen to approximate packet length
distribution of real Internet traffic as reported in [16, 17].
3.2 Parameters considered in this thesis
The parameters considered for the evaluation of upstream performance are listed below with
a short description. The goal of the thesis is to study the impact of the following parameters
on upstream performance.
Performance metrics: We use channel utilization, upstream data rate and latency as up-
stream performance metrics.
Modulation Formats and Modulation Rates: DOCSIS 1.1 allows two modulation formats for
upstream transmission, QPSK (2 bits/symbol) and 16QAM (4 bits/symbol). DOCSIS
1.1 also supports five modulation rates for upstream transmission, 160 ksym/s, 320
ksym/s, 640 ksym/s, 1280 ksym/s, and 2560 ksym/s which correspond to the channel
widths of 200 kHz, 400 kHz, 800 kHz, 1600 kHz, and 3200 kHz respectively. The
product of modulation rate and modulation formats gives us the theoretical maximum
upstream data rate for each combination of modulation rate and modulation format.
We thus have 0.32 Mbps, 0.64 Mbps, 1.28 Mbps, 2.56 Mbps, and 5.12 Mbps as the
theoretical maximum data rate for QPSK and 0.64 Mbps, 1.28 Mbps, 2.56 Mbps, 5.12
Mbps, and 10.24 Mbps as theoretical maximum data rate for 16QAM.
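These maxima follow directly from the product of modulation rate and bits per symbol, as the following sketch illustrates:

```python
# Theoretical maximum upstream data rate = modulation rate * bits per symbol.
BITS_PER_SYMBOL = {"QPSK": 2, "16QAM": 4}
RATES_KSYM = [160, 320, 640, 1280, 2560]  # DOCSIS 1.1 upstream rates, ksym/s

def max_data_rate_mbps(rate_ksym: int, fmt: str) -> float:
    """Theoretical maximum upstream data rate in Mbps."""
    return rate_ksym * 1000 * BITS_PER_SYMBOL[fmt] / 1e6

for fmt in ("QPSK", "16QAM"):
    print(fmt, [max_data_rate_mbps(r, fmt) for r in RATES_KSYM])
# QPSK  -> [0.32, 0.64, 1.28, 2.56, 5.12]
# 16QAM -> [0.64, 1.28, 2.56, 5.12, 10.24]
```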
Channel rate: The product of modulation rate and modulation format is also referred to
as the channel rate. However the upstream channel, in addition to data grant, is also
used for initial maintenance (IM), station maintenance (SM) and request (REQ). IM
is used by new CMs to join the network, SM is used by the CMs to adjust time, power,
and other coefficients with the CMTS, and the REQ region is used by CMs to request
data grants from the CMTS. Thus, the maximum data rate will always be less than
the channel rate.
Concatenation: The CM can send a concatenated burst of packets instead of small packets
if allowed by the configuration file and the CMTS. The improvement from enabling
concatenation on the performance metrics has been studied across different channel rates,
packet lengths, and traffic patterns.
Piggybacking: If the CM wants to send data upstream, it has to request a data grant
in the REQ or REQ/DATA region. Alternatively, it can request a data grant by
piggybacking a request with the data packet being sent currently. The CM does so
by adding an extended header to the current data packet being sent. Piggybacking is
enabled in the configuration file, and should be allowed by the CMTS. The effect of
piggybacking on the performance metrics across different channel rates, packet lengths
and traffic patterns has been studied.
Traffic profiles: DOCSIS network behavior for different kinds of traffic profiles, namely
CPL-CBR, CPL-DBR, DPL-CBR, and DPL-DBR, has been studied across different channel
rates, along with the influence of performance enhancers on each profile.
Number of CMs: Since the upstream performance is limited by the RDS cycle, having many
CMs on the network causes collisions and more RDS delays. Thus we studied the effect
of number of CMs on the network on upstream performance.
CM manufacturers: Given the availability of many different CMs in the UNH-InterOperability
Laboratory, we conducted experiments on CMs from different manufacturers.
The test setup is such that there is no physical noise or interference: only a few feet
of cable, with all the devices close to each other. This is an almost ideal condition. In
deployed cable networks the distances are greater and there is noise and interference.
However, the experimental setup for generating and controlling such an environment was
not available for our experiments. We have thus selected only those parameters that are
not affected, or are only minimally affected, by distance and interference.
Chapter 4
Methodology
The experiments presented in this thesis were conducted on the test bed shown in
Figure 4-1. The test network consists of a CMTS connected via a coaxial cable plant to one
or more CMs (the number of CMs varies with the experiment, as described in the next section). The
upstream traffic is generated by a traffic generator connected to the CM over an Ethernet
network. The output of the CMTS is routed over another Ethernet segment to the traffic
analyzer. Two traffic generators/analyzers were used in the experiments: the SmartBits
600 chassis with LAN 3101A cards for CPL-CBR experiments and the Adtech AX4000
chassis with 10/100BaseT Ethernet interfaces for DPL-CBR, CPL-DBR, and DPL-DBR
experiments1.
A Sigtek ST-260B DOCSIS 1.0/1.1 RF sniffer/traffic analyzer was used to ascertain that
the traffic generated by the CMs adhered to the set parameters. This helped us eliminate
several CMs that incorrectly implemented certain aspects of the protocol (e.g., piggybacking)
and led to the final decision to limit the study to CableLabs-certified CMs only.
Two sets of CMs, each consisting of identical devices, were used in the study. One set
consisted of two CMs based on Broadcom BCM3300 QAMLink chipsets. The CMs in the
second set were based on Texas Instruments TNETC4040 chipsets.
Two types of tests, optimal throughput tests and load tests, were conducted on the test
setup shown in Figure 4-1. Optimal throughput tests find the maximum throughput and
channel utilization of a CM without any packet loss using binary search, and the load tests
measure latency, throughput, and channel utilization for different offered loads.
1 Several experiments were conducted on both traffic generators/analyzers to verify that they yield the
same results for CPL-CBR.
Figure 4-1: Experimental setup.
We define optimal throughput as the maximum data rate achieved by a CM without any
packet loss2. A script was used to set the CM parameters, to control the traffic generator,
and to process the results from the analyzer. A binary search algorithm was used to find the
maximum throughput of the modem. The search starts with 0 as the minimum data rate
and the channel rate as the maximum. It averages the minimum and maximum to obtain
the current data rate to transmit. If the transmission succeeds, the current rate becomes
the new minimum; if it fails, the current rate becomes the new maximum. The process of
averaging, transmitting, and updating repeats until the difference between the maximum
and minimum falls within the specified tolerance.
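The search procedure described above can be sketched as follows (a Python sketch; `transmit_succeeds` is a hypothetical stand-in for one run of the traffic generator and analyzer at a given rate):

```python
def optimal_throughput(channel_rate, transmit_succeeds, tolerance=0.01):
    """Binary search for the highest data rate with no packet loss.

    transmit_succeeds(rate) runs one trial at `rate` (Mbps) and returns
    True when the analyzer reports zero packet loss.
    """
    low, high = 0.0, channel_rate
    while high - low > tolerance:
        current = (low + high) / 2
        if transmit_succeeds(current):
            low = current       # no loss: search higher rates
        else:
            high = current      # loss observed: search lower rates
    return low

# Example with a hypothetical CM that sustains up to 2.4 Mbps:
print(optimal_throughput(10.24, lambda r: r <= 2.4))
```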
This test provides us with the optimal throughput of a CM. To study the behavior of the
DOCSIS network beyond the optimal throughput, we conduct load tests.
2 There is a small unavoidable packet loss in the experiments with multiple modems. This loss occurs
regardless of the offered data traffic rate and is a result of collisions at the moment the traffic streams are
started. The script was augmented to ignore this loss.
The load tests also use the test setup described in Figure 4-1 to study the DOCSIS
network. The offered load starts with 10% of the channel rate and then is increased by
10% for each successive run until the offered load is equal to the channel rate. This test
provides data for latency, number of packets transmitted and channel utilization in addition
to throughput (transmitted data rate). The load tests provide us with information about
the behavior of the DOCSIS network before and after packet drop begins, and thus manage
to capture the optimal throughput point as well.
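The load test loop can be sketched similarly (a Python sketch; `run_trial` is a hypothetical stand-in for one generator/analyzer run):

```python
def load_test(channel_rate, run_trial):
    """Step the offered load from 10% to 100% of the channel rate.

    run_trial(rate) transmits at `rate` (Mbps) and returns a dict with
    latency, packets transmitted, throughput, and channel utilization.
    """
    results = []
    for step in range(1, 11):                 # 10%, 20%, ..., 100%
        offered = channel_rate * step / 10
        results.append((offered, run_trial(offered)))
    return results

# Example with a stub analyzer whose throughput saturates at 2.4 Mbps:
for offered, stats in load_test(10.24, lambda r: {"throughput": min(r, 2.4)}):
    print(f"offered {offered:5.2f} Mbps -> {stats['throughput']:.2f} Mbps")
```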
Chapter 5
Experimental evaluation
Optimal throughput and load tests were carried out to study the effect of different param-
eters. This section presents a selection of the results that were obtained in the study. The
full set of results can be found in Appendix A.
5.1 Channel rate
Figure 5-1: Maximum data rate (single Broadcom-based CM, modulation format 16QAM,
piggybacking and concatenation off).
Figure 5-2: Channel utilization (single Broadcom-based CM, modulation format 16QAM,
piggybacking and concatenation off).
The purpose of this experiment was to study the impact of varying channel rates by
changing the channel widths and keeping the modulation format the same. The traffic
transmitted was CPL-CBR (Constant Packet Length-Constant Bit Rate), with binary search
used to find the optimal throughput for packet lengths of 64, 128, 256, 512, 768, 1024, 1262,
and 1500 bytes1 and channel rates of 0.64, 1.28, 2.56, 5.12, and 10.24 Mbps.
Since a CM functions as an Ethernet bridge, we consider the packet length range supported
by Ethernet, i.e., 64 to 1500 bytes.
Figure 5-1 shows the throughput performance of the network with one modem. It can
be seen that as the channel rate increases, the modem throughput also increases. Figure 5-2
displays the results of the same experiment as channel utilization percentages (the ratio of
observed data rate and channel rate). The channel utilization is well below the theoretical
maximum data rate and decreases as the channel rate increases. The effect of packet length
can also be observed from Figure 5-1 and Figure 5-2, indicating that bigger packets lead to
better optimal throughput and channel utilization.
1 All future experiments transmit CPL-CBR for the above-mentioned packet lengths and find the optimal
throughput using binary search, unless mentioned otherwise.
A packet length of 1500 bytes provides the best channel utilization for all channel rates,
with a utilization of 70% at a channel rate of 0.64 Mbps. The utilization decreases with
every increase in channel rate, falling to 23% at a channel rate of 10.24 Mbps. Since
there is only one CM on the network, under ideal conditions for upstream transmission,
the optimal throughput gives an upper bound on per-CM throughput for CPL-CBR. Thus,
as we add more CMs to the network, we can expect the per-CM throughput to be less
than or equal to this observed upper bound.
The reason for the less efficient channel utilization at higher channel rates is the
MAP rate. For this analysis we consider the performance of a Broadcom-based CM with all
performance enhancers disabled as shown in Figure 5-1. Sniffer analysis revealed that even
though the CMTS has a dynamic scheduling policy, it transmits approximately 400 MAPs
per second. Since the CM has to request for a data grant in one MAP duration and wait for
a grant from the CMTS in the next MAP, the CM can in the best case use only half of the
total number of MAPs, i.e., 200 MAPs, for data transmission. The best-case throughput is
thus the packet length in bits multiplied by the number of MAPs utilized per second. For
a packet length of 1500 bytes and 200 MAPs utilized per second (found experimentally),
the throughput would be 1500 * 8 * 200 = 2.4 Mbps. For a channel rate of 10.24 Mbps this
corresponds to a channel utilization of 23%, matching the experimentally observed
throughput for that channel rate shown in Figure 5-1. Since fewer requests are transmitted
when performance enhancers are used, we can expect better channel utilization with them
enabled. The effect of performance enhancers is
discussed in the next section.
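The back-of-the-envelope calculation above can be written out as follows (a sketch; the 400 MAPs/s figure is the rate observed with the sniffer):

```python
def map_limited_throughput_mbps(packet_len_bytes: int, data_maps_per_sec: int) -> float:
    """Best-case throughput when one packet is sent per usable MAP."""
    return packet_len_bytes * 8 * data_maps_per_sec / 1e6

MAPS_PER_SEC = 400          # observed CMTS MAP rate
usable = MAPS_PER_SEC // 2  # request in one MAP, transmit in the next

tput = map_limited_throughput_mbps(1500, usable)
print(tput)              # 2.4 Mbps for 1500-byte packets
print(tput / 10.24)      # ~0.23, i.e., 23% utilization at a 10.24 Mbps channel rate
```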
5.2 Performance enhancers
Figure 5-3: Performance enhancers (single Broadcom-based CM, channel rate 5.12 Mbps,
modulation format 16QAM).
Figure 5-3 shows the results of experiments that evaluate the impact of the performance
enhancers, piggybacking and concatenation. It can be seen that concatenation improves the
throughput for small packet lengths. In our experiments, piggybacking did not have a
significant impact on the performance. We conducted the same experiment for all possible
channel rates and obtained similar results. More results are presented in section 5.4.
However, it can be also observed that the combination of concatenation and piggy-
backing performs worse than only concatenation enabled for smaller packet lengths, but
this effect gets less and less pronounced as the packet length increases. Sniffer analysis
revealed that the CM always adds an additional 4 bytes to every packet when piggybacking
is enabled. As a large number of packets are transmitted when the packet length is
small, the piggybacking overhead becomes very large. Thus piggybacking is unable
to provide significant performance improvements, and it also leads to lower performance
with concatenation enabled when the packet length is less than 768 bytes.
It can also be observed that concatenation does not provide significant performance
improvements for large packet lengths. This has led us to believe that there is a limit
associated with the maximum concatenated burst size. Even though the specification [1]
limits the concatenated burst size to 65000 bytes, the CM or the CMTS limited it to around
2000 bytes. The CM may enforce this limit so that it does not receive smaller data grants
than requested, which would cause increased delay in the RDS cycle and thus lower
throughput. The CMTS
may enforce this limit if it does not want to handle large bursts of concatenated packets
due to the processing overhead involved in restoring and forwarding individual packets. It
could not be determined if the CM or the CMTS enforced this limit as we did not have
access to internal data structures of these devices.
5.3 Modulation format
Figure 5-4: Comparison of 16QAM and QPSK (single Broadcom-based CM, piggybacking
and concatenation on).
This experiment evaluated the impact of the two available modulation formats on the
performance (see Figure 5-4). As outlined earlier, the experiment setup did not allow
injection of physical layer impairments that would truly test the benefits of each modulation
format. Instead, the experiment concentrated on the protocol-level aspects. The most
pronounced difference in performance was observed for the channel rate of 2.56 Mbps, where
QPSK clearly outperformed 16QAM. The results were mixed for the remaining channel rates.
Figure 5-5: Comparison of CMs based on different chipsets (single CM, channel rate 5.12
Mbps, modulation format 16QAM, piggybacking and concatenation on).
It was also observed in Figure A-1(d) and Figure A-3(a) that the throughput for 16QAM
(4 bits/symbol) was at most only 1.4 times that of QPSK (2 bits/symbol) and did not
achieve the theoretical twofold increase in throughput for the same channel width. Since
requests are always sent in QPSK, when the data is also sent in QPSK the CMTS upstream
demodulator does not have to switch between QPSK and 16QAM, reducing the processing
time at the CMTS and increasing the throughput by providing more transmit opportunities.
This effect is especially pronounced when QPSK throughput equals or exceeds 16QAM
throughput for smaller packet lengths (refer to Figure A-1(d) and Figure A-3(a)).
5.4 CM chipset
Modems based on two different chipsets were available for the experiments (two based on
the Broadcom BCM3300 QAMLink and four based on the Texas Instruments TNETC4040).
Figure 5-5 compares the results for single-modem experiments. The performance is
comparable at lower channel rates, while at higher channel rates the performance of the
TI-based CM drops dramatically when the packet length exceeds 780 bytes2. This behavior was observed
consistently over a wider range of parameters than that shown in the figure.
Since the modems from different manufacturers performed differently, we investigated
new measures to compare manufacturers. The following measures were used to compare
CM manufacturers.
5.4.1 Number of MAPs used for data transmission
As the CMTS uses a dynamic MAP scheduling algorithm, there is no accurate method to
get the total number of MAPs transmitted by the CMTS. However, when the performance
enhancers are turned off, the number of packets transmitted upstream is equal to the number
of MAPs used by the CM for data transmission. Thus, we use the number of packets
transmitted per second as the number of MAPs used per second, at optimal throughput for
different packet lengths for each CM. Figure 5-6 shows the comparison of CM manufacturers
based on the number of MAPs used.
Both CMs utilize more MAPs as the channel rate increases. As the packet length
increases, the number of MAPs used decreases because fewer packets are transmitted. It can
also be observed that the Broadcom-based CM is able to utilize more MAPs and behaves
consistently across different channel rates. In contrast, the TI-based CM is not consistent
for packet lengths below 128 bytes and above 768 bytes at the higher channel rates.
2The value 780 bytes was obtained through additional experiments not shown here.
Figure 5-6: Comparison of CMs based on number of packets transmitted (single CM,
modulation format 16QAM, piggybacking and concatenation off).
5.4.2 Concatenation factor
The concatenation factor is the ratio of the number of packets transmitted with concatena-
tion on to the number of packets transmitted with performance enhancers off. The number
of packets transmitted with performance enhancers turned off is equal to the number of
MAPs a CM utilizes for data transmission for a given packet length and channel rate. Since
the number of MAPs with concatenation turned off is calculated for the optimal throughput
point, it is the maximum number of MAPs a CM can utilize for data transmission. When
concatenation is turned on, the CM can utilize the same number of data transmit MAPs,
but can send more data per MAP. The concatenation factor gives the average number of
packets the CM concatenates per MAP. Figure 5-7 presents the concatenation factor for
different CM manufacturers across different channel rates and packet lengths.
It can be observed that the concatenation factor decreases as the packet length increases.
The Broadcom-based CM has a higher concatenation factor than the TI-based CM and thus
has better concatenation performance. It should be noted that when the concatenation
factor becomes 1, packets are not concatenated. Thus, the Broadcom-based CM does not
concatenate packets longer than 768 bytes, and the TI-based CM does not concatenate
packets longer than 256 bytes.
Figure 5-7: Comparison of CMs based on concatenation factor (single CM, channel rate
5.12 Mbps, modulation format 16QAM, concatenation on, piggybacking off).
5.4.3 Piggybacking factor
The piggybacking factor is the ratio of the number of packets transmitted with piggybacking
on to the number of packets transmitted with performance enhancers off. The number of
packets transmitted with performance enhancers turned off is also equal to the number of
MAPs a CM utilizes for data grant requests. Since the number of MAPs with piggybacking
turned off is calculated at the optimal throughput point, it is the maximum number of
MAPs a CM can utilize for data grant requests. When piggybacking is enabled, as the data
grant requests are piggybacked with data packets, the MAPs used for data grant requests
are free and can be used for data transmission as well. The piggybacking factor is thus a
measure of the usage of MAPs for data transmission that were used for data grant requests
when all performance enhancers were disabled. Thus in the best case with piggybacking
enabled, the CM can utilize all the MAPs and can transmit twice the number of packets.
The piggybacking factor can thus never exceed 2.
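Both factors defined in this section reduce to simple ratios of packet counts measured at the optimal throughput point, as the following sketch illustrates (the sample counts are hypothetical):

```python
def concatenation_factor(pkts_concat_on: int, pkts_enhancers_off: int) -> float:
    """Average number of packets the CM concatenates per data MAP."""
    return pkts_concat_on / pkts_enhancers_off

def piggybacking_factor(pkts_piggyback_on: int, pkts_enhancers_off: int) -> float:
    """Extra MAP usage from piggybacked requests; cannot exceed 2."""
    return pkts_piggyback_on / pkts_enhancers_off

# Hypothetical counts measured at the optimal throughput point:
print(concatenation_factor(1200, 200))  # 6.0 packets concatenated per MAP
print(piggybacking_factor(360, 200))    # 1.8 (bounded above by 2)
```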
Figure 5-8: Comparison of CMs based on piggybacking factor (single CM, channel rate 5.12
Mbps, modulation format 16QAM, concatenation off, piggybacking on).
Figure 5-8 presents the piggybacking factor for different CM manufacturers across dif-
ferent channel rates and packet lengths. It can be observed that the piggybacking factor
never exceeds 2 as expected and varies for different packet lengths across different channel
rates.
The measures mentioned above can be used by CM manufacturers, as they quantify the
efficiency with which a CM utilizes MAPs, concatenation, and piggybacking, respectively.
Figure 5-9: Per-modem throughput for one and two Broadcom-based CMs (modulation
format 16QAM, piggybacking and concatenation on).
5.5 Number of modems
This experiment evaluated the impact of multiple simultaneously transmitting CMs with
concatenation and piggybacking enabled. Figure 5-9 shows the per-modem performance
across different channel rates. The per-CM throughput remains roughly the same even
when two modems transmit simultaneously. This is not surprising given the results of the
previous experiments, in which single CMs were unable to achieve channel utilization better
than a fraction of the channel rate.
Figure 5-10 presents the impact on throughput of adding CMs to the network. It
can be seen that as more CMs were added, the per-CM throughput decreased. Comparing
the throughput of CMs at 64-byte and 1500-byte packets, the benefit of higher throughput
at larger packet lengths seems to diminish with the addition of CMs (with concatenation
and piggybacking enabled).
Figure 5-10: Effect of number of TI-based CMs on throughput for different packet lengths
(modulation format 16QAM, piggybacking and concatenation on).
5.6 Traffic distributions
In these experiments, the network was subjected to traffic with distributed packet length
and distributed inter-packet gap. The packet length distribution used approximates the
packet length distribution of real Internet traffic [16, 17]. We used a quad-modal
distribution that superimposes four Gaussian distributions with the parameters shown in
Table 5.1. An exponential distribution was used as the distribution function for the
inter-packet gap.
Figure 5-11, 5-12, 5-13 and 5-14 give the results for DPL-CBR, DPL-DBR, CPL-
DBR at a packet length of 512 bytes and CPL-CBR at a packet length of 512 bytes, for
the four possible combinations of performance enhancers across different channel rates.
It is our observation that the packet length and inter-packet gap distributions did not
affect the CM performance significantly. Comparing DPL-CBR and DPL-DBR results with
previous results obtained for CPL-CBR Figure 5-3, it can be concluded that the behavior is
approximately equivalent to passing around 500 byte packets with CPL-CBR. The constant
Page 48
35
Distribution number 1 2 3 4
Mean in bytes 46 300 576 1500
Half-point width 0.1 40 0.1 0.1
Weight 50% 20% 15% 15%
Table 5.1: DPL-CBR traffic parameters for the packet length distribution.
packet length profiles (CPL-CBR and CPL-DBR) at 512 byte packet lengths have been
reproduced in a form comparable to the distributed packet length profiles (DPL-CBR and
DPL-DBR) in Figure 5-14 and Figure 5-13.
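The traffic model above can be sketched in a few lines of Python. This is a minimal illustration, not the actual traffic generator scripts; interpreting the half-point width of Table 5.1 as the standard deviation of each Gaussian mode is our assumption.

```python
import random

# Parameters from Table 5.1: (mean in bytes, half-point width, weight).
# The narrow widths make modes 1, 3, and 4 effectively constant-length.
MODES = [(46, 0.1, 0.50), (300, 40, 0.20), (576, 0.1, 0.15), (1500, 0.1, 0.15)]

def sample_packet_length(rng=random):
    """Draw one packet length from the quad-modal (Gaussian mixture) model."""
    weights = [w for _, _, w in MODES]
    mean, width, _ = rng.choices(MODES, weights=weights)[0]
    length = rng.gauss(mean, width)            # width treated as std. dev.
    return max(46, min(1500, round(length)))   # clamp to valid frame sizes

def sample_gap(mean_gap):
    """Exponentially distributed inter-packet gap with the given mean."""
    return random.expovariate(1.0 / mean_gap)
```

Drawing one length per packet and one gap between packets yields the DPL-DBR profile; fixing the gap instead gives DPL-CBR.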
Since the packet length and inter-packet gap distributions did not affect the CM
performance significantly, all the results obtained for different channel rates,
modulation formats, numbers of CMs, etc., are valid for DPL-CBR, DPL-DBR, and CPL-DBR.
Because these traffic profiles produced Internet-like traffic, we conclude that the
results we obtained for CPL-CBR at a 512 byte packet length are valid for Internet-like
traffic in a DOCSIS network.
[Graph: throughput (Mbps) vs. channel rate (0.64-10.24 Mbps); curves: All On, All Off, Concat On, Piggyb On]
Figure 5-11: DPL-CBR for Broadcom-based CM for all channel rates.
[Graph: throughput (Mbps) vs. channel rate (0.64-10.24 Mbps); curves: All On, All Off, Concat On, Piggyb On]
Figure 5-12: DPL-DBR for Broadcom-based CM for all channel rates.
[Graph: throughput (Mbps) vs. channel rate (0.64-10.24 Mbps); curves: All On, All Off, Concat On, Piggyb On]
Figure 5-13: CPL-DBR for Broadcom-based CM for all channel rates (packet length 512
bytes).
[Graph: throughput (Mbps) vs. channel rate (0.64-10.24 Mbps); curves: All On, All Off, Concat On, Piggyb On]
Figure 5-14: CPL-CBR for Broadcom-based CM for all channel rates (packet length 512
bytes).
5.7 Load test
The purpose of a load test is to study the behavior of a DOCSIS network under different
offered loads. The offered load starts at 10% of the channel rate and is increased by
10% in each successive run until the offered load equals the channel rate. The traffic
passed is CPL-CBR. We consider latency, throughput, and normalized throughput (channel
utilization) as performance metrics.
Latency is the time taken by a packet to travel from the transmit port to the receive
port of the traffic generator/analyzer. Throughput is the data rate in Mbps, and
normalized throughput is the ratio of throughput to channel rate. In our case the
normalized throughput and the channel utilization are the same, so the normalized
throughput graphs allow us to compare channel utilization across channel rates in the
load test.
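The sweep described above can be sketched as follows. The `measure` callback is a hypothetical stand-in for one run of the traffic generator/analyzer, not part of the actual test scripts.

```python
def load_sweep(channel_rate, measure):
    """Offer 10%, 20%, ..., 100% of the channel rate and record the metrics.

    measure(offered_mbps) -> (throughput_mbps, latency_ms) stands in for
    one run of the traffic generator/analyzer.
    """
    results = []
    for pct in range(10, 101, 10):
        offered = channel_rate * pct / 100.0
        throughput, latency = measure(offered)
        # Normalized throughput equals channel utilization, in percent.
        normalized = 100.0 * throughput / channel_rate
        results.append((pct, throughput, normalized, latency))
    return results
```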
The latency trend for different packet lengths for 0.64 Mbps channel rate is shown in
Figure 5-15. Three distinct phases can be observed in the graph. In the first phase the
latency remains low, in the second phase the latency starts increasing, and in the third
phase it remains high. It can be seen that latency increases with an increase in packet
[Graph: latency (ms) vs. offered load (%); curves: 64, 128, 512, 1024, and 1500 byte packets]
Figure 5-15: Latency (single Broadcom-based CM, channel rate 0.64 Mbps, modulation
format 16QAM).
length. Figure 5-16 shows the throughput trend for the same channel rate of 0.64 Mbps.
By comparison, we can deduce that throughput reaches its maximum when the latency is
at its highest in the third phase.
Figure 5-17 shows the effect of performance enhancers, piggybacking and concatenation,
on latency. It can be observed that concatenation was able to provide the lowest latency,
as more packets are transmitted per MAP when concatenation is enabled. Piggybacking
also provided significant latency improvements as it allows the CM to send a data grant
request with the current packet and thus the CM only has to wait for a data grant from
the CMTS. Also enabling both piggybacking and concatenation provided latencies between
just piggybacking on and just concatenation on. The normalized throughput for the same
packet length and channel rate is shown in Figure 5-18. By comparison we can conclude
that lower latencies in the third phase of the latency graph indicates higher throughput for
the same packet length.
Figure 5-19 is shown as an example of the normalized throughput graphs we obtained.
[Graph: throughput (Mbps) vs. offered load (%); curves: 64, 128, 512, 1024, and 1500 byte packets]
Figure 5-16: Throughput (single Broadcom-based CM, channel rate 0.64 Mbps, modulation
format 16QAM).
[Graph: latency (ms) vs. offered load (%); curves: All On, All Off, Concat On, Piggy On]
Figure 5-17: Effect of performance enhancers on latency (single Broadcom-based CM,
channel rate 2.56 Mbps, 128 byte packet length, modulation format 16QAM).
[Graph: normalized throughput (%) vs. offered load (%); curves: All On, All Off, Concat On, Piggy On]
Figure 5-18: Normalized throughput (single Broadcom-based CM, channel rate 2.56 Mbps,
128 byte packet length, modulation format 16QAM).
From this set of graphs we can conclude that lower channel rates consistently provide
higher channel utilization, and that channel utilization increases with packet length.
5.7.1 Comparison between load test and optimal throughput test
Figure 5-20 compares the optimal throughput obtained by binary search with the
throughput obtained from the load test. In the load test, packet drops begin
immediately after the maximum throughput is reached. It can be seen that the binary
search was able to find the maximum throughput in most cases. Comparison with the
previous graphs also shows that once packet drops begin, the throughput remains
constant.
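The binary search for optimal throughput can be sketched as follows. The `lossless` callback is a hypothetical stand-in for one trial on the traffic analyzer that reports whether any packets were dropped at the offered rate.

```python
def optimal_throughput(channel_rate, lossless, resolution=0.01):
    """Binary-search the highest offered rate (Mbps) forwarded without loss."""
    lo, hi = 0.0, channel_rate
    while hi - lo > resolution:
        mid = (lo + hi) / 2.0
        if lossless(mid):
            lo = mid   # no drops at mid: the optimum is at least mid
        else:
            hi = mid   # drops at mid: the optimum is below mid
    return lo
```

Because the interval halves on every trial, the search needs only about ten runs per configuration to locate the drop threshold to within 0.01 Mbps, versus a fixed ten-step sweep in the load test.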
[Graph: normalized throughput (%) vs. offered load (%); curves: 0.64, 1.28, 2.56, 5.12, and 10.24 Mbps channel rates]
Figure 5-19: Normalized throughput (single Broadcom-based CM, 1024 bytes packet length,
modulation format 16QAM).
[Graph: throughput (Mbps) vs. packet length (bytes); curves: optimal throughput and load-test throughput at 0.64, 2.56, and 10.24 Mbps]
Figure 5-20: Comparison of throughput from load tests and optimal throughput (single
Broadcom-based CM, modulation format 16QAM).
Chapter 6
Conclusions
DOCSIS is a complex protocol with complex interactions. This makes it difficult to provide
generalized data throughput projections based solely on channel capacity and offered data
rate. However, we have investigated different parameters involved in each layer of DOCSIS,
namely the PHY and MAC layers, across different traffic profiles, with focus on upstream
channel utilization, upstream data rate and upstream latency.
We have observed that one CM is unable to utilize all the available bandwidth even
in nearly ideal conditions. We found that the MAP rate of the CMTS lies at the core of
DOCSIS network performance: it controls the performance of a CM under best effort
scheduling. The CMTS MAP rate, in conjunction with the CM's ability to request data
grants, concatenate, and piggyback, determines the CM performance.
Concatenation as a performance enhancer provided the best performance, especially for
smaller packet lengths (a concatenation factor of 6 has been measured for 128 byte packets
at 2.56 Mbps channel rate). Piggybacking did not provide any significant performance
improvements.
Although the CM throughput improved with an increase in channel rate as expected, the
channel utilization decreased with an increase in channel rate. 16QAM as the modulation
format was not always able to increase CM performance when compared to QPSK.
The number of MAPs utilized, the concatenation factor and the piggybacking factor,
were the new measures developed to understand and compare the difference in performance
of different CM manufacturers. It was observed that the Broadcom-based CM was able
to utilize more MAPs, and had a much higher concatenation factor, as compared to the
TI-based CM. The above measures can be used by the CM manufacturers, as they quantify
the efficiency of a CM.
Running the tests with multiple CMs gave the expected results. The per-CM data rate
remained constant until the carrying capacity of the network was reached, after which
it dropped as more CMs were added. It was also observed that the effect of packet
length on throughput diminished as more CMs were added.
Experiments with packet length and inter-packet gap distributions were carried out to
test the DOCSIS network. The generated Internet-like traffic had an average packet
length of approximately 500 bytes. Although the packet length and inter-packet gap
distributions affected the performance, the effect was not significant. We also
conclude that the behavior of a DOCSIS network subjected to Internet-like traffic is
equivalent to its behavior when subjected to CPL-CBR traffic at a packet length of
512 bytes. Thus all the results we obtained for CPL-CBR at 512 byte packet lengths
are valid for Internet-like traffic.
The test scripts developed for automated testing can be used with minor modifications
to study the effect of various traffic patterns on other last mile technologies.
The main contribution of this thesis is the experimental evaluation of the upstream
performance of DOCSIS 1.1 networks under best-effort scheduling. Numerous studies on
the subject have relied on analytical models or simulators; to our knowledge, this is
the first study to evaluate the performance of real devices.
In the future, experiments should be conducted on CMs and CMTSes from other
manufacturers, and with a larger number of CMs on the network. The experiments can
also be extended to study the upstream performance of DOCSIS 2.0, to compare the
performance improvement provided by DOCSIS 2.0 over DOCSIS 1.1.
Once such tests have been run on various combinations of CMs and CMTSes, a methodology
could be developed to statistically predict upstream performance. Using it, cable
service providers could characterize traffic parameters of their networks, such as the
packet length and inter-packet gap distributions, and predict the number of CMs a
network can support at a particular load, or vice versa.
Bibliography
[1] Cable Television Laboratories, Inc., "Data-Over-Cable Service Interface Specifications: Radio Frequency Interface Specification," SP-RFI v1.1-I06-001215, Version 1.1, Dec. 2000.
[2] N. Golmie, F. Mouveaux, and D. Su, "A Comparison of MAC Protocols for Hybrid Fiber/Coax Networks: IEEE 802.14 vs. MCNS," Proc. IEEE ICC '99, June 1999.
[3] Y. D. Lin, C. Y. Huang, and W. M. Yin, "Allocation and Scheduling Algorithms for IEEE 802.14 and MCNS in Hybrid Fiber Coaxial Networks," IEEE Transactions on Broadcasting, vol. 44, no. 4, pp. 427-435, Dec. 1998.
[4] W. M. Yin and Y. D. Lin, "Statistically Optimized Minislot Allocation for Initial and Collision Resolution in Hybrid Fiber Coaxial Networks," IEEE Journal on Selected Areas in Communications, vol. 18, no. 4, Sept. 2000.
[5] V. Sdralia, C. Smythe, P. Tzerefos, and S. Cvetkovic, "Performance Characterization of the MCNS DOCSIS 1.0 CATV Protocol with Prioritized First Come First Serve Scheduling," IEEE Transactions on Broadcasting, vol. 45, no. 2, pp. 196-205, June 1999.
[6] V. Sdralia, C. Smythe, P. Tzerefos, and S. Cvetkovic, "Delivery of Low Bit Rate Isochronous Streams over the DOCSIS 1.0 Cable Television Protocol," IEEE Transactions on Broadcasting, vol. 45, no. 2, pp. 206-214, June 1999.
[7] Sung-Hyun Cho, Jae-Hyun Kim, and Sung-Han Park, "Performance Evaluation of the DOCSIS 1.1 MAC Protocol According to the Structure of a MAP Message," http://viplab.hanyang.ac.kr/pdf/5-20.pdf
[8] F. Abi-Nassif, W. C. Lee, and I. Stavrakakis, "Offered Load Estimation in a Multimedia Cable Network System," Proc. IEEE ICC '99, June 1999.
[9] N. Golmie, F. Mouveaux, and D. Su, "Differentiated Services over Cable Networks," Proc. IEEE GLOBECOM '99, Rio de Janeiro, Brazil, Dec. 1999.
[10] R. Rabbat and K. Y. Siu, "QoS Support for Integrated Services over CATV," IEEE Communications Magazine, vol. 37, no. 1, pp. 64-68, Jan. 1999.
[11] C. Adjih, P. Jacquet, and P. Robert, "Differentiated Admission Control in Large Networks," Proc. IEEE Infocom 2000, May 2000.
[12] N. Naaman and R. Rom, "Packet Scheduling with Fragmentation," Proc. IEEE Infocom 2002, www.ieee-infocom.org/2002/papers/222.pdf, 2002.
[13] V. Sdralia, P. Tzerefos, C. Smythe, and S. Cvetkovic, "Fault Recovery of DOCSIS CATV Networks," Proc. IEEE ICC 2000, May 2000.
[14] R. Domdom, B. Espey, M. Goodman, K. Jones, V. Lim, and S. Patek, "A Simulation Analysis of the Initialization of DOCSIS-Compliant Cable Modem Systems," Systems Engineering Capstone Conference, University of Virginia, 2000.
[15] G. Chandrasekaran, M. Hawa, and D. W. Petr, "Preliminary Performance Evaluation of QoS in DOCSIS 1.1," Technical Report ITTC-FY2003-TR-22736-01, Information and Telecommunication Technology Center, University of Kansas, Jan. 2003.
[16] S. McCreary and K. C. Claffy, "Trends in Wide Area IP Traffic Patterns," Tech. Rep., CAIDA, Feb. 2000.
[17] K. Thompson, G. J. Miller, and R. Wilder, "Wide-Area Internet Traffic Patterns and Characteristics," IEEE Network, pp. 10-23, Nov. 1997.
[18] Y. D. Lin, C. Y. Huang, and W. M. Yin, "An Investigation Into HFC MAC Protocols: Mechanisms, Implementation, and Research Issues," IEEE Communications Surveys, Third Quarter 2000.
[19] D. C. Twu and K. C. Chen, "A Dynamic Control Scheme for the IEEE 802.14 Draft Protocol over CATV/HFC Networks," IEEE Communications Letters, vol. 2, no. 7, July 1998.
[20] V. Paxson and S. Floyd, "Wide-Area Traffic: The Failure of Poisson Modeling," IEEE/ACM Transactions on Networking, vol. 3, no. 3, pp. 226-244, June 1995.
[21] M. Crovella and A. Bestavros, "Self-Similarity in World Wide Web Traffic: Evidence and Possible Causes," IEEE/ACM Transactions on Networking, vol. 5, no. 6, pp. 835-846, Dec. 1997.
Appendix A
Complete set of results
A.1 Constant Packet Length - Constant Bit Rate (CPL-CBR)
This section presents the results obtained from the optimal throughput tests using binary search. The optimal throughput of a CM across different packet lengths, channel rates, and performance enhancers for CPL-CBR traffic is shown.
A.1.1 Broadcom-based CM
The following figures show the optimal throughput for a Broadcom-based CM with 16QAM, across different packet lengths, channel rates, and performance enhancers.
[Four graphs: throughput (Mbps) vs. packet length (bytes); curves: 0.64, 1.28, 2.56, 5.12, and 10.24 Mbps channel rates; panels: (a) All Off, (b) Concat On, (c) Piggy On, (d) All On]
Figure A-1: CPL-CBR for 16QAM on Broadcom-based CM
A.1.2 TI-based CM
The following figures show the optimal throughput for a Texas Instruments-based CM with 16QAM, across different packet lengths, channel rates, and performance enhancers.
[Four graphs: throughput (Mbps) vs. packet length (bytes); curves: 0.64, 1.28, 2.56, 5.12, and 10.24 Mbps channel rates; panels: (a) All Off, (b) Concat On, (c) Piggy On, (d) All On]
Figure A-2: CPL-CBR for 16QAM on Texas Instruments-based CM
A.1.3 QPSK
The following figures show the optimal throughput for a Broadcom-based CM and a Texas Instruments-based CM with QPSK, across different packet lengths and channel rates. Both piggybacking and concatenation are enabled.
[Two graphs: throughput (Mbps) vs. packet length (bytes); curves: 0.32, 0.64, 1.28, 2.56, and 5.12 Mbps channel rates; panels: (a) Broadcom-based CM, (b) TI-based CM]
Figure A-3: CPL-CBR for QPSK on Broadcom and TI-based CMs
A.1.4 Number of CMs
The following figures show the effect of adding CMs to the network. The tests were run with concatenation and piggybacking enabled and 16QAM as the modulation format. The cumulative optimal throughput, i.e., the sum of the per-CM throughputs, is plotted in the graphs.
Broadcom-based CM: The cumulative optimal throughputs for 1 and 2 Broadcom-based CMs with 16QAM, across different packet lengths and channel rates, are shown in the figure below.
[Two graphs: cumulative throughput (Mbps) vs. packet length (bytes); curves: 0.64-10.24 Mbps channel rates; panels: (a) 1 CM, (b) 2 CMs]
Figure A-4: CPL-CBR with 16QAM for multiple Broadcom-based CMs
Texas Instruments-based CM: The cumulative optimal throughputs for 1, 2, 3, and 4 Texas Instruments-based CMs with 16QAM, across different packet lengths and channel rates, are shown in the figure below.
[Four graphs: cumulative throughput (Mbps) vs. packet length (bytes); curves: 0.64-10.24 Mbps channel rates; panels: (a) 1 CM, (b) 2 CMs, (c) 3 CMs, (d) 4 CMs]
Figure A-5: CPL-CBR for 16QAM for multiple Texas Instruments-based CMs
A.2 Distributed Packet Length - Constant Bit Rate (DPL-CBR)
This section presents the results obtained from the optimal throughput tests using binary search. The optimal throughput of a CM with 16QAM across different channel rates and performance enhancers for DPL-CBR traffic is shown for the Broadcom-based CM and the Texas Instruments-based CM.
[Two graphs: throughput (Mbps) vs. channel rate (0.64-10.24 Mbps); curves: All On, All Off, Concat On, Piggyb On; panels: (a) Broadcom-based CM, (b) Texas Instruments-based CM]
Figure A-6: DPL-CBR with 16QAM for Broadcom and Texas Instruments-based CMs
A.3 Distributed Packet Length - Distributed Bit Rate (DPL-DBR)
This section presents the results obtained from the optimal throughput tests using binary search. The optimal throughput of a CM with 16QAM across different channel rates and performance enhancers for DPL-DBR traffic is shown for the Broadcom-based CM and the Texas Instruments-based CM.
[Two graphs: throughput (Mbps) vs. channel rate (0.64-10.24 Mbps); curves: All On, All Off, Concat On, Piggyb On; panels: (a) Broadcom-based CM, (b) Texas Instruments-based CM]
Figure A-7: DPL-DBR with 16QAM for Broadcom and Texas Instruments-based CMs
A.4 Constant Packet Length - Distributed Bit Rate (CPL-DBR)
This section presents the results obtained from the optimal throughput tests using binary search. The optimal throughput of a CM across different packet lengths, channel rates, and performance enhancers for CPL-DBR traffic is shown for the Broadcom-based CM.
[Four graphs: throughput (Mbps) vs. packet length (bytes); curves: 0.64-10.24 Mbps channel rates; panels: (a) All Off, (b) Concat On, (c) Piggy On, (d) All On]
Figure A-8: CPL-DBR for 16QAM on Broadcom-based CM
A.5 Load Tests
This section presents the results obtained from the load tests performed to measure latency, throughput, and normalized throughput (channel utilization) across different combinations of packet lengths and channel rates. The results are presented according to the performance enhancer used. The modulation scheme is 16QAM in all cases. The load test results across different packet lengths, channel rates, and performance enhancers for CPL-CBR traffic are shown for the Broadcom-based CM.
A.5.1 Concatenation and Piggybacking disabled
Latency:
[Five graphs: latency (ms) vs. offered load (%); curves: 64, 128, 512, 1024, and 1500 byte packets; panels: (a) 0.64, (b) 1.28, (c) 2.56, (d) 5.12, (e) 10.24 Mbps channels]
Figure A-9: Load test: Latency across different channel rates for Broadcom-based CM, concatenation and piggybacking disabled
Throughput:
[Five graphs: throughput (Mbps) vs. offered load (%); curves: 64, 128, 512, 1024, and 1500 byte packets; panels: (a) 0.64, (b) 1.28, (c) 2.56, (d) 5.12, (e) 10.24 Mbps channels]
Figure A-10: Load test: Throughput across different channel rates for Broadcom-based CM, concatenation and piggybacking disabled
Normalized throughput (channel utilization):
[Five graphs: normalized throughput (%) vs. offered load (%); curves: 0.64, 1.28, 2.56, 5.12, and 10.24 Mbps channel rates; panels: (a) 64, (b) 128, (c) 512, (d) 1024, (e) 1500 byte packets]
Figure A-11: Load test: Normalized throughput across different packet lengths for Broadcom-based CM, concatenation and piggybacking disabled
A.5.2 Piggybacking enabled
Latency:
[Five graphs: latency (ms) vs. offered load (%); curves: 64, 128, 512, 1024, and 1500 byte packets; panels: (a) 0.64, (b) 1.28, (c) 2.56, (d) 5.12, (e) 10.24 Mbps channels]
Figure A-12: Load test: Latency across different channel rates for Broadcom-based CM, piggybacking enabled
Throughput:
[Five graphs: throughput (Mbps) vs. offered load (%); curves: 64, 128, 512, 1024, and 1500 byte packets; panels: (a) 0.64, (b) 1.28, (c) 2.56, (d) 5.12, (e) 10.24 Mbps channels]
Figure A-13: Load test: Throughput across different channel rates for Broadcom-based CM, piggybacking enabled
Normalized throughput (channel utilization):
[Five graphs: normalized throughput (%) vs. offered load (%); curves: 0.64, 1.28, 2.56, 5.12, and 10.24 Mbps channel rates; panels: (a) 64, (b) 128, (c) 512, (d) 1024, (e) 1500 byte packets]
Figure A-14: Load test: Normalized throughput across different packet lengths for Broadcom-based CM, piggybacking enabled
A.5.3 Concatenation enabled
Latency:
[Five graphs: latency (ms) vs. offered load (%); curves: 64, 128, 512, 1024, and 1500 byte packets; panels: (a) 0.64, (b) 1.28, (c) 2.56, (d) 5.12, (e) 10.24 Mbps channels]
Figure A-15: Load test: Latency across different channel rates for Broadcom-based CM, concatenation enabled
Throughput:
[Five graphs: throughput (Mbps) vs. offered load (%); curves: 64, 128, 512, 1024, and 1500 byte packets; panels: (a) 0.64, (b) 1.28, (c) 2.56, (d) 5.12, (e) 10.24 Mbps channels]
Figure A-16: Load test: Throughput across different channel rates for Broadcom-based CM, concatenation enabled
Normalized throughput (channel utilization):
[Five graphs: normalized throughput (%) vs. offered load (%); curves: 0.64, 1.28, 2.56, 5.12, and 10.24 Mbps channel rates; panels: (a) 64, (b) 128, (c) 512, (d) 1024, (e) 1500 byte packets]
Figure A-17: Load test: Normalized throughput across different packet lengths for Broadcom-based CM, concatenation enabled
A.5.4 Concatenation and Piggybacking enabled
Latency:
[Figure: five plots omitted. Panels (a) 0.64 Mbps, (b) 1.28 Mbps, (c) 2.56 Mbps, (d) 5.12 Mbps, and (e) 10.24 Mbps channel; axes: Latency (ms) vs. Offered load (%); curves for 64, 128, 512, 1024, and 1500 byte packets.]

Figure A-18: Load test: Latency across different channel widths for Broadcom-based CM, concatenation and piggybacking enabled
Throughput:
[Figure: five plots omitted. Panels (a) 0.64 Mbps, (b) 1.28 Mbps, (c) 2.56 Mbps, (d) 5.12 Mbps, and (e) 10.24 Mbps channel; axes: Throughput (Mbps) vs. Offered load (%); curves for 64, 128, 512, 1024, and 1500 byte packets.]

Figure A-19: Load test: Throughput across different channel widths for Broadcom-based CM, concatenation and piggybacking enabled
Normalized throughput (channel utilization):
[Figure: five plots omitted. Panels (a) 64, (b) 128, (c) 512, (d) 1024, and (e) 1500 byte packet; axes: Normalized Throughput (%) vs. Offered load (%); curves for the 0.64, 1.28, 2.56, 5.12, and 10.24 Mbps channels.]

Figure A-20: Load test: Normalized Throughput across different packet lengths for Broadcom-based CM, concatenation and piggybacking enabled
Appendix B
CM configuration
A basic configuration file with the primary upstream and downstream service flows was used for the tests. The performance enhancers were controlled by the Request/Transmission Policy of the upstream service flow.
Maximum number of CPEs: 16
Upstream Flow:
Service Flow Reference: 1
QoS Parameter Set: 7
Service Flow Scheduling Type: 2 (Best Effort)
Request/Transmission Policy:
04 for concatenation and piggybacking on
05 for concatenation on, piggybacking off
06 for piggybacking on, concatenation off
07 for concatenation and piggybacking off
Downstream Flow:
Service Flow Reference: 5
QoS Parameter Set: 7
Privacy Enable: 0
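The mapping between the enabled performance enhancers and the Request/Transmission Policy values in the listing above can be sketched as a small lookup. This is an illustrative helper, not part of the configuration file itself; the values are taken directly from the listing, and their bit-level semantics are CMTS-specific:

```python
# Request/Transmission Policy values as used in the CM configuration
# listing above (illustrative lookup; bit semantics are CMTS-specific).
POLICY = {
    (True,  True):  0x04,  # concatenation and piggybacking on
    (True,  False): 0x05,  # concatenation on, piggybacking off
    (False, True):  0x06,  # piggybacking on, concatenation off
    (False, False): 0x07,  # concatenation and piggybacking off
}

def request_tx_policy(concatenation: bool, piggybacking: bool) -> int:
    """Return the policy value for the desired enhancer combination."""
    return POLICY[(concatenation, piggybacking)]
```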
Appendix C
CMTS configuration
The following tables show the CMTS configuration used for the tests.
Parameter Value
frequency 681000000 hertz
width 6000000 hertz
modulation qam256
interleave taps8Increment16
power 510 tenths-of-dBmV
link-trap enabled
Table C.1: CMTS downstream parameters.
Parameter Value
Modulation qpsk
Preamble length 64
Differential Encoding Off
FEC Error Correction 0
FEC Length 16
Scrambler On
Scrambler Seed 338
Burst Size 0
Guard band size 8
Last Codeword off
Table C.2: Physical layer parameters for REQ.
Parameter                Short data grant    Long data grant
Modulation qpsk qpsk
Preamble length 72 80
Differential Encoding Off Off
FEC Error Correction 5 8
FEC Length 75 220
Scrambler On On
Scrambler Seed 338 338
Burst Size 12 0
Guard band size 8 8
Last Codeword On On
Table C.3: Physical layer parameters for short and long data grants with QPSK.
Parameter                Short data grant    Long data grant
Modulation 16qam 16qam
Preamble length 144 160
Differential Encoding Off Off
FEC Error Correction 6 8
FEC Length 75 220
Scrambler On On
Scrambler Seed 338 338
Burst Size 7 0
Guard band size 8 8
Last Codeword On On
Table C.4: Physical layer parameters for short and long data grants with 16QAM.
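The FEC parameters in Tables C.2 through C.4 are Reed-Solomon settings: the FEC Error Correction value T is the number of correctable byte errors per codeword, and the FEC Length k is the number of information bytes. Assuming the standard Reed-Solomon construction over bytes, each codeword then occupies k + 2T bytes on the wire; a minimal sketch:

```python
# Reed-Solomon codeword size from the burst-profile FEC parameters:
# k information bytes (FEC Length) plus 2*T parity bytes, where T is
# the number of correctable byte errors (FEC Error Correction).
def codeword_bytes(fec_length_k: int, fec_t: int) -> int:
    """Total bytes per codeword, information plus parity."""
    return fec_length_k + 2 * fec_t

# Short data grant (Table C.3): k=75, T=5  -> 85-byte codewords
# Long data grant  (Table C.3): k=220, T=8 -> 236-byte codewords
```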
Parameter Value
frequency 26750000 hertz
width 1600000 hertz
power 80 tenths-of-dBmV
input-power-window 60 tenths-of-dB
modulation-profile 2 (number)
slot-size 8 (6.25 uSec ticks)
start-ranging-backoff 2 (2 to this power)
end-ranging-backoff 5 (2 to this power)
start-tx-backoff 3 (2 to this power)
end-tx-backoff 10 (2 to this power)
minimum-map-size 32 slots
maximum-map-size 2048 slots
contention-per-map 32 mini-slots
request-data-allowed disallowed
max-data-in-contention 80 mini-slots
initial-ranging-interval 2000 microseconds
high-priority-threshold 75 (number)
guaranteed-threshold 100 percent
link-trap enabled
Table C.5: Example of CMTS upstream channel configuration used for the channel width of 1600 kHz.
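As a sanity check on the channel rates quoted throughout the load tests, the raw upstream bit rate follows from the channel width and modulation order. The sketch below assumes the DOCSIS convention that the symbol rate equals the channel width divided by 1.25 (25% excess bandwidth), and that a mini-slot spans slot-size ticks of 6.25 microseconds each, as in Table C.5:

```python
# Raw (pre-overhead) upstream rate and mini-slot capacity, assuming the
# DOCSIS convention: symbol rate = channel width / 1.25.
BITS_PER_SYMBOL = {"qpsk": 2, "16qam": 4}

def raw_rate_mbps(width_hz: int, modulation: str) -> float:
    """Raw upstream bit rate in Mbps for a given channel width."""
    symbol_rate = width_hz / 1.25          # symbols per second
    return symbol_rate * BITS_PER_SYMBOL[modulation] / 1e6

def minislot_bytes(width_hz: int, modulation: str, slot_size_ticks: int) -> float:
    """Bytes carried by one mini-slot of slot_size_ticks 6.25 us ticks."""
    symbol_rate = width_hz / 1.25
    symbols = symbol_rate * slot_size_ticks * 6.25e-6
    return symbols * BITS_PER_SYMBOL[modulation] / 8

# For the 1600 kHz channel of Table C.5 with QPSK and slot-size 8:
# raw_rate_mbps(1_600_000, "qpsk")      -> 2.56 (the "2.56 Mbps channel")
# minislot_bytes(1_600_000, "qpsk", 8)  -> 16.0 bytes per mini-slot
```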