Shashank Agnihotri Computer Networks – Page 1
Queueing theory
Queueing theory is the mathematical study of waiting lines, or queues. The theory enables
mathematical analysis of several related processes, including arriving at the (back of the) queue,
waiting in the queue (essentially a storage process), and being served at the front of the queue.
The theory permits the derivation and calculation of several performance measures including the
average waiting time in the queue or the system, the expected number waiting or receiving
service, and the probability of encountering the system in certain states, such as empty, full,
having an available server or having to wait a certain time to be served.
Queueing theory has applications in diverse fields,[1] including telecommunications,[2] traffic
engineering, computing[3] and the design of factories, shops, offices and hospitals.[4]
Overview
The word queue comes, via French, from the Latin cauda, meaning tail. The spelling "queueing"
over "queuing" is typically encountered in the academic research field. In fact, one of the flagship
journals of the profession is named "Queueing Systems".
Queueing theory is generally considered a branch of operations research because the results
are often used when making business decisions about the resources needed to provide service.
It is applicable in a wide variety of situations that may be encountered in business, commerce,
industry, healthcare,[5] public service and engineering. Applications are frequently encountered
in customer service situations as well as transport and telecommunication. Queueing theory is
directly applicable to intelligent transportation systems, call centers, PABXs, networks, telecommunications, server queueing, mainframe computer queueing of telecommunications terminals, advanced telecommunications systems, and traffic flow.
Notation for describing the characteristics of a queueing model was first suggested by David G.
Kendall in 1953. Kendall's notation introduced an A/B/C queueing notation that can be found in
all standard modern works on queueing theory, for example, Tijms.[6]
The A/B/C notation designates a queueing system having A as interarrival time distribution, B as
service time distribution, and C as number of servers. For example, "G/D/1" would indicate a
General (may be anything) arrival process, a Deterministic (constant time) service process and a
single server. More details on this notation are given in the article about queueing models.
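For the common M/M/1 case in Kendall's notation (Markovian arrivals, Markovian service, one server), the usual performance measures have closed forms. The sketch below is illustrative only; the arrival rate lam and service rate mu are made-up values, not figures from these notes.

```python
# Sketch: closed-form performance measures for an M/M/1 queue
# (Markovian arrivals, Markovian service times, a single server).

def mm1_metrics(lam, mu):
    """Return utilization, mean queue lengths, and mean waits for M/M/1."""
    assert lam < mu, "queue is unstable unless arrival rate < service rate"
    rho = lam / mu                  # server utilization
    L = rho / (1 - rho)             # mean number in the system
    Lq = rho ** 2 / (1 - rho)       # mean number waiting in the queue
    W = 1 / (mu - lam)              # mean time in the system
    Wq = rho / (mu - lam)           # mean waiting time in the queue
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

m = mm1_metrics(lam=2.0, mu=5.0)   # 2 arrivals/s, 5 services/s (made up)
print(m)  # rho = 0.4, W = 1/3 s, L = 2/3
```

Note how each measure follows from the utilization rho; the formulas break down as lam approaches mu, which is why the stability assertion is included.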
History
Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange,
published the first paper on queueing theory in 1909.[7]
Protocol
In information technology, a protocol is the special set of rules that end points in a telecommunication connection use when they communicate. Protocols specify interactions between the communicating entities.
Protocols exist at several levels in a telecommunication connection. For example, there are protocols for the data interchange at the hardware device level and protocols for data interchange at the application program level. In the standard model known as Open Systems Interconnection (OSI), there are one or more protocols at each layer in the telecommunication exchange that both ends of the exchange must recognize and observe. Protocols are often described in an industry or international standard.
The TCP/IP Internet protocols, a common example, consist of:
Transmission Control Protocol (TCP), which uses a set of rules to exchange messages
with other Internet points at the information packet level
Internet Protocol (IP), which uses a set of rules to send and receive messages at the
Internet address level
Additional protocols that include the Hypertext Transfer Protocol (HTTP) and File Transfer
Protocol (FTP), each with defined sets of rules to use with corresponding programs elsewhere
on the Internet. There are many other Internet protocols, such as the Border Gateway Protocol (BGP) and the Dynamic Host Configuration Protocol (DHCP).
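To see TCP's "rules to exchange messages" from the application's point of view, the short sketch below opens a TCP connection over the loopback interface using Python's standard socket module. The echo server and the message are purely illustrative; the point is that TCP delivers the bytes reliably and in order.

```python
# Sketch: a minimal TCP exchange over loopback, using the standard
# socket API. Illustrative only; the echo behavior is made up.
import socket
import threading

def echo_server(sock):
    conn, _ = sock.accept()         # completes the client's TCP handshake
    with conn:
        data = conn.recv(1024)      # TCP delivers the bytes in order
        conn.sendall(data.upper())  # echo back, transformed

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello via tcp")
    reply = client.recv(1024)
print(reply)  # b'HELLO VIA TCP'
```

IP addressing (the "Internet address level") appears here only as the `127.0.0.1` loopback address; everything else is handled below the application by the TCP/IP stack.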
The word protocol comes from the Greek protokollon, meaning a leaf of paper glued to a manuscript volume that describes the contents.
Layer 6: presentation layer
This layer provides independence from data representation (e.g., encryption) by translating
between application and network formats. The presentation layer transforms data into the form
that the application accepts. This layer formats and encrypts data to be sent across a network. It
is sometimes called the syntax layer.[5]
The original presentation structure used the basic encoding rules of Abstract Syntax Notation
One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded
file, or serialization of objects and other data structures from and to XML.
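As a small illustration of this kind of representation translation, Python's standard codecs happen to include "cp500", one of the EBCDIC code pages, so an EBCDIC-to-ASCII conversion of the sort mentioned above can be sketched directly (the sample text is made up):

```python
# Sketch: presentation-layer style translation between encodings,
# converting EBCDIC-encoded text (codec "cp500") to ASCII.
ebcdic_bytes = "HELLO".encode("cp500")    # EBCDIC representation
print(ebcdic_bytes)                       # byte values differ from ASCII

text = ebcdic_bytes.decode("cp500")       # back to an abstract string
ascii_bytes = text.encode("ascii")        # re-encode for an ASCII host
print(ascii_bytes)  # b'HELLO'
```

The application on each side sees only the abstract text "HELLO"; the encoding difference is hidden by the translation step, which is exactly the presentation layer's job.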
Layer 7: application layer
The application layer is the OSI layer closest to the end user, which means that both the OSI
application layer and the user interact directly with the software application. This layer interacts
with software applications that implement a communicating component. Such application
programs fall outside the scope of the OSI model. Application-layer functions typically include
identifying communication partners, determining resource availability, and synchronizing
communication. When identifying communication partners, the application layer determines the
identity and availability of communication partners for an application with data to transmit. When
determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist. In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer. Some examples of application-layer implementations include:
On OSI stack:
FTAM (File Transfer, Access and Management) protocol
X.400 Mail
Common management information protocol (CMIP)
On TCP/IP stack:
Hypertext Transfer Protocol (HTTP),
File Transfer Protocol (FTP),
Simple Mail Transfer Protocol (SMTP)
Simple Network Management Protocol (SNMP).
Cross-layer functions
There are some functions or services that are not tied to a given layer, but they can affect more than one layer.
The second-lowest layer (layer 2) in the OSI Reference Model stack is the data link layer, often abbreviated “DLL” (though that abbreviation has other meanings as well in the computer world). The data link layer, also sometimes just called the link layer, is where many wired and wireless local area networking (LAN) technologies primarily function. For example, Ethernet, Token Ring, FDDI and 802.11 (“wireless Ethernet” or “Wi-Fi”) are all sometimes called “data link layer technologies”. The set of devices connected at the data link layer is what is commonly considered a simple “network”, as opposed to an internetwork.
Data Link Layer Sublayers: Logical Link Control (LLC) and Media Access Control (MAC)
The data link layer is often conceptually divided into two sublayers: logical link control (LLC) and media access control (MAC). This split is based on the architecture used in the IEEE 802 Project, which is the IEEE working group responsible for creating the standards that define many networking technologies (including all of the ones I mentioned above except FDDI). By separating LLC and MAC functions, interoperability of different network technologies is made easier, as explained in our earlier discussion of networking model concepts.
Data Link Layer Functions
The following are the key tasks performed at the data link layer:
o Logical Link Control (LLC): Logical link control refers to the functions required for the establishment and control of logical links between local devices on a network. As mentioned above, this is usually considered a DLL sublayer; it provides services to the network layer above it and hides the rest of the details of the data link layer to allow different technologies to work seamlessly with the higher layers. Most local area networking technologies use the IEEE 802.2 LLC protocol.
o Media Access Control (MAC): This refers to the procedures used by devices to control access to the network medium. Since many networks use a shared medium (such as a single network cable, or a series of cables that are electrically connected into a single virtual medium) it is necessary to have rules for managing the medium to avoid conflicts. For example, Ethernet uses the CSMA/CD method of media access control, while Token Ring uses token passing.
o Data Framing: The data link layer is responsible for the final encapsulation of higher-level messages into frames that are sent over the network at the physical layer.
o Addressing: The data link layer is the lowest layer in the OSI model that is concerned with addressing: labeling information with a particular destination location. Each device on a network has a unique number, usually called a hardware address or MAC address, that is used by the data link layer protocol to ensure that data intended for a specific machine gets to it properly.
o Error Detection and Handling: The data link layer handles errors that occur at the lower levels of the network stack. For example, a cyclic redundancy check (CRC) field is often employed to allow the station receiving data to detect if it was received correctly.
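The CRC-based error detection mentioned above can be sketched as follows. This example uses the standard CRC-32 from Python's zlib module as a stand-in for a frame's check field; the frame layout (payload followed by a 4-byte check field) is invented for illustration, not taken from any real data link protocol.

```python
# Sketch: CRC-based error detection, in the style of a data link
# layer frame check sequence. Frame format here is illustrative.
import zlib

def make_frame(payload: bytes) -> bytes:
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")   # append 32-bit check field

def check_frame(frame: bytes) -> bool:
    payload, received = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == received

frame = make_frame(b"data link payload")
print(check_frame(frame))          # True: frame arrived intact

corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip one bit
print(check_frame(corrupted))      # False: error detected
```

A receiving station that computes a mismatching CRC would typically discard the frame and rely on retransmission, which is the "handling" part of error detection and handling.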
Physical Layer Requirements Definition and Network Interconnection Device Layers
As I mentioned in the topic discussing the physical layer, that layer and the data link layer are very closely related. The requirements for the physical layer of a network are often part of the data link layer definition of a particular technology. Certain physical layer hardware and encoding aspects are specified by the DLL technology being used. The best example of this is the Ethernet standard, IEEE 802.3, which specifies not just how Ethernet works at the data link layer, but also its various physical layers.
Since the data link layer and physical layer are so closely related, many types of hardware are associated with the data link layer. Network interface cards (NICs) typically implement a specific data link layer technology, so they are often called “Ethernet cards”, “Token Ring cards”, and so on. There are also a number of network interconnection devices that are said to “operate at layer 2”, in whole or in part, because they make decisions about what to do with data they receive by looking at data link layer frames. These devices include most bridges, switches and routers, though the latter two also encompass functions performed by layer three.
Some of the most popular technologies and protocols generally associated with layer 2 are Ethernet, Token Ring, FDDI (plus CDDI), HomePNA, IEEE 802.11, ATM, and TCP/IP's Serial Link Interface Protocol (SLIP) and Point-To-Point Protocol (PPP).
Key Concept: The second OSI Reference Model layer is the data link layer. This is the place where most LAN and wireless LAN technologies are defined. Layer two is responsible for logical link control, media access control, hardware addressing, error
detection and handling, and defining physical layer standards. It is often divided into the logical link control (LLC) and media access control (MAC) sublayers, based on the IEEE 802 Project that uses that architecture.
The Data-Link layer is the protocol layer in a program that handles the moving of data in and out
across a physical link in a network. The Data-Link layer is layer 2 in the Open Systems
Interconnect (OSI) model for a set of telecommunication protocols.
The Data-Link layer contains two sublayers that are described in the IEEE-802 LAN standards:
Media Access Control (MAC)
Logical Link Control (LLC)
The Data-Link layer ensures that an initial connection has been set up, divides output data into
data frames, and handles the acknowledgements from a receiver that the data arrived
successfully. It also ensures that incoming data has been received successfully by analyzing bit patterns at special places in the frames.
The lowest layer of the OSI Reference Model is layer 1, the physical layer; it is commonly abbreviated “PHY”. The physical layer is special compared to the other layers of the model, because it is the only one where data is physically moved across the network interface. All of the other layers perform useful functions to create messages to be sent, but they must all be transmitted down the protocol stack to the physical layer, where they are actually sent out over the network.
Note: The physical layer is also “special” in that it is the only layer that really does not apply specifically to TCP/IP. Even in studying TCP/IP, however, it is still important to understand its significance and role in relation to the other layers where TCP/IP protocols reside.
Understanding the Role of the Physical Layer
The name “physical layer” can be a bit problematic. Because of that name, and because of what I just said about the physical layer actually transmitting data, many people who study networking get the impression that the physical layer is only about actual network hardware. Some people may say the physical layer is “the network interface cards and cables”. This is not actually the case, however. The physical layer defines a number of network functions, not just hardware cables and cards.
A related notion is that “all network hardware belongs to the physical layer”. Again, this isn't strictly accurate. All hardware must have some relation to the physical layer in order to send data over the network, but hardware devices generally implement multiple layers of the OSI model, including the physical layer but also others. For example, an Ethernet network interface card performs functions at both the physical layer and the data link layer.
Physical Layer Functions
The following are the main responsibilities of the physical layer in the OSI Reference Model:
o Definition of Hardware Specifications: The details of operation of cables, connectors, wireless radio transceivers, network interface cards and other hardware devices are generally a function of the physical layer (although also partially the data link layer; see below).
o Encoding and Signaling: The physical layer is responsible for various encoding and signaling functions that transform the data from bits that reside within a computer or other device into signals that can be sent over the network.
o Data Transmission and Reception: After encoding the data appropriately, the physical layer actually transmits the data, and of course, receives it. Note that this applies equally to wired and wireless networks, even if there is no tangible cable in a wireless network!
o Topology and Physical Network Design: The physical layer is also considered the domain of many hardware-related network design issues, such as LAN and WAN topology.
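As a toy illustration of the encoding and signaling function, the sketch below implements Manchester encoding, the line code used by classic 10 Mbps Ethernet, in which each bit is carried by a mid-bit transition. Modeling the signal as a list of half-bit levels is an assumption made purely for illustration.

```python
# Sketch: a physical-layer line code. Manchester encoding maps each
# bit to a mid-bit transition, letting the receiver recover the clock
# from the signal itself. Signal modeled as half-bit levels (0/1).

def manchester_encode(bits):
    """IEEE 802.3 convention: 0 -> high-low, 1 -> low-high half-bits."""
    signal = []
    for b in bits:
        signal.extend([1, 0] if b == 0 else [0, 1])
    return signal

def manchester_decode(signal):
    return [0 if pair == (1, 0) else 1
            for pair in zip(signal[::2], signal[1::2])]

bits = [1, 0, 1, 1, 0]
line = manchester_encode(bits)
print(line)                             # [0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
print(manchester_decode(line) == bits)  # True
```

The guaranteed transition in every bit period is what makes the encoding self-clocking, at the cost of doubling the signaling rate relative to the data rate.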
In general, then, physical layer technologies are ones that are at the very lowest level and deal with the actual ones and zeroes that are sent over the network. For example, when considering network interconnection devices, the simplest ones operate at the physical layer: repeaters, conventional hubs and transceivers. These devices have absolutely no knowledge of the contents of a message. They just take input bits and send them as output. Devices like switches and routers operate at higher layers and look at the data they receive as being more than voltage or light pulses that represent one or zero.
Relationship Between the Physical Layer and Data Link Layer
It's important to point out that while the physical layer of a network technology primarily defines the hardware it uses, the physical layer is closely related to the data link layer. Thus, it is not generally possible to define hardware at the physical layer “independently” of the technology being used at the data link layer. For example, Ethernet is a technology that describes specific types of cables and network hardware, but the physical layer of Ethernet can only be isolated from its data link layer aspects to a point. While Ethernet cables are “physical layer”, for example, their maximum length is related closely to message format rules that exist at the data link layer.
Furthermore, some technologies perform functions at the physical layer that are normally more closely associated with the data link layer. For example, it is common to have the physical layer perform low-level (bit level) repackaging of data link layer frames for transmission. Error detection and correction may also be done at layer 1 in some cases. Most people would consider these “layer two functions”.
In many technologies, a number of physical layers can be used with a data link layer. Again here, the classic example is Ethernet, where dozens of different physical layer implementations exist, each of which uses the same data link layer (possibly with slight variations.)
Physical Layer Sublayers
Finally, many technologies further subdivide the physical layer into sublayers. In order to increase performance, physical layer encoding and transmission methods have become more complex over time. The physical layer may be broken into layers to allow different network media to be supported by the same technology, while sharing other functions at the physical layer that are common between the various media. A good example of this is the physical layer architecture used for Fast Ethernet, Gigabit Ethernet and 10-Gigabit Ethernet.
Note: In some contexts, the physical layer technology used to convey bits across a network or communications line is called a transport method. Don't confuse this with the functions of the OSI transport layer (layer 4).
Key Concept: The lowest layer in the OSI Reference Model is the physical layer. It is the realm of networking hardware specifications, and is the place where technologies reside that perform data encoding, signaling, transmission and reception functions.
This unit provides the reader with the necessary theory for understanding the Medium Access Control (MAC) sublayer of the data link layer.
After completion of this unit you will be able to:
· Define LAN and MAN
· Describe the channel allocation mechanisms used in various LANs and MANs
· Describe ALOHA protocols
· Compare and Contrast various LAN protocols
· Explain various IEEE standards for LANs
3.1 LAN and WAN
i) Static Channel Allocation in LAN and MAN
ii) Dynamic Channel Allocation in LAN and MAN
Because the data link layer is overloaded, it is split into the MAC and LLC sublayers. The MAC sublayer is the bottom part of the data link layer. Medium access control is often used as a synonym for multiple access protocol, since the MAC sublayer provides the protocol and control mechanisms required for a particular channel access method. This unit deals with broadcast networks and their protocols.
In any broadcast network, the key issue is how to determine who gets to use the channel when there is competition for it. When only a single channel is available, determining who should get access to the channel for transmission is a complex task. Many protocols for solving the problem are known, and they form the contents of this unit.
This unit provides an insight into the channel access control mechanisms that make it possible for several terminals or network nodes to communicate within a multipoint network. The MAC layer is especially important in local area networks (LANs), many of which use a multi-access channel as the basis for communication. WANs, in contrast, use point-to-point links.
To get a head start, let us define LANs and MANs.
Definition: A Local Area Network (LAN) is a network of systems spread over small geographical area, for example a network of computers within a building or small campus.
The owner of a LAN may be the same organization within which the LAN is set up. A LAN has higher data rates, on the scale of Mbps (the rate at which data are transferred from one system to another), because the systems spanned are in close proximity.
Definition: A WAN (Wide Area Network) typically spans a set of countries, with data rates below 1 Mbps because of the distances involved.
WANs may be owned by multiple organizations, since the spanned distance stretches across several countries.
i) Static Channel Allocation in LAN and MAN
Before going for the exact theory behind the methods of channel allocations, we need to understand the base behind this theory, which is given below:
The channel allocation problem
We can classify channels as static and dynamic. A channel is static when the number of users is stable and the traffic is not bursty. When the number of users on the channel keeps varying, the channel is considered dynamic, and the traffic on such channels also keeps varying. For example, in most computer systems the data traffic is extremely bursty; peak-traffic-to-mean-traffic ratios of 1000:1 are common.
· Static channel allocation
The usual way of allocating a single channel among multiple users is frequency division multiplexing (FDM). If there are N users, the bandwidth is split into N equal-sized portions. FDM is a simple and efficient technique for a small, fixed number of users. However, when the number of senders is large and continuously varying, or the traffic is bursty, FDM is not suitable.
The same arguments that apply to FDM also apply to TDM. Thus none of the static channel allocation methods works well with bursty traffic, so we now explore dynamic channel allocation.
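The textbook argument against static FDM can be made concrete with a small calculation. Using the standard queueing-delay result that a channel of capacity C bps, with frame arrival rate lam frames/s and mean frame length 1/mu bits, has mean delay T = 1/(mu*C - lam), splitting the channel into N FDM subchannels multiplies the mean delay by N. The numbers below are illustrative, not from these notes.

```python
# Sketch: why static FDM performs poorly. Each FDM subchannel gets
# capacity C/N and arrival rate lam/N, so the mean delay grows N-fold.

def mean_delay_single(mu, C, lam):
    return 1 / (mu * C - lam)

def mean_delay_fdm(mu, C, lam, N):
    # each subchannel: capacity C/N, arrival rate lam/N
    return 1 / (mu * (C / N) - lam / N)

# illustrative numbers: 10,000-bit frames, 100 kbps link, 5 frames/s
mu, C, lam, N = 1 / 10_000, 100_000, 5.0, 10
T = mean_delay_single(mu, C, lam)
T_fdm = mean_delay_fdm(mu, C, lam, N)
print(round(T, 6), round(T_fdm, 6), round(T_fdm / T, 6))  # 0.2 2.0 10.0
```

Algebraically, 1/(mu*C/N - lam/N) = N/(mu*C - lam) = N*T, which is why dedicating a fixed slice of bandwidth to each bursty user is wasteful compared with sharing the whole channel.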
· Dynamic channel allocation in LANs and MANs
Before discussing the channel allocation problem, that is, the multiple access methods, we list the assumptions we make so that the analysis becomes simple.
Assumptions:
1. The Station Model:
The model consists of N independent stations (users). Stations are sometimes called terminals. The probability of a frame being generated in an interval of length Δt is λΔt, where λ is a constant defining the arrival rate of new frames. Once a frame has been generated, the station is blocked and does nothing until the frame has been successfully transmitted.
2. Single Channel Assumption:
A single channel is available for all communication. All stations can transmit on this channel, and all can receive from it. As far as the hardware is concerned, all stations are equivalent, although the software or protocols used may assign priorities to them.
3. Collisions:
If two frames are transmitted simultaneously, they overlap in time and the resulting signal is distorted or garbled. This event is called a collision. We assume that all stations can detect collisions. A collided frame must be retransmitted later. Here we consider no errors other than those caused by collisions.
4. Continuous Time
By the continuous time assumption we mean that frame transmission on the channel can begin at any instant of time. There is no master clock dividing time into discrete intervals.
5. Slotted Time
Under the slotted time assumption, time is divided into discrete slots or intervals, and frame transmission on the channel can begin only at the start of a slot. A slot may contain 0, 1, or more frames, corresponding to an idle slot, a successful transmission, or a collision, respectively.
6. Carrier Sense
With this facility, stations can sense the channel, i.e. they can tell whether the channel is in use before trying to use it. If the channel is sensed busy, no station will attempt to transmit until it goes idle.
7. No Carrier Sense:
This assumption implies that the carrier sense facility is not available: stations cannot tell whether the channel is in use before trying to use it. They just go ahead and transmit; only after transmitting a frame do they determine whether the transmission was successful.
The first assumption states that stations are independent and that work is generated at a constant rate. It also assumes that each station has only one program or user, so when the station is blocked no new work is generated. The single-channel assumption is the heart of this station model and of this unit. The collision assumption is also basic. Two alternative assumptions about time are discussed; for a given system only one of them holds, i.e. the channel is either continuous-time or slotted-time. Likewise, a channel either can or cannot be sensed by the stations. Generally, stations on LANs can sense the channel, but wireless networks cannot sense it effectively. Stations on wired carrier sense networks can also terminate their transmission prematurely if they detect a collision, but in wireless networks collision detection is rarely done.
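The station-model assumption that a frame is generated in an interval of length Δt with probability λΔt can be checked with a short simulation; over many small intervals the empirical arrival rate approaches λ, as a Poisson process would. The rate, step size and duration below are made-up values.

```python
# Sketch: simulating the station-model arrival assumption. In each
# interval of length dt a frame is generated with probability lam*dt.
import random

random.seed(42)
lam, dt, T = 2.0, 0.001, 1000.0   # 2 frames/s, 1 ms steps, 1000 s total
steps = int(T / dt)
arrivals = sum(1 for _ in range(steps) if random.random() < lam * dt)
print(arrivals / T)   # empirical rate, close to lam = 2.0
```

This Bernoulli-per-interval model is exactly the limit construction of a Poisson process: as dt shrinks, the number of arrivals in any window of length t tends to a Poisson random variable with mean λt.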
3.2 ALOHA Protocols
In the 1970s, Norman Abramson and his colleagues at the University of Hawaii devised a new and elegant method to solve the channel allocation problem. Their work has been extended by many researchers since then. The work is called the ALOHA system, and it uses ground-based radio broadcasting. The basic idea is applicable to any system in which uncoordinated users are competing for the use of a shared channel.
Pure or Un-slotted Aloha
The ALOHA network was created at the University of Hawaii in 1970 under the leadership of Norman Abramson. The ALOHA protocol is an OSI layer 2 protocol for LANs with a broadcast topology. Its basic rule is simple:
· If the message collides with another transmission, try resending it later
Figure 3.1: Pure ALOHA
Figure 3.2: Vulnerable period for a frame
A user is assumed to be always in one of two states: typing or waiting. The station transmits a frame and checks the channel to see whether the transmission was successful. If so, the user sees the reply and continues to type. If not, the user waits while the station retransmits the frame over and over until it has been successfully sent.
Let the frame time denote the amount of time needed to transmit the standard fixed-length frame. We assume that there is an infinite population of users generating new frames according to a Poisson distribution with mean N frames per frame time.
· If N > 1, the users are generating frames at a higher rate than the channel can handle, and nearly every frame will suffer a collision.
· Hence a reasonable operating range is 0 < N < 1.
· Because of collisions, retransmitted frames add to the channel load on top of the new frames.
Let us consider the probability of k transmission attempts per frame time. Transmissions here include both new frames and retransmitted frames. This total traffic is also Poisson, with mean G per frame time, where G ≥ N.
· At low load (N ≈ 0) there will be few collisions and hence few retransmissions, so G ≈ N.
· At high load (N >> 1) there are many retransmissions, so G > N.
· Under all loads, the throughput S is just the offered load G times the probability P0 of a successful transmission:
S = G · P0
The probability that k frames are generated during a given frame time is given by the Poisson distribution:
P[k] = (G^k · e^-G) / k!
So the probability of zero frames is just e^-G. For pure ALOHA the vulnerable period is two frame times, so the number of arrivals follows a Poisson distribution with mean 2G per two frame times; the lambda parameter therefore becomes 2G.
Hence P0 = e^-2G
Hence the throughput S = G · P0 = G · e^-2G
The maximum occurs at G = 0.5, giving S = 1/(2e) ≈ 0.184, i.e. a maximum throughput of 18.4%.
Pure Aloha had a maximum throughput of about 18.4%. This means that about 81.6% of the total available bandwidth was essentially wasted due to losses from packet collisions.
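The pure ALOHA throughput formula S = G·e^(-2G) can be verified numerically. The sketch below scans offered loads G over a grid and confirms that the throughput peaks at G = 0.5 with S ≈ 0.184:

```python
# Sketch: locating the maximum of the pure-ALOHA throughput curve
# S = G * e^(-2G) by scanning offered loads on a fine grid.
import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)

best_G = max((g / 1000 for g in range(1, 3001)),
             key=pure_aloha_throughput)
print(round(best_G, 3))                          # 0.5
print(round(pure_aloha_throughput(best_G), 3))   # 0.184
```

Calculus gives the same answer: dS/dG = (1 - 2G)·e^(-2G) vanishes at G = 0.5, where S = 1/(2e).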
Slotted ALOHA
An improvement to the original ALOHA protocol was slotted ALOHA. In 1972, Roberts published a method to double the throughput of pure ALOHA by using discrete time slots. His proposal was to divide time into discrete slots, each corresponding to one frame time. This approach requires the users to agree on the slot boundaries. To achieve synchronization, one special station emits a pip at the start of each interval, like a clock. The capacity of slotted ALOHA thus increases to a maximum throughput of 36.8%.
The throughput for the pure and slotted ALOHA systems is shown in figure 3.3. A station can send only at the beginning of a time slot, so the vulnerable period is reduced to one frame time and collisions are reduced. In this case the average number of aggregate arrivals is G per frame time, so the lambda parameter becomes G and S = G · e^-G. The maximum throughput is reached at G = 1.
Figure 3.3: Throughput versus offered load traffic
With slotted ALOHA, a centralized clock sends out small clock-tick packets to the outlying stations, which are allowed to send their packets immediately after receiving a clock tick. If there is only one station with a packet to send, this guarantees that there will never be a collision for that packet. On the other hand, if there are two stations with packets to send, this algorithm guarantees a collision, and the whole of the slot period up to the next clock tick is wasted. With some mathematics, it can be shown that this protocol improves overall channel utilization by halving the vulnerable period and thus the probability of collisions.
It should be noted that ALOHA's characteristics are still not much different from those experienced today by Wi-Fi and similar contention-based systems that have no carrier sense capability. There is a certain amount of inherent inefficiency in these systems, and it is typical to see their throughput break down significantly as the number of users and message burstiness increase. For these reasons, applications that need highly deterministic load behavior often use token-passing schemes (such as token ring) instead of contention systems.
For instance, ARCNET is very popular in embedded applications. Nonetheless, contention-based systems also have significant advantages, including ease of management and speed of initial communication. Slotted ALOHA is used on low-bandwidth tactical satellite communications networks by the US military, on subscriber-based satellite communications networks, and in contactless RFID technologies.
3.3 LAN Protocols
With slotted ALOHA, the best channel utilization that can be achieved is 1/e. This is hardly surprising, since with stations transmitting at will, without paying attention to what other stations are doing, there are bound to be many collisions. In LANs, however, it is possible for a station to detect what the other stations are doing and to adapt its behavior accordingly. Such networks can achieve a better utilization than 1/e.
CSMA Protocols:
Protocols in which stations listen for a carrier (a transmission) and act accordingly are
called Carrier Sense Protocols."Multiple Access" describes the fact that multiple nodes send and receive on the medium. Transmissions by one node are generally received by all other nodes using the medium. Carrier Sense Multiple Access (CSMA) is a probabilistic Media Access Control (MAC) protocol in which a node verifies the absence of other traffic before transmitting on a shared physical medium, such as an electrical bus, or a band of electromagnetic spectrum.
The following three protocols discuss the various implementations of the above discussed concepts:
i) Protocol 1. 1-persistent CSMA:
When a station has data to send, it first listens to the channel to see if anyone else is transmitting. If the channel is busy, the station waits until it becomes idle. When the station detects an idle channel, it transmits a frame. If a collision occurs, the station waits a random amount of time and starts over.
The protocol is so called because the station transmits with a probability of 1 whenever it finds the channel idle.
ii) Protocol 2. Non-persistent CSMA:
In this protocol, a conscious attempt is made to be less greedy than in the 1-persistent CSMA protocol. Before sending a station senses the channel. If no one else is sending, the station begins doing so itself. However, if the channel is already in use, the station does not continuously sense the channel for the purpose of seizing it immediately upon detecting the end of previous transmission. Instead, it waits for a random period of time and then repeats the algorithm. Intuitively, this algorithm should lead to better channel utilization and longer delays than 1-persistent CSMA.
iii) Protocol 3. p - persistent CSMA
It applies to slotted channels and the working of this protocol is given below:
When a station becomes ready to send, it senses the channel. If it is idle, it transmits with a probability p. With a probability of q = 1 – p, it defers until the next slot. If that slot is also idle, it either transmits or defers again, with probabilities p and q. This process is repeated until either the frame has been transmitted or another station has begun transmitting. In the latter case, it acts as if there had been a collision. If the station initially senses the channel busy, it waits until the next slot and applies the above algorithm.
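The per-slot decision of p-persistent CSMA can be sketched as follows (a simplified model; the function name and return values are illustrative):

```python
import random

def p_persistent_decision(channel_idle, p, rng=random.random):
    """One slot's decision for a ready station under p-persistent CSMA:
    - channel busy: wait until the next slot and sense again
    - channel idle: transmit with probability p, defer with q = 1 - p
    """
    if not channel_idle:
        return "wait"
    return "transmit" if rng() < p else "defer"

# Force the random draw to see both outcomes:
print(p_persistent_decision(True, 0.3, rng=lambda: 0.1))  # transmit
print(p_persistent_decision(True, 0.3, rng=lambda: 0.9))  # defer
```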
CSMA/CD Protocol
In computer networking, Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a network control protocol in which a carrier sensing scheme is used. A transmitting data station that detects another signal while transmitting a frame, stops transmitting that frame, transmits a jam
signal, and then waits for a random time interval. The random time interval also known as "backoff delay" is determined using the truncated binary exponential backoff algorithm. This delay is used before trying to send that frame again. CSMA/CD is a modification of pure Carrier Sense Multiple Access (CSMA).
Collision detection is used to improve CSMA performance by terminating transmission as soon as a collision is detected, reducing the probability of a second collision on retry. Methods for collision detection are media dependent, but on an electrical bus such as Ethernet, collisions can be detected by comparing transmitted data with received data. If they differ, another transmitter is overlaying the first transmitter's signal (a collision), and transmission terminates immediately. Here the collision recovery algorithm is simply a binary exponential backoff algorithm that determines the waiting time before retransmission. If the number of collisions for a frame reaches 16, the frame is considered unrecoverable and is dropped.
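The truncated binary exponential backoff mentioned above can be sketched like this (slot counts per the classic Ethernet rules; the function name is illustrative):

```python
import random

def backoff_slots(n_collisions, rng=random.randrange):
    """After the n-th collision, wait a random number of slot times
    chosen uniformly from [0, 2^k - 1], where k = min(n, 10) -- the
    "truncated" part.  After 16 collisions the frame is dropped."""
    if n_collisions >= 16:
        raise RuntimeError("frame dropped: too many collisions")
    k = min(n_collisions, 10)
    return rng(2 ** k)

# With 3 collisions the wait is drawn from 0..7 slots
print(backoff_slots(3, rng=lambda n: n - 1))  # 7 (largest possible draw)
```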
CSMA/CD can be in any one of the following three states, as shown in figure 3.4:
1. Contention period
2. Transmission period
3. Idle period
Figure 3.4: States of CSMA / CD: Contention, Transmission, or Idle
A jam signal is sent which will cause all transmitters to back off by random intervals, reducing the probability of a collision when the first retry is attempted. CSMA/CD is a layer 2 protocol in the OSI model. Ethernet is the classic CSMA/CD protocol.
Collision Free Protocols
Although collisions do not occur with CSMA/CD once a station has unambiguously seized the channel, they can still occur during the contention period. These collisions adversely affect the system performance especially when the cable is long and the frames are short. And also CSMA/CD is not universally applicable. In this section, we examine some protocols that resolve the contention for the channel without any collisions at all, not even during the contention period.
In the protocols to be described, we assume that there exist exactly N stations, each with a unique address from 0 to N-1 “wired” into it. We assume that the propagation delay is negligible.
i) A Bit Map Protocol
In this method, each contention period consists of exactly N slots. If station 0 has a frame to send, it transmits a 1 bit during the zeroth slot. No other station is allowed to transmit during this slot. Regardless of what station 0 is doing, station 1 gets the opportunity to transmit a 1 during slot 1, but only if it has a frame queued. In general, station j may announce that it has a frame to send by inserting a 1 bit into slot j. After all N slots have passed by, each station has complete knowledge of which stations wish to transmit. At that point, they begin transmitting in numerical order.
Since everyone agrees on who goes next, there are never any collisions. After the last ready station has transmitted its frame, an event all stations can monitor, another N-bit contention period begins. If a station becomes ready just after its bit slot has passed by, it is out of luck and must remain silent until every station has had a chance and the bit map has come around again.
Protocols like this in which the desire to transmit is broadcast before the actual transmission are called Reservation Protocols.
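One contention period of the bit-map protocol can be sketched as a toy model (names are illustrative):

```python
def bitmap_contention(queued):
    """One N-slot contention period: station j broadcasts a 1 in slot j
    iff it has a frame queued.  Afterwards every station knows the
    transmission order: the ready stations, in numerical order."""
    bitmap = [1 if q else 0 for q in queued]
    order = [j for j, bit in enumerate(bitmap) if bit]
    return bitmap, order

bitmap, order = bitmap_contention([False, True, False, True, True])
print(bitmap)  # [0, 1, 0, 1, 1]
print(order)   # [1, 3, 4] -- no collisions are possible
```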
ii) Binary Countdown
A problem with the basic bit-map protocol is that the overhead is 1 bit per station, so it does not scale well to networks with thousands of stations. We can do better by using binary station addresses.
A station wanting to use the channel now broadcasts its address as a binary bit string, starting with the high-order bit. All addresses are assumed to be of the same length. The bits in each address position from different stations are Boolean ORed together. This protocol, called binary countdown, was used in Datakit. It implicitly assumes that the transmission delays are negligible, so that all stations see asserted bits essentially simultaneously.
To avoid conflicts, an arbitration rule must be applied: as soon as a station sees that a high-order bit position that is 0 in its address has been overwritten with a 1, it gives up.
Example: suppose stations 0010, 0100, 1001, and 1010 are all trying to get the channel for transmission. In the first bit time they transmit 0, 0, 1, and 1, respectively; ORed together these form a 1. Stations 0010 and 0100 see the 1 and know that a higher-numbered station is competing for the channel, so they give up for the current round. Stations 1001 and 1010 continue.
The next bit is 0, and both stations continue. The next bit is 1, so station 1001 gives up. The winner is station 1010 because it has the highest address. After winning the bidding, it may now transmit a frame, after which another bidding cycle starts.
This protocol has the property that higher numbered stations have a higher priority than lower numbered stations, which may be either good or bad depending on the context.
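The arbitration rule can be sketched directly in code (a toy model; the function name and bit width are illustrative):

```python
def binary_countdown(contenders, width=4):
    """Binary countdown arbitration: stations broadcast their addresses
    MSB-first and the bits are wired-ORed together.  A station drops out
    as soon as it sees a 1 in a position where its own address has a 0."""
    for bit in range(width - 1, -1, -1):
        wired_or = max((a >> bit) & 1 for a in contenders)
        if wired_or:
            contenders = [a for a in contenders if (a >> bit) & 1]
    winner, = contenders  # exactly one survives: the highest address
    return winner

# The example from the text: 0010, 0100, 1001 and 1010 compete
print(format(binary_countdown([0b0010, 0b0100, 0b1001, 0b1010]), "04b"))  # 1010
```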
Shashank Agnihotri Computer Networks – Page 59
iii) Limited Contention Protocols
Until now we have considered two basic strategies for channel acquisition in a cable network: Contention as in CSMA, and collision – free methods. Each strategy can be rated as to how well it does with respect to the two important performance measures, delay at low load, and channel efficiency at high load.
Under conditions of light load, contention (i.e. pure or slotted ALOHA) is preferable due to its low delay. As the load increases, contention becomes increasingly less attractive, because the overhead associated with channel arbitration becomes greater. Just the reverse is true for collision free protocols. At low load, they have high delay, but as the load increases, the channel efficiency improves.
It would be more beneficial if we could combine the best features of contention and collision free protocols and arrive at a protocol that uses contention at low load to provide low delay, but uses a collision free technique at high load to provide good channel efficiency. Such protocols can be called Limited Contention protocols.
iv) Adaptive Tree Walk Protocol
A simple way of performing the necessary channel assignment is to use the algorithm devised by US army for testing soldiers for syphilis during World War II. The Army took a blood sample from N soldiers. A portion of each sample was poured into a single test tube. This mixed sample was then tested for antibodies. If none were found, all the soldiers in the group were declared healthy. If antibodies were present, two new mixed samples were prepared, one from soldiers 1 through N/2 and one from the rest. The process was repeated recursively until the infected soldiers were detected.
For the computerized version of this algorithm, let us assume that stations are arranged as the leaves of a binary tree as shown in figure 3.4 below:
In the first contention slot following a successful frame transmission, slot 0, all stations are permitted to acquire the channel. If one of them does so, fine. If there is a collision, then during slot 1 only stations falling under node 2 in the tree may compete. If one of them acquires the channel, the slot following its frame is reserved for those stations under node 3. If, on the other hand, two or more stations under node 2 want to transmit, there will be a collision during slot 1, in which case it is node 4's turn during slot 2.
In essence, if a collision occurs during slot 0, the entire tree is searched, depth first to locate all ready stations. Each bit slot is associated with some particular node in a tree. If a collision occurs, the search continues recursively with the node’s left and right children. If a bit slot is idle or if only one station transmits in it, the searching of its node can stop because all ready stations have been located.
When the load on the system is heavy, it is hardly worth the effort to dedicate slot 0 to node 1, because that makes sense only in the unlikely event that precisely one station has a frame to send.
At what level in the tree should the search begin? Clearly, the heavier the load, the farther down the tree the search should begin.
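The depth-first search of the adaptive tree walk can be sketched as a recursive probe over an interval of stations (a toy model; names are illustrative):

```python
def tree_walk(ready, lo=0, hi=None, slots=None):
    """Probe the stations in [lo, hi): an idle slot or a single ready
    station ends that probe; a collision recurses into the two halves.
    Returns one outcome per contention slot used."""
    if hi is None:
        hi, slots = len(ready), []
    senders = [i for i in range(lo, hi) if ready[i]]
    if len(senders) == 0:
        slots.append("idle")
    elif len(senders) == 1:
        slots.append(f"success:{senders[0]}")
    else:
        slots.append("collision")
        mid = (lo + hi) // 2
        tree_walk(ready, lo, mid, slots)
        tree_walk(ready, mid, hi, slots)
    return slots

# Stations 0 and 2 ready out of 4: collision in slot 0, then each half succeeds
print(tree_walk([True, False, True, False]))
# ['collision', 'success:0', 'success:2']
```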
3.4 IEEE 802 standards for LANs
IEEE has standardized a number of LANs and MANs under the name IEEE 802. A few of the standards are listed in figure 3.6. The most important of the survivors are 802.3 (Ethernet) and 802.11 (wireless LAN). These two standards have different physical layers and different MAC sublayers but converge on the same logical link control sublayer, so they present the same interface to the network layer.
IEEE No    Name        Title
802.3      Ethernet    CSMA/CD Networks (Ethernet)
802.4                  Token Bus Networks
802.5                  Token Ring Networks
802.6                  Metropolitan Area Networks
802.11     WiFi        Wireless Local Area Networks
802.15.1   Bluetooth   Wireless Personal Area Networks
802.15.4   ZigBee      Wireless Sensor Networks
802.16     WiMAX       Wireless Metropolitan Area Networks
Figure 3.6: List of IEEE 802 Standards for LAN and MAN
Ethernets
Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The methods used show some similarities to radio systems, although there are major differences, such as the fact that it is much easier to detect collisions in a cable broadcast system than a radio broadcast. The common cable providing the communication channel was likened to the ether and it was from this reference the name "Ethernet" was derived.
From this early and comparatively simple concept, Ethernet evolved into the complex networking technology that today powers the vast majority of local computer networks. The coaxial cable was later replaced with point-to-point links connected together by hubs and/or switches in order to reduce installation costs, increase reliability, and enable point-to-point management and troubleshooting. Star LAN was the first step in the evolution of Ethernet from a coaxial cable bus to a hub-managed, twisted-pair network.
Above the physical layer, Ethernet stations communicate by sending each other data packets, small blocks of data that are individually sent and delivered. As with other IEEE 802 LANs, each Ethernet station is given a single 48-bit MAC address, which is used both to specify the destination and the source of each data packet. Network interface cards (NICs) or chips normally do not accept packets addressed to other Ethernet stations. Adapters generally come programmed with a globally unique address, but this can be overridden, either to avoid an address change when an adapter is replaced, or to use locally administered addresses.
Most of the Ethernets in early use ran at a data rate of 10 Mbps. Table 3.1 gives details of the medium used, the maximum segment length and number of nodes per segment supported, along with notes on each variant.
Table 3.1 Different 10Mbps Ethernets used
Name      Cable Type   Max Segment Length   Nodes per Segment   Advantages
10Base5   Thick coax   500 m                100                 Original cable; now obsolete
Fast Ethernet is a collective term for a number of Ethernet standards that carry traffic at the nominal rate of 100 Mbit/s, against the original Ethernet speed of 10 Mbit/s. Of the 100-megabit Ethernet standards, 100BASE-TX is by far the most common and is supported by the vast majority of Ethernet hardware currently produced. Full-duplex fast Ethernet is sometimes referred to as "200 Mbit/s", though this is somewhat misleading, as that level of improvement is achieved only if traffic patterns are symmetrical. Fast Ethernet was introduced in 1995 and remained the fastest version of Ethernet for three years before being superseded by gigabit Ethernet.
A fast Ethernet adaptor can be logically divided into a medium access controller (MAC) which deals with the higher level issues of medium availability and a physical layer interface (PHY). The MAC may be linked to the PHY by a 4 bit 25 MHz synchronous parallel interface known as MII. Repeaters (hubs) are also allowed and connect to multiple PHYs for their different interfaces.
· 100BASE-T is any of several Fast Ethernet standards for twisted pair cables.
· 100BASE-TX (100 Mbit/s over two-pair Cat5 or better cable),
· 100BASE-T4 (100 Mbit/s over four-pair Cat3 or better cable, defunct),
· 100BASE-T2 (100 Mbit/s over two-pair Cat3 or better cable, also defunct).
The segment length for a 100BASE-T cable is limited to 100 metres. Most networks had to be rewired for 100-megabit speed, whether or not they supposedly already had Cat3 or Cat5 cable plants. The vast majority of 100BASE-T implementations and installations use 100BASE-TX.
100BASE-TX is the predominant form of Fast Ethernet, and runs over two pairs of category 5 or above cable. A typical category 5 cable contains 4 pairs and can therefore support two 100BASE-TX links. Each network segment can have a maximum distance of 100 metres. In its typical configuration, 100BASE-TX uses one pair of twisted wires in each direction, providing 100 Mbit/s of throughput in each direction (full-duplex).
The configuration of 100BASE-TX networks is very similar to 10BASE-T. When used to build a local area network, the devices on the network are typically connected to a hub or switch, creating a star network. Alternatively it is possible to connect two devices directly using a crossover cable.
In 100BASE-T2, the data is transmitted over two copper pairs, 4 bits per symbol. First, a 4 bit symbol is expanded into two 3-bit symbols through a non-trivial scrambling procedure based on a
100BASE-FX is a version of Fast Ethernet over optical fiber. It uses two strands of multi-mode optical fiber for receive (RX) and transmit (TX). Maximum length is 400 metres for half-duplex connections or 2 kilometers for full-duplex.
100BASE-SX is a version of Fast Ethernet over optical fiber. It uses two strands of multi-mode optical fiber for receive and transmit. It is a lower cost alternative to using 100BASE-FX, because it uses short wavelength optics which are significantly less expensive than the long wavelength optics used in 100BASE-FX. 100BASE-SX can operate at distances up to 300 meters.
100BASE-BX is a version of Fast Ethernet over a single strand of optical fiber (unlike 100BASE-FX, which uses a pair of fibers). Single-mode fiber is used, along with a special multiplexer which splits the signal into transmit and receive wavelengths.
Gigabit Ethernet
Gigabit Ethernet (GbE or 1 GigE) is a term describing various technologies for transmitting Ethernet packets at a rate of a gigabit per second, as defined by the IEEE 802.3-2005 standard. Half duplex gigabit links connected through hubs are allowed by the specification but in the marketplace full duplex with switches is the norm.
Gigabit Ethernet was the next iteration, increasing the speed to 1000 Mbit/s. The initial standard for gigabit Ethernet was standardized by the IEEE in June 1998 as IEEE 802.3z. 802.3z is commonly referred to as 1000BASE-X (where -X refers to either -CX, -SX, -LX, or -ZX).
IEEE 802.3ab, ratified in 1999, defines gigabit Ethernet transmission over unshielded twisted pair (UTP) category 5, 5e, or 6 cabling and became known as 1000BASE-T. With the ratification of 802.3ab, gigabit Ethernet became a desktop technology, as organizations could utilize their existing copper cabling infrastructure.
Initially, gigabit Ethernet was deployed in high-capacity backbone network links (for instance, on a high-capacity campus network). Fiber gigabit Ethernet has since been overtaken by 10 gigabit Ethernet, which was ratified by the IEEE in 2002 and provides data rates 10 times that of gigabit Ethernet. Work on copper 10 gigabit Ethernet over twisted pair has been completed, but as of July 2006 the only available adapters for 10 gigabit Ethernet over copper required specialized cabling with InfiniBand connectors and were limited to 15 m. The 10GBASE-T standard, however, specifies use of the traditional RJ-45 connectors and a longer maximum cable length. The different gigabit Ethernet variants are listed in table 3.2.
· Preamble:
Each frame starts with a preamble of 8 bytes, each containing the bit pattern “10101010”. The preamble is encoded using Manchester encoding; these bit patterns produce a 10-MHz square wave for 6.4 µsec, allowing the receiver’s clock to synchronize with the sender’s clock.
· Address field
The frame contains two addresses, one for the destination and another for the sender. The length of address field is 6 bytes. The MSB of destination address is ‘0’ for ordinary addresses and ‘1’ for group addresses. Group addresses allow multiple stations to listen to a single address. When a frame is sent to a group of users, all stations in that group receive it. This type of transmission is referred to as multicasting. The address consisting of all ‘1’ bits is reserved for broadcasting.
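The address classification above can be sketched in code. Note that on the wire the group bit is the first bit transmitted, which with Ethernet's bit ordering is the low-order bit of the first octet; the helper name below is illustrative:

```python
def address_type(mac):
    """Classify an Ethernet destination address given as 'aa:bb:cc:dd:ee:ff'.
    The individual/group bit distinguishes ordinary addresses (0) from
    group (multicast) addresses (1); all ones means broadcast."""
    octets = [int(part, 16) for part in mac.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"
    return "multicast" if octets[0] & 1 else "unicast"

print(address_type("ff:ff:ff:ff:ff:ff"))  # broadcast
print(address_type("01:00:5e:00:00:01"))  # multicast
print(address_type("00:1a:2b:3c:4d:5e"))  # unicast
```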
· SOF: This field is 1 byte long and is used to indicate the start of the frame.
· Length:
This field is 2 bytes long. It specifies the length, in bytes, of the data present in the frame. The combination of the SOF and length fields thus marks the end of the frame.
· Data :
The length of this field ranges from zero to a maximum of 1500 bytes. This is where the actual message bits are placed.
· Pad:
When a transceiver detects a collision, it truncates the current frame, which means stray bits and pieces of frames appear on the cable all the time. To make it easier to distinguish valid frames from garbage, Ethernet requires that a valid frame be at least 64 bytes long, from the destination address to the checksum, inclusive. This means the data field must be at least 46 bytes. If there is less data than this to transmit, for example a frame carrying only an acknowledgement, the pad field comes into play: padding is added so that data plus pad together total at least 46 bytes. If the data field is 46 bytes or more, no padding is used.
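The padding rule can be expressed directly (constants per the text; names are illustrative):

```python
MIN_DATA_PLUS_PAD = 46   # so the whole frame is at least 64 bytes
MAX_DATA = 1500

def pad_length(data_len):
    """Bytes of padding required so data + pad totals at least 46 bytes."""
    if not 0 <= data_len <= MAX_DATA:
        raise ValueError("data field must be 0..1500 bytes")
    return max(0, MIN_DATA_PLUS_PAD - data_len)

print(pad_length(0))    # 46 -- a frame with no data is fully padded
print(pad_length(10))   # 36
print(pad_length(100))  # 0  -- no padding once data >= 46 bytes
```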
· Checksum:
It is 4 bytes long and holds a 32-bit hash code (CRC) of the data. If some data bits arrive in error, the checksum will not match and the error will be detected. The CRC method is used only for error detection, not for forward error correction.
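Python's `zlib.crc32` uses the same CRC-32 polynomial as the Ethernet frame check sequence, so the detection idea can be demonstrated directly:

```python
import zlib

payload = b"some frame data"
fcs = zlib.crc32(payload)          # 32-bit checksum appended by the sender

received = b"some frame dbta"      # one corrupted byte in transit
ok = zlib.crc32(received) == fcs   # receiver recomputes and compares
print(ok)  # False -- the error is detected (but not corrected)
```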
IEEE 802.4 Standard - Token Bus
This standard was proposed by Dirvin and Miller in 1986.
In this standard, physically the token bus is a linear or tree-shaped cable onto which the stations are attached. Logically, the stations are organized into a ring, with each station knowing the address of the station to its “left” or “right”. When the logical ring is initialized, the highest numbered station may send the first frame. After it is done, it passes permission to its immediate neighbor by sending the neighbor a special control frame called a token. The token propagates around the logical ring, with only the token holder being permitted to transmit frames. Since only one station at a time holds the token, collisions do not occur.
Note: The physical order in which the stations are connected to the cable is not important.
Since the cable is inherently a broadcast medium, each station receives each frame, discarding those not addressed to it. When a station passes the token, it sends a token frame specifically addressed to its logical neighbor in the ring, irrespective of where the station is physically located on the cable.
Figure 3.8: Token Passing
IEEE 802.5 Standard - Token Ring
A ring is really not a broadcast medium, but a collection of individual point-to-point links that happen to form a circle. Ring engineering is almost entirely digital. A ring is also fair and has a known upper bound on channel access.
A major issue in the design and analysis of any ring network is the “physical length” of a bit. If the data rate of the ring is R Mbps, a bit is emitted every 1/R µsec. With a typical propagation speed of about 200 m/µsec, each bit occupies 200/R meters on the ring. This means, for example, that a 1-Mbps ring whose circumference is 1000 meters can contain only 5 bits on it at once.
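The bit-length arithmetic can be checked with a short calculation (assuming the 200 m/µsec propagation speed from the text):

```python
def bits_on_ring(rate_mbps, circumference_m, speed_m_per_usec=200):
    """Each bit occupies speed/R meters (one bit is emitted every 1/R usec),
    so a ring holds circumference / (speed/R) bits at once."""
    meters_per_bit = speed_m_per_usec / rate_mbps
    return circumference_m / meters_per_bit

# The example from the text: a 1-Mbps, 1000-m ring holds only 5 bits
print(int(bits_on_ring(1, 1000)))  # 5
```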
A ring really consists of a collection of ring interfaces connected by point-to-point lines. Each bit arriving at an interface is copied into a 1-bit buffer and then copied out onto the ring again. While in the buffer, the bit can be inspected and possibly modified before being written out. This copying step introduces a 1-bit delay at each interface.
In a token ring a special bit pattern, called the token, circulates around the ring whenever all stations are idle. When a station wants to transmit a frame, it is required to seize the token and remove it from the ring before transmitting. Since there is only one token, only one station can transmit at a given instant, thus solving the channel access problem the same way the token bus solves it.
3.5 Fiber Optic Networks
Fiber optics is becoming increasingly important, not only for wide area point-to-point links, but also for MANs and LANs. Fiber has high bandwidth, is thin and lightweight, is not affected by electromagnetic interference from heavy machinery, power surges or lightning, and has excellent
security because it is nearly impossible to wiretap without detection.
FDDI (Fiber Distributed Data Interface)
It is a high performance fiber optic token ring LAN running at 100 Mbps over distances up to 200 km with up to 1000 stations connected. It can be used in the same way as any of the 802 LANs, but with its high bandwidth, another common use is as a backbone to connect copper LANs.
FDDI – II is a successor of FDDI modified to handle synchronous circuit switched PCM data for voice or ISDN traffic, in addition to ordinary data.
FDDI uses multimode fibers. It also uses LEDs rather than lasers because FDDI may sometimes be used to connect directly to workstations.
The FDDI cabling consists of two fiber rings, one transmitting clockwise and the other transmitting counter clockwise. If any one breaks, the other can be used as a backup.
FDDI defines two classes of stations A and B. Class A stations connect to both rings. The cheaper class B stations only connect to one of the rings. Depending on how important fault tolerance is, an installation can choose class A or class B stations, or some of each.
S/NET
It is another kind of fiber optic network with an active star for switching. It was designed and implemented at Bell laboratories. The goal of S/NET is very fast switching.
Each computer in the network has two 20-Mbps fibers running to the switch, one for input and one for output. The fibers terminate in a BIB (Bus Interface Board). The CPUs each have an I/O device register that acts like a one-word window into BIB memory. When a word is written to that device register, the interface board in the CPU transmits the bits serially over the fiber to the BIB, where they are reassembled as a word in BIB memory. When the whole frame to be transmitted has been copied to BIB memory, the CPU writes a command to another I/O device register to cause the switch to copy the frame to the memory of the destination BIB and interrupt the destination CPU.
Access to this bus is done by a priority algorithm. Each BIB has a unique priority. When a BIB wants access to the bus it asserts a signal on the bus corresponding to its priority. The requests are recorded and granted in priority order, with one word transferred (16 bits in parallel) at a time. When all requests have been granted, another round of bidding is started and BIBs can again request the bus. No bus cycles are lost to contention, so switching speed is 16 bits every 200 nsec, or 80 Mbps.
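The priority-order granting of bus requests can be sketched as follows (assuming, hypothetically, that a higher BIB number means higher priority; the function name is illustrative):

```python
def arbitration_round(requests):
    """One round of S/NET bus bidding: all pending requests are recorded,
    then each is granted one 16-bit word transfer in priority order
    before a new round of bidding starts."""
    return sorted(requests, reverse=True)

print(arbitration_round([3, 7, 1]))  # [7, 3, 1]
```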
3.6 Summary
This unit discusses the medium access sublayer. It discusses LANs and WANs in detail, describes the basic LAN protocols called the ALOHA protocols, presents the IEEE 802 standards for LANs, and discusses the importance of fiber optic networks and the cabling used as a backbone for LAN connectivity.
3.7 Self Assessment Questions
1. The Data Link Layer of the ISO OSI model is divided into ______ sublayers
a) 1 b) 4 c) 3 d) 2
2. The ______ layer is responsible for resolving access to the shared media or resources.
a) physical b) MAC sublayer c) Network d) Transport
3. A WAN typically spans a set of countries that have data rates less than _______ Mbps
a) 2 b) 1 c) 4 d) 100
4. The ________ model consists of N users or independent stations.
5. The Aloha protocol is an OSI _______ protocol for LAN networks with broadcast topology
6. In ______ method, each contention period consists of exactly N slots
ATM Protocol Structure
Figure 33 shows the ATM layered architecture as described in ITU-T recommendation I.321 (1992). This is the basis on which the B-ISDN Protocol Reference Model has been defined.
Figure 33: ATM Protocol Architecture
ATM Physical Layer
The physical layer accepts or delivers payload cells at its point of access to the ATM layer. It provides cell delineation, which enables the receiver to recover cell boundaries. It generates and verifies the HEC field; if the HEC cannot be verified or corrected, the physical layer discards the errored cell. Idle cells are inserted in the transmit direction and removed in the receive direction.
For the physical transmission of bits, 5 types of transmission frame adaptations are specified (by the ITU and the ATM Forum). Each one has its own lower and upper bounds on the bit rate it can carry (from 12.5 Mbps to 10 Gbps so far):
1. Synchronous Digital Hierarchy (SDH) 155 Mbps;
2. Plesiochronous Digital Hierarchy (PDH) 34 Mbps;
3. Cell Based 155 Mbps;
4. Fibre Distributed Data Interface (FDDI) 100 Mbps;
5. Synchronous Optical Network (SONET) 51 Mbps.
The actual physical link can be either optical or coaxial, with the possibility of Unshielded Twisted Pair (UTP Category 3/5) and Shielded Twisted Pair (STP Category 5) in the mid range (12.5 to 51 Mbps).
ATM Layer
The ATM layer mainly performs switching, routing and multiplexing. The characteristic features of the ATM layer are independent of the physical medium. Four functions of this layer have been identified:
1. cell multiplexing (in the transmit direction);
2. cell demultiplexing (at the receiving end);
3. VPI/VCI translation;
4. cell header generation/extraction.
This layer accepts or delivers cell payloads. It adds the appropriate ATM cell header when transmitting and removes the cell header in the receive direction, so that only the cell information field is delivered to the ATM Adaptation Layer.
At the ATM switching/cross-connect nodes, VPI and VCI translation occurs. At a VC switch, new values are obtained for both the VPI and VCI fields, whereas at a VP switch only new values for the VPI field are obtained (see Figure 34). Depending on the direction, either the individual VPs and VCs are multiplexed into a single cell stream, or the stream is demultiplexed to recover the individual VPs and VCs.
Figure 34: VC/VP Switching in ATM
ATM Adaptation Layer (AAL)
The ATM Adaptation Layer (AAL) sits between the ATM layer and the higher layers. Its basic function is the adaptation of the services provided by the ATM layer to the requirements of the higher layers. This layer accepts and delivers data streams structured for use with the user's own communication protocol. It converts these protocol data structures into ATM cell payloads when transmitting and does the reverse when receiving. It inserts the timing information required by users into cell payloads, or extracts it from them. This is done in accordance with five AAL service classes, defined as follows:
1. AAL1 - adaptation for Constant Bit Rate (CBR) services (connection oriented, 47-byte payload);
2. AAL2 - adaptation for Variable Bit Rate (VBR) services (connection oriented, 45-byte payload);
3. AAL3 - adaptation for Variable Bit Rate data services (connection oriented, 44-byte payload);
4. AAL4 - adaptation for Variable Bit Rate data services (connectionless, 44-byte payload);
5. AAL5 - adaptation for signalling and data services (48-byte payload).
For the transfer of information in real time, AAL1 and AAL2, which support connection-oriented services, are important. AAL4, which supports a connectionless service, was originally meant for data that is sensitive to loss but not to delay. However, the introduction of AAL5, which uses a 48-byte payload with no overheads, has made AAL3/4 redundant. Frame Relay and MPEG-2 (Moving Pictures Expert Group) video are two services which will specifically use AAL5.
ATM Services
CBR Service
This supports the transfer of information between the source and destination at a constant bit rate. The CBR service uses AAL1. A typical example is the transfer of voice at 64 Kbps over ATM; another is the transport of fixed-rate video. This type of service over an ATM network is sometimes called circuit emulation (similar to a voice circuit on a telephone network).
VBR Service
This service is useful for sources with variable bit rates. Typical examples are variable bit rate audio and video.
ABR and UBR Services
The definition of CBR and VBR has resulted in two other service types, called Available Bit Rate (ABR) services and Unspecified Bit Rate (UBR) services. ABR services use the instantaneous bandwidth available after bandwidth has been allocated for CBR and VBR services, which makes the bandwidth of the ABR service variable. Although there is no guaranteed delivery time for data transported using ABR services, the integrity of the data is guaranteed. This is ideal for carrying time-insensitive (but loss-sensitive) data, such as LAN-LAN interconnect and IP over ATM. The UBR service, as the name implies, has an unspecified bit rate which the network can use to transport information relating to network management, monitoring, etc.
EXAMPLE NETWORKS – connection-oriented networks: X.25, Frame Relay and ATM
Since the beginning of networking, a war has raged between those who support connectionless subnets (i.e., datagrams) and those who support connection-oriented subnets. The main proponents of connectionless subnets come from the ARPANET/Internet community.
Remember that the DoD's original desire in funding and building the ARPANET was to have a network that would keep functioning even after multiple hits by nuclear weapons had destroyed numerous routers and transmission lines.
Therefore, fault tolerance was high on its list of priorities; billing customers was not.
This approach led to a connectionless design in which every packet is routed independently of every other packet.
As a consequence, if some routers go down during a session, no harm is done, because the system can reconfigure itself dynamically so that subsequent packets can find a route to their destination, even if it differs from the one used by earlier packets.
The connection-oriented camp comes from the world of the telephone companies.
In the telephone system, a caller must dial the called party's number and wait for the connection before talking or sending data.
This connection setup establishes a route through the telephone system that is maintained until the call is terminated.
All words or packets follow the same route.
If a line or switch on the path goes down, the call is aborted. This property is precisely what the DoD did not like.
So why do the telephone companies like it? For two reasons:
1. Quality of service.
2. Billing.
By first establishing a connection, the subnet can reserve resources such as buffer space and router CPU capacity.
If an attempt is made to set up a call and insufficient resources are available, the call is rejected and the caller gets a busy signal.
Once a connection has been established, it will get good service.
In a connectionless network, if too many packets arrive at the same router at the same time, the router is saturated and may lose some packets. The sender may eventually notice this and resend them, but the quality of service is uneven and inadequate for audio or video unless the network is lightly loaded.
Needless to say, providing adequate audio quality is something the telephone companies take great care over, hence their preference for connections.
The second reason the telephone companies prefer connection-oriented service is that they are accustomed to charging for connect time.
When you make a long-distance call (domestic or international), you are charged by the minute.
When networks arrived, they gravitated precisely to a model in which charging by the minute would be easy to do.
If you have to establish a connection before sending data, that is when the billing clock starts running. If there is no connection, there is no charge.
Ironically, maintaining billing records is very expensive.
If a telephone company were to adopt a flat monthly rate with unlimited calls and no billing or record keeping, it would probably save a great deal of money, despite the increase in calls this policy would generate.
However, political, regulatory, and other factors weigh against doing this.
Shashank Agnihotri Computer Networks – Page 73
Interestingly enough, flat-rate service exists in other sectors. For example, cable TV is billed at a flat monthly rate, regardless of how many programs you watch.
It could have been designed with pay-per-view as the basic concept, but it was not, due in part to the expense of billing (and given the quality of most television programs, the embarrassment factor cannot be discounted entirely either).
Also, many theme parks charge a daily admission fee with unlimited access to the rides, in contrast to traveling carnivals, which charge per ride.
That said, it should not be surprising that all the networks designed by the telephone industry have had connection-oriented subnets.
What is surprising is that the Internet is also leaning in that direction, in order to provide better audio and video service.
For now, let us examine some connection-oriented networks.
X.25 and Frame Relay
Our first example of a connection-oriented network is X.25, which was the first public data network.
It was deployed in the 1970s, when telephone service was a monopoly everywhere and the telephone company in each country expected to have one data network per country: its own.
To use X.25, a computer first established a connection to the remote computer, that is, it placed a telephone call.
This connection was given a connection number to be used in data transfer packets (because many connections could be open at the same time).
Data packets were very simple, consisting of a 3-byte header and up to 128 bytes of data.
The header contained a 12-bit connection number, a packet sequence number, an acknowledgement number, and a few miscellaneous bits.
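As an illustration of how tightly those fields fit into 3 bytes, here is a hypothetical packing. The bit positions are a simplification invented for this sketch; they do not reproduce the real X.25 header layout.

```python
# Illustrative packing of a simplified X.25-style 3-byte data-packet header:
# a 12-bit connection number, a 3-bit send sequence number, and a 3-bit
# acknowledgement (receive sequence) number. Field positions are invented
# for the example, not the actual X.25 bit layout.

def pack_header(channel, ps, pr):
    assert 0 <= channel < 4096 and 0 <= ps < 8 and 0 <= pr < 8
    b0 = channel >> 8           # high 4 bits of the connection number
    b1 = channel & 0xFF         # low 8 bits of the connection number
    b2 = (pr << 5) | (ps << 1)  # both sequence numbers in the third byte
    return bytes([b0, b1, b2])

def unpack_header(hdr):
    channel = (hdr[0] << 8) | hdr[1]
    pr = hdr[2] >> 5
    ps = (hdr[2] >> 1) & 0x7
    return channel, ps, pr

hdr = pack_header(2047, 5, 3)
print(len(hdr), unpack_header(hdr))  # 3 (2047, 5, 3)
```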
X.25 networks operated for almost ten years with mixed results.
In the 1980s, X.25 networks were largely replaced by a new kind of network called Frame Relay.
This is a connection-oriented network with no error control and no flow control.
Because it is connection-oriented, packets are delivered in order (if they are delivered at all).
The properties of in-order delivery, no error control, and no flow control make Frame Relay akin to a wide area LAN.
Its most important application is the interconnection of LANs at multiple offices of a company.
Frame Relay enjoyed modest success and is still in use in some places even now.
Asynchronous Transfer Mode
Another, and more important, kind of connection-oriented network is ATM (Asynchronous Transfer Mode).
The reason for the somewhat strange name is that in the telephone system most transmission is synchronous (closely tied to a clock), and ATM is not.
ATM was designed in the early 1990s and launched amid incredible hype (Ginsburg, 1996; Goralski, 1995; Ibe, 1997; Kim et al., 1994; and Stallings, 2000).
ATM was going to solve all the world's networking and telecommunications problems by merging voice, data, cable television, telex, telegraph, carrier pigeons, cans connected by strings, drums, smoke signals, and everything else into a single integrated system that could provide all services for all needs. That did not happen.
In large part, the problems were similar to those described concerning OSI, that is, bad timing, along with misguided technology, implementation, and politics.
Having just beaten back the telephone companies in round one, much of the Internet community saw ATM as the sequel to the Internet-versus-the-telcos fight.
But it really was not, and this time around even diehard datagram fans realized that the Internet's quality of service left much to be desired.
To make a long story short, ATM was much more successful than OSI, and it is now used deep within the telephone system, often for moving IP packets.
Because it is now used mostly by carriers for internal transport, users are often unaware of its existence, but it is definitely alive and well.
ATM virtual circuits
Since ATM networks are connection-oriented, sending data requires first sending a packet to set up the connection.
As the setup message wends its way through the subnet, all the switches on the path make an entry in their internal tables noting the existence of the connection and reserving whatever resources it needs.
Connections are often called virtual circuits, in analogy with the physical circuits used in the telephone system.
Most ATM networks also support permanent virtual circuits, which are permanent connections between two (distant) hosts. They are similar to leased lines in the telephone world.
Each connection, temporary or permanent, has a unique connection identifier.
Once a connection has been established, either side can begin transmitting data.
The basic idea behind ATM is to transmit all information in small, fixed-size packets called cells.
The cells are 53 bytes long, of which 5 bytes are header and 48 bytes are payload. Part of the header is the connection identifier, so the sending and receiving hosts and all the intermediate switches can tell which cells belong to which connections.
This information allows each switch to know how to route each incoming cell.
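The 5-byte header can be unpacked as in the sketch below, which follows the standard UNI header field widths: GFC (4 bits), VPI (8), VCI (16), PTI (3), CLP (1), and HEC (8). The sample cell contents are made up for the example.

```python
# Parsing the 5-byte header of a 53-byte ATM cell (UNI format).
# Field widths: GFC 4 bits, VPI 8, VCI 16, PTI 3, CLP 1, HEC 8.

def parse_uni_header(cell):
    assert len(cell) == 53
    h = cell[:5]
    gfc = h[0] >> 4
    vpi = ((h[0] & 0x0F) << 4) | (h[1] >> 4)
    vci = ((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4)
    pti = (h[3] >> 1) & 0x7
    clp = h[3] & 0x1
    hec = h[4]
    payload = cell[5:]          # the 48-byte information field
    return (gfc, vpi, vci, pti, clp, hec), payload

# A made-up cell carrying VPI=5, VCI=100.
cell = bytes([0x00, 0x50, 0x06, 0x40, 0x00]) + bytes(48)
print(parse_uni_header(cell)[0])   # (0, 5, 100, 0, 0, 0)
```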
Cell switching is done in hardware, at high speed.
In fact, the main argument for having fixed-size cells is that it is easy to build hardware switches to handle small, fixed-length cells.
Variable-length IP packets have to be routed by software, which is a slower process.
Another advantage of ATM is that the hardware can be set up to copy one incoming cell to multiple output lines, a property needed for handling a television program that is broadcast to many receivers.
Finally, small cells do not block any line for very long, which makes it easier to guarantee quality of service.
All cells follow the same route to the destination.
The delivery of cells is not guaranteed, but their order is. If cells 1 and 2 are sent in that order, then, if both arrive, they arrive in that order, never first 2 and then 1.
However, either or both of them can be lost along the way.
It is up to higher protocol levels to recover from lost cells.
Note that although this guarantee is not perfect, it is better than what the Internet offers.
There, packets can not only be lost, but delivered out of order as well.
ATM, in contrast, guarantees that cells are never delivered out of order.
ATM networks are organized like traditional WANs, with lines and switches (routers).
The most common speeds for ATM networks are 155 Mbps and 622 Mbps, although higher speeds are also supported. The 155-Mbps speed was chosen because this is roughly what is needed to transmit high-definition television.
The exact choice of 155.52 Mbps was made for compatibility with AT&T's SONET transmission system.
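As a quick sanity check on that figure: a SONET STS-1 frame is 9 rows of 90 bytes (810 bytes), sent 8000 times per second, and 155.52 Mbps is exactly three times the STS-1 rate.

```python
# Deriving the 155.52 Mbps figure from the SONET frame structure.
sts1 = 9 * 90 * 8 * 8000        # STS-1: 810 bytes/frame, 8000 frames/s
print(sts1 / 1e6)               # 51.84 Mbps (STS-1)
print(3 * sts1 / 1e6)           # 155.52 Mbps (STS-3c, the common ATM rate)
print(12 * sts1 / 1e6)          # 622.08 Mbps (STS-12)
```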
The physical layer deals with the physical medium: voltages, bit timing, and various other issues.
ATM does not prescribe a particular set of rules; it only specifies that ATM cells may be sent as is on a wire or fiber, but they may also be packaged inside the payload of other carrier systems.
In other words, ATM is designed to be independent of the transmission medium.
The ATM layer is responsible for cells and cell transport.
It defines the layout of a cell and tells what the header fields mean.
It also deals with the establishment and release of virtual circuits. Congestion control is also located here.
Because most applications do not need to work directly with cells (although some may do so), a layer above the ATM layer has been defined to allow users to send packets larger than a cell.
The ATM interface segments these packets, transmits the cells individually, and reassembles them at the other end.
This layer is the AAL (ATM Adaptation Layer).
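The segmentation-and-reassembly job can be sketched as follows. This is a simplified illustration: real AAL5 also appends a trailer with a length field and a CRC, which this sketch omits, carrying the original length out of band instead.

```python
# Simplified sketch of AAL segmentation and reassembly: split a packet
# into 48-byte cell payloads, then rejoin them at the far end.
# The AAL5 trailer (length + CRC) is deliberately omitted here.

CELL_PAYLOAD = 48

def segment(packet):
    """Pad the packet to a multiple of 48 bytes and cut it into payloads."""
    pad = (-len(packet)) % CELL_PAYLOAD
    data = packet + bytes(pad)
    return [data[i:i + CELL_PAYLOAD] for i in range(0, len(data), CELL_PAYLOAD)]

def reassemble(payloads, length):
    """Concatenate the payloads and strip the padding."""
    return b"".join(payloads)[:length]

msg = b"x" * 100                 # a hypothetical 100-byte packet
cells = segment(msg)
print(len(cells))                # 100 bytes -> 3 payloads of 48 bytes
print(reassemble(cells, len(msg)) == msg)
```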
Unlike the earlier two-dimensional reference models, the ATM model is defined as being three-dimensional.
The user plane deals with data transport, flow control, error correction, and other user functions.
In contrast, the control plane is concerned with connection management.
The layer and plane management functions relate to resource management and coordination between layers.
The physical layer and the AAL are each divided into two sublayers, one at the bottom that does the work and a convergence sublayer on top that provides the proper interface to the layer above it.
The PMD (Physical Medium Dependent) sublayer interfaces to the actual cable.
It moves the bits on and off and handles the bit timing, that is, the time between bits on transmission. This layer will be different for different carriers and cables.
The other sublayer of the physical layer is the TC (Transmission Convergence) sublayer.
When cells are transmitted, the TC sublayer sends them as a string of bits to the PMD sublayer.
Doing this is easy. At the other end, the TC sublayer receives a pure incoming bit stream from the PMD sublayer.
Its job is to convert this bit stream into a cell stream for the ATM layer.
It handles all the issues related to telling where cells begin and end in the bit stream.
In the ATM model, this functionality is in the physical layer.
In the OSI model, and in pretty much all other networks, the job of framing, that is, turning a raw bit stream into a sequence of frames or cells, is the task of the data link layer.
As mentioned earlier, the ATM layer manages cells, including their generation and transport.
Most of the interesting aspects of ATM are located here. It is a mixture of the data link and network layers of the OSI model; it is not split into sublayers.
The AAL layer is split into a SAR (Segmentation And Reassembly) sublayer and a CS (Convergence Sublayer).
The lower sublayer breaks packets up into cells on the transmission side and puts them back together again at the destination.
The upper sublayer makes it possible for ATM systems to offer different kinds of services to different applications (e.g., file transfer and video on demand have different requirements concerning error handling, timing, etc.).
However, since ATM has a substantial installed base, it is likely to remain in use for some years to come.
Permanent and switched virtual circuits in ATM, frame relay, and X.25

Switched virtual circuits (SVCs) are generally set up on a per-call basis and are disconnected when the call is terminated; however, a permanent virtual circuit (PVC) can be established as an option to provide a dedicated circuit link between two facilities. A PVC is usually preconfigured by the service provider. Unlike SVCs, PVCs are very seldom broken or disconnected.
A switched virtual circuit (SVC) is a virtual circuit that is dynamically established on demand and is torn down when transmission is complete, for example after a phone call or a file download. SVCs are used in situations where data transmission is sporadic and/or not always between the same data terminal equipment (DTE) endpoints.
A permanent virtual circuit (PVC) is a virtual circuit established for repeated/continuous use between the same DTEs. In a PVC, the long-term association is identical to the data transfer phase of a virtual call. Permanent virtual circuits eliminate the need for repeated call set-up and clearing.
Frame relay is typically used to provide PVCs. ATM provides both switched virtual connections and permanent virtual connections, as they are called in ATM terminology. X.25 provides both virtual calls and PVCs, although not all X.25 service providers or DTE implementations support PVCs, as their use was much less common than that of SVCs.
X.25
X.25 is a packet-switched WAN protocol for wide area communications. It defines the exchange of data and control information between a user device, the Data Terminal Equipment (DTE), and a network node, the Data Circuit-terminating Equipment (DCE). An X.25 network comprises physical links such as packet-switching exchange (PSE) nodes for the networking hardware, leased lines, and telephone or ISDN connections. Its distinctive strength is its capacity to work effectively on any type of system connected to the network. X.25, although superseded by newer technology, continues to be in use. It provides a connection-oriented service that ensures data packets are transmitted in an orderly manner.