
1. Describe the following: a. Networks Software b. Reference Models c. Network Standards

Networks Software

Network software is highly structured. In this and the following sections we examine the software structuring technique in some detail. The method described here is the keystone of network design and will occur repeatedly later on.

Protocol Hierarchy

A protocol is an agreement between the communicating parties on how communication is to proceed. To reduce their design complexity, most networks are organized as a stack of layers or levels, each one built upon the one below it. The number of layers, the name of each layer, the contents of each layer, and the function of each layer differ from network to network. The purpose of each layer is to offer certain services to the higher layers, shielding those layers from the details of how the offered services are actually implemented. In a sense, each layer is a kind of virtual machine, offering certain services to the layer above it.

This concept is actually a familiar one and used throughout computer science, where it is variously known as information hiding, abstract data types, data encapsulation, and object-oriented programming. The fundamental idea is that a particular piece of software (or hardware) provides a service to its users but keeps the details of its internal state and algorithms hidden from them. Layer n on one machine carries on a conversation with layer n on another machine. The rules and conventions used in this conversation are collectively known as the layer n protocol. Basically, a protocol is an agreement between the communicating parties on how communication is to proceed. Violating the protocol will make communication more difficult, if not completely impossible.

A five-layer network is illustrated in the figure. The entities comprising the corresponding layers on different machines are called peers. It is the peers that communicate using the protocol. In reality, no data are directly transferred from layer n on one machine to layer n on another machine. Instead, the data and control information are passed to the layer immediately below, until they reach the lowest layer. This lowest layer is usually referred to as the physical layer, which interfaces directly with the physical medium. The virtual communication is indicated by dotted lines and physical communication by solid lines in the figure.

Layers, protocols and interfaces

Between each pair of adjacent layers there is an interface. The interface defines which primitive operations and services the lower layer offers to the upper one. When network designers decide how many layers to include in a network and what each one should do, one of the most important considerations is defining clean interfaces between the layers. Doing so, in turn, requires that each layer perform a specific collection of well-understood functions. In addition to minimizing the amount of information that must be passed between layers, clearcut interfaces also make it simpler to replace the implementation of one layer with a completely different implementation (e.g., all the telephone lines are replaced by satellite channels) because all that is required of the new implementation is that it offer exactly the same set of services to its upstairs neighbor as the old implementation did. In fact, it is common that different hosts use different implementations.

The set of layers and protocols is called the network architecture. A list of the protocols used by a system is called a protocol stack. The subjects of network architectures, protocol stacks, and the protocols themselves are the principal topics of this book.

Communication of information in a five-layer network.

Consider the communication between two hosts using a five-layer network. Let ‘M’ be the source message produced by the application process running at layer 5. This message is to be transmitted to the layer 5 of the destination machine.

This message is given to layer 4 for transmission as shown in Figure 2.2. Layer 4 puts a header for identification in front of the message and passes it to layer 3. The header includes control information, such as sequence numbers, to allow layer 4 on the destination machine to deliver messages in the right order if the lower layers do not maintain sequence. In some layers, headers can also contain sizes, times, and other control fields. There might also be a limit on the size of a message, in which case messages have to be segmented.

In many networks, there is no limit to the size of messages transmitted in the layer 4 protocol, but there is nearly always a limit imposed by the layer 3 protocol. Consequently, layer 3 must break up the incoming messages into smaller units, packets, prepending layer 3 headers to each packet. In this example, M is split into two parts, M1 and M2.
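The segmentation step can be sketched in a few lines of Python. This is only an illustration of the idea, not any real protocol: the segment size, the 2-byte sequence-number header and the function names are invented here.

```python
# Sketch: split a message into fixed-size pieces and prepend a tiny header
# carrying a sequence number so the receiving layer can restore the order.
import struct

MAX_SEGMENT = 4            # deliberately tiny so the split is visible

def segment(message: bytes):
    pieces = [message[i:i + MAX_SEGMENT] for i in range(0, len(message), MAX_SEGMENT)]
    # Each header is just a 2-byte sequence number in this sketch.
    return [struct.pack("!H", seq) + piece for seq, piece in enumerate(pieces)]

def reassemble(segments):
    ordered = sorted(segments, key=lambda s: struct.unpack("!H", s[:2])[0])
    return b"".join(s[2:] for s in ordered)

M = b"HELLOWORLD"
parts = segment(M)                              # M split into headed pieces (M1, M2, ...)
assert reassemble(list(reversed(parts))) == M   # order restored even if delivery is out of order
```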


Layer 3 decides which of the outgoing lines to use and passes the packets to layer 2. Layer 2 adds not only a header to each piece, but also a trailer, and gives the resulting unit to layer 1 for physical transmission. Thus the message reaches the lowest layer where it is transmitted through the physical medium. The actual flow of the message from the top layer of source machine to the top layer of the destination machine is illustrated in figure 2.2. The message has to be delivered in proper sequence to the layers of the destination machine.

At the receiving machine the message moves upward, from layer to layer, with the headers being stripped off as it progresses by the appropriate layers. Note that none of the headers for layers below n are passed up to layer n.

The important thing is to understand the relation between the actual flow and the virtual flow, and between the different protocols and interfaces. Even though we speak of network software when discussing the design of all the layers, the lower layers are often implemented in hardware or firmware.

Design Issues for the Layers

There are some key design issues that have to be considered in computer networks. Every layer needs a mechanism for identifying senders and receivers. Since a network normally has many computers, some of which have multiple processes, a means is needed for a process on one machine to specify with whom it wants to communicate. Thus some form of addressing scheme has to be devised.

Another design issue is the data transmission mode, which concerns the rules for data transfer. Systems can use serial or parallel transmission, synchronous or asynchronous transmission, and simplex or duplex transmission. The protocol must also determine how many logical channels the connection corresponds to and what their priorities are.

Another major design issue is error control, since physical circuits are not perfect. Error-detecting or error-correcting codes have to be used at both ends of the connection. At the same time, flow control is necessary to keep a fast sender from swamping a slow receiver. Some systems use feedback from the receiver to limit the transmission rate.


It is inconvenient or expensive to set up a separate connection for each pair of communicating processes; the same connection can carry multiple unrelated conversations. Thus multiplexing and demultiplexing techniques are another design issue. Multiplexing is needed in the physical layer, where all the traffic for all connections has to be sent over at most a few physical circuits.

When there are multiple paths between the source and destination, the complexity lies in finding the best (optimum or shortest) path. Hence routing schemes are needed to find an optimum path.

Apart from these some of the design issues can be related to security, compression techniques and so on.

Merits and de-merits of Layered Architecture

Advantages of Layered Architecture

· Any given layer can be modified or upgraded without affecting the other layers.

· Modularization by means of layering simplifies the overall design.

· Different layers can be assigned to different standards, committees, and design teams.

· Mechanisms like packet switching and circuit switching may be used without affecting more than one layer.

· Different machines may be plugged in at different layers.

· The relation between different control functions can be better understood.

· Common lower levels may be shared by different higher levels.

· Functions (especially at lower levels) may be removed from software to hardware and micro-codes.

· Increases the compatibility of different machines.


Disadvantages of Layered Architecture

· Total overhead is higher.

· Two communicating machines may have to use certain functions which they could do without.

· As technology changes, the functions may not be in the most cost-effective layer.

Connection-Oriented and Connectionless Services

Layers can offer two types of services to the layers above them: connection-oriented and connectionless. A connection-oriented service is modeled after the telephone system. To use this service, the service user first establishes a connection, uses the connection, and then releases it. In most cases the order is preserved, so that bits arrive at the receiver in the same order in which they were sent by the transmitter. In some cases, when a connection is established, the source, the subnet, and the receiver negotiate certain parameters such as the maximum message size, the quality of service (QoS) required, and other issues.

The other type of service is the connectionless service. This is modeled after the postal system. Here each message carries the full destination address, and each one is routed through the system independently of the others. Messages may not arrive at the receiver in the order in which they were sent, since that depends on the route each message takes on the way to the destination. Six different types of services are summarized in table 2.1.

Comparisons of different services


Service Primitives

A service is formally specified by a set of primitives or operations available to the user to access the service. These primitives tell the service to perform some action or report an action taken by the peer entity. The primitives for the connection-oriented service are given in table 2.2.

Table 2.2: Service primitives for a connection oriented service

Communication in a simple client-server model using the above service primitives is illustrated in the figure. First the server executes LISTEN to indicate that it is ready to accept incoming connections. The client executes CONNECT (1) to establish the connection with the server. The server now unblocks the listener and sends back an acknowledgement (2). Thus the connection is established.

Simple client server model on a connection oriented network

The next step for the server is to execute RECEIVE (3) to prepare to accept the first request. The arrival of the request packet unblocks the server so that it can process the request. After it has done the work, it uses SEND (4) to answer the client. If all the data transfer is done, the client can use DISCONNECT (5) to terminate the connection. When the server gets this packet, it also issues a DISCONNECT (6), and when that reaches the client, the client process is released and the connection is broken. In practice, packets may get lost, timings may be wrong, and many other complications can arise.
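The same sequence of primitives can be tried out with Python's standard socket API, where listen/accept, connect, recv, sendall and close play the roles of LISTEN, CONNECT, RECEIVE, SEND and DISCONNECT. The port number and messages below are arbitrary examples, not taken from the text.

```python
# Sketch of the client-server exchange above using TCP sockets.
import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5000))
srv.listen(1)                              # LISTEN: server is ready for incoming connections

def serve():
    conn, _ = srv.accept()                 # unblocks when the client's CONNECT (1) arrives
    request = conn.recv(1024)              # RECEIVE (3): wait for the first request
    conn.sendall(b"reply to " + request)   # SEND (4): answer the client
    conn.close()                           # DISCONNECT (6), server side

t = threading.Thread(target=serve)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 5000))           # CONNECT (1): establish the connection
cli.sendall(b"request")
print(cli.recv(1024))                      # b'reply to request'
cli.close()                                # DISCONNECT (5), client side
t.join()
srv.close()
```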


The Relationship of Services to Protocols

Relationship between the service and protocols

A service is a set of primitives that a layer provides to the layer above it. The service defines what operation the layer is prepared to perform on behalf of its users. It says nothing about the implementation of these operations.

A protocol is a set of rules governing the format and meaning of the packets, or messages that are exchanged by the peer entities within a layer. Figure 2.4 illustrates the relationship of services to protocols. Entities use protocols to implement their service primitives. Protocols relate to the packets sent between entities.

Reference models

There are two important network architectures: the ISO-OSI reference model and the TCP/IP reference model. These two are discussed below.

In 1977, the International Organization for Standardization (ISO) began to develop its OSI networking suite. OSI has two major components: an abstract model of networking (the Basic Reference Model, or seven-layer model), and a set of concrete protocols. The standard documents that describe OSI are for sale and not currently available online.

Parts of OSI have influenced Internet protocol development, but none more than the abstract model itself, documented in ISO 7498 and its various addenda. In this model, a networking system is divided into layers. Within each layer, one or more entities implement its functionality. Each entity interacts directly only with the layer immediately beneath it, and provides facilities for use by the layer above it.


In particular, Internet protocols are deliberately not as rigorously architected as the OSI model, but a common version of the TCP/IP model splits it into four layers. The Internet Application Layer includes the OSI Application Layer, Presentation Layer, and most of the Session Layer. Its End-to-End Layer includes the graceful close function of the OSI Session Layer as well as the Transport Layer. Its Internetwork Layer is equivalent to the OSI Network Layer, while its Interface Layer includes the OSI Data Link and Physical Layers. These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in such things as the Internal Organization of the Network Layer document.

Protocols enable an entity in one host to interact with a corresponding entity at the same layer in a remote host. Service definitions abstractly describe the functionality provided to a (N)-layer by an (N-1) layer, where N is one of the seven layers inside the local host.

The OSI Reference Model

This reference model was proposed by the International Organization for Standardization (ISO) as a first step towards standardization of the protocols used in the various layers; it was published in 1983 by Day and Zimmermann. The model is called the Open Systems Interconnection (OSI) reference model because it deals with connecting open systems, that is, systems that are open for communication with other systems. It consists of seven layers.

Layers of OSI Model

The principles that were applied to arrive at the seven layers are as follows:

1. A layer should be created where a different level of abstraction is needed.

2. Each layer should perform a well-defined function.

3. The function of each layer should be chosen with an eye toward defining internationally standardized protocols.

4. Layer boundaries should be chosen to minimize the information flow across the interfaces.

5. The number of layers should be neither too large nor too small.


ISO – OSI Reference Model

The ISO-OSI reference model is shown in figure 2.5. As such, this model is not a network architecture, because it does not specify the exact services and protocols; it just tells what each layer should do and where it lies. The bottom-most layer is referred to as the physical layer. ISO has produced standards for each of the layers, and they are published separately.

Each layer of the ISO-OSI reference model is discussed below:

1. Physical Layer

This layer is the bottom-most layer and is concerned with transmitting raw bits over the communication channel (physical medium). The design issues have to do with making sure that when one side sends a 1 bit, it is received by the other side as a 1 bit, and not as a 0 bit. It performs direct transmission of logical information, that is, digital bit streams, into physical phenomena in the form of electronic pulses. Modulators/demodulators are used at this layer. The design issues here largely deal with mechanical, electrical, and procedural interfaces, and with the physical transmission medium, which lies below the physical layer.

In particular, it defines the relationship between a device and a physical medium. This includes the layout of pins, voltages, and cable specifications. Hubs, repeaters, network adapters and Host Bus Adapters (HBAs, used in Storage Area Networks) are physical-layer devices. The major functions and services performed by the physical layer are:

· Establishment and termination of a connection to a communications medium.

· Participation in the process whereby the communication resources are effectively shared among multiple users. For example, contention resolution and flow control.

· Modulation, which is a technique of conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel. These signals operate over physical cabling (such as copper and fiber optic) or over a radio link.

Parallel SCSI buses operate in this layer. Various physical-layer Ethernet standards are also in this layer; Ethernet incorporates both this layer and the data-link layer. The same applies to other local-area networks, such as Token ring, FDDI, and IEEE 802.11, as well as personal area networks such as Bluetooth and IEEE 802.15.4.

2. Data Link Layer

The Data Link layer provides the functional and procedural means to transfer data between network entities and to detect and possibly correct errors that may occur in the Physical layer. That is, it makes sure that the message indeed reaches the other end without corruption, signal distortion or noise. It accomplishes this task by having the sender break the input data up into data frames. The DLL of the transmitter then transmits the frames sequentially and processes acknowledgement frames sent back by the receiver. After processing an acknowledgement frame, the transmitter may need to retransmit a copy of a frame; therefore the DLL at the receiver is required to detect duplicate frames.

The best known example of this is Ethernet. This layer manages the interaction of devices with a shared medium. Other examples of data link protocols are HDLC and ADCCP for point-to-point or packet-switched networks and Aloha for local area networks. On IEEE 802 local area networks, and some non-IEEE 802 networks such as FDDI, this layer may be split into a Media Access Control (MAC) layer and the IEEE 802.2 Logical Link Control (LLC) layer. It arranges bits from the physical layer into logical chunks of data, known as frames.

This is the layer at which bridges and switches operate. Connectivity is provided only among locally attached network nodes, forming layer 2 domains for unicast or broadcast forwarding. Other protocols may be imposed on the data frames to create tunnels and logically separated layer 2 forwarding domains.

The data link layer might implement a sliding window flow control and acknowledgment mechanism to provide reliable delivery of frames; that is the case for SDLC and HDLC, and derivatives of HDLC such as LAPB and LAPD. In modern practice, data link protocols such as the Point-to-Point Protocol (PPP) provide only error detection, not sliding-window flow control. On local area networks, the IEEE 802.2 LLC layer is not used for most protocols on Ethernet, and on other local area networks its flow control and acknowledgment mechanisms are rarely used. Sliding window flow control and acknowledgment is instead used at the transport layer by protocols such as TCP.
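A minimal sender-side sketch of sliding-window flow control is shown below. The window size, the frame list and the link object are assumptions made only for illustration; real protocols such as HDLC or TCP add timers, retransmission and receiver-side buffering.

```python
# Sketch: the sender keeps at most WINDOW_SIZE unacknowledged frames in flight.
WINDOW_SIZE = 4

class SlidingWindowSender:
    def __init__(self, frames, link):
        self.frames = frames     # ordered list of payloads to send
        self.link = link         # hypothetical object with a transmit(seq, payload) method
        self.base = 0            # oldest unacknowledged sequence number
        self.next_seq = 0        # next sequence number to transmit

    def pump(self):
        # Keep transmitting while the window of unacknowledged frames is not full.
        while self.next_seq < self.base + WINDOW_SIZE and self.next_seq < len(self.frames):
            self.link.transmit(self.next_seq, self.frames[self.next_seq])
            self.next_seq += 1

    def on_ack(self, ack_num):
        # A cumulative acknowledgment for frame ack_num slides the window forward,
        # allowing pump() to release more frames.
        if ack_num >= self.base:
            self.base = ack_num + 1
            self.pump()
```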

3. Network Layer

The Network layer provides the functional and procedural means of transferring variable length data sequences from a source to a destination via one or more networks while maintaining the quality of service requested by the Transport layer. The Network layer performs network routing functions, and might also perform fragmentation and reassembly, and report delivery errors. Routers operate at this layer, sending data throughout the extended network and making the Internet possible. The network layer uses a logical addressing scheme whose values are chosen by the network engineer. The addressing scheme is hierarchical.

The best known example of a layer 3 protocol is the Internet Protocol (IP). Perhaps it’s easier to visualize this layer as managing the sequence of human carriers taking a letter from the sender to the local post office, trucks that carry sacks of mail to other post offices or airports, airplanes that carry airmail between major cities, trucks that distribute mail sacks in a city, and carriers that take a letter to its destinations. Think of fragmentation as splitting a large document into smaller envelopes for shipping, or, in the case of the network layer, splitting an application or transport record into packets.


The major tasks of the network layer are listed below:

· It controls routes for individual messages through the actual topology.

· Finds the best route.

· Finds alternate routes.

· It accomplishes buffering and deadlock handling.

4. Transport Layer

The Transport layer provides transparent transfer of data between end users, providing reliable data transfer while relieving the upper layers of this concern. The transport layer controls the reliability of a given link through flow control, segmentation/desegmentation, and error control. Some protocols are state- and connection-oriented. This means that the transport layer can keep track of the segments and retransmit those that fail. The best known example of a layer 4 protocol is the Transmission Control Protocol (TCP).

The transport layer is the layer that converts messages into TCP segments or User Datagram Protocol (UDP), Stream Control Transmission Protocol (SCTP), etc. packets. Perhaps an easy way to visualize the Transport Layer is to compare it with a Post Office, which deals with the dispatch and classification of mail and parcels sent. Do remember, however, that a post office manages the outer envelope of mail. Higher layers may have the equivalent of double envelopes, such as cryptographic Presentation services that can be read by the addressee only.

Roughly speaking, tunneling protocols operate at the transport layer, for example carrying non-IP protocols such as IBM's SNA or Novell's IPX over an IP network, or providing end-to-end encryption with IP security (IPsec). While Generic Routing Encapsulation (GRE) might seem to be a network layer protocol, if the encapsulation of the payload takes place only at the endpoints, GRE becomes closer to a transport protocol that uses IP headers but contains complete frames or packets to deliver to an endpoint.

The major tasks of Transport layer are listed below:

· It locates the other party


· It creates a transport pipe between both end-users.

· It breaks the message into packets and reassembles them at the destination.

· It applies flow control to the packet stream.

5. Session Layer

The Session layer controls the dialogues/connections (sessions) between computers. It establishes, manages and terminates the connections between the local and remote application. It provides for either full-duplex or half-duplex operation, and establishes checkpointing, adjournment, termination, and restart procedures. The OSI model made this layer responsible for the "graceful close" of sessions, which is a property of TCP, and also for session checkpointing and recovery, which is not usually used in the Internet protocol suite.

The major tasks of the session layer are listed below:

· It is responsible for the relation between two end-users.

· It maintains the integrity and controls the data exchanged between the end-users.

· The end-users are aware of each other when the relation is established (synchronization).

· It uses naming and addressing to identify a particular user.

· It makes sure that the lower layer guarantees delivering the message (flow control).

6. Presentation Layer

The Presentation layer transforms the data to provide a standard interface for the Application layer. MIME encoding, data encryption and similar manipulation of the presentation are done at this layer to present the data as a service or protocol developer sees fit. Examples of this layer are converting an EBCDIC-coded text file to an ASCII-coded file, or serializing objects and other data structures into and out of XML.
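Both transformations mentioned above can be demonstrated with the Python standard library; 'cp500' is used here as one common EBCDIC code page, and the record serialized to XML is an invented example.

```python
# Sketch of presentation-layer style conversions: EBCDIC <-> text, and
# serializing a simple data structure to XML.
import xml.etree.ElementTree as ET

ebcdic_bytes = "HELLO".encode("cp500")       # EBCDIC representation of the text
ascii_text = ebcdic_bytes.decode("cp500")    # back to a plain (ASCII-compatible) string
print(ascii_text)                            # HELLO

record = {"name": "host1", "port": "5000"}
root = ET.Element("record", record)          # serialize a simple structure to XML
print(ET.tostring(root).decode())            # <record name="host1" port="5000" />
```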


The major tasks of presentation layer are listed below:

· It translates the language used by the application layer.

· It makes the users as independent as possible, so that they can concentrate on the conversation.

7. Application Layer (end users)

The application layer is the seventh level of the seven-layer OSI model. It interfaces directly to the users and performs common application services for the application processes. It also issues requests to the presentation layer. Note carefully that this layer provides services to user-defined application processes, and not to the end user. For example, it defines a file transfer protocol, but the end user must go through an application process to invoke file transfer. The OSI model does not include human interfaces.

The common application services sub layer provides functional elements including the Remote Operations Service Element (comparable to Internet Remote Procedure Call), Association Control, and Transaction Processing (according to the ACID requirements). Above the common application service sub layer are functions meaningful to user application programs, such as messaging (X.400), directory (X.500), file transfer (FTAM), virtual terminal (VTAM), and batch job manipulation (JTAM).

Information Exchange among the Layers

The seven OSI layers use various forms of control information to communicate with their peer layers in other computer systems. This control information consists of specific requests and instructions that are exchanged between peer OSI layers.

Control information typically takes one of two forms: headers and trailers. Headers are prepended to data that has been passed down from upper layers. Trailers are appended to data that has been passed down from upper layers. An OSI layer is not required to attach a header or a trailer to data from upper layers.

Headers, trailers, and data are relative concepts, depending on the layer that analyzes the information unit. As illustrated in figure 2.2, at the network layer, for example, an information unit consists of a Layer 3 header, called the Network Header (NH), and data. At the data link layer, however, all the information passed down by the network layer (the Layer 3 header and the data) is treated as data.

Similarly, the data link layer now attaches its header (DH) and trailer (DT) to the data received from the network layer. In other words, the data portion of an information unit at a given OSI layer can potentially contain headers, trailers, and data from all the higher layers. This is known as encapsulation. Figure 2.6 shows how the header and data from one layer are encapsulated into the data portion of the next lowest layer. In the figure, AH, PH, SH, TH and NH refer to the headers of the application layer through the network layer, respectively; DH and DT refer to the data link layer header and trailer.

Encapsulation of Data in ISO-OSI Reference model
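A toy sketch of this encapsulation in Python follows. The header strings AH, PH, SH, TH, NH, DH and DT are just labels standing in for real headers, and the '|' separators exist only to make the output readable.

```python
# Sketch: each layer prepends its header on the way down; the data link layer
# also appends a trailer. The receiver strips everything in reverse order.
HEADERS = ["AH", "PH", "SH", "TH", "NH"]    # application- through network-layer headers

def encapsulate(message: bytes) -> bytes:
    unit = message
    for hdr in HEADERS:                     # each layer prepends its header; NH ends up outermost
        unit = hdr.encode() + b"|" + unit
    return b"DH|" + unit + b"|DT"           # the data link layer adds both a header and a trailer

def decapsulate(frame: bytes) -> bytes:
    unit = frame[len(b"DH|"):-len(b"|DT")]  # data link layer strips DH and DT
    for hdr in reversed(HEADERS):           # each layer above strips its own header
        unit = unit[len(hdr) + 1:]
    return unit

frame = encapsulate(b"DATA")
print(frame)               # b'DH|NH|TH|SH|PH|AH|DATA|DT'
print(decapsulate(frame))  # b'DATA'
```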

The TCP/IP Reference Model

The TCP/IP reference model is the network model used in the current Internet architecture. It was created in the 1970s by DARPA for use in developing the Internet's protocols, and the structure of the Internet is still closely reflected by the TCP/IP model. It has fewer, less rigidly defined layers than the commonly referenced OSI model, and thus provides an easier fit for real-world protocols. Its origin lies in the grandfather of all networks, the ARPANET, a research network sponsored by the Department of Defense in the United States.

A major goal was that conversations between source and destination should continue even if some transmission facilities went out of operation. The reference model was named after its two main protocols, TCP (Transmission Control Protocol) and IP (Internet Protocol). No document officially specifies the model; different documents give different names to the layers and show different numbers of layers. There are versions of this model with four layers and with five layers.

The original four-layer version of the model consists of the following four layers:

· Layer 4 – Process Layer or Application Layer:

This is where the "higher level" protocols such as FTP, HTTP, etc. operate. The original TCP/IP specification described a number of different applications that fit into the top layer of the protocol stack. These applications include Telnet, FTP, SMTP and DNS. These are illustrated in figure 2.10.

Telnet is a program that supports the TELNET protocol over TCP. TELNET is a general two-way communication protocol that can be used to connect to another host and run applications on that host remotely.

FTP (File Transfer Protocol) is a protocol that was originally designed to promote the sharing of files among computer users. It shields the user from the variations of file storage on different architectures and allows for a reliable and efficient transfer of data.

SMTP (Simple Mail Transport Protocol) is the protocol used to transport electronic mail from one computer to another through a series of other computers along the route.

DNS (Domain Name System) resolves the numerical address of a network node into its textual name or vice-versa. It would translate www.yahoo.com to 204.71.177.71 to allow the routing protocols to find the host that the packet is destined for.
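The forward and reverse lookups described here can be exercised directly from Python's resolver interface. The yahoo.com address quoted above is historical, so the address returned today will differ; 8.8.8.8 is simply a well-known public resolver used here as a reverse-lookup example.

```python
# Sketch: forward (name -> address) and reverse (address -> name) DNS lookups.
import socket

print(socket.gethostbyname("www.yahoo.com"))   # name -> IPv4 address (whatever DNS returns today)
print(socket.gethostbyaddr("8.8.8.8")[0])      # address -> name (reverse lookup)
```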

· Layer 3 – Host-To-Host (Transport) Layer:


This is where flow-control and connection protocols exist, such as TCP. This layer deals with opening and maintaining connections, ensuring that packets are in fact received. The transport layer is the interface between the application layer and the complex hardware of the network. It is designed to allow peer entities on the source and destination hosts to carry on conversations. Data may be user data or control data. Two modes are available, full-duplex and half duplex. In full-duplex operation, both sides can transmit and receive data simultaneously, whereas in half duplex, a side can only send or receive at one time.

· Layer 2 – Internet or Internetworking Layer:

This layer defines IP addresses, with many routing schemes for navigating packets from one IP address to another. The job of the network layer is to inject packets into any network and have them travel independently to the destination. The layer defines IP (Internet Protocol) for its official packet format and protocol. Packet routing is a major job of this protocol.

· Layer 1 – Network Access Layer:

This layer describes the physical equipment necessary for communications, such as twisted pair cables, the signalling used on that equipment, and the low-level protocols using that signalling. The Host-to-Network layer interfaces the TCP/IP protocol stack to the physical network. The TCP/IP reference model does not specify in any great detail the operation of this layer, except that the host has to connect to the network using some protocol so it can send IP packets over it. As it is not officially defined, it varies from implementation to implementation, with vendors supplying their own version.


TCP/IP Network Protocol

The basic idea of the networking system is to allow one application on a host computer to talk to another application on a different host computer. The application forms its request, then passes the packet down to the lower layers, which add their own control information, either a header or a footer, onto the packet. Finally the packet reaches the physical layer and is transmitted through the cable onto the destination host.

The packet then travels up through the different layers, with each layer reading, deciphering, and removing the header or footer that was attached by its counterpart on the originating computer. Finally the packet arrives at the application it was destined for. Even though technically each layer communicates with the layer above or below it, the process can be viewed as one layer talking to its partner on the host.

Interaction with Application, Transport and Internet Layers

Interaction between the transport layer and the layers immediately above and below it is shown in the figure.


Interactions with Application, Transport and Internet Layers

Any program running in the application layer has the ability to send a message using TCP or UDP, which are the two protocols defined for the transport layer. The application can communicate with the TCP or the UDP service, whichever it requires. Both the TCP and UDP communicate with the Internet Protocol in the internet layer. In all cases communication is a two way process. The applications can read and write to the transport layer. The diagram only shows two protocols in the transport layer.

A message to be sent originates in the application layer. This is then passed down to the appropriate protocol in the transport layer. These protocols add a header to the message for the corresponding transport layer in the destination machine, for purposes of reassembling the message. The segment is then passed on to the internet layer, where the Internet Protocol adds a further header. Finally the segment is passed to the lowest layer, where a header and a trailer are added. The figure shows the structure of the final segment being sent.

LAN/WAN Header | IP Header | TCP/UDP Header | User data | LAN/WAN Trailer

Transmitted Segment from TCP/IP Network
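As a small illustration of an application handing user data to the transport layer, the sketch below sends one UDP datagram to a local receiver; the UDP and IP headers and the LAN/WAN framing shown above are added below the application without it ever seeing them. The port number is an arbitrary choice.

```python
# Sketch: an application writes user data to the UDP transport service.
import socket

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 9999))

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"user data", ("127.0.0.1", 9999))   # application passes the message down the stack

data, addr = rx.recvfrom(1024)                 # headers already stripped on the way back up
print(data, addr)
tx.close(); rx.close()
```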


The relations of all the protocols that reside in the corresponding layers are shown in the figure.

2. Discuss the following Switching Mechanisms: a. Circuit switching b. Message switching c. Packet switching

Circuit switching

A circuit switching network is one that establishes a dedicated circuit (or channel) between nodes and terminals before the users may communicate. Each circuit that is dedicated cannot be used by other callers until the circuit is released and a new connection is set up. Even if no actual communication is taking place in a dedicated circuit, that channel still remains unavailable to other users. Channels that are available for new calls to be set up are said to be idle. Circuit switching is used for ordinary telephone calls. It allows communications equipment and circuits to be shared among users. Each user has sole access to a circuit (functionally equivalent to a pair of copper wires) during network use.


(a) circuit switching (b) packet switching

For call setup and control (and other administrative purposes), it is possible to use a separate dedicated signalling channel from the end node to the network. ISDN is one such service that uses a separate signalling channel. The method of establishing the connection and monitoring its progress and termination through the network may also utilize a separate control channel.

Circuit switching can be relatively inefficient because capacity is wasted on connections which are set up but are not in continuous use (however momentarily). On the other hand, the connection is immediately available and capacity is guaranteed until the call is disconnected.

Communication using circuit switching involves three phases discussed below:

1. Connection establishment: Before any signal can be transmitted, an end to end circuit must be established.

2. Data transfer: Information can now be transmitted from source through the network to the destination using the dedicated path established.

3. Termination: After some period of data transfer, the connection is terminated.


Consider communication between two points A and D in a network as shown in fig. 4.6. The connection between A and D is provided using (shared) links between two other pieces of equipment, B and C.

A four node and 3 link network

Network use is initiated by a connection phase, during which a circuit is set up between source and destination, and terminated by a disconnect phase as listed above. These phases, with associated timings, are illustrated in the picture below

A circuit switched connection between A and D

(Information flows in two directions. Information sent from the calling end is shown in grey and information returned from the remote end is shown in black)

After a user requests a circuit, the desired destination address must be communicated to the local switching node (B). In a telephony network, this is achieved by dialing the number. Node B receives the connection request and identifies a path to the destination (D) via an intermediate node (C). This is followed by a circuit connection phase handled by the switching nodes and initiated by allocating a free circuit to C (link BC), followed by transmission of a call request signal from node B to node C. In turn, node C allocates a link (CD) and the request is then passed to node D after a similar delay.

The circuit is then established and may be used. While it is available for use, resources (i.e. in the intermediate equipment at B and C) and capacity on the links between the equipment are dedicated to the use of the circuit.

After completion of the connection, a signal confirming circuit establishment (a connect signal in the diagram) is returned; this flows directly back to node A with no search delays since the circuit has been established. Transfer of the data in the message then begins. After data transfer, the circuit is disconnected; a simple disconnect phase is included after the end of the data transmission.

Delays for setting up a circuit connection can be high, especially if ordinary telephone equipment is used. Call setup time with conventional equipment is typically on the order of 5 to 25 seconds after completion of dialing. New fast circuit switching techniques can reduce delays. Trade-offs between circuit switching and other types of switching depend strongly on switching times.

Message switching

Message switching was the precursor of packet switching, where messages were routed in their entirety, one hop at a time. It was first introduced by Leonard Kleinrock in 1961. Message switching systems are nowadays mostly implemented over packet-switched or circuit-switched data networks.

Hop-by-hop Telex forwarding is an example of a message switching system. E-mail is another example. When this form of switching is used, no physical path is established in advance between sender and receiver. Instead, when the sender has a block of data to be sent, it is stored in the first switching office (i.e. router) and then forwarded later, one hop at a time.

Each block is received in its entirety, inspected for errors, and then forwarded or retransmitted. It is a form of store-and-forward network. Data is transmitted into the network and stored in a switch. The network transfers the data from switch to switch when it is convenient to do so; as such, the data is not transferred in real time. Blocking cannot occur, but long delays can happen. The source and destination terminals need not be compatible, since conversions are done by the message switching networks.

Again consider a connection of a network shown in figure 4.6. For instance, when a telex (or email) message is sent from A to D, it first passes over a local connection (AB). It is then passed at some later time to C (via link BC), and from there to the destination (via link CD). At each message switch, the received message is stored, and a connection is subsequently made to deliver the message to the neighboring message switch. Message switching is also known as store-and-forward switching since the messages are stored at intermediate nodes en route to their destinations.

Message switching to communicate between A and D

The figure illustrates message switching; transmission of only one message is illustrated for simplicity. As the figure indicates, a complete message is sent from node A to node B when the link interconnecting them becomes available. Since the message may be competing with other messages for access to facilities, a queuing delay may be incurred while waiting for the link to become available. The message is stored at B until the next link becomes available, with another queuing delay before it can be forwarded. It repeats this process until it reaches its destination.

Circuit setup delays are replaced by queuing delays. Considerable extra delay may result from storage at individual nodes. A delay for putting the message on the communications link (message length in bits divided by link speed in bps) is also incurred at each node en route. Messages are slightly longer than they are in circuit switching (after establishment of the circuit), since header information must be included with each message; the header includes information identifying the destination as well as other types of information. Most message switched networks do not use dedicated point-to-point links.

Packet switching

Packet switching splits traffic data (for instance, digital representation of sound, or computer data) into chunks, called packets. Packet switching is similar to message switching. Any message exceeding a network-defined maximum length is broken up into shorter units, known as packets, for transmission. The packets, each with an associated header, are then transmitted individually through the network. These packets are routed over a shared network. Packet switching networks do not require a circuit to be established and allow many pairs of nodes to communicate almost simultaneously over the same channel. Each packet is individually addressed precluding the need for a dedicated path to help the packet find its way to its destination.

Packet switching is used to optimize the use of the channel capacity available in a network, to minimize the transmission latency (i.e. the time it takes for data to pass across the network), and to increase robustness of communication.


Packet-switched communication between A and D

The most well-known use of packet switching is the Internet. The Internet uses the Internet protocol suite over a variety of data link layer protocols. For example, Ethernet and Frame Relay are very common. Newer mobile phone technologies (e.g., GPRS, i-mode) also use packet switching. Packet switching is also called connectionless networking because no connections are established.

There are two important benefits from packet switching.

1. The first and most important benefit is that since packets are short, the communication links between the nodes are only allocated to transferring a single message for a short period of time while transmitting each packet. Longer messages require a series of packets to be sent, but do not require the link to be dedicated between the transmission of each packet. The implication is that packets belonging to other messages may be sent between the packets of the message being sent from A to D. This provides a much fairer sharing of the resources of each of the links.

2. Another benefit of packet switching is known as "pipelining". Pipelining is visible in the figure above. At the time packet 1 is sent from B to C, packet 2 is sent from A to B; packet 1 is sent from C to D while packet 2 is sent from B to C, and packet 3 is sent from A to B, and so forth. This simultaneous use of communications links represents a gain in efficiency; the total delay for transmission across a packet network may be considerably less than for message switching, despite the inclusion of a header in each packet rather than in each message.
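The pipelining gain can be made concrete with a rough calculation. The numbers below are invented (a three-hop path, equal link rates, header overhead and propagation/queuing delays ignored); they only show the shape of the comparison.

```python
# Sketch: total transfer delay, message switching vs. packet switching.
message_bits = 8_000_000      # size of the whole message
packet_bits = 8_000           # size of each packet (header overhead ignored)
link_bps = 1_000_000          # rate of every link
hops = 3                      # A-B, B-C, C-D

# Message switching: the whole message is retransmitted on every hop.
msg_switch_delay = hops * (message_bits / link_bps)

# Packet switching: once the first packet fills the pipe, later hops overlap.
pkt_switch_delay = (message_bits / link_bps) + (hops - 1) * (packet_bits / link_bps)

print(msg_switch_delay)   # 24.0 seconds
print(pkt_switch_delay)   # 8.016 seconds
```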

The long-haul circuit-switching telecommunications network was originally designed to handle voice traffic, and the majority of traffic on these networks continues to be voice. A key characteristic of circuit-switching networks is that resources within the network are dedicated to a particular call. For voice connections, the resulting circuit will enjoy a high percentage of utilization because, most of the time, one party or the other is talking.

However, as the circuit-switching network began to be used increasingly for data connections, two shortcomings became apparent:


· In a typical user/host data connection (e.g., personal computer user logged on to a database server), much of the time the line is idle. Thus, with data connections, a circuit-switching approach is inefficient.

· In a circuit-switching network, the connection provides for transmission at a constant data rate. Thus, each of the two devices that are connected must transmit and receive at the same data rate as the other. This limits the utility of the network in interconnecting a variety of host computers and workstations.

To understand how packet switching addresses these problems, let us briefly summarize packet-switching operation. Data are transmitted in short packets. A typical upper bound on packet length is 1000 octets (bytes). If a source has a longer message to send, the message is broken up into a series of packets. Each packet contains a portion (or all, for a short message) of the user's data plus some control information. The control information at a minimum includes the information that the network requires to be able to route the packet through the network and deliver it to the intended destination. At each node en route, the packet is received, stored briefly, and passed on to the next node.

Now, let us consider the figure, assuming that it depicts a simple packet-switching network. Consider a packet to be sent from station A to station E. The packet includes control information that indicates that the intended destination is E. The packet is sent from A to node 4. Node 4 stores the packet, determines the next leg of the route (say 5), and queues the packet to go out on that link (the 4-5 link). When the link is available, the packet is transmitted to node 5, which forwards the packet to node 6, and finally to E. This approach has a number of advantages over circuit switching:

The Use of Packets


· Line efficiency is greater, as a single node-to-node link can be dynamically shared by many packets over time. The packets are queued up and transmitted as rapidly as possible over the link. By contrast, with circuit switching, time on a node-to-node link is pre-allocated using synchronous time division multiplexing. Much of the time, such a link may be idle because a portion of its time is dedicated to a connection that is idle.

· A packet-switching network can perform data-rate conversion. Two stations of different data rates can exchange packets because each connects to its node at its proper data rate.

· When traffic becomes heavy on a circuit-switching network, some calls are blocked; that is, the network refuses to accept additional connection requests until the load on the network decreases. On a packet-switching network, packets are still accepted, but delivery delay increases.

· Priorities can be used. Thus, if a node has a number of packets queued for transmission, it can transmit the higher-priority packets first. These packets will therefore experience less delay than lower-priority packets.

3. Explain the different classes of IP addresses with suitable examples.

IPv4 Address Classes

The IPv4 address space can be subdivided into 5 classes - Class A, B, C, D and E. Each class consists of a contiguous subset of the overall IPv4 address range.

With a few special exceptions explained further below, the values of the leftmost four bits of an IPv4 address determine its class as follows:

Class  Leftmost bits  Start address  Finish address
A      0xxx           0.0.0.0        127.255.255.255
B      10xx           128.0.0.0      191.255.255.255
C      110x           192.0.0.0      223.255.255.255
D      1110           224.0.0.0      239.255.255.255
E      1111           240.0.0.0      255.255.255.255

All Class C addresses, for example, have the leftmost three bits set to '110', but each of the remaining 29 bits may be set to either '0' or '1' independently (as represented by an x in these bit positions):

110xxxxx xxxxxxxx xxxxxxxx xxxxxxxx

Converting the above to dotted decimal notation, it follows that all Class C addresses fall in the range from 192.0.0.0 through 223.255.255.255.
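Because the class boundaries fall on first-octet values, a small helper can classify any dotted-decimal address. Special ranges such as loopback and limited broadcast, discussed next, are not treated separately in this sketch.

```python
# Sketch: classify an IPv4 address by its leading bits / first octet.
def ipv4_class(address: str) -> str:
    first_octet = int(address.split(".")[0])
    if first_octet < 128:
        return "A"      # leading bit 0
    if first_octet < 192:
        return "B"      # leading bits 10
    if first_octet < 224:
        return "C"      # leading bits 110
    if first_octet < 240:
        return "D"      # leading bits 1110
    return "E"          # leading bits 1111

print(ipv4_class("192.0.2.1"))    # C
print(ipv4_class("10.0.0.1"))     # A
print(ipv4_class("224.0.0.5"))    # D
```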

IP Address Class E and Limited Broadcast

The IPv4 networking standard defines Class E addresses as reserved, meaning that they should not be used on IP networks. Some research organizations use Class E addresses for experimental purposes. However, nodes that try to use these addresses on the Internet will be unable to communicate properly.

A special type of IP address is the limited broadcast address 255.255.255.255. A broadcast involves delivering a message from one sender to many recipients. Senders direct an IP broadcast to 255.255.255.255 to indicate all other nodes on the local network (LAN) should pick up that message. This broadcast is 'limited' in that it does not reach every node on the Internet, only nodes on the LAN.

Technically, IP reserves the entire range of addresses from 255.0.0.0 through 255.255.255.255 for broadcast, and this range should not be considered part of the normal Class E range.
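Sending to the limited broadcast address from Python requires explicitly enabling broadcast on the socket, as sketched below; the port number is an arbitrary example, and the datagram will only reach nodes on the local network.

```python
# Sketch: send one datagram to the limited broadcast address 255.255.255.255.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)   # broadcast must be enabled explicitly
sock.sendto(b"hello, local network", ("255.255.255.255", 9999))
sock.close()
```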

IP Address Class D and Multicast

The IPv4 networking standard defines Class D addresses as reserved for multicast. Multicast is a mechanism for defining groups of nodes and sending IP messages to that group rather than to every node on the LAN (broadcast) or just one other node (unicast).

Multicast is mainly used on research networks. As with Class E, Class D addresses should not be used by ordinary nodes on the Internet.

IP Address Class A, Class B, and Class C

Class A, Class B, and Class C are the three classes of addresses used on IP networks in common practice.

IP Loopback Address

127.0.0.1 is the loopback address in IP. Loopback is a test mechanism of network adapters. Messages sent to 127.0.0.1 do not get delivered to the network. Instead, the adapter intercepts all loopback messages and returns them to the sending application. IP applications often use this feature to test the behavior of their network interface.

As with broadcast, IP officially reserves the entire range from 127.0.0.0 through 127.255.255.255 for loopback purposes. Nodes should not use this range on the Internet, and it should not be considered part of the normal Class A range.

Zero Addresses

As with the loopback range, the address range from 0.0.0.0 through 0.255.255.255 should not be considered part of the normal Class A range. 0.x.x.x addresses serve no particular function in IP, but nodes attempting to use them will be unable to communicate properly on the Internet.

Private Addresses

The IP standard defines specific address ranges within Class A, Class B, and Class C reserved for use by private networks (intranets). The table below lists these reserved ranges of the IP address space.

Class  Private start address  Private finish address
A      10.0.0.0               10.255.255.255
B      172.16.0.0             172.31.255.255
C      192.168.0.0            192.168.255.255

Nodes are effectively free to use addresses in the private ranges if they are not connected to the Internet, or if they reside behind firewalls or other gateways that use Network Address Translation (NAT).
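Python's standard ipaddress module already knows these reserved blocks, so checking whether an address falls in one of the private ranges takes a single call:

```python
# Sketch: check addresses against the private (RFC 1918) ranges listed above.
import ipaddress

for addr in ("10.1.2.3", "172.20.0.1", "192.168.1.1", "8.8.8.8"):
    print(addr, ipaddress.ip_address(addr).is_private)
# 10.1.2.3 True, 172.20.0.1 True, 192.168.1.1 True, 8.8.8.8 False
```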

IPv6 Address Types

IPv6 does not use classes. IPv6 supports the following three IP address types:

unicast

multicast

anycast

Unicast and multicast messaging in IPv6 are conceptually the same as in IPv4. IPv6 does not support broadcast, but its multicast mechanism accomplishes essentially the same effect. Multicast addresses in IPv6 start with 'FF' (the first octet set to 255).

Anycast in IPv6 is a variation on multicast. Whereas multicast delivers messages to all nodes in the multicast group, anycast delivers messages to any one node in the multicast group. Anycast is an advanced networking concept designed to support the failover and load balancing needs of applications.

IPv6 Reserved Addresses

IPv6 reserves just two special addresses: 0:0:0:0:0:0:0:0 and 0:0:0:0:0:0:0:1. IPv6 uses 0:0:0:0:0:0:0:0 internal to the protocol implementation, so nodes cannot use it for their own communication purposes. IPv6 uses 0:0:0:0:0:0:0:1 as its loopback address, equivalent to 127.0.0.1 in IPv4.

4. Discuss the following with respect to Internet Control Message Protocols: a. Congested and Datagram Flow control b. Route change requests from routers c. Detecting circular or long routes


Congested and Datagram Flow control

As IP is connectionless, a router cannot reserve communication resources or memory in advance of receiving datagrams. Hence routers can be overrun with traffic. This situation is called network congestion, or simply congestion.

Congestion arises because of two reasons:

1. A high-speed computer generating traffic faster than a network can transfer it.

2. Datagrams may need to cross a slower-speed WAN.

When datagrams arrive at a host or router faster than it can process them, it enqueues them in memory temporarily. For small bursts, this temporary memory solves the problem, but if the traffic continues, the memory will be exhausted and datagrams must be discarded. A machine uses ICMP source quench messages to report the problem to the original source. A source quench is a request for the source to reduce its current rate of datagram transmission.

In general, a router sends one source quench message for every datagram that it discards. There is no ICMP message to reverse the effect of a source quench. As soon as a host gets a source quench message, it lowers the rate at which it sends datagrams to that destination until it stops receiving source quench messages. It then gradually increases the rate as long as no further source quench messages are received.
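The sender behaviour just described (back off on a quench, creep back up while none arrive) might be sketched as follows; the initial rate, the halving factor and the recovery factor are illustrative choices, not anything specified by ICMP.

```python
# Sketch: a sender that adapts its datagram rate to source quench messages.
class QuenchAwareSender:
    def __init__(self, rate_pps=1000):
        self.rate_pps = rate_pps          # current datagram rate (packets per second)

    def on_source_quench(self):
        self.rate_pps = max(1, self.rate_pps // 2)    # back off sharply on a quench

    def on_quiet_interval(self):
        self.rate_pps = int(self.rate_pps * 1.1) + 1  # recover gradually while no quenches arrive
```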

Source quench format


Source quench message format

The format of the source quench message is as shown in figure 5.5. It contains a TYPE field equal to 4 and a CODE field equal to 0, and it contains a datagram prefix. As most ICMP messages report an error, the datagram prefix field contains a prefix of the datagram that triggered the source quench request.

A congested router discards the datagram and sends one source quench request; the datagram prefix in the source quench message identifies the datagram that was dropped.
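Under those assumptions about the format (TYPE 4, CODE 0, the standard Internet checksum, and the IP header plus 64 bits of the offending datagram as the prefix), a source quench message could be assembled as plain bytes as sketched below; actually emitting it on the wire would additionally need a raw socket and the right privileges.

```python
# Sketch: build the bytes of an ICMP source quench message.
import struct

def internet_checksum(data: bytes) -> int:
    # Standard 16-bit one's-complement checksum used by ICMP.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_source_quench(dropped_datagram: bytes) -> bytes:
    # Datagram prefix: the IP header (20 bytes, assuming no options) plus 64 bits of data.
    prefix = dropped_datagram[:20 + 8]
    header = struct.pack("!BBHI", 4, 0, 0, 0)        # TYPE=4, CODE=0, checksum=0, unused
    checksum = internet_checksum(header + prefix)
    return struct.pack("!BBHI", 4, 0, checksum, 0) + prefix
```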

Route change requests from routers

Routers are assumed to know the correct routes; a host begins with minimal routing information and learns new routes from routers. Hosts initialize their Internet routing tables from a configuration file at system startup, and system administrators make routing changes during normal operation. Whenever the network topology changes, routing tables in routers or hosts may become incorrect. Routers exchange routing information periodically to accommodate network changes and keep their routes up to date.

When a router detects a host using a non-optimal route, it sends the host an ICMP message called a redirect, requesting that the host change its route to that specific destination. The router also forwards the original datagram.

Redirect message format

The format of the REDIRECT message is as shown in the figure. It contains a TYPE field with value equal to 5.


It contains a 32-bit ROUTER INTERNET ADDRESS field, which specifies the address of the router that the host is to use to reach the destination mentioned in the datagram. The INTERNET HEADER field contains the IP header plus 64 bits of the datagram that triggered the message. A host that receives an ICMP redirect message examines the datagram prefix to determine the datagram's destination address.

Redirect message format

The CODE field is 8 bits long and specifies how to interpret the destination address, based on the values illustrated in the table below.

Code value of redirect message

CODE Value  Meaning
0           Redirect datagram for the Net
1           Redirect datagram for the Host
2           Redirect datagram for the Type of Service and Net
3           Redirect datagram for the Type of Service and Host

ICMP redirect message is sent to hosts only and not to routers.

Detecting circular or long routes

Internet routers use routing tables, and an error in a routing table can produce a routing cycle. A routing cycle consists of two or more routers among which a datagram circulates; once a datagram enters a routing cycle, it will pass around the cycle endlessly. To prevent this, each datagram carries a time-to-live field in its IP header, sometimes also referred to as a hop count. A router decrements this time-to-live counter whenever it processes a datagram and discards the datagram when the counter hits zero.

Whenever a datagram is discarded by a router because of a counter timeout, the router sends an ICMP time exceeded message back to the source of the discarded datagram.
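The per-datagram check a router performs might look like the following sketch, where the datagram is represented as a plain dictionary and send_time_exceeded stands in for whatever routine builds and sends the ICMP reply; both are hypothetical names used only for illustration.

```python
# Sketch: decrement the time-to-live counter and drop the datagram at zero,
# reporting back with an ICMP time exceeded (TYPE 11, CODE 0) message.
def forward(datagram, send_time_exceeded):
    datagram["ttl"] -= 1
    if datagram["ttl"] <= 0:
        # Discard and notify the original source instead of forwarding.
        send_time_exceeded(datagram["source"], icmp_type=11, code=0)
        return None
    return datagram          # otherwise pass it on toward the destination
```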

Time exceeded message format


The format of the Time Exceeded message is as shown in figure 5.7. A router sends this message whenever a datagram is discarded because:

1. The time-to-live field of the IP datagram header has reached zero.

2. The reassembly timer expires while waiting for more fragments of the datagram.

It uses a TYPE field value equal to 11. It supports two values for the CODE field, 0 and 1, to specify the nature of the timeout being reported, according to the list shown in table 5.3.

Code value of Time exceeded message

CODE Value  Meaning
0           Time-to-live count of IP datagram exceeded
1           Fragment reassembly timer exceeded