
A Computer Network is a Group of Interconnected Computers

Oct 27, 2014

Shruti Pillai
INTRODUCTION TO COMPUTER NETWORKS

A computer network is a group of interconnected computers. Networks may be classified according to a wide variety of characteristics. This article provides a general overview of some types and categories and also presents the basic components of a network.

A network allows computers to communicate with each other and to share resources and information. The Advanced Research Projects Agency (ARPA) designed the Advanced Research Projects Agency Network (ARPANET) for the United States Department of Defense; developed in the late 1960s and early 1970s, it was the first computer network in the world.

Network classification

The following list presents categories used for classifying networks:

Connection method

Computer networks can be classified according to the hardware and software technology used to interconnect the individual devices in the network, such as optical fiber, Ethernet, wireless LAN, HomePNA, power line communication, or G.hn. Ethernet uses physical wiring to connect devices. Frequently deployed devices include hubs, switches, bridges, and routers.

Wireless LAN technology is designed to connect devices without wiring. These devices use radio waves or infrared signals as a transmission medium.

ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network.

Wired Technologies

Twisted-Pair Wire - This is the most widely used medium for telecommunication. Twisted-pair wires are ordinary telephone wires, consisting of two insulated copper wires twisted into pairs, and are used for both voice and data transmission. Twisting the two wires together helps reduce crosstalk and electromagnetic induction. Transmission speeds range from 2 million to 100 million bits per second.

Coaxial Cable - These cables are widely used for cable television systems, office buildings, and other worksites for local area networks. The cables consist of copper or aluminum wire wrapped with an insulating layer, typically of a flexible material with a high dielectric constant, all of which is surrounded by a conductive layer. The layers of insulation help minimize interference and distortion. Transmission speeds range from 200 million to more than 500 million bits per second.


Fiber Optics - These cables consist of one or more thin filaments of glass fiber wrapped in a protective layer. They transmit light, which can travel over long distances and carry higher bandwidths. Fiber-optic cables are not affected by electromagnetic radiation. Transmission speeds can reach trillions of bits per second. Fiber optics is hundreds of times faster than coaxial cable and thousands of times faster than twisted-pair wire.
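To put these figures in perspective, the sketch below estimates how long a hypothetical 1 GB file transfer would take at the upper end of each quoted range. The file size is an illustrative assumption, not from the text:

```python
# Rough transfer-time comparison for the wired media described above.
# The 1 GB file size is a made-up example; the rates are the upper
# ends of the ranges quoted in the text.

FILE_BITS = 1 * 10**9 * 8  # 1 gigabyte expressed in bits

rates_bps = {
    "twisted pair (100 Mbit/s)": 100 * 10**6,
    "coaxial cable (500 Mbit/s)": 500 * 10**6,
    "fiber optics (1 Tbit/s)": 1 * 10**12,
}

for medium, rate in rates_bps.items():
    seconds = FILE_BITS / rate
    print(f"{medium}: {seconds:.3f} s")
```

At these rates the same transfer drops from over a minute on twisted pair to a few milliseconds on fiber, which is the practical meaning of the "hundreds of times faster" comparison above.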

Wireless Technologies

Terrestrial Microwave - Terrestrial microwaves use Earth-based transmitters and receivers. The equipment looks similar to satellite dishes. Terrestrial microwaves use the low-gigahertz range, which limits all communications to line of sight; relay stations are spaced approximately 30 miles apart. Microwave antennas are usually placed on top of buildings, towers, hills, and mountain peaks.

Communications Satellites - These satellites use microwave radio, which is not deflected by the Earth's atmosphere, as their telecommunications medium. The satellites are stationed in space, typically 22,000 miles above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
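The roughly 22,000-mile altitude has a direct consequence for latency. A quick back-of-the-envelope calculation (using the standard mile-to-metre conversion) shows why geostationary satellite links add a noticeable delay:

```python
# One-way propagation delay to a geostationary satellite at the
# ~22,000-mile altitude mentioned in the text.

SPEED_OF_LIGHT = 299_792_458      # metres per second
ALTITUDE_M = 22_000 * 1609.344    # 22,000 miles converted to metres

one_way = ALTITUDE_M / SPEED_OF_LIGHT
round_trip = 2 * one_way

print(f"one-way: {one_way * 1000:.1f} ms, round trip: {round_trip * 1000:.1f} ms")
```

A round trip of roughly a quarter of a second is unavoidable physics, which is why interactive traffic often prefers terrestrial links even when satellite bandwidth is plentiful.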

Cellular and PCS Systems - These systems use several radio communications technologies. The service area is divided into geographic cells, each with a low-power transmitter or radio relay antenna that relays calls from one cell to the next.

Wireless LANs - Wireless local area networks use high-frequency radio technology similar to digital cellular, as well as low-frequency radio technology. Wireless LANs use spread-spectrum technology to enable communication between multiple devices in a limited area. An example of an open-standard wireless radio-wave technology is IEEE 802.11b.

Bluetooth - A short-range wireless technology and an open wireless protocol for data exchange over short distances. It operates at approximately 1 Mbit/s with a range of 10 to 100 meters.

The Wireless Web - The wireless web refers to the use of the World Wide Web through equipment such as cellular phones, pagers, PDAs, and other portable communications devices. The wireless web service offers an anytime/anywhere connection.


TYPES OF NETWORKS

Networks are often classified as Local Area Network (LAN), Wide Area Network (WAN), Metropolitan Area Network (MAN), Personal Area Network (PAN), Virtual Private Network (VPN), Campus Area Network (CAN), Storage Area Network (SAN), and so on, depending on their scale, scope, and purpose. Usage, trust levels, and access rights often differ between these types of network. For example, LANs tend to be designed for internal use by an organization's internal systems and employees in individual physical locations (such as a building), while WANs may connect physically separate parts of an organization to each other and may include connections to third parties.

1. LAN:

A local area network is a computer network covering a small physical area, like a home, office, or small group of buildings, such as a school or an airport. The defining characteristics of LANs, in contrast to wide area networks (WANs), include their usually higher data-transfer rates, smaller geographic area, and lack of a need for leased telecommunication lines. ARCNET, Token Ring, and many other technologies have been used in the past, and G.hn may be used in the future, but Ethernet over twisted-pair cabling and Wi-Fi are the two most common technologies currently in use.

History

As larger universities and research labs obtained more computers during the late 1960s, there was increasing pressure to provide high-speed interconnections. A 1970 report from the Lawrence Radiation Laboratory detailing the growth of their "Octopus" network gives a good indication of the situation.

The Cambridge Ring was developed at Cambridge University in 1974 but never became a successful commercial product.

Ethernet was developed at Xerox PARC in 1973-1975, and filed as U.S. Patent 4,063,220. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published their seminal paper, "Ethernet: Distributed Packet-Switching for Local Computer Networks". ARCNET was developed by Datapoint Corporation in 1976 and announced in 1977; it had its first commercial installation in December 1977 at Chase Manhattan Bank in New York.

Standards evolution

The development and proliferation of CP/M-based personal computers from the late 1970s and then DOS-based personal computers from 1981 meant that a single site began to have dozens or even hundreds of computers. The initial attraction of networking these was generally to share disk space and laser printers, which were both very expensive at the time. There was much enthusiasm for the concept and for several years, from about


1983 onward, computer industry pundits would regularly declare the coming year to be “the year of the LAN”.

In practice, the concept was marred by the proliferation of incompatible Physical Layer and network protocol implementations, and a plethora of methods of sharing resources. Typically, each vendor would have its own type of network card, cabling, protocol, and network operating system. A solution appeared with the advent of Novell NetWare, which provided even-handed support for dozens of competing card/cable types and a much more sophisticated operating system than most of its competitors. NetWare dominated the personal computer LAN business from shortly after its introduction in 1983 until the mid-1990s, when Microsoft introduced Windows NT Advanced Server and Windows for Workgroups.

Of the competitors to NetWare, only Banyan Vines had comparable technical strengths, but Banyan never gained a secure base. Microsoft and 3Com worked together to create a simple network operating system which formed the base of 3Com's 3+Share, Microsoft's LAN Manager and IBM's LAN Server. None of these were particularly successful.

In this same timeframe, Unix computer workstations from vendors such as Sun Microsystems, Hewlett-Packard, Silicon Graphics, Intergraph, NeXT and Apollo were using TCP/IP based networking. Although this market segment is now much reduced, the technologies developed in this area continue to be influential on the Internet and in both Linux and Apple Mac OS X networking—and the TCP/IP protocol has now almost completely replaced IPX, AppleTalk, NBF and other protocols used by the early PC LANs.

Cabling

Early LAN cabling had always been based on various grades of co-axial cable, but IBM's Token Ring used shielded twisted pair cabling of their own design, and in 1984 StarLAN showed the potential of simple Cat3 unshielded twisted pair—the same simple cable used for telephone systems. This led to the development of 10Base-T (and its successors) and structured cabling which is still the basis of most LANs today. In addition, fiber-optic cabling is used increasingly for high-bandwidth local networking.

Technical aspects

Switched Ethernet is the most common Data Link Layer implementation on local area networks. At the Network Layer, the Internet Protocol has become the standard. However, many different options have been used in the history of LAN development and some continue to be popular in niche applications. Smaller LANs generally consist of one or more switches linked to each other—often at least one is connected to a router, cable modem, or ADSL modem for Internet access.
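The TCP/IP exchange such a LAN carries can be sketched with two endpoints on one machine. This minimal example uses the loopback interface in place of a real switched LAN; the port is chosen by the OS and the messages are arbitrary:

```python
# Minimal TCP/IP exchange between two endpoints. The loopback
# interface stands in for a real switched LAN; everything else
# (connection setup, send, receive) works the same across one.
import socket
import threading

def serve(listener):
    conn, _ = listener.accept()          # wait for one client
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)   # echo it back

listener = socket.create_server(("127.0.0.1", 0))  # OS picks a free port
port = listener.getsockname()[1]
t = threading.Thread(target=serve, args=(listener,))
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

t.join()
listener.close()
print(reply)
```

On a real LAN the only change is the address: the client would connect to the server host's IP on the local subnet, with switches forwarding the Ethernet frames in between.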

Larger LANs are characterized by their use of redundant links with switches using the spanning tree protocol to prevent loops, their ability to manage differing traffic types via


quality of service (QoS), and their ability to segregate traffic with VLANs. Larger LANs also contain a wide variety of network devices such as switches, firewalls, routers, load balancers, and sensors.
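The spanning tree protocol's starting point, electing a root bridge, can be sketched as follows. This is a deliberate simplification (real STP also exchanges BPDUs, computes path costs, and blocks redundant ports), and the bridge IDs are made-up examples:

```python
# Simplified sketch of the first step of the spanning tree protocol:
# every switch advertises its bridge ID (priority, MAC address), and
# the switch with the numerically lowest ID becomes the root bridge.
# Only root election is shown; the bridge IDs are made-up examples.

bridges = [
    (32768, "00:1b:54:aa:01:02"),
    (4096,  "00:1b:54:bb:03:04"),   # lowest priority, so it wins
    (32768, "00:1b:54:00:05:06"),   # would win a priority tie on MAC
]

root = min(bridges)  # tuple comparison: priority first, then MAC
print("root bridge:", root)
```

Administrators exploit exactly this rule in practice: lowering one switch's priority pins the root to a known location rather than leaving it to the lowest MAC address.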

LANs may have connections with other LANs via leased lines, leased services, or by tunneling across the Internet using virtual private network technologies. Depending on how the connections are established and secured, and the distance involved, a LAN may also be classified as a metropolitan area network (MAN) or a wide area network (WAN).

Fig: Typical library network, in a branching tree topology with controlled access to resources

2. WAN:

Wide Area Network is a computer network that covers a broad area (i.e., any network whose communications links cross metropolitan, regional, or national boundaries). This is in contrast with personal area networks (PANs), local area networks (LANs), campus area networks (CANs), or metropolitan area networks (MANs) which are usually limited to a room, building, campus or specific metropolitan area (e.g., a city) respectively. The largest and most well-known example of a WAN is the Internet. WANs are used to connect LANs and other types of networks together, so that users and computers in one location can communicate with users and computers in other locations. Many WANs are built for


one particular organization and are private. Others, built by Internet service providers, provide connections from an organization's LAN to the Internet.

WANs are often built using leased lines. At each end of the leased line, a router connects to the LAN on one side and a hub within the WAN on the other. Leased lines can be very expensive, so WANs can instead be built using less costly circuit switching or packet switching methods. Network protocols including TCP/IP deliver transport and addressing functions. Protocols including Packet over SONET/SDH, MPLS, ATM, and Frame Relay are often used by service providers to deliver the links that are used in WANs. X.25 was an important early WAN protocol, and is often considered the "grandfather" of Frame Relay, as many of the underlying protocols and functions of X.25 are still in use today (with upgrades) by Frame Relay.

Academic research into wide area networks can be broken down into three areas: mathematical models, network emulation, and network simulation. Performance improvements are sometimes delivered via WAFS or WAN optimization.

Transmission rates usually range from 1200 bps to 6 Mbps, although some connections, such as ATM and leased lines, can reach speeds greater than 156 Mbps. Typical communication links used in WANs are telephone lines, microwave links, and satellite channels. Recently, with the proliferation of low-cost Internet connectivity, many companies and organizations have turned to VPNs to interconnect their networks, creating a WAN that way. Companies such as Cisco, New Edge Networks, and Check Point offer solutions to create VPN networks. Several options are available for WAN connectivity:

Leased line - A point-to-point connection between two computers or Local Area Networks (LANs).
Advantages: most secure. Disadvantages: expensive.
Sample protocols used: PPP, HDLC, SDLC, HNAS.

Circuit switching - A dedicated circuit path is created between end points; the best example is dial-up connections.
Advantages: less expensive. Disadvantages: call setup.
Sample protocols used: PPP, ISDN.

Packet switching - Devices transport packets via a shared single point-to-point or point-to-multipoint link across a carrier internetwork. Variable-length packets are transmitted over Permanent Virtual Circuits (PVC) or Switched Virtual Circuits (SVC).
Advantages: shared media across link.
Sample protocols used: X.25, Frame Relay.

Cell relay - Similar to packet switching, but uses fixed-length cells instead of variable-length packets. Data is divided into fixed-length cells and then transported across virtual circuits.
Advantages: best for simultaneous use of voice and data. Disadvantages: overhead can be considerable.
Sample protocols used: ATM.

3. MAN:

A Metropolitan Area Network is optimized for a larger geographical area than a LAN, ranging from several blocks of buildings to entire cities. MANs can depend on communications channels of moderate-to-high data rates. A MAN might be owned and operated by a single organization, but it usually will be used by many individuals and organizations. MANs might also be owned and operated as public utilities. They often provide means for internetworking of local networks. A metropolitan area network can span up to 50 km; typical devices used are modems and wire/cable.

4. PAN:

A personal area network is a computer network used for communication among computer devices (including telephones and personal digital assistants) close to one's person. The devices may or may not belong to the person in question. The reach of a PAN is typically a few meters. PANs can be used for communication among the personal devices themselves (intrapersonal communication), or for connecting to a higher-level network and the Internet (an uplink). Personal area networks may be wired with computer buses such as USB and FireWire. A wireless personal area network (WPAN) can also be made possible with network technologies such as IrDA, Bluetooth, UWB, Z-Wave, and ZigBee.

5. VPN:

A virtual private network (VPN) is a computer network in which some of the links between nodes are carried by open connections or virtual circuits in some larger networks (such as the Internet), as opposed to running across a single private network. The Link Layer protocols of the virtual network are said to be tunneled through the transport network. One common application is to secure communications through the public Internet, but a VPN does not need to have explicit security features such as authentication or content encryption. For example, VPNs can also be used to separate the traffic of different user communities over an underlying network with strong security


features, or to provide access to a network via customized or private routing mechanisms. VPN service providers may offer best-effort performance, or may have a defined service level agreement (SLA) with their VPN customers. Generally, a VPN has a topology more complex than point-to-point.

Categorization by user administrative relationships - The Internet Engineering Task Force (IETF) has categorized a variety of VPNs, some of which, such as Virtual LANs (VLANs), are the standardization responsibility of other organizations, such as the Institute of Electrical and Electronics Engineers (IEEE) Project 802, Workgroup 802.1 (architecture).

Originally, Wide Area Network (WAN) links from a telecommunications service provider interconnected network nodes within a single enterprise. With the advent of LANs, enterprises could interconnect their nodes with links that they owned. While the original WANs used dedicated lines and layer 2 multiplexed services such as Frame Relay, IP-based layer 3 networks, such as the ARPANET, the Internet, and military IP networks (NIPRNET, SIPRNET, JWICS, etc.), became common interconnection media, and VPNs began to be defined over IP networks. The military networks may themselves be implemented as VPNs on common transmission equipment, but with separate encryption and perhaps routers.

It became useful first to distinguish among different kinds of IP VPN based on the administrative relationships (rather than the technology) interconnecting the nodes. Once the relationships were defined, different technologies could be used, depending on requirements such as security and quality of service. When an enterprise interconnects a set of nodes, all under its administrative control, the network is termed an intranet. When the interconnected nodes are under multiple administrative authorities but are hidden from the public Internet, the resulting set of nodes is called an extranet. A user organization can manage both intranets and extranets itself, or negotiate a service as a contracted (and usually customized) offering from an IP service provider. In the latter case, the user organization contracts for layer 3 services, much as it may contract for layer 1 services such as dedicated lines, or multiplexed layer 2 services such as Frame Relay.

IETF documents distinguish between provider-provisioned and customer-provisioned VPNs. Just as an interconnected set of providers can supply conventional WAN services, so a single service provider can supply provider-provisioned VPNs (PPVPNs), presenting a common point-of-contact to the user organization.

Routing

Tunneling protocols can be used in a point-to-point topology that would generally not be considered a VPN, because a VPN is expected to support arbitrary and changing sets of network nodes. Since most router implementations support software-defined tunnel interface, customer-provisioned VPNs often comprise simply a set of tunnels over which conventional routing protocols run. PPVPNs, however, need to support the coexistence of multiple VPNs, hidden from one another, but operated by the same service provider.


Building blocks - Depending on whether the PPVPN runs in layer 2 or layer 3, the building blocks described below may be L2 only, L3 only, or combinations of the two. Multiprotocol Label Switching (MPLS) functionality blurs the L2-L3 identity. While RFC 4026 generalized these terms to cover L2 and L3 VPNs, they were introduced in RFC 2547.

Customer edge device (CE)

In general, a CE is a device, physically at the customer premises, that provides access to the PPVPN service. Some implementations treat it purely as a demarcation point between provider and customer responsibility, while others allow customers to configure it.

Provider edge device (PE) 

A PE is a device, or set of devices, at the edge of the provider network, which provides the provider's view of the customer site. PEs are aware of the VPNs that connect through them, and maintain VPN state.

Provider device (P) 

A P device operates inside the provider's core network, and does not directly interface to any customer endpoint. It might, for example, provide routing for many provider-operated tunnels that belong to different customers' PPVPNs. While the P device is a key part of implementing PPVPNs, it is not itself VPN-aware and does not maintain VPN state. Its principal role is allowing the service provider to scale its PPVPN offerings, for example by acting as an aggregation point for multiple PEs. P-to-P connections, in such a role, are often high-capacity optical links between major locations of the provider.

User-visible PPVPN services

This section deals with the types of VPN currently considered active in the IETF; some historical names were replaced by these terms.

Layer 1 services

Virtual private wire and private line services (VPWS and VPLS)

In both of these services, the provider does not offer a full routed or bridged network, but components from which the customer can build customer-administered networks. VPWS are point-to-point while VPLS can be point-to-multipoint. They can be Layer 1 emulated circuits with no data link structure. The customer determines the overall customer VPN service, which also can involve routing, bridging, or host network elements. An unfortunate acronym confusion can occur between Virtual Private Line Service and Virtual Private LAN Service; the context should make it clear whether "VPLS" means the layer 1 virtual private line or the layer 2 virtual private LAN.


Layer 2 services

Virtual LAN

A Layer 2 technique that allows for the coexistence of multiple LAN broadcast domains, interconnected via trunks using the IEEE 802.1Q trunking protocol. Other trunking protocols have been used but have become obsolete, including Inter-Switch Link (ISL), IEEE 802.10 (originally a security protocol but a subset was introduced for trunking), and ATM LAN Emulation (LANE).
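The 802.1Q tagging that makes this trunking possible can be sketched directly: a 4-byte tag (TPID 0x8100 plus a 16-bit TCI carrying priority and VLAN ID) is inserted between the source MAC address and the EtherType. The MAC addresses and VLAN ID below are made-up examples:

```python
# Sketch of inserting an IEEE 802.1Q tag into an Ethernet frame.
# The 4-byte tag (TPID 0x8100 + 16-bit TCI) goes between the source
# MAC address and the EtherType. Addresses and VLAN ID are made up.
import struct

def tag_frame(frame: bytes, vlan_id: int, pcp: int = 0) -> bytes:
    tci = (pcp << 13) | (vlan_id & 0x0FFF)   # priority | DEI=0 | VID
    tag = struct.pack("!HH", 0x8100, tci)    # TPID, then TCI
    return frame[:12] + tag + frame[12:]     # after dst+src MACs

dst = bytes.fromhex("ffffffffffff")                       # broadcast
src = bytes.fromhex("001b44112233")                       # example MAC
payload = struct.pack("!H", 0x0800) + b"...IP packet..."  # EtherType IPv4

tagged = tag_frame(dst + src + payload, vlan_id=100)
print(tagged[12:16].hex())  # TPID 0x8100 followed by VID 100
```

A trunk port carries frames tagged this way for many VLANs at once; access ports strip the tag before delivering the frame to the end host.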

Virtual private LAN service (VPLS)

Developed by the IEEE, VLANs allow multiple tagged LANs to share common trunking. VLANs frequently comprise only customer-owned facilities. VPWS, described above, is a layer 1 technology that supports emulation of both point-to-point and point-to-multipoint topologies. The method discussed here extends Layer 2 technologies such as 802.1d and 802.1q LAN trunking to run over transports such as Metro Ethernet.

As used in this context, a VPLS is a Layer 2 PPVPN, rather than a private line, emulating the full functionality of a traditional local area network (LAN). From a user standpoint, a VPLS makes it possible to interconnect several LAN segments over a packet-switched, or optical, provider core; a core transparent to the user, making the remote LAN segments behave as one single LAN.

In a VPLS, the provider network emulates a learning bridge, which optionally may include VLAN service.

Pseudo wire (PW)

PW is similar to VPWS, but it can provide different L2 protocols at both ends. Typically, its interface is a WAN protocol such as Asynchronous Transfer Mode or Frame Relay. In contrast, when aiming to provide the appearance of a LAN contiguous between two or more locations, the Virtual Private LAN service or IPLS would be appropriate.

IP-only LAN-like service (IPLS)

IPLS is a subset of VPLS in which the CE devices must have L3 capabilities; the IPLS presents packets rather than frames. It may support IPv4 or IPv6.

L3 PPVPN architectures

This section discusses the main architectures for PPVPNs, one where the PE disambiguates duplicate addresses in a single routing instance, and the other, virtual router, in which the PE contains a virtual router instance per VPN. The former approach, and its variants, has gained the most attention.


One of the challenges of PPVPNs involves different customers using the same address space, especially the IPv4 private address space. The provider must be able to disambiguate overlapping addresses in the multiple customers' PPVPNs.

BGP/MPLS PPVPN

In the method defined by RFC 2547, BGP extensions advertise routes in the IPv4 VPN address family, which are of the form of 12-byte strings, beginning with an 8-byte Route Distinguisher (RD) and ending with a 4-byte IPv4 address. RDs disambiguate otherwise duplicate addresses in the same PE.

PEs understand the topology of each VPN; VPNs are interconnected with MPLS tunnels, either directly or via P routers. In MPLS terminology, the P routers are Label Switch Routers without awareness of VPNs.
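The 12-byte VPN-IPv4 address format described above can be sketched in a few lines. Only the type-0 Route Distinguisher layout is shown, and the ASN, assigned numbers, and addresses are made-up examples:

```python
# Sketch of the 12-byte VPN-IPv4 address from RFC 2547: an 8-byte
# Route Distinguisher (RD) followed by a 4-byte IPv4 address. A
# type-0 RD is a 2-byte type, 2-byte ASN, and 4-byte assigned number.
# The ASN and addresses below are made-up example values.
import socket
import struct

def vpn_ipv4(asn: int, assigned: int, ipv4: str) -> bytes:
    rd = struct.pack("!HHI", 0, asn, assigned)  # type-0 RD, 8 bytes
    return rd + socket.inet_aton(ipv4)          # + 4-byte IPv4 address

# Two customers using the same private address stay distinguishable,
# because their RDs differ even though the IPv4 part is identical:
a = vpn_ipv4(64512, 1, "10.0.0.1")
b = vpn_ipv4(64512, 2, "10.0.0.1")

print(len(a), a != b)
```

This is exactly how the RD does its job: BGP compares the full 12-byte strings, so overlapping customer prefixes never look like duplicates to the PE.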

Virtual router PPVPN

The Virtual Router architecture, as opposed to BGP/MPLS techniques, requires no modification to existing routing protocols such as BGP. By the provisioning of logically independent routing domains, the customer operating a VPN is completely responsible for the address space. In the various MPLS tunnels, the different PPVPNs are disambiguated by their label, but do not need route distinguishers. Virtual router architectures do not need to disambiguate addresses, because rather than a PE router having awareness of all the PPVPNs, the PE contains multiple virtual router instances, each of which belongs to one and only one VPN.
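The per-VPN routing-table idea can be sketched as follows. This is a toy model (real virtual routers run full routing protocol instances), with made-up VPN names, prefixes, and next hops:

```python
# Sketch of the virtual-router approach: instead of disambiguating
# addresses with route distinguishers, the PE keeps one independent
# routing table per VPN, so overlapping customer address space never
# collides. VPN names, prefixes, and next hops are made-up examples.
from ipaddress import ip_address, ip_network

class VirtualRouterPE:
    def __init__(self):
        self.vrfs = {}  # one virtual router instance per VPN

    def add_route(self, vpn, prefix, next_hop):
        self.vrfs.setdefault(vpn, {})[ip_network(prefix)] = next_hop

    def lookup(self, vpn, dest):
        table = self.vrfs[vpn]                  # only this VPN's routes
        matches = [n for n in table if ip_address(dest) in n]
        best = max(matches, key=lambda n: n.prefixlen)  # longest match
        return table[best]

pe = VirtualRouterPE()
pe.add_route("customer-A", "10.0.0.0/24", "tunnel-1")
pe.add_route("customer-B", "10.0.0.0/24", "tunnel-2")  # same prefix, no clash

print(pe.lookup("customer-A", "10.0.0.5"))
print(pe.lookup("customer-B", "10.0.0.5"))
```

The same destination address resolves to a different tunnel per VPN, which is the whole point: isolation comes from separate tables rather than from rewriting addresses.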

Categorizing VPN security models

From the security standpoint, VPNs either trust the underlying delivery network, or must enforce security with mechanisms in the VPN itself. Unless the trusted delivery network runs only among physically secure sites, both trusted and secure models need an authentication mechanism for users to gain access to the VPN. Some Internet service providers as of 2009 offer managed VPN service for business customers who want the security and convenience of a VPN but prefer not to undertake administering a VPN server themselves. Managed VPNs go beyond PPVPN scope, and are a contracted security solution that can reach into hosts. In addition to providing remote workers with secure access to their employer's internal network, other security and management services are sometimes included as part of the package. Examples include keeping anti-virus and anti-spyware programs updated on each client's computer.

Authentication before VPN connection

A known trusted user, sometimes only when using trusted devices, can be provided with appropriate security privileges to access resources not available to general users. Servers may also need to authenticate themselves to join the VPN.


A wide variety of authentication mechanisms exist. VPNs may implement authentication in devices including firewalls, access gateways, and others. They may use passwords, biometrics, or cryptographic methods. Strong authentication involves combining cryptography with another authentication mechanism. The authentication mechanism may require explicit user action, or may be embedded in the VPN client or the workstation.
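A sketch of strong authentication in the sense used above, combining a password check with a cryptographic challenge-response, might look like this. The user record, password, and device key are made-up examples, not any particular VPN product's scheme:

```python
# Sketch of "strong authentication": a stored password verifier
# combined with a cryptographic challenge-response, so neither a
# stolen password nor a replayed response suffices on its own.
# The user record and keys are made-up example values.
import hashlib
import hmac
import os

users = {"alice": {
    "salt": b"example-salt",
    "pw_hash": hashlib.pbkdf2_hmac("sha256", b"correct horse",
                                   b"example-salt", 100_000),
    "device_key": b"shared-secret-on-alices-token",
}}

def authenticate(name, password, respond):
    rec = users[name]
    # factor 1: password, checked against a salted PBKDF2 hash
    guess = hashlib.pbkdf2_hmac("sha256", password, rec["salt"], 100_000)
    if not hmac.compare_digest(guess, rec["pw_hash"]):
        return False
    # factor 2: fresh challenge answered with the key on the user's device
    challenge = os.urandom(16)
    expected = hmac.new(rec["device_key"], challenge, "sha256").digest()
    return hmac.compare_digest(respond(challenge), expected)

# the "device" computes its response with the shared key
token = lambda c: hmac.new(b"shared-secret-on-alices-token", c, "sha256").digest()
print(authenticate("alice", b"correct horse", token))
```

The random challenge is what makes the second factor resistant to replay: an eavesdropper who records one response cannot reuse it for the next login.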

Trusted delivery networks

Trusted VPNs do not use cryptographic tunneling, and instead rely on the security of a single provider's network to protect the traffic. In a sense, they elaborate on traditional network- and system-administration work.

Multi-Protocol Label Switching (MPLS) is often used to overlay VPNs, often with quality-of-service control over a trusted delivery network.

Layer 2 Tunneling Protocol (L2TP) is a standards-based replacement for, and a compromise taking the good features from, two proprietary VPN protocols: Cisco's Layer 2 Forwarding (L2F) (obsolete as of 2009) and Microsoft's Point-to-Point Tunneling Protocol (PPTP).

Security mechanisms

Secure VPNs use cryptographic tunneling protocols to provide the intended confidentiality (blocking intercept and thus packet sniffing), sender authentication (blocking identity spoofing), and message integrity (blocking message alteration) to achieve privacy.

Secure VPN protocols include the following:

IPsec (Internet Protocol Security) - A standards-based security protocol developed originally for IPv6, where support is mandatory, but also widely used with IPv4.

Transport Layer Security (SSL/TLS) is used either for tunneling an entire network's traffic (SSL VPN), as in the OpenVPN project, or for securing individual connections. SSL has been used by a number of vendors as the foundation for remote-access VPN capabilities. A practical advantage of an SSL VPN is that it can be accessed from locations that restrict external access to SSL-based e-commerce websites, without IPsec implementations. SSL-based VPNs may be vulnerable to denial-of-service attacks mounted against their TCP connections, because the latter are inherently unauthenticated.

DTLS is used by Cisco for a next-generation VPN product called Cisco AnyConnect VPN. DTLS solves the issues found when tunneling TCP over TCP, as is the case with SSL/TLS.

Secure Socket Tunneling Protocol (SSTP) by Microsoft introduced in Windows Server 2008 and Windows Vista Service Pack 1. SSTP tunnels Point-to-Point Protocol (PPP) or L2TP traffic through an SSL 3.0 channel.


L2TPv3 (Layer 2 Tunneling Protocol version 3), a new release of L2TP.

MPVPN (Multi Path Virtual Private Network). Ragula Systems Development Company owns the registered trademark "MPVPN".

Cisco VPN, a proprietary VPN used by many Cisco hardware devices. Proprietary clients exist for all platforms; open-source clients also exist.

SSH VPN - OpenSSH offers VPN tunneling to secure remote connections to a network (or inter-network links). This feature (option -w) should not be confused with port forwarding (option -L). The OpenSSH server provides only a limited number of concurrent tunnels, and the VPN feature itself does not support personal authentication.
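The distinction between tunneling and port forwarding is easier to see in code. Port forwarding (what -L does) relays individual TCP connections rather than carrying whole IP packets. The following Python relay is a rough sketch of that idea only, not OpenSSH's implementation, and it omits the encrypted channel SSH provides:

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes, then close dst."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    finally:
        dst.close()

def forward_port(listen_port: int, target_host: str, target_port: int) -> None:
    """Accept local connections and relay each one to the target host,
    which is the job 'ssh -L' performs over its encrypted channel."""
    with socket.socket() as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("127.0.0.1", listen_port))
        server.listen()
        while True:
            client, _ = server.accept()
            upstream = socket.create_connection((target_host, target_port))
            # one thread per direction keeps the relay full-duplex
            threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
```

A true VPN tunnel (option -w) instead creates a virtual network interface and moves raw IP packets, so every protocol works through it, not just individually forwarded TCP ports.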

Security and mobility

Mobile VPNs apply standards-based authentication and encryption technologies to secure communications with mobile devices and to protect networks from unauthorized users. Designed for wireless environments, mobile VPNs provide an access solution for mobile users who require secure access to information and applications over a variety of wired and wireless networks. Mobile VPNs allow users to roam seamlessly across IP-based networks and in and out of wireless-coverage areas without losing application sessions or dropping the secure VPN session. For instance, highway patrol officers require access to mission-critical applications as they travel between different subnets of a mobile network, much as a cellular radio has to hand off its link to repeaters at different cell towers. The Host Identity Protocol (HIP), under study by the Internet Engineering Task Force, is designed to support mobility of hosts by separating the role of IP addresses for host identification from their locator functionality in an IP network. With HIP, a mobile host maintains its logical connections, established via the host identity identifier, while associating with different IP addresses when roaming between access networks.

6. CAN:

Campus area network is a computer network that interconnects local area networks throughout a limited geographical area, such as a university campus, a corporate campus, or a military base. It could be considered a metropolitan area network that is specific to a campus setting. A campus area network is, therefore, larger than a local area network but smaller than a wide area network. The term is sometimes used to refer to university campuses, while the term corporate area network is used to refer to corporate campuses instead. Although not considered a wide area network, a CAN extends the reach of each local area network within the campus area of an organization. In a CAN, the buildings of a university or corporate campus are interconnected using the same types of hardware and networking technologies that one would use in a LAN. In addition, all of the components, including switches, routers, and cabling, as well as wireless connection points, are owned and maintained by the organization.


7. SAN:

A storage area network (SAN) is an architecture to attach remote computer storage devices (such as disk arrays, tape libraries, and optical jukeboxes) to servers in such a way that the devices appear as locally attached to the operating system. Although the cost and complexity of SANs are dropping, they are uncommon outside larger enterprises. Network attached storage (NAS), in contrast to SAN, uses file-based protocols such as NFS or SMB/CIFS where it is clear that the storage is remote, and computers request a portion of an abstract file rather than a disk block.

INTERNETWORKING

Internetworking involves connecting two or more computer networks via gateways using a common routing technology. The result is called an internetwork (often shortened to internet). The most notable example of internetworking is the Internet (often, but not always, capitalized), a network of networks based on many underlying hardware technologies, but unified by an internetworking protocol standard, the Internet Protocol Suite (TCP/IP).

The network elements used to connect individual networks are known as routers, but were originally called gateways, a term that was deprecated in this context, due to confusion with functionally different devices using the same name. The interconnection of networks with bridges (link layer devices) is sometimes incorrectly termed "internetworking", but the resulting system is simply a larger, single subnetwork, and no internetworking protocol (such as IP) is required to traverse it. However, a single computer network may be converted into an internetwork by dividing the network into segments and then adding routers between the segments.

The original term for an internetwork was catenet. Internetworking started as a way to connect disparate types of networking technology, but it became widespread through the developing need to connect two or more local area networks via some sort of wide area network. The definition now includes the connection of other types of computer networks such as personal area networks. The Internet Protocol is designed to provide an unreliable (i.e., not guaranteed) packet service across the network. The architecture avoids intermediate network elements maintaining any state of the network. Instead, this function is assigned to the endpoints of each communication session. To transfer data reliably, applications must utilize an appropriate Transport Layer protocol, such as Transmission Control Protocol (TCP), which provides a reliable stream. Some applications use a simpler, connectionless transport protocol, User Datagram Protocol (UDP), for tasks which do not require reliable delivery of data or that require real-time service, such as video streaming.
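The best-effort nature of UDP can be seen directly with Python sockets. On the loopback interface the round trip below succeeds, but nothing in the protocol guarantees delivery on a real network; that guarantee is exactly what TCP adds on top:

```python
import socket

def udp_round_trip(payload: bytes) -> bytes:
    """Push one datagram between two local UDP sockets. Delivery is
    best-effort: on a real network this packet could silently vanish,
    and no retransmission or acknowledgement would occur."""
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))            # kernel picks a free port
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sender.sendto(payload, receiver.getsockname())
        data, _addr = receiver.recvfrom(65535)  # one datagram, whole or not at all
        return data
    finally:
        sender.close()
        receiver.close()
```

Note there is no connection setup and no acknowledgement: the sender fires the datagram and moves on, which is why UDP suits real-time traffic that cannot wait for retransmissions.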


Networking models

Two architectural models are commonly used to describe the protocols and methods used in internetworking. The Open Systems Interconnection (OSI) reference model was developed under the auspices of the International Organization for Standardization (ISO) and provides a rigorous description for layering protocol functions from the underlying hardware to the software interface concepts in user applications. Internetworking is implemented in Layer 3 (Network Layer) of the model.
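A rough sketch of the layering idea, placing a few familiar protocols at simplified TCP/IP layers; internetworking happens at the Internet layer, which corresponds to OSI Layer 3. The placement table is a teaching simplification, not a normative mapping:

```python
# Simplified placement of well-known protocols in the TCP/IP model.
TCP_IP_LAYERS = {
    "Link": ("Ethernet", "Wi-Fi"),
    "Internet": ("IP", "ICMP"),          # internetworking happens here (OSI Layer 3)
    "Transport": ("TCP", "UDP"),
    "Application": ("HTTP", "SMTP", "FTP", "DNS"),
}

def layer_of(protocol: str) -> str:
    """Return the TCP/IP layer a protocol belongs to in this sketch."""
    for layer, protocols in TCP_IP_LAYERS.items():
        if protocol in protocols:
            return layer
    raise KeyError(protocol)
```

So layer_of("IP") gives "Internet": routers operate at that layer, while the bridges mentioned earlier operate a layer below, at "Link".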

The Internet Protocol Suite, also called the TCP/IP model, was not designed to conform to the OSI model and does not refer to it in any of the normative specifications (Requests for Comment) and Internet standards. Despite its similar appearance as a layered model, it uses a much less rigorous, loosely defined architecture that concerns itself only with networking aspects. It does not discuss hardware-specific low-level interfaces, and assumes availability of a Link Layer interface to the local network link to which the host is connected. Internetworking is facilitated by the protocols of its Internet Layer. In modern practice, interconnected networks use the Internet Protocol. There are at least three variants of internetworks, depending on who administers and who participates in them:

Intranet
Extranet
Internet

Intranets and extranets may or may not have connections to the Internet. If connected to the Internet, the intranet or extranet is normally protected from being accessed from the Internet without proper authorization. The Internet is not considered to be a part of the intranet or extranet, although it may serve as a portal for access to portions of an extranet.

INTRANET

An intranet is a private computer network that uses Internet technologies to securely share any part of an organization's information or operational systems with its employees. Sometimes the term refers only to the organization's internal website, but often it is a more extensive part of the organization's information technology infrastructure; private websites are an important component and focal point of internal communication and collaboration. An intranet is built from the same concepts and technologies used for the Internet, such as client-server computing and the Internet Protocol Suite (TCP/IP). Any of the well-known Internet protocols may be found in an intranet, such as HTTP (web services), SMTP (e-mail), and FTP (file transfer). Internet technologies are often deployed to provide modern interfaces to legacy information systems hosting corporate data.


An intranet can be understood as a private version of the Internet, or as a private extension of the Internet confined to an organization. The first intranet websites and home pages began to appear in organizations in 1990 - 1991. Although not officially noted, the term intranet first became commonplace inside early adopters, such as universities and technology corporations, in 1992. Intranets differ from extranets in that the former are generally restricted to employees of the organization while extranets may also be accessed by customers, suppliers, or other approved parties. Extranets extend a private network onto the Internet with special provisions for access, authorization and authentication (see also AAA protocol).

An organization's intranet does not necessarily have to provide access to the Internet. When such access is provided it is usually through a network gateway with a firewall, shielding the intranet from unauthorized external access. The gateway often also implements user authentication, encryption of messages, and virtual private network (VPN) connectivity for off-site employees to access company information, computing resources and internal communications. Increasingly, intranets are being used to deliver tools and applications, e.g., collaboration (to facilitate working in groups and teleconferencing) or sophisticated corporate directories, sales and customer relationship management tools, project management etc., to advance productivity.

Intranets are also being used as corporate culture-change platforms. For example, large numbers of employees discussing key issues in an intranet forum application could lead to new ideas in management, productivity, quality, and other corporate issues. In large intranets, website traffic is often similar to public website traffic and can be better understood by using web metrics software to track overall activity. User surveys also improve intranet website effectiveness. Larger businesses allow users within their intranet to access the public Internet through firewall servers, which have the ability to screen messages coming and going, keeping security intact.

When part of an intranet is made accessible to customers and others outside the business, that part becomes part of an extranet. Businesses can send private messages through the public network, using special encryption/decryption and other security safeguards to connect one part of their intranet to another. Intranet user-experience, editorial, and technology teams work together to produce in-house sites. Most commonly, intranets are managed by the communications, HR or CIO departments of large organizations, or some combination of these.

Because of the scope and variety of content and the number of system interfaces, intranets of many organizations are much more complex than their respective public websites. Intranets and their use are growing rapidly. According to the Intranet design annual 2007 from Nielsen Norman Group, the number of pages on participants' intranets averaged 200,000 over the years 2001 to 2003 and has grown to an average of 6 million pages over 2005–2007.


Benefits of intranets

Workforce productivity: Intranets can help users to locate and view information faster and use applications relevant to their roles and responsibilities. With the help of a web browser interface, users can access data held in any database the organization wants to make available, anytime and - subject to security provisions - from anywhere within the company's workstations, increasing employees' ability to perform their jobs faster, more accurately, and with confidence that they have the right information. It also helps to improve the services provided to users.

Time: With intranets, organizations can make more information available to employees on a "pull" basis (i.e., employees can link to relevant information at a time which suits them) rather than being deluged indiscriminately by emails.

Communication: Intranets can serve as powerful tools for communication within an organization, vertically and horizontally. From a communications standpoint, intranets are useful to communicate strategic initiatives that have a global reach throughout the organization. The type of information that can easily be conveyed is the purpose of the initiative and what the initiative is aiming to achieve, who is driving the initiative, results achieved to date, and who to speak to for more information. By providing this information on the intranet, staff have the opportunity to keep up-to-date with the strategic focus of the organization. Some examples of communication tools would be chat, email, and blogs. A real-world example of an intranet helping a company communicate comes from Nestle, which had a number of food processing plants in Scandinavia whose central support system had to deal with a number of queries every day (McGovern, Gerry). When Nestle decided to invest in an intranet, they quickly realized the savings. McGovern says the savings from the reduction in query calls was substantially greater than the investment in the intranet.

Web publishing allows 'cumbersome' corporate knowledge to be maintained and easily accessed throughout the company using hypermedia and Web technologies. Examples include employee manuals, benefits documents, company policies, business standards, newsfeeds, and even training, all of which can be accessed using common Internet standards (Acrobat files, Flash files, CGI applications). Because each business unit can update the online copy of a document, the most recent version is always available to employees using the intranet.

Business operations and management: Intranets are also being used as a platform for developing and deploying applications to support business operations and decisions across the internetworked enterprise.

Cost-effective: Users can view information and data via web browser rather than maintaining physical documents such as procedure manuals, internal phone lists and requisition forms. This can potentially save the business money on printing and duplicating documents, reduce document maintenance overhead, and benefit the environment. "PeopleSoft, a large software company, has derived significant cost savings by shifting HR processes to the intranet". Gerry McGovern goes on to say the manual cost of enrolling in benefits was found to be USD109.48 per enrollment. "Shifting this process to the intranet reduced the cost per enrollment to $21.79; a saving of 80 percent". PeopleSoft also saved money when they received requests for mailing address changes. "For an individual to request a change to their mailing address, the manual cost was USD17.77. The intranet reduced this cost to USD4.87, a saving of 73 percent". PeopleSoft was just one of the many companies that saved money by using an intranet. Another company that saved a lot of money on expense reports was Cisco. "In 1996, Cisco processed 54,000 reports and the amount of dollars processed was USD19 million".

Promote common corporate culture: Every user is viewing the same information within the Intranet.

Enhance Collaboration: With information easily accessible by all authorised users, teamwork is enabled.

Cross-platform Capability: Standards-compliant web browsers are available for Windows, Mac, and UNIX.

Built for One Audience: Many companies dictate computer specifications, which, in turn, may allow Intranet developers to write applications that only have to work on one browser (no cross-browser compatibility issues).

Knowledge of your Audience: Being able to specifically address your "viewer" is a great advantage. Since Intranets are user-specific (requiring database/network authentication prior to access), you know exactly who you are interfacing with. So, you can personalize your Intranet based on role (job title, department) or individual ("Congratulations Jane, on your 3rd year with our company!").

Immediate Updates: When dealing with the public in any capacity, laws, specifications, and parameters can change. With an Intranet providing your audience with "live" changes, they are never out of date, which can limit a company's liability.

Supports a distributed computing architecture: The intranet can also be linked to a company’s management information system, for example a time keeping system.

Planning and creating an intranet

Most organizations devote considerable resources into the planning and implementation of their intranet as it is of strategic importance to the organization's success. Some of the planning would include topics such as:

The purpose and goals of the intranet
Persons or departments responsible for implementation and management
Functional plans, information architecture, page layouts, design
Implementation schedules and phase-out of existing systems
Defining and implementing security of the intranet
How to ensure it is within legal boundaries and other constraints
Level of interactivity (e.g. wikis, on-line forms) desired
Is the input of new data and updating of existing data to be centrally controlled or devolved?


These are in addition to the hardware and software decisions (like content management systems), participation issues (like good taste, harassment, confidentiality), and features to be supported.

The actual implementation would include steps such as:

Securing senior management support and funding
Business requirements analysis
Setting up web server access using a TCP/IP network
Installing required user applications on computers
Creation of document framework for the content to be hosted
User involvement in testing and promoting use of intranet
Ongoing measurement and evaluation, including through benchmarking against other intranets

Content is King: A successful Intranet project engages its viewers and provides them with immense corporate value by:

Feeding the Intranet: Key personnel must be assigned and committed to feeding Intranet consumers. The alternative is for your project to become the "yellow-pages" (a tool that is used only as a last resort).

Keep it current: Information that is current, relevant, informative, and useful to the end-user is the only way to keep them coming back for more.

Interact or "Listen": Allow your users to create content. Social networking must be an integral part of any Intranet project, if a company is serious about providing information to and receiving information from their employees.

Feedback: Allow a specific forum for users to tell you what they want and what they do not like.

Act on Feedback: Your users of the Intranet are typically the employees of the company with their finger on the pulse of your industry. Those that are in the trenches on a daily basis will be able to tell "corporate" what trends are happening in the marketplace before any news source. This two-way communication is critical for any successful Intranet. Company executives must read the input and create responses based on the company's direction. Otherwise, what is the point of any employee taking the time to respond? If an employee submits their opinion or their observation, they need to feel that they have been heard. This is accomplished by:

Require management to review intranet posts on a daily basis and respond to the poster. Let them know that their post has already been addressed, is being reviewed, or is being referred to a department head. This assures the poster that their post has been read and is being acted upon accordingly. If they do not receive feedback, they will discontinue posting.

Broadcast feedback: The ideas that make it into the "this is a great idea" bucket, should become "news-worthy". This makes the poster feel useful and encourages others to follow.


Log feedback by users: This information can be useful when considering an applicant for promotion/transfer, etc. It will also let you know who is focused on the company's benefit and not just "filling a position".

Require executives to provide daily/weekly content: Everyone wants to hear from the person(s) they are working for. The Executive Team needs to lead the way in communicating the company's vision to their associates on a frequent basis (daily if possible; if not, no less than weekly).

EXTRANET

An extranet is a private network that uses Internet protocols, network connectivity, and possibly the public telecommunication system to securely share part of an organization's information or operations with suppliers, vendors, partners, customers or other businesses. An extranet can be viewed as part of a company's intranet that is extended to users outside the company, usually via the Internet. It has also been described as a "state of mind" in which the Internet is perceived as a way to do business with a selected set of other companies (business-to-business, B2B), in isolation from all other Internet users. In contrast, business-to-consumer (B2C) models involve known servers of one or more companies, communicating with previously unknown consumer users.

An extranet can be understood as an intranet mapped onto the public Internet or some other transmission system not accessible to the general public, but managed by more than one company's administrator(s). For example, military networks of different security levels may map onto a common military radio transmission system that never connects to the Internet. Any private network mapped onto a public one is a virtual private network (VPN), often using special security protocols.

For decades, institutions have been interconnecting to each other to create private networks for sharing information. One of the differences that characterizes an extranet, however, is that its interconnections are over a shared network rather than through dedicated physical lines. With respect to Internet Protocol networks, RFC 4364 states "If all the sites in a VPN are owned by the same enterprise, the VPN is a corporate intranet. If the various sites in a VPN are owned by different enterprises, the VPN is an extranet. A site can be in more than one VPN; e.g., in an intranet and several extranets. We regard both intranets and extranets as VPNs. In general, when we use the term VPN we will not be distinguishing between intranets and extranets. Even if this argument is valid, the term "extranet" is still applied and can be used to eliminate the use of the above description."

In the quote above from RFC 4364, the term "site" refers to a distinct networked environment. Two sites connected to each other across the public Internet backbone comprise a VPN. The term "site" does not mean "website." Thus, a small company in a single building can have an "intranet," but to have a VPN, they would need to provide tunneled access to that network for geographically distributed employees.

Similarly, for smaller, geographically united organizations, "extranet" is a useful term to describe selective access to intranet systems granted to suppliers, customers, or other companies. Such access does not involve tunneling, but rather simply an authentication mechanism to a web server. In this sense, an "extranet" designates the "private part" of a website, where "registered users" can navigate, enabled by authentication mechanisms on a "login page".
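In the simplest case, that "login page" boils down to checking credentials against a store of registered users. A toy Python sketch using HTTP Basic authentication follows; the user store is hypothetical, and a real extranet would hash passwords and require TLS:

```python
import base64

# Hypothetical credential store for the "private part" of the site.
REGISTERED_USERS = {"partner1": "s3cret"}

def is_authorized(authorization_header: str) -> bool:
    """Validate an HTTP Basic 'Authorization' header value against the
    registered users. Returns False for malformed or unknown credentials."""
    scheme, _, encoded = authorization_header.partition(" ")
    if scheme != "Basic":
        return False
    try:
        # the header carries base64("user:password")
        user, _, password = base64.b64decode(encoded).decode().partition(":")
    except Exception:
        return False
    return REGISTERED_USERS.get(user) == password
```

A web server would call such a check on every request for the protected area and answer 401 with a WWW-Authenticate challenge when it fails, which is what produces the familiar login prompt.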

An extranet requires network security. These can include firewalls, server management, the issuance and use of digital certificates or similar means of user authentication, encryption of messages, and the use of virtual private networks (VPNs) that tunnel through the public network. Many technical specifications describe methods of implementing extranets, but often never explicitly define an extranet. RFC 3547 presents requirements for remote access to extranets. RFC 2709 discusses extranet implementation using IPsec and advanced network address translation (NAT).

Industry uses

During the late 1990s and early 2000s, several industries started to use the term "extranet" to describe central repositories of shared data made accessible via the web only to authorized members of particular work groups, for example in Scandinavia, Germany and Belgium, among others. Some applications are offered on a Software as a Service (SaaS) basis by vendors functioning as Application service providers (ASPs). Specially secured extranets are used to provide virtual data room services to companies in several sectors (including law and accountancy).

For example, in the construction industry, project teams could login to and access a 'project extranet' to share drawings and documents, make comments, issue requests for information, etc. In 2003 in the United Kingdom, several of the leading vendors formed the Network of Construction Collaboration Technology Providers, or NCCTP, to promote the technologies and to establish data exchange standards between the different systems. The same type of construction-focused technologies have also been developed in the United States and Australia.

Advantages

Exchange large volumes of data using Electronic Data Interchange (EDI)
Share product catalogs exclusively with trade partners
Collaborate with other companies on joint development efforts
Jointly develop and use training programs with other companies
Provide or access services provided by one company to a group of other companies, such as an online banking application managed by one company on behalf of affiliated banks
Share news of common interest exclusively

Disadvantages

Extranets can be expensive to implement and maintain within an organization (e.g., hardware, software, employee training costs) if hosted internally instead of via an application service provider.


Security of extranets can be a concern when hosting valuable or proprietary information. System access must be carefully controlled to secure sensitive data.

INTERNET

The Internet is a standardized, global system of interconnected computer networks that connects millions of people. The system uses the Internet Protocol Suite (TCP/IP) standard rules for data representation, signaling, authentication, and error detection. It is a network of networks that consists of millions of private and public, academic, business, and government networks of local to global scope that are linked by copper wires, fiber-optic cables, wireless connections, and other technologies. The Internet carries a vast array of information resources and services, most notably the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support electronic mail, in addition to popular services such as video on demand, online shopping, online gaming, exchange of information from one-to-many or many-to-many by online chat, online social networking, online publishing, file transfer, file sharing, Voice over Internet Protocol (VoIP) telephony, and teleconferencing and telepresence for person-to-person communication via voice and video.

Fig- Visualization of the various routes through a portion of the Internet

The origins of the Internet reach back to the 1960s when the United States funded research projects of its military agencies to build robust, fault-tolerant and distributed computer networks. This research and a period of civilian funding of a new U.S. backbone by the National Science Foundation spawned worldwide participation in the development of new networking technologies, led to the commercialization of an international network in the mid 1990s, and resulted in the subsequent popularization of countless applications in virtually every aspect of modern human life. As of 2009, an estimated quarter of Earth's population uses the services of the Internet.

Terminology

The terms Internet and World Wide Web are often used in everyday speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global data communications system. It is a hardware and software infrastructure that provides connectivity between computers. In contrast, the Web is one of the services communicated via the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs. The term Internet has traditionally been treated as a proper noun and written with an initial capital letter, but there is a trend to regard it as a generic term or common noun and thus write it as "the internet", without the capital.
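That division of labor is visible in a URL itself: the Internet delivers packets to a host, while the Web layers naming and linking on top. Python's standard urllib can pull a URL apart; the example URL below is illustrative:

```python
from urllib.parse import urlparse

def split_url(url: str) -> tuple:
    """Split a URL into the Web-level scheme, the host reached over
    the Internet, and the path naming a resource on that host."""
    parts = urlparse(url)
    return (parts.scheme, parts.netloc, parts.path)
```

Here split_url("http://www.example.com/docs/index.html") yields ("http", "www.example.com", "/docs/index.html"): only the middle piece concerns the Internet's routing machinery; the rest is Web convention.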

History

Creation

The USSR's launch of Sputnik spurred the United States to create the Advanced Research Projects Agency, known as ARPA, in February 1958 to regain a technological lead. ARPA created the Information Processing Technology Office (IPTO) to further the research of the Semi Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. J. C. R. Licklider was selected to head the IPTO. Licklider moved from the Psycho-Acoustic Laboratory at Harvard University to MIT in 1950, after becoming interested in information technology. At MIT, he served on a committee that established Lincoln Laboratory and worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing.

At the IPTO, Licklider got Lawrence Roberts to start a project to make a network, and Roberts based the technology on the work of Paul Baran, who had written an exhaustive study for the U.S. Air Force that recommended packet switching (as opposed to circuit switching) to make a network highly robust and survivable. After much work, the first two nodes of what would become the ARPANET were interconnected between UCLA and SRI International (SRI) in Menlo Park, California, on October 29, 1969. The ARPANET was one of the "eve" networks of today's Internet. Following on from the demonstration that packet switching worked on the ARPANET, the British Post Office, Telenet, DATAPAC and TRANSPAC collaborated to create the first international packet-switched network service. In the UK, this was referred to as the International Packet Switched Service (IPSS), in 1978. The collection of X.25-based networks grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. The X.25 packet switching standard was developed in the CCITT (now called ITU-T) around 1976.


X.25 was independent of the TCP/IP protocols that arose from the experimental work of DARPA on the ARPANET, Packet Radio Net and Packet Satellite Net during the same time period. Vinton Cerf and Robert Kahn developed the first description of the TCP protocols during 1973 and published a paper on the subject in May 1974. Use of the term "Internet" to describe a single global TCP/IP network originated in December 1974 with the publication of RFC 675, the first full specification of TCP that was written by Vinton Cerf, Yogen Dalal and Carl Sunshine, then at Stanford University. During the next nine years, work proceeded to refine the protocols and to implement them on a wide range of operating systems. The first TCP/IP-based wide-area network was operational by January 1, 1983 when all hosts on the ARPANET were switched over from the older NCP protocols. In 1985, the United States' National Science Foundation (NSF) commissioned the construction of the NSFNET, a university 56 kilobit/second network backbone using computers called "fuzzballs" by their inventor, David L. Mills. The following year, NSF sponsored the conversion to a higher-speed 1.5 megabit/second network. A key decision to use the DARPA TCP/IP protocols was made by Dennis Jennings, then in charge of the Supercomputer program at NSF.

The opening of the network to commercial interests began in 1988. The US Federal Networking Council approved the interconnection of the NSFNET to the commercial MCI Mail system in that year, and the link was made in the summer of 1989. Other commercial e-mail services were soon connected, including OnTyme, Telemail and CompuServe. In that same year, three commercial Internet service providers (ISPs) were created: UUNET, PSINet and CERFNET. Important separate networks that offered gateways into, and later merged with, the Internet include Usenet and BITNET. Various other commercial and educational networks, such as Telenet, Tymnet, CompuServe and JANET, were interconnected with the growing Internet. Telenet (later called Sprintnet) was a large privately funded national computer network with free dial-up access in cities throughout the U.S. that had been in operation since the 1970s. This network was eventually interconnected with the others in the 1980s as the TCP/IP protocol became increasingly popular. The ability of TCP/IP to work over virtually any pre-existing communication network allowed for great ease of growth, although the rapid growth of the Internet was due primarily to the availability of an array of standardized commercial routers from many companies, the availability of commercial Ethernet equipment for local-area networking, and the widespread implementation and rigorous standardization of TCP/IP on UNIX and virtually every other common operating system.


Growth

Fig- Graph of Internet users per 100 inhabitants between 1997 and 2007 by International Telecommunication Union

Although the basic applications and guidelines that make the Internet possible had existed for almost two decades, the network did not gain a public face until the 1990s. On 6 August 1991, CERN, a pan-European organisation for particle research, publicized the new World Wide Web project. The Web had been invented by English scientist Tim Berners-Lee in 1989. An early popular web browser was ViolaWWW, patterned after HyperCard and built using the X Window System. It was eventually replaced in popularity by the Mosaic web browser. In 1993, the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic, technical Internet. By 1996 usage of the word Internet had become commonplace, and consequently, so had its use as a synecdoche in reference to the World Wide Web.

Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks, such as FidoNet, have remained separate). During the 1990s, it was estimated that the Internet grew by 100 percent per year, with a brief period of explosive growth in 1996 and 1997. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as to the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. Using various statistics, Advanced Micro Devices estimated the population of Internet users to be 1.5 billion as of January 2009.

Technology

The complex communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet and provides the foundation for its scalability and success. Responsibility for the architectural design of the Internet's software systems has been delegated to the Internet Engineering Task Force (IETF). The IETF conducts standards-setting working groups, open to any individual, on the various aspects of Internet architecture. The resulting discussions and final standards are published in a series of publications, each called a Request for Comments (RFC), freely available on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards.

These standards describe a framework known as the Internet Protocol Suite, a model architecture that divides methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the Application Layer, the space for the application-specific networking methods used in software applications, e.g., a web browser program; just below it is the Transport Layer, which connects applications on different hosts via the network (e.g., the client-server model) with appropriate data exchange methods. Underlying these layers are the actual networking technologies, consisting of two further layers. The Internet Layer enables computers to identify and locate each other via Internet Protocol (IP) addresses, and allows them to connect to one another via intermediate (transit) networks. Lastly, at the bottom of the architecture is a software layer that provides connectivity between hosts on the same local network link (therefore called the Link Layer), such as a local area network (LAN) or a dial-up connection. The model, also known as TCP/IP, is designed to be independent of the underlying hardware, which the model therefore does not concern itself with in any detail. Other models have been developed, such as the Open Systems Interconnection (OSI) model, and although they are not compatible in the details of description or implementation, many similarities exist, and the TCP/IP protocols are usually included in discussions of OSI networking.
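This layering can be seen in miniature with a loopback socket. The following sketch, using only Python's standard library, treats our tiny echo exchange as the application layer riding on the transport layer (TCP, via SOCK_STREAM), with the internet and link layers handled by the operating system; the message and port choice are arbitrary.

```python
import socket
import threading

def run_server(server_sock):
    conn, _ = server_sock.accept()      # transport layer: accept a TCP connection
    with conn:
        data = conn.recv(1024)          # application data arrives as bytes
        conn.sendall(b"ECHO " + data)   # application layer: our tiny echo protocol

# Bind to the loopback interface; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=run_server, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))     # internet layer: an IP address plus a port
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply.decode())  # ECHO hello
```

Neither endpoint needs to know whether the link layer underneath is Ethernet, Wi-Fi or, as here, the loopback device, which is precisely the hardware independence the model is designed for.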

The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems (IP addresses) for computers on the Internet. IP enables internetworking and essentially establishes the Internet itself. IP Version 4 (IPv4) is the initial version used on the first generation of today's Internet and is still in dominant use. It was designed to address up to ~4.3 billion (2^32) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which is estimated to enter its final stage in approximately 2011. A new protocol version, IPv6, was developed which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 is currently in its commercial deployment phase around the world, and Internet address registries (RIRs) have begun to urge all resource managers to plan rapid adoption and conversion.
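The size of the IPv4 address space, and the parsing of an individual address, can be checked with Python's standard ipaddress module; the address below is drawn from the 192.0.2.0/24 documentation range rather than any real host.

```python
import ipaddress

# An IPv4 address is 32 bits wide, giving 2**32 (about 4.3 billion)
# possible addresses in total.
ipv4_total = 2 ** 32
print(ipv4_total)                         # 4294967296

# The ipaddress module parses dotted-quad notation into its numeric form.
addr = ipaddress.ip_address("192.0.2.1")  # documentation-range address
print(addr.version, int(addr))            # 4 3221225985
```

The fixed 32-bit width is exactly why exhaustion became inevitable once the number of connected hosts approached billions.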

IPv6 is not interoperable with IPv4. It essentially establishes a "parallel" version of the Internet not directly accessible with IPv4 software. This means that software upgrades or translator facilities are necessary for every networking device that needs to communicate on the IPv6 Internet. Most modern computer operating systems already support both versions of the Internet Protocol. Network infrastructures, however, are still lagging in this development. Aside from the complex physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements) and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is defined by its interconnections and routing policies.
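A short sketch of the two address formats, again using the standard ipaddress module with documentation-range addresses: IPv6 addresses are 128 bits written in hexadecimal groups, and one translation mechanism embeds an IPv4 address in the special ::ffff:0:0/96 range, which the module exposes directly.

```python
import ipaddress

# An IPv6 address in compressed "::" notation, and its full 8-group form.
v6 = ipaddress.IPv6Address("2001:db8::1")   # documentation-range address
print(v6.exploded)   # 2001:0db8:0000:0000:0000:0000:0000:0001

# IPv4-mapped IPv6 addresses carry an IPv4 address inside IPv6 notation;
# ipaddress recovers the embedded IPv4 address.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
print(mapped.ipv4_mapped)   # 192.0.2.1
```

Mapped addresses are one of the translator facilities mentioned above: they let dual-stack software represent IPv4 peers inside IPv6 data structures, even though the two protocols remain non-interoperable on the wire.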

Modern usage

Structure

The Internet and its structure have been studied extensively. For example, it has been determined that both the Internet IP routing structure and the hypertext links of the World Wide Web are examples of scale-free networks. Similar to the way the commercial Internet providers connect via Internet exchange points, research networks tend to interconnect into large subnetworks such as GEANT, GLORIAD, Internet2 (successor of the Abilene Network), and the UK's national research and education network JANET. These in turn are built around smaller networks (see also the list of academic computer network organizations). According to a June 2007 article in Discover magazine, the combined weight of all the electrons moved within the Internet in a day is 0.2 millionths of an ounce. Others have estimated this at nearer 2 ounces (50 grams). Computer network diagrams often represent the Internet using a cloud symbol from which network communications pass in and out.

Many computer scientists describe the Internet as a "prime example of a large-scale, highly engineered, yet highly complex system". The Internet is extremely heterogeneous; for instance, data transfer rates and physical characteristics of connections vary widely. The Internet exhibits "emergent phenomena" that depend on its large-scale organization. For example, data transfer rates exhibit temporal self-similarity. Further adding to the complexity of the Internet is the ability of more than one computer to use the Internet through only one node, thus creating the possibility for a very deep and hierarchical sub-network that can theoretically be extended infinitely (disregarding the programmatic limitations of the IPv4 protocol). Principles of this architecture date back to the 1960s and it might not be a solution best suited to modern needs. Thus, the possibility of developing alternative structures is currently being looked into.

ICANN

Fig- ICANN headquarters in Marina del Rey, California, United States

The Internet Corporation for Assigned Names and Numbers (ICANN) is the authority that coordinates the assignment of unique identifiers on the Internet, including domain names, Internet Protocol (IP) addresses, and protocol port and parameter numbers. A globally unified name space (i.e., a system of names in which there is at most one holder for each possible name) is essential for the Internet to function. ICANN is headquartered in Marina del Rey, California, but is overseen by an international board of directors drawn from across the Internet technical, business, academic, and non-commercial communities. The US government continues to have the primary role in approving changes to the root zone file that lies at the heart of the domain name system. Because the Internet is a distributed network comprising many voluntarily interconnected networks, the Internet has no governing body. ICANN's role in coordinating the assignment of unique identifiers distinguishes it as perhaps the only central coordinating body on the global Internet, but the scope of its authority extends only to the Internet's systems of domain names, IP addresses, protocol ports and parameter numbers. On November 16, 2005, the World Summit on the Information Society, held in Tunis, established the Internet Governance Forum (IGF) to discuss Internet-related issues.

Workplace- The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections and Web applications.

Mobile devices

The Internet can now be accessed virtually anywhere by numerous means. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet from anywhere there is a cellular network supporting that device's technology. Within the limitations imposed by the small screen and other limited facilities of such a pocket-sized device, all the services of the Internet, including email and web browsing, may be available in this way. Service providers may restrict the range of these services, and charges for data access may be significant compared to home usage.

Market

The Internet has also become a large market for companies; some of the biggest companies today have grown by taking advantage of the efficient nature of low-cost advertising and commerce through the Internet, also known as e-commerce. It is the fastest way to spread information to a vast number of people simultaneously. The Internet has also revolutionized shopping: for example, a person can order a CD online and receive it in the mail within a couple of days, or download it directly in some cases. The Internet has also greatly facilitated personalized marketing, which allows a company to market a product to a specific person or group of people better than any other advertising medium. Examples of personalized marketing include online communities such as MySpace, Friendster, Orkut, Facebook and others, which thousands of Internet users join to advertise themselves and make friends online. Many of these users are young people ranging from 13 to 25 years old. When they advertise themselves they advertise interests and hobbies, which online marketing companies can use to gauge what those users will purchase online, and to advertise their own companies' products to those users.

Services

E-mail

The concept of sending electronic text messages between parties in a way analogous to mailing letters or memos predates the creation of the Internet. Today it can be important to distinguish between Internet and internal e-mail systems. Internet e-mail may travel and be stored unencrypted on many networks and machines outside both the sender's and the recipient's control. During this time it is quite possible for the content to be read and even tampered with by third parties, if anyone considers it important enough. Purely internal or intranet mail systems, where the information never leaves the corporate or organization's network, are much more secure, although in any organization there will be IT and other personnel whose job may involve monitoring, and occasionally accessing, the e-mail of other employees not addressed to them. Pictures, documents and other files can be sent as e-mail attachments, and e-mails can be Cc'ed to multiple addresses.
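Attachments and carbon copies are both expressed in the structure of the message itself. A minimal sketch using Python's standard email library, building (not sending) a message; the addresses and file contents below are placeholders, not real mailboxes or data.

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Cc"] = "carol@example.com, dave@example.com"   # carbon copies
msg["Subject"] = "Quarterly report"
msg.set_content("The report is attached.")

# An attachment travels as an additional MIME part alongside the text body.
msg.add_attachment(b"fake,csv,data\n",
                   maintype="application", subtype="octet-stream",
                   filename="report.csv")

print(msg["Cc"])
print(msg.is_multipart())   # True once an attachment is added
```

Handing the finished message to a mail server (for example via smtplib) is a separate step; everything above only assembles the MIME structure that then travels, often unencrypted, between mail systems.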

World Wide Web-

Many people use the terms Internet and World Wide Web (or just the Web) interchangeably, but, as discussed above, the two terms are not synonymous. The World Wide Web is a huge set of interlinked documents, images and other resources, linked by hyperlinks and URLs. These hyperlinks and URLs allow the web servers and other machines that store originals, and cached copies of, these resources to deliver them as required using HTTP (Hypertext Transfer Protocol). HTTP is only one of the communication protocols used on the Internet. Web services also use HTTP to allow software systems to communicate in order to share and exchange business logic and data.
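The HTTP exchange described above has a simple textual shape: a request line and headers go out, and a status line, headers and body come back. The sketch below builds both by hand so it runs without network access; the host and path are illustrative placeholders.

```python
# An HTTP/1.1 request: request line, headers, then a blank line.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"                      # blank line ends the header section
)

# A server's reply: status line, headers, blank line, then the body.
response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body>Hello</body></html>"
)

# Splitting on the blank line separates headers from the body.
status_line, rest = response.split("\r\n", 1)
headers, body = rest.split("\r\n\r\n", 1)
print(status_line)   # HTTP/1.1 200 OK
print(body)          # <html><body>Hello</body></html>
```

A browser, a web service client and a cache all speak this same format; the differences lie in what they do with the body once it arrives.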

Software products that can access the resources of the Web are correctly termed user agents. In normal use, web browsers, such as Internet Explorer, Firefox, Opera, Apple Safari, and Google Chrome, access web pages and allow users to navigate from one to another via hyperlinks. Web documents may contain almost any combination of computer data including graphics, sounds, text, video, multimedia and interactive content including games, office applications and scientific demonstrations. Through keyword-driven Internet research using search engines like Yahoo! and Google, millions of people worldwide have easy, instant access to a vast and diverse amount of online information. Compared to encyclopedias and traditional libraries, the World Wide Web has enabled a sudden and extreme decentralization of information and data.

Using the Web, it is also easier than ever before for individuals and organizations to publish ideas and information to an extremely large audience. Anyone can find ways to publish a web page, a blog or build a website for very little initial cost. Publishing and maintaining large, professional websites full of attractive, diverse and up-to-date information is still a difficult and expensive proposition, however. Many individuals and some companies and groups use "web logs" or blogs, which are largely used as easily updatable online diaries. Some commercial organizations encourage staff to fill them with advice on their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result. One example of this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public's interest in their work. Collections of personal web pages published by large service providers remain popular, and have become increasingly sophisticated. Whereas operations such as Angelfire and GeoCities have existed since the early days of the Web, newer offerings from, for example, Facebook and MySpace currently have large followings. These operations often brand themselves as social network services rather than simply as web page hosts.

Advertising on popular web pages can be lucrative, and e-commerce or the sale of products and services directly via the Web continues to grow. In the early days, web pages were usually created as sets of complete and isolated HTML text files stored on a web server. More recently, websites are more often created using content management or wiki software with, initially, very little content. Contributors to these systems, who may be paid staff, members of a club or other organization or members of the public, fill underlying databases with content using editing pages designed for that purpose, while casual visitors view and read this content in its final HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.

Remote access

The Internet allows computer users to connect to other computers and information stores easily, wherever they may be across the world. They may do this with or without the use of security, authentication and encryption technologies, depending on the requirements. This is encouraging new ways of working from home, collaboration and information sharing in many industries. An accountant sitting at home can audit the books of a company based in another country, on a server situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts could have been created by home-working bookkeepers in other remote locations, based on information e-mailed to them from offices all over the world. Some of these things were possible before the widespread use of the Internet, but the cost of private leased lines would have made many of them infeasible in practice. An office worker away from their desk, perhaps on the other side of the world on a business trip or a holiday, can open a remote desktop session into their normal office PC using a secure Virtual Private Network (VPN) connection via the Internet. This gives the worker complete access to all of their normal files and data, including e-mail and other applications, while away from the office. This concept is also referred to by some network security people as the Virtual Private Nightmare, because it extends the secure perimeter of a corporate network into its employees' homes.

Collaboration

The low cost and nearly instantaneous sharing of ideas, knowledge, and skills has made collaborative work dramatically easier. Not only can a group cheaply communicate and share ideas, but the wide reach of the Internet allows such groups to form easily in the first place. An example of this is the free software movement, which has produced, among other programs, Linux, Mozilla Firefox, and OpenOffice.org. Internet "chat", whether in the form of IRC chat rooms or channels or via instant messaging systems, allows colleagues to stay in touch in a very convenient way while working at their computers during the day. Messages can be exchanged even more quickly and conveniently than via e-mail. Extensions to these systems may allow files to be exchanged, "whiteboard" drawings to be shared, or voice and video contact between team members.

Version control systems allow collaborating teams to work on shared sets of documents without either accidentally overwriting each other's work or having members wait until they get "sent" documents to be able to make their contributions. Business and project teams can share calendars as well as documents and other information. Such collaboration occurs in a wide variety of areas including scientific research, software development, conference planning, political activism and creative writing. Social and political collaboration is also becoming more widespread as both Internet access and computer literacy grow. From the flash mob 'events' of the early 2000s to the use of social networking in the 2009 Iranian election protests, the Internet allows people to work together more effectively and in many more ways than was possible without it.

File sharing

A computer file can be e-mailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or FTP server for easy download by others. It can be put into a "shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. In any of these cases, access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. The origin and authenticity of the file received may be checked by digital signatures or by MD5 or other message digests. These simple features of the Internet, available on a worldwide basis, are changing the production, sale, and distribution of anything that can be reduced to a computer file for transmission. This includes all manner of print publications, software products, news, music, film, video, photography, graphics and the other arts. This in turn has caused seismic shifts in each of the existing industries that previously controlled the production and distribution of these products.
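Digest-based integrity checking works by having the receiver recompute the digest of the bytes it received and compare it with the published value. A sketch with Python's standard hashlib, using made-up file contents; MD5 is shown because the text mentions it, though it is no longer collision-resistant and SHA-256 is the modern choice.

```python
import hashlib

data = b"example file contents"        # stand-in for a downloaded file

# The publisher computes and publishes these digests alongside the file.
md5_digest = hashlib.md5(data).hexdigest()
sha_digest = hashlib.sha256(data).hexdigest()

# The receiver recomputes the digest of what actually arrived; any change
# to the data produces a completely different value.
assert hashlib.sha256(b"example file contents").hexdigest() == sha_digest
assert hashlib.sha256(b"tampered contents").hexdigest() != sha_digest

print(len(md5_digest), len(sha_digest))   # 32 64 (hex characters)
```

A digest alone only detects accidental or casual tampering; binding the digest to the publisher's identity is what digital signatures add on top.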

Streaming media

Many existing radio and television broadcasters provide Internet "feeds" of their live audio and video streams (for example, the BBC). They may also allow time-shift viewing or listening such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access on-line media in much the same way as was previously possible only with a television or radio receiver. The range of material is much wider, from pornography to highly specialized, technical webcasts. Podcasting is a variation on this theme, where—usually audio—material is downloaded and played back on a computer or shifted to a portable media player to be listened to on the move. These techniques using simple equipment allow anybody, with little censorship or licensing control, to broadcast audio-visual material worldwide.

Webcams can be seen as an even lower-budget extension of this phenomenon. While some webcams can give full-frame-rate video, the picture is usually either small or updates slowly. Internet users can watch animals around an African waterhole, ships in the Panama Canal, traffic at a local roundabout or monitor their own premises, live and in real time. Video chat rooms and video conferencing are also popular, with many uses being found for personal webcams, with and without two-way sound. YouTube, founded on 15 February 2005, is now the leading website for free streaming video, with a vast number of users. It uses a Flash-based web player to stream and show video files. Registered users may upload an unlimited amount of video and build their own personal profiles. YouTube claims that its users watch hundreds of millions, and upload hundreds of thousands, of videos daily.

Internet telephony

VoIP stands for Voice-over-Internet Protocol, referring to the protocol that underlies all Internet communication. The idea began in the early 1990s with walkie-talkie-like voice applications for personal computers. In recent years many VoIP systems have become as easy to use and as convenient as a normal telephone. The benefit is that, as the Internet carries the voice traffic, VoIP can be free or cost much less than a traditional telephone call, especially over long distances and especially for those with always-on Internet connections such as cable or ADSL. VoIP is maturing into a competitive alternative to traditional telephone service. Interoperability between different providers has improved and the ability to call or receive a call from a traditional telephone is available. Simple, inexpensive VoIP network adapters are available that eliminate the need for a personal computer.

Voice quality can still vary from call to call but is often equal to and can even exceed that of traditional calls. Remaining problems for VoIP include emergency telephone number dialling and reliability. Currently, a few VoIP providers provide an emergency service, but it is not universally available. Traditional phones are line-powered and operate during a power failure; VoIP does not do so without a backup power source for the phone equipment and the Internet access devices. VoIP has also become increasingly popular for gaming applications, as a form of communication between players. Popular VoIP clients for gaming include Ventrilo and Teamspeak. Wii, PlayStation 3, and Xbox 360 also offer VoIP chat features.

Accessibility

Language

The prevalent language for communication on the Internet is English. This may be a result of the origin of the Internet, as well as English's role as a lingua franca. It may also be related to the poor capability of early computers, largely originating in the United States, to handle characters other than those in the English variant of the Latin alphabet. After English (28.6% of Web visitors) the most requested languages on the World Wide Web are Chinese (20.3%), Spanish (8.2%), Japanese (5.9%), French and Portuguese (4.6%), German (4.1%), Arabic (2.6%), Russian (2.4%), and Korean (2.3%). By region, 41% of the world's Internet users are based in Asia, 25% in Europe, 16% in North America, 11% in Latin America and the Caribbean, 3% in Africa, 3% in the Middle East and 1% in Australia. The Internet's technologies have developed enough in recent years, especially in the use of Unicode, that good facilities are available for development and communication in most widely used languages. However, some glitches such as mojibake (incorrect display of foreign language characters, also known as kryakozyabry) still remain.
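Mojibake typically arises when bytes encoded in one character encoding are decoded with another. The classic case, sketched below in Python, is UTF-8 bytes read as a legacy single-byte codec such as Latin-1.

```python
text = "héllo"
utf8_bytes = text.encode("utf-8")        # 'é' becomes the two bytes C3 A9

# Decoding with the wrong codec maps each byte to its own character:
garbled = utf8_bytes.decode("latin-1")
print(garbled)                           # hÃ©llo  (mojibake)

# Decoding with the codec that was actually used recovers the text.
print(utf8_bytes.decode("utf-8"))        # héllo
```

Unicode does not remove the need to agree on an encoding; it just gives every language's characters a single, consistent code space once the encoding is known.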

Connectivity

Common methods of home access include dial-up, landline broadband (over coaxial cable, fiber optic or copper wires), Wi-Fi, satellite and 3G cell phones. Public places to use the Internet include libraries and Internet cafes, where computers with Internet connections are available. There are also Internet access points in many public places such as airport halls and coffee shops, in some cases just for brief use while standing. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels now also have public terminals, though these are usually fee-based. These terminals are widely used for purposes such as ticket booking, bank deposits and online payments.

Wi-Fi provides wireless access to computer networks, and therefore can do so to the Internet itself. Hotspots providing such access include Wi-Fi cafes, where would-be users need to bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A hotspot need not be limited to a confined location: a whole campus or park, or even an entire city, can be enabled. Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services covering large city areas are in place in London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh, allowing the Internet to be accessed from such places as a park bench.

Apart from Wi-Fi, there have been experiments with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular phone networks, and fixed wireless services. High-end mobile phones such as smartphones generally come with Internet access through the phone network. Web browsers such as Opera are available on these advanced handsets, which can also run a wide variety of other Internet software. More mobile phones have Internet access than PCs, though this is not as widely used.
An Internet access provider and protocol matrix differentiates the methods used to get online.

By region

Social impact

The Internet has made possible entirely new forms of social interaction, activities and organizing, thanks to its basic features such as widespread usability and access. Social networking websites such as Facebook and MySpace have created a new form of socialization and interaction. Users of these sites are able to add a wide variety of items to their personal pages, to indicate common interests, and to connect with others. It is also possible to find a large circle of existing acquaintances, especially if a site allows users to utilize their real names, and to allow communication among large existing groups of people. Sites like meetup.com exist to allow wider announcement of groups which may exist mainly for face-to-face meetings, but which may have a variety of minor interactions over their group's site at meetup.org, or other similar sites.

Digital natives

The first generation raised with widespread availability of Internet connectivity is now growing up, with consequences for privacy, identity, and copyright. These "digital natives" face a variety of concerns that were not present for prior generations.

Politics

In democratic societies, the Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States became famous for its ability to generate donations via the Internet. Many political groups use the Internet to achieve a whole new method of organizing, in order to carry out Internet activism. Some governments, such as those of Iran, North Korea, Myanmar, the People's Republic of China, and Saudi Arabia, restrict what people in their countries can access on the Internet, especially political and religious content. This is accomplished through software that filters domains and content so that they may not be easily accessed or obtained without elaborate circumvention.

In Norway, Denmark, Finland and Sweden, major Internet service providers have voluntarily (possibly to avoid such an arrangement being turned into law) agreed to restrict access to sites listed by police. While this list of forbidden URLs is only supposed to contain addresses of known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws making the possession or distribution of certain material, such as child pornography, illegal, but do not use filtering software. There are many free and commercially available software programs, called content-control software, with which a user can choose to block offensive websites on individual computers or networks, such as to limit a child's access to pornography or violence.

Leisure activities

The Internet has been a major source of leisure since before the World Wide Web, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much of the main traffic. Today, many Internet forums have sections devoted to games and funny videos; short cartoons in the form of Flash movies are also popular. Over 6 million people use blogs or message boards as a means of communication and for the sharing of ideas. The pornography and gambling industries have both taken full advantage of the World Wide Web, and often provide a significant source of advertising revenue for other websites. Although many governments have attempted to put restrictions on both industries' use of the Internet, this has generally failed to stop their widespread popularity.

One main area of leisure on the Internet is multiplayer gaming. This form of leisure creates communities, bringing together people of all ages and origins to enjoy the fast-paced world of multiplayer games, which range from MMORPGs and first-person shooters to role-playing games and online gambling. This has revolutionized the way many people interact and spend their free time on the Internet. While online gaming has existed since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer; non-subscribers were limited to certain games or certain types of game play.

Many people use the Internet to access and download music, movies and other works for their enjoyment and relaxation. As discussed above, there are paid and unpaid sources for all of these, using centralized servers and distributed peer-to-peer technologies. Some of these sources take more care over the original artists' rights and copyright laws than others.

Many use the World Wide Web to access news, weather and sports reports, to plan and book holidays, and to pursue their casual interests. People use chat, messaging and e-mail to make and stay in touch with friends worldwide, sometimes in the same way that some previously had pen pals. Social networking websites such as MySpace and Facebook also help people make and maintain contact for their enjoyment. The Internet has seen a growing number of Web desktops, where users can access their files, folders, and settings via the Internet. Cyberslacking has become a serious drain on corporate resources; the average UK employee spends 57 minutes a day surfing the Web at work, according to a study by Peninsula Business Services.

Functional relationship (network architecture)

Computer networks may be classified according to the functional relationships which exist among the elements of the network, e.g., Active Networking, Client-server and Peer-to-peer (workgroup) architecture.

1. Active networking- Active networking is a communication pattern that allows packets flowing through a telecommunications network to dynamically modify the operation of the network. Active network architecture is composed of execution environments (similar to a Unix shell, but able to execute active packets), a node operating system capable of supporting one or more execution environments, and active hardware capable of routing or switching as well as executing the code carried in active packets. This differs from the traditional network architecture, which seeks robustness and stability by removing complexity and the ability to change fundamental operation from the underlying network components. Network processors are one means of implementing active networking concepts. Active networks have also been implemented as overlay networks.

2. Client-server- Client-server computing or networking is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters, called clients. Clients and servers often operate over a computer network on separate hardware. A server machine is a high-performance host that runs one or more server programs which share its resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers, which await (listen for) incoming requests.

3. Peer-to-peer- A peer-to-peer distributed network architecture is composed of participants that make a portion of their resources (such as processing power, disk storage, and network bandwidth) available directly to their peers without intermediary network hosts or servers. Peers are both suppliers and consumers of resources, in contrast to the traditional client-server model, where only servers supply and clients consume. Peer-to-peer was popularized by file sharing systems like Napster. Peer-to-peer file sharing networks have inspired new structures and philosophies in other areas of human interaction; in such social contexts, peer-to-peer as a meme refers to the egalitarian social networking currently emerging throughout society, enabled by Internet technologies in general.
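The division of roles in the client-server model can be sketched with Python's standard socket library. This is a toy, single-request echo server on localhost; the message and the "echo" service are invented for illustration:

```python
import socket
import threading

def run_server(sock):
    """Handle exactly one client, then return."""
    conn, _ = sock.accept()             # await (listen for) an incoming request
    with conn:
        data = conn.recv(1024)          # the client's request
        conn.sendall(b"echo: " + data)  # share the server's "service" back

# Server side: bind to a free port and wait for requests in a thread.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=run_server, args=(server,))
t.start()

# Client side: the client initiates the session and drives the exchange.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply.decode())  # echo: hello
```

Note the asymmetry described above: the server blocks in accept(), passively waiting, while the client opens the connection and makes the request.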

Network topology

Network topology is the physical or logical arrangement and interconnections of the elements (links, nodes, etc.) of a computer network. A local area network (LAN) is one example of a network that exhibits both a physical topology and a logical topology. Any given node in the LAN has one or more links to one or more other nodes in the network and the mapping of these links and nodes in a graph results in a geometrical shape that may be used to describe the physical topology of the network. Likewise, the mapping of the data flows between the nodes in the network determines the logical topology of the network. The physical and logical topologies may or may not be identical in any particular network.

Any particular network topology is determined only by the graphical mapping of the configuration of physical and/or logical connections between nodes. The study of network topology uses graph theory. Distances between nodes, physical interconnections, transmission rates, and/or signal types may differ in two networks and yet their topologies may be identical.

Basic topology types

The study of network topology recognizes three basic topologies:

1. Bus topology
2. Star topology
3. Ring topology
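These topologies can be modeled as simple graphs. The sketch below uses plain Python sets of edges with invented node labels, and checks the degree properties that distinguish the shapes (a star's hub touches every spoke; every ring node has exactly two neighbors; a bus can be modeled by treating the shared cable itself as a vertex):

```python
def star(n):
    """Hub is node 0; spokes 1..n-1 each link only to the hub."""
    return {(0, i) for i in range(1, n)}

def ring(n):
    """Each node links to the next; the last wraps around to the first."""
    return {(i, (i + 1) % n) for i in range(n)}

def bus(n):
    """Model the shared cable as vertex 'B'; every machine taps onto it."""
    return {("B", i) for i in range(n)}

def degree(edges, v):
    """Number of links touching vertex v."""
    return sum(1 for e in edges if v in e)

five_star, five_ring = star(5), ring(5)
print(degree(five_star, 0))     # 4 -> the hub touches every spoke
print(degree(five_star, 3))     # 1 -> a spoke touches only the hub
print(degree(five_ring, 3))     # 2 -> each ring node has two neighbors
print(degree(bus(4), "B"))      # 4 -> every machine taps the one cable
```

Two networks with very different cable lengths or speeds would still produce identical edge sets here, which is exactly the point made above: topology depends only on the graph of connections.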

Classification of network topologies


There are also three basic categories of network topologies:

1. physical topologies
2. signal topologies
3. logical topologies

The terms signal topology and logical topology are often used interchangeably, though there is a subtle difference between the two.

Physical topologies

The mapping of the nodes of a network and the physical connections between them – i.e., the layout of wiring, cables, the locations of nodes, and the interconnections between the nodes and the cabling or wiring system.

Classification of physical topologies

Point-to-point

The simplest topology is a permanent link between two endpoints. Switched point-to-point topologies are the basic model of conventional telephony. The value of a permanent point-to-point network is guaranteed, or nearly guaranteed, communication between the two endpoints. The value of an on-demand point-to-point connection is proportional to the number of potential pairs of subscribers, as expressed by Metcalfe's Law.

Permanent (dedicated)

The easiest variation of the point-to-point topology to understand is a communications channel that appears, to the user, to be permanently associated with the two endpoints. A children's "tin-can telephone" is one example; a microphone wired to a single public address speaker is another. These are examples of physical dedicated channels. Within many switched telecommunications systems, it is possible to establish a permanent circuit. One example might be a telephone in the lobby of a public building which is programmed to ring only the number of a telephone dispatcher. "Nailing down" a switched connection saves the cost of running a physical circuit between the two points. The resources in such a connection can be released when no longer needed, as with, for example, a television circuit from a parade route back to the studio.

Switched

Using circuit-switching or packet-switching technologies, a point-to-point circuit can be set up dynamically and dropped when no longer needed. This is the basic mode of conventional telephony.


1. Bus

In local area networks where bus technology is used, each machine is connected to a single cable through some kind of connector. A terminator is required at each end of the bus cable to prevent the signal from reflecting back and forth along it. A signal from the source travels in both directions to all machines connected on the bus cable. Each machine compares the destination address (MAC address or IP address) in the data with its own address: if the addresses do not match, the machine ignores the data; if they match, the data is accepted. Since the bus topology consists of only one wire, it is rather inexpensive to implement compared with other topologies. However, the low cost of implementing the technology is offset by the high cost of managing the network. Additionally, since only one cable is used, it is a single point of failure: if the network cable breaks, the entire network goes down. All machines also share the bandwidth of that one cable, so throughput drops as traffic or the number of machines increases.
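The address-matching behavior described above can be sketched as a toy simulation. The machine "addresses" here are invented two-letter labels, not real MAC or IP addresses:

```python
# Machines attached to the shared bus cable, identified by made-up addresses.
machines = ["AA", "BB", "CC", "DD"]

def send_on_bus(dest, payload):
    """A frame placed on the bus reaches every attached machine;
    each one accepts it only if the destination matches its own address."""
    delivered = []
    for addr in machines:               # the signal reaches all taps
        if addr == dest:                # address match -> accept the data
            delivered.append((addr, payload))
        # otherwise the machine simply ignores the frame
    return delivered

print(send_on_bus("CC", "hi"))   # [('CC', 'hi')]
print(send_on_bus("ZZ", "hi"))   # []  -> no machine claims the frame
```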

Fig- network topology: BUS

Linear bus

The type of network topology in which all of the nodes of the network are connected to a common transmission medium which has exactly two endpoints (this is the 'bus', also commonly referred to as the backbone or trunk). All data transmitted between nodes in the network travels over this common transmission medium and can be received by all nodes virtually simultaneously (disregarding propagation delays).

Note: The two endpoints of the common transmission medium are normally terminated with a device called a terminator, which exhibits the characteristic impedance of the transmission medium and dissipates or absorbs the energy remaining in the signal, preventing it from being reflected back onto the transmission medium in the opposite direction, where it would interfere with and degrade the signals on the medium.

Distributed bus

The type of network topology in which all of the nodes of the network are connected to a common transmission medium which has more than two endpoints, created by adding branches to the main section of the transmission medium. The physical distributed bus topology functions in exactly the same fashion as the physical linear bus topology (i.e., all nodes share a common transmission medium).

Notes:
1.) All of the endpoints of the common transmission medium are normally terminated with a device called a 'terminator'.
2.) The physical linear bus topology is sometimes considered a special case of the physical distributed bus topology – i.e., a distributed bus with no branching segments.
3.) The physical distributed bus topology is sometimes incorrectly referred to as a physical tree topology. Although it resembles the tree topology, it differs in that there is no central node to which the other nodes are connected, this hierarchical functionality being replaced by the common bus.

2. Star

In local area networks where the star topology is used, each machine is connected to a central hub. In contrast to the bus topology, the star topology gives each machine on the network a point-to-point connection to the central hub. All of the traffic which traverses the network passes through the central hub, which acts as a signal booster or repeater, allowing the signal to travel greater distances. Because each machine connects directly to the hub, the star topology is considered the easiest topology to design and implement, and an advantage of the star topology is the simplicity of adding more machines. The primary disadvantage is that the hub is a single point of failure: if the hub fails, the entire network fails, because every machine on the network is connected to it.


Fig- network topology: STAR

Notes:
1.) A point-to-point link (described above) is sometimes categorized as a special instance of the physical star topology. The simplest network based upon the physical star topology would therefore consist of one node with a single point-to-point link to a second node, the choice of which node is the 'hub' and which the 'spoke' being arbitrary.
2.) After the special case of the point-to-point link, as in note 1, the next simplest network based upon the physical star topology consists of one central node – the 'hub' – with two separate point-to-point links to two peripheral nodes – the 'spokes'.
3.) Although most networks based upon the physical star topology are implemented using a special device such as a hub or switch as the central node, it is also possible to implement such a network using a computer or even a simple common connection point as the 'hub'. Because many illustrations of the physical star topology depict the central node as one of these special devices, this practice may lead to the misconception that a physical star network requires one, which is not true: a simple network of three computers connected as in note 2 also has the topology of a physical star.


4.) Star networks may also be described as either broadcast multi-access or non-broadcast multi-access (NBMA), depending on whether the technology of the network automatically propagates a signal at the hub to all spokes, or only addresses individual spokes with each communication.

Extended star

A type of network topology in which a network based upon the physical star topology has one or more repeaters between the central node (the 'hub' of the star) and the peripheral or 'spoke' nodes. The repeaters are used to extend the maximum transmission distance of the point-to-point links between the central node and the peripheral nodes beyond that supported by the transmitter power of the central node or by the standard upon which the physical layer of the network is based.

Note: If the repeaters in a network based upon the physical extended star topology are replaced with hubs or switches, a hybrid network topology is created that is referred to as a physical hierarchical star topology, although some texts make no distinction between the two topologies.

Distributed star

A type of network topology composed of individual networks based upon the physical star topology connected together in a linear fashion – i.e., 'daisy-chained' – with no central or top-level connection point (e.g., two or more 'stacked' hubs, along with their associated star-connected nodes or 'spokes').

3. Ring

Fig- Ring network topology

In local area networks where the ring topology is used, each computer is connected to the network in a closed loop or ring. Each machine or computer has a unique address that is used for identification purposes. The signal passes through each machine or computer connected to the ring in one direction. Ring topologies typically utilize a token passing scheme to control access to the network: only the machine holding the token can transmit at any given time. The machines or computers connected to the ring act as signal boosters or repeaters which strengthen the signals that traverse the network. The primary disadvantage of the ring topology is that the failure of one machine will cause the entire network to fail.
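The one-directional signal passing described above can be sketched as a toy simulation, with invented station names; real ring protocols such as Token Ring are of course far more involved:

```python
# Stations on the ring, in the order the signal travels.
stations = ["A", "B", "C", "D"]

def transmit(sender, dest):
    """Walk the ring one direction from the sender, each station
    repeating the frame onward, until it reaches the destination.
    Returns the hop count, or None if no station claims the frame."""
    i = stations.index(sender)
    hops = 0
    while True:
        i = (i + 1) % len(stations)   # signal moves to the next station
        hops += 1
        if stations[i] == dest:       # destination copies the frame
            return hops
        if stations[i] == sender:     # looped all the way back: no such node
            return None

print(transmit("A", "D"))  # 3  (A -> B -> C -> D)
print(transmit("C", "A"))  # 2  (C -> D -> A, wrapping around)
```

The second call shows why a ring halves the transmitter/receiver count relative to a two-way line: traffic "backwards" simply continues forward around the loop.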

4. Mesh

The value of a fully meshed network grows exponentially with the number of subscribers, because every possible group of endpoints – any two, up to and including all of them – can communicate; this relationship is expressed by Reed's Law.
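Under Reed's Law, the number of possible communicating groups among n endpoints – every subset of size two or more – is 2^n − n − 1, which is what drives the exponential growth in value:

```python
def reed_groups(n):
    """Number of subsets of n endpoints with at least two members:
    all 2**n subsets, minus the n singletons and the empty set."""
    return 2**n - n - 1

for n in (2, 5, 10):
    print(n, reed_groups(n))
# 2 1
# 5 26
# 10 1013
```

Even at ten endpoints there are already over a thousand potential groups, compared with only 45 pairwise links.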

Fig- Fully connected mesh topology

Fully connected

The type of network topology in which each of the nodes of the network is connected to each of the other nodes with a point-to-point link, making it possible for data to be transmitted simultaneously from any single node to all of the other nodes.

Note: The physical fully connected mesh topology is generally too costly and complex for practical networks, although it is used when only a small number of nodes need to be interconnected.


Fig- Partially connected mesh topology

Partially connected

The type of network topology in which some of the nodes of the network are connected to more than one other node with a point-to-point link, making it possible to take advantage of some of the redundancy provided by a physical fully connected mesh topology without the expense and complexity of a connection between every pair of nodes.

Note: In most practical networks based upon the physical partially connected mesh topology, all of the data transmitted between nodes takes the shortest path (or an approximation of it), except when a link fails or breaks, in which case the data takes an alternate path to the destination. This requires that the nodes of the network possess some type of logical 'routing' algorithm to determine the correct path to use at any particular time.

5. Tree


Fig- Tree network topology

Also known as a hierarchical network.

The type of network topology in which a central 'root' node (the top level of the hierarchy) is connected by a point-to-point link to each of one or more nodes one level lower in the hierarchy (the second level), while each of those second-level nodes is in turn connected by a point-to-point link to one or more nodes one level lower still (the third level), and so on. The top-level 'root' node is the only node that has no node above it in the hierarchy, and the hierarchy of the tree is symmetrical: each node in the network has a specific, fixed number of nodes connected to it at the next lower level, that number being referred to as the 'branching factor' of the hierarchical tree.

1.) A network based upon the physical hierarchical topology must have at least three levels in the hierarchy of the tree, since a network with a central 'root' node and only one hierarchical level below it would exhibit the physical topology of a star.
2.) A network based upon the physical hierarchical topology with a branching factor of 1 would be classified as a physical linear topology.
3.) The branching factor, f, is independent of the total number of nodes in the network. Therefore, if the nodes require ports for connection to other nodes, the number of ports per node can be kept low even when the total number of nodes is large: the cost of adding ports to each node depends only on the branching factor, which can be kept as low as required regardless of the total number of nodes.
4.) The total number of point-to-point links in a network based upon the physical hierarchical topology will be one less than the total number of nodes in the network.
5.) If the nodes in a network based upon the physical hierarchical topology are required to perform any processing on the data transmitted between nodes, the nodes at higher levels in the hierarchy will have to perform more processing operations on behalf of other nodes than the nodes lower in the hierarchy.
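The counting rules in notes 3 and 4 can be checked numerically for a tree with branching factor f and a given number of levels:

```python
def tree_nodes(f, levels):
    """Total nodes in a symmetrical tree with branching factor f:
    1 root + f second-level nodes + f**2 third-level nodes + ..."""
    return sum(f**i for i in range(levels))

def tree_links(f, levels):
    """Note 4: a tree always has exactly one link fewer than nodes."""
    return tree_nodes(f, levels) - 1

print(tree_nodes(2, 3))  # 7   (1 root + 2 + 4)
print(tree_links(2, 3))  # 6
print(tree_nodes(3, 3))  # 13  (1 root + 3 + 9)
```

Doubling the branching factor grows the node count rapidly while each node still needs only f + 1 ports (one uplink, f downlinks), which is the point of note 3.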

Signal topology

The mapping of the actual connections between the nodes of a network, as evidenced by the path that the signals take when propagating between the nodes.

Note: The term 'signal topology' is often used synonymously with the term 'logical topology'; however, some confusion may result from this practice, since by definition the term 'logical topology' refers to the apparent path that the data takes between nodes in a network, while the term 'signal topology' generally refers to the actual path that the signals (e.g., optical, electrical, electromagnetic) take when propagating between nodes.

Logical topology

The logical topology, in contrast to the "physical", is the way that the signals act on the network media, or the way that the data passes through the network from one device to the next without regard to the physical interconnection of the devices. A network's logical topology is not necessarily the same as its physical topology. For example, twisted pair Ethernet is a logical bus topology in a physical star topology layout. While IBM's Token Ring is a logical ring topology, it is physically set up in a star topology.

Classification of logical topologies

The logical classification of network topologies generally follows the same classifications as those in the physical classifications of network topologies, the path that the data takes between nodes being used to determine the topology as opposed to the actual physical connections being used to determine the topology.

Notes:
1.) Logical topologies are often closely associated with media access control (MAC) methods and protocols.
2.) Logical topologies are generally determined by network protocols rather than by the physical layout of cables, wires, and network devices or by the flow of the electrical signals, although in many cases the paths that the electrical signals take between nodes closely match the logical flow of data – hence the convention of using the terms 'logical topology' and 'signal topology' interchangeably.
3.) Logical topologies can be dynamically reconfigured by special types of equipment such as routers and switches.


Daisy chains

Except for star-based networks, the easiest way to add more computers into a network is by daisy-chaining, or connecting each computer in series to the next. If a message is intended for a computer partway down the line, each system bounces it along in sequence until it reaches the destination. A daisy-chained network can take two basic forms: linear and ring.

A linear topology puts a two-way link between one computer and the next. However, this was expensive in the early days of computing, since each computer (except for the ones at each end) required two receivers and two transmitters.

By connecting the computers at each end, a ring topology can be formed. An advantage of the ring is that the number of transmitters and receivers can be cut in half, since a message will eventually loop all of the way around. When a node sends a message, the message is processed by each computer in the ring. If a computer is not the destination node, it will pass the message to the next node, until the message arrives at its destination. If the message is not accepted by any node on the network, it will travel around the entire ring and return to the sender. This potentially results in a doubling of travel time for data.

Centralization

The star topology reduces the probability of a network failure by connecting all of the peripheral nodes (computers, etc.) to a central node. When the physical star topology is applied to a logical bus network such as Ethernet, this central node (traditionally a hub) rebroadcasts all transmissions received from any peripheral node to all peripheral nodes on the network, sometimes including the originating node. All peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central node only. The failure of a transmission line linking any peripheral node to the central node will result in the isolation of that peripheral node from all others, but the remaining peripheral nodes will be unaffected. However, the disadvantage is that the failure of the central node will cause the failure of all of the peripheral nodes also.

If the central node is passive, the originating node must be able to tolerate the reception of an echo of its own transmission, delayed by the two-way round trip transmission time (i.e. to and from the central node) plus any delay generated in the central node. An active star network has an active central node that usually has the means to prevent echo-related problems.

A tree topology (a.k.a. hierarchical topology) can be viewed as a collection of star networks arranged in a hierarchy. This tree has individual peripheral nodes (e.g. leaves) which are required to transmit to and receive from one other node only and are not required to act as repeaters or regenerators. Unlike the star network, the functionality of the central node may be distributed.


As in the conventional star network, individual nodes may thus still be isolated from the network by a single-point failure of a transmission path to the node. If a link connecting a leaf fails, that leaf is isolated; if a connection to a non-leaf node fails, an entire section of the network becomes isolated from the rest.

In order to alleviate the amount of network traffic that comes from broadcasting all signals to all nodes, more advanced central nodes were developed that are able to keep track of the identities of the nodes that are connected to the network. These network switches will "learn" the layout of the network by "listening" on each port during normal data transmission, examining the data packets and recording the address/identifier of each connected node and which port it's connected to in a lookup table held in memory. This lookup table then allows future transmissions to be forwarded to the intended destination only.
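The "learning" behavior described above can be sketched with the lookup table as a plain dictionary mapping addresses to ports: unknown destinations are flooded to all other ports, known ones are forwarded out a single port. Addresses and port numbers here are invented:

```python
class Switch:
    """Toy learning switch with a fixed number of ports."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.table = {}                      # address -> port it was heard on

    def handle(self, in_port, src, dst):
        """Return the list of ports the frame is sent out of."""
        self.table[src] = in_port            # "learn" by listening to the source
        if dst in self.table:                # destination known: one port only
            return [self.table[dst]]
        # destination unknown: flood to every port except the ingress port
        return [p for p in range(self.num_ports) if p != in_port]

sw = Switch(4)
print(sw.handle(0, "AA", "BB"))  # [1, 2, 3] -> BB unknown, flood
print(sw.handle(1, "BB", "AA"))  # [0]       -> AA was learned on port 0
print(sw.handle(0, "AA", "BB"))  # [1]       -> BB now known on port 1
```

After the first exchange in each direction, traffic between AA and BB no longer reaches the other two ports at all, which is exactly the traffic reduction the paragraph describes.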

Decentralization

In a mesh topology (i.e., a partially connected mesh topology), there are at least two nodes with two or more paths between them to provide redundant paths to be used in case the link providing one of the paths fails. This decentralization is often used to advantage to compensate for the single-point-failure disadvantage that is present when using a single device as a central node (e.g., in star and tree networks). A special kind of mesh, limiting the number of hops between two nodes, is a hypercube. The number of arbitrary forks in mesh networks makes them more difficult to design and implement, but their decentralized nature makes them very useful. This is similar in some ways to a grid network, where a linear or ring topology is used to connect systems in multiple directions. A multi-dimensional ring has a toroidal topology, for instance.

A fully connected network, complete topology or full mesh topology is a network topology in which there is a direct link between all pairs of nodes. In a fully connected network with n nodes, there are n(n-1)/2 direct links. Networks designed with this topology are usually very expensive to set up, but provide a high degree of reliability due to the multiple paths for data that are provided by the large number of redundant links between nodes. This topology is mostly seen in military applications. However, it can also be seen in the file sharing protocol BitTorrent in which users connect to other users in the "swarm" by allowing each user sharing the file to connect to other users also involved. Often in actual usage of BitTorrent any given individual node is rarely connected to every single other node as in a true fully connected network but the protocol does allow for the possibility for any one node to connect to any other node when sharing files.
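The link count n(n−1)/2, and its contrast with the n−1 links of a tree spanning the same nodes, can be computed directly; the gap is the price paid for full redundancy:

```python
def mesh_links(n):
    """Direct links in a full mesh of n nodes: one per pair of nodes."""
    return n * (n - 1) // 2

for n in (4, 10, 50):
    print(n, mesh_links(n), n - 1)   # mesh links vs. tree links
# 4 6 3
# 10 45 9
# 50 1225 49
```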

Hybrids

Hybrid networks use a combination of two or more topologies in such a way that the resulting network does not exhibit one of the standard topologies (e.g., bus, star, or ring). For example, a tree network connected to a tree network is still a tree network, but two star networks connected together exhibit a hybrid network topology. A hybrid topology is always produced when two different basic network topologies are connected. Two common examples of hybrid networks are the star ring network and the star bus network.

A Star ring network consists of two or more star topologies connected using a multistation access unit (MAU) as a centralized hub.

A Star Bus network consists of two or more star topologies connected using a bus trunk (the bus trunk serves as the network's backbone).

While grid networks have found popularity in high-performance computing applications, some systems have used genetic algorithms to design custom networks that have the fewest possible hops in between different nodes. Some of the resulting layouts are nearly incomprehensible, although they function quite well.

Basic hardware components

All networks are made up of basic hardware building blocks that interconnect network nodes, such as network interface cards (NICs), bridges, hubs, switches, and routers. In addition, some method of connecting these building blocks is required, usually in the form of copper cable (most commonly Category 5 cable). Less common are microwave links (as in IEEE 802.11) or optical fiber cable.

Network interface cards

Fig- network interface card

A network card, network adapter, network interface controller (NIC), network interface card, or LAN adapter is a computer hardware component designed to allow computers to communicate over a computer network. It is both an OSI layer 1 (physical layer) and layer 2 (data link layer) device, as it provides physical access to a networking medium and a low-level addressing system through the use of MAC addresses. It allows users to connect to each other either by using cables or wirelessly.

Although other network technologies exist, Ethernet has achieved near-ubiquity since the mid-1990s. Every Ethernet network card has a unique 48-bit serial number called a MAC address, which is stored in ROM carried on the card. Every computer on an Ethernet network must have a card with a unique MAC address. Normally it is safe to assume that no two network cards will share the same address, because card vendors purchase blocks of addresses from the Institute of Electrical and Electronics Engineers (IEEE) and assign a unique address to each card at the time of manufacture.
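The 48-bit address splits into a 24-bit organizationally unique identifier (OUI) – the block a vendor purchases from the IEEE – followed by a 24-bit number the vendor assigns per card. A small sketch (the example address itself is made up):

```python
def split_mac(mac):
    """Split a colon-separated MAC address into its 24-bit OUI
    (vendor block) and 24-bit per-card portion."""
    octets = mac.split(":")
    assert len(octets) == 6, "a MAC address has six 8-bit octets"
    oui = ":".join(octets[:3])       # first 3 octets: bought from the IEEE
    serial = ":".join(octets[3:])    # last 3 octets: assigned by the vendor
    return oui, serial

oui, serial = split_mac("00:1A:2B:3C:4D:5E")
print(oui)     # 00:1A:2B -> identifies the manufacturer
print(serial)  # 3C:4D:5E -> unique per card within that vendor block
```

This structure is why the paragraph's uniqueness assumption holds: no two vendors share an OUI, and each vendor numbers its own cards within its block.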

Whereas network cards used to be expansion cards that plug into a computer bus, the low cost and ubiquity of the Ethernet standard means that most newer computers have a network interface built into the motherboard. These either have Ethernet capabilities integrated into the motherboard chipset or implemented via a low cost dedicated Ethernet chip, connected through the PCI (or the newer PCI express) bus. A separate network card is not required unless multiple interfaces are needed or some other type of network is used. Newer motherboards may even have dual network (Ethernet) interfaces built-in.

The card implements the electronic circuitry required to communicate using a specific physical layer and data link layer standard such as Ethernet or token ring. This provides a base for a full network protocol stack, allowing communication among small groups of computers on the same LAN and large-scale network communications through routable protocols, such as IP.
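The protocol-stack role described above can be exercised directly. The Python sketch below (function name and payload are my own, not from the article) sends a UDP datagram over the loopback interface, so the data travels down and back up a full stack of the kind the NIC normally anchors:

```python
import socket

def loopback_roundtrip(payload: bytes) -> bytes:
    # Receiver bound to an ephemeral UDP port on the loopback interface.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    rx.settimeout(2.0)
    port = rx.getsockname()[1]

    # Sender: the OS carries this through the protocol stack
    # (UDP -> IP -> loopback "link"), the same path a NIC driver feeds.
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(payload, ("127.0.0.1", port))

    data, _addr = rx.recvfrom(4096)
    tx.close()
    rx.close()
    return data

print(loopback_roundtrip(b"hello, LAN"))  # b'hello, LAN'
```

On a real network the same calls would traverse the NIC onto the wire; loopback simply keeps the example self-contained.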

There are four techniques used to transfer data; a NIC may use one or more of them.

Polling is where the microprocessor examines the status of the peripheral under program control.

Programmed I/O is where the microprocessor alerts the designated peripheral by applying its address to the system's address bus.

Interrupt-driven I/O is where the peripheral alerts the microprocessor that it's ready to transfer data.

DMA is where an intelligent peripheral assumes control of the system bus to access memory directly. This removes load from the CPU but requires a separate processor on the card.
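As a rough illustration of the difference between the first and third techniques, here is a toy Python simulation of polling versus interrupt-driven I/O (the classes and names are invented for this sketch; real NICs do this in hardware and driver code):

```python
class Peripheral:
    """Toy stand-in for NIC hardware exposing a 'data ready' status flag."""
    def __init__(self):
        self.ready = False
        self.data = None
    def produce(self, value):
        self.data = value
        self.ready = True

def poll(dev, max_checks=1000):
    # Polling: the CPU repeatedly examines the status flag under program control.
    for _ in range(max_checks):
        if dev.ready:
            dev.ready = False
            return dev.data
    return None  # gave up: nothing arrived while we were checking

class InterruptPeripheral:
    """Interrupt-driven I/O: the peripheral alerts the CPU via a registered handler."""
    def __init__(self):
        self.handler = None
    def register(self, handler):
        self.handler = handler
    def produce(self, value):
        if self.handler:
            self.handler(value)  # the "interrupt" fires only when data exists

p = Peripheral()
p.produce("frame-1")
print(poll(p))        # frame-1

received = []
irq = InterruptPeripheral()
irq.register(received.append)
irq.produce("frame-2")
print(received)       # ['frame-2']
```

The inversion of control is the key point: polling spends CPU cycles checking even when nothing is ready, while the interrupt style does no work until the peripheral signals.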

A network card typically has a twisted-pair (8P8C/RJ45), BNC, or AUI socket where the network cable is connected, and a few LEDs to inform the user whether the network is active and whether data is being transmitted on it. Network cards are typically available in 10/100/1000 Mbit/s varieties, meaning they can support a notional maximum transfer rate of 10, 100 or 1000 megabits per second.
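The quoted rates make the arithmetic easy to check. A small Python helper (the function name and the 100 MB example are illustrative, and protocol overhead is ignored) converts a file size and a line rate into a notional transfer time:

```python
def transfer_seconds(size_bytes: int, rate_mbit_s: int) -> float:
    """Notional time to move size_bytes at the card's line rate.

    Ignores framing and protocol overhead, so real transfers are slower.
    """
    return size_bytes * 8 / (rate_mbit_s * 1_000_000)

# A 100 MB file (100,000,000 bytes) at each common NIC speed:
for rate in (10, 100, 1000):
    print(f"{rate:>4} Mbit/s: {transfer_seconds(100_000_000, rate):.1f} s")
# 10 Mbit/s: 80.0 s, 100 Mbit/s: 8.0 s, 1000 Mbit/s: 0.8 s
```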

A network interface controller (NIC) is a hardware device that handles an interface to a computer network and allows a network-capable device to access that network. The NIC has a ROM chip containing a unique number, the Media Access Control (MAC) address, burned into it. The MAC address identifies the device uniquely on the LAN. The NIC exists at both the Physical Layer (Layer 1) and the Data Link Layer (Layer 2) of the OSI model.

Sometimes the words 'controller' and 'card' are used interchangeably when talking about networking because the most common NIC is the network interface card. Although 'card' is more commonly used, it is less encompassing. The 'controller' may take the form of a network card that is installed inside a computer, or it may refer to an embedded component as part of a computer motherboard, a router, expansion card, printer interface or a USB device.

A MAC address is a 48-bit network hardware identifier that is burned into a ROM chip on the NIC to identify that device on the network. The first 24-bit field is called the Organizationally Unique Identifier (OUI) and identifies the manufacturer. Each OUI allows for 16,777,216 unique NIC addresses. Smaller manufacturers that do not need over 4096 unique NIC addresses may opt to purchase an Individual Address Block (IAB) instead. An IAB consists of the 24-bit OUI plus a 12-bit extension (taken from the 'potential' NIC portion of the MAC address), leaving 12 bits for the device itself.
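The OUI arithmetic above can be sketched in a few lines of Python; the address used here is a made-up example, not tied to any real vendor:

```python
def split_mac(mac: str):
    """Split a 48-bit MAC string into its 24-bit OUI and 24-bit device part."""
    octets = mac.lower().replace("-", ":").split(":")
    if len(octets) != 6:
        raise ValueError("expected six octets")
    return ":".join(octets[:3]), ":".join(octets[3:])

# Each 24-bit OUI leaves 24 bits for the vendor: 2**24 addresses.
assert 2 ** 24 == 16_777_216
# An IAB fixes a further 12 bits, leaving 12: 2**12 = 4096 addresses.
assert 2 ** 12 == 4096

oui, device = split_mac("00-1B-63-84-45-E6")  # hypothetical address
print(oui, device)  # 00:1b:63 84:45:e6
```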

Repeaters

A repeater is an electronic device that receives a signal and retransmits it at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted-pair Ethernet configurations, repeaters are required for cable runs longer than 100 meters.

The term "repeater" originated with telegraphy and referred to an electromechanical device used to regenerate telegraph signals. Use of the term has continued in telephony and data communications. In telecommunication, the term repeater has the following standardized meanings:

1. An analog device that amplifies an input signal regardless of its nature (analog or digital).

2. A digital device that amplifies, reshapes, retimes, or performs a combination of any of these functions on a digital input signal for retransmission.

Because repeaters work with the actual physical signal, and do not attempt to interpret the data being transmitted, they operate on the Physical layer, the first layer of the OSI model.
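A minimal sketch of the second, digital meaning: rather than amplifying a noisy waveform as an analog repeater would, a digital repeater decides each bit against a threshold and re-emits a clean level. The Python below is illustrative only (the sample values and threshold are invented):

```python
def regenerate(samples, threshold=0.5):
    """Digital regeneration: decide each sample against a threshold
    and re-emit a clean logic level, discarding accumulated noise."""
    return [1.0 if s >= threshold else 0.0 for s in samples]

noisy = [0.9, 0.12, 0.78, 0.31, 0.95]   # attenuated, noisy line levels
print(regenerate(noisy))  # [1.0, 0.0, 1.0, 0.0, 1.0]
```

This is why digital repeaters can be chained without the noise build-up that limits analog amplification.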

Hubs

Fig- 4-port Ethernet hub

A network hub or repeater hub is a device for connecting multiple twisted-pair or fiber-optic Ethernet devices together, making them act as a single network segment. Hubs work at the physical layer (layer 1) of the OSI model; the device is thus a form of multiport repeater. Repeater hubs also participate in collision detection, forwarding a jam signal to all ports if a collision is detected. Hubs also often come with a BNC and/or AUI connector to allow connection to legacy 10BASE2 or 10BASE5 network segments. The availability of low-priced network switches has largely rendered hubs obsolete, but they are still seen in older installations and more specialized applications.

A network hub contains multiple ports. When a packet arrives at one port, it is copied unmodified to all ports of the hub for transmission. The destination address in the frame is not changed to a broadcast address.
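The hub's copy-everywhere behavior is simple enough to model directly. The sketch below (names invented) assumes the usual variant that repeats the frame out of every port except the one it arrived on:

```python
def hub_forward(frame: bytes, in_port: int, num_ports: int):
    """A hub repeats the frame, unmodified, out of every port other than
    the arrival port -- no addresses are examined along the way."""
    return {port: frame for port in range(num_ports) if port != in_port}

# A frame arriving on port 0 of a 4-port hub is repeated on ports 1-3.
print(hub_forward(b"frame", in_port=0, num_ports=4))
```

Contrast this with the bridge and switch behavior described in the next sections, where addresses are examined before forwarding.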

Bridges

A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model. Bridges do not promiscuously copy traffic to all ports, as hubs do, but learn which MAC addresses are reachable through specific ports. Once the bridge associates a port and an address, it will send traffic for that address only to that port. Bridges do send broadcasts to all ports except the one on which the broadcast was received.

Bridges learn the association of ports and addresses by examining the source addresses of the frames they see on various ports. When a frame arrives through a port, its source address is stored and the bridge assumes that the MAC address is associated with that port. The first time a previously unknown destination address is seen, the bridge forwards the frame to all ports other than the one on which it arrived.
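The learning behavior just described can be captured in a short Python class (a didactic sketch, not a real bridge implementation):

```python
class LearningBridge:
    """Minimal sketch of transparent-bridge learning and forwarding."""

    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.table = {}  # learned MAC address -> port

    def receive(self, src: str, dst: str, in_port: int):
        """Return the list of ports the frame should be sent out of."""
        # Learn: the source address must be reachable via the arrival port.
        self.table[src] = in_port
        if dst in self.table:
            out = self.table[dst]
            # Filter frames whose destination is on the arrival segment.
            return [] if out == in_port else [out]
        # Unknown destination: flood to every port except the arrival port.
        return [p for p in range(self.num_ports) if p != in_port]

br = LearningBridge(4)
# Host A (port 0) sends to unknown host B: the frame is flooded.
print(br.receive("aa:aa", "bb:bb", in_port=0))  # [1, 2, 3]
# B replies from port 2: A's port was learned, so no flooding is needed.
print(br.receive("bb:bb", "aa:aa", in_port=2))  # [0]
```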

Bridges come in three basic types:

1. Local bridges: directly connect local area networks (LANs).
2. Remote bridges: can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, have largely been replaced with routers.
3. Wireless bridges: can be used to join LANs or connect remote stations to LANs.

Advantages of network bridges
- Self-configuring
- Primitive bridges are often inexpensive
- Reduce the size of the collision domain by microsegmentation in non-switched networks
- Transparent to protocols above the MAC layer
- Allow the introduction of management/performance information and access control
- The interconnected LANs remain separate, so physical constraints such as the number of stations, repeaters and segment length don't apply across them
- Help minimize bandwidth usage when used to interconnect two LANs

Disadvantages of network bridges
- Do not limit the scope of broadcasts
- Do not scale to extremely large networks
- Buffering introduces store-and-forward delays; on average, traffic destined for the bridge will be related to the number of stations on the rest of the LAN
- Bridging of different MAC protocols introduces errors
- Because bridges do more than repeaters, examining MAC addresses, the extra processing makes them slower than repeaters
- Bridges are more expensive than repeaters

Although, in theory, an unlimited number of bridges (or Layer 2 switches) can be connected, a broadcast storm will often result as more and more collisions occur. Collisions delay service advertisements, which causes hosts to back off and attempt to retransmit after a pseudo-random interval. Because bridges simply repeat any Layer 2 broadcast traffic, this can result in undesirable broadcast traffic consuming the network. An example would be a bridge between adjacent office buildings: it is unlikely that the advantages of bridging would outweigh the loss of network bandwidth associated with all of the service advertisements. Another major disadvantage is that any standards-compliant implementation of bridging cannot have closed loops in the network, which limits both performance and reliability.

Switches

Fig- Atlantis network switch with Ethernet ports.

A network switch is a device that forwards and filters OSI layer 2 frames (chunks of data communication) between ports (connected cables) based on the MAC addresses in the frames. This is distinct from a hub in that it forwards frames only to the ports involved in the communication rather than to all connected ports. Strictly speaking, a switch is not capable of routing traffic based on IP address (OSI Layer 3), which is necessary for communicating between network segments or within a large or complex LAN. Some switches are capable of routing based on IP addresses but are still called switches as a marketing term. A switch normally has numerous ports, with the intention being that most or all of the network is connected directly to the switch, or to another switch that is in turn connected to one.

Switch is a marketing term that encompasses routers and bridges, as well as devices that may distribute traffic on load or by application content (e.g., a Web URL identifier). Switches may operate at one or more OSI model layers, including physical, data link, network, or transport (i.e., end-to-end). A device that operates simultaneously at more than one of these layers is called a multilayer switch.

Overemphasizing the ill-defined term "switch" often leads to confusion when first trying to understand networking. Many experienced network designers and operators recommend starting with the logic of devices dealing with only one protocol level, not all of which are covered by OSI. Multilayer device selection is an advanced topic that may lead to selecting particular implementations, but multilayer switching is simply not a real-world design concept.

Role of switches in networks

Switches may operate at one or more OSI layers, including physical, data link, network, or transport (i.e., end-to-end); a device that operates simultaneously at more than one of these layers is a multilayer switch, although use of the term is diminishing. In switches intended for commercial use, built-in or modular interfaces make it possible to connect different types of networks, including Ethernet, Fibre Channel, ATM, ITU-T G.hn and 802.11. This connectivity can be at any of the layers mentioned. While Layer 2 functionality is adequate for speed-shifting within one technology, interconnecting technologies such as Ethernet and token ring is easier at Layer 3. Interconnection of different Layer 3 networks is done by routers. If any feature characterizes "Layer 3 switches" as opposed to general-purpose routers, it tends to be that they are optimized, in larger switches, for high-density Ethernet connectivity.

In some service provider and other environments where there is a need for a great deal of analysis of network performance and security, switches may be connected between WAN routers as places for analytic modules. Some vendors provide firewall, network intrusion detection and performance analysis modules that can plug into switch ports. Some of these functions may be on combined modules. In other cases, the switch is used to create a mirror image of data that can go to an external device. Since most switch port mirroring provides only one mirrored stream, network hubs can be useful for fanning out data to several read-only analyzers, such as intrusion detection systems and packet sniffers.

Routers

A router is a networking device that forwards packets between networks, using information in protocol headers and forwarding tables to determine the best next hop for each packet. Routers work at the Network Layer of the OSI model and the Internet Layer of TCP/IP.
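Forwarding-table lookup is easy to demonstrate with Python's standard ipaddress module. The prefixes and next-hop addresses below are made-up documentation values (192.0.2.0/24 is reserved for examples), and the rule shown is the usual longest-prefix match:

```python
import ipaddress

# Toy forwarding table: destination prefix -> next-hop router address.
TABLE = {
    ipaddress.ip_network("10.0.0.0/8"):   "192.0.2.1",
    ipaddress.ip_network("10.1.0.0/16"):  "192.0.2.2",
    ipaddress.ip_network("0.0.0.0/0"):    "192.0.2.254",  # default route
}

def next_hop(dst: str) -> str:
    """Pick the most specific (longest) matching prefix, as routers do."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in TABLE if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return TABLE[best]

print(next_hop("10.1.2.3"))   # 192.0.2.2  (the /16 beats the /8)
print(next_hop("8.8.8.8"))    # 192.0.2.254  (only the default route matches)
```

Real routers build such tables dynamically from routing protocols and use specialized data structures for the lookup, but the selection rule is the same.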

Fig- Nortel ERS 8600 router
