IMPLEMENTATION OF DISTRIBUTED
ENTERPRISE BRANCH NETWORK IN
HCL
Submitted in partial fulfillment of the requirements
for the award of the degree of
Bachelor of Technology
In
Information Technology
Project Guide: Prof. ANUBHUTI RODA
Submitted by:
RAJAT MIGLANI (0661153107)
VIKAS RAVISH (0711153107)
PIYUSH AGNIHOTRY (0771153107)
THAKUR AMIT CHAUHAN (0801153107)
BHARATI VIDYAPEETH’S COLLEGE OF ENGINEERING
A-4, PASCHIM VIHAR, ROHTAK ROAD, NEW DELHI- 110063
GURU GOBIND SINGH INDRAPRASTHA UNIVERSITY
(2007-2011)
TABLE OF CONTENTS
1. Project Description
2. Network Diagram
3. Hierarchical Network
4. VLAN-based Approach
5. Port Security
6. Redundancy with Spanning Tree Protocol
7. Remote Login using Telnet
8. Address Allocation using DHCP
9. Frame Relay
10. VPN
11. Dynamic Routing
12. Tools Description
13. Verification of the Technologies Used
14. Conclusion
15. Future Scope
16. References
17. Appendix
CERTIFICATE
This is to certify that this Major Project Report entitled “Implementation of
Distributed Enterprise Branch Network” completed by Mr. Rajat Miglani
Roll No.07/BV/IT/066, Mr. Vikas Ravish Roll No.07/BV/IT/071, Mr. Piyush
Agnihotry Roll No.07/BV/IT/077, Mr. Thakur Amit Chauhan Roll
No.07/BV/IT/080, Mr. Udit Chauhan Roll No.07/BV/IT/079 is an authentic
work carried out by them at Bharati Vidyapeeth’s College Of Engineering,
New Delhi under our guidance.
The matter embodied in this project work has not been used earlier for the
award of any degree or diploma to the best of my knowledge and belief.
DATE: Prof. Anubhuti Roda
ACKNOWLEDGEMENT
We wish to express our deepest gratitude to Prof. Anubhuti Roda, our Project
Guide, for her guidance and support throughout the project. We would like to
thank her profusely for giving us access to all the details required during the
course of project formulation and completion.
Finally, we are grateful to all the staff of the IT Department for their
wholehearted cooperation and assistance.
Date:
Piyush Agnihotry(0771153107)
ABSTRACT
This project on the Implementation of a Distributed Enterprise Branch Network
is a logical network design that aims to build a redundant, robust, reliable,
manageable, maintainable, secure and scalable network for an enterprise that is
spread globally, keeping in mind the cost that a medium-sized enterprise can
spend on its network. The system is based on a simulation, in Packet Tracer, of
the switched network within each branch and of the WAN between the branches.
The network is implemented using technologies such as STP (Spanning Tree
Protocol), a VLAN-based approach, a management VLAN, dynamic route sharing
using EIGRP, port security and a hierarchical network within each branch. For
inter-branch communication we have used Frame Relay, a cost-effective WAN
protocol implemented on private infrastructure, while a VPN is configured on
public infrastructure, that is, the Internet.
PROJECT DESCRIPTION
Computer networks these days are a basic necessity for every enterprise,
required for a number of purposes such as information exchange, voice
communication, video calling and conferencing, and providing Internet access
to clients.
As the dependency of an enterprise on its network increases, so does the need
for a robust, scalable, manageable and redundant network. This is the main
focus of the project. The project is a logical topology of a distributed
Enterprise Branch Network implemented on the simulator GNS3 (Graphical
Network Simulator).
The network is a hierarchical network that consists of Layer 2 switches, Layer
3 switches, routers, Frame Relay switches and an ACS (Access Control Server).
The nodes are configured for the following technologies within a branch:
1. Hierarchical network
2. VLAN-based approach
3. Inter-VLAN communication using trunks
4. Port security
5. Redundancy with Spanning Tree Protocol
6. Dynamic route sharing by EIGRP
7. Address allocation using DHCP
8. Frame Relay
NETWORK DIAGRAM
TECHNOLOGIES
IMPLEMENTED
WITHIN THE BRANCH
Chapter 1
HIERARCHICAL NETWORK
When building a LAN that satisfies the needs of a small- or medium-sized
business, your plan is more likely to be successful if a hierarchical design
model is used. Compared to other network designs, a hierarchical network is
easier to manage and expand, and problems are solved more quickly.
Hierarchical network design involves dividing the network into discrete layers.
Each layer provides specific functions that define its role within the overall
network. By separating the various functions that exist on a network, the
network design becomes modular, which facilitates scalability and
performance. The typical hierarchical design model is broken up in to three
layers: access, distribution, and core. An example of a three-layer hierarchical
network design is displayed in the figure.
FIG 1
Access Layer
The access layer interfaces with end devices, such as PCs, printers, and IP
phones, to provide access to the rest of the network. The access layer can
include routers, switches, bridges, hubs, and wireless access points. The main
purpose of the access layer is to provide a means of connecting devices to the
network and controlling which devices are allowed to communicate on the
network.
Distribution Layer
The distribution layer aggregates the data received from the access layer
switches before it is transmitted to the core layer for routing to its final
destination. The distribution layer controls the flow of network traffic using
policies and delineates broadcast domains by performing routing functions
between virtual LANs (VLANs) defined at the access layer. VLANs allow you
to segment the traffic on a switch into separate subnetworks. For example, in a
university you might separate traffic according to faculty, students, and guests.
Distribution layer switches are typically high-performance devices that have
high availability and redundancy to ensure reliability.
Core Layer
The core layer of the hierarchical design is the high-speed backbone of the
internetwork. The core layer is critical for interconnectivity between
distribution layer devices, so it is important for the core to be highly available
and redundant. The core area can also connect to Internet resources. The core
aggregates the traffic from all the distribution layer devices, so it must be
capable of forwarding large amounts of data quickly.
1.1 BENEFITS OF A HIERARCHICAL NETWORK
1. Scalability
Hierarchical networks scale very well. The modularity of the design allows
you to replicate design elements as the network grows. Because each instance
of the module is consistent, expansion is easy to plan and implement. For
example, if your design model consists of two distribution layer switches for
every 10 access layer switches, you can continue to add access layer switches
until you have 10 access layer switches cross-connected to the two distribution
layer switches before you need to add additional distribution layer switches to
the network topology. Also, as you add more distribution layer switches to
accommodate the load from the access layer switches, you can add additional
core layer switches to handle the additional load on the core.
2. Redundancy
As a network grows, availability becomes more important. You can
dramatically increase availability through easy redundant implementations
with hierarchical networks. Access layer switches are connected to two
different distribution layer switches to ensure path redundancy. If one of the
distribution layer switches fails, the access layer switch can switch to the other
distribution layer switch. Additionally, distribution layer switches are
connected to two or more core layer switches to ensure path availability if a
core switch fails. The only layer where redundancy is limited is at the access
layer. Typically, end node devices, such as PCs, printers, and IP phones, do
not have the ability to connect to multiple access layer switches for
redundancy. If an access layer switch fails, just the devices connected to that
one switch would be affected by the outage. The rest of the network would
continue to function unaffected.
3. Performance
Communication performance is enhanced by avoiding the transmission of data
through low-performing, intermediary switches. Data is sent through
aggregated switch port links from the access layer to the distribution layer at
near wire speed in most cases. The distribution layer then uses its high
performance switching capabilities to forward the traffic up to the core, where
it is routed to its final destination. Because the core and distribution layers
perform their operations at very high speeds, there is no contention for
network bandwidth. As a result, properly designed hierarchical networks can
achieve near wire speed between all devices.
4. Security
Security is improved and easier to manage. Access layer switches can be
configured with various port security options that provide control over which
devices are allowed to connect to the network. You also have the flexibility to
use more advanced security policies at the distribution layer. You may apply
access control policies that define which communication protocols are
deployed on your network and where they are permitted to go. For example, if
you want to limit the use of HTTP to a specific user community connected at
the access layer, you could apply a policy that blocks HTTP traffic at the
distribution layer. Restricting traffic based on higher layer protocols, such as
IP and HTTP, requires that your switches are able to process policies at that
layer. Some access layer switches support Layer 3 functionality, but it is
usually the job of the distribution layer switches to process Layer 3 data,
because they can process it much more efficiently.
5. Manageability
Manageability is relatively simple on a hierarchical network. Each layer of the
hierarchical design performs specific functions that are consistent throughout
that layer. Therefore, if you need to change the functionality of an access layer
switch, you could repeat that change across all access layer switches in the
network because they presumably perform the same functions at their layer.
Deployment of new switches is also simplified because switch configurations
can be copied between devices with very few modifications. Consistency
between the switches at each layer allows for rapid recovery and simplified
troubleshooting. In some special situations, there could be configuration
inconsistencies between devices, so you should ensure that configurations are
well documented so that you can compare them before deployment.
6. Maintainability
Because hierarchical networks are modular in nature and scale very easily,
they are easy to maintain. With other network topology designs, manageability
becomes increasingly complicated as the network grows. Also, in some
network design models, there is a finite limit to how large the network can
grow before it becomes too complicated and expensive to maintain. In the
hierarchical design model, switch functions are defined at each layer, making
the selection of the correct switch easier. Adding switches to one layer does
not necessarily mean there will not be a bottleneck or other limitation at
another layer. For a full mesh network topology to achieve maximum
performance, all switches need to be high-performance switches, because each
switch needs to be capable of performing all the functions on the network. In
the hierarchical model, switch functions are different at each layer. You can
save money by using less expensive access layer switches at the lowest layer,
and spend more on the distribution and core layer switches to achieve high
performance on the network.
1.2 Hierarchy in our network
In our network we have used three access layer switches, named asw1, asw2
and asw3, to provide network access to the end devices. The access layer uses
Layer 2 switches.
The distribution layer switches, dsw1 and dsw2, are multilayer (Layer 3)
switches that provide inter-VLAN routing.
The core layer switches, csw1 and csw2, are also multilayer switches and work
along with two routers, r1 and r2, to provide access to the public network
outside the enterprise branch.
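As a sketch of the distribution layer's inter-VLAN routing role, a multilayer
switch such as dsw1 could be configured roughly as follows (the SVI addresses
shown are assumptions based on the VLAN address plan in Chapter 2):

```
dsw1(config)# ip routing                        ! enable Layer 3 forwarding
dsw1(config)# interface vlan 10                 ! SVI = default gateway for VLAN 10
dsw1(config-if)# ip address 192.168.10.1 255.255.255.0
dsw1(config-if)# no shutdown
dsw1(config-if)# interface vlan 20
dsw1(config-if)# ip address 192.168.20.1 255.255.255.0
dsw1(config-if)# no shutdown
```

Hosts in VLAN 10 would then use 192.168.10.1 as their gateway, and traffic
between VLANs 10 and 20 is routed entirely within the switch.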
CHAPTER 2
VLAN BASED APPROACH
A VLAN is a logically separate IP subnetwork. VLANs allow multiple IP
networks and subnets to exist on the same switched network. For computers to
communicate on the same VLAN, each must have an IP address and a subnet
mask that is consistent for that VLAN. The switch has to be configured with
the VLAN and each port in the VLAN must be assigned to the VLAN. A
switch port with a singular VLAN configured on it is called an access port.
Remember, just because two computers are physically connected to the same
switch does not mean that they can communicate. Devices on two separate
networks and subnets must communicate via a router (Layer 3), whether or not
VLANs are used. You do not need VLANs to have multiple networks and
subnets on a switched network, but there are definite advantages to using
VLANs.
Fig 2.1
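As the paragraph above notes, an access port carries a single VLAN. A minimal
sketch of creating a VLAN and assigning a port to it (the VLAN name and the
interface number are assumptions):

```
Switch(config)# vlan 10
Switch(config-vlan)# name Faculty
Switch(config-vlan)# exit
Switch(config)# interface fastEthernet 0/1
Switch(config-if)# switchport mode access      ! make this an access port
Switch(config-if)# switchport access vlan 10   ! place it in VLAN 10
```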
2.2 Benefits of a VLAN
Security - Groups that have sensitive data are separated from the rest
of the network, decreasing the chances of confidential information
breaches. Faculty computers are on VLAN 10 and completely
separated from student and guest data traffic.
Cost reduction - Cost savings result from less need for expensive
network upgrades and more efficient use of existing bandwidth and
uplinks.
Higher performance - Dividing flat Layer 2 networks into multiple
logical workgroups (broadcast domains) reduces unnecessary traffic on
the network and boosts performance.
Improved IT staff efficiency - VLANs make it easier to manage the
network because users with similar network requirements share the
same VLAN. When you provision a new switch, all the policies and
procedures already configured for the particular VLAN are
implemented when the ports are assigned. It is also easy for the IT staff
to identify the function of a VLAN by giving it an appropriate name. In
the figure, for easy identification VLAN 20 could be named "Student",
VLAN 10 could be named "Faculty", and VLAN 30 "Guest."
Simpler project or application management - VLANs aggregate
users and network devices to support business or geographic
requirements. Having separate functions makes managing a project or
working with a specialized application easier, for example, an e-learning
development platform for faculty. It is also easier to determine
the scope of the effects of upgrading network services.
2.3 OUR PROJECT VLAN
In our project we have implemented four VLANs: VLAN 10, VLAN 20, VLAN 30
and VLAN 99.
VLANs 10, 20 and 30 are for client access, say for different departments such
as finance, marketing, production and human resources.
VLAN 10 uses the address space 192.168.10.0/24
VLAN 20 uses the address space 192.168.20.0/24
VLAN 30 uses the address space 192.168.30.0/24
VLAN 99 is reserved for management, that is, for controlling remote access to
and management of the devices on the enterprise branch network.
VLAN 99 uses the address space 192.168.99.0/24
2.4 VLAN CONFIGURATION
Fig 2.2
Fig 2.3
Fig 2.4
Fig 2.5
Fig 2.6
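The configurations captured in the figures above follow a common pattern on
each access switch: create the four VLANs, assign client ports, trunk the
uplink, and give the switch a management address in VLAN 99. A sketch only;
the interface numbers and the choice of uplink port are assumptions:

```
asw1(config)# vlan 10
asw1(config-vlan)# vlan 20
asw1(config-vlan)# vlan 30
asw1(config-vlan)# vlan 99
asw1(config-vlan)# exit
asw1(config)# interface range fastEthernet 0/2 - 10
asw1(config-if-range)# switchport mode access
asw1(config-if-range)# switchport access vlan 10     ! client ports in VLAN 10
asw1(config-if-range)# exit
asw1(config)# interface fastEthernet 0/24            ! uplink to dsw1
asw1(config-if)# switchport mode trunk               ! trunk carries all VLANs
asw1(config-if)# interface vlan 99                   ! management SVI
asw1(config-if)# ip address 192.168.99.11 255.255.255.0
asw1(config-if)# no shutdown
```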
Chapter 3
PORT SECURITY
A switch that does not provide port security allows an attacker to attach a
system to an unused, enabled port and to perform information gathering or
attacks. A switch can be configured to act like a hub, which means that every
system connected to the switch can potentially view all network traffic passing
through the switch to all systems connected to the switch. Thus, an attacker
could collect traffic that contains usernames, passwords, or configuration
information about the systems on the network.
All switch ports or interfaces should be secured before the switch is deployed.
Port security limits the number of valid MAC addresses allowed on a port.
When you assign secure MAC addresses to a secure port, the port does not
forward packets with source addresses outside the group of defined addresses.
PORT SECURITY IN OUR PROJECT
In our project all the unused ports are shut down so that no hacker or intruder
can connect a device to them.
Interface tracing is also enabled so that the administrator always knows what
is happening on each interface and what information is flowing through it.
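Beyond shutting ports down, Cisco IOS port security can also cap how many MAC
addresses an active access port will accept. A sketch of both measures (the
interface ranges are assumptions about which ports are unused):

```
asw1(config)# interface range fastEthernet 0/11 - 23
asw1(config-if-range)# shutdown                        ! disable all unused ports
asw1(config-if-range)# exit
asw1(config)# interface fastEthernet 0/1
asw1(config-if)# switchport mode access
asw1(config-if)# switchport port-security              ! enable port security
asw1(config-if)# switchport port-security maximum 1    ! one MAC address only
asw1(config-if)# switchport port-security mac-address sticky
asw1(config-if)# switchport port-security violation shutdown
```

With sticky learning, the first MAC address seen on the port is learned and
saved; a frame from any other source MAC err-disables the port.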
Chapter 4
REDUNDANCY WITH SPANNING TREE PROTOCOL
The hierarchical design model addresses issues found in the flat model
network topologies. One of the issues is redundancy. Layer 2 redundancy
improves the availability of the network by implementing alternate network
paths by adding equipment and cabling. Having multiple paths for data to
traverse the network allows for a single path to be disrupted without impacting
the connectivity of devices on the network.
In our network, redundancy is implemented at the distribution and core layers
by providing redundant links between the access, distribution and core
layers.
Our network can maintain availability in case of:
1. Path failure from the access to the distribution layer, as each access layer
switch is connected to two distribution layer switches.
2. Failure of a distribution layer switch.
3. Path failure from the distribution to the core layer.
4.1 PROBLEMS WITH REDUNDANCY
Layer 2 Loops
Redundancy is an important part of the hierarchical design. Although it is
important for availability, there are some considerations that need to be
addressed before redundancy is even possible on a network.
When multiple paths exist between two devices on the network and STP has
been disabled on those switches, a Layer 2 loop can occur. If STP is enabled
on these switches, which is the default, a Layer 2 loop would not occur.
Broadcast frames are forwarded out all switch ports, except the originating
port. This ensures that all devices in the broadcast domain are able to receive
the frame. If there is more than one path for the frame to be forwarded out, it
can result in an endless loop.
Broadcast Storms
A broadcast storm occurs when there are so many broadcast frames caught in
a Layer 2 loop that all available bandwidth is consumed. Consequently, no
bandwidth is available for legitimate traffic, and the network
becomes unavailable for data communication.
A broadcast storm is inevitable on a looped network. As more devices send
broadcasts out on the network, more and more traffic gets caught in the loop,
eventually creating a broadcast storm that causes the network to fail.
There are other consequences for broadcast storms. Because broadcast traffic
is forwarded out every port on a switch, all connected devices have to process
all broadcast traffic that is being flooded endlessly around the looped network.
This can cause the end device to malfunction because of the high processing
requirements for sustaining such a high traffic load on the network interface
card.
Duplicate Unicast Frames
Broadcast frames are not the only type of frames that are affected by loops.
Unicast frames sent onto a looped network can result in duplicate frames
arriving at the destination device.
4.2 STP
1. STP Topology
Redundancy increases the availability of the network topology by protecting
the network from a single point of failure, such as a failed network cable or
switch. When redundancy is introduced into a Layer 2 design, loops and
duplicate frames can occur. Loops and duplicate frames can have severe
consequences on a network. The Spanning Tree Protocol (STP) was developed
to address these issues.
STP ensures that there is only one logical path between all destinations on the
network by intentionally blocking redundant paths that could cause a loop. A
port is considered blocked when network traffic is prevented from entering or
leaving that port. This does not include bridge protocol data unit (BPDU)
frames that are used by STP to prevent loops. You will learn more about STP
BPDU frames later in the chapter. Blocking the redundant paths is critical to
preventing loops on the network. The physical paths still exist to provide
redundancy, but these paths are disabled to prevent the loops from occurring.
If the path is ever needed to compensate for a network cable or switch failure,
STP recalculates the paths and unblocks the necessary ports to allow the
redundant path to become active.
2. STP Algorithm
STP uses the Spanning Tree Algorithm (STA) to determine which switch ports
on a network need to be configured for blocking to prevent loops from
occurring. The STA designates a single switch as the root bridge and uses it as
the reference point for all path calculations. In the figure the root bridge,
switch S1, is chosen through an election process. All switches participating in
STP exchange BPDU frames to determine which switch has the lowest bridge
ID (BID) on the network. The switch with the lowest BID automatically
becomes the root bridge for the STA calculations. The root bridge election
process will be discussed in detail later in this chapter.
The BPDU is the message frame exchanged by switches for STP. Each BPDU
contains a BID that identifies the switch that sent the BPDU. The BID
contains a priority value, the MAC address of the sending switch, and an
optional extended system ID. The lowest BID value is determined by the
combination of these three fields. You will learn more about the root bridge,
BPDU, and BID in later topics.
After the root bridge has been determined, the STA calculates the shortest path
to the root bridge. Each switch uses the STA to determine which ports to
block. While the STA determines the best paths to the root bridge for all
destinations in the broadcast domain, all traffic is prevented from forwarding
through the network. The STA considers both path and port costs when
determining which path to leave unblocked. The path costs are calculated
using port cost values associated with port speeds for each switch port along a
given path. The sum of the port cost values determines the overall path cost to
the root bridge. If there is more than one path to choose from, STA chooses
the path with the lowest path cost. You will learn more about path and port
costs in later topics.
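For reference, the classic IEEE defaults are a cost of 100 for 10 Mb/s, 19 for
100 Mb/s, 4 for 1 Gb/s and 2 for 10 Gb/s ports. The cost can also be overridden
per port to steer the calculation (a sketch; the interface and value are
assumptions):

```
S2(config)# interface fastEthernet 0/1
S2(config-if)# spanning-tree cost 15   ! lower than the default 19, so this
                                       ! uplink is preferred toward the root
```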
When the STA has determined which paths are to be left available, it
configures the switch ports into distinct port roles. The port roles describe
their relation in the network to the root bridge and whether they are allowed to
forward traffic.
Root ports - Switch ports closest to the root bridge. In the example,
the root port on switch S2 is F0/1 configured for the trunk link between
switch S2 and switch S1. The root port on switch S3 is F0/1,
configured for the trunk link between switch S3 and switch S1.
Designated ports - All non-root ports that are still permitted to
forward traffic on the network. In the example, switch ports F0/1 and
F0/2 on switch S1 are designated ports. Switch S2 also has its port
F0/2 configured as a designated port.
Non-designated ports - All ports configured to be in a blocking state
to prevent loops. In the example, the STA configured port F0/2 on
switch S3 in the non-designated role. Port F0/2 on switch S3 is in the
blocking state.
3. Port Roles
The root bridge is elected for the spanning-tree instance. The location of the
root bridge in the network topology determines how port roles are calculated.
This topic describes how the switch ports are configured for specific roles to
prevent the possibility of loops on the network.
There are four distinct port roles that switch ports are automatically configured
for during the spanning-tree process.
Root Port
The root port exists on non-root bridges and is the switch port with the best
path to the root bridge. Root ports forward traffic toward the root bridge. The
source MAC addresses of frames received on the root port can populate the
MAC table. Only one root port is allowed per bridge.
Designated Port
The designated port exists on root and non-root bridges. For root bridges, all
switch ports are designated ports. For non-root bridges, a designated port is the
switch port that receives and forwards frames toward the root bridge as
needed. Only one designated port is allowed per segment. If multiple switches
exist on the same segment, an election process determines the designated
switch, and the corresponding switch port begins forwarding frames for the
segment. Designated ports are capable of populating the MAC table.
Non-designated Port
The non-designated port is a switch port that is blocked, so it is not forwarding
data frames and not populating the MAC address table with source addresses.
A non-designated port is not a root port or a designated port. For some
variants of STP, the non-designated port is called an alternate port.
Disabled Port
The disabled port is a switch port that is administratively shut down. A
disabled port does not function in the spanning-tree process. There are no
disabled ports in the example.
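On a Cisco switch, the roles and states that STP has assigned can be verified
directly; the output (not reproduced here, since it depends on the topology)
lists the root bridge and each port with its role and its forwarding or
blocking state:

```
S3# show spanning-tree vlan 1   ! shows root bridge, port roles and states
```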
4.3 STP Convergence Steps
Convergence is an important aspect of the spanning-tree process. Convergence
is the time it takes for the network to determine which switch is going to
assume the role of the root bridge, go through all the different port states, and
set all switch ports to their final spanning-tree port roles where all potential
loops are eliminated. The convergence process takes time to complete because
of the different timers used to coordinate the process.
Step 1. Electing a Root Bridge
The first step of the spanning-tree convergence process is to elect a root
bridge. The root bridge is the basis for all spanning-tree path cost calculations
and ultimately leads to the assignment of the different port roles used to
prevent loops from occurring.
A root bridge election is triggered after a switch has finished booting up, or
when a path failure has been detected on a network. Initially, all switch ports
are configured for the blocking state, which by default lasts 20 seconds. This
is done to prevent a loop from occurring before STP has had time to calculate
the best root paths and configure all switch ports to their specific roles. While
the switch ports are in a blocking state, they are still able to send and receive
BPDU frames so that the spanning-tree root election can proceed. Spanning
tree supports a maximum network diameter of seven switch hops from end to
end. This allows the entire root bridge election process to occur within 14
seconds, which is less than the time the switch ports spend in the blocking
state.
Immediately after the switches have finished booting up, they start sending
BPDU frames advertising their BID in an attempt to become the root bridge.
Initially, all switches in the network assume that they are the root bridge for
the broadcast domain. The flood of BPDU frames on the network have the
root ID field matching the BID field, indicating that each switch considers
itself the root bridge. These BPDU frames are sent every 2 seconds based on
the default hello timer value.
As each switch receives the BPDU frames from its neighboring switches, they
compare the root ID from the received BPDU frame with the root ID
configured locally. If the root ID from the received BPDU frame is lower than
the root ID it currently has, the root ID field is updated indicating the new best
candidate for the root bridge role.
After the root ID field is updated on a switch, the switch then incorporates the
new root ID in all future BPDU frame transmissions. This ensures that the
lowest root ID is always conveyed to all other adjacent switches in the
network. The root bridge election ends once the lowest bridge ID populates
the root ID field of all switches in the broadcast domain.
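In practice the election is rarely left to the default priorities: lowering a
switch's bridge priority (in multiples of 4096) gives it the lowest BID and
makes it the root. A sketch (the VLAN number is an assumption):

```
S1(config)# spanning-tree vlan 1 priority 4096   ! explicit low priority
! or let IOS choose a priority below the current root automatically:
S1(config)# spanning-tree vlan 1 root primary
```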
Step 2. Elect Root Ports
Now that the root bridge has been determined, the switches start configuring
the port roles for each of their switch ports. The first port role that needs to be
determined is the root port role.
Every switch in a spanning-tree topology, except for the root bridge, has a
single root port defined. The root port is the switch port with the lowest path
cost to the root bridge. Normally path cost alone determines which switch port
becomes the root port. However, additional port characteristics determine the
root port when two or more ports on the same switch have the same path cost
to the root. This can happen when redundant links are used to uplink one
switch to another switch when an EtherChannel configuration is not used.
Recall that Cisco EtherChannel technology allows you to configure multiple
physical Ethernet type links as one logical link.
Switch ports with equivalent path costs to the root use the configurable port
priority value. They use the port ID to break a tie. When a switch chooses one
equal path cost port as a root port over another, the losing port is configured
as non-designated to avoid a loop.
The process of determining which port becomes a root port happens during the
root bridge election BPDU exchange. Path costs are updated immediately
when BPDU frames arrive indicating a new root ID or redundant path. At the
time the path cost is updated, the switch enters decision mode to determine if
port configurations need to be updated. The port role decisions do not wait
until all switches settle on which switch is going to be the final root bridge. As
a result, the port role for a given switch port may change multiple times during
convergence, until it finally settles on its final port role after the root ID
changes for the last time.
Step 3. Electing Designated Ports and Non-Designated Ports
After a switch determines which of its ports is the root port, the remaining
ports must be configured as either a designated port (DP) or a non-designated
port (non-DP) to finish creating the logical loop-free spanning tree.
Each segment in a switched network can have only one designated port. When
two non-root port switch ports are connected on the same LAN segment, a
competition for port roles occurs. The two switches exchange BPDU frames to
sort out which switch port is designated and which one is non-designated.
Generally, when a switch port is configured as a designated port, it is based on
the BID. However, keep in mind that the first priority is the lowest path cost to
the root bridge, and that the BID of the sender is considered only if the path
costs are equal.
When two switches exchange their BPDU frames, they examine the sending
BID of the received BPDU frame to see if it is lower than its own. The switch
with the lower BID wins the competition and its port is configured in the
designated role. The losing switch configures its switch port to be
non-designated and, therefore, in the blocking state to prevent the loop from
occurring.
The process of determining the port roles happens concurrently with the root
bridge election and root port designation. As a result, the designated and
non-designated roles may change multiple times during the convergence process
until the final root bridge has been determined. The entire process of electing
the root bridge, determining the root ports, and determining the designated and
non-designated ports happens within the 20-second blocking port state. This
convergence time is based on the 2-second hello timer for BPDU frame
transmission and the seven-switch diameter supported by STP. The max age
delay of 20 seconds provides enough time for the seven-switch diameter with
the 2-second hello timer between BPDU frame transmissions.
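These timers can be tuned, although the defaults (hello 2 seconds, forward delay 15 seconds, max age 20 seconds) are usually left alone; only the root bridge's values propagate to the rest of the tree. A hedged sketch, assuming Cisco IOS and VLAN 1:

```
! Hypothetical STP timer tuning (default values shown)
Switch(config)# spanning-tree vlan 1 hello-time 2
Switch(config)# spanning-tree vlan 1 forward-time 15
Switch(config)# spanning-tree vlan 1 max-age 20
```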
4.4 STP TOPOLOGY CHANGE NOTIFICATION
STP Topology Change Notification Process
A switch considers it has detected a topology change either when a port that
was forwarding is going down (blocking for instance) or when a port
transitions to forwarding and the switch has a designated port. When a change
is detected, the switch notifies the root bridge of the spanning tree. The root
bridge then broadcasts the information into the whole network.
In normal STP operation, a switch keeps receiving configuration BPDU
frames from the root bridge on its root port. However, it never sends out a
BPDU toward the root bridge. To make this notification possible, a special BPDU called the
topology change notification (TCN) BPDU was introduced. When a switch
needs to signal a topology change, it starts to send TCNs on its root port. The
TCN is a very simple BPDU that contains no information and is sent out at the
hello time interval. The receiving switch is called the designated bridge and it
acknowledges the TCN by immediately sending back a normal BPDU with the
topology change acknowledgement (TCA) bit set. This exchange continues
until the root bridge responds.
Broadcast Notification
Once the root bridge is aware that there has been a topology change event in
the network, it starts to send out its configuration BPDUs with the topology
change (TC) bit set. These BPDUs are relayed by every switch in the network
with this bit set. As a result, all switches become aware of the topology change
and can reduce their aging time to forward delay. Switches receive topology
change BPDUs on both forwarding and blocking ports.
The TC bit is set by the root for a period of max age + forward delay seconds,
which is 20+15=35 seconds by default.
Fig 4.1
Fig 4.2
Fig 4.3
BPDU Process
Each switch in the broadcast domain initially assumes that it is the root bridge
for the spanning-tree instance, so the BPDU frames sent contain the BID of
the local switch as the root ID. By default, BPDU frames are sent every 2
seconds after a switch is booted; that is, the default value of the hello timer
specified in the BPDU frame is 2 seconds. Each switch maintains local
information about its own BID, the root ID, and the path cost to the root.
When adjacent switches receive a BPDU frame, they compare the root ID
from the BPDU frame with the local root ID. If the root ID in the BPDU is
lower than the local root ID, the switch updates the local root ID and the ID in
its BPDU messages. These messages serve to indicate the new root bridge on
the network. Also, the path cost is updated to indicate how far away the root
bridge is. For example, if the BPDU was received on a Fast Ethernet switch
port, the path cost would be set to 19. If the local root ID is lower than the root
ID received in the BPDU frame, the BPDU frame is discarded.
After a root ID has been updated to identify a new root bridge, all subsequent
BPDU frames sent from that switch contain the new root ID and updated path
cost. That way, all other adjacent switches are able to see the lowest root ID
identified at all times. As the BPDU frames pass between other adjacent
switches, the path cost is continually updated to indicate the total path cost to
the root bridge. Each switch in the spanning tree uses its path costs to identify
the best possible path to the root bridge.
BID Fields
The bridge ID (BID) is used to determine the root bridge on a network. This
topic describes what makes up a BID and how to configure the BID on a
switch to influence the election process to ensure that specific switches are
assigned the role of root bridge on the network.
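Because the priority field of the BID is compared before the MAC address, lowering the priority on the switch that should become root is the usual way to influence the election. A sketch, assuming Cisco IOS; the switch name and VLAN are illustrative:

```
! Hypothetical: make DSW1 the root bridge for VLAN 10 by
! lowering its priority (default 32768, set in steps of 4096)
DSW1(config)# spanning-tree vlan 10 priority 4096
! Or let IOS pick a priority lower than the current root's:
DSW1(config)# spanning-tree vlan 10 root primary
```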
Fig 4.4
Fig 4.5
Fig 4.6
4.5 STP IN OUR PROJECT
STP has been implemented in our project to address the problems caused by
the redundant links we have incorporated, and also to account for path and
switch failures. Whenever a path or a switch fails, the STP algorithm
reconverges so as to ensure availability despite failures in the network. The
verification of the STP process is described in the later section on
verification of technologies.
CHAPTER 5
REMOTE LOGIN USING TELNET
Remote management of the devices is a great need in the enterprise network
for assuring a maintainable network. Long before desktop computers with
sophisticated graphical interfaces existed, people used text-based systems
which were often just display terminals physically attached to a central
computer. Once networks were available, people needed a way to remotely
access the computer systems in the same manner that they did with the directly
attached terminals.
Telnet was developed to meet that need. Telnet dates back to the early 1970s
and is among the oldest of the Application layer protocols and services in the
TCP/IP suite. Telnet provides a standard method of emulating text-based
terminal devices over the data network. Both the protocol itself and the client
software that implements the protocol are commonly referred to as Telnet.
Appropriately enough, a connection using Telnet is called a Virtual Terminal
(VTY) session, or connection. Rather than using a physical device to connect
to the server, Telnet uses software to create a virtual device that provides the
same features of a terminal session with access to the server command line
interface (CLI).
To support Telnet client connections, the server runs a service called the
Telnet daemon. A virtual terminal connection is established from an end
device using a Telnet client application. Most operating systems include an
Application layer Telnet client. On a Microsoft Windows PC, Telnet can be
run from the command prompt. Other common terminal applications that run
as Telnet clients are HyperTerminal, Minicom, and TeraTerm.
Once a Telnet connection is established, users can perform any authorized
function on the server, just as if they were using a command line session on
the server itself. If authorized, they can start and stop processes, configure the
device, and even shut down the system.
Telnet is a client/server protocol and it specifies how a VTY session is
established and terminated. It also provides the syntax and order of the
commands used to initiate the Telnet session, as well as control commands
that can be issued during a session. Each Telnet command consists of at least
two bytes. The first byte is a special character called the Interpret as Command
(IAC) character. As its name implies, the IAC defines the next byte as a
command rather than text.
Some sample Telnet protocol commands include:
1. Are You There (AYT) - Lets the user request that something
appear on the terminal screen to indicate that the VTY session
is active.
2. Erase Line (EL) - Deletes all text from the current line.
3. Interrupt Process (IP) - Suspends, interrupts, aborts, or
terminates the process to which the Virtual Terminal is
connected. For example, if a user started a program on the
Telnet server via the VTY, he or she could send an IP
command to stop the program.
5.1 TELNET IN OUR PROJECT
We have used Telnet in our project to facilitate remote access to all the
internetworking devices of the enterprise branch network: the access layer
devices (layer 2 switches), the distribution layer devices (layer 3
multilayer switches), and the core layer devices (layer 3 multilayer
switches and routers).
Remote login is facilitated by the management VLAN 99. Each access layer
and distribution layer device is assigned a management VLAN address so that
the administrator can log in remotely to its terminal.
ASW1: 192.168.99.6/24
ASW2:192.168.99.7/24
ASW3:192.168.99.8/24
DSW1:192.168.99.2/24
DSW2:192.168.99.3/24
CSW1:172.16.2.2/24
CSW2:172.16.5.2/24
R1:172.16.12.2/24
R2:172.16.13.2/24
TELNET REQUIREMENTS:
1. Line configuration using:
Router(config)#line vty 0 10
Router(config-line)#password cisco
Router(config-line)#login
2. Password at the privileged mode:
Router(config)#enable password cisco
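Putting both requirements together with the management VLAN interface, the access-switch side might look like the following sketch (the VLAN 99 address matches ASW1 above; the passwords are illustrative):

```
! Hypothetical Telnet enablement on ASW1
ASW1(config)# interface vlan 99
ASW1(config-if)# ip address 192.168.99.6 255.255.255.0
ASW1(config-if)# no shutdown
ASW1(config-if)# exit
ASW1(config)# line vty 0 10
ASW1(config-line)# password cisco
ASW1(config-line)# login
ASW1(config-line)# exit
ASW1(config)# enable password cisco
```

From the administrator's PC, `telnet 192.168.99.6` prompts for the VTY password, and `enable` then prompts for the privileged-mode password.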
Telnet is not a very secure protocol: all traffic, including passwords, is
sent in clear text. We therefore plan to implement the Secure Shell (SSH)
protocol in the major project work.
Chapter 6
ADDRESS ALLOCATION BY DHCP
6.1 What is DHCP?
Every device that connects to a network needs an IP address. Network
administrators assign static IP addresses to routers, servers, and other network
devices whose locations (physical and logical) are not likely to change.
Administrators enter static IP addresses manually when they configure devices
to join the network. Static addresses also enable administrators to manage
those devices remotely.
However, computers in an organization often change locations, physically and
logically. Administrators are unable to keep up with having to assign new IP
addresses every time an employee moves to a different office or cubicle.
Desktop clients do not require a static address. Instead, a workstation can use
any address within a range of addresses. This range is typically within an IP
subnet. A workstation within a specific subnet can be assigned any address
within a specified range. Other items such as the subnet mask, default
gateway, and Domain Name System (DNS) server are assigned a value which
is common either to that subnet or entire administrated network. For example,
all hosts within the same subnet will receive different host IP addresses, but
will receive the same subnet mask and default gateway IP address.
Recall from CCNA Exploration: Network Fundamentals that DHCP makes the
process of assigning new IP addresses almost transparent. DHCP assigns IP
addresses and other important network configuration information
dynamically. Because desktop clients typically make up the bulk of network
nodes, DHCP is an extremely useful and timesaving tool for network
administrators. RFC 2131 describes DHCP.
Administrators typically prefer a network server to offer DHCP services,
because these solutions are scalable and relatively easy to manage. However,
in a small branch or SOHO location, a Cisco router can be configured to
provide DHCP services without the need for an expensive dedicated server. A
Cisco IOS feature set called Easy IP offers an optional, full-featured DHCP
server.
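With Easy IP, the DHCP server is configured on the router per subnet. A hedged sketch for one of the VLANs used later in this project (the excluded range, default gateway, and DNS server addresses are assumptions):

```
! Hypothetical Easy IP DHCP pool for the 192.168.10.0/24 VLAN
Router(config)# ip dhcp excluded-address 192.168.10.1 192.168.10.10
Router(config)# ip dhcp pool VLAN10
Router(dhcp-config)# network 192.168.10.0 255.255.255.0
Router(dhcp-config)# default-router 192.168.10.1
Router(dhcp-config)# dns-server 192.168.10.5
```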
Fig 6.1
Fig 6.2
6.2 DHCP Message Format
The developers of DHCP needed to maintain compatibility with BOOTP and
consequently used the same BOOTP message format. However, because
DHCP has more functionality than BOOTP, the DHCP options field was
added. When communicating with older BOOTP clients, the DHCP options
field is ignored.
The figure shows the format of a DHCP message. The fields are as follows
Operation Code (OP) - Specifies the general type of message. A value of 1
indicates a request message; a value of 2 is a reply message.
Hardware Type - Identifies the type of hardware used in the network. For
example, 1 is Ethernet, 15 is Frame Relay, and 20 is a serial line. These are
the same codes used in ARP messages.
Hardware Address length - 8 bits to specify the length of the address.
Hops - Set to 0 by a client before transmitting a request and used by relay
agents to control the forwarding of DHCP messages.
Transaction Identifier - 32-bit identification generated by the client to allow
it to match up the request with replies received from DHCP servers.
Seconds - Number of seconds elapsed since a client began attempting to
acquire or renew a lease. Busy DHCP servers use this number to prioritize
replies when multiple client requests are outstanding.
Flags - Only one of the 16 bits is used: the broadcast flag. A client
that does not know its IP address when it sends a request sets the flag to 1.
This value tells the DHCP server or relay agent receiving the request that it
should send the reply back as a broadcast.
Client IP Address - The client puts its own IP address in this field if and only
if it has a valid IP address while in the bound state; otherwise, it sets the field
to 0. The client can only use this field when its address is actually valid and
usable, not during the process of acquiring an address.
Your IP Address - IP address that the server assigns to the client.
Server IP Address - Address of the server that the client should use for the
next step in the bootstrap process, which may or may not be the server sending
this reply. The sending server always includes its own IP address in a special
field called the Server Identifier DHCP option.
Gateway IP Address - Routes DHCP messages when DHCP relay agents are
involved. The gateway address facilitates communications of DHCP requests
and replies between the client and a server that are on different subnets or
networks.
Client Hardware Address - Specifies the Physical layer address of the client.
Server Name - The server sending a DHCPOFFER or DHCPACK message
may optionally put its name in this field. This can be a simple text nickname
or a DNS domain name, such as dhcpserver.netacad.net.
Boot Filename - Optionally used by a client to request a particular type of
boot file in a DHCPDISCOVER message. Used by a server in a DHCPOFFER
to fully specify a boot file directory and filename.
Options - Holds DHCP options, including several parameters required for
basic DHCP operation. This field is variable in length. Both client and server
may use this field.
Fig 6.3
Fig 6.4
Fig 6.5
6.3 DHCP IN OUR PROJECT
A dedicated server at the access layer acts as the DHCP server, providing
addresses to the clients connected to the access layer switches ASW1, ASW2,
and ASW3.
Pools for 192.168.10.0/24, 192.168.20.0/24, 192.168.30.0/24, and
192.168.99.0/24 are added to the server to provide IP addresses to clients
connected to any of the four VLANs described.
TECHNOLOGIES IMPLEMENTED FOR INTER-BRANCH COMMUNICATION
Chapter 7
FRAME RELAY
Frame Relay is a high-performance WAN protocol that operates at the
physical and data link layers of the OSI reference model.
Frame Relay has become one of the most extensively used WAN protocols,
primarily because it is inexpensive compared to dedicated lines. In addition,
configuring user equipment in a Frame Relay network is very simple. Frame
Relay connections are created by configuring CPE routers or other devices to
communicate with a service provider Frame Relay switch. The service
provider configures the Frame Relay switch, which helps keep end-user
configuration tasks to a minimum.
Frame Relay has become the most widely used WAN technology in the world.
Large enterprises, governments, ISPs, and small businesses use Frame Relay,
primarily because of its price and flexibility. As organizations grow and
depend more and more on reliable data transport, traditional leased-line
solutions are prohibitively expensive. The pace of technological change, and
mergers and acquisitions in the networking industry, demand and require more
flexibility.
Frame Relay reduces network costs by using less equipment, less complexity,
and an easier implementation. Moreover, Frame Relay provides greater
bandwidth, reliability, and resiliency than private or leased lines. With
increasing globalization and the growth of one-to-many branch office
topologies, Frame Relay offers simpler network architecture and lower cost of
ownership.
7.1 WHY FRAME RELAY
Cost Effectiveness of Frame Relay
Frame Relay is a more cost-effective option for two reasons. First, with
dedicated lines, customers pay for an end-to-end connection. That includes the
local loop and the network link. With Frame Relay, customers only pay for the
local loop, and for the bandwidth they purchase from the network provider.
Distance between nodes is not important. While in a dedicated-line model,
customers use dedicated lines provided in increments of 64 kb/s, Frame Relay
customers can define their virtual circuit needs in far greater granularity, often
in increments as small as 4 kb/s.
The second reason for Frame Relay's cost effectiveness is that it shares
bandwidth across a larger base of customers. Typically, a network provider
can service 40 or more 56 kb/s customers over one T1 circuit. Using dedicated
lines would require more DSU/CSUs (one for each line) and more
complicated routing and switching. Network providers save because there is
less equipment to purchase and maintain.
Fig 7.1
Flexibility Of Frame Relay
A virtual circuit provides considerable flexibility in network design. Looking
at the figure, you can see that Span's offices all connect to the Frame Relay
cloud over their respective local loops. What happens in the cloud is really of
no concern at this time. All that matters is that when any Span office wants to
communicate with any other Span office, all it needs to do is connect to a
virtual circuit leading to the other office. In Frame Relay, the end of each
connection has a number to identify it called a Data Link Connection Identifier
(DLCI). Any station can connect with any other simply by stating the address
of that station and DLCI number of the line it needs to use. In a later section,
you will learn that when Frame Relay is configured, all the data from all the
configured DLCIs flows through the same port of the router. Try to picture the
same flexibility using dedicated lines. Not only is it complicated, but it also
requires considerably more equipment.
7.2 FRAME RELAY WAN
In the late 1970s and into the early 1990s, the WAN technology joining the
end sites was typically using the X.25 protocol. Now considered a legacy
protocol, X.25 was a very popular packet switching technology because it
provided a very reliable connection over unreliable cabling infrastructures. It
did so by including additional error control and flow control. However, these
additional features added overhead to the protocol. Its major application was
for processing credit card authorization and for automatic teller machines.
This course mentions X.25 only for historical purposes.
When you build a WAN, regardless of the transport you choose, there is
always a minimum of three basic components, or groups of components,
connecting any two sites. Each site needs its own equipment (DTE) to access
the telephone company's CO serving the area (DCE). The third component sits
in the middle, joining the two access points. In the figure, this is the portion
supplied by the Frame Relay backbone.
Frame Relay has lower overhead than X.25 because it has fewer capabilities.
For example, Frame Relay does not provide error correction, because modern
WAN facilities offer more reliable connection services and a higher degree
of reliability than older facilities. The Frame Relay node simply drops
packets without notification when it detects errors. Any necessary error
correction, such as retransmission of data, is left to the endpoints.
Fig 7.2
Frame Relay handles volume and speed efficiently by combining the
necessary functions of the data link and network layers into one simple
protocol. As a data link protocol, Frame Relay provides access to a network,
delimits and delivers frames in proper order, and recognizes transmission
errors through a standard Cyclic Redundancy Check. As a network protocol,
Frame Relay provides multiple logical connections over a single physical
circuit and allows the network to route data over those connections to its
intended destinations.
Frame Relay operates between an end-user device, such as a LAN bridge or
router, and a network. The network itself can use any transmission method that
is compatible with the speed and efficiency that Frame Relay applications
require. Some networks use Frame Relay itself, but others use digital circuit
switching or ATM cell relay systems. The figure shows a circuit-switching
backbone as indicated by the Class 4/5 switches.
7.3 FRAME RELAY ENCAPSULATION
Frame Relay takes data packets from a network layer protocol, such as IP or
IPX, encapsulates them as the data portion of a Frame Relay frame, and then
passes the frame to the physical layer for delivery on the wire. To understand
how this works, it is helpful to understand how it relates to the lower levels of
the OSI model.
The figure shows how Frame Relay encapsulates data for transport and moves
it down to the physical layer for delivery.
Fig 7.3
7.4 CONFIGURING FRAME RELAY
Fig 7.4
1. Enable Frame Relay Encapsulation
This first figure displays how Frame Relay has been configured on the serial
interfaces. This involves assigning an IP address, setting the encapsulation
type, and allocating bandwidth. The figure shows routers at each end of the
Frame Relay link with the configuration scripts for routers R1 and R2.
Step 1. Setting the IP Address on the Interface
On a Cisco router, Frame Relay is most commonly supported on synchronous
serial interfaces. Use the ip address command to set the IP address of the
interface. You can see that R1 has been assigned 10.1.1.1/24, and R2 has been
assigned IP address 10.1.1.2/24.
Step 2. Configuring Encapsulation
The encapsulation frame-relay interface configuration command enables
Frame Relay encapsulation and allows Frame Relay processing on the
supported interface. There are two encapsulation options to choose from, and
these are described below.
Fig 7.5
Step 3. Setting the Bandwidth
Use the bandwidth command to set the bandwidth of the serial interface.
Specify bandwidth in kb/s. This command notifies the routing protocol that
bandwidth is statically configured on the link. The EIGRP and OSPF routing
protocols use the bandwidth value to calculate and determine the metric of the
link.
Step 4. Setting the LMI Type (optional)
This is an optional step as Cisco routers autosense the LMI type. Recall that
Cisco supports three LMI types: Cisco, ANSI Annex D, and Q933-A Annex A,
and that the default LMI type for Cisco routers is cisco.
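The four steps above, applied to router R1 from the example, might look like this sketch (the interface number, bandwidth, and LMI type are assumptions; the LMI line is optional since IOS autosenses it):

```
! Hypothetical R1 serial interface, steps 1-4 combined
R1(config)# interface serial 0/0/0
R1(config-if)# ip address 10.1.1.1 255.255.255.0
R1(config-if)# encapsulation frame-relay
R1(config-if)# bandwidth 64
R1(config-if)# frame-relay lmi-type ansi
R1(config-if)# no shutdown
```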
2. Encapsulation Options
Recall that the default encapsulation type on a serial interface on a Cisco
router is the Cisco proprietary version of HDLC. To change the encapsulation
from HDLC to Frame Relay, use the encapsulation frame-relay [cisco | ietf]
command. The no form of the encapsulation frame-relay command removes
the Frame Relay encapsulation on the interface and returns the interface to the
default HDLC encapsulation.
The default Frame Relay encapsulation enabled on supported interfaces is the
Cisco encapsulation. Use this option if connecting to another Cisco router.
Many non-Cisco devices also support this encapsulation type. It uses a 4-byte
header, with 2 bytes to identify the DLCI and 2 bytes to identify the packet
type.
3. Frame Relay Sub interfaces
Frame Relay can partition a physical interface into multiple virtual interfaces
called sub interfaces. A sub interface is simply a logical interface that is
directly associated with a physical interface. Therefore, a Frame Relay sub
interface can be configured for each of the PVCs coming into a physical serial
interface.
To enable the forwarding of broadcast routing updates in a Frame Relay
network, you can configure the router with logically assigned sub interfaces. A
partially meshed network can be divided into a number of smaller, fully
meshed, point-to-point networks. Each point-to-point sub network can be
assigned a unique network address, which allows packets received on a
physical interface to be sent out the same physical interface because the
packets are forwarded on VCs in different sub interfaces.
Frame Relay sub interfaces can be configured in either point-to-point or
multipoint mode:
Point-to-point - A single point-to-point sub interface establishes one
PVC connection to another physical interface or sub interface on a
remote router. In this case, each pair of the point-to-point routers is on
its own subnet, and each point-to-point sub interface has a single
DLCI. In a point-to-point environment, each sub interface is acting like
a point-to-point interface. Typically, there is a separate subnet for each
point-to-point VC. Therefore, routing update traffic is not subject to
the split horizon rule.
Multipoint - A single multipoint sub interface establishes multiple
PVC connections to multiple physical interfaces or sub interfaces on
remote routers. All the participating interfaces are in the same subnet.
The sub interface acts like an NBMA Frame Relay interface, so routing
update traffic is subject to the split horizon rule. Typically, all
multipoint VCs belong to the same subnet.
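A point-to-point sub interface is created under the physical interface and tied to its PVC with the frame-relay interface-dlci command. A sketch, assuming Cisco IOS (the interface numbers, addresses, and DLCI are illustrative):

```
! Hypothetical point-to-point sub interface using DLCI 102
R1(config)# interface serial 0/0/0
R1(config-if)# encapsulation frame-relay
R1(config-if)# exit
R1(config)# interface serial 0/0/0.102 point-to-point
R1(config-subif)# ip address 10.1.2.1 255.255.255.252
R1(config-subif)# frame-relay interface-dlci 102
```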
7.5 VERIFICATION OF FRAME RELAY
Verify Frame Relay Interfaces
After configuring a Frame Relay PVC and when troubleshooting an issue,
verify that Frame Relay is operating correctly on that interface using the show
interfaces command.
Recall that with Frame Relay, the router is normally considered a DTE device.
However, a Cisco router can be configured as a Frame Relay switch. In such
cases, the router becomes a DCE device when it is configured as a Frame
Relay switch.
The show interfaces command displays how the encapsulation is set up, along
with useful Layer 1 and Layer 2 status information, including:
LMI type
LMI DLCI
Frame Relay DTE/DCE type
The first step is always to confirm that the interfaces are properly configured.
The figure shows a sample output for the show interfaces command. Among
other things, you can see details about the encapsulation, the DLCI on the
Frame Relay-configured serial interface, and the DLCI used for the LMI. You
should confirm that these values are the expected values. If not, you may need
to make changes.
Checking LMI configuration
The next step is to look at some LMI statistics using the show frame-relay lmi
command. In the output, look for any non-zero "Invalid" items. This helps
isolate the problem to a Frame Relay communications issue between the
carrier's switch and your router.
PVC Configuration Check
Use the show frame-relay pvc [interface interface] [dlci] command to view
PVC and traffic statistics. This command is also useful for viewing the
number of BECN and FECN packets received by the router. The PVC status
can be active, inactive, or deleted.
A final task is to confirm whether the frame-relay inverse-arp command
resolved a remote IP address to a local DLCI. Use the show frame-relay map
command to display the current map entries and information about the
connections.
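The verification steps above can be run in sequence from privileged mode; a typical troubleshooting pass might be:

```
! Hypothetical verification sequence on a DTE router
R1# show interfaces serial 0/0/0   ! encapsulation, LMI type, line state
R1# show frame-relay lmi           ! look for non-zero "Invalid" counters
R1# show frame-relay pvc           ! PVC status: active/inactive/deleted
R1# show frame-relay map           ! IP-to-DLCI mappings
```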
7.6 FRAME RELAY IN OUR PROJECT
In our project, Frame Relay is implemented on the private infrastructure.
For simplicity we have used a single Frame Relay switch that connects the
three branches of the enterprise; the three branch routers, which are DTEs,
are connected via the Frame Relay switch. Frame Relay is implemented in
point-to-point form, with DLCI values configured for each sub-interface.
For DTE router Internet: 192.168.1.3/29 is used on a point-to-point
interface with DLCI value 203.
For DTE router R5: 192.168.1.2/29 is used on a point-to-point interface
with DLCI value 202.
For DTE router R4: 192.168.1.1/29 is used on two point-to-point interfaces
with DLCI values 101 and 102.
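The R4 side described above might be sketched as follows (the interface numbers, and the addressing of the two sub-interfaces, are assumptions, not the project's exact configuration):

```
! Hypothetical R4: two point-to-point sub interfaces, DLCIs 101 and 102
R4(config)# interface serial 0/0/0
R4(config-if)# encapsulation frame-relay
R4(config-if)# exit
R4(config)# interface serial 0/0/0.101 point-to-point
R4(config-subif)# frame-relay interface-dlci 101
R4(config-subif)# exit
R4(config)# interface serial 0/0/0.102 point-to-point
R4(config-subif)# frame-relay interface-dlci 102
```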
Chapter 8
VIRTUAL PRIVATE NETWORK
The Internet is a worldwide, publicly accessible IP network. Because of its
vast global proliferation, it has become an attractive way to interconnect
remote sites. However, the fact that it is a public infrastructure poses security
risks to enterprises and their internal networks. Fortunately, VPN technology
enables organizations to create private networks over the public Internet
infrastructure that maintain confidentiality and security.
Organizations use VPNs to provide a virtual WAN infrastructure that connects
branch offices, home offices, business partner sites, and remote telecommuters
to all or portions of their corporate network. To remain private, the traffic is
encrypted. Instead of using a dedicated Layer 2 connection, such as a leased
line, a VPN uses virtual connections that are routed through the Internet.
Earlier in this course, an analogy involving getting priority tickets for a
stadium show was introduced. An extension to that analogy will help explain
how a VPN works. Picture the stadium as a public place in the same way as
the Internet is a public place. When the show is over, the public leaves through
public aisles and doorways, jostling and bumping into each other along the
way. Petty thefts are threats to be endured.
Consider how the performers leave. Their entourage all link arms and form
cordons through the mobs and protect the celebrities from all the jostling and
pushing. In effect, these cordons form tunnels. The celebrities are whisked
through tunnels into limousines that carry them cocooned to their
destinations. This section describes how VPNs work in much the same way,
bundling data and safely moving it across the Internet through protective
tunnels. An understanding of VPN technology is essential to be able to
implement secure teleworker services on enterprise networks.
Fig 8.1
Organizations using VPNs benefit from increased flexibility and productivity.
Remote sites and teleworkers can connect securely to the corporate network
from almost any place. Data on a VPN is encrypted and undecipherable to
anyone not entitled to have it. VPNs bring remote hosts inside the firewall,
giving them close to the same levels of access to network devices as if they
were in a corporate office.
The figure shows leased lines in red. The blue lines represent VPN-based
connections. Consider these benefits when using VPNs:
Cost savings - Organizations can use cost-effective, third-party Internet
transport to connect remote offices and users to the main corporate site.
This eliminates expensive dedicated WAN links and modem banks. By
using broadband, VPNs reduce connectivity costs while increasing
remote connection bandwidth.
Security - Advanced encryption and authentication protocols protect
data from unauthorized access.
Scalability - VPNs use the Internet infrastructure within ISPs and
carriers, making it easy for organizations to add new users.
Organizations, big and small, are able to add large amounts of capacity
without adding significant infrastructure.
Fig 8.2
8.2 VPN COMPONENTS
Fig 8.3
8.3 VPN SECURITY
Fig 8.4
Fig 8.5
8.4 VPN IN OUR PROJECT
We have used the site-to-site VPN configuration in our project over the
public infrastructure: the three routers Internet, R4, and R5 are configured
for the VPNs as described above in the configuration.
Note that there are two options for the WAN connection in our network:
1. VPN: The VPN is a WAN connection over the public infrastructure. For
this purpose the routers R4, R5, and Internet themselves terminate the VPN,
rather than using a dedicated VPN concentrator, thereby realizing cost
efficiency.
2. FRAME RELAY: The Frame Relay WAN connection is used on the private
infrastructure. The two WAN connections together provide redundancy for
inter-branch communication.
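A minimal site-to-site IPsec sketch, assuming Cisco IOS crypto support on R4 (the peer address, pre-shared key, algorithms, and access list are all illustrative assumptions, not the project's exact configuration):

```
! Hypothetical site-to-site IPsec VPN on R4
R4(config)# crypto isakmp policy 10
R4(config-isakmp)# encryption aes
R4(config-isakmp)# authentication pre-share
R4(config-isakmp)# group 2
R4(config-isakmp)# exit
R4(config)# crypto isakmp key VPNKEY address 192.168.1.3
R4(config)# crypto ipsec transform-set TS esp-aes esp-sha-hmac
R4(config)# access-list 101 permit ip 172.16.0.0 0.0.255.255 192.168.10.0 0.0.0.255
R4(config)# crypto map VPNMAP 10 ipsec-isakmp
R4(config-crypto-map)# set peer 192.168.1.3
R4(config-crypto-map)# set transform-set TS
R4(config-crypto-map)# match address 101
R4(config-crypto-map)# exit
R4(config)# interface serial 0/0/1
R4(config-if)# crypto map VPNMAP
```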
Chapter 9
DYNAMIC ROUTE SHARING WITH OSPF
Routing protocols are used to facilitate the exchange of routing information
between routers. Routing protocols allow routers to dynamically share
information about remote networks and automatically add this information to
their own routing tables.
Routing protocols determine the best path to each network which is then added
to the routing table. One of the primary benefits to using a dynamic routing
protocol is that routers exchange routing information whenever there is a
topology change. This exchange allows routers to automatically learn about
new networks and also to find alternate paths when there is a link failure to a
current network.
The Purpose of Dynamic Routing Protocols
A routing protocol is a set of processes, algorithms, and messages that are
used to exchange routing information and populate the routing table with the
routing protocol's choice of best paths. The purpose of a routing protocol
includes:
Discovery of remote networks
Maintaining up-to-date routing information
Choosing the best path to destination networks
Ability to find a new best path if the current path is no longer available
OSPF, the link-state routing protocol implemented in our project, has the
following advantages over other protocols:
1. Non-proprietary: OSPF is not vendor specific and can therefore be
implemented on any router or switch, irrespective of the manufacturer.
2. Faster convergence: This link-state routing protocol converges much faster
than distance vector protocols, so it can quickly adapt to a changing
topology.
3. Scalability: The protocol easily accommodates the growth of the network
being developed.
4. Event Driven Updates: After the initial flooding of LSPs, link-state
routing protocols only send out an LSP when there is a change in the
topology. The LSP contains only the information regarding the affected
link. Unlike some distance vector routing protocols, link-state routing
protocols do not send periodic updates.
5. Hierarchical Design: Link-state routing protocols such as OSPF and IS-IS
use the concept of areas. Multiple areas create a hierarchical design to
networks, allowing for better route aggregation (summarization) and the
isolation of routing issues within an area.
Fig 9.1
9.2 OSPF MESSAGE ENCAPSULATION
The data portion of an OSPF message is encapsulated in a packet. This data
field can include one of five OSPF packet types.
The OSPF packet header is included with every OSPF packet, regardless of its
type. The OSPF packet header and packet type-specific data are then
encapsulated in an IP packet. In the IP packet header, the protocol field is set
to 89 to indicate OSPF, and the destination address is set to one of two
multicast addresses: 224.0.0.5 or 224.0.0.6. If the OSPF packet is encapsulated
in an Ethernet frame, the destination MAC address is also a multicast address:
01-00-5E-00-00-05 or 01-00-5E-00-00-06.
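The IP-to-MAC mapping above can be checked with a short sketch (Python is used here purely for illustration): the low 23 bits of the multicast IP address are copied into the 01-00-5E Ethernet multicast prefix.

```python
def multicast_mac(ip: str) -> str:
    """Map an IPv4 multicast address to its Ethernet multicast MAC:
    the low 23 bits of the IP go into the 01-00-5E-... prefix."""
    o = [int(x) for x in ip.split(".")]
    mac = [0x01, 0x00, 0x5E, o[1] & 0x7F, o[2], o[3]]
    return "-".join("%02X" % b for b in mac)

# The two OSPF multicast addresses map to the MAC addresses quoted above.
print(multicast_mac("224.0.0.5"))  # 01-00-5E-00-00-05
print(multicast_mac("224.0.0.6"))  # 01-00-5E-00-00-06
```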
Fig 9.2
9.2.1 OSPF PACKET TYPES
1. Hello - Hello packets are used to establish and maintain adjacency with
other OSPF routers. The hello protocol is discussed in detail in the next topic.
2. DBD - The Database Description (DBD) packet contains an abbreviated list
of the sending router's link-state database and is used by receiving routers to
check against the local link-state database.
3. LSR - Receiving routers can then request more information about any entry
in the DBD by sending a Link-State Request (LSR).
4. LSU - Link-State Update (LSU) packets are used to reply to LSRs as well
as to announce new information. LSUs contain seven different types of
Link-State Advertisements (LSAs). LSUs and LSAs are briefly discussed in a
later topic.
5. LSAck - When an LSU is received, the router sends a Link-State
Acknowledgement (LSAck) to confirm receipt of the LSU.
9.2.2 HELLO PROTOCOL
OSPF packet Type 1 is the OSPF Hello packet. Hello packets are used to:
Discover OSPF neighbors and establish neighbor adjacencies.
Advertise parameters on which two routers must agree to become
neighbors.
Elect the Designated Router (DR) and Backup Designated Router
(BDR) on multiaccess networks like Ethernet and Frame Relay.
Important fields shown in the figure include:
Type: OSPF Packet Type: Hello (1), DD (2), LS Request (3), LS
Update (4), LS ACK (5)
Router ID: ID of the originating router
Area ID: area from which the packet originated
Network Mask: Subnet mask associated with the sending interface
Hello Interval: number of seconds between the sending router's hellos
Router Priority: Used in DR/BDR election (discussed later)
Designated Router (DR): Router ID of the DR, if any
Backup Designated Router (BDR): Router ID of the BDR, if any
Fig 9.3
9.3 Neighbor Establishment
Before an OSPF router can flood its link-states to other routers, it must first
determine if there are any other OSPF neighbors on any of its links. In the
figure, the OSPF routers are sending Hello packets on all OSPF-enabled
interfaces to determine if there are any neighbors on those links. The
information in the OSPF Hello includes the OSPF Router ID of the router
sending the Hello packet (Router ID is discussed later in the chapter).
Receiving an OSPF Hello packet on an interface confirms for a router that
there is another OSPF router on this link. OSPF then establishes adjacency
with the neighbor.
OSPF Hello and Dead Intervals
Before two routers can form an OSPF neighbor adjacency, they must agree on
three values: Hello interval, Dead interval, and network type. The OSPF Hello
interval indicates how often an OSPF router transmits its Hello packets. By
default, OSPF Hello packets are sent every 10 seconds on multi-access and
point-to-point segments and every 30 seconds on non-broadcast multi-access
(NBMA) segments (Frame Relay, X.25, ATM).
In most cases, OSPF Hello packets are sent as multicast to an address reserved
for ALLSPFRouters at 224.0.0.5. Using a multicast address allows a device to
ignore the packet if its interface is not enabled to accept OSPF packets. This
saves CPU processing time on non-OSPF devices.
The Dead interval is the period, expressed in seconds, that the router will
wait to receive a Hello packet before declaring the neighbor "down." Cisco uses a
default of four times the Hello interval. For multi-access and point-to-point
segments, this period is 40 seconds. For NBMA networks, the Dead interval is
120 seconds.
If the Dead interval expires before the routers receive a Hello packet, OSPF
will remove that neighbor from its link-state database. The router floods the
link-state information about the "down" neighbor out all OSPF-enabled
interfaces.
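The timer relationship described above can be sketched as follows (Python for illustration; the network-type labels are our own shorthand, not IOS keywords):

```python
# Cisco default Hello intervals per network type, in seconds.
HELLO_INTERVAL = {"point-to-point": 10, "broadcast": 10, "nbma": 30}

def dead_interval(network_type: str) -> int:
    # Cisco default: Dead interval = 4 x Hello interval.
    return 4 * HELLO_INTERVAL[network_type]

print(dead_interval("broadcast"))  # 40 seconds
print(dead_interval("nbma"))       # 120 seconds
```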
9.4 ENABLING OSPF ROUTING
OSPF is enabled with the router ospf process-id global configuration
command. The process-id is a number between 1 and 65535 and is chosen by
the network administrator. The process-id is locally significant, which means
that it does not have to match other OSPF routers in order to establish
adjacencies with those neighbors. This differs from EIGRP. The EIGRP
process ID or autonomous system number does need to match for two EIGRP
neighbors to become adjacent.
R1(config)#router ospf 1
R1(config-router)#
ADDING NETWORKS TO OSPF
The network command used with OSPF has the same function as when used
with other IGP routing protocols:
Any interfaces on a router that match the network address in the network
command will be enabled to send and receive OSPF packets.
This network (or subnet) will be included in OSPF routing updates.
The network command is used in router configuration mode.
Router(config-router)#network network-address wildcard-mask area
area-id
The OSPF network command uses a combination of network-address and
wildcard-mask similar to that which can be used by EIGRP. Unlike EIGRP,
however, OSPF requires the wildcard mask. The network address along with
the wildcard mask is used to specify the interface or range of interfaces that
will be enabled for OSPF using this network command.
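As a sketch of how a network statement selects interfaces (Python for illustration, with example addresses): an interface matches when every bit not covered by the wildcard mask agrees with the network address.

```python
def matches(ip: str, network: str, wildcard: str) -> bool:
    """True if an interface IP falls under a 'network <addr> <wildcard>'
    statement: all bits where the wildcard mask is 0 must match the
    network address."""
    to_int = lambda a: int.from_bytes(bytes(int(o) for o in a.split(".")), "big")
    care = ~to_int(wildcard) & 0xFFFFFFFF  # bits that must match exactly
    return (to_int(ip) & care) == (to_int(network) & care)

# 'network 172.16.1.16 0.0.0.15 area 0' covers the 172.16.1.16/28 subnet:
print(matches("172.16.1.17", "172.16.1.16", "0.0.0.15"))  # True
print(matches("172.16.1.33", "172.16.1.16", "0.0.0.15"))  # False
```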
As with EIGRP, the wildcard mask can be configured as the inverse of a
subnet mask. For example, R1's FastEthernet 0/0 interface is on the
172.16.1.16/28 network. The subnet mask for this interface is /28 or
255.255.255.240. The inverse of the subnet mask results in the wildcard mask.
Note: Like EIGRP, some IOS versions allow you to simply enter the subnet
mask instead of the wildcard mask. The IOS then converts the subnet mask to
the wildcard mask format.
  255.255.255.255
- 255.255.255.240   (subtract the subnet mask)
-------------------
    0.  0.  0. 15   (wildcard mask)
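The octet-by-octet subtraction above is simply a bitwise inverse, which can be confirmed with a one-line sketch (Python for illustration):

```python
def wildcard(subnet_mask: str) -> str:
    # Wildcard mask = inverse of the subnet mask: 255 minus each octet.
    return ".".join(str(255 - int(o)) for o in subnet_mask.split("."))

print(wildcard("255.255.255.240"))  # 0.0.0.15
print(wildcard("255.255.255.0"))    # 0.0.0.255
```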
The area area-id refers to the OSPF area. An OSPF area is a group of routers
that share link-state information. All OSPF routers in the same area must have
the same link-state information in their link-state databases. This is
accomplished by routers flooding their individual link-states to all other
routers in the area. In this chapter, we will configure all of the OSPF routers
within a single area. This is known as single-area OSPF.
An OSPF network can also be configured as multiple areas. There are several
advantages to configuring large OSPF networks as multiple areas, including
smaller link-state databases and the ability to isolate unstable network
problems within an area. Multi-area OSPF is covered in CCNP.
When all of the routers are within the same OSPF area, the network
commands must be configured with the same area-id on all routers. Although
any area-id can be used, it is good practice to use an area-id of 0 with
single-area OSPF. This convention makes it easier if the network is later configured
as multiple OSPF areas where area 0 becomes the backbone area.
Determining the Router ID
The OSPF router ID is used to uniquely identify each router in the OSPF
routing domain. A router ID is simply an IP address. Cisco routers derive the
router ID based on three criteria and with the following precedence:
1. Use the IP address configured with the OSPF router-id command.
2. If the router-id is not configured, the router chooses the highest IP
address of any of its loopback interfaces.
3. If no loopback interfaces are configured, the router chooses the highest
active IP address of any of its physical interfaces.
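The three-step precedence can be expressed as a small sketch (Python for illustration; the function and parameter names are ours, not Cisco's):

```python
def ospf_router_id(configured=None, loopbacks=(), physical=()):
    """Select the OSPF router ID using Cisco's precedence:
    1. the address set with the router-id command,
    2. else the highest loopback interface IP,
    3. else the highest active physical interface IP."""
    as_octets = lambda a: tuple(int(o) for o in a.split("."))
    if configured:
        return configured
    if loopbacks:
        return max(loopbacks, key=as_octets)
    return max(physical, key=as_octets)

# With no router-id configured, the highest loopback address wins:
print(ospf_router_id(loopbacks=["1.1.1.1", "10.1.1.1"]))  # 10.1.1.1
```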
9.5 VERIFICATION OF OSPF
The show ip ospf neighbor command can be used to verify and troubleshoot
OSPF neighbor relationships. For each neighbor, this command displays the
following Output:
Neighbor ID - The router ID of the neighboring router.
Pri - The OSPF priority of the interface. This is discussed in a later section.
State - The OSPF state of the interface. FULL state means that the router and
its neighbor have identical OSPF link-state databases. OSPF states are
discussed in CCNP.
Dead Time - The amount of time remaining that the router will wait to receive
an OSPF Hello packet from the neighbor before declaring the neighbor down.
This value is reset when the interface receives a Hello packet.
Address - The IP address of the neighbor's interface to which this router is
directly connected.
Interface - The interface on which this router has formed adjacency with the
neighbor.
Fig 9.4
Fig 9.5
Fig 9.6
9.6 OSPF IN OUR PROJECT
In our project, OSPF is enabled for dynamic sharing of routes over the Frame
Relay network: it is configured on routers R4, R5, and Internet for route
sharing between the branches, and it is also enabled on the distribution and
core layer switches to facilitate inter-VLAN routing within each branch.
Chapter 10
TOOLS DESCRIPTION
10.1 GNS3
GNS3 is a graphical network simulator that allows simulation of complex
networks.
To allow complete simulations, GNS3 is strongly linked with:
Dynamips, the core program that allows Cisco IOS emulation.
Dynagen, a text-based front-end for Dynamips.
Qemu, a generic and open source machine emulator and virtualizer.
GNS3 is an excellent complementary tool to real labs for network engineers,
administrators and people wanting to pass certifications such as CCNA,
CCNP, CCIP, CCIE, JNCIA, JNCIS, JNCIE.
It can also be used to experiment features of Cisco IOS, Juniper JunOS or to
check configurations that need to be deployed later on real routers.
This project is an open source, free program that may be used on multiple
operating systems, including Windows, Linux, and Mac OS X.
Features overview
Design of high quality and complex network topologies.
Emulation of many Cisco IOS router platforms, IPS, PIX and ASA