Cisco Advanced Services
BSNL National Internet Backbone - II
IP MPLS Core Design
Version 1.1

Corporate Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 526-4100
List of Tables

Table 1   Single Area OSPF scale
Table 2   OSPF Interface Costs
Table 3   OSPF Timer Default Values
Table 4   iBGP Peer Groups
Table 5   iBGP Peer Groups
Table 6   Default Values for iBGP Timers
Table 7   Address Plan for Core Links
Table 8   Address Plan for all Provider Router Loopback Addresses
Table 9   Address Plan for all Miscellaneous Router and Switch Loopback Addresses
Table 10  Address Plan for all Provider Edge Router Loopback Addresses
Table 11  Address Plan for the “INET” VRF
Table 12  Glossary
Document Information
Author: Vinod Anthony Joseph (vinjosep)
Change Authority: Advanced Engineering Services
Change Forecast: Medium

Review and Distribution

Organisation          Name                    Title

Modification History

Rev   Date           Originator              Status   Comment
1.1   25 Sep 2004    Vinod Anthony Joseph    Issue
Document Acceptance
Name                          Name
Title                         Title
Company                       Company
Signature                     Signature
Date                          Date

Name                          Name
Title                         Title
Company                       Company
Signature                     Signature
Date                          Date

Name                          Name
Title                         Title
Company                       Company
Signature                     Signature
Date                          Date
Introduction
Preface This document provides the detailed low-level design for the IP MPLS Core Infrastructure of the BSNL NIB-II Network. This Low Level Design Handbook covers all aspects of the network: the topology, the IP addressing scheme, the protocol design, and the relevant step-by-step protocol configurations. The objective of this handbook is to serve as a meticulously detailed snapshot of the network at the time of its launch, and as a reference manual during the implementation and the operations and maintenance phases.
Audience This document is intended for the engineering and operational teams of BSNL, as a reference for implementation and operational guidelines for the NIB-II IP MPLS Core Network.
Scope and Requirements The scope of this document is to describe a network that satisfies the requirements laid out in the RFP against which the BSNL NIB-II network is built. The document is divided into the following sections:
• Network Overview
• Core Transport Architecture
Interior Gateway Protocol - OSPF architecture
BGP Protocol architecture
Route Reflector architecture
• IP Packet Switching
Cisco Express Forwarding
MPLS Based Forwarding
Label Distribution Protocol
• IP Addressing allocation and architecture
Assumptions The IP MPLS Core Network Architecture and relevant configuration details discussed in this document rely on the features and capabilities available in Cisco IOS.
Related Documents

• BSNL NIB-II Services Requirements, Cisco Systems
• Bill of Materials, Cisco Systems
• BSNL NIB-1 to NIB-II Integration, Cisco Systems
Network Overview
Figure 1 BSNL NIB-II Network
Figure 1 illustrates the major elements of the solution for the BSNL NIB-II Network. The NIB-II Network is an effort by BSNL to provide a diversified range of Internet and VPN services catering to the data communication requirements of a wide spectrum of enterprises, service providers, and businesses in India. The IP MPLS Network focuses not just on providing basic transport services, but promises a paradigm shift from the traditional connectivity options available in the country today. The Internet Protocol (IP) has evolved into a convergence point in today’s telecom scenario, catering to a new generation of applications and services. Building on the strengths and dynamics of the Internet Protocol, the network is set to offer a whole horizon of value-added services, catering to the diversified requirements and growing demands of today’s customer.

A total of seventy-one cities are interconnected to create this network. Both services, Internet and MPLS VPN, are provided over a common MPLS infrastructure using Cisco GSR 12416, Cisco GSR 12410, and Cisco 7613 routers. Label switching allows the fast switching of packets along label paths constructed through the core to routing destinations.
The IP VPN backbone uses MPLS-VPN tagging to construct private customer networks through the core. MPLS-VPN allows the segregation or interconnection of VPN end-points through a flexible mechanism that supports customer intranets, and extranets to other customers if required. The Cisco 7613 routers provide the edge interface to the VPN services, while the Cisco GSR 12416 at A1 cities, the Cisco GSR 12410 at A2 and A3 cities, and Juniper routers at A4 cities provide core label switching for VPN-tagged packets.
Network Topology A tiered IP MPLS architecture will be implemented, ensuring the network is both scalable and resilient. The core will provide the IP transport mechanism supporting all BSNL offered services. OSPF will be used as the IGP, and BGPv4 will be run to provide external routing to the wider Internet as well as routing for customer VPNs (see the OSPF and BGP sections under “Protocol Design” for detailed information). Figure 2 BSNL NIB-II Network – A1 POPs
Figure 2 illustrates the physical topology of the A1 cities in the BSNL NIB-II network. The five cities are interconnected as a full mesh using STM-16 links and form the nerve-centers of the network.
Figure 3 BSNL NIB-II Network – A1+A2+A3 POPs
Figure 3 illustrates the physical topology of the A2 and A3 cities in the BSNL NIB-II network. The nine cities are dual-homed into the NIB-II A1 core routers, also using STM-16 links to provide the interconnectivity.
Figure 4 BSNL NIB-II Network – A1+A2+A3+A4 POPs
Figure 4 illustrates the physical topology of the A4 cities in the BSNL NIB-II network. The ten A4 cities are dual-homed into the NIB-II core using STM-1 links. The network is built in such a way that no single link failure will isolate a particular node. Each A2, A3, A4, B1, and B2 city is dual-homed into the BSNL NIB-II core. The full mesh of A1 cities at the heart of the core also simplifies re-routing in case of link failures. The network design will ensure that no single link in the core is completely overloaded in case of a single link failure. The physical topology of the B1 and B2 cities and their interconnectivity with the NIB-II core is not shown for the sake of simplicity. The B1 and B2 cities are also dual-homed to the NIB-II core network using STM-1 links.
Terminology Definitions

P (Provider Routers): Five Cisco GSR 12416s at A1 sites, interconnected in a fully meshed topology, make up the heart of the BSNL network. A2 and A3 site core routers connect to two major A1 sites, forming the outer layer of the core. These links are point-to-point STM-16 POS (Packet over SONET) links.

VPN-PE: Cisco 7613 routers are used as Provider Edge routers. Two routers are used in each A1 site and one router in each A2 and A3 site. These routers are used as the MPLS-VPN PE (Provider Edge) routers.

A1 sites: Chennai, Kolkatta, Bangalore, Mumbai, and Noida are the current A1 sites.
A2 sites: Ahmedabad, Hyderabad, and Pune are the current A2 sites.

A3 sites: Indore, Ernakulam, Patna, Jullunder, Lucknow, and Jaipur are the current A3 sites.

A4 sites: The existing Juniper-based MPLS infrastructure at Chandigarh, Allahabad, Ranchi, Guwahati, Bhubhaneshwar, Raipur, Vijayawada, Coimbatore, Mangalore, and Nagpur classifies these as A4 sites.

B1 sites: Gwalior, Rajkot, Surat, Vadodara, Faridabad, Gurgaon, Agra, Ludhiana, Amritsar, Jammu, Kanpur, Varanasi, Jodhpur, and Delhi are the current B1 sites.

B2 sites: Rajamundhry, Tirupati, Jamshedpur, Durgapur, Siliguri, Dimapur, Aurangabad, Kolhapur, Jabalpur, Mehsana, Ambala, Ghaziabad, Meerut, Dehradun, Ferozpur, Simla, Kalyan, Ajmer, and Panjim (Goa) are the current B2 sites.

LAN Switches: Cisco Catalyst 6509 switches will provide the Fast Ethernet interconnectivity between all the NMS components within a site.
IP Core Design Considerations
Design Objectives/Rules This section lists the design requirements as specified in the RFP, and the technology used to achieve those requirements.
• Stable and scalable Interior Gateway Protocol (IGP).
The industry-standard OSPF protocol meets these criteria and is selected as the routing protocol for the BSNL NIB-II network.
• Fast IGP convergence
OSPF meets this criterion by using Dijkstra’s SPF algorithm and Link State Advertisements. The link state database holds complete network topology information, providing fast convergence in case of link failures.
• Small IGP route table within core (GSR) routers. This is achieved by using the following Routing Protocol Design principles.
Provider routers do not carry any BGP routes
BGP is not redistributed into OSPF
BGP next-hop-self is used on selected iBGP routers, with label switched paths providing reachability within the core
Customer routes and PE-CE links are not redistributed into OSPF
• Stable and Scalable Core
Achieved by using MPLS for packet forwarding, MP-BGP for VPN services
• Scalable iBGP Architecture
Achieved by using Route Reflectors for VPN-v4, and IP-v4 services
• Flexible and scalable IP VPN features and managed VPN services
Achieved by using MPLS-VPN, MP BGP and Label Switching paths within the Core
• Implementation of existing BGP Policy.
Achieved by using BGP Peer-groups, Prefix-lists, and Route Reflectors to control route updates.
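Several of these objectives come together in the iBGP peer-group configuration. As a minimal sketch only (the peer-group name, AS number, and neighbor address are illustrative placeholders, not the BSNL production values), a border router could combine a peer-group with next-hop-self so that external next-hops are never carried into the core IGP:

!
router bgp 64512
 neighbor IBGP-PEERS peer-group
 neighbor IBGP-PEERS remote-as 64512
 neighbor IBGP-PEERS update-source Loopback0
 neighbor IBGP-PEERS next-hop-self
 neighbor 198.51.100.1 peer-group IBGP-PEERS
!

Grouping iBGP neighbors this way also means update generation is performed once per peer-group rather than once per neighbor, which supports the scalable iBGP objective above.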
Core Transport Architecture
IGP Protocol Design
Introduction In the BSNL NIB-II network the following protocols are used: OSPF, BGP, MPLS, and LDP. The Interior Gateway Protocol is OSPF; it maintains reachability within the network and re-routes traffic in the case of any link failure. MPLS is used for packet forwarding and the creation of VPNs. BGP is used for exchanging Internet routes (both NIB-II customer prefixes using registered public addresses and global Internet routes). Multiprotocol extensions to BGP (MP-BGP) are used to facilitate PE-PE router communication with respect to customer VPN routes. LDP is used to facilitate label distribution in the NIB-II core.
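The division of labour described above implies that CEF and LDP must be enabled on every core-facing interface before OSPF and BGP can be overlaid. As a hedged sketch only (the interface name and description are placeholders), the typical IOS commands are:

!
ip cef distributed
mpls label protocol ldp
!
interface POS1/0
 description *** Core-facing STM-16 link ***
 mpls ip
!

With mpls ip configured on a link, LDP advertises a label binding for each IGP-learned prefix, allowing the core to forward on labels rather than IP lookups.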
IGP Routing OSPF OSPF is a link state routing protocol that carries connectivity information of a network. In a network running OSPF, each router advertises detailed information on the state of its directly connected links to all its directly connected peers. In this manner link state information is passed on to all routers in the network. Using this information, each router computes a topology database using the Shortest Path First (SPF) algorithm. The topology database consists of detailed path-specific information, used by the router to decide which path to choose in order to reach a particular destination. The topology database is used to derive optimal paths for reaching each destination in the network, computed using the SPF algorithm, and this information is stored in the IP routing table. Further, the topology database is synchronized between all OSPF routers across the entire network, ensuring that all routers share a common view of the network topology.

OSPF routers use multicast mechanisms for making network announcements to their peers, and these announcements only happen when there is a change in the network, such as a link flap. When a change occurs, only the changed information is replicated across the network. This makes OSPF an optimized protocol for carrying link state information, designed to converge at an extremely rapid pace.

The BSNL NIB-II network requires an underlying Interior Gateway Protocol (IGP) to perform a number of functions. These include enabling BGP next-hop reachability, routing of NTP synchronization updates, management traffic, and accounting data. OSPF (Open Shortest Path First) was requested due to the maturity of the protocol, wide platform support, and the familiarity of operations staff with the deployment and troubleshooting of OSPF networks.
OSPF is standardized, converges quickly and has a hierarchy that maps well to the BSNL NIB-II network topology for the present and future
requirements. OSPF will be responsible for interior routing only; it will not carry any BGP-derived external routes, nor will it carry customer addresses or links. Addressing in the core is planned in a contiguous, well-defined, and self-contained manner, to enable summarization at area borders when the network scales. This will simplify area routing tables and limit LSA (Link State Advertisement) flooding in future.
OSPF Design The Interior Gateway Protocol design using OSPF is based on a single area model and offers a reliable and redundant architecture, increasing efficiencies in the dynamics of the carrier-grade IP/MPLS network of BSNL. The single area architecture largely simplifies the process of deploying fast-convergence techniques such as MPLS Traffic Engineering Fast Reroute in the BSNL NIB-II Network.

The OSPF design is as follows. The 71-city BSNL Core MPLS Network is based on an OSPF single area model. All the STM-16, STM-1, and Gigabit Ethernet links interconnecting P and PE routers with each other will be configured to run OSPF. In summary, the OSPF backbone area is the logical and physical structure for the BSNL NIB-II Autonomous System, and this single area will be the distribution point for the routing structure. The single area design will facilitate ease of deployment of advanced MPLS features, and the simple, scalable single area design will facilitate a multi-area design in future, in conjunction with network expansion. The current design can grow into a 3-tier architecture comprising core, distribution, and access points, as given in the OSPF road-map section. Contiguous address space allocation will ensure efficient route summarization at the area edges. The following figures provide an insight into the OSPF core and POP architectures.

Figure 5 Single Area OSPF architecture
Figure 5 illustrates the logical view of the OSPF single area architecture in the BSNL NIB-II network.
Table 1 Single Area OSPF scale
OSPF Area   Number of routers   Number of links
Area 0      133                 204
Table 1 illustrates the number of routers, and links in the BSNL NIB-II network, that participate in OSPF Area 0 routing.
Figure 6 OSPF Area Layout for BSNL NIB-II A1 POP
Figure 6 illustrates the OSPF single area architecture of an A1 POP in the BSNL NIB-II network. The A1 POPs consist of the following components:
• Provider routers connected to the NIB-II Network Core using STM-16 links.
• Cisco Provider Edge routers connected to the Provider router using STM-1 links.
• Internet Gateway routers connected to the Provider Edge router using (directly connected) Gigabit-Ethernet links.
• IDC Edge routers connected to the Provider router using (directly connected) Gigabit-Ethernet links.
• IP TAX routers connected to the Provider router using (directly connected) Gigabit-Ethernet links.
• Juniper based MPLS Provider Edge routers connected to the Provider router using (directly connected) Gigabit Ethernet links.
• MPLS Provider routers of A2, and A3 cities connected to the Provider router using STM-16 links.
• Juniper based MPLS Provider routers of A4 cities connected to the Provider router using STM-1 links.
• MPLS Provider Edge routers of B1 cities connected to the Provider router using STM-1 links.
• MPLS Provider Edge routers of B2 cities connected to the Provider router using STM-1 links.
Figure 7 OSPF Area Layout for BSNL NIB-II A2-IGW POP
Figure 7 illustrates the OSPF single area architecture of an Internet Gateway A2 POP in the BSNL NIB-II network. The A2-IGW POPs consist of the following components:
• Provider routers connected to the NIB-II Network Core using STM-16 links.
• Cisco Provider Edge routers connected to the Provider router using STM-1 links.
• Internet Gateway routers connected to the Provider Edge router using (directly connected) Gigabit-Ethernet links.
• Juniper based MPLS Provider Edge routers connected to the Provider router using (directly connected) Gigabit Ethernet links.
• MPLS Provider routers of A3 cities connected to the Provider router using STM-16 links.
• Juniper based MPLS Provider routers of A4 cities connected to the Provider router using STM-1 links.
• MPLS Provider Edge routers of B1 cities connected to the Provider router using STM-1 links.
• MPLS Provider Edge routers of B2 cities connected to the Provider router using STM-1 links.
Figure 8 OSPF Area Layout for BSNL NIB-II A3-IGW POP
Figure 8 illustrates the OSPF single area architecture of an Internet Gateway A3 POP in the BSNL NIB-II network. The A3-IGW POPs consist of the following components:
• Provider routers connected to the NIB-II Network Core using STM-16 links.
• Cisco Provider Edge routers connected to the Provider router using STM-1 links.
• Juniper based MPLS Provider Edge routers connected to the Provider router using (directly connected) Gigabit Ethernet links.
• MPLS Provider routers of A3 cities connected to the Provider router using STM-1 links.
• Juniper based MPLS Provider routers of A4 cities connected to the Provider router using STM-1 links.
• MPLS Provider Edge routers of B1 cities connected to the Provider router using STM-1 links.
• MPLS Provider Edge routers of B2 cities connected to the Provider router using STM-1 links.
Figure 9 OSPF Area Layout for BSNL NIB-II A3 POP
Figure 9 illustrates the OSPF single area architecture of an A3 POP in the BSNL NIB-II network. The A3 POPs consist of the following components:
• Provider routers connected to the NIB-II Network Core using STM-16 links.
• Cisco Provider Edge routers connected to the Provider router using STM-1 links.
• Juniper based MPLS Provider Edge routers connected to the Provider router using (directly connected) Gigabit Ethernet links.
• Juniper based MPLS Provider routers of A4 cities connected to the Provider router using STM-1 links.
• MPLS Provider Edge routers of B1 cities connected to the Provider router using STM-1 links.
• MPLS Provider Edge routers of B2 cities connected to the Provider router using STM-1 links.
Figure 10 OSPF Area Layout for BSNL NIB-II A4 POP
Figure 10 illustrates the OSPF single area architecture of an A4 POP in the BSNL NIB-II network. The A4 POPs consist of the following components:
• Juniper based MPLS Provider routers connected to the NIB-II Network Core using STM-1 links.
• MPLS Provider Edge routers connected to the Juniper based MPLS Provider router using Gigabit Ethernet links.
Note The B1 & B2 POPs consist of an MPLS Provider Edge router. The PE routers are connected to the nearest A1/A2/A3 POP using STM-1 links.
Loopback Addresses All the routers in the BSNL NIB-II network will have loopback addresses configured. These are used to ensure the stability of each router’s OSPF router ID. The loopback interfaces will be placed in OSPF passive mode to optimize the routing process.
OSPF Costs Cisco routers by default assign Fast Ethernet (100 Mb) interfaces an OSPF cost of 1. This would require all interface costs to be reset in the BSNL NIB-II network, as there will be interfaces up to STM-16 (2.4 Gb) speeds. This behavior can be altered so that the calculation used to compute the link cost scales. To achieve this, OSPF costs will be reset using the auto-cost reference-bandwidth statement, ensuring that the IGP routing solution scales up to STM-16 speeds and beyond. The new approximate OSPF costs for the various media types are given below.
Table 2 OSPF Interface Costs
Bandwidth (bps)   OSPF Cost   Media
2000000           5000        E1
10000000          1000        Ethernet
34000000          290         E3
45000000          220         T3
100000000         100         Fast Ethernet
155000000         64          STM-1
622000000         16          STM-4
1000000000        10          Gigabit Ethernet
2480000000        4           STM-16
Note These values are achieved by setting auto-cost reference-bandwidth 10000 in the OSPF configuration. With the bandwidth statements set correctly on each interface, this will ensure the OSPF cost values are set as shown in Table 2. Bandwidth statements will be set to match the physical link bandwidth, and during initial deployment the ip ospf cost command will NOT be used to set link costs.
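With a reference bandwidth of 10000 Mbps, the cost of a link is simply 10000 divided by the interface bandwidth in Mbps; for an STM-16 link this yields 10000 / 2480 ≈ 4, matching Table 2. The relevant configuration can be sketched as follows (the interface name and description are placeholders; note that the IOS bandwidth statement is expressed in kilobits per second):

!
router ospf 100
 auto-cost reference-bandwidth 10000
!
interface POS1/0
 description *** STM-16 core link ***
 bandwidth 2480000
!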
Load Sharing Cisco OSPF by default supports up to four equal-cost paths to the same destination and load shares across them. This can be increased to six if the need arises. Load sharing is supported in two modes, per-packet and per-destination. Per-destination load sharing will be enabled to avoid mis-ordering of packets.
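As a sketch of how this could be configured if the path count needs to be raised (the interface name is a placeholder), the maximum path count is set under the OSPF process, and per-destination sharing is selected on the interface:

!
router ospf 100
 maximum-paths 6
!
interface POS1/0
 ip load-sharing per-destination
!

Per-destination load sharing is the CEF default behavior; the interface command is shown only to make the chosen mode explicit.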
OSPF Timer Tuning In the initial deployment of the BSNL NIB-II network all timers will be left at their default values as shown in Table 3. Table 3 OSPF Timer Default Values
Timer                         Default Value
ip ospf dead-interval         4 x hello-interval (40 sec)
ip ospf hello-interval        10 sec
ip ospf retransmit-interval   5 sec
ip ospf transmit-delay        1 sec
timers spf spf-delay          5 sec
timers spf spf-holdtime       10 sec
Each of these timers affects the performance of OSPF and can be tuned to influence both convergence time and network resource utilization. Below is a description of each timer:
ip ospf dead-interval: The dead interval is used with the hello interval to set how many hello packets need to be missed before an adjacency is declared down. By default this timer is four (4) times the hello timer. This timer affects convergence time but must be set the same on all nodes in the network.
ip ospf hello-interval: The hello interval sets how often OSPF will send hello packets on an interface. Lowering this value will improve convergence times but will also add additional traffic to the network. Note that physical link failures are recognized immediately and will NOT wait for the dead-interval before they are advertised.
ip ospf retransmit-interval: When a router sends an LSA to its neighbour, it keeps the LSA until it receives back an acknowledgement. If it does not receive an acknowledgement in N seconds it resends the LSA. The value N is controlled by this command. Aggressive setting of this value can result in needless retransmissions, especially across serial links.
ip ospf transmit-delay: Sets the estimated time it takes to transmit a link-state update packet on the interface. If the delay is not added before transmission over a link, the time in which the LSA propagates over the link is not considered. This setting will only have importance on very slow serial links.
timers spf spf-delay: Time in seconds between when OSPF receives a topology change and when it starts an SPF calculation.
timers spf spf-holdtime: The minimum time between two consecutive SPF calculations. Setting both this and the delay low will allow routing to switch to an alternate path quicker.
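Although all timers remain at their defaults in the initial deployment, the fragment below sketches where each timer would be tuned if required. The values shown are simply the defaults from Table 3, and the interface name is a placeholder:

!
interface POS1/0
 ip ospf hello-interval 10
 ip ospf dead-interval 40
 ip ospf retransmit-interval 5
 ip ospf transmit-delay 1
!
router ospf 100
 timers spf 5 10
!

The first argument to timers spf is the spf-delay and the second is the spf-holdtime, both in seconds.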
OSPF Configuration Guidelines The following configuration guidelines are used for implementing OSPF in the BSNL NIB-II Network. All the routers in the NIB-II network will be placed in OSPF Area 0. OSPF neighbour changes will be logged, in order to reflect the level of stability within the IGP.

All Internet access customers must be seen from the BSNL NIB-II network via a public address, so that the customers can have access to the global Internet. The BSNL NIB-II global OSPF process will not be extended to customer networks or routers for exchanging routing information. This is to ensure the stability, integrity, and security of the global routing processes. Internet access customers requiring dynamic routing must use eBGP. This removes any need to use the BSNL NIB-II IGP for dynamic routing to the customer.

VPN customers will have their respective IGP / BGP routing instances created on the PE routers, and there will not be any integration between the customer’s private routing instance and the BSNL global OSPF routing process. For details on Internet and MPLS VPN services refer to the “BSNL NIB-II services LLD”.
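For an Internet access customer using dynamic routing, the PE-CE session could take the following shape. The AS numbers, addresses, and prefix-list name below are illustrative placeholders only, not the BSNL production policy:

!
router bgp 64512
 neighbor 203.0.113.2 remote-as 64513
 neighbor 203.0.113.2 description *** Internet customer eBGP session ***
 neighbor 203.0.113.2 prefix-list CUSTOMER-IN in
!
ip prefix-list CUSTOMER-IN seq 5 permit 203.0.113.0/24
!

Filtering the inbound session with a prefix-list restricted to the customer’s registered public address block keeps the customer’s routing entirely out of the BSNL global OSPF process, as required above.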
OSPF Scalability Although BSNL does not use multiple OSPF areas, the following information is included as a reference if multiple areas are desirable in the future. OSPF creates hierarchy through areas to provide a scalable solution. One motivation behind multi-area design is less computational work for the routers that are internal to the area
during a topology change in any remote OSPF area of the network. Most of the SPF algorithm cost is proportional to the number of links in an area. In an unstable network environment, this is still a concern even with high-performance processors. By splitting a network into multiple areas, not only does computation become less expensive, but instability between areas is hidden and the amount of flooding is reduced. OSPF is designed so that instability in one area does not necessarily result in a full SPF re-computation in other areas. If summarization is used between areas, instability in one area can be completely hidden from other areas. From the computational and stability perspective, it is therefore evident that by using areas, OSPF can scale to larger numbers of nodes in the network.

The following recommendations will assist BSNL in creating a schema for building multiple OSPF areas. The guidelines given below essentially provide a reference model for extending the OSPF area architecture beyond a single area, and may vary at the actual time of implementation based on the business needs of BSNL.
• Creation of OSPF areas should be based on regional proximity as far as possible. As regional networks grow and more cities are added in each region, it is recommended that OSPF areas be developed per region. For instance, if multiple cities in the state of Gujarat are to be added to this large network, it is suggested that a backbone be developed between these cities within Gujarat to create a state-wide backbone. The state backbone can then be connected to the NIB-II national backbone at a pre-defined set of locations. Note that the reference to the state of Gujarat is used for the sake of simplicity; BSNL may opt to create OSPF areas with larger regional boundaries, such as North, East, West, and South zones. A sample illustration is given in Figure 11.
• It is advisable to define OSPF area hierarchies based on the physical topology.
• IP address space allotments for OSPF areas need to be contiguous, facilitating route summarization between multiple OSPF areas. Summarization is the consolidation of multiple routes into one single advertisement, normally done at the boundaries of Area Border Routers (ABRs). Although summarization could be configured between any two areas, it is better to summarize in the direction of the backbone. This way the backbone receives all the aggregate addresses and in turn injects them, already summarized, into other areas.
• It is recommended that OSPF areas other than the backbone area be configured as stub areas. External networks, such as those redistributed from other protocols into OSPF, are not flooded into a stub area. Routing from these areas to the outside world is based on a default route. Configuring a stub area reduces the topological database size inside an area and reduces the memory requirements of routers inside that area. A stub area carries intra-area routes, inter-area routes, and a default route; external routes are blocked from being injected into the area. A stub area thus provides the benefit of reducing the size of the OSPF Link State Database, while at the same time facilitating optimal routing if the area is multi-homed to the OSPF backbone area.
Note Proper care should be taken to ensure that an OSPF router within an area is not multi-homed across multiple areas.
• Totally Stubby areas should not be deployed in the BSNL NIB-II Network. All MPLS Label Switch Routers (LSR) and Label Edge Routers (LER) must have their own MPLS label in order to facilitate MPLS service deployment.
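The stub-area and summarization guidelines above might be configured on a regional ABR along the following lines. This is a sketch only; the area number, summary range, and interface addresses are illustrative and not actual NIB-II assignments:

!
router ospf 100
 ! Regional area configured as a stub, per the guidelines above
 area 1 stub
 ! Hypothetical summary covering the region's contiguous address block,
 ! advertised towards the backbone area
 area 1 range 218.248.32.0 255.255.224.0
 network 218.248.254.3 0.0.0.0 area 0
 network 218.248.32.1 0.0.0.0 area 1
! end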
Core Transport Architecture
BSNL National Internet Backbone - II
29 IP MPLS Core Design
Figure 11 OSPF Design – Future expansion
[Figure: national topology of A1–A4 core cities (Delhi, Kolkatta, Chennai, Bangalore, Mumbai, Lucknow, Patna, Jullunder, Jaipur, Ahmedabad, Ernakulam, Mangalore, Indore, Pune, Bhubhaneshwar, Coimbatore, Hyderabad, Ranchi, Guwahati, Allahabad, Nagpur, Chandigarh, Raipur, Vijaywada) interconnected by STM-16 and STM-1 links over Cisco 12416/12010 and Juniper M40e routers, with a Gujarat Regional Backbone (Area 1) of Cities 1–8 connecting into the NIB-II Backbone (Area 0) at Ahmedabad.]
Figure 11 illustrates the scaling of the current OSPF architecture. It is shown that the regional backbone connects to the P-router in Ahmedabad, creating regional OSPF hierarchies.
Note The OSPF process identifier on all routers is standardized with a value of “100”. For a complete set of router software configuration details, see section “Appendix – I Router software configurations”. This illustration uses Bangalore for the sake of simplicity.
Figure 13 Bangalore Provider Router - OSPF
!
router ospf 100
router-id 218.248.254.3
log-adjacency-changes
auto-cost reference-bandwidth 10000
passive-interface Loopback0
network 218.248.254.3 0.0.0.0 area 0
network 218.248.250.45 0.0.0.3 area 0
network 218.248.250.5 0.0.0.3 area 0
!
interface Loopback 0
description *** Router Identifier ***
ip address 218.248.254.3 255.255.255.255
no ip directed-broadcast
!
interface POS 2/0
description *** STM-16 Interface to Chennai ***
bandwidth 2048000
ip address 218.248.250.5 255.255.255.252
no ip directed-broadcast
no ip proxy-arp
no shutdown
crc 32
clock source line
pos framing sdh
pos scramble-atm
pos flag c2 22
pos flag s1s0 2
no cdp enable
!
interface POS 11/0
description *** STM-1 Interface to PE router***
ip address 218.248.250.45 255.255.255.252
bandwidth 155000
no shutdown
no ip directed-broadcast
no ip proxy-arp
no cdp enable
! end
P-Router# show ip ospf neighbor
Neighbor ID     Pri   State     Dead Time   Address     Interface
d.d.d.d           1   FULL/ -   00:01:52    y.y.y.y     POS 1/1/0
e.e.e.e           1   FULL/ -   00:01:52    a.a.a.a     POS 1/1/1
g.g.g.g           1   FULL/ -   00:01:52    c.c.c.c     POS 1/1/2
The output given above provides details on OSPF adjacencies formed by the local router with its neighbours. The illustration uses sample data; the use of this command in real time will substitute the sample data with actual addresses. The state “FULL” indicates that OSPF has established an adjacency with the specified neighbour and that the OSPF synchronization and route-exchange process between the two routers is complete.
Figure 15 OSPF Neighbour Database Sample
P-Router# show ip ospf database
OSPF Router with ID (x.x.x.x) (Process ID 100)
Router Link States (Area 0)
Link ID    ADV Router   Age    Seq#         Checksum   Link count
a.a.a.a    a.a.a.a      1381   0x8000010D   0xEF60     2
b.b.b.b    b.b.b.b      1460   0x800002FE   0xEB3D     4
c.c.c.c    c.c.c.c      2027   0x80000090   0x875D     3
d.d.d.d    d.d.d.d      1323   0x800001D6   0x12CC     3
Net Link States (Area 0)
Link ID    ADV Router   Age    Seq#         Checksum
g.g.g.g    h.h.h.h      1323   0x8000005B   0xA8EE
m.m.m.m    n.n.n.n      1461   0x8000005B   0x7AC
The output given above provides details of the OSPF database on the local router. The illustration uses sample data; the use of this command in real time will substitute the sample data with actual addresses. The “Router Link States” and “Net Link States” sections indicate that the router has populated its OSPF database with the Link State Advertisements propagated in the network.
Figure 16 OSPF Routing Table Sample
P-Router# show ip route
Gateway of last resort is 0.0.0.0 to network 0.0.0.0
     r.r.r.r/24 is subnetted, 1 subnets
C       a.a.a.a is directly connected, Serial0
     b.b.b.b/24 is subnetted, 3 subnets
O       b.b.b.a [110/70] via 1.1.1.2, 00:00:48, Serial0
O       b.b.b.c [110/11181] via 1.1.1.2, 00:00:48, Serial0
O       b.b.b.d [110/11181] via 1.1.1.2, 00:00:48, Serial0
     m.m.m.m/24 is subnetted, 1 subnets
C       m.m.m.a is directly connected, Serial1
S*   0.0.0.0/0 is directly connected, Serial1
The output given above displays the routing table of the local router. The illustration given above uses sample data, and the use of this command in real-time will substitute the sample data with actual addresses. The “O” indicates that the route is learnt through OSPF.
BGP Routing
BGP is an inter-autonomous system routing protocol. An autonomous system is a network or group of networks under a common administration and with common routing policies. BGP is used to exchange routing information for the Internet and is the protocol used between Internet service providers (ISP). Customers connect to ISPs, and ISPs use BGP to exchange customer and ISP routes. When BGP is used between autonomous systems (AS), the protocol is referred to as External BGP (eBGP). If a service provider is using BGP to exchange routes within an AS, the protocol is referred to as Interior BGP (iBGP).
BGP is a very robust and scalable routing protocol, as evidenced by the fact that BGP is the routing protocol employed on the Internet. To achieve scalability at this level, BGP uses many route parameters, called attributes, to define routing policies and maintain a stable routing environment. In addition to BGP attributes, classless interdomain routing (CIDR) is used by BGP to reduce the size of the Internet routing tables. BGP neighbours exchange full routing information when the TCP connection between neighbours is first established. When changes to the routing table are detected, the BGP routers send to their neighbours only those routes that have changed. BGP routers do not send periodic routing updates, and BGP routing updates advertise only the optimal path to a destination network.
In the BSNL NIB-II Network, BGP will be used for both internal and external routing as well as for MPLS-VPNs. This section of the design specifically covers the following:
• iBGP design for the IP-v4 Internet routing table and how it will be passed through the network.
• iBGP design for the VPN-v4 routing table and how it will be passed through the network.
IP-v4 iBGP Design
One of the limitations of BGP is that a BGP router never advertises iBGP-learned routes to another iBGP neighbour. This means that a full mesh of iBGP peering sessions is required, which poses serious issues when scaling the network. Two options are available to solve this: route reflectors and confederations.
Confederations require that the network be split into internal autonomous systems (AS), making the full meshes smaller. Route reflectors are more hierarchical by design and hence fit the design of the BSNL NIB-II network; they will be used to scale the iBGP design.
Route Reflectors
Route reflectors (RR) were designed to solve the scaling problems of very large iBGP meshes. They relax the rule that iBGP peers cannot advertise routes learned via iBGP: an RR reflects (hence the name) learned routes to its route reflector clients (RRC). When RRs are enabled in a network there can be three types of iBGP peers:
• Route Reflector – Reflects all BGP (iBGP and eBGP) routes to clients (only the best route is advertised).
• Client – Can receive routes from both RR’s and Non-Clients.
• Non-Client – Normal iBGP behavior (No iBGP routes are advertised).
Any one router can be a combination of these types, which allows a hierarchical structure to be developed: a distribution router can be an RRC receiving routes reflected from a router in the core, and in turn reflect them down to an edge router. A client can also peer with multiple RRs, allowing redundancy to be built into the RR design.
A group of routers and its RR are known as a cluster, and each RR has a cluster_id. The cluster_id is added to the cluster_list of any advertisements sent from the RR. The cluster_list is used for loop detection in the same way as the as_path: if a route carrying the RR’s own cluster_id is seen by that RR, it is ignored. Consequently, if there is more than one RR in the same cluster they must have the same cluster_id. Clusters will be used throughout the BSNL NIB-II network.
Route Reflector Hierarchy
The BGP IP-v4 route reflector design places the RRs in A1 cities, which form the heart of the BSNL NIB-II network core. Two IP-v4 route reflectors are located in Chennai and Mumbai respectively. The IP-v4 RRs function as Route Servers in the BSNL NIB-II network. The eight Internet Gateway routers peer with the RRs in a redundant configuration.
Figure 17 Route Reflector Hierarchies for IP-v4 Services
Figure 17 provides a snapshot of the IP-v4 RR peering. In this illustration, the Kolkatta and Noida Internet Gateway routers iBGP peer with the two IP-v4 RRs, propagating Internet routing information. The RRs are also iBGP peered with each other. This illustration uses Kolkatta and Noida (which are A1 cities) as samples for the sake of simplicity.
Route Reflector Peering Architecture
The Internet Gateway routers are configured with two Ethernet dot1q sub-interfaces. The first dot1q sub-interface is placed in the BSNL NIB-II Global IP routing table. This ensures IP reachability between the Internet Gateway routers and all the other routers in the BSNL NIB-II network, and facilitates the iBGP peering for propagating Internet routing information between all the Internet Gateway routers and the IP-v4 RRs. It also enables the Internet Gateway routers to forward Internet traffic to other Gateway routers if they lose routing information from their respective upstream providers.
Details on the second dot1q sub-interface are given in the section “VPN-v4 Multi-Protocol iBGP Design”. The Internet Gateway routers are configured to iBGP peer with both the IP-v4 RRs, propagating the complete Internet routing information received from upstream providers. The received routing information is reflected to all Internet Gateway routers. Both the RRs use iBGP peering to exchange complete routing information, ensuring RR redundancy.
[Figure: the Kolkatta and Noida IGW routers (Core A1, Cisco 12416) iBGP peer over GIG-E links with the two IP-v4 Route Reflectors; the Chennai, Bangalore, and Mumbai A1 core sites are interconnected with STM-16 links, with PE routers attached.]
Note Internet access customers that require full Internet routes are configured to peer with the IP-v4 RRs using eBGP Multi-hop. The IP-v4 Route Reflector does not peer with the Provider Edge routers. Hence the IP-v4 RR is not responsible for propagating BSNL’s Internet customer routing information upstream to the Internet.
In summary, the Route Reflector acts as an Internet Route Server for BSNL NIB-II customers that require full Internet routing information. The details on the RR architecture are given below:
Figure 18 Route Reflector peering architecture for Full Internet routing customers
[Figure: within NIB-2, CE1 peers with the IPv4 RR via eBGP Multi-hop through PE1/P1 to receive full routes; the International Gateway (Intl GW1) passes full Internet routes to the IPv4 RR over an IPV4 iBGP peering in the global routing table via a dot1q interface; the VPN4 RR, PE2, and PE3 are also shown.]
Figure 18 depicts the Internet Gateway router’s iBGP peering with the two IP-v4 RRs, propagating full Internet routes. The illustration also shows an Internet customer requiring full Internet routes peered with the RR using eBGP Multi-hop.
Peer Groups
Peer groups will be used to simplify the configuration and improve CPU and memory utilization. When using peer groups, every configuration line applied to the peer group definition is also applied to each peer group member. A number of peer groups will be used at different levels of the BSNL NIB-II network. The following table shows the IP-v4 peer-groups used in the BSNL network.
Table 4 iBGP Peer Groups
Peer-group       Placement          Description
REFLECTORS       All                Neighbours that are RRs to this router.
RRCUSTOMERS      Edge               Neighbours that peer using eBGP Multi-hop to this router.
GATEWAYS         Peering Points     External Peering routers.
IPV4REFLECTORS   Internet Gateways  RRs that are peered with the Internet Gateway router.
PE Routers – IP-v4
All the PE routers in the BSNL NIB-II network are classified as VPN-v4 PE routers. Details on the architecture of VPN-v4 PE routers are given in the section “VPN-v4 Multi-Protocol iBGP design”.
IP-v4 iBGP configuration guidelines
The following configuration guidelines are used for implementing IP-v4 iBGP in the BSNL NIB-II Network:
• The BGP neighbour addresses will always be the loopback addresses. The router’s loopback address will be used as the source address for all neighbour sessions; this is set using the “update-source Loopback0” BGP neighbour configuration command. The configured BGP version will be set to 4.
• All neighbours will use BGP peer-group statements. This will largely simplify the iBGP deployment, and policy enforcement. BGP peer-groups will remove the recurrence of configuration statements that may occur on a per neighbour basis.
• All neighbours should pass on community information, required for policy enforcement, and BGP will be enabled to use new-format community strings.
• The Internet Gateway routers will receive defaults along with partial/full Internet routing information from upstream peers.
• Next-hop-self will be used on the Internet Gateway PE routers. This sets the announced prefix to carry the next-hop address of the BSNL NIB-II Internet Gateway router. This is mandatory, as the BGP peer receiving the route announcement will look for a known and valid next-hop address associated with the announced prefix.
• The Internet Gateway routers will peer with both the IP-v4 Route Reflectors. This is important in ensuring RR redundancy.
• Both the RRs will be part of the same cluster, as both the RRs service the same list of iBGP clients. The RRs will also have an iBGP peering with each other.
• The Internet Gateway routers will export all the BGP routes received from upstream eBGP peers to the RRs. The information includes full/partial Internet routes and defaults received from the upstream peers. This is useful for providing routing information to customers that need full Internet routes.
• The RRs will reflect all the routing information received from one Internet Gateway to the other Internet Gateway routers.
• Internet customers that need full Internet routing information will peer with the RR using eBGP Multi-hop.
• The RRs will be configured with inbound filters restricting the Full Internet routing customer network announcements.
• The RRs will reflect all routes received from Internet Gateway routers to the Internet customers that are peered using eBGP Multi-hop.
• BGP neighbour changes will be logged.
• All iBGP timers will be set to their default values.
The following BGP features will be disabled:
• Auto-summarization
• IGP Synchronization
For details on Internet access services and the relevant configuration information, refer to the “BSNL NIB-II services LLD”.
Note The configuration section “IP-v4 Route Reflector – Peer-group REFLECTORS” illustrates the relevant peer-group configuration commands needed for establishing iBGP peering relationships between the Chennai and Mumbai IP-v4 Route Reflectors. The Loopback address of the Mumbai RR is used in the BGP neighbour statement. The BGP cluster-id used on the IP-v4 Route Reflectors is standardized with a value of 100. For the complete BGP configuration details, see section “Appendix – I Router software configurations”.
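The REFLECTORS peer-group configuration referenced in the note above is not reproduced in this extract. A sketch along the lines of the GATEWAYS example might look as follows; the Mumbai RR loopback address is shown as a placeholder, not an actual assignment:

!
router bgp 9829
 no synchronization
 bgp log-neighbor-changes
 neighbor REFLECTORS peer-group
 neighbor REFLECTORS remote-as 9829
 neighbor REFLECTORS update-source loopback0
 neighbor REFLECTORS version 4
 neighbor REFLECTORS send-community
 neighbor <Mumbai-RR-loopback> peer-group REFLECTORS
 bgp cluster-id 100
 no auto-summary
! end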
Figure 20 IP-v4 Route Reflector – Peer Group “Gateways”
!
router bgp 9829
no synchronization
bgp log-neighbor-changes
neighbor GATEWAYS peer-group
neighbor GATEWAYS remote-as 9829
neighbor GATEWAYS update-source loopback0
neighbor GATEWAYS version 4
neighbor GATEWAYS send-community
neighbor GATEWAYS route-reflector-client
neighbor 218.248.251.64 peer-group GATEWAYS
bgp cluster-id 100
no auto-summary
! end
Note The configuration section “IP-v4 Route Reflector – Peer-group GATEWAYS” illustrates the relevant peer-group configuration commands needed for establishing iBGP peering relationships between the IP-v4 Route Reflectors and the Internet Gateway routers. This illustration depicts the iBGP peering between the RR and the Chennai Internet Gateway Router; the Chennai IGW router is used in this sample for the sake of simplicity. The RRs have no inbound/outbound policy filters applied, ensuring that routes announced by one Internet Gateway router are reflected to all the other Internet Gateway routers. The BGP cluster-id used on the IP-v4 Route Reflectors is standardized with a value of 100. The Internet access customer IP addresses used are sample data, and can be substituted with the actual addresses as and when the customer access is provisioned. For the complete BGP configuration details, see section “Appendix – I Router software configurations”.
Figure 21 IP-v4 Route Reflector – Peer Group “RRCUSTOMERS”
!
router bgp 9829
no synchronization
bgp log-neighbor-changes
neighbor RRCUSTOMERS peer-group
neighbor RRCUSTOMERS remote-as <customer-asn>
neighbor RRCUSTOMERS ebgp-multihop
neighbor RRCUSTOMERS version 4
neighbor RRCUSTOMERS send-community
neighbor RRCUSTOMERS prefix-list input-filter in
neighbor <neighbor> peer-group RRCUSTOMERS
bgp cluster-id 100
no auto-summary
!
ip prefix-list input-filter deny 0.0.0.0 le 32
! end
Note The configuration section “IP-v4 Route Reflector – Peer-group RRCUSTOMERS” illustrates the relevant peer-group configuration commands needed for establishing eBGP Multi-hop peering relationships between the IP-v4 Route Reflectors and Internet access customers that need “Full Internet routes”. The RRs have inbound policy filters restricting the customer eBGP router from advertising routing information into the BSNL AS. The RRs have no outbound policy filters applied, ensuring that full Internet routes are announced to the customer. The BGP cluster-id used on the IP-v4 Route Reflectors is standardized with a value of 100. The IP addresses used are sample data, and will be substituted with the actual addresses as and when the address blocks are assigned to the customers. For the complete BGP configuration details, see section “Appendix – I Router software configurations”.
Figure 22 Internet Gateway Router – Peer Group “IP-v4REFLECTORS”
!
router bgp 9829
no synchronization
bgp log-neighbor-changes
neighbor IPV4REFLECTORS peer-group
neighbor IPV4REFLECTORS remote-as 9829
neighbor IPV4REFLECTORS version 4
neighbor IPV4REFLECTORS send-community
neighbor IPV4REFLECTORS next-hop-self
neighbor IPV4REFLECTORS weight 32768
neighbor 218.248.251.64 peer-group IPV4REFLECTORS
no auto-summary
! end
Note The configuration section “Internet Gateway Router – Peer-group IPV4REFLECTORS” illustrates the relevant peer-group configuration commands needed for establishing iBGP peering relationships between the Internet Gateway routers and the IP-v4 Route Reflectors. This illustration depicts the iBGP peering between the RR and the Chennai Internet Gateway Router; the Chennai IGW router is used in this sample for the sake of simplicity. The Internet Gateway routers have no inbound/outbound policy filters configured, ensuring that full Internet routes are announced to the IP-v4 RRs and reflected between the Internet Gateway routers. The Internet Gateway routers use the BGP “weight” attribute to prefer routes learnt via eBGP over iBGP: the iBGP sessions with the IP-v4 RRs are configured with the default administrative weight of 32768, while the eBGP sessions from this router to upstream peers will use a higher administrative weight. This illustration does not provide the details of the eBGP peering sessions. For the complete BGP configuration details, see section “Appendix – I Router software configurations”.
Chennai-IP-v4-RR# show ip bgp neighbor 218.248.254.74
BGP neighbor is 218.248.254.74, remote AS 9829, internal link
  BGP version 4, remote router ID x.x.x.x
  BGP state = Established, up for 01:04:30
  Last read 00:00:30, hold time is 180, keepalive interval is 60 seconds
  Neighbor capabilities:
    Route refresh: advertised and received
    Address family IPv4 Unicast: advertised and received
  Received 83 messages, 0 notifications, 0 in queue
  Sent 78 messages, 0 notifications, 0 in queue
  Route refresh request: received 0, sent 0
  Minimum time between advertisement runs is 5 seconds
 For address family: IPv4 Unicast
  BGP table version 18, neighbor version 18
  Index 2, Offset 0, Mask 0x4
  Inbound soft reconfiguration allowed
  Community attribute sent to this neighbor
  2 accepted prefixes consume 72 bytes
  Prefix advertised 7, suppressed 0, withdrawn 4
The output given above provides the details on the BGP neighbour relationship between the Chennai RR and its Mumbai RR neighbour. The state “Established” indicates that the BGP peer relationship and route-exchange process between the two routers is complete. The type “internal link” indicates that the BGP session is internal to the AS.
Figure 24 BGP IP-v4 Routing Table Sample
PE-Router# show ip bgp
BGP table version is 5, local router ID is 10.0.33.34
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal
Origin codes: i - IGP, e - EGP, ? - incomplete
   Network           Next Hop      Metric LocPrf Weight Path
*> 1.0.0.0           0.0.0.0            0        32768  ?
*  2.0.0.0           10.0.33.35        10            0  35 ?
*>                   0.0.0.0            0        32768  ?
*  10.0.0.0          10.0.33.35        10            0  35 ?
*>                   0.0.0.0            0        32768  ?
*> 192.168.0.0/16    10.0.33.35        10            0  35 ?
The output given above displays the BGP IP-v4 routing table of the local router. The illustration uses sample data; the use of this command in real time will substitute the sample data with actual addresses.
VPN-v4 Multi-Protocol iBGP Design
Multi-Protocol extensions to BGP (MP-BGP) facilitate the exchange of information associated with customer VPN networks between PE routers. iBGP with Multi-Protocol extensions is used for communication between PE routers: MP-iBGP propagates reachability information for VPN-IPv4 prefixes among PE routers by means of the Multiprotocol extensions to BGP, which define support for address families other than IPv4 (including VPN-IPv4). It does this in a way that ensures the routes for a given VPN are learned only by other members of that VPN, enabling members of the VPN to communicate with each other. Route Distinguishers are used to differentiate individual VPNs.
Route Reflector Hierarchy
The BGP VPN-v4 route reflector design places the RRs in A1 cities, which form the heart of the BSNL NIB-II network core. Two VPN-v4 route reflectors are located in Bangalore and Noida respectively. All the PE routers peer with the RRs in a redundant configuration.
Figure 25 Route Reflector Hierarchies for VPN-v4 Services
Figure 25 provides a snapshot of the VPN-v4 RR peering. In this illustration, the PE routers at Mumbai and Chennai use MP-iBGP peering sessions with the two VPN-v4 RRs, exchanging prefix information. The RRs are also peered with each other. This illustration uses Mumbai, Kolkatta, and Chennai (which are A1 cities) as samples for the sake of simplicity.
Route Reflector Peering Architecture
The following sections provide details on the VPN-v4 Route Reflector architecture.
PE Routers – VPN-v4 (MPLS Layer 3 VPN NLRI)
The PE routers in the BSNL NIB-II network will be used to terminate MPLS Layer 3 VPN customers. The PE router learns about a customer IP prefix from a customer edge (CE) router through a static route configuration, an eBGP session with the CE router, OSPF routing with the CE router, or the Routing Information Protocol (RIP-v2) with the CE router.
The PE router converts this IP-v4 route into a VPN-IPv4 prefix by combining it with an 8-byte Route Distinguisher (RD). The generated prefix is a member of the VPN-IPv4 address family and is 96 bits in length; it uniquely identifies the customer address even if the customer is using globally non-unique (private) IP addresses. The customer routes are imported into iBGP using the Multi-Protocol extensions to BGP. All the PE routers in the BSNL NIB-II network will peer with the two VPN-v4 RRs, propagating customer routing information.
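The RD mechanics described above can be sketched on a PE router as follows. The VRF name, RD, and route-target values here are illustrative only, not actual NIB-II assignments:

!
ip vrf CUSTOMER-A
 ! 8-byte RD in <AS>:<assigned-number> form; prepended to each 32-bit IPv4
 ! prefix to form the 96-bit VPN-IPv4 prefix carried by MP-iBGP
 rd 9829:101
 route-target export 9829:101
 route-target import 9829:101
! end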
[Figure: the Bangalore and Noida VPN-v4 Route Reflectors (Core A1, Cisco 12416) peer via iBGP over GIG-E links with the PE routers at Mumbai, Chennai, and Kolkatta; the A1 core sites are interconnected with STM-16 links.]
For comprehensive details on MPLS VPN services and the relevant configuration information, refer to the “BSNL NIB-II services LLD”.
PE Routers – VPN-v4 (Internet NLRI)
The PE routers in the BSNL NIB-II network will be used to terminate Internet access customers. Each Internet access customer is placed in a separate MPLS Layer 3 VRF instance. The PE router uses any of the supported PE-CE routing protocols for exchanging routing information with the customer.
In most cases, the customer VRF table is constructed using default-only routing for single-homed Internet access customers. In this case the customer uses a default route pointing to the BSNL NIB-II PE router for forwarding traffic to the Internet.
Multi-homed Internet access customers use eBGP as the PE-CE routing protocol. The customer VRF is populated only with Internet routes that belong to the BSNL NIB-II Autonomous System. In this case the customer may use floating or normal default routes, or any preferred method, for routing its outbound traffic.
As mentioned in the section “IP-v4 iBGP configuration guidelines”, customers that need full Internet routes will be directly peered with the IP-v4 RR using eBGP Multi-hop. The CE routers will need to be configured with a static route to the BSNL major IP network block in order to reach the IP-v4 RR.
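The CE-side arrangement described above might be sketched as follows. The route, neighbour addresses, and customer ASN are placeholders, not actual assignments:

!
! Customer CE router peering with the IP-v4 RR via eBGP Multi-hop
ip route <BSNL-major-block> <mask> <PE-link-address>
router bgp <customer-asn>
 neighbor <IPv4-RR-loopback> remote-as 9829
 neighbor <IPv4-RR-loopback> ebgp-multihop 255
 neighbor <IPv4-RR-loopback> update-source Loopback0
! end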
Finally, the PE routers announce the customer address blocks learnt through static routing or eBGP peering to the VPN-v4 Route Reflectors. This ensures that the customer address blocks are announced to upstream providers, facilitating Internet connectivity.
All the PE routers in the BSNL NIB-II network will peer with the two VPN-v4 RRs, propagating customer routing information. For the complete BGP and Internet customer VRF table configuration details, see section “Appendix – I Router software configurations”.
Internet Gateway Routers
The Internet Gateway routers are configured with two Ethernet dot1q sub-interfaces, which are terminated at the PE router. The first dot1q sub-interface is placed in the BSNL NIB-II Global IP Routing Table as mentioned in the section “IP-v4 iBGP Design”.
The second dot1q sub-interface on each of the eight Internet Gateway routers is placed in an MPLS Layer 3 VRF instance called the “INET” VRF. Each VRF table uses a separate Route Distinguisher, for reasons given later in this section. Since BSNL has significant experience in managing and operating OSPF-based IP networks, OSPF is the preferred PE-CE routing protocol between the Internet Gateway routers and the PE routers.
The Internet Gateway routers do not propagate full Internet routing information to the connected PE routers. They propagate only a default route, their loopback addresses, and the Internet Gateway router - PE link addresses into the VRF, which is further propagated to the VPN-v4 RRs via MP-iBGP.
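The “INET” VRF placement of the second dot1q sub-interface might be sketched on the PE router as follows. The VLAN ID, RD number, and addresses are placeholders, not actual assignments:

!
! PE router facing an Internet Gateway router
ip vrf INET
 rd 9829:<n>
 route-target export 9829:<n>
 route-target import 9829:<n>
!
interface GigabitEthernet1/0.2
 encapsulation dot1Q <vlan-id>
 ip vrf forwarding INET
 ip address <IGW-PE-link-address> <mask>
! end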
The following mechanisms can be used for propagating Internet defaults to all the PE routers in the BSNL NIB-II network:
Option 1: The Internet Gateway routers are configured with a static default route pointing to the respective upstream providers. The routers are further configured to generate an OSPF default into the VRF table using the “default-information originate” statement. The PE router propagates the defaults to the VPN-v4 RRs via MP-iBGP, and they eventually get populated in the BGP routing table of all the PE routers. This is the preferred method of injecting defaults into the BSNL NIB-II network. Care should be taken to ensure that the PE routers connecting to the Internet Gateway routers are configured with the “default-information originate” statement under the BGP “ipv4 address family” section for the VRF table hosting the Internet Gateway routers.
Option 2: The Internet Gateway routers are configured to receive defaults from Internet upstream providers via eBGP. The BGP routing information will be redistributed into the PE-CE OSPF VRF routing process. Route filters will be applied to permit only the default routes to be redistributed into OSPF; all other routing information will be filtered from entering the PE-CE OSPF VRF routing process. Proper care should be taken to ensure that only default routes are injected and all other Internet routes are filtered: mis-configuration of route filters may result in the full Internet routing table being redistributed into the OSPF VRF routing table.
Since all the eight defaults are tagged with separate RD values, each route entry is treated as originating from a unique source and is exported to the VPN-v4 RRs using BGP. The eight defaults are reflected to all the PE routers in the BSNL NIB-II network; therefore each PE router will have eight default route entries in its BGP routing table for BGP path selection. The path selection behaviour can be further customized using the BGP “Local-Preference” attribute, which facilitates preferring one Internet Gateway router over another.
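Option 1 above might be sketched as follows. The next-hop address is a placeholder, and the VRF name follows the “INET” VRF naming used in this section:

!
! Internet Gateway router (CE side): static default towards the upstream,
! originated into OSPF towards the connected PE
ip route 0.0.0.0 0.0.0.0 <upstream-next-hop>
router ospf 100
 default-information originate
!
! Connected PE router: originate the default into the VRF address family
router bgp 9829
 address-family ipv4 vrf INET
  default-information originate
! end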
Since some of the Internet Gateway routers have STM-1 uplinks and others Satellite connections, the BGP “Local-Preference” attribute can be used to prefer the STM-1 peering points over their satellite counterparts. Finally, the Internet customer VRF tables and the “INET” VRF table are configured to import each other’s routing information, creating an extranet that facilitates Internet access. Route filters will be created on the “INET” VRF to prevent customer RFC 1918 private addresses from being populated into the VRF table. Therefore every PE router will have eight default route entries in its BGP table, out of which a single default route is chosen, based on the BGP policy, and populated in the Internet customer VRF table.
This approach of having Internet traffic enter and exit an MPLS Layer 3 VRF table ensures that all Internet traffic is MPLS label switched within the BSNL NIB-II Core. It also helps keep the routing tables of the Provider routers small. The Internet Gateway routers will be configured to remove Private AS Numbers at the point of connectivity to upstream providers using the “remove-private-as” statement. This ensures that the Private AS Numbers of BSNL’s customers are removed prior to announcing the information to the Internet.
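The Local-Preference and Private-AS handling described above might be sketched as follows. The neighbour addresses, preference value, and route-map name are placeholders, and this shows only one possible placement of the policy:

!
route-map PREFER-STM1 permit 10
 ! Hypothetical value; higher Local-Preference wins BGP path selection
 set local-preference 200
!
router bgp 9829
 neighbor <STM1-upstream> route-map PREFER-STM1 in
 ! Strip customers' Private AS Numbers before announcing upstream
 neighbor <upstream-peer> remove-private-as
! end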
Note The Internet Gateway routers will not peer with the VPN-v4 RRs.
Details on the placement of the Internet gateways and the Internet traffic flow are given below:
Core Transport Architecture
BSNL National Internet Backbone - II
45 IP MPLS Core Design
Figure 26 Placement of Internet Gateway and Internet customer VRF Tables
[Figure: the Chennai (CHN), Bangalore (BLR), Ernakulam (ERN) and Coimbatore (CBT) sites, showing the Provider (P) and Provider Edge (PE) routers, the IPv4 and VPN-v4 Route Reflectors, and the Internet Gateways IGW1 (Chennai) and IGW2 (Bangalore) placed in the "INET" VRF alongside an "Internet-cust1" VRF and the global routing table. Key annotations:
1. CHN PE advertises 0/0 with RD 9829:1, i.e. 9829:1:0/0 plus the IGW1 loopback and IGW1-PE link address, NH = CHN PE.
2. BLR PE advertises 0/0 with RD 9829:2, i.e. 9829:2:0/0 plus the IGW2 loopback and IGW2-PE link address, NH = BLR PE.
iBGP peering: the IGW routers are configured as IPv4 RR clients, sending full Internet routes to the IPv4 RRs.
OSPF IGW to PE: each IGW sends a default route (0.0.0.0/0), its loopback address and the IGW-PE link address in OSPF to the connected PE.]
Note Figure 26 illustrates the placement of all the Internet gateway routers in the "INET" VRF using a dot1q sub-interface, and the propagation of routing information on the loopbacks and CE-PE links along with the OSPF defaults. The propagation of full Internet routing information from the Internet gateways to the IPv4 RR is shown. The creation of the VRF table for Internet access customers on the PE routers is also illustrated; this illustration uses an Internet customer VRF called "Internet-customer-vrf" for the sake of simplicity. Finally, the PE routers use MP-iBGP peering with the VPN-v4 RRs for propagating routing information and facilitating Internet connectivity. This illustration uses the Chennai and Bangalore sites as samples. For the complete BGP and "INET" VRF table configuration details, see section "Appendix – I Router software configurations".
Peer Groups

The following table shows the VPN-v4 peer-groups used in the BSNL NIB-II network.
Table 5 iBGP Peer Groups
Peer-group         Placement   Description
VPNREFLECTORS      All         Neighbours that are RRs to this router
CLIENTS            Edge        Neighbours that peer using iBGP to this router
VPNV4REFLECTORS    All         RRs that are peered with the PE router
iBGP Timer Tuning

In the initial deployment of the BSNL NIB-II network all timers will be left at their default values, as shown in Table 6.

Advertisement-interval: the interval between the sending of BGP routing updates. The default interval is 5 seconds for iBGP peers.

Keepalive: the frequency, in seconds, with which the Cisco IOS software sends keepalive messages to its peer. The default is 60 seconds.

Holdtime: the interval, in seconds, after which, having received no keepalive message, the software declares a peer dead. The default is 180 seconds.

Scan-time: configures import processing of VPNv4 unicast routing information from BGP routers into routing tables. Valid values for the scanning interval are from 5 to 60 seconds. The default scan time is 60 seconds.

The default iBGP timers, as shown below, will be used.
Table 6 Default Values for iBGP Timers
iBGP Timers Default Value
Advertisement-interval 5 sec
Keepalive 60 sec
Holdtime 180 sec
Scan-time 60 sec
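For reference, the defaults in Table 6 correspond to the following commands. This is only a sketch of where each timer would be set if tuning were ever required; the values shown are the defaults:

```
router bgp 9829
 bgp scan-time 60                                 ! default scan interval
 neighbor VPNREFLECTORS advertisement-interval 5  ! default for iBGP
 neighbor VPNREFLECTORS timers 60 180             ! keepalive / holdtime
```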
VPN-v4 iBGP configuration guidelines

The following design rules are used for implementing VPN-v4 MP-iBGP in the BSNL NIB-II network.
• The BGP neighbour addresses will always be the loopback addresses. The router’s loopback address will be used as the source address for all neighbour sessions. This is set using the “neighbor <address> update-source Loopback0” BGP configuration command.
• The configured BGP version will be set to 4.
• BGP neighbour changes will be logged
• All iBGP timers will be set to their default value
• All neighbours will use BGP peer-group statements. This will largely simplify the iBGP deployment, and policy enforcement. BGP peer-groups will remove the recurrence of configuration statements that may occur on a per neighbour basis.
• All neighbours will be configured to pass on community information required for policy enforcement. BGP will also be configured to support extended community attributes, facilitating support for MPLS VPNs.
• Both the RRs will be part of the same cluster, as both the RRs service the same list of MP-iBGP clients. The RRs will have an MP-iBGP peering with each other.
• The PE routers will peer with both the VPN-v4 Route Reflectors for propagating routing information of Internet access customers. The PE routers will be configured to propagate the registered address blocks of customers aggregated locally. The PE routers will also propagate MPLS Layer 3 VPN customer routing information to both the RRs.
• The PE routers will use “Next-hop-self” for all the routes that are being propagated.
• The Route Reflectors will not be configured with any inbound/outbound policies for filtering routing updates to its clients. Route filters applied on a per neighbor basis would affect the routing functionality of the MPLS Layer 3 VPN customers of BSNL. Hence the policies needed to filter / control routing updates are applied on the respective VRF tables. Details on applying route policies on the VRF tables are discussed in the “BSNL Services LLD”.
• The Internet Gateway routers will not be peered with the VPN-v4 RRs.
• Customers of BSNL will not be directly peered with the VPN-v4 RRs.
The following BGP features should be disabled:
• Auto-summarization
• IGP Synchronization
For details on MPLS VPN services and the relevant configuration information, refer to the “BSNL NIB-II services LLD”.
Note The config section “VPN-v4 Route Reflector – Peer-group CLIENTS” illustrates the relevant peer-group configuration commands needed for establishing MP-iBGP peering relationships, between the VPN-v4 Route Reflectors, and the PE routers. This illustration depicts the MP-iBGP peering between the RR and the Chennai PE Router. The Chennai PE router is used in this sample for the sake of simplicity.
The BGP cluster-id used on the VPN-v4 Route Reflectors is being standardized with a value of 200. The Route Reflectors are configured for extended community attributes required for announcing customer VPN-v4 routing information.
For the complete BGP configuration details (see section “Appendix – I Router software configurations”).
Figure 28 VPN-v4 Route Reflector – Peer Group “VPNREFLECTORS”
!
router bgp 9829
no synchronization
bgp cluster-id 200
bgp log-neighbor-changes
neighbor VPNREFLECTORS peer-group
neighbor VPNREFLECTORS remote-as 9829
neighbor VPNREFLECTORS update-source Loopback0
no neighbor VPNREFLECTORS activate
no auto-summary
!
address-family vpnv4
neighbor VPNREFLECTORS activate
neighbor VPNREFLECTORS send-community extended
neighbor 218.248.254.76 peer-group VPNREFLECTORS
exit-address-family
!
end
Note The config section “VPN-v4 Route Reflector – Peer-group VPNREFLECTORS” illustrates the relevant peer-group configuration commands needed for establishing MP-iBGP peering relationships between the Noida and Bangalore VPN-v4 Route Reflectors. The Route Reflectors are configured for the extended community attributes required for announcing customer VPN-v4 routing information. For the complete BGP configuration details, see section “Appendix – I Router software configurations”.
Figure 29 VPN-v4 Route Reflector – Peer Group “VPNV4REFLECTORS”
Note The config section “VPN-v4 Provider Edge router – Peer-group VPNV4REFLECTORS” illustrates the relevant peer-group configuration commands needed for establishing MP-iBGP peering relationships between the Provider Edge routers and the VPN-v4 Route Reflectors. This illustration depicts the MP-iBGP peering between the Chennai PE Router and the Noida VPN-v4 RR. The Chennai PE router is used in this sample for the sake of simplicity. The PE routers are configured for the extended community attributes required for announcing customer VPN-v4 routing information. The example illustrates a sample VRF table for an Internet access customer called “Internet-customer-vrf” being exported within Multiprotocol BGP, and the relevant customer routes being propagated across the BSNL NIB-II Core. The Internet customer VRF table illustrated here uses static routing between the PE and CE. The configuration also illustrates the export of the “INET” VRF, which hosts all the Internet gateway routers in the NIB-II network. For the complete BGP, “Internet-customer-vrf” and “INET” VRF table configuration details, see section “Appendix – I Router software configurations”.
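The extracted figure body for the PE-side peer-group is not reproduced here. The following is therefore only a representative sketch, modelled on the peer-group table and the RR configuration shown earlier; the RR loopback addresses (a.a.a.a, b.b.b.b) are placeholders:

```
router bgp 9829
 no synchronization
 bgp log-neighbor-changes
 neighbor VPNV4REFLECTORS peer-group
 neighbor VPNV4REFLECTORS remote-as 9829
 neighbor VPNV4REFLECTORS update-source Loopback0
 no auto-summary
 !
 address-family vpnv4
  neighbor VPNV4REFLECTORS activate
  neighbor VPNV4REFLECTORS send-community extended
  neighbor a.a.a.a peer-group VPNV4REFLECTORS
  neighbor b.b.b.b peer-group VPNV4REFLECTORS
 exit-address-family
```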
PE-Router# show ip bgp neighbor 218.248.254.76
BGP neighbor is 218.248.254.76, remote AS 9829, internal link
  BGP version 4, remote router ID x.x.x.x
  BGP state = Established, up for 01:04:30
  Last read 00:00:30, hold time is 180, keepalive interval is 60 seconds
  Neighbor capabilities:
    Route refresh: advertised and received
    Address family IPv4 Unicast: advertised and received
    Address family VPNv4 Unicast: advertised and received
  Received 83 messages, 0 notifications, 0 in queue
  Sent 78 messages, 0 notifications, 0 in queue
  Route refresh request: received 0, sent 0
  Minimum time between advertisement runs is 5 seconds
 For address family: VPNv4 Unicast
  BGP table version 188, neighbor version 186
  Index 2, Offset 0, Mask 0x4
  NEXT_HOP is always this router
  Community attribute sent to this neighbor
  2 accepted prefixes consume 72 bytes
  Prefix advertised 7, suppressed 0, withdrawn 4
The output given above provides the details of the BGP neighbour relationship between the Noida RR and the Bangalore RR. The state “Established” indicates that the BGP peer relationship and route-exchange process between the two routers is complete. The type “internal link” indicates that the BGP session is internal to the AS. The address family “VPNv4 Unicast: advertised and received” indicates that the local router is configured with the Multiprotocol extensions to BGP, required for supporting MPLS VPNs.
For details on MPLS VPN deployment and the relevant configuration information, refer to the “BSNL NIB-II services LLD”.

Figure 31 BGP VPN-v4 Routing Table Sample
PE-Router# show ip bgp vpnv4 all tags
   Network          Next Hop         In label/Out label
Route Distinguisher: 100:101 (Customer_A)
   a.a.a.a          k.k.k.k          nolabel/28
   b.b.b.b          j.j.j.j          16/aggregate(Customer_A)
   c.c.c.c          i.i.i.i          nolabel/29
Route Distinguisher: 100:102 (Customer_B)
   d.d.d.d          h.h.h.h          nolabel/30
   f.f.f.f          g.g.g.g          28/aggregate(Customer_B)
The output given above displays the BGP VPN-v4 routing table of the local router. The table lists all the customer VPN-v4 prefix information, along with the respective Route Distinguishers, and label information. The illustration given above uses sample data, and the use of this command in real-time will substitute the sample data with actual BSNL customer addresses. For details on MPLS VPN deployment and the relevant configuration information, refer to the “BSNL NIB-II services LLD”.
eBGP Design

The eBGP architecture will be provided by BSNL, and is beyond the scope of this document.
IP Packet Switching
Switching Architecture
Overview

The IP Core network in the BSNL NIB-II network will use the following packet switching and packet forwarding techniques to provide the required functionality:
• Cisco Express Forwarding (CEF)
• Multiprotocol Label Switching (MPLS)
• Label Distribution Protocol (LDP)
These techniques are described in detail in the sections below, along with the specific features they enable in the IP Core network.
CEF

Cisco Express Forwarding (CEF) is an advanced Layer 3 IP switching technology. CEF optimizes network performance and scalability for networks with large and dynamic traffic patterns, such as the Internet.
Cisco Express Forwarding evolved to best accommodate the changing network dynamics and traffic characteristics resulting from increasing numbers of short-duration flows, typically associated with Web-based applications and interactive sessions.

Earlier Layer 3 switching paradigms use a route-cache model to maintain a fast lookup table for destination network prefixes. The route-cache entries are traffic-driven: the first packet to a new destination is routed via routing table information, and as part of that forwarding operation a route-cache entry for that destination is added. This allows subsequent packet flows to the same destination network to be switched based on an efficient route-cache match. These entries are periodically aged out to keep the route cache current, and can be immediately invalidated if the network topology changes.

This 'demand-caching' scheme, which maintains a very fast access subset of the routing topology information, is optimized for scenarios in which the majority of traffic flows are associated with a subset of destinations. However, traffic profiles at the core of the Internet (and potentially within some large Enterprise networks) no longer resemble this model, so a new switching paradigm was required to eliminate the growing cache maintenance caused by increasing numbers of topologically dispersed destinations and dynamic network changes.
CEF avoids the potential overhead of continuous cache churn by instead using a Forwarding Information Base (FIB) for the destination switching decision. The FIB mirrors the entire contents of the IP routing table, i.e. there is a one-to-one correspondence between FIB table entries and routing table prefixes, so there is no need to maintain a route cache. This offers significant benefits in terms of performance, scalability, network resilience and functionality, particularly in large, complex networks with dynamic traffic patterns.
MPLS constructs the Label Forwarding Information Base (LFIB) based on the CEF Forwarding Information Base (FIB), and requires CEF to be enabled on all routers and router interfaces that are in the forwarding path.
CEF configuration guidelines

The following configuration guidelines are used for implementing CEF in the BSNL NIB-II network:
• CEF will be enabled on all the Provider and Provider Edge routers in the BSNL NIB-II network core.
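On a Cisco IOS router this guideline amounts to a single global command; this is a sketch, and platforms without distributed switching would use plain "ip cef" instead:

```
! Enable (distributed) Cisco Express Forwarding globally
ip cef distributed
```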
MPLS based Forwarding

In conventional Layer 3 forwarding, as a packet traverses the network, each router extracts forwarding information from the Layer 3 header. Header analysis is repeated at each router (hop) through which the packet passes. In an MPLS network, packets are forwarded based on labels. Each IP network that is reachable through an interface is assigned a unique label, and a mapping is established between an incoming label and an outgoing label. This mapping is maintained in the Label Forwarding Information Base (LFIB) table. Each node examines the incoming label, does a table lookup, swaps the incoming label for the outgoing label, and then forwards the packet out of the outgoing interface.

Figure 32 MPLS Label Imposition
[Figure: an IP packet (Layer 3 header plus data) carried inside a Layer 2 frame, with a two-label MPLS stack imposed between the Layer 2 and Layer 3 headers; the outer label (Label 1) steers the packet to the destination PE, while the inner label (Label 2) identifies the customer destination.]
Figure 32 shows the details of a Double Stack MPLS Label imposition.
Figure 33 MPLS Header
Layer 2 Header + MPLS Header + Layer 3 Header

LABEL (20 bits) | EXP (3 bits) | S (1 bit) | TTL (8 bits)

EXP = Experimental bits (CoS), giving direct CoS support in MPLS
S = Bottom of stack
TTL = Time to live

Figure 33 shows the details of the MPLS header. It is located between the Layer 2 header and the Layer 3 (IP) header. The EXP bits and the TTL field of the MPLS header can be copied from the IP header. The S (bottom-of-stack) bit indicates whether this is the last label in the stack, i.e. whether more than one MPLS label is present in the packet.
MPLS configuration guidelines

The following configuration guidelines are used for implementing MPLS in the BSNL NIB-II network:
• MPLS will be enabled on all the Provider and Provider Edge routers in the BSNL NIB-II network core.
• MPLS will not be enabled on the customer facing interfaces in order to avoid label spoofing.
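These two guidelines can be sketched as follows; the interface name is a placeholder:

```
mpls ip                       ! enable MPLS forwarding globally
!
interface POS1/1/1            ! core facing interface (sample name)
 mpls ip                      ! label switching runs on core links only
!
! Customer facing interfaces are left without "mpls ip", so labelled
! packets received from customers are dropped (prevents label spoofing).
```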
MPLS Traffic Engineering

MPLS Traffic Engineering enables an MPLS backbone to replicate and expand upon the traffic engineering capabilities traditionally available only on Layer 2 networks. MPLS is an integration of Layer 2 and Layer 3 technologies; by making traditional Layer 2 features available to Layer 3, MPLS enables traffic engineering.
MPLS Traffic Engineering provides an integrated approach to traffic engineering. With MPLS, traffic engineering capabilities are integrated into Layer 3, which optimizes the routing of IP traffic given the constraints imposed by backbone capacity and topology. MPLS Traffic Engineering enhances standard Interior Gateway Protocols (IGPs), such as OSPF, to automatically map packets onto the appropriate traffic flows, and transports the traffic flows across the network using MPLS based forwarding.

It employs "constraint-based routing", in which the path for a traffic flow is the shortest path that meets the resource requirements (constraints) of the traffic flow. A traffic flow has bandwidth requirements, media requirements, a priority that is compared with the priority of other flows, and so forth. MPLS Traffic Engineering can also recover from link or node failures by adapting to the new constraints presented by the changed topology.

MPLS Traffic Engineering understands the backbone topology and available resources. It accounts for link bandwidth and for the size of the traffic flow when determining routes for LSPs across the backbone. It has a dynamic adaptation mechanism that enables the backbone to be resilient to failures, even if several primary paths are precalculated off-line. It also includes enhancements to the IGP shortest path first (SPF) calculations to automatically determine which traffic should be sent over which LSPs.
The BSNL NIB-II network uses the following MPLS Traffic Engineering features:
• MPLS Differentiated Class of Service
• MPLS Fast Re-route
MPLS Differentiated Class of Service architecture

The BSNL NIB-II network uses a Class of Service model based on the marking of EXP code points. Traffic is classified at the network ingress based on source/type, and the PE router maps / re-writes the EXP bits of the packet’s MPLS shim header. The BSNL NIB-II network supports differential Class of Service for MPLS Layer 3 VPNs and MPLS Layer 2 VPNs by examining the information in the MPLS EXP bits.

Figure 34 MPLS EXP Field

LABEL (20 bits) | EXP (3 bits) | S (1 bit) | TTL (8 bits)

Figure 34 provides a snapshot of the MPLS EXP field. The network provides four traffic classes, namely “Platinum”, “Gold”, “Silver”, and “Bronze”, apart from the default class, i.e. “System”. The “Platinum” class provides expedited forwarding of network traffic and is configured with a low Packet Loss Priority (PLP). PLP defines the drop probability of traffic during network congestion; a high PLP indicates that traffic of the selected class will be dropped aggressively during congestion. The “Gold” class provides assured forwarding of network traffic with a low PLP. The “Silver” class provides assured forwarding of network traffic with a medium PLP. “Bronze” provides best-effort forwarding of network traffic and uses a high PLP.

The architecture uses a single Traffic Engineered LSP based design, configured to carry all traffic classes. This significantly reduces the number of LSPs in the network, which scales well as the network grows. It also provides operational simplicity and reduces engineering complexity compared with per-class, per-LSP designs, which are suitable only for small to medium sized networks.
Figure 35 Class of Service Architecture
[Figure: the four traffic classes (Platinum, Gold, Silver, Bronze) mapped onto four queues, with bandwidth allocations of Queue 0 = 30%, Queue 1 = 30%, Queue 2 = 20% and Queue 3 = 20%.]

Figure 35 provides a snapshot of the Class of Service architecture in the BSNL NIB-II network. For details on Class of Service deployment and the relevant configuration information, refer to the “BSNL NIB-II services LLD”.
MPLS Fast Reroute

MPLS traffic engineering automatically establishes and maintains label-switched paths (LSPs) across the backbone using the Resource Reservation Protocol (RSVP). The path used by a given LSP is based on the LSP resource requirements and available network resources such as bandwidth. Available resources are flooded via extensions to a link-state based Interior Gateway Protocol (IGP), such as OSPF. Paths for LSPs are calculated at the LSP head-end. Under failure conditions, the head-end determines a new route for the LSP. Recovery at the head-end provides for the optimal use of resources; however, due to messaging delays, head-end recovery cannot be as fast as a repair made at the point of failure.
Fast Reroute provides link protection to LSPs. This enables all traffic carried by LSPs that traverse a failed link to be rerouted around the failure. The reroute decision is completely controlled locally by the router interfacing the failed link. The headend of the tunnel is also notified of the link failure through OSPF or through RSVP; the headend then attempts to establish a new LSP that bypasses the failure.
Link Protection

Fast Reroute link protection can protect an individual link from failure. The switching time for an FRR protected link is designed to match SONET APS times of around 50 ms. Each link along the TE LSP can be protected and a backup path initiated from the point of the link failure, independently of the head-end router. With FRR, the head-end router will not be aware of the failure; from its perspective the established TE tunnel operates as normal.
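A minimal link-protection sketch, assuming a primary TE tunnel Tunnel1 on the head-end and a pre-established backup tunnel Tunnel100 at the point of local repair (both names, and the interface, are placeholders):

```
! Head-end router: mark the TE tunnel as requesting FRR protection
interface Tunnel1
 tunnel mpls traffic-eng fast-reroute
!
! Router adjacent to the protected link (point of local repair):
! bind the backup tunnel to the protected interface
interface POS1/1/1
 mpls traffic-eng backup-path Tunnel100
```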
Figure 36 Label Switched Path with Link Protection
[Figure: a Traffic Engineered LSP from head-end R1 to tail-end R10 whose primary path crosses the protected link between R6 and R5; when that link fails, R6 redirects traffic onto a backup path via R2, R3 and R4 to R5. Routers R8 and R9 complete the topology.]
Figure 36 illustrates the process of Fast Reroute. The link between routers R6 and R5 is protected by Fast Reroute. In the event of a failure, R6 detects that the link is not operational and immediately transfers any data in a TE tunnel using that link to the backup TE tunnel via {R2, R3, R4} to R5. The important aspect of link protection is that the labelled traffic transmitted on the backup path must always end at the router connected to the other end of the protected link. The router names used in this illustration are sample data. Finally, once the original link is restored, the original LSP is re-established through the path optimization timers. MPLS link protection can be deployed in the BSNL NIB-II network; the relevant configuration details are given in the section “CEF/MPLS/MPLS-TE/LDP configuration”.
MPLS Traffic Engineering configuration guidelines

The following configuration guidelines are used for enabling MPLS traffic engineering in the BSNL NIB-II network:
• MPLS Traffic Engineering will be enabled on the Provider, and Provider Edge routers.
• All core facing interfaces on the Provider and Provider Edge routers will be configured for MPLS Traffic Engineering.
• MPLS Traffic Engineering extensions for OSPF will be enabled. This will instruct OSPF to create the Traffic Engineering Database (TED) for OSPF area 0
• Each PE router will be configured with a single Traffic Engineered LSP.
• Resource Reservation Protocol signaling will be enabled on all the core facing interfaces of the Provider, and Provider Edge routers.
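Taken together, the guidelines above translate into configuration along the following lines. This is a sketch: the interface name and the reservable bandwidth are sample values, and the OSPF process ID follows the one appearing in the sample outputs elsewhere in this document:

```
mpls traffic-eng tunnels                  ! enable TE globally
!
router ospf 100
 mpls traffic-eng router-id Loopback0     ! TE extensions for OSPF
 mpls traffic-eng area 0                  ! build the TED for area 0
!
interface POS1/1/1                        ! core facing interface
 mpls traffic-eng tunnels                 ! TE on this link
 ip rsvp bandwidth 48125                  ! RSVP reservable bandwidth (kbps)
```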
Label Distribution Protocol

A protocol is used between the routers in an MPLS network to assign labels to IP networks and to exchange label information with other routers. The protocol used in the BSNL NIB-II network is:

• Label Distribution Protocol (LDP, port number 646)
LDP is used to assign labels to networks that have been learnt by the IGP. At the ingress of the MPLS network, an MPLS header is added to the IP packet. At each hop, the packet is forwarded by looking only at the label in the MPLS header; the label is swapped before the packet is forwarded to the next router. At the egress of the MPLS network the MPLS header is stripped and the IP packet is forwarded out of the egress interface.

Figure 37 Overview of MPLS Label Switching
[Figure: an MPLS enabled IP network. 1b. Tag/Label Distribution Protocol (TDP/LDP) establishes label to destination network mappings. 3. Label switches switch labelled packets using label swapping. 4. The label edge router at the egress removes the label and delivers the IP packet.]
Figure 37 illustrates the label switching process.
Interface MTU size

Two levels of labels are used to deliver MPLS-VPN services. The first level label is distributed by the LDP protocol, whilst the second level label is created by MP-BGP for VPN distribution.
When these two labels are placed into the frame they increase the frame size by 8 bytes (4 bytes per label). This is particularly relevant to Ethernet interfaces, which have a default Maximum Transmission Unit of 1500 bytes, since it is now possible to have a valid Ethernet frame of 1526 bytes, known as a baby giant.
To support label switching over an interface, the MTU size for MPLS frames must be increased with the following command on every core interface of the P and PE routers:

mpls mtu 1524

This allows an MPLS frame of up to 1524 bytes over the link, i.e. a 1500-byte packet carrying up to six 4-byte labels.
Note Six labels have been allowed to cater for future services on the network, such as traffic engineering and Carrier Supporting Carrier. Each service may require an increase in the label stack from 2 to something greater.
Load Balancing

For routers in the MPLS network with parallel paths, Cisco Express Forwarding (CEF) load balancing will be on by default between MPLS nodes.
Regardless of how many parallel circuits exist between a pair of MPLS routers, the Label Distribution Protocol (LDP) will only run a single session between them, not one per link.
LDP will associate the same session ID with every numbered interface on the PE; hence, regardless of which IP address the PE uses as a next hop, LDP at the adjacent router will know it is the same peer. So when the adjacent router needs to bind a label for a downstream PE, it will use the same label for a given destination IP address, even though those addresses are redistributed via OSPF and are reachable via multiple paths.
The following figure shows the forwarding table for destination 218.248.254.72, which can be reached via two parallel POS interfaces. There is one forwarding entry for each interface through which the destination can be reached; however, they all use the same label, 24.
Figure 38 Multiple Label Paths to Same Destination
PE_Router# show mpls for 218.248.254.72
Local  Outgoing    Prefix             Bytes tag  Outgoing    Next Hop
tag    tag or VC   or Tunnel Id       switched   interface
24     Pop tag     218.248.254.72/32  0          POS1/1/1    point2point
       Pop tag     218.248.254.72/32  0          POS1/1/2    point2point
Because MPLS uses CEF switching, load balancing will occur by default on a per-destination basis by using the CEF hash algorithm to select the path. The following figure compares per-destination and per-packet load balancing.
Figure 39 Per Destination vs. Per Packet Load Balancing
Note If per-packet load sharing is selected, packets are round-robined over the paths. Per-packet load sharing is generally discouraged because some applications, such as voice, are affected by out-of-order packets. Per-destination load sharing works better the greater the number of source/destination pairs, and is the recommended way of configuring load balancing in the core. However, per-packet load sharing can be configured, depending on the behaviour of the applications.
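Should per-packet behaviour ever be required for a specific application profile, it is a per-interface override (a sketch; the interface name is a placeholder):

```
! CEF load sharing is per-destination by default; this overrides it
interface POS1/1/1
 ip load-sharing per-packet
```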
LDP configuration guidelines

The following configuration guidelines are used for implementing LDP in the BSNL NIB-II network:
• LDP is the protocol that should be used for label distribution
• LDP should be enabled on all the links between the P routers and the links between the P and PE routers.
• The Router ID for LDP sessions should be forced to loopback0.
• CEF should be globally enabled on the Router.
• TTL propagation will be disabled on all the P and PE routers.
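The LDP guidelines above can be sketched as the following global commands:

```
ip cef distributed                  ! CEF enabled globally
mpls label protocol ldp             ! use LDP (not TDP) for label distribution
mpls ldp router-id Loopback0 force  ! force the LDP router ID to Loopback0
no mpls ip propagate-ttl            ! do not copy the IP TTL into the label
```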
Note The config section “Provider Edge router – CEF/MPLS/LDP” illustrates the relevant commands needed for configuring CEF, MPLS, and LDP. The PE routers are configured for MPLS Traffic Engineering with the required TE extensions configured for the IGP (OSPF). RSVP signaling is enabled on the core facing interfaces for facilitating TE. This illustration uses the Bangalore PE router for the sake of simplicity. For the complete CEF/MPLS/LDP configuration details, see section “Appendix – I Router software configurations”.
Note The config section “Provider router – CEF/MPLS/LDP” illustrates the relevant commands needed for configuring CEF, MPLS, and LDP. The P routers are configured for MPLS Traffic Engineering with the required TE extensions configured for the IGP (OSPF). RSVP signaling is enabled on the core facing interfaces for facilitating TE. This illustration uses the Bangalore P router for the sake of simplicity. For the complete CEF/MPLS/LDP configuration details, see section “Appendix – I Router software configurations”.
PE-Router# show ip cef summary
IP Distributed CEF with switching (Table Version 135165)
  45788 routes, 0 reresolve, 4 unresolved routes (0 old, 4 new)
  45788 leaves, 2868 nodes, 8442864 bytes, 135165 inserts, 89377 invalidations
  0 load sharing elements, 0 bytes, 0 references
  1 CEF resets, 0 revisions of existing leaves
  refcounts: 527870 leaf, 466167 node
The output given above displays the operational status of CEF on the local router. The “IP Distributed CEF with switching” indicates that DCEF is configured and active on the local router. The illustration given above uses sample data, and the use of this command in real-time will substitute the sample data with actual addresses.
Figure 43 MPLS Interface Summary Sample
P-Router# show mpls interfaces
Interface        IP          Tunnel   Operational
POS1/1/1         Yes (ldp)   Yes      Yes
POS1/1/2         Yes (ldp)   Yes      No
POS1/1/3         Yes (ldp)   Yes      Yes
The output given above displays the operational status of MPLS on the local router. The status “ldp” indicates that MPLS label forwarding is configured and active on the local router and its interfaces. The illustration given above uses sample data, and the use of this command in real-time will substitute the sample data with actual addresses.

Figure 44 LDP Neighbour Table Sample
PE-Router# show mpls ldp neighbors
    Peer LDP Ident: a.a.a.a:0; Local LDP Ident b.b.b.b:0
        TCP connection: a.a.a.a.11072 - b.b.b.b.646
        State: Oper; Msgs sent/rcvd: 65/73; Downstream
        Up time: 00:43:02
        LDP discovery sources:
          POS1/1/0, Src IP addr: c.c.c.c
        Addresses bound to peer LDP Ident:
          d.d.d.d   e.e.e.e   f.f.f.f   g.g.g.g   h.h.h.h
    Peer LDP Ident: k.k.k.k:0; Local LDP Ident m.m.m.m:0
        TCP connection: k.k.k.k.11000 - m.m.m.m.646
        State: Oper; Msgs sent/rcvd: 26/25; Downstream
        Up time: 00:10:35
        LDP discovery sources:
          POS1/2/0, Src IP addr: x.x.x.x
        Addresses bound to peer LDP Ident:
          y.y.y.y   u.u.u.u   w.w.w.w
IP Packet Switching
BSNL National Internet Backbone - II
65 IP MPLS Core Design
The output given above displays the LDP neighbour table on the local router. The state “Oper” indicates that the LDP session between the local router and its neighbour is active and operational. The illustration uses sample data; running this command on a live router will show actual addresses.

Figure 45 LDP Database Sample
P-Router# show mpls ldp bindings
  lib entry: b.b.b.b/8, rev 4
        local binding:  tag: imp-null
  lib entry: c.c.c.c/16, rev 1137
        local binding:  tag: 16
        remote binding: lsr: a.a.a.a:0, label: 16
  lib entry: d.d.d.d/16, rev 1139
        local binding:  tag: 17
  lib entry: m.m.m.m/32, rev 1257
        local binding:  tag: 18
  lib entry: g.g.g.g/32, rev 14
        local binding:  tag: imp-null
The output given above displays the Label Information Base on the local router. The “local binding” and “remote binding” indicates the label bindings for the prefix. The illustration given above uses sample data, and the use of this command in real-time will substitute the sample data with actual addresses.
Figure 46 OSPF TE Database Sample

P-Router# show ip ospf database opaque-area

            OSPF Router with ID (a.a.a.a) (Process ID 100)

                Type-10 Opaque Link Area Link States (Area 0)

  LS age: 397
  Options: (No TOS-capability, DC)
  LS Type: Opaque Area Link
  Link State ID: s.s.s.s
  Opaque Type: 1
  Opaque ID: 0
  Advertising Router: a.a.a.a
  LS Seq Number: 80000003
  Checksum: 0x12C9
  Length: 132
  Fragment number : 0

    MPLS TE router ID : a.a.a.a

    Remote-P-Router
    Link connected to Point-to-Point network
      Link ID : g.g.g.g
      Interface Address : h.h.h.h
      Neighbor Address : k.k.k.k
      Admin Metric : 195
      Maximum bandwidth : 2048000
      Maximum reservable bandwidth : 48125
      Number of Priority : 8
      Priority 0 : 48125
      Priority 1 : 48125
      Priority 2 : 48125
      Priority 3 : 48125
      Priority 4 : 48125
      Priority 5 : 16125
      Priority 6 : 8125
      Priority 7 : 8125
      Affinity Bit : 0x0

    Number of Links : 1

  LS age: 339
  Options: (No TOS-capability, DC)
  LS Type: Opaque Area Link
  Link State ID: r.r.r.r
  Opaque Type: 1
  Opaque ID: 0
  Advertising Router: q.q.q.q
  LS Seq Number: 80000001
  Checksum: 0x80A7
  Length: 132
  Fragment number : 0
The output given above displays the OSPF TE database on the local router. The “Type-10 Opaque Link Area Link States (Area 0)” heading indicates that the area supports flooding of Type-10 LSAs, and “Advertising Router” identifies the router that originated the LSA. The illustration uses sample data; running this command on a live router will show actual addresses.

Figure 47 Fast Reroute Database Sample
PE-Router# show mpls traffic-eng fast-reroute database x.x.x.x
Tunnel head fast reroute information:
Prefix          Tunnel  In-label  Out intf/label    FRR intf/label  Status
x.x.x.x/16      Tu111   Tun hd    PO0/0:Untagged    Tu4000:16       ready
x.x.x.x/16      Tu449   Tun hd    PO0/0:Untagged    Tu4000:736      ready
x.x.x.x/16      Tu314   Tun hd    PO0/0:Untagged    Tu4000:757      ready
x.x.x.x/16      Tu313   Tun hd    PO0/0:Untagged    Tu4000:756      ready
The output given above displays the Fast Reroute database on the local router. The status “ready” indicates that the backup tunnel is ready for FRR link protection. The illustration uses sample data; running this command on a live router will show actual addresses.
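For context, link protection of the kind shown in the database above is typically set up along the following lines. The tunnel and interface identifiers are taken from the sample output; the pairing of Tunnel4000 with POS0/0 is an illustrative assumption, not the deployed configuration.

```
! Head-end tunnel requests fast-reroute protection
interface Tunnel111
 tunnel mpls traffic-eng fast-reroute
!
! Protected link: pre-established backup tunnel Tu4000 takes over on failure
interface POS0/0
 mpls traffic-eng backup-path Tunnel4000
```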
Network Management Services
Overview

This section discusses some of the services that will be enabled on all network devices in the BSNL NIB-II network.
Network Time Protocol (NTP)

The NTP protocol is designed to synchronize and distribute time to devices - routers, switches, servers, and client machines - in an IP network. It is primarily an aid to accounting, where accurate time is needed to synchronize database transactions or billing records. The benefit extends to troubleshooting, in that each device can stamp log file records with a synchronized timestamp. An NTP server that receives time from a non-NTP source is normally called a stratum 1 node. Any node that receives time from a stratum 1 node is called a stratum 2 node, and so on. An unsynchronized node will typically report itself as either stratum 0 (in protocol messages) or stratum 16 (in user displays). There are two types of NTP relationships between two hosts: client-server and peer. In a client-server relationship, the client periodically polls a configured server to determine its concept of the correct time. If multiple servers at the same stratum are available, the client synchronizes with the one that exhibits the least jitter. Servers do not maintain state for clients.
NTP Architecture

In keeping with the generic Core, Distribution, and Access three-layer model, the best-practice topology for an NTP deployment follows the same principle. The description given here assumes no NTP source is available from the internet. The IP-v4 Route Reflectors are used as the primary and secondary NTP servers. The Master node (Chennai IP-v4 RR) will take its time from its local hardware clock and will operate at NTP stratum 3. The second route-reflector (Mumbai IP-v4 RR) will also be configured as an NTP master, but at stratum 10, so it will not be used as the NTP server unless the primary is unavailable. The secondary master will also use the primary as its NTP server, so that it has an authoritative time source.
Note The Master node may actually sync to atomic-accuracy clocks over the internet using NTP. This service is free, but it has been assumed that BSNL wishes to minimize traffic between the network and the internet as far as possible.
The distribution nodes will be all the other routers in the BSNL NIB-II network, which will synchronize to the Master nodes. The access nodes will be the Customer Edge devices, which will sync from their local PE router. In this MPLS VPN environment the CE devices may only use their local PE for NTP synchronization; they cannot reach the Core NTP nodes. The NTP topology is shown in the following illustration.
Figure 48 NTP Logical Topology
Note Figure 48 provides a snapshot of the NTP topology and the various peering relationships.
NTP Configuration

Figure 49 Primary NTP Server Configuration
Hostname <hostname>
!
clock set 13:32:00 18 August 2004
clock update-calendar
service timestamps debug datetime localtime show-timezone
service timestamps log datetime localtime show-timezone
!
ntp source loopback 0
ntp master 3
!
clock timezone GMT 5 30
!
clock calendar-valid
!
end
Figure 50 Secondary NTP Server Configuration
Hostname <hostname>
!
service timestamps debug datetime localtime show-timezone
service timestamps log datetime localtime show-timezone
!
ntp source loopback 0
ntp master 10
ntp server 218.248.254.75
!
clock timezone GMT 5 30
!
ntp update-calendar
!
end
Figure 51 NTP Client Configuration
Hostname <hostname>
!
service timestamps debug datetime localtime show-timezone
service timestamps log datetime localtime show-timezone
!
ntp server 218.248.254.75
ntp server 218.248.254.74
!
clock timezone GMT 5 30
!
ntp update-calendar
!
end
Figure 52 CE Device NTP Configuration
Hostname <hostname>
!
service timestamps debug datetime localtime show-timezone
service timestamps log datetime localtime show-timezone
!
ntp server X.X.X.X
!
clock timezone GMT 5 30
!
ntp update-calendar
!
end
Note The CE devices must use the IP address of a directly connected interface on the PE device for their NTP server, otherwise no synchronization will occur.
Simple Network Management Protocol (SNMP)

The use of SNMP to collect management data, and in some circumstances to configure network devices, is essential to running an IP network effectively.
The recommendation is that all devices be configured for SNMP read-only access from a select list of NMS devices only. The BSNL NIB-II network will be configured for SNMP access; the configuration will need to be customized based on the NMS needs of the network. The following configuration provides a baseline for SNMP polling.

Figure 53 SNMP Configuration
Hostname <hostname>
!
snmp-server contact <contact>
snmp-server location <location>
snmp-server community BSNL ro 1
!
access-list 1 permit <NMS-address>
!
end
IP Address Space Design
Layer 3 Addressing
IP Addressing

This section provides the IP addressing plan for the BSNL NIB-II network.
Core Links

Backbone link addresses are usually assigned from contiguous blocks. These addresses are typically filtered at the border of the network, so that only internal users can reach the backbone link and loopback addresses. In the current network link allocation, four /24 CIDR blocks are used for interconnecting the core of the network. The IP address blocks used are as follows: “218.248.250.0/24”, “218.248.251.0/24”, “218.248.252.0/24”, and “218.248.253.0/24”.
Table 7 Address Plan for Core Links
Network Address Host Addresses CIDR Prefix Description of use
218.248.250.0 4 /30 STM-16 “Bangalore – Noida”
218.248.253.0 4 /30 Gigabit Ethernet “(Chandigarh) P – Cisco PE“
218.248.253.4 4 /30 STM-1 “Nagpur - Aurangabad”
218.248.253.8 4 /30 STM-1 “Nagpur – Panjim (Goa)”
218.248.253.12 4 /30 STM-1 “Nagpur – Kolhapur”
218.248.253.16 4 /30 STM-1 “Nagpur – Nashik”
218.248.253.20 4 /30 Gigabit Ethernet “(Nagpur) P – Cisco PE“
218.248.253.24 4 /30 STM-1 “Mangalore – Mysore”
218.248.253.28 4 /30 STM-1 “Mangalore – Belgaum”
218.248.253.32 4 /30 STM-1 “Mangalore – Calicut”
218.248.253.36 4 /30 STM-1 “Mangalore – Hubli”
218.248.253.40 4 /30 Gigabit Ethernet “(Mangalore) P – Cisco PE“
218.248.253.44 4 /30 STM-16 “Bangalore – Indore”
218.248.253.48 4 /30 STM-1 “Kalyan – Pune”
218.248.253.52 4 /30 STM-1 Delhi – Noida
218.248.253.56 8 /29 Reserved for future core links
218.248.253.64 64 /26 Reserved for future core links
218.248.253.128 128 /25 Reserved for future core links
Note Links connecting all the routers in a particular site to the P router are treated as point-to-point links with /30 address assignments. One /29, one /26, and one /25 address block are reserved for future core link address allocations; subsequent additional links can draw on these buffer blocks.
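Taking the first entry of Table 7 as an example, each /30 assignment yields two usable host addresses, one per end of the point-to-point link. The interface names below are illustrative assumptions; only the addresses come from the table.

```
! Bangalore end of the STM-16 "Bangalore - Noida" link (218.248.250.0/30)
interface POS1/1/1
 description STM-16 Bangalore - Noida
 ip address 218.248.250.1 255.255.255.252
!
! Noida end of the same link
interface POS1/1/1
 description STM-16 Noida - Bangalore
 ip address 218.248.250.2 255.255.255.252
```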
Loopback Addresses

Loopback addresses are used to give each device in the network a fixed identity. Without a loopback configured, a router takes its identity from one of its interface addresses; if that interface goes down, a different IP address is used for the router identity. Such a situation confuses network operations and report generation, since the identity of the router changes as the network topology changes. To eliminate this possibility, fixed loopback addresses are assigned to the routers, so that the identity of each router remains fixed over the course of operation of the network. One /24 CIDR block is used for assigning loopback addresses for all routers and switches in the BSNL NIB-II Network. The IP address block used is as follows: “218.248.254.0/24”. The /24 CIDR block is further variably subnetted into one /25 and two /26 address blocks, architected as follows: one /26 address block is allocated for loopback addresses of the Provider routers; another /26 address block is allocated for loopback addresses of miscellaneous routers and switches such as the IDC Edge routers, Route-Reflectors, IGW routers, IXP routers, IP-TAX routers, and Catalyst LAN switches; finally, the /25 block is used for loopback addresses of all the PE routers.
Note Assigning loopback addresses for routers from various blocks will make the addresses easily recognizable, and will assist in network operations.

Table 8 Address Plan for all Provider Router Loopback Addresses
Network Address Host Addresses CIDR Prefix Description of use
218.248.254.1 1 /32 Loopback for Noida P router
218.248.254.2 1 /32 Loopback for Mumbai P router
218.248.254.3 1 /32 Loopback for Bangalore P router
218.248.254.4 1 /32 Loopback for Chennai P router
218.248.254.5 1 /32 Loopback for Kolkatta P router
218.248.254.6 1 /32 Loopback for Pune P router
218.248.254.7 1 /32 Loopback for Hyderabad P router
218.248.254.8 1 /32 Loopback for Ahmedabad P router
218.248.254.9 1 /32 Loopback for Ernakulam P router
218.248.254.10 1 /32 Loopback for Indore P router
218.248.254.11 1 /32 Loopback for Jaipur P router
218.248.254.12 1 /32 Loopback for Jullunder P router
218.248.254.13 1 /32 Loopback for Lucknow P router
218.248.254.14 1 /32 Loopback for Patna P router
218.248.254.15 1 /32 Loopback for Coimbatore P router
218.248.254.16 1 /32 Loopback for Nagpur P router
218.248.254.17 1 /32 Loopback for Mangalore P router
218.248.254.18 1 /32 Loopback for Chandigarh P router
218.248.254.19 1 /32 Loopback for Allahabad P router
218.248.254.20 1 /32 Loopback for Ranchi P router
218.248.254.21 1 /32 Loopback for Guwahati P router
218.248.254.22 1 /32 Loopback for Raipur P router
218.248.254.23 1 /32 Loopback for Vijayawada P router
218.248.254.24 1 /32 Loopback for Bhubhaneshwar P router
218.248.254.25 to 218.248.254.63 39 N/A Reserved for future P router loopback addresses
Note The Loopback Address allocation ensures optimal reserves for future allotments.
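As an illustration of Table 8, the Bangalore P router would carry its /32 loopback as follows, so that its identity is independent of any physical interface. The OSPF process number and passive-interface statement follow the configurations in Appendix I; the explicit router-id line is an illustrative addition.

```
interface Loopback0
 description Router identity - Bangalore P router
 ip address 218.248.254.3 255.255.255.255
!
router ospf 100
 router-id 218.248.254.3
 passive-interface Loopback0
```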
Table 9 Address Plan for all Miscellaneous Router and Switch Loopback Addresses
Network Address Host Addresses CIDR Prefix Description of use
218.248.254.65 1 /32 Loopback for Noida IDC Edge router
218.248.254.66 1 /32 Loopback for Mumbai IDC Edge router
218.248.254.67 1 /32 Loopback for Bangalore IDC Edge router
218.248.254.68 1 /32 Loopback for Noida IP-TAX router
218.248.254.69 1 /32 Loopback for Mumbai IP-TAX router
218.248.254.70 1 /32 Loopback for Bangalore IP-TAX router
218.248.254.71 1 /32 Loopback for Chennai IP-TAX router
218.248.254.72 1 /32 Loopback for Kolkatta IP-TAX router
218.248.254.73 1 /32 Loopback for Noida VPN-v4 Route-Reflector
218.248.254.74 1 /32 Loopback for Mumbai IP-v4 Route-Reflector
218.248.254.75 1 /32 Loopback for Chennai IP-v4 Route-Reflector
218.248.254.76 1 /32 Loopback for Bangalore VPN-v4 Route-Reflector
218.248.254.77 1 /32 Loopback for Bangalore IGW router
218.248.254.78 1 /32 Loopback for Chennai IGW router
218.248.254.79 1 /32 Loopback for Kolkatta IGW router
218.248.254.80 1 /32 Loopback for Mumbai IGW router
218.248.254.81 1 /32 Loopback for Noida IGW router
218.248.254.82 1 /32 Loopback for Hyderabad IGW router
218.248.254.83 1 /32 Loopback for Pune IGW router
218.248.254.84 1 /32 Loopback for Ernakulam IGW router
218.248.254.85 1 /32 Loopback for Bangalore IXP router
218.248.254.86 1 /32 Loopback for Chennai IXP router
218.248.254.87 1 /32 Loopback for Kolkatta IXP router
218.248.254.88 1 /32 Loopback for Mumbai IXP router
218.248.254.89 1 /32 Loopback for Noida IXP router
218.248.254.90 1 /32 Loopback for Hyderabad IXP router
218.248.254.91 1 /32 Loopback for Pune IXP router
218.248.254.92 1 /32 Loopback for Ernakulam IXP router
218.248.254.93 1 /32 Loopback for Catalyst LAN Switch
218.248.254.94 1 /32 Loopback for Catalyst LAN Switch
218.248.254.95 1 /32 Loopback for Catalyst LAN Switch
218.248.254.96 1 /32 Loopback for Catalyst LAN Switch
218.248.254.97 1 /32 Loopback for Catalyst LAN Switch
218.248.254.98 to 218.248.254.127 30 N/A Reserved for future loopback addresses
Note The Loopback Address allocation plan ensures optimal reserves for future allotments.
Table 10 Address Plan for all Provider Edge Router Loopback Addresses
Network Address Host Addresses CIDR Prefix Description of use
218.248.254.129 1 /32 Loopback for Noida PE router
218.248.254.130 1 /32 Loopback for Noida Juniper PE router
218.248.254.131 1 /32 Loopback for Mumbai PE router
218.248.254.132 1 /32 Loopback for Mumbai Juniper PE router
218.248.254.133 1 /32 Loopback for Bangalore PE router
218.248.254.134 1 /32 Loopback for Bangalore Juniper PE router
218.248.254.135 1 /32 Loopback for Chennai PE router
218.248.254.136 1 /32 Loopback for Chennai Juniper PE router
218.248.254.137 1 /32 Loopback for Kolkatta PE router
218.248.254.138 1 /32 Loopback for Kolkatta Juniper PE router
218.248.254.139 1 /32 Loopback for Pune PE router
218.248.254.140 1 /32 Loopback for Pune Juniper PE router
218.248.254.141 1 /32 Loopback for Hyderabad PE router
218.248.254.142 1 /32 Loopback for Hyderabad Juniper PE router
218.248.254.143 1 /32 Loopback for Ahmedabad PE router
218.248.254.144 1 /32 Loopback for Ahmedabad Juniper PE router
218.248.254.145 1 /32 Loopback for Ernakulam PE router
218.248.254.146 1 /32 Loopback for Ernakulam Juniper PE router
218.248.254.147 1 /32 Loopback for Indore PE router
218.248.254.148 1 /32 Loopback for Jaipur PE router
218.248.254.149 1 /32 Loopback for Jullunder PE router
218.248.254.150 1 /32 Loopback for Lucknow PE router
218.248.254.151 1 /32 Loopback for Lucknow Juniper PE router
218.248.254.152 1 /32 Loopback for Patna PE router
218.248.254.153 1 /32 Loopback for Coimbatore PE router
218.248.254.154 1 /32 Loopback for Nagpur PE router
218.248.254.155 1 /32 Loopback for Mangalore PE router
218.248.254.156 1 /32 Loopback for Chandigarh PE router
218.248.254.157 1 /32 Loopback for Allahabad PE router
218.248.254.158 1 /32 Loopback for Ranchi PE router
218.248.254.159 1 /32 Loopback for Guwahati PE router
218.248.254.160 1 /32 Loopback for Raipur PE router
218.248.254.161 1 /32 Loopback for Vijayawada PE router
218.248.254.162 1 /32 Loopback for Bhubhaneshwar PE router
218.248.254.163 1 /32 Loopback for Madurai PE router
218.248.254.164 1 /32 Loopback for Trivandrum PE router
218.248.254.165 1 /32 Loopback for Mysore PE router
218.248.254.166 1 /32 Loopback for Vizag PE router
218.248.254.167 1 /32 Loopback for Shillong PE router
218.248.254.168 1 /32 Loopback for Kalyan PE router
218.248.254.169 1 /32 Loopback for Panjim (Goa) PE router
218.248.254.170 1 /32 Loopback for Nashik PE router
218.248.254.171 1 /32 Loopback for Bhopal PE router
218.248.254.172 1 /32 Loopback for Gwalior PE router
218.248.254.173 1 /32 Loopback for Rajkot PE router
218.248.254.174 1 /32 Loopback for Surat PE router
218.248.254.175 1 /32 Loopback for Vadodara PE router
218.248.254.176 1 /32 Loopback for Faridabad PE router
218.248.254.177 1 /32 Loopback for Gurgaon PE router
218.248.254.178 1 /32 Loopback for Agra PE router
218.248.254.179 1 /32 Loopback for Amritsar PE router
218.248.254.180 1 /32 Loopback for Jammu PE router
218.248.254.181 1 /32 Loopback for Kanpur PE router
218.248.254.182 1 /32 Loopback for Varanasi PE router
218.248.254.183 1 /32 Loopback for Jodhpur PE router
218.248.254.184 1 /32 Loopback for Trichy PE router
218.248.254.185 1 /32 Loopback for Pondicherry PE router
218.248.254.186 1 /32 Loopback for Rajamundry PE router
218.248.254.187 1 /32 Loopback for Palghat PE router
218.248.254.188 1 /32 Loopback for Trichur PE router
218.248.254.189 1 /32 Loopback for Belgaum PE router
218.248.254.190 1 /32 Loopback for Hubli PE router
218.248.254.191 1 /32 Loopback for Rajamundry PE router
218.248.254.192 1 /32 Loopback for Tirupati PE router
218.248.254.193 1 /32 Loopback for Jamshedpur PE router
218.248.254.194 1 /32 Loopback for Durgapur PE router
218.248.254.195 1 /32 Loopback for Siliguri PE router
218.248.254.196 1 /32 Loopback for Dimapur PE router
218.248.254.197 1 /32 Loopback for Aurangabad PE router
218.248.254.198 1 /32 Loopback for Kolhapur PE router
218.248.254.199 1 /32 Loopback for Jabalpur PE router
218.248.254.200 1 /32 Loopback for Mehsana PE router
218.248.254.201 1 /32 Loopback for Ambala PE router
218.248.254.202 1 /32 Loopback for Ghaziabad PE router
218.248.254.203 1 /32 Loopback for Meerut PE router
218.248.254.204 1 /32 Loopback for Dehradun PE router
218.248.254.205 1 /32 Loopback for Ferozpur PE router
218.248.254.206 1 /32 Loopback for Simla PE router
218.248.254.207 1 /32 Loopback for Ajmer PE router
218.248.254.208 1 /32 Loopback for Ludhiana PE router
218.248.254.209 1 /32 Loopback for Delhi PE router
218.248.254.210 to 218.248.254.254 45 N/A Reserved for future PE loopback addresses
Note The Loopback Address allocation ensures optimal reserves for future allotments.
PE-CE Link Addresses for the Internet Gateway VRF

The Internet Gateway to Provider Edge router links are placed in an MPLS Layer 3 VPN VRF table, and hence are treated as a customer VPN internal to the BSNL NIB-II Network. Customer link addresses are assigned from an independent CIDR IP address block, separate from the IP addresses used in the BSNL NIB-II global routing table. This simplifies the process of policy assignment within the network. In the current network link allocation, one /25 CIDR block is exclusively used for assigning IP addresses for the IGW-PE links. This also provides adequate buffer for scaling the number of Internet Gateways in the network. The IP address block used is as follows: “218.248.255.0/25”
Table 11 Address Plan for the “INET” VRF Table
Network Address Host Addresses CIDR Prefix Description of use
218.248.255.0 4 /30 Gigabit Ethernet “(Bangalore) IGW-Cisco PE”
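A minimal sketch of how the IGW-PE link of Table 11 could be placed in the “INET” VRF is shown below. The RD and route-target values and the interface name are placeholders, not values taken from this design.

```
ip vrf INET
 rd <ASN>:<nn>
 route-target export <ASN>:<nn>
 route-target import <ASN>:<nn>
!
! PE end of the Bangalore IGW link (218.248.255.0/30, Table 11)
interface GigabitEthernet0/1
 ip vrf forwarding INET
 ip address 218.248.255.1 255.255.255.252
```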
IP Address space utilization

The address allocation for the NIB-II core infrastructure, which includes core links, Internet Gateway to PE router link addresses, and loopback addresses, uses five /24 CIDR blocks and one /25 CIDR block out of the seven /24 address blocks provided by BSNL.
The remaining address blocks are utilized as follows:
One /24 CIDR block is used for NAT/PAT address allocations for customers of BSNL. The remaining /25 block is used for allocating host addresses for the HP NMS/PMS servers.
Customer Links - Internet

BSNL will use registered public addresses for assigning link addresses to Internet access customers. It is strongly recommended that BSNL allocate a minimum of a /24 CIDR address block for every PE router. This ensures that customer link addresses can be aggregated appropriately when the routes are imported using BGP. It is also strongly recommended that BSNL not spread any allocation of address blocks smaller than a /24 CIDR block across PE routers.
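The per-PE aggregation described above can be sketched as follows; the AS number and prefix are placeholders for whatever /24 block BSNL assigns to a given PE router.

```
router bgp <ASN>
 ! Advertise only the per-PE /24 aggregate; suppress the customer /30s
 aggregate-address <per-PE-block> 255.255.255.0 summary-only
```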
Customer Links – MPLS Layer 3 VPN (Un-Managed)

BSNL will use private RFC1918 addresses for assigning link addresses for Layer 3 MPLS VPN customers.
Customer Links – MPLS Layer 3 VPN (Managed)

BSNL will use unique private RFC1918 addresses for assigning link addresses for Managed Layer 3 MPLS VPN customers. The addresses need to be unique throughout the entire BSNL NIB-II network, and should not overlap with any other address space deployed. This will ensure that the customer link addresses can be exported into the BSNL NIB-II NMS-VPN for managed services.
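One common way to realize the NMS-VPN export described above is a second route-target export on the customer VRF. The VRF name and route-target values below are purely illustrative placeholders.

```
ip vrf <managed-customer-vrf>
 rd <ASN>:<cust-id>
 route-target export <ASN>:<cust-id>
 route-target import <ASN>:<cust-id>
 ! Additional export: the unique PE-CE link routes become visible in the NMS-VPN
 route-target export <ASN>:<nms-id>
```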
Hardware Configurations
Hardware Configuration Details

The following sections provide details on the Provider and Provider Edge router hardware deployed in the BSNL NIB-II network.
Provider Router for A1 cities – Cisco GSR 12416

The Cisco GSR 12416 routers are deployed as the Provider routers in the BSNL NIB-II network, forming the heart of the core (A1 cities). The Cisco 12416 Router is a 16-slot member of the Cisco 12000 Series that provides a total switching capacity of 320 Gigabits per second (Gbps), with 20 Gbps (10 Gbps full duplex) capacity per slot. Innovative Virtual Output Queuing (VOQ) technology ensures that the fabric experiences no head-of-line (HOL) blocking, and an enhanced clock scheduler guarantees that all line cards get equal access to the fabric. Extensive use of high-performance application-specific integrated circuits (ASICs) supports line-rate forwarding with minimal latency for real-time traffic, while the fabric handles replication of multicast traffic in hardware, providing a high level of performance. With its high performance, switching capacity, and port density, the Cisco 12416 Router is well suited to a variety of service provider and Internet Service Provider (ISP) applications. It delivers true 10 Gbps performance for the next generation of optical core networking and the real-time, revenue-generating services that the core must support. Using its 16-slot capacity and broad portfolio of line cards, the Cisco 12416 Router provides high-density ISP aggregation and point-of-presence (POP) consolidation. Typical applications include consolidating backbone links and creating peering and IP/MPLS transit interconnections.
Figure 54 Chassis Overview of the Cisco GSR 12416 Router
Figure 54 illustrates the chassis overview of the Cisco GSR 12416 router. The illustration depicts the configuration and placement of the line cards in the router. The details are as follows:
• S1 : Alarm Card
• S2 : 4 Port STM-16 Module
• S3: 4 Port STM-16 Module
• S4 : 1 Port STM-16
• S5 : Spare
• S6: Spare
• S7: Spare
• S8: RP
• S9 : RP
• S10 : Spare
• S11 : 4 Port STM-1 Module
• S12 : Spare
• S13 : 8 Port STM-1 Module
• S14: Spare
• S15: Spare
• S16: Gig E Module (with 6 ports)
[Chassis diagram: slot-by-slot layout of the A1 Core Router – Cisco 12416, City – CHENNAI, as itemized in the list above.]
This illustration uses the Chennai Provider router as a sample for the sake of simplicity. The router is configured with high levels of redundancy. All the system functions and control cards are duplicated, operating in redundant mode. The GSR Performance Route Processor (PRP), which is the main system processor, is also configured in redundant mode.
Provider Router for A2/A3 cities – Cisco GSR 12410

The Cisco GSR 12410 routers are deployed as the Provider routers forming the outer layer of the core, in A2 and A3 cities. The Cisco 12410 router, a member of the Cisco 12000 Series, features a ten-slot chassis and a switch fabric that supports 20 Gbps (10 Gbps full duplex) throughput per line-card slot. Based on the architecture of the widely deployed Cisco 12000 Series, the Cisco 12410 Router supports distributed packet forwarding with a crossbar matrix switch. Innovative Virtual Output Queuing (VOQ) technology (which prevents head-of-line blocking), an enhanced clock scheduler, specialized high-speed application-specific integrated circuits (ASICs), and the Cisco 12000 Series switch fabric offer carrier-class performance and reliability, supporting guaranteed priority packet delivery and true 10 Gbps line-rate performance in a fully loaded system. The key to delivering true multiservice networking on the Cisco GSR 12410 router is the Cisco Multiprotocol Label Switching (MPLS) feature suite, which includes support for traffic engineering, virtual private networking (VPN), and connection services.

Figure 55 Chassis Overview of the Cisco GSR 12410 Router
Figure 55 illustrates the chassis overview of the Cisco GSR 12410 router. The illustration depicts the configuration and placement of the line cards in the router. The details are as follows:
• S1 :1 Port STM-16 Module
• S2: 1 Port STM-16 Module
[Chassis diagram: slot-by-slot layout of the A2, A3 Core Router – Cisco 12410, Cities – PUN, LUC, LUD.]
• S3 :
• S4 : 4 Port STM-1 Module
• S5 : 8 Port STM-1 Module
• S6: Spare
• S7: Gig E Module (with 4 ports)
• S8: Spare
• S9 : PRP-2 Processor
• S10 : PRP-2 Processor
The router is configured with high levels of redundancy. All the system functions and control cards are duplicated, operating in redundant mode. The GSR Performance Route Processor (PRP), which is the main system processor, is also configured in redundant mode.
Provider Edge Router – Cisco 7613

The Cisco 7613 routers are deployed as the Provider Edge routers in the BSNL NIB-II network. The Cisco 7613 Router is a high-performance router designed for deployment at the network edge, where performance, IP services, and redundancy/fault resiliency are key requirements. Combined with the central route processor and forwarding engine, the 13-slot Cisco 7613 provides 30 Mpps forwarding rates and up to 256 Gbps total throughput. This versatile Cisco 7600 platform scales WAN connectivity from OC-48/STM-16 down to DS0 and LAN connectivity from 10-Gigabit Ethernet down to 10-Mbps Ethernet. The Cisco 7613 delivers these capabilities while implementing high-touch hardware-accelerated IP services via Cisco's patented parallel express forwarding (PXF) network processor. The Cisco 7613 incorporates the requirements service providers demand in a chassis: understanding the need to use rack space efficiently, it is designed for horizontal line-card configurations with side-to-side airflow and single-side connection management for both interface and power terminations.

Figure 56 Chassis Overview of the Cisco 7613 Router
[Chassis diagram: A1 node – Year 1, Router 1 – Cisco 7613, module placement as itemized in the list below.]
Figure 56 illustrates the chassis overview of the Cisco 7613 router. The illustration depicts the configuration and placement of the cards in the router. The details are as follows:
• S1 : Firewall Services Module
• S2 : IPSec VPN Security Module
• S3 : Flexwan II Module slot 1 – “8 port Channelized E1 Module”
• S3 : Flexwan II Module slot 2 – “1 port Channelized STM-1 Module”
• S4 : Flexwan II Module slot 1 – “8 port Channelized E1 Module”
• S4 : Flexwan II Module slot 2 – “1 port Channelized STM-1 Module”
• S5 : Flexwan II Module slot 1 – “8 port Channelized E1 Module”
• S5 : Flexwan II Module slot 2 – “1 port Channelized STM-1 Module”
• S6 : Flexwan II Module slot 1 - “8 port Channelized E1 Module”
• S6 : Flexwan II Module slot 2 - “8 port Channelized E1 Module”
• S7: SUP 720-3BXL
• S8: SUP 720-3BXL
• S9 : Flexwan II Module slot 1 - “8 port Channelized E1 Module”
• S9 : Flexwan II Module slot 2 - “8 port Channelized E1 Module”
• S10 : Flexwan II Module slot 1 - “8 port Channelized E1 Module”
• S11: Flexwan II Module slot 1 - “2 port E3 Module”
• S11: Flexwan II Module slot 2 - “2 port E3 Module”
• S12: 4 port STM-1 OSM Module
• S13: 48 Port 10/100 Ethernet Module
The router is configured with high levels of redundancy. All system functions and control cards are duplicated and operate in redundant mode. The Supervisor Engine 720-3BXL, which is the main system processor, is also configured in redundant mode.
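The dual-supervisor redundancy described above can be sketched in IOS configuration. The following is an illustrative fragment assuming SSO (Stateful Switchover) is the desired redundancy mode; it is not taken from the BSNL Implementation LLD:

```
redundancy
 ! SSO keeps the standby SUP 720-3BXL fully initialized and
 ! synchronized, so a supervisor failure does not reload line cards
 mode sso
!
```

On releases or feature sets that do not support SSO, RPR or RPR+ would be the assumed fallback modes.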
Note Please refer to the “BSNL Implementation LLD” for details on the placement of
line and interface cards for the various PoPs in the BSNL NIB-II Network.
Appendix I
Router software configurations

This section provides the relevant configuration statements used for deploying OSPF, BGP, MPLS, and LDP in the BSNL NIB-II network. Only the relevant IP MPLS protocol sections are highlighted here. For a complete list of router configuration commands, refer to the “BSNL Implementation LLD handbook”.

Figure 57 IP-v4 Route Reflector – Chennai Router Configuration
hostname <hostname>
!
ip cef
mpls ip
mpls label protocol ldp
mpls ldp router-id loopback 0 force
!
mpls traffic-eng tunnels
mpls traffic-eng reoptimize timers frequency 30
!
router ospf 100
router-id 218.248.254.75
log-adjacency-changes
mpls traffic-eng area 0
mpls traffic-eng router-id loopback0
auto-cost reference-bandwidth 10000
passive-interface Loopback0
network 218.248.254.75 0.0.0.0 area 0.0.0.0
network 218.248.250.134 0.0.0.3 area 0.0.0.0
!
interface loopback0
ip address 218.248.254.75 255.255.255.255
no ip directed-broadcast
!
interface GigabitEthernet7/0
description *** Gigabit-Ethernet Interface to P router***
ip address 218.248.250.134 255.255.255.252
mpls ip
mpls traffic-eng tunnels
ip rsvp bandwidth 1000000
bandwidth 1000000
no shutdown
no ip directed-broadcast
no ip proxy-arp
no cdp enable
!
router bgp 9829
no synchronization
bgp log-neighbor-changes
neighbor REFLECTORS peer-group
neighbor REFLECTORS remote-as 9829
neighbor REFLECTORS update-source loopback0
neighbor REFLECTORS version 4
neighbor REFLECTORS send-community
neighbor GATEWAYS peer-group
neighbor GATEWAYS remote-as 9829
neighbor GATEWAYS update-source loopback0
neighbor GATEWAYS version 4
neighbor GATEWAYS send-community
neighbor GATEWAYS route-reflector-client
neighbor RRCUSTOMERS peer-group
neighbor RRCUSTOMERS remote-as <CUSTOMER-ASN>
neighbor RRCUSTOMERS ebgp-multihop
neighbor RRCUSTOMERS version 4
neighbor RRCUSTOMERS send-community
neighbor RRCUSTOMERS prefix-list input-filter in
neighbor 218.248.254.74 peer-group REFLECTORS
neighbor 218.248.254.77 peer-group GATEWAYS
neighbor 218.248.254.78 peer-group GATEWAYS
neighbor 218.248.254.79 peer-group GATEWAYS
neighbor 218.248.254.80 peer-group GATEWAYS
neighbor 218.248.254.81 peer-group GATEWAYS
neighbor 218.248.254.82 peer-group GATEWAYS
neighbor 218.248.254.83 peer-group GATEWAYS
neighbor 218.248.254.84 peer-group GATEWAYS
neighbor <address> peer-group RRCUSTOMERS
bgp cluster-id 100
no auto-summary
!
ip prefix-list input-filter deny 0.0.0.0/0 le 32
!
! Note: the two commands below are entered in exec mode, not in
! configuration mode
clock set 13:32:00 18 August 2004
clock update-calendar
service timestamps debug datetime localtime show-timezone
service timestamps log datetime localtime show-timezone
!
ntp source loopback 0
ntp master 3
!
clock timezone IST 5 30
!
clock calendar-valid
!
snmp-server contact <contact>
snmp-server location <location>
snmp-server community BSNL ro 1
!
access-list 1 permit any
!
end
Note This configuration sample is taken from the Chennai IP-v4 Route Reflector. The Mumbai IP-v4 Route Reflector uses similar configuration statements, with the neighbor IP addresses substituted with the appropriate values.
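As an operational aside, the reflector's protocol state can be checked with standard IOS show commands. The list below is a generic sketch (the neighbor address is one of the GATEWAYS peers from the sample configuration), not an excerpt from the LLD:

```
! OSPF adjacencies on the core-facing interfaces
show ip ospf neighbor
! LDP session to the attached P router
show mpls ldp neighbor
! iBGP sessions for the REFLECTORS and GATEWAYS peer groups
show ip bgp summary
! Per-neighbor detail, e.g. for a GATEWAYS client
show ip bgp neighbors 218.248.254.77
```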
description *** Gigabit-Ethernet Interface to P router***
ip address 218.248.251.138 255.255.255.252
mpls ip
mpls traffic-eng tunnels
ip rsvp bandwidth 1000000
bandwidth 1000000
no shutdown
no ip directed-broadcast
no ip proxy-arp
no cdp enable
!
router bgp 9829
no synchronization
bgp cluster-id 200
bgp log-neighbor-changes
neighbor CLIENTS peer-group
neighbor CLIENTS remote-as 9829
neighbor CLIENTS update-source Loopback0
no neighbor CLIENTS activate
neighbor VPNREFLECTORS peer-group
neighbor VPNREFLECTORS remote-as 9829
neighbor VPNREFLECTORS update-source Loopback0
no neighbor VPNREFLECTORS activate
no auto-summary
!
address-family vpnv4
neighbor CLIENTS activate
neighbor CLIENTS route-reflector-client
neighbor CLIENTS send-community extended
neighbor VPNREFLECTORS activate
neighbor VPNREFLECTORS send-community extended
neighbor 218.248.254.137 peer-group CLIENTS
neighbor 218.248.254.76 peer-group VPNREFLECTORS
exit-address-family
!
service timestamps debug datetime localtime show-timezone
service timestamps log datetime localtime show-timezone
!
ntp server 218.248.254.75
ntp server 218.248.254.74
!
clock timezone IST 5 30
!
ntp update-calendar
!
snmp-server contact <contact>
snmp-server location <location>
snmp-server community BSNL ro 1
!
access-list 1 permit any
!
end
Note This configuration sample is taken from the Noida VPN-v4 Route Reflector. The Kolkata VPN-v4 Route Reflector uses similar configuration statements, with the neighbor IP addresses substituted with the appropriate values.
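The VPN-v4 reflector's address-family peerings can be verified in the same spirit; again, this is a generic IOS sketch rather than LLD material:

```
! All VPNv4 iBGP sessions, including the CLIENTS and
! VPNREFLECTORS peer groups
show ip bgp vpnv4 all summary
! VPNv4 prefixes learned from a CLIENTS peer
show ip bgp vpnv4 all neighbors 218.248.254.137 routes
```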
service timestamps debug datetime localtime show-timezone
service timestamps log datetime localtime show-timezone
!
ntp server 218.248.254.75
ntp server 218.248.254.74
!
clock timezone IST 5 30
!
ntp update-calendar
!
snmp-server contact <contact>
snmp-server location <location>
snmp-server community BSNL ro 1
!
access-list 1 permit any
!
end
Note The STM interface configuration will vary based on the STM Multiplexers and Optical equipment used in the BSNL NIB-II Network.
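For illustration only, a PoS-style STM-1 interface on the OSM module might be configured along the following lines. The interface number, addressing, framing, and clock source are assumptions that must be aligned with the actual transmission equipment:

```
interface POS12/1
 description *** STM-1 link - parameters depend on the optical equipment ***
 ip address <address> <mask>
 ! SDH framing and payload scrambling are common on STM-1 circuits
 pos framing sdh
 pos scramble-atm
 ! Derive transmit clock from the received signal
 clock source line
 no shutdown
```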
Appendix II
Glossary of Terms

The following is a list of acronyms contained in this document:

Table 12 Glossary

AS: Autonomous System
BSNL: Bharat Sanchar Nigam Limited
BGP: Border Gateway Protocol
CIDR: Classless Inter-Domain Routing
CPE: Customer Premises Equipment
CEF: Cisco Express Forwarding
CoS: Class of Service
FE: Fast Ethernet
FRR: Fast Reroute
GE: Gigabit Ethernet
IGP: Interior Gateway Protocol
IP: Internet Protocol
IGW: Internet Gateway Router
LDP: Label Distribution Protocol
LSP: Label Switched Path
MBGP: Multi-Protocol BGP
MPLS: Multi-Protocol Label Switching
NTP: Network Time Protocol
NOC: Network Operations Center
NMS: Network Management System
NLRI: Network Layer Reachability Information
OSPF: Open Shortest Path First
PE: Provider Edge Router
P: Provider Router
PoS: Packet over SONET
RR: Route Reflector
SDH: Synchronous Digital Hierarchy
TE: Traffic Engineering
V4: Version 4
VPN: Virtual Private Network
VRF: Virtual Routing and Forwarding Table