
HP Virtual Connect technology implementation for the HP BladeSystem c-Class technology brief, 2nd edition

Contents

Abstract
Acronyms in text
Overview of Virtual Connect technology
How Virtual Connect works
Virtual Connect environment with HP BladeSystem c-Class enclosure
    HP 1/10Gb Virtual Connect Ethernet Module for BladeSystem c-Class
    HP 4-Gb Virtual Connect Fibre Channel Module for BladeSystem c-Class
Virtual Connect Manager
Conclusion
For more information
Call to action
Appendix A: Virtual Connect from the perspective of a server administrator
Appendix B: Virtual Connect from the perspective of a LAN administrator
Appendix C: Virtual Connect from the perspective of a SAN administrator

Abstract

As data center density and complexity increase, so do demands for IT efficiency and responsiveness. As a result, simplifying system interconnections becomes significantly more important. HP has developed a new interconnect solution, the HP Virtual Connect architecture, to boost the efficiency and productivity of data center server, storage, and network administrators. HP is implementing the Virtual Connect architecture first in the HP BladeSystem c-Class.

This paper explains how Virtual Connect technology virtualizes the connections between the server and the network infrastructure (server-edge I/O virtualization) so that networks can communicate with pools of HP BladeSystem servers and administrators can change servers in minutes instead of days or weeks. It also explains how implementing Virtual Connect:

• Reduces cables without adding switches to manage
• Maintains end-to-end connections of preferred fabric brands
• Cleanly separates server enclosure administration from LAN and SAN administration
• Relieves LAN and SAN administrators from server maintenance
• Makes servers ready for rapid change at any time, so that server administrators can add, move, or replace servers without affecting the LANs or SANs

The three appendices provide implementation information from the perspectives of a server administrator, a LAN administrator, and a SAN administrator.

Acronyms in text

The following acronyms are used in the text of this document.

Table 1. Acronyms

Acronym      Expansion
IEEE         Institute of Electrical and Electronics Engineers
LACP         Link Aggregation Control Protocol (IEEE 802.3ad)
FC           Fibre Channel
GUI          Graphical user interface
HBA          Host bus adapter
iLO          Integrated Lights-Out
LAN          Local area network
LUN          Logical unit number
MAC          Media access control
NIC          Network interface card
NFT          NIC fault tolerance
NPIV         N_Port ID Virtualization
OA           HP BladeSystem Onboard Administrator
PCI          Peripheral component interconnect
SAN          Storage area network
TLB          Transmit Load Balancing
VC-FC        Virtual Connect Fibre Channel
VC Manager   Virtual Connect Manager
VLAN         Virtual LAN
WWID         Worldwide identification
WWN          Worldwide name

Overview of Virtual Connect technology

HP BladeSystem c-Class integrates the Virtual Connect architecture from the ground up. The benefits of this technology derive from capabilities built into the communication and control infrastructure. Support from these built-in capabilities is essential for achieving the level of functionality provided by the HP BladeSystem c-Class: intuitive ease of use, smooth integration, and scalable implementation. If these capabilities are not built in, they cannot be bolted on.

Using the Virtual Connect architecture, the HP BladeSystem c-Class resolves data center difficulties related to density and complexity: too many cables, switches, and administrators.

Densely stacking servers with many Ethernet and Fibre Channel (FC) connections can result in hundreds of cables coming out of a rack. Multitudes of cables are inherently risky, and cable-intensive interconnect schemes such as patch panels or Pass-Thru modules are typically the most expensive connection methods.

While use of switches can greatly reduce the number of required cables, adding switches creates additional management overhead. Moreover, a FC storage area network (SAN) fabric can include only a limited number of switches. Because switches integrated into blade server environments are, by design, small compared to the large, standalone switches typically used in data centers, blade systems require more switches. Consequently, FC SANs must sometimes connect to blade servers using Pass-Thru methods to stay within the fabric limits.

When a server is added, moved, or replaced in any server system, the local area network (LAN) and SAN must be adjusted. Therefore, LAN and SAN administrators must become involved in routine server activities, and the server administrator must wait while schedules are coordinated.

Virtual Connect is an industry standard-based implementation of server-edge I/O virtualization. It puts an abstraction layer between the servers and the external networks so that the LAN and SAN see a pool of servers rather than individual servers (see Figure 1). Once the LAN and SAN connections are made to the pool of servers, the server administrator uses a Virtual Connect Manager User Interface to create an I/O connection profile for each server. Instead of using the default media access control (MAC) addresses for all network interface controllers (NICs) and default World Wide Names (WWNs) for all host bus adapters (HBAs), the Virtual Connect Manager creates bay-specific I/O profiles, assigns unique MAC addresses and WWNs to these profiles, and administers them locally. Local administration of network addresses is a common industry technique that Virtual Connect applies to a new purpose. Network and storage administrators can establish all LAN and SAN connections once during deployment and need not make connection changes later if servers are changed. As servers are deployed, added, or changed, Virtual Connect keeps the I/O profile for that LAN and SAN connection constant.
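The profile concept described above can be pictured with a short sketch. The following Python fragment is purely illustrative (the class names and address bases are invented, and the real VC Manager runs in module firmware, not Python); it shows how bay-specific profiles drawing from a locally administered address pool keep a bay's network identity constant across server swaps.

    from dataclasses import dataclass

    @dataclass
    class ServerProfile:
        bay: int     # enclosure device bay the profile is bound to
        macs: list   # locally administered NIC MAC addresses
        wwns: list   # locally administered HBA WWNs

    class AddressPool:
        """Hands out sequential addresses from a reserved range.
        The base values here are placeholders; the 0x02 leading octet
        marks the MACs as locally administered."""
        def __init__(self, mac_base=0x0217A4000000, wwn_base=0x5001438000000000):
            self.next_mac, self.next_wwn = mac_base, wwn_base

        def mac(self):
            value, self.next_mac = self.next_mac, self.next_mac + 1
            return ":".join(f"{(value >> s) & 0xFF:02x}" for s in range(40, -1, -8))

        def wwn(self):
            value, self.next_wwn = self.next_wwn, self.next_wwn + 1
            return ":".join(f"{(value >> s) & 0xFF:02x}" for s in range(56, -1, -8))

    pool = AddressPool()
    profile = ServerProfile(bay=3, macs=[pool.mac(), pool.mac()], wwns=[pool.wwn()])
    # A replacement blade inserted into bay 3 presents the same MACs/WWNs,
    # so no LAN or SAN change is needed.
    print(profile)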

Figure 1. Server-edge I/O virtualization. Virtual Connect technology puts an abstraction layer between servers and the external networks, creating a logical multi-host endpoint. The server administrator assigns server I/O connections to the Virtual Connect interconnect modules, and the LAN and SAN administrators treat its ports as the endpoint of their networks.

How Virtual Connect works

The ability to implement Virtual Connect is built into each component of the HP BladeSystem c-Class, including the HP BladeSystem Onboard Administrator, PCI Express mezzanine cards, HBAs, NICs, and the iLO communication channels. HP Virtual Connect modules are required to activate the full server-edge I/O virtualization across the system.

No special mezzanine cards are required; HP Virtual Connect works with the standard Ethernet NICs and FC HBAs that are available with HP BladeSystem c-Class server blades. HP Virtual Connect Ethernet and FC interconnect modules are new options to simplify connection of those server NICs and HBAs to the data center environment. Virtual Connect extends the capability of the standard server NICs and HBAs by providing support for securely administering their Ethernet MAC address and FC WWNs.

No virtual devices are created; the WWNs and MAC addresses are real. They are the only WWNs and MAC addresses seen by the system, the OS, and the networks. Virtual Connect has the unique ability to manage the WWNs and MAC addresses presented by the hardware without recabling and without requiring the assistance of multiple administrators. Although the hardware ships with default MAC addresses and WWNs, Virtual Connect resets the MAC addresses and WWNs prior to boot, so PXE/SAN boot and all operating systems will see only the Virtual Connect managed values. Virtual Connect securely manages the MACs and WWNs by accessing the physical NICs and HBAs through the enclosure’s Onboard Administrator and the iLO interfaces on the individual server blades.

During setup of the Virtual Connect environment, the administrator can select MAC/WWN values from one of the following groups:

• Factory default MACs/WWNs
• A specific, user-defined range of MACs/WWNs
• One of several HP pre-defined ranges of MACs/WWNs

The use of factory default MAC addresses is not recommended because they cannot be moved to another server blade.

NOTE: HP is registered as an Ethernet and FC vendor with the appropriate standards bodies and has reserved pre-defined MAC address and WWN ranges for exclusive use with Virtual Connect. These reserved ranges will never be used as factory default MACs/WWNs on any hardware. System administrators must be careful to use each reserved range only once within their enterprise environment.

If a server is moved from a Virtual Connect managed enclosure to an unmanaged enclosure, the local MAC addresses and WWNs are automatically returned to the original factory defaults. If a server is removed from a server bay within a Virtual Connect domain and is plugged into another bay in the same domain or into a bay in a different domain, it will be assigned the new set of addresses appropriate for that server bay location.

Ethernet network adapters have long had the ability to configure locally administered addresses. The difference with Virtual Connect is that the configuration is done securely, in an OS-independent manner, and is coordinated with the administration of the server's other programmable attributes. Fibre Channel HBAs have not typically supported locally administered addresses, so securely administering these WWNs is a new, built-in capability offered by HP.

Virtual Connect reduces the required number of Fibre Channel cables by means of an HBA aggregator. This device is not a switch but an N_Port ID Virtualization (NPIV) device that allows multiple HBAs to connect through a single FC switch port. Virtual Connect adheres to the ANSI T11 standards that define all Fibre Channel technologies. Virtual Connect is transparent to the SAN, which sees its connections as a collection of HBAs. Because HBAs do not require management, using Virtual Connect introduces no other brands of switches, so the IT environment retains the benefits of end-to-end connectivity of the users' preferred network brands.
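The HBA aggregator behavior can be sketched in a few lines. This is an illustrative model, not the module's firmware: it shows the NPIV idea of one ordinary fabric login (FLOGI) on the physical port followed by additional logins (FDISC) for further HBAs sharing that port; the port name and WWPNs are made up.

    class NpivUplink:
        """One physical N_Port on the aggregator; the attached switch
        port must have NPIV enabled."""
        def __init__(self, name):
            self.name = name
            self.logins = []   # WWPNs that have logged in through this port

        def login(self, hba_wwpn):
            # The first login is a normal FLOGI; subsequent HBAs use FDISC
            # to acquire additional N_Port IDs on the same physical link.
            kind = "FLOGI" if not self.logins else "FDISC"
            self.logins.append(hba_wwpn)
            print(f"{self.name}: {kind} for {hba_wwpn}")

    uplink = NpivUplink("vc-fc-bay3-port1")
    for wwpn in ("50:01:43:80:00:00:00:10", "50:01:43:80:00:00:00:12"):
        uplink.login(wwpn)
    # The fabric now associates both WWPNs with one switch port, so no
    # extra switch and no extra domain ID are introduced.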

Virtual Connect environment with HP BladeSystem c-Class enclosure

The Virtual Connect modules plug directly into the interconnect bays of the HP BladeSystem c-Class enclosure. The modules can be placed side by side for redundancy (see Figure 3). Initial implementations include the HP 1/10Gb Virtual Connect Ethernet Module for BladeSystem c-Class and the HP 4Gb Virtual Connect Fibre Channel Module for BladeSystem c-Class.

Figure 3. Rear of HP BladeSystem c7000 Enclosure showing redundant Virtual Connect modules

The initial product release will support only single-enclosure module stacking. A future firmware update will provide support for up to four HP BladeSystem c7000 enclosures (for a total of 64 servers) per Virtual Connect domain, two or four Virtual Connect Ethernet modules per enclosure (eight total per Virtual Connect domain), and zero or two Virtual Connect FC modules per enclosure.

For a single-module configuration, install the HP 1/10Gb Virtual Connect Ethernet Module in interconnect bay 1; for a redundant configuration, install the second module in interconnect bay 2. Avoid using Virtual Connect and non-Virtual Connect interconnect modules in horizontally adjacent bays.

NOTE: When installing an HP 1/10Gb Virtual Connect Ethernet Module into an enclosure with existing servers, do not change the MAC addresses of the NICs residing in servers that were installed prior to the deployment of the Virtual Connect module. Ensure that all iLOs and HP 1/10Gb Virtual Connect Ethernet Modules have received IP addresses. Without IP addresses on all modules, Virtual Connect will not operate properly.

To install FC, the enclosure must have at least one Virtual Connect Ethernet module, because the Virtual Connect Manager software runs on a processor resident on the Ethernet module.

HP 1/10Gb Virtual Connect Ethernet Module for BladeSystem c-Class

The Virtual Connect Ethernet Module has sixteen 1Gb Ethernet downlinks to servers (connected across the signal midplane in the enclosure), eight 1Gb Ethernet uplinks to networks (RJ-45 copper Ethernet connectors), two 10Gb Ethernet connectors (for copper CX4 cables), and one 10Gb Ethernet internal inter-switch link (across the signal midplane in the enclosure) for a failover connection between Virtual Connect modules (see Figure 4). The Virtual Connect Ethernet module can connect selected server Ethernet ports to specific data center networks and provide a connection to any data center switch environment, including Cisco, Nortel, and HP ProCurve.

Figure 4. Front view of HP 1/10Gb Virtual Connect Ethernet Module illustrating its connections

Virtual Connect Ethernet modules can be stacked by cabling the Ethernet modules together within a Virtual Connect domain. Every server blade in the domain can then be configured to access any external network connection. Every server has fault-tolerant access to every uplink port. Network connections can be aggregated and can be from different modules. Stacking links can be aggregated, and the stacking link between adjacent Virtual Connect Ethernet modules is internal (see Figure 5).

Figure 5. Illustration of stacked Virtual Connect Ethernet modules; this example uses 10GbE ports for stacking. Single-enclosure stacking is available initially; multi-enclosure stacking will be available in a future firmware release.

HP 4-Gb Virtual Connect Fibre Channel Module for BladeSystem c-Class

The FC module has sixteen 4Gb FC downlinks to servers and four 1/2/4Gb auto-negotiating FC uplinks to networks (see Figure 6). The FC module can selectively aggregate multiple server FC HBA ports (QLogic or Emulex) on an FC uplink using NPIV, and can connect the enclosure to data center FC switches. The FC module does not appear as a switch to the FC fabric.

FC modules within different enclosures are each connected directly to the same set of FC SANs. Stacking support for FC modules is not provided; therefore a connection is required from each enclosure to each SAN within the Virtual Connect Domain. With this configuration, the Virtual Connect Manager can deploy or migrate a server blade I/O profile across all four enclosures without any need for additional external SAN configurations.

Figure 6. Front view of HP Virtual Connect Fibre Channel module illustrating its connections

Virtual Connect Manager

The HP Virtual Connect Manager (VC Manager) manages enclosure connectivity and is seamlessly integrated into both the HP Insight Control Data Center Edition and the HP Control Tower. VC Manager defines available LANs and SANs, sets up enclosure connections to the LAN or SAN, and defines and manages server I/O profiles.

The VC Manager includes utilities and a Profile Wizard for developing templates that create and assign profiles to multiple servers at once. The I/O profiles include the physical NIC MAC addresses, FC HBA WWNs, and the SAN boot configurations. The VC Manager profile summary page includes a view of server status, port, and network assignments (see Figure 7). Customers can also edit the profile details, re-assign the profile, and examine how HBAs and NICs are connected.

Figure 7. HP Virtual Connect Manager Profile Summary screen. At this screen IT administrators can create, edit, and delete Virtual Connect profiles.

The VC Manager uses a policy-driven approach to assign I/O profiles to servers. A policy can dictate a profile for a specific device bay location; in this case, the profile is assigned to any server installed in that location. Using the VC Manager policy as a guide, Virtual Connect ensures that each server blade is properly connected to its appropriate LAN and SAN, even after the server has been replaced.
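A hypothetical sketch of this bay-based policy follows (the names and data model are invented): the policy maps bays to profiles, so any replacement blade inserted into a bay inherits that bay's profile.

    # bay -> profile name; a bay without a policy keeps factory defaults
    bay_policy = {1: "web-tier-profile", 2: "web-tier-profile", 3: "db-profile"}

    def on_server_inserted(bay, serial):
        profile = bay_policy.get(bay)
        if profile:
            print(f"Blade {serial} in bay {bay}: applying profile '{profile}'")
        else:
            print(f"Blade {serial} in bay {bay}: no policy, factory defaults kept")

    on_server_inserted(3, "CZ1234XYZ")   # replacement blade inherits db-profile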

The network administrator defines networks and subnets that will be available to the server administrator. Using the HP Virtual Connect Manager (Figure 8), the server administrator sets up external connections, enables networks to share connections (Figure 9), and supports server aggregation and failover (Figure 10).

Figure 8. HP Virtual Connect Manager screen for defining Ethernet networks and subnets

Figure 9. HP Virtual Connect Manager screen for creating Ethernet VLANs

Figure 10. HP Virtual Connect Manager screen summarizing all Virtual Connect profiles

VC Manager facilitates the upgrade or replacement of a server by enabling the server administrator to reassign the I/O profile to a new server (Figure 11). Additionally, VC Manager enables the administrator to move a Virtual Connect profile from a failed server to a spare server. All of this functionality is embedded in the Virtual Connect module; future releases will automate these processes.

Figure 11. A migration showing how the administrator can move the Ethernet MACs, FC WWNs, and FC boot parameters of a failed server to a spare server.

Conclusion

HP Virtual Connect technology provides a simple, easy-to-use tool for managing the connections between HP BladeSystem c-Class servers and external networks. It cleanly separates server enclosure administration from LAN and SAN administration, relieving LAN and SAN administrators from server maintenance. It makes HP BladeSystem c-Class server blades change-ready, so that server administrators can add, move, or replace those servers without affecting the LANs or SANs.

For more information

For additional information, refer to the resources listed below.

HP BladeSystem technology briefs (HP BladeSystem c-Class architecture; HP BladeSystem c-Class Enclosure; Managing the HP BladeSystem c-Class):
http://h18013.www1.hp.com/products/servers/technology/whitepapers/proliant-servers.html#bl

HP 1/10Gb Ethernet Module:
http://h18013.www1.hp.com/products/servers/technology/whitepapers/proliant-servers.html#bl

HP Systems Insight Manager:
www.hp.com/support/hpsim

Performance Management Pack:
www.hp.com/servers/proliantessentials/pmp

Rapid Deployment Pack:
www.hp.com/servers/rdp

Call to action

Send comments about this paper to [email protected].

© 2007 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

TC070603TB, June 2007

Appendix A: Virtual Connect from the perspective of a server administrator

Preparing for the Virtual Connect implementation

To ensure a successful implementation of Virtual Connect (VC), HP recommends verifying that the following firmware is up to date:

• All Integrated Lights-Out (iLO) firmware
• Each HP BladeSystem server BIOS
• Onboard Administrator firmware
• All network chipset and mezzanine card firmware
• Fibre Channel (FC) Host Bus Adapter (HBA) firmware
• Each Virtual Connect Ethernet and FC module firmware

Working with other administrators

Virtual Connect modules connect to the LAN and SAN. Before the Virtual Connect modules are installed and connected, the LAN, SAN, and server administrators need to plan how Virtual Connect will be connected. Advance planning is required to ensure that sufficient uplink connections are defined and implemented. The following criteria need to be determined:

• Identify the number of uplinks required to meet bandwidth requirements.
• Verify that the upstream network switch ports are configured for Link Aggregation Control Protocol (LACP).
• Decide whether factory MAC addresses and FC worldwide names (WWNs) will be used or whether VC-assigned addresses will be implemented.
• Determine the server connections that will be required to connect the servers and VLANs to the core network.
• Consider any additional VLAN-specific requirements.

NOTE: If link aggregation is required, the upstream switch ports connecting to the Virtual Connect module must be configured for 802.3ad (LACP).

The SAN connections must also be planned; ensure that the following criteria are determined:

• Determine the number of FC uplinks required to meet bandwidth requirements.
• Verify that the upstream FC switch ports are configured for and support N_Port ID Virtualization (NPIV).

During the planning phase, the LAN and server administrators need to determine how each server will connect to the network and in which IP network and VLAN(s) the server will reside. In a typical network, these connections are made through physical cables; if a move from one network to another is required, a cable must also be moved. The c-Class blade servers and VC provide a wire-once implementation: the VC module(s) are connected to the upstream or core switches once, and the Virtual Connect networks and server profiles are then defined. Only when a server profile is assigned to a server is the connection from that server to the core network complete.

Creating server profiles

A Virtual Connect server profile can be created and deleted as required. Server profiles provide the linkage between the server and the network connections defined in Virtual Connect. Server profiles are created within Virtual Connect Manager and contain information about server MAC and WWN addresses, connections to LAN/VLANs and SANs for each NIC and FC HBA, and PXE and/or SAN boot parameters.

If a server requires connections to a specific network or VLAN, a server profile must be created to make those connections, and the profile is then assigned to that server. Once a server profile is assigned to a server in a specific slot and the server is powered on, the connection specifics of that profile are applied to that server. In the event of a server failure, the profile can be moved from one server to another, transferring all the configuration parameters contained in the profile to the new server in a different slot. In addition, if a server is removed from a slot and replaced with a different server, the new server inherits the same configurations and settings.

A server profile defining LAN and SAN connections can be created and applied to a server slot before the server is even installed or purchased. This reduces implementation time because the LAN and SAN connections have been predefined.

NOTES:
• A server must be powered off to assign or remove a server profile.
• There can be a maximum of 64 fully-populated server profiles in a Virtual Connect domain.
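The assignment rules in the notes above can be captured in a small sketch. This is not the VC Manager API (the names are invented); it simply encodes the two stated constraints: profiles apply only to powered-off servers, and a domain holds at most 64 profiles.

    MAX_PROFILES = 64

    class VcDomain:
        def __init__(self):
            self.profiles = {}        # bay -> profile name
            self.powered_on = set()   # bays with a running server

        def assign(self, bay, profile):
            if bay in self.powered_on:
                raise RuntimeError(f"bay {bay}: power off the server first")
            if len(self.profiles) >= MAX_PROFILES:
                raise RuntimeError("domain already holds 64 profiles")
            self.profiles[bay] = profile

        def move(self, src, dst):
            # Both the source and destination servers must be powered off.
            if src in self.powered_on or dst in self.powered_on:
                raise RuntimeError("power off both bays before moving a profile")
            self.profiles[dst] = self.profiles.pop(src)

    domain = VcDomain()
    domain.assign(1, "app-server")
    domain.move(1, 9)   # a spare blade in bay 9 takes over the identity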

Administering Virtual Connect

Ongoing server management is performed securely through the Virtual Connect Manager, Onboard Administrator, and the server's iLO. After Virtual Connect has been implemented, server changes can be made through the VC Manager by the server administrator. The LAN and SAN administrators need to be consulted only when additional LAN connections or SAN LUNs are required. The server administrator can create, manage, move, and assign server profiles as required with no effect on the LAN or SAN.

When the VC-Enet module is first installed into the c-Class enclosure, its default configuration connects all 16 server-side NICs to Uplink Port 1 in a default VLAN. This provides the ability to install a VC-Enet module into an enclosure and provide immediate network access.

When the Virtual Connect manager is run for the first time, a VC Domain is created. When the domain is created, the default network connection is immediately terminated. Specific network connections will then need to be created.

If the VC Domain is deleted after the VC-Enet module has been configured and is functioning with specific LAN and VLAN connections defined, the default configuration connecting all 16 server-side NICs to Uplink Port 1 will be reinstated until a new VC domain is created.

Accessing Onboard Administrator

The Onboard Administrator module provides a central management and access point for the enclosure and the servers within it. The Onboard Administrator module also provides an aggregation point for all iLO controllers within an enclosure, a single network uplink, and IP addresses for each enclosure to connect to the network.

Both the Onboard Administrator and Virtual Connect are accessed and managed through a standard web browser, using SSL security. An administrator can log into the enclosure Onboard Administrator or directly into the VC module. Virtual Connect uses iLO to configure and manage a server within the c-Class enclosure.

Appendix B: Virtual Connect from the perspective of a LAN administrator

Preparing for the Virtual Connect implementation

To ensure a successful implementation of Virtual Connect (VC), HP recommends verifying that the following firmware is up to date:

• All Integrated Lights-Out (iLO) firmware
• Each HP BladeSystem server BIOS
• Onboard Administrator firmware
• All network chipset and mezzanine card firmware
• Fibre Channel (FC) Host Bus Adapter (HBA) firmware
• Each Virtual Connect Ethernet and FC module firmware

Administering MAC addresses

MAC and WWN addresses are assigned through one of three methods:

• Factory-implemented addresses
• Virtual Connect-assigned (VC-assigned) addresses
• User-assigned addresses

NOTE: The user can define the MAC addresses and Virtual Connect will manage them.

During the creation of the VC domain, the administrator must choose between VC-assigned addresses and Factory-assigned addresses.

If VC-assigned addresses are used, the MAC and WWN addresses are associated with each server profile as it is created.

To take full advantage of VC, HP recommends implementing VC-assigned addresses.

Using VC-assigned MAC addresses

Each server NIC is provided with a factory-implemented MAC address, which is stored in the server. VC provides the ability to use that address, but also provides the ability to use VC-assigned MAC addresses. The use of VC-assigned MAC addresses makes it possible to pre-configure a server's MAC address before the server is deployed. VC also provides the ability to move a MAC address from one server to another through the VC Manager.

NOTE: The VC-assigned MAC addresses are legally registered addresses to reduce the possibility of duplicate MAC addresses.

LAN and SAN transparency

Each VC-Enet module has eight (8) 10/100/1000Mb RJ-45 ports, plus two (2) CX4 10Gb external ports that can be used as either stacking or uplink ports. VC supports standard network protocols, such as 802.1Q VLAN tagging and 802.3ad (LACP) trunking, which provides the ability to connect multiple uplinks together to increase network bandwidth to the core switch.

The VC module provides "transparency" to the LAN and SAN and significantly reduces network administration. Because Virtual Connect uses server I/O virtualization, the upstream Ethernet switch sees multiple MAC addresses on the port connected to the VC-Enet module; the VC-Enet module does not appear as a "switch" device. The VC-FC module does not participate in the SAN fabric domain and, as such, does not count as a "switch" in the SAN fabric.

Virtual Connect architecture

Is Virtual Connect a switch?
Virtual Connect Ethernet does not operate as a traditional networking switch device. Like a switch, it provides port isolation, but it operates as an Ethernet bridge in which network traffic is pre-defined from NIC port to uplink port. From the view of the network, Virtual Connect appears as a Pass-Thru device presenting one or more MAC addresses to the network from each uplink port. The best analogy is a VMware environment, where multiple MAC addresses are presented to the network through a single NIC port on a server.

Network transparency
Network transparency refers to the ability of a device (in this case Virtual Connect) to exist transparently between a host system and the upstream network infrastructure. The Virtual Connect Ethernet module transparently passes frames between the source and destination devices without any additional requirements on either device.

Loop prevention and spanning tree
The HP 1/10Gb Virtual Connect module prevents network loops by ensuring that there is only one active uplink (or uplink LAG) for any single network at any one time, so traffic cannot loop from uplink to uplink. In addition, Virtual Connect automatically discovers all the internal stacking links and uses an internal loop-prevention algorithm to ensure that the stacking links cause no loops. None of the internal loop-prevention traffic ever appears on uplinks or moves to the external switch.

Virtual Connect does not participate in Spanning Tree protocol on network uplinks.

Load balancing
Virtual Connect provides load balancing over Link Aggregation Groups. Virtual Connect supports the IEEE 802.3ad standard for port aggregation (LACP only) for external uplinks on the same Virtual Connect module. Link Aggregation Groups load balance traffic by source and destination MAC and/or source and destination IP addresses. Because Virtual Connect understands LACP (IEEE 802.3ad Link Aggregation Control Protocol), it can handle bundled network uplinks to ports in the same HP 1/10Gb Virtual Connect module.
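As a rough illustration of the load-balancing rule above, the sketch below hashes a flow's source and destination MAC addresses to pick a member link. Real LACP hash algorithms are implementation-specific; this only shows the flow-to-link mapping idea.

    def pick_link(src_mac, dst_mac, n_links):
        # The same flow always hashes to the same link (within a run),
        # which preserves frame ordering for that flow.
        return hash((src_mac, dst_mac)) % n_links

    links = ["uplink-1", "uplink-2"]
    flow = ("00:17:a4:77:00:01", "00:1b:2c:3d:4e:5f")
    print("flow uses", links[pick_link(*flow, len(links))])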

Failover
Virtual Connect facilitates link failover by allowing Virtual Connect networks to leverage ports on multiple HP 1/10Gb Virtual Connect modules in the same VC Domain. Depending on its configuration, a Virtual Connect network will transparently shift its upstream communication to a port on the same module or on a different HP 1/10Gb Virtual Connect module in the event of a link failure.

The Virtual Connect Manager typically runs on the HP 1/10Gb VC-Enet module in bay 1; if that module becomes unavailable, the manager fails over to the HP 1/10Gb VC-Enet module in bay 2.

NIC teaming
Virtual Connect supports most features available in the HP NIC Teaming software. With the HP NIC Teaming software, a user can introduce a variety of host-based features, including NIC fault tolerance (NFT) and Transmit Load Balancing (TLB).

Port aggregation
Virtual Connect supports the IEEE 802.3ad protocol for link aggregation in LACP configurations. Link aggregation is only possible for ports on the same HP 1/10Gb Virtual Connect module connected to the same upstream switch. Virtual Connect does not support the aggregation of ports across multiple HP 1/10Gb Virtual Connect modules, even in the same VC Domain.
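A hypothetical validity check for the constraints above might look like this (the data model is invented): all member ports must be on one module and face one upstream switch, or the group is rejected.

    def validate_lag(ports):
        modules = {p["module"] for p in ports}
        switches = {p["upstream_switch"] for p in ports}
        if len(modules) != 1:
            raise ValueError("LAG ports must be on the same VC Ethernet module")
        if len(switches) != 1:
            raise ValueError("LAG ports must connect to the same upstream switch")

    validate_lag([
        {"module": "bay1", "upstream_switch": "core-a", "port": 1},
        {"module": "bay1", "upstream_switch": "core-a", "port": 2},
    ])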

Introducing Virtual Connect into your c-Class system

Firmware updates
The Onboard Administrator firmware must be version 1.20 or greater. Always install the most current firmware for the following items:

• Server blade system ROMs
• Ethernet mezzanines
• Fibre Channel mezzanines
• Onboard Administrator

MAC address management
When setting up a new Virtual Connect Domain for the first time, an administrator may choose to use either factory default Ethernet MAC addresses or Ethernet MAC addresses assigned by Virtual Connect. Once the administrator chooses the type of MAC addresses to use in a VC Domain, the selection is permanent until the domain is deleted. If an administrator chooses to leverage Ethernet MAC addresses managed by Virtual Connect, Virtual Connect can assign MAC addresses that override the factory defaults, or the user can define the MAC addresses and Virtual Connect will manage them. If a blade is moved to a non-Virtual-Connect-managed enclosure, the local MAC addresses on that server blade are automatically returned to the original factory defaults.

Uplink configurations
Virtual Connect supports a variety of uplink types and uplink speeds. These uplinks can be configured in a variety of ways:

• Simple, single-cable uplink with one VLAN
• Multiple uplink ports aggregated in an IEEE 802.3ad LACP link aggregation group (dynamic port channel or port trunk)
• Multiple VLANs sharing a single port or port trunk (IEEE 802.1Q), leveraging single or aggregated ports

Virtual LAN
Virtual Connect is fully VLAN-aware. VLAN-tagged frames can be interpreted by the HP 1/10Gb Virtual Connect module using a Shared Uplink Set and passed to the appropriate Virtual Connect network. Alternatively, VLAN-tagged frames can be passed through the Virtual Connect network for interpretation by the host.
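The two modes above can be sketched with the 802.1Q frame format (TPID 0x8100 at byte offset 12; the VLAN ID is the low 12 bits of the TCI). The Shared Uplink Set mapping shown here is a made-up example, not a real configuration.

    import struct

    def vlan_id(frame: bytes):
        """Return the 802.1Q VLAN ID, or None if the frame is untagged."""
        tpid, tci = struct.unpack_from("!HH", frame, 12)
        return tci & 0x0FFF if tpid == 0x8100 else None

    shared_uplink_set = {10: "Prod-Net", 20: "Backup-Net"}   # VID -> VC network

    def classify(frame):
        vid = vlan_id(frame)
        # Tagged and mapped: hand the frame to the named VC network.
        # Tagged but unmapped: pass it through for the host to interpret.
        return shared_uplink_set.get(vid, "pass-through to host") if vid else "untagged"

    # Dummy frame: 12 bytes of addresses, 802.1Q tag with VID 10, then EtherType.
    tagged = b"\x00" * 12 + b"\x81\x00" + (10).to_bytes(2, "big") + b"\x08\x00"
    print(classify(tagged))   # -> Prod-Net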

Installation

Virtual Connect Manager GUI
The Virtual Connect Manager GUI provides a simple way for Virtual Connect administrators to create and configure Virtual Connect networks, assign Virtual Connect server profiles, and manage the Virtual Connect domain. The Virtual Connect Manager can be accessed through a browser by selecting the Virtual Connect Ethernet module from the Onboard Administrator. The GUI can also be accessed by pointing a web browser to the IP address of the primary HP 1/10Gb Virtual Connect module.

Configuring the data center switch
The upstream data center switch can be configured using any tool that is normally used to configure a data center switch. If the downlink ports on the data center switch to the HP 1/10Gb Virtual Connect module are configured as a port trunk or VLAN trunk (802.1Q tagged), the Virtual Connect administrator must be aware of this and configure the HP 1/10Gb Virtual Connect uplinks accordingly.

Operational management and maintenance

Administering Virtual Connect
All Virtual Connect administration is performed through the Virtual Connect Manager.

Moves, adds, and changes
Virtual Connect server profiles may be moved, created, or modified through the Virtual Connect Manager. To apply a server profile to a server bay, the server occupying that bay must be powered off for the change to take effect. If an applied server profile is modified, the server occupying that blade bay may remain on during the process but must be power-cycled for the change to take effect. If a server profile is moved from one blade bay to another, both the source and destination servers must be powered off.

Network management
Virtual Connect provides no additional network management capabilities; it is not possible to solicit status or receive "events" from the HP 1/10Gb Virtual Connect module directly. It appears as a Pass-Thru device to the network. Detailed Virtual Connect port-level statistics may be viewed through the Virtual Connect Manager. Frame-level detail is available outside the Virtual Connect domain through external network diagnostic tools. In addition, NIC statistics and performance measurements may be gathered using external network diagnostic equipment.

Appendix C: Virtual Connect from the perspective of a SAN administrator

Preparing for the Virtual Connect implementation

To ensure a successful implementation of Virtual Connect (VC), HP recommends verifying that the following firmware is up to date:

• All Integrated Lights-Out (iLO) firmware
• Each HP BladeSystem server BIOS
• Onboard Administrator firmware
• All network chipset and mezzanine card firmware
• Fibre Channel (FC) Host Bus Adapter (HBA) firmware
• Each Virtual Connect Ethernet and FC module firmware

N_Port ID Virtualization

N_Port ID Virtualization (NPIV) is a Fibre Channel standard defined by the Technical Committee T11. NPIV enables multiple Fibre Channel initiators to share a single physical N_Port, thus reducing the overall hardware requirements within a SAN. This allows more devices to communicate with the SAN fabric without consuming additional SAN switch ports.

Leveraging NPIV, neither the Virtual Connect Fibre Channel (VC-FC) module nor any of its ports utilize Fibre Channel Domain IDs within a SAN fabric, yet they still provide fibre port aggregation. Each VC-FC uplink port is treated as an N_Port, and once each port is logged into the SAN fabric, the fibre traffic is passed without any delays or packet modification.

Fibre Channel domains

A Fibre Channel Domain is a defined group of switch ports that interact as a single entity. A SAN fabric can consist of a single SAN switch or multiple switches. Within a SAN fabric, each switch (or a subset of a switch) can be identified as a single domain. This domain is identified within the fabric with a unique domain ID.

All SAN fabrics have a maximum limit to the number of Fibre Channel Domain IDs they can support (the maximum number varies per product/vendor). Using VC-FC technology, administrators are no longer bound by the Domain ID restriction because VC-FC does not use a Domain ID; rather, each port of a VC-FC module is treated as a simple N_Port within the fabric by enabling NPIV on the SAN switch port.

Fibre Channel aggregation vs. switching

One key advantage the VC-FC module provides compared to a Fibre Channel switch is that VC-FC modules do not consume a Fibre Channel domain ID on the SAN fabric. Each VC-FC port appears to the SAN fabric as an N_Port passing multiple HBA WWIDs through a single port. The VC-FC modules can aggregate as many as 16 Fibre Channel HBA ports through a single VC-FC module (when using the maximum oversubscription ratio of 16:1). This method of aggregation is especially important to SAN administrators who continually struggle with SAN fabric segmentation and Fibre Channel domain ID consumption.

Another administrative advantage of VC-FC port aggregation over Fibre Channel switches is the ability to cable once while still allowing blade servers to be added, removed, moved, and re-provisioned without having to physically move a cable. This allows the server administrator to manage the profile/personality of the physical server blades without burdening the SAN and LAN teams with change requests.

Virtual Connect Fibre Channel dependencies

To use Virtual Connect Fibre Channel, administrators must have the following:

• An HP Fibre Channel mezzanine adapter with the appropriate supported firmware and boot BIOS installed in at least one of the c-Class blades within the chassis.
• At least one Virtual Connect Ethernet module installed in Interconnect Bay 1.
• An available interconnect bay for the VC-FC module(s). Note that VC-FC modules are supported in Interconnect Bays 3 through 8.
• For each VC-FC uplink port used (VC-FC has 4 uplink ports per module), an NPIV-capable SAN switch must be attached and NPIV must be enabled on the SAN switch port. Most enterprise-class SAN switches today support NPIV; however, a firmware upgrade may be required. Refer to your SAN switch vendor documentation for more details.
• If switch-based zoning is in place, the use of NPIV requires soft zoning (zoning by WWN) so that the multiple connections sharing a single NPIV switch port can be zoned.

Failover

HP c-Class HBA port mappings are hard-wired in the c-Class midplane to a predetermined interconnect bay side. Therefore, with a dual-port mezzanine adapter in a c-Class blade, port 1 maps to the left interconnect bay, and port 2 maps to the matching interconnect bay on the right side of the chassis (for example, interconnect bays 3 and 4, 5 and 6, or 7 and 8). This design allows for maximum availability and performance.
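A small sketch of this hard-wired mapping follows; the slot-to-bay-pair table is illustrative only, so consult the enclosure documentation for the actual mapping of each mezzanine slot.

    # mezzanine slot -> (left bay, right bay); illustrative pairing
    BAY_PAIRS = {1: (3, 4), 2: (5, 6), 3: (7, 8)}

    def interconnect_bay(mezz_slot, hba_port):
        # Port 1 of a dual-port FC mezzanine goes to the left bay of the
        # pair; port 2 goes to the matching bay on the right.
        left, right = BAY_PAIRS[mezz_slot]
        return left if hba_port == 1 else right

    print(interconnect_bay(1, 2))   # -> 4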

Virtual Connect Fibre Channel modules do not have any interdependencies or mechanisms within the modules themselves to support VC-FC module failure/failover. HP recommends deploying VC-FC modules in pairs in a side-by-side configuration (for example, interconnect bays 3 and 4, or 5 and 6). In doing so, standard Fibre Channel redundancy solutions can be implemented at the OS layer to support multiple paths in either an active/passive or active/active configuration (active/passive and active/active solutions depend on the OS layer, HBA drivers, and failover support software).

Introducing Virtual Connect into your c-Class system

Firmware updates
To implement a VC-FC module into an infrastructure, the following firmware must be verified and updated:

• Each c-Class enclosure Onboard Administrator firmware
• Each c-Class blade iLO 2 firmware
• Each participating c-Class blade server BIOS
• Each participating c-Class blade HBA mezzanine adapter firmware
• All Virtual Connect Ethernet modules
• All Virtual Connect Fibre Channel modules
• All external SAN switches that are directly connected to a port within a Virtual Connect Fibre Channel module
• Potentially all SAN switches within the same Fibre Channel Domain and/or SAN fabric, depending on your SAN management best practices

HBA address management

The Virtual Connect administrator can choose among three HBA port address management schemes:

• Option 1: The first option uses the factory default WWID that is associated with every port of an HBA. Choosing this option limits the virtual capabilities of Virtual Connect but still provides the ability to use port aggregation and NPIV.
• Option 2: The second option implements the block of virtual WWID addresses that HP has predefined for each Virtual Connect Domain. Using this block of 64,000 addresses enables the administrator to take full advantage of Virtual Connect's virtualization and provisioning capabilities.
• Option 3: The third option is similar to the second; however, instead of using the predefined virtual address block of WWIDs, administrators can create their own block of virtual WWIDs. Once the block of IDs is defined, there is no further difference between option 2 and option 3 in terms of Virtual Connect capabilities.

VC-FC and SAN switch uplink configurations

Each Virtual Connect Fibre Channel module provides four auto-sensing 1/2/4Gb uplink ports. The administrator can define the oversubscription ratio for the uplink ports depending upon the Fibre Channel traffic requirements, choosing 16:1, 8:1, or 4:1 for the 4Gb uplink ports. (Note: port aggregation (port channeling/trunking) is not supported at this time.)
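The stated ratios follow directly from the module's 16 server-facing downlinks shared across however many of its uplinks are in use, as this tiny calculation shows:

    DOWNLINKS = 16
    for uplinks in (1, 2, 4):
        print(f"{uplinks} uplink(s) -> {DOWNLINKS // uplinks}:1 oversubscription")
    # 1 -> 16:1, 2 -> 8:1, 4 -> 4:1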

The SAN switches are required to support NPIV to successfully connect to the VC-FC devices. The SAN administrator must ensure that all uplink SAN switches support NPIV and that NPIV is enabled on any port connected to the VC-FC devices.

Zoning, LUN masking, and selective storage presentation

The administration of SAN fabric zoning and LUN masking assignments is no different from that of a typical non-NPIV SAN fabric. The only requirement is that soft zoning be used (if zoning is required) on any SAN switch connected to the VC-FC device.
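Soft zoning can be sketched as membership by WWN rather than by physical switch port, which is what lets the many NPIV logins behind one port land in different zones. The zone name and WWNs below are made up.

    zones = {
        "blade1_to_array": {"50:01:43:80:00:00:00:10", "50:06:0e:80:aa:bb:cc:01"},
    }

    def can_talk(wwn_a, wwn_b):
        # Two WWNs may communicate only if some zone contains both.
        return any(wwn_a in members and wwn_b in members
                   for members in zones.values())

    print(can_talk("50:01:43:80:00:00:00:10", "50:06:0e:80:aa:bb:cc:01"))  # True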

If HP-defined or customer-defined WWIDs are enabled in the VC-FC modules, the Virtual Connect administrator can provide the SAN administrator with the WWIDs of any HP c-Class HBA device before any hardware arrives at the customer's site. This allows the SAN administrator to pre-configure blade storage connections and allocate LUNs.

Installation

VC-FC interface GUI
Virtual Connect utilizes a shared VC administrator web-based interface across the entire VC Domain. The VC administrator code is contained entirely within the firmware of the VC-Enet modules. Therefore, all that is needed is a supported web browser to manage the Virtual Connect Domain, including all Virtual Connect Fibre Channel and Ethernet modules. No additional management software is required, nor is there any need for a separate management server.

Configuring the data center FC switch
When installing the VC-FC module, NPIV must be enabled on the SAN switch that is attached to the VC-FC module uplinks before the server blade HBAs can log in to the fabric. See the Fibre Channel switch firmware documentation for information on whether it supports NPIV and for instructions on enabling this support.

Brocade switch
Most Brocade Fibre Channel switches running Fabric OS 5.1.0 or later support NPIV, which is enabled by default.

When NPIV is not enabled by default, use the portCfgNPIVPort command within the Brocade switch command line interface to enable it. See the Brocade switch firmware documentation for usage of the portCfgNPIVPort command.

Cisco switch
Cisco Fibre Channel switches running SAN-OS 3.0 or later support NPIV. To enable NPIV on Cisco Fibre Channel switches running the Cisco Device Manager, use the following procedure:

1. From the Cisco Device Manager, click Admin.
2. Select FeatureControl. The Feature Control window is displayed.
3. Click the row titled NPIV.
4. In the Action column, select enable, then click Apply.
5. Click Close to return to the Name Server screen.
6. Click Refresh to display the Host ports.

McDATA switch
McDATA Fibre Channel switches with E/OS 8.0 or later support NPIV. McDATA switches require an optional license to enable this function. The following procedure details how to apply this license and enable NPIV:

1. From a browser, open the web user interface for the McDATA switch that is to be connected to the VC-FC module. The Node List view details the devices attached to the McDATA switch.
2. To install the NPIV license, click Maintenance, and then select Options Features.
3. Enter the license key for NPIV in the Feature Key field, select the "N_Port ID Virtualization (NPIV)" link from the window, and apply the key by clicking OK. A check mark in the left window indicates that the N_Port ID Virtualization key is installed.
4. Click Configure, and then select Ports > NPIV.
5. Click Enable.
6. At the prompt, click OK if you are sure you want to enable NPIV.
7. In the Login column, set the value to 17 or higher for each port connected to the VC-FC to ensure proper operation.
8. Click OK to save changes.

Operational management and maintenance

Administering Virtual Connect
Virtual Connect (both Fibre Channel and Ethernet) administration is performed through the web-based Virtual Connect Manager interface. Using a web browser, the VC Manager can be accessed either directly or from within the Onboard Administrator.

Future upgrade plans for the Virtual Connect Manager include a scriptable command line interface.

Moves, adds, and changes
Virtual Connect introduces the concept of server bay profiles. Within the Virtual Connect Manager, server bay profiles are created for each server bay as needed. Each server bay profile contains the NIC MAC addresses, WWIDs and WWNs, Fibre Channel SAN connections, and the Fibre Channel boot parameters. A server bay profile, once created, can be assigned or re-assigned to any server bay within the Virtual Connect Domain. This allows any blade to take on the personality of a prior blade, such as in the case of a blade server failure.

SAN management
To the SAN manager interface, a Virtual Connect port that is connected to a SAN switch looks like an end port with a WWID and a WWN. However, the SAN administrator will not be able to see the WWID/WWN of the ports on each of the blade HBAs until those blades are powered on and perform a standard fabric login. The NPIV function of the SAN switch port understands the multiple WWID/WWNs being presented and subsequently presents each to the fabric as being associated with that port.

There is no change in the way a SAN administrator configures the storage arrays. All HP blades have associated WWIDs, and the storage administrator can still use the same tools to create hosts and assign LUNs to those hosts.
