HP BladeSystem Matrix 6.3 Planning Guide

HP Part Number: 646940-001
Published: May 2011
Edition: 1

© Copyright 2011 Hewlett-Packard Development Company, L.P.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

Microsoft, Windows, and Windows Server are U.S. registered trademarks of Microsoft Corporation.

Contents

1 Overview....5
    HP BladeSystem Matrix documents....5
    Planning summary....6
    HP BladeSystem Matrix infrastructure....7
    HP BladeSystem Matrix components....9
2 HP BladeSystem Matrix services planning....14
    Servers and services to be deployed in HP BladeSystem Matrix....14
    Application services....14
    Management services....16
3 HP BladeSystem Matrix customer facility planning....29
    Racks and enclosures planning....29
    Data center requirements....29
    Virtual Connect domains....30
4 HP BladeSystem Matrix solution storage....35
    Virtual Connect technology....35
    Storage connections....35
    Storage volumes....37
5 HP BladeSystem Matrix solution networking....40
    Network planning....40
    Virtual Connect Ethernet uplink connections....42
    Virtual Connect Flex-10 Ethernet services connections....43
    Manageability connections....45
6 HP BladeSystem Matrix pre-delivery planning checklist....49
7 Next steps....50
8 Support and other resources....51
    Contacting HP....51
    Related information....53
A Dynamic infrastructure provisioning with HP BladeSystem Matrix....54
    Example 1—An agile test and development infrastructure using logical servers....54
    Example 2—An agile test and development infrastructure with IO....60
B Sample configuration templates....68
C Optional Management Services integration notes....76
    HP BladeSystem Matrix and HP Server Automation....76
    HP BladeSystem Matrix and Insight Recovery....76
    HP BladeSystem Matrix and Insight Control for VMware vCenter Server....76
    HP BladeSystem Matrix and Insight Control for Microsoft System Center....77
D HP BladeSystem Matrix and Virtual Connect FlexFabric Configuration Guidelines....80
    Virtual Connect FlexFabric hardware components....80
    FlexFabric interconnects/mezzanines – HP BladeSystem c7000 port mapping....81
    HP BladeSystem c7000 enclosure FlexFabric module placement....82
    FlexFabric configurations using only HP G7 BladeSystem servers....83
    FlexFabric configurations using only HP G6 or i2 BladeSystem servers....85
    FlexFabric configurations using a mixture of HP G7 with G6 and/or i2 BladeSystem servers....87
    HP BladeSystem Matrix configuration guidelines for mixing FlexFabric with Flex-10....90
Glossary....91
Index....95


1 Overview

This guide is the recommended initial document for planning an HP BladeSystem Matrix infrastructure solution. The intended audience is pre-sales and HP Services staff involved in the planning, ordering, and installation of an HP BladeSystem Matrix-based solution.

Planning is the key to success: early planning leads to an HP BladeSystem Matrix order that moves on to a smooth, successful, and satisfactory delivery. Use this guide, along with a planning worksheet, to capture planning decisions, customer-provided details, and HP BladeSystem Matrix configuration parameters for the coming implementation.

Effective planning requires knowledge of BladeSystem technology, including Virtual Connect (VC) FlexFabric, VC Flex-10 Ethernet, and Fibre Channel (FC); knowledge of FC shared storage, including fabric zoning, redundant paths, N_Port ID Virtualization (NPIV), and logical unit number (LUN) provisioning; and knowledge of software configuration planning and functionality, including HP Insight Orchestration (IO), Central Management Server (CMS) software, OS deployment, and any customer-provided management software connected with the HP BladeSystem Matrix implementation.

The HP BladeSystem Matrix Starter Kits and optional expansion kits provide configuration options that enable integration into a customer's existing environment. This document guides you through the planning process by outlining the decisions involved and the data collected in preparing for an HP BladeSystem Matrix solution implementation.

There are two points during the HP BladeSystem Matrix implementation delivery process where design decision input and user action are required; this document outlines both sets of input information:

1. Pre-Order: Before placing the HP BladeSystem Matrix order, you must plan and specify requirements and order options.
2. Pre-Delivery: Before the delivery of the HP BladeSystem Matrix physical infrastructure, you must coordinate the environmental and configuration details so that the on-site implementation service can begin immediately.

HP BladeSystem Matrix documents

Table 1 shows the documentation hierarchy of the HP BladeSystem Matrix infrastructure solution. Read this guide before ordering and configuring the HP BladeSystem Matrix, and use it in conjunction with the HP BladeSystem Matrix Release Notes and HP BladeSystem Matrix Compatibility Chart.

Table 1 HP BladeSystem Matrix documents

Phase: Planning

• HP BladeSystem Matrix 6.3 Compatibility Chart (on Documentation CD: Yes)
  The Compatibility Chart provides version information for HP BladeSystem Matrix components.

• Volume 4, "For Insight Recovery on ProLiant servers", of the HP BladeSystem Matrix 6.3 Setup and Installation Guide (on Documentation CD: No)
  The "Before you begin" section of this document describes storage, networking, and SAN zoning considerations when implementing an HP BladeSystem Matrix Recovery Management (Insight Recovery) configuration.

Phase: Using HP BladeSystem Matrix

• HP BladeSystem Matrix 6.3 Release Notes (on Documentation CD: Yes)
  The release notes provide key information on HP BladeSystem Matrix and its components.

• HP BladeSystem Matrix 6.3 Getting Started Guide (on Documentation CD: Yes)
  The getting started guide provides instructions on how to design your first HP BladeSystem Matrix infrastructure template and then create (or provision) an infrastructure service using that template after the installation is complete.

• HP BladeSystem Matrix 6.3 Troubleshooting Guide (on Documentation CD: Yes)
  The troubleshooting guide provides information on troubleshooting tools and how to recover from errors in an HP BladeSystem Matrix environment.

• HP BladeSystem Matrix Step-by-Step Use Case Guides and demo videos (on Documentation CD: Yes)
  The use case guides provide instructions and demo videos for building six different solutions, one per included demo.

The latest updates to the HP BladeSystem Matrix solution are located on the HP website at http://www.hp.com/go/matrixcompatibility. The supported hardware, software, and firmware versions are listed in the HP BladeSystem Matrix Compatibility Chart. Updates to issues and solutions are listed in the HP BladeSystem Matrix Release Notes.

White papers and external documentation listed above are located on the HP BladeSystem Matrix Infrastructure 6.x product manuals page or on the HP BladeSystem Matrix Documentation CD. HP BladeSystem Matrix QuickSpecs are located at http://h18004.www1.hp.com/products/quickspecs/13297_div/13297_div.pdf; for HP-UX, see http://h18004.www1.hp.com/products/quickspecs/13755_div/13755_div.pdf.

Planning summary

HP BladeSystem Matrix is a platform that creates an HP Converged Infrastructure environment that is simple and straightforward to buy and use. This document presents steps to guide you through the HP BladeSystem Matrix planning process.


Figure 1 HP BladeSystem Matrix planning steps

HP BladeSystem Matrix infrastructure

HP BladeSystem Matrix embodies the HP Converged Infrastructure, enabling provisioning, deployment, and management of application services. The following key components enable this infrastructure:

• Converged infrastructure consisting of virtual I/O, shared storage, and compute resources

• Management environment with physical and virtual machine provisioning and workflow automation, capacity planning, Disaster Recovery (DR)-ready and auto spare failover, continuous optimization, and power management

• Factory and on-site integration services

Planning begins with understanding what makes up each component. Some components might include existing services found in the customer data center. Other components are automatically provided by, or optionally ordered with, HP BladeSystem Matrix.

The physical infrastructure provided by HP BladeSystem Matrix consists of the following components.

HP BladeSystem Matrix FlexFabric enclosures include the following:

• HP BladeSystem c7000 Enclosure with power and redundant HP Onboard Administrator (OA) modules

• Redundant pair of HP VC FlexFabric 10Gb/24-Port modules


HP BladeSystem Matrix Flex-10 enclosures include the following:

• HP BladeSystem c7000 Enclosure with power and redundant OA modules

• Redundant pair of HP VC Flex-10 10Gb Ethernet modules

• Redundant pair of HP VC 8Gb 24-Port FC modules

The following components are included by default, but can be deselected:

• HP 10000 G2 series rack

• HP ProLiant DL360 G7 server functioning as a Central Management Server

The following figure illustrates a basic HP BladeSystem Matrix configuration. Many components displayed in the diagram are discussed in detail in this guide, and are carried through to the HP BladeSystem Matrix Setup and Installation Guide. The examples in this document are based on this sample configuration. Additional detailed application examples are located in "Appendix A—Dynamic infrastructure provisioning with HP BladeSystem Matrix" (page 54). For an Insight Recovery implementation, these steps are required for the HP BladeSystem Matrix configurations at both the primary and recovery sites.

Figure 2 Basic HP BladeSystem Matrix infrastructure

Management infrastructure

The physical infrastructure provided by the customer's data center includes power, cooling, and floor space.


The management infrastructure provided by HP BladeSystem Matrix consists of the following components:

• HP Insight Software Advisor

• HP Insight Dynamics
  ◦ HP Insight Dynamics capacity planning, configuration, and workload management
  ◦ IO
  ◦ HP Insight Recovery (HP IR) (setup requires an additional per-event service)

• HP Insight Control
  ◦ HP Insight Control performance management
  ◦ HP Insight Control power management
  ◦ HP Insight Control virtual machine management
  ◦ HP Insight Control server migration
  ◦ HP Insight Control server deployment
  ◦ HP Insight Control licensing and reports
  ◦ HP iLO Advanced for BladeSystem

• HP Virtual Connect Enterprise Manager (HP VCEM) software

• HP Insight Remote Support Advanced (formerly Remote Support Pack)

• HP Systems Insight Manager (HP SIM)
  ◦ HP System Management Homepage (HP SMH)
  ◦ HP Version Control Repository Manager (HP VCRM)
  ◦ Windows Management Instrumentation (WMI) Mapper

• HP Insight managed system setup wizard

Optional management infrastructure, which can integrate with HP BladeSystem Matrix, includes the following components (discussed throughout this guide):

• Insight Control for Microsoft System Center (additional per-event service required)

• Insight Control for VMware vCenter Server (additional per-event service required)

• HP Server Automation software (customer-provided)

• HP Ignite-UX software (customer-provided)

• Microsoft System Center server (customer-provided)

• VMware vCenter server (customer-provided)

The customer-provided components also include network connectivity, SAN fabric, and network management services such as domain name system (DNS), dynamic host configuration protocol (DHCP), a time source, and domain services. The HP BladeSystem Matrix management components integrate with the customer's existing management infrastructure.

The factory integration and integration services are described in the HP BladeSystem Matrix QuickSpecs.
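The customer-network prerequisites just listed lend themselves to a simple pre-delivery gap check. The sketch below is illustrative only and is not part of any HP tool; the service labels are assumptions based on the list in this section.

```python
# Illustrative pre-delivery helper (not an HP utility): given the network
# services the customer's data center already provides, report the gaps.
REQUIRED_SERVICES = {"dns", "dhcp", "time source", "domain services"}

def missing_services(provided):
    """Return, sorted, the required customer-network services not yet provided."""
    normalized = {s.strip().lower() for s in provided}
    return sorted(REQUIRED_SERVICES - normalized)
```

For example, a site that has confirmed only DNS and a time source still owes DHCP and domain services before delivery: `missing_services(["DNS", "Time Source"])` returns `["dhcp", "domain services"]`.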

HP BladeSystem Matrix components

The following components are available when ordering an HP BladeSystem Matrix infrastructure:

• Four or more blade servers, which form the server pools

• One or more CMS servers to host the management services for the environment

• Starter Kits, which contain the infrastructure needed for a fully working environment when populated with additional server blades

• Expansion Kits, which extend the HP BladeSystem Matrix with additional enclosures, infrastructure, and blades

• HP BladeSystem Matrix enclosure licenses

• Rack infrastructure

• Power infrastructure

• FC SAN storage

• iSCSI SAN storage (optional)

• Switches, transceivers, and signal cables

• Other licenses to enable the HP BladeSystem Matrix environment

For all HP BladeSystem Matrix components and support options, see the HP BladeSystem Matrix QuickSpecs. Additional components such as FC SAN switches and network switches might be required to integrate the HP BladeSystem Matrix solution with the customer's existing infrastructure, and can be included with the HP BladeSystem Matrix order.

Table 2 HP BladeSystem Matrix components

Choose blades
Fill Starter and Expansion Kits to capacity; these blades form your server resource pool for HP BladeSystem Matrix. See the Compatibility Chart for supported blade hardware.
• Blades configured to order. In HP BladeSystem Matrix Flex-10 Starter or Expansion Kits, all blades require a host bus adapter (HBA) mezzanine card. When ProLiant G6 or Integrity i2 blades are integrated within HP BladeSystem Matrix FlexFabric Starter or Expansion Kits, a NIC FlexFabric Adapter is required for all blades in the enclosure. For solutions with all ProLiant G7 blades, the NIC FlexFabric Adapter LOM is embedded on the blade, so no additional modules or mezzanines are required. See "HP BladeSystem Matrix and Virtual Connect FlexFabric Configuration Guidelines" (page 80) for more information about these configuration options.

Choose 1 or more CMS servers
• DL360 G7 Matrix CMS Server: default selection for the CMS. Includes a 10Gb NIC; does not include SFPs or cables.
• BL460c G6 Matrix CMS Server: selection for an all-blade solution.
• Alternate CMS server: right-sized per specific customer needs, ordered or customer-provided. The alternative CMS host must meet all the CMS hardware requirements listed in the HP Insight Software 6.3 Support Matrix and within this document.

Choose 1 HP BladeSystem Matrix Starter Kit
Each Starter Kit includes an HP BladeSystem c7000 Enclosure with redundant OA modules, fully populated with 10 Active Cool fans and six 2400W power supplies, with six C19/C20 single-phase power inputs. HP BladeSystem Matrix licenses are required but not included with Starter Kits (see "Select HP BladeSystem Matrix licenses" in this table).
• Flex-10 Starter Kit for Integrity with HP-UX: redundant VC-Enet Flex-10 modules, redundant VC-FC 8Gb 24-port modules, 8 full-height blade bays available.
• Flex-10 Starter Kit for ProLiant: redundant VC-Enet Flex-10 modules, redundant VC-FC 8Gb 24-port modules, 16 half-height server blade bays available.
• FlexFabric Starter Kit for ProLiant: redundant VC FlexFabric modules, 16 half-height blade bays available.

Choose 1 or more Expansion Kits to grow the HP BladeSystem Matrix
Each Expansion Kit includes an HP BladeSystem c7000 Enclosure with redundant OA modules, fully populated with 10 Active Cool fans and six 2400W power supplies, with six C19/C20 single-phase power inputs.
• Flex-10 Expansion Kit for Integrity: 8 full-height blade bays available; HP BladeSystem Matrix license not included.
• Flex-10 Expansion Kit for ProLiant: 16 half-height blade bays available; HP BladeSystem Matrix licenses included.
• FlexFabric Expansion Kit for ProLiant: 16 half-height blade bays available; HP BladeSystem Matrix licenses included.

Select HP BladeSystem Matrix licenses
HP BladeSystem Matrix licenses are either offered as a required order option or included in the kit. Software license ordering requirements are outlined in the HP BladeSystem Matrix QuickSpecs. For ProLiant, a license must be purchased for each Starter Kit; no license purchase is needed for Expansion Kits (already included). For Integrity, licenses are required for both Starter Kits and Expansion Kits, with a minimum of 8 licenses.
• HP BL Matrix SW 16-Svr 24x7 Supp Insight Software: one required for each ProLiant Starter Kit. This HP BladeSystem Matrix license is included with both ProLiant Expansion Kits.
• HP-UX 11i Matrix Blade 2Skt PSL LTU: per-socket licenses for BL860c i2.
• HP-UX 11i Matrix Blade 4Skt PSL LTU: per-socket licenses for BL870c i2.
• HP-UX 11i Matrix Blade 8Skt PSL LTU: per-socket licenses for BL890c i2.
• HP VCEM BL7000 one-enclosure license: required for each HP BladeSystem Matrix with HP-UX Starter or Expansion Kit.

Choose 1 or more racks
• HP 10000 G2 racks
• Customer-provided racks

Choose power infrastructure
Each HP BladeSystem Matrix enclosure requires six C19/C20 connections. A redundant power configuration is recommended (that is, order PDUs in pairs).
• HP PDUs: monitored power distribution units (PDUs) are recommended for manageability and to reduce the number of power connections required per rack.
• Customer-provided PDUs

Choose supported FC SAN storage
If the customer chooses to provide an existing array, the SAN array must be certified for HP BladeSystem c-Class servers (see the HP StorageWorks and BladeSystem c-Class Support Matrix). SAN storage must be qualified with the VC-FC or VC FlexFabric modules by the storage vendor (see SPOCK for qualified HP SAN storage).
• Fibre Channel HP 3PAR F-Class and T-Class storage systems: at this time, HP 3PAR storage systems can be purchased individually, on a separate order, and installed in a separate rack. A single 3PAR system may consist of multiple cabinets.
• HP StorageWorks EVA: EVAs may be ordered in an HP BladeSystem Matrix rack, or in separate racks for better expandability.
• HP StorageWorks XP Array: ordered in a separate rack.
• Other HP StorageWorks FC storage
• Customer-provided third-party FC storage

(Optional) Add supported iSCSI SAN storage
Supported in HP BladeSystem Matrix as a backing store for VM guests. See the HP BladeSystem Matrix QuickSpecs and HP BladeSystem Matrix Compatibility Chart for recommendations and requirements.
• HP StorageWorks P4300 G2 7.2TB SAS Starter SAN Solution: order up to 8 of these to build a 16-node cluster.
• HP StorageWorks P4500 G2 10.8TB SAS Virtualization SAN Solution: add the 10Gb NIC option for high-bandwidth storage applications.
• Other HP StorageWorks iSCSI solutions
• Customer-provided third-party iSCSI storage

Add switches, transceivers, and signal cables
See the HP BladeSystem Matrix QuickSpecs and HP BladeSystem Matrix Compatibility Chart for recommendations and requirements.
• Configured to order: Ethernet switches and FC SAN switches are required to complete the solution. Transceivers and signal cables are required for uplinks to switches. The number and type of uplinks for Ethernet, SAN, and VC stacking may be determined upon completion of this document. Consult the QuickSpecs of individual components for compatible transceiver or cable choices.
• Customer-provided: FC SAN switches must support NPIV.

Other licenses to enable the HP BladeSystem Matrix environment
Storage license purchase requirements depend on the choice of storage; some examples are listed below. For hypervisor licenses, refer to the QuickSpecs for order options.
• HP StorageWorks XP Command View Advanced Edition (if an HP XP array is ordered; the Remote Web Console can be used alternatively)
• HP Command View EVA License To Use, to host boot and data LUNs (if an HP EVA is purchased)
• VMware licenses
• Hyper-V licenses
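For rough server-pool sizing, the per-enclosure bay counts given in Table 2 (8 full-height bays for an Integrity kit, 16 half-height bays for a ProLiant kit) can simply be summed over the planned enclosures. A minimal sketch, with the family labels as illustrative strings:

```python
# Bay counts per enclosure, taken from Table 2 (labels are illustrative).
BAYS_PER_ENCLOSURE = {
    "integrity": 8,   # full-height blade bays per enclosure
    "proliant": 16,   # half-height blade bays per enclosure
}

def total_blade_bays(enclosures):
    """Sum available bays over (family, count) pairs, e.g. [("proliant", 2)]."""
    return sum(BAYS_PER_ENCLOSURE[family] * count
               for family, count in enclosures)
```

For example, a ProLiant Starter Kit plus one ProLiant Expansion Kit gives `total_blade_bays([("proliant", 2)])`, i.e. 32 half-height bays for the resource pool.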

Customer responsibilities

The customer can select and configure multiple physical Integrity or ProLiant server blades and additional HP BladeSystem Matrix Expansion Kits.

If the default HP ProLiant DL360 G7 management server is not selected, the customer must provide a compatible ProLiant server to function as the CMS.

The customer also provides connectivity to the HP BladeSystem Matrix infrastructure. The number and type of LAN connections are determined in the network planning phase of this document.


IMPORTANT: Be sure that FC SAN SFP+ transceivers are used for FC SAN uplinks, and Ethernet SFP/SFP+ transceivers are used for Ethernet uplinks. VC Flex-10 modules support only Ethernet uplinks, and VC-FC modules support only FC SAN uplinks.

IMPORTANT: VC FlexFabric modules have dual-personality faceplate ports; only ports 1 through 4 may be used as FC SAN uplinks (4Gb/8Gb). Additionally, although all VC FlexFabric ports support 10Gb Ethernet uplinks, only ports 5 through 8 support both 1Gb and 10Gb Ethernet uplinks. Using the wrong port or SFP/SFP+ transceiver for any uplink results in an invalid and unsupported configuration.

IMPORTANT: Two additional VC FlexFabric interconnect modules must be purchased when a NIC FlexFabric Adapter mezzanine card is purchased for each blade. This includes any ProLiant G6 or Integrity i2 configuration.

When the optional StorageWorks EVA4400 array is ordered, two embedded FC SAN switches provide connectivity from HP BladeSystem Matrix enclosures to the array. If the EVA is not included with the Starter Kit, the customer must provide connectivity to a compatible FC SAN array. Customer-supplied FC switches to the external SAN must support boot from SAN and NPIV functionality. For a list of switches and storage supported by VC-FC, see the HP website (http://www.hp.com/storage/spock); registration is required. After logging in, go to the left navigation, click Other Hardware→Virtual Connect, and then click the module applicable to the customer's solution:

• HP Virtual Connect FlexFabric 10Gb/24-port Module for c-Class BladeSystem

• HP Virtual Connect 8Gb 24-Port Fibre Channel Module for c-Class BladeSystem

• HP Virtual Connect 4Gb/8Gb 20-Port Fibre Channel Module for c-Class BladeSystem

NOTE: This last module is not included in a Starter or Expansion Kit; Matrix conversion services are required.

Provisioning of suitable computer room space, power, and cooling is based on specifications described in the HP BladeSystem Matrix QuickSpecs. When hardware is to be installed in customer-provided racks, the customer must order hardware integration services. If the customer elects not to order these services, hardware installation must be completed properly before any HP BladeSystem Matrix implementation services.

When implementing Insight Recovery, two data center sites are used: a primary site for production operations, and a recovery site used in the event of a planned or unplanned outage at the primary site. Each site contains a complete HP BladeSystem Matrix configuration, with an intersite link connecting the sites. Data at the primary site is protected by replicating it to the recovery site. Network and data replication requirements for implementing Insight Recovery are described in Volume 4, "For Insight Recovery on ProLiant servers", of the HP BladeSystem Matrix 6.3 Setup and Installation Guide.

Using this document to formulate a plan early on is an essential part of the order process for HP BladeSystem Matrix.
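Power-input counts for the facility plan follow directly from the six C19/C20 single-phase inputs this guide specifies per enclosure, and from the earlier recommendation to order redundant PDUs in pairs. A hedged sketch of that arithmetic; the outlets-per-PDU value is a placeholder assumption, not a QuickSpecs figure, so substitute the actual PDU model's capacity:

```python
def power_plan(enclosures, outlets_per_pdu=6):
    """Estimate C19/C20 connections and PDU count for a Matrix rack.

    Six single-phase C19/C20 inputs per c7000 enclosure (per this guide);
    PDUs are rounded up to an even count for redundancy. outlets_per_pdu
    is an assumption -- check the chosen PDU's QuickSpecs.
    """
    connections = enclosures * 6
    pdus = -(-connections // outlets_per_pdu)  # ceiling division
    pdus += pdus % 2                           # order PDUs in pairs
    return {"connections": connections, "pdus": pdus}
```

For example, `power_plan(2)` reports 12 C19/C20 connections served by a redundant pair of (assumed) 6-outlet PDUs.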

IMPORTANT: Each secondary Matrix CMS in a federated environment requires the purchase of a Matrix Starter Kit and corresponding services, just as with the primary Matrix CMS implementation. The following chapter covers the planning considerations of a federated CMS in further detail.


2 HP BladeSystem Matrix services planning

Servers and services to be deployed in HP BladeSystem Matrix

Begin planning the HP BladeSystem Matrix configuration and implementation by analyzing your application services and their infrastructure requirements.

Application services can consist of simple or multi-tier, multi-node physical and virtual servers, their associated operating systems, and storage and network requirements. For example, a two-tier database service can consist of an application tier of two to four virtual machines and a database tier of one or two physical server blades.

Management services can include the monitoring, provisioning, and control of application services using components such as Insight Dynamics, server deployment, and VMware vCenter Server.

Server planning required for the HP BladeSystem Matrix Installation and Startup Service

Plan management servers to be installed and configured as follows:

• Management servers hosting the following services:
  ◦ Insight Software CMS
  ◦ Insight Control server deployment, for environments with ProLiant blade servers
  ◦ HP Ignite-UX (pre-existing), for environments with HP-UX and Integrity blade servers
  ◦ SQL Server (or can be installed in a customer-provided SQL Server farm)
  ◦ Required storage management software: HP Command View Enterprise Virtual Array (EVA), XP Command View Advanced Edition, or other storage management software as required

• Hypervisor hosts A and B (Integrity VM, Microsoft Hyper-V, VMware ESX or ESXi)

• Windows, Linux, or HP-UX operating system for a newly created logical server

• Unused server, as the target for a logical server move operation demonstration

• (Optional) Servers allocated as IO automated deployment targets

When implementing Insight Recovery, a similar plan is required for the recovery site.

When implementing a federated CMS, the first CMS installed becomes the primary CMS. Any subsequent CMS that is installed and joined with the federation is called a secondary CMS. A federated CMS may consist of up to five CMS servers (1 primary and 4 secondary). The Insight Orchestration software is installed only on the primary CMS. Each secondary CMS contains the full Insight Software stack, except for Insight Orchestration.
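The federation limits just described (at most five CMS servers: one primary plus up to four secondary, with Insight Orchestration only on the primary) can be expressed as a quick planning check. This is a planning aid only; the host names used with it are placeholders:

```python
def plan_federation(cms_hosts):
    """Split an ordered list of CMS hosts into primary and secondary roles.

    A federated CMS allows 1 primary plus up to 4 secondary servers;
    Insight Orchestration is installed only on the primary.
    """
    if not 1 <= len(cms_hosts) <= 5:
        raise ValueError("a federated CMS supports 1 primary + up to 4 secondary")
    return {"primary": cms_hosts[0], "secondary": list(cms_hosts[1:])}
```

For example, `plan_federation(["cms1", "cms2"])` designates `cms1` as the primary (the IO host) and `cms2` as a secondary; a sixth host raises an error.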

IMPORTANT: Each secondary Matrix CMS of a federated CMS requires the purchase of a Matrix Starter Kit and corresponding services, just as with the primary Matrix CMS implementation.

Application services

This section outlines the type of information you need when planning application services deployed on HP BladeSystem Matrix. These services may be deployed as logical servers or automatically provisioned by the infrastructure orchestration capabilities of Insight Dynamics.


The following defines the information to collect when describing HP BladeSystem Matrix applicationservices:

• Service name:A label used to identify the application or management service◦

◦ Optionally, one or more tiers of a multi-tiered application

◦ The server name on which the application or management service is hosted

• Host type and configuration:Physical blades◦– Server model (e.g. BL870c i2)

– Processor and memory requirements

◦ Virtual machines– Hypervisor (ESX, Hyper-V, HP VM)

– Processor and memory requirements

• Software and OS requirements:List of applications or management services running on the server◦

◦ Operating System types:– Windows Server

– Red Hat Enterprise Linux

– SUSE Linux Enterprise Server

– HP-UX

– Hypervisor OS:– VMware ESX

– Hyper-V on Windows Server 2008

– HP Integrity VM on HP-UX

• SAN storage and fabric:Boot from SAN required for directly deployed physical servers◦

◦ Boot from SAN recommended for VM hosts

◦ FC or iSCSI SAN required for VM guest backing store

◦ LUN size and RAID level

◦ Remote storage for recovery

• Network connectivity:
◦ Connectivity to corporate network

◦ Private network requirements, for example, VMware service console, VMotion network

◦ Bandwidth requirements

The application services examples used in this document are based on use cases described in Exploring the Technology behind Key Use Cases for HP Insight Dynamics for ProLiant servers. For details on how the HP BladeSystem Matrix infrastructure solution can be used to provision a dynamic test and development infrastructure using logical servers or IO templates, see the examples in “Appendix A—Dynamic infrastructure provisioning with HP BladeSystem Matrix” (page 54).


For Insight Recovery implementations, discuss Insight Recovery's DR capabilities with the customer and determine the VC-hosted physical blades and/or VM-hosted logical servers the customer wants Insight Recovery to protect. These logical servers are known as DR-protected logical servers. In addition, sufficient compute resources (physical blades and VM hosts) must be available at the recovery site for a successful Insight Recovery failover. See Volume 4, “For Insight Recovery on ProLiant servers”, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide for more information.

Some customers may not yet be able to articulate the specific details of their failover requirements. In this case, HP recommends that several of the logical servers created as part of the HP BladeSystem Matrix SIG implementation be used as DR-protected logical servers to demonstrate an HP IR configuration and its failover functionality.

Planning Step 1a—Define application services

Use the following template to list the services to be deployed by the HP BladeSystem Matrix infrastructure. If the management service will be hosted by HP BladeSystem Matrix, make sure to include the Management Service description previously provided.

Table 3 Application services in the HP BladeSystem Matrix environment

Service                 Host configuration   Software               Storage requirements   Network requirements

(service name)
  (tier #1 of service)
  (server)              (server type)        (installed software)   (SAN requirements)     (LAN requirements)
  (tier #2 of service)
  (server)              (server type)        (installed software)   (SAN requirements)     (LAN requirements)

IMPORTANT: The HP BladeSystem Matrix infrastructure requires boot from SAN for directly deployed physical servers; boot from SAN is recommended for VM hosts.

Management services

The HP BladeSystem Matrix solution requires an Insight Software management environment. This environment consists of a CMS running Insight Software, a deployment server, storage management (for example, HP Command View EVA), and a SQL server. This environment may also include separate customer-provided servers for the optional management infrastructure mentioned previously. See the following paragraphs discussing separate servers.

Planning the Insight Software CMS

If you have not already performed detailed planning for the CMS, download and run the HP Systems Insight Manager Sizer, currently found online at HP ActiveAnswers (an approximately 40MB zip file that contains a Windows setup.exe). The sizer does not include all the Insight Software being installed in this example; additional disk space requirements are listed later in this section. Additional CMS planning information is available in the HP Insight Software information library: http://h18004.www1.hp.com/products/servers/management/unified/infolibraryis.html


NOTE: When planning a federated CMS, the plan for the primary and each secondary CMS must include exclusion ranges in its VCEM instance to remove overlap between all the current and planned instances of VCEM residing in the same data center.

NOTE: If you are considering configuring the CMS in a high availability cluster either now or in the future, the CMS must be configured within a Windows domain and not as a standalone workgroup. HP does not currently support data migration of a CMS from a workgroup to a Windows domain.

Server hardware

Table 4 Confirm the CMS meets the minimum hardware requirements

Component       Specification

Server          HP ProLiant BladeSystem c-Class server blades (G6 or higher series server is
                recommended), or an HP ProLiant ML300, DL300, DL500 or DL700 (G3 or higher
                series server is recommended)

Memory          12GB for 32-bit Windows management servers (deprecated)
                32GB for 64-bit Windows management servers, appropriate for maximum
                scalability (see below)

Processor       2 processors, dual core (2.4 GHz or faster recommended)

Disk space      150GB disk space is recommended. If usage details are known in advance, a
                better estimate may be obtained from the disk requirements section below.

File structure  New Technology File System (NTFS)

DVD drive       Local or virtual/mapped DVD drive required

There are several commonly used choices for installing and configuring a CMS with the HP BladeSystem Matrix:

• CMS on a rack-mounted ProLiant DL or ML server

• CMS on a ProLiant server blade

• CMS running from mirrored local disks

• CMS running from a SAN-based disk image (boot from SAN)

• A federated CMS, consisting of a primary CMS and one to four secondary CMSs

Each of these has benefits and tradeoffs.

When choosing between a server blade and a racked server configuration, consider the environment's purpose. When choosing to implement the CMS as a server blade, keep in mind that an improper change to the VC-Ethernet network, server profile, or SAN network definitions can render the CMS on a blade unable to manage any other device, including the OA or VC modules. Well-defined processes for management and maintenance operations can mitigate this risk. When hosting the HP BladeSystem Matrix CMS within an HP BladeSystem Matrix enclosure, exercise greater care when accessing VCEM or the VC modules.

When choosing the storage medium for the CMS, the default choice is to run the CMS from a SAN-based disk image. In environments where SAN availability may not be guaranteed (or uniform), it may be preferable to install a fully functional CMS on mirrored local disks. However, this limits the choices, process, and time for recovery in the event of a hardware failure or planned maintenance.


NOTE: If this server is deselected, the customer must supply or order another server that meets the requirements for the CMS.

Considerations when a CMS is not a server blade

When a server other than a server blade in HP BladeSystem Matrix is used as the CMS, consider the following requirements in addition to the requirements listed in the HP Insight Dynamics Installation and Configuration Guide.

Networking connections

The CMS must connect to multiple networks, which are common with those defined inside the HP BladeSystem Matrix environment. In the default configuration for HP BladeSystem Matrix, these networks are named:

• Management

• Production

If the CMS is also the deployment server for HP BladeSystem Matrix, the server must also connect to:

• Deployment

If vCenter is not running, the VMotion networks do not need to be brought into the CMS in either the BL or external server case. Also ensure that the server has adequate physical ports and is configured with virtual local area networks (VLANs) for any other networks to be used with HP BladeSystem Matrix. When implementing Insight Recovery, the CMSs at the primary and recovery sites must be accessible to each other using a fully qualified domain name (FQDN).

SAN connections

In configurations where the CMS is either booted from SAN or also running storage software, the server requires the necessary SAN HBAs and connectivity into the HP BladeSystem Matrix SAN.

Disk requirements

See the HP Insight Software Support Matrix, which shows several different supported combinations of HP SIM, Insight Control server deployment (RDP), and their databases. In addition to the disk space required for the CMS operating system, the requirements for Insight Software are summarized here for planning purposes:

• 20GB for install of Windows Server 2008 R2 Enterprise Edition (recommended CMS operating system)

• 20GB for install or upgrade of HP Insight Software

• Allot 8GB for OS temp space

• Allot 4GB for each OS to deploy. This additional storage must be accessible to the Insight Control server deployment software.

• Allot 65MB per workload on Windows or Linux managed systems or 35MB per workload on HP-UX managed systems. These allotments are for collecting and preserving a maximum of four years of data for use by Insight Capacity Advisor.

• Allot 4GB (CMS DB) per 100 workloads to preserve historical data for Insight Global Workload Manager.

The HP SIM Sizer can help estimate the long-term disk space requirements for logging events and other historic data based on your number of managed nodes and retention plans.
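The allotments above can be combined into a rough planning estimate. The following Python sketch simply applies the figures stated in this section; the image and workload counts passed in are hypothetical inputs, and the HP SIM Sizer remains the authoritative sizing tool.

```python
# Rough CMS disk-space estimate built from the per-item allotments above.
# These are planning aids from this guide, not exact sizing figures.

GB = 1024  # work in MB; 1GB = 1024MB


def cms_disk_estimate_mb(os_images, win_linux_workloads, hpux_workloads):
    total = 20 * GB                            # Windows Server 2008 R2 install
    total += 20 * GB                           # Insight Software install or upgrade
    total += 8 * GB                            # OS temp space
    total += 4 * GB * os_images                # 4GB per OS image to deploy
    total += 65 * win_linux_workloads          # Capacity Advisor, Windows/Linux
    total += 35 * hpux_workloads               # Capacity Advisor, HP-UX
    workloads = win_linux_workloads + hpux_workloads
    total += 4 * GB * -(-workloads // 100)     # gWLM: 4GB per 100 workloads, rounded up
    return total


if __name__ == "__main__":
    mb = cms_disk_estimate_mb(os_images=5, win_linux_workloads=200, hpux_workloads=0)
    print(f"Estimated CMS disk space: {mb / GB:.1f} GB")
```

Add the result to the disk space consumed by the CMS operating system itself before comparing against the 150GB recommendation.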


Ignite-UX server

Ignite-UX is required for all HP BladeSystem Matrix with HP-UX installations.

Considerations for a federated CMS

In IO, scalability can be increased through a federated CMS configuration that contains one primary CMS with a full HP Insight Software installation and up to four secondary CMSs with Insight Software, but without IO. IO provisioning is managed through the primary CMS and executed across all CMSs in the federated CMS environment.

In a federated CMS configuration, DNS lookups of participating CMSs are required for successful IO operation. DNS is used to resolve CMS hostnames to IP addresses. On the primary CMS, forward and reverse DNS lookups must work for each secondary CMS. DNS lookups must be resolved using the FQDN of each system.

In a federated CMS configuration, primary and secondary CMSs share the same deployment servers, such as the Insight Control deployment server and Ignite-UX server. Deployment servers should be registered in the primary CMS, and they must each have their own deployment network that the physical blade servers can access for enabling physical and virtual deployment. Registering the deployment server on the primary CMS requires network access between these servers (via the deployment or management LAN).

Creating a federated CMS configuration can always be achieved for new installations, and sometimes can be achieved for upgrade scenarios. New installs (6.3 or later) are always in federated mode, so you may add a secondary CMS provided that exclusion ranges are configured appropriately in VCEM on the primary and new secondary CMS. When upgrading from a prior version to 6.3 or later, the CMS will not be in federated mode. If this existing CMS has IO installed, then upgrading to a primary CMS requires ATC engagement to preserve IO services and templates. An existing CMS could also become a secondary CMS, but the IO services will be lost, because IO must be uninstalled first.

Table 5 (page 19) outlines supported configurations of a federated CMS with associated management software. See Figure 5 (page 26) for an illustrated example configuration of a federated CMS.
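The forward/reverse DNS requirement can be smoke-tested from the primary CMS before installation. The sketch below is illustrative only: the CMS names are hypothetical placeholders, and the injectable resolver arguments exist so the comparison logic can be exercised without live DNS.

```python
import socket


def check_cms_dns(fqdn, resolve=socket.gethostbyname, reverse=socket.gethostbyaddr):
    """Verify that forward and reverse DNS lookups for a CMS FQDN agree.

    Returns (ip, ok): ok is True when the reverse lookup of the
    forward-resolved IP maps back to the same FQDN (or one of its aliases).
    """
    ip = resolve(fqdn)              # forward lookup: FQDN -> IP
    name, aliases, _ = reverse(ip)  # reverse lookup: IP -> canonical name + aliases
    ok = fqdn.lower() in {name.lower(), *(a.lower() for a in aliases)}
    return ip, ok


if __name__ == "__main__":
    # Hypothetical secondary CMS names; substitute the real FQDNs.
    for cms in ("cms2.example.com", "cms3.example.com"):
        try:
            ip, ok = check_cms_dns(cms)
            print(f"{cms}: {ip} {'OK' if ok else 'MISMATCH'}")
        except OSError as err:
            print(f"{cms}: lookup failed ({err})")
```

A MISMATCH or lookup failure for any secondary CMS should be resolved in DNS before attempting IO operations across the federation.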

NOTE: The configuration of VMM templates takes place on the CMS that manages the Hyper-V hosts.

Table 5 Supported management software with a federated CMS

Management software                         Is it supported for   Single, shared between   Multiple, each CMS
                                            a federated CMS?      primary and secondary    has one instance
                                                                  CMSs

HP Server Automation                        Yes                   Yes                      Yes (1)
Ignite-UX Server                            Yes                   Yes                      Yes (1)
HP Insight Control server deployment (RDP)  Yes                   Yes                      Yes (1)
vCenter Server                              Yes                   Yes                      Yes
CommandView Server                          Yes                   Yes                      Yes (2)
HP Insight Orchestration                    Yes                   Yes                      No
HP Insight Control (except RDP)             Yes                   No                       Yes
HP Insight Control for Microsoft System     Yes                   No                       Yes
Center
HP Insight Control for VMware vCenter       Yes                   No                       Yes
Server
HP Insight Foundation                       Yes                   No                       Yes
HP Insight Dynamics capacity planning,      Yes                   No                       Yes
configuration, and workload management
HP VCEM                                     Yes                   No                       Yes (2)
Microsoft SQL Server (CMS database)         Yes                   No                       Yes
Microsoft System Center                     Yes                   No                       Yes
HP Insight Recovery                         No                    N/A                      N/A
HP Cloud Service Automation (CSA)           No                    N/A                      N/A

(1) The primary CMS must have access to all deployment servers in a federated CMS configuration.
(2) Multiple VCEM instances co-exist in a single data center with federated CMS configurations. There is one instance for each primary and secondary CMS. When these instances share CommandView and/or networks, it is critical to avoid any media access control (MAC) and worldwide name (WWN) conflicts by configuring exclusion ranges for each instance of VCEM.

Additional management servers

If you plan to use the HP StorageWorks XP Command View Advanced Edition, a separate storage management server must be allocated for the XP CV AE software.

When implementing Insight Recovery, there must be separate storage management servers at each site to manage the local array storage (EVA or XP). See Volume 4, “For Insight Recovery on ProLiant servers”, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide for more information.

In environments where the number of managed nodes and virtual machines is large, HP recommends a separate database server to host the CMS information.

VMware vCenter Server must be provided and managed by the customer on a separate server if the customer is managing VMware ESX hosts in the HP BladeSystem Matrix environment. Insight Control for VMware vCenter Server should not be installed on the CMS.

HP Server Automation or HP Ignite-UX must be provided and managed on separate servers if the customer is using either of these software technologies for HP BladeSystem Matrix deployments. HP BladeSystem Matrix is capable of performing operating system deployment, operating system customization, and application deployment through HP Server Automation. To plan for integration of HP Server Automation with HP BladeSystem Matrix, become familiar with the instructions detailed in Integrating HP Server Automation with HP BladeSystem Matrix/Insight Dynamics.

Microsoft System Center must be provided on separate servers if the customer desires to use this software technology as an additional management console for servers in an HP BladeSystem Matrix environment. If used, Insight Control for Microsoft System Center is installed on the separate servers.

Management server scenarios

When planning for this environment, take into consideration the purpose of the HP BladeSystem Matrix deployment and current and future growth. The following scenarios assist in determining the configuration of the management environment.


Limited environment—Demo, evaluation, testing, or POC

Enclosures

• 1 to 2 enclosures

• Mix of up to 250 physical and virtual servers

Management server

• DL360 G7 with 2 processors and 32GB memory

• Windows Server 2008 R2 Enterprise Edition

• Insight Software

• Insight Control server deployment

• SQL Express 2005 or 2008 (installed by Insight Software). SQL Express is not recommended for medium or large environments.

• Storage management software, for example HP Command View EVA (can be installed on a separate server if required by the customer)

Network connections

• Production LAN (uplinked to data center)

• Management LAN (uplinked to data center)

• Deployment LAN (uplinked to data center)

For an illustration of a limited HP BladeSystem Matrix infrastructure as described above, see Figure 2 (page 8) in the overview chapter.

ProLiant standard environment

Enclosures

• 1 to 4 enclosures

• Up to 70 VM hosts. A VM host is a system with a hypervisor installed on it to host virtual machines. A host machine can host more than one virtual machine.

• Any size environment, physical and virtual, up to the HP BladeSystem Matrix scalability limits of a non-federated CMS. This limit is 1,500 logical servers (ProLiant nodes, virtual and physical) when using 64-bit Windows (32-bit CMS has been deprecated).

Management servers

• Server 1
◦ DL360 G7 with 2 processors and 32GB memory

◦ Windows Server 2008 R2 Enterprise Edition

◦ Insight Software

◦ Insight Control server deployment

• Server 2
◦ DL360 G7 with 2 processors and 32GB memory

◦ Windows Server 2008 R2 Enterprise Edition

◦ SQL Server 2005 (or can be installed in a separate SQL server farm)

◦ Storage management software (may also be installed on a separate server)


Network connections

• Production LAN, Management Servers #1, #2

• Management LAN, Management Servers #1, #2

• Deployment LAN, Management Server #1 only

Figure 3 HP BladeSystem Matrix infrastructure configured with ProLiant managed nodes

Integrity standard environment

Enclosures

• 1 to 4 enclosures

• Any size environment, physical and virtual, up to the HP BladeSystem Matrix scalability limits of a non-federated CMS. This limit is 800 logical servers (count of HP-UX nodes, virtual and physical).


Management servers

• Server 1
◦ Insight Software

• Server 2
◦ SQL Server 2005 (can be installed in a separate SQL server farm)

• Server 3
◦ HP Ignite-UX server

• Server 4
◦ Storage management software (can be combined with the SQL Server on Server 2 if required)

Network connections

• Production LAN, Management Servers #1, #2

• Management LAN, Management Servers #1, #2, #3, #4

• Deployment LAN, Management Server #3 only

• SAN A and B; Management Server #4 and each Starter and Expansion kit


Figure 4 HP BladeSystem Matrix infrastructure configured with Integrity managed nodes

Federated environment—positioning for growth

Enclosures

Any size environment, physical and virtual, up to the HP BladeSystem Matrix scalability limits of a federated CMS and positioned for additional growth. Each CMS’s resource pool starts with a BladeSystem Matrix Starter kit and is expanded with BladeSystem Matrix Expansion kits. All infrastructure, management servers, and resource pools of the federation must be collocated in the same data center.

• Limits when all logical servers are ProLiant managed nodes:
◦ 1 primary CMS and up to 4 secondary CMSs

◦ 1,500 nodes for each secondary CMS


◦ 1,000 nodes for the primary CMS

◦ 6,000 nodes maximum across primary and secondary CMS resource pools

• Limits when all logical servers are Integrity managed nodes:
◦ 1 primary CMS and up to 4 secondary CMSs

◦ 800 nodes for each secondary CMS

◦ 600 nodes for the primary CMS

◦ 3,200 nodes maximum across primary and secondary CMS resource pools
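The per-CMS and per-federation ceilings above interact: adding secondary CMSs raises raw capacity, but the federation maximum caps the total. A small Python sketch of the arithmetic, using only the limits stated in this section:

```python
# Scalability limits for a federated CMS, as listed in this section.
LIMITS = {
    "proliant": {"primary": 1000, "secondary": 1500, "federation_max": 6000},
    "integrity": {"primary": 600, "secondary": 800, "federation_max": 3200},
}


def federation_capacity(kind, num_secondaries):
    """Maximum logical-server count for a federation of the given shape."""
    if not 0 <= num_secondaries <= 4:
        raise ValueError("a federated CMS supports 0 to 4 secondary CMSs")
    lim = LIMITS[kind]
    raw = lim["primary"] + num_secondaries * lim["secondary"]
    return min(raw, lim["federation_max"])  # federation-wide cap applies


if __name__ == "__main__":
    for kind in LIMITS:
        for n in range(5):
            print(kind, n, federation_capacity(kind, n))
```

Note that a fully populated ProLiant federation (1 primary plus 4 secondaries) is limited by the 6,000-node federation cap, not by the sum of the per-CMS limits.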

Management servers

• Server 1 (primary CMS)
◦ Insight Software

• Servers 2 through 5 (secondary CMSs)
◦ Insight Software, excluding Insight Orchestration

• Servers 6 through 10 (SQL servers)
◦ SQL Server 2005

• Server 11 (Deployment server)
◦ Ignite-UX, Server Automation, or Insight Control server deployment

• Server 12 (Deployment server)
◦ Additional deployment server (optional)

• Server 13 (Storage management server)
◦ Storage management software, for example HP CommandView EVA or XP edition (can be combined with another server only for the EVA edition; the XP edition must be installed on a separate server)

• Server 14 (Storage management server)
◦ Other/additional storage management software

Network connections

• Production LAN, Management Servers #1, #2, #3, #4, #5, #6, #7, #8, #9, #10

• Management LAN, Management Servers #1, #2, #3, #4, #5, #6, #7, #8, #9, #10, #11, #12, #13, #14

• Deployment LAN, Management Servers #11, #12

• SAN A and B; Management Server #13 and the primary CMS’s Virtual Connect domain group (VCDG)

• SAN C and D; Management Server #14 and some secondary CMSs’ VCDGs


NOTE: SAN switch infrastructure and storage management servers may be shared across CMS boundaries only if VCEM exclusion ranges are configured so that each CMS has a non-overlapping range of WWNs. An example of this is SAN C and D, illustrated in Figure 5.

NOTE: When VCDGs share any networks, but are managed in resource pools of more than one CMS (as shown in Figure 5), VCEM exclusion ranges are mandatory to prevent overlap of MAC addresses.

Figure 5 HP BladeSystem Matrix infrastructure configured with a federated CMS


Planning Step 1b—Determine management servers

When deploying the BladeSystem management environment, the Insight Software components are placed on the same ProLiant server along with infrastructure management tools. This configuration is outlined in the HP BladeSystem Setup and Installation Guide.

The example table below shows management services implemented using the following configuration choices. Note that most environments will not require all of the servers and services shown here. See “Optional Management Services integration notes” (page 76) for more information.

• All Insight Software components that make up the management environment reside on the same physical blade.

• The optional StorageWorks EVA4400 is included with HP BladeSystem Matrix and is managed through HP Command View EVA hosted by the Management Service.

• The production network carries application data traffic and is connected to the data center.

• The management network provides operating system control and is connected to the data center.

• The deployment LAN is used exclusively by the Insight Control server deployment server to respond to PXE boot requests and perform automated operating system installation. Other deployment technologies require a separate deployment network.


Table 6 Example management services for the HP BladeSystem Matrix environment

Matrix CMS #1
  Host configuration:   Physical; DL360 G7; 2 processors; 32GB memory
  Software:             Windows Server® 2008 SP2 (64-bit); Insight Software; Insight Control server deployment; HP Command View EVA; SQL Server (installed by Insight Software)
  Storage requirements: Boot from SAN
  Network requirements: Production; Management; Deployment

Ignite-UX Server
  Host configuration:   Provided by customer; physical (rack mount); Itanium-based
  Software:             HP-UX 11i V3; HP Ignite-UX; Integrity OVMM
  Network requirements: Production; Management; Deployment

SA primary core
  Host configuration:   Provided by customer; physical (rack mount)
  Software:             HP Server Automation software
  Network requirements: Production; Management; Deployment

MSC #1
  Host configuration:   Provided by customer; physical (rack mount)
  Software:             W2K3 R2; Microsoft System Center Configuration Manager; Insight Control for MSC (CM integration modules)
  Network requirements: Production; Management

MSC #2
  Host configuration:   Provided by customer; physical (rack mount)
  Software:             W2K8 R2; MSC Operations Manager; MSC VM Manager; Insight Control for MSC (OM & VMM modules)
  Network requirements: Production; Management

vCenter Server
  Host configuration:   Provided by customer
  Software:             W2K8 R2; VMware vCenter software; Insight Control for VMware vCenter Server
  Network requirements: Production; Management; VMotion

IMPORTANT: When multiple OS deployment technologies (Ignite-UX, Insight Control server deployment, Server Automation) are planned for an HP BladeSystem Matrix installation, a unique and separate deployment LAN must exist for each deployment server.
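The one-deployment-LAN-per-deployment-server rule is easy to validate mechanically during planning. The sketch below is illustrative; the server and LAN names are hypothetical placeholders for entries in your plan.

```python
# Validate that no two deployment servers share a deployment LAN,
# per the IMPORTANT note above. Names here are hypothetical.


def find_shared_deployment_lans(server_to_lan):
    """Return the deployment LANs assigned to more than one deployment server."""
    seen = {}
    for server, lan in server_to_lan.items():
        seen.setdefault(lan, []).append(server)
    return {lan: servers for lan, servers in seen.items() if len(servers) > 1}


if __name__ == "__main__":
    plan = {
        "ignite-ux-1": "deploy-lan-a",
        "rdp-1": "deploy-lan-b",
        "sa-core-1": "deploy-lan-b",  # conflict: shares a LAN with rdp-1
    }
    for lan, servers in find_shared_deployment_lans(plan).items():
        print(f"{lan} is shared by {', '.join(sorted(servers))}; assign separate LANs")
```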

IMPORTANT: Insight Software, SQL Server, and at least one deployment technology are included in all HP BladeSystem Matrix implementations. Storage software for FC storage (such as Command View EVA) is also required in HP BladeSystem Matrix implementations. Any other services must run on customer-provided servers. Separate installation services may be ordered with HP BladeSystem Matrix implementations to deliver Insight Control for Microsoft System Center and/or Insight Control for VMware vCenter Server. See the Appendix for additional integration details.


3 HP BladeSystem Matrix customer facility planning

Customer facility planning is not just about floor space, power, and cooling. It is the physical realization of all the services, networking, and storage that combine to form an HP BladeSystem Matrix solution. A good facility plan contains known requirements balanced with consideration of future requirements.

Racks and enclosures planning

In this section, various infrastructure services are identified to enable HP BladeSystem Matrix implementation. If the service exists in the current customer environment, note the server name, IP address, or other relevant parameters adjacent to the infrastructure service.

Planning Step 2a—HP BladeSystem Matrix rack and enclosure parameters

Complete the following template identifying basic information about the racks and enclosures in this order. Be sure to include the choice of enclosure implemented (Matrix Flex-10, Matrix FlexFabric, or Matrix with HP-UX).

Table 7 Racks and enclosures plan

Item                                    Value

Matrix rack #1
  Rack Model
  Rack Name

Matrix Enclosure #1 (Starter Kit)
  Enclosure Model
  Enclosure Name
  Enclosure Location (Rack Name, U#)

Data center requirements

Customer responsibility: Data center facility planning for BladeSystem installation is located in the HP BladeSystem c-Class Site Planning Guide.

Planning Step 2b—Determine HP BladeSystem Matrix facility requirements

Table 8 Facility requirements

Facility power                                           Value

Facility power connection characteristics
  Voltage, phase
  Receptacle type
  Circuit rating
  Circuit de-rating percentage for the locality          (20% for NA/JP, 0% for much of the EU, or custom percent)
  UPS or WALL
  Power redundancy? (If yes, specify labeling scheme)

Planning metrics for rack:
  Rack weight estimate (in kg or lbs)
  Airflow estimate (in CMM/CFM)
  Watts (W), volt-amps (VA) estimate for rack
  Thermal limit per rack (in watts)                      (customer requirement – compare to estimate)
  Quantity and type of PDUs for rack

Monitored PDUs only:
  Additional uplink & IP address                         (for example, IP address on management LAN)
  SNMP community strings                                 (for example, set to match current infrastructure)

Installation characteristics:
  Identify data center location
  Side clearances/floor space allocation
  Verify ready to receive and install rack

Virtual Connect domains

A VC domain represents the set of VC-Ethernet modules, FC modules, and server blades that are managed together in a single c7000 enclosure, or in multiple connected enclosures (up to 4). VC domains are managed by a Virtual Connect Manager (VCM).

A VC domain group is the collection of one or more VC domains. HP VCEM is used to define the VC domain group and manage the pool of MAC addresses, WWNs, and server profiles within the domains.

The following steps show how to determine whether to connect multiple enclosures into a single domain or use standalone domains under VCEM. The steps also show how to select unique MAC addresses, WWN addresses, and virtual serial numbers.

Determine enclosure stacking

If one or more HP BladeSystem Matrix expansion kits within the rack are being considered, then review the following information to determine whether a multi-enclosure VC domain configuration will be required. Stacking is used only for VC-Ethernet modules (Flex-10 or FlexFabric).

For enclosures with VC Flex-10 and VC-FC modules, HP recommends defining one VC domain per rack. This simplifies cabling, conserves data center switch ports, and is straightforward to implement. For enclosures with VC FlexFabric modules, HP recommends one VC domain per enclosure to maximize available bandwidth for FC SAN and LAN uplinks.

Interconnecting the modules to create a multi-enclosure domain allows all Ethernet NICs on all server blades in the VC domain to have access to any VC uplink port. Only LAN traffic will route through stacking links; FC SAN traffic does not flow over stacking links. Only perform multi-enclosure stacking with VC FlexFabric if the stacking link requirements do not conflict with the per-enclosure SAN uplink requirements. By using these module-to-module links, a single pair of uplinks can serve as the data center network connections for the entire VC domain, which allows any server blade to be connected to any Ethernet network.


Reasons to configure multi-enclosure domains

• Data center switch ports or switch bandwidth are in short supply. VC stacking creates bandwidth sharing amongst enclosures, which conserves data center switch bandwidth.

• Customer desires a multi-enclosure domain configuration.

Reasons to configure single-enclosure domains

• All traffic must be routed through the network.
◦ VC routes intra-enclosure traffic (for example, server port to server port) within the domain via the cross-links. If the customer requires further manageability of this traffic, use single VC domains for each enclosure.

• Physical isolation
◦ The services, networking, and storage environments of each enclosure remain physically isolated.

• Any other situations in which bandwidth sharing between enclosures is not desirable or allowed.

• Customer desires single-enclosure domain configuration.

Stacking link configurations

The following considerations apply to stacking VC Flex-10 Ethernet modules as well as to stacking VC FlexFabric modules.

• All VC-Ethernet modules within the VC domain must be interconnected.
◦ Any combination of cables can be used to interconnect the VC modules.

• Two built-in 10Gb links are provided between modules in horizontally adjacent bays.
◦ Faceplate ports 7 and 8 are shared with the two built-in links, meaning that when port 7 or 8 is enabled (i.e. used as an uplink), the corresponding built-in stacking link is disabled.

• Supported cable lengths on 10Gb stacking links are 0.5 to 7 meters.

• Supported cable lengths on 10Gb uplinks are 3 to 15 meters.

• VC FC uplinks must always exist per enclosure, as FC traffic is not transmitted across stacking links.

Simple stacking examples are diagrammed in the QuickSpecs for the HP Virtual Connect Flex-10 10Gb Ethernet Module for c-Class BladeSystem: http://h18004.www1.hp.com/products/quickspecs/13127_div/13127_div.pdf.


Figure 6 Multi-enclosure stacking enclosure cabling (VC modules are in Bays 1 & 2 for each enclosure)

Example VC domain stacking configurations based upon the number of enclosures are shown above. The one-meter cables are sufficient for stacking short links to adjacent enclosures, while three-meter cables are sufficient for stacking links that span multiple adjacent enclosures. The OA linking cables required for stacking are not shown in the figure.

HP recommends that uplinks alternate between left and right sides, as shown in green.

The examples show stacking of ports 5 and 6 while keeping the two internal cross-links active in a multi-enclosure domain configuration – this is a total of four 10GbE stacking ports of shared bandwidth across enclosures (80Gbps line rate). The two internal cross-links remain active as long as ports 7 and 8 are unused.

Order the following cables for each multi-enclosure domain:

• Quantity 1, 2, or 3 of Ethernet Cable 4ft CAT5 RJ45 for 2, 3, or 4 enclosures, respectively, to be used as OA backplane links (not in figure).

• Quantity 2, 4, or 6 of HP 1m SFP+ 10GbE Copper Cable for 2, 3, or 4 enclosures, respectively, to be used as VC stacking links.

• Order a fixed quantity of 2 HP 3m SFP+ 10GbE Copper Cable to be used as wrap-around VC stacking links in VC domains with 3 or 4 enclosures.
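The cable quantities above follow directly from the enclosure count. A small Python sketch of the bill of materials, using only the quantities stated in the list (the dictionary keys are informal labels, not HP part numbers):

```python
# Cable order for a multi-enclosure VC domain, per the quantities above.
# Keys are informal labels for the three cable types, not HP part numbers.


def stacking_cable_order(enclosures):
    if not 2 <= enclosures <= 4:
        raise ValueError("a multi-enclosure VC domain spans 2 to 4 enclosures")
    return {
        "cat5_4ft_oa_links": enclosures - 1,           # 1, 2, or 3 OA backplane links
        "sfp_plus_1m_stacking": 2 * (enclosures - 1),  # 2, 4, or 6 short stacking links
        "sfp_plus_3m_wraparound": 2 if enclosures >= 3 else 0,  # fixed 2 for 3-4 enclosures
    }


if __name__ == "__main__":
    for n in (2, 3, 4):
        print(n, "enclosures:", stacking_cable_order(n))
```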

Assign unique Virtual Connect MAC addresses

The MAC addresses assigned by VCEM must be unique throughout the data center. In the data center, there may be other BladeSystem enclosures with a range of assigned MAC addresses. Make sure to assign a range that does not conflict with those enclosures.

Federated CMS configurations have VCEM instances for each primary and secondary CMS. When VCDGs in multiple VCEM instances share networks now, or may share them in the future, it is critical to avoid any MAC conflicts by configuring exclusion ranges so that non-overlapping usable ranges exist for each CMS.

When implementing an HP IR configuration, if the primary and recovery site DR-protected servers share a common subnet, make sure that there is no conflict between the MAC addresses that VCEM assigns on both sites. One way to avoid conflicts is by using the sets of 64 MAC address ranges that VCEM provides with the “exclusion ranges” feature. An example of using exclusion ranges is included in Volume 4, “For Insight Recovery on ProLiant servers”, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide.
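The non-overlap requirement for VCEM-assigned address ranges can be checked on paper or with a short script. The sketch below treats each CMS's planned range as a (start, end) pair; the example MAC values are hypothetical, not defaults taken from VCEM.

```python
# Check that planned VCEM MAC (or WWN) address ranges do not overlap.
# The sample ranges in __main__ are hypothetical 64-address blocks.


def mac_to_int(mac):
    """Convert a colon- or hyphen-separated MAC string to an integer."""
    return int(mac.replace(":", "").replace("-", ""), 16)


def ranges_overlap(ranges):
    """ranges: list of (start_mac, end_mac) strings. Returns clashing pairs."""
    spans = sorted((mac_to_int(s), mac_to_int(e), s) for s, e in ranges)
    clashes = []
    for (s1, e1, n1), (s2, e2, n2) in zip(spans, spans[1:]):
        if s2 <= e1:  # next range starts before the previous one ends
            clashes.append((n1, n2))
    return clashes


if __name__ == "__main__":
    plan = [
        ("00:17:A4:77:00:00", "00:17:A4:77:00:3F"),  # e.g. primary CMS
        ("00:17:A4:77:00:40", "00:17:A4:77:00:7F"),  # e.g. secondary CMS
    ]
    print("conflicts:", ranges_overlap(plan))
```

The same comparison applies to WWN exclusion ranges, since WWNs are also fixed-width hexadecimal identifiers.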

Assign unique Virtual Connect WWN addresses

The WWN addresses assigned by VCEM must be unique throughout the data center. You may have existing BladeSystem enclosures with a range of assigned WWN addresses; make sure to assign a range that does not conflict with those enclosures.

Federated CMS configurations have VCEM instances for each primary and secondary CMS. When VCDGs in multiple VCEM instances share SANs now, or may share them in the future, it is critical to avoid WWN conflicts by configuring exclusion ranges so that non-overlapping usable ranges exist for each CMS.

When implementing an HP IR configuration, if the primary and recovery site DR-protected servers share a common SAN fabric, make sure that there is no conflict between the WWN addresses that VCEM assigns on both sites. One way to avoid conflicts is to use the sets of 64 WWN address ranges that VCEM provides with the “exclusion ranges” feature. An example of using exclusion ranges is included in Volume 4, “For Insight Recovery on ProLiant servers”, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide.
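A quick way to sanity-check the ranges recorded for each site or CMS is to compare them numerically. The following sketch is a planning aid only, not part of VCEM; the function name and sample addresses are illustrative. It flags overlapping MAC or WWN ranges:

```python
def ranges_overlap(start_a: str, end_a: str, start_b: str, end_b: str) -> bool:
    """Return True if two colon-delimited hex address ranges overlap.
    Works for both MAC addresses and WWNs."""
    def as_int(addr: str) -> int:
        # Strip separators and interpret the address as one hex number
        return int(addr.replace(":", "").replace("-", ""), 16)
    a_lo, a_hi = sorted((as_int(start_a), as_int(end_a)))
    b_lo, b_hi = sorted((as_int(start_b), as_int(end_b)))
    # Two intervals overlap when each starts before the other ends
    return a_lo <= b_hi and b_lo <= a_hi
```

Running this check for every pair of per-CMS (or per-site) ranges before committing them to VCEM catches conflicts while they are still cheap to fix.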

Select virtual serial numbers

Use virtual serial numbers to provide a virtual identity for your physical server blades; this allows you to easily move server identities. Ensure that each VC domain uses a unique range of virtual serial numbers.

Virtual Connect domains 33


Planning Step 2c—Virtual Connect domain configurations

Each VC domain and the VC domain group must be assigned names. In most cases, a single VCDG is adequate for each HP BladeSystem Matrix implementation.

In a federated CMS configuration, portability groups cannot be shared between CMSs (primary and/or secondary). One VCDG is configured per CMS in a typical BladeSystem Matrix federated CMS.

Table 9 Virtual Connect domain configuration

Item: Value

Virtual Connect Domain Group #1
Name: VCDG name
List the names of each VCD in this VCDG: VCD name(s)

Virtual Connect Domain #1
Name: VCD name
List the names of each enclosure in this VCD: Enclosure name(s)
Multi-enclosure stacking: N/A, recommended, minimum, or other?
MAC addresses: VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1–64
WWN addresses: VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1–64
Serial numbers: HP-defined or user-defined? If HP-defined, select unique range 1–64

Virtual Connect Domain #2
Name: VCD name
List the names of each enclosure in this VCD: Enclosure name(s)
Multi-enclosure stacking: N/A, recommended, minimum, or other?
MAC addresses: VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1–64
WWN addresses: VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1–64
Serial numbers: HP-defined or user-defined? If HP-defined, select unique range 1–64


4 HP BladeSystem Matrix solution storage

After you determine the application and infrastructure services included in the HP BladeSystem Matrix solution, it is time to make several decisions regarding interconnectivity options, storage requirements, and the customer-provided infrastructure.

For more detailed information about the processes outlined in this section, see the HP BladeSystem Matrix Setup and Installation Guide. For HP Insight Recovery (IR) implementations, this process must be used for both the primary and recovery sites.

Virtual Connect technology

This section identifies the network and storage connections used by the application services running on the HP BladeSystem Matrix physical servers. The external network and storage connections are mapped to physical servers using VC virtualization technology. VC is implemented through VC FC, VC Ethernet with Flex-10 capability, and VC FlexFabric with Flex-10 and FC capabilities. VC is managed in the HP BladeSystem Matrix environment using HP VCEM. An HP VCEM software license is included in each HP BladeSystem Matrix kit.

Storage connections

The VC FC or VC FlexFabric modules in an HP BladeSystem Matrix solution enable the c-Class administrator to reduce FC cabling by making use of NPIV. Because it uses an N-port uplink, the VC module is connected to data center FC switches that support the NPIV protocol. When the server blade HBAs or FlexFabric Adapters log in to the fabric through the VC modules, the HBA WWN is visible to the FC switch name server and can be managed as if it were connected directly.

The HP VC FC module acts as an HBA aggregator, where each NPIV-enabled N-port uplink can carry the FC traffic for multiple HBAs. The HP VC FlexFabric modules translate FCoE from the blades into the FC protocol. With VC FlexFabric, FlexFabric Adapters on blade servers, not HBAs, send the FCoE traffic across the enclosure midplane.

IMPORTANT: The HP VC FC uplinks must be connected to a data center FC switch that supports NPIV. See the switch firmware documentation to determine whether a specific switch supports NPIV and for instructions on enabling this support.

The HP BladeSystem Matrix VC FC module has eight uplinks. The HP BladeSystem Matrix VC FlexFabric module also has eight uplinks, four of which are dual-personality uplinks that can be used as FC uplinks. In either case, each uplink is completely independent of the other uplinks and can aggregate up to 16 physical server HBA N-port links into a single N-port uplink through the use of NPIV. Multiple VC FC module uplinks can be grouped logically into a VC fabric when attached to the same FC SAN fabric. This feature enables access to more than one FC SAN fabric, as well as a flexible and fully redundant method of connecting server blades to FC SANs.
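Because each NPIV-enabled uplink aggregates up to 16 HBA N-port links, the minimum uplink count per fabric follows from the number of server HBA ports. A minimal sketch (an illustrative planning helper, not an HP tool):

```python
import math

def min_fc_uplinks(server_hba_ports: int, logins_per_uplink: int = 16) -> int:
    """Minimum number of NPIV uplinks needed to carry the given number
    of server HBA N-port links (up to 16 per uplink, as noted above)."""
    return math.ceil(server_hba_ports / logins_per_uplink)
```

Redundant-path designs would then multiply this count by the number of independent fabrics.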

Planning Step 3a—Collect details about the customer provided SAN storage

The default configuration, as described in the HP BladeSystem Matrix installation and configuration documentation, consists of an EVA and switches in the enclosure to create a complete, self-contained SAN. If the customer chooses an alternative storage configuration, the following information is required for planning the installation.

For details on supported storage options, see the HP BladeSystem Matrix Quick Specs.


Table 10 Storage and fabrics

Question: Response

• Does some or all of the SAN already exist? (Will the Matrix rack and enclosures be connected to an already installed and working SAN and array, or will some or all of the SAN storage be installed for the HP BladeSystem Matrix solution?)
• Number of separate SANs:
• Number of switches per SAN (assume 2):
• Number of arrays:

Planning Step 3b—FC SAN storage connections

The number of SAN connections per enclosure varies depending on the number of redundant paths the customer chooses and the number of separate SAN environments they plan to connect. A typical solution has two SAN connections from the enclosure to an EVA; the two connections provide high availability through SAN multi-pathing.

Table 11 FC SAN storage connections

# | WWPN | Storage controller | VC FC SAN profile | Note
1 | | | Customer SAN name | One of multiple connections to the same SAN (minimum of 1)
2 | | | | Typically a second connection to the first SAN for HA
3 | | | |
4 | | | |
5 | | | |
6 | | | |

NOTE: Every CMS in a federated CMS environment manages its own storage pool. Therefore, storage pool entries must be created on each CMS for the portability groups that the CMS is managing.

Planning Step 3c—iSCSI SAN Storage connections

IMPORTANT: iSCSI is not supported with Integrity nodes.

Whenever iSCSI is used as a VM guest backing store, follow the best practice of separating iSCSI traffic from other network traffic. Physical separation (independent VC Ethernet uplinks) is preferred for providing dedicated bandwidth, and logical separation (VLANs) is important when sharing switching infrastructure. Any bandwidth sharing between iSCSI and other network traffic can be problematic. When implementing iSCSI as a VM backing store, make sure that an iSCSI network is added to your list of networks (in addition to the Management, Production, Deployment, and vMotion networks). Relevant examples of network configurations applicable to HP BladeSystem Matrix environments for VMware with HP StorageWorks P4000 SANs are located in the white paper Running VMware vSphere 4 on HP LeftHand P4000 SAN solutions (http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA3-0261ENW.pdf).

For iSCSI SAN solutions in the HP portfolio, visit http://www.hp.com/go/iSCSI for more information.


Table 12 Example iSCSI SAN storage connections

Network | Host uplink | Router uplink (Data center switch and port) | Signal type | IP address | Provision type (Static or DHCP)
iSCSI | P4300 G2 Node #1/Port 1 | DC1-switch/Port 1 | 1000Base-T | |
iSCSI | P4300 G2 Node #1/Port 2 | DC2-switch/Port 1 | 1000Base-T | |
iSCSI | P4300 G2 Node #2/Port 1 | DC1-switch/Port 2 | 1000Base-T | |
iSCSI | P4300 G2 Node #2/Port 2 | DC2-switch/Port 2 | 1000Base-T | |
iSCSI | P4300 G2 Node #3/Port 1 | DC1-switch/Port 3 | 1000Base-T | |
iSCSI | P4300 G2 Node #3/Port 2 | DC2-switch/Port 3 | 1000Base-T | |
iSCSI | P4300 G2 Node #4/Port 1 | DC1-switch/Port 4 | 1000Base-T | |
iSCSI | P4300 G2 Node #4/Port 2 | DC2-switch/Port 4 | 1000Base-T | |
iSCSI | Enclosure1:Bay1:Port3 | DC1-switch/Port 25 | 10GBase-T | |
iSCSI | Enclosure1:Bay2:Port3 | DC2-switch/Port 25 | 10GBase-T | |

Storage volumes

HP recommends that the CMS be configured to boot from SAN. To facilitate the flexible movement of management services across blades and enclosures, these services must be configured to use shared storage for the OS boot image, the application image, and the application data. HP also recommends that virtual machine hosts boot from SAN.

If connectivity to customer-provided SAN storage is desired, the FC switch must support the NPIV protocol. HP Services personnel will require access to the switch to deploy boot-from-SAN LUNs. Fabric zones are required in a multi-path environment to ensure a successful operating system deployment.

Storage requirements

For each server profile, consider the boot LUN and any additional data storage requirements, and list those parameters in the following table.

The HP BladeSystem Matrix Starter Kit on-site implementation services include the deployment of operating systems on a limited number of configured LUNs on the new or existing customer SAN. For more details about HP BladeSystem Matrix Starter Kit Implementation Services, see the HP BladeSystem Matrix Quick Specs.

The Replicated To column refers to the Insight Recovery remote storage controller target and data replication group names for the replicated LUNs. HP BladeSystem Matrix is disaster recovery ready, which means HP IR licenses are included and the HP IR feature can be enabled by applying Insight Dynamics licenses on supported ProLiant server blades. Application service recovery can be enabled by configuring a second HP BladeSystem Matrix infrastructure at a remote location and enabling storage replication between the two sites. Continuous Access software and licenses are also required. If XP storage is used, Cluster Extension for XP software version 3.0.1 or later is required. See Volume 4, “For Insight Recovery on ProLiant servers”, of the HP BladeSystem Matrix 6.3 Setup and Installation Guide for additional information on storage and data replication requirements.


The following table summarizes the type of information needed when planning application andmanagement services deployed on HP BladeSystem Matrix.

Table 13 Storage volumes

Server | Use and size | vDisk (LUN) name | vHost name | Replicated to | Connected to
(server name) | (LUN properties) | (xxxx_vdisk) | (xxxx_vhost) | (remote target and data replication group name, if replicated) | (local SAN storage target)

The following details define the type of information needed when planning VC FC connections for application services deployed on HP BladeSystem Matrix:

• Server name:
  ◦ A label used to identify the application or management service
  ◦ Optionally, may consist of one or more tiers of a multi-tiered application
  ◦ The server name on which the application or management service is hosted

• Use and size: The purpose and characteristics of the LUNs associated with the FC connection, for example, boot LUN; the LUN ID; and the LUN size

• vDisk (LUN) name: The vDisk label assigned to the LUN

• vHost name: The vHost label assigned to the LUN

• Replicated to: Specifies the remote storage controller WWPN and data replication group name, if using HP Insight Recovery

• Connected to: Specifies the local storage controller WWPN hosting this LUN

Table 14 Example storage volumes for management services

Server | Use and size | vDisk (LUN) name | vHost name | Replicated to | Connected to
CMS | 146GB boot | matrix_cms_vdisk | matrix_cms_vhost | N/A [1] | F400_3PAR

[1] CMS storage is not replicated using HP IR, as a second CMS is required at the remote location.

Application services storage definition examples

“Appendix A—Dynamic infrastructure provisioning with HP BladeSystem Matrix” (page 54) provides storage definition examples for use with application services using logical servers and with IO templates.

Planning Step 3d—Define storage volumes

Based on the service template completed previously, record the shared storage requirements, size, and connections for each service. If the service will be replicated using Insight Recovery, complete the Replicated to column.


Table 15 Example storage volumes for application services

Server | Use and size | vDisk (LUN) name | vHost name | Replicated to | Connected to
VM Host 1 | 20GB boot | esx1_vdisk | esx1_vhost | None [1] | F400_3PAR
VM Host 2 | 20GB boot | esx2_vdisk | esx2_vhost | None [1] | F400_3PAR
ESX shared disk | 500GB VMFS | esx_shared_vdisk | esx1_vhost, esx2_vhost | None [1] | F400_3PAR
Test W2K3 Host1 | 20GB boot | sp_w2k3_sys_01_vdisk | sp_w2k3_sys_01_vhost | None [1] | F400_3PAR
Test W2K8 Host2 | 40GB boot | sp_2008_sys_01_vdisk | sp_2008_sys_01_vhost | None [1] | F400_3PAR
{DB1} | ###GB | xxxx_vdisk | xxxx_vhost | None [1] | (storage target)
{DB2} | ###GB | xxxx_vdisk | xxxx_vhost | None [1] | (storage target)
{App1} | ###GB | xxxx_vdisk | xxxx_vhost | None [1] | (storage target)
{App2} | ###GB | xxxx_vdisk | xxxx_vhost | None [1] | (storage target)

[1] Storage configurations for Insight Recovery are not covered in this example.

Isolating VM Guest storage from VM Host OS files

When performing multiple concurrent VM provisioning requests on the system drive of a hypervisor host, disk I/O can become saturated during the virtual hard drive replication, which can cause the host to become unstable, unresponsive, or both. Current and future HP Insight Dynamics orchestration service requests can fail because the orchestration software is unable to successfully query the host for resource information and virtual machine-specific information. HP recommends planning hypervisors with separate disks for the hypervisor system drive and the backing storage for virtual machines. Doing so results in greater performance and a lower risk of starving the hypervisor of required I/O bandwidth. HP Insight Dynamics orchestration services offer the ability to control which devices are used for provisioning the virtual machine. To avoid this problem, see the HP BladeSystem Matrix Setup and Installation Guide for configuration steps to exclude hypervisor boot volumes from use.

Microsoft Hyper-V

Consult the Hyper-V Planning and Deployment Guide: http://www.microsoft.com/downloads/details.aspx?FamilyID=5da4058e-72cc-4b8d-bbb1-5e16a136ef42&displaylang=en

This document describes the separation of the hypervisor host's network traffic from that of the virtual machines, recommending: “Use a dedicated network adapter for the management operating system of the virtualization server.” The HP recommendation, which has been validated by rigorous testing, is that the principle of isolating hypervisor resources from virtual machine resources should be applied to virtual machine storage as well as networking.

The following site recommends that administrators “Avoid storing system files on drives used for Hyper-V storage”: http://blogs.technet.com/vikasma/archive/2008/06/26/hyper-v-best-practices-quick-tips-1.aspx

The following site recommends that administrators “Place the pagefile and operating system files on separate physical disk drives”: http://www.microsoft.com/whdc/system/sysperf/Perf_tun_srv.mspx

VMware ESX

Most production ESX Server customers concentrate their virtual machine disk usage on external storage, such as an FC SAN, a hardware- or software-initiated iSCSI storage device, or a remote NAS file server (using the NFS protocol).


5 HP BladeSystem Matrix solution networking

Network planning

This section identifies and collects the network configuration used to manage the HP BladeSystem Matrix enclosures. It is assumed that separate networks are used for production communications (for example, application-level communications) and management communications (for example, managing servers and services). Distinct networks are not required, and the two networks can be one and the same. Each deployment network can host only a single deployment service, so planning to use multiple deployment technologies requires multiple, distinct deployment networks.

Collect the following customer network details, which you will use to assign enclosure management and application services network information.

Planning Step 4a—Collect details about the customer provided networks

The following details define information you need when planning networks for HP BladeSystem Matrix:

• Network name—The VC network profile name

• IP address (network number) – The representative (masked) address for the network

• Subnet mask – A bit mask used to determine the membership of an IP address in a network

• Deployment server – The server which handles deployment to the network

• IP range for auto-provisioning – The addresses available to HP Insight Dynamics for static assignment to servers when HP Insight Dynamics provisions an instance of an application service

• VLAN tag—The VLAN id or tag associated with this network

• Preferred Link Connection Speed—The default speed for server profile connections mappedto this network

• DHCP server – The address of the DHCP server for each network

• DNS server – The DNS server addresses for each network

• Gateway IP address – The default gateway for address routing external to the network

• DNS domain name – The DNS suffix specific to a network

• SMTP host – SMTP mail services are required for HP Insight Dynamics workflow notifications. The CMS or another host can be configured to forward notifications

• Time source – A time source is essential for services to function as designed
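When recording these values, it is worth verifying that the IP range reserved for auto-provisioning actually falls inside the network defined by the network number and subnet mask. A minimal sketch using Python's standard ipaddress module (the sample addresses below reuse the guide's 192.168.1.0/255.255.255.0 deployment LAN defaults; the helper itself is illustrative, not an HP tool):

```python
import ipaddress

def range_in_network(network_cidr: str, first_ip: str, last_ip: str) -> bool:
    """True if both ends of an auto-provisioning range are members of
    the network defined by its network number and subnet mask."""
    net = ipaddress.ip_network(network_cidr, strict=True)
    return (ipaddress.ip_address(first_ip) in net and
            ipaddress.ip_address(last_ip) in net)
```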

IMPORTANT: When multiple OS deployment technologies (Ignite-UX, Insight Control server deployment, Server Automation) are planned for an HP BladeSystem Matrix installation, a unique and separate deployment LAN must exist for each deployment server.

IMPORTANT: A federated CMS is highly dependent on the DNS configuration. On the primary CMS, forward and reverse DNS lookups must work for each secondary CMS. DNS lookups must be resolvable using the FQDN of each system.

Table 16 Configuration of networks and switches

Item: Value

Production LAN
IP address (network number):
Subnet mask:
IP range for auto-provisioning:
VLAN tag:
Preferred link connection speed:
Gateway IP address:
DHCP server:
DNS server #1:
DNS server #2:
DNS domain name:

Management LAN
IP address (network number):
Subnet mask:
IP range for auto-provisioning:
VLAN tag:
Preferred link connection speed:
DHCP server:
DNS server #1:
DNS server #2:
Gateway IP address:
DNS domain name:

Deployment LAN
IP address (network number): 192.168.1.0
Subnet mask: 255.255.255.0
Deployment server: (Insight Control server deployment, HP Server Automation, or HP Ignite-UX)
VLAN tag:
Preferred link connection speed:
DHCP server:
DNS server #1: N/A
DNS server #2: N/A
Gateway IP address: N/A
DNS domain name: N/A

VMotion LAN
IP address (network number): 192.168.2.0
Subnet mask: 255.255.255.0
VLAN tag:
Preferred link connection speed:
DHCP server:
DNS server #1: N/A
DNS server #2: N/A
Gateway IP address: N/A
DNS domain name: N/A

Other Network services
SMTP host:
Time source:

Virtual Connect Ethernet uplink connections

Each Flex-10 interconnect module has several numbered Ethernet connectors. All of these connectors can be used to connect to a data center switch (uplink ports), or they can be used to stack VC modules as part of a single VC domain (stacking ports).

Networks must be defined within the VCM so that specific, named networks can be associated with specific external data center connections. These named networks can then be used to specify networking connectivity for individual servers and application services.

The simplest approach to connecting the defined networks to the data center is to map each network to a specific uplink port. Whether a single or multi-enclosure domain is defined, any server has access to any Ethernet port.

For a minimal production-ready configuration, HP recommends that you define a single network using multiple uplinks (an uplink port set). This configuration can provide improved throughput and availability. One data center uplink port is defined using the “A” side (such as Bay 1, or left side) VC Ethernet module, and the second port is defined on the “B” side (such as Bay 2, or right side) VC Ethernet module.

The following table is an example of how the networks can be defined in a multi-enclosure domain. The “Production” and “Management” networks are defined with redundant, cross-enclosure A/B uplink connections to the data center switches; the “Deployment” network traffic, such as a network dedicated to deployment services, is routed entirely within the enclosures, so a data center uplink is not required.

Table 17 VC Ethernet uplink connections example

Network name | VC uplinks (Enclosure VC module ports) | Router uplinks (Data center switch and port) | Signal type
Production | Enclosure1:Bay1:Port2 | DC1net port #4 |
Production | Enclosure2:Bay2:Port2 | DC1net port #5 |
Management | Enclosure1:Bay1:Port3 | DC1net port #6 |
Management | Enclosure2:Bay2:Port3 | DC1net port #7 |
Deployment | N/A | N/A |

In situations where the customer has VLANs in place on the data center networks, or the number of uplinks is constrained, you can combine a number of networks in a shared uplink set.


Table 18 VC Ethernet uplink connections example using Shared Uplink Sets

SUS name (Networks) | VC uplinks | Router uplinks | Signal type
SUS 1 (Production, Management) | Enclosure1:Bay1:Port1 | DC1 port #4 | 10GBase-SR
SUS 1 (Production, Management) | Enclosure2:Bay2:Port1 | DC1 port #5 | 10GBase-SR

The following details define information you need when planning VC Ethernet connections for HP BladeSystem Matrix:

• Network name—The VC network profile name

• Shared uplink set (SUS) name—Optionally, the VC Shared Uplink Set name, when multiple networks share uplinks

• VC uplinks (Enclosure VC module ports)—The VC uplink Ethernet ports. If deploying redundant connections, specify additional ports as required. One VC Flex-10 transceiver must be ordered for each uplink port. Verify compatibility with data center switch transceivers and optical cables.

• Router uplinks (Data center switch and port)—The uplink data center switch name and port number that is the destination of this connection

• Signal type—The physical signal cabling standard for the connection

Planning Step 4b—Virtual Connect Ethernet uplinks

The VC uplink recommendation for a typical production environment is described in the HP BladeSystem Matrix Setup and Installation Guide. Complete the table by identifying the VC Ethernet ports used for uplink connections to the data center networks, and VLAN tags if required. If the same uplink ports will share Production and Management data traffic, then VLAN tags and a SUS are defined.

Table 19 VC Ethernet uplink connections with sample list of networks

Network name | VC uplinks | Router uplinks | Signal type
Production | | |
Management | | |
Deployment | | |
VMotion | | |
iSCSI | | |
Integrity OVMM | | |
SG heartbeat | | |
SG failover | | |
(other network) | Enclosure.bay.port | Switch.port |

Virtual Connect Flex-10 Ethernet services connections

Flex-10 technology is a hardware-based solution that enables users to partition a 10-Gb/s Ethernet (10GbE) connection and regulate the data speed of each partition. While capable of supporting 10 Gb/s bandwidth, the VC Ethernet interconnect is compatible with lower-speed switches.

Each Flex-10 network connection can be dynamically fine-tuned from 100 Mb/s to 10 Gb/s to help eliminate bottlenecks and conserve network capacity. Data center bandwidth requirements vary depending on the application. For example, TCP/IP communications, such as email, file share, and web services, may consume 1 Gb/s of bandwidth. Data center management traffic, such as remote


desktop or virtual machine traffic, may consume 2 Gb/s, and inter-process communications used in cluster control could consume upward of 4 Gb/s of bandwidth.

Using VC Flex-10, you can define a network that does not use any external uplinks. This creates a cable-less network within the VC domain.

The following details define information you need when planning VC Flex-10 Ethernet connections for application services deployed on HP BladeSystem Matrix:
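A useful planning check is that the per-partition allocations carved from one physical 10GbE port cannot exceed the port's capacity. A minimal sketch using the example figures above (the helper and its four-FlexNIC-per-port assumption are illustrative, not an HP tool):

```python
def flex10_fits(partition_gbps, port_capacity_gbps=10.0) -> bool:
    """True if the FlexNIC bandwidth partitions carved from one
    physical 10GbE port fit within the port's line rate.
    Assumes at most four partitions per physical port."""
    return (len(partition_gbps) <= 4 and
            sum(partition_gbps) <= port_capacity_gbps)
```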

• Server name:
  ◦ A label used to identify the application or management service
  ◦ Optionally, can consist of one or more tiers of a multi-tiered application
  ◦ The server name on which the application or management service is hosted

• Network:
  ◦ The VC network profile name

• Port assignment:
  ◦ The Flex NIC port connected to this network
  ◦ Used when specifying a physical blade not auto-provisioned by IO

• Flex-10 bandwidth:
  ◦ Specifies the Flex-10 bandwidth allocation for this NIC
  ◦ Used when specifying a physical blade not auto-provisioned by IO

• PXE settings:
  ◦ Specifies the PXE options (Enabled, Disabled, Use BIOS) for this NIC
  ◦ Used when specifying a physical blade not auto-provisioned by IO

Continuing with the services examples developed previously in the “Servers and services to be deployed in HP BladeSystem Matrix” section, use the following table to define VC Ethernet parameters for those services.

Management services network configuration

The management server network connections consist of connections to the production and management subnets. The deployment network is used by the deployment server.

Table 20 Network host connections example for management services

Server | Connection | Port assignment | Flex-10 bandwidth allotment | PXE setting

Management servers
CMS | Management | 1a, 1b | 1Gb | Disabled
CMS | Production | 2a, 2b | 2Gb | Disabled
Insight Control server deployment | Deployment | 1a, 1b | 1Gb | Enabled
Insight Control server deployment | Production | 2a, 2b | 2Gb | Disabled

Application services network connectivity examples

Appendix A—Dynamic infrastructure provisioning with HP BladeSystem Matrix provides network connectivity examples for use with application services using logical servers and with IO templates.


Planning Step 4c—Define services VC Ethernet connections

Record the connections, type, and destination for each service based on the service template you completed previously.

Table 21 Network host connections example for application services

Server | Connection | Port assignment [1] | Flex-10 bandwidth allotment [1] | PXE setting [1]
(server names) | (connection type) | (VC Ethernet connection #1) | (connection bandwidth) | (uplink destination)
(server names) | (connection type) | (VC Ethernet connection #2) | (connection bandwidth) | (uplink destination)

[1] These parameters can be specified when defining network connections to physical blades not auto-provisioned by IO, such as the CMS, deployment server, SQL Server, and ESX hosts.

NOTE: Currently, IO can only provision a network with a single VLAN ID mapped to a single Flex NIC port. Even though the VC profile network port definition allows traffic from multiple networks to be trunked over a single NIC (with VLAN ID tagging), IO cannot express this in a service template. Ensure that any server blade provisioned by IO has enough NIC ports to individually carry the defined networks.
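This constraint reduces to a simple count check at planning time: each IO-provisioned network needs its own Flex NIC port. As an illustrative sketch (not an HP tool):

```python
def blade_has_enough_nics(networks, flexnic_ports: int) -> bool:
    """True if a blade has at least one Flex NIC port per defined
    network, since IO maps each network to a single port."""
    return flexnic_ports >= len(set(networks))
```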

Manageability connections

The following table lists the network connections required to properly configure and manage each HP BladeSystem Matrix enclosure. Some connections can be provisioned using static addresses, DHCP, or the recommended EBIPA. This table lists the physical network connections and IP address requirements for the BladeSystem enclosure management connections.

Table 22 Required management connections for HP BladeSystem Matrix enclosures

Network | Host uplink | Router uplink (Data center switch and port) | Signal type | IP address | Provision type (EBIPA, Static, or DHCP)
Management | OA #1 | | 1000Base-T | | Static
Management | OA #2 | | 1000Base-T | | Static
Management | VC Ethernet #1 | Through OA connection | Multiplexed | | EBIPA
Management | VC Ethernet #2 | Through OA connection | Multiplexed | | EBIPA
Management | VC Fibre #1 | Through OA connection | Multiplexed | | EBIPA
Management | VC Fibre #2 | Through OA connection | Multiplexed | | EBIPA
Management | Optional – VC Domain IP | Through OA connection | Multiplexed | | Static
Management | Enclosure iLO Range | Through OA connection | Multiplexed | | EBIPA

If an EVA4400 (or EVA6400, EVA8400), P4300 G2 (or P4500 G2), or other storage solution is included in the HP BladeSystem Matrix configuration, the following is a sample of the required network connections. Other storage solutions, such as an HP StorageWorks XP Array, HP 3PAR F-Class InServ storage system, or HP 3PAR T-Class InServ storage system, have similar network connection requirements.


Table 23 Additional network connections for storage management

Network | Host uplink | Router uplink (Data center switch and port) | Signal type | IP address | Provision type
Management | EVA4400 ABM MGMT port | | 100Base-T | | Static
Management | EVA4400 Fibre switch #1 | | 100Base-T | | Static
Management | EVA4400 Fibre switch #2 | | 100Base-T | | Static
Management | P4300 G2 Node #1 | | 100Base-T | | Static
Management | P4300 G2 Node #2 | | 100Base-T | | Static
Management | Other SAN switch | | 100Base-T | | Static
Management | Other FC Storage controller | | 100Base-T | | Static

Other devices included in the HP BladeSystem Matrix configuration, such as monitored PDUs and network switches, also require management network connections.

Table 24 Other additional network connections

Network | Host uplink | Router uplink (Data center switch and port) | Signal type | IP address | Provision type
Management | Monitored PDU #1 | | 100Base-T | | Static
Management | Monitored PDU #2 | | 100Base-T | | Static
Management | Network Switch #1 | N/A | N/A | | Static
Management | Network Switch #2 | N/A | N/A | | Static

Planning Step 4d—Define manageability connections

Based on the HP BladeSystem Matrix required network connections provided in the previous table, use the following template to record the various IP addresses required to manage the BladeSystem enclosures. Use one template for each enclosure ordered.

Table 25 HP BladeSystem Matrix management connections

Network | Host uplink | Router uplink (Data center switch and port) | Signal type | IP address | Provision type (EBIPA, Static, or DHCP)
Management | Starter Kit OA #1 | | 1000Base-T | |
Management | Starter Kit OA #2 | | 1000Base-T | |
Management | Starter Kit VC-Enet #1 | Through OA connection | Multiplexed | |
Management | Starter Kit VC-Enet #2 | Through OA connection | Multiplexed | |
Management | Starter Kit VC-FC #1 | Through OA connection | Multiplexed | |
Management | Starter Kit VC-FC #2 | Through OA connection | Multiplexed | |
Management | Optional – VC Domain IP | Through OA connection | Multiplexed | |
Management | Starter Kit iLO Range | Through OA connection | Multiplexed | (starting IP – ending IP) |
Management | Expansion Kit #1 OA #1 | | 1000Base-T | |
Management | Expansion Kit #1 OA #2 | | 1000Base-T | |
Management | Expansion Kit #1 VC-Enet #1 | Through OA connection | Multiplexed | |
Management | Expansion Kit #1 VC-Enet #2 | Through OA connection | Multiplexed | |
Management | Expansion Kit #1 VC-FC #1 | Through OA connection | Multiplexed | |
Management | Expansion Kit #1 VC-FC #2 | Through OA connection | Multiplexed | |
Management | Expansion Kit #1 iLO Range | Through OA connection | Multiplexed | (starting IP – ending IP) |
Management | EVA4400 ABM MGMT port | | 100Base-T | |
Management | EVA4400 Fibre switch #1 | | 100Base-T | |
Management | EVA4400 Fibre switch #2 | | 100Base-T | |
Management | P4300 G2 Node #1 MGMT | | 100Base-T | |
Management | P4300 G2 Node #2 MGMT | | 100Base-T | |
Management | Other SAN switch | | 100Base-T | |
Management | Other FC Storage controller | | 100Base-T | |
Management | Monitored PDU #1 | | 100Base-T | |
Management | Monitored PDU #2 | | 100Base-T | |
Management | Network Switch #1 | N/A | N/A | |
Management | Network Switch #2 | N/A | N/A | |

Access requirements

The following table shows the various access credentials needed to support the HP BladeSystem Matrix implementation. Each enclosure included in the HP BladeSystem configuration (for example, the Starter Kit enclosure and one or more Expansion Kits) requires OA and VC credentials. Plan or identify credentials for all management services, including the CMS, deployment servers, hypervisor consoles, and storage management consoles. Also identify and plan SNMP settings and user credentials for managed device consoles such as SAN switches, VC, iLO, and OA.


IMPORTANT: It is the user's responsibility to ensure that all CMSs in a federated CMS environment have their user accounts in sync. Create the same user accounts on the primary and secondary CMSs.
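The account-sync requirement above can be spot-checked with a short script. This is a minimal sketch, assuming you can export the local user account names from each CMS; the account names shown are illustrative placeholders, not values from this guide.

```python
# Sketch: flag user accounts that exist on one CMS but not the other in a
# federated CMS pair. Account lists are illustrative placeholders.
primary_accounts = {"Administrator", "matrix-svc", "vcem-admin"}
secondary_accounts = {"Administrator", "matrix-svc"}

def out_of_sync(primary, secondary):
    """Return the sorted list of accounts present on only one CMS."""
    return sorted(primary ^ secondary)

drift = out_of_sync(primary_accounts, secondary_accounts)
print(drift)  # -> ['vcem-admin']
```

Any account reported here should be created on the CMS where it is missing before the federation is put into service.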

Planning Step 4e—Determine BladeSystem infrastructure credentials

Complete the following BladeSystem credentials template, identifying the credentials that will be used to manage the HP BladeSystem Matrix environment.

Table 26 Access credentials sample list

Access | Domain | Credential | Password
Starter Kit—OA | | |
Starter Kit—VC | | |
Starter Kit—iLOs | | |
SNMP READ ONLY community string | N/A | N/A |
SNMP READ WRITE community string | N/A | N/A |
CMS service account (1) | | |
Insight Control server deployment | | |
SQL Server | | |
VMware vCenter Server (2) | | |
Ignite-UX server | | |
HP Command View EVA | | |
SAN switch account | | |
IO User | | |
IO Architect | | |
IO Administrator | | |
Expansion Kit #1—OA | | |
Expansion Kit #1—VC | | |
Expansion Kit #1—iLOs | | |

1 Service account credentials are used to run the various Insight Software services. Typically this account is used only to install and authorize these services. Set the password property to Never Expire.

2 The credentials used to communicate with vCenter will, in part, determine which ESX hosts are visible to HP BladeSystem Matrix. You can provide full access or restricted visibility based on your vCenter configuration.


6 HP BladeSystem Matrix pre-delivery planning checklist

Review user responsibility items before HP BladeSystem Matrix is delivered and before you meet with HP Services personnel. Making sure these items are in place and ready prevents delays in HP BladeSystem Matrix solution implementation. For HP IR implementations, user responsibility items are required for both the primary and recovery sites.

Table 27 User responsibility summary

Item | Ready?
Floor space assigned and reserved? |
Power connections available? |
LAN connections to production and management LAN ports assigned and reserved? |
IP addresses assigned as enumerated above? |
If customer-provided SAN storage, SAN fabric zoning defined? |
If customer-provided SAN storage, are the LUNs created and ready to be presented? |
If customer-provided CMS, record the system IP, hostname, iLO, virtual or physical media present, and credentials. |
If customer-provided deployment server, record IP and credentials. |
If customer-provided vCenter server, record IP and credentials. |
If a separate (remote) SQL server, record hostname, SQL port, instance name, and administrator credentials. |
For HP IR implementations:
Customer-determined, DR-protected logical servers? |
Intersite link operational between sites? |


7 Next steps

After completing the preceding planning and pre-delivery steps, the customer environment is ready for physical delivery of the HP BladeSystem Matrix components. When the HP BladeSystem Matrix components arrive, continue with setup and installation following the procedures described in the HP BladeSystem Matrix Setup and Installation Guide.


8 Support and other resources

Contacting HP

Information to collect before contacting HP

Be sure to have the following information available before you contact HP:

• HP BladeSystem Matrix Starter Kit or Expansion Kit c7000 enclosure serial number and/or SAID (Service Agreement Identifier), if applicable

• Software product name

• Hardware product model number

• Operating system type and version

• Applicable error message

• Third-party hardware or software

• Technical support registration number (if applicable)

IMPORTANT: Be sure to mention that this is an HP BladeSystem Matrix configuration when you call for support. Each HP BladeSystem Matrix Starter Kit or Expansion Kit c7000 serial number identifies it as an HP BladeSystem Matrix installation.

How to contact HP

Use the following methods to contact HP technical support:

• In the United States, see the Customer Service / Contact HP United States website for contact options: http://www.hp.com/go/assistance

• In the United States, call 1-800-334-5144 to contact HP by telephone. This service is available 24 hours a day, 7 days a week. For continuous quality improvement, conversations might be recorded or monitored.

• In other locations, see the Contact HP Worldwide website for contact options: http://www.hp.com/go/assistance

Registering for software technical support and update service

HP BladeSystem Matrix includes, as standard, one or three years of 24 x 7 HP Software Technical Support and Update Service and 24 x 7, four-hour response HP Hardware Support Service. This service provides access to HP technical resources for assistance in resolving software implementation or operations problems.

The service also provides access to software updates and reference manuals in electronic form as they are made available from HP. Customers who purchase an electronic license are eligible for electronic updates.

With this service, Insight software customers benefit from expedited problem resolution as well as proactive notification and delivery of software updates. For more information about this service, see the following website: http://www.hp.com/services/insight

Registration for this service takes place following online redemption of the license certificate.


How to use your software technical support and update service

After you have registered, you will receive a service contract in the mail containing the Customer Service phone number and your Service Agreement Identifier (SAID). You need your SAID when you contact technical support. Using your SAID, you can also go to the Software Update Manager (SUM) webpage at http://www.itrc.hp.com to view your contract online.

Warranty information

HP will replace defective delivery media for a period of 90 days from the date of purchase. This warranty applies to all Insight software products.

HP authorized resellers

For the name of the nearest HP authorized reseller, see the following sources:

• In the United States, see the HP U.S. service locator website: http://www.hp.com/service_locator

• In other locations, see the Contact HP worldwide website: http://www.hp.com/go/assistance

Documentation feedback

HP welcomes your feedback. To make comments and suggestions about product documentation, send a message to: [email protected]

Include the document title and manufacturing part number in your message. All submissions become the property of HP.

Security bulletin and alert policy for non-HP owned software components

Open source software (such as OpenSSL) or third-party software (such as Java) is sometimes included in HP products. HP discloses that the non-HP owned software components listed in the HP Insight Dynamics end user license agreement (EULA) are included with HP Insight Dynamics. To view the EULA, use a text editor to open the /opt/vse/src/README file on an HP-UX CMS, or the <installation-directory>\src\README file on a Windows CMS. (The default installation directory on a Windows CMS is C:\Program Files\HP\Virtual Server Environment, but this directory can be changed at installation time.)

HP addresses security bulletins for the software components listed in the EULA with the same level of support afforded HP products.

HP is committed to reducing security defects and helping you mitigate the risks associated with security defects when they do occur. HP has a well-defined process when a security defect is found that culminates with the publication of a security bulletin. The security bulletin provides you with a high-level description of the problem and explains how to mitigate the security defect.

Subscribing to security bulletins

To receive security information (bulletins and alerts) from HP:

1. Open a browser to the HP home page: http://www.hp.com
2. Click the Support & Drivers tab.
3. Click Sign up: driver, support, & security alerts, which appears under Additional Resources in the right navigation pane.


4. Select Business & IT Professionals to open the Subscriber's Choice webpage.
5. Do one of the following:

• Sign in if you are a registered customer.

• Enter your email address to sign up now. Then, select the box next to Driver and Support alerts and click Continue. Select HP BladeSystem Matrix Converged Infrastructure from the product family section A, then select each of the BladeSystem Matrix entries in section B.

Related information

The latest versions of manuals (only the HP BladeSystem Matrix Compatibility Chart and HP BladeSystem Matrix Release Notes are available) and white papers for HP BladeSystem Matrix and related products can be downloaded from the web at http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?contentType=SupportManual&lang=en&cc=us&docIndexId=64180&taskId=115&prodTypeId=3709945&prodSeriesId=4223779.


A Dynamic infrastructure provisioning with HP BladeSystem Matrix

This section provides examples of how the HP BladeSystem Matrix solution, through dynamic infrastructure provisioning, allows the user to quickly re-purpose infrastructure without reinstallation. Compared with production environments, test and development environments are characterized by the need to quickly set up and then tear down an environment after a relatively short period of time. The workloads show short periods of high utilization, followed by long periods of minimal or no use. Static, stand-alone server environments are not particularly suited to this type of use case.

The following examples are real-world implementations and can be used as guides to the planning process. The first example describes the services implementing dynamic infrastructure provisioning using logical servers. The second example describes the services implementing dynamic infrastructure provisioning using Insight Orchestration (IO); this example is a database service consisting of multiple tiers, each with multiple physical servers.

Example 1—An agile test and development infrastructure using logical servers

In this example, logical servers deployed as physical server blades are used to create a dynamic test and development infrastructure. Logical server management operations are used to rapidly activate and deactivate test and development environments, to quickly re-purpose infrastructure without reinstallation. Resources may be pooled and shared, improving utilization and reducing cost.

Test and development teams share a pool of VC server blades used to develop and test applications on multiple operating systems and versions. A number of different environments are needed, but not all are required at the same time. At any one time, several logical servers are active, making the currently required test and development environments available for use. Other logical servers, for those test and development environments that are not currently needed, are inactive. A deactivated logical server does not consume compute resources such as CPU, memory, or power, but its profile, including associated storage, is retained, making it easy to reactivate the logical server and quickly make the environment available for use again.


Figure 7 An agile test and development infrastructure using logical servers


Bill of materials

Table 28 Bill of materials

Item | HP BladeSystem Matrix included/orderable separately
HP BladeSystem c7000 Enclosure | HP BladeSystem Matrix included
HP VC Flex-10 10Gb Ethernet module x2 | HP BladeSystem Matrix included
HP VC 8Gb 8-Port FC module x2 | HP BladeSystem Matrix included
HP StorageWorks EVA4400 | HP BladeSystem Matrix included (optional)
HP ProLiant BL460c G6—CMS server blade | HP BladeSystem Matrix included (default, deselectable)
HP Insight Software and Licenses | HP BladeSystem Matrix included
HP ProCurve 6600 with 10Gb support | Orderable
HP ProLiant BL460c x3 | Orderable
Windows Server 2003 SP2 EE | Orderable
Red Hat Enterprise Linux | Orderable

Step 1—Define services

Table 29 Define services

Service | Host configuration | Software | Storage requirements | Network requirements
Management Service (CMS) | ProLiant BL460c | Insight Software, Command View EVA, SQL Express | Boot from SAN | Corporate Network #1 – 1Gb; Deployment Network #2 – 2Gb
TestV1Windows | Physical ProLiant BL460c | Windows Server 2003 R2 SP2 SE | Boot from SAN | Corporate network #1 – 1Gb; Deployment network #2 – 2Gb
DevV2Windows | Physical ProLiant BL460c | Windows Server 2003 R2 SP2 SE | Boot from SAN | Corporate network #1 – 1Gb; Deployment network #2 – 2Gb

Step 2a—Racks and enclosures

Table 30 Racks and enclosures plan

Item | Value
Matrix Rack #1
Rack Model | 10642 G2
Rack Name | Mx-Rack1
Matrix Enclosure #1 (Starter Kit)
Enclosure Model | Matrix Starter Kit
Enclosure Name | Mx-Rack1–enc1
Enclosure Location (Rack Name, U#) | Mx-Rack1, U1


Step 2b—Facility planning

Table 31 Facility requirements

Facility Planning | Value
Facility power connections:
Voltage, Phase | 208V, 3 phase (Delta)
Receptacle type | NEMA L15-30R
Circuit rating | 30A breakers
Circuit de-rating percentage for the locality | 20%
UPS or WALL | WALL
Power redundancy? (If yes, specify labeling scheme) | PWR1 & PWR2
Planning metrics for racks:
Rack weight estimate (in kg or lbs) | 698 lbs.
Airflow estimate (in CMM/CFM) | 551 CFM
Watts (W), Volt-Amps (VA) estimate for rack | 8088 W, 8255 VA (Watson order tool); 5355 W, 5464 VA (BladeSystem Power Sizing tool)
Thermal limit per rack (in Watts) (customer requirement – compare to estimate) | 5000 Watts – customer desires power capping to manage limitations
Quantity and type of PDUs for rack | Quantity two of 8.6 kVA Modular PDU, 24A 3 Phase (AF512A); customer will add PDUs when expanding
Monitored PDUs only:
Additional uplink & IP address |
SNMP community strings |
Installation characteristics:
Data center location for rack | 123 Main St., Springfield, IL, USA
Side clearances / floor space allocation | Customer familiar with space requirements
Verify ready to receive and install rack | Logistics handled by customer upon delivery

Step 2c—Virtual Connect domain configuration

Table 32 Virtual Connect domain configuration

Item | Value
Virtual Connect Domain Group #1
Name | VCDG-Matrix1
List the names of each VCD in this VCDG | VCD1-Matrix1
Virtual Connect Domain #1
Name | VCD1-Matrix1
List the names of each enclosure in this VCD | MatrixRack1Enc1


Table 32 Virtual Connect domain configuration (continued)

Item | Value
Multi-enclosure stacking (N/A, recommended, minimum, or other?) | N/A
MAC addresses (VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1-64; range: 00-17-xx-77-00-00 : 00-17-xx-77-03-FF) | 1
WWN addresses (VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1-64; range: 50:06:xx:00:03:C2:62:00 - 50:06:xx:00:03:C2:65:FF) | 1
Serial numbers (HP-defined or user-defined? If HP-defined, select unique range 1–64; range: ??? - ???) | 1

Step 3c—Storage volumes

Table 33 Storage volumes

Server | Use and size | vDisk (LUN) name | vHost name | Replicated to | Connected to
CMS | 50GB boot | matrix_cms_vdisk | matrix_cms_vhost | N/A | EVA4400
TestV1Windows | 16GB boot | TestV1_w2k3_vdisk | TestV1_w2k3_vhost | None | EVA4400
DevV2Windows | 16GB boot | DevV2_w2k8_vdisk | DevV2_w2k8_vhost | None | EVA4400
TestV1RHEL | 16GB boot | TestV1_rhel5_vdisk | TestV1_rhel5_vhost | None | EVA4400
DevV1Windows | 16GB boot | DevV1_w2k8_vdisk | DevV1_w2k8_vhost | None | EVA4400

Step 4a—Network configuration

Table 34 Configuration of networks and switches

Item | Value
Production LAN IP range | 16.89.128.1 – 16.89.128.254
Management LAN IP range | 10.1.1.1 – 10.1.1.254
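Before committing the ranges in Table 34, it can help to confirm that the production and management ranges do not overlap and that each provides enough host addresses. A minimal sketch using Python's standard ipaddress module, with the Table 34 values filled in:

```python
import ipaddress

# Planned ranges from Table 34, as inclusive (start, end) pairs.
production = (ipaddress.ip_address("16.89.128.1"), ipaddress.ip_address("16.89.128.254"))
management = (ipaddress.ip_address("10.1.1.1"), ipaddress.ip_address("10.1.1.254"))

def ranges_overlap(a, b):
    """True when two inclusive (start, end) IP ranges intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

def range_size(r):
    """Number of addresses in an inclusive range."""
    return int(r[1]) - int(r[0]) + 1

print(ranges_overlap(production, management))  # -> False
print(range_size(production))                  # -> 254
```

The same check can be repeated for any additional LAN segments added to the plan.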

Step 4b—VC Ethernet uplinks

Table 35 VC Ethernet uplink connections

Network name | VC uplinks | Router uplinks | Signal type
Production | Mx-Rack1-enc1:bay 1:port 1; Mx-Rack1-enc1:bay 2:port 1 | ProCurve Switch #1, port 25; ProCurve Switch #1, port 26 | 10Gbase-SR
Management | Mx-Rack1-enc1:bay 1:port 2; Mx-Rack1-enc1:bay 2:port 2 | ProCurve Switch #1, port 27; ProCurve Switch #1, port 28 | 10Gbase-SR
Deployment | Mx-Rack1-enc1:bay 1:port 3; Mx-Rack1-enc1:bay 2:port 3 | ProCurve Switch #1, port 29; ProCurve Switch #1, port 30 | 10Gbase-SR


Step 4c—Services network connections

Table 36 Network host connections

Server | Connection | IP | Flex-10 bandwidth allotment | Connected to (1)
Management Service | VC-Ethernet Uplink – Corporate Network #1 | 16.89.129.100 | 1Gb | Production LAN segment
Management Service | VC-Ethernet Uplink – Deployment Network #2 | 10.1.1.100 | 2Gb | Management LAN segment
TestV1Windows | VC-Ethernet Uplink – Corporate Network #1 | 16.89.129.101 | 1Gb | Production LAN segment
TestV1Windows | VC-Ethernet Uplink – Deployment Network #2 | 10.1.1.101 | 2Gb | Management LAN segment
DevV2Windows | VC-Ethernet Uplink – Corporate Network #1 | 16.89.129.102 | 1Gb | Production LAN segment
DevV2Windows | VC-Ethernet Uplink – Deployment Network #2 | 10.1.1.102 | 2Gb | Management LAN segment
TestV1RHEL | VC-Ethernet Uplink – Corporate Network #1 | 16.89.129.103 | 1Gb | Production LAN segment
TestV1RHEL | VC-Ethernet Uplink – Deployment Network #2 | 10.1.1.103 | 2Gb | Management LAN segment
DevV1Windows | VC-Ethernet Uplink – Corporate Network #1 | 16.89.129.104 | 1Gb | Production LAN segment
DevV1Windows | VC-Ethernet Uplink – Deployment Network #2 | 10.1.1.104 | 2Gb | Management LAN segment

(1) To enable network redundancy, multiple connections can be made to the same LAN segment.
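The Flex-10 bandwidth allotments recorded in Table 36 are carved from shared 10Gb physical ports, so it is worth confirming that the allotments sharing a port stay within the 10Gb total. A minimal sketch; the per-server groupings below are illustrative and should be mapped to your actual FlexNIC-to-port layout:

```python
# Hypothetical allotments in Gb for connections sharing one 10Gb Flex-10 port.
allotments = {
    "Management Service": [1, 2],
    "TestV1Windows": [1, 2],
    "DevV2Windows": [1, 2],
}
FLEX10_PORT_GB = 10

def fits_flex10(conn_gb, limit=FLEX10_PORT_GB):
    """True when the summed allotments fit within one physical port."""
    return sum(conn_gb) <= limit

oversubscribed = [srv for srv, conns in allotments.items() if not fits_flex10(conns)]
print(oversubscribed)  # -> []
```

Any server listed in the result should have its allotments reduced or spread across additional ports before the profile is created.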

Step 4d—Management network connections

Table 37 Management connections

Network | Host uplink | Router uplink (Data center switch and port) | Signal type | IP address | Provision type (EBIPA, Static, or DHCP)
Production | Starter Kit OA #1 | ProCurve Switch #1, port 1 | 1000Base-T | 16.89.129.10 | Static
Production | Starter Kit OA #2 | ProCurve Switch #1, port 2 | 1000Base-T | 16.89.129.11 | Static
Production | Starter Kit VC Ethernet #1 | Through OA connection | Multiplexed | 16.89.129.70 | EBIPA
Production | Starter Kit VC Ethernet #2 | Through OA connection | Multiplexed | 16.89.129.71 | EBIPA
Production | Starter Kit VC Fibre #1 | Through OA connection | Multiplexed | 16.89.129.72 | EBIPA
Production | Starter Kit VC Fibre #2 | Through OA connection | Multiplexed | 16.89.129.73 | EBIPA
Production | Optional – VC Domain IP | Through OA connection | Multiplexed | 16.89.129.12 | Static
Production | iLO Range | Through OA connection | Multiplexed | 16.89.129.50-65 | EBIPA


Table 37 Management connections (continued)

Network | Host uplink | Router uplink (Data center switch and port) | Signal type | IP address | Provision type (EBIPA, Static, or DHCP)
Production | EVA4400 ABM MGMT port | ProCurve Switch #1, port 3 | 100Base-T | 16.89.129.7 | Static
Production | EVA4400 Fibre Switch #1 | ProCurve Switch #1, port 4 | 100Base-T | 16.89.129.8 | Static
Production | EVA4400 Fibre Switch #2 | ProCurve Switch #1, port 5 | 100Base-T | 16.89.129.9 | Static
Production | ProCurve Switch #1 | DC switch #1, port 7 | 1000Base-T | 16.89.129.1 | Static

Step 4e—Access credentials

Table 38 Access credentials

Access | Domain | Credential | Password
Starter Kit – OA | | Admin | acme
Starter Kit – VC | | Admin | acme
SNMP READ ONLY community | N/A | N/A | AcmeDC1-public
SNMP READ WRITE community | N/A | N/A | AcmeDC1-private
CMS service account | Acme.net | Administrator |
Insight Control server deployment | Acme.net | Administrator |
HP Command View EVA | Acme.net | Administrator |

Example 2—An agile test and development infrastructure with IO

IO enables IT architects to visually design a catalog of infrastructure service templates, including multi-tier, multi-node configurations that can be activated in minutes. To ease the load on your IT staff and streamline processes, IO enables self-service provisioning. Authorized users can provision infrastructure from pools of shared server and storage resources using a self-service portal. This portal includes features to automate and streamline the approval process for requested infrastructure. In addition, IO is designed to integrate with existing IT management tools and processes using an embedded workflow automation tool. Integration can make the infrastructure delivery process work more efficiently and reliably across IT architects, administrators, and operations teams.

In this solution, IO is used with Insight Dynamics to create a dynamic multi-server, multi-tier test and development infrastructure using both logical servers deployed as physical server blades and VMware ESX VM logical servers. Use of IO reduces cost, saves time, and improves quality by automatically provisioning easily managed services, as needed, using published service templates that incorporate best practices and policies. IO delivers advanced template-driven design, provisioning, and ongoing operations for multi-server, multi-tier infrastructure services.

IO provides role-based user interfaces and functionality. An architect uses the IO Designer to create and publish service templates that incorporate best practices and policies. Service templates can also include workflows to automate and integrate customer-specific actions, such as approvals or notifications, into automated service provisioning and resource allocation. Using the IO Self-Service Portal, an end user can easily create and manage a service, as needed. An administrator uses the IO console integrated with HP SIM to manage users, resource pools, and self-service requests.


Figure 8 An agile test and development infrastructure with IO templates

Bill of materials

Table 39 Bill of materials

Item | HP BladeSystem Matrix included/orderable separately
HP BladeSystem c7000 Enclosure | HP BladeSystem Matrix included
HP VC Flex-10 10Gb Ethernet module x2 | HP BladeSystem Matrix included
HP VC 8Gb 8-Port FC module x2 | HP BladeSystem Matrix included
HP StorageWorks EVA4400 | HP BladeSystem Matrix included (optional)
HP ProLiant BL460c G6 Server – CMS server blade | HP BladeSystem Matrix included (default, deselectable)
HP Insight Software and Licenses | HP BladeSystem Matrix included


Table 39 Bill of materials (continued)

Item | HP BladeSystem Matrix included/orderable separately
HP ProCurve 6600 with 10Gb support | Orderable
HP ProLiant BL680c Server x2 | Orderable
HP ProLiant BL460c Server x2 | Orderable
Windows Server 2003 SP2 EE | Orderable
Red Hat Enterprise Linux | Orderable
VMware ESX Server with VMotion | Orderable
VMware vCenter server | Orderable

Step 1—Define services

Table 40 Define services

Service | Host configuration | Software | Storage requirements | Network requirements
Management service
Management service (CMS) | BL460c | Insight Software, Command View EVA, SQL Express | Boot from SAN | Corporate Network #1 – 1 Gb/s; Deployment Network #2 – 2 Gb/s
Application tier
VMApp1 | Virtual Server | Windows Server 2003 R2 SP2 SE | VMFS | Production LAN#1 – 1 Gb/s; Management LAN#2 – 2 Gb/s
VMApp2 | Virtual Server | Windows Server 2003 R2 SP2 SE | VMFS | Production LAN#1 – 1 Gb/s; Management LAN#2 – 2 Gb/s
VMHost1 | Physical BL680c – ESX v4.1 | | Boot from SAN; VMFS Storage Pool 1 – 90Gb | Production LAN#1 – 1 Gb/s; Management LAN#2 – 2 Gb/s; VMotion LAN#3 – 4 Gb/s
VMHost2 | Physical BL680c – ESX v4.1 | | Boot from SAN; VMFS Storage Pool 1 – 90Gb | Production LAN#1 – 1 Gb/s; Management LAN#2 – 2 Gb/s; VMotion LAN#3 – 4 Gb/s
Database tier


Table 40 Define services (continued)

Service | Host configuration | Software | Storage requirements | Network requirements
DB1 | Physical BL460c | Virtual Server; Windows Server 2003 R2 SP2 EE | Boot from SAN | Production LAN#1 – 1 Gb/s; Management LAN#2 – 2 Gb/s
DB2 | Physical BL460c | Virtual Server; Windows Server 2003 R2 SP2 EE | Boot from SAN | Production LAN#1 – 1 Gb/s; Management LAN#2 – 2 Gb/s

Step 2a—Racks and enclosures

Table 41 Racks and enclosures plan

Item | Value
Matrix Rack #1
Rack Model | 10642 G2
Rack Name | Mx-Rack1
Matrix Enclosure #1 (Starter Kit)
Enclosure Model | HP BladeSystem Matrix Starter Kit
Enclosure Name | Mx-Rack1-enc1
Enclosure Location (Rack Name, U#) | Mx-Rack1, U1

Step 2b—Facility planning

Table 42 Facility requirements

Facility power | Value
Facility power connection:
Voltage, Phase | 200–240VAC, Three
Receptacle type | IEC60309 2 pole, 3 wire
Circuit rating | 32A
Circuit de-rating percentage for the locality | 0%
UPS or WALL | WALL
Power redundancy? (If yes, specify labeling scheme) | PWR1 & PWR2
Planning metrics for rack:
Rack weight estimate (in kg or lbs) | 365 kg
Airflow estimate (in CMM/CFM) | 16.596 CMM (BladeSystem Power Sizing tool)
Watts (W), Volt-Amps (VA) estimate for rack | 8088 W, 8255 VA (Watson order tool); 5355 W, 5464 VA (BladeSystem Power Sizing tool)
Thermal limit per rack (in Watts) (customer requirement – compare to estimate) | No limit specified. Customer to confirm facility limits exceed estimates.


Table 42 Facility requirements (continued)

Facility power | Value
Quantity and type of PDUs for rack | Quantity four of HP 32A HV Core Only Modular Power Distribution Unit selected, 2 extra installed for future expansion
Monitored PDUs only:
Additional uplink & IP address |
SNMP community strings |
Installation characteristics:
Identify data center location | Customer's site in Brussels, Belgium
Side clearances / floor space allocation | Hot/cold aisles at adequate distances, bayed to end of existing row
Verify ready to receive and install rack | Customer requires 10 business days after delivery before scheduling install

Step 2c—Virtual Connect domain configuration

Table 43 Virtual Connect domain configuration

Item | Value
Virtual Connect Domain Group #1
Name | VCDG-Matrix1
List the names of each VCD in this VCDG | VCD1-Matrix1
Virtual Connect Domain #1
Name | VCD1-Matrix1
List the names of each enclosure in this VCD | MatrixRack1Enc1
Multi-enclosure stacking (N/A, recommended, minimum, or other?) | N/A
MAC addresses (VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1-64; range: 00-17-xx-77-00-00 : 00-17-xx-77-03-FF) | 1
WWN addresses (VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1-64; range: 50:06:xx:00:03:C2:62:00 - 50:06:xx:00:03:C2:65:FF) | 1
Serial numbers (HP-defined or user-defined? If HP-defined, select unique range 1–64; range: ??? - ???) | 1
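The HP-defined MAC range notation shown for a selection (00-17-xx-77-00-00 : 00-17-xx-77-03-FF, where "xx" is site-specific) spans a fixed block of addresses, and a quick computation confirms the block size before profiles are assigned. A sketch; the 0xA0 third octet below is a placeholder for the "xx" value:

```python
# Sketch: size of the HP-defined MAC block 00-17-xx-77-00-00 : 00-17-xx-77-03-FF.
# The third octet ("xx") varies by selection; A0 is a placeholder value.
START = "00-17-A0-77-00-00"
END = "00-17-A0-77-03-FF"

def mac_to_int(mac):
    """Parse a dash-separated MAC address into an integer."""
    return int(mac.replace("-", ""), 16)

def block_size(start, end):
    """Number of MAC addresses in an inclusive range."""
    return mac_to_int(end) - mac_to_int(start) + 1

print(block_size(START, END))  # -> 1024
```

Knowing the block size helps confirm that a single range covers the number of server profiles planned for the domain group.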


Step 3c—Storage volumes

Table 44 Storage volumes

Service | Use and size | vDisk (LUN) name | vHost name | Replicated to | Connected to
CMS | 50GB boot | matrix_cms_vdisk | matrix_cms_vhost | N/A | EVA4400
VMApp1 | 16GB boot | VMApp1_vdisk | VMApp1_vhost | None | EVA4400
VMApp2 | 16GB boot | VMApp2_vdisk | VMApp2_vhost | None | EVA4400
VMHost1 | 16GB boot | VMHost1_vdisk | VMHost1_vhost | None | EVA4400
VMHost2 | 16GB boot | VMHost2_vdisk | VMHost2_vhost | None | EVA4400
ESX shared disk | 250GB VMFS | ESX_shared_vdisk | VMHost1_vhost, VMHost2_vhost | None | EVA4400
DB1 | 16GB boot | DB1_boot_vdisk | DB1_vhost | None | EVA4400
DB1 | 50GB data | DB1_sql_vdisk | DB1_vhost | None | EVA4400
DB2 | 16GB boot | DB2_boot_vdisk | DB2_vhost | None | EVA4400
DB2 | 50GB data | DB2_cstore_vdisk | DB2_vhost | None | EVA4400

Step 4a—Network configuration

Table 45 Configuration of networks and switches

Item | Value
Production LAN IP range | 16.89.128.1 – 16.89.128.254
Management LAN IP range | 10.1.1.1 – 10.1.1.254

Step 4b—Virtual Connect Ethernet uplinks

Table 46 VC Ethernet uplink connections

Network name | VC uplinks | Router uplinks | Signal type
Production, Management (SUS1) | Mx-Rack1-enc1:bay 1:port 1; Mx-Rack1-enc1:bay 2:port 1 | ProCurve Switch #1, port 25; ProCurve Switch #1, port 26 | 10Gbase-SR
Deployment | Mx-Rack1-enc1:bay 1:port 2; Mx-Rack1-enc1:bay 2:port 2 | ProCurve Switch #1, port 27; ProCurve Switch #1, port 28 | 10Gbase-SR

Step 4c—Services network connections

Table 47 Network host connections

Service | Connection | IP | Flex-10 bandwidth allotment | Connected to (1)
Management Service | VC-Ethernet Uplink – Corporate Network #1 | 16.89.129.100 | 1 Gb/s | Production LAN segment
Management Service | VC-Ethernet Uplink – Deployment Network #2 | 10.1.1.100 | 2 Gb/s | Management LAN segment
Database service
Application tier


Table 47 Network host connections (continued)

Service | Connection | IP | Flex-10 bandwidth allotment | Connected to (1)
VMApp1 | VC-Ethernet Uplink – Corporate Network #1 | 16.89.129.101 | 1 Gb/s | Production LAN segment
VMApp1 | VC-Ethernet Uplink – Deployment Network #2 | 10.1.1.101 | 2 Gb/s | Management LAN segment
VMApp2 | VC-Ethernet Uplink – Corporate Network #1 | 16.89.129.102 | 1 Gb/s | Production LAN segment
VMApp2 | VC-Ethernet Uplink – Deployment Network #2 | 10.1.1.102 | 2 Gb/s | Management LAN segment
VMHost1 | VC-Ethernet Uplink – Corporate Network #1 | 16.89.129.103 | 1 Gb/s | Production LAN segment
VMHost1 | VC-Ethernet Uplink – Deployment Network #2 | 10.1.1.103 | 2 Gb/s | Management LAN segment
VMHost1 | VC VMotion Network #3 | 10.1.1.113 | 4 Gb/s | None (Internal)
VMHost2 | VC-Ethernet Uplink – Corporate Network #1 | 16.89.129.104 | 1 Gb/s | Production LAN segment
VMHost2 | VC-Ethernet Uplink – Deployment Network #2 | 10.1.1.104 | 2 Gb/s | Management LAN segment
VMHost2 | VC VMotion Network #3 | 10.1.1.114 | 4 Gb/s | None (Internal)
Database tier
DB1 | VC-Ethernet Uplink – Corporate Network #1 | 16.89.129.105 | 1 Gb/s | Production LAN segment
DB1 | VC-Ethernet Uplink – Deployment Network #2 | 10.1.1.105 | 2 Gb/s | Management LAN segment
DB2 | VC-Ethernet Uplink – Corporate Network #1 | 16.89.129.106 | 1 Gb/s | Production LAN segment
DB2 | VC-Ethernet Uplink – Deployment Network #2 | 10.1.1.106 | 2 Gb/s | Management LAN segment

(1) To enable network redundancy, multiple connections can be made to the same LAN segment.

Step 4d—Management network connections

Table 48 Management connections

Network | Host uplink | Router uplink (Data center switch and port) | Signal type | IP address | Provision type (EBIPA, Static, or DHCP)
Production | Starter Kit OA #1 | ProCurve Switch #1, port 1 | 1000Base-T | 16.89.129.10 | Static
Production | Starter Kit OA #2 | ProCurve Switch #1, port 2 | 1000Base-T | 16.89.129.11 | Static
Production | Starter Kit VC Ethernet #1 | Through OA connection | Multiplexed | 16.89.129.70 | EBIPA
Production | Starter Kit VC Ethernet #2 | Through OA connection | Multiplexed | 16.89.129.71 | EBIPA
Production | Starter Kit VC Fibre #1 | Through OA connection | Multiplexed | 16.89.129.72 | EBIPA


Table 48 Management connections (continued)

Network | Host uplink | Router uplink (data center switch and port) | Signal type | IP address | Provision type (EBIPA, Static, or DHCP)
Production | Starter Kit VC Fibre #2 | Through OA connection | Multiplexed | 16.89.129.73 | EBIPA
Production | Optional – VC Domain IP | Through OA connection | Multiplexed | 16.89.129.12 | Static
Production | iLO Range | Through OA connection | Multiplexed | 16.89.129.50–65 | EBIPA
Production | EVA4400 ABM MGMT port | ProCurve Switch #1, port 3 | 1000Base-T | 16.89.129.7 | Static
Production | EVA4400 Fibre Switch #1 | ProCurve Switch #1, port 4 | 1000Base-T | 16.89.129.8 | Static
Production | EVA4400 Fibre Switch #2 | ProCurve Switch #1, port 5 | 1000Base-T | 16.89.129.9 | Static
Production | ProCurve Switch #1 | DC switch #1, port 7 | 1000Base-T | 16.89.129.1 | Static
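A management address plan like the one in Table 48 is easy to sanity-check before configuration. The following is a minimal sketch, assuming the example's addresses sit in a 16.89.129.0/24 management subnet (the subnet mask is not stated in the table, so that prefix is an assumption; adapt both to your own plan):

```python
import ipaddress

# Example management addresses from Table 48 (assumed /24 subnet)
subnet = ipaddress.ip_network("16.89.129.0/24")
static = ["16.89.129.1", "16.89.129.7", "16.89.129.8", "16.89.129.9",
          "16.89.129.10", "16.89.129.11", "16.89.129.12"]
ebipa = ["16.89.129.70", "16.89.129.71", "16.89.129.72", "16.89.129.73"]
# The iLO EBIPA range 16.89.129.50-65 expands to 16 consecutive addresses
ilo_range = [f"16.89.129.{host}" for host in range(50, 66)]

plan = static + ebipa + ilo_range
# Catch duplicate assignments and addresses outside the management subnet
assert len(plan) == len(set(plan)), "duplicate IP in the management plan"
assert all(ipaddress.ip_address(ip) in subnet for ip in plan)
print(f"{len(plan)} management addresses, no collisions")
```

Running this against the example plan reports 27 addresses with no duplicates; extending the lists as you fill in your own worksheet keeps the check current.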

Step 4e—Access credentials

Table 49 Access credentials

Access | Domain | Credential | Password
Starter Kit – OA | | Admin |
Starter Kit – VC | | Admin |
SNMP READ ONLY community | N/A | N/A | AcmeDC1-public
SNMP READ WRITE community | N/A | N/A | AcmeDC1-private
CMS service account | Acme.net | Administrator |
Deployment server | Acme.net | Administrator |
HP Command View EVA | Acme.net | Administrator |


B Sample configuration templates

This section provides sample templates. The following templates are also available in the accompanying HP BladeSystem Matrix Planning Guide Worksheet.

Step 1a—Application services template

Table 50 Application services

Service | Host configuration | Software | Storage requirements | Network requirements
(service name) | | | |
(service tier #1 name) | | | |
(server) | (server type) | | (SAN requirements) | (LAN requirements)
(service tier #2 name) | | | |
(server) | (server type) | | (SAN requirements) | (LAN requirements)

Step 1b—Management servers template

Table 51 Required management services

Service | Host configuration | Software | Storage requirements | Network requirements
Management environment | | | |
CMS #1 | (server type) | Insight Software | (SAN requirements) | (LAN requirements)
 | (server type) | Insight Software server deployment | (SAN requirements) | (LAN requirements)
 | (server type) | HP Command View EVA | (SAN requirements) | (LAN requirements)
 | (server type) | SQL server | (SAN requirements) | (LAN requirements)
 | (server type) | (Other) | (SAN requirements) | (LAN requirements)

Step 2a—Rack and enclosures template

Table 52 Racks and enclosures plan

Item | Value
Matrix Rack #1
Rack Model |
Rack Name |
Matrix Enclosure #1 (Starter kit)


Table 52 Racks and enclosures plan (continued)

Item | Value
Enclosure Model |
Enclosure Name |
Enclosure Location (Rack Name, U#) |

Step 2b—Facility planning template

Table 53 Facility requirements

Facility planning | Value
Facility power connection characteristics:
Voltage, phase |
Receptacle type |
Circuit rating |
Circuit de-rating percentage for the locality |
UPS or WALL |
Power redundancy? (If yes, specify labeling scheme) |
Planning metrics for rack:
Rack weight estimate (in kg or lbs) |
Airflow estimate (in CMM/CFM) |
Watts (W) / Volt-Amps (VA) estimate for rack |
Thermal limit per rack (in Watts) (customer requirement – compare to estimate) |
Quantity and type of PDUs for rack |
Monitored PDUs only:
Additional uplink & IP address |
SNMP community strings |
Installation characteristics:
Identify data center location |
Side clearances / floor space allocation |
Verify ready to receive and install rack |

Step 2c—Virtual Connect domain configuration

Table 54 Virtual Connect domain configuration

Item | Value
Virtual Connect Domain Group #1
Name |
List the names of each VCD in this VCDG |
Virtual Connect Domain #1


Table 54 Virtual Connect domain configuration (continued)

Item | Value
Name |
List the names of each enclosure in this VCD |
Multi-enclosure stacking | N/A, recommended, minimum, or other?
MAC addresses | VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1–64
WWN addresses | VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1–64
Serial numbers | HP-defined or user-defined? If HP-defined, select unique range 1–64
Virtual Connect Domain #2
Name |
List the names of each enclosure in this VCD |
Multi-enclosure stacking | N/A, recommended, minimum, or other?
MAC addresses | VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1–64
WWN addresses | VCEM-defined, HP-defined, or user-defined? If HP-defined, select unique range 1–64
Serial numbers | HP-defined or user-defined? If HP-defined, select unique range 1–64

Step 3a—Collect details about the customer-provided SAN storage

Table 55 Storage and fabrics

Question | Response
Does some or all of the SAN already exist? Will the Matrix rack and enclosures be connected to an already installed and working SAN and array, or will some or all of the SAN storage be installed for the HP BladeSystem Matrix solution? |
Number of separate SANs |
Number of switches per SAN (assume 2) |
Number of arrays |


Step 3b—FC SAN storage connections

Table 56 FC SAN connections

VC FC SAN profile | Storage controller WWPN | Note
1 (minimum of 1) | (Customer SAN name) | One of multiple connections to the same SAN
2 | | Typically a second connection to the first SAN for HA
3 | |
4 | |
5 | |
6 | |

Step 3c—iSCSI SAN storage connections

Table 57 Example iSCSI SAN storage connections

Network | Host uplink | Router uplink (data center switch and port) | Signal type | IP address | Provision type (Static or DHCP)
iSCSI | P4300 G2 Node#1/Port1 | DC1-switch/Port1 | 1000Base-T | |
iSCSI | P4300 G2 Node#1/Port2 | DC2-switch/Port1 | 1000Base-T | |
iSCSI | P4300 G2 Node#2/Port1 | DC1-switch/Port2 | 1000Base-T | |
iSCSI | P4300 G2 Node#2/Port2 | DC2-switch/Port2 | 1000Base-T | |
iSCSI | P4300 G2 Node#3/Port1 | DC1-switch/Port3 | 1000Base-T | |
iSCSI | P4300 G2 Node#3/Port2 | DC2-switch/Port3 | 1000Base-T | |
iSCSI | P4300 G2 Node#4/Port1 | DC1-switch/Port4 | 1000Base-T | |
iSCSI | P4300 G2 Node#4/Port2 | DC2-switch/Port4 | 1000Base-T | |
iSCSI | Enclosure1:Bay1:Port3 | DC1-switch/Port25 | 10GBase-T | |
iSCSI | Enclosure1:Bay2:Port3 | DC2-switch/Port25 | 10GBase-T | |

Step 3d—Define storage volumes

Table 58 Storage volumes

Server | Use and size | vDisk (LUN) name | vHost name | Replicated to | Connected to
(server name) | (LUN properties) | (xxxx_vdisk) | (xxxx_vhost) | (remote target and data replication group name, if replicated) | (Local SAN storage target)


Step 4a—Network configuration details

Table 59 Configuration of networks and switches

Item | Value
Production LAN:
IP address (network number) |
Subnet mask |
IP range for auto-provisioning |
VLAN tag |
Preferred link connection speed |
Gateway IP address |
DHCP server |
DNS server #1 |
DNS server #2 |
DNS domain name |
Management LAN:
IP address (network number) |
Subnet mask |
IP range for auto-provisioning |
VLAN tag |
Preferred link connection speed |
DHCP server |
DNS server #1 |
DNS server #2 |
DNS domain name |
Deployment LAN:
IP address (network number) | 192.168.1.0
Subnet mask | 255.255.255.0
Deployment server | (Insight Control server deployment, HP Server Automation, or HP Ignite-UX)
VLAN tag |
Preferred link connection speed |
DHCP server |
DNS server #1 | N/A


Table 59 Configuration of networks and switches (continued)

Item | Value
DNS server #2 | N/A
Gateway IP address | N/A
DNS domain name | N/A
VMotion LAN:
IP address (network number) | 192.168.2.0
Subnet mask | 255.255.255.0
VLAN tag |
Preferred link connection speed |
DHCP server |
DNS server #1 | N/A
DNS server #2 | N/A
Gateway IP address | N/A
DNS domain name | N/A
Other network services:
SMTP host |
Time source |
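The deployment and VMotion LANs in Table 59 are internal, non-routed segments, so their network numbers must not overlap each other (or the production and management ranges). A quick sketch using the example values from the table:

```python
import ipaddress

# Internal networks from Table 59; production and management ranges are
# site-specific and would be added to the same check
deployment = ipaddress.ip_network("192.168.1.0/24")
vmotion = ipaddress.ip_network("192.168.2.0/24")

# The two internal segments must be distinct
assert not deployment.overlaps(vmotion)
for net in (deployment, vmotion):
    # Exclude the network and broadcast addresses from the usable count
    print(net, "->", net.num_addresses - 2, "usable host addresses")
```

Each /24 leaves 254 usable host addresses, comfortably more than the blade, CMS, and deployment-server count of a single enclosure.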

Step 4b—Virtual Connect Ethernet uplinks template

Table 60 VC Ethernet uplink connections

Network name | VC uplinks | Router uplinks | Signal type
Production | | |
Management | | |
Deployment | | |
VMotion | | |
iSCSI | | |
Integrity OVMM | | |
SG heartbeat | | |
SG failover | | |
(other network) | Enclosure.bay.port | Switch.port |


Step 4c—Services network connections template

Table 61 Network host connections

Server | Connection | Port assignment¹ | Flex-10 bandwidth allotment¹ | PXE setting¹
(server names) | (VC Ethernet connection #1) | (uplink destination) | (connection bandwidth) | (connection type)
(server names) | (VC Ethernet connection #2) | (uplink destination) | (connection bandwidth) | (connection type)
 | VMotion | | N/A | N/A
 | Deployment | | N/A | N/A

1 These parameters can be specified when defining network connections to physical blades not auto-provisioned by IO, such as the CMS, deployment server, SQL Server, and ESX hosts.

Step 4d—Management network connections template

Table 62 Management connections

Network | Host uplink | Router uplink (data center switch and port) | Signal type | IP address | Provision type (EBIPA, Static, or DHCP)
Management | Starter Kit OA #1 | | 1000Base-T | |
Management | Starter Kit OA #2 | | 1000Base-T | |
Management | Starter Kit VC-Enet #1 | Through OA connection | Multiplexed | |
Management | Starter Kit VC-Enet #2 | Through OA connection | Multiplexed | |
Management | Starter Kit VC-FC #1 | Through OA connection | Multiplexed | |
Management | Starter Kit VC-FC #2 | Through OA connection | Multiplexed | |
Management | Optional – VC Domain IP | Through OA connection | Multiplexed | |
Management | Starter Kit iLO Range | Through OA connection | Multiplexed | (starting IP – ending IP) |
Management | Expansion Kit #1 OA #1 | | 1000Base-T | |
Management | Expansion Kit #1 OA #2 | | 1000Base-T | |
Management | Expansion Kit #1 VC-Enet #1 | Through OA connection | Multiplexed | |
Management | Expansion Kit #1 VC-Enet #2 | Through OA connection | Multiplexed | |
Management | Expansion Kit #1 VC-FC #1 | Through OA connection | Multiplexed | |
Management | Expansion Kit #1 VC-FC #2 | Through OA connection | Multiplexed | |
Management | Expansion Kit #1 iLO Range | Through OA connection | Multiplexed | (starting IP – ending IP) |
Management | EVA4400 ABM MGMT port | | 100Base-T | |


Table 62 Management connections (continued)

Network | Host uplink | Router uplink (data center switch and port) | Signal type | IP address | Provision type (EBIPA, Static, or DHCP)
Management | EVA4400 Fibre switch #1 | | 100Base-T | |
Management | EVA4400 Fibre switch #2 | | 100Base-T | |
Management | P4300 G2 Node #1 MGMT | | 100Base-T | |
Management | P4300 G2 Node #2 MGMT | | 100Base-T | |
Management | Other SAN switch | | 100Base-T | |
Management | Other FC storage controller | | 100Base-T | |
Management | Monitored PDU #1 | | 100Base-T | |
Management | Monitored PDU #2 | | 100Base-T | |
Management | Network Switch #1 | | N/A | N/A |
Management | Network Switch #2 | | N/A | N/A |

Step 4e—Access credentials template

Table 63 Access credentials

Access | Domain | Credentials | Password
Starter Kit – OA | | |
Starter Kit – VC | | |
SNMP READ ONLY community string | N/A | N/A |
SNMP READ WRITE community string | N/A | N/A |
CMS service account | | |
Deployment server | | |
SQL Server | | |
VMware vCenter | | |
HP Command View EVA | | |
SAN switch account | | |
IO User | | |
IO Architect | | |
IO Administrator | | |
Expansion Kit #1 – OA | | |
Expansion Kit #1 – VC | | |


C Optional Management Services integration notes

This section explains how to integrate a HP BladeSystem Matrix implementation with optional management services. The following implementation services may be delivered in conjunction with a HP BladeSystem Matrix implementation service, subject to meeting all requirements:
• Insight Recovery implementation
• Insight Control for Microsoft System Center implementation

• Insight Control for VMware vCenter Server implementation

These services are discussed in further detail below.

If planned for integration, the following management services must be implemented before HP BladeSystem Matrix is delivered:
• HP Server Automation software
• HP Ignite-UX software
• VMware vCenter Server software
• Microsoft System Center software

If any implementation services are ordered for implementing these other management services, then all required planning must be done for those services in addition to planning for this implementation service.

HP BladeSystem Matrix and HP Server Automation

HP Server Automation is supported for integration with HP BladeSystem Matrix, subject to meeting all requirements. A functional overview of this integration is detailed in Deliver Infrastructure and Applications in minutes with HP Server Automation and HP BladeSystem Matrix with Insight Dynamics. The implementation services and licensing for HP Server Automation are not included in the HP BladeSystem Matrix implementation service. The requirements and prerequisites related to this integration, including limitations, version compatibility, setup instructions, and troubleshooting, are detailed in Integrating HP Server Automation with HP BladeSystem Matrix/Insight Dynamics.

HP BladeSystem Matrix and Insight Recovery

HP BladeSystem Matrix includes licensing of servers for Insight Recovery (IR). However, the implementation service for IR is not included in the HP BladeSystem Matrix implementation service. Per-event installation services should be ordered if the following service is delivered with the HP BladeSystem Matrix installation: one (1) HP Startup Insight Recovery SVC. See the Service Delivery Guide (SDG) for all other requirements and prerequisites related to fulfilling this service.

HP BladeSystem Matrix and Insight Control for VMware vCenter Server

HP BladeSystem Matrix provides a complete and comprehensive set of infrastructure management tools. However, some users may also choose to use VMware vCenter Server to manage their virtual infrastructure. HP provides an extension called HP Insight Control for VMware vCenter Server, which is part of the Insight Control that is included with HP BladeSystem Matrix. Per-event installation services should be ordered if the following service is delivered with the HP BladeSystem Matrix installation: one (1) HP Startup Insight Control VMware vCenter SVC. See the Service Delivery Guide (SDG) for all other requirements and prerequisites related to fulfilling this service.


HP Insight Control for VMware vCenter Server

HP Insight Control for VMware vCenter Server is a plug-in module that delivers HP ProLiant status and events into the vCenter console and enables direct, in-context launch of HP troubleshooting tools. The plug-in can be installed on any server with network access to the HP BladeSystem Matrix CMS, the vCenter CMS, and the managed host systems.

HP BladeSystem Matrix and Insight Control for Microsoft System Center

Some users may also choose to use Microsoft System Center for some of their management functions. HP provides a set of integrated extensions called HP Insight Control for Microsoft System Center, which is part of the Insight Control that is included with HP BladeSystem Matrix. Per-event installation services should be ordered if the following service is delivered with the HP BladeSystem Matrix installation: one (1) HP Startup Insight Control System Center SVC. This section provides the supported configurations and guidelines for each of the components of HP Insight Control for Microsoft System Center when used in a HP BladeSystem Matrix environment. See the Service Delivery Guide (SDG) for all other requirements and prerequisites related to fulfilling this service.

Microsoft System Center Components

There are four components to the Microsoft System Center Management Suite, each of which has its own management console. Although they may be installed onto common servers, they are usually separated onto individual servers for performance reasons.
• System Center Configuration Manager
• System Center Operations Manager
• System Center Virtual Machine Manager
• System Center Data Protection Manager

NOTE: System Center Data Protection Manager is not required for Insight Control for Microsoft System Center.

HP Insight Control for Microsoft System Center

HP Insight Control for Microsoft System Center is a set of six integration modules that are installed directly onto the System Center consoles to provide seamless integration of the unique ProLiant and BladeSystem manageability features into the Microsoft System Center environment.
• HP ProLiant Server Management Packs for Operations Manager 2007 integrate with System Center Operations Manager to expose the native capabilities of ProLiant servers, including monitoring and alerting.
• HP BladeSystem Management Pack for Operations Manager 2007 integrates with System Center Operations Manager to expose the native capabilities of BladeSystem c-Class enclosures, including monitoring and alerting.
• HP ProLiant PRO Management Pack for Virtual Machine Manager 2008 works in conjunction with System Center Operations Manager and System Center Virtual Machine Manager to proactively guide and automate movement of virtual machines based upon host hardware alerts.
• HP ProLiant Server OS Deployment for Configuration Manager 2007 tightly integrates with System Center Configuration Manager to automatically deploy bare metal servers. This includes support for pre-deployment hardware and BIOS configuration, and post-OS driver and agent installation.


• HP Hardware Inventory Tool for Configuration Manager 2007 uses native System Center Hardware Inventory to provide detailed component-level inventory of every managed server.
• HP Server Updates Catalog for System Center Configuration Manager 2007 uses System Center Configuration Manager to install and update ProLiant drivers and firmware using a rules-based model.

HP BladeSystem Matrix CMS

HP does not support installing any of the Microsoft System Center consoles onto the same server as the HP BladeSystem Matrix CMS components. However, some of the components of Microsoft System Center may manage a HP BladeSystem Matrix CMS. The following table provides a list of supported configurations for the HP BladeSystem Matrix CMS as a managed node of Microsoft System Center:

Table 64 Supported configurations for the HP BladeSystem Matrix CMS

HP Insight Control for Microsoft System Center component | Supported | Comments
ProLiant Server Management Packs | Yes | The HP BladeSystem Matrix CMS may be a System Center Operations Manager managed node.
BladeSystem Management Pack | Yes | HP BladeSystem Matrix c-Class enclosures may be managed by System Center Operations Manager. However, the BladeSystem Monitor Service should not be installed on the BladeSystem CMS.
ProLiant PRO Management Pack | No | Do not use ProLiant PRO Management Pack to move any virtual machine that may reside on a HP BladeSystem Matrix host.
ProLiant Server OS Deployment | No | Do not use ProLiant Server OS Deployment to deploy operating systems on the HP BladeSystem Matrix CMS.
Hardware Inventory Tool | Yes | HP BladeSystem Matrix CMS inventory may be viewed by System Center Configuration Manager.
Server Updates Catalog | No | Do not use Server Updates Catalog to update the HP BladeSystem Matrix CMS.

The BladeSystem Management Pack uses a special Windows service to manage and monitor HP BladeSystem c-Class enclosures. This service is normally installed on the System Center Operations Manager console, but it can also be installed on any other Windows server. Do not install the BladeSystem Monitor Service on the HP BladeSystem Matrix CMS.

Other Managed Nodes in a HP BladeSystem Matrix Environment

Other server nodes in a HP BladeSystem Matrix environment (those that are not the HP BladeSystem Matrix CMS) may be managed by Microsoft System Center. The following table provides a list of supported configurations for other nodes in a HP BladeSystem Matrix environment:


Table 65 Supported configurations for other nodes in a HP BladeSystem Matrix environment

HP Insight Control for Microsoft System Center component | Supported | Comments
ProLiant Server Management Packs | Yes | Other nodes may be System Center Operations Manager managed nodes.
BladeSystem Management Pack | Yes | HP BladeSystem Matrix c-Class enclosures may be managed by System Center Operations Manager.
ProLiant PRO Management Pack | Yes | ProLiant PRO Management Pack may be used to move virtual machines on other nodes. Ensure that these configuration changes are comprehended when using HP BladeSystem Matrix CMS components to view virtual machine configurations.
ProLiant Server OS Deployment | No | Do not use ProLiant Server OS Deployment to deploy operating systems on other nodes in a HP BladeSystem Matrix environment.
Hardware Inventory Tool | Yes | HP BladeSystem Matrix CMS inventory may be viewed by System Center Configuration Manager.
Server Updates Catalog | Yes | Server Updates Catalog may be used to upgrade other nodes, but only when the firmware or driver version matches the supported HP BladeSystem Matrix version. However, you must explicitly filter out the HP BladeSystem Matrix CMS from the System Center Server Collection before deploying updates.

When using the HP Server Updates Catalog for System Center Configuration Manager to upgradefirmware or drivers, it is important to check that the firmware or driver version that you are updatingon the managed nodes adheres to the supported version for HP BladeSystem Matrix. If they donot, you should not perform the upgrade.You may explicitly exclude servers from the System Center Server Collection before deployingupdates. A Server Collection is a group of servers to which you wish to perform a System CenterConfiguration Manager function, such as updating firmware or drivers. In order to maintain thematched set of HP BladeSystem Matrix CMS firmware and driver versions, you should exclude theHP BladeSystem Matrix CMS from the Server Collection. This can be done by creating separateCollection for all ProLiant servers except for the CMS server. See the System Center ConfigurationManager User Guide for creating Collections within System Center Configuration Manager.


D HP BladeSystem Matrix and Virtual Connect FlexFabric Configuration Guidelines

This appendix provides customers and technical services with the information necessary to plan for an HP BladeSystem Matrix solution that utilizes the HP BladeSystem Matrix FlexFabric Starter and HP BladeSystem Matrix Expansion Kits.

Virtual Connect FlexFabric hardware components

HP Virtual Connect FlexFabric 10Gb/24-Port module

The HP Virtual Connect FlexFabric 10Gb/24-Port module is a new interconnect built upon Virtual Connect's Flex-10 technology. The Virtual Connect FlexFabric module consolidates Ethernet and storage networks into a single converged network with the goal of reducing network costs and complexity. It extends previous-generation HP Virtual Connect Flex-10 technology with the inclusion of iSCSI, Converged Enhanced Ethernet (CEE), and Fibre Channel protocols. With FlexFabric, customers no longer need to purchase multiple interconnects for Ethernet and FC.

HP FlexFabric Converged Network Adapters

Customers need blade servers with the following supported FlexFabric adapters:
• HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter
• HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter
• HP NC553i 10Gb 2-port FlexFabric Converged Network Adapter
• HP NC553m 10Gb 2-port FlexFabric Converged Network Adapter

HP G7 blade servers have a Converged Network Adapter chip integrated on the motherboard (LOM) and are compatible with FlexFabric interconnect modules. For additional bandwidth, the NC551m is supported in the BL465c G7, the BL685c G7, and all G6 BladeSystem servers. The NC553m is supported in all HP G7 and G6 BladeSystem servers. FlexFabric support for Integrity i2 servers requires the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter.

Both the HP NC553m and the HP NC551m mezzanine adapters support Virtual Connect Flex-10, which allows each 10Gb port to be divided into four physical NICs to optimize bandwidth management for virtualized servers. When connected to a CEE-capable switch, FC and Ethernet I/O are separated and routed to the corresponding network. For iSCSI storage, the NC553m and NC551m support full protocol offload, providing better CPU efficiency than software initiators and enabling the server to handle increased virtualization workloads and compute-intensive applications. This combination of high-performance network and storage connectivity reduces cost and complexity and provides the flexibility and scalability to fully optimize BladeSystem servers. The NC553m and NC551m deliver the performance benefits and cost savings of converged network connectivity for HP BladeSystem servers. The dual-port NC553m and NC551m optimize network and storage traffic with hardware acceleration and offloads for stateless TCP/IP, TCP Offload Engine (TOE), FC, and iSCSI.

Older-generation G6 blade servers have 10Gb Network Interface Cards (NICs) embedded on the motherboard as a LOM and are not readily compatible with FlexFabric. Customers need to purchase either the HP NC553m or the HP NC551m Converged Network Adapter mezzanine card and plug it into the server's mezzanine slot to enable any G6 BladeSystem server to support FlexFabric.


FlexFabric interconnects/mezzanines – HP BladeSystem c7000 port mapping

Figure 9 HP BladeSystem c7000 enclosure – rear view

1. Upper Fan System
2. Interconnect Bays 1 / 2
3. Interconnect Bays 3 / 4
4. Interconnect Bays 5 / 6
5. Interconnect Bays 7 / 8
6. Onboard Administrator
7. Lower Fan System
8. Rear Redundant Power Complex

Understanding the concept of port mapping is critical to properly planning your HP BladeSystem Matrix FlexFabric configuration. The diagram above shows the proper placement of the Virtual Connect FlexFabric 10Gb/24-Port modules and the integrated or mezzanine adapters that will be used for your supported BladeSystem Matrix FlexFabric configurations.

Port mapping differs slightly between full-height and half-height server blades due to the support of additional mezzanine cards on the full-height version. HP has simplified the process of mapping mezzanine ports to switch ports by providing intelligent management tools via the Onboard Administrator and HP Insight Manager software. The HP BladeSystem Onboard Administrator User Guide and HP BladeSystem c7000 Enclosure Setup and Installation Guide provide detailed information on port mapping.

The following diagrams show the port mappings for half-height and full-height blades. The following tables represent a number of recommended and supported configurations for an HP BladeSystem c7000 enclosure with one or more redundant pairs of Virtual Connect FlexFabric modules.


Figure 10 Half-height server blade port mapping

Figure 11 Full-height server blade port mapping

HP BladeSystem c7000 enclosure FlexFabric module placement

The port mapping diagrams show an association between the mezzanine type and placement (integrated adapter, or mezzanine adapter located in Mezz 1, Mezz 2, or Mezz 3) and the HP BladeSystem c-Class interconnect bay that is utilized via the internal port mappings of the HP BladeSystem c7000 enclosure. When an integrated FlexFabric adapter is utilized, the blade communicates through the FlexFabric 24-port VC module redundant pair located in interconnect bays 1 and 2. When a FlexFabric mezzanine adapter in Mezz 1 is utilized, the blade communicates from that mezzanine through the FlexFabric 24-port VC module redundant pair located in interconnect bays 3 and 4. When a FlexFabric mezzanine adapter in Mezz 2 is utilized for half-height or full-height blades, the blade communicates through the FlexFabric 24-port VC module redundant pair located in interconnect bays 5 and 6, and so forth.


FlexFabric configurations using only HP G7 BladeSystem servers

The following tables show the recommended configuration as well as a number of supported configurations for an HP BladeSystem c7000 enclosure with Virtual Connect FlexFabric modules in support of HP G7 BladeSystem servers.

IMPORTANT: Configurations which support the G7 BladeSystem servers require the following components:
• A minimum of two (one redundant pair of) HP Virtual Connect FlexFabric 10Gb/24-Port modules located in interconnect bays 1–2
• An integrated HP FlexFabric adapter

HP Virtual Connect FlexFabric modules can be added in pairs for additional bandwidth. The BladeSystem servers that are to utilize the increased bandwidth require one HP Virtual Connect FlexFabric mezzanine adapter for each additional pair of HP Virtual Connect FlexFabric modules.

The HP NC553m 10Gb 2-Port FlexFabric Adapter and the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter are both supported FlexFabric mezzanine adapters for a BladeSystem Matrix FlexFabric enclosure configured with G7 blades only. The HP NC553m is the recommended FlexFabric mezzanine adapter for configurations which use only HP G7 BladeSystem servers for the following reasons:
• Better performance
• Newer FlexFabric Converged Network Adapter technology
• Easier standardization, since it is supported in all G6 and G7 blades

Best Practice recommended configuration
• 2 FlexFabric modules
• All G7 blades with an integrated FlexFabric adapter

Server network adapters required | Interconnect module configuration for G7 blades using NC551i or NC553i
Integrated FlexFabric adapter¹ | VC FlexFabric module [Bay 1] | VC FlexFabric module [Bay 2]
 | Empty [Bay 3] | Empty [Bay 4]
 | Empty [Bay 5] | Empty [Bay 6]
 | Empty [Bay 7] | Empty [Bay 8]

1 Requires HP ProLiant BL G7 server blades with HP NC553i or HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter

This configuration enables the customer to most cost-effectively introduce FlexFabric technology while keeping the configuration complexity to a minimum. This configuration also provides the ability to most easily manage SAN connectivity. For the cost of two FlexFabric modules, each blade has up to 6 Ethernet ports and 2 FC ports, for a total of 8 ports sharing up to 20 Gb of I/O bandwidth.


Supported configuration
• 4 FlexFabric modules
• All G7 blades with an integrated FlexFabric adapter
• 1 additional FlexFabric mezzanine adapter

Server network adapters used | Interconnect module configuration for G7 blades using integrated FlexFabric adapter + 1 additional FlexFabric mezzanine
Integrated FlexFabric adapter¹ | VC FlexFabric module [Bay 1] | VC FlexFabric module [Bay 2]
FlexFabric adapter in Mezzanine slot 1² | VC FlexFabric module [Bay 3] | VC FlexFabric module [Bay 4]
 | Empty [Bay 5] | Empty [Bay 6]
 | Empty [Bay 7] | Empty [Bay 8]

1 Requires HP ProLiant BL G7 server blades with HP NC553i or HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter
2 Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules

This configuration is supported, but has complexities that must be carefully weighed against the opportunity to potentially achieve higher performance. This configuration is less cost effective, more complex to configure, and makes SAN connectivity more difficult to manage. For the cost of 4 FlexFabric modules and a FlexFabric adapter for every blade in the enclosure, each blade can have 12 or 14 Ethernet ports and 2 or 4 FC ports, for a total of 16 ports sharing up to 40 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN.

Supported configuration
• 6 FlexFabric modules
• All G7 blades with an integrated FlexFabric adapter
• 2 additional FlexFabric mezzanine adapters

Server network adapters used | Interconnect module configuration for G7 blades using integrated FlexFabric adapter + 2 additional FlexFabric mezzanines
Integrated FlexFabric adapter¹ | VC FlexFabric module [Bay 1] | VC FlexFabric module [Bay 2]
FlexFabric adapter in Mezzanine slot 1² | VC FlexFabric module [Bay 3] | VC FlexFabric module [Bay 4]
FlexFabric adapter in Mezzanine slot 2² | VC FlexFabric module [Bay 5] | VC FlexFabric module [Bay 6]
 | Empty [Bay 7] | Empty [Bay 8]

1 Requires HP ProLiant BL G7 server blades with HP NC553i or HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter
2 Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules

This configuration is supported, but has complexities that must be carefully weighed against the opportunity to potentially achieve even higher performance. This configuration is even less cost effective, more complex to configure, and introduces increased difficulty when managing SAN connectivity. For the cost of six FlexFabric modules and two FlexFabric adapters for every blade in the enclosure, each blade can have 18, 20, or 22 Ethernet ports and 2, 4, or 6 FC ports, for a total of 24 ports sharing up to 60 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN.

Supported configuration
• 8 FlexFabric modules
• All G7 blades with an integrated FlexFabric adapter
• 3 additional FlexFabric mezzanine adapters

NOTE: Only full-height blades have three mezzanine slots. Without full-height blades, the fourth pair of FlexFabric modules will not be utilized by the FlexFabric mezzanine adapters.

Interconnect module configurations for G7 blades using integrated FlexFabric adapter + 3 additional FlexFabric mezzanine(s):

Server network adapter used                    Interconnect modules
Integrated FlexFabric adapter (1)              VC FlexFabric module [Bay 1] | VC FlexFabric module [Bay 2]
FlexFabric adapter in Mezzanine slot 1 (2)     VC FlexFabric module [Bay 3] | VC FlexFabric module [Bay 4]
FlexFabric adapter in Mezzanine slot 2 (2)     VC FlexFabric module [Bay 5] | VC FlexFabric module [Bay 6]
FlexFabric adapter in Mezzanine slot 3 (2)     VC FlexFabric module [Bay 7] | VC FlexFabric module [Bay 8]

(1) Requires HP ProLiant BL G7 server blades with HP NC553i or HP NC551i Dual Port FlexFabric 10Gb Converged Network Adapter

(2) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules

This configuration is supported, but its added complexity must be carefully weighed against the opportunity to achieve even higher performance. This configuration is the least cost effective, the most complex to configure, and the most difficult for managing SAN connectivity. For the cost of eight FlexFabric modules and three FlexFabric adapters (or two for half-height blades) for every blade in the enclosure, each blade can have 24, 26, 28, or 30 Ethernet ports and 2, 4, 6, or 8 FC ports, for a total of 32 ports sharing up to 80 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN.
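The port and bandwidth arithmetic in these G7 configurations follows one pattern: each pair of VC FlexFabric modules serves one dual-port 10Gb adapter per blade, and each 10Gb port presents four FlexNIC functions, one of which can carry FCoE when its module pair has FC uplinks to the SAN. The sketch below reproduces the counts quoted above; the function name and structure are illustrative, not part of any HP tool.

```python
def flexfabric_blade_io(module_pairs, fc_uplink_pairs):
    """Per-blade port counts for an all-G7 FlexFabric configuration.

    module_pairs    -- pairs of VC FlexFabric modules (one dual-port adapter per blade each)
    fc_uplink_pairs -- how many of those pairs have FC uplinks to the SAN
    """
    assert fc_uplink_pairs <= module_pairs
    physical_ports = module_pairs * 2      # one dual-port 10Gb adapter per module pair
    total_functions = physical_ports * 4   # each 10Gb port carves into 4 FlexNIC functions
    fc_ports = fc_uplink_pairs * 2         # one FCoE function per port on FC-uplinked pairs
    ethernet_ports = total_functions - fc_ports
    bandwidth_gb = physical_ports * 10     # shared 10 Gb per physical port
    return ethernet_ports, fc_ports, total_functions, bandwidth_gb

# 8 FlexFabric modules (4 pairs), FC uplinks on all pairs:
print(flexfabric_blade_io(4, 4))   # (24, 8, 32, 80)
# 6 modules (3 pairs), FC uplinks on one pair only:
print(flexfabric_blade_io(3, 1))   # (22, 2, 24, 60)
```

Varying `fc_uplink_pairs` between 1 and `module_pairs` generates exactly the Ethernet/FC ranges quoted in the paragraphs above.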

FlexFabric configurations using only HP G6 or i2 BladeSystem servers

The following tables show a number of supported configurations for an HP BladeSystem c7000 enclosure with Virtual Connect FlexFabric modules in support of HP G6 or i2 BladeSystem servers.

IMPORTANT: Configurations which support G6 or i2 BladeSystem servers require:
• A minimum of four total (two redundant pairs of) HP Virtual Connect FlexFabric 10Gb/24-Port modules located in interconnect bays 1-4
• A minimum of one FlexFabric mezzanine adapter placed in Mezz 1

HP Virtual Connect FlexFabric modules can be added in pairs for additional bandwidth. The BladeSystem servers that are to utilize the increased bandwidth require one HP Virtual Connect FlexFabric mezzanine adapter for each additional pair of HP Virtual Connect FlexFabric modules.

The HP NC553m 10Gb 2-Port FlexFabric Adapter and the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter are both supported FlexFabric mezzanine adapters for a BladeSystem Matrix FlexFabric enclosure configured with G6 or i2 blades only. FlexFabric support for Integrity i2 servers requires the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter. The HP NC553m is the recommended FlexFabric mezzanine adapter for configurations which use only HP G6 BladeSystem servers, for the following reasons:
• Better performance
• Newer FlexFabric Converged Network Adapter technology
• Easier standardization, since it is supported in all G6 and G7 blades

Supported configuration
• 4 FlexFabric modules
• All G6 and i2 blades
• 1 additional FlexFabric mezzanine adapter

Interconnect module configurations for G6 or i2 blades using additional FlexFabric mezzanine(s):

Server network adapter used                    Interconnect modules
Flex-10/Enet LOM                               VC FlexFabric module [Bay 1] | VC FlexFabric module [Bay 2]
FlexFabric adapter in Mezzanine slot 1 (1)     VC FlexFabric module [Bay 3] | VC FlexFabric module [Bay 4]
(none)                                         Empty [Bay 5] | Empty [Bay 6]
(none)                                         Empty [Bay 7] | Empty [Bay 8]

(1) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter

This configuration is supported, but its added complexity must be carefully weighed against the opportunity to achieve higher performance. This configuration is less cost effective, more complex to configure, and makes SAN and network connectivity more difficult to manage. For the cost of four FlexFabric modules and a FlexFabric adapter for every blade in the enclosure, each blade can have 14 Ethernet ports and 2 FC ports, for a total of 16 ports sharing up to 40 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so the VC FlexFabric modules in bays 1 and 2 are used for Ethernet only.

Supported configuration
• 6 FlexFabric modules
• All G6 and i2 blades with an integrated Flex-10 adapter
• 2 additional FlexFabric mezzanine adapters

Interconnect module configurations for G6 or i2 blades using additional FlexFabric mezzanine(s):

Server network adapter used                    Interconnect modules
Flex-10/Enet LOM                               VC FlexFabric module [Bay 1] | VC FlexFabric module [Bay 2]
FlexFabric adapter in Mezzanine slot 1 (1)     VC FlexFabric module [Bay 3] | VC FlexFabric module [Bay 4]
FlexFabric adapter in Mezzanine slot 2 (2)     VC FlexFabric module [Bay 5] | VC FlexFabric module [Bay 6]
(none)                                         Empty [Bay 7] | Empty [Bay 8]

(1) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter
(2) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules

This configuration is supported, but its added complexity must be carefully weighed against the opportunity to achieve higher performance. This configuration is even less cost effective, more complex to configure, and makes SAN and network connectivity more difficult to manage. For the cost of six FlexFabric modules and two FlexFabric adapters for every blade in the enclosure, each blade can have 18 or 20 Ethernet ports and 2 or 4 FC ports, for a total of 24 ports sharing up to 60 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so the VC FlexFabric modules in bays 1 and 2 are used for Ethernet only.

Supported configuration
• 8 FlexFabric modules
• All G6 and i2 blades with an integrated Flex-10 adapter
• 3 additional FlexFabric mezzanine adapters

NOTE: Only full-height blades have three mezzanine slots. Without full-height blades, the fourth pair of FlexFabric modules will not be utilized by the FlexFabric mezzanine adapters.

Interconnect module configurations for G6 or i2 blades using additional FlexFabric mezzanine(s):

Server network adapter used                    Interconnect modules
Flex-10/Enet LOM                               VC FlexFabric module [Bay 1] | VC FlexFabric module [Bay 2]
FlexFabric adapter in Mezzanine slot 1 (1)     VC FlexFabric module [Bay 3] | VC FlexFabric module [Bay 4]
FlexFabric adapter in Mezzanine slot 2 (2)     VC FlexFabric module [Bay 5] | VC FlexFabric module [Bay 6]
FlexFabric adapter in Mezzanine slot 3 (2)     VC FlexFabric module [Bay 7] | VC FlexFabric module [Bay 8]

(1) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter
(2) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules

This configuration is supported, but its added complexity must be carefully weighed against the opportunity to achieve higher performance. This configuration is the least cost effective, the most complex to configure, and the most difficult for managing SAN and network connectivity. For the cost of eight FlexFabric modules and three FlexFabric adapters (or two for half-height blades) for every blade in the enclosure, each blade can have 26, 28, or 30 Ethernet ports and 2, 4, or 6 FC ports, for a total of 32 ports sharing up to 80 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so the VC FlexFabric modules in bays 1 and 2 are used for Ethernet only.
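The same port arithmetic applies to G6 and i2 blades, with one difference: the Flex-10 LOM pair in bays 1 and 2 never contributes FC ports, so only mezzanine pairs with FC uplinks count toward the FC total. A minimal sketch of this variant, matching the counts quoted above (the function name is illustrative):

```python
def g6_i2_blade_io(module_pairs, fc_mezz_pairs):
    """Per-blade port counts when the bay 1/2 module pair serves a Flex-10 LOM.

    module_pairs  -- total pairs of VC FlexFabric modules (first pair = LOM, Ethernet only)
    fc_mezz_pairs -- mezzanine module pairs with FC uplinks to the SAN
    """
    assert fc_mezz_pairs <= module_pairs - 1   # the LOM pair cannot carry FC
    physical_ports = module_pairs * 2          # one dual-port 10Gb adapter per module pair
    total_functions = physical_ports * 4       # 4 FlexNIC functions per 10Gb port
    fc_ports = fc_mezz_pairs * 2               # FCoE only on FC-uplinked mezzanine pairs
    ethernet_ports = total_functions - fc_ports
    bandwidth_gb = physical_ports * 10         # shared 10 Gb per physical port
    return ethernet_ports, fc_ports, total_functions, bandwidth_gb

# 8 FlexFabric modules (4 pairs), FC uplinks on all 3 mezzanine pairs:
print(g6_i2_blade_io(4, 3))   # (26, 6, 32, 80)
# 4 modules (2 pairs), FC uplinks on the single mezzanine pair:
print(g6_i2_blade_io(2, 1))   # (14, 2, 16, 40)
```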

FlexFabric configurations using a mixture of HP G7 with G6 and/or i2 BladeSystem servers

The following tables show a number of supported configurations for an HP BladeSystem c7000 enclosure with Virtual Connect FlexFabric modules in support of a mixture of HP G7 with G6 and/or i2 BladeSystem servers.


IMPORTANT: Configurations which support a mixture of HP G7 with G6 and/or i2 BladeSystem servers require:
• A minimum of four total (two redundant pairs of) HP Virtual Connect FlexFabric 10Gb/24-Port modules located in interconnect bays 1-4
• A minimum of one FlexFabric mezzanine adapter placed in Mezz 1 of every BladeSystem server in the enclosure

HP Virtual Connect FlexFabric modules can be added in pairs for additional bandwidth if desired. The BladeSystem servers that are to utilize the increased bandwidth require one HP Virtual Connect FlexFabric mezzanine adapter for each additional pair of HP Virtual Connect FlexFabric modules. FlexFabric support for Integrity i2 servers requires the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter.

The HP NC553m 10Gb 2-Port FlexFabric Adapter and the NC551m Dual Port FlexFabric 10Gb Converged Network Adapter are both supported FlexFabric mezzanine adapters for a BladeSystem Matrix FlexFabric enclosure configured with G6 and G7 blades. The HP NC553m is the recommended FlexFabric mezzanine adapter for configurations which use a mixture of G6 and G7 BladeSystem servers, for the following reasons:
• Better performance
• Newer FlexFabric Converged Network Adapter technology
• Easier standardization, since it is supported in all G6 and G7 blades

Supported configuration
• 4 FlexFabric modules
• Mixed G7 with G6 and/or i2 blades
• 1 additional FlexFabric mezzanine adapter

Interconnect module configurations for mixed G7 with G6 and/or i2 blades using additional FlexFabric mezzanine(s):

Server network adapter used                        Interconnect modules
Flex-10/Enet LOM or integrated FlexFabric adapter  VC FlexFabric module [Bay 1] | VC FlexFabric module [Bay 2]
FlexFabric adapter in Mezzanine slot 1 (1)         VC FlexFabric module [Bay 3] | VC FlexFabric module [Bay 4]
(none)                                             Empty [Bay 5] | Empty [Bay 6]
(none)                                             Empty [Bay 7] | Empty [Bay 8]

(1) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter

This configuration is supported, but its added complexity must be carefully weighed against the opportunity to achieve higher performance. This configuration is less cost effective, more complex to configure, and makes SAN and network connectivity more difficult to manage. For the cost of four FlexFabric modules and a FlexFabric adapter for every blade in the enclosure, each blade can have 12 or 14 Ethernet ports and 2 or 4 (G7s only) FC ports, for a total of 16 ports sharing up to 40 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so these blades will not have access to SAN uplinks on the VC FlexFabric modules in bays 1 and 2.


Supported configuration
• 6 FlexFabric modules
• Mixed G7 with G6 and/or i2 blades
• 2 additional FlexFabric mezzanine adapters

Interconnect module configurations for mixed G7 with G6 and/or i2 blades using additional FlexFabric mezzanine(s):

Server network adapter used                        Interconnect modules
Flex-10/Enet LOM or integrated FlexFabric adapter  VC FlexFabric module [Bay 1] | VC FlexFabric module [Bay 2]
FlexFabric adapter in Mezzanine slot 1 (1)         VC FlexFabric module [Bay 3] | VC FlexFabric module [Bay 4]
FlexFabric adapter in Mezzanine slot 2 (2)         VC FlexFabric module [Bay 5] | VC FlexFabric module [Bay 6]
(none)                                             Empty [Bay 7] | Empty [Bay 8]

(1) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter
(2) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules

This configuration is supported, but its added complexity must be carefully weighed against the opportunity to achieve higher performance. This configuration is even less cost effective, more complex to configure, and makes SAN and network connectivity more difficult to manage. For the cost of six FlexFabric modules and two FlexFabric adapters for every blade in the enclosure, each blade can have 18, 20, or 22 Ethernet ports and 2, 4, or 6 (G7s only) FC ports, for a total of 24 ports sharing up to 60 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so these blades will not have access to SAN uplinks on the VC FlexFabric modules in bays 1 and 2.

Supported configuration
• 8 FlexFabric modules
• Mixed G7 with G6 and/or i2 blades
• 3 additional FlexFabric mezzanine adapters

NOTE: Only full-height blades have three mezzanine slots. Without full-height blades, the fourth pair of FlexFabric modules will not be utilized by the FlexFabric mezzanine adapters.

Interconnect module configurations for mixed G7 with G6 and/or i2 blades using additional FlexFabric mezzanine(s):

Server network adapter used                        Interconnect modules
Flex-10/Enet LOM or integrated FlexFabric adapter  VC FlexFabric module [Bay 1] | VC FlexFabric module [Bay 2]
FlexFabric adapter in Mezzanine slot 1 (1)         VC FlexFabric module [Bay 3] | VC FlexFabric module [Bay 4]
FlexFabric adapter in Mezzanine slot 2 (2)         VC FlexFabric module [Bay 5] | VC FlexFabric module [Bay 6]
FlexFabric adapter in Mezzanine slot 3 (2)         VC FlexFabric module [Bay 7] | VC FlexFabric module [Bay 8]

(1) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter
(2) Requires HP NC553m or HP NC551m Dual Port FlexFabric 10Gb Converged Network Adapter for any blade communicating through this pair of VC FlexFabric modules


This configuration is supported, but its added complexity must be carefully weighed against the opportunity to achieve higher performance. This configuration is the least cost effective, the most complex to configure, and the most difficult for managing SAN and network connectivity. For the cost of eight FlexFabric modules and three FlexFabric adapters (or two for half-height blades) for every blade in the enclosure, each blade can have 24, 26, 28, or 30 Ethernet ports and 2, 4, 6, or 8 (G7s only) FC ports, for a total of 32 ports sharing up to 80 Gb of I/O bandwidth, depending on which interconnect modules have FC uplinks to the SAN. HP ProLiant G6 and Integrity i2 blades have Flex-10 LOMs, so these blades will not have access to SAN uplinks on the VC FlexFabric modules in bays 1 and 2.

HP BladeSystem Matrix configuration guidelines for mixing FlexFabric with Flex-10

HP BladeSystem Matrix solutions do not support mixing Flex-10 and FlexFabric within the same enclosure or within the same Virtual Connect domain group. When planning HP BladeSystem Matrix solutions within a single VC domain group or within an enclosure, configuration choices are limited to either Flex-10 with VC-FC modules or FlexFabric modules. The HP BladeSystem Matrix CMS can manage both Flex-10 and FlexFabric enclosures as separate VC domain groups.

HP BladeSystem Matrix solutions also do not support mixing FlexFabric modules and VC-FC within the same enclosure. If VC-FC support is desired, the customer must use the HP BladeSystem Matrix Flex-10 Starter and Expansion Kits.

90 HP BladeSystem Matrix and Virtual Connect FlexFabric Configuration Guidelines


Glossary

ABM  Array-based management
API  Application program interface
AWE  Address Windowing Extensions. A method used by the Windows OS to make more than 4 GB available to applications through system calls.
BFS  Boot from SAN
BOOTP  Bootstrap Protocol
CapAd  HP Capacity Advisor
CEE  Converged Enhanced Ethernet
CIM  Common Information Model
CIMOM  Common Information Model Object Manager
CLI  Command line interface. An interface comprised of various commands which are used to control operating system responses.
CLX  HP Cluster Extensions
CMS  Central management server
CNA  Converged network adapter
DAC  Direct Attach Cable
DEP  Data execution prevention
DG  Device group
DHCP  Dynamic Host Configuration Protocol
DMI  Desktop Management Interface
DNS  Domain Name System
DR  Disaster Recovery
DSM  Device specific module
EBIPA  Enclosure Bay IP Addressing
EFI  Extensible Firmware Interface
ESA  HP Extensible Server and Storage Adapter
EVA  HP Enterprise Virtual Array. An HP storage array product line.
FC  Fibre Channel. A network technology primarily used for storage networks.
FCoE  Fibre Channel over Ethernet
FDT  Firmware development tool
FQDN  Fully qualified domain name
FTP  File Transfer Protocol
GUI  Graphical user interface
gWLM  HP Global Workload Manager
HA  High availability
HBA  Host Bus Adapter. A circuit board and/or integrated circuit adapter that provides input/output processing and physical connectivity between a server and a storage device.
HP OO  See OO.
HP SIM  HP Systems Insight Manager
HP SUM  HP Smart Update Manager
HPIO  See IO.
HPVM  HP Virtual Machine. Common name for the HP Integrity Virtual Machines product.
HTTP  Hypertext Transfer Protocol


HTTPS  Hypertext Transfer Protocol Secure
IC  HP Insight Control
iCAP  Instant Capacity
ICE  HP Insight Control Suite
ICMP  Internet Control Message Protocol
ID  HP Insight Dynamics
IIS  Internet Information Services
iLO  Formerly HP Insight Control remote management. Renamed HP Integrated Lights-Out.
IO  HP Insight Orchestration. A web application that enables you to deploy, manage, and monitor the overall behavior of Insight Orchestration and its users, templates, services, and resources.
IPM  Formerly HP Insight Power Manager. Renamed HP Insight Control power management.
IPv4  Internet Protocol version 4
IPv6  Internet Protocol version 6
IR  HP Insight Recovery
JVM  Java Virtual Machine
KB  Knowledge base
LDAP  Lightweight Directory Access Protocol
LinuxPE  Linux pre-boot environment
LOM  LAN on motherboard
LS  Logical server
LSM  Logical server management
LUN  Logical unit number. The identifier of a SCSI, Fibre Channel, or iSCSI logical unit.
LV  Logical volume
MAC  Media Access Control. A unique identifier assigned by the manufacturer to most network interface cards (NICs) or network adapters. In computer networking, a Media Access Control address. Also known as an Ethernet Hardware Address (EHA), hardware address, adapter address, or physical address.
MMC  Microsoft Management Console
MPIO  HP Multipath I/O
MSA  HP StorageWorks Modular Smart Array. An HP storage array product line (also known as the P2000).
MSC  Microsoft System Center
MSCS  Microsoft Cluster Server/Service
MSSW  HP Insight Managed System Setup Wizard
NCU  Network Configuration Utility
NFS  Network File System
NFT  Network file transfer
NIC  Network interface card. A device that handles communication between a device and other devices on a network.
NPIV  N_Port ID Virtualization
NTP  Network Time Protocol
NVRAM  Non-volatile random access memory
OA  HP Onboard Administrator
OE  Operating environment
OO  HP Operations Orchestration
OS  Operating system


P-VOL  Primary Volume
PAE  Physical Address Extension. A feature of x86 processors to allow addressing more than 4 GB of memory.
PDR  Power distribution rack
PDU  Power distribution unit. The rack device that distributes conditioned AC or DC power within a rack.
POC  Proof of concept
POST  Power on self test
PSP  HP ProLiant Support Pack
PSUE  Pair suspended-error
PSUS  Pair suspended-split
PXE  Preboot Execution Environment
RAID  Redundant Array of Independent Disks
RBAC  Role-based access control
RDP  Formerly HP Rapid Deployment Pack. Renamed HP Insight Control server deployment.
RG  Recovery group
RM  HP Matrix recovery management
S-VOL  Secondary Volume
SA  HP Server Automation
SAID  Service agreement identifier
SAM  System administration manager
SAN  Storage area network. A network of storage devices available to one or more servers.
SCP  State change pending
SFIP  Stress-free installation plan
SG  HP Serviceguard
SIM  See HP SIM.
SLVM  Shared Logical Volume Manager
SMA  Storage Management Appliance
SMH  HP System Management Homepage
SMI-S  Storage Management Initiative Specification
SMP  Formerly HP Server Migration Pack. Renamed HP Insight Control server migration.
SMTP  Simple Mail Transfer Protocol. A protocol for sending email messages between servers and from mail clients to mail servers. The messages can then be retrieved with an email client using either POP or IMAP.
SN  Serial number
SNMP  Simple network management protocol
SPE  Storage pool entry
SPM  HP Storage Provisioning Manager. A means of defining logical server storage requirements by specifying volumes and their properties.
SQL  Structured Query Language
SRD  Shared Resource Domain
SSH  Secure Shell
SSL  Secure Sockets Layer
SSO  Single sign-on
STM  System Type Manager
SUS  Shared uplink set


TCP/IP  Transmission Control Protocol/Internet Protocol
TFTP  Trivial File Transfer Protocol
TOE  TCP Offload Engine
UAC  User Account Control
UDP  User Datagram Protocol
URC  Utility Ready Computing
URS  Utility Ready Storage
USB  Universal Serial Bus. A serial bus standard used to interface devices.
UUID  Universally Unique Identifier
VC  HP Virtual Connect
VCA  Version Control Agent
VCDG  Virtual Connect domain group
VCEM  HP Virtual Connect Enterprise Manager. HP VCEM centralizes network connection management and workload mobility for HP BladeSystem servers that use Virtual Connect to access local area networks (LANs), storage area networks (SANs), and converged network environments.
VCM  Virtual Connect Manager
VCPU  Virtual CPU
VCRM  HP Version Control Repository Manager
VCSU  Virtual Connect Support Utility
Vdisk  Virtual disk
VG  Volume group
VLAN  Virtual local area network
VM  Virtual machine
VMAN  HP Insight Virtualization Manager
VMM  Formerly HP Virtual Machine Manager. Renamed HP Insight Control virtual machine management.
VPort  Virtual port
VSE  Formerly Virtual Server Environment. Renamed Insight Dynamics.
WAIK  Windows Automated Installation Kit
WBEM  Web-Based Enterprise Management
WinPE  Windows pre-boot environment
WMI  Windows Management Instrumentation
WWID  See WWN.
WWN  Worldwide Name. A unique 64-bit address assigned to a FC device.
WWNN  Worldwide Node Name. A WWN that identifies a device in a FC fabric.
WWPN  Worldwide Port Name. A WWN that identifies a port on a device in an FC fabric.
XML  Extensible Markup Language


Index

A
access credentials
    infrastructure, 48
    templates, 75
    test and development infrastructure using logical servers, 60
    test and development infrastructure with IO, 67
access requirements, 47
application services, 14
    define, 16
    templates, 68

B
bill of materials
    test and development infrastructure using logical servers, 56
    test and development infrastructure with IO, 61

C
c7000 enclosure, 8, 10, 30, 61
CMS, 13
    configuration, 37
    disk requirements, 18
    Insight Software, 14, 16
    Microsoft System Center, 78
    network connections, 18
    non server blade, 18
    planning, 16
    SAN connections, 18
    supported configurations, 78
components
    customer responsibility, 12
    HP BladeSystem Matrix, 9
    Microsoft System Center, 77
configuration
    CMS, 37
    FlexFabric, 80, 83, 85, 87, 90
    management services network, 44
    sample templates, 68
    VC domain, 34
connections
    define manageability, 46
    FC, 38
    FC SAN storage, 36
    iSCSI SAN storage, 36
    manageability, 45
    storage, 35
    VC ethernet, 45
    VC ethernet uplink, 42
converged
    infrastructure, 7
customer
    facility planning, 29
    network details, 40
    responsibility, 12, 29
    SAN storage, 35
    SAN storage templates, 70

D
data center
    customer responsibility, 29
    requirements, 29
define
    application services, 16
    manageability connections, 46
    services VC ethernet connections, 45
    storage volumes, 38
define services
    test and development infrastructure, 62
    test and development infrastructure using logical servers, 56
deployed servers and services, 14
disk requirements, CMS, 18
documents, HP BladeSystem Matrix, 5
domain
    VC configuration, 34
    Virtual Connect, 30

E
enclosure
    Flex-10, 8, 29
    FlexFabric, 7, 29
    parameters, 29
    planning, 29
    stacking, 30
enclosure stacking
    Flex-10, 30
    FlexFabric, 30
ethernet
    define services connections, 45
    VC, 8
    VC Flex-10 services, 43
    VC uplink connections, 42
    VC uplinks, 43
Expansion Kit
    Flex-10, 11

F
facility
    planning, 29
    planning templates, 69
    planning test and development infrastructure using logical servers, 57
    requirements, 29
facility planning, test and development infrastructure using IO, 63
FC
    connections, 38
    module, 30, 35
    SAN, 13, 36, 39
    SAN storage, 71


    storage, 28
    switch, 37
    VC, 8, 35
FC SAN storage
    connections, 36
    templates, 71
federated CMS, 14
    access requirements, 48
    DNS configuration, 40
    HP BladeSystem Matrix infrastructure, 26
    planning, 19
    storage pool, 36
    supported management software, 19
federated environment, 24
Flex-10
    capability, 35
    enclosure, 8, 29
    enclosure stacking, 30
    Expansion Kit, 11
    FlexFabric, 90
    module, 31, 42
    Starter Kit, 10
    VC ethernet services, 43
FlexFabric
    configuration, 83, 85, 87, 90
    configuration guidelines, 80
    enclosure, 7, 29
    enclosure stacking, 30
    Flex-10, 90
    hardware components, 80
    Integrity, 80, 85, 87
    interconnects or mezzanines, 81
    module, 8, 31, 35
    module placement, 82
    Starter Kit, 10

H
HP BladeSystem c7000
    FlexFabric, 82
    port mapping, 81
HP BladeSystem Matrix
    basic infrastructure, 8
    components, 9
    customer facility planning, 29
    documents, 5
    infrastructure, 7
    pre-delivery, 5
    pre-delivery planning, 49
    pre-order, 5
    solution networking, 40
    solution storage, 35
HP BladeSystem Matrix infrastructure
    basic, 8
    federated CMS, 26
    Integrity managed nodes, 23
    overview, 5
    ProLiant managed nodes, 22
HP Insight Control see Insight Control
HP Insight Dynamics see Insight Dynamics
HP Insight Orchestration see IO
HP IR see IR
HP server automation
    additional management servers, 20
    optional management services, 76

I
Ignite-UX server, 19
infrastructure
    access credentials, 48
    dynamic provisioning, 54–67
    HP BladeSystem Matrix, 7
    management, 8
    test and development, 54
        IO, 60
infrastructure, converged, 7
Insight Control
    Microsoft System Center, 9, 20, 28, 76, 77
    server deployment, 14, 18, 21, 27, 28, 40
    VMware vCenter Server, 9, 20, 28, 76, 77
Insight Dynamics, 9, 14, 15, 16, 18, 20, 60, 76
    orchestration service requests, 39
Insight Orchestration see IO
Insight Recovery see IR
Insight Software
    CMS planning, 16
integration, optional management services, 76
Integrity
    FlexFabric, 80, 85, 87
Integrity managed nodes, 23
Integrity server environment, standard, 22
intended audience, 5
IO, 44, 45
    templates, 16, 38, 44, 61
    test and development infrastructure, 60
        access credentials, 67
        bill of materials, 61, 62
        facility planning, 63
        managed network connections, 66
        network configuration, 65
        racks and enclosures, 63
        services network connections, 65
        storage volumes, 65
        VC domain configuration, 64
        VC Ethernet uplinks, 65

IR, optional management services, 76
iSCSI

    SAN storage, 71
    SAN storage connections

        templates, 71
isolating VM guest/host, 39

L
limited environment, 21
link configurations, stacking, 31
logical servers

    test and development infrastructure, 54
        access credentials, 60
        bill of materials, 56


        define services, 56
        facility planning, 57
        management network connections, 59
        network configuration, 58
        racks and enclosures, 56
        services network connections, 59
        storage volumes, 58
        VC domain configuration, 57
        VC Ethernet uplinks, 58

M
MAC address, assign, 33
manageability connections, 45
managed network connections
    test and development infrastructure, 66
management
    additional servers, 20
    determine servers, 27
    infrastructure, 8
    network connections, 74
    server scenarios, 20
    services, 16, 44, 76
management network connections
    templates, 74
    test and development infrastructure using logical servers, 59
management servers
    additional, 20
    templates, 68
Microsoft System Center
    CMS, 78
    components, 77
    Insight Control, 9, 28, 76, 77
    optional management services, 77, 78
    other managed nodes, 78
module
    FC, 30, 35
    Flex-10, 31, 42
    FlexFabric, 8, 31, 35

N
network
    management services, 44
    planning, 40
network configuration
    templates, 72
    test and development infrastructure, 65
    test and development infrastructure using logical servers, 58
network connections
    CMS, 18
    management, 74
    services, 74
network details, customer, 40
next steps, 50
NPIV, 12, 13, 35, 37

O
optional management services, 76
    HP server automation, 76
    Insight Control for Microsoft System Center, 77
    Insight Control for VMware vCenter Server, 76, 77
    integration, 76
    IR, 76
    Microsoft System Center, 77, 78
orchestration service requests, 39
overview, HP BladeSystem Matrix infrastructure, 5

P
planning
    checklist, 49
    customer facility, 29
    federated CMS, 19
    Insight Software CMS, 16
    network, 40
    racks and enclosures, 29
    server, 14
    services, 14
    summary, 6
planning step
    1a-define application services, 16
    1b-determine management servers, 27
    2a-rack & enclosure parameters, 29
    2b-determine facility requirements, 29
    2c-VC domain configuration, 34
    3a-collect customer SAN storage details, 35
    3b-FC SAN storage connections, 36
    3c-iSCSI SAN storage connections, 36
    3d-define storage volumes, 38
    4a-collect customer provided network details, 40
    4b-VC ethernet uplinks, 43
    4c-define services VC ethernet connections, 45
    4d-define manageability connections, 46
    4e-determine infrastructure credentials, 48
port mapping, 81
pre-delivery, 5
pre-order, 5
ProLiant managed nodes
    HP BladeSystem Matrix infrastructure, 22
ProLiant server environment
    standard, 21

R
rack
    parameters, 29
    planning, 29
racks and enclosures
    templates, 68
    test and development infrastructure, 63
    test and development infrastructure using logical servers, 56
requirements
    data center, 29
    facility, 29
    storage volumes, 37

S
sample configuration templates, 68


SAN
    connections, 18
    FC, 13, 39
SAN storage
    customer details, 35
    FC, 71
    FC connections, 36
    iSCSI, 71
    iSCSI connections, 36
    templates, 70
server
    deployed, 14
    Ignite-UX, 19
    management, 27
    management scenarios, 20
    planning, 14
server environment
    federated, 24
    limited, 21
services
    application, 14
    deployed, 14
    Flex-10 ethernet, 43
    management, 16
    network configuration, 44
    network connections, 74
    planning, 14
services network connections
    templates, 74
    test and development infrastructure, 65
    test and development infrastructure using logical servers, 59
solution
    networking, 40
stacking
    enclosure, 30
    link configurations, 31
standard environment
    Integrity, 22
    ProLiant, 21
Starter Kit
    Flex-10, 10
    FlexFabric, 10
storage
    connections, 35
    FC, 28
    solution, 35
    volumes, 37
storage pool
    federated CMS, 36
storage volumes, 37
    define, 38
    requirements, 37
    templates, 71
    test and development infrastructure, 65
    test and development infrastructure using logical servers, 58
switch
    FC, 37

T
templates
    access credentials, 75
    application services, 68
    customer-provided SAN storage, 70
    facility planning, 69
    FC SAN storage connections, 71
    IO, 16, 38, 44, 61
    iSCSI SAN storage connections, 71
    management network connections, 74
    management servers, 68
    network configuration, 72
    racks and enclosures, 68
    sample configuration, 68–75
    SAN storage, 70
    services network connections, 74
    storage volumes, 71
    VC domain configuration, 69
    VC Ethernet uplinks, 73
test and development infrastructure
    IO, 60
        access credentials, 67
        bill of materials, 61
        define services, 62
        management network connections, 66
        network configuration, 65
        racks and enclosures, 63
        services network connections, 65
        storage volumes, 65
        VC domain configuration, 64
        VC Ethernet uplinks, 65
    logical servers, 54
        access credentials, 60
        bill of materials, 56
        define services, 56
        facility planning, 57
        management network connections, 59
        network configuration, 58
        racks and enclosures, 56
        services network connections, 59
        storage volumes, 58
        VC domain configuration, 57
        VC Ethernet uplinks, 58
test and development infrastructure with IO
    access credentials, 67
    facility planning, 63

V
VC
    assign MAC address, 33
    assign WWN address, 33
    define services ethernet connections, 45
    domain, 30
    ethernet module, 8
    ethernet uplink connections, 42
    ethernet uplinks, 43
    FC, 8, 35


    Flex-10 ethernet services, 43
    technology, 35
VC domain configuration
    templates, 69
    test and development infrastructure, 64
    test and development infrastructure using logical servers, 57
VC Ethernet uplinks
    templates, 73
    test and development infrastructure, 65
    test and development infrastructure using logical servers, 58
VC FlexFabric
    configuration, 80
    hardware components, 80
virtual serial numbers, 33
VM guest storage
    isolating from VM host, 39
VM host, isolating from VM guest, 39
VMware vCenter Server, 76
    Insight Control, 9, 20, 28

W
WWN address, 33
