
Fast Lane Institute for Knowledge Transfer GmbH Oranienburgerstr. 66, 10117 Berlin www.flane.de [email protected]

DCUCD

Data Center Unified Computing Design

Fast Lane Lab Guide

Version 3.0.0


Overview

This guide presents the instructions and other information concerning the lab activities for this course.

Outline

This guide includes the following:

- Lab Overview
- Lab 2-1: Exploring Cisco Unified Computing System Hardware
- Case Study 2-2: Sizing the Cisco Unified Computing System
- Lab 3-1: Deploying a Server with Cisco Unified Computing System
- Case Study 3-2: Designing Server Deployment
- Lab 4-1: Implementing Management Hierarchy
- Lab 5-1: Exploring the Cisco Unified Computing Network
- Case Study 5-2: Designing a Cisco Unified Computing Network
- Lab 6-1: Exploring Cisco Unified Computing SAN
- Case Study 6-2: Designing Cisco Unified Computing SAN
- Lab 9-1: Installing VMware vSphere and vCenter
- Lab 9-2: Installing a Cisco Nexus 1000V VSM
- Lab 9-3: Configuring Port Profiles

Cisco Data Center Unified Computing Design (DCUCD) 3.0 © 2010 Cisco Systems, Inc. & Fast Lane

FastLane UCS Lab Layout

[Topology figure: two UCS Fabric Interconnects (UCS A and UCS B) and two MDS switches (MDS A and MDS B), interconnected with GE, FC, and 10GE links; FTP/WWW/mail and NetApp servers; Management Network 172.16.1.x/24; Data Network 172.17.1.x/24; Student Desktop.]


Remote Lab Login

Note the remote lab access data you received from your instructor in this table. The URL is always: http://remotelabs.flane.de

LAB#    POD#    Username    Password
----    ----    --------    --------


FastLane UCS Lab Topology – Lab Aids

Management IP Addresses (P is always your pod number)

Device            IP Address      Default Gateway
MDS1              172.16.1.31     172.16.1.254
MDS2              172.16.1.32     172.16.1.254
UCS Fabric A      172.16.1.101    172.16.1.254
UCS Fabric B      172.16.1.102    172.16.1.254
UCS shared        172.16.1.200    172.16.1.254
Student Desktop   172.16.1.2P     172.16.1.254
Mail Server       172.16.1.250    -
                  172.17.1.250    172.17.1.254
Nexus 1000v       172.17.P1.200   172.17.P1.254
W2K3-SAN          172.17.P2.1     172.17.P2.254
W2K3-VM           172.17.P2.100   172.17.P2.254
ESX server        172.17.P1.1     172.17.P1.254
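Because several of these addresses embed the pod number P, it can help to expand the templates into concrete per-pod addresses before the lab. The small helper below is an illustrative sketch only (not part of the official lab materials); the template strings are copied from the table above, and it assumes single-digit pod numbers.

```python
# Expand the pod-number placeholder "P" in the management-address
# templates from the Lab Aids table into concrete per-pod addresses.
# Illustrative helper only; assumes single-digit pod numbers.

ADDRESS_TEMPLATES = {
    "Student Desktop": ("172.16.1.2P", "172.16.1.254"),
    "Nexus 1000v":     ("172.17.P1.200", "172.17.P1.254"),
    "W2K3-SAN":        ("172.17.P2.1", "172.17.P2.254"),
    "W2K3-VM":         ("172.17.P2.100", "172.17.P2.254"),
    "ESX server":      ("172.17.P1.1", "172.17.P1.254"),
}

def pod_addresses(pod: int) -> dict:
    """Return {device: (ip, gateway)} with P replaced by the pod number."""
    p = str(pod)
    return {
        device: (ip.replace("P", p), gw.replace("P", p))
        for device, (ip, gw) in ADDRESS_TEMPLATES.items()
    }

if __name__ == "__main__":
    for device, (ip, gw) in pod_addresses(3).items():
        print(f"{device}: {ip} (gw {gw})")
```

For pod 3, for example, the student desktop resolves to 172.16.1.23 and the ESX server to 172.17.31.1.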

Device Login Credentials

Device        Username        Password
Student PC    administrator   1234QWer
ESX           root            1234QWer
W2K3-SAN      administrator   1234QWer
W2K3-VM       administrator   1234QWer
UCS Manager   admin           1234QWer
MDS           student         1234QWer
Nexus 1000v   admin           1234QWer


Fast Lane Lab Guide V3.0.0 ... 1
    Overview ... 1
    Outline ... 1
FastLane UCS Lab Layout ... 2
Remote Lab Login ... 3
FastLane UCS Lab Topology – Lab Aids ... 4
Demo 2-1: Initial Configuration ... 8
    Activity Objective ... 8
    Visual Objective ... 8
    Required Resources ... 8
Lab 2-1: Exploring Cisco Unified Computing System Hardware ... 9
    Activity Objective ... 9
    Required Resources ... 9
    Task 1: Examining Cisco UCS Cluster Configuration ... 10
    Task 2: Examining Cisco UCS Fabric Interconnect Switches ... 13
    Task 3: Examining Cisco UCS Chassis ... 18
    Task 4: Examining Cisco UCS I/O Modules ... 20
    Task 5: Examining Cisco UCS Server Blades ... 22
Case Study 2-2: Sizing the Cisco Unified Computing System ... 32
    Overview ... 32
    Assignment ... 32
    Requirements ... 32
    Case Study Aids ... 34
Lab 3-1: Deploying a Server with Cisco Unified Computing System ... 35
    Activity Objective ... 35
    Visual Objective ... 36
    Required Resources ... 36
    Job Aid ... 36
    Task 1: Creating MAC Resource Pool ... 38
    Task 2: Creating WWNN Resource Pool ... 41
    Task 3: Creating WWPN Resource Pool ... 43
    Task 4: Creating UUID Suffix Resource Pool ... 46
    Task 5: Creating Server Pool ... 49
    Task 6: Creating Server Pool Policy and Qualification Policy ... 50
    Task 7: Creating Advanced Service Profile ... 56
Case Study 3-2: Designing Server Deployment ... 67
    Overview ... 67
    Assignment ... 69
    Requirements ... 70
    Case Study Aids ... 71
Lab 4-1: Implementing Management Hierarchy ... 74
    Activity Objective ... 74
    Visual Objective ... 74
    Required Resources ... 75
    Job Aid ... 75
    Task 1: Creating Organization ... 76
    Task 2: Creating Locales ... 79
    Task 3: Creating Users ... 82
    Task 4: Creating Roles ... 86
Lab 5-1: Exploring the Cisco Unified Computing Network ... 89
    Activity Objective ... 89
    Visual Objective ... 89
    Required Resources ... 90
    Lab Aid ... 90
    Task 1: Examining Cisco UCS Network Configuration ... 91
    Task 2: Verifying Network High Availability ... 96
Case Study 5-2: Designing a Cisco Unified Computing Network ... 102
    Overview ... 102
    Assignment ... 102
    Requirements ... 102
Lab 6-1: Exploring Cisco Unified Computing SAN ... 105
    Activity Objective ... 105
    Visual Objective ... 105
    Required Resources ... 106
    Job Aid ... 106
    Task 1: Examining SAN Network Topology ... 107
    Task 2: Examining Cisco UCS SAN Configuration ... 115
    Task 3: Examining the SAN High Availability (Demonstration) ... 120
Case Study 6-2: Designing Cisco Unified Computing SAN ... 122
    Overview ... 122
    Assignment ... 122
    Requirements ... 122
Lab 9-1: Installing VMware vSphere and vCenter ... 125
    Activity Objective ... 125
    Visual Objective ... 125
    Required Resources ... 126
    Task 1: Create a Service Profile ... 126
    Task 2: Install vSphere 4.0u1 ... 134
    Task 3: Import a Virtual Machine ... 145
    Task 4: Install vCenter Server ... 149
Lab 9-2: Installing a Cisco Nexus 1000V VSM ... 160
    Activity Objective ... 160
    Visual Objective ... 160
    Required Resources ... 160
    Task 1: Prepare the VLAN Infrastructure ... 161
    Task 2: Install the Nexus 1000V VSM ... 164
Lab 9-3: Configuring Port Profiles ... 178
    Activity Objective ... 178
    Visual Objective ... 178
    Required Resources ... 178
    Task 1: Create an Uplink Port Profile ... 179
    Task 2: Create a Data Port Profile ... 183
    Task 3: Add Hosts to a Cisco Nexus 1000V VSM ... 185
    Task 4: Test Cisco Nexus 1000V Functionality ... 192


Demo 2-1: Initial Configuration

Complete this lab activity to practice what you learned in the related lesson.

Activity Objective

In this activity, you will observe the instructor performing the initial configuration of a Cisco UCS clustered environment. After observing this lab, you should be able to complete the initial configuration of a Cisco UCS 6100 Fabric Interconnect and establish a cluster relationship between two Cisco UCS 6100 Fabric Interconnects.

Visual Objective

The figure illustrates what you will accomplish in this activity.

[Figure: Lab 2-1 Initial Configuration topology, showing the compute chassis connected to the Fabric Interconnects over FCoE, with Ethernet uplinks to the LAN.]

Required Resources

These are the resources and equipment that are required to complete this activity:

- Two Cisco UCS 6100 Fabric Interconnects
- Serial terminal access to both Fabric Interconnects
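For reference, the first Fabric Interconnect is normally configured through a console setup dialog similar to the abridged sketch below. The prompts are paraphrased and the hostname is illustrative; the addresses reuse the values from the Lab Aids table. Your instructor's demo may differ in detail.

```
Enter the configuration method. (console/gui) ? console
Enter the setup mode; setup newly or restore from backup. (setup/restore) ? setup
You have chosen to setup a new Fabric interconnect. Continue? (y/n): y
Enter the password for "admin": ********
Is this Fabric interconnect part of a cluster (select 'no' for standalone)? (yes/no) [n]: yes
Enter the switch fabric (A/B) []: A
Enter the system name: FL-UCS
Physical Switch Mgmt0 IPv4 address: 172.16.1.101
Physical Switch Mgmt0 IPv4 netmask: 255.255.255.0
IPv4 address of the default gateway: 172.16.1.254
Cluster IPv4 address: 172.16.1.200
Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes
```

The second Fabric Interconnect detects its peer over the cluster links and only asks for its own Mgmt0 address (172.16.1.102 in this lab) before joining the cluster.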


Lab 2-1: Exploring Cisco Unified Computing System Hardware

Complete this lab exercise to examine the Cisco Unified Computing System hardware components and practice what you learned in the related module.

Activity Objective

In this activity, you will use the Data Center Unified Computing Design lab topology and Cisco UCS to examine, identify, and verify Cisco UCS hardware components. After completing this activity, you will be able to meet these objectives:

- Examine Cisco UCS cluster configuration
- Identify Cisco UCS Fabric Interconnect switch configuration
- Identify Cisco UCS chassis configuration
- Identify Cisco UCS IOM configuration
- Identify Cisco UCS Server Blade configuration
- Connect to a Cisco UCS Server Blade using the KVM console
- Decommission and re-acknowledge the assigned Cisco UCS Server Blade

Required Resources

These are the resources and equipment required to complete this activity:

- Two Cisco UCS 6120XP Fabric Interconnect switches
- One Cisco UCS 5108 Chassis
- Two Cisco UCS 2104XP I/O Modules
- Six Cisco UCS B200-M1 Server Blades


Task 1: Examining Cisco UCS Cluster Configuration

In this task, you will examine the Cisco UCS equipment general information and verify the basic management configuration.

Note: You will conduct this task on Cisco UCS equipment shared between the pods. For that reason, you will only examine the setup and will not change any parameters unless required by the task.

Activity Procedure

Complete these steps:

Step 1  Log into the FastLane remote lab by starting Internet Explorer (or any other browser) and navigating to http://remotelabs.flane.de.

Step 2  Click the Login button at the top right (or the Student Login link at the bottom of the page).

Step 3  Log in with the credentials of your workgroup (supplied by the instructor).

Step 4  Click the blue PC icon to start your remote access session to the remote lab. All lab exercises will be done on this PC; your local PC will only be used for accessing the remote lab GUI. (The grey components are not manageable by you for now.)


Step 5  Log into the student PC as user "administrator" with the password "1234QWer".

Step 6  Once connected to the student desktop, locate the Internet Explorer icon entitled Cisco UCS Manager and double-click it to start the Cisco UCS Manager client application.

Step 7  If this is the first launch of Cisco UCS Manager from the assigned student desktop, the launch sequence begins by downloading the Cisco UCS Manager Java application.

Step 8  When the download is finished, you will be prompted for login credentials. Log in as user "admin" with the password "1234QWer".

Step 9  When successfully authenticated, the Cisco UCS Manager application launches and the Cisco UCS Manager window appears.


Step 10  Examine the Cisco UCS topology by selecting the Equipment tab in the left pane and then clicking the Main Topology View tab in the right pane (if not already selected). You will see a Cisco UCS topology similar to the one below: a Cisco UCS 5108 Chassis and two Cisco UCS 6120XP Fabric Interconnect switches.

Step 11  Now select the Admin tab in the left pane of the window to examine and verify the basic management information presented in the General tab in the right pane. Select the All option in the left pane tree structure to see the Cisco UCS cluster and individual Fabric Interconnect switch management information.

Page 15: Design

© 2010 Cisco Systems, Inc. & Fast Lane Fast Lane Lab Guide V3.0.0 13

1. What are the Cisco UCS Fabric Interconnect A and B management IP addresses, subnet mask, and default gateway?

2. What is the high-availability configuration setting?

3. What is the cluster name?
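Besides the GUI, the same cluster information can be checked from the Fabric Interconnect command line. The sketch below assumes an SSH session to a Fabric Interconnect management address; the hostname prompt is illustrative and the exact output wording varies by UCS Manager release.

```
UCS-A# connect local-mgmt
UCS-A(local-mgmt)# show cluster extended-state
Cluster Id: ...
A: UP, PRIMARY
B: UP, SUBORDINATE
HA READY
```

Comparing this output with the GUI answers above is a quick way to confirm the high-availability state and each switch's role.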

Task 2: Examining Cisco UCS Fabric Interconnect Switches

In this task, you will examine the Cisco UCS Fabric Interconnect switch information and applied configuration.

Note: You will conduct this task on Cisco UCS equipment shared between the pods. For that reason, you will only examine the setup and will not change any parameters unless required by the task.

Activity Procedure

Complete these steps:

Step 1  Click the Equipment tab in the left pane to switch back to the Cisco UCS components view. Next, expand and select the Fabric Interconnects option in the tree structure to see details about the interfaces of Cisco UCS Fabric Interconnect switches A and B.


1. What is the total number of interfaces in a single Cisco UCS Fabric Interconnect switch?

Step 2  Examine the Fabric Interconnect A information by selecting the Fabric Interconnect A option in the tree structure in the left pane. Examine the information available in the General tab in the right pane.

2. What is the total size of the Cisco UCS Fabric Interconnect A memory?

3. What is the high-availability state, cluster link state, and Cisco UCS Fabric Interconnect A role?

Step 3  Examine the Fabric Interconnect A switch fixed module interface configuration and status. Expand the Fabric Interconnect A option in the tree structure in the left pane. Next, expand the Fixed Module option. To examine the interface information, browse between the Server Ports, Unconfigured Ports, and Uplink Ethernet Ports tabs in the right pane.


Step 4  Examine the Fabric Interconnect A switch expansion module interface configuration and status. Select the Expansion Module 2 option under the Fabric Interconnect A option in the tree structure in the left pane. Navigate to the Fibre Channel Ports tab in the right pane.

Step 5  The interface information can also be examined by choosing the Physical Ports tab in the right pane when the Fabric Interconnect A option is selected in the tree structure in the left pane. Browse between the Uplink Ports, Server Ports, Fibre Channel Ports, and Unconfigured Ports in the right pane.


4. What is the type of expansion module installed in the individual Fabric Interconnect?

5. How many and which interfaces are configured for connectivity from Fabric Interconnect A to the upstream LAN network?

6. How many and which interfaces are configured for the server connectivity on the Fabric Interconnect A?

7. How many and which interfaces are configured for the SAN connectivity on the Fabric Interconnect A?

8. How many and which interfaces are available for future system expansion (for example, if additional chassis would be added) on the Fabric Interconnect A?
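The port roles can also be listed from the NX-OS shell of the Fabric Interconnect, which is sometimes quicker than browsing the GUI tabs. The hostname prompt below is illustrative, and the interface numbering depends on the actual lab cabling.

```
UCS-A# connect nxos
UCS-A(nxos)# show interface brief
```

The output lists the Ethernet server and uplink ports, plus the Fibre Channel ports on the expansion module, together with their mode and state, which you can use to answer the uplink, server, and SAN port questions above.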

Step 6  Check the physical layout of the Fabric Interconnect by selecting Fabric Interconnect A in the left pane and then the Physical Display tab in the right pane. Move your mouse over an individual port and wait for the balloon tip to appear; it shows brief information about the interface: its port number and type.

Step 7  Finally, check the fan and power supply status by browsing between the Fans and PSUs tabs in the right pane. Both fans and power supplies should be operational.


Task 3: Examining Cisco UCS Chassis

In this task, you will examine the Cisco UCS Chassis information and configuration.

Note: You will conduct this task on Cisco UCS equipment shared between the pods. For that reason, you will only examine the setup and will not change any parameters unless required by the task.

by the task.

Activity Procedure

Complete these steps:

Step 1  Select the Equipment tab in the left pane to switch to the Cisco UCS components view. Next, expand the Chassis option in the tree structure and select Chassis 1. Examine the general chassis information in the General tab in the right pane. Expand the Part Details, Status Details, Power State Details, and Connection Details sections to examine the detailed information.


1. What is the ID of the chassis?

2. What is the maximum number of blades that the chassis can host?

3. How many power supplies can be installed in the chassis and what type of power scheme is used?

Step 2  Expand the Chassis 1 option in the left pane and select the Fans option.

4. How many fan modules are installed and operational in the chassis?

Step 3  Navigate to the Fans tab in the right pane to examine and verify the fan operation.

Step 4  Select an individual fan module in the left pane and click the Statistics tab in the right pane to examine the exhaust temperature.


Task 4: Examining Cisco UCS I/O Modules

In this task, you will examine the Cisco UCS I/O module information and configuration.

Note: You will conduct this task on Cisco UCS equipment shared between the pods. For that reason, you will only examine the setup and will not change any parameters unless required by the task.

Activity Procedure

Complete these steps:

Step 1  Expand the IO Modules option under Chassis 1 in the tree in the left pane and select IO Module 1. Navigate to the General tab in the right pane and explore the I/O Module information.

1. What is the I/O Module part name?

Step 2  Now browse between the Fabric Ports and Backplane Ports tabs to examine the information about the uplink and server ports.


2. How many interfaces are available and how many interfaces are used on the I/O module to connect to the Fabric Interconnect switch?

3. How many interfaces are available for blade server connectivity?


Task 5: Examining Cisco UCS Server Blades

In this task, you will examine the Cisco UCS Server Blade information and configuration.

Note: You will conduct this task on Cisco UCS equipment shared between the pods. For that reason, you will only examine the setup and will not change any parameters unless required by the task.

Activity Procedure

Complete these steps:

Step 1  Expand the Servers option under Chassis 1 in the tree in the left pane and select the assigned server blade (this depends on the pod number; for example, POD1 uses the server blade in slot 1). Next, select the General tab in the right pane and explore the basic server blade properties.

1. What is the type of the assigned server blade?

2. How many processors does the server blade have?

3. What is the number of cores and threads per processor?

4. How much memory does the server blade have?


5. How many adapters does the server blade have?

Step 2  Select the Inventory tab in the right pane and examine the detailed inventory information for the individual server blade components. First, select the BMC tab in the right pane to examine the management information of the server blade.

6. What is the IP address of the server blade management interface?

Step 3 Examine the memory configuration to verify which memory DIMMs are populated.


7. Which memory DIMM slots are populated?

8. What is the size of the individual DIMM module?

Step 4  Examine other server blade components by browsing between the CPUs, Interface Cards, HBAs, NICs, and Storage tabs.


9. What is the type of the processors on the server blade?

10. What is the processor speed and architecture?

11. What is the type and size of the local storage?


Step 5  Expand the assigned server blade and the Interface Cards option in the tree structure in the left pane. Next, select Interface Card 1, navigate to the General tab in the right pane, and expand the Part Details section.

12. What is the type of the interface adapter and what kind of connectivity does it support?

Step 6  Select your assigned server blade (the blade number is your pod number) in the left pane and navigate to the General tab in the right pane. Click the KVM Console option under the Actions section to examine the server console output.

Step 7  If the Warning – Security message box appears, click Run to start the KVM Viewer Java application.


Step 8  The KVM startup screen is displayed and the KVM window appears.

Step 9  Explore the options available in the KVM Console. You can send various keystroke combinations from the Macros menu or change the behavior under Tools > Session Options.


Step 10  Open the Virtual Media dialog by selecting the Launch Virtual Media option under the Tools menu. Here you can map locally attached media or an ISO image to the blade. The ISO image must be accessible by the computer from which the KVM was launched. This option is useful when installing an operating system or other application on the server blade.

Step 11  Disable the server blade by decommissioning it. Select Server Maintenance under Actions, and select the Decommission option.


Step 12  The blade will be disabled and marked with "Needs Resolution" in red. Successful decommissioning is indicated by a message window stating that the maintenance task completed successfully. The KVM Console closes.

Step 13  Re-acknowledge the blade by selecting the Reacknowledge Slot action. This will enable the blade and activate blade discovery.


Step 14 Confirm the re-acknowledge action by selecting Yes.

Step 15 Select the FSM tab to observe the blade discovery process, by means of which the

Cisco UCS acquires inventory information about the blade.

Step 16 If closed, re-open the KVM Console for the blade; select the Re-acknowledge

option under Server Maintenance to start the discovery process again. Observe the

KVM Console output. You should see the discovery process booting the Cisco UCS

operating system on the blade, which is used to gather the inventory information.

Case Study 2-2: Sizing the Cisco Unified Computing System

In this case study, you will review customer requirements to size the Cisco UCS, that is, to

select the proper types and quantities of Cisco UCS components, and practice what you learned

in the related module.

Overview

The customer, or service provider, is planning to offer managed data center services. The

service provider has decided to deploy Cisco UCS systems, which will be the basis for the

managed server services. The solution must support multiple customers; therefore, a multitenant

environment will be designed.

Assignment

Your assigned task is to perform the Cisco UCS sizing, which will be the basis for the BOM.

This includes the following:

� Review the service provider requirements.

� Design the Cisco UCS Server Blades per server type and define the hardware properties for

the individual server type.

� Design two Cisco UCS cluster classes.

� Design two Cisco UCS chassis classes and determine the number of required uplink ports.

� Select the Cisco UCS Fabric Interconnect switch type per Cisco UCS system class.

� Determine the number and type of server downlinks per selected Cisco UCS Fabric

Interconnect.

� Determine the number and type of LAN uplink ports per Cisco UCS Fabric Interconnect.

Work alone or together with one or more teammates to complete the assignment.

Requirements

You have conducted several technical meetings to collect the necessary input data, from which

you can do the Cisco UCS sizing.

You have identified that two types of servers will be deployed:

� Physical servers for the customers requiring the operating system installations

� VMware ESX hosts for VM deployments

For the physical server, the requirements are the following:

� A single, fast CPU

� 8 GB of memory

� Two 1GE NICs with 802.1Q trunking support for redundant LAN connectivity

� Peak estimated LAN traffic of 0.5 Gb/s

� Two Fibre Channel HBAs for connectivity to redundant SAN fabrics

� Peak estimated SAN traffic of 0.5 Gb/s

� SAN boot support for operating system installation

� Application data saved on the SAN attached storage

� Throughput can be halved upon connectivity failure for either LAN or SAN

For the VMware ESX host, the requirements are the following:

� Two fast multicore CPUs

� 128 GB of memory

� Six 1GE NICs with 802.1Q trunking support for redundant LAN connectivity with 5 Gb/s peak traffic

� Two Fibre Channel HBAs for connectivity to redundant SAN fabrics with 4 Gb/s peak traffic

� VMware ESX hosts booted from local disks

� VMware vSphere 4.0 VMotion, high availability, fault tolerance, DRS services

� Customer VMs stored on the VMFS volume located on the SAN attached storage

� SAN boot support for operating system installation

� Application data saved on the SAN attached storage

� Throughput must not be halved upon connectivity failure for either LAN or SAN

The following diagram summarizes the identified server requirements:

Design Requirements (figure)

Physical server requirements             VS-class1 server requirements
Single fast CPU                          Dual multicore CPU
8 GB memory                              128 GB memory
2 x 1GE trunk with VLANs (redundancy)    6 x 1GE trunk with VLANs (redundancy)
LAN traffic peak = 0.5-Gb/s Ethernet     LAN traffic peak = 5-Gb/s Ethernet
2 x HBA for redundant SAN connectivity   2 x HBA for redundant SAN connectivity
SAN traffic peak = 0.5-Gb/s FC           SAN traffic peak = 4-Gb/s FC
SAN boot for operating system            Local disk for VMware ESX
SAN for application data                 SAN for customer VMs
Throughput can be halved upon failure    Throughput must not be halved upon failure

There will be multiple Cisco UCS systems deployed. To simplify the future expansion and management effort, the decision has been made to deploy two types of Cisco UCS clusters:

� Low volume traffic, which will support physical servers only

� High volume traffic, which will support the VMware ESX hosts

In both Cisco UCS cluster classes, the design should support fully populated chassis and

maximize the number of ports used.
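The uplink counts asked for in this assignment follow from simple bandwidth arithmetic. The Python sketch below is an illustrative aid only (the function name is ours; the 10GE-uplink and two-IOM assumptions are taken from the UCS 5108/2104XP hardware used in this course) showing the minimum IOM uplinks implied by the stated peak traffic:

```python
# Rough minimum-uplink sizing for one Cisco UCS 5108 chassis, based on
# the per-server peak LAN traffic stated in the requirements.
# Assumptions: 10GE IOM uplinks, two IOMs per chassis (fabric A and B).
import math

IOM_UPLINK_GBPS = 10   # UCS 2104XP uplinks are 10 Gigabit Ethernet
IOMS_PER_CHASSIS = 2   # one IOM per fabric

def iom_uplinks_needed(blades, peak_gbps_per_blade, can_halve_on_failure):
    """Minimum 10GE uplinks per IOM for a chassis.

    If throughput may be halved upon failure, the two fabrics share the
    chassis peak; otherwise each fabric must carry the full peak alone.
    """
    chassis_peak = blades * peak_gbps_per_blade
    per_fabric = (chassis_peak / IOMS_PER_CHASSIS
                  if can_halve_on_failure else chassis_peak)
    return max(1, math.ceil(per_fabric / IOM_UPLINK_GBPS))

# Physical-server chassis: 8 blades at 0.5 Gb/s peak, halving allowed
print(iom_uplinks_needed(8, 0.5, True))   # 1 uplink per IOM

# ESX-host chassis: 4 blades at 5 Gb/s peak, halving not allowed
print(iom_uplinks_needed(4, 5.0, False))  # 2 uplinks per IOM
```

These are minimums; because the design should also maximize the number of ports used, a chassis may be cabled with more IOM uplinks than the traffic figures strictly require.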

Case Study Aids

You can use the following tables to write down the information about the design you make.

Designed Server Blade Classes

Server Blade Class Name:

Server Blade Class Name:

Blade Type

Adapter

Processor

Memory

LAN Throughput

SAN Throughput

Redundancy

Designed Chassis Classes

Chassis Class Name:

Chassis Class Name:

Blade Class

Blade Qty

IOM Qty

IOM Uplink Qty

SFP+ type

Designed Cluster Classes

Cluster Class Name:

Cluster Class Name:

Fabric Interconnect Type

Chassis Class

Chassis Qty

Expansion Module

Server Link Qty.

LAN Uplink Qty.

SAN Uplink Qty.

Lab 3-1: Deploying a Server with Cisco Unified Computing System

Complete this lab exercise to examine how you can deploy a server within the Cisco Unified

Computing System using service profiles, resource pools, server pools, and policies, and

boot-from-SAN functionality.

Activity Objective

In this activity, you will use the Data Center Unified Computing Design lab topology and Cisco

UCS to create Cisco UCS resource pools, server policies and pools, and service profiles, and to

deploy a server that boots from SAN. After completing this activity, you will be able to meet these

objectives:

� Create MAC resource pools

� Create WWNN and WWPN resource pools

� Create UUID suffix resource pools

� Create server pools

� Create server qualification policies

� Create server pool policies

� Create advanced service profile

� Associate service profile with a server blade

� Boot the server blade from the SAN

Visual Objective

The figure illustrates what you will accomplish in this activity.

Figure: Deploying a Server with Cisco Unified Computing System. The figure shows the lab

pods (Pod1 through Pod6) and the elements of a service profile (name, UUID, MAC address,

LAN config, WWN address, SAN config, boot config, and so on), which are drawn from the

following pools and policies:

MAC pool 1:   00:25:b5:00:00:01
WWNN pool 1:  20:01:00:00:00:00:00:01
WWPN pool 1:  20:00:00:00:00:00:00:01
UUID pool 1:  1fffffff-2fff-3fff-1000-000000000001
Boot policy:  boot devices and boot order

Note The Cisco UCS chassis in your lab topology might have more than six server blades

inserted.

Required Resources

These are the resources and equipment required to complete this activity:

� Two Cisco UCS 6120XP Fabric Interconnect switches

� One Cisco UCS 5108 chassis

� Two Cisco UCS 2104XP I/O modules

� One Cisco UCS B200-M1 server blade

� Storage LUN with preinstalled Windows 2003 server

� One MDS 9124 switch

Job Aid

Refer to the Lab Aids section of this lab guide for the following information:

� How to access the lab

� Assigned student desktop IP address and login credentials

� Cisco UCS cluster IP address and login credentials

You will use the information in the following tables to complete the lab exercise.

SAN Parameter Assignment

Pod    VSAN ID   VSAN Name   UCS Fabric   FCoE VLAN ID
Pod1   11        VSAN11      A            1011
Pod2   11        VSAN11      A            1011
Pod3   11        VSAN11      A            1011
Pod4   11        VSAN11      A            1011
Pod5   11        VSAN11      A            1011
Pod6   11        VSAN11      A            1011
Pod7   11        VSAN11      A            1011
Pod8   11        VSAN11      A            1011

WWNN and WWPN Assignment

Pod    VSAN ID   WWNN                      WWPN
Pod1   11        20:01:00:00:00:00:01:01   20:00:00:00:00:00:01:01
Pod2   11        20:01:00:00:00:00:02:01   20:00:00:00:00:00:02:01
Pod3   11        20:01:00:00:00:00:03:01   20:00:00:00:00:00:03:01
Pod4   11        20:01:00:00:00:00:04:01   20:00:00:00:00:00:04:01
Pod5   11        20:01:00:00:00:00:05:01   20:00:00:00:00:00:05:01
Pod6   11        20:01:00:00:00:00:06:01   20:00:00:00:00:00:06:01
Pod7   11        20:01:00:00:00:00:07:01   20:00:00:00:00:00:07:01
Pod8   11        20:01:00:00:00:00:08:01   20:00:00:00:00:00:08:01

MAC and UUID Suffix Assignment

Pod MAC UUID Suffix

Pod1 00:25:b5:00:01:01 1000-000000000001

Pod2 00:25:b5:00:02:01 2000-000000000001

Pod3 00:25:b5:00:03:01 3000-000000000001

Pod4 00:25:b5:00:04:01 4000-000000000001

Pod5 00:25:b5:00:05:01 5000-000000000001

Pod6 00:25:b5:00:06:01 6000-000000000001

Pod7 00:25:b5:00:07:01 7000-000000000001

Pod8 00:25:b5:00:08:01 8000-000000000001

Lab NetApp Storage PWWN

1 50:0a:09:81:86:a9:fb:f1

2

IP Addressing (L = lab#, P = pod#)

Pod    VLAN   IP Network       Windows IP    Default Gateway
PodP   LP1    172.17.P1.0/24   172.17.P1.1   172.17.P1.254
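All identifiers in the tables above follow a simple per-pod numbering scheme. The following Python sketch is an unofficial aid (the function name is ours) that derives them from the lab number L and pod number P, so you can double-check the values you enter:

```python
# Derive the per-pod lab identifiers from the lab number L and the pod
# number P, following the patterns in the Job Aid tables above.

def pod_params(L, P):
    return {
        "mac":         f"00:25:b5:00:{P:02d}:01",
        "uuid_suffix": f"{P}000-000000000001",
        "wwnn":        f"20:01:00:00:00:00:{P:02d}:01",
        "wwpn":        f"20:00:00:00:00:00:{P:02d}:01",
        "vlan":        int(f"{L}{P}1"),          # VLAN ID "LP1"
        "network":     f"172.17.{P}1.0/24",
        "windows_ip":  f"172.17.{P}1.1",
        "gateway":     f"172.17.{P}1.254",
    }

# Example: pod 3 in lab 1
params = pod_params(1, 3)
print(params["mac"])   # 00:25:b5:00:03:01
print(params["vlan"])  # 131
```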

Task 1: Creating MAC Resource Pool

In this task, you will create a MAC address resource pool. You will later use this MAC address

when creating the service profile.

Note You will conduct this task on Cisco UCS equipment shared between the pods. For that

reason, you will not change any parameters if not required by the task.

Activity Procedure

Complete these steps:

Step 1 Log into the Cisco UCS Manager if not already logged in, and select the LAN tab in

the left navigation pane. Set the Filter to Pools to limit the output.

Step 2 Right-click the MAC Pools option in the left pane and select Create MAC Pool

from the menu to open the MAC pool creation wizard.

Step 3 Enter the MAC pool name in the form of podX-mac-win (where X is your pod

number) and click Next to proceed with the pool creation.

Step 4 Click Add to add the MAC addresses to the pool.

Step 5 Enter the first MAC address of the pool for your Windows 2003 server. The MAC

address is located in the MAC and UUID Suffix Assignment table under the Job Aid

section. Leave the pool size at 1. The first three octets of the MAC address are

prepopulated with 00:25:b5 and cannot be changed. Click OK to

confirm the action.

Note Normally you would create a pool consisting of several MAC addresses. In this lab exercise,

you want the service profile using this pool to get this specific MAC address, therefore the

pool size is 1.

Step 6 Finish the MAC pool creation by clicking Finish.

Step 7 You will see the dialog box notifying you that the MAC pool has been successfully

created.

Task 2: Creating WWNN Resource Pool

In this task, you will create a WWNN address resource pool. You will later use this WWNN

address when creating the service profile.

Note You will conduct this task on Cisco UCS equipment shared between the pods. For that

reason, you will not change any parameters if not required by the task.

Activity Procedure

Complete these steps:

Step 1 Select the SAN tab in the left navigation pane. Set the Filter to Pools to limit the

output.

Step 2 Right-click the WWNN Pools option in the left pane and select the Create WWNN

Pool from the menu to open the WWNN pool creation wizard.

Step 3 Enter the WWNN pool name in the form of podX-wwnn-win (where X is your pod

number) and click Next to proceed with the pool creation.

Step 4 Click Add to add the WWNN addresses to the pool.

Step 5 Enter the first WWNN address of the pool for your Windows 2003 server. The

WWNN address is located in the WWNN and WWPN Assignment table under the

Job Aid section. Leave the pool size at 1. Click OK to confirm the action.

Note Normally you would create a pool consisting of several WWNN addresses. In this lab

exercise, you want the service profile using this pool to get this specific WWNN address,

therefore the pool size is 1.

Step 6 Click Next to proceed with the pool creation, and click Finish.

Step 7 You will see the dialog box notifying you that the WWNN pool has been successfully

created.

Task 3: Creating WWPN Resource Pool

In this task, you will create a WWPN address resource pool. You will later use this WWPN

address when creating the service profile.

Note You will conduct this task on Cisco UCS equipment shared between the pods. For that

reason, you will not change any parameters if not required by the task.

Activity Procedure

Complete these steps:

Step 1 Right-click the WWPN Pools option in the left pane and select the Create WWPN

Pool from the menu to open the WWPN pool creation wizard.

Step 2 Enter the WWPN pool name in the form of podX-wwpn-win (where X is your pod

number) and click Next to proceed with the pool creation.

Step 3 Click Add to add the WWPN addresses to the pool.

Step 4 Enter the first WWPN address of the pool for your Windows 2003 server. The

WWPN address is located in the WWNN and WWPN Assignment table under the

Job Aid section. Leave the pool size at 1. Click OK to confirm the action.

Note Normally you would create a pool consisting of several WWPN addresses. In this lab

exercise, you want the service profile using this pool to get this specific WWPN address,

therefore the pool size is 1.

Step 5 Now click Next to proceed with the pool creation, and click Finish.

Step 6 You will see the dialog box notifying you that the WWPN pool has been successfully

created.

Task 4: Creating UUID Suffix Resource Pool

In this task, you will create a UUID suffix resource pool. You will later use this UUID suffix

when creating the service profile.

Note You will conduct this task on Cisco UCS equipment shared between the pods. For that

reason, you will not change any parameters if not required by the task.

Activity Procedure

Complete these steps:

Step 1 Select the Servers tab in the left navigation pane. Set the Filter to Pools to limit the

output.

Step 2 Right-click the UUID Suffix Pools option in the left pane and select the Create

UUID Suffix Pool from the menu to open the UUID suffix pool creation wizard.

Step 3 Enter the UUID suffix pool name in the form of podX-uuid-pool (where X is your

pod number), keep the “derived” value for the UUID prefix, and click Next.

Step 4 Click Add to add the UUID suffixes to the pool.

Step 5 Enter the first UUID suffix of the pool for your Windows 2003 server. The UUID

suffix is located in the MAC and UUID Suffix Assignment table under the Job Aid

section. Leave the pool size at 1. Click OK to confirm the action.

Note Normally you would create a pool consisting of several UUID suffixes. In this lab exercise,

you want the service profile using this pool to get this specific UUID suffix, therefore the pool

size is 1.

Step 6 Click Next to proceed with the pool creation, and click Finish.

Step 7 You will see the dialog box notifying you that the UUID suffix pool has been

successfully created.

Task 5: Creating Server Pool

In this task, you will create a server pool. You will later use this server pool when creating the

service profile.

Note You will conduct this task on Cisco UCS equipment shared between the pods. For that

reason, you will not change any parameters if not required by the task.

Activity Procedure

Complete these steps:

Step 1 Select the Servers tab in the left navigation pane. Set the Filter to Pools to limit the

output.

Step 2 Right-click the Server Pools option in the left pane and select the Create Server

Pool from the menu to open the server pool creation wizard.

Step 3 Enter the Server Pool name in the form of podX-srvpool (where X is your pod

number) and click Next.

Step 4 Click Finish, without manually adding the servers. You will add servers through the

server pool policy and qualification policy. The wizard ends with a confirmation dialog

box.

Task 6: Creating Server Pool Policy and Qualification Policy

In this task, you will create a server pool policy and qualification policy. You will use these

policies to automatically populate the previously created server pool.

Note You will conduct this task on Cisco UCS equipment shared between the pods. For that

reason, you will not change any parameters if not required by the task.

Activity Procedure

Complete these steps:

Step 1 Set the Filter value to Policies to limit the output.

Step 2 Right-click Server Pool Policy Qualifications and select the Create Server Pool

Policy Qualification from the menu to start the wizard.

Step 3 Set the policy name to podX-poolqual (where X is your pod number).

Step 4 Select Create Chassis/Server Qualifications from the Actions options.

Step 5 Leave the First Chassis ID and Number of Chassis values at 1, because there is only

one chassis in the Cisco UCS system.

Step 6 Click the green plus sign to add the server qualification. Set the First Slot ID to

your assigned server ID and click Finish Stage >> to return to the previous screen.

Step 7 Click Finish to conclude the chassis/server qualification creation.

Step 8 In the qualifications table, you will see the created chassis/server qualification

policy.

Step 9 Also explore the other qualification options before confirming the policy creation. After

reviewing the options, click OK to conclude the policy creation. You will see a dialog

box confirming successful policy creation.

Step 10 Switch the Filter value in the left pane back to Pools. Expand the Server Pools in the

tree structure and select the server pool that you created.

Step 11 Click the green plus sign under the Pool Policies section in the right pane to create

the Server Pool Policy.

Step 12 Enter the policy name in the form podX-poolpolicy (where X is the pod number).

Set the Qualification to the pool policy qualification you have created. Click OK to

apply the configuration.

Step 13 Review the pool policies for your server pool. You should now see the pool policy

that you have just created. The Size and Assigned values are 0, because no server

has been placed in the server pool.

Step 14 The server pool policies take action when the server blade is discovered. This

happens either when the blade is inserted into the chassis or when you reacknowledge

the server. Navigate to the Equipment tab in the left pane and expand the tree

structure. Right-click on the assigned server blade and select the Re-acknowledge

Server option from the menu. When asked, confirm the action.

Step 15 Observe the blade status in the FSM tab on the right pane. You should see the

discovery process in progress.

Step 16 When the server has been reacknowledged, navigate back to the Servers tab in the

left pane. Set the Filter value to Pools and expand the tree structure in the left pane.

Select the server pool that you created.

Step 17 Select the Servers tab in the right pane to examine the pool membership. You

should see that your server blade is now part of the server pool that you created.

Task 7: Creating Advanced Service Profile

In this task, you will create an advanced service profile and boot Windows 2003 from the SAN.

You will use the pools that you created to build the server personality.

Note VSAN and VLAN configuration is prepared in advance. Therefore, you do not have to

configure either type of connectivity.

Activity Procedure

Complete these steps:

Step 1 Set the Filter value to Service Profiles to limit the output. Expand the Service

Profile option in the left pane.

Step 2 Right-click the root option and select Create Service Profile (expert) from the

menu.

Step 3 Set the service profile name to podX-server-win (where X is your pod number). Set

the UUID Assignment value to the UUID suffix pool that you created in the

previous task. Click Next to proceed with the configuration.

Step 4 Next, you need to set the storage configuration parameters. Leave the Local Storage

and Scrub Policy values as they are.

Step 5 Select the Expert option for the SAN storage configuration. Set the WWNN

Assignment value to the WWNN pool that you created in the previous task.

Step 6 Click Add to add the vHBA interfaces.

Step 7 In the Create vHBA dialog box, set the following, and click OK to apply the

configuration:

� Name field to vHBA1

� WWPN Assignment value to the WWPN pool that you created in the previous

task

� Fabric ID A and VSAN 11 as specified in the SAN Parameter Assignment table

in the Job Aid section for your blade server

Step 8 Click Next to proceed to the Networking section. Select the Expert option for the

LAN connectivity configuration. Click Add to add the vNIC to the service profile.

Step 9 In the Create vNIC dialog box, set the following, and click OK to apply the

configuration when finished:

� Name field to vNIC1

� MAC Address Assignment value to the MAC pool that you created in the

previous task

� Fabric ID to A and check the Enable Failover box

� Click Create VLAN and create VLAN podP-data with VLAN ID LP1 (where

L is the lab number and P is your pod number)

� Do not forget to check the Native VLAN checkbox.

Step 10 Proceed with the service profile creation by clicking Next. Now you have to set the

server boot order. Click Create Boot Policy to enter a new boot policy for your

server.

Step 11 Name the policy podX-sanboot-win, where X is your pod number. Click vHBAs

and Add SAN Boot. In the Add SAN Boot dialog box, enter the name of the vHBA

through which the SAN boot should be performed. In your case, this is vHBA1.

Leave the Type value at Primary, because the server will have only one vHBA.

Step 12 Click Add SAN Boot Target, which was previously grayed out. Set the Boot Target

LUN value to 1. Set the Boot Target WWN to the NetApp storage PWWN, which is

50:0a:09:81:86:a9:fb:f1 for Lab 1.

Click OK to apply the configuration.

Step 13 Now select the newly created boot policy and apply it to the service profile.

Step 14 Click Next to proceed to the Server Assignment options. Select the Select existing

Server option from the Server Assignment drop-down box. This box gives you the

server blades available in the chassis. Select the server blade that was assigned to

you. Make sure you select the correct server; if in doubt, ask your

instructor for help.

Note You could also select your existing server pool here, because your server blade was placed

in the pool in the first part of the exercise.

Step 15 Click Next. Leave the default values on the final section where Operational policies

are configured and click Finish. You have now created the service profile.

Step 16 Because you selected the server blade during service profile creation, the profile is being

applied to that blade. You can observe the process by opening the KVM

Console for the server blade. When the service profile is applied, the server will be

powered on if you have selected that option. Otherwise, navigate to the Equipment

tab in the left pane and expand the tree structure. Right-click your server blade and

select the Boot Server option in the menu.

Step 17 The server will boot the Windows 2003 operating system. Log into the server with the

username Administrator and the password 1234QWer.

Step 18 After logging into the server, explore the operating system. Examine the hardware

characteristics from the Windows 2003 perspective: processor and the amount of

memory, LAN and SAN network adapters, and so on. The information you see

should correspond to the hardware inventory seen before in the Cisco UCS Manager.

Step 19 Open Control Panel > Network Connections and assign the IP address 172.17.P2.1/24

to the first (connected) 10GE adapter. Use 172.17.P2.254 as the default gateway

(where P is always your pod number).

Step 20 After reviewing the operating system parameters, verify the IP network connectivity.

Open a command prompt and issue the ipconfig command. You should see that one

of the LAN adapters is configured with the IP address 172.17.P2.1/24 (where P is your

pod number); the other one is disconnected (we did not set up a second vNIC).

Step 21 Next, ping your own IP address 172.17.P2.1 and the default gateway 172.17.P2.254.

Both pings should succeed.

Step 22 Finally, use Telnet to reach the default gateway IP address 172.17.P2.254. No

login is required. Take a look at the MAC address table. Do you see the MAC

address from your MAC pool?

Step 23 Leave the Windows 2003 server operational when you finish the lab exercise.

Case Study 3-2: Designing Server Deployment

In this case study, you will review the customer requirements to design the server deployment

with Cisco UCS. You will define the naming convention, pools, policies, service profile

templates, and service profiles to practice what you have learned in the related module.

Overview

The customer, or service provider, is planning to offer managed data center services. They have

decided to deploy Cisco UCS systems, and you have already completed the sizing for the

solution. The solution will support a multitenant environment.

Server Blade Classes

Two Cisco UCS server blade classes have been identified: PS-class1 and VS-class1.

The PS-class1 will support the physical server deployments and has the following

characteristics:

� Cisco UCS B200-M1 server blade

� One Intel Xeon E5520 processor

� 8 GB memory

� Cisco UCS M71KR-Q CNA mezzanine for redundant LAN and SAN connectivity

The VS-class1 will support the VMware ESX host deployments and has the following

characteristics:

� UCS B250-M1 server blade

� Two Intel Xeon E5570 processors

� 128 GB memory

� Cisco UCS M81KR VIC mezzanine for redundant LAN and SAN connectivity

� Six LAN NICs and two HBAs

Chassis Classes

Two Cisco UCS chassis classes have been identified: BC-class1 and BC-class2.

The BC-class1 will host PS-class1 server blades and has the following characteristics:

� Cisco UCS 5108 chassis

� Up to 8 PS-class1 server blades per chassis

� Two Cisco UCS 2104XP IOMs per chassis

� Single 10G DCE uplink from an individual Cisco UCS 2104XP IOM

The BC-class2 will host VS-class1 server blades and has the following characteristics:

� Cisco UCS 5108 chassis

� Up to 4 VS-class1 server blades per chassis

� Two Cisco UCS 2104XP IOMs per chassis

� Four 10G DCE uplink ports from an individual Cisco UCS 2104XP IOM

Cluster Classes

Two UCS cluster classes have been identified: UCS-class1 and UCS-class2.

The UCS-class1 cluster will connect only BC-class1 chassis and has the following

characteristics:

� Two Cisco UCS 6120XP Fabric Interconnect switches

� Two N10-E0080 8-port 1/2/4 Gb/s Fibre Channel expansion modules

� Up to 16 BC-class1 chassis will be connected in the cluster

� Up to 128 PS-class1 server blades will be present in the cluster

� Eight 4-Gb/s Fibre Channel uplink ports per Fabric Interconnect will be used for SAN

connectivity

� Four 10-Gb/s Ethernet LAN uplink ports per Fabric Interconnect will be used for LAN

connectivity

� Sixteen 10-Gb/s DCE server downlinks per Fabric Interconnect

The Cisco UCS-class2 cluster will connect only BC-class2 chassis and has the following

characteristics:

� Two Cisco UCS 6120XP Fabric Interconnect switches

� Two N10-E0080 8-port 1/2/4 Gb/s Fibre Channel expansion modules

� Up to 2 BC-class2 chassis will be connected in the cluster

� Up to 8 VS-class1 server blades will be present in the cluster

� Eight 4-Gb/s Fibre Channel uplink ports per Fabric Interconnect will be used for SAN

connectivity

� Four 10-Gb/s Ethernet LAN uplink ports per Fabric Interconnect will be used for LAN

connectivity

� Eight 10-Gb/s DCE server downlinks per Fabric Interconnect
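A quick sanity check on these cluster classes is to compare aggregate server-facing bandwidth with uplink bandwidth per Fabric Interconnect. The Python below is an illustrative calculation using the port counts listed above (the function name is ours, not part of any Cisco tool):

```python
# LAN and SAN oversubscription per Fabric Interconnect, computed from
# the downlink and uplink port counts given for each cluster class.

def oversub(downlinks, downlink_gbps, uplinks, uplink_gbps):
    """Ratio of server-facing capacity to uplink capacity."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

# UCS-class1: 16 x 10G downlinks, 4 x 10GE LAN uplinks, 8 x 4G FC uplinks
print(oversub(16, 10, 4, 10))  # LAN: 4.0  (4:1)
print(oversub(16, 10, 8, 4))   # SAN: 5.0  (5:1)

# UCS-class2: 8 x 10G downlinks, same uplink configuration
print(oversub(8, 10, 4, 10))   # LAN: 2.0  (2:1)
print(oversub(8, 10, 8, 4))    # SAN: 2.5  (2.5:1)
```

As expected, the high-volume UCS-class2 cluster is provisioned with roughly half the oversubscription of UCS-class1 on both the LAN and the SAN side.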

The following tables summarize the result of the Cisco UCS sizing process.

Service Provider Multitenant Environment Summary

� Installed in multiple data centers, using the same design approach at every site

� Redundancy required

UCS system quantity and per-UCS blade chassis population:

UCS Class     Quantity (initial)   Max. Blade Quantity   BC-class1   BC-class2
UCS-class1    1/site               120                   15          -
UCS-class2    1/site               8                     -           2

Per-chassis maximum blade population:

Chassis Class   PS-class1   VS-class1
BC-class1       8           -
BC-class2       -           2

Server blade types and quantity:

                PS-class1 server blade   VS-class1 server blade
Adapter         UCS M71KR-Q CNA          UCS M81KR VIC
Processor       Intel Xeon E5520         2x Intel Xeon E5570
Memory          8 GB                     128 GB
Quantity        120                      8
Connectivity    LAN, SAN                 6x LAN, SAN
Redundancy      Required                 Required

Assignment

Work alone or together with one or more teammates to complete the following:

� Define Cisco UCS management IP addressing

� Define a naming convention for Cisco UCS systems: MAC, WWNN, WWPN, UUID

resource pools, server pools, server pool policy qualifications, server pool policies, service

profile templates, and service profile naming convention

� Define resource pools (MAC, WWNN, WWPN, UUID)

� Define server pool policy qualification, pool policies, and server pools

� Define boot policies

� Define service profile templates

� Define service profiles

� Define hierarchy with organizations for the individual customer-related configuration

Page 72: Design


Requirements

The design must meet the following requirements:

- Servers using PS-class1 server blades will be booted from SAN
- ESX hosts using VS-class1 server blades will boot from local disk
- Virtual machines will be stored on SAN-attached storage
- VMware vSphere 4.0 VMotion, high availability, fault tolerance, and DRS services will be used to optimize resource usage and achieve better high availability
- Redundant LAN connectivity is required for physical and ESX hosts
- Redundant HBA connectivity is required for accessing SAN-attached storage
- There will be a maximum of 120 PS-class1 server blades per Cisco UCS-class1 cluster
- There will be a maximum of 8 VS-class1 server blades per Cisco UCS-class2 cluster
- MAC, WWNN, and WWPN addresses and UUID suffixes should be automatically assigned
- The MAC, WWNN, and WWPN addresses and UUID suffixes should incorporate information about the type and number of the cluster
- Servers should be assigned to the service profile based on the server blade characteristics
- Manual pool management is not desired, since the deployment is large
- The service provider will deploy only a limited number of different server types
- The administrator should be able to define common parameters of multiple service profiles from a single place initially
- Define a service profile example for the Windows server
- Define a service profile example for the Linux server
- Define a service profile example for the ESX host
- LAN traffic should be load-balanced over fabrics A and B
- Management IP addresses should be taken from the 192.168.0.0/16 address space and should support up to 64 Cisco UCS-class1 clusters and up to 512 Cisco UCS-class2 clusters.
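One way to check that the 192.168.0.0/16 space can cover 64 UCS-class1 and 512 UCS-class2 clusters is to carve per-cluster subnets with Python's ipaddress module. The subnet sizes below are our assumption, not part of the case study: a /24 per class1 cluster (cluster IP, two Fabric Interconnect IPs, up to 120 blade management IPs, and a gateway) and a /28 per class2 cluster (cluster IP, two FI IPs, up to 8 blade IPs, and a gateway):

```python
import ipaddress

space = ipaddress.ip_network("192.168.0.0/16")

# Carve 64 x /24 for UCS-class1 clusters, then 512 x /28 for UCS-class2
# clusters from the remaining /24 blocks.
subnets = iter(space.subnets(new_prefix=24))
class1 = [next(subnets) for _ in range(64)]
class2 = []
for net in subnets:
    class2.extend(net.subnets(new_prefix=28))
    if len(class2) >= 512:
        break
class2 = class2[:512]

print(class1[0], class1[-1])  # 192.168.0.0/24 192.168.63.0/24
print(class2[0], class2[-1])  # 192.168.64.0/28 192.168.95.240/28
```

With these sizes, both cluster types fit comfortably inside the /16, leaving 192.168.96.0/19 and above unused.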

Page 73: Design

Case Study Aids

You can use the following tables to write down the information about the design you make.

UC-class1 Cluster Management Design

                                                           Cluster 1   Cluster 2   Cluster n
Cluster IP
Fabric Interconnect IP addresses (switch-A and switch-B)
Blade Management IP pool
Gateway

UC-class2 Cluster Management Design

                                                           Cluster 1   Cluster 2   Cluster n
Cluster IP
Fabric Interconnect IP addresses (switch-A and switch-B)
Blade Management IP pool
Gateway

UCS Cluster MAC Pool Design

UCS Cluster Class   UCS Cluster Name   MAC Pool Name   First MAC   Last MAC
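The requirement that addresses encode the cluster type and number can be met by reserving octets of the pool's starting MAC. The sketch below uses the 00:25:b5 prefix shown in the course's pool examples, and our own assumed scheme: the fourth octet carries the cluster class and the fifth the cluster number (so at most 256 clusters per class without widening the field):

```python
def mac_pool(cluster_class, cluster_num, size):
    """Return (first, last) MAC of a pool encoding cluster class and number.

    Scheme (an assumption, not the only option): 00:25:B5 prefix,
    4th octet = cluster class, 5th octet = cluster number,
    6th octet = pool entries.
    """
    if not (0 <= cluster_num <= 255 and 1 <= size <= 256):
        raise ValueError("cluster number and pool size must fit in one octet")
    base = (0x0025B5 << 24) | (cluster_class << 16) | (cluster_num << 8)
    fmt = lambda v: ":".join(f"{(v >> s) & 0xFF:02X}" for s in range(40, -8, -8))
    return fmt(base), fmt(base + size - 1)

first, last = mac_pool(1, 5, 120)  # class 1, cluster 5, 120 addresses
print(first, last)                 # 00:25:B5:01:05:00 00:25:B5:01:05:77
```

The same integer-arithmetic pattern can be reused for the WWNN, WWPN, and UUID suffix pools in the tables that follow.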

Page 74: Design

UCS Cluster WWNN Pool Design

UCS Cluster Class   UCS Cluster Name   WWNN Pool Name   First WWNN   Last WWNN

UCS Cluster WWPN Pool Design for the First HBA

UCS Cluster Class   UCS Cluster Name   WWPN Pool Name   First WWPN   Last WWPN

UCS Cluster WWPN Pool Design for the Second HBA

UCS Cluster Class   UCS Cluster Name   WWPN Pool Name   First WWPN   Last WWPN

UCS Cluster UUID Pool Design

UCS Cluster Class   UCS Cluster Name   UUID Pool Name   First UUID   Last UUID

UCS Server Pool Design

UCS Cluster Class   UCS Cluster Name   Server Pool Name

Page 75: Design


UCS Server Pool Qualification Policy Design

UCS Cluster Class   UCS Cluster Name   Policy Pool Name   Rules

Page 76: Design


Lab 4-1: Implementing Management Hierarchy

Complete this lab exercise to examine how you can implement a management hierarchy in the Cisco Unified Computing System using organizations, locales, roles, and users.

Activity Objective

In this activity, you will use the Data Center Unified Computing Design lab topology and Cisco

UCS to create the management hierarchy. After completing this activity, you will be able to

meet these objectives:

- Create an organization
- Create a locale
- Create roles
- Create users
- Link locales, roles, and users

Visual Objective

The figure illustrates what you will accomplish in this activity.

[Figure: Implementing Management Hierarchy — under the Root organization there is one organization per pod (Org POD1 through Org POD6). Each organization holds its own MAC pool (e.g., 00:25:b5:00:00:01), WWNN pool (e.g., 20:01:00:00:00:00:00:01), WWPN pool (e.g., 20:00:00:00:00:00:00:01), UUID pool (e.g., 1fffffff-2fff-3fff-1000-000000000001), a boot policy (boot devices and boot order), and a service profile (name, UUID, MAC address, LAN config, WWN address, SAN config, boot config), and is associated with a pod user (User POD1 through User POD6).]

Note The Cisco UCS chassis in your lab topology might have more than six server blades

inserted.

Page 77: Design


Required Resources

These are the resources and equipment required to complete this activity:

- Two Cisco UCS 6120XP Fabric Interconnect switches
- One Cisco UCS 5108 chassis
- Two Cisco UCS 2104XP I/O modules
- One Cisco UCS B200-M1 server blade

Job Aid

Refer to the Lab Aids section of this lab guide for the following information:

- How to access the lab
- Assigned student desktop IP address and login credentials
- Cisco UCS cluster IP address and login credentials

Page 78: Design


Task 1: Creating an Organization

In this task, you will create an organization that you can use to store your pod-related configuration.

Note You will conduct this task on Cisco UCS equipment shared between the pods. For that

reason, you will not change any parameters if not required by the task.

Activity Procedure

Complete these steps:

Step 1 Log into the Cisco UCS Manager, if not already logged in, and select the Servers

tab in the left navigation pane. Expand the tree structure down to the root

organization under the Service Profiles option.

Note You will create the organization from the Service Profiles area, although it could be created from any place where organizations are used to manage configuration (for example, from the LAN or SAN tab).

Step 2 Right-click on the root organization and select the Create Organization option

from the menu. You can also reach the Create Organization selection from the

Actions and General tabs.

Page 79: Design


Step 3 Enter the organization name in the form podX-org (where X is your pod number) in

the Create Organization wizard. Click OK to create the organization.

Step 4 Confirm the organization creation by clicking OK in the confirmation window.

Step 5 Now examine the root organization under the Service Profiles to confirm that

creation was successful.

Page 80: Design


Step 6 The organization you have created is now visible in other places. Navigate to the

LAN tab in the left pane. Expand the Policies option in the tree structure down to

the root organization. You can see the created organization here also.

Page 81: Design


Task 2: Creating Locales

In this task, you will create a locale to which you will assign the organization that you created in the previous task.

Note You will conduct this task on Cisco UCS equipment shared between the pods. For that

reason, you will not change any parameters if not required by the task.

Activity Procedure

Complete these steps:

Step 1 Select the Admin tab in the left navigation pane. Then change the Filter value to

User Management.

Step 2 Expand the User Management down to the Locales under the User Services. Next,

right-click Locales. Select the Create Locale option from the menu.

Page 82: Design


Step 3 Enter the locale name in the form podX-locale (where X is your pod number) in the Create Locale window. Click Next to proceed with the locale creation.

Step 4 Select the organization that you previously created and assign it to the locale that you are creating by dragging and dropping the organization to the right pane (whitespace). You will see the newly created locale connected to your organization. Click Finish to create the locale.

Note You might also see other organizations that were created by a member of other pods.

Page 83: Design


Step 5 When asked, confirm the locale creation by clicking OK.

Step 6 Expand the Locales under the User Services in the Admin tab. You should see the

locale that you created (and possibly other pod locales). Select the locale to review

the configuration. You should see your organization under assigned organizations in

the General tab.

Page 84: Design


Task 3: Creating Users

In this task, you will create a Cisco UCS Manager user.

Note You will conduct this task on Cisco UCS equipment shared between the pods. For that

reason, you will not change any parameters if not required by the task.

Activity Procedure

Complete these steps:

Step 1 Right-click User Services in the left pane and select the Create User option from

the menu.

Step 2 Enter the following user parameters in the Create User wizard (replace X with your

pod number):

- Login ID: podX-user
- Password: 1234QWer
- Roles: network, storage, server-profile
- Locales: podX-locale

When the parameters are entered, click OK to create the user.

Page 85: Design


Step 3 When asked to confirm the user account creation, click OK.

Step 4 Now expand the Locally Authenticated Users option. Select the user you have just

created from the user list to examine and verify the settings.

Page 86: Design


Step 5 Test the newly created user by logging out of the Cisco UCS Manager by clicking

Exit on the top toolbar.

Step 6 Select the Log off admin option from the Exit dialog.

Step 7 Log back into the Cisco UCS Manager using the credentials of the user you have

just created.

Page 87: Design


Step 8 Navigate to the Admin tab and select the User Management option in the Filter

field. Expand the User Management down to the User Services.

Step 9 Right-click on User Services and observe the menu options. All the create options are now grayed out because the user that you are logged in as does not have the AAA role assigned.

Page 88: Design


Task 4: Creating Roles

In this task, you will create a Cisco UCS Manager role.

Note You will conduct this task on Cisco UCS equipment shared between the pods. For that

reason, you will not change any parameters if not required by the task.

Activity Procedure

Complete these steps:

Step 1 Log out of the Cisco UCS Manager again and log back in using the Admin user.

Step 2 Right-click User Services and select the Create Role option from the menu.

Page 89: Design


Step 3 Enter the role name in the form podX-role (where X is your pod number). Select AAA from the Privileges option and finish the role creation by clicking OK.

Step 4 Expand the Locally Authenticated Users. Next, select the user that you created,

check the box next to the role that you created, and click Save Changes to apply the

configuration.

Page 90: Design


Step 5 The action will not succeed, because a user belonging to a locale cannot administer AAA services. To be able to do that, the user would need to be scoped directly under the root organization.
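The behavior in Step 5 can be summarized as a simple rule: a user may administer AAA only if he holds the AAA privilege and is not restricted to a locale. The toy model below is our own illustration of that rule, not the Cisco UCS API:

```python
def can_administer_aaa(roles, locales):
    """AAA administration requires the 'aaa' privilege AND no locale
    restriction (i.e., the user is scoped directly under the root org)."""
    return "aaa" in roles and len(locales) == 0

# The pod user from this lab: holds the AAA role but is bound to a locale.
print(can_administer_aaa({"aaa", "network", "storage"}, {"pod1-locale"}))  # False

# An unscoped admin holding the AAA privilege.
print(can_administer_aaa({"aaa"}, set()))  # True
```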

Step 6 Select the Locally Authenticated Users option and expand the user that you created in the right pane. Notice that, in addition to the roles and locale that you applied to the user, there is an extra role, read-only, which provides read-only access to the Cisco UCS parameters that the user is not allowed to change.

Page 91: Design


Lab 5-1: Exploring the Cisco Unified Computing Network

Complete this lab exercise to examine the Cisco Unified Computing network topology and

components.

Activity Objective

In this activity, you will use the Data Center Unified Computing Design lab topology to identify, examine, and verify the network topology and hardware components. After completing this activity, you will be able to meet these objectives:

- Use DCNM to examine the network topology and Nexus 7010 configuration
- Telnet into the Nexus 7010 to examine the configuration
- Use the Cisco UCS Manager to explore the Cisco UCS network configuration
- Verify the unified network high availability

Visual Objective

The figure illustrates what you will accomplish in this activity.

[Figure: Exploring Cisco Unified Computing Network — two Nexus 7010 core switches (N7010-C1 and N7010-C2) are joined by a vPC keepalive link and port channels Po100 and Po101, and connect through interfaces E2/1, E2/3, E2/6, and E2/9 to the Fabric Interconnects S6100-A and S6100-B (ports 1, 2, 3, 4; uplink ports 19 and 20; port channel Po1). Student pods Pod1 through Pod6 attach to the topology.]

Note The Cisco UCS chassis in your lab topology might have more than six server blades

inserted.

Page 92: Design


Required Resources

These are the resources and equipment required to complete this activity:

- Two Cisco UCS 6120XP Fabric Interconnect switches
- One Cisco UCS 5108 chassis
- Two Cisco UCS 2104XP I/O modules
- Six Cisco UCS B200-M1 server blades

Lab Aid

Refer to the Lab Aids section of this lab guide for the following information:

- How to access the lab
- Assigned student desktop IP address and login credentials
- Cisco UCS cluster IP address and login credentials

Page 93: Design


Task 1: Examining Cisco UCS Network Configuration

In this task, you will examine the Cisco UCS cluster network configuration.

Note You will conduct this task on Cisco UCS equipment shared between the pods. For that

reason, you will not change any parameters if not required by the task.

Activity Procedure

Complete these steps:

Step 1 Open and log into the Cisco UCS Manager by double-clicking the Cisco UCS

Manager icon on the desktop.

Step 2 Select the LAN tab in the left pane to navigate to the Cisco UCS LAN configuration. Select the LAN Cloud in the Filter field to minimize the output. You will see the physical LAN configuration in the right LAN Uplinks pane: fabrics A and B are connected to the external LAN via ports 1/1 through 1/4.

Step 3 Select the VLAN tab to examine the VLAN configuration. You can see some of the

same VLANs configured on the Cisco UCS as you observed on the Nexus switches.

Notice the VLAN name—although each VLAN has an identifier, when deploying

the servers on the Cisco UCS system, VLAN names are used.

Page 94: Design



Step 4 Double-click one of the VLANs and examine the detailed information (for example,

VLAN 10). You can see that the VLAN is configured with redundancy in mind—it

is available via either fabric, which is denoted with the field Fabric ID being set to

Dual.

Page 95: Design


Step 5 Now expand the Fabric A option and right-click VLANs.

Step 6 Select Create VLAN from the menu to explore the VLAN creation process. For the VLAN name, use PODP-test, where P is your pod number. Set the fabric value to fabric A and enter the VLAN ID in the form LP9, where L is the lab number and P is the pod number.

Page 96: Design


Step 7 Before submitting the configuration, verify that there is no VLAN overlapping by

clicking Check Overlap. There should be no VLAN overlapping because no such

existing VLAN should be present.

Step 8 The VLAN should be successfully created.

Step 9 Under the VLANs for Fabric A, you should now see the created VLAN. You might

also see the VLANs created by members of other pods.

Step 10 Delete the VLAN that you have created by right-clicking the VLAN in the VLAN

list in the right pane or by clicking Delete under the VLAN details.

Page 97: Design


Page 98: Design


Task 2: Verifying Network High Availability

In this task, you will verify the network high availability by introducing a failure in the LAN

domain.

Note You will conduct this task on Cisco UCS equipment shared between the pods. Either all the

pods do the task simultaneously or the instructor shows the simulation.

Activity Procedure

Complete these steps:

Step 1 Verify the MAC address assigned to the Windows 2003 server with the ipconfig /all command from the command prompt. The MAC address should be the address that you put into the podX-mac-win pool.

Step 2 Ping the default gateway from your Windows PC to make sure the MAC address is learned by the switch.

Page 99: Design


Step 3 Open a Telnet session to the upstream switch by clicking the PuTTY icon on the desktop and selecting the appropriate saved session.

Step 4 Display the MAC address table entry for your blade server's MAC address. You should see the MAC address learned. Use the command show mac address-table address MAC (where MAC is the server MAC address).

Page 100: Design


Page 101: Design


Step 5 Open a command prompt on the Windows server and start a continuous ping to the router IP. Use the command ping 172.17.p2 -t for the continuous ping.

Note You can determine the default gateway IP address by issuing the ipconfig /all command at the command prompt.

Step 6 Start a second ping to your server's own IP address (172.17.82.1).

Step 7 Notify your instructor that you have completed this step.

Caution Since all pods share the uplinks, no pod should modify any of the uplinks individually, since this would lead to unpredictable results in the lab.

Step 8 The instructor will now simulate a LAN uplink failure by shutting down the port 19 interface on S6100-A.

Page 102: Design


Step 9 Observe the continuous ping that you issued from the Windows server. Notice that almost no pings were lost while the UCS was switching connections to the other Fabric Interconnect. Also notice that the Windows server running on the blade never sees a link-down event (which would reset all TCP sessions).
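To quantify the failover gap, you can count the lost replies in the captured ping output. The small parser below assumes the English-language Windows ping message "Request timed out." (helper name is ours):

```python
def count_lost_pings(output):
    """Count lost echo replies in captured Windows `ping -t` output."""
    return sum(1 for line in output.splitlines()
               if line.strip().startswith("Request timed out"))

sample = """Reply from 172.17.1.1: bytes=32 time<1ms TTL=255
Request timed out.
Reply from 172.17.1.1: bytes=32 time=2ms TTL=255"""
print(count_lost_pings(sample))  # 1
```

With the default one-second ping interval, the lost-reply count approximates the outage duration in seconds.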

Page 103: Design


Step 10 The instructor will reenable the ports on the Fabric Interconnect. Watch your ping while the UCS switches the connection back to Interconnect A (the NIC is configured to use Interconnect A with failover).

Page 104: Design


Case Study 5-2: Designing a Cisco Unified Computing Network

You will review the customer requirements to size the Cisco Unified Computing network, that is, to select the proper type and quantities of network devices, putting into practice what you have learned in the related module.

Overview

The customer, a service provider, is planning to offer managed data center services. They have decided to deploy Cisco UCS systems, and you have already completed the sizing for the solution. The solution will support a multitenant environment.

Assignment

Your assigned task is to perform the Cisco Unified Computing network sizing, which will be the basis for the BOM. This includes the following:

- Review the service provider requirements
- Select the core network switches
- Determine the number and type of required interfaces
- Design the core network switch high availability
- Define how the Cisco UCS clusters will be connected to the core switches

Work alone or together with one or more teammates to complete the assignment.

Requirements

You have conducted several technical meetings to collect the necessary input data based on which you can do the Cisco Unified Computing network sizing. The requirements are as follows:

- Two types of Cisco UCS clusters will be used: Cisco UCS-class1 and Cisco UCS-class2.
- Initially, 10 Cisco UCS-class1 clusters and 10 Cisco UCS-class2 clusters will be installed in a single data center.
- Cisco UCS clusters should be redundantly connected.
- There should be at least two core switches.
- If one core switch fails, the second should take over.
- Each core switch should be equipped with 40 10/100/1000 copper-based interfaces.
- Cisco UCS uplink interfaces connected to core switches should not be oversubscribed.
- An additional 42 10-Gb Ethernet ports should be available for connectivity to other devices and networks.
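The 10GE port count that the core layer must terminate without oversubscription follows directly from these requirements: each cluster (of either class) presents two Fabric Interconnects with four uplinks each, plus the 42 extra ports. A sketch of that arithmetic (the helper name is ours):

```python
def core_10ge_ports(clusters, fis_per_cluster=2, uplinks_per_fi=4, extra=42):
    """Total 10GE ports the core layer must provide for the UCS uplinks
    plus the additional ports for other devices and networks."""
    return clusters * fis_per_cluster * uplinks_per_fi + extra

# 10 UCS-class1 + 10 UCS-class2 clusters in the initial deployment
total = core_10ge_ports(10 + 10)
print(total)      # 202
print(total / 2)  # 101.0 ports per core switch, if split evenly across two
```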

Page 105: Design


A single Cisco UCS-class1 cluster has the following characteristics:

- 16 BC-class1 chassis
- Two Cisco UCS 6120XP Fabric Interconnects
- Four 10GE USR SFP+ uplinks per Fabric Interconnect
- Each Cisco UCS 10GE uplink should carry all the necessary VLANs

[Slide: Sizing System – Fabric Interconnect Classes. UCS-class1: 16 BC-class1 chassis, two Cisco UCS 6120XP. Per Fabric Interconnect: one N10-E0080 expansion module; SAN uplinks: 8 x 4G FC MM SFP; LAN uplinks: 4 x 10GE USR SFP+; server links: 16 x 10GE CU SFP+.]

Page 106: Design


A single Cisco UCS-class2 cluster has the following characteristics:

- Two BC-class2 chassis
- Two Cisco UCS 6120XP Fabric Interconnects
- Four 10GE USR SFP+ uplinks per Fabric Interconnect
- Each Cisco UCS 10GE uplink should carry all the necessary VLANs

[Slide: Sizing System – Fabric Interconnect Classes (Cont.). UCS-class2: two BC-class2 chassis, two Cisco UCS 6120XP. Per Fabric Interconnect: one N10-E0080 expansion module; SAN uplinks: 8 x 4G FC MM SFP; LAN uplinks: 4 x 10GE USR SFP+; server links: 8 x 10GE CU SFP+.]

Page 107: Design


Lab 6-1: Exploring Cisco Unified Computing SAN

Complete this lab exercise to examine the Cisco Unified Computing SAN topology and components.

Activity Objective

In this activity, you will use the Data Center Unified Computing Design lab topology to identify, examine, and verify the SAN topology and hardware components. After completing this activity, you will be able to meet these objectives:

- Use Fabric Manager to examine the SAN network topology
- Use Device Manager to examine the individual MDS switch configuration
- Use Cisco UCS Manager to examine the Cisco UCS system SAN configuration
- Verify the SAN high availability

Visual Objective

The figure illustrates what you will accomplish in this activity.

[Figure: Exploring Unified Computing Storage — Fabric Interconnects S6100-A and S6100-B connect over FC ports 1-8 to the MDS9124-1 and MDS9124-2 switches (interfaces fc1/3-10); the MDS switches connect on fc1/12 and fc1/13 to the EMC1 (SPA) and EMC2 (SPB) storage processors. Student pods Pod1 through Pod6 attach via ports 1, 2, 3, 4. Per Fabric Interconnect (UCS-class2): one N10-E0080 expansion module with SAN uplinks of 8 x 4G FC MM SFP.]

Note The Cisco UCS chassis in your lab topology might have more than six server blades

inserted.

Page 108: Design


Required Resources

These are the resources and equipment required to complete this activity:

- Two Cisco UCS 6120XP Fabric Interconnect switches
- One Cisco UCS 5108 chassis
- Two Cisco UCS 2104XP I/O modules
- Six Cisco UCS B200-M1 server blades
- Two MDS 9124 Series switches

Job Aid

Refer to the Lab Aids section of this lab guide for the following information:

- How to access the lab
- Assigned student desktop IP address and login credentials
- Cisco UCS cluster IP address and login credentials

SAN Parameter Assignment

Pod    VSAN ID   VSAN Name   UCS Fabric   MDS Seed Switch   Windows 2003 WWPN
Pod1   11        VSAN11      A            172.16.1.31       20:00:00:00:00:00:01:01
Pod2   11        VSAN11      A            172.16.1.31       20:00:00:00:00:00:02:01
Pod3   11        VSAN11      A            172.16.1.31       20:00:00:00:00:00:03:01
Pod4   11        VSAN11      A            172.16.1.31       20:00:00:00:00:00:04:01
Pod5   11        VSAN11      A            172.16.1.31       20:00:00:00:00:00:05:01
Pod6   11        VSAN11      A            172.16.1.31       20:00:00:00:00:00:06:01
Pod7   11        VSAN11      A            172.16.1.31       20:00:00:00:00:00:07:01
Pod8   11        VSAN11      A            172.16.1.31       20:00:00:00:00:00:08:01
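The per-pod WWPNs in the table follow a simple pattern: only the seventh octet varies with the pod number. A sketch that reproduces it (the octet is treated here as a two-digit decimal field, as the table suggests):

```python
def pod_wwpn(pod):
    """Windows 2003 server WWPN for a given pod, per the table's pattern."""
    return f"20:00:00:00:00:00:{pod:02d}:01"

print(pod_wwpn(3))  # 20:00:00:00:00:00:03:01
```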

Page 109: Design


Task 1: Examining SAN Network Topology

In this task, you will examine the SAN topology, MDS 9124 equipment, and configuration.

Note You will conduct this task on Cisco UCS equipment shared between the pods. For that

reason, you will not change any parameters if not required by the task.

Activity Procedure

Complete these steps:

Step 1 Launch the Fabric Manager application, which you will use to explore the SAN

topology, by double-clicking the Cisco Fabric Manager SA icon on the desktop.

Step 2 Enter the login credentials for the Fabric Manager standalone server (username:

admin, password: password) when the login window appears.

Step 3 If the application has been used before, the Control Panel – Fabric Manager window appears, listing the fabrics that can be examined. If the Discover New Fabric window appears instead, skip the following step and proceed with the discovery process.

Step 4 If there is an existing fabric listed, select the Fabrics tab, highlight the listed fabric or fabrics, and remove them by clicking Remove. Next, click Discover to enter the information for the discovery process.

Page 110: Design


Step 5 Enter the required information in the Discover New Fabric window. Consult the SAN Parameter Assignment table under the Job Aid section for the IP address of the MDS seed switch. Use the username student and the password 1234Qwer, and click the Discover button to start the discovery process.

Step 6 You will be notified that the fabric is discovered.

Page 111: Design


Step 7 Select the Open tab, mark the Select column for the discovered fabric and click

Open.

Step 8 The Fabric Manager window opens and shows the discovered fabric topology. You

will be presented with the topology where, on one side of the MDS switch, the EMC

Clariion AX4 disk array is connected and on the other side, the Cisco UCS Fabric

Interconnect identified by the switch WWN is connected.

Page 112: Design


Step 9 Examine the Storage tab in the upper-right pane. You will see the NetApp drive array.

Page 113: Design


Step 10 The Summary tab shows that there are two switches present in the fabric, with eight links configured as NPV links and one link configured as an F/FL link. The NPV links are the Cisco UCS Fibre Channel uplinks, since the Cisco UCS is operating in NPV mode. The F/FL link is the EMC disk array connection, since this is a regular fabric port.

Page 114: Design


Step 11 Expand the line between the MDS 9124 switch and the Cisco UCS Fabric Interconnect by double-clicking it. The connection between the switches expands into eight separate links. If you move your mouse over a link, you will see the port number on the MDS side as well as on the Cisco UCS side. Notice that the connection is identified as an NP link.

Step 12 Select the Switches option in the Physical Attributes pane. Expand and select the

N_Port Virtualizer (NPV) option. Then select the NP Link tab in the upper-right

pane and examine the NPV configuration.

Page 115: Design


Step 13 You can see the VSAN (VSANs column), the interface number on the MDS side (F Port), and the interface number on the Cisco UCS side (NP Port).

Step 14 Double-click the MDS 9124 switch in the topology view to launch the Device

Manager application.

Step 15 Select the Summary tab to review the general switch information. Notice the interfaces that are connected. For interfaces fc1/3 through fc1/10, you can see the Cisco UCS Fabric Interconnect hostname and interface numbering in the Connected To column. These interfaces are running in the NPV mode of operation. The information is acquired when the Cisco UCS Fabric Interconnect performs the fabric login.

Page 116: Design


Step 16 Examine the enabled features on the MDS switch by navigating to the Admin and

the Feature Control options in the menu. Search for the NPV and NPIV features.

The NPV feature is disabled, since the MDS 9124 is not operating as an NPV edge

switch but rather as an NPV core switch. For that reason, the switch must be enabled

with the NPIV feature.

Step 17 Close the Feature Control window. Select the Device tab and navigate to the

Interface. Then select the Fibre Channel Enabled option in the menu to examine

the Fibre Channel interfaces.

Step 18 Examine the FLOGI database by selecting the FLOGI tab in the Fibre Channel Interfaces window. Apart from the EMC disk array and the Cisco UCS Fabric Interconnect, you can also see at least your Windows 2003 server logged into the fabric. Search for the WWPN address of your Windows 2003 server listed in the Windows 2003 WWPN column of the SAN Parameter Assignment table under the Job Aid section. Close the window after you finish the review.

Note You may also see Windows 2003 servers from other pods logged into the fabric.


Step 19 Next, navigate to the Fibre Channel option in the menu and select Name Server.

Examine the FCNS database. The result will be similar to the FLOGI database.

Except for the EMC disk array interface, all others are marked as npv. You can also

see the device aliases defined for the Fibre Channel devices: For EMC1, the alias is

emc1-spa, for EMC2, the device alias is emc2-spb (if you are examining the second

fabric), and so on. Notice that each WWN is assigned an FCID that uniquely

identifies it in the Fibre Channel fabric.

Note You may also see Windows 2003 servers from other pods logged into the fabric.

Step 20 Examine the Advanced tab, where you can see the SymbolicNodeName, which

identifies the connected device.

Task 2: Examining Cisco UCS SAN Configuration

In this task, you will examine the Cisco UCS cluster SAN configuration.


Note You will conduct this task on Cisco UCS equipment shared between the pods. For that

reason, you will not change any parameters if not required by the task.

Activity Procedure

Complete these steps:

Step 1 Open and log into the Cisco UCS Manager by double-clicking the Cisco UCS

Manager icon on the desktop.

Step 2 Select the SAN tab in the left pane to navigate to the Cisco UCS SAN configuration.

You will see physical SAN configuration in the right SAN Uplinks pane: Fabrics A

and B are connected to the external SAN via Fibre Channel ports 2/1 – 2/8.

Step 3 Select the SAN Cloud from the Filter list to examine only that portion of the

configuration.


Step 4 Select the VSAN tab to examine the VSAN configuration. You can see the same

VSANs configured on the Cisco UCS as you have observed them on the MDS

switches. Notice the VSAN name—although each VSAN has an identifier, when

deploying the servers on the Cisco UCS system, VSAN names are used. You can see

also that VSAN11 is available via fabric A only and VSAN12 is available via fabric

B only.

Step 5 Double-click on VSAN11 to examine the detailed information. The action expands

the SAN cloud, Fabric A down to VSAN 11. You can verify that VSAN 11 is

available via Cisco UCS fabric A only. You can also see the FCoE VLAN number

1011, which is used to carry the VSAN over to the Fabric Interconnect via the DCE

connection.

Step 6 Right-click on VSANs in the left pane and select Create VSAN.


Step 7 In the Create VSAN window, enter the parameters to create a VSAN. The name

should be set to VSAN9X, fabric to A, VSAN ID to 9X, and the FCoE VLAN to

109X (where X is your pod number). Confirm VSAN creation by clicking OK.

Step 8 The VSAN should appear in the VSANs list. You will also see VSANs created by

other pod members.

Step 9 Delete the VSAN that you have created by right-clicking the VSAN in the VSAN

list in the right pane or by clicking Delete under VSAN details.


Task 3: Examining the SAN High Availability (Demonstration)

In this task, you will examine the SAN high availability.

Note The instructor will show the demonstration.

Activity Procedure

Complete these steps:

Step 1 Open an MDS Device Manager session to 172.16.1.31. Use the student username and the 1234QWer password when logging into the device.

Step 2 Click the Interfaces menu entry and choose fc F/FL/TL.

Step 3 Click the FLOGI tab and examine the FLOGI database on the MDS switch to which

your server blade is connected. For example, for fabric A you would examine the

FLOGI database on the MDS 9124-1 switch, and for fabric B, you would examine

the FLOGI database on MDS 9124-2 switch. You will see the WWPN of your server

logged in to the fabric over a Fibre Channel interface on the MDS switch. Search by

the device-alias (for example, p3-win), which is easier to read than WWPN.

Note You may also see Windows 2003 servers from other pods logged into the fabric.

Step 4 The instructor will disable all but one Fibre Channel interface on the Cisco UCS

Fabric Interconnect switch.

Step 5 After disabling the interfaces, only one interface toward the Cisco UCS and one interface toward the NetApp disk array stay active.


Step 6 Re-examine the FLOGI database on the appropriate MDS switch. You should see

that the server WWPN address is logged into the fabric through a different Fibre

Channel interface on the MDS switch. The Windows 2003 server is still operational

even though the Cisco UCS re-pinned it to a different SAN uplink port.

Step 7 The instructor will re-enable the FC ports on the UCS Fabric Interconnect. After the

ports are up, check the FLOGI database again.

Step 8 Note that the servers are still connected through the same port. The UCS rebalances

the servers across uplinks only when they log in to the fabric.


Case Study 6-2: Designing Cisco Unified Computing SAN

In this case study, you will review the customer requirements to size the Cisco Unified

Computing SAN. That is, you will select the proper type and quantity of SAN devices to

practice what you learned in the related module.

Overview

The customer, or service provider, is planning to offer managed data center services. They have

decided to deploy Cisco UCS systems, and you have already completed the sizing for the

solution. The solution will support a multitenant environment.

You have already completed the UCS sizing process, server design, and network sizing in the

previous case study.

Assignment

Your assigned task is to perform the Cisco Unified Computing SAN sizing, which will be the

basis for the BOM. This includes the following:

� Review the service provider requirements

� Select the core SAN switches

� Determine the number and type of required interfaces

� Design the core SAN switches high availability

� Define how the Cisco UCS clusters will be connected to the core SAN switches

Work alone or together with one or more teammates to complete the assignment.

Requirements

You have conducted several technical meetings to collect the necessary input data based on

which you can do the Cisco Unified Computing SAN sizing. The requirements are as follows:

� Two types of Cisco UCS clusters will be used, Cisco UCS-class1 and Cisco UCS-class2.

� Initially, 10 Cisco UCS-class1 clusters and 10 Cisco UCS-class2 clusters will be installed.

� Cisco UCS clusters should be connected to two redundant SAN fabrics.

� There should be at least two core switches.

� Cisco UCS Fibre Channel uplink interfaces connected to the core switches can be 2:1

oversubscribed.

� An additional 20 Fibre Channel ports should be available for connectivity to other devices

and networks.

� An individual SAN switch should have free slots available for future expansion.


A single Cisco UCS-class1 cluster has the following characteristics:

� 16 BC-class1 chassis

� Two Cisco UCS 6120XP Fabric Interconnects

� Eight 4G Fibre Channel MM SFP uplinks per Fabric Interconnect

� Fabric A switch is connected to SAN fabric 1

� Fabric B switch is connected to SAN fabric 2

[Slide: Sizing SAN – Fabric Interconnect Classes. UCS-class1: sixteen (16) BC-class1 chassis and two (2) UCS 6120XP Fabric Interconnects; per fabric interconnect, one (1) N10-E0080 expansion module and 8x 4G FC MM SFP SAN uplinks. The diagram shows 8x 4G FC uplinks and 16x 10GE chassis links per fabric interconnect.]


A single UCS-class2 cluster has the following characteristics:

� Two BC-class2 chassis

� Two Cisco UCS 6120XP Fabric Interconnects

� Eight 4G Fibre Channel MM SFP uplinks per Fabric Interconnect

� Fabric A switch is connected to SAN fabric 1

� Fabric B switch is connected to SAN fabric 2

[Slide: Sizing SAN – Fabric Interconnect Classes (Cont.). UCS-class2: two BC-class2 chassis and two UCS 6120XP Fabric Interconnects; per fabric interconnect, one N10-E0080 expansion module and 8x 4G FC MM SFP SAN uplinks. The diagram shows 8x 4G FC uplinks and 8x 10GE chassis links per fabric interconnect.]
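To make the sizing arithmetic concrete, a rough sketch of the per-fabric core port count follows. It assumes the 20 extra ports are required on each SAN fabric and treats the 2:1 oversubscription allowance as a bandwidth figure rather than a reduction in physical ports; adjust it to your own reading of the requirements.

```python
# Rough case-study sizing sketch (assumptions noted in the text above).
clusters = 10 + 10        # 10 UCS-class1 + 10 UCS-class2 clusters
uplinks_per_fi = 8        # 8x 4G FC MM SFP uplinks per fabric interconnect
spare_ports = 20          # extra FC ports for other devices (assumed per fabric)

# Each cluster attaches fabric interconnect A to SAN fabric 1 and B to fabric 2,
# so each SAN fabric terminates all 8 uplinks of 20 fabric interconnects.
ports_per_fabric = clusters * uplinks_per_fi + spare_ports
uplink_bandwidth_gbps = clusters * uplinks_per_fi * 4

print(ports_per_fabric)         # 180
print(uplink_bandwidth_gbps)    # 640
```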


Lab 9-1: Installing VMware vSphere and vCenter

Complete this lab activity to practice what you learned in the related lesson.

Activity Objective

In this activity, you will install VMware vSphere 4.0 on a Cisco UCS blade server. You will

then install vCenter on your student desktop and configure it to manage your ESX host. After

performing this lab, you should be able to:

� Demonstrate the process for creating a service profile

� Install vSphere 4.0 on your Pod’s service profile

� Import two virtual machines and a Cisco Nexus 1000V Virtual Ethernet Module (VEM)

image for use in later exercises

� Install vCenter on your student desktop to manage your ESX server

Visual Objective

The figure illustrates what you will accomplish in this activity.

[Figure: Lab 9-1: Installing VMware vSphere and vCenter — lab topology]


Required Resources

These are the resources and equipment that are required to complete this activity:

� Configured Cisco UCS environment

� Student desktop with network access and VMware vSphere client

� VMware ESX 4.0u1 ISO image

� Windows 2003 Virtual Machine

� Cisco Nexus 1000V installation folder

� VMware vCenter installation folder

Task 1: Create a Service Profile

In this task, you will create a service profile to use in your Cisco Nexus 1000V implementation.

Activity Procedure

Complete these steps:

Step 9 Log into Cisco UCS Manager if necessary.

Step 10 In the navigation pane, choose the LAN tab.

Step 11 Right-click the LAN Cloud icon and choose Create VLAN.


Step 12 Name the VLAN PX-MANAGEMENT, and provide the VLAN ID of LP0.

Replace L with the Lab # and P with your Pod number. Click OK.

Step 13 Click OK.

Step 14 Repeat the previous process to create the following VLANs. Replace P with your

Pod number and L with the Lab # (e.g., Lab 1, Pod 9 would use VLAN 191 for P9-DATA).

Name VLAN ID

PP-DATA LP1

PP-CONTROL LP2

PP-PACKET LP3

Step 15 Expand the VLANs icon and verify that you have the four VLANs that were created

in the previous steps.
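The LPn VLAN-ID convention from Step 14 can also be expressed as a quick sketch; the helper below is hypothetical and purely illustrative:

```python
# VLAN ID = Lab# * 100 + Pod# * 10 + suffix
# (suffix 0 = MANAGEMENT, 1 = DATA, 2 = CONTROL, 3 = PACKET)
def pod_vlans(lab: int, pod: int) -> dict:
    base = lab * 100 + pod * 10
    return {
        f"P{pod}-MANAGEMENT": base,
        f"P{pod}-DATA": base + 1,
        f"P{pod}-CONTROL": base + 2,
        f"P{pod}-PACKET": base + 3,
    }

print(pod_vlans(1, 9))  # P9-DATA maps to 191, matching the Step 14 example
```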


Step 16 In the navigation pane, choose the Servers tab, and choose the Service Profiles

icon.

Step 17 Right-click the Service Profiles icon and choose Create Service Profile (expert).


Step 18 Name the profile PX-Nexus1000V, replacing X with your Pod number. Choose

Hardware Default in the UUID field. Click Next.

Step 19 Choose the Scrub Policy ScrubAll. If it does not exist, ask your instructor to create

it.

Step 20 Choose No vHBAs. Click Next.

Note This step is very important to ensure that other lab resources are not disturbed during this

exercise. Ensure that No vHBAs is selected.


Step 21 Choose Expert and click Add to add a vNIC.

Step 22 Name the vNIC eth0, and choose Manual Using OUI for the MAC Address

Assignment. Specify the MAC address of 00:25:B5:0X:00:00, replacing X with your

Pod number.


Step 23 Choose VLAN PX-MANAGEMENT, replacing X with your Pod number. Set

VLAN trunking to Yes. Choose each of your Pod’s 4 VLANs, setting PX-

MANAGEMENT to Native. Click OK.

Step 24 Click Add to add another vNIC to the profile.

Step 25 Name the vNIC eth1, and choose Manual Using OUI for the MAC Address

Assignment. Specify the MAC address of 00:25:B5:0X:00:01, replacing X with your

Pod number.
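The MAC-address convention in Steps 22 and 25 (00:25:B5:0X:00:NN, where X is the pod number and NN is the vNIC index) can be sketched as follows; the helper name is illustrative:

```python
# Hypothetical helper for the lab's MAC convention: 00:25:B5 is the OUI prefix
# used in this guide, X is the pod digit, and the last octet is the vNIC index.
def vnic_mac(pod: int, vnic_index: int) -> str:
    if not 0 <= pod <= 9:
        raise ValueError("pod must be a single digit")
    return f"00:25:B5:0{pod}:00:{vnic_index:02X}"

print(vnic_mac(4, 0))  # 00:25:B5:04:00:00 (eth0 for pod 4)
print(vnic_mac(4, 1))  # 00:25:B5:04:00:01 (eth1 for pod 4)
```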


Step 26 Choose Fabric B. Click the VLAN Trunking Yes radio button. Check each of your

Pod’s 4 VLANs. Click OK to create the vNIC.

Step 27 Take a moment to verify that your vNIC configuration is correct. The first vNIC

(eth0) should only have all four of your Pod’s VLANs, with PX-MANAGEMENT

set as the Native VLAN. The second vNIC (eth1) should have all four of your Pod’s

VLANs configured, with none selected as a native VLAN. Click Next.

Step 28 Choose Create a Specific Boot Policy.

Step 29 Create a boot order that consists of the CD-ROM and Local Disk objects. Click

Next.


Step 30 Choose Select existing Server, and choose your Pod server. Click Finish.

Note If your Pod server does NOT show up make sure it has been disassociated.

Step 31 Choose your Pod’s service profile. In the content pane, click KVM Console.

Step 32 Watch your server configure in the KVM window. When configuration is complete,

the KVM should look like this:


Task 2: Install vSphere 4.0u1

In this task, you will install vSphere (ESX) 4.0u1 on your Pod’s service profile.

Activity Procedure

Complete these steps:

Step 33 Click Tools and Launch Virtual Media.

Step 34 Click Add Image.

Step 35 Navigate to c:\software.org\VMware and choose the ESX ISO image. Click Open.


Step 36 Click the Mapped checkbox next to your ISO file.

Note Do not exit the Virtual Media Session after this step.

Step 37 Click inside the KVM window and press any key. The ESX 4.0 installer should

launch as shown.

Step 38 Press Enter to choose the default option Install ESX in Graphical Mode.


Step 39 After a short delay, you should see the ESX Installer page. Click Next.

Step 40 If your mouse cursor does not align with the cursor in the KVM window, click

Tools, then Session Options.


Step 41 Choose the Mouse tab, and ensure that Linux is selected. Click OK.

Step 42 Check the I accept the terms of the license agreement checkbox and click Next.

Step 43 Leave the keyboard setting at default and click Next.


Step 44 Leave the custom driver setting at default and click Next.

Step 45 Click Yes.

Step 46 After the drivers have installed, click Next.


Step 47 Choose Enter a serial number later, and click Next.

Step 48 Ensure that you choose the Network Adapter that matches the vNIC that you

configured as eth0. This is the vNIC that has the MAC address 00:25:b5:0X:00:00,

replacing X with your Pod number. Click Next.

Step 49 Set the IP address to 172.17.P1.1, replacing P with your pod number. Set the Subnet

Mask to 255.255.255.0. Set Host name to P#-ESX, replacing # with your pod


number. Leave the default Gateway value at default, but be certain to clear the DNS

server entries. Click Next.

Step 50 Leave the Setup Type set to Standard setup and click Next.


Step 51 Choose the first drive listed and click Next.

Step 52 Click OK.


Step 53 Choose your time zone values and click Next.

Step 54 Accept the default date and time and click Next.


Step 55 Set the root password to 1234QWer and click Next.

Step 56 Click Next to start the installation.


Step 57 The installation will take approximately 15 minutes. When complete, click Next.

Step 58 You will be prompted to restart your server. Click Finish to restart and wait for ESX

to finish booting.

Step 59 Close the KVM session window. Be sure that it has been exited and not merely

minimized.


Task 3: Import Virtual Machines

In this task, you will import two virtual machines and a VEM image for use in later exercises.

Activity Procedure

Complete these steps:

Step 60 Minimize the KVM console window and return to the student desktop. Find and

launch the VMware vSphere Client icon.

Step 61 Enter the IP address of your ESX host, 172.17.P1.1, replacing P with your Pod

number. Use the username root and the password 1234QWer. Click Login. If you

receive a warning message about certificates, click Ignore.


Step 62 If you receive an error like this, click the Install this Certificate checkbox and click

Ignore.

Step 63 Click OK.

Step 64 Choose the Configuration tab.


Step 65 In the Hardware pane, choose Storage. Right-click your host’s data store, Storage1,

and choose Browse Datastore.

Step 66 In the Datastore Browser window, click the Upload icon and choose Upload Folder.

Step 67 Navigate to c:\software.org\VMware and find the W2K3_VM folder to upload.

Select the folder and click OK.


Step 68 Click Yes.

Step 69 Wait for the upload to complete. When complete, repeat the process to upload the

VSM_VM folder in the same location.


Task 4: Install vCenter Server

In this task, you will install vCenter on your student desktop to manage your ESX server.

Activity Procedure

Complete these steps:

Step 70 Start Windows Explorer and navigate to c:\software.org\VMware\VCenter Install

CD. Start autorun.exe.

Step 71 Click vCenter Server.


Step 72 Accept the default language and click OK.

Step 73 Click Next.

Step 74 Choose I agree to the terms in the license agreement and click Next.


Step 75 Enter appropriate values or accept the defaults and click Next.

Step 76 Accept the default MS SQL express database selection and click Next.


Step 77 Accept the default value and click Next.

Step 78 Accept the default installation location and click Next.


Step 79 Accept the default selection and click Next.

Step 80 Change the Web Services HTTP port value to 8081, and accept the other default

values. Click Next.


Step 81 Click Install.

Step 82 When the installation has completed, click Finish.


Step 83 Double-click the VMware vSphere Client on your desktop. Use localhost as the IP

address / Name, and click Use Windows Session Credentials. Click Login.

Step 84 If you receive a warning like this, click Install this certificate… and click Ignore.

Step 85 Click OK.


Step 86 Right-click your vCenter instance and click New Datacenter.

Step 87 Name your datacenter PodX-Datacenter, replacing X with your Pod number.

Step 88 Right-click your data center and choose Add Host.


Step 89 Use the Host address 172.17.P1.1, replacing P with your Pod number. Enter the

username root and the password 1234QWer. If this process fails, just try again.

Step 90 Click Yes.


Step 91 Click Next to proceed.

Step 92 Click Next to use the evaluation mode.

Step 93 Click Next.


Step 94 Click Finish.

Step 95 Confirm that your ESX host now appears under your datacenter.


Lab 9-2: Installing a Cisco Nexus 1000V VSM

Complete this lab activity to practice what you learned in the related lesson.

Activity Objective

In this activity, you will configure a Cisco Nexus 1000V Virtual Supervisor Module (VSM) in

the Cisco UCS environment. After performing this lab, you should be able to install a Cisco

Nexus 1000V VSM.

Visual Objective

The figure illustrates what you will accomplish in this activity.

[Figure: Lab 9-2: Installing a Cisco Nexus 1000V VSM — the Cisco Nexus 1000V VSM in the lab topology]

Required Resources

These are the resources and equipment that are required to complete this activity:

� Configured Cisco UCS environment

� Installed VMware ESX and vCenter instances from Lab 9-1


Task 1: Prepare the VLAN infrastructure

In this task, you will prepare the ESX infrastructure for the Cisco Nexus 1000V deployment.

Activity Procedure

Complete these steps:

Step 96 Log into the vSphere client if necessary. Log into localhost, using the Windows

session credentials.

Step 97 Open the Configuration > Networking view for your ESX host, and click

Properties for vSwitch0.


Step 98 Click Add.

Step 99 Choose Virtual Machine and click Next.

Step 100 Name the network MANAGEMENT. DO NOT SPECIFY a VLAN number since

we are using the native VLAN for the management network.

Step 101 Click Next.

Step 102 Verify and click Finish.


Step 103 Repeat the previous steps to create the following Port Groups, replacing X with your

Pod number, and use the VLAN numbers specified below (L = Lab #, P = Pod #):

Network Label VLAN

CONTROL LP2

PACKET LP3

Note These Port groups will be used for the VSM.

Step 104 Click Close.


Task 2: Install the Nexus 1000V VSM

In this task, you will install the VSM virtual machine.

Activity Procedure

Complete these steps:

Step 105 Open the File menu and click Deploy OVF Template.

Step 106 Select Deploy from file, click Browse, navigate to

C:\software.org\Nexus1000v.4.0.4.SV1.2\Nexus1000v.4.0.4.SV1.2\VSM\Install, and

select the OVA template.


Step 107 Click Next to confirm the OVF import.

Step 108 Click “Accept” to accept the EULA and click “Next” to proceed.


Step 109 Accept the default name and location and click Next.

Step 110 Select “Nexus 1000v Installer” and click “Next”.

Note This allows you to configure the VSM through the OVF import wizard; alternatively, you

could use manual configuration.


Step 111 Make sure the Networks are properly mapped.

Step 112 Configure the password 1234QWer, 172.17.P1.200 as the VSM IP address,

255.255.255.0 as the netmask, and 172.17.P1.254 as the default gateway (P is

your Pod #), and click Next.


Step 113 Click Finish to complete the wizard and start the import.

Step 114 Wait for the Deployment to complete.


Step 115 Click “Close”.

Step 116 Select the VSM virtual machine and select the Summary tab. Note that the VSM is

not running yet.


Step 117 Right-Click the VSM VM and select “Open Console”.

Step 118 Click the green “Power On” Button to start the VSM.


Step 119 Wait for bootup to complete. Note that only a few messages are shown.

Step 120 Start Internet Explorer on your student desktop and navigate to http://172.17.P1.200 (P is your Pod #).


Step 121 Right-click on the “Nexus1000V Extension” and save the file to the desktop.

Note This file will be used for authenticating the VSM to vCenter; it is unique per VSM.

Step 122 Right-click the VEM software package and save it to your student desktop. Make

sure to select the correct file; this lab uses ESX 4.0u1, build 208167.


Step 123 Click Plug-ins > Manage Plug-Ins in VirtualCenter.

Step 124 Right-click in the white space at the bottom of the window and select New Plugin.

Step 125 Navigate to your desktop and select the XML file that you just downloaded.


Step 126 Click Register Plug-in to add your Nexus 1000V to vCenter.

Step 127 Click Ignore to accept the certificate.

Step 128 Verify the plug-in and close the dialogue box.

Note Do not click the Download and Install button. This is for use with VMware Update Manager,

which is not in use in the lab environment.

Step 129 Open the C:\Documents and Settings\All Users\Application

Data\VMware\VMware VirtualCenter folder.

Step 130 Right-click the proxy.xml file and choose Open With.

Step 131 Choose WordPad.


Step 132 Click OK.

Step 133 Choose Edit > Find.

Step 134 Enter :8089 and click Find Next.

Step 135 Ensure that the vCenter server address of your pod is correct as shown here.

<serverNamespace>172.17.1.2X:8089</serverNamespace>

Step 136 If changes were made, choose File > Save.

Step 137 Close the file and the folder.

Step 138 If changes were made, open services.msc and restart the VMware VirtualCenter

Server Service.
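As a sanity check before restarting the service, the serverNamespace entry can also be verified programmatically. The sketch below is hypothetical and not part of the lab software; it simply searches the proxy.xml text for the expected entry:

```python
import re

# Return True if the proxy.xml text contains the expected serverNamespace
# entry for the given vCenter address on port 8089.
def namespace_ok(proxy_xml_text: str, vcenter_ip: str) -> bool:
    pattern = rf"<serverNamespace>{re.escape(vcenter_ip)}:8089</serverNamespace>"
    return re.search(pattern, proxy_xml_text) is not None

sample = "<serverNamespace>172.17.1.23:8089</serverNamespace>"
print(namespace_ok(sample, "172.17.1.23"))  # True
print(namespace_ok(sample, "172.17.1.24"))  # False
```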

Step 139 Open the Nexus1000v console window. (You can also use PuTTY to SSH to

172.17.P1.200, where P is your Pod #, but make sure to configure ONLY your own VSM.)

Step 140 Log in with the username admin and the password 1234QWer.

Step 141 Configure your SVS domain (where # is the Pod number).

Switch#

Switch# conf t

switch(config)# svs-domain

switch(config-svs-domain)# domain id P

switch(config-svs-domain)# control vlan LP2

switch(config-svs-domain)# packet vlan LP3

switch(config-svs-domain)# svs mode L2 (do not replace this L with the lab number)

switch(config-svs-domain)# exit

Step 142 Configure the session to vCenter

switch(config)# svs connection vcenter

switch(config-svs-conn)# protocol vmware-vim

switch(config-svs-conn)# remote ip address 172.17.1.2P

switch(config-svs-conn)# vmware dvs datacenter-name <yourDCname>

switch(config-svs-conn)# connect

Step 143 Exit config mode and save the configuration

switch(config-svs-conn)# end

switch#

switch# copy run start

[########################################] 100%

switch#
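Tying the Step 141 commands back to the lab numbering, a sketch that renders the svs-domain configuration for a given lab and pod follows. The helper is hypothetical; the VLAN arithmetic follows the LPn convention used throughout this guide:

```python
# Render the svs-domain commands from Step 141 for a given lab and pod,
# using control VLAN = L*100 + P*10 + 2 and packet VLAN = L*100 + P*10 + 3.
def svs_domain_config(lab: int, pod: int) -> str:
    base = lab * 100 + pod * 10
    return "\n".join([
        "svs-domain",
        f"  domain id {pod}",
        f"  control vlan {base + 2}",
        f"  packet vlan {base + 3}",
        "  svs mode L2",
    ])

print(svs_domain_config(1, 3))
```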


Step 144 Verify the connection.

switch(config-svs-conn)# sho svs connections

connection vcenter:

ip address: 172.17.1.21

remote port: 80

protocol: vmware-vim https

certificate: default

datacenter name: Pod1

DVS uuid: 1e a4 19 50 ed b4 d4 ac-bd f5 1e 1f d2 27 93 32

config status: Enabled

operational status: Connected

sync status: Complete

version: VMware vCenter Server 4.0.0 build-208111

switch(config-svs-conn)#

Note The UUID will vary. This is the unique identifier for this VSM.

switch# sho int brief

---------------------------------------------------------------------

Port VRF Status IP Address Speed MTU

---------------------------------------------------------------------

mgmt0 -- up 172.17.P1.200 1000 1500

---------------------------------------------------------------------

Port VRF Status IP Address Speed MTU

---------------------------------------------------------------------

control0 -- up -- 1000 1500

switch#

Note No interfaces exist because they have not been added from vCenter.

switch# sho svs domain

SVS domain config:

Domain id: P

Control vlan: LP2

Packet vlan: LP3

L2/L3 Control mode: L2

L3 control interface: NA

Status: Config push to VC successful.

switch#

Note Of course L and P would be represented by numbers.

Step 145 Release the mouse by pressing Ctrl + Alt.

Step 146 Close the console window.

Step 147 Return to the VSphere client.

Step 148 You should see the following output from VCenter in the Recent Tasks pane at the

bottom of the window.


Step 149 Click the Hosts and Clusters button and choose Networking.

Step 150 Expand the networking tree in the left pane to view the new Distributed Virtual

Switch.

Note Any ports not specifically placed in a port group will be placed in the “Quarantine” port

group.


Lab 9-3: Configuring Port Profiles

Complete this lab activity to practice what you learned in the related lesson.

Activity Objective

In this activity, you will configure Cisco Nexus 1000V port profiles and install a Cisco Nexus 1000V VEM. After performing this lab, you should be able to:

- Create a port profile for the Cisco Nexus 1000V uplinks

- Create a Cisco Nexus 1000V virtual machine data port profile

- Add hosts to a Cisco Nexus 1000V VSM

- Add a host to a Cisco Nexus 1000V port group and validate the functionality of the virtual Ethernet ports

Visual Objective

The figure illustrates what you will accomplish in this activity.

[Figure: Lab 9-3 topology. A Cisco UCS blade server running VMware ESX hosts the Cisco Nexus 1000V VEM; a virtual machine attaches to the VEM through a port profile, and the host connects via CNAs to the Cisco Nexus 1000V VSM.]

Required Resources

These are the resources and equipment that are required to complete this activity:

- Configured Cisco UCS environment

- Installed Cisco Nexus 1000V VSM from Lab 9-2

- Windows 2003 VM and Cisco Nexus 1000V VEM images from Lab 9-2


Task 1: Create an Uplink Port Profile

In this task, you will create a port profile for the Cisco Nexus 1000V uplinks.

Activity Procedure

Complete these steps:

Step 151 From the desktop, open the PuTTY Client.

Step 152 Open an SSH session to your switch at IP address 172.17.P1.200 (where P is your pod number).

Step 153 Choose Yes if prompted to confirm the SSH key.


Step 154 Log in to the switch by using username admin, password 1234Qwer.

Note SSH is the recommended method to access the switch after you have installed the Cisco Nexus 1000V.

Step 155 Take some time to explore the switch and its context-sensitive help by using show commands and ?.

Step 156 Enter configuration mode.

switch# configure terminal (this can be shortened to conf t)
switch(config)#

Step 157 Rename the switch from "switch" to "n1000v" (or any other name you like).

switch(config)# hostname n1000v
n1000v(config)#

Step 158 Return to vCenter and note that the switch has pushed this configuration change to vCenter.


Step 159 Create the VLANs on the n1000v (the example uses lab 1, pod 1, so VLANs LP0-LP3 are 110-113).

n1000v(config)# vlan 110-113
n1000v(config-vlan)#

Step 160 Configure a system port profile. (replace L with the lab number and P with your pod number)

n1000v(config)# port-profile podPuplink
n1000v(config-port-prof)# description SystemUplink
n1000v(config-port-prof)# switchport mode trunk
n1000v(config-port-prof)# switchport trunk allowed vlan LP0-LP3
n1000v(config-port-prof)# no shutdown
n1000v(config-port-prof)# system vlan LP2,LP3
n1000v(config-port-prof)# vmware port-group podPuplink

Note When using the vmware port-group command, if a name is not specified, the port profile name is used for the VMware port group.

n1000v(config-port-prof)# capability uplink
n1000v(config-port-prof)# state enabled

Step 161 Save your configuration.

n1000v(config-port-prof)# copy run start
[########################################] 100%

Note This port profile will be used both for communication between the VSM and the VEM and for outbound VM traffic. Separate port profiles can also be used for these functions.

Step 162 Note that the port profile configuration was pushed to vCenter when you entered the state enabled command.


Step 163 Verify the port profile.

n1000v# sho port-profile name podPuplink
port-profile pod1uplink
  description: SystemUplink
  type: ethernet
  status: enabled
  capability l3control: no
  pinning control-vlan: -
  pinning packet-vlan: -
  system vlans: 112-113
  port-group: pod1uplinkVC
  max ports: -
  inherit:
  config attributes:
    switchport mode trunk
    switchport trunk allowed vlan 110-113
    no shutdown
  evaluated config attributes:
    switchport mode trunk
    switchport trunk allowed vlan 110-113
    no shutdown
  assigned interfaces:
n1000v#

Note No interfaces are shown because none have been assigned yet. Interfaces are assigned from the vSphere client.


Task 2: Create a Data Port Profile

In this task, you will create a Cisco Nexus 1000V virtual machine data port profile.

Activity Procedure

Complete these steps:

Step 164 Return to your open PuTTY session.

Step 165 Create the Management VLAN.

n1000v# conf
n1000v(config)# vlan LP1 (L=Lab, P=Pod)

Step 166 Create a VM data profile.

n1000v(config-vlan)# port-profile vmDataPodP (where P is pod number)
n1000v(config-port-prof)# switchport mode access
n1000v(config-port-prof)# switchport access vlan LP1
n1000v(config-port-prof)# vmware port-group

Note Because a port group name is not specified, the port profile name will be used to export the profile to the vSphere server.

n1000v(config-port-prof)# no shut
n1000v(config-port-prof)# state enabled

Step 167 Save your configuration.

n1000v(config-port-prof)# copy run start
[########################################] 100%

Step 168 Verify the port profile configuration.

n1000v(config-port-prof)# sho port-profile name vmDataPod1
port-profile vmDataPod1
  description:
  type: vethernet
  status: enabled
  capability l3control: no
  pinning control-vlan: -
  pinning packet-vlan: -
  system vlans: none
  port-group: vmDataPod1
  max ports: 32
  inherit:
  config attributes:
    switchport mode access
    switchport access vlan LP1
    no shutdown
  evaluated config attributes:
    switchport mode access
    switchport access vlan LP1
    no shutdown
  assigned interfaces:
n1000v(config-port-prof)#

Step 169 Return to the Datacenter Networking view.

Step 170 Your Pod’s VM data port profile should now be visible.


Task 3: Add Hosts to a Cisco Nexus 1000V VSM

In this task, you will install the VEM to add hosts to the Cisco Nexus 1000V VSM.

Activity Procedure

Complete these steps:

Step 171 Switch your vCenter view to Hosts and Clusters.

Step 172 Select your ESX server, and on the Summary tab, right-click your datastore under Storage and select Browse Datastore.


Step 173 Click Upload files to this datastore.

Step 174 Navigate to your student desktop and select the VEM .vib file that you downloaded earlier. (If you cannot find it, or you downloaded the wrong one, download it again from http://172.17.P1.200.)

Step 175 Wait for the upload to complete (it should be very quick).

Step 176 Verify that the file is now available to the ESX host.


Step 177 Open a Cisco UCS KVM session to your ESX host. (SSH would also work in principle, but you cannot log in that way because ESX does not allow remote access as root.)

Step 178 If you see the main ESX splash screen, click Macros > Alt-F? > Alt-F1. If you see a PX-ESX login prompt, skip to the next step.

Step 179 Log in to the server by using the username root, password 1234QWer.

Note For the following commands, ensure that you type each command exactly as stated in the Lab Guide and perform no additional commands. If you break your ESX server, you will probably have to start over.

Step 180 Navigate to the directory where the Cisco Nexus 1000V VEM file is stored.

[root@P3-ESX ~]# cd /vmfs/volumes/Storage1
[root@P3-ESX Storage1]# ls

Step 181 Install the Cisco Nexus 1000V VEM image into the ESX host.

Note Pressing the Tab key after you start typing the filename automatically completes it, so you do not have to type the entire filename.

[root@P3-ESX Storage1]# esxupdate -b cross_cisco-vem-v100-4.0.4.1.1.27-0.4.2-release.vib update

Step 182 Verify that the DPA is running.

[root@P3-ESX Storage1]# vem status
VEM modules are loaded
Switch Name   Num Ports  Used Ports  Configured Ports  MTU   Uplinks
vSwitch0      32         4           32                1500  vmnic1
VEM Agent (vemdpa) is running


Step 183 Exit the console.

[root@Pod1Server1 Storage1]# exit

Step 184 Log in to the vSphere client and connect to your vCenter instance.

Step 185 Open the Datacenter Networking view.

Step 186 Right-click on the n1000v.

Step 187 Click Add Host.

Step 188 Choose the vmnic that is not associated with a vSwitch.


Step 189 Choose the podPuplink port group (where P is your pod number) from the drop-down menu.

Step 190 Click Next.

Step 191 Verify the settings and click Finish.

Step 192 Choose your Cisco Nexus 1000V DVS.


Step 193 Choose the Hosts tab.

Step 194 Ensure that your host is listed.

Step 195 Choose podPuplink.

Step 196 Choose the Hosts tab.

Step 197 Ensure that your host is present.


Step 198 Return to the PuTTY SSH window to configure your VSM.

Note If you have closed the SSH session, open a PuTTY session from the desktop to the Cisco Nexus 1000V VSM 172.17.P1.200 of your pod by using username admin and password 1234QWer.

Step 199 Verify that the Cisco Nexus 1000V VEM in each host is properly communicating with the Cisco Nexus 1000V VSM.

Note It may take a while for the module to appear. The VSM will report "n1000v %PLATFORM-2-MOD_PWRUP: Module 3 powered up (Serial number )" in the syslog.

n1000v# sho module
Mod  Ports  Module-Type                 Model              Status
---  -----  --------------------------  -----------------  ----------
1    0      Virtual Supervisor Module   Nexus1000V         active *
3    248    Virtual Ethernet Module     NA                 ok

Mod  Sw              Hw
---  --------------  ------
1    4.0(4)SV1(2)    0.0
3    4.0(4)SV1(2)    1.9

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA

Mod  Server-IP      Server-UUID                           Server-Name
---  -------------  ------------------------------------  -----------
1    172.17.11.200  NA                                    NA
3    172.17.11.1    9fce9a94-b34b-11de-b37e-000bab01c0fb  P1-ESX

* this terminal session
n1000v#

n1000v# sho module vem mapping
Mod  Status      UUID                                  License Status
---  ----------  ------------------------------------  --------------
3    powered-up  9fce9a94-b34b-11de-b37e-000bab01c0fb  licensed
n1000v#

Step 200 Save your configuration.

n1000v# copy run start
[########################################] 100%
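If you want to script the verification in Step 199, the module status can be parsed from the sho module output. A minimal sketch, assuming the output format shown in this lab (the parsing logic is ours, not a Cisco tool):

```python
# Minimal sketch: scan "sho module" output (format as shown in this lab)
# for a Virtual Ethernet Module line whose status column reads "ok".
def vem_ok(output: str) -> bool:
    for line in output.splitlines():
        if "Virtual Ethernet Module" in line and line.rstrip().endswith("ok"):
            return True
    return False

sample = """\
Mod  Ports  Module-Type                Model       Status
1    0      Virtual Supervisor Module  Nexus1000V  active *
3    248    Virtual Ethernet Module    NA          ok
"""
print(vem_ok(sample))  # True
```

A check like this could poll the VSM (for example, over SSH) until the VEM registers, instead of re-running the show command by hand.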


Task 4: Test Cisco Nexus 1000V Functionality

In this task, you will add a host to a Cisco Nexus 1000V port group and validate the functionality of the virtual Ethernet ports.

Activity Procedure

Complete these steps:

Step 201 Return to the vSphere client.

Step 202 Choose the Hosts and Clusters view.

Step 203 Choose your ESX host and click the Configuration tab.

Step 204 Choose Storage and right-click your data store. Click Browse Datastore.


Step 205 Navigate to the W2K3_VM folder that you uploaded in a previous exercise. Find the .vmx file, right-click it, and choose Add to Inventory.

Step 206 Accept the default name and click Next.


Step 207 Choose your ESX host and click Next.

Step 208 Click Finish.


Step 209 Return to the vSphere client. Right-click the W2K3 VM and choose Edit Settings.

Step 210 Click Network Adapter 1, and choose your vmDataPodP port group.


Step 211 Click OK.

Step 212 Right-click the W2K3 VM and click Open Console.

Step 213 Click the green "power on" icon. If prompted, choose I moved it.

Step 214 After Windows boots, click VM > Guest > Send Ctrl+Alt+Del.

Step 215 Log in by using username Administrator, password 1234QWer.

Step 216 Making sure you are inside the Windows 2003 VM console window, click Start, Control Panel, Network Connections, Local Area Connection.

Note If you see more than one network connection listed, you are likely looking at your student desktop. Make sure that you are clicking the Start button within the Windows 2003 VM console window.


Step 217 Click Properties.

Step 218 Choose Internet Protocol (TCP/IP) and click Properties.

Step 219 Enter the IP address 172.17.P2.100, replacing P with your pod number. Enter a subnet mask of 255.255.255.0 and a default gateway of 172.17.P2.254. Leave the other values empty and click OK.
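The addressing used in this step follows the pod scheme used throughout this lab (data network 172.17.P2.0/24). As an illustration of that scheme only (the helper itself is our assumption, not part of the lab):

```python
# Illustrative helper: derive the Windows VM's data-network settings
# from the pod number, per this lab's 172.17.P2.0/24 convention.
def vm_network(pod: int) -> dict:
    subnet = f"172.17.{pod}2"  # pod 1 -> 172.17.12, pod 2 -> 172.17.22, ...
    return {
        "ip": f"{subnet}.100",
        "mask": "255.255.255.0",
        "gateway": f"{subnet}.254",
    }

print(vm_network(1))  # {'ip': '172.17.12.100', 'mask': '255.255.255.0', 'gateway': '172.17.12.254'}
```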


Step 220 Click Close.


Step 221 Again, making sure that you are within the Windows 2003 VM console window, click Start, Command Prompt to open a command prompt shell.

Step 222 Make sure that you can ping the L3 infrastructure at 172.17.P2.254, replacing P with your pod number.


Step 223 Return to the PuTTY window that is attached to the VSM console. Look at the status of the interfaces by entering sh int br.

Note The Veth1 port belongs to your Windows 2003 virtual machine. It was created when your virtual machine came online with the vmDataPodP port profile. Whenever the machine is moved, the veth port moves with it, so this is the interface on which you make VM-specific changes (for example, shutdown, policing, and so on).

Step 224 Explore more information about your virtual Ethernet port.

Note The owner is listed as the name of the virtual machine. Also note which VMware network adapter is supporting this virtual Ethernet port.


Step 225 Shut down the virtual Ethernet port.

Step 226 Return to your Windows 2003 virtual machine console and attempt to ping your default gateway at 172.17.P2.254, replacing P with your pod number.

Note Because you disabled the virtual Ethernet port to which this VM is attached, you are no longer able to access network resources. The network adapter inside the VM never receives a "link-down" status; this is a VMware DVS implementation issue.