Here is Your Customized Document

Your Configuration is:

Action to Perform - Plan configuration
Configuration Type - Basic
Storage-System Model - CX4-120
Connection Type - Fibre Channel Switch or Boot from SAN
Server Operating System - HP-UX
Management Tool - EMC Navisphere Manager

Reporting Problems

To send comments or report errors regarding this document, please email: [email protected]. For issues not related to this document, contact your service provider. Refer to Document ID: 1423524

Content Creation Date 2010/10/5


CLARiiON® CX4™ Series

Planning Your Basic CX4-120 Storage-System Switch Configuration with an HP-UX Server

This guide introduces the CLARiiON® CX4-120 storage system with UltraFlex™ technology in Fibre Channel switch configurations with an HP-UX server. You should read this guide:

If you are considering the purchase of one of these storage systems and want to understand its features, or

Before you plan the installation of one of these storage systems.

This guide has worksheets for planning:

Hardware components

Management port network and security login information

File systems and storage-system disks (LUNs and thin LUNs)

For information on planning replication and/or data mobility software (MirrorView™, SnapView™, SAN Copy™) configurations for your storage system, use the Plan Configuration link under Storage-system tasks on the CX4 support website.

These worksheets assume that you are familiar with the servers (hosts) that will use the storage systems and with the operating systems on these servers. For each storage system that you will configure, complete a separate copy of the worksheets included in this document.

For the most current, detailed, and complete CX4 series configuration rules and sample configurations, refer to the E-Lab™ Interoperability Navigator on the Powerlink® website (http://Powerlink.EMC.com). Be sure to read the notes for the parts relevant to the configuration that you are planning. For background information on the storage system, read the Hardware and Operational Overview and Technical Specifications for your storage system. You can generate the latest version of these documents using the customized documentation Learn about storage system link under Storage-system tasks on the storage-system support website.

Topics in this document are:

About the storage system

Storage-system Fibre Channel components

Storage-system management

Basic storage concepts

File systems and LUNs

About the storage system

Major topics are:

Storage-system overview

Fibre Channel overview

Storage-system connection limits and rules

Types of storage-system installations

Storage-system overview

The storage system provides terabytes of disk storage capacity in flexible configurations and highly available data at a low cost. End-to-end data transfer rates are up to:

8 Gb/s for Fibre Channel connections in any storage system

10 Gb/s for iSCSI connections in a storage system with UltraFlex™ iSCSI I/O modules

10 Gb/s for Fibre Channel over Ethernet (FCoE) connections in a storage system with UltraFlex FCoE I/O modules

The storage system consists of:

One storage processor enclosure (SPE)

One or more separate disk-array enclosures (DAEs)

One or two standby power supplies (SPSs)

The storage processor enclosure does not include disks, and requires at least one disk-array enclosure (DAE) with a minimum of 5 disks. A maximum of 8 separate disk-array enclosures are supported for a total of 120 disks. A DAE connects to a back-end bus, which consists of two redundant loops – one loop associated with a back-end port on SP A and one loop associated with the corresponding back-end port on SP B. Since each SP has one back-end port, the storage system has one back-end bus. You can connect a maximum of eight DAEs to one back-end bus.
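These enclosure and disk limits are easy to encode. The following Python sketch is a hypothetical planning helper (not an EMC tool) that checks a proposed list of per-DAE disk counts against the rules just stated:

```python
# A minimal sketch, assuming the CX4-120 rules described above.
MAX_DAES = 8        # one back-end bus, up to eight DAEs
DISKS_PER_DAE = 15  # each DAE has 15 disk slots
MAX_DISKS = 120     # 8 DAEs x 15 disks

def check_backend_plan(disks_per_dae):
    """Return a list of rule violations for a planned set of DAEs,
    given the number of disks in each enclosure (enclosure 0 first)."""
    problems = []
    if not disks_per_dae:
        problems.append("At least one DAE is required; the SPE holds no disks.")
    elif disks_per_dae[0] < 5:
        problems.append("The first DAE needs at least 5 disks.")
    if len(disks_per_dae) > MAX_DAES:
        problems.append("At most 8 DAEs fit on the single back-end bus.")
    if any(n < 0 or n > DISKS_PER_DAE for n in disks_per_dae):
        problems.append("Each DAE holds 0 to 15 disks.")
    if sum(disks_per_dae) > MAX_DISKS:
        problems.append("The CX4-120 supports at most 120 disks in total.")
    return problems

print(check_backend_plan([5, 15, 15]))  # [] -- a valid plan
print(check_backend_plan([3]))          # first-DAE minimum violated
```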

Storage processor enclosure

The storage processor enclosure (SPE) components are:

Two storage processors (SP A and SP B) that provide the RAID (redundant array of independent disks) features of the storage system and control disk activity and host I/O.

Four power/cooling modules, two associated with SP A and two associated with SP B.

Disk-array enclosures

The storage system's disk-array enclosures are 4 Gb/s UltraPoint™ (point-to-point) enclosures (DAEs) that support either high-performance Fibre Channel disk modules or economical SATA (Serial Advanced Technology Attachment, SATA II) disk modules. The DAE also supports Enterprise Flash Drive Fibre Channel modules. These modules are solid state disk (SSD) Fibre Channel modules, also known as Flash or SSD disk modules or disks. You can mix Flash and standard FC disk modules, but not Flash and SATA disk modules, within a DAE. You cannot mix SATA and Fibre Channel disk modules within a DAE, but you can integrate and connect FC and SATA enclosures within a storage system. The enclosures operate at either a 2 or 4 Gb/s bus speed (2 Gb/s components, including disks, cannot operate on a 4 Gb/s bus).

Fibre Channel overview

Fibre Channel is a high-performance serial protocol that allows transmission of both network and I/O data. It is a low-level protocol, independent of data types, and supports such formats as SCSI and IP. The storage system supports two physical protocols defined by the Fibre Channel standard:

Arbitrated loop (FC-AL) for direct connection to a host (server)

Switch fabric connection to a host (server)

A Fibre Channel arbitrated loop is a circuit consisting of nodes. Each node has a unique address, called a Fibre Channel arbitrated loop address.

A Fibre Channel switch fabric is a set of point-to-point connections between nodes; each connection is made through one or more Fibre Channel switches. Each node may have its own unique address, but the path between nodes is governed by a switch.

Each node is either a server adapter (initiator) or a target (storage system). Fibre Channel switches are not considered nodes. Optical cables connect nodes directly to the storage system or to switches. An optical cable can transmit data over great distances for connections that span entire enterprises and can support remote disaster recovery systems. We strongly recommend the use of OM3 50 μm cables for all optical connections.

Each device in an arbitrated loop or a switch fabric is a server adapter (initiator) or a target (storage-system Fibre Channel SP data port). Figure 1 shows an initiator node and target node.

Figure 1 Fibre Channel nodes - initiator and target connections (1 of up to 6 connections to an SP shown)

In addition to one or more storage systems, a Fibre Channel storage configuration has two main components:

A server component (host bus adapter driver with adapter and software)

Interconnection components (cables based on Fibre Channel standards and switches)

Fibre Channel initiator components (host bus adapter and driver)

The host bus adapter is a printed-circuit board that slides into an I/O slot in the server's cabinet. Under the control of a driver, the adapter transfers data between server memory and one or more storage systems over a Fibre Channel connection.

Fibre Channel target components

Target components are the target portals that accept and respond to requests from an initiator. The Fibre Channel target portals are the front-end Fibre Channel data ports on the storage-system SP. Each SP has 2 or 6 Fibre Channel front-end ports. The Fibre Channel front-end (data) ports communicate with Fibre Channel switches or servers. The connectivity speeds supported by these front-end ports depend on the type of Fibre Channel UltraFlex™ I/O module that has the ports. The 4 Gb/s Fibre Channel I/O modules support 1/2/4 Gb/s front-end connectivity, and the 8 Gb/s Fibre Channel I/O modules support 2/4/8 Gb/s front-end connectivity. You cannot use 8 Gb/s Fibre Channel I/O modules in a 1 Gb/s Fibre Channel environment. You can use 4 Gb/s Fibre Channel I/O modules in an 8 Gb/s environment if the Fibre Channel interconnection components auto-adjust their speeds to 4 Gb/s.

Fibre Channel interconnection components

The interconnection components consist of optical cables between components and Fibre Channel switches.

The maximum length of the optical cable between a storage system and a server or switch ranges from 10 to 500 meters (11 to 550 yards), depending on the type of cable and operating speed. With extenders, connections between servers, switches, and other devices can span up to 60 kilometers (36 miles) or more. This ability to span great distances is a major advantage of using optical cables. We strongly recommend the use of OM3 50 μm cables for all optical connections. Details on cable lengths and rules for using them are in Table 9.

Fibre Channel switches

A Fibre Channel switch, which is required for shared storage in a storage area network (SAN), connects all the nodes cabled to it using a fabric topology. A switch adds serviceability and scalability to any installation; it allows online insertion and removal of any device on the fabric and maintains integrity if any connected device stops participating. A switch also provides server-to-storage-system access control and point-to-point connections. Figure 2 shows a Fibre Channel switch.

Figure 2 Fibre Channel switch connections (1 of up to 6 connections to one SP in each storage system shown)

You can cascade switches (connect one switch port to another switchport) for additional port connections.

Fibre Channel switch zoning

Switch zoning lets an administrator define paths between connected nodes based on the node's unique World Wide Name. Each zone includes a server adapter node and/or one or more SP nodes. We recommend single-initiator zoning, which limits each zone to a single HBA port (initiator).

In Figure 3, the dotted lines show the zone that allows server 1 access to one SP in storage systems 1 and 2; server 1 has no access to any other SP.


Figure 3 Sample Fibre Channel switch zone

To illustrate switch zoning, Figure 3 shows just one HBA per server and one switch. Normally, such installations include multiple HBAs per server and two or more switches. In general, a server should be zoned to 2 ports on each SP in a redundant configuration. If you do not define a zone in a switch, all adapter ports connected to the switch can communicate with all SP ports connected to the switch.
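As an illustration of single-initiator zoning, the Python sketch below (purely hypothetical; all WWNs are invented) builds one zone per HBA port, each containing that single initiator plus the SP ports it should reach:

```python
# A minimal sketch of single-initiator zoning: exactly one initiator
# (HBA port) per zone, paired with the target SP ports. Invented WWNs.
def single_initiator_zones(hba_wwns, sp_wwns):
    """Return {zone_name: member_wwns} with one initiator per zone."""
    return {f"zone_{name}": [wwn] + list(sp_wwns.values())
            for name, wwn in hba_wwns.items()}

hbas = {"server1_hba0": "10:00:00:00:c9:11:22:33",
        "server1_hba1": "10:00:00:00:c9:11:22:34"}
# Two ports on each SP, per the redundant-configuration guideline above.
sps = {"spa_0": "50:06:01:60:00:00:00:01", "spa_1": "50:06:01:61:00:00:00:01",
       "spb_0": "50:06:01:68:00:00:00:01", "spb_1": "50:06:01:69:00:00:00:01"}

for zone, members in single_initiator_zones(hbas, sps).items():
    print(zone, members)
```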

Fibre Channel switches are available with 8, 16, 32, or more ports. They are compact units that fit into a rackmount cabinet.

If your servers (hosts) and storage systems will be far apart, you can place the switches closer to the servers or storage systems, as convenient.

A switch is technically a repeater, not a node, in a Fibre Channel loop.However, it is bound by the same cabling distance rules as a node.

Storage-system connection limits and rules

For an initiator to communicate with a storage-system target, it must be registered with the storage system. Table 1 lists the number of initiators that can be registered with the storage system.

Table 1 Number of initiators that can be registered with a storage system

FLARE version          Maximum initiators per SP   Maximum initiators per storage system
FLARE 04.29 or later   256                         512
FLARE 04.28            128                         256
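A quick way to apply Table 1 when planning: the following hypothetical Python snippet looks up the limits by FLARE version and flags a plan that exceeds them.

```python
# A minimal sketch of the Table 1 limits: (per SP, per storage system).
LIMITS = {"04.29 or later": (256, 512), "04.28": (128, 256)}

def initiators_ok(flare, per_sp, per_system):
    """True if the planned initiator counts fit the FLARE version's limits."""
    max_sp, max_system = LIMITS[flare]
    return per_sp <= max_sp and per_system <= max_system

print(initiators_ok("04.28", 100, 220))  # True
print(initiators_ok("04.28", 150, 220))  # False -- exceeds the per-SP limit
```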

A CNA can run both 10 GbE iSCSI and FCoE at the same time. As a general rule, a single server cannot connect to the same storage system through both the storage system's iSCSI data ports and FCoE data ports, or through both the storage system's iSCSI data ports and Fibre Channel data ports. The same general rule applies to servers in a cluster group connected to the storage system. For example, you must not connect one server in the cluster to the storage system's iSCSI data ports and another server in the cluster to the storage system's FCoE or Fibre Channel data ports. A single server with both Fibre Channel HBAs and CNAs can connect through the same FCoE switch to the same storage system through the storage system's Fibre Channel data ports and FCoE data ports.

Servers with virtual machines or virtual systems that run different instances of the Navisphere® Host Agent than the kernel system runs are the exception to this rule. The initiators on servers that the host agent registers with the storage system for the kernel system and that the host agent registers with the storage system for the virtual machines appear to be from different servers. As a result, you can connect them to different storage groups.

You can attach a single server to a CX4 series, CX3 series, or CX series storage system and an AX4-5 series or AX series storage system at the same time only if the following conditions are met:

The server is running the Unisphere Server Utility and/or the Unisphere Host Agent, or version 6.26.5 or later of the Navisphere Server Utility and/or the Navisphere Host Agent.

The AX4-5 series and AX series storage systems are running Navisphere® Manager software.

The master of the domain with these storage systems is one of the following:

A CX4 series storage system

A CX3 series storage system running FLARE 03.26.xxx.5.014 or later

A CX series storage system running FLARE 02.24.xxx.5.018 or later

An AX4-5 storage system running FLARE 02.23.050.5.5xx or later

Either a Unisphere management station or a Navisphere management station running the Navisphere UIs version 6.28 or later.

Types of storage-system installations

You can use a storage system in any of several types of installation:

Unshared direct with one server is the simplest and least costly.

Shared or clustered direct lets multiple servers share the storage system.

Shared switched with two or more Fibre Channel switch fabrics or network switches or routers lets multiple servers share the resources of several storage systems in a storage area network (SAN). Shared switched or network storage systems can have multiple paths to each SP, providing multipath I/O for dynamic load sharing and greater throughput.

Figure 4 shows the three types of storage-system installation.

[Figure shows the three installation types: unshared direct (one or two servers), shared or clustered direct (two servers), and shared switched (multiple servers with paths 1 and 2 through FC or FCoE switches or LANs to SP A and SP B in multiple storage systems).]

Figure 4 Types of storage-system installation

The shared or clustered direct installation can be either shared (that is, with storage groups on the storage system enabled to control LUN access) or clustered (that is, with operating system cluster software controlling LUN access). In a clustered configuration, data access control on the storage system can be either enabled or disabled. The number of servers in the cluster varies with the operating system.

About shared switched or network storage and storage area networks

A storage area network (SAN) is one or more storage devices connected to servers through switches to provide a central location for disk storage. Centralizing disk storage among multiple servers has many advantages, including:

Highly available data

Flexible association between servers and storage capacity

Centralized management for fast, effective response to users' data storage needs

Easier file backup and recovery

A SAN is based on shared storage; that is, the SAN requires that storage-system storage groups are enabled to provide flexible access control to storage-system LUNs. Within the SAN, a network connection to each SP in the storage system lets you configure and manage the storage system. Figure 5 shows the components of a SAN.


Figure 5 Components of a SAN

In a Fibre Channel environment, the switches can control data access to storage systems through the use of switch zoning. Switch zoning cannot selectively control data access to LUNs in a storage system because each SP appears as a single Fibre Channel device to the switch fabric. Switch zoning and restrictive authentication can prevent or allow communication with an SP, but not with specific disks or LUNs attached to an SP. For access control with LUNs, a different solution is required: storage groups.

Storage groups

Storage groups are the central component of shared storage; a storage system that is unshared (that is, dedicated to a single server) does not need to use storage groups. When you configure shared storage, you create a storage group and specify which server(s) can access it (read from and/or write to it).

More than one server can access the same storage group only if all the servers run cluster software. The cluster software enforces orderly access to the shared storage group LUNs.

Figure 6 shows a simple shared storage configuration consisting of one storage system with two storage groups. One storage group serves a cluster of two servers running the same operating system, and the other storage group serves a database server with a different operating system. Each server is configured with two independent paths to its data, including separate host bus adapters, switches, and SPs, so there is no single point of failure for access to its data.

[Figure shows a highly available cluster (a file server and a mail server running operating system A) and a database server (running operating system B), each with two adapters and two paths through redundant FC or FCoE switches or LANs to SP A and SP B; the physical storage system holds a cluster storage group and a database server storage group of LUNs.]

Figure 6 Sample shared storage configuration

Storage-system Fibre Channel components

This section helps you plan the hardware components – adapters, cables, storage system requirements, and site requirements – for each server in your installation.

Major topics are:

Storage-system hardware components

Hardware components worksheet

Cache worksheet

Storage-system hardware components

The basic storage-system hardware components are:

Storage processor enclosure (SPE) – a sheet-metal housing with a front cover (bezel), midplane, and slots for the following components:

A pair of redundant storage processors (SP A and SP B), each with a CPU module and an I/O carrier with slots for UltraFlex™ I/O modules

Four power supply/system cooling modules (referred to as power/cooling modules), two associated with SP A and two associated with SP B

Two separate standby power supplies (SPSs) support write caching and provide the highest data availability.

One or more disk-array enclosures (DAEs) with slots for 15 disks. One DAE with at least five disks is required.

Figure 7 and Figure 8 show the SPE components. If the enclosure provides slots for two identical components, the component in slot A is called component-name A. The second component is called component-name B. For increased clarity, the following figures depict the SPE outside of the rack cabinet. Your SPE may arrive installed in a rackmount cabinet.

Figure 7 SPE components (front with bezel removed)

Figure 8 SPE components (back)

Storage processor

The storage processor (SP) provides the intelligence of the storage system. Using its own proprietary software (called the FLARE® Operating Environment), the SP processes the data written to or read from the disk modules, and monitors the disk modules. An SP consists of a CPU module printed-circuit board with two central processing units and memory modules, associated UltraFlex I/O modules, and status lights.

Each SP uses UltraFlex I/O modules for Fibre Channel (FC), FCoE, and iSCSI front-end port connectivity to hosts (servers) and Fibre Channel (FC) back-end port connectivity to disks, with the standard configurations listed in Table 2.

Table 2 Standard SP port configurations

Storage system   iSCSI server ports   FC server ports   FCoE ports   FC back-end ports
CX4-120          2                    2                 2            1

Each SP can have one optional UltraFlex I/O module for additional iSCSI, Fibre Channel, or FCoE server ports.

Each SP also has an Ethernet connection through which the EMC Navisphere® management software lets you configure and reconfigure the LUNs and storage groups in the storage system. Since each SP connects to a network, you can still reconfigure your system, if needed, should one SP fail.

UltraFlex I/O modules

Table 3 lists the number of I/O modules the storage system supports and the slots the I/O modules can occupy. More slots are available for optional I/O modules than the maximum number of optional I/O modules supported because some slots are occupied by required I/O modules. With the exception of slots A0 and B0, the slots occupied by the required I/O modules can vary between configurations. Figure 9 shows the I/O module slot locations and the I/O modules for the standard minimum configuration with 1 GbE iSCSI modules. The 1 GbE iSCSI modules shown in this example could be 10 GbE iSCSI or FCoE I/O modules.

Table 3 Number of supported I/O modules per SP

                 All I/O modules                            Optional I/O modules
Storage system   Number per SP   SP A slots   SP B slots    Number per SP   SP A slots   SP B slots
CX4-120          3               A0-A2        B0-B2         1               A1-A2        B1-B2

[Figure shows the I/O module slot locations, B0 through B4 and A0 through A4, on the back of the SPE.]

Figure 9 I/O module slot locations (1 GbE iSCSI and FC I/O modules for a standard minimum configuration shown)

The following types of modules are available:

4 or 8 Gb Fibre Channel (FC) modules with either:

2 back-end (BE) ports for disk bus connections and 1 front-end (FE) port for server I/O connections (connection to a switch or server HBA),

or

4 front-end (FE) ports for server I/O connections (connection to a switch or server HBA).

The 8 Gb FC module requires FLARE 04.28.000.5.7xx or later.

10 Gb Ethernet (10 GbE) FCoE module with 2 FCoE front-end (FE) ports for server I/O connections (connection to an FCoE switch and from the switch to the server CNA). The 10 GbE FCoE module requires FLARE 04.30.000.5.5xx or later.

1 Gb Ethernet (1 GbE) or 10 Gb Ethernet (10 GbE) iSCSI module with 2 iSCSI front-end (FE) ports for network server iSCSI I/O connections (connection to a network switch, router, server NIC, or iSCSI HBA). The 10 GbE iSCSI module requires FLARE 04.29 or later.

Table 4 lists the I/O modules available for the storage system and the number of each module that is standard and/or optional.

Table 4 I/O modules per SP

Module                                                        Standard              Optional
4 or 8 Gb FC module: 1 BE port (0), 2 FE ports (2, 3),
  port 1 not used                                             1                     0
4 or 8 Gb FC module: 4 FE ports (0, 1, 2, 3)                  0                     1
10 GbE FCoE module: 2 FE ports (0, 1)                         1 or 0 (see note 1)   1 (see note 2)
1 or 10 GbE iSCSI module: 2 FE ports (0, 1)                   1 or 0 (see note 1)   1 (see note 2)

Note 1: The standard system has either 1 FCoE module or 1 iSCSI module per SP, but not both types.
Note 2: The maximum number of 10 GbE FCoE modules or 10 GbE iSCSI I/O modules per SP is 1.

IMPORTANT

Always install I/O modules in pairs – one module in SP A and one module in SP B. Both SPs must have the same type of I/O modules in the same slots. Slots A0 and B0 always contain a Fibre Channel I/O module with one back-end port and two front-end ports. The other available slots can contain any type of I/O module that is supported for the storage system.

The actual number of each type of optional Fibre Channel, FCoE, and iSCSI I/O module supported for a specific storage-system configuration is limited by the available slots and the maximum number of Fibre Channel, FCoE, and iSCSI front-end ports supported for the storage system. Table 5 lists the maximum number of Fibre Channel, FCoE, and iSCSI FE ports per SP for the storage system.

Table 5 Maximum number of front-end (FE) ports per SP

Storage system   Maximum FC FE ports per SP   Maximum FCoE FE ports per SP   Maximum iSCSI FE ports per SP (see note)
CX4-120          6                            4                              4

Note: The maximum number of 10 GbE iSCSI ports per SP is 2.
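To see how the port ceilings interact with an optional module choice, the hypothetical Python sketch below checks a single optional module against Tables 4 and 5; the baseline front-end counts are an assumption taken from the standard configuration in Table 2 (2 FC FE ports plus 2 FCoE or iSCSI FE ports).

```python
# A minimal sketch under the stated assumptions, not a configuration tool.
MAX_FE = {"fc": 6, "fcoe": 4, "iscsi": 4}       # Table 5, per SP
STANDARD_FE = {"fc": 2, "fcoe": 2, "iscsi": 2}  # assumed baseline (Table 2)
OPTIONAL_MODULES = {"fc_4port": ("fc", 4),      # optional modules (Table 4)
                    "fcoe_2port": ("fcoe", 2),
                    "iscsi_2port": ("iscsi", 2)}

def optional_module_ok(module):
    """True if adding this one optional module keeps the front-end
    port count for its type within the Table 5 maximum."""
    kind, ports = OPTIONAL_MODULES[module]
    return STANDARD_FE[kind] + ports <= MAX_FE[kind]

print(optional_module_ok("fc_4port"))    # True: 2 + 4 = 6 FC FE ports
print(optional_module_ok("fcoe_2port"))  # True: 2 + 2 = 4 FCoE FE ports
```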

Back-end (BE) port connectivity

Each FC back-end port has a connector for a copper SFP-to-HSSDC2 (small form factor pluggable to high speed serial data connector) cable. Back-end connectivity cannot exceed 4 Gb/s regardless of the I/O module's speed. Table 6 lists the FC modules that support the back-end bus.

Table 6 FC I/O module ports supporting the back-end bus

Storage system and FC modules            Back-end bus (module port)
CX4-120: FC module in slots A0 and B0    Bus 0 (port 0)

Fibre Channel (FC) front-end connectivity

Each 4 Gb or 8 Gb FC front-end port has an SFP shielded Fibre Channel connector for an optical cable. The FC front-end ports on a 4 Gb FC module support 1, 2, or 4 Gb/s connectivity, and the FC front-end ports on an 8 Gb FC module support 2, 4, or 8 Gb/s connectivity. You cannot use the FC front-end ports on an 8 Gb FC module in a 1 Gb/s Fibre Channel environment. You can use the FC front-end ports on a 4 Gb FC module in an 8 Gb/s Fibre Channel environment if the FC switch or HBA ports to which the module's FE ports connect auto-adjust their speed to 4 Gb/s.
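These speed-matching rules amount to picking the highest speed both ends of the link support. A hypothetical Python sketch:

```python
# A minimal sketch of FC front-end speed matching per the lists above.
FE_SPEEDS = {"4Gb": {1, 2, 4}, "8Gb": {2, 4, 8}}  # module speeds in Gb/s

def negotiated_speed(module, peer_speeds):
    """Fastest speed both the module and its switch/HBA peer support,
    or None if the link cannot come up."""
    common = FE_SPEEDS[module] & set(peer_speeds)
    return max(common) if common else None

print(negotiated_speed("4Gb", {4, 8}))  # 4 -- the peer auto-adjusts down
print(negotiated_speed("8Gb", {1}))     # None -- 8 Gb modules cannot run at 1 Gb/s
```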

Storage-system caching

The storage systems have an SP cache consisting of dynamic random access memory (DRAM) on each storage processor (SP). A standby power supply (SPS) protects data in the cache from power loss. If line power fails, the SPS provides sufficient power to let the storage system write cache contents to the vault disks. The vault disks are standard disk modules that store user data but have space reserved outside operating system control. When power returns, the storage system reads the cache information from the vault disks, and then writes it to the file systems on the disks. This design ensures that all write-cached information reaches its destination. During normal operation, no I/O occurs with the vault; therefore, a disk's role as a vault disk has no effect on its performance.

Storage-system caching improves read and write performance for LUNs. Write caching, particularly, helps write performance – an inherent problem for RAID types that require writing to multiple disks. Read and write caching improve performance in two ways:

For a read request – If a read request seeks information that is already in the SP read or write cache, the storage system can deliver it immediately, much faster than a disk access can.

For a write request – The storage system writes updated information to SP write-cache memory instead of to disk, allowing the server to continue as if the write had actually completed. The write to disk from cache memory occurs later, at the most expedient time. If the request modifies information that is in the cache waiting to be written to disk, the storage system updates the information in the cache before writing it to disk; this requires just one disk access instead of two.

The FAST Cache is based on the locality of reference of the data set requested. A data set with high locality of reference that is most frequently accessed is a good candidate for promotion to the FAST Cache. By promoting the data set to the FAST Cache, the storage system services any subsequent requests for this data faster from the Flash disks that make up the FAST Cache, thus reducing the load on the disks in the LUNs that contain the data (the underlying disks). Applications such as file serving and OLTP (online transaction processing) have data sets that can benefit from the FAST Cache.

Disks

The disks – available in different capacities – fit into slots in the DAE. The storage system supports 4 Gb/s DAEs with the high-performance Fibre Channel disks or economical serial ATA (SATA) disks. The 1 TB SATA disks operate on a 4 Gb/s back-end bus like the 4 Gb FC disks, but have a 3 Gb/s bandwidth. Since they have a Fibre Channel interface to the back-end loop, these disks are sometimes referred to as Fibre Channel disks. For information on the currently available disks and their usable capacities, refer to the EMC® CX4 Series Storage Systems – Disk and FLARE® OE Matrix (P/N 300-007-437) on the EMC Powerlink website.

The 1 TB, 5.4K rpm disks are available only in a DAE that is fully populated with these disks, and they cannot be mixed with or replaced by the 1 TB, 7.2K rpm disks in a DAE.

Each disk has a unique ID that you use when you create a RAID group containing the disk or when you monitor the disk's operation. The ID is derived from the Fibre Channel loop number, enclosure address, and disk slot in the enclosure.
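Navisphere commonly presents such IDs in a bus_enclosure_disk form; the three-digit IDs 000 through 004 mentioned below correspond to bus 0, enclosure 0, slots 0 through 4. The following hypothetical Python sketch composes an ID under that assumption and enforces the address and slot ranges described here:

```python
# A minimal sketch, assuming the Bus_Enclosure_Disk ID convention.
def disk_id(bus, enclosure, slot):
    """Compose a disk ID from loop (bus) number, enclosure address (0-7),
    and disk slot (0-14 in a 15-slot DAE)."""
    if not 0 <= enclosure <= 7:
        raise ValueError("enclosure address must be 0 through 7")
    if not 0 <= slot <= 14:
        raise ValueError("a DAE has disk slots 0 through 14")
    return f"{bus}_{enclosure}_{slot}"

print(disk_id(0, 0, 4))   # 0_0_4 -- the last of the five required vault disks
print(disk_id(0, 3, 12))  # 0_3_12 -- bus 0, enclosure 3, slot 12
```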

Enclosure 0 on bus 0 in the storage system must contain disks with IDs 000 through 004. The remaining disk slots can be empty unless they are 1 TB, 5.4K rpm disks, in which case all the disks in the DAE must be 1 TB, 5.4K rpm disks. You can mix Flash (SSD) disks and standard Fibre Channel disks, but not Flash and SATA disks, in the same enclosure. You cannot mix Fibre Channel and SATA disks in the same enclosure.

Disk power savings

Some disks have a power savings (spin-down) option, which lets you assign power savings settings to a RAID group composed of these disks in a storage system running FLARE 04.29 or later. If power savings is enabled for both the storage system and the RAID group, the disks in the RAID group will transition to the low power state after being idle for at least 30 minutes. Power savings is not supported for a RAID group if any LUN in the RAID group is participating in a MirrorView/A, MirrorView/S, SnapView, or SAN Copy session. Background verification of data (sniffing) does not continue when disks are in a low power state. For the currently available disks that support power savings, refer to the EMC® CX4 Series Storage Systems – Disk and FLARE® OE Matrix (P/N 300-007-437) on the EMC Powerlink website.

Basic requirements for shared storage and unshared configurations

For shared switched storage, you need the components described below.

Components for shared switched storage

For shared switched storage, you must use a high-availability configuration. The minimum hardware required for shared switched storage is two servers, each with two Fibre Channel HBAs, two Fibre Channel switch fabrics with one switch per fabric, and one storage system. You can use more servers, more Fibre Channel switches per fabric, and more storage systems (up to four are allowed).

Dimensions and weight

Table 7 CX4-120 hardware dimensions and weight

Component   Dimensions                                                                                 Vertical size   Weight (see notes)
SPE         Height 8.90 cm (3.50 in); width 44.50 cm (17.50 in); depth 62.60 cm (24.25 in)             2 NEMA units    23.81 kg (52.5 lb)
DAE         Height 13.34 cm (5.25 in); width 45.0 cm (17.72 in); depth 35.56 cm (14.00 in)             3 NEMA units    30.8 kg (68 lb) with 15 disks
SPS         Height 4.02 cm (1.58 in); mounting tray width 42.1 cm (16.5 in); depth 60.33 cm (23.75 in) 1 NEMA unit     10.8 kg (23.5 lb) per SPS

Notes: Weights do not include mounting rails; allow 2.3-4.5 kg (5-10 lb) for a rail set. A fully configured DAE includes 15 disk drives that typically weigh 1.0 kg to 1.1 kg (2.25 to 2.4 lb) each. The weights listed in this table do not describe enclosures with Enterprise Flash Drives (solid state disk drives with Flash memory, or SSD drives); each SSD drive module weighs 20.8 ounces (1.3 lb).

Cabinet for hardware components

The 19-inch wide cabinet, prewired with AC power strips and ready for installation, has the dimensions listed in Table 8.

Table 8 Cabinet dimensions

Dimension                               40U-C cabinet
Height (internal, usable for devices)   40U (179 cm; 70 in) from floor pan to cabinet top (fan installed)
Height (overall)                        190 cm (75 in)
Width (usable for devices)              NEMA 19 in standard; rail holes 45.78 cm (18.31 in) apart center-to-center
Width (overall)                         60 cm (24 in)
Depth (front to back rail)              60 cm (24 in)
Depth (overall)                         98.425 cm (39.37 in) without front door; 103.75 cm (41.5 in) with optional front door
Weight (empty)                          177 kg (390 lb) maximum without front door; 200 kg (440 lb) maximum with optional front door
Maximum total device weight supported   945 kg (2100 lb)

This cabinet accepts combinations of:

2U SPE

3U DAE

1U 1200W SPS

1U or 2U switch

The cabinet requires 200 to 240 volts AC at 50/60 Hz, and includes 2 to 4 power strips with compatible outlets. Plug options are L6-30P and IEC 309 30 A.

Filler panels of various sizes are available.

Data cable and configuration guidelines

Each Fibre Channel data port that you use on the storage system requires an optical cable connected to either a server HBA port or a switch port. The cabling between the SPE and the DAEs and between the DAEs is copper. Generally, you should minimize the number of cable connections, since each connection degrades the signal slightly and shortens the maximum distance of the signal.

SP optical cabling to a switch or server

Optical cables connect the small form-factor pluggable (SFP) modules on the storage processors (SPs) to the external Fibre Channel or 10 Gb Ethernet environment. EMC strongly recommends the use of OM3 50 µm cables for all optical connections. Table 9 lists the optical cables that are available for your storage system.

Table 9 Optical cables

Cable type   Operating speed   Length
50 µm        1.0625 Gb         2 m (6.6 ft) minimum to 500 m (1,650 ft) maximum
50 µm        2.125 Gb          2 m (6.6 ft) minimum to 300 m (990 ft) maximum
50 µm        4 Gb              2 m (6.6 ft) minimum to 150 m (495 ft) maximum
50 µm        8 Gb              OM3: 1 m (3.3 ft) minimum to 150 m (495 ft) maximum; OM2: 1 m (3.3 ft) minimum to 50 m (165 ft) maximum
50 µm        10 Gb             OM3: 1 m (3.3 ft) minimum to 300 m (990 ft) maximum; OM2: 1 m (3.3 ft) minimum to 82 m (270 ft) maximum
62.5 µm      1.0625 Gb         2 m (6.6 ft) minimum to 300 m (985 ft) maximum
62.5 µm      2.125 Gb          2 m (6.6 ft) minimum to 150 m (492 ft) maximum
62.5 µm      4 Gb              2 m (6.6 ft) minimum to 70 m (231 ft) maximum

Notes: All cables are multimode, dual LC, with a bend radius of 3 cm (1.2 in) minimum.

The maximum length for either the 62.5 µm or 50 µm cable (noted in the table above) includes two connections or splices between source and destination.
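For planning, Table 9 reduces to a lookup from cable type and operating speed to maximum run length. A hypothetical Python sketch:

```python
# A minimal sketch of the Table 9 maximum lengths, in meters.
MAX_LENGTH_M = {
    ("50um", "1.0625Gb"): 500, ("50um", "2.125Gb"): 300, ("50um", "4Gb"): 150,
    ("50um OM3", "8Gb"): 150, ("50um OM2", "8Gb"): 50,
    ("50um OM3", "10Gb"): 300, ("50um OM2", "10Gb"): 82,
    ("62.5um", "1.0625Gb"): 300, ("62.5um", "2.125Gb"): 150,
    ("62.5um", "4Gb"): 70,
}

def max_cable_m(cable_type, speed):
    """Maximum run in meters; the limit already allows for two
    connections or splices between source and destination."""
    return MAX_LENGTH_M[(cable_type, speed)]

print(max_cable_m("50um", "4Gb"))      # 150
print(max_cable_m("50um OM3", "8Gb"))  # 150
```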

! CAUTION

EMC does not recommend mixing 62.5 μm and 50 μm optical cables in the same link. In certain situations you can add a 50 μm adapter cable to the end of an already installed 62.5 μm cable plant. Contact your service representative for details.

SP-to-DAE and DAE-to-DAE copper cabling

The expansion port interface to the DAE is copper cabling. The following copper cables are available:

Cable type                          Length
SFP-to-HSSDC2 for SP-to-DAE         2 m (6.6 ft) or 5 m (16.5 ft)
HSSDC2-to-HSSDC2 for DAE-to-DAE     2 m (6.6 ft), 5 m (16.5 ft), or 8 m (26.4 ft)

The cable connector can be either a direct-attach shielded SFP (small form-factor pluggable) module or an HSSDC2 (high speed serial data connector), as detailed below:

SP connector — Shielded, 150 Ω differential, shield bonded to SFP plug connector shell (360°), SFF-8470 150 specification for SFP transceiver.

DAE connector — Shielded, 150 Ω differential, shield bonded to plug connector shell (360°), FC-PI-2 standard, revision 13 or later for HSSDC2.

DAE enclosure addresses

Each disk enclosure in a Fibre Channel bus must have a unique enclosure address (also called an EA, or enclosure ID) that identifies the enclosure and determines disk module IDs. In many cases, the factory sets the enclosure address before shipment to coincide with the rest of the system; you will need to reset the selection if you install the enclosure into your rack independently. The enclosure address ranges from 0 through 7.

Figure 10 shows sample back-end connections for a storage system with eight DAEs on its bus. The figure shows a configuration with DAE2Ps or DAE3Ps as the only disk-array enclosures. Environments with a mix of DAE2s and DAE2Ps and/or DAE3Ps follow the same EA, bus balancing, and cabling conventions whenever possible and practical. Each DAE supports two completely redundant loops.

[Figure shows SP A and SP B cabled through their EXP and PRI ports to eight DAEs with enclosure addresses EA0 through EA7, all on bus 0.]

Figure 10 Sample storage-system configuration with eight DAEs

Hardware components worksheet

Use the worksheet in Table 10 and the cable planning template in Figure 11 to plan the hardware components you want. Some installation types do not have switches or multiple servers.

Table 10 Hardware components worksheet

Server information

Server name: Server operating system:

Adapters in server:

Server name: Server operating system:

Adapters in server:

Server name: Server operating system:

Adapters in server:

Server name: Server operating system:

Adapters in server:

Storage-system components

SPEs: DAEs: Cabinets:

Fibre Channel switch information

32-port: 24-port: 16-port: 8-port:

Cables between server and Fibre Channel switch ports — cable A

Cable A1 number: Length (m or ft):

Cable A2 number: Length (m or ft):

Cable A3 number: Length (m or ft):

Cable A4 number: Length (m or ft):

Cables between Fibre Channel switch ports and storage-system SP Fibre Channel data ports — cable B

Cable B1 (up to 6 per CX4-120 or CX4-240 SPE SP, optical) number: Length (m or ft):

Cable B2 (up to 6 per CX4-120 or CX4-240 SPE SP, optical) number: Length (m or ft):

Cables between enclosures — cable C

Cable C1 (copper) number (2 per SPE): Length (m or ft):

Cable C2 (copper) number (2 per DAE): Length (m or ft):

[Figure shows the cable planning template: cables A1 through An between server adapters and switches 1 and 2, cables B1 and B2 between the switches and the storage-system SPs, cable C1 between the SPE and the first DAE, and C2 cables between DAE LCCs; path 1 and path 2 are cabled, other paths are not.]

Figure 11 Cable planning template for a shared storage system (two front-end cables per SP shown)


Cache worksheet

Use the worksheet in Table 11 to plan your cache configuration.

Table 11 Cache worksheet

Cache information

Read cache size: Write cache size:

Cache page size:

Cache information

You can use SP memory for read/write caching. You can use different cache settings for different times of day. For example, for user I/O during the day, use more write cache; for sequential batch jobs at night, use more read cache. Generally, write caching improves performance far more than read caching. Read caching is nonetheless crucial for good sequential read performance, as seen in backups and table scans. The ability to specify caching on a LUN basis provides additional flexibility, since you can use caching for only the units that will benefit from it. Read and write caching is recommended for any type of RAID group or pool, particularly RAID 6 or RAID 5. You can enable caching for specific LUNs in a RAID group and for all the LUNs in a pool, which allows you to tailor your cache resources according to priority.

The maximum cache size per SP is 600 MB, and the maximum write cache size is 600 MB.

Read cache size
If you want a read cache, enter the read cache size you want.

Write cache size
Enter the write cache size that you want. Generally, we recommend that the write cache be the maximum allowed size, which is 600 MB per SP.

Cache page size
Cache page size applies to both read and write caches. It can be 2, 4, 8, or 16 KB. As a general guideline, we suggest 8 KB. The ideal cache page size depends on the server's operating system and application.
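The hypothetical Python sketch below checks cache worksheet entries against these limits; it assumes the read and write caches share the 600 MB per-SP total, per the maximum stated above.

```python
# A minimal sketch under the stated assumptions, not an EMC tool.
MAX_CACHE_MB = 600             # assumed shared per-SP total
PAGE_SIZES_KB = {2, 4, 8, 16}  # valid cache page sizes

def cache_plan_ok(read_mb, write_mb, page_kb):
    """True if the worksheet values fit the per-SP cache limit and
    use a supported cache page size."""
    return (read_mb >= 0 and write_mb >= 0
            and read_mb + write_mb <= MAX_CACHE_MB
            and page_kb in PAGE_SIZES_KB)

print(cache_plan_ok(0, 600, 8))    # True -- all cache devoted to writes
print(cache_plan_ok(100, 600, 8))  # False -- exceeds the per-SP total
```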

Storage-system management

This section describes the storage-system management ports and the management applications you can use, and provides the appropriate planning worksheets.

Major topics are:

Storage-system management ports

Storage-system management ports worksheet

CLARalert® software

CLARalert worksheet

Navisphere® management software

Navisphere Analyzer

Optional Navisphere Quality of Service Manager

Navisphere management worksheet

Storage-system management ports

The storage system has two management ports, one per SP, through which you manage the storage system. For storage-system initialization, these ports must be connected to a host on the network from which the storage system will be initialized. This host must be on the same subnet as these ports. Initialization assigns network and security characteristics to each SP. After initialization, these ports are used for storage-system management, which can be done from any host with a supported browser on the same network as these ports.

A storage system running FLARE 04.29 or later supports one virtual port with VLAN tagging for each management port. If a management port is connected to a switch, you can:

Create a trunk port on the switch per IEEE 802.1q standards.

Configure the trunk port to pass along network traffic with the VLAN tag for the virtual management port and for any other virtual ports that you want.

Configure the trunk port to drop all other traffic.

Storage-system management ports worksheet

Record network information for the storage-system management ports in Table 12. Your network administrator should provide most of this information, except for the login information.

Table 12 Storage-system management port information

Storage-system information
Storage-system serial number:

Physical network information

IPv4 SP port information (default Internet protocol)
SP A IP address:   Subnet mask:   Gateway:
SP B IP address:   Subnet mask:   Gateway:

IPv6 SP port information (manual configuration only)
Global prefix:   Gateway:

Virtual port network information
Storage processor   Virtual port   VLAN ID   IP address
SP A
SP B

Login information
Username:   Password:
Role: Monitor / Manager / Administrator / Security Administrator / Local Replication / Replication / Replication and Recovery

Storage-system information

Fill out the storage-system information section of the worksheet using the information that follows.

Storage-system serial number
The hardware serial number (TLA S/N) is on a tag hanging from the back middle of the storage processor enclosure (Figure 12).


Figure 12 Location of the storage-system serial number on the SPE

Physical network information

Fill out the physical network information section of the worksheet using the information that follows.

The management ports support both IPv4 and IPv6 Internet Protocols concurrently. IPv4 is the default, and you must provide IPv4 addresses. If your network supports IPv6, you can choose to use IPv6 with automatic or manual configuration. For automatic configuration, you do not need to provide any information. For manual configuration, you must provide the global prefix and the gateway.

IPv4 SP port information

Fill out the IPv4 SP port information section of the worksheet using the information that follows.

IP address
Enter the static network IPv4 address (for example, 128.222.78.10) for connecting to the management port of a storage processor (SP A or SP B). Do not use IP addresses 128.221.1.248 through 128.221.1.255, 192.168.1.1, or 192.168.1.2.

Subnet mask
Enter the subnet mask for the LAN to which the storage system is connected for management, for example, 255.255.255.0.

Gateway
Enter the gateway address if the LAN to which the storage-system management port is connected has a gateway.

IPv6 SP port information

Fill out the IPv6 SP port information section of the worksheet using the information that follows.

Global prefix
Enter the IPv6 global prefix, which is 2000::/3.

Gateway
Enter the gateway address if the LAN to which the storage-system management port is connected has a gateway.

Virtual port information

The virtual port for the management port supports 802.1q VLAN tagging. FLARE 04.29 or later is required for virtual ports and VLAN tagging. Navisphere Manager always represents the management ports as virtual ports on storage systems running FLARE 04.29 or later. Fill out the virtual port information section of the worksheet using the information that follows.

Virtual port
Enter the name for the virtual port.

VLAN ID
Enter a number between 1 and 4095; the number must be unique.

IP address
Enter the network IP (Internet Protocol) address (for example, 128.222.78.10) for connecting to the virtual port. Do not use IP addresses 128.221.1.248 through 128.221.1.255, 192.168.1.1, or 192.168.1.2.
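The hypothetical Python sketch below validates a planned address and VLAN ID against the reserved ranges and VLAN rules just described:

```python
# A minimal sketch of the reserved-address and VLAN ID rules above.
import ipaddress

RESERVED = ({ipaddress.ip_address(f"128.221.1.{n}") for n in range(248, 256)}
            | {ipaddress.ip_address("192.168.1.1"),
               ipaddress.ip_address("192.168.1.2")})

def mgmt_ip_ok(addr):
    """True if the address is not one of the reserved addresses."""
    return ipaddress.ip_address(addr) not in RESERVED

def vlan_id_ok(vlan_id, ids_in_use):
    """True if the VLAN ID is in range 1-4095 and unique."""
    return 1 <= vlan_id <= 4095 and vlan_id not in ids_in_use

print(mgmt_ip_ok("128.222.78.10"))  # True
print(mgmt_ip_ok("128.221.1.250"))  # False -- reserved address
print(vlan_id_ok(100, {200, 300}))  # True -- in range and unique
```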

Login information

Fill out the login information section of the worksheet using the information that follows.

Username
Enter a username for the management interface. It must start with a letter and may contain 1 to 32 letters and numbers. The name may not contain punctuation, spaces, or special characters. You can use uppercase and lowercase characters. Usernames are case-sensitive. For example, ABrown is a different username from abrown. Your network administrator may provide the username. If not, then you need to create one.

Password
Enter a password for connecting to the management interface. It may contain 1 to 32 characters, consisting of uppercase and lowercase letters and numbers. As with the username, passwords are case-sensitive. For example, Azure23 is a different password than azure23. The password is valid only for the username you specified. Your network administrator may provide the password. If not, then you need to create one.
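The hypothetical Python sketch below checks a planned username and password against these rules (Python's isalnum() accepts any Unicode letters and digits, slightly looser than the stated rule, so treat it as a sketch):

```python
# A minimal sketch of the username/password rules described above.
def username_ok(name):
    """1-32 letters and digits, starting with a letter."""
    return 1 <= len(name) <= 32 and name[0].isalpha() and name.isalnum()

def password_ok(pw):
    """1-32 letters and digits."""
    return 1 <= len(pw) <= 32 and pw.isalnum()

print(username_ok("ABrown"))   # True (distinct from "abrown" -- case matters)
print(username_ok("4Brown"))   # False -- must start with a letter
print(password_ok("Azure23"))  # True
```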

User roles
Four basic user roles are available - monitor, manager, administrator, and security administrator. All users, except a security administrator, can monitor the status of a storage system. Users with the manager role can also configure a storage system. Administrators can maintain user accounts, as well as configure a storage system. Security administrators can only manage security and domain settings. Select the role you want for the user. For more information on roles, refer to Authentication.

EMC Secure Remote Support IP Client for CLARiiON

ESRS IP Client for CLARiiON software does the following:

Monitors storage systems within a domain for error events.

Automatically and securely sends alerts (call homes) to your service provider about events that require service provider notification.

Allows your service provider to securely connect remotely through the monitor station into your monitored storage system to help troubleshoot storage-system issues.

Sends a notification e-mail to you (the customer) when an alert is sent to your service provider.

We recommend that you use EMC Secure Remote Support (ESRS) IP Client for CLARiiON instead of the CLARalert software if your environment meets the requirements for the ESRS IP Client for CLARiiON software.

Figure 13 shows the communications infrastructure for the Call Home feature of the ESRS IP Client for CLARiiON, and Figure 14 shows the communications infrastructure for the remote access feature of the ESRS IP Client for CLARiiON.

Figure 13 ESRS IP Client for CLARiiON communications infrastructure for Call Home

Figure 14 ESRS IP Client for CLARiiON communications infrastructure for remote access

The ESRS IP Client for CLARiiON:

Requires a monitor station, which is a host or virtual machine running a supported Windows operating system and the ESRS IP Client for CLARiiON software. The monitor station must:

Be a 1.8 GHz or higher computer with at least 2 GB of available storage for the ESRS IP Client.

Not be a server (host connected to storage-system data ports).

Be connected to the same network as your storage-system management ports and connected to the Internet through a proxy server.

Have a static or DHCP reserved IP address.

Be connected over the network to a portal system, which is a storage system running the required FLARE version.

ESRS IP Client for CLARiiON worksheet

If you want to use the ESRS IP Client for CLARiiON to monitor your storage system, record the information in Table 13.

Table 13 ESRS IP Client for CLARiiON worksheet

Monitor station network information

Host identifier: IP address:

Portal system network information

Portal identifier: IP address:

Username: Password:

Proxy server network information

Protocol: HTTPS SOCKS IP address or network name:

Username: Password:

Customer notification information

e-mail address: SMTP server name or IP address:

Powerlink credentials

Username: Password:

Monitored system information

System name or IP address: Port:

Username: Password:

Customer contact information

Name: Phone:

e-mail address: Site:

Monitor station network information

Fill out the monitor station network information section of the worksheet using the information that follows.


Host identifier
Enter the optional name or description that identifies the Windows host that will be the monitor station.

IP address
Enter the network IP (Internet Protocol) address (for example, 128.222.78.10) for connecting to the monitor station. This address must be a static or DHCP reserved IP address. The portal system uses this IP address to manage the monitor station. Since the monitor station can have multiple NICs, you must specify an IP address. This IP address cannot be 128.221.1.250, 128.221.1.251, 192.168.1.1, or 192.168.1.2 because these addresses are reserved.
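The reserved-address rule is easy to check mechanically while planning addresses. The following Python sketch is illustrative only; the reserved addresses are taken from the text above:

    import ipaddress

    # Addresses reserved by the storage system (listed above).
    RESERVED = {ipaddress.ip_address(a) for a in
                ("128.221.1.250", "128.221.1.251",
                 "192.168.1.1", "192.168.1.2")}

    def usable_monitor_station_ip(address):
        # True if the candidate address is not one of the reserved addresses.
        return ipaddress.ip_address(address) not in RESERVED

    print(usable_monitor_station_ip("128.222.78.10"))  # True
    print(usable_monitor_station_ip("192.168.1.1"))    # False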

Portal system network information

Fill out the portal system network information section of the worksheet using the information that follows.

Portal identifier
Enter the optional identifier (such as a hostname) for the storage system that is or will be the portal system.

IP address
Enter the network IP address for connecting to the portal system. This IP address is the IP address that you assign to one of the storage system’s SPs when you initialize it.

Username
Enter the username for logging into the portal.

Password
Enter the password for the username for logging into the portal.

Proxy server network information

Depending on your network configuration, you may have to connect to the Internet (or services outside the local network) through a proxy server. In this situation, the server running the ESRS IP Client for CLARiiON uses the proxy server settings to access the proxy server so it can access Powerlink and send alerts to your service provider. Fill out the proxy server network information section of the worksheet using the information that follows.

Protocol
Enter the protocol (HTTPS or SOCKS) for connecting to the proxy server.


IP address or network name
Enter the network IP address or network name for connecting to the proxy server. The IP address cannot be 128.221.1.250, 128.221.1.251, 192.168.1.1, or 192.168.1.2 because these addresses are reserved.

Username
Enter the username for accessing the proxy server if it requires authentication. The SOCKS protocol requires authentication, and the HTTPS protocol does not.

Password
Enter the password for the username.

Customer notification information

Customer notification information is information about the person or group to be notified when ESRS sends alerts to the service provider. Fill out the customer notification information section of the worksheet using the information that follows.

e-mail address
Enter the e-mail address of the person or group to notify when ESRS sends an alert to your service provider.

SMTP server name or IP address
Enter the address of the server in your corporate network that sends e-mail over the Internet. You can choose to have ESRS use e-mail as your backup communication for the Call Home feature and to use e-mail to notify you when ESRS sends alerts (remote notification events) to your service provider. To use e-mail in these cases, you must provide your SMTP server name or IP address, in addition to your e-mail address.

Powerlink credentials

The ESRS IP Client for CLARiiON software is available from Powerlink. To use the ESRS IP Client for CLARiiON Installation wizard to install this software on the monitor station, you must provide your Powerlink credentials. Fill out the Powerlink credentials section of the worksheet using the information that follows.

Username
Enter the Powerlink username for the person who will install the ESRS IP Client for CLARiiON on the monitor station.


Password
Enter the Powerlink password for the user.

Monitored system information

Fill out the monitored system information section of the worksheet using the information that follows.

System name or IP address
Enter the name or network IP address for connecting to your storage system. This IP address is the IP address that you assign to one of the storage system’s SPs when you initialize it.

Port
Enter the port on the SP for ESRS to access.

Username
Enter the name of the user that will monitor your storage system and will have monitoring access to it.

Password
Enter the user’s password.

Customer contact information

Customer contact information is the information that the service provider needs to contact the person at your storage-system site if the service provider cannot fix the problem with your storage system online. Fill out the customer contact information section of the worksheet using the information that follows.

Name
Enter the name of the person at your storage-system site to contact.

e-mail
Enter the e-mail address of the contact person.

Phone
Enter the phone number of the contact person.


Site
Enter the identifier of the site with your storage system.

CLARalert® software

CLARalert software monitors your storage system’s operation and automatically notifies your service provider of any error events. It requires:

A monitor station, which is a host running a supported Windows operating system. This monitor station cannot be a server (host connected to storage-system data ports) and must be on the same network as your storage-system management ports.

A portal system, which is a storage system running the required FLARE version.

We recommend that you use EMC Secure Remote Support (ESRS) IP Client for CLARiiON instead of the CLARalert software if your environment meets the requirements for the ESRS IP Client for CLARiiON software.

CLARalert worksheet

Record the network information for the CLARalert monitor station and the portal system in Table 14.

Table 14 CLARalert worksheet

Monitor station network information

Host identifier: IP address:

Portal system network information

Portal identifier: IP address: Username: Password:

Monitor station network information

Fill out the monitor station network information section of the worksheet using the information that follows.

Host identifier
Enter the optional name or description that identifies the Windows host that will be the monitor station.


IP address
Enter the network IP (Internet Protocol) address (for example, 128.222.78.10) for connecting to the monitor station. This address must be a static or DHCP reserved IP address. The portal system uses this IP address to manage the monitor station. Since the monitor station can have multiple NICs, you must specify an IP address. This IP address cannot be 128.221.1.250, 128.221.1.251, 192.168.1.1, or 192.168.1.2 because these addresses are reserved.

Portal system network information

Fill out the portal system network information section of the worksheet using the information that follows.

Portal identifier
Enter the optional identifier (such as a hostname) for the storage system that is or will be the portal system.

IP address
Enter the network IP address for connecting to the portal system. This IP address is the IP address that you assign to one of the storage system’s SPs when you initialize it.

Username
Enter the username for logging into the portal.

Password
Enter the password for the username for logging into the portal.

Navisphere® management software

The Navisphere management software consists of the following software products:

Navisphere Manager

Unisphere Server Utility for supported operating systems

Unisphere Host Agent for supported operating systems

Navisphere host-based command line interface (CLI) for supported operating systems

Navisphere Manager

Navisphere Manager (called Manager) lets you manage multiple storage systems on multiple servers simultaneously. It includes an event monitor that checks storage systems for fault conditions and can notify you and/or customer service if any fault condition occurs.

Some of the tasks that you can perform with Navisphere Manager are:

Manage server connections to the storage system

Create RAID groups and thin pools and LUNs on the RAID groups and thin pools

Create storage groups

Manipulate caches

Examine storage-system status and events recorded in the storage-system event logs

Transfer control from one SP to the other

Manager features a user interface (UI) with extensive online help. All CX4 storage systems running FLARE 04.29.000.5.xxx or earlier ship with Navisphere Manager installed and enabled.

Navisphere provides the following security functions and benefits:

Authentication

Authorization

Privacy

Audit

Authentication
Manager uses password-based authentication that is implemented by the storage management server on each storage system in the domain. You assign a username and password when you create either global or local user accounts. Global user accounts apply to all storage systems in a domain and local user accounts apply to a specific storage system. A global user account lets you manage user accounts from a single location. When you create or change a global user account or add a new storage system to a domain, Manager automatically distributes the global account information to all storage systems in the domain.

Authorization
Manager bases authorization on the role associated with the authenticated user. Four roles are available — monitor, manager, administrator, and security administrator. All users, except a security administrator, can monitor the status of a storage system. Users with the manager role can also configure a storage system. Administrators can maintain user accounts, as well as configure a storage system. Security administrators can only manage security and domain settings.

Privacy
Manager encrypts all data that passes between the browser and storage management server, as well as the data that passes between storage-system management servers. This encryption protects the transferred data whether it is on local networks behind corporate firewalls or on the Internet.

Audit
Manager maintains an SP event log that contains a time-stamped record for each event. This record includes information such as an event code and event description. Manager also adds time-stamped audit records to the SP event log each time a user logs in or enters a request. These records include information about the request and the requestor.

Unisphere Server Utility, Unisphere Host Agent, and Navisphere CLI

The Unisphere Server Utility and the Unisphere Host Agent are provided for different operating systems. The Unisphere Server Utility replaces the Navisphere Server Utility and the Unisphere Host Agent replaces the Navisphere Host Agent.

You should install the server utility on each server connected to the storage system. Depending on your application needs, you can also install the host agent on each server connected to a storage system to:

Monitor storage-system events and notify personnel by e-mail, page, or modem when any designated event occurs.

Retrieve LUN world wide name (WWN) and capacity information from Symmetrix® storage systems.

Register the server’s HBAs with the storage system. Alternatively, you can use the Unisphere Server Utility to register the server’s HBAs with the storage system. Table 15 describes the host registration differences between the host agent and the server utility.


Table 15 Host registration differences between the host agent and the server utility

Function: Pushes LUN mapping and OS information to the storage system.
Unisphere Host Agent: Yes – LUN mapping information is displayed in the Manager UI next to the LUN icon or with the CLI using the lunmapinfo command.
Unisphere Server Utility (CX4 series, CX3 series, and CX series storage systems): No – LUN mapping information is not sent to the storage system. Only the server’s name, ID, and IP address are sent to the storage system. Note: The text Manually Registered appears next to the hostname icon in the Manager UI, indicating that the host agent was not used to register this server.

Function: Runs automatically to send information to the storage system.
Unisphere Host Agent: Yes – No user interaction is required.
Unisphere Server Utility (CX4 series, CX3 series, and CX series storage systems): No – You must manually update the information by starting the utility, or you can create a script to run the utility. Since you run the server utility on demand, you have more control as to how often or when the utility is executed.

Function: Requires network connectivity to the storage system.
Unisphere Host Agent: Yes – Network connectivity allows LUN mapping information to be available to the storage system.
Unisphere Server Utility (CX4 series, CX3 series, and CX series storage systems): No – LUN mapping information is not sent to the storage system. Note that if you are using the server utility to upload a high-availability report to the storage system, you must have network connectivity.

The Navisphere CLI provides commands that implement Navisphere Manager UI functions, in addition to commands that implement the functions of the UIs for the optional data replication and data mobility software. A major benefit offered by the CLI is the ability to write command scripts to automate management operations.
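For example, a script can loop over both SPs of a storage system and capture CLI output for later review. The sketch below is hypothetical: the executable name naviseccli, the getagent status command, and the SP addresses are assumptions, so check the CLI reference for your FLARE release for the exact syntax and authentication options:

    import subprocess

    def run_navisphere_cli(sp_address, *cli_args):
        # Run one Navisphere CLI command against an SP and return its output.
        command = ["naviseccli", "-h", sp_address] + list(cli_args)
        return subprocess.run(command, capture_output=True,
                              text=True, check=True).stdout

    # Example: query basic agent/SP information on both SPs of a system.
    for sp in ("10.14.20.57", "10.14.20.58"):  # example SP addresses
        print(run_navisphere_cli(sp, "getagent"))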

Using Navisphere Manager software

With Navisphere Manager you can assign storage systems on an intranet or the Internet to a storage domain. For any installation, you can create one or more domains, provided that each storage system is in only one domain. Each storage domain must have at least one member with Manager installed.

Each storage system in the domain is accessible from any other in the domain. Using an Internet browser, you point at a storage system that has Manager installed. The security software then prompts you to log in. After logging in, depending on the privileges of your account, you can monitor, manage, and/or define user accounts for any storage system in the domain. You cannot view storage systems outside the domain from within the domain.


You can run the Internet browser on any supported station (often a PC or laptop) with a network controller. At least one storage system – ideally at least two for higher availability – in a domain must have Unisphere or Navisphere Manager installed, preferably Unisphere.

Figure 15 shows an Internet configuration that connects 9 storage systems. It shows 2 domains, a U.S. Division domain with 5 storage systems (4 systems on SANs) and a European Division domain with 4 storage systems. The 13 servers that use the storage systems may be connected to the same or a different network, but the intranet shown is the one used to manage the storage systems.


Figure 15 Storage domains on the Internet

Navisphere Analyzer

Navisphere Analyzer lets you measure, compare, and chart the performance of SPs, LUNs, and disks to help you anticipate and find bottlenecks in your storage system.

Optional Navisphere Quality of Service Manager

The optional Navisphere Quality of Service Manager (or Navisphere QoS Manager) lets you allocate storage-system performance resources on an application-by-application basis. You can use Navisphere QoS Manager to solve performance conflicts in consolidated environments where multiple applications share the same storage system. Within storage-system capacity, Navisphere QoS Manager lets you meet specific performance targets for applications, and create performance thresholds to prevent applications from monopolizing storage-system performance.

Navisphere management worksheet

The worksheet in Table 16 will help you plan your Navisphere storage management configuration.

Table 16 Storage management worksheet for Navisphere software

Storage-system name: Domain name:

Navisphere Analyzer Navisphere QoS Manager

Server name: Operating system:

Server name: Operating system:

Server name: Operating system:

Server name: Operating system:

Server name: Operating system:

Server name: Operating system:

Server name: Operating system:

Server name: Operating system:


Basic storage concepts

This section explains traditional provisioning and the Virtual Provisioning™ software concepts that you should understand to plan your storage-system configuration.

Major topics are:

Traditional provisioning concepts, page 48

Virtual Provisioning concepts, page 49

Virtual Provisioning versus traditional provisioning, page 50

Basic RAID concepts, page 52

Supported RAID types, page 53

RAID type benefits and trade-offs, page 64

RAID type guidelines for RAID groups or thin pools, page 69

Sample applications for RAID group or thin pool types, page 70

Fully automated storage tiering (FAST), page 72

Traditional provisioning concepts

Traditional provisioning allows you to assign storage capacity that is physically available on the storage-system disks to a server (host). You allocate this physical capacity using LUNs that you create on RAID groups. LUNs on RAID groups are often called RAID group LUNs.

RAID groups
A RAID group is a set of disks of the same type on which you create LUNs. These disks can be the vault disks (000–004). A RAID group has one of the following RAID types: RAID 6, RAID 5, RAID 3, RAID 1, RAID 1/0, RAID 0, individual disk, or hot spare.

LUN (traditional LUN)
A traditional LUN is a logical unit that groups space on the disks in a RAID group into one span of disk storage space and looks like an individual disk to a server’s operating system. The capacity for each LUN you create is distributed equally across the disks in the RAID group. The amount of physical space allocated to a LUN is the same as the user capacity that the server’s operating system sees. The storage capacity of a LUN is set when you create the LUN; however, you can expand it using metaLUNs, that is, by adding one or more other LUNs.


If the storage system is running FLARE 04.29 or later, you can shrink a RAID group LUN. A RAID group LUN can be a hot spare.

Virtual Provisioning concepts

Virtual Provisioning, unlike traditional provisioning, allows you to assign more storage capacity to a server (host) than is physically available by using thin pools on which you create thin LUNs. Virtual Provisioning is available on storage systems running FLARE 04.28.000.5.5xx or later that have the optional Virtual Provisioning enabler installed.

Thin pools
Thin pools are supported for storage systems running FLARE 04.28.000.5.5xx or 04.29.000.5.xxx with the Virtual Provisioning enabler installed. A thin pool is a set of disks that shares its user capacity with one or more thin LUNs. We recommend that all disks within the pool have the same capacity. A thin pool has the RAID 6 or RAID 5 type. RAID 6 is the default RAID type. The storage-system software monitors storage demands on pools and adds storage capacity to them, as required, up to a certain specified amount.

Thin LUN
Thin LUNs are supported for storage systems with the Thin Provisioning enabler installed. A thin LUN is a logical unit of storage in a pool that looks like an individual disk to an operating system. A thin LUN competes with other thin LUNs in the pool for the pool’s available storage. The capacity of the thin LUN that is visible to the server is independent of the available physical storage in the pool. To a server, a thin LUN behaves very much like a RAID group LUN. Unlike a RAID group LUN or thick LUN, however, a thin LUN can run out of disk space if the pool to which it belongs runs out of disk space. By default, the storage system issues a warning alert when 70% of the pool’s space has been consumed; when 85% of the space has been consumed, it issues a critical alert. You can customize the thresholds that determine when these alerts are issued. As thin LUNs continue consuming the pool’s space, both alerts continue to report the actual percentage of consumed space. A thin LUN uses slightly more capacity than the amount of user data written to it because of the metadata required to reference the data. A thin LUN cannot be a hot spare.
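The threshold behavior is simple to model. The following Python sketch is illustrative only, using the default 70% warning and 85% critical thresholds described above:

    def pool_alert_level(consumed_gb, pool_capacity_gb,
                         warning_pct=70.0, critical_pct=85.0):
        # Classify thin-pool space consumption against the alert thresholds.
        percent = 100.0 * consumed_gb / pool_capacity_gb
        if percent >= critical_pct:
            return "critical (%.0f%% consumed)" % percent
        if percent >= warning_pct:
            return "warning (%.0f%% consumed)" % percent
        return "ok (%.0f%% consumed)" % percent

    print(pool_alert_level(720, 1000))  # warning (72% consumed)
    print(pool_alert_level(900, 1000))  # critical (90% consumed)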

Virtual Provisioning versus traditional provisioning

Table 17 lists Virtual Provisioning and traditional provisioning trade-offs.

Table 17 Virtual Provisioning and traditional provisioning trade-offs

RAID types
Virtual Provisioning: RAID 6 and RAID 5 thin pools.
Traditional provisioning: RAID 6, RAID 5, RAID 3, RAID 1/0, and RAID 1 RAID groups, or individual disk or hot spare.

MetaLUNs
Virtual Provisioning: Not supported.
Traditional provisioning: Fully supported.

LUN expansion
Virtual Provisioning: Not supported.
Traditional provisioning: Fully supported.

LUN shrinking
Virtual Provisioning: Not supported.
Traditional provisioning: Fully supported for Windows Server 2008 hosts connected to a storage system running FLARE 04.29 or later.

LUN migration
Virtual Provisioning: Fully supported.
Traditional provisioning: Fully supported.

Disk usage
Virtual Provisioning: Disks in a thin pool can be Flash (SSD) disks only if all the disks in the pool are Flash disks. You cannot intermix Flash disks and other disks in a pool. The disks in a thin pool cannot be vault disks 000–004.
Traditional provisioning: All the disks in a RAID group must be of the same type.

Space efficiency
Virtual Provisioning: When you create a thin LUN, a minimum of 2 GB of space on the pool is reserved for the thin LUN. Space is assigned to a pool on an as-needed basis. Since the thin LUNs on a pool compete for the pool’s space, a pool can run out of space for its thin LUNs.
Traditional provisioning: When you create a LUN, the LUN is assigned physical space on the RAID group equal to the LUN’s size. This space is always available to the LUN even if it does not actually use the space.

Hot sparing
Virtual Provisioning: You cannot create a hot spare on a pool. Any hot spare, except one that is a Flash (SSD) disk, can be a spare for any disk in a pool. A hot spare that is a Flash disk can only be a spare for Flash disks.
Traditional provisioning: You can create a hot spare on a RAID group. Any hot spare, except one that is a Flash (SSD) disk, can be a spare for any disk in a RAID group. A hot spare that is a Flash disk can only be a spare for Flash disks.

Performance
Virtual Provisioning: Thin LUN performance is typically lower than LUN performance.
Traditional provisioning: LUN performance is typically faster than thin LUN performance.

Manual administration
Virtual Provisioning: Thin pools require less manual administration than RAID groups.
Traditional provisioning: RAID groups require more manual administration than pools.

Use with SnapView
Virtual Provisioning: A thin LUN can be a snapshot source LUN, a clone LUN, or a clone source LUN, but it cannot be a clone private LUN or in the reserved LUN pool.
Traditional provisioning: Fully supported for traditional LUNs.

Use with MirrorView/A or MirrorView/S
Virtual Provisioning: Mirroring with thin LUNs as primary or secondary images is supported only between storage systems running FLARE 04.29 or later. For mirroring between storage systems running FLARE 04.29, the primary image, secondary image, or both images can be thin LUNs.
Traditional provisioning: Fully supported for traditional LUNs.

Use with SAN Copy
Virtual Provisioning: Thin LUNs are supported only for SAN Copy sessions in the following configurations: within a storage system running FLARE 04.29 or later; between systems running FLARE 04.29 or later; and between systems running FLARE 04.29 or later and systems running FLARE 04.28.005.504 or later. The source LUN must be on the storage system that owns the SAN Copy session.
Traditional provisioning: Fully supported for traditional LUNs in all configurations.

Guidelines for using Virtual and traditional provisioning

To decide when to use Virtual or traditional provisioning, consider the following guidelines:

Use Virtual Provisioning when:

Ease of use is more important than absolute performance.

Applications have controlled capacity growth and moderate to low performance requirements.

Examples:

Unstructured data in file systems

Archives

Data warehousing

Research and development

Use traditional provisioning when:

Absolute performance is most important.


Applications have high I/O activity and low latency requirements.

Examples:

Classic online transaction processing (OLTP) applications

Backups of raw devices

Databases that initialize every block

Basic RAID concepts

This section discusses disk striping, mirroring, pools, and LUNs.

Disk striping

Using disk stripes, the storage-system hardware can read from and write to multiple disks simultaneously and independently. By allowing several read/write heads to work on the same task at once, disk striping can enhance performance. The amount of information read from or written to each disk makes up the stripe element size. The stripe size is the stripe element size multiplied by the number of data disks (not mirror or parity disks) in a RAID group or thin pool. For example, assume a stripe element size of 128 sectors (the default), then:

For a RAID 6 storage pool with 6 disks (the equivalent of 4 data disks and 2 parity disks), the stripe size is 128 x 4 or 512 sectors per stripe.

For a RAID 5 storage pool with 5 disks (the equivalent of 4 data disks and 1 parity disk), the stripe size is 128 x 4 or 512 sectors per stripe.

For a RAID 1/0 storage pool with 6 disks (3 data disks and 3 mirror disks), the stripe size is 128 x 3 or 384 sectors per stripe.

The storage system uses disk striping with most RAID types.
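The stripe-size arithmetic above can be captured in a couple of lines. This is just a worked restatement of the examples (on these systems a sector is 512 bytes, so a 128-sector stripe element is 65,536 bytes):

    # Stripe size = stripe element size x number of data disks.
    def stripe_size_sectors(stripe_element_sectors, data_disks):
        return stripe_element_sectors * data_disks

    DEFAULT_ELEMENT = 128  # default stripe element size, in sectors

    print(stripe_size_sectors(DEFAULT_ELEMENT, 4))  # RAID 6 with 6 disks: 512
    print(stripe_size_sectors(DEFAULT_ELEMENT, 4))  # RAID 5 with 5 disks: 512
    print(stripe_size_sectors(DEFAULT_ELEMENT, 3))  # RAID 1/0 with 6 disks: 384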

Mirroring

Mirroring maintains a copy of a logical disk image that provides continuous access if the original image becomes inaccessible. The system and user applications continue running on the good image without interruption.

You can create a mirror by binding disks as a RAID 1 group (mirrored pair) or a RAID 1/0 group (mirrored RAID 0 group); the hardware will then mirror the disks automatically.


Pools and LUNs

You can create multiple LUNs on one RAID group or thin pool, and then allot each LUN to a different user or application on a server. For example, you could create three LUNs with 100, 400, and 573 GB of storage capacity for temporary, mail, and customer files. Note that the storage capacity of a RAID group LUN is the actual physical capacity on the storage system, whereas the storage capacity of a thin LUN may not be actual physical capacity.

One disadvantage of multiple LUNs on a storage pool is that I/O to each LUN may affect I/O to others on the RAID group or thin pool; that is, if traffic to one LUN is very heavy, I/O performance with other LUNs may be degraded. The main advantage of multiple LUNs per RAID group or thin pool is the ability to divide the enormous amount of disk space that a RAID group or thin pool provides.

Figure 16 shows three LUNs on one 5-disk RAID group or thin pool.

Figure 16 Multiple LUNs on a RAID group or thin pool (LUN 0 temp, LUN 1 mail, and LUN 2 customers each spanning all five disks)

Supported RAID types

This section discusses the RAID 6, RAID 5, RAID 3, RAID 1, RAID 1/0, and RAID 0 types and also individual disks, hot spares, and proactive sparing.

RAID 6 (double distributed parity)

RAID 6 is supported for all RAID groups and thin pools.

A single RAID 6 group usually consists of 6 or 12 disks, but can have 4, 8, 10, 14, or 16 disks. On a RAID 6 group, you can create a maximum of 256 LUNs – the maximum number of LUNs per RAID group – to allocate disk space to users and applications that are on different servers.

A single RAID 6 thin pool consists of a minimum of 4 disks up to the maximum number of disks per thin pool supported by the storage system. On a RAID 6 thin pool, you can create up to the maximum number of LUNs supported by the storage system to allocate disk space to users and applications that are on different servers. Table 18 lists these maximum limits.

Table 18 Thin pool disk and thin LUN limits

For each FLARE version, the maximum numbers of disks per thin pool, disks in all thin pools per storage system, thin pools per storage system, and thin LUNs per storage system are:

FLARE 04.29 or later: 40 disks per thin pool; 80 disks in all thin pools per storage system; 20 thin pools per storage system; 512 thin LUNs per storage system.

FLARE 04.28: 20 disks per thin pool; 40 disks in all thin pools per storage system; 10 thin pools per storage system; 256 thin LUNs per storage system.
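When scripting configuration checks, these limits can be kept in a small lookup table. The sketch below simply restates Table 18 in Python; it is illustrative only, not product code:

    # Thin pool and thin LUN limits from Table 18, keyed by FLARE version.
    THIN_LIMITS = {
        "04.29": {"disks_per_pool": 40, "disks_all_pools": 80,
                  "pools_per_system": 20, "thin_luns_per_system": 512},
        "04.28": {"disks_per_pool": 20, "disks_all_pools": 40,
                  "pools_per_system": 10, "thin_luns_per_system": 256},
    }

    def pool_size_ok(flare_version, planned_disks):
        # True if a planned thin pool fits within the per-pool disk limit.
        return planned_disks <= THIN_LIMITS[flare_version]["disks_per_pool"]

    print(pool_size_ok("04.28", 25))  # False: over the 20-disk limit
    print(pool_size_ok("04.29", 25))  # True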

A RAID 6 group or thin pool uses disk striping. In a RAID 6 group or thin pool, some space is dedicated to parity and the remaining disk space is for data. The storage system writes two independent sets of parity information — row parity and diagonal parity — that lets the group or thin pool continue operating if one or two disks fail or if a hard media error occurs during a single-disk rebuild. When you replace the failed disks, the SP rebuilds, or with proactive sparing, continues rebuilding, the group or thin pool by using the information stored on the working disks. Performance is degraded while the SP rebuilds the group or thin pool. This degradation is lessened with proactive sparing. During the rebuild, the storage system continues to function and gives users access to all data, including data stored on the failed disks.

Proactive sparing creates a hot spare (a proactive spare) of a disk that is becoming prone to errors by copying the contents of the disk to a hot spare. Subsequently, you can remove the disk before it fails and the proactive spare then takes its place.


A RAID 6 group or thin pool distributes parity evenly across all drives so that parity drives are not a bottleneck for write operations. Figure 17 shows user and parity data with the default stripe element size of 128 sectors (65,536 bytes) in a 6-disk RAID 6 group or thin pool. Notice that the disk block addresses in the stripe proceed sequentially from the first disk to the second, third, fourth, fifth, and sixth, then back to the first, and so on.

Figure 17 RAID 6 group or thin pool (user data plus row parity and diagonal parity data striped across six disks)

A RAID 6 group or thin pool offers good read performance and good write performance. Write performance benefits greatly from storage-system caching.

RAID 5 (distributed parity)

RAID 5 is supported for RAID groups and thin pools.


A single RAID 5 group usually consists of 5 disks, but can have 3 to 16 disks. On a RAID 5 group, you can create up to 256 LUNs – the maximum number of LUNs per RAID group – to allocate disk space to users and applications that are on different servers.

A single RAID 5 thin pool consists of a minimum of 3 disks up to the maximum number of disks per pool supported by the storage system. On a pool, you can create up to the maximum number of LUNs supported by the storage system to allocate disk space to users and applications that are on different servers. Table 18 lists these maximum limits.

A RAID 5 group or thin pool uses disk striping. The storage system writes parity information that lets the group or thin pool continue operating if a disk fails. When you replace the failed disk, the SP rebuilds, or with proactive sparing continues rebuilding, the group or thin pool using the information stored on the working disks. Performance is degraded while the SP rebuilds the group or thin pool. This degradation can be lessened by using the proactive sparing feature. During the rebuild, the storage system continues to function and gives users access to all data, including data stored on the failed disk.

Figure 18 shows user and parity data with the default stripe element size of 128 sectors (65,536 bytes) in a 5-disk RAID 5 group or thin pool. Notice that the disk block addresses in the stripe proceed sequentially from the first disk to the second, third, fourth, and fifth, then back to the first, and so on.


Figure 18 RAID 5 group or thin pool (user data and parity data striped across five disks)

RAID 5 groups or thin pools offer excellent read performance and good write performance. Write performance benefits greatly from storage-system caching.

RAID 3 (single disk parity)

RAID 3 is supported for RAID groups only. A single RAID 3 group consists of five or nine disks and uses disk striping. To obtain the best bandwidth performance with a RAID 3 LUN, you need to limit concurrent access to the LUN. For example, a RAID 3 group may have multiple LUNs, but the highest bandwidth is achieved with one to four threads of concurrent, large I/O.

Performance is degraded while the SP rebuilds the group. This degradation can be lessened by using the proactive sparing feature. However, the storage system continues to function and gives users access to all data, including data stored on the failed disk.

Figure 19 shows user and parity data with the default stripe element size of 128 sectors (65,536 bytes) in a RAID 3 group. Notice that the disk block addresses in the stripe proceed sequentially from the first disk to the second, third, and fourth, then back to the first, and so on.

Figure 19 RAID 3 group (user data on four disks with parity data on the fifth disk)

RAID 3 differs from RAID 6 and RAID 5 in one major way. With a RAID 3 group, the parity information is stored on one disk; with a RAID 6 or RAID 5 group or thin pool, it is stored on all disks. RAID 3 can perform sequential I/O better than RAID 6 or RAID 5, but does not handle random access as well.

RAID 3 is best thought of as a specialized RAID 5 for applications with large or sequential I/O. However, with the write cache enabled for RAID 3 LUNs, RAID 3 is equivalent to RAID 5, and can handle some level of concurrency. A RAID 3 group works well for applications that use I/O with blocks 64 KB and larger. By using both the read and write cache, a RAID 3 group can handle several concurrent streams of access.

RAID 3 groups do not require any special buffer area. No fixed memory is required to use write cache with RAID 3. Simply allocate write cache as you would for RAID 5, and ensure that caching is turned on for the LUNs in the RAID 3 groups. Access to RAID 3 LUNs is compatible with concurrent access to LUNs of other RAID types on the storage system.

RAID 1 (mirrored pair)

RAID 1 is supported for RAID groups only. A single RAID 1 group consists of two disks that are mirrored automatically by the storage-system hardware. With a RAID 1 group, you can create multiple RAID 1 LUNs to apportion disk space to different users, servers, and applications.

RAID 1 hardware mirroring within the storage system is not the same as software mirroring, remote mirroring, or hardware mirroring for other kinds of disks. Functionally, the difference is that you cannot manually stop mirroring on a RAID 1 mirrored pair, and then access one of the images independently. If you want to use one of the disks in such a mirror separately, you must unbind the mirror (losing all data on it), rebind the disk as the type you want, and software format the newly bound LUN.

With a storage system, RAID 1 hardware mirroring has the following advantages:

Automatic operation (you do not have to issue commands to initiate it)

Physical duplication of images

Rebuild period that you can select, during which the SP recreates the second image after a failure

With a RAID 1 mirrored pair, the storage system writes the same data to both disks, as shown in Figure 20.

Figure 20 RAID 1 mirrored pair (the same user data written to both disks)


RAID 1/0 (mirrored nonredundant array)

RAID 1/0 is supported for RAID groups only. A single RAID 1/0 group consists of 2, 4, 6, 8, 10, 12, 14, or 16 disks. These disks make up 2 mirror images, with each image including 2 to 8 disks. The hardware automatically mirrors the disks. A RAID 1/0 group uses disk striping. It combines the speed advantage of RAID 0 with the redundancy advantage of mirroring. With a RAID 1/0 group, you can create up to 128 RAID 1/0 LUNs to apportion disk space to different users, servers, and applications.

Figure 21 shows the distribution of user data with the default stripe element size of 128 sectors (65,536 bytes) in a 6-disk RAID 1/0 group. Notice that the disk block addresses in the stripe proceed sequentially from the first mirrored disks (first and fourth disks) to the second mirrored disks (second and fifth disks), to the third mirrored disks (third and sixth disks), and then from the first mirrored disks, and so on.


Figure 21 RAID 1/0 group (user data striped across the three disks of the primary image and mirrored on the three disks of the secondary image)

A RAID 1/0 group can survive the failure of multiple disks, provided that one disk in each image pair survives.

RAID 0 (nonredundant RAID striping)

RAID 0 is supported for RAID groups only.

! CAUTION

A RAID 0 group provides no protection for your data. EMC does not recommend using a RAID 0 group unless you have some way of protecting your data, such as software mirroring.


A single RAID 0 group consists of 3 to a maximum of 16 disks. A RAID 0 group uses disk striping, in which the hardware writes to or reads from multiple disks simultaneously. You can create up to 128 LUNs in any RAID 0 group.

Unlike the other RAID levels, with RAID 0 the hardware does not maintain parity information on any disk; this type of group has no inherent data redundancy. As a result, if any failure (including an unrecoverable read error) occurs on a disk in the LUN, the information on the LUN is lost.

RAID 0 offers enhanced performance through simultaneous I/O to different disks. A desirable alternative to RAID 0 is RAID 1/0, which does protect your data.

Proactive sparing is not supported for a RAID 0 group.

Individual disk

The individual disk type is supported for RAID groups only. An individual disk unit is a disk bound to be independent of any other disk in the cabinet. An individual unit has no inherent high availability, but you can make it highly available by using software mirroring with another individual unit.

Hot spare

A hot spare is a dedicated replacement disk on which users cannot store information. A hot spare is global: if any disk in a RAID 6 group or thin pool, RAID 5 group or thin pool, RAID 3 group, RAID 1/0 group, or RAID 1 group fails, the SP automatically rebuilds the failed disk’s structure on the hot spare. When the SP finishes rebuilding, the RAID group or thin pool functions as usual, using the hot spare instead of the failed disk. When you replace the failed disk, the SP copies the data from the former hot spare onto the replacement disk.

When the copy is done, the RAID group or thin pool consists of disks in the original slots, and the SP automatically frees the hot spare to serve as a hot spare again. A hot spare is most useful when you need the highest data availability. It eliminates the time and effort needed for someone to notice that a disk has failed, find a suitable replacement disk, and insert the disk.

When you plan to use a hot spare, make sure the disk has the capacity to serve in any RAID group or thin pool in the storage system. A RAID group or thin pool cannot use a hot spare that is smaller than a failed disk in the RAID group or thin pool.

You can have one or more hot spares per storage system. You can make any disk in the storage system a hot spare, except a disk that stores FLARE or the write cache vault; that is, a hot spare can be any disk except disk IDs 000 through 004.

If you use hot spares of different sizes, the storage system will automatically use the hot spare of the proper size in place of a failed disk.

! CAUTION

Do not use a SATA disk as a spare for a Fibre-Channel-based LUN, and do not use a Fibre Channel disk as a spare for a SATA-based LUN. A hot spare that is a Flash (SSD) disk can be used only as a spare for a Flash disk. If you have Flash disks in a RAID group, you should create at least one hot spare that is a Flash disk.
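Taken together, the capacity rule and the disk-type rules above amount to a simple eligibility test. The following Python sketch is illustrative only; the disk IDs and sizes are made-up examples:

    from dataclasses import dataclass

    @dataclass
    class Disk:
        disk_id: str
        capacity_gb: int
        disk_type: str  # "FC", "SATA", or "Flash"

    def eligible_spares(failed, spares):
        # A spare must be at least as large as the failed disk and of the
        # same disk technology (no SATA/FC substitution; Flash for Flash).
        return [s for s in spares
                if s.capacity_gb >= failed.capacity_gb
                and s.disk_type == failed.disk_type]

    failed = Disk("0_0_7", 300, "FC")
    spares = [Disk("0_0_14", 146, "FC"),   # too small
              Disk("1_0_14", 450, "FC"),   # eligible
              Disk("1_0_13", 450, "SATA")] # wrong disk type
    print([s.disk_id for s in eligible_spares(failed, spares)])  # ['1_0_14']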

An example of hot spare usage follows in Figure 22.

Figure 22 How a hot spare works
1. RAID 5 groups consist of disks 0-4 and 5-9; mirrored pairs are disks 10-11 and 12-13; disk 14 is a hot spare.
2. Disk 3 fails.
3. RAID 5 group becomes disks 0, 1, 2, 14, and 4; now no hot spare is available.
4. System operator replaces failed disk 3 with a functional module.
5. Disk 14 copies data to new disk 3.
6. Once again, RAID 5 group consists of disks 0-4 and the hot spare is disk 14.


Proactive sparing

Proactive sparing lets you proactively create a hot spare of a disk that is becoming prone to errors (a proactive candidate). The proactive sparing operation copies the contents of a disk to a hot spare before the disk fails. Subsequently, you can remove the disk from the storage system before it fails and the hot spare then takes its place. The proactive sparing operation is initiated automatically or manually. When the storage-system software identifies certain types or frequencies of errors on a disk, it identifies the disk as a proactive candidate, and automatically begins the proactive sparing operation. The storage-system software copies the contents of the proactive candidate to the proactive hot spare. Additionally, you can manually copy all the data from a proactive candidate to a proactive spare using Navisphere Manager. When a proactive sparing copy operation completes, the proactive candidate is faulted. When you replace the faulted disk, the storage system copies the data from the proactive spare to the replacement disk.

Any available hot spare can be a proactive spare, but only one hot spare can be used for proactive sparing at a time. If the storage system has only one hot spare, it can be a proactive spare. Table 19 lists the number of concurrent proactive spares supported per storage system.

Table 19 Proactive spares per RAID type

RAID type: Number of proactive spares
RAID 6, RAID 5, RAID 3: 1
RAID 1: 1 per pair
RAID 1/0: 1 per mirrored pair

Proactive sparing is not supported for RAID 0 or individual disk units.

RAID type benefits and trade-offs

This section discusses performance, storage flexibility, data availability, and disk space usage for the different RAID types.

Performance

RAID 6 and RAID 5, with individual access, provide high read throughput by allowing simultaneous reads from each disk in the RAID group or thin pool. RAID 6 and RAID 5 write performance is excellent when the storage system uses write caching. RAID 6 group performance is better than RAID 6 thin pool performance, and RAID 5 group performance is better than RAID 5 thin pool performance.

RAID 3, with parallel access, provides high throughput for sequential requests. Large block sizes (more than 64 KB) are most efficient. RAID 3 attempts to write full stripes to the disks to avoid parity update operations.

Generally, the performance of a RAID 3 group increases as the size of the I/O request increases. Read performance increases incrementally with read requests up to 1 MB. Write performance increases incrementally for sequential write requests that are greater than 256 KB.

RAID 1 read performance will be higher than that of an individual disk, while write performance remains approximately the same as that of an individual disk.

A RAID 0 group (nonredundant RAID striping) or RAID 1/0 group (mirrored RAID 0 group) can have as many I/O operations occurring simultaneously as there are disks in the group. In general, the performance of RAID 1/0 equals the number of disk pairs times the RAID 1 performance number. If you want high throughput for a specific LUN, use a RAID 1/0 or RAID 0 group. A RAID 1/0 group requires at least two disks; a RAID 0 group requires at least three disks.


If you create multiple LUNs on a group, the LUNs share the group's disks, and the I/O demands of each LUN affect the I/O service time for the other LUNs.

Storage flexibility

On a CX4-120 storage system you can create up to 1024 LUNs, and 512 of these LUNs can be thin LUNs. On a RAID group you can create up to 256 LUNs, and on a thin pool you can create up to 512 LUNs. The number of LUNs that you can create adds flexibility, particularly with large disks, since it lets you apportion LUNs of various sizes to different servers, applications, and users.
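These CX4-120 limits can be restated as a quick planning check. The sketch below is illustrative only:

    # CX4-120 LUN limits from the paragraph above.
    MAX_LUNS_TOTAL = 1024
    MAX_THIN_LUNS = 512
    MAX_LUNS_PER_RAID_GROUP = 256
    MAX_LUNS_PER_THIN_POOL = 512

    def plan_fits(total_luns, thin_luns):
        # True if a planned LUN layout fits within the system-wide limits.
        return total_luns <= MAX_LUNS_TOTAL and thin_luns <= MAX_THIN_LUNS

    print(plan_fits(900, 400))  # True
    print(plan_fits(900, 600))  # False: too many thin LUNs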


Data availability and disk space usage in RAID groups

If data availability is critical and you cannot afford to wait hours to replace a disk, rebind it, make it accessible to the operating system, and load its information from backup, then use a redundant RAID group – RAID 6, RAID 5, RAID 3, RAID 1, or RAID 1/0 – or a redundant thin pool – RAID 6 or RAID 5. If data availability is not critical, or disk space usage is critical, bind an individual unit. Figure 23 illustrates disk usage in RAID group configurations and Figure 24 illustrates disk usage in thin pool configurations.


Figure 23 Disk space usage in sample RAID group configurations (RAID 0 group: 100% user data; RAID 1 mirrored pair: 50% user data, 50% redundant data; individual disk unit: 100% user data; hot spare: reserved, no user data; RAID 1/0 group: 50% user data, 50% redundant data; RAID 6 group: 67% user data, 33% parity data; RAID 5 group and RAID 3 group: 80% user data, 20% parity data)


Figure 24 Disk space usage in sample thin pool configurations (hot spare: reserved, no user data; RAID 6 pool: 67% user data, 33% parity data; RAID 5 pool: 80% user data, 20% parity data)

A RAID 1 or RAID 1/0 group provides very high data availability. It is more expensive than a RAID 6, RAID 5, or RAID 3 group, since only 50 percent of the total disk capacity is available for user data.

A RAID 6, RAID 5, or RAID 3 group provides high data availability, but requires more disks than a RAID 1 group. A RAID 6 group provides the highest data availability of these three groups. Likewise, a RAID 6 thin pool provides higher data availability than a RAID 5 thin pool. In a RAID 6 group or thin pool, the disk space available for user data is the total capacity of the disks in the RAID group or thin pool minus the capacity of two disks in the RAID group or thin pool. In a RAID 5 group or thin pool or a RAID 3 group, the disk space available for user data is the total capacity of the disks in the RAID group or thin pool minus the capacity of one disk in the RAID group or thin pool. For example, in a 6-disk RAID 6 group or thin pool or a 5-disk RAID 5 group or thin pool, the capacity of 4 disks is available for user data, which is 67% for RAID 6 or 80% for RAID 5 of the group’s or pool’s total disk capacity. So RAID 6, RAID 5, and RAID 3 groups use disk space much more efficiently than a RAID 1 group. A RAID 6, RAID 5, or RAID 3 group is usually more suitable than a RAID 1 group for applications where high data availability, good performance, and efficient disk space usage are all of relatively equal importance.
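As a quick planning aid, the capacity arithmetic above can be restated in a few lines of Python. This is only an illustrative sketch of the rules described in this section:

    # Usable capacity = total capacity minus the parity-disk equivalent
    # (RAID 6: two disks; RAID 5 and RAID 3: one disk; mirrors: half).
    PARITY_DISKS = {"RAID 6": 2, "RAID 5": 1, "RAID 3": 1}

    def user_capacity_gb(raid_type, disk_count, disk_gb):
        if raid_type in PARITY_DISKS:
            return (disk_count - PARITY_DISKS[raid_type]) * disk_gb
        if raid_type in ("RAID 1", "RAID 1/0"):
            return disk_count * disk_gb / 2  # mirrored: 50% user data
        raise ValueError("unsupported RAID type: %s" % raid_type)

    print(user_capacity_gb("RAID 6", 6, 300))    # 1200.0 GB (67% of 1800 GB)
    print(user_capacity_gb("RAID 5", 5, 300))    # 1200.0 GB (80% of 1500 GB)
    print(user_capacity_gb("RAID 1/0", 6, 300))  # 900.0 GB (50% of 1800 GB)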

A RAID 0 group (nonredundant RAID striping) provides all its disk space for user files, but does not provide any high-availability features. For high availability, you should use a RAID 1/0 group instead.

A RAID 1/0 group provides the best combination of performance and availability, at the highest cost per GB of disk space.

An individual unit, like a RAID 0 group, provides no high-availability features. All its disk space is available for user data.

RAID type guidelines for RAID groups or thin pools

To decide when to use a RAID 6 group or thin pool, RAID 5 group or thin pool, RAID 3 group, RAID 1 group, RAID 1/0 group or pool, a RAID 0 group, individual disk unit, or hot spare, you need to weigh these factors:

Importance of data availability

Importance of performance

Amount of data stored

Cost of disk space

Use the following guidelines to decide on RAID types.

Use a RAID 6 (double distributed parity) or RAID 5 (distributed parity) group or thin pool for applications where:

Data availability is very important. A RAID 6 group or thin pool provides higher availability than a RAID 5 group or thin pool, but uses more overhead than a RAID 5 group or thin pool. The performance of a RAID 6 group or RAID 5 group is better than the performance of a RAID 6 thin pool or a RAID 5 thin pool, respectively.

Large volumes of data will be stored.

Multitask applications use I/O transfers of different sizes.

Excellent read and good write performance is needed (write performance is very good with write caching).

You want the flexibility of multiple LUNs per RAID group or thin pool.

Planning Your Basic CX4-120 Storage-System Switch Configuration with an HP-UX Server 69

Page 73: CX4 Planning Your Basic Storage-System Configuration- Master 1423524

Use a RAID 3 (single-disk parity) group for applications where:

Data availability is very important.

Large volumes of data will be stored.

Similar access patterns are likely and random access is unlikely.

The highest possible bandwidth performance is required.

Use a RAID 1 (mirrored pair) group for applications where:

Data availability is very important.

Speed of write access is important and write activity is heavy.

Use a RAID 1/0 (mirrored nonredundant array) group for applications where:

Data availability is critically important.

Overall performance is very important.

Use a RAID 0 (nonredundant RAID striping) group for applications where:

High availability is not important.

You can afford to lose access to all data stored on a LUN if a single disk fails.

Overall performance is very important.

Use an individual unit for applications where:

High availability is not important.

Speed of write access is somewhat important.

Use a hot spare where:

In any RAID 6, RAID 5, RAID 3, RAID 1/0, or RAID 1 group, high availability is so important that you want to regain data redundancy quickly without human intervention if any disk in the group fails.

Minimizing the degraded performance caused by disk failure in a RAID 6 group, RAID 5 group, or RAID 3 group is important.

Sample applications for RAID group or thin pool types

This section describes some sample applications for which you would want to use the different RAID types for RAID groups or thin pools.


RAID 6 (distributed dual parity) or RAID 5 (distributed parity) group or thin pool
A RAID 6 or RAID 5 group or thin pool is useful as a database repository or a database server that uses a normal or low percentage of write operations (writes are 33 percent or less of all I/O operations). Use a RAID 6 or RAID 5 group or thin pool where multitasking applications perform I/O transfers of different sizes. Write caching can significantly enhance the write performance of a RAID 6 or RAID 5 group or thin pool. For higher data availability, use a RAID 6 group or thin pool instead of a RAID 5 group or thin pool. The performance of a LUN in a RAID 6 group is typically better than the performance of a thin LUN in a RAID 6 thin pool; likewise, the performance of a LUN in a RAID 5 group is typically better than the performance of a thin LUN in a RAID 5 thin pool.

For example, a RAID 6 or RAID 5 group or thin pool is suitable for multitasking applications that require a large history database with a high read rate, such as a database of legal cases, medical records, or census information. A RAID 6 or RAID 5 group or thin pool also works well with transaction processing applications, such as an airline reservations system, where users typically read the information about several available flights before making a reservation, which requires a write operation. You could also use a RAID 6 or RAID 5 group or thin pool in a retail environment, such as a supermarket, to hold the price information accessed by the point-of-sale terminals. Even though the price information may be updated daily, requiring many write operations, it is read many more times during the day.

RAID 3 (single-disk parity) group
A RAID 3 group is ideal for high-bandwidth reads or writes, that is, applications that perform either logically sequential I/O or use large I/O sizes (stripe size or larger). Using read and write caching, several applications can read and write from a RAID 3 group. Random access in a RAID 3 group is not optimal, so the ideal applications for RAID 3 are backup to disk, real-time data capture, and storage of extremely large files.

You might use a RAID 3 group for a single-task application that does large I/O transfers, like a weather tracking system, geologic charting application, medical imaging system, or video storage application.

RAID 1 (mirrored pair) group
A RAID 1 (mirrored pair) group is useful for logging or record-keeping applications because it requires fewer disks than a RAID 0 (nonredundant array) group, and provides high availability and fast write access. Or you could use it to store daily updates to a database that resides on a RAID 6 or RAID 5 group or thin pool, and then, during off-peak hours, copy the updates to the database on the RAID 6 or RAID 5 group or thin pool. Unlike a RAID 1/0 group, a RAID 1 group is not expandable to more than two disks.

RAID 0 (nonredundant RAID striping) group
Use a RAID 0 group where the best overall performance is important. A RAID 0 group is useful for applications using short-term data to which you need quick access.

RAID 1/0 group
A RAID 1/0 group provides the best balance of performance and availability. You can use it very effectively for any of the RAID 6 or RAID 5 applications. The performance of a LUN in a RAID 1/0 group is typically better than the performance of a thin LUN in a RAID 1/0 pool.

Individual unit
An individual unit is useful for print spooling, user file exchange areas, or other such applications, where high availability is not important or where the information stored is easily restorable from backup.

The performance of an individual unit is slightly less than that of a standard disk outside a storage system. The slight degradation results from SP overhead.

Hot spare
A hot spare provides no data storage but enhances the availability of each RAID 6 group or thin pool, RAID 5 group or thin pool, RAID 3 group, RAID 1 group, and RAID 1/0 group in a storage system. Use a hot spare where you must regain high availability quickly without human intervention if any disk in such a RAID group or thin pool fails. A hot spare also minimizes the period of degraded performance after a disk failure in a RAID 6 group or thin pool, RAID 5 group or thin pool, or RAID 3 group. Proactive sparing minimizes it even more for a disk failure in any of these RAID groups or thin pools.

Fully automated storage tiering (FAST)

Storage tiering lets you assign different categories of data to different types of storage to reduce total storage costs. You can base data categories on levels of protection needed, performance requirements, frequency of use, costs, and other considerations. The purpose of tiered storage is to retain the most frequently accessed or most important data on fast, high-performance (most expensive) disks, and move the less frequently accessed and less important data to low-performance (less expensive) disks.

Within a storage pool that is not a RAID group, storage from similarly performing disks is grouped together to form a tier of storage. For example, if you have Flash (SSD) disks, Fibre Channel (FC) disks, and Serial Advanced Technology Attachment (SATA) disks in the pool, the Flash disks form a tier, the FC disks form a tier, and the SATA disks form a tier. Based on your input or internally computed usage statistics, portions of LUNs (slices) can be moved between tiers to maintain a service level close to the highest performing storage tier in the pool, even when some portion of the pool consists of lower performing (less expensive) disks. The tiers from highest to lowest are Flash, FC, and SATA.

FAST is not supported for RAID groups because all the disks in a RAID group, unlike in a pool, must be of the same type (all Flash, all FC, or all SATA). The lowest performing disks in a RAID group determine a RAID group's overall performance.

Two types of tiered storage are available:

Initial tier placement

Auto-tier placement

Initial tier placement

Initial tier policies are available for storage systems running FLARE 04.30.000.5.xxx or later and do not require the FAST enabler. Initial tier placement requires that you manually specify the storage tier on which you want to initially place the LUN's data, and then either manually migrate the LUN to relocate the data to a different tier or install the FAST enabler, which, once installed, will perform the migration automatically. Table 20 describes the policies for initial tier placement.

Table 20 Initial tier settings for LUNs in a pool (FAST enabler not installed)

Initial tier placement policy: Description

Optimize for Pool Performance (default): No tier setting specified.
Highest Tier Available: Sets the preferred tier for initial data placement to the highest tier available.
Lowest Tier Available: Sets the preferred tier for initial data placement to the lowest tier available.


FAST policies

FAST policies are available for storage systems running FLARE 04.30.000.5.xxx or later with the FAST enabler installed. The FAST feature automatically migrates the data between storage tiers to provide the lowest total cost of ownership. Pools are configured with different types of disks (Flash/SSD, FC, and SATA), and the storage-system software continually tracks the usage of the data stored on the LUNs in the pools. Using these LUN statistics, the FAST feature relocates data blocks (slices) of each LUN to the storage tier that is best suited for the data, based on the policies described in Table 21.

Table 21 FAST policies for LUNs in a pool (FAST enabler installed)

FAST policy: Description

Auto Tier: Moves data to a tier based on the LUN performance statistics (LUN usage rate).
Highest Tier Available: Moves data to the highest possible tier.
Lowest Tier Available: Moves data to the lowest possible tier.
No Data Movement: Moves no data between tiers, and retains the current tier placement.

If you install the FAST enabler on a storage system with an initial tier placement setting specified, the storage-system software bases the FAST policies on the initial tier policies, as shown in Table 22.

Table 22 Interaction between initial tier placement settings and FAST policies

Initial tier placement before FAST enabler installed: Default FAST policy after FAST enabler installed

Optimize for Pool Performance: Auto Tier
Highest Tier Available: Highest Tier Available
Lowest Tier Available: Lowest Tier Available
None specified (n/a): No Data Movement (retains the initial tier placement settings)
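The Table 22 interaction is a simple one-to-one mapping. The following minimal Python sketch (an illustration, not an EMC tool) captures the default policy assignment; the None key stands for a LUN with no initial tier placement setting:

    # Default FAST policy that takes effect for each initial tier placement
    # setting once the FAST enabler is installed (see Table 22).
    DEFAULT_FAST_POLICY = {
        "Optimize for Pool Performance": "Auto Tier",
        "Highest Tier Available": "Highest Tier Available",
        "Lowest Tier Available": "Lowest Tier Available",
        None: "No Data Movement",  # no initial placement setting specified
    }

    def default_fast_policy(initial_placement):
        # Unknown settings fall back to retaining current placement.
        return DEFAULT_FAST_POLICY.get(initial_placement, "No Data Movement")

    print(default_fast_policy("Optimize for Pool Performance"))  # Auto Tier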


File systems and LUNs

This section will help you plan your storage use – the applications to run, the LUNs that will hold the applications, and the storage group that will belong to each server. It provides background information, shows sample installations with switched and direct storage, and provides worksheets for planning your storage installation.

Unless stated otherwise, the term LUN applies to all LUNs – RAID group LUNs and thin LUNs.

Major topics are:

Multiple paths to LUNs, page 75

DAE requirements, page 76

Disk IDs and locations in DAEs, page 76

Disk configuration rules and recommendations, page 76

Sample shared switched or network configuration, page 78

Application and LUN worksheet, page 79

RAID group and thin pool worksheet, page 82

LUN and storage group worksheet, page 84

LUN details worksheet, page 86

Multiple paths to LUNs

A shared storage-system configuration includes two or more servers and one or more storage systems. Often shared storage installations include two or more switches or routers.

In properly configured shared storage (switched or direct), each server has at least two paths to each LUN in the storage system. The storage-system FLARE Operating Environment (OE) detects all paths and, using optional failover software (such as EMC PowerPath), can automatically switch to the other path, without disrupting applications, if a device such as a host bus adapter or cable fails. With two adapters and two or more ports per SP zoned to them, PowerPath can send I/O to each available path in a user-selectable sequence (multipath I/O) for load sharing and greater throughput.

An unshared storage configuration has one server and one storage system. If the server has two adapters, it can have two paths to each LUN. With two adapters, PowerPath performs the same function as with shared systems: it automatically switches to the other path if a device such as a host bus adapter or cable fails.

DAE requirements

The storage system must have a minimum of one DAE with five disks. A maximum of eight DAEs is supported, for a total of 120 disks. Each back-end bus can support eight DAEs.

Disk IDs and locations in DAEs

Disk IDs have the form bed, where:

b is the back-end loop (also referred to as a back-end bus) number (0)

e is the enclosure number, set on the enclosure rear panel (0 for the first DAE)

d is the disk position in the enclosure (left is 0, right is 14)

Navisphere Manager displays disk IDs as b-e-d, and Navisphere CLI recognizes disk IDs as b_e_d. Figure 25 shows the IDs for disks in DAEs.

[Figure: four DAEs on back-end bus 0 stacked above the SPE. The bottom DAE (enclosure 0) holds disks 000 through 0014, with the vault disks in positions 000 through 004; the enclosures above it hold disks 010 through 0114, 020 through 0214, and 030 through 0314.]

Note: Unisphere or Navisphere Manager displays disk IDs as n-n-n. CLI recognizes disk IDs as n_n_n.

Figure 25 IDs for disks on the single bus in a storage system
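Because the bed form, the b-e-d display form, and the b_e_d CLI form all carry the same three values, converting between them is mechanical. The following Python sketch (the function names are illustrative, not part of Navisphere) shows the conversion:

    # Convert between (bus, enclosure, disk) triples and disk ID strings.
    # Manager displays IDs as b-e-d; the CLI recognizes b_e_d.
    def format_disk_id(bus, enclosure, disk, style="manager"):
        sep = "-" if style == "manager" else "_"
        return sep.join(str(n) for n in (bus, enclosure, disk))

    def parse_disk_id(disk_id):
        parts = disk_id.replace("_", "-").split("-")
        bus, enclosure, disk = (int(p) for p in parts)
        if not 0 <= disk <= 14:
            raise ValueError("disk position must be 0-14 within a DAE")
        return bus, enclosure, disk

    print(format_disk_id(0, 1, 3))   # 0-1-3: fourth disk, enclosure 1, bus 0
    print(parse_disk_id("0_2_14"))   # (0, 2, 14)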

Disk configuration rules and recommendations

The following rules and recommendations apply to the storage system:


You cannot use a Flash (SSD) disk as:

A vault disk – disks 000–004 (enclosure 0, bus 0, disks 0-4)

A hot spare for any disk except another Flash disk

For more information on Flash disk usage, refer to the Best Practices documentation on the Powerlink website (http://Powerlink.EMC.com).

You cannot use disks 000–004 (enclosure 0, bus 0, disks 0–4) as a hot spare. Do not use:

A SATA disk as a hot spare for a Fibre-Channel-based LUN

A Fibre Channel disk as a spare for a SATA-based LUN

The hardware reserves about 62 GB on each of disks 000–004 (vault disks) for the cache vault and internal tables. To improve SP performance and conserve disk space, you should avoid binding other disks into a RAID group that includes any vault disk. Any disk you include in a RAID group with a vault disk 000–004 is bound to match the lower unreserved capacity, resulting in lost storage of several gigabytes per disk (see the sketch following these recommendations). The extra space on the vault disks is a good place for an archive.

To fully use disk space, all disks in the RAID group should have the same capacity, because all disks in a group are bound to match the smallest capacity disk. The first five drives (000–004) should always be the same size.

If a storage system uses both SATA and Fibre Channel disks, do not mix SATA and Fibre Channel disks within a DAE.

If a storage system uses disks of different capacities and/or rpm (for example, 300 GB or 400 GB, or 10K or 15K rpm) within a DAE, then we recommend that you place them in a logical order to avoid mixing disks with different speeds in the same RAID group. One possible order is the following:

Place disks with the highest capacity in the first (leftmost) slots, followed by disks with lower capacities.

Within any specific capacity, place drives with the highest speed first, followed by disks with lower speeds.

If possible, do not mix disks with different speeds in a RAID group.
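The capacity-matching rule is easy to check arithmetically. The following Python sketch assumes the approximately 62 GB vault reservation described above and uses illustrative disk sizes; it shows both the cost of binding a vault disk into a larger group and the recommended slot ordering:

    VAULT_RESERVED_GB = 62  # approximate reservation on each vault disk

    def bound_capacity_per_disk(disks):
        """disks: list of (capacity_gb, is_vault) tuples in one RAID group."""
        unreserved = [cap - VAULT_RESERVED_GB if vault else cap
                      for cap, vault in disks]
        return min(unreserved)  # every disk binds to the smallest capacity

    # Mixing one vault disk into a group of 300 GB disks costs ~62 GB on
    # every disk in the group:
    group = [(300, True)] + [(300, False)] * 4
    print(bound_capacity_per_disk(group) * len(group))  # 1190, not 1500

    # Recommended slot ordering: highest capacity first, then highest rpm.
    disks = [(300, 10000), (400, 15000), (400, 10000), (300, 15000)]
    print(sorted(disks, key=lambda d: (-d[0], -d[1])))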


Sample shared switched or network configuration

Figure 26 shows a sample shared storage system connected to three servers: two servers in a cluster and one server running a database management program. Note that each server has a completely independent connection to SP A and SP B. The storage system in the figure is a single-cabinet storage system. You can also configure a storage system with multiple cabinets.


[Figure: a highly available cluster containing a database server (DS) and a file server (FS), plus a separate mail server (MS), each connected through two switch fabrics or networks to SP A and SP B. The cluster storage group and the database server storage group hold RAID 5 LUNs (database, user, application, specification, and mail LUNs) on disks 000–0014, 010–0114, 020–0214, 100–1014, 110–1114, and 120–1214.]

Figure 26 Sample shared switched storage configuration

Application and LUN worksheet

Use the Application and LUN worksheet in Table 23 to list the applications you will run, and the RAID type and size of the LUN that will hold them. For each application, write the application name, file system (if any), RAID type, LUN ID (ascending integers, starting with 0), disk space required, and finally the name of the servers and operating systems that will use the LUN.

Table 23 Application worksheet

Columns: Application | File system, partition, or drive | LUN or thin LUN RAID type | LUN or thin LUN ID | Disk space required (GB) | Server hostname and operating system

(Add one row per application.)

Application
Enter the application name or type.

File system, partition, or drive
Enter the planned file system, partition, or drive name.

LUN RAID type
Enter the RAID type for the RAID group or thin pool for this file system, partition, or drive. You can create one or more LUNs on the RAID group or one or more thin LUNs on the thin pool.

LUN ID
Enter the number for the LUN ID. The LUN ID is assigned when you create the LUN. By default, the ID of the first LUN bound is 0, the second 1, and so on. Each LUN ID must be unique within the storage system, regardless of its storage group or RAID group or thin pool.

The maximum number of LUNs supported on one host bus adapter depends on the operating system.


Disk space required (GB)
Enter the largest amount of disk space this application will need, plus a factor for growth. If you find in the future that you need more space for this application, you can expand the capacity of the LUN. For more information, refer to the Navisphere Manager online help, which is available on the Powerlink website (http://Powerlink.EMC.com).

Server hostname and operating system
Enter the server hostname (or, if you do not know the name, a short description that identifies the server) and the operating system name, if you know it.


RAID group and thin pool worksheet

Use the worksheet in Table 24 to select the disks that will make up the RAID groups and thin pools that the storage system will use. Complete as many of the RAID group and thin pool sections as needed for the storage system.

Table 24 RAID group and thin pool worksheet

Storage-system number or name:

RAID group / Thin pool
RAID group ID or thin pool name:            RAID type:
Disk selection: Automatic / Manual
Use power saving eligible disks (automatic selection only)
Thin pool space alert threshold: Default / Manual
Threshold value (manual selection only):

RAID group / Thin pool
RAID group ID or thin pool name:            RAID type:
Disk selection: Automatic / Manual
Use power saving eligible disks (automatic selection only)
Thin pool space alert threshold: Default / Manual
Threshold value (manual selection only):

RAID group / Thin pool
RAID group ID or thin pool name:            RAID type:
Disk selection: Automatic / Manual
Use power saving eligible disks (automatic selection only)
Thin pool space alert threshold: Default / Manual
Threshold value (manual selection only):

RAID group / Thin pool
RAID group ID or thin pool name:            RAID type:
Disk selection: Automatic / Manual
Use power saving eligible disks (automatic selection only)
Thin pool space alert threshold: Default / Manual
Threshold value (manual selection only):

RAID group / Thin pool
RAID group ID or thin pool name:            RAID type:
Disk selection: Automatic / Manual
Use power saving eligible disks (automatic selection only)
Thin pool space alert threshold: Default / Manual
Threshold value (manual selection only):

Storage-system number or name
Enter the number or name that identifies the storage system.

RAID group or thin pool
Select the appropriate box.

RAID group ID or thin pool name
Enter either the number to use for the RAID group ID or the name to use for the thin pool. When you create a RAID group, you can either assign an ID to it or have one assigned automatically. If the storage system assigns the RAID group IDs automatically, the ID of the first RAID group is 0, the second 1, and so on. Each RAID group ID must be unique within the storage system. When you create a thin pool, you must assign a name to it, even though an ID is assigned to it automatically. Each thin pool name must be unique within the storage system. The ID of the first thin pool is always 0, the second 1, and so on.

RAID type
Enter the RAID type for the RAID group or thin pool.

Disk selection
Select either Automatic if you want Navisphere Manager to select the disks for the RAID group or thin pool, or Manual if you want to select the disks yourself. If you checked the Manual box, enter the IDs of the disks that will make up the RAID group or thin pool. The capacity of the RAID group or thin pool is the result of the capacity and number of the disks selected, less the overhead of the RAID type selected.

If you selected automatic disk selection for a RAID group, you can choose to use power saving eligible disks for the RAID group and to assign power saving settings to these disks, so that the disks transition to a low power state when the following conditions are met:

The storage system is running FLARE 04.29 or later.

Power saving is enabled for both the RAID group containing the disks and the storage system.

All disks in the RAID group:


Support disk power savings.

Have been idle for at least 30 minutes.

No LUNs in the RAID group are participating in replication and/or data mobility software (SnapView, MirrorView/A, MirrorView/S, SAN Copy) sessions.

The RAID group does not include metaLUNs.

For information on the currently available disks that support power savings, refer to the EMC® CX4 Series Storage Systems – Disk and FLARE® OE Matrix (P/N 300-007-437) on the EMC Powerlink website.

To use power saving disks in the RAID group, select the Use power saving eligible disks box.

Thin pool space alert threshold
By default, the storage system issues a warning alert when 70% of the thin pool's space has been consumed and a critical alert when 85% of the space has been consumed. As thin LUNs continue consuming the thin pool's space, both alerts continue to report the actual percentage of consumed space. However, you can set the threshold for the warning alert. If you selected the manual threshold option, enter the threshold value that you want to trigger the warning alert that the thin pool space is filling up; otherwise, enter Default (70%). We recommend that you set the value somewhere between 50% and 75%.
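As a minimal sketch of this alert behavior (assuming, per the description above, a fixed critical alert at 85% and a user-settable warning threshold defaulting to 70%):

    def thin_pool_alert(consumed_gb, pool_gb, warning_pct=70):
        """Return the alert level for a thin pool's consumed space."""
        pct = 100.0 * consumed_gb / pool_gb
        if pct >= 85:
            return "critical ({:.0f}% consumed)".format(pct)
        if pct >= warning_pct:
            return "warning ({:.0f}% consumed)".format(pct)
        return "ok ({:.0f}% consumed)".format(pct)

    print(thin_pool_alert(720, 1000))                  # warning (72% consumed)
    print(thin_pool_alert(550, 1000, warning_pct=50))  # within the 50-75% range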

LUN and storage group worksheet

Use the worksheet in Table 25 to select the disks that will make up the LUNs and storage groups in each storage system. Complete a worksheet for each storage group in your configuration.

Table 25 LUN and storage group worksheet

Storage-system number or name:
Storage group ID or name:            Server hostname:

LUN ID or name:
RAID type: RAID 6 / RAID 5 / RAID 3 / RAID 1 / RAID 0 / RAID 1/0 / Individual disk / Hot spare
LUN capacity:

LUN ID or name:
RAID type: RAID 6 / RAID 5 / RAID 3 / RAID 1 / RAID 0 / RAID 1/0 / Individual disk / Hot spare
LUN capacity:

LUN ID or name:
RAID type: RAID 6 / RAID 5 / RAID 3 / RAID 1 / RAID 0 / RAID 1/0 / Individual disk / Hot spare
LUN capacity:

LUN ID or name:
RAID type: RAID 6 / RAID 5 / RAID 3 / RAID 1 / RAID 0 / RAID 1/0 / Individual disk / Hot spare
LUN capacity:

LUN ID or name:
RAID type: RAID 6 / RAID 5 / RAID 3 / RAID 1 / RAID 0 / RAID 1/0 / Individual disk / Hot spare
LUN capacity:

LUN ID or name:
RAID type: RAID 6 / RAID 5 / RAID 3 / RAID 1 / RAID 0 / RAID 1/0 / Individual disk / Hot spare
LUN capacity:

LUN ID or name:
RAID type: RAID 6 / RAID 5 / RAID 3 / RAID 1 / RAID 0 / RAID 1/0 / Individual disk / Hot spare
LUN capacity:

LUN ID or name:
RAID type: RAID 6 / RAID 5 / RAID 3 / RAID 1 / RAID 0 / RAID 1/0 / Individual disk / Hot spare
LUN capacity:

Storage-system number or name
Enter the number or name that identifies the storage system. The storage system assigns the storage group ID number when you create the storage group. By default, the ID of the first storage group is 0, the second 1, and so on. Each storage group ID must be unique within the storage system.

Server hostname
Enter the name or IP address for the server.

Storage group ID or name
Enter the ID or name that identifies the storage group.

LUN name or ID
Enter either the name for the LUN or the number for the LUN ID. The storage system assigns the LUN ID when you create the LUN. By default, the ID of the first LUN is 0, the second 1, and so on. Each LUN ID must be unique within the storage system.

RAID type
Select the RAID type of the LUN, which is the RAID type of the RAID group or thin pool on which it was created. Only RAID 6 and RAID 5 types are available for thin pools.


LUN capacity
Enter the user capacity for the LUN.

LUN details worksheet

Use the worksheet in Table 26 to plan the individual LUNs. Complete as many blank worksheets as needed for all LUNs in the storage system.

Table 26 LUN details worksheet

Storage-system information
Storage-system number or name:

SP A information
IP address or hostname:            Memory (MB):

SP B information
IP address or hostname:            Memory (MB):

LUN or thin LUN information
LUN or thin LUN ID:            SP owner: SP A / SP B            SP back-end bus:
RAID group ID or thin pool name:            RAID group or thin pool size (GB):
LUN size (GB):            Disk IDs:
RAID type: RAID 6 / RAID 5 / RAID 3 / RAID 1 / RAID 0 / RAID 1/0 / Individual disk / Hot spare
SP caching (RAID group LUN only): Read and write / Write only / Read only / None
FAST (tiering) policy (pool LUN only): Auto Tier / Highest Tier Available / Lowest Tier Available / No Data Movement
Servers that can access this LUN's or thin LUN's storage group:

Operating system information
Device name:            File system, partition, or drive:

LUN or thin LUN information
LUN or thin LUN ID:            SP owner: SP A / SP B            SP back-end bus:
RAID group ID or thin pool name:            RAID group or thin pool size (GB):
LUN size (GB):            Disk IDs:
RAID type: RAID 6 / RAID 5 / RAID 3 / RAID 1 / RAID 0 / RAID 1/0 / Individual disk / Hot spare
SP caching (RAID group LUN only): Read and write / Write only / Read only / None
FAST (tiering) policy (pool LUN only): Auto Tier / Highest Tier Available / Lowest Tier Available / No Data Movement
Servers that can access this LUN's or thin LUN's storage group:

Operating system information
Device name:            File system, partition, or drive:

LUN or thin LUN information
LUN or thin LUN ID:            SP owner: SP A / SP B            SP back-end bus:
RAID group ID or thin pool name:            RAID group or thin pool size (GB):
LUN size (GB):            Disk IDs:
RAID type: RAID 6 / RAID 5 / RAID 3 / RAID 1 / RAID 0 / RAID 1/0 / Individual disk / Hot spare
SP caching (RAID group LUN only): Read and write / Write only / Read only / None
FAST (tiering) policy (pool LUN only): Auto Tier / Highest Tier Available / Lowest Tier Available / No Data Movement
Servers that can access this LUN's or thin LUN's storage group:

Operating system information
Device name:            File system, partition, or drive:

Storage-system information

Fill in the storage-system information section of the worksheet using the information that follows.

Storage-system number or name
Enter the number or name that identifies the storage system.

SP A and SP B information

Fill out the SP A and SP B information section in the worksheet using the information that follows.

IP address or hostname
Enter the IP address or hostname for the SP. The IP address is required for connecting to the SP. You do not need to complete it now, but you will need it when the storage system is installed so that you can set up communication with the SP.


Memory
Enter the amount of memory on the SP.

LUN information

Fill in the LUN information section of the worksheet using the information that follows.

LUN ID
Enter the number for the LUN ID. The LUN ID is a number assigned when you create a LUN on a RAID group or a thin LUN on a thin pool. By default, the ID of the first LUN bound is 0, the second 1, and so on. Each LUN ID must be unique within the storage system, regardless of its storage group or RAID group. The maximum number of LUNs supported on one host bus adapter depends on the operating system.

SP owner
Select the SP that you want to own the LUN. You can let the management program automatically select the SP to balance the workload between SPs; to do so, leave this entry blank.

SP back-end buses
A back-end bus consists of a physical back-end loop on one SP that is paired with its counterpart physical back-end loop on the other SP to create a redundant bus. Each SP supports one physical back-end loop that is paired with its counterpart on the other SP to create a redundant bus (bus 0).

The bus designation appears in the disk ID (in the form bed, where b is the back-end bus number, e is the enclosure number, and d is the disk position in the enclosure). For example, 013 indicates the fourth disk on bus 0, in enclosure 1 (numbering 0, 1, 2, 3... from the left) in the storage system. Navisphere Manager displays disk IDs as b-e-d, and Navisphere CLI recognizes disk IDs as b_e_d.

RAID group ID or thin pool name
Enter the number for the RAID group ID or the name for the thin pool. When you create a RAID group, you can either assign an ID to it or have the storage system assign one automatically. If the storage system assigns the RAID group IDs automatically, the ID of the first RAID group is 0, the second 1, and so on. Each RAID group ID must be unique within the storage system. When you create a thin pool, you must assign a name to it, even though the storage system assigns an ID to it automatically. Each thin pool name must be unique within the storage system. The ID of the first thin pool is always 0, the second 1, and so on.


RAID group or thin pool size
Enter the user-available capacity in gigabytes (GB) of the whole RAID group or thin pool. For a RAID group, the user capacity is assigned to physical storage on the disks in the group when you create the group. For a thin LUN, the user capacity is assigned to physical storage on a capacity-on-demand basis from a shared thin pool of disks. The storage system monitors and adds storage capacity, as required, to each thin pool until its physical capacity is reached. You can determine the RAID group capacity using Table 27.

Table 27 RAID group capacity

RAID group: User capacity (GB)

RAID 6 group: disk-size x (number-of-disks - 2)
RAID 5 or RAID 3 group: disk-size x (number-of-disks - 1)
RAID 1/0 or RAID 1 group: (disk-size x number-of-disks) / 2
RAID 0 group: disk-size x number-of-disks
Individual unit: disk-size
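The Table 27 formulas translate directly into a small calculator. The following Python sketch is illustrative only; actual usable space also depends on the per-disk formatting overhead listed in the Disk and FLARE OE Matrix:

    def raid_group_capacity(raid_type, disk_size_gb, num_disks):
        """Apply the Table 27 formula for a RAID group's user capacity."""
        if raid_type == "RAID 6":
            return disk_size_gb * (num_disks - 2)
        if raid_type in ("RAID 5", "RAID 3"):
            return disk_size_gb * (num_disks - 1)
        if raid_type in ("RAID 1/0", "RAID 1"):
            return disk_size_gb * num_disks / 2
        if raid_type == "RAID 0":
            return disk_size_gb * num_disks
        if raid_type == "Individual unit":
            return disk_size_gb
        raise ValueError("unknown RAID type: " + raid_type)

    print(raid_group_capacity("RAID 5", 300, 5))    # 1200: five 300 GB disks
    print(raid_group_capacity("RAID 1/0", 300, 6))  # 900.0: half of six disks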

The 1 TB SATA disks operate on a 4 Gb/s back-end bus like the 4 Gb FC disks, but have a 3 Gb/s bandwidth. Since they have a Fibre Channel interface to the back-end loop, these disks are sometimes referred to as Fibre Channel disks. The currently available disks and their usable disk space are listed in the EMC® CX4 Series Storage Systems – Disk and FLARE® OE Matrix (P/N 300-007-437) on the EMC Powerlink website.

The vault disks must all have the same capacity and same speed.

The 1 TB, 5.4K rpm SATA disks are available only in a DAE that is fully populated with these disks. Do not mix 1 TB, 5.4K rpm SATA disks with 1 TB, 7.2K rpm SATA disks in the same DAE, and do not replace a 1 TB, 5.4K rpm SATA disk with a 1 TB, 7.2K rpm SATA disk or a 1 TB, 7.2K rpm SATA disk with a 1 TB, 5.4K rpm SATA disk.

LUN size
Enter the user-available capacity in gigabytes (GB) of the LUN. The total user capacity of all the LUNs on a RAID group cannot exceed the total user capacity of the RAID group. You can make the LUN size the same as or smaller than the user capacity of the RAID group. You might make a LUN smaller than the RAID group size if you want a RAID 5 group with a large capacity and you want to place many smaller capacity LUNs on it, for example, to specify a LUN for each user. You can make the thin LUN size greater than the user capacity of the thin pool because thin provisioning assigns storage to the server on a capacity-on-demand basis from a shared thin pool. The storage system monitors and adds storage capacity, as required, to each thin pool.

If you want multiple LUNs per RAID group or multiple thin LUNs per thin pool, then use a RAID group/LUN series or thin pool/thin LUN series of entries for each LUN or thin LUN. The LUNs in a RAID group or thin pool share the same total performance capability of the disk drives, so plan carefully.
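As a minimal sketch of these sizing rules (an illustrative function, not an EMC tool): the total of the LUN sizes is capped by a RAID group's user capacity, while thin LUNs may oversubscribe a thin pool:

    def check_lun_plan(container_gb, lun_sizes_gb, thin=False):
        total = sum(lun_sizes_gb)
        if thin:
            # Thin LUNs draw physical space on demand, so the subscribed
            # total may exceed the pool's current physical capacity.
            note = "oversubscribed" if total > container_gb else "within capacity"
            return "thin pool: {} GB subscribed of {} GB ({})".format(
                total, container_gb, note)
        if total > container_gb:
            raise ValueError("LUN total exceeds RAID group user capacity")
        return "RAID group: {} GB of {} GB allocated".format(total, container_gb)

    print(check_lun_plan(1200, [400, 400, 300]))        # fits in the RAID group
    print(check_lun_plan(1000, [600, 600], thin=True))  # allowed for thin LUNs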

Disk IDs
Enter the IDs of all disks that will make up the LUN or hot spare. These are the same disk IDs you specified on the previous worksheet. For example, for a RAID 5 group or thin pool in a DAE on bus 0 (disks 10 through 14), enter 0010, 0011, 0012, 0013, and 0014.

RAID type
Copy the RAID type from the previous worksheet, for example, RAID 5 or hot spare. For a hot spare (strictly speaking, not a LUN at all), skip the rest of this LUN entry and continue to the next LUN entry (if any).

SP caching
Select the type of SP caching you want for this LUN: read and write, write only, read only, or none.

FAST (tiering) policy (pool LUN only)
If the storage system has two or more types of disks (FC, SATA, Flash) and the FAST enabler installed, select the policy for placing data on this storage as it is written: Auto Tier, Highest Tier Available, Lowest Tier Available, or No Data Movement.

Servers that can access this LUN's storage group
For switched shared storage or shared or clustered direct storage, enter the name of each server (copied from the LUN and storage group worksheet). For unshared direct storage, this entry does not apply.

Operating system information

Fill out the operating system information section of the worksheet using the information that follows.

Device name
Enter the operating system device name, if this is important and if you know it. Depending on your operating system, you may not be able to complete this field now.


File system, partition, or drive
Write the name of the file system, partition, or drive letter for this LUN. This is the same name you wrote on the application worksheet.

On the following line, write any pertinent notes, for example, the file system mount-point or graft-point directory pathname (from the root directory). If any of this storage system's LUNs will be shared with another server, and the other server is the primary owner of this LUN, write secondary. (As mentioned earlier, if the storage system will be used by two servers, complete one of these worksheets for each server.)


Copyright © 2010 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks mentioned herein are the property of their respective owners.
