
EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com

EMC® VNX™ Series MPFS over FC and iSCSI Linux Clients
Version 6.0

Product Guide
P/N 300-012-182
REV A03


Copyright © 2007-2011 EMC Corporation. All rights reserved.

Published September, 2011

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on the EMC Online Support website at Support.EMC.com.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.

Contents

Preface

Chapter 1  Introducing EMC VNX MPFS over FC and iSCSI
    Overview of MPFS over FC and iSCSI .......................................... 18
    VNX MPFS architectures ................................................................. 19
        MPFS over FC on VNX ............................................................. 19
        MPFS over iSCSI on VNX ......................................................... 21
        MPFS over iSCSI/FC on VNX .................................................. 22
    How VNX MPFS works ................................................................... 24

Chapter 2  EMC VNX MPFS Environment Configuration
    Configuration roadmap ................................................................... 26
    Implementation guidelines .............................................................. 28
        VNX with MPFS recommendations ........................................ 28
        Storage configuration recommendations ............................... 29
        MPFS feature configurations .................................................... 30
    MPFS installation and configuration process ............................... 35
        Configuration planning checklist ............................................ 36
    Verifying system components ......................................................... 38
        Required hardware components ............................................. 38
        Required software components ............................................... 40
        Verifying configuration ............................................................. 40
        Verifying system requirements ................................................ 41
        Verifying the FC switch requirements (FC configuration) .. 42
        Verifying the IP-SAN VNX for block requirements .............. 43
    Setting up the VNX for file .............................................................. 44
    Running the VNX Installation Assistant for File/Unified .......... 45
    Setting up the file system ................................................................. 46


        File system prerequisites .......................................................... 46
        Creating a file system on a VNX for file ................................. 47
    Enabling MPFS for the VNX for file ............................................... 57
    Configuring the VNX for block by using CLI commands .......... 58
        Best practices for VNX for block and VNX VG2/VG8
        gateway configurations ............................................................ 58
    Configuring the SAN switch and storage ..................................... 59
        Installing the FC switch (FC configuration) ........................... 59
        Zoning the SAN switch (FC configuration) ............................ 59
        Creating a security file ............................................................... 60
        Configuring the VNX for block iSCSI ports ........................... 61
        Configuring Access Logix ......................................................... 63
    Configuring and accessing storage ................................................ 67
        Installing the FC driver (FC configuration) ............................ 67
        Adding hosts to the storage group (FC configuration) ......... 68
        Configuring the iSCSI driver for RHEL 4 (iSCSI configuration) ..... 70
        Configuring the iSCSI driver for RHEL 5-6, SLES 10-11,
        and CentOS 5-6 (iSCSI configuration) ..................................... 73
        Adding initiators to the storage group (FC configuration) .. 79
        Adding initiators to the storage group (iSCSI configuration) ..... 81
    Mounting MPFS ................................................................................ 84
        Examples ...................................................................................... 85
    Unmounting MPFS ........................................................................... 88

Chapter 3  Installing, Upgrading, or Uninstalling VNX MPFS Software
    Installing the MPFS software .......................................................... 90
        Before installing .......................................................................... 90
        Installing the MPFS software from a tar file ........................... 90
        Installing the MPFS software from a CD ................................ 92
        Post-installation checking ......................................................... 93
        Operating MPFS through a firewall ........................................ 94
    Upgrading the MPFS software ....................................................... 95
        Upgrading the MPFS software ................................................. 95
        Upgrading the MPFS software with MPFS mounted ........... 97
        Post-installation checking ......................................................... 98
        Verifying the MPFS software upgrade .................................... 99
    Uninstalling the MPFS software ................................................... 100


Chapter 4  EMC VNX MPFS Command Line Interface
    Using HighRoad disk protection .................................................. 102
        VNX for file and hrdp .............................................................. 102
        hrdp command syntax ............................................................. 103
        Viewing hrdp protected devices ............................................. 106
    Using the mpfsctl utility ................................................................ 107
        mpfsctl help ............................................................................... 108
        mpfsctl diskreset ....................................................................... 109
        mpfsctl diskresetfreq ................................................................ 109
        mpfsctl max-readahead ........................................................... 110
        mpfsctl prefetch ........................................................................ 112
        mpfsctl reset .............................................................................. 113
        mpfsctl stats ............................................................................... 114
        mpfsctl version ......................................................................... 117
        mpfsctl volmgt .......................................................................... 117
    Displaying statistics ........................................................................ 118
        Using the mpfsstat command ................................................. 118
    Displaying MPFS device information .......................................... 120
        Listing devices with the mpfsinq command ........................ 120
        Listing devices with the /proc/mpfs devices file ................ 123
        Displaying mpfs disk quotas .................................................. 123
        Validating a Linux server installation ................................... 125
    Setting MPFS parameters ............................................................... 127
    Displaying Kernel parameters ...................................................... 127
    Setting persistent parameter values ............................................. 129
        mpfs.conf parameters ............................................................... 129
        DirectIO support ....................................................................... 132
        EMCmpfs parameters .............................................................. 134

Appendix A  File Syntax Rules
    File syntax rules for creating a site .............................................. 138
        VNX for file with iSCSI ports .................................................. 138
    File syntax rules for adding hosts ................................................ 139
        Linux host .................................................................................. 139


Appendix B  Error Messages and Troubleshooting
    Linux server error messages ........................................................ 142
    Troubleshooting .............................................................................. 143
        Installing MPFS software ........................................................ 143
        Mounting and unmounting a file system ............................. 145
        Miscellaneous issues ................................................................ 149
    Known problems and limitations ................................................ 150

Glossary

Index

Figures

    1  MPFS over FC on VNX .................................................................................. 20
    2  MPFS over FC on VNX VG2/VG8 gateway ............................................... 20
    3  MPFS over iSCSI on VNX .............................................................................. 21
    4  MPFS over iSCSI on VNX VG2/VG8 gateway ........................................... 22
    5  MPFS over iSCSI/FC on VNX ....................................................................... 23
    6  MPFS over iSCSI/FC on VNX VG2/VG8 gateway .................................... 23
    7  Configuration roadmap .................................................................................. 27

Tables

    1  Prefetch and read cache requirements ......................................................... 29
    2  Arraycommpath and failovermode settings for storage groups .............. 67
    3  iSCSI parameters for RHEL 4 using 2.6 kernels .......................................... 70
    4  RHEL 5, RHEL 6, SLES 10, SLES 11, CentOS 5, and CentOS 6
       iSCSI parameters ............................................................................................. 74
    5  Linux server firewall ports ............................................................................. 94
    6  Command line interface summary ............................................................. 107
    7  MPFS device information ............................................................................ 122
    8  MPFS kernel parameters .............................................................................. 128
    9  Linux server error messages ........................................................................ 142


Preface

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, contact your EMC representative.

Review the EMC Online Support website, http://Support.EMC.com, to ensure that you have the latest versions of the MPFS software and documentation.

For software, open Support > Software Downloads and Licensing > Downloads V and then select the necessary software for VNX MPFS from the menu.

For documentation, open Support > Technical Documentation > Hardware/Platforms > VNX Series.

For user personalized documentation for all VNX platforms, open http://www.emc.com/vnxsupport.

Note: Only registered EMC Online Support users can download the MPFS software.


Audience This document is part of the EMC VNX MPFS documentation set, and is intended for use by Linux system administrators responsible for installing and maintaining Linux servers.

Readers of this document are expected to be familiar with these topics:

◆ VNX for block or EMC Symmetrix system

◆ VNX for file

◆ NFS protocol

◆ Linux operating system

◆ Operating environments in which the Linux server can be installed include:

• Red Hat Enterprise Linux 4, 5, and 6

• SuSE Linux Enterprise Server 10 and 11

• Community ENTerprise Operating System 5 and 6 (iSCSI only)

Related documentation

Related documents include:

◆ EMC VNX MPFS for Linux Clients Release Notes

◆ EMC VNX VG2/VG8 Gateway Configuration Setup Guide

◆ EMC Host Connectivity Guide for Linux

◆ EMC Host Connectivity Guide for VMware ESX Server

◆ EMC documentation for HBAs

VNX for block:

◆ Removing ATF or CDE Software before Installing other Failover Software

◆ Unisphere online help

Symmetrix:

◆ Symmetrix product manual

VNX for file:

◆ EMC VNX Documentation

◆ Using VNX Multi-Path File System

All of these publications are found on the EMC Online Support website.


EMC Online Support The EMC Online Support website provides the most up-to-date information on documentation, downloads, interoperability, product lifecycle, target revisions, and bug fixes. As a registered EMC Online Support user, you can subscribe to receive notifications when updates occur.

EMC E-Lab Interoperability Navigator

The EMC E-Lab Interoperability Navigator tool provides access to EMC interoperability support matrices. After logging in to EMC Online Support, go to Support > Interoperability and Product Lifecycle Information > E-Lab Interoperability Navigator.

Conventions used in this document

EMC uses the following conventions for special notices.

Note: A note presents information that is important, but not hazard-related.

CAUTION! A caution contains information essential to avoid data loss or damage to the system or equipment.

IMPORTANT! An important notice contains information essential to operation of the software.

WARNING

A warning contains information essential to avoid a hazard that can cause severe personal injury, death, or substantial property damage if you ignore the warning.

DANGER

A danger notice contains information essential to avoid a hazard that will cause severe personal injury, death, or substantial property damage if you ignore the message.


Typographical conventions

EMC uses the following type style conventions in this document:

Normal: Used in running (nonprocedural) text for:
• Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
• Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, functions, utilities
• URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, file systems, notifications

Bold: Used in running (nonprocedural) text for:
• Names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, man pages
Used in procedures for:
• Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
• What the user specifically selects, clicks, presses, or types

Italic: Used in all text (including procedures) for:
• Full titles of publications referenced in text
• Emphasis (for example, a new term)
• Variables

Courier: Used for:
• System output, such as an error message or script
• URLs, complete paths, filenames, prompts, and syntax when shown outside of running text

Courier bold: Used for:
• Specific user input (such as commands)

Courier italic: Used in procedures for:
• Variables on command line
• User input variables

< >: Angle brackets enclose parameter or variable values supplied by the user

[ ]: Square brackets enclose optional values

|: Vertical bar indicates alternate selections; the bar means "or"

{ }: Braces indicate content that you must specify (that is, x or y or z)

...: Ellipses indicate nonessential information omitted from the example


Where to get help EMC support, product, and licensing information can be obtained as follows.

Product information — For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Online Support website (registration required) at:

http://Support.EMC.com

Technical support — For technical support, go to EMC Customer Service on EMC Online Support. To open a service request through EMC Online Support, you must have a valid support agreement. Contact your EMC Customer Support Representative for details about obtaining a valid support agreement or to answer any questions about your account.

Your comments Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Please send your opinion of this document to:

[email protected]


Chapter 1
Introducing EMC VNX MPFS over FC and iSCSI

This chapter provides an overview of EMC VNX MPFS over FC and iSCSI and its architecture. This chapter includes these topics:

◆ Overview of MPFS over FC and iSCSI ............................................ 18
◆ VNX MPFS architectures .................................................................. 19
◆ How VNX MPFS works .................................................................... 24


Overview of MPFS over FC and iSCSI

EMC® VNX™ series Multi-Path File System (MPFS) over Fibre Channel (FC) lets Linux, Windows, UNIX, AIX, or Solaris servers access shared data concurrently over FC connections, whereas MPFS over Internet Small Computer System Interface (iSCSI) on VNX lets servers access shared data concurrently over an iSCSI connection.

MPFS uses common Internet Protocol Local Area Network (IP LAN) topology to transport data and metadata to and from the servers.

Without MPFS, servers can access shared data by using standard Network File System (NFS) or Common Internet File System (CIFS) protocols. MPFS accelerates data access by providing separate transports for file data (file content) and metadata (control data).

For an FC-enabled server, data is transferred directly between the Linux server and storage array over an FC Storage Area Network (SAN).

For an iSCSI-enabled server, data is transferred over the IP LAN between the Linux server and storage array for a VNX or VNX VG2/VG8 gateway configuration.

Metadata passes through the VNX for file (and the IP network), which includes the network-attached storage (NAS) portion of the configuration.


VNX MPFS architectures

Three basic VNX MPFS architectures are available:

◆ MPFS over FC on VNX

◆ MPFS over iSCSI on VNX

◆ MPFS over iSCSI/FC on VNX

The FC architecture consists of these configurations:
• EMC VNX5300, VNX5500, VNX5700, or VNX7500 over FC
• MPFS over FC on VNX VG2/VG8 gateway

The iSCSI architecture consists of these configurations:
• VNX5300, VNX5500, VNX5700, or VNX7500 over iSCSI
• MPFS over iSCSI on VNX VG2/VG8 gateway

The iSCSI/FC architecture consists of these configurations:
• VNX5300, VNX5500, VNX5700, or VNX7500 over iSCSI/FC
• MPFS over iSCSI/FC on VNX VG2/VG8 gateway

Note: CLARiiON CX3 and CX4 systems are supported in VNX VG2/VG8 gateway configurations as shown in Figure 2 on page 20, Figure 4 on page 22 and Figure 6 on page 23.

MPFS over FC on VNX

The MPFS over FC on VNX architecture consists of:

◆ VNX with MPFS — A NAS device configured with a VNX and MPFS software

◆ VNX for block or EMC Symmetrix® system

◆ Linux servers with MPFS software connected to the VNX through the IP LAN, and to the VNX for block or Symmetrix system by using FC architecture

Figure 1 on page 20 shows the MPFS over FC on VNX configuration where the Linux servers are connected to a VNX series (VNX5300, VNX5500, VNX5700, or VNX7500) by using an IP switch and one or more FC or FC over Ethernet (FCoE) switches. A VNX series is a VNX for file and VNX for block in a single cabinet. In a smaller configuration of one or two servers, the servers are connected directly to the VNX series without the use of FC or FCoE switches.


Figure 1 MPFS over FC on VNX

Figure 2 on page 20 shows the MPFS over FC on VNX VG2/VG8 gateway configuration. In this figure, the Linux servers are connected to a VNX for block or a Symmetrix system by using a VNX VG2/VG8 gateway, IP switch, and optional FC switch or FCoE switch.

Figure 2 MPFS over FC on VNX VG2/VG8 gateway



MPFS over iSCSI on VNX

The MPFS over iSCSI on VNX architecture consists of:

◆ VNX with MPFS — A NAS device configured with a VNX and MPFS software

◆ VNX for block or Symmetrix system

◆ Linux server with MPFS software connected to the VNX through the IP LAN, and to the VNX for block or Symmetrix system by using iSCSI architecture

Figure 3 on page 21 shows the MPFS over iSCSI on VNX configuration where the Linux servers are connected to a VNX series by using one or more IP switches.

Figure 3 MPFS over iSCSI on VNX



Figure 4 on page 22 shows the MPFS over iSCSI on VNX VG2/VG8 gateway configuration where the Linux servers are connected to a VNX for block or Symmetrix system by using a VNX VG2/VG8 gateway and one or more IP switches.

Figure 4 MPFS over iSCSI on VNX VG2/VG8 gateway

MPFS over iSCSI/FC on VNX

The MPFS over iSCSI/FC on VNX architecture consists of:

◆ VNX with MPFS — A NAS device that is configured with a VNX and MPFS software

◆ VNX for block or Symmetrix system

◆ Linux server with MPFS software connected to the VNX through the IP LAN, and to the VNX for block or Symmetrix system by using iSCSI/FC architecture

Figure 5 on page 23 shows the MPFS over iSCSI/FC on VNX configuration where the Linux servers are connected to a VNX series by using one or more IP switches and an FC switch or FCoE switch.



Figure 5 MPFS over iSCSI/FC on VNX

Figure 6 on page 23 shows the MPFS over iSCSI/FC on VNX VG2/VG8 gateway configuration where the Linux servers are connected to a VNX for block or Symmetrix system by using a VNX VG2/VG8 gateway, one or more IP switches, and an FC switch or FCoE switch.

Figure 6 MPFS over iSCSI/FC on VNX VG2/VG8 gateway



How VNX MPFS works

Although called a file system, the VNX MPFS is neither a new nor a modified format for storing files. Instead, MPFS interoperates with and uses the standard NFS and CIFS protocols to enforce access permissions. MPFS uses a protocol called File Mapping Protocol (FMP) to exchange metadata between the Linux server and the VNX for file.

All requests unrelated to file I/O pass directly to the NFS/CIFS layer. The MPFS layer intercepts only the open, close, read, and write system calls.

When a Linux server intercepts a file-read call, the server sends a request to the VNX for file asking for the file's location. The VNX for file responds with a list of file extents, which the Linux server then uses to read the file data directly from the disk.

When a Linux server intercepts a file-write call, the server asks the VNX for file to allocate blocks on disk for the file. The VNX for file allocates the space in contiguous extents and sends the extent list to the Linux server. The Linux server then writes data directly to disk, informing the VNX for file when finished, so that the VNX for file can permit other Linux servers to access the file.

The remaining chapters describe how to install, manage, and tune Linux servers. The Using VNX Multi-Path File System technical module, available on EMC Online Support at http://Support.EMC.com, provides information on the MPFS commands.
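Once an MPFS file system is mounted, the effect of this split between metadata (FMP over the IP network) and file data (direct FC or iSCSI I/O) can be observed from the Linux server with the client utilities documented in Chapter 4. A minimal sketch follows; the exact options and output formats are described in “Using the mpfsctl utility” and “Displaying statistics” in Chapter 4.

    # Report the installed MPFS client software version
    mpfsctl version

    # Display cumulative MPFS/FMP counters for mounted MPFS file systems
    mpfsctl stats

    # Sample MPFS I/O statistics (see the mpfsstat section in Chapter 4 for options)
    mpfsstat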

Chapter 2
EMC VNX MPFS Environment Configuration

This chapter presents a high-level overview of configuring and installing EMC VNX MPFS.

Topics include:

◆ Configuration roadmap .................................................................... 26
◆ Implementation guidelines ............................................................... 28
◆ MPFS installation and configuration process ................................ 35
◆ Verifying system components .......................................................... 38
◆ Setting up the VNX for file ............................................................... 44
◆ Running the VNX Installation Assistant for File/Unified ........... 45
◆ Setting up the file system .................................................................. 46
◆ Enabling MPFS for the VNX for file ................................................ 57
◆ Configuring the VNX for block by using CLI commands ........... 58
◆ Configuring the SAN switch and storage ...................................... 59
◆ Configuring and accessing storage ................................................. 67
◆ Mounting MPFS ................................................................................. 84
◆ Unmounting MPFS ............................................................................ 88


Configuration roadmap

Figure 7 on page 27 shows the roadmap for configuring and installing the EMC VNX MPFS over FC and iSCSI architectures for both FC and iSCSI environments. The roadmap contains the topics representing sequential phases of the configuration and installation process. The descriptions of each phase, which follow, contain an overview of the tasks required to complete the process, and a list of related documents for more information.


Figure 7 Configuration roadmap

[Figure 7 shows the configuration phases in order: Implementation guidelines; MPFS installation and configuration process; Verifying system components; Setting up the VNX for file; Running the VNX Installation Assistant for File/Unified; Setting up the file system; Enabling MPFS for the VNX for file; Configuring the VNX for block by using CLI commands; Configuring the SAN switch and storage; Configuring and accessing storage; Mounting MPFS.]


Implementation guidelines

The MPFS implementation guidelines are valid for all MPFS installations.

VNX with MPFS recommendations

These recommendations are described in detail in the EMC VNX MPFS Applied Best Practices Guide, which can be found at http://Support.EMC.com:

◆ MPFS is optimized for large I/O transfers and may be useful for workloads with average I/O sizes as small as 16 KB. However, MPFS has been shown conclusively to improve performance for I/O sizes of 128 KB and greater.

◆ For best MPFS performance, in most cases, configure the VNX for file volumes by using a volume stripe size of 256 KB.

◆ EMC PowerPath® is supported, but is not recommended since path failover is built into the Linux server. When using PowerPath, the performance of the MPFS system is lower. Primus article emc 165953 contains details on using PowerPath and MPFS.

◆ When MPFS is started, 16 threads are run, which is the default number of MPFS threads. The maximum number of threads is 128, which is also the best practice for MPFS. If system performance is slow, gradually increase the number of threads allotted for the Data Mover to improve system performance. Add threads conservatively, as the Data Mover allocates 16 KB of memory to accommodate each new thread. The optimal number of threads depends on the network configuration, the number of Linux servers, and the workload.

Using VNX Multi-Path File System provides procedures to adjust the thread count. This technical module is available with the EMC Documentation on EMC Online Support.
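As a point of reference only, MPFS threads on a Data Mover are typically stopped and restarted from the Control Station with the server_setup command; the exact option syntax shown below is an assumption based on the Using VNX Multi-Path File System module and should be verified there before use.

    # Hypothetical sketch (verify the syntax in Using VNX Multi-Path File System):
    # restart MPFS on Data Mover server_2 with 64 threads instead of the default 16
    server_setup server_2 -Protocol mpfs -option stop
    server_setup server_2 -Protocol mpfs -option start=64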

Data Mover capacity The EMC Support Matrix provides Data Mover capacity guidelines. After logging in to EMC Online Support, go to Support > Interoperability and Product Lifecycle Information > Interoperability Matrices.


Linux server configuration

All Linux servers using the MPFS software require:

◆ At least one FC connection or an iSCSI initiator connection to a SAN switch or a VNX for block or Symmetrix system

◆ Network connections to the Data Mover

Note: When deploying MPFS over iSCSI on a VNX5300, VNX5500, VNX5700, VNX7500 or a VNX VG2/VG8 gateway configuration based on the iSCSI-enabled VNX for block, the VNX for block iSCSI target is used.

Storage configuration recommendations

Linux servers read from and write directly to a VNX for block. This direct access has several implications:

◆ Use the VNX Operating Environment (VNX OE) for best performance in new MPFS configurations.

◆ Unmount MPFS from the Linux server before changing any storage device or switch configuration.

Table 1 on page 29 lists the prefetch and read cache requirements.

Table 1 Prefetch and read cache requirements

Prefetch requirements   Read cache    Notes

Modest                  50–100 MB     80% of the systems fall under this category.

Heavy                   250 MB        Requests greater than 64 KB and sequential reads from many LUNs expected over 300 MB/s.

Extremely heavy         1 GB          120 or more drives reading in parallel.


MPFS feature configurations

These sections describe the configurations for MPFS features.

iSCSI CHAP authentication

The Linux server with MPFS software and the VNX for block support the Challenge Handshake Authentication Protocol (CHAP) for iSCSI network security.

CHAP provides a method for the Linux server and VNX for block to authenticate each other through an exchange of a shared secret (a security key that is similar to a password), which is typically a string of 12 to 16 bytes.

CAUTION! If CHAP security is not configured for the VNX for block, any computer connected to the same IP network as the VNX for block iSCSI ports can read from or write to the VNX for block.

CHAP has two variants — One-way and reverse CHAP authentication:

◆ In one-way CHAP authentication, CHAP sets up the accounts that the Linux server uses to connect to the VNX for block. The VNX for block authenticates the Linux server.

◆ In reverse CHAP authentication, the VNX for block authenticates the Linux server and the Linux server also authenticates the VNX for block.

Because CHAP secrets are shared between the Linux server and VNX for block, the CHAP secrets are configured the same on both the Linux server and VNX for block.

The CX-Series iSCSI Security Setup Guide provides detailed information regarding CHAP and is located on the EMC Online Support website.
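For the open-iscsi initiator used with RHEL 5-6, SLES 10-11, and CentOS 5-6 (see “Configuring the iSCSI driver for RHEL 5-6, SLES 10-11, and CentOS 5-6 (iSCSI configuration)” on page 73), CHAP secrets are normally set in /etc/iscsi/iscsid.conf before target discovery. The sketch below uses placeholder names and secrets; matching values must also be configured on the VNX for block, as described in the CX-Series iSCSI Security Setup Guide.

    # /etc/iscsi/iscsid.conf excerpt (open-iscsi initiator; placeholder values)
    # One-way CHAP: the VNX for block authenticates the Linux server
    node.session.auth.authmethod = CHAP
    node.session.auth.username = iqn.1994-05.com.redhat:linuxserver01
    node.session.auth.password = initiator_secret_12b

    # Reverse CHAP: the Linux server also authenticates the VNX for block
    node.session.auth.username_in = vnx_target_chap_name
    node.session.auth.password_in = target_secret_12b

    # After editing the file, restart the initiator and rediscover targets, for example:
    #   service iscsi restart
    #   iscsiadm -m discovery -t sendtargets -p <SP iSCSI port IP address>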


VMware ESX (optional)

VMware is a software suite for optimizing and managing IT environments through virtualization technology. MPFS supports the Linux server guest operating systems running on a VMware ESX server.

The VMware ESX server is a robust, production-proven virtualization layer that abstracts processor, memory, storage, and networking resources into multiple virtual machines (software representation of a physical machine) running side-by-side on the same server.

VMware is not tied to any operating system, giving customers a bias-free choice of operating systems and software applications. All operating systems supported by VMware are supported with both iSCSI and NFS protocols for basic connectivity. This allows several instances of similar and different guest operating systems to run as virtual machines on one physical machine.

To run a Linux server guest operating system on a VMware ESX server, the configuration must meet these requirements:

◆ Run a supported version of the Linux operating system.

◆ Have the VNX for block supported HBA hardware and driver installed.

◆ Connect to each SP in each VNX for block directly or through a switch. Each SP must have an IP connection.

◆ Connect to a TCP/IP network with both SPs in the VNX for block.

Currently, the VMware ESX server has these limitations:

◆ Booting the guest Linux server off iSCSI is not supported.

◆ PowerPath is not supported.

◆ Virtual machines that run the Linux server guest operating system must use iSCSI to access the VNX for block.

◆ Store the virtual machine on a VMware datastore (VNX for block or Symmetrix system); the VMware ESX server accesses it by using either FC (ESX server versions 3.0.1, 3.0.2, or 3.5.1) or iSCSI (ESX server version 3.5.1).

The EMC Host Connectivity Guide for VMware ESX Server provides information on how to configure iSCSI initiator ports and how VMware operates in a Linux environment. The VMware website, http://www.vmware.com, provides more information.


Rainfinity Global Namespace

The EMC Rainfinity® Global Namespace (GNS) Appliance complements the Nested Mount File System (NMFS) by providing a global namespace across Data Movers and simplifying mount point management of network shared files. A global namespace organizes file shares across servers into a coherent directory structure.

A global namespace is a virtual hierarchy of folders and links to shares or exports, designed to ease access to distributed data. End users no longer need to know the server names and shared folders where the physical data resides. Instead, they mount only to the namespace and navigate the structure of the namespace which appears as though they are navigating a directory structure on a physical server. The Rainfinity GNA application works behind the scenes to provide Linux servers with the data they need from multiple physical servers or shared folders.

The Rainfinity GNA has these benefits:

◆ Leverages the MPFS architecture to provide a scalable NFS global namespace for Linux servers with an iSCSI interface.

◆ Creates a global view of file shares, simplifying the management of complex NAS and file server environments.

◆ Provides a single mount point for MPFS NAS shares, so as the file server environment grows and changes, users and applications do not have to experience the disruption of remounting.

◆ Supports 50,000 physical file shares in a single global namespace.

◆ Each Rainfinity GNA cluster supports 30,000 server connections with up to two clusters deployed to share a single global namespace.

The use of NAS devices and file servers increases storage management complexity. The Rainfinity GNS removes the dependency on physical storage location and makes it easier to consolidate, replace, and deploy NAS devices and file servers without disrupting server access.

MPFS is a VNX for file feature that allows heterogeneous servers with MPFS software to concurrently access, directly over FC or iSCSI channels, stored data on a VNX for block or Symmetrix system. MPFS NFS is a referral-based protocol that does not require Rainfinity to be permanent in-band 100 percent of the time. As a result, the protocols are very scalable.


The EMC Rainfinity Global Namespace Appliance Getting Started Guide provides information on how the GNS solution works with MPFS, how to configure GNS when supporting Linux servers, and how to mount a Linux server to the GNS application.

Hierarchical volume management

Hierarchical volume management (HVM) allows the user to cache more information about the file to disk mapping. It is particularly useful when using large files with random access I/O patterns and with file systems built on a small stripe.

A hierarchical volume is a tree-based structure composed of File Mapping Protocol (FMP) volumes. Each volume is either an FMP_VOLUME_DISK, FMP_VOLUME_SLICE, FMP_STRIPE or FMP_VOLUME_META. The root of the tree and the intermediate nodes are slices, stripes, or metas. The leaves of the tree are disks.

Every volume in a hierarchical volume description has a definition that includes an ID. By convention, a volume must be defined before it can be referenced by an ID. One consequence of this convention is that the volumes in the tree must be listed in depth-first search order.

Because of limitations on the transport medium, the description of an especially dense volume tree may require more than one RPC packet. Therefore, a hierarchical volume description may be incomplete, in which case the Linux server with MPFS software must send subsequent requests to obtain descriptions of the remaining volumes. Because the volume structure could change, for example, owing to automatic file system extension, each response contains a “cookie” that changes when the volume tree changes. A Linux server issuing a request for volume information must return the latest cookie; if the volume tree has changed, the VNX for file returns a status of FMP_VOLUME_CHANGED. In this case, the Linux server must get the whole hierarchical volume description from the beginning by reissuing its mount request.


MPFS changes the FMP protocol, which allows the FMP server to describe the volume slices, stripes, and concatenations used to create a logical volume on which a file system is stored. Linux servers communicate with the FMP server to request maps that allow a Linux server to read a file directly from the disk. These maps are described as offsets and lengths on a physical disk. Because most file systems are created on striped volumes, from the standpoint of Linux server communication, the maps are broken up into many extents. Each time the file crosses a stripe boundary, the FMP server must send a different ID to represent the physical volume, and a new offset and length on that volume.

With HVM, when the user mounts a file system, the Linux server requests a description of the logical volumes (the striping pattern). The Linux server now describes file maps as locations within the logical volume. The Linux server is now responsible for noticing when a file crosses a stripe boundary, and dispatching the I/Os to the proper physical disk. This change allows the protocol to be more efficient by using less space to represent the maps. Furthermore, it allows the Linux server to represent the extent map in a more compact form, thus conserving Linux server memory and CPU resources.


MPFS installation and configuration process

The MPFS configuration process involves performing tasks on various system components in a specific order. MPFS can be installed and configured manually or with the use of the VNX Installation Assistant for File/Unified as described in “Running the VNX Installation Assistant for File/Unified” on page 45.

Note: This document provides guidelines for installing and configuring MPFS with several options. Disregard steps that do not pertain to your environment.

To manually install and configure MPFS:

1. Run the VNX Installation Assistant for File/Unified to help set up the MPFS system (for MPFS-enabled systems only), which:

a. Provisions unused disks.

b. Creates/extends the MPFS storage pool.

c. Configures the VNX for block iSCSI ports (only for iSCSI ports).

d. Starts the MPFS service.

e. Installs the MPFS client software on multiple Linux hosts.

f. Configures the Linux host parameters and sysctl parameters.

g. Mounts the MPFS-enabled NFS exports.

2. Collect installation and configuration planning information and complete the checklist:

a. Collect the IP network addresses, FC port addresses, and VNX for block or Symmetrix system information.

b. Map the Ethernet and TCP/IP network topology.

c. Map the FC zoning topology.

d. Map the virtual storage area network (VSAN) topology.


3. Install the MPFS software manually (on a native or VMware-hosted Linux operating system; see note 1). A command-level sketch follows these sub-steps:

a. Install the HBA driver (for FC configuration).

b. Install and configure iSCSI (for iSCSI configuration; see note 2).

c. Start the iSCSI service (for iSCSI configuration).

d. Install the MPFS software.

e. Verify the MPFS software configuration.
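The command-level sketch below summarizes these steps for an iSCSI configuration on a RHEL 5 or RHEL 6 server. The package file name and target address are placeholders; Chapter 3 gives the authoritative installation procedure, and “Mounting MPFS” on page 84 gives the supported mount syntax.

    # Install and start the iSCSI initiator, then discover the VNX for block targets
    # (iSCSI configuration; the target IP address is a placeholder)
    yum install iscsi-initiator-utils
    service iscsi start
    iscsiadm -m discovery -t sendtargets -p 192.168.20.10

    # Install the MPFS client software (placeholder package name; see Chapter 3)
    rpm -ivh EMCmpfs-<version>.rpm

    # Verify the installation with the utilities described in Chapter 4
    mpfsctl version
    mpfsinq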

Configuration planning checklist

Collect information before beginning the MPFS installation and configuration process.

For an FC and iSCSI configuration:

❑ SP A IP address .....................................................................................

❑ SP A login name....................................................................................

❑ SP A password.......................................................................................

❑ SP B IP address ......................................................................................

❑ SP B login name.....................................................................................

❑ SP B password .......................................................................................

❑ Zoning for Data Movers.......................................................................

❑ First Data Mover LAN blade IP address or Data Mover IP address ...............................................................................................

❑ Second Data Mover LAN blade IP address or Data Mover IP address ...............................................................................................

❑ Control Station IP address or CS address..........................................

❑ LAN IP address (same as LAN Data Movers)..................................

❑ Linux server IP address on LAN ........................................................

❑ VSAN name ...........................................................................................

❑ VSAN number (ensure the VSAN number is not in use) ...............

Note 1: “VMware ESX (optional)” on page 31 provides information.
Note 2: Installing VNX iSCSI Host Components provides details.


For an FC configuration:

❑ SP A FC port assignment or FC ports ................................................

❑ SP B FC port assignment or FC ports.................................................

❑ FC switch name.....................................................................................

❑ FC switch password .............................................................................

❑ FC switch port IP address....................................................................

❑ Zoning for each FC HBA port.............................................................

❑ Zoning for each FC director ................................................................

For an iSCSI configuration:

❑ VNX with MPFS target IP address .....................................................

❑ VNX for block or Symmetrix system target IP address ..................

❑ Linux server IP address for iSCSI Gigabit connection ....................

❑ Initiator and Target Challenge Handshake Authentication Protocol (CHAP) password (optional) ..............................................


Verifying system components

MPFS environments require standard VNX for file hardware and software, with the addition of a few components that are specific to either FC or iSCSI configurations. Before setting up an MPFS environment, verify that each of the previously mentioned components is in place and functioning normally. Each hardware and software component is discussed in these sections.

Required hardware components

This section lists the MPFS configurations with the required hardware components.

MPFS over FC on VNX configuration

The hardware components for an MPFS over FC on VNX configuration are:

◆ A VNX series connected to an FC network and SAN

◆ An IP switch that connects the VNX series to the servers

◆ An FC switch or FCoE switch with an HBA for each Linux server

“MPFS over FC on VNX” on page 19 provides more information.

MPFS over FC on VNX VG2/VG8 gateway configuration

The hardware components for an MPFS over FC on VNX VG2/VG8 gateway configuration are:

◆ A VNX VG2/VG8 gateway connected to an FC network and SAN

◆ A fabric-connected VNX for block or Symmetrix system, with available LUNs

◆ An IP switch that connects the VNX VG2/VG8 gateway to the servers

◆ An FC switch or FCoE switch with an HBA for each Linux server

“MPFS over FC on VNX” on page 19 provides more information.

MPFS over iSCSI on VNX configuration

The hardware components for an MPFS over iSCSI on VNX configuration are:

◆ A VNX series

◆ One or two IP switches that connect the VNX series to the servers

“MPFS over iSCSI on VNX” on page 21 provides more information.


MPFS over iSCSI on VNX VG2/VG8 gateway configuration

The hardware components for an MPFS over iSCSI on VNX VG2/VG8 gateway configuration are:

◆ A VNX VG2/VG8 gateway connected to an FC network and SAN

◆ A fabric-connected VNX for block or Symmetrix system with available LUNs

◆ One or two IP switches that connect the VNX VG2/VG8 gateway and the VNX for block or Symmetrix system to the servers

“MPFS over iSCSI on VNX” on page 21 provides more information.

MPFS over iSCSI/FC on VNX configuration

The hardware components for an MPFS over iSCSI/FC on VNX configuration are:

◆ A VNX for file

◆ One or two IP switches and an FC switch or FCoE switch that connect the VNX for file to the servers

“MPFS over iSCSI/FC on VNX” on page 22 provides more information.

MPFS over iSCSI/FC on VNX VG2/VG8 gateway configuration

The hardware components for an MPFS over iSCSI/FC on VNX VG2/VG8 gateway configuration are:

◆ A VNX VG2/VG8 gateway connected to an FC network and SAN

◆ A fabric-connected VNX for block or Symmetrix system with available LUNs

◆ One or two IP switches and an FC switch or FCoE switch that connect the VNX VG2/VG8 gateway and the VNX for block or Symmetrix system to the servers

“MPFS over iSCSI/FC on VNX” on page 22 provides more information.

Configuring Gigabit Ethernet ports

Two Gigabit Ethernet NICs, or a multiport NIC with two available ports, connected to isolated IP networks or subnets are recommended for each Linux server for iSCSI. For each Linux server for FC, one NIC is required for NFS and FMP traffic. For maximum performance, use:

◆ One port for the connection between the Linux server and the Data Mover for MPFS metadata transfer and NFS traffic

◆ One port for the connection between the Linux server and the same subnet as the iSCSI discovery address dedicated to data transfer


Note: The second NIC for iSCSI must be on the same subnet as the discovery address.

Configuring and Managing VNX Networking provides detailed information for setting up network connections. The document is available on the EMC Online Support website.
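As a quick check that the data NIC is on the same subnet as the iSCSI discovery address, the interface addresses and target discovery can be confirmed from the Linux server. The interface name and IP address below are placeholders.

    # Show the IP addresses assigned to the server NICs
    ip addr show

    # Confirm the iSCSI discovery address is reachable through the data NIC
    ping -c 3 -I eth1 192.168.20.10

    # Discover iSCSI targets on the VNX for block SP port (open-iscsi initiator)
    iscsiadm -m discovery -t sendtargets -p 192.168.20.10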

Required software components

Software components required for an MPFS configuration:

◆ NAS software version that supports either FC or iSCSI configurations on Linux platforms

◆ VNX OE software version 7.0.x.x supports RHEL 6 or SuSE 11.

◆ Linux operating system and kernel version that supports HBAs or an iSCSI initiator

Note: The EMC E-Lab Interoperability Matrix lists the latest supported versions of the Red Hat Enterprise Linux, SuSE Linux Enterprise Server, and CentOS operating systems.

◆ MPFS software version 5.0 or later

◆ iSCSI initiator

Related documentation

The EMC VNX MPFS for Linux Clients Release Notes, available on the EMC Online Support website, provide a complete list of EMC supported operating system versions.

Verifying configuration

Verify that each of the previously mentioned system components is in place and functioning normally. If all of these components are operational, continue with “MPFS installation and configuration process” on page 35. If any component is not operational, “Error Messages and Troubleshooting” on page 141 provides more information.

Configure NFS and start the services on the VNX for file that are used for MPFS connectivity.


Related documentation

These technical modules, available on the EMC Online Support website, provide additional information:

◆ Configuring and Managing EMC VNX Networking

◆ Managing VNX Volumes and File Systems Manually

◆ Configuring Standbys on VNX

Verifying system requirements

This section describes system requirements for an MPFS environment.

CAUTION! Ensure that the systems used for MPFS do not contain both VNX for block and Symmetrix system LUNs. MPFS does not support a mixed storage environment.

VNX configurations used within an MPFS environment must be designed for MPFS. These models are supported:

◆ VNX5300

◆ VNX5500

◆ VNX5700

◆ VNX7500

All VNX and VNX VG2/VG8 gateway configurations must meet these requirements:

◆ Have file systems built on disks from only one type (not a mixture of disk drives):

• For VNX configurations - Serial Attached SCSI (SAS) or nearline SAS (NL-SAS)

• For VNX VG2/VG8 gateway configurations - Fibre Channel (FC) (for CLARiiON CX3, CX4, or Symmetrix system), SAS, NL-SAS, or Advanced Technology Attachment (ATA)

◆ Use disk volumes from the same storage system. A file system spanning multiple storage systems is not supported.

◆ Cannot use RAID groups that span across two different system enclosures.

◆ LUNs must be built by using RAID 1, RAID 3, RAID 5, or RAID 6 only.

◆ Management LUNs must be built by using 4+1 RAID 5 only.


◆ Have write cache enabled.

◆ Use EMC Access Logix™.

◆ Run VNX OE with NAS 7.0.x or later.

Symmetrix systems used within an MPFS environment must be designed for MPFS. These models are supported:

◆ Symmetrix DMX™ series Enterprise Storage Platform (ESP)

◆ Symmetrix VMAX™ series

◆ Symmetrix 8000 series

All Symmetrix systems must meet these requirements:

◆ Use the correct version of the microcode. Do either of the following to obtain microcode release updates:

• Contact your EMC Customer Support Representative

• Check the EMC E-Lab™ Interoperability Navigator

◆ Have the Symmetrix FC/SCSI port flags properly configured for MPFS. Set the Avoid_Reset_Broadcast (ARB) flag for each port that is connected to a Linux server.

◆ Do not use a file system that spans across two different system enclosures.

Verifying the FC switch requirements (FC configuration)

To set up the FC switch:

1. Install the FC switch.

2. Verify that the host bus adapter (HBA) driver is loaded on the Linux server, for example by checking the loaded kernel modules and the Fibre Channel host entries in sysfs, as shown in the example after this procedure.

3. Connect cables from each HBA FC port to a switch port.

4. Verify the HBA connection to the switch by checking LEDs for the switch port connected to the HBA port.

5. Configure zoning for the switch as described in “Zoning the SAN switch (FC configuration)” on page 59.

Note: Configure zoning as single initiator, which means that each HBA port will have its own zone. Each zone has only one HBA port.
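For example (a minimal sketch, assuming an Emulex (lpfc) or QLogic (qla2xxx) HBA; adjust the module names for your adapter), step 2 can be performed on the Linux server as follows:

# lsmod | egrep 'lpfc|qla2xxx'              (confirms that the HBA kernel module is loaded)
# cat /sys/class/fc_host/host*/port_name    (lists the WWPNs of the FC ports, which are also needed for zoning)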


Verifying the IP-SAN VNX for block requirements

An MPFS over FC or iSCSI environment with VNX for block configurations requires:

◆ For a VNX configuration, a VNX5300, VNX5500, VNX5700, or VNX7500.

◆ For a VNX VG2/VG8 gateway configuration:

• VNX for block, Symmetrix DMX, Symmetrix VMAX, or Symmetrix 8000.

• Same cabling as shared VNX for block cabling.

• Access Logix LUN masking by using iSCSI to present all managed LUNs to the Linux servers.

◆ The Linux server configuration is the same as a standard Linux server connection to an iSCSI target.

◆ Linux servers are load-balanced across VNX for block iSCSI ports for performance improvement and protection against single-port and Ethernet cable problems.

◆ Port 0 iSCSI through port 3 iSCSI on each storage processor is connected to the iSCSI network.


Setting up the VNX for file

The VNX System Software Installation Guide, available on the EMC Online Support website, provides information on how to set up the VNX for file.


Running the VNX Installation Assistant for File/Unified

The VNX Installation Assistant for File/Unified is a single-instance, pre-configuration tool targeted for a factory-installed (unconfigured) VNX or for opening EMC Unisphere™ software. The VNX Installation Assistant for File/Unified helps set up an MPFS system (for MPFS-supported systems only). It performs these tasks:

◆ Provisions storage for MPFS use

◆ Creates an MPFS storage pool

◆ Configures VNX for block iSCSI ports (only for iSCSI ports)

◆ Starts the MPFS service on the VNX for file system

◆ Push-installs the MPFS client software on multiple Linux hosts

◆ Configures Linux host parameters

◆ Mounts MPFS-enabled NFS exports

The VNX Installation Assistant for File/Unified is available from the VNX Tools page on the EMC Online Support website. To download it, open Support > Product and Diagnostic Tools > VNX Tools > VNX Startup Assistant and select the appropriate version.


Setting up the file system

This section describes the prerequisites for file systems and the procedure for creating a file system.

File system prerequisites

File system prerequisites are guidelines to be met before building a file system. A properly built file system must:

◆ Use disk volumes from the same VNX for block.

Note: Do not use a file system spanning across two system enclosures. A file system spanning multiple systems is not supported even if the multiple systems are of the same type, such as VNX for block or Symmetrix system.

◆ Have file systems built on disks from only one type (not a mixture of disk drives):

• For VNX configurations - Serial Attached SCSI (SAS) or nearline SAS (NL-SAS)

• For VNX VG2/VG8 gateway configurations - Fibre Channel (FC) (for CLARiiON CX3, CX4, or Symmetrix system), SAS, NL-SAS, or Advanced Technology Attachment (ATA)

◆ For best MPFS performance, in most cases, configure the volumes by using a volume stripe size of 256 KB. The EMC VNX MPFS Applied Best Practices Guide provides detailed performance related information.

◆ In a Symmetrix system environment, ensure that the Symmetrix FC/SCSI port flag settings are properly configured for MPFS; in particular, set the ARB flag. The EMC Customer Support Representative configures these settings.


Creating a file system on a VNX for file

This section describes how to configure, create, mount, and export file systems.

Ensure that LUNs for the new file system are created optimally for MPFS. All LUNs must:

◆ Use the same RAID type

◆ Have the same number of spindles in each RAID group

◆ Contain spindles of the same type and speed

In addition, ensure that all LUNs do not share spindles with:

◆ Other LUNs in the same file system

◆ Another file system heavily utilized by high-I/O applications

Before creating the LUNs, ensure that the total usable capacity of all the LUNs within a single file system does not exceed 16 TB. The maximum number of LUNs tested that are supported in MPFS configurations per file system is 256. Ensure that the LUNs are accessible by the Data Movers through LUN masking, switch zoning, and VSAN settings.

Use this procedure to build or mount the MPFS on the VNX for file:

1. Log in to the Control Station as NAS administrator.

2. Before building the file system, type the nas_disk command to return a list of unused disks by using this command syntax:

$ nas_disk -list |grep n | more

For example, type:

$ nas_disk -list |grep n | more


The output shows all disks not in use:

id inuse sizeMB storageID-devID type name servers
7  n 466747 APM00065101342-0010 CLSTD d7  1,2
8  n 466747 APM00065101342-0011 CLSTD d8  1,2
9  n 549623 APM00065101342-0012 CLSTD d9  1,2
10 n 549623 APM00065101342-0014 CLSTD d10 1,2
11 n 549623 APM00065101342-0016 CLSTD d11 1,2
12 n 549623 APM00065101342-0018 CLSTD d12 1,2
13 n 549623 APM00065101342-0013 CLSTD d13 1,2
14 n 549623 APM00065101342-0015 CLSTD d14 1,2
15 n 549623 APM00065101342-0017 CLSTD d15 1,2
16 n 549623 APM00065101342-0019 CLSTD d16 1,2
17 n 549623 APM00065101342-001A CLSTD d17 1,2
18 n 549623 APM00065101342-001B CLSTD d18 1,2
19 n 549623 APM00065101342-001C CLSTD d19 1,2
20 n 549623 APM00065101342-001E CLSTD d20 1,2
21 n 549623 APM00065101342-0020 CLSTD d21 1,2
22 n 549623 APM00065101342-001D CLSTD d22 1,2
23 n 549623 APM00065101342-001F CLSTD d23 1,2
24 n 549623 APM00065101342-0021 CLSTD d24 1,2
25 n 549623 APM00065101342-0022 CLSTD d25 1,2
26 n 549623 APM00065101342-0024 CLSTD d26 1,2
27 n 549623 APM00065101342-0026 CLSTD d27 1,2
28 n 549623 APM00065101342-0023 CLSTD d28 1,2
29 n 549623 APM00065101342-0025 CLSTD d29 1,2
30 n 549623 APM00065101342-0027 CLSTD d30 1,2


3. Display all disks by using this command syntax:

$ nas_disk -list

For example, type:

$ nas_disk -list

Output:

In the output below, the disks selected for the first stripe alternate SP ownership A,B,A,B,A,B, and the disks selected for the second stripe alternate SP ownership B,A,B,A,B,A. The two stripes draw their disks from the same RAID groups.

id inuse sizeMB storageID-devID type name servers
1  y 11263  APM00065101342-0000 CLSTD root_disk 1,2
2  y 11263  APM00065101342-0001 CLSTD root_disk 1,2
3  y 2047   APM00065101342-0002 CLSTD d3  1,2
4  y 2047   APM00065101342-0003 CLSTD d4  1,2
5  y 2047   APM00065101342-0004 CLSTD d5  1,2
6  y 2047   APM00065101342-0005 CLSTD d6  1,2
7  n 466747 APM00065101342-0010 CLSTD d7  1,2
8  n 466747 APM00065101342-0011 CLSTD d8  1,2
9  n 549623 APM00065101342-0012 CLSTD d9  1,2
10 n 549623 APM00065101342-0014 CLSTD d10 1,2
11 n 549623 APM00065101342-0016 CLSTD d11 1,2
12 n 549623 APM00065101342-0018 CLSTD d12 1,2
13 n 549623 APM00065101342-0013 CLSTD d13 1,2
14 n 549623 APM00065101342-0015 CLSTD d14 1,2
15 n 549623 APM00065101342-0017 CLSTD d15 1,2
16 n 549623 APM00065101342-0019 CLSTD d16 1,2
17 n 549623 APM00065101342-001A CLSTD d17 1,2
18 n 549623 APM00065101342-001B CLSTD d18 1,2
19 n 549623 APM00065101342-001C CLSTD d19 1,2
20 n 549623 APM00065101342-001E CLSTD d20 1,2
21 n 549623 APM00065101342-0020 CLSTD d21 1,2
22 n 549623 APM00065101342-001D CLSTD d22 1,2
23 n 549623 APM00065101342-001F CLSTD d23 1,2
24 n 549623 APM00065101342-0021 CLSTD d24 1,2
25 n 549623 APM00065101342-0022 CLSTD d25 1,2
26 n 549623 APM00065101342-0024 CLSTD d26 1,2
27 n 549623 APM00065101342-0026 CLSTD d27 1,2
28 n 549623 APM00065101342-0023 CLSTD d28 1,2
29 n 549623 APM00065101342-0025 CLSTD d29 1,2
30 n 549623 APM00065101342-0027 CLSTD d30 1,2


Note: Use Navicli or EMC Navisphere® Manager to determine which LUNs are on SP A and SP B.
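For example (a sketch only; the LUN number and IP address are placeholders taken from this chapter's examples, and the exact output fields vary by release), the current owner of a LUN can be displayed from the Control Station:

$ /nas/sbin/navicli -h 172.24.107.242 getlun 16 -owner

Output similar to:

LOGICAL UNIT NUMBER 16
Current owner: SP A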

4. Find the names of file systems mounted on all servers by using this command syntax:

$ server_df ALL

For example, type:

$ server_df ALL

Output:

server_2 :
Filesystem kbytes used avail capacity Mounted on
S2_Shgvdm_FS1 831372216 565300 825719208 1% /root_vdm_5/S2_Shgvdm_FS1
root_fs_vdm_vdm01 114592 7992 106600 7% /root_vdm_5/.etc
S2_Shg_FS2 831372216 19175496 812196720 2% /S2_Shg_mnt2
S2_Shg_FS1 1662746472 25312984 1637433488 2% /S2_Shg_mnt1
root_fs_common 153 5280 10088 34% /.etc_common
root_fs_2 2581 80496 177632 31% /

server_3 :
Filesystem kbytes used avail capacity Mounted on
root_fs_vdm_vdm02 114592 7992 106600 7% /root_vdm_6/.etc
S3_Shgvdm_FS1 831372216 4304736 827067480 1% /root_vdm_6/S3_Shgvdm_FS1
S3_Shg_FS1 831373240 11675136 819698104 1% /S3_Shg_mnt1
S3_Shg_FS2 831373240 4204960 827168280 1% /S3_Shg_mnt2
root_fs_commo 15368 5280 10088 34% /.etc_common
root_fs_3 258128 8400 249728 3% /

vdm01 :
Filesystem kbytes used avail capacity Mounted on
S2_Shgvdm_FS1 831372216 5653008 825719208 1% /S2_Shgvdm_FS1

vdm02 :
Filesystem kbytes used avail capacity Mounted on
S3_Shgvdm_FS1 831372216 4304736 827067480 1% /S3_Shgvdm_FS1

Find the names of file systems mounted on a specific server by using this command syntax:

$ server_df <server_name>

where:
<server_name> = name of the Data Mover or VDM


For example, type:

$ server_df vdm02

Output:

vdm02 :
Filesystem kbytes used avail capacity Mounted on
S3_Shgvdm_FS1 831372216 4304736 827067480 1% /S3_Shgvdm_FS1

5. Find the names of existing file systems that are not mounted by using this command syntax:

$ nas_fs -list

For example, type:

$ nas_fs -list

Output:

id inuse type acl volume name server
1   n 1 0 10   root_fs_1
2   y 1 0 12   root_fs_2 2
3   n 1 0 14   root_fs_3
4   n 1 0 16   root_fs_4
5   n 1 0 18   root_fs_5
6   n 1 0 20   root_fs_6
7   n 1 0 22   root_fs_7
8   n 1 0 24   root_fs_8
9   n 1 0 26   root_fs_9
10  n 1 0 28   root_fs_10
11  n 1 0 30   root_fs_11
12  n 1 0 32   root_fs_12
13  n 1 0 34   root_fs_13
14  n 1 0 36   root_fs_14
15  n 1 0 38   root_fs_15
16  y 1 0 40   root_fs_common 2
17  n 5 0 73   root_fs_ufslog
18  n 5 0 76   root_panic_reserve
19  n 5 0 77   root_fs_d3
20  n 5 0 78   root_fs_d4
21  n 5 0 79   root_fs_d5
22  n 5 0 80   root_fs_d6
25  y 1 0 116  S2_Shg_FS2 2
221 y 1 0 112  S2_Shg_FS1 2
222 n 1 0 1536 S3_Shg_FS1
223 n 1 0 1537 S3_Shg_FS2
384 y 1 0 3026 testdoc_fs2 2


6. Find the names of volumes already mounted by using this command syntax:

$ nas_volume -list

For example, type:

$ nas_volume -list

Part of the output is similar to this:

id inuse type acl name cltype clid
1    y 4 0 root_disk     0 1-34,52
2    y 4 0 root_ldisk    0 35-51
3    y 4 0 d3            1 77
4    y 4 0 d4            1 78
5    y 4 0 d5            1 79
6    y 4 0 d6            1 80
7    n 1 0 root_dos      0
8    n 1 0 root_layout   0
9    y 1 0 root_slice_1  1 10
10   y 3 0 root_volume_1 2 1
11   y 1 0 root_slice_2  1 12
12   y 3 0 root_volume_2 2 2
13   y 1 0 root_slice_3  1 14
14   y 3 0 root_volume_3 2 3
15   y 1 0 root_slice_4  1 16
16   y 3 0 root_volume_4 2 4
. . . . . . .
. . . . . . .
. . . . . . .
1518 y 3 0 Meta_S2vdm_FS1 2 229
1527 y 3 0 Meta_S2_FS1    2 235

7. Create the first stripe by using this command syntax:

$ nas_volume -name <name> -create -Stripe <stripe_size> <volume_set>,...

where:
<name> = name of the new stripe pair
<stripe_size> = size of the stripe
<volume_set> = set of disks

For example, to create a stripe pair named s2_stripe1 and a depth of 262144 bytes (256 KB) by using disks d9, d14, d11, d16, d17, and d22, type:

$ nas_volume -name s2_stripe1 -create -Stripe 262144 d9,d14,d11,d16,d17,d22


Output:

id          = 135
name        = s2_stripe1
acl         = 0
in_use      = False
type        = stripe
stripe_size = 262144
volume_set  = d9,d14,d11,d16,d17,d22
disks       = d9,d14,d11,d16,d17,d22

Note: For best MPFS performance, in most cases, configure the file volumes by using a volume stripe size of 256 KB. Detailed performance-related information is available in the EMC VNX MPFS Applied Best Practices Guide.

8. Create the second stripe by using this command syntax:

$ nas_volume -name <name> -create -Stripe <stripe_size> <volume_set>,...

where:
<name> = name of the new stripe pair
<stripe_size> = size of the stripe
<volume_set> = set of disks

For example, to create a stripe pair named s2_stripe2 and a depth of 262144 bytes (256 KB) by using disks d13, d10, d15, d12, d18, and d19, type:

$ nas_volume -name s2_stripe2 -create -Stripe 262144 d13,d10,d15,d12,d18,d19

Output:

id          = 136
name        = s2_stripe2
acl         = 0
in_use      = False
type        = stripe
stripe_size = 262144
volume_set  = d13,d10,d15,d12,d18,d19
disks       = d13,d10,d15,d12,d18,d19


9. Create the metavolume by using this command syntax:

$ nas_volume -name <name> -create -Meta <volume_name>

where:
<name> = name of the new meta volume
<volume_name> = names of the volumes

For example, to create a meta volume s2_meta1 with volumes s2_stripe1 and s2_stripe2, type:

$ nas_volume -name s2_meta1 -create -Meta s2_stripe1,s2_stripe2

Output:

id         = 137
name       = s2_meta1
acl        = 0
in_use     = False
type       = meta
volume_set = s2_stripe1, s2_stripe2
disks      = d9,d14,d11,d16,d17,d22,d13,d10,d15,d12,d18,d19

10. Create the file system by using this command syntax:

$ nas_fs -name <name> -create <volume_name>

where:
<name> = name of the new file system
<volume_name> = name of the meta volume

For example, to create a file system s2fs1 with a meta volume s2_meta1, type:

$ nas_fs -name s2fs1 -create s2_meta1

Output:

id = 33
name = s2fs1
acl = 0
in_use = False
type = uxfs
worm = compliance
worm_clock = Thu Mar 6 16:26:09 EST 2008
worm Max Retention Date = Fri April 18 12:30:40 EST 2008
volume = s2_meta1
pool =
rw_servers =
ro_servers =
rw_vdms =
ro_vdms =
auto_ext = no, virtual_provision=no
stor_devs = APM00065101342-0012,APM00065101342-0015,APM00065101342-0016,APM00065101342-0019,APM00065101342-001A,APM00065101342-001D,APM00065101342-0013,APM00065101342-0014,APM00065101342-0017,APM00065101342-0018,APM00065101342-001B,APM00065101342-001C
disks = d9,d14,d11,d16,d17,d22,d13,d10,d15,d12,d18,d19

11. Mount the file system by using this command syntax:

$ server_mount <movername> <fs_name> <mount_point>

where:
<movername> = name of the Data Mover
<fs_name> = name of the file system to mount
<mount_point> = name of the mount point

For example, to mount a file system on Data Mover server_2 with file system s2fs1 and mount point /s2fs1, type:

$ server_mount server_2 s2fs1 /s2fs1

Output:

server_2 : done

12. Export the file system by using this command syntax:

$ server_export <mover_name> -Protocol nfs -name <name> -option <options> <pathname>

where:
<mover_name> = name of the Data Mover
<name> = name of the alias for the <pathname>
<options> = options to include
<pathname> = path of the mount point created

For example, to export a file system on Data Mover server_2 with a pathname alias of ufs1 and mount point path /ufs1, type:

$ server_export server_2 -P nfs -name ufs1 /ufs1

Output:

server_2: done


Related documentation

These documents provide more information on building MPFS and are available on the EMC Online Support website:

◆ EMC VNX Command Line Interface Reference for File Manual

◆ Configuring and Managing VNX Networking

◆ Managing VNX Volumes and File Systems Manually

◆ Using VNX Multi-Path File System


Enabling MPFS for the VNX for file

Start MPFS on the VNX for file. Use this command syntax:

$ server_setup <movername> -Protocol mpfs -option <options>

where:
<movername> = name of the Data Mover
<options> = options to include

For example, type:

$ server_setup server_2 -Protocol mpfs -option start

Output:

server_2: done

Note: Start MPFS on the same Data Mover on which the file system was exported using NFS.


Configuring the VNX for block by using CLI commands

This section presents an overview of configuring the VNX for block array ports used in VNX VG2/VG8 gateway configurations. Use site-specific parameters for these steps.

Use the VNX for block CLI commands to configure the array ports for a VNX VG2/VG8 gateway configuration.

Best practices for VNX for block and VNX VG2/VG8 gateway configurations

Configure the discovery addresses (IP addresses) and enabled targets for each Linux server so that all the iSCSI target ports on the system are equally balanced to achieve maximum performance and availability. Balancing the load across all ports enables speeds up to 4 x 10 Gb/s per storage processor. If one of the iSCSI target ports fails, the other three will remain operational. One-fourth of the Linux servers will fail over to the native NFS or CIFS protocol, but three-fourths of those servers will continue operating at higher speeds attainable through iSCSI.

VNX for block discovery sessions reveal paths to all iSCSI ports on each storage processor. The ports are described to the iSCSI initiators as individual targets. Each of these connections creates another session. The maximum number of initiator sessions or hosts per storage processor is dependent on the VNX for block configuration. To increase the number of achievable Linux servers for a VNX VG2/VG8 gateway configuration, disable access on each of the servers to as many as three out of four iSCSI targets per storage processor. Ensure that the enabled iSCSI targets (VNX for block iSCSI ports) match the storage group definition.

For VNX VG2/VG8 gateway configurations, Access Logix LUN masking is used to present all VNX for file managed LUNs to the Linux servers. LUNs that are not VNX for file LUNs are protected from the iSCSI initiators. A separate storage group is created for the MPFS initiators, and all VNX for file LUNs that are not control LUNs are added to this group. Enable at least one port from each SP for each Linux server in this separate storage group.


In a VNX VG2/VG8 gateway environment, iSCSI initiator names are used in providing the path in the storage group for the Linux server to access the iSCSI targets. Unique, known iSCSI names are required for using Access Logix software.
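For example, the IQN of a Linux server can be read directly from its initiator configuration file, typically /etc/initiatorname.iscsi on RHEL 4 or /etc/iscsi/initiatorname.iscsi on open-iSCSI distributions (the server name shown is a placeholder taken from the examples later in this chapter):

# cat /etc/initiatorname.iscsi
InitiatorName=iqn.2006-06.com.emc.mpfs:mpfsclient01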

Configuring the SAN switch and storage

This section describes how to configure the SAN switch and provides configuration information for VNX for block and Symmetrix systems.

Installing the FC switch (FC configuration)

To set up the FC switch:

1. Install the FC switch (if not already installed).

2. Connect cables from each HBA FC port to a switch port.

3. Verify the HBA connection to the switch by checking the LEDs for the switch port that is connected to the HBA port.

Note: Configure zoning as single initiator, which means that each HBA port will have its own zone. Each zone has only one HBA port.

Zoning the SAN switch (FC configuration)

To configure and zone the FC switch:

1. Record all attached port WWNs.

2. Create a zone for each FC HBA port and its associated FC Target.

Note: Configure the VNX for block so that each target is zoned to an SP A and SP B port. Configure the Symmetrix system so that it is zoned to a single FC Director or FC Adapter (FA).


Creating a security file

A VNX for block does not accept a secure CLI command unless the user who issues the command has a valid user account. Configure a Navisphere 6.X security file to issue secure CLI commands from the server. Without a security file, each secure CLI command must include the user credentials (or respond to the password prompt) on the command line; with a security file, the credentials are not needed on the command line.

To create a security file:

1. Log in to the Control Station as NAS administrator.

2. Create a security file by using this naviseccli command syntax:

$ /nas/sbin/naviseccli -h <hostname:IP address> -AddUserSecurity -scope 0 -user nasadmin -password nasadmin

where:
<hostname:IP address> = name of the VNX for file or IP address of the VNX for block

For example, type:

$ /nas/sbin/naviseccli -h 172.24.107.242 -AddUserSecurity -scope 0 -user nasadmin -password nasadmin

Output:

This command produces no system response. When the command has finished executing, only the command line prompt appears.

3. Verify that the security file was created correctly by using the command syntax:

$ /nas/sbin/naviseccli -h <hostname:IP address> getagent

where:
<hostname:IP address> = name of the VNX for file or IP address of the VNX for block

If the security file was not created correctly or cannot be found, an error message is displayed:

Security file not found. Already removed or check -secfilepath option.


4. If an error message is displayed, repeat step 2 and step 3 to create the security file.

Configuring the VNX for block iSCSI ports

This section describes how to set up the VNX for block in an iSCSI configuration:

Note: The IP addresses of all systems <hostname:IP address> are located in the /etc/hosts file on the Control Station. If multiple arrays are used, EMC recommends registering them in the /etc/hosts file.
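For example (a hypothetical entry; the host name and address are placeholders), an /etc/hosts entry on the Control Station might look like this, after which the registered name can be used in place of the IP address in the -h option:

172.24.107.242   cx_spa_mgmt     # management address of the VNX for block SP A (placeholder)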

1. Configure iSCSI target hostname SP A and port IP address 0 on the system by using this naviseccli command syntax:

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp a -portid 0 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

where:
<hostname:IP address> = name of the VNX for file or IP address of the VNX for block.
<port IP address> = IP address of a named logical element mapped to a port on a Data Mover. Each interface assigns an IP address to the port.
<subnet mask> = 32-bit address mask used in IP to identify the bits of an IP address used for the subnet address.
<gateway IP address> = IP address of the machine through which network traffic is routed.

For example, type:

$ /nas/sbin/naviseccli -h 172.24.107.242 connection -setport -sp a -portid 0 -address 172.241.107.1 -subnetmask 255.255.255.0 -gateway 172.241.107.2

Output:

It is recommended that you consult with your Network Manager to determine the correct settings before applying these changes. Changing the port properties may disrupt iSCSI traffic to all ports on this SP. Initiator configuration changes may be necessary to regain connections. Do you really want to perform this action (y/n)? y


SP: A
Port ID: 0
Port WWN: iqn.1992-04.com.emc:cx.apm00065101342.a0
iSCSI Alias: 2147.a0
IP Address: 172.24.107.242
Subnet Mask: 255.255.255.0
Gateway Address: 172.241.107.2
Initiator Authentication: false

Note: If the iSCSI target is not configured (by replying with n), the command line prompt appears.

2. Continue for SP A ports 1–3 and SP B ports 0–3 by using the command syntax:

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp a -portid 1 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp a -portid 2 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp a -portid 3 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp b -portid 0 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp b -portid 1 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp b -portid 2 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>

$ /nas/sbin/naviseccli -h <hostname:IP address> connection -setport -sp b -portid 3 -address <port IP address> -subnetmask <subnet mask> -gateway <gateway IP address>


The outputs for SP A ports 1–3 and SP B ports 0–3 are the same as SP A port 0 with specific port information for each port.

Note: Depending on the system configuration, additional storage processors (SP C, SP D, and so on) each containing ports 0–3 can exist.

Configuring Access Logix

This section describes how to set up an Access Logix configuration, create storage groups, add LUNs, set failovermode, and set the arraycommpath for the MPFS client.

Setting failovermode and the arraycommpath

The naviseccli failovermode command enables or disables the type of trespass needed for the failover software for the MPFS client. This method of setting failovermode works for VNX for block with Access Logix only.

The naviseccli arraycommpath command enables or disables a communication path from the VNX for file to the VNX for block. This command is needed to configure a VNX for block when LUN 0 is not configured. This method of setting arraycommpath works for VNX for block with Access Logix only.

CAUTION! Changing the failovermode setting may force the VNX for block to reboot. Changing the failovermode to the wrong value makes the storage group inaccessible to any connected server.

Note: Failovermode and arraycommpath should both be set to 1 for MPFS. If EMC PowerPath is enabled, failovermode must be set to 1.

To set and verify failovermode and arraycommpath settings:

1. Set failovermode to 1 (VNX for file only) by using this naviseccli command syntax:

$ /nas/sbin/naviseccli -h <hostname:IP address> -scope 0 -user nasadmin -password nasadmin failovermode 1

where:
<hostname:IP address> = name of the VNX for file or IP address of the VNX for block

For example, type:


$ /nas/sbin/naviseccli -h 172.24.107.242 -scope 0 -user nasadmin -password nasadmin failovermode 1

Output:

WARNING: Previous Failovermode setting will be lost!
DO YOU WISH TO CONTINUE (y/n)? y

Note: Setting or not setting failovermode produces no system response. The system just displays the command line prompt.

2. Verify the failovermode setting (VNX for file only) by using this naviseccli command syntax:

$ /nas/sbin/naviseccli -h <hostname:IP address> -scope 0 -user nasadmin -password nasadmin failovermode

For example, type:

$ /nas/sbin/naviseccli -h 172.24.107.242 -scope 0 -user nasadmin -password nasadmin failovermode

Output:

Current failovermode setting is: 1

3. Set arraycommpath to 1 (VNX for file only) by using this naviseccli command syntax:

$ /nas/sbin/naviseccli -h <hostname:IP address> -scope 0 -user nasadmin -password nasadmin arraycommpath 1

where:
<hostname:IP address> = name of the VNX for file or IP address of the VNX for block

For example, type:

$ /nas/sbin/naviseccli -h 172.24.107.242 -scope 0 -user nasadmin -password nasadmin arraycommpath 1

Output:

WARNING: Previous arraycommpath setting will be lost!
DO YOU WISH TO CONTINUE (y/n)? y

Note: Setting or not setting arraycommpath produces no system response. The system just displays the command line prompt.


4. Verify the arraycommpath setting (VNX for file only) by using this naviseccli command syntax:

$ /nas/sbin/naviseccli -h <hostname:IP address> -scope 0 -user nasadmin -password nasadmin arraycommpath

For example, type:

$ /nas/sbin/naviseccli -h 172.24.107.242 -scope 0 -user nasadmin -password nasadmin arraycommpath

Output:

Current arraycommpath setting is: 1

To discover the current settings of failovermode or the arraycommpath, also use the port -list -failovermode or port -list -arraycommpath commands.

Note: The outputs of these commands provide more detail than just the failovermode and arraycommpath settings and may be multiple pages in length.
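For example, to display the current failovermode settings for the registered initiator records, type:

$ /nas/sbin/naviseccli -h 172.24.107.242 port -list -failovermode

The same command with -arraycommpath reports the arraycommpath settings; as noted, both outputs are verbose and may span multiple pages.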

Creating storage groups and adding LUNs

This section describes how to create storage groups, add LUNs to the storage groups, and configure the storage groups for the MPFS client.

The IP addresses of all systems <hostname:IP address> are located in the /etc/hosts file on the Control Station. If multiple arrays are used, EMC recommends registering them in the /etc/hosts file:

Note: Specify the hostname as the name of the VNX for file, for example server_2.

1. Create a storage group by using this navicli command syntax:

$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -create -gname MPFS_Clients

where:
<hostname:IP address> = name or IP address of the VNX for file

For example, type:

$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -create -gname MPFS_Clients


Output:

This command produces no system response. When the command has finished executing, only the command line prompt appears.

2. Add LUNs to the storage group by using this navicli command syntax:

$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -addhlu -gname MPFS_Clients -hlu 0 -alu 16

where:
<hostname:IP address> = name or IP address of the VNX for file

For example, type:

$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -addhlu -gname MPFS_Clients -hlu 0 -alu 16

Output:

This command produces no system response. When the command has finished executing, only the command line prompt appears.

3. Continue adding the remaining LUNs to the storage group by using this command syntax:

$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -addhlu -gname MPFS_Clients -hlu 1 -alu 17

where:
<hostname:IP address> = name or IP address of the VNX for file

For example, type:

$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -addhlu -gname MPFS_Clients -hlu 1 -alu 17

Output:

This command produces no system response. When the command has finished executing, only the command line prompt appears.


Configuring and accessing storage

This section describes how to install the FC driver, add hosts to storage groups, configure the iSCSI driver, and add initiators to the storage group.

The arraycommpath and failovermode settings allow both the active and passive paths to be seen concurrently. During a LUN failover, a LUN can be trespassed from the active path to the passive path, or from the passive path to the active path. Use the arraycommpath and failovermode settings as described in Table 2 on page 67.

Any MPFS server that is connected and logged in to a storage group should have arraycommpath and failovermode set to 1. For any VNX for file port connected to a storage group, these settings are 0. The settings are applied on an individual server/port basis and override the system's global default of 0.

When using the VNX for block in an MPFS over iSCSI on VNX VG2/VG8 gateway configuration, the iSCSI initiator name, or IQN, is used to define the server, not a WWN.

Installing the FC driver (FC configuration)

Install the FC driver on the Linux server. The latest driver and qualification information is available on the Fibre Channel manufacturer’s website, the EMC E-Lab Interoperability Navigator, or the documentation provided with the Fibre Channel driver.
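For example (a hedged check, assuming the driver is already installed and the module name lpfc corresponds to an Emulex adapter; substitute the module name for your HBA), confirm the driver version and port state before proceeding:

# modinfo lpfc | grep -i ^version           (installed driver version, to compare against the E-Lab listings)
# cat /sys/class/fc_host/host*/port_state   (each port should report Online once cabled and zoned)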

Table 2 Arraycommpath and failovermode settings for storage groups

                          Default   VNX for file ports   MPFS clients
Access Logix units
  arraycommpath           0         0                    1
  failovermode            0         0                    1
VNX and VNX for block
  arraycommpath           0         n/a                  n/a
  failovermode            1         n/a                  n/a


Adding hosts to the storage group (FC configuration)

To view hosts in the storage group and add hosts to the storage group for SP A and SP B for the MPFS client:

1. List the hosts in the storage group by using this navicli command syntax:

$ /nas/sbin/navicli -h <hostname:IP address> port -list |grep "HBA UID:"

where:
<hostname:IP address> = name or IP address of the VNX for file

For example, type:

$ /nas/sbin/navicli -h 172.24.107.242 port -list |grep "HBA UID:"

Output:

HBA UID: 20:01:00:1B:32:20:D1:3A:21:01:00:1B:32:20:D1:3A
HBA UID: 20:00:00:1B:32:00:D1:3A:21:00:00:1B:32:00:D1:3A
HBA UID: 20:01:00:1B:32:20:B5:35:21:01:00:1B:32:20:B5:35
HBA UID: 20:00:00:1B:32:00:B5:35:21:00:00:1B:32:00:B5:35

2. Add hosts to the storage group by using this navicli command syntax:

$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -setpath -gname <gname> -hbauid <hbauid> -sp <sp> -spport <spport> -failovermode <failovermode> -arraycommpath <arraycommpath>

where:

<hostname:IP address> = name or IP address of the VNX for file

<gname> = storage group name
<hbauid> = WWN of the proxy initiator
<sp> = storage processor
<spport> = port on the SP
<failovermode> = enables or disables the type of trespass needed for failover software (1 = enable, 0 = disable)

<arraycommpath> = creates or removes a communication path between the server and the VNX for block (1 = enable, 0 = disable)


Examples of adding hosts to storage groups are shown in step 3 and step 4.

3. Add hosts to the storage group for SP A:

$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -setpath -gname MPFS_Clients -hbauid 20:0a:00:0d:ec:01:53:82:20:09:00:0d:ec:01:53:82 -sp a -spport 0 -failovermode 1 -arraycommpath 1

Note: The IP addresses of all systems <hostname:IP address> are located in the /etc/hosts file on the Control Station. If multiple arrays are used, EMC recommends registering them in the /etc/hosts file.

Output:

The recommended configuration is to have all HBAs on one host mapped to the same storage group.

Set Path to storage group MPFS_Clients (y/n)? y

WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect.

Do you wish to continue (y/n)? y

Note: This command produces no system response. When the command has finished executing, only the command line prompt appears.

4. Add hosts to the storage group for SP B:

$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -setpath -gname MPFS_Clients -hbauid 20:0a:00:0d:ec:01:53:82:20:09:00:0d:ec:01:53:82 -sp b -spport 0 -failovermode 1 -arraycommpath 1

Output:

The recommended configuration is to have all HBAs on one host mapped to the same storage group.

Set Path to storage group MPFS_Clients (y/n)? y


WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect.

Do you wish to continue (y/n)? y

Note: This command produces no system response. When the command has finished executing, only the command line prompt appears.

Configuring the iSCSI driver for RHEL 4 (iSCSI configuration)

After installing the Linux server for iSCSI configurations, configure the iSCSI driver for RHEL 4 on the Linux server:

1. Edit the /etc/iscsi.conf file on the Linux server.

2. Edit the /etc/initiatorname.iscsi file on the Linux server.

3. Start the iSCSI service daemon.

Note: “Configuring the iSCSI driver for RHEL 5-6, SLES 10-11, and CentOS 5-6 (iSCSI configuration)” on page 73 provides information about RHEL 5, RHEL 6, SLES 10, SLES 11, CentOS 5, and CentOS 6.

Edit the iscsi.conf file

Using vi, or another standard text editor that does not add carriage return characters, edit the /etc/iscsi.conf file. Modify the file so that the iSCSI parameters shown in Table 3 on page 70 have their comments removed and have the required values listed in the table.

Global parameters must be listed before any DiscoveryAddress entry, must start in column 1, and must not have any white space in front of them. Each DiscoveryAddress entry must also start in column 1 with no leading white space, and must appear after all global parameters. Be sure to read the iscsi.conf man page carefully.

Table 3 iSCSI parameters for RHEL 4 using 2.6 kernels

iSCSI parameter     Required value
HeaderDigest        Never
DataDigest          Never
ConnFailTimeout     45
InitialR2T          Yes
PingTimeout         45
ImmediateData       No
DiscoveryAddress    IP address of the iSCSI LAN port on the IP-SAN switch


Note: The discovery address is the IP address of the IP-SAN iSCSI LAN port. This address is an example of using an internal IP. The actual switch IP address will be different.

For VNX for block configurations, the target name is the IQN of the VNX for block array ports. Run this command from the Control Station to get the IQNs of the target ports:

$ /nas/sbin/navicli -h <hostname:IP address> port -list -all |grep "SP UID:"

SP UID: 50:06:01:60:C1:E0:05:9F:50:06:01:60:41:E0:05:9F
SP UID: 50:06:01:60:C1:E0:05:9F:50:06:01:61:41:E0:05:9F
SP UID: 50:06:01:60:C1:E0:05:9F:50:06:01:68:41:E0:05:9F
SP UID: 50:06:01:60:C1:E0:05:9F:50:06:01:69:41:E0:05:9F
SP UID: iqn.1992-04.com.emc:cx.hk192201067.a2
SP UID: iqn.1992-04.com.emc:cx.hk192201067.a3
SP UID: iqn.1992-04.com.emc:cx.hk192201067.a0
SP UID: iqn.1992-04.com.emc:cx.hk192201067.a1
SP UID: iqn.1992-04.com.emc:cx.hk192201067.b2
SP UID: iqn.1992-04.com.emc:cx.hk192201067.b3
SP UID: iqn.1992-04.com.emc:cx.hk192201067.b0
SP UID: iqn.1992-04.com.emc:cx.hk192201067.b1

An example of iscsi.conf parameters for a VNX VG2/VG8 gateway configuration follows (two discovery addresses are shown as there are two zones):

Enabled=no
TargetName=iqn.1992-04.com.emc:cx.hk192201067.a0
TargetName=iqn.1992-04.com.emc:cx.hk192201067.a1
TargetName=iqn.1992-04.com.emc:cx.hk192201067.a2
TargetName=iqn.1992-04.com.emc:cx.hk192201067.a3
TargetName=iqn.1992-04.com.emc:cx.hk192201067.b0
TargetName=iqn.1992-04.com.emc:cx.hk192201067.b1
TargetName=iqn.1992-04.com.emc:cx.hk192201067.b2
TargetName=iqn.1992-04.com.emc:cx.hk192201067.b3



DiscoveryAddress=45.246.0.41
TargetName=iqn.1992-04.com.emc:cx.hk192201067.a1
 Continuous=no
 Enabled=yes
 LUNs=16,17,18,19,20,21,22,23,24,25
DiscoveryAddress=45.246.0.45
TargetName=iqn.1992-04.com.emc:cx.hk192201067.b1
 Continuous=no
 Enabled=yes
 LUNs=16,17,18,19,20,21,22,23,24,25

Editing the initiatorname.iscsi file

The Linux iSCSI software automatically generates an initiator name for the Linux server. However, initiator names must be unique, and automatic initiator name generation may not always produce a unique name.

iSCSI names are generalized by using a normalized character set (converted to lower case or equivalent), with no white space allowed, and very limited punctuation. For those using only ASCII characters (U+0000 to U+007F), these characters are allowed:

◆ ASCII dash character ('-' = U+002d)

◆ ASCII dot character ('.' = U+002e)

◆ ASCII colon character (':' = U+003a)

◆ ASCII lower-case characters ('a'..'z' = U+0061..U+007a)

◆ ASCII digit characters ('0'..'9' = U+0030..U+0039)

In addition, any upper-case characters input by using a user interface MUST be mapped to their lower-case equivalents. RFC 3722, http://www.ietf.org/rfc/rfc3722.txt, provides more information.

To generate a unique initiatorname:

1. View the current /etc/initiatorname.iscsi file:

$ more /etc/initiatorname.iscsi
GenerateName=yes

2. Using vi, or another standard text editor that does not add carriage return characters, edit the /etc/initiatorname.iscsi file and comment out the line containing GenerateName=yes.

Example of commented-out line:

#GenerateName=yes

Note: Do not exit the file until step 3 is completed.


3. Place the unique IQN name, iSCSI qualified name, in the /etc/initiatorname.iscsi file:

#GenerateName=yes
InitiatorName=iqn.2006-06.com.emc.mpfs:<xxxxxxx>

where:

<xxxxxxx> = Server name

In this example, mpfsclient01 is used as the Linux server name:

#GenerateName=yes
InitiatorName=iqn.2006-06.com.emc.mpfs:mpfsclient01

4. Save and exit the editor.

Note: If nodes exist on the switch, issue the show iscsi initiator command to show the IQN name. Care must be taken to not use duplicate IQN names (InitiatorName).

Starting iSCSI

To start iSCSI, as root, type this command:

$ /etc/init.d/iscsi start

Output:

Starting iSCSI: iscsi iscsid [ OK ]

Configuring the iSCSI driver for RHEL 5-6, SLES 10-11, and CentOS 5-6 (iSCSI configuration)

After installing the Linux server for iSCSI configurations, follow the procedures below to configure the iSCSI driver for RHEL 5, RHEL 6, SLES 10, SLES 11, CentOS 5, and CentOS 6 on the Linux server.

Installing and configuring RHEL 5-6, SLES 10-11, and CentOS 5-6

To install the Linux Open iSCSI software initiator, consult the README files available within the Linux distribution and the release notes from the distributor.

Note: Complete these steps before continuing to the RHEL 5, RHEL 6, SLES 10, SLES 11, CentOS 5, and CentOS 6 installation subsections. The open-iSCSI persistent configuration is implemented as a DBM database available on all Linux installations.


The database contains two tables:

◆ Discovery table (discovery.db)

◆ Node table (node.db)

The iSCSI database files in RHEL 5, RHEL 6, SLES 10, SLES 11, CentOS 5, and CentOS 6 are located in /var/lib/open-iscsi/. For SLES 10 SP3 and SLES 11 SP1 they will be found in /etc/iscsi/. Use these MPFS recommendations to complete the installation. The recommendations are generic to all distributions unless noted otherwise.

To configure the iSCSI driver for RHEL 5, RHEL 6, SLES 10, SLES 11, CentOS 5, and CentOS 6 on the Linux server:

1. Edit the /etc/iscsi/iscsid.conf file.

There are several variables within the file. The default file from the initial installation is configured to operate with the default settings. The syntax of the file uses a pound (#) symbol to comment out a line in the configuration file. Enable a variable by deleting the pound (#) symbol preceding the variable in the iscsid.conf file. The entire set of variables with the default and optional settings is listed in each distribution’s README file and in the configuration file.

Table 4 on page 74 lists the recommended iSCSI parameter settings.

Table 4 RHEL 5, RHEL 6, SLES 10, SLES 11, CentOS 5, and CentOS 6 iSCSI parameters

Variable name                                  Default setting   MPFS recommended
node.startup                                   manual            auto
node.session.iscsi.InitialR2T                  No                Yes
node.session.iscsi.ImmediateData               Yes               No
node.session.timeo.replacement_timeout         120               60
node.conn[0].timeo.noop_out_interval           10                > 10 on a congested network
node.conn[0].timeo.noop_out_timeout            15                > 15 on a congested network
node.conn[0].iscsi.MaxRecvDataSegmentLength    131072            262144

Comments:

◆ node.session.timeo.replacement_timeout - With multipathing software, this value can be decreased to 30 seconds for a faster failover. However, ensure that this timer remains greater than the node.conn[0].timeo.noop_out_interval and node.conn[0].timeo.noop_out_timeout values combined.

◆ node.conn[0].timeo.noop_out_interval and node.conn[0].timeo.noop_out_timeout - Ensure that these values do not exceed the value of node.session.timeo.replacement_timeout.

◆ node.conn[0].iscsi.MaxRecvDataSegmentLength - Recommended per best practices for previous versions.
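A hedged sketch of the corresponding entries in /etc/iscsi/iscsid.conf after editing follows (value spellings such as automatic follow the open-iSCSI configuration file shipped with these distributions; verify the exact variable names and values against your distribution's README and the table above):

node.startup = automatic
node.session.iscsi.InitialR2T = Yes
node.session.iscsi.ImmediateData = No
node.session.timeo.replacement_timeout = 60
node.conn[0].timeo.noop_out_interval = 10
node.conn[0].timeo.noop_out_timeout = 15
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144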


2. Set the run levels for the iSCSI daemon to automatically start at boot and to shut down when the Linux server is brought down:

• For RHEL 5, RHEL 6, CentOS 5, and CentOS 6:

# chkconfig --level 345 iscsid on
# service iscsi start

For RHEL 5 or RHEL 6, perform a series of eight iscsiadm commands to configure the targets to connect to with open-iSCSI. Consult the man pages for iscsiadm for a detailed explanation of the command and its syntax.

First, discover the targets to connect the server to using iSCSI.

• For SLES 10 and SLES 11:

# chkconfig -s open-iscsi 345
# chkconfig -s open-iscsi on
# /sbin/rcopen-iscsi start

Use the YaST utility on SLES 10 and SLES 11 to configure the iSCSI software initiator. It can be used to discover targets with the iSCSI SendTargets command, add targets to be connected to the server, and start or stop the iSCSI service. Open YaST and select Network Services > iSCSI Initiator. Open the Discovered Targets tab and type the IP address of the target:



• For a VNX for block:

Specify one of the target IP addresses and the array will return all its available targets to select. After discovering the targets, click the Connected Targets tab to log in to the targets to be connected to and select those to be logged in to automatically at boot time. Perform the discovery process on a single IP address and the array will return all its iSCSI configured targets.

• For a Symmetrix system:

Specify each individual target to discover and the array will return the specified targets to select. After discovering the targets, click the Connected Targets tab to log in to the targets to be connected to and select those to be logged in to automatically at boot time. Perform the discovery process on each individual target and the array will return the specified iSCSI configured targets.

Command examples

To discover targets:

# iscsiadm -m(ode) discovery -t(ype) s(end)t(argets) -p(ortal) <port IP address>

output: <node.discovery_address>:3260,1 iqn.2007-06.com.test.cluster1:storage.cluster1

#iscsiadm -m discovery
<node.discovery_address>:3260 via sendtargets
<node.discovery_address>:3260 via sendtargets

#iscsiadm --mode node (rhel5.0)
<node.discovery_address>:3260,13570 iqn.1987-05.com.cisco:05.tomahawk.11-03.5006016941e00f1c
<node.discovery_address>:3260,13570 iqn.1987-05.com.cisco:05.tomahawk.11-03.5006016141e00f1c

#iscsiadm --mode node --targetname iqn.1987-05.com.cisco:05.tomahawk.11-02.5006016141e00f1c

node.name = iqn.1987-05.com.cisco:05.tomahawk.11-02.5006016141e00f1c
node.tpgt = 13569
node.startup = automatic
iface.hwaddress = default
iface.iscsi_ifacename = default
iface.net_ifacename = default
iface.transport_name = tcp
node.discovery_address = 128.221.252.200


node.discovery_port = 3260….

#iscsiadm --mode node (suse10.0)
[2f21ef] <node.discovery_address>:3260,13569 iqn.1987-05.com.cisco:05.tomahawk.11-02.5006016941e019bd
[2f071e] <node.discovery_address>:3260,13569 iqn.1987-05.com.cisco:05.tomahawk.11-02.5006016941e00f1c

#iscsiadm -m node -r 2f21ef

node.name = iqn.1987-05.com.cisco:05.tomahawk.11-02.5006016941e019bd
node.transport_name = tcp
node.tpgt = 13569
node.active_conn = 1
node.startup = automatic
node.session.initial_cmdsn = 0
node.session.auth.authmethod = None

#iscsiadm --mode node --targetname iqn.2007-06.com.test.cluster1:storage.cluster1 --portal <node.discovery_address>:3260 --login

#iscsiadm -m session -i

#iscsiadm --mode node --targetname iqn.2007-06.com.test.cluster1:storage.cluster1 --portal <node.discovery_address>:3260 --logout

To log in to all targets:

# iscsiadm -m node -L all

Login session [45.246.0.45:3260 iqn.1992-04.com.MPFS:cx.hk192201109.b1]

Login session [45.246.0.45:3260 iqn.1992-04.com.MPFS:cx.hk192201109.a1]


Starting/stopping the iSCSI driver for RHEL 5-6, SLES 10-11, and CentOS 5-6 (iSCSI configuration)

Use these commands to start and stop the Open-iSCSI driver.

To manually start and stop the iSCSI driver for RHEL 5, RHEL 6, CentOS 5, and CentOS 6:

# /etc/init.d/iscsid {start|stop|restart|status|condrestart}

To manually start and stop the iSCSI driver for SLES 10 and SLES 11:

# /sbin/rcopen-iscsi {start|stop|status|restart}

If there are problems loading the iSCSI kernel module, diagnostic information will be placed in /var/log/iscsi.log.

The open-iscsi driver is a sysfs class driver, and many of its attributes can be accessed through sysfs. The iscsiadm(8) man page provides information for all administrative functions used for configuration, statistics gathering, target discovery, and so on. The sysfs directories are of the form:

/sys/class/iscsi_<host, session, connection>

Note: Verify that anything that has an iSCSI device open has closed the iSCSI device before shutting down iSCSI. This includes file systems, volume managers, and user applications. If iSCSI devices are open when attempting to stop the driver, the scripts will error out instead of removing those devices. This prevents corrupting the data on iSCSI devices. In this case, iscsid will no longer be running. To continue by using the iSCSI devices, issue /etc/init.d/iscsi start command.
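For example (a minimal pre-shutdown check; the grep pattern is only a convenience and the mount points at your site will differ), confirm what is still using iSCSI devices before stopping the driver:

# iscsiadm -m session        (lists the active iSCSI sessions)
# mount | grep /dev/sd       (shows file systems still mounted on SCSI block devices; unmount these first)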

Limitations and workarounds

Limitations and workarounds are:

◆ The Linux iSCSI driver, which is part of the Linux operating system, does not distinguish between NICs on the same subnet. Therefore, to achieve load balancing and multipath failover, configure each NIC of a Linux server connected to a VNX for block on a different subnet.

◆ The open-iSCSI daemon does not find targets automatically on boot when configured to log in at boot time. The Linux iSCSI Attach Release Notes provide more information.


Adding initiators to the storage group (FC configuration)

In an FC configuration, the storage group should contain the HBA UID of the Linux servers.

The IP addresses of all systems <hostname:IP address> are located in the /etc/hosts file on the Control Station. If multiple arrays are used, EMC recommends registering them in the /etc/hosts file.

To add initiators to the storage group for SP A and SP B for the MPFS client:

1. Use this navicli command to list hosts in the storage group:

$ /nas/sbin/navicli -h <hostname:IP address> port -list |grep "HBA UID:"

For example, type:

$ /nas/sbin/navicli -h 172.24.107.242 port -list |grep "HBA UID:"

Output:

HBA UID: 20:01:00:1B:32:20:D1:3A:21:01:00:1B:32:20:D1:3A
HBA UID: 20:00:00:1B:32:00:D1:3A:21:00:00:1B:32:00:D1:3A
HBA UID: 20:01:00:1B:32:20:B5:35:21:01:00:1B:32:20:B5:35
HBA UID: 20:00:00:1B:32:00:B5:35:21:00:00:1B:32:00:B5:35

2. Add initiators to the storage group by using this navicli command syntax:

$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -setpath -gname <gname> -hbauid <hbauid> -sp <sp> -spport <spport> -failovermode <failovermode> -arraycommpath <arraycommpath>

where:

Note: Perform this command for each SP.

<gname> = storage group name
<hbauid> = HBA UID of the Linux server
<sp> = storage processor
<spport> = port on the SP
<failovermode> = enables or disables the type of trespass needed for failover software (1 = enable, 0 = disable)
<arraycommpath> = creates or removes a communication path between the server and the VNX for block (1 = enable, 0 = disable)


Examples of adding initiators to storage groups are shown in step 3 and step 4.

3. Add initiators to the storage group for SP A:

$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -setpath -gname MPFS_Clients -hbauid 20:01:00:1B:32:20:D1:3A:21:01:00:1B:32:20:D1:3A -sp a -spport 0 -failovermode 1 -arraycommpath 1

Output:

The recommended configuration is to have all HBAs on one host mapped to the same storage group.

Set Path to storage group MPFS_Clients (y/n)? y
WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect.
Do you wish to continue (y/n)? y

Note: This command produces no system response. When the command has finished executing, only the command line prompt appears.

4. Add initiators to the storage group for SP B:

$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -setpath -gname MPFS_Clients -hbauid 20:01:00:1B:32:20:D1:3A:21:01:00:1B:32:20:D1:3A -sp b -spport 0 -failovermode 1 -arraycommpath 1

Output:

The recommended configuration is to have all HBAs on one host mapped to the same storage group.

Set Path to storage group MPFS_Clients (y/n)? y
WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect.
Do you wish to continue (y/n)? y


Note: This command produces no system response. When the command has finished executing, only the command line prompt appears.

Adding initiators to the storage group(iSCSI configuration)

When using the VNX for block in an MPFS over iSCSI on VNX VG2/VG8 gateway configuration, the iSCSI initiator name (IQN) is used to define the host, not a WWN.

The IP addresses of all systems <hostname:IP address> are located in the /etc/hosts file on the Control Station. If multiple arrays are used, EMC recommends registering them in the /etc/hosts file.

To add initiators to the storage group for SP A and SP B for the MPFS client:

1. Find the IQN used to define the host by using this navicli command syntax:

$ /nas/sbin/navicli -h <hostname:IP address> port -list |grep "HBA UID:" |grep iqn

For example, type:

$ /nas/sbin/navicli -h 172.24.107.242 port -list |grep "HBA UID:" |grep iqn

Output:

InitiatorName=iqn.1994-05.com.Red Hat:58c8b0919b31

2. Use this navicli command to add initiators to the storage group by using this command syntax:

$ /nas/sbin/navicli -h <hostname:IP address> storagegroup -setpath -gname <gname> -hbauid <hbauid> -sp <sp> -spport <spport> -failovermode <failovermode> -arraycommpath <arraycommpath>


where:

<gname> = storage group name
<hbauid> = iSCSI initiator name
<sp> = storage processor
<spport> = port on the SP
<failovermode> = enables or disables the type of trespass needed for failover software (1 = enable, 0 = disable)
<arraycommpath> = creates or removes a communication path between the server and the VNX for block (1 = enable, 0 = disable)

Note: Perform this command for each iSCSI proxy-initiator.

Examples of adding initiators to storage groups are shown in step 3 and step 4.

3. Add initiators to the storage group for SP A:

$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -setpath -gname MPFS_Clients -hbauid iqn.1994-05.com.Red Hat:58c8b0919b31 -sp a -spport 0 -failovermode 1 -arraycommpath 1

Output:

The recommended configuration is to have all HBAs on one host mapped to the same storage group.

Set Path to storage group MPFS_Clients (y/n)? y
WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect.
Do you wish to continue (y/n)? y

Note: This command produces no system response. When the command has finished executing, only the command line prompt appears.



4. Add initiators to the storage group for SP B:

$ /nas/sbin/navicli -h 172.24.107.242 storagegroup -setpath -gname MPFS_Clients -hbauid iqn.1994-05.com.Red Hat:58c8b0919b31 -sp b -spport 0 -failovermode 1 -arraycommpath 1

Output:

The recommended configuration is to have all HBAs on one host mapped to the same storage group.

Set Path to storage group MPFS_Clients (y/n)? y
WARNING: Changing configuration options may cause the array to stop functioning correctly. Failover-related Initiator settings for a single host MUST BE CONSISTENT for all paths from the host to the storage system. Please verify after reconnect.
Do you wish to continue (y/n)? y

Note: This command produces no system response. When the command has finished executing, only the command line prompt appears.


Mounting MPFS

A connection between the Linux server and the VNX for file, known as a session, must be completed before mounting MPFS. Establish a session by mounting MPFS on the Linux server.

Note: MPFS can be added to the /etc/fstab file to mount the file system automatically after the server is rebooted or shut down.
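An illustrative /etc/fstab entry (the server name, export path, mount point, and options are examples; any of the -o arguments described below can be used in the options field):

# <movername>:/<FS_export>   <mount_point>   type   options         dump pass
server2:/fs1                 /mnt/mpfs       mpfs   mpfs_keep_nfs   0    0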

To mount MPFS on the Linux server, use the mount command with this syntax:

mount -t mpfs [-o <MPFS_specific_options>] <movername>:/<FS_export> <mount_point>

where:

◆ <MPFS_specific_options> is a comma-separated list (without spaces) of arguments to the -o option that are supported by MPFS; a combined example appears after the option descriptions below. Most arguments to the -o option that are supported by the NFS mount and mount_nfs commands are also supported by MPFS. MPFS also supports these additional arguments:

• -o mpfs_verbose — Executes the mount command in verbose mode. If the mount succeeds, the list of disk signatures used by the MPFS volume is printed on standard output.

• -o mpfs_keep_nfs — If the mount using MPFS fails, the file system is mounted by using NFS. Warning messages inform the user that the MPFS mount failed.

• -o hvl — Specify the volume management type as hierarchical by default if it is supported by the server (-o hvl=1) or as not hierarchical by default (-o hvl=0). Setting this value overrides the default value specified in /etc/sysconfig/EMCmpfs. “Hierarchical volume management” on page 33 describes hierarchical volumes and their management.

The -t option specifies the type of file system (such as MPFS).

Note: The -o hvl option requires NAS software version 5.6 or later.

◆ <movername> is the name of the VNX for file.

◆ <FS_export> is the absolute pathname of the directory that is exported on the VNX for file.


◆ <mount_point> is the absolute pathname of the directory on the Linux server on which to mount MPFS.

Note: To view the man page for the mount command, type man mount_mpfs.
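As noted in the description of <MPFS_specific_options> above, multiple -o arguments are combined into a single comma-separated list without spaces. For example (the hostname, export, and mount point are placeholders):

mount -t mpfs -o mpfs_keep_nfs,hvl=1 <hostname:IP address>:/src /usr/src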

Examples This command mounts MPFS without any MPFS specific options:

mount -t mpfs <hostname:IP address>:/src /usr/src

Output:

This command produces no system response. When the command has finished executing, only the command line prompt appears.

The default behavior of mount -t mpfs is to try to mount the file system by using MPFS. If not all of the disks are available, the mount fails with this error:

$ mount -t mpfs <hostname:IP address>:/src /usr/src -v
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Volume ’APM000643042520000-0008’ not found.
Error mounting /mnt/mpfs via MPFS

This command mounts MPFS and displays a list of disk signatures:

mount -t mpfs -o mpfs_verbose <hostname:IP address>:/src /usr/src

Output:

VNX signature vendor product_id device serial number or path

APM000531007850006-001c EMC SYMMETRIX /dev/sdab path = /dev/sdab(0x41b0) Active
APM000531007850006-001d EMC SYMMETRIX /dev/sdab path = /dev/sdab(0x41b0) Active
APM000531007850006-001e EMC SYMMETRIX /dev/sdac path = /dev/sdac(0x41c0) Active
APM000531007850006-001f EMC SYMMETRIX /dev/sdad path = /dev/sdad(0x41d0) Active
APM000531007850006-0020 EMC SYMMETRIX /dev/sdae path = /dev/sdae(0x41e0) Active
APM000531007850006-0021 EMC SYMMETRIX /dev/sdaf path = /dev/sdaf(0x41f0) Active


If not all of the disks are available, the mount fails with this error:

$ mount -t mpfs -o mpfs_verbose <hostname:IP address>:/src /usr/src -v
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Volume ’APM000643042520000-0008’ not found.
Error mounting /mnt/mpfs via MPFS

This command mounts MPFS. The mpfs_keep_nfs option causes the file system to mount by using NFS if the mount using MPFS fails:

mount -t mpfs -o mpfs_keep_nfs <hostname:IP address>:/src /usr/src

Output:

This command produces no system response. When the command has finished executing, only the command line prompt appears.

With the mpfs_keep_nfs option, the behavior is to try to mount the file system by using MPFS. If not all of the disks are available, the mount defaults to NFS:

$ mount -t mpfs <hostname:IP address>:/rcfs /mnt/mpfs -v -o mpfs_keep_nfs
<hostname:IP address>:/rcfs on /mnt/mpfs type mpfs (rw,addr=<hostname:IP address>)
<hostname:IP address>:/rcfs using disks
No disks found, ignore and work through NFS now!
It will failback to MPFS automatically when the disks are OK.

This command specifies the volume management type as hierarchical volume management:

mount -t mpfs -o hvl <hostname:IP address>:/src /usr/src

Output:

This command produces no system response. When the command has finished executing, only the command line prompt appears.


If not all of the disks are available, the mount fails with this error:

$ mount -t mpfs -o hvl <hostname:IP address>:/src /usr/src -v

Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Requested volume not found. Attempting re-discovery...
Volume ’APM000643042520000-0008’ not found.
Error mounting /mnt/mpfs via MPFS

With this behavior, I/O is never retried through the SAN; for all intents and purposes, the behavior is as if the user had typed mount -t nfs. Use this option for mounts that are done automatically and to ensure that the volume is mounted with or without MPFS.


Unmounting MPFS

To unmount the MPFS file system on the Linux server, use the umount command with this syntax:

umount -t mpfs [-a] <mount_point>

where:

◆ -t is the type of file system (such as MPFS).

◆ -a specifies to unmount all mounted MPFS file systems.

◆ <mount_point> is the absolute pathname of the directory on the Linux server on which to unmount the MPFS file system.

Example This command unmounts MPFS:

umount -t mpfs -a

To unmount a specific file system, type either of these commands:

umount -t mpfs /mnt/fs1

or

umount /mnt/fs1

If a file system cannot be unmounted or is not mounted, the umount command displays this error message:

Error unmounting /mnt/fs1/mpfs via MPFS

If a file system cannot be unmounted as it is in use, the umount command displays this error message:

umount: device busy

Note: These commands produce no system response. When the commands have finished executing, only the command line prompt appears.


3 Installing, Upgrading, or Uninstalling VNX MPFS Software

This chapter describes how to install, upgrade, and uninstall the EMC VNX MPFS software.

Topics include:

◆ Installing the MPFS software ........................................................... 90
◆ Upgrading the MPFS software ......................................................... 95
◆ Uninstalling the MPFS software ..................................................... 100


Installing the MPFS software

This section describes the requirements that must be met before installation and the two methods used to install the MPFS software:

◆ Install the MPFS software from a tar file

◆ Install the MPFS software from a CD

Before installing

Before installing the MPFS software, read the prerequisites for the Linux server and VNX for block listed in this section:

❑ Verify that the Linux server on which the MPFS software will be installed meets the MPFS configuration requirements specified in the EMC VNX MPFS for Linux Clients Release Notes.

❑ Ensure that the Linux server has a network connection to the Data Mover on which the MPFS software resides and that the Data Mover can be contacted.

❑ Ensure that the Linux server meets the overall system and other configuration requirements specified in the E-Lab Interoperability Navigator.

Installing the MPFS software from a tar file

To install the MPFS software from a compressed tar file, download the file from the EMC Online Support website. Then, uncompress and extract the tar file on the Linux server and execute the install-mpfs script.

Note: The uncompressed tar file needs approximately 17 MB and the installation RPM file needs approximately 5 MB of disk space.

Note: Unless noted as an output, when the commands in the procedures have finished executing, only the command line prompt is returned.

To download, uncompress, extract, and install the MPFS software from the compressed tar file:

1. Create the directory /tmp/temp_mpfs if it does not already exist.

2. Locate the compressed tar file on the EMC Online Support website.


Depending on the specific MPFS software release and version, the filename will appear as:

EMCmpfs.linux.6.0.x.x.tar.Z

3. Download the compressed tar file from the EMC Online Support website to the directory created in step 1.

4. Change to the /tmp/temp_mpfs directory:

cd /tmp/temp_mpfs

5. Uncompress the tar file by using this command syntax:

uncompress <filename>

where <filename> is the name of the tar file. For example, type:

uncompress EMCmpfs.linux.6.0.x.x.tar.Z

6. Extract the tar file by using this command syntax:

tar -zxvf <filename>

where <filename> is the name of the tar file. For example, type:

tar -zxvf EMCmpfs.linux.6.0.x.x.tar.Z

7. Go to the linux directory created by the previous step:

cd /tmp/temp_mpfs/linux

8. Install the MPFS software:

$ ./install-mpfs

Output:

Installing ./EMCmpfs-6.0.2.x-i686.rpm on localhost
[ Step 1 ] Checking installed MPFS package ...
[ Step 2 ] Installing MPFS package ...
Preparing...  ########################################### [100%]
   1:EMCmpfs  ########################################### [100%]
Loading EMC MPFS Disk Protection [ OK ]
Protecting EMC VNX disks [ OK ]
Loading EMC MPFS [ OK ]
Starting MPFS daemon [ OK ]
Discover MPFS devices [ OK ]
Starting MPFS perf daemon [ OK ]
[ Done ]

9. Follow the instructions in “Post-installation checking” on page 98.


Installing the MPFS software from a CD

To install the MPFS software from the EMC MPFS for Linux Client Software CD, mount the CD, view the architecture subdirectories, find the architecture being used, select the desired architecture subdirectory, and execute the install-mpfs script:

Note: Unless noted as an output, when the commands in the procedures have finished executing, only the command line prompt is returned.

1. Insert the CD into the CD drive.

2. Mount the CD in the /mnt directory:

$ mount /dev/cdrom /mnt

Output:

mount: block device /dev/cdrom is write-protected, mounting read-only

3. Go to the /mnt directory:

$ cd /mnt

4. View the contents of the CD:

$ ls -lt

Output:

dr-xr-xr-x 2 root root 2048 Jul 31 14:46 Packages
-r--r--r-- 1 root root 694 Jul 31 14:46 README.txt
-r--r--r-- 1 root root 442 Jul 31 14:46 TRANS.TBL

5. Go to the Packages directory:

$ cd Packages

6. View the architecture subdirectories:

$ ls -lt

Output:

-r--r--r-- 1 root root 42164 Jul 31 14:46 EMCmpfs-6.0.2.x-ia32e.rpm
-r--r--r-- 1 root root 3381317 Jul 31 14:46 EMCmpfs-6.0.2.x-ia64.rpm
-r--r--r-- 1 root root 3955356 Jul 31 14:46 EMCmpfs-6.0.2.x-x86_64.rpm
-r-xr-xr-x 1 root root 11711 Jul 31 14:46 install-mpfs
-r--r--r-- 1 root root 1175 Jul 31 14:46 TRANS.TBL
-r--r--r-- 1 root root 4807898 Jul 31 14:46 EMCmpfs-6.0.2.x-i686.rpm


7. Install the MPFS software:

$ ./install-mpfs

Output:

Installing ./EMCmpfs-6.0.2.x-i686.rpm on localhost
[ Step 1 ] Checking installed MPFS package ...
[ Step 2 ] Installing MPFS package ...
Preparing...  ########################################### [100%]
   1:EMCmpfs  ########################################### [100%]
Loading EMC MPFS Disk Protection [ OK ]
Protecting EMC VNX disks [ OK ]
Loading EMC MPFS [ OK ]
Starting MPFS daemon [ OK ]
Discover MPFS devices [ OK ]
Starting MPFS perf daemon [ OK ]
[ Done ]

8. Follow the instructions in “Post-installation checking” on page 98.

Post-installation checking

After installing the MPFS software:

1. Verify that the MPFS software is installed properly and the MPFS daemon (mpfsd) has started as described in “Verifying the MPFS software upgrade” on page 99.

2. Start the MPFS software by mounting an MPFS file system as described in “Mounting MPFS” on page 84.

If the MPFS software does not run, Appendix B, Error Messages and Troubleshooting provides information on troubleshooting the MPFS software.


Operating MPFS through a firewall

For proper MPFS operation, the Linux server and VNX for file (a Data Mover) must communicate with each other on their File Mapping Protocol (FMP) ports.

If a firewall resides between the Linux server and the VNX for file, the firewall must allow access to the ports listed in Table 5 on page 94 for the Linux server.

Table 5 Linux server firewall ports

Linux server                Linux server port/use         VNX for file port/use
Linux O/S                   6907 - FMP notify protocol    4656 (a) - FMP
(RHEL, SLES, CentOS)                                      2049 (a) - NFS
                                                          1234 - mountd
                                                          111 - portmap/rpcbind

a. Both ports 2049 and 4656 must be open to run the FMP service.
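If the firewall on the Linux server is managed with iptables, rules along these lines would allow the required traffic (a sketch only; chain names, interfaces, and overall policy are assumptions and must be adapted to the local firewall configuration):

# Allow the VNX for file to reach the FMP notify port on the Linux server
iptables -A INPUT -p tcp --dport 6907 -j ACCEPT
# Allow the Linux server to reach the FMP, NFS, mountd, and portmap/rpcbind
# ports on the VNX for file
iptables -A OUTPUT -p tcp --dport 4656 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 2049 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 1234 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 111 -j ACCEPT
iptables -A OUTPUT -p udp --dport 111 -j ACCEPT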


Upgrading the MPFS software

Use this procedure to upgrade the MPFS software.

Upgrading the MPFS software

Upgrade the existing MPFS software by using the install-mpfs script.

The install-mpfs script can store information about the MPFS configuration, unmount MPFS, and restore the configuration after an upgrade.

The command syntax for the install-mpfs script is:

install-mpfs [-s] [-r]

where:

-s = silent mode, which unmounts MPFS and upgrades the RPM without prompting the user.
-r = restores configurations by backing up the current MPFS configurations and restoring them after the upgrade.

The install script will automatically issue rpm -e EMCmpfs to remove the existing MPFS software.

Note: By default, the MPFS configuration files are not backed up and restored.
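For example, to upgrade without prompts while preserving the current MPFS configuration, one common invocation might be (a sketch; run from the directory that contains the new package):

$ ./install-mpfs -s -r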


To upgrade the MPFS software on a Linux server that has an earlier version of MPFS software installed:

1. Type:

$ ./install-mpfs

Output:

Installing ./EMCmpfs-6.0.2.x-i686.rpm on localhost
[ Step 1 ] Checking installed MPFS package ...
Warning: Package EMCmpfs-6.0.1-0 has already been installed.
Do you want to upgrade to new package? [yes/no]
yes
[ Step 2 ] Checking mounted mpfs file system ...
Fine, no mpfs file system is mounted. Install process will continue.
[ Step 3 ] Upgrading MPFS package ...
Preparing...  ########################################### [100%]
   1:EMCmpfs  ########################################### [100%]
Unloading old version modules...
unprotect
Loading EMC MPFS Disk Protection [ OK ]
Protecting EMC VNX disks [ OK ]
Loading EMC MPFS [ OK ]
Starting MPFS daemon [ OK ]
Discover MPFS devices [ OK ]
Starting MPFS perf daemon [ OK ]
[ Done ]

2. The installation is complete. Follow the instructions in “Post-installation checking” on page 98.


Upgrading the MPFS software with MPFS mounted

The install-mpfs script can store information about the MPFS configuration, unmount MPFS, and restore the configuration after an upgrade.

The command syntax for the install-mpfs script is:

install-mpfs [-s] [-r]

where:

-s = silent mode, which unmounts MPFS and upgrades the RPM without prompting the user.
-r = restores configurations by backing up the current MPFS configurations and restoring them after the upgrade.

The install-mpfs script will attempt to unmount MPFS after prompting the user to proceed.

Note: By default, the MPFS configuration files are not backed up and restored.

To install the MPFS software on a Linux server that has an earlier version of MPFS software installed and MPFS mounted:

1. Type:

$ ./install-mpfs

Output:

Installing ./EMCmpfs-6.0.2.x-i686.rpm on localhost
[ Step 1 ] Checking installed MPFS package ...
Warning: Package EMCmpfs-6.0.1-0 has already been installed.
Do you want to upgrade to new package? [yes/no]
yes
[ Step 2 ] Checking mounted mpfs file system ...
The following mpfs file system are mounted:
/mnt
Do you want installation to umount these file system automatically? [yes/no]
yes


Unmounting MPFS filesystems...
Successfully umount all mpfs file system.
[ Step 3 ] Upgrading MPFS package ...
Preparing...  ########################################### [100%]
   1:EMCmpfs  ########################################### [100%]
Unloading old version modules...
unprotect
Loading EMC MPFS Disk Protection [ OK ]
Protecting EMC VNX disks [ OK ]
Loading EMC MPFS [ OK ]
Starting MPFS daemon [ OK ]
Discover MPFS devices [ OK ]
Starting MPFS perf daemon [ OK ]
[ Done ]

2. The installation is complete. Follow the instructions in “Post-installation checking” on page 98.

Post-installation checking

After upgrading the MPFS software:

1. Verify that the MPFS software is upgraded properly and the MPFS daemon (mpfsd) has started as described in “Verifying the MPFS software upgrade” on page 99.

2. Start the MPFS software by mounting MPFS as described in “Mounting MPFS” on page 84.


Verifying the MPFS software upgrade

To verify that the MPFS software is upgraded and that the MPFS daemon is started:

1. Use RPM to verify the MPFS software upgrade:

rpm -q EMCmpfs

If the MPFS software is upgraded properly, the command displays an output such as:

EMCmpfs-6.0.x-x

Note: Alternatively, use the mpfsctl version command to verify the MPFS software is upgraded. The mpfsctl man page or “Using the mpfsctl utility” on page 107 provides additional information.

2. Use the ps command to verify that the MPFS daemon has started:

ps -ef |grep mpfsd

The output will look like this if the MPFS daemon has started:

root 847 1 0 15:19 ? 00:00:00 /usr/sbin/mpfsd

3. If the ps command output does not show the MPFS daemon process is running, as root, start MPFS software by using this command:

$ /etc/rc.d/init.d/mpfs start


Uninstalling the MPFS software

To uninstall the MPFS software from a Linux server:

1. To uninstall the MPFS software:

$ rpm -e EMCmpfs

If the MPFS software was uninstalled correctly, this message appears on the screen:

Unloading EMCmpfs module...
[root@###14583 root]#

2. If the MPFS software was not uninstalled due to MPFS being mounted, this error message appears:

[root@###14583 root]# rpm -e EMCmpfs
ERROR: Mounted mpfs filesystems found.
Please unmount all mpfs filesystems before uninstalling the product.
error: %preun(EMCmpfs-6.0.2-x) scriptlet failed, exit status 1

3. Unmount MPFS. Follow the instructions in “Unmounting MPFS” on page 88.

4. Repeat step 1.


4 EMC VNX MPFS Command Line Interface

This chapter discusses the EMC VNX MPFS commands, parameters, and procedures used to manage and fine-tune a Linux server. Topics include:

◆ Using HighRoad disk protection ................................................... 102
◆ Using the mpfsctl utility ................................................................. 107
◆ Displaying statistics ......................................................................... 118
◆ Displaying MPFS device information ........................................... 120
◆ Setting MPFS parameters ................................................................ 127
◆ Displaying Kernel parameters ....................................................... 127
◆ Setting persistent parameter values .............................................. 129


Using HighRoad disk protection

Linux servers provide hard disk protection for VNX for block and Symmetrix system volumes associated with MPFS. These volumes are called File Mapping Protocol (FMP) volumes. The program providing this protection is called the EMC HighRoad® Disk Protection (hrdp) program.

With hrdp read/write protection activated, I/O requests to FMP volumes from the Linux server are allowed, but I/O requests from other sources are denied. For example, root users can use the dd utility to read/write to an MPFS mounted file system, but cannot use the dd utility to read/write to the device files themselves (/dev).
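For example, with protection enabled, a read through the mounted file system succeeds while a direct read of the protected device file is rejected (the file, mount point, and device names are examples; the exact error reported may vary):

# Allowed: read a file through the MPFS mount point
dd if=/mnt/mpfs/datafile of=/dev/null bs=8k count=1
# Denied by hrdp: read the protected FMP volume device directly
dd if=/dev/sdc of=/dev/null bs=8k count=1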

The reason for disk protection is twofold. The first reason is to provide security. Arbitrary users on a Linux server should not be able to access the data stored on FMP volumes. The second reason is to provide data integrity. Hard drive protection prevents the accidental corruption of file systems.

This section describes the behavior and interface characteristics between the VNX for file and Linux servers.

VNX for file and hrdp

Linux servers depend on the VNX for file to tag relevant volumes to identify them as FMP volumes. To accomplish this, the VNX for file writes a signature on all visible volumes. From a disk protection view, a VNX for file and an FMP volume are synonymous.

Discovering disks When a Linux server performs a disk discovery action, it tries to read a VNX for file signature from every accessible volume.

For VNX for block volumes, which may be accessible through two different service processors (SP A and SP B), hrdp is not able to read a VNX for file signature from the passive path. However, hrdp does recognize that the two paths lead to the same device. The hrdp program protects both the passive and active paths to the VNX for block volumes.

Because a set of FMP volumes may change over time, hrdp must perform disk discovery periodically. The hrdp program receives notifications of changes to device paths, and responds accordingly by protecting any newly accessible VNX for file devices.


hrdp command syntax

The hrdp program can be used to manually control device protection. Used with no arguments, hrdp identifies all the devices in the system, and protects the devices or partitions with a VNX for file disk signature.

Command syntax hrdp [-d] [-h] [-n] [-p] [ -s sleep_time ] [-u] [-v] [-w]

where:

-d = run hrdp as a daemon, periodically scan devices, and update the kernel.

-h = print hrdp usage information.

-n = scan for new volumes, but do not inform the kernel about them.

-p = enable protection (read and write) for all VNX for file volumes.

-s sleep_time = when run as a daemon, sleep the specified number of seconds between rediscovery. The default sleep time is 900 seconds.

Note: Sleep time can also be set by using HRDP_SLEEP_TIME as an environment variable, or as a parameter in /etc/sysconfig/EMCmpfs. The sysconfig parameter is explained in detail in “Displaying statistics” on page 118.

-u = disable protection (read and write) for all VNX for file volumes.

-v = scan in verbose mode; print the signatures of new volumes as they are found.

-w = enable write protection for all VNX for file volumes.
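As noted for the -s option, the sleep time can also be set persistently. An illustrative entry in /etc/sysconfig/EMCmpfs (the value shown is an example):

HRDP_SLEEP_TIME=600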

Examples These examples illustrate the hrdp command output.

This command runs hrdp as a daemon, periodically scans devices, and updates the kernel:

$ hrdp -d

Note: When the command has finished executing, only the command line prompt is returned.


This command prints information about hrdp usage:

$ hrdp -h

Output:

Usage: hrdp [options]
Options:
 -d       run as a daemon
 -h       print this help information
 -n       do not update kernel just print results
 -p       enable protection
 -s time  seconds to sleep between reprotection if run as daemon
 -u       disabled (unprotect) protection
 -v       verbose
 -w       enable write protection (i.e. allow reads)
$

This command scans for new volumes but does not inform the kernel about them:

$ hrdp -n

Note: When the command has finished executing, only the command line prompt is returned.

This command enables read and write protection for all VNX for file volumes:

$ hrdp -p

Output:

protect$

This command displays "protect" to show that read and write protection is enabled for all VNX for file volumes.

When hrdp is run as a daemon, this command sets the number of seconds to sleep between rediscoveries:

$ hrdp -s sleep_time

Note: When the command has finished executing, only the command line prompt is returned.


This command disables read and write protection for all VNX for file volumes:

$ hrdp -u

Output:

unprotect$

This command displays "unprotect" to show that read and write protection is disabled for all VNX for file volumes.

This command scans in verbose mode and prints the signatures of new volumes as they are found:

$ hrdp -v

Output:

VNX signature vendor product_id device serial number or path info
0001874307271FA0-00f1 EMC SYMMETRIX 60:06:04:80:00:01:87:43:07:27:53:30:32:34:35:32 path = /dev/sdig Active FA-51b /dev/sg240
0001874307271FA0-00ee EMC SYMMETRIX 60:06:04:80:00:01:87:43:07:27:53:30:32:34:34:33 path = /dev/sdid Active FA-51b /dev/sg237
0001874307271FA0-00f0 EMC SYMMETRIX 60:06:04:80:00:01:87:43:07:27:53:30:32:34:34:44 path = /dev/sdif Active FA-51b /dev/sg239

$

This command enables write protection for all VNX for file volumes:

$ hrdp -w

Output:

protect$

This command displays "protect" to show that write protection is enabled for all VNX for file volumes.


Viewing hrdp protected devices

Devices being protected by hrdp can be seen by listing the /proc/hrdp file. For a list of protected devices:

$ cat /proc/hrdp

Output:

Disk Protection Enabled for reads and writes

Device                    Status
274: /dev/sddw 71.224     protected
275: /dev/sddx 71.240     protected
276: /dev/sddy 128.000    protected
277: /dev/sddz 128.016    protected


Using the mpfsctl utility

The MPFS Control Program, mpfsctl, is a command line utility that can be used by MPFS system administrators to troubleshoot and fine-tune their systems. The mpfsctl utility resides in /usr/sbin/mpfsctl. Table 6 on page 107 lists the mpfsctl commands.

Table 6 Command line interface summary

Command                 Description                                                                                        Page
mpfsctl help            Displays a list of mpfsctl commands.                                                               108
mpfsctl diskreset       Clears any file error conditions and causes MPFS to retry I/Os through the SAN.                    109
mpfsctl diskresetfreq   Clears file error conditions, and tells MPFS to retry I/Os through the SAN in a specified timeframe.  109
mpfsctl max-readahead   Allows for adjustment of the number of kilobytes to prefetch when MPFS detects sequential read requests.  110
mpfsctl prefetch        Sets the number of blocks of metadata to prefetch.                                                 112
mpfsctl reset           Resets the statistical counters.                                                                   113
mpfsctl stats           Displays statistical data about MPFS.                                                              114, 118
mpfsctl version         Displays the current version of MPFS software running on the Linux server.                         117
mpfsctl volmgt          Displays the volume management type used by each mounted file system.                              117

Error message: If any of these commands are used and an error is received, ensure that the MPFS software has been loaded. Use the command “mpfsctl version” on page 117 to verify the version number of MPFS software.


mpfsctl help This command displays a list of the various mpfsctl program commands. Each command is explained in the rest of this chapter.

Why use this command

Get a listing of all available mpfsctl commands.

Command syntax mpfsctl help

Input:

$ mpfsctl help

Output:

Usage: mpfsctl op ...
Operations supported (arguments in parentheses):
 diskreset      resets disk connections
 diskresetfreq  sets the disk reset frequency (seconds)
 max-readahead  set number of readahead pages
 help           print this list
 prefetch       set number of blocks to prefetch
 reset          reset statistics
 stats          print statistics
 version        display product compile time stamp
 volmgt         get volume management type
$

Use the man page facility on the Linux server for mpfsctl by typing man mpfsctl at the command line prompt.


mpfsctl diskreset This command clears any file error conditions and tells MPFS to retry I/Os through the SAN.

Why use this command

When MPFS detects that I/Os through the SAN are failing, it uses NFS to transport data. There are many reasons why a SAN I/O failure can occur. Use the mpfsctl diskreset command when:

◆ A cable has been disconnected. After the reconnection, use the mpfsctl diskreset command to immediately retry the SAN.

◆ A configuration change or a hardware failure has occurred and the MPFS I/O needs to be reset through the SAN after the repair or change has been completed.

◆ Network congestion has occurred and the MPFS I/O needs to be reset through the SAN when the network congestion has been identified and eliminated.

Command syntax mpfsctl diskreset

Input:

$ mpfsctl diskreset

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

mpfsctl diskresetfreq

This command sets the frequency at which the kernel automatically clears errors associated with using the SAN.

Why use this command

MPFS uses NFS until the errors are cleared, either manually with the mpfsctl diskreset command or automatically when the frequency is greater than zero.

Command syntax mpfsctl diskresetfreq <interval_seconds>

where:

<interval_seconds> = time between the clearing of errors in seconds

Input:

$ mpfsctl diskresetfreq 650


Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

To verify that the new interval has been set:

$ cat /proc/mpfs/params
Kernel Parameters
DirectIO=1
disk-reset-interval=650 seconds
ecache-size=2047 extents
max-retries=10
prefetch-size=256
MaxConcurrentNfsWrites=128
MaxComitBlocks=2048
NotifyPort=6907
StatfsBsize=65536
Readahead=0
defer-close-seconds=60 seconds
defer-close-max=1024
UsePseudo=1
ExostraMode=0
ReadaheadForRandomIO=0
SmallFileThreshold=0
$

The default for the disk-reset-interval parameter is 600 seconds with a minimum of 60 seconds and a maximum of 3600 seconds. Note the value change in the example.

mpfsctl max-readahead

This command allows for adjustment of the number of kilobytes of data to prefetch when MPFS detects sequential read requests.

The mpfsctl max-readahead command is designed for 2.6 Linux kernels to provide functionality similar to the kernel parameter entry and /proc/sys/vm/max-readahead for 2.4 Linux kernels. One difference is that in 2.4 Linux kernels, this kernel parameter is system-wide and the mpfsctl max-readahead parameter only applies to I/O issued to MPFS.

This option to the mpfsctl command allows experimentation with different settings on a currently running system. Changes to the mpfsctl max-readahead value are not persistent across system reboots. However, mpfsctl max-readahead value changes take effect immediately for file systems that are currently mounted.


To load a new value every time MPFS starts, remove the comments from the globReadahead parameter in the /etc/mpfs.conf file if it is present. If it is not present, add the globReadahead on a line by itself to change the default value.

Note: The prefetch parameter value can be set to stay in effect after a reboot. “Setting persistent parameter values” on page 129 describes how to set this value persistently.

For example:

globReadahead=120, where 120 pages x 4 KB per page = 480 KB on an x86_64 machine.

Why use this command

Tune MPFS for higher read performance.

Command syntax mpfsctl max-readahead <kilobytes>

where:

<kilobytes> = an integer between 0 and 32768

The minimum/default value 0 specifies use of the kernel default, which is 480 KB. A maximum value specifies 32,768 KB of data to be read ahead.

Input:

$ mpfsctl max-readahead 0

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.

To verify that the new max readahead value has been set:

$ cat /proc/mpfs/params
Kernel Parameters
DirectIO=1
disk-reset-interval=600 seconds
ecache-size=2047 extents
max-retries=10
prefetch-size=256
MaxConcurrentNfsWrites=128
MaxComitBlocks=2048
NotifyPort=6907
StatfsBsize=65536
Readahead=0
defer-close-seconds=60 seconds


defer-close-max=1024
UsePseudo=1
ExostraMode=0
ReadaheadForRandomIO=0
SmallFileThreshold=0
$

mpfsctl prefetch This command sets the number of data blocks for which metadata is prefetched. Metadata is information that describes the location of file data on the SAN. It is this prefetched metadata that allows for fast, accurate access to file data through the SAN.

Why use this command

Tune MPFS for higher performance.

Command syntax mpfsctl prefetch <blocks>

where:

<blocks> = an integer between 4 and 4096 that specifies the number of blocks for which to prefetch metadata.

A block contains 8 KB of metadata. Metadata can be prefetched that maps (describes) between 32 KB (4 blocks) and 32 MB (4096 blocks) of data.

The default is 256 blocks (2 MB of data) for which metadata is prefetched. This value works well for a variety of workloads, so leave it unchanged in most cases. However, the mpfsctl prefetch value can be changed in situations where higher performance is required.

Changing the prefetch value does not affect current MPFS mounts, only subsequent mounts.

Note: The prefetch parameter value can be set to stay in effect after a reboot. “Setting persistent parameter values” on page 129 describes how to set this value persistently.

Input:

$ mpfsctl prefetch 256

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.


To verify that the new prefetch value has been set:

$ cat /proc/mpfs/params
Kernel Parameters
DirectIO=1
disk-reset-interval=650 seconds
ecache-size=2047 extents
max-retries=10
prefetch-size=256
MaxConcurrentNfsWrites=128
MaxComitBlocks=2048
NotifyPort=6907
StatfsBsize=65536
Readahead=0
defer-close-seconds=60 seconds
defer-close-max=1024
UsePseudo=1
ExostraMode=0
ReadaheadForRandomIO=0
SmallFileThreshold=0
$

mpfsctl reset This command resets the statistical counters read by the mpfsctl stats command. “Displaying statistics” on page 118 provides additional information.

Why use this command

By default, statistics accumulate until the system is rebooted. Use the mpfsctl reset command to reset the counters to 0 before executing the mpfsctl stats command.

Command syntax mpfsctl reset

Input:$ mpfsctl reset

Note: This command produces no system response. When the command has finished executing, only the command line prompt is returned.


mpfsctl stats This command displays a set of statistics showing the internal operation of the Linux server. By default, statistics accumulate until the system is rebooted. The command “mpfsctl reset” on page 113 provides information to reset the counters to 0 before executing the mpfsctl stats command.

Why use this command

The output of the mpfsctl stats command can help pinpoint performance problems.

Command syntax mpfsctl stats

Input: $ mpfsctl stats

Output:

=== OS INTERFACE

8534 reads totalling 107683852 bytes

5378 direct reads totalling 107683852 bytes

4974 writes totalling 74902093 bytes

2378 direct writes totalling 107683852 bytes

0 split I/Os

25 commits, 14 setattributes 4 fallthroughs involving 28 bytes

=== Buffer Cache

8534 disk reads totalling 107683852 bytes

4974 disk writes totalling 74902093 bytes

0 failed disk reads totalling 0 bytes

0 failed disk writes totalling 0 bytes

=== NFS Rewrite

6436 sync read calls totalling 107683852 bytes

3756 sync write calls totalling 74902093 bytes

=== Address Space Errors

321 swap failed writes

=== EXTENT CACHE

8364 read-cache hits (97%)

3111 write-cache hits (62%)

=== NETWORK INTERFACE

188 open messages, 187 closes


178 getmap, 1897 allocspace

825 flushes of 1618 extents and 9283 blocks, 43 releases

1 notify messages

=== ERRORS

0 WRONG_MSG_NUM, 0 QUEUE_FULL 0 INVALIDARG
0 client-detected sequence errors

0 RPC errors, 0 other errors

$

When the command has finished executing, the command line prompt is returned.


Understanding mpfsctl stats output

Each of the output sections is explained next.

OS INTERFACE — The first four lines show the number of NFS message types that MPFS either handles (reads, direct reads, writes, and direct writes) or watches and augments (split I/Os, commits, and setattributes).

The last line shows the number of fallthroughs or reads and writes attempted over MPFS, but accomplished over NFS. The number of fallthroughs should be small. A large number of fallthroughs indicates that MPFS is not being used to its full advantage.

Buffer Cache — The first two lines show the number of disk (reads and writes) that MPFS reads and writes to and from cache. The last two lines show the number of failed disk reads and writes to cache.

NFS Rewrite — The first line shows the number of synchronized read calls that NFS rewrites. The second line shows the number of synchronized write calls that NFS rewrites.

Address Space Errors — This line shows the number of failed writes due to memory pressure, which will be retried later.

EXTENT CACHE — These lines show the cache-hit rates. A low percentage (such as the percentage of write-cache hits in this example) indicates that the application has a random access pattern rather than a more sequential access pattern.

NETWORK INTERFACE — These lines show the number of FMP messages sent. In this example, the number is 187.

The number of blocks (9283) per flush (825) is also significant; in this case it is an 11:1 ratio. Coalescing multiple blocks into a single flush is a major part of the MPFS strategy for reducing message traffic.

ERRORS — This section shows both serious errors and completely recoverable errors. The only serious errors are those described as either RPC or other. Contact EMC Customer Support if a significant number of errors are reported in a short period of time.


mpfsctl version This command displays the version number of the MPFS software running on the Linux server.

Why use this command

Find the specific version number of the MPFS software running on the Linux server.

Command syntax mpfsctl version

Input:

$ mpfsctl version

Output:

version: EMCmpfs.linux.6.0.2.x.x /emc/test/mpfslinux (test@eng111111), 12/10/10 01:41:24 PM

$

When the command has finished executing, only the command line prompt is returned.

If the MPFS software is not loaded, this error message appears:

/dev/mpfs : No such file or directory

Install the MPFS software by following the procedure in “Installing the MPFS software” on page 90.

mpfsctl volmgt This command displays the volume management type used by each mounted file system.

Why use this command

Find if the volume management type is hierarchical volume management.

Command syntax mpfsctl volmgt

Input:

$ mpfsctl volmgt

Output:

Fs ID VolMgtType
1423638547 Hvl management Disk signature
$

When the command has finished executing, only the command line prompt is returned.


Displaying statistics

MPFS statistics for the system can be retrieved by using the mpfsstat command.

Using the mpfsstat command

This command displays a set of statistics. The command reports I/O activity for MPFS. Without options, mpfsstat reports global statistics in megabytes per second. By default, statistics accumulate until the Linux server is rebooted. To reset the counters to 0, run mpfsstat with the -z option.

Why use this command

Help troubleshoot MPFS performance issues or to gain general knowledge about the performance of MPFS.

Command syntax mpfsstat [-d] [-h] [-k] [-z] [interval [count]]

where:
-d = report statistics about the MPFS disk interface
-h = print mpfsstat usage information
-k = report statistics in kilobytes instead of megabytes
-z = reset the counters to 0 before reporting statistics

Operands:
interval = report statistics every interval seconds
count = print only count lines of statistics

Examples These examples illustrate the mpfsstat command output.

This command prints the I/O rate for all MPFS-mounted file systems:

$ mpfsstat

Output:

r/s w/s dr/s dw/s mr/s mw/s mdr/s mdw/s Fallthroughs
0.0 0.0 0.0  0.0  0.0  0.0  0.0   0.0   0

$

This command reports MPFS disk interface statistics:

$ mpfsstat -d


Output:

disk                  syncnfs               failed  zero
r/s w/s mr/s mw/s     r/s w/s mr/s mw/s     r+w/s   mr+w/s  blocks
0   0   0.0  0.0      0   0   0.0  0.0      0       0.0     0

$

This command prints information about mpfsstat usage:

$ mpfsstat -h

Output:

Usage: mpfsstat [-dhkz] [interval [count]]
 -d Print disk statistics
 -h Print This screen
 -k Print Statistics in Kilobytes per sec.
 -z Clear all statistics
$

This command reports statistics in kilobytes instead of megabytes:

$ mpfsstat -k

Output:

r/s w/s drs dw/s kr/s kw/s kdr/s kdw/s Fallthroughs
0.0 0.0 0.0 0.0  0.0  0.0  0.0   0.0   0

$

This command resets the counters to zero before reporting statistics:

$ mpfsstat -z

Output:

r/s w/s dr/s dw/s mr/s mw/s mdr/s mdw/s Fallthroughs
0.0 0.0 0.0  0.0  0.0  0.0  0.0   0.0   0

$

This command prints two lines of statistics, waiting one second between prints:

$ mpfsstat 1 2

Output:

r/s w/s dr/s dw/s mr/s mw/s mdr/s mdw/s Fallthroughs
0.0 0.0 0.0  0.0  0.0  0.0  0.0   0.0   0
0.0 0.0 0.0  0.0  0.0  0.0  0.0   0.0   0

$


Displaying MPFS device information

Several types of information can be displayed for MPFS devices, including the device’s vendor ID, product ID, active/passive state, and mapped paths. The methods for displaying device information are:

◆ Using the mpfsinq command

◆ Listing the /proc/mpfs/devices file

◆ Using the hrdp command

◆ Using the mpfsquota command

This section describes each of these methods and the type of information provided by each.

Listing devices with the mpfsinq command

The mpfsinq command displays disk signature information, shows the paths where disks are mapped, and identifies whether the devices are active (available for I/O).

The command syntax is:

mpfsinq [-c time] [-h] [-m] [-S] [-v] <devices>

where:
-c time = timeout for SCSI command in seconds

-h = print mpfsinq usage information

-m = used to write scripts based on the output of mpfsinq; it prints out information in a machine readable format that can be easily edited with awk and sed

-S = tests the disk speed

-v = run in verbose mode printing out additional information

<devices> = active devices available for I/O


To view the timeout in seconds for a SCSI command:

$ mpfsinq -c time

Output:

FNM00083700177002E-0007 DGC RAID 5 60:06:01:60:00:03:22:00:27:b4:48:b9:d8:03:de:11
  path = /dev/sdbm (0x4400 | 0x4003f00) Active SP-a3 /dev/sg65
  path = /dev/sdbi (0x43c0 | 0x3003f00) Passive SP-b3 /dev/sg61
. . . . . .
FNM000837001770000-000e DGC RAID 5 60:06:01:60:00:03:22:00:5d:59:b0:aa:71:02:de:11
  path = /dev/sdbh (0x43b0 | 0x1001300) Active SP-a2* /dev/sg60
  path = /dev/sdee (0x8260 | 0x2000000) Passive SP-b2 /dev/sg167

* designates active path using non-default controller
$

To print information about mpfsinq usage:

$ mpfsinq -h

Output:

Usage: mpfsinq [options] <devices>
Options:
 -c time  timeout for scsi command in seconds
 -h       print this help information
 -m       machine readable format for output
 -S       test disk speed
 -v       verbose
$

To write scripts based on the output of the mpfsinq command with information printed out in a machine readable format that can be edited with awk and sed:

$ mpfsinq -m

b2:38:7e:d9:03:de:11 /dev/sdcy Active /dev/sg103
FNM00083700177002E-0010 DGC RAID 5 60:06:01:60:00:03:22:00:4a:b2:38:7e:d9:03:de:11 /dev/sddb Passive /dev/sg106
FNM000837001770028-000c DGC RAID 5 60:06:01:60:00:03:22:00:8d:e7:5d:67:d9:03:de:11 /dev/sdcq Active /dev/sg95
. . . . . .
FNM000837001770000-0001 DGC RAID 5 60:06:01:60:00:03:22:00:16:b6:c1:d5:6e:02:de:11 /dev/sdee Passive /dev/sg135


To test the disk speed:

$ mpfsinq -S

Output:

FNM000837001770000-0017 DGC RAID 5 60:06:01:60:00:03:22:00:5c:59:b0:aa:71:02:de:11
  path = /dev/sdbd (0x4370 | 0x1001200) Active SP-a2* /dev/sg56 50MB/s
  path = /dev/sdfj (0x8250 | 0x2001200) Passive SP-b2 /dev/sg166
. . . . . .
FNM000837001770000-0001 DGC RAID 5 60:06:01:60:00:03:22:00:16:b6:c1:d5:be:02:de:11
  path = /dev/sdc (0x820 | 0x1000000) Active SP-a2 /dev/sg3 50MB/s
  path = /dev/sder (0x8130 | 0x2000000) Passive SP-b2 /dev/sg148

* designates active path using non-default controller$

To display MPFS devices in verbose mode printing out additional information:

$ mpfsinq -v

Output:

VNX signature vendor product_id device serial number or path info
0001874307271FA0-00f1 EMC SYMMETRIX 60:06:04:80:00:01:87:43:07:27:53:30:32:34:35:32 path = /dev/sdig Active FA-51b /dev/sg240
0001874307271FA0-00ee EMC SYMMETRIX 60:06:04:80:00:01:87:43:07:27:53:30:32:34:34:33 path = /dev/sdid Active FA-51b /dev/sg237
0001874307271FA0-00f0 EMC SYMMETRIX 60:06:04:80:00:01:87:43:07:27:53:30:32:34:34:44 path = /dev/sdif Active FA-51b /dev/sg239

Note: A passive path will be shown in the output only if there is a secondary path mapped to the device. Only the VNX for block has Active/Passive states. Symmetrix system arrays are Active/Active only as shown in Table 7 on page 122.

Table 7 MPFS device information

Vendor ID State I/O available

Symmetrix Active Yes

VNX for block Active Yes

VNX for block Passive No


Listing devices with the /proc/mpfs/devices file

The state of MPFS devices may also be shown by listing the /proc/mpfs/devices file.

To list the MPFS devices in the /proc/mpfs/devices file:

$ cat /proc/mpfs/devices

Output:
VNX Signature            Path       State
FNM000837001770000-0001  /dev/sdc   active
FNM000837001770034-0001  /dev/sdak  active
FNM000837001770000-0002  /dev/sdf   active
FNM000837001770034-0002  /dev/sdan  active
FNM000837001770028-0002  /dev/sdg   active
FNM00083700177002E-0002  /dev/sdv   active
FNM000837001770000-0003  /dev/sdaq  active

Displaying mpfs disk quotas

Use the mpfsquota command to display a user’s MPFS disk quota and usage. Log in as root to use the optional username argument and view the limits of other users. Without options, mpfsquota displays warnings about mounted file systems where usage is over quota. Remote mounted file systems that do not have quotas turned on are ignored.

Note: If quota is not turned on in the file system, log in to the VNX for file and execute the nas_quotas commands.

Example
To set quotas in the server:
$ nas_quotas -edit -user -fs server2_fs1 501

Output:
Userid : 501
fs "server2_fs1" blocks (soft = 2000, hard = 3000) inodes (soft = 0, hard = 0)

To turn on the quotas:
$ nas_quotas -on -user -fs server2_fs1

Output:
done
$


To run a report on the quotas:
$ nas_quotas -report -fs server2_fs1

Output:

Report for user quotas on file system server2_fs1 mounted on /server2fs1
+------------+-------------------------------------+---------------------------+
|User        |           Bytes Used (1K)           |           Files           |
+------------+-----------+-------+-------+---------+------+-----+-----+--------+
|            | Used      | Soft  | Hard  |Timeleft | Used | Soft| Hard|Timeleft|
+------------+-----------+-------+-------+---------+------+-----+-----+--------+
|#501        |         8 |   2000|   3000|         |    1 |    0|    0|        |
|#32769      |   1864424 |      0|      0|         |  206 |    0|    0|        |
+------------+-----------+-------+-------+---------+------+-----+-----+--------+

done
$

Mount the file system with the mpfs option from a Linux server.

The command syntax is:
mpfsquota -v [username/UID]

where:
-v = required option
username/UID = the user ID

To display all MPFS-mounted file systems where quotas exist:

$ mpfsquota -v

To view the quota of UID 501:
$ mpfsquota -v 501

Output:
Filesystem  usage  quota  limit  timeleft  files  quota  limit  timeleft
/mnt            8   2000   3000                1      0      0

Example
If quota is turned off in the server, this message appears:
$ mpfsquota 501

Output:
No quota


Validating a Linux server installation

Use the mpfsinfo command to validate a Linux server and VNX for file installation by querying an FMP server (Data Mover) and validating that the Linux server can access all the disks required to use MPFS for each exported file system.

The user must supply the name or IP address of at least one FMP server and have the Tool Command Language (Tcl) and Extended Tool Command Language (Tclx) installed. Multiple FMP servers may be specified, in which case the validation is done for the exported file systems on all the listed servers.

mpfsinfo command
The command syntax is:
mpfsinfo [-v] [-h] <fmpserver>

where:
-v = run in verbose mode printing out additional information
-h = print mpfsinfo usage information
<fmpserver> = name or IP address of the FMP server

To query FMP server ka0abc12s402:

$ mpfsinfo ka0abc12s402

Output:
ka0abc12s402:/server4fs1 OK
ka0abc12s402:/server4fs2 OK
ka0abc12s402:/server4fs3 OK
ka0abc12s402:/server4fs4 OK
ka0abc12s402:/server4fs5 OK
$

When the Linux server cannot access all of the disks required for each exported file system, this output appears.

To query FMP server kc0abc17s901:

$ mpfsinfo kc0abc17s901

Output:
kc0abc17s901:/server9fs1 MISSING DISK(s)
 APM000637001700000-0049 MISSING
 APM000637001700000-004a MISSING
 APM000637001700000-0053 MISSING
 APM000637001700000-0054 MISSING
kc0abc17s901:/server9fs2 MISSING DISK(s)
 APM000637001700000-0049 MISSING
 APM000637001700000-004a MISSING
 APM000637001700000-0053 MISSING
 APM000637001700000-0054 MISSING


To run in verbose mode printing out additional information:
$ mpfsinfo -v 172.24.107.243

Output:
172.24.107.243:/S2_Shg_mnt1 OK
 FNM000836000810000-0007 OK
 FNM000836000810000-0008 OK
172.24.107.243:/S2_Shg_mnt2 OK
 FNM000836000810000-0009 OK
 FNM000836000810000-000a OK
172.24.107.243:/S2_Shg_mnt3 OK
 FNM000836000810000-000d OK
 FNM000836000810000-000e OK
$

To print mpfsinfo usage information:
$ mpfsinfo -h

Output:
Usage: /usr/sbin/mpfsinfo [options] fmpserver...
options:
 -h help
 -v verbose
$

If the server is not available, this error message is displayed:

$ mpfsinfo -v ka0abc12s402

Warning: No MPFS disks found
ka0abc12s402: Cannot reach server.


Setting MPFS parameters

A list of MPFS parameters may be found in the /proc/mpfs/params file. The parameter settings shown are the default or recommended values.

If a Linux server reboot is performed, several of these parameters revert to the default value unless they are set to a persistent state. “Setting persistent parameter values” on page 129 explains the procedure for applying these parameters across the reboot process.

Displaying Kernel parameters

Use Table 8 on page 128 as a guide for minimum and maximum settings for each parameter. To display the current settings:

$ cat /proc/mpfs/params

Output:

Kernel Parameters
DirectIO=1
disk-reset-interval=600 seconds
ecache-size=2047 extents
max-retries=10
prefetch-size=256
MaxConcurrentNfsWrites=128
MaxComitBlocks=2048
NotifyPort=6907
StatfsBsize=65536
Readahead=0
defer-close-seconds=60 seconds
defer-close-max=1024
UsePseudo=1
ExostraMode=0
ReadaheadForRandomIO=0
SmallFileThreshold=0


Table 8 MPFS kernel parameters

defer-close-max (default: 1024; minimum: 0; maximum: none)
Closes 12 files when the number of open files exceeds the defer-close-max value.

defer-close-seconds (default: 60; minimum: 0; maximum: none)
When an application closes a file, the FMP module will not send the FMP close command to the server until the defer-close-seconds time has passed.

DirectIO (default: 1; minimum: 0; maximum: 3)
Allows file reads and writes to go directly from an application to a storage device bypassing operating system buffer cache.

disk-reset-interval (default: 600; minimum: 60; maximum: 3,600)
Sets the timeframe in seconds for failback to start by using the SAN for all open files.

ecache-size (default: 2,047; minimum: 31; maximum: 16,383)
Sets the number of extents per file to keep in the extent cache.

ExostraMode (default: 0; do not change)
Use default setting; for EMC use only.

MaxCommitBlocks (default: 2,048; do not change)
Sets the maximum number of blocks to commit to a single commit command.

MaxConcurrentNfsWrites (default: 128; do not change)
Sets the maximum number of concurrent NFS writes allowed.

max-retries (default: 10; minimum: 2; maximum: 20)
Sets the maximum number of SAN-based retries before failing over to NFS.

NotifyPort (default: 6,907; do not change)
The notification port that is used by default.

prefetch-size (default: 256; minimum: 4; maximum: 2,048)
Sets the number of prefetch blocks. Recommended size: no larger than 512 unless instructed to do so by your EMC Customer Support Representative.

Readahead (default: 0; do not change)
Specifies the read ahead in pages. This parameter only applies to 2.6 kernels.

ReadaheadForRandomIO (default: 0; minimum: 0; maximum: 1)
When an application reads a file randomly, the readahead size is reduced by the kernel. By setting this parameter to 1, the readahead size is not reduced by the kernel.

SmallFileThreshold (default: 0; minimum: 0; maximum: none)
Sets the size threshold for files. For files smaller than this value, I/O will go through NFS instead of MPFS. When set to 0, this function is disabled.

StatfsBsize (default: 65,536; minimum: 8,192; maximum: 2 M)
The file system block size as returned by the statsfs system call. This value is not used by MPFS, but some applications choose this as the size of their writes.

UsePseudo (default: 1; minimum: 0; maximum: 1)
Enables MPFS to use pseudo devices created by Multipathing software, such as PowerPath (note a) and the Device-Mapper Multipath tool (note b).

Note a: MPFS supports PowerPath version 5.3 on RHEL 4 U6-U8, RHEL 5 U5-U7, SLES 10 SP3, and SLES 11 SP1.
Note b: MPFS supports the Device-Mapper Multipath tool on RHEL 4 U6-U8 and RHEL 5 U5-U7.

Setting persistent parameter values

Parameters in /etc/mpfs.conf and /etc/sysconfig/EMCmpfs can be set to persistently remain in effect after a Linux server reboot.

mpfs.conf parameters

Prefetch, along with several other MPFS parameters, may be set to a persistent state by modifying the /etc/mpfs.conf file. These parameters are:

◆ globPrefetchSize — Sets the number of blocks to prefetch when requesting mapping information.

◆ globMaxRetries — Sets the number of retries for all FMP requests.

◆ globDiskResetInterval — Sets the number of seconds between retrying by using SAN.

To view the /etc/mpfs.conf file:
$ cat /etc/mpfs.conf

Output:
#
# This is the MPFSi configuration file
#
# It contains parameters that are used when the MPFSi module is loaded
#

#
# Users who supply the direct I/O flag when opening a file will get


# behavior that is dependant on the global setting of a parameter
# called "globDirectIO".
#
# There are three valid values for this parameter. They are:
#
# 0 -- No direct I/O support, return ENOSUPP
# 1 -- Direct I/O via MPFS
# 2 -- Direct I/O via NFS even on MPFS file systems
# 3 -- Direct I/O via MPFS, and optimized for DIO to pre-allocated file, DIO to non-allocated file will fallback to NFS
#
# globDirectIO=1

#
# Set the number of seconds between retrying via SAN
#
# globDiskResetInterval_sec=600

#
# Set number of extents per file to keep in the extent cache. This
# should be a power of 2 minus 1.
# Too many extents means that searching the extent cache may be slow.
# Too few, and we will have to do too many RPCs.
#
# globECacheSize=2047

#
# Set the number of retries for all FMP requests
#
# globMaxRetries=10

#
# Set the number of blocks to prefetch when requesting mapping
# information
#
# globPrefetchSize=256

#
# Set number of simultaneous NFS writes to dispatch on SAN failure
#
# globMaxConcurrentNfsWrites=128

#
# Set optimal blocksize for MPFS file systems
#
# globStatfsBsize=65536

#
# Set number of readahead pages
# This is only used for 2.6 kernels. For 2.4 kernel users, please set
# vm.max_readahead
#
# globReadahead=250

#
# Readahead support for random I/O
# When an application reads a file randomly, the readahead size is
# reduced by kernel.
# By setting this parameter to 1, the readahead size is not reduced.
#
# globReadaheadForRandomIO=0

#
# Set maximum number of blocks to commit in a single commit command.
#
# globMaxCommitBlocks=2048

#
# Set the notification port if the mpfsd is unable to get the requested
# port when it starts.
#
# globNotifyPort=6907

#
# Enable MPFS to use Pseudo devices created by Multipathing software,
# namely PowerPath and Device-Mapper Multipath tool.
# globUsePseudo=1

#
# Set Defer Close Second for FileObj, 0 to disable
#
# globDeferCloseSec=60

#
# Set maximum Defer Close files
#
# globDeferCloseMax=1024

#
# Set size threshold for files
# For file smaller than this value, IO will go through NFS instead of
# MPFS
# When set to 0, this function is disabled.
#
# globSmallFileThreshold=0

To modify the /etc/mpfs.conf file, use vi or another text editor that does not add carriage returns to the file. Remove the comment from the parameter by deleting the hash mark (#) on the line, replace the value, and save the file. This example shows this file after modification:

#
# This is the MPFSi configuration file
#
# It contains parameters that are used when the MPFSi module is loaded
#
# Set the number of blocks to prefetch when requesting mapping
# information
#
# globPrefetchSize=256

# Set number of readahead pages
#
# globReadahead=250

# Set the number of seconds between retrying via SAN
#
# globDiskResetInterval_sec=600

# Set the number of retries for all FMP requests
#
globMaxRetries=8

# Set the number of seconds between retrying via SAN
#
globDiskResetInterval_sec=700
#

In the example, globMaxRetries was changed to 8 and globDiskResetInterval_sec was changed to 700.
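After the MPFS service is restarted or the Linux server is rebooted, the values picked up from /etc/mpfs.conf are reflected in the kernel parameters shown in /proc/mpfs/params. A quick way to check the two values changed above (a sketch; the parameter names follow the /proc/mpfs/params listing shown earlier):

$ grep -E 'max-retries|disk-reset-interval' /proc/mpfs/params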

DirectIO support

DirectIO allows file reads and writes to go directly from an application to a storage device bypassing operating system buffer cache. This feature was added for applications that use the O_DIRECT flag when opening a file.

When MPFS opens files by using DirectIO, the read/write behavior depends on the global setting of a parameter called DirectIO.

Note: DirectIO is only a valid option for 2.6 kernels (such as RHEL 4, RHEL 5, RHEL 6, SLES 10, SLES 11, CentOS 5, and CentOS 6).

To examine the value of the MPFS DirectIO setting:

$ grep DirectIO /proc/mpfs/params
globDirectIO=1
$

Note: The default value is 1, meaning that DirectIO is enabled.


To change the DirectIO parameter value, use vi or another text editor that does not add carriage return characters to the file. Remove the comment from this line in the /etc/mpfs.conf file on the server:

globDirectIO=1

Change the value 1 to the desired DirectIO action value,

where:

0 = No DirectIO support, return ENOSUPP

1 = DirectIO via MPFS

2 = DirectIO via NFS even on MPFS

3 = DirectIO via MPFS and optimized for DirectIO to a pre-allocated file, DirectIO to a non-allocated file will fallback to NFS

After changing the DirectIO parameter in the /etc/mpfs.conf file, activate DirectIO for MPFS:

1. Unmount MPFS:

$ umount -a -t mpfs

2. Stop the MPFS service:

$ service mpfs stop

3. Restart the MPFS service:

$ service mpfs start

4. Remount MPFS:

$ mount -a -t mpfs

Rebooting the Linux server will also activate the changes made in the /etc/mpfs.conf file.

Changes to global parameters in the /etc/mpfs.conf file persist across reboots.

Example
In this example, the /etc/mpfs.conf file has been modified so that MPFS does not use DirectIO when writing to and reading from MPFS.

Type the command:

$ cat /etc/mpfs.conf


Output:
#
# This is the MPFSi configuration file
#
# It contains parameters that are used when the MPFSi
# module is loaded
#

#
# Users who supply the direct I/O flag when opening a
# file will get behavior that is dependant on the global
# setting of a parameter called "globDirectIO".
#
# There are three valid values for this parameter. They
# are:
# 0 -- No direct I/O support, return ENOSUPP
# 1 -- Direct I/O via MPFS
# 2 -- Direct I/O via NFS even on MPFS file systems
# 3 -- Direct I/O via MPFS, and optimized for Direct I/O
#      to pre-allocated file, Direct I/O to non-allocated file will fallback to NFS
#
globDirectIO=0
#

Note: The O_DIRECT flag is used by the DirectIO parameter. The man 2 open man page contains detailed information on the O_DIRECT flag.

Asynchronous I/O support

Asynchronous I/O interfaces allow an application thread to dispatch an I/O without waiting for the I/O operation to complete. Later, the thread can check whether the I/O has completed. This feature is for applications that use the aio_read and aio_write interfaces. Asynchronous I/O is supported natively in 2.6 Linux kernels.

EMCmpfs parameters

The /etc/sysconfig/EMCmpfs file contains these parameters:

◆ MPFS_DISCOVER_SLEEP_TIME sets the number of seconds the MPFS discovery daemon sleeps before it performs a disk rediscovery process. The default is 900 seconds.

If an error occurs on any VNX for file volume, the daemon wakes so that it can perform a rediscovery without waiting the full sleep time.

◆ HRDP_SLEEP_TIME sets how often, in seconds, the hrdp daemon wakes up to check for additional disks and protect them if they are VNX for file disks. The default is 300 seconds.

◆ The MPFS_ISCSI_PID_FILE parameter can be used to customize the name of the file containing the Process ID (PID) of the iSCSI daemon.

◆ MPFS_ISCSI_REDISCOVER_TIME sets the number of seconds to wait to allow iSCSI to rediscover new LUNs. The default is 10 seconds.

◆ MPFS_SCSI_CMD_TIMEOUT sets the number of seconds to wait for SCSI commands before timing out. The default is 5 seconds.

◆ PERF_TIMEOUT sets the number of seconds during which performance packets are sent after the last hello message. The default is 900 seconds.

◆ MPFS_DISKSPEED_BUF_SIZE sets the default disk speed test buffer size. The default is 5 MB.

◆ The MPFS_MOUNT_HVL parameter sets the default behavior for using hierarchical volume management (HVM). HVM uses protocols that allow the Linux server to conserve memory and CPU resources. The default value of 1 uses hierarchical volume management if it is supported by the server; a value of 0 does not use hierarchical volume management by default. The behavior can also be changed for an individual mount by using the -o hvl=0 option to disable HVM or the -o hvl=1 option to enable HVM on the mount command (see the example after this list). "Hierarchical volume management" on page 33 describes hierarchical volumes and their management.

◆ The MPFS_DISCOVER_LOAD_BALANCE parameter is based on VNX for file best practices to statically load-balance the VNX for block. Load-balancing the Symmetrix system is not necessary. The default is to disable userspace load-balancing.
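The -o hvl mount option mentioned above can be applied per mount. The following is an illustration only; the server name, exported file system, and mount point are hypothetical:

$ mount -t mpfs server_name:/fs1 /mnt/mpfs -o hvl=0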

To view the parameters:

$ cat /etc/sysconfig/EMCmpfs
# Default values for MPFS daemons
#
#
# /** Default amount of time to sleep between rediscovery */
# MPFS_DISCOVER_SLEEP_TIME=900
#
# /** Default amount of time to sleep between reprotection of disks */
# HRDP_SLEEP_TIME=300
#
# /** Default name of iscsi pid file */
# MPFS_ISCSI_PID_FILE=/var/run/iscsi.pid
#


# /** Default time to allow iscsi to rediscover new LUNs */
# MPFS_ISCSI_REDISCOVER_TIME=10
#
# /** Default timeout for scsi commands (inquiry, etc) in seconds */
# MPFS_SCSI_CMD_TIMEOUT=5
#
# /** Number of seconds to send performance packets after last hello message */
# PERF_TIMEOUT=900
#
# /** Default disk speed test buffer size, unit is MB */
# MPFS_DISKSPEED_BUF_SIZE=5
#
# /** The value of this determines the default behavior for using hierarchical volume management.
# Assign a value of 1 to use hierarchical volume management by default if it is supported by the server
# Assign a value of 0 to not use hierarchical volume management by default. The default value
# can be changed by using the -o hvl=0 or -o hvl=1 option on the mount command. */
# MPFS_MOUNT_HVL=1
#
# /** Default value for Multipath static load-balancing */
# The static load-balancing is based on VNX best practice for VNX for block backend.
# It is not useful for Symmetrix backend
# Set 1 to get optimized load-balancing for multiple clients
# Set 2 to get optimized load-balancing for single client
# Set 0 to disable userspace load-balancing
# MPFS_DISCOVER_LOAD_BALANCE=0

To modify the /etc/sysconfig/EMCmpfs file, use vi or another text editor that does not add carriage returns to the file. Remove the comment from the parameter by deleting the hash mark (#) on the line, replace the value, and save the file. This example shows this file after modification:

# Default values for MPFS daemons
#
#
# /** Default amount of time to sleep between rediscovery */
MPFS_DISCOVER_SLEEP_TIME=800
$

Appendix A: File Syntax Rules

This appendix describes the file syntax rules to follow when creating a text (.txt) file to create a site and add Linux hosts. This appendix includes these topics:

◆ File syntax rules for creating a site .................................................. 138
◆ File syntax rules for adding hosts .................................................... 139


File syntax rules for creating a site

The file syntax rules for creating a text (.txt) file used to create a site are described below.

VNX for file with iSCSI ports

To create a text file for a site with a VNX for file with iSCSI ports:

Command syntax
cssite sn=<site-name> spw=<site-password> un=<VNX-user-name> pw=<VNX-password> addr=<VNX-name>

where:
<site-name> = name of the site
<site-password> = password for the site
<VNX-user-name> = username of the Control Station
<VNX-password> = password for the Control Station
<VNX-name> = network name or IP address of the Control Station

Example
To create a site with a site name of mysite, a site password of password, a VNX for file username of VNXtest, a VNX for file password of swlabtest, and an IP address of 172.24.107.242:

cssite sn=mysite spw=password un=VNXtest pw=swlabtest addr=172.24.107.242


File syntax rules for adding hosts

The file syntax rules for creating a text (.txt) file used to add Linux hosts are described below.

Linux host
To create one or more Linux hosts that share the same username (root) and password:

Command syntax
linuxhost un=<host-root-user-name> pw=<host-password> <host-name1>[...<host-nameN>]

where:
<host-root-user-name> = root username of the Linux host
<host-password> = password for the Linux host
<host-name1>[...<host-nameN>] = one or more Linux hostnames or IP addresses

Example
To create a Linux host with a root username of test, a Linux host password of swlabtest, and Linux host IP addresses of 172.24.107.242 and 135.79.124.68:

linuxhost un=test pw=swlabtest 172.24.107.242 135.79.124.68


Appendix B: Error Messages and Troubleshooting

This appendix describes messages that the Linux server writes to the system error log, and describes troubleshooting problems, causes, and solutions. This appendix includes these topics:

◆ Linux server error messages ............................................................ 142
◆ Troubleshooting ................................................................................. 143
◆ Known problems and limitations .................................................... 150


Linux server error messages

Table 9 on page 142 describes Linux server error messages.

Table 9 Linux server error messages

Message: notification error on session create
Explanation: The session was not created. Verify the mpfsd process is running as described in "Troubleshooting" on page 143.

Message: session to server lost
Explanation: The Linux server has lost contact with the VNX for file. This loss of contact is probably due to a network or server problem and not an I/O error.

Messages: <server_name> session expired now=<time>, expiration=<time>
          reestablished session OK
          handles may have been lost
Explanation: The first message indicates the Linux server has re-established contact with the VNX for file. The second indicates an attempt at re-establishing contact has been made, but has not succeeded. Neither message indicates an I/O error.

Message: could not find disk signature for <nnnnn> (<nnnnn> is the disk signature)
Explanation: The VNX for file specified a storage location <nnnnn>, that is inaccessible from the Linux server.

Message: could not start <xxx> thread (<xxx> is a component of the Linux server)
Explanation: A component of the Linux server failed to start.

Message: error accessing volume. I/O routed to LAN
Explanation: This message is printed in the log file when the Linux server receives an error message while communicating with Symmetrix system storage over FC. All subsequent I/O operations for the file are done over NFS until the file is reopened. After the file is reopened, the FC SAN path is retried.


Troubleshooting

This section lists problems, causes, and solutions in troubleshooting the EMC VNX MPFS software.

The EMC VNX MPFS for Linux Clients Release Notes provide additional information on troubleshooting, known problems, and limitations.

Installing MPFS software

These problems may be encountered while installing the MPFS software.


Problem Installation of the MPFS software fails with an error message such as:

Installing ./EMCmpfs-5.0.32.x-i686.rpm on localhost
[ Step 1 ] Checking installed MPFS package ...
[ Step 2 ] Installing MPFS package ...
Preparing...   ##################################### [100%]
   1:EMCmpfs   ##################################### [100%]
The kernel that you are running, 2.6.22.18-0.2-default, is not supported by MPFS.
The following kernels are supported by MPFS on SuSE:
 SuSE-2.6.16.46-0.12-default
 SuSE-2.6.16.46-0.12-smp
 SuSE-2.6.16.46-0.14-default
 SuSE-2.6.16.46-0.14-smp
 SuSE-2.6.16.53-0.8-default
 SuSE-2.6.16.53-0.8-smp
 SuSE-2.6.16.53-0.16-default
 SuSE-2.6.16.53-0.16-smp
 SuSE-2.6.16.60-0.21-default
 SuSE-2.6.16.60-0.21-smp
 SuSE-2.6.16.60-0.27-default
 SuSE-2.6.16.60-0.27-smp
 SuSE-2.6.16.60-0.37-default
 SuSE-2.6.16.60-0.37-smp
 SuSE-2.6.16.60-0.60.1-default
 SuSE-2.6.16.60-0.60.1-smp
 SuSE-2.6.16.60-0.69.1-default
 SuSE-2.6.16.60-0.69.1-smp
 SuSE-2.6.5-7.282-default
 SuSE-2.6.5-7.282-smp
 SuSE-2.6.5-7.283-default
 SuSE-2.6.5-7.283-smp
 SuSE-2.6.5-7.286-default
 SuSE-2.6.5-7.286-smp
 SuSE-2.6.5-7.287.3-default
 SuSE-2.6.5-7.287.3-smp
 SuSE-2.6.5-7.305-default
 SuSE-2.6.5-7.305-smp
 SuSE-2.6.5-7.308-default
 SuSE-2.6.5-7.308-smp

Cause The kernel being used is not supported.

Solution Use a supported OS kernel. The EMC VNX MPFS for Linux Clients Release Notes provide a list of supported kernels.


Mounting and unmounting a file system

These problems may be encountered in mounting or unmounting a file system. Refer to “Mounting MPFS” on page 84 and “Unmounting MPFS” on page 88 for more information.

Problem The MPFS software does not run or the MPFS daemon did not start.

Cause The MPFS software may not be installed.

Solution Verify that the MPFS software is installed and the MPFS daemon has started by using this procedure:

1. Use RPM to verify the installation:

rpm -q EMCmpfs

If the MPFS software is installed properly, the output is displayed as:

EMCmpfs-5.0.x-x

Note: Alternatively, use the mpfsctl version command to verify that the Linux server software is installed. The mpfsctl man page or "Using the mpfsctl utility" on page 107 provides additional information.

2. Use the ps command to verify that the MPFS daemon has started:

ps -ef |grep mpfsd

The output will look like this if the MPFS daemon has started:

root 847 1 0 15:19 ? 00:00:00 /usr/sbin/mpfsd

3. If the ps command output does not show the MPFS daemon process is running, as root, start MPFS by using this command:

$ /etc/rc.d/init.d/mpfs start

Problem The mount command displays messages about unknown file systems.

Cause An option was specified that is not supported by the mount command.

Solution Check the mount command options and correct any unsupported options:

1. Display the mount_mpfs man page to find supported options by typing man mount_mpfs at the command prompt.
2. Run the mount command again with the correct options.


Problem The mount command displays this message:
mount: must be root to use mount

Cause Permissions are required to use the mount command.

Solution Log in as root and try the mount command again.

Problem The mount command displays this message:
nfs mount: get_fh: <hostname>:: RPC: Rpcbind failure - RPC: Timed out

Cause The VNX Server or NFS server specified is down.

Solution Check that the correct server name was specified and that the server is up with an exported file system.

Problem The mount command displays this message:
$ mount -t mpfs 172.24.107.242:/rcfs /mnt/mpfs
Volume ’APM000643042520000-0008’ not found.
Error mounting /mnt/mpfs via MPFS

Cause The MPFS mount operation could not find the physical disk associated with the specified file system.

Solutions Use the mpfsinq command to verify the physical disk device associated with the file system is connected to the server over FC and is accessible from the server as described in “Listing devices with the mpfsinq command” on page 120.

Problem The mount command displays this message:
mount: /<filesystem>: No such file or directory

Cause No mount point exists.

Solution Create a mount point and try the mount again.


Problem The mount command displays this message:
mount: fs type mpfs not supported by kernel.

Cause The MPFS software is not installed.

Solution Install the MPFS software and try the mount command again.

Problem A file system cannot be unmounted. The umount command displays this message:
umount: Device busy

Cause Existing processes were using the file system when an attempt was made to unmount it, or the umount command was issued from the file system itself.

Solution Identify all processes using the file system, stop them, and unmount the file system again (see the sketch below):

1. Use the fuser command to identify all processes using the file system.
2. Use the kill -9 command to stop all processes.
3. Run the umount command again.
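A minimal sketch of this sequence, assuming the file system is mounted at the hypothetical mount point /mnt/mpfs and <PID> is a process reported by fuser:

$ fuser -m /mnt/mpfs
$ kill -9 <PID>
$ umount /mnt/mpfs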

Problem The mount command hangs.

Cause The server specified with the mount command does not exist or cannot be reached.

Solution Stop the mount command, check for a valid server, and retry the mount command again:

1. Interrupt the mount command by using the interrupt key combinations (usually Ctrl-C).
2. Try to reach the server by using the ping command.
3. If the ping command succeeds, retry the mount.


Problem The mount command displays the message:
permission denied.

Causes
Cause 1: Permissions are required to access the file system specified in the mount command.
Cause 2: You are not the root user on the server.

Solutions
Solution 1: Ensure that the file system has been exported with the right permissions, or set the right permissions for the file system (the EMC VNX Command Line Interface Reference for File Manual provides information on permissions).
Solution 2: Use the su command to become the root user.

Problem The mount command displays the message:
RPC program not registered.

Cause The server specified in the mount command is not a VNX Server or NFS server.

Solution Check that the correct server name was specified and the server has an exported file system.

Problem The mount command logs this message in the /var/log/messages file: Couldn’t find device during mount.

Cause The MPFS mount operation could not find the physical disk associated with the specified file system.

Solution Use either the fdisk command or the mpfsinq command (as described in “Listing devices with the mpfsinq command” on page 120) to verify the physical disk device associated with the file system is connected to the server over FC and is accessible from the server.
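For example, fdisk can list the block devices that the Linux server currently sees; the device names reported depend on the configuration:

$ fdisk -l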


Miscellaneous issues

These miscellaneous issues may be encountered with a Linux server.

Problem The mount command displays this message:
RPC: Unknown host.

Cause The server name specified in the mount command does not exist on the network.

Solution Check the server name and, if necessary, use the IP address to mount the file system:

1. Ensure that the correct server name is specified in the mount command.
2. If the correct name was not specified, check whether the host's /etc/hosts file or the NIS/DNS map contains an entry for the server.
3. If the server does appear in /etc/hosts or the NIS/DNS map, check whether the server responds to the ping command.
4. If the ping command succeeds, try using the server's IP address instead of its name in the mount command.

Problem The mount command displays this message:
$ mount -t mpfs ka0abc12s401:/server4fs1 /mnt
mount: fs type mpfs not supported by kernel

Cause The MPFS software is not installed on the Linux server.

Solution Install the MPFS software and try the mount command again:

1. Install the MPFS software on the Linux server as described in "Installing the MPFS software" on page 90.
2. Run the mount command again.

Problem The user cannot write to a mounted file system.

Cause Write permission is required on the file system or the file system is mounted as read-only.

Solution Verify that you have write permission and try writing to the mounted file system again:

1. Check that you have write permission on the file system.
2. Try unmounting the file system (as described in "Unmounting MPFS" on page 88) and remounting it in read/write mode.


Known problems and limitations

The EMC VNX MPFS for Linux Clients Release Notes provide known problems and limitations for MPFS clients.

Problem This message appears:
NFS server not responding.

Cause The VNX Server is unavailable due to a network-related problem, a reboot, or a shutdown.

Solution Check whether the server responds to the ping command. Also try unmounting and remounting the file system.

Problem Removing the MPFS software package fails.

Cause 1 The MPFS software package is not installed on the Linux server.

Solution 1 Ensure that the MPFS software package name is spelled correctly, with uppercase and lowercase letters specified. If the MPFS software package name is spelled correctly, verify that the MPFS software is installed on the Linux server:
$ rpm -q EMCmpfs

If the MPFS software is installed properly, the output is displayed as:
EMCmpfs-5.0.32-xxx

If the MPFS software is not installed, the output is displayed as:
Package "EMCmpfs" was not found.

Cause 2 Trying to remove the MPFS software package while one or more MPFS-mounted file systems are active, and I/O is taking place on the active file system. A message appears on the Linux server such as:
ERROR: Mounted MPFS filesystems found on the system.

Please unmount all MPFS filesystems before removing the product.

Solution 2 Unmount MPFS and try removing the MPFS software package again (see the sketch below):

1. Stop the I/O.
2. Unmount all active MPFS file systems by using the umount command as described in "Unmounting MPFS" on page 88.
3. Restart the removal process.
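As an illustration only, assuming the RPM-based installation used in this guide and a single MPFS mount at the hypothetical mount point /mnt/mpfs, the sequence might look like this:

$ umount /mnt/mpfs
$ rpm -e EMCmpfs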


Glossary

This glossary defines terms useful for MPFS administrators.

C

Challenge Handshake Authentication Protocol (CHAP) Access control protocol for secure authentication using shared passwords called secrets.

client Front-end device that requests services from a server, often across a network.

command line interface (CLI) Interface for typing commands through the Control Station to perform tasks that include the management and configuration of the database and Data Movers and the monitoring of statistics for the VNX for file cabinet components.

Common Internet File System (CIFS) File-sharing protocol based on the Microsoft Server Message Block (SMB). It allows users to share file systems over the Internet and intranets.

Control Station Hardware and software component of the VNX for file that manages the system and provides the user interface to all VNX for file components.


D

daemon UNIX process that runs continuously in the background, but does nothing until it is activated by another process or triggered by a particular event.

Data Mover In a VNX for file, a cabinet component running its own operating system that retrieves data from a storage device and makes them available to a network client. This is also referred to as a blade.

disk volume On VNX for file, a physical storage unit as exported from the system. All other volume types are created from disk volumes. See also metavolume, slice volume, stripe volume, and volume.

E

extent Set of adjacent physical blocks.

F

fallthrough Fallthrough occurs when MPFS temporarily employs the NFS or CIFS protocol to provide continuous data availability, reliability, and protection while block I/O path congestion or unavailability is resolved. This fallthrough technology is seamless and transparent to the application being used.

Fast Ethernet Any Ethernet specification with a speed of 100 Mb/s. Based on the IEEE 802.3u specification.

Fibre Channel Nominally 1 Gb/s data transfer interface technology, although the specification allows data transfer rates from 133 Mb/s up to 4.25 Gb/s. Data can be transmitted and received simultaneously. Common transport protocols, such as Internet Protocol (IP) and Small Computer Systems Interface (SCSI), run over Fibre Channel. Consequently, a single connectivity technology can support high-speed I/O and networking.

File Mapping Protocol (FMP) File system protocol used to exchange file layout information between an MPFS client and the VNX for file. See also Multi-Path File Systems (MPFS).

file system Method of cataloging and managing the files and directories on a VNX for block.


G

gateway VNX for file that is capable of connecting to multiple systems, either directly (direct-connected) or through a Fibre Channel switch (fabric-connected).

Gigabit Ethernet Any Ethernet specification with a speed of 1000 Mb/s. IEEE 802.3z defines Gigabit Ethernet over fiber and cable, which has a physical media standard of 1000Base-X (1000Base-SX short wave, 1000Base-LX long wave) and 1000Base-CX shielded copper cable. IEEE 802.3ab defines Gigabit Ethernet over an unshielded twisted pair (1000Base-T).

H

host Addressable end node capable of transmitting and receiving data.

I

Internet Protocol (IP) Network layer protocol that is part of the Open Systems Interconnection (OSI) reference model. IP provides logical addressing and service for end-to-end delivery.

Internet Protocol address (IP address) Address uniquely identifying a device on any TCP/IP network. Each address consists of four octets (32 bits), represented as decimal numbers separated by periods. An address is made up of a network number, an optional subnetwork number, and a host number.

Internet SCSI (iSCSI) Protocol for sending SCSI packets over TCP/IP networks.

iSCSI initiator iSCSI endpoint, identified by a unique iSCSI name, which begins an iSCSI session by issuing a command to the other endpoint (the target).

iSCSI target iSCSI endpoint, identified by a unique iSCSI name, which executes commands issued by the iSCSI initiator.

K

kernel Software responsible for interacting most directly with the computer’s hardware. The kernel manages memory, controls user access, maintains file systems, handles interrupts and errors, performs input and output services, and allocates computer resources.


L

logical device One or more physical devices or partitions managed by the storage controller as a single logical entity.

logical unit (LU) For iSCSI on a VNX for file, a logical unit is an iSCSI software feature that processes SCSI commands, such as reading from and writing to storage media. From an iSCSI host perspective, a logical unit appears as a disk device.

logical unit number (LUN) Identifying number of a SCSI or iSCSI object that processes SCSI commands. The LUN is the last part of the SCSI address for a SCSI object. The LUN is an ID for the logical unit, but the term is often used to refer to the logical unit itself.

logical volume Logical devices aggregated and managed at a higher level by a volume manager. See also logical device.

M

metadata Data that contains structural information, such as access methods, about itself.

metavolume On a VNX for file, a concatenation of volumes, which can consist of disk, slice, or stripe volumes. Also called a hyper volume or hyper. Every file system must be created on top of a unique metavolume. See also disk volume, slice volume, stripe volume, and volume.

mirrored pair Logical volume with all data recorded twice, once on each of two different physical devices.

mirroring Method by which the VNX for block maintains two identical copies of a designated volume on separate disks.

mount Process of attaching a subdirectory of a remote file system to a mount point on the local machine.

mount point Local subdirectory to which a mount operation attaches a subdirectory of a remote file system.


MPFS over iSCSI Multi-Path File System over iSCSI-based clients. MPFS client running an iSCSI initiator works in conjunction with an IP-SAN switch containing an iSCSI to SAN blade. The IP-SAN blade provides one or more iSCSI targets that transfer data to the storage area network (SAN) systems. See also Multi-Path File Systems (MPFS).

MPFS session Connection between an MPFS client and a VNX for file.

MPFS share Shared resource designated for multiplexed communications using the MPFS file system.

Multi-Path File Systems (MPFS) VNX for file feature that allows heterogeneous servers with MPFS software to concurrently access, directly over Fibre Channel or iSCSI channels, shared data stored on an EMC Symmetrix or VNX for block. MPFS adds a lightweight protocol called File Mapping Protocol (FMP) that controls metadata operations.

N

nested mount file system (NMFS) File system that contains the nested mount root file system and component file systems.

nested mount file system root File system on which the component file systems are mounted read-only except for mount points of the component file systems.

network-attached storage (NAS) Specialized file server that connects to the network. A NAS device, such as VNX for file, contains a specialized operating system and a file system, and processes only I/O requests by supporting popular file sharing protocols such as NFS and CIFS.

network file system (NFS) Network file system (NFS) is a network file system protocol allowing a user on a client computer to access files over a network as easily as if the network devices were attached to its local disks.

P

PowerPath EMC host-resident software that integrates multiple path I/O capabilities, automatic load balancing, and path failover functions into one comprehensive package for use on open server platforms connected to Symmetrix or VNX for block.


R

Redundant Array of Independent Disks (RAID) Method for storing information where the data is stored on multiple disk drives to increase performance and storage capacities and to provide redundancy and fault tolerance.

S

server Device that handles requests made by clients connected through a network.

slice volume On a VNX for file, a logical piece or specified area of a volume used to create smaller, more manageable units of storage. See also disk volume, metavolume, stripe volume, and volume.

small computer system interface (SCSI) Standard set of protocols for host computers communicating with attached peripherals.

storage area network (SAN) Network of data storage disks. In large enterprises, a SAN connects multiple servers to a centralized pool of disk storage. See also network-attached storage (NAS).

storage processor (SP) Storage processor on a VNX for block. On a VNX for block, a circuit board with memory modules and control logic that manages the VNX for block I/O between the host’s Fibre Channel adapter and the disk modules.

Storage processor A (SP A) Generic term for the first storage processor in a VNX for block.

Storage processor B (SP B) Generic term for the second storage processor in a VNX for block.

stripe size Number of blocks in one stripe of a stripe volume.

stripe volume Arrangement of volumes that appear as a single volume. Allows for stripe units that cut across the volume and are addressed in an interlaced manner. Stripe volumes make load balancing possible. See also disk volume, metavolume, slice volume, and volume.


Symmetrix Remote Data Facility (SRDF) EMC technology that allows two or more Symmetrix systems to maintain a remote mirror of data in more than one location. The systems can be located within the same facility, in a campus, or hundreds of miles apart using fiber or dedicated high-speed circuits. The SRDF family of replication software offers various levels of high-availability configurations, such as SRDF/Synchronous (SRDF/S) and SRDF/Asynchronous (SRDF/A).

T

tar Backup format in PAX that traverses a file tree in depth-first order.

Transmission Control Protocol (TCP) Connection-oriented transport protocol that provides reliable data delivery.

U

unified storage VNX for file that is connected to a captive system that is not shared with any other VNX for files and is not capable of connecting to multiple systems.

V

Virtual Storage Area Network (VSAN) SAN that can be broken up into sections allowing traffic to be isolated within the section.

VNX EMC network-attached storage (NAS) product line.

VNX for block EMC midrange block system.

VNX OE Embedded operating system in VNX for block disk arrays.

volume On a VNX for file, a virtual disk into which a file system, database management system, or other application places data. A volume can be a single disk partition or multiple partitions on one or more physical drives. See also disk volume, metavolume, slice volume, and stripe volume.


Index

A
Access Logix configuration 63, 68, 79, 81
accessing storage 67
administering MPFS 101
architecture
 MPFS over Fibre Channel on VNX 19, 38
 MPFS over Fibre Channel on VNX VG2/VG8 gateway 20, 38
 MPFS over iSCSI on VG2/VG8 gateway 22
 MPFS over iSCSI on VNX 21, 38
 MPFS over iSCSI on VNX VG2/VG8 gateway 39
 MPFS over iSCSI/FC on VNX 22, 39
 MPFS over iSCSI/FC on VNX VG2/VG8 gateway 23, 39
arraycommpath 63, 65, 67, 68, 79, 82
Asynchronous I/O support 134
authentication, CHAP 30

B
best practices
 file system 46, 47
 LUNs 47
 MPFS 28, 46, 53
 MPFS threads 28
 storage configuration 29
 stripe size 53
 VNX for block 58
 VNX for file volumes 46
 VNX for file with MPFS 28
 VNX VG2/VG8 gateway 58

C
Challenge Handshake Authentication Protocol. See CHAP
CHAP
 one-way authentication 30
 reverse authentication 30
 secret 30
 session authentication 30
command line interface. See mpfsctl commands
commands
 /proc/mpfs/devices 123
 mpfsctl diskreset 109
 mpfsctl diskresetfreq 109
 mpfsctl help 108
 mpfsctl max-readahead 110
 mpfsctl prefetch 112
 mpfsctl reset 113
 mpfsctl stats 114
 mpfsctl version 117
 mpfsctl volmgt 117
 mpfsinfo 125
 mpfsinq 120
 mpfsquota 123
 mpfsstat 118
comments 15
configuration
 overview 26
 planning checklist 36
configuring
 Gigabit Ethernet ports 39
 iSCSI target 61
 storage 67
 storage access 58


 VNX for file 47
 VNX with MPFS 28
 zones 58
creating
 a file system 47
 file system 54
 metavolume 54
 security file 60
 storage groups 65
 stripe 52

D
DirectIO support 132
disabling
 arraycommpath 63, 68, 79, 82
 failovermode 63, 68, 79, 82
 HVM 135
 read and write protection for VNX for file volumes 103
displaying
 accessible LUNs 47
 disks 49
 MPFS devices 120, 122
 MPFS software version 117
 MPFS statistics 114

E
EMC HighRoad Disk Protection (hrdp) program 102
EMC Unisphere software 45
EMCmpfs parameters
 hrdp_sleep_time 134
 mpfs_discover_sleep_time 134
 mpfs_diskspeed_buffer_size 135
 mpfs_iscsi_pid_file 135
 mpfs_iscsi_rediscover_time 135
 mpfs_mount_hvl 135
 mpfs_scsi_cmd_timeout 135
 perf_timeout 135
enabling
 arraycommpath 63, 68, 79, 82
 failovermode 63, 68, 79, 82
error messages 142

F
failovermode 63, 65, 67, 68, 79, 82
Fibre Channel
 adding hosts to storage groups 68, 79
 driver installation 67
 switch installation 59
 switch requirements 42
Fibre Channel over Ethernet (FCoE) 19
File Mapping Protocol (FMP) 24
file syntax rules 137
file system
 creating 47
 exporting 55
 mounting 55
 names of mounted 50
 names of unmounted 51
 setup 46
 unmounting 147
firewall FMP ports 94

G
Gigabit Ethernet port configuration 39

H
Hierarchical volume management (HVM)
 default settings 135
 enable/disable 135
 overview 33
 values 135

I
I/O sizes 28
installing
 MPFS software 90
 MPFS software, troubleshooting 143
 storage configuration 41
iSCSI CHAP authentication 30
iSCSI discovery address 39
iSCSI driver
 starting 78
 stopping 78
iSCSI driver configuration
 CentOS 5 73
 RHEL 4 70
 RHEL 5 73


 RHEL 6 73
 SLES 10 70, 73
 SLES 11 73
iSCSI initiator
 configuring 75
 configuring ports 31, 58
 connection to a SAN switch 29
 IQN to define a host 81
 names 59
 show IQN name 73
iSCSI port configuration 61
iSCSI target configuration 61

L
Linux server
 configuration 29
 error messages 142
LUNs
 accessible by Data Movers 47
 adding 65
 best practices 47
 displaying 50
 displaying all 71
 failover 67
 maximum number supported 47
 mixed not supported 41
 rediscover new 135
 total usable capacity 47

M
managing using mpfs commands 101
metavolume 54
mounting a file system
 troubleshooting 145
mounting MPFS 84
MPFS
 creating a file system 47, 54
 exporting a file system 55
 mounting 84
 mounting a file system 55, 133
 setup 46
 storage requirements 41
 unmounting 88, 133
MPFS client troubleshooting 143
mpfs commands
 /proc/mpfs/devices 123
 mpfsinfo 125
 mpfsinq 120
 mpfsquota 123
 mpfsstat 118
MPFS configuration roadmap 27
MPFS configurations 21, 22
MPFS devices, displaying 120
MPFS over Fibre Channel
 VNX 19, 38
 VNX VG2/VG8 gateway 20, 38
MPFS over iSCSI
 VG2/VG8 gateway 22
 VNX 21, 38
 VNX VG2/VG8 gateway 39
MPFS over iSCSI/FC
 VNX 22, 39
 VNX VG2/VG8 gateway 23, 39
MPFS overview 18
MPFS parameters
 conf parameters 129
 kernel parameters 127
 persistent parameters 129
MPFS service
 restarting 133
 stopping 133
MPFS software
 before installing 90
 blocks per flush 116
 install over existing 95
 installation 90
 installing from a CD 92
 installing from a tar file 90
 installing from EMC Online Support 90
 managing using hrdp commands 103
 managing using mpfs commands 101
 managing using mpfsctl commands 101
 post installation instructions 98
 starting 95, 97
 uninstalling 100
 upgrading 95
 upgrading from an earlier version 96
 upgrading with file system mounted 97
 verifying upgrade 99
 version number 117
MPFS threads 28
mpfsctl commands
 mpfsctl diskreset 109


 mpfsctl diskresetfreq 109
 mpfsctl help 108
 mpfsctl max-readahead 110
 mpfsctl prefetch 112
 mpfsctl reset 113
 mpfsctl stats 114
 mpfsctl version 117
 mpfsctl volmgt 117
mpfsinq
 troubleshooting 146

N
number of blocks per flush 116

O
one-way CHAP authentication 30
overview of configuring MPFS 26

P
performance
 file systems 46
 gigabit ethernet ports 39
 iSCSI ports 43
 Linux server 43, 58
 MPFS 28, 29, 46, 53, 112, 118
 MPFS reads 111
 MPFS threads 28
 MPFS with PowerPath 28
 problems with Linux server 114
 read ahead 111
 storage configuration 29
 stripe size 28, 46, 53
 VNX for block 58
 VNX for block volumes 46
post installation instructions 93
PowerPath
 support 28
 with MPFS 63
Prefetch requirements 29, 94

R
Rainfinity Global Namespace 32
Read ahead performance 111
Read cache requirements 29, 94
removing MPFS software, troubleshooting 150
reverse CHAP authentication 30

S
SAN switch zoning 59
secret (CHAP) 30
security file creation 60
SendTargets discovery 75
setting
 arraycommpath 64, 68, 79, 82
 failovermode 63, 68, 79, 82
setting up
 MPFS 46
 VNX for file 44
software components
 CentOS 5 40
 iSCSI initiator 40
 MPFS software 40
 NAS software 40
 Red Hat Enterprise Linux 40
 SuSE Linux Enterprise 40
starting MPFS 95, 97
starting the iSCSI driver 78
statistics
 displaying 114, 118
 resetting counters 113
stopping the iSCSI driver 78
storage configuration
 configuring 59
 installation 41
 recommendations 29
 requirements 41
storage group
 adding Fibre Channel hosts 68, 79
 adding initiators 79
 configuring 67
storage guidelines 29
storage pool, created MPFS 35, 45
stripe size 46
system component verification 38

T
troubleshooting
 cannot write to a mounted file system 149
 installing MPFS software 143
 Linux client 143


 mounting a file system 145
 mpfsinq command 146
 NFS server response 150
 removing MPFS software 150
 uninstalling MPFS software 100
 unmounting a file system 147
tuning MPFS 101

U
uninstalling MPFS software 100
unmounting a file system 147
unmounting MPFS 88
upgrading MPFS software 95

V
verifying an MPFS software upgrade 99
version number, displaying 117
VMware ESX server
 limitations 31
 requirements with Linux 31
VNX for block
 best practices 58
 configuring using CLI commands 58
 iSCSI port configuration 61
 storage requirements 41
 system requirements 41
VNX for file
 configuring 47
 enabling MPFS 57
 setup 44
VNX MPFS
 best practices guide 28, 46, 53
 configuration 28
volume stripe size 28
volumes mounted 52
volumes, names of 52
