
3000-hac-v2.5-000023-B

High Availability Cluster Plugin

    User Guide

    v2.5x


Copyright 2012 Nexenta Systems, ALL RIGHTS RESERVED

Notice: No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose, without the express written permission of Nexenta Systems (hereinafter referred to as Nexenta).

Nexenta reserves the right to make changes to this documentation at any time without notice and assumes no responsibility for its use. Nexenta products and services can only be ordered under the terms and conditions of Nexenta Systems' applicable agreements. All of the features described in this documentation may not be available currently. Refer to the latest product announcement or contact your local Nexenta Systems sales office for information on feature and product availability. This documentation includes the latest information available at the time of publication.

Nexenta is a registered trademark of Nexenta Systems in the United States and other countries. All other trademarks, service marks, and company names in this documentation are properties of their respective owners.


Contents

Preface

1 Introduction
    About Introduction
    Product Features
    Server Monitoring and Failover
    Storage Failover
    Exclusive Access to Storage
        SCSI-2 PGR for Additional Protection
    Service Failover
    Additional Resources

2 Installation & Setup
    About Installation and Setup
    Prerequisites
    Adding Plugins
        Installing Plugins
    Sample Network Architecture

3 Configuring the HA Cluster
    About Configuring the HA Cluster
    Binding the Nodes Together with SSH
    Configuring the HA Cluster

4 Configuring the Cluster's Shared Volumes
    About Configuring the Cluster's Shared Volumes
    Configuring the Cluster's Shared Volumes
        Importing the Current Node
        Adding a Virtual IP or Hostname

5 Heartbeat and Network Interfaces
    About Heartbeat and Network Interfaces
    Heartbeat Mechanism
    Configuring the Cluster and Heartbeat Interfaces
    Serial Link

6 Configuring Storage Failover
    About Configuring Storage Failover
    Cluster Configuration Data
    Mapping Information
    NFS/CIFS Failover
    Configuring iSCSI Targets for Failover
    Configuring Fibre Channel Targets for Failover

7 Advanced Setup
    About Advanced Setup
    Setting Failover Mode
    Adding Additional Virtual Hostnames

8 System Operations
    About System Operations
    Checking Status of Cluster
    Checking Cluster Failover Mode
    Failure Events
    Service Repair
    Replacing a Faulted Node
    Maintenance
        System Upgrades
        Upgrade Procedure

9 Testing and Troubleshooting
    About Testing and Troubleshooting
    Resolving Name Conflicts
    Specifying Cache Devices
    Manually Triggering a Failover
    Verifying DNS Entries
    Verifying Moving Resources Between Nodes
    Verifying Failing Service Back to Original Node
    Gathering Support Logs


    Preface

This documentation presents information specific to Nexenta Systems, Inc. products. The information is for reference purposes and is subject to change.

    Intended Audience

The information in this document is confidential and may not be disclosed to any third parties without the prior written consent of Nexenta Systems, Inc.

This documentation is intended for Network Storage Administrators. It assumes that you have experience with NexentaStor and with data storage concepts, such as NAS, SAN, NFS, and ZFS.

Documentation History

    The following table lists the released revisions of this documentation.

Contacting Support

Visit the Nexenta Systems, Inc. customer portal at http://www.nexenta.com/corp/support/support-overview/account-management. Log in and browse the customer knowledge base.

    Choose a method for contacting support:

Using the NexentaStor user interface, NMV (Nexenta Management View):

a. Click Support.

b. Complete the request form.

c. Click Send Request.

Using the NexentaStor command line, NMC (Nexenta Management Console):

a. At the command line, type support.

b. Complete the support wizard.

Table 1: Documentation Revision History

Revision                 Date            Description
3000-hac-v2.5-000023-B   December 2012   GA


    Comments

Your comments and suggestions to improve this documentation are greatly appreciated. Send any feedback to [email protected] and include the documentation title, number, and revision. Refer to specific pages, sections, and paragraphs whenever possible.


1 Introduction

    This section includes the following topics:

    About Introduction

    Product Features

    Server Monitoring and Failover

    Storage Failover

    Exclusive Access to Storage

    Service Failover

    Additional Resources

About Introduction

    The following section explains high-availability and failover concepts.

Product Features

The Nexenta High Availability Cluster (HAC) plugin provides a storage volume-sharing service. It makes one or more shared volumes highly available by detecting system failures and transferring ownership of the shared volumes to the other server in the cluster pair.

An HA Cluster consists of two NexentaStor appliances. Neither system is designated as the primary or secondary system. You can manage both systems actively for shared storage, although only one system owns each volume at a time.

HA Cluster is based on RSF-1 (Resilient Server Facility), an industry-leading high-availability and cluster middleware application that ensures critical applications and services continue to run in the event of system failures.


    Server Monitoring and Failover

HA Cluster provides server monitoring and failover. Protection of services, such as iSCSI, involves cooperation with other modules such as the SCSI Target plugin.

    You can execute NMC commands on all appliances in the group.

    An HA Cluster consists of:

NexentaStor Appliances: Run a defined set of services and monitor each other for failures. HAC connects these NexentaStor appliances through various communication channels, through which they exchange heartbeats that provide information about their states and the services that run on them.

RSF-1 Cluster Service: A transferable unit that consists of:

    Application start-up and shutdown code

    Network identity and appliance data

You can migrate services between cluster appliances manually, or automatically if one appliance fails.

    To view the existing groups of appliances, using NMC:

    Type:

    nmc:/$ show group

    To view the existing groups of appliances, using NMV:

1. Click Status > General.

2. In the Appliances panel, click .

    Storage Failover

The primary benefit of HA Cluster is to detect storage system failures and transfer ownership of shared volumes to the alternate NexentaStor appliance. All configured services fail over to the other server. HA Cluster ensures service continuity during exceptional events, including power outages, disk failures, appliances that run out of memory or crash, and other failures.

Currently, the minimum time to detect that an appliance has failed is approximately 10 seconds. The failover and recovery time is largely dependent on the amount of time it takes to re-import the data volume on the alternate appliance. Best practices to reduce the failover time include using fewer zvols and file systems for each data volume. When using fewer file systems, you may want to use other properties, such as reservations and quotas, to control resource contention between multiple applications.

In the default configuration, HA Cluster fails over storage services if network connectivity is lost. HA Cluster automatically determines which network device to monitor based on the services that are bound to an interface. It checks all nodes in the cluster, so even if a node is not running any services, HA Cluster continues to monitor the unused interfaces. If the state of one changes to offline, it prevents failover to this node for services that are bound to that interface. When the interface recovers, HA Cluster enables failover for that interface again.

Other types of failure protection include link aggregation for network interfaces and MPxIO for protection against SAS link failures.

Note that HA Cluster does not migrate local appliance configuration. For example, it does not move local Users that were configured for the NexentaStor appliance. Nexenta highly recommends that you use a directory service such as LDAP for that case.

Exclusive Access to Storage

You access a shared volume exclusively through the appliance that currently provides the corresponding volume-sharing service. To ensure this exclusivity, HA Cluster provides reliable fencing through the utilization of multiple types of heartbeats. Fencing is the process of isolating a node in an HA cluster, and/or protecting shared resources when a node malfunctions. Heartbeats, or pinging, allow for constant communication between the servers. The most important of these is the disk heartbeat in conjunction with any other type. Generally, additional heartbeat mechanisms increase reliability of the cluster's fencing logic; the disk heartbeats, however, are essential.

    HA Cluster can reboot the failed appliance in certain cases:

Failure to export the shared volume from an appliance that has failed to provide the (volume-sharing) service. This functionality is analogous to STONITH, the technique for fencing in computer clusters.

In addition, the NexentaStor RSF-1 cluster provides a number of other failsafe mechanisms:

When you start a (volume-sharing) service, make sure that the IP address associated with that service is NOT attached to any interface. The cluster automatically detects and reports if an interface is using the IP address. If it is, the local service does not perform the start-up sequence.

On disc systems which support SCSI reservations, you can place a reservation on a disc before accessing the file systems, and have the system set to panic if it loses the reservation. This feature also serves to protect the data on a disc system.

Note: HA Cluster also supports SCSI-2 reservations.


SCSI-2 PGR for Additional Protection

HA Cluster employs SCSI-2 PGR (persistent group reservations) for additional protection. PGR enables access for multiple nodes to a device and simultaneously blocks access for other nodes.

You can enable PGR by issuing SCSI reservations on the devices in a volume before you import them. This feature enforces data integrity, which prevents the pool from importing on two nodes at any one time.

Nexenta recommends that you always deploy the HA Cluster with a shared disk (quorum device) and at least one or more heartbeat channels (Ethernet or serial). This configuration ensures that the cluster always has exclusive access that is independent of the storage interconnects used in the cluster.

Note: SCSI-3 PGR is not supported for HA Cluster because it does not work with SATA drives and has certain other limitations.

Service Failover

As discussed previously, system failures result in the failover of ownership of the shared volume to the alternate node. As part of the failover process, HA Cluster migrates the storage services that are associated with the shared volume(s) and restarts the services on the alternate node.

    Additional Resources

Nexenta has various professional services offerings to assist with managing HA Cluster. Nexenta strongly encourages a services engagement to plan and install the plugin. Nexenta also offers training courses on high availability and other features. For service and training offerings, go to our website at:

    http://www.nexenta.com

    For troubleshooting cluster issues, contact:

    [email protected]

    For licensing questions, email:

    [email protected]

HA Cluster is based on the Resilient Server Facility from High-Availability.com. For additional information on cluster concepts and theory of operations, visit their website at:

    http://www.high-availability.com

For more advanced questions that are related to the product, check our FAQ for the latest information:

    http://www.nexenta.com/corp/frequently-asked-questions


2 Installation & Setup

    This section includes the following topics:

    About Installation and Setup

    Prerequisites

    Adding Plugins

    Sample Network Architecture

About Installation and Setup

The HA Cluster plugin provides high-availability functionality for NexentaStor. You must install this software on each NexentaStor appliance in the cluster. This section describes how to set up and install HAC on both appliances.

    Prerequisites

HA Cluster requires shared storage between the NexentaStor clustered appliances. You must also set up:

One IP address for each cluster service unit (zvol, NAS folder or iSCSI LUN)

Multiple NICs (Ethernet cards) on different subnets for cluster heartbeat and NMV management (this is a good practice, but not mandatory)

    DNS entry for each service name in the cluster

NexentaStor supports the use of a separate device as a transaction log for committed writes. HA Cluster requires that you make the ZFS Intent Log (ZIL) part of the same storage system as the shared volume.

Note: SCSI and iSCSI failover services use the SCSI Target plugin, which is included with the NexentaStor software.


    Adding Plugins

    The HAC plugin installs just as any other NexentaStor plugin installs.

    Installing Plugins

    You can install plugins through both NMC and NMV.

To install a plugin, using NMC:

1. Type:

nmc:/$ setup plugin

Example:

nmc:/$ setup plugin rsf-cluster

2. Confirm or cancel the installation.

3. Repeat Step 1 through Step 2 for the other node.

To install a plugin, using NMV:

1. Click Settings > Appliance.

2. In the Administration panel, click Plugins.

3. Click Add Plugin in the Remotely-available plugins section.

4. Confirm or cancel the installation.

5. Repeat Step 1 through Step 4 for the other node.

Sample Network Architecture

    The cluster hardware setup needs:

    Two x86/64-bit systems with a SAS-connected JBOD

    Two network interface cards (not mandatory, but good practice)

The following illustration is an example of an HA Cluster deployment of a Nexenta iSCSI environment. The host server attaches to iSCSI LUNs which are connected to the Nexenta appliances node A and node B. The Nexenta appliances use the Active/Passive function of the HA cluster. Node A services one group of iSCSI LUNs while node B presents a NAS storage LUN.

Note: The plugins may not be immediately available from your NexentaStor repository. It can take up to six hours before the plugins become available.


Figure 2-1: High Availability Configuration

[Diagram not reproduced: a host server connected over the client data network to NexentaStor appliances node A and node B in a high availability configuration, with a shared volume attached to both appliances.]



3 Configuring the HA Cluster

    This section includes the following topics:

    About Configuring the HA Cluster

    Binding the Nodes Together with SSH

    Configuring the HA Cluster

    About Configuring the HA Cluster

You can configure and manage the HA cluster through the appliance's web interface, the Nexenta Management View (NMV), or the Nexenta Management Console (NMC).

Binding the Nodes Together with SSH

You must bind the two HA nodes together with the SSH protocol so that they can communicate.

    To bind the two nodes, using NMC:

1. Type the following on node A:

nmc:/$ setup network ssh-bind root@

2. When prompted, type the Super User password.

3. Type the following on node B:

nmc:/$ setup network ssh-bind root@

4. When prompted, type the Super User password.
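For example, assuming the two appliances use the hypothetical hostnames nodeA and nodeB (the commands above omit the hostname argument), the binding commands would read:

On node A:

nmc:/$ setup network ssh-bind root@nodeB

On node B:

nmc:/$ setup network ssh-bind root@nodeA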

Note: If ssh-binding fails, you can manually configure the /etc/hosts file, which contains the Internet host table. (Type setup appliance hosts to access the file.)


    Configuring the HA Cluster

You need to configure multiple options for the HA cluster before you can use it successfully.

    To configure an HA cluster, using NMV:

1. Select Settings > HA Cluster.

2. Type the Admin name and password.

3. Click Initialize.

4. Type or change the Cluster name.

5. Type a description (optional).

6. Select the following parameters:

Enable Network Monitoring
The Cluster monitors the network for nodes.

Configure Serial Heartbeat
The nodes exchange serial heartbeat packets through a dedicated RS232 serial link between the appliances, using a custom protocol.

7. Click Configure.

8. Repeat Step 1 through Step 7 for the second node.

To configure an HA cluster, using NMC:

Type:

    nmc:/$ create group rsf-cluster

    Use the following options when prompted:

    Group name:

    Appliances: nodeA, nodeB

    Description:

    Heartbeat disk:

Enable inter-appliance heartbeat through primary interfaces?: Yes

    Enable inter-appliance heartbeat through serial ports?: No

Note: You cannot assign a NexentaStor appliance to more than one HA cluster.


4 Configuring the Cluster's Shared Volumes

    This section includes the following topics:

About Configuring the Cluster's Shared Volumes

Configuring the Cluster's Shared Volumes

About Configuring the Cluster's Shared Volumes

After setting up the HA cluster, you must create one or more shared volumes.

Configuring the Cluster's Shared Volumes

After cluster initialization, NexentaStor automatically redirects you to the adding volume services page. The shared logical hostname is a name associated with the failover IP interface that is moved to the alternate node as part of the failover.

In the event of a system failure, once the volume is shared, the volume remains accessible to users as long as one of the systems continues to run.

    Importing the Current Node

Although the appliances can access all of the shared volumes, only the volumes that have been imported to the current appliance display in the list. If you want to create a new cluster service with a specific shared volume, you must import this volume to the current node.

Note: If you receive an error indicating that the shared logical hostname is not resolvable, see Resolving Name Conflicts.


To import the volume to the current node, using NMC:

1. Type:

setup group rsf-cluster shared volume add

System response:

Scanning for volumes accessible from all appliances

2. Validate the cluster interconnect and share the volume:

    ...verify appliances interconnect: Yes

    Initial timeout: 60

    Standard timeout: 60
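For example, with a hypothetical cluster group named HA-Cluster (the same illustrative name used in the Resolving Name Conflicts chapter), the full command reads as follows; NMC then scans for volumes and prompts you to choose the one to share:

nmc:/$ setup group rsf-cluster HA-Cluster shared-volume add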

    Adding a Virtual IP or Hostname

    You can add virtual IPs/hostnames per volume service.

    To add a virtual IP address, using NMC:

1. Type:

nmc:/$ setup group rsf-cluster vips add

System response:

nmc:/$ VIP____

2. Type the virtual IP address.
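For example, assuming the illustrative group HA-Cluster and the virtual hostname rsf-data with address 172.16.3.22 (names and address are taken from the /etc/hosts example in the Testing and Troubleshooting chapter; the exact prompt text may differ), the exchange might look like this:

nmc:/$ setup group rsf-cluster HA-Cluster vips add
VIP: 172.16.3.22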

    To add a virtual IP address, using NMV:

1. In the Cluster Settings panel, click Advanced.

2. Click Additional Virtual Hostnames.

3. Click Add a new virtual hostname.

4. Type values for the following:

    Virtual Hostname

    Netmask

    Interface on node A

    Interface on node B

5. Click Add.


5 Heartbeat and Network Interfaces

    This section includes the following topics:

    About Heartbeat and Network Interfaces

    Heartbeat Mechanism

    Configuring the Cluster and Heartbeat Interfaces

    Serial Link

About Heartbeat and Network Interfaces

NexentaStor appliances in the HA Cluster constantly monitor the states and status of the other appliances in the Cluster through heartbeats. Because HA Cluster servers must determine that an appliance (member of the cluster) has failed before taking over its services, you configure the cluster to use several communication channels through which to exchange heartbeats.

Heartbeat Mechanism

In Nexenta, VDEV labels of devices in the shared volume perform the heartbeat function. If a shared volume consists of a few disks, NexentaStor uses VDEV labels for two disks for the heartbeat mechanism. You can specify which disks.

Though the quorum disk option still remains in the configuration file, Nexenta recommends using the shared volume's labels.

The heartbeat mechanism uses sectors 512 and 518 in the blank 8K space of the VDEV label on each of the shared disks.

The loss of all heartbeat channels represents a failure. If an appliance wrongly detects a failure, it may attempt to start a service that is already running on another server, leading to so-called split-brain syndrome. This can result in confusion and data corruption. Multiple, redundant heartbeats prevent this from occurring.


HA Cluster supports the following types of heartbeat communication channels:

Shared Disk/Quorum Device
Accessible and writable from all appliances in the cluster, or VDEV labels of the devices in the shared volume.

Network Interfaces
Including configured interfaces, unconfigured interfaces, and link aggregates.

Serial Links

If two NexentaStor appliances do not share any services, then they do not require direct heartbeats between them. However, each member of a cluster must transmit at least one heartbeat to propagate control and monitoring requests. The heartbeat monitoring logic is defined by two parameters: X and Y, where:

X equals the number of failed heartbeats the interface monitors before taking any action

Y represents the number of active heartbeats an interface monitors before making it available again to the cluster.

    The current heartbeat defaults are 3 and 2 heartbeats, respectively.

NexentaStor also provides protection for network interfaces through link aggregation. You can set up aggregated network interfaces using NMC or NMV.

Configuring the Cluster and Heartbeat Interfaces

When you define the cluster, note that a NexentaStor appliance cannot belong to more than one cluster.

    To define the HA cluster, using NMC:

    Type:

    nmc:/$ create group rsf-cluster

    System response:

    Group name : cluster-example

    Appliances : nodeA, nodeB

    Description : some description

    Scanning for disks accessible from all appliances ...

Heartbeat disk : c2t4d0
Enable inter-appliance heartbeat through dedicated heartbeats disk? No
Enable inter-appliance heartbeat through primary interfaces? Yes
Enable inter-appliance heartbeat through serial ports? No

    Custom properties :

    Bringing up the cluster nodes, please wait ...


Jun 20 12:18:39 nodeA RSF-1[23402]: [ID 702911 local0.alert] RSF-1 cold restart: All services stopped.

    RSF-1 cluster 'cluster-example' created.

    Initializing ..... done.

    To configure the Cluster, using NMV:

1. Click Settings > HA Cluster > Initialize.

2. Type in a Cluster Name and Description.

3. Select the following options:

Enable Network Monitoring

Configure Serial Heartbeat

4. Click Yes to create the initial configuration. Click OK.

The cluster is initialized. You can add shared volumes to the cluster.

    To change heartbeat properties, using NMC:

    Type:

nmc:/$ setup group rsf-cluster hb_properties

System response:

Enable inter-appliance heartbeat through primary interfaces?: Yes

    Enable inter-appliance heartbeat through serial ports?: No

    Proceed: Yes

To add additional hostnames to a volume service:

1. Click Advanced > Additional Virtual Hostnames.

2. Click Add a new virtual hostname.

Serial Link

HAC exchanges serial heartbeat packets through a dedicated RS232 serial link between any two appliances, using a custom protocol. To prevent routing problems affecting this type of heartbeat, do not use IP on this link.

    The serial link requires:

    Spare RS232 serial ports on each HA Cluster server

Crossover, or null modem RS232 cable, with an appropriate connector on each end

You can use null modem cables to connect pieces of the Data Terminal Equipment (DTE) together or attach console terminals.

On each server, enable the relevant serial port devices but disable any login, modem or printer services running on it. Do not use the serial port for any other purpose.


    To configure serial port heartbeats:

Type Yes to the following question during HA cluster group creation:

    Enable inter-appliance heartbeat through serial ports?


6 Configuring Storage Failover

    This section includes the following topics:

    About Configuring Storage Failover

    Cluster Configuration Data

    Mapping Information

    NFS/CIFS Failover

    Configuring iSCSI Targets for Failover

    Configuring Fibre Channel Targets for Failover

    About Configuring Storage Failover

HA Cluster detects storage system failures and transfers ownership of shared volumes to the alternate NexentaStor appliance. HA Cluster ensures service continuity in the presence of service-level exceptional events, including power outages, disk failures, an appliance running out of memory or crashing, and so on.

    Cluster Configuration Data

When you configure SCSI targets in a cluster environment, make sure that you are consistent with configurations and mappings across the cluster members. HAC automatically propagates all SCSI Target operations. However, if the alternate node is not available at the time of the configuration change, problems can occur. By default, the operation results in a warning to the User that the remote update failed. You can also set HA Cluster to synchronous mode. In this case, the action fails completely if the remote update fails.

To protect local configuration information that did not migrate, periodically save this configuration to a remote site (perhaps the alternate node) and then use NMC commands to restore it in the event of a failover.


    To save the cluster, using NMC:

    Type:

    setup application configuration -save

    To restore the cluster, using NMC:

Type:

setup application configuration -restore

Following are examples of using NMC commands to synchronize the cluster configuration, if one of the nodes is not current. In the following examples, node A contains the latest configuration and node B needs updating.

    Example:

    To run this command from node A:

    Type:

    nmc:/$ setup iscsi config restore-to nodeB

    The above command:

    Saves the configuration of node A

    Copies it to node B

    Restores it on node B

    Example:

    To run this command from node B:

    Type:

    nmc:/$ setup iscsi config restore-from nodeA

    The restore command saves key configuration data that includes:

    Target groups

    Host groups (stmf.config)

Targets

Initiators

    Target portal groups (iscsi.conf)

If you use CHAP authentication, and you configured the CHAP configuration through NMC or NMV, then you can safely save and restore the configuration.

Note: Restore operations are destructive. Only perform them during a planned downtime window.


    Mapping Information

Use SCSI Target to map zvols from the cluster nodes to client systems. It is critical that the cluster nodes contain the same mapping information. Mapping information is specific to the volume and is stored with the volume itself.

You can perform manual maintenance tasks on the mapping information using the mapmgr command.

NFS/CIFS Failover

You can use HA Cluster to ensure the availability of NFS shares to users. However, note that HA Cluster does not detect the failure of the NFS server software.

    Configuring iSCSI Targets for Failover

You can use HA Cluster to fail over iSCSI volumes from one cluster node to another. The target IQN moves as part of the failover.

Setting up iSCSI failover involves setting up a zvol in the shared volume. Note that you perform the process of creating a zvol and sharing it through iSCSI separately from the HA Cluster configuration.

If you create iSCSI zvols before marking the zvols' volume as a shared cluster volume, then when you share the cluster volume as an active iSCSI session, it may experience some delays. Depending on the network, application environment, and active workload, you may also see command-level failures or disconnects during this period.

When you add a shared volume to a cluster which has zvols created as backup storage for iSCSI targets, it is vital that you configure all client iSCSI initiators, regardless of the operating system, to access those targets using the shared logical hostname that is specified when the volume service was created, rather than a real hostname associated with one of the appliances.

Note that the cluster manages all aspects of the shared logical hostname configuration. Therefore, do not configure the shared logical hostname manually. Furthermore, unless the shared volume service is running, the shared logical hostname is not present on the network; however, you can verify it with the ICMP ping command.

    To configure iSCSI targets on the active appliance, using NMV:

1. Click Data Management > SCSI Target > zvols.

2. Create a virtual block device using the shared volume.

Make the virtual block device >200 MB.


HAC automatically migrates the newly created zvol to the other appliance on failover. Therefore, you do not have to duplicate it manually.

3. From the iSCSI pane, click iSCSI > Target Portal Groups and define a target portal group.

HAC automatically replicates the newly created target portal group to the other appliance.

Note: It is critical that the IPv4 portal address is the shared logical hostname specified when the volume service was created, instead of a real hostname associated with one of the appliances.

To create an iSCSI target and add it to the target portal group, using NMV:

1. Click iSCSI > Targets.

This limits zvol visibility from client initiators to the target portal group. The newly created iSCSI target is automatically replicated to the other appliance.

2. Type a name and an alias.

    The newly created iSCSI target displays in the Targets page.

    To create a LUN mapping to the zvol, using NMV:

1. From the SCSI Target pane, click Mappings. This creates a LUN mapping to the zvol for use as backup storage for the iSCSI target.

The newly created LUN mapping is automatically migrated to the other appliance on failover.

2. On the client, configure the iSCSI initiator to use both the IQN of the iSCSI target created and the shared logical hostname associated with both the volume service and the target portal group to access the zvol through iSCSI, as shown in the example below.
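As an illustration only, on a Linux client that uses the open-iscsi initiator (a client stack not covered by this guide), discovery and login against the shared logical hostname rather than a node hostname might look like the following; the hostname rsf-data and the IQN are placeholders:

iscsiadm -m discovery -t sendtargets -p rsf-data:3260
iscsiadm -m node -T iqn.2012-01.com.example:shared-zvol -p rsf-data:3260 --login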

Failover time varies depending on the environment. As an example, when initiating failover for a pool containing six zvols, the observed failover time is 32 seconds. Nodes may stall while the failover occurs, but otherwise recover quickly.

    See Also:

    Managing SCSI Targets in the NexentaStor User Guide

    Configuring Fibre Channel Targets for Failover

As a prerequisite for configuring Fibre Channel targets, change the HBA port modes of both appliances from Initiator mode to Target mode.

    To change the HBA port mode, using NMV:

1. Click Data Management > SCSI Target Plus > Fibre Channel > Ports.


2. Select Target from the dropdown menu.

3. Once you change the HBA port modes of both appliances from Initiator mode to Target mode, reboot both appliances so the Target mode changes can take effect.

    To configure Fibre Channel targets on the appliance, in NMV:

1. Click Data Management > SCSI Target on the appliance where the volume service is currently running.

2. In the zvols pane, click Create.

3. Create a zvol with the following characteristics:

Virtual block device: 200 MB

Use the shared volume: Yes

The newly created zvol is automatically migrated to the other appliance on failover.

4. From the SCSI Target pane, click Mappings. This creates a LUN mapping to the zvol for use as backup storage for the iSCSI target.

The newly created LUN mapping is automatically migrated to the other appliance on failover.



7 Advanced Setup

    This section includes the following topics:

    About Advanced Setup

    Setting Failover Mode

    Adding Additional Virtual Hostnames

    About Advanced Setup

This section describes advanced functions of HA Cluster, such as setting the failover mode, adding virtual hostnames and volumes, and other miscellaneous options.

    Setting Failover Mode

The failover mode defines whether or not an appliance attempts to start a service when it is not running. There are separate failover mode settings for each appliance that can run a service.

You can set the failover mode to automatic or manual. In automatic mode, the appliance attempts to start the service when it detects that there is no available parallel appliance running in the cluster.

In manual mode, it does not attempt to start the service, but it generates warnings when it is not available. If the appliance cannot obtain a definitive answer about the state of the service, or the service is not running anywhere else, the appropriate timeout must expire before you can take any action. The primary service failover modes are typically set to automatic to ensure that an appliance starts its primary service(s) on boot up. Note that putting a service into manual mode when the service is already running does not stop that service; it only prevents the service from starting on that appliance.

To set the failover mode to manual, using NMV:

1. Click Advanced Setup > Cluster Operations > Set all Manual.


2. Click Yes to confirm.

To set the failover mode to manual, using NMC:

Type:

nmc:/$ setup group rsf-cluster shared-volume manual

To set the failover mode to automatic, using NMV:

1. Click Advanced Setup > Cluster Operations > Set all Automatic.

2. Click Yes to confirm.

To set the failover mode to automatic, using NMC:

Type:

nmc:/$ setup group rsf-cluster shared-volume automatic
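For example, with the illustrative cluster group HA-Cluster and shared volume ha-test used elsewhere in this guide, the two commands might read as follows; the exact placement of the group and volume names is an assumption, and NMC prompts for any values you omit:

nmc:/$ setup group rsf-cluster HA-Cluster shared-volume ha-test manual
nmc:/$ setup group rsf-cluster HA-Cluster shared-volume ha-test automatic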

    To stop all services in the Cluster, using NMV:

1. Click Stop All Services.

2. Click Yes to confirm.

    Adding Additional Virtual Hostnames

After you have initialized the cluster, you can add shared volumes and virtual hostnames to a cluster.

    See Also:

    Adding a Virtual IP or Hostname

Note: Until you select Set All Services to manual, automatic, or stopped, the Restore and Clear Saved State buttons do not display.

Before HAC performs an operation, it saves the state of the services in the cluster, which you can later re-apply to the cluster using the Restore button. Once HAC restores the service state, HAC clears the saved state.


8 System Operations

    This section includes the following topics:

    About System Operations

    Checking Status of Cluster

    Checking Cluster Failover Mode

    Failure Events

    Service Repair

    Replacing a Faulted Node

    Maintenance

    System Upgrades

    About System Operations

There are a variety of commands and GUI screens to help you with daily cluster operations. There is a set of cluster-specific commands to supplement NMC.

Checking Status of Cluster

    You can check the status of the Cluster at any time.

    To see a list of available commands, using NMC:

    Type:

    help keyword cluster

Note: Although both cluster nodes can access a shared volume using the show volume command, that command only shows the volume if it's running on the node currently owning that volume.


    You can use NMV to check the overall cluster status.

    To check the status of the Cluster, using NMV:

Click Settings > HA Cluster.

    To check the status of the Cluster, using NMC:

    Type:

    show group rsf-cluster

    Checking Cluster Failover Mode

You can configure HA Cluster to detect failures and alert the user, or to fail over the shared volumes automatically.

    To check the current appliance configuration mode:

    Type:

nmc:/$ show group rsf-cluster

System response:

    PROPERTY VALUE

    name : HA-Cluster

    appliances : [nodeB nodeA]

    hbipifs : nodeB:nodeA: nodeA:nodeB:

    netmon : 1

    info : Nexenta HA Cluster

    generation : 1

    refresh_timestamp : 1297423688.31731

    hbdisks : nodeA:c2t1d0 nodeB:c2t0d0

    type : rsf-cluster

    creation : Feb 11 03:28:08 2011

    SHARED VOLUME: ha-test

svc-ha-test-ipdevs : rsf-data nodeB:e1000g1 nodeA:e1000g1

svc-ha-test-main-node : nodeA

svc-ha-test-shared-vol-name : ha-test

HA CLUSTER STATUS: HA-Cluster

nodeA:
ha-test running auto unblocked rsf-data e1000g1 60 60

nodeB:
ha-test stopped auto unblocked rsf-data e1000g1 60 60


    Failure Events

NexentaStor tracks various appliance components and their state. If and when failover occurs (or any service changes to a broken state), NexentaStor sends an email to the administrator describing the event.

Note: During the NexentaStor installation, you set up the SMTP configuration and test that you can receive emails from the appliance.

Service Repair

    There are two broken states:

Broken_Safe

A problem occurred while starting the service on the server, but it was stopped safely and you can run it elsewhere.

Broken_Unsafe

A fatal problem occurred while starting or stopping the service on the server. The service cannot run on any other server in the cluster until it is repaired.

To repair a shared volume which is in a broken state, using NMC:

    nmc:/$ setup group rsf-cluster shared-volume repair

    This initiates and runs the repair process.
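For example, for the illustrative group HA-Cluster and shared volume ha-test (the placement of the group and volume names is an assumption; NMC prompts for anything you omit):

nmc:/$ setup group rsf-cluster HA-Cluster shared-volume ha-test repair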

    To mark a service as repaired, using NMV:

1. Click Settings > HA Cluster.

2. In the Action column, set the action to repaired.

3. Click Confirm.

Replacing a Faulted Node

NexentaStor provides advanced capabilities to restore a node in a cluster, in case the state changes to out of service. There is no need to delete the cluster group on another node and reconfigure it and all of the cluster services.

    To replace a faulted node, using NMC:

nmc:/$ setup group rsf-cluster replace_node


After executing the command, the system asks you to choose which node to exclude from the cluster and which to use instead. The system checks the host parameters of the new node and, if they match the requirements of the cluster group, it replaces the old one.

Note: Before performing a replace node operation, you must set up the identical configuration on the new or restored hardware, which HAC uses to replace the old faulted node. Otherwise, the operation fails. Make the serial port heartbeat configuration the same as well.
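For example, with the illustrative cluster group HA-Cluster (the group name placement is an assumption), the command reads as follows; NMC then prompts for the node to exclude and the node to use in its place:

nmc:/$ setup group rsf-cluster HA-Cluster replace_node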

    Maintenance

You can stop HAC from triggering a failover so you can perform maintenance.

    To stop HAC from triggering a failover, using NMC:

    Type:

    nmc:/$ setup group rsf-cluster shared-volume manual

    System Upgrades

Occasionally, you may need to upgrade the NexentaStor software on the appliance. Since this may require a reboot, manage it carefully in a cluster environment. HAC reminds the User that the cluster service is not available during the upgrade.

    Upgrade Procedure

    When upgrading, you have an active and a passive node.

    To upgrade both nodes, using NMC:

1. Log in to the passive node and type:

    nmc:/$ setup appliance upgrade

2. After the upgrade successfully finishes, log in to the active node and type the following to fail over to the passive node:

nmc:/$ setup group rsf-cluster failover

3. After failover finishes, the nodes swap. The active node becomes the passive node and vice versa. Type the following command on the current passive node:

    nmc:/$ setup appliance upgrade


4. Type the following to run the failover command on the current active node and thereby make it passive again:

nmc:/$ setup group rsf-cluster failover
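As a summary, assuming the illustrative cluster group HA-Cluster with nodeA initially active and nodeB passive (names and argument placement are assumptions; NMC prompts for omitted values), the whole sequence might look like this:

nmc@nodeB:/$ setup appliance upgrade
nmc@nodeA:/$ setup group rsf-cluster HA-Cluster failover
nmc@nodeA:/$ setup appliance upgrade
nmc@nodeB:/$ setup group rsf-cluster HA-Cluster failover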



9 Testing and Troubleshooting

    This section includes the following topics:

    About Testing and Troubleshooting

    Resolving Name Conflicts

    Specifying Cache Devices

    Manually Triggering a Failover

    Verifying DNS Entries

    Verifying Moving Resources Between Nodes

    Verifying Failing Service Back to Original Node

    Gathering Support Logs

About Testing and Troubleshooting

The following section contains various testing and troubleshooting tips.

Resolving Name Conflicts

You must make the appliances in the HA cluster group resolvable to each other. This means they must be able to detect each other on the network and communicate. To achieve this, you can either configure your DNS server accordingly, or add records to /etc/hosts. If you do not want to edit /etc/hosts manually, you can set this up when you create the shared volumes. (You have to enter a virtual shared service hostname and a virtual IP address.)

Defining these parameters allows the software to modify the /etc/hosts tables automatically on all HA-cluster group members.

Note: You can use a virtual IP address instead of a shared logical hostname.


You can add these host records automatically when you create the shared volumes. If a shared logical hostname cannot find a server, then you need to define the IP address for that server. Then it adds records to that server automatically.

You can choose to manually configure your DNS server, or local hosts tables on the appliances.

    To see more information, using NMC:

    Type:

    nmc:/$ setup appliance hosts -h

Alternatively, you could allow this cluster configuration logic to update your local hosts records automatically.

    To create a shared service, using NMC:

    Type:

nmc:/$ setup group rsf-cluster HA-Cluster shared-volume add

    System response:

    Scanning for volumes accessible from all appliances ...

To configure the shared cluster volume manually:

1. Using a text editor, open the /etc/hosts file on node A.

2. Add the IP addresses for each node:

    Example:

    172.16.3.20 nodeA nodeA.mydomain

    172.16.3.21 nodeB nodeB.mydomain

172.16.3.22 rsf-data
172.16.3.23 rsf-data2

3. Repeat Step 1 through Step 2 for node B.

Specifying Cache Devices

NexentaStor allows you to configure specific devices in a data volume as cache devices. For example, using solid-state disks as cache devices can improve performance for random read workloads of mostly static content.

    To specify cache devices when you create the volume, using NMC:

    Type:

    nmc:/$ setup volume grow

Cache devices are also available for shared volumes in the HA Cluster. However, note that if you use local disks as cache, they cannot fail over since they are not accessible by the alternate node. After failover, therefore, the volume is marked Degraded because of the missing devices.


If local cache is critical for performance, then set up local cache devices for the shared volume on each cluster node when you first configure the volume. This involves setting up local cache on one node, and then manually failing over the volume to the alternate node so that you can add local cache there as well. This ensures the volume has cache devices available automatically after a failover; however, the data volume changes to Degraded and remains in that state because the cache devices are always unavailable.

Additionally, users can control the cache settings for zvols within the data volume. In a cluster, set the zvol cache policy to write-through, not write-back.

    To administer the cache policy, using NMC:

    Type:

nmc:/$ setup volume zvol cache
nmc:/$ setup zvol cache
nmc:/$ show zvol cache
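For example, assuming a shared volume named datavol and a zvol named datavol/vm01 (both names, and the placement of the arguments, are assumptions; NMC prompts for anything you omit), reviewing and adjusting the cache policy might look like this:

nmc:/$ show zvol datavol/vm01 cache
nmc:/$ setup zvol datavol/vm01 cache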

    Manually Triggering a Failover

You can manually trigger a failover between systems when needed. Performing a failover from the current appliance to the specified appliance causes the volume-sharing service to stop on the current appliance, and the opposite actions take place on the passive appliance. Additionally, the volume exports to the other node.

    To manually trigger a failover, using NMC:

    Type:

nmc:/$ setup group rsf-cluster failover
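For example, for the illustrative cluster group HA-Cluster (the group name placement is an assumption; depending on the release, you may also be prompted for the volume service to fail over):

nmc:/$ setup group rsf-cluster HA-Cluster failover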

    Verifying DNS Entries

There is a name associated with the cluster that is referred to as a shared logical hostname. This hostname must be resolvable by the clients that are accessing the cluster. One way to manage this is with DNS.

Install a DNS management application. Through this software, you can view the host record of the shared cluster hostname to verify that it was set up properly.
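As an illustration (not part of the original procedure), you can also query the record directly from any host with standard DNS tools; rsf-data is the placeholder shared hostname used elsewhere in this guide:

nslookup rsf-data
dig +short rsf-data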

    Verifying Moving Resources Between Nodes

Use a manual failover to move a resource from one NexentaStor node to another node in the cluster.


To move a shared volume from node A to node B:

    Type:

nmc@nodeA:/$ setup group shared-volume show

    System response:

HA CLUSTER STATUS: HA-Cluster

nodeA:
vol1-114 stopped manual unblocked 10.3.60.134 e1000g0 20 8

nodeB:
vol1-114 running auto unblocked 10.3.60.134 e1000g0 20 8

    Verifying Failing Service Back to Original Node

The first system response below illustrates a failed failover. The second illustrates a successful failover.

To verify failover back to the original node:

    Type:

nmc:/$ setup group shared-volume failover

    System response:

SystemCallError: (HA Cluster HA-Cluster): cannot set mode for cluster node 'nodeA': Service ha-test2 is already running on nodeA (172.16.3.20)

    Type:

nmc:/$ setup group shared-volume failover

    System response:

    Waiting for failover operation to complete......... done.

nodeB:
ha-test2 running auto unblocked rsf-data2 e1000g1 60 60

Gathering Support Logs

    To view support logs, using NMV:

Click View Log.

    To view support logs, using NMC:

    Type:

    nmc:/$ show log




3000-hac-v2.5-000023-B

Global Headquarters
455 El Camino Real
Santa Clara, California 95050

Nexenta EMEA Headquarters
Camerastraat 8
1322 BC Almere
Netherlands

Houston Office
2203 Timberloch Pl. Ste. 112
The Woodlands, TX 77380

Nexenta Systems Italy
Via Vespucci 8B
26900 Lodi
Italy

Nexenta Systems China
Room 806, Hanhai Culture Building,
Chaoyang District,
Beijing, China 100020

Nexenta Systems Japan
102-0083
Chiyodaku Koujimachi 2-10-3
Tokyo, Japan

Nexenta Systems Korea Chusik Hoesa
3001, 30F World Trade Center
511 YoungDongDa-Ro
GangNam-Gu, 135-729
Seoul, Korea
