
SANique™ Cluster File System (CFS) Version 2.4

Administrator’s Guide

For Linux 2.4 Kernel

May, 2003

Copyright 2003. MacroImpact, Inc. All Rights Reserved.


Notices This information is provided for products and services that are offered in the U.S.A. The licensed program described in this document and all licensed material available for it are provided by MacroImpact under terms of End User License Agreement or any equivalent agreement between us. MacroImpact Inc. (hereafter “MacroImpact”) may not offer the products, services, or features that are discussed in this document in other countries. Any reference to a MacroImpact product, program, or service is not intended to state or imply that only that MacroImpact product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any MacroImpact intellectual property right may be used instead. However, it is the user’s responsibility to evaluate and verify the operation of any non-MacroImpact product, program, or service. MacroImpact may have patents or pending patent applications to cover subject matter that is described in this document. The furnishing of this document does not give you any license to these patents. MACROIMPACT INCORPORATED PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. MacroImpact may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Information concerning non-MacroImpact products was obtained from the suppliers of those products, their published announcements or other publicly available sources. MacroImpact has not tested those products and cannot confirm the accuracy of


performance, compatibility or any other claims related to non-MacroImpact products. Questions on the capabilities of non-MacroImpact products should be addressed to the suppliers of those products. SANique CFS is copyright 2003 by MacroImpact Inc. This includes all software, documentations, and associated materials. MacroImpact and SANique are trademarks or registered trademarks of MacroImpact Inc. in the United States and in Korea. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Qlogic is a trademark of Qlogic Corporation in the United States, other countries, or both. Java is a trademark of Sun Microsystems in the United States, other countries, or both. Pentium is a trademark of Intel Corporation in the United States, other countries, or both. Other company, product, and service names may be trademarks or service marks of others. © Copyright MacroImpact Incorporated 2003. All rights reserved. Printed in Korea.


Contents

Preface

Notices ...................................................................................................................................... ii

About this Book.......................................................................................................................viii

What is Covered in this Guide ................................................................................................viii

Conventions Used in this Guide .............................................................................................. ix

Customer Support.................................................................................................................... ix

Chapter 1: Introducing SANique CFS

What is SANique CFS?.............................................................................................................2

What is Storage Area Network (SAN)?.....................................................................................3

SANique CFS Features ............................................................................................................7

Support for SAN and Cluster Environment...............................................................8

High I/O Performance ................................................................................................9

High Availability and High Scalability ....................................................................... 9

Unix File System Functionality not Supported by SANique CFS 2.4 .....................................10

Technical Support ...................................................................................................................10

Chapter 2: Preparing Installation

Contents of SANique CFS Package .......................................................................................12

System Requirements.............................................................................................................12

Things to Check before Installation ........................................................................................14

Chapter 3: Installing and Configuring SANique CFS

Building SANique 2.4.x Kernel................................................................................................18

Building Standard SANique 2.4.x Kernel by RPM.................................................. 19

Building Standard SANique 2.4.x Kernel by Compiling Kernel Source ................ 20

Installing SANique CFS ..........................................................................................................25

Configuring SANique CFS ......................................................................................................27


Running SANique CFS ...........................................................................................................31

Loading device drivers for Fibre Channel HBA .....................................................31

Starting SANique CFS..............................................................................................31

Creating a quorum partition and CVM log area......................................................33

Creating shared volume devices ............................................................................. 34

Creating SANique shared file systems ................................................................... 34

Mounting SANique shared file systems .................................................................. 35

Performing final sanity check.................................................................................. 36

Chapter 4: Operating SANique CFS

Mounting SANique File Systems ............................................................................................38

Unmounting SANique File Systems........................................................................................38

Listing Device Information ......................................................................................................39

Listing Logical Device Information..........................................................................................40

Listing Mount Information .......................................................................................................40

Listing Node Information.........................................................................................................41

Removing a Member Node.....................................................................................................41

Adding a New Member Node..................................................................................................41

Shutting Down SANique CFS.................................................................................................42

Joining a Failed Member Node Back......................................................................................43

Reconfiguring Lock Space ......................................................................................................43

Meaning of Lock Space............................................................................................43

Reconfiguring Lock Space .......................................................................................44

A Brief Guideline for Choosing Lock Space ...........................................................44

Reconfiguring Global Lock Manager (GLM) ...........................................................................44

Mounting SANique File Systems Automatically......................................................................46

Installing SANique CFS User Quota Package........................................................................47

Reconfiguring Other SANique CFS Parameters ....................................................................48

Chapter 5: Operating SANique CVM

Disk Management ...................................................................................................................50

Managing Disk Devices on Linux ............................................................................ 50

Creating CVM Disks................................................................................................. 51


Removing CVM Disks...............................................................................................53

Listing Device Information ......................................................................................53

Adding New Disks to SANique Cluster................................................................... 54

Removing Disks from SANique Cluster .................................................................. 55

Managing Disk Group .............................................................................................................56

Disk Group................................................................................................................ 56

Creating a Disk Group..............................................................................................57

Changing the Name of a Disk Group ....................................................................... 57

Adding New CVM Disks to a Disk Group ...............................................................58

Listing Disk Group Information ............................................................................... 59

Merging Disk Groups ...............................................................................................59

Splitting a Disk Group ..............................................................................................60

Removing CVM Disks from a Disk Group ...............................................................61

Exporting a Disk Group to another SANique cluster System................................61

Importing a Disk Group from another SANique cluster System............................62

Creating Logical Volumes .......................................................................................................63

Logical Volume Types .............................................................................................63

Logical Volume Configuration ................................................................................. 63

Volume Types and Configuration............................................................................ 65

Creating Concatenation Volumes ............................................................................ 66

Creating RAID-0 Volumes (Enterprise Edition Only) ............................................ 66

Creating RAID-1 Volumes (Enterprise Edition Only) ............................................ 67

Removing Logical Volumes ....................................................................................................68

Listing Volume Information .....................................................................................68

Changing the Properties of Logical Volumes..........................................................................69

Reconfigurable volume properties .......................................................................... 69

Changing volume name ............................................................................................69

Extending logical volumes (Enterprise Edition Only) ............................................ 70

Reducing logical volumes (Enterprise Edition Only).............................................. 70

Recovering Logical Volumes ..................................................................................................70

Cleaning up storage system .................................................................................... 70

Recovering RAID-1 volumes from node failures (Enterprise Ed. Only)............... 71

Recovering RAID-1 volumes from disk failures (Enterprise Ed. Only) ................ 71


Recovering RAID-1 volumes from temporary disk failures (Enterprise Ed. Only) ................. 72

Recovering logical volumes from system failures ................................................. 73

Snapshot (Enterprise Edition Only) ........................................................................................74

Creating a snapshot ................................................................................................. 74

Removing a snapshot ...............................................................................................75

Chapter 6: SANique Command List

Using SANique File System Utilities .......................................................................................77

mkfs: Creating shared file systems ........................................................................ 77

fsck: Checking and fixing shared file systems.......................................................78

sanique_extend_fs: Extending SANique file systems ............................................ 81

Using Node Management Utilities ..........................................................................................81

sanique_add_node: Activating SANique member nodes ........................................ 81

sanique_lock_reconf: Reconfiguring global lock service ....................................... 82

sanique_ls_dev: Listing device information ............................................................82

sanique_ls_lv: Listing logical volume information .................................................. 82

sanique_ls_mtab: Listing all file system mount information .................................. 83

sanique_node_stat: Listing the status of SANique cluster .................................... 83

sanique_rm_node: Deactivating SANique member nodes ...................................... 84

sanique_shutdown: Shutting down SANique CFS .................................................... 84

sanique_start: Starting SANique CFS ..................................................................... 84

sanique_sync_conf: Synchronizing SANique CFS configuration ...........................85

sanique_mount: Mounting SANique file systems automatically ............................85

sanique_umount: Unmounting SANique file systems automatically ......................85

sanique_version: Showing version information of SANique .................................. 85

Preparing Module Compilation ...............................................................................................87

Compiling HBA Driver Module ................................................................................................89

Loading Module Automatically at boot time............................................................................89

Appendix 1: Installing HBA Driver Modules

Appendix 2: SANique CFS Directory List


About this Book

This Administrator’s Guide contains information for system administrators who install, configure, and operate SANique™ Cluster File System (CFS) version 2.4. This Guide assumes that the reader has knowledge of the following:
• Basic understanding of system management
• Basic skills for installing and configuring the Linux operating system
• Detailed information on the server and storage environment

What is Covered in this Guide

This Guide is organized as follows:
• Chapter 1, “Introducing SANique CFS,” describes the functionalities and features of SANique CFS.
• Chapter 2, “Preparing Installation,” goes over the system requirements for installing SANique CFS and the prerequisites that should be met before installation.
• Chapter 3, “Installing and Configuring SANique CFS,” explains how to install SANique CFS step by step.
• Chapter 4, “Operating SANique CFS,” describes management issues and tunable parameters required for operating SANique CFS.
• Chapter 5, “Operating SANique CVM,” describes the functionality provided by SANique CVM and management issues regarding operating SANique CVM.
• Chapter 6, “SANique Command List,” provides a brief summary of each major SANique command-line utility.
• The Appendix, “SANique CFS Directory List,” lists the names and locations of SANique CFS related files after proper installation.


Conventions Used in this Guide

This Guide uses the following typographical conventions:

Font: Monospace
Description: Characters as they would appear on a display screen (file and directory names, fragments of code, function names and parameters)
Example: /usr/local

Font: Monospace (bold)
Description: Command line inputs
Example: $umount /mnt/cdrom

Font: Arial (darkened bold in box)
Description: Button inputs on graphical user interfaces
Example: Click On and Off to…

Font: Italic
Description: Terms
Example: master node means…

Customer Support

For inquiries on registration keys, trouble reports, and other support, please contact MacroImpact Customer Support in one of the following ways:

MacroImpact Customer Support
Phone: +82-2-3446-3508 (charges apply for international calls)
Email: [email protected]

For more information on MacroImpact and its products, please visit our Web site at http://www.macroimpact.com.


Chapter 1: Introducing SANique CFS

This chapter describes the functionalities and features of SANique CFS.


What is SANique CFS?

SANique™ Cluster File System (CFS, hereafter) version 2.4 is innovative system software that enables physical file sharing among multiple nodes on Storage Area Network (SAN, hereafter) based cluster servers.


Figure 1-1: File Sharing on FC SAN via SANique CFS

As depicted in Figure 1-1, all computer nodes connected to a SAN have access to all storage devices connected to the SAN. These physical storage devices can be managed and partitioned in a conventional manner by any one node in a cluster. Since they are physically connected to multiple cluster nodes, however, there are a number of additional issues to be taken care of, including the following:

• A host computer on the SAN has no knowledge of what changes have been made to shared disks by other computers.

• A host computer on the SAN has to reboot or re-mount file systems in order to access the files created on shared disks by other computers.

• When two or more host computers on the SAN access the same file system at the same time, the file system loses consistency and gets corrupted.

• When two or more host computers on the SAN access the same file at the same time, the file loses its integrity and gets corrupted.


SANique CFS takes care of these issues, which arise from the characteristics of physically shared storage devices, by providing a cluster concurrency control mechanism. Armed with SANique CFS, every single node in a cluster can read from or write to shared disks as if they were its own local disks. Each and every host computer on the SAN can physically share files and file systems on shared disks without any conflict, since SANique CFS prevents the possible corruption of files and file systems. In addition, SANique CFS guarantees a high-bandwidth data transfer rate since all data transfers are done at SAN speed.

SANique CFS does not require any specific hardware or software support other than a standard LAN and SAN configuration. Furthermore, unlike other existing file sharing software, SANique CFS shows no performance degradation even when reading or writing small files. In any case, data I/O on each and every host computer is carried out at SAN speed.

What is Storage Area Network (SAN)?

A Storage Area Network (SAN) is a dedicated network for the exchange of storage data. As of 2002, the most popular protocol for SANs is based on Fibre Channel Protocol, which is designed to optimize block data transfer between storage devices and system buses. Currently, a typical SAN is built with multiple host computers equipped with host bus adapters (HBAs), one or more Fibre Channel switches or hubs, and Fibre Channel RAIDs or JBODs. Tape libraries for backup and other SAN appliances may be included in the setup. Figure 1-2 illustrates a typical setup of a SAN. As illustrated in Figure 1-2, the hardware topology of a SAN allows each computer (usually called a “node”) to access SAN-based storage devices concurrently. Without proper file sharing software support, however, data integrity or consistency gets corrupted as explained earlier. Either one of two possible approaches can be taken in order to avoid such a disastrous situation:


Figure 1-2: Typical Setup of a Fibre-Channel Based SAN

• SAN storage can be partitioned so that each host computer on the SAN accesses only its own partition. This approach is called zoning. It is then no different from using one’s own local storage, giving up the merits of the SAN architecture such as sharing, availability, and scalability.

• File sharing software can be employed in order to share SAN storage logically as well as physically. With proper sharing software support, each and every host computer on the SAN can access all SAN storage as if it were its own local storage, and a file created by one host computer is immediately visible to the other host computers on the SAN. SANique CFS is cluster file system software providing such file sharing among multiple cluster nodes on a single SAN island.

Applications that can leverage SAN potential with SANique CFS include, but are not limited to, the following:

• The performance, availability, and scalability of back-end file servers can be maximized when they are clusterized within a single SAN island and provide file service via NFS or SMB. Such a clustered file server provides a single system image while being able to provide uninterrupted file service even if one or more


individual servers fail. In addition, when the number of clients increases or decreases, the clustered file server can be easily expanded or shrunk for more or less file service bandwidth by adding or removing individual servers on-line.

• It is also possible to build a SAN-based cluster server for almost any front-end service such as web, mail, or OLTP. Since such front-end services receive and serve multiple independent requests from clients, it is possible to clusterize the server system without altering server software architectures and to provide the n-way active-all type of service that is adequate for the current service load. Since no data replication is required with SANique CFS, no redundant storage space or additional software for data synchronization is required. Of course, no single or partial server failure causes a service interruption. Possible data inconsistency due to periodic data synchronization is also eliminated by SANique CFS.

• Multimedia data services such as streaming, VOD, AOD, or FTP can be characterized and distinguished from other services by their large data size. Another notable characteristic of multimedia service is that service requests are read-oriented. Data sharing is essential for such read-oriented applications on a set of large data; an n-way active-all type of service is inevitable in order to secure a high enough service bandwidth. Otherwise, the cost of redundant storage for replicated data is enormous. In addition, the 2 Gbps of SAN bandwidth can be fully utilized since the data is rarely updated and bandwidth loss due to concurrency control overhead is minimized.

• DB service is one of the most important and common services among all IT applications and occupies the largest segment of the IT solution market. Unlike most other applications, however, a DBMS implements user-level data caching, and this feature forces the DBMS to be cluster-aware. Hence, n-way clusterized DB service requires the clusterization of the corresponding DBMS. Currently, a few commercial DBMSs such as Oracle or DB2 support clustered service, and such a DBMS can be configured for n-way active-all cluster service on top of SANique CFS. It is possible to operate a cluster DBMS on shared raw volumes without shared file system support, and some cluster DBMSs even offer their own embedded shared file system feature. In the former case, the corresponding cluster DBMS takes up too many raw devices, which are a limited system resource, and managing a large number of such raw devices is painful. In the latter case,


DB administrators have to deal with different file systems that do not allow regular files to be shared. In both cases, dealing with DB growth in size is the main problem. On top of SANique CFS, one large volume device is enough to build any cluster DBMS, and the management job is as easy as managing regular files. In addition, the cluster volume management feature embedded in SANique CFS simplifies handling data growth in DB service by supporting on-line volume resizing. For those DBMSs not supporting clusterization, an active-standby type of DB service can be set up on top of SANique CFS. Unlike other active-standby services, however, active-standby service on top of SANique CFS provides fast service failover because the standby DB instance can also be active and ready to take over before the active DB instance fails. In addition, no volume mounting is required during service failover, and this feature eliminates the possibility of data corruption during an abnormal mounting operation.

• Data sharing can also improve the performance of data processing systems in addition to the service systems mentioned above. Data processing systems such as rendering, graphic editing, or animation systems are typical producer-consumer applications characterized by large data size and multi-phase data processing. In such systems, the output produced by one computer becomes the input to another computer in a pipelined manner. Since the data size is large, such a pipelining operation usually takes long. In a SAN-based cluster system with SANique CFS installed, there is no need to copy data between nodes; when a node needs to access data, it is already there.

• Parallel or distributed processing is another typical type of data processing application. Unlike in producer-consumer type applications, in a parallel or distributed application the same data file is simultaneously accessed by multiple application instances from multiple nodes. SANique CFS eliminates unnecessary data copy overhead in such parallel or distributed data processing systems.


SANique CFS Features

SANique CFS version 2.4 is a cluster file system based on Linux kernel 2.4 and provides standard POSIX file system interfaces and the same semantics as Linux native file systems – Ext2 and Ext3. Therefore, employing SANique CFS 2.4 is transparent to those users and applications of Linux 2.4 including packaged distributions such as Red Hat or SuSE Linux.

The main functionality of SANique CFS is file sharing within a cluster on a single SAN island. Unlike other existing file sharing mechanisms such as NFS or Samba, SANique CFS shares data over the SAN instead of the LAN, leveraging the high-speed data transfer of the SAN while keeping LAN traffic at its lightest possible level.

SANique CFS has failover functionality embedded in it. In case of any system failure, SANique CFS automatically detects the failure, isolates the failed nodes, and reconfigures the cluster system (commonly known as failover) so that the rest of the system can function as intended. This failover operation prevents any potential corruption of shared file systems that might be caused by node failures. SANique CFS can recover the integrity of a shared file system by journaling technology, and it can do so within a constant amount of time (usually within a few seconds) regardless of file system size. All nodes other than the failed nodes in the cluster may be temporarily blocked during recovery and can continue to access shared files and file systems after recovery as if no system failure ever occurred. Once the erroneous status has been taken care of, the failed node can join the cluster on-line and regain access to the shared file systems with a simple manipulation.


Figure 1-3: File Sharing over SAN with SANique CFS


Figure 1-4: File Sharing over SAN with SANique CFS during Failover

The main functionalities of SANique CFS 2.4 include, but are not limited to, the following:

Support for SAN and Cluster Environment
• SANique CFS enables multiple cluster nodes to share the same files and file systems simultaneously.
• SANique CFS supports on-line addition and removal of cluster nodes.
• SANique CFS supports up to 128 storage devices.
• SANique CFS supports a 64-bit address space. (Due to a limitation of the kernel, the largest file or file system that can be created in a Linux environment is 1 terabyte.)
• SANique CFS supports centralized management of shared file systems; any single node can manage all shared file systems.

High I/O Performance
• SANique CFS maximizes file I/O performance through efficient distributed lock management and global cache management technologies. The implementation of DLM eliminates the possible performance bottleneck at the centralized lock manager found in other cluster file system implementations, and global cache management avoids unnecessary disk accesses.
• SANique CFS yields near native file system performance and shows no drastic performance degradation even when accessing a large number of small files.
• SANique CFS produces the highest possible performance with I/O-bound applications, leveraging the high block transfer rate of the SAN.

High Availability and High Scalability
• The failure detection mechanism of SANique CFS, based on heartbeat checking, is highly responsive; it immediately detects any failure in the cluster and isolates the failed nodes in order to prevent possible corruption of file systems, so that the other nodes in the cluster can keep accessing data.
• There is virtually no limit on the size of a SANique CFS cluster. A SANique CFS cluster can be of any size and can also be dynamically resized.
• Higher availability can be achieved by reducing possible service downtime through on-line node failback, addition, and removal. There is no need to interrupt service due to system reconfiguration.
• Integrated with SANique Cluster Volume Manager (CVM), SANique CFS can maximize availability, scalability, and performance through on-line storage addition and removal, up to 32-way mirroring, or data striping across multiple disks.


Unix File System Functionality not Supported by SANique CFS 2.4

SANique CFS is designed and implemented to support all possible standard Unix file system semantics. Due to the characteristics and limitations of a cluster file system, however, SANique CFS 2.4 does not support some traditional file system functionality. Some of it is not possible or meaningful to support, and the rest is left for the next version of SANique CFS due to implementation complexity or the tight time schedule. The following functionality is not supported by SANique CFS 2.4:
• Hard links
• flock

Technical Support

If you are having trouble using SANique CFS, please follow the instructions below:

① Retry the operation in question, following the instructions given in this Guide step by step.

② In case of a disk I/O error, note that most troubles are due to misuse or erroneous configuration of SAN equipment. Please make sure that disk I/O works normally with a Linux native file system.

③ If the problem persists, make a note of the sequence of operations that leads SANique CFS into trouble, save the screen messages at the time, extract the corresponding error messages from the log file, and report to:

MacroImpact Customer Support
Phone: +82-2-3446-3508 (charges apply for international calls)
Email: [email protected]


Chapter 2: Preparing Installation

This chapter describes the system requirements and the checks recommended prior to installation. Before installing SANique CFS, please make sure that your SAN system is functioning properly. A simple two-step procedure to check this is:
① Check that your LAN and SAN are properly set up.
② Make sure that all SAN storage devices are properly recognized by, and accessible from, all cluster nodes.
Please refer to the following sections for more details.


Contents of SANique CFS Package

The SANique CFS package comprises four kernel module files, patched kernel images, kernel patch files, and additional administrative tools for operation. Please refer to the Appendix for the exact names of the files and directories.

Types          File Name                          Description
Module Files   sanique_cfm                        cluster file system module
               sanique_cvm                        cluster volume manager module
               sanique_clm2                       DLM module
               sanique_csm2                       cluster system management module
Kernel         kernel-2.4.x-Xsanique2.i686.rpm    patched Linux 2.4.x kernel package
Admin. Tools                                      shell files and executable binaries for installing/operating SANique CFS, Java class files for the graphical user interface, and document files

SANique CFS should be run on all machines connected to the SAN storage after all of the modules above have been properly patched into the kernel.
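As a quick sanity check once SANique CFS has been started, you can list the loaded kernel modules and look for the four SANique modules named above (a hedged example; whether the modules appear in lsmod output, and under exactly these names, depends on how your SANique kernel and modules were built and loaded):

$lsmod | grep sanique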

System Requirements

Multiple host computers can be directly connected to a SAN storage device without a switching device when the storage device supports multiple ports. Otherwise, one or more switching devices should be used to connect multiple hosts and storage devices. A small-scale SAN system comprising two host computers and a dual-port SCSI disk generally employs the former method. In most cases, host bus adapters and switches are used to build a typical SAN island. The system should meet the following requirements in order for multiple host computers to physically share SAN storage devices with SANique CFS.

System hardware
  CPU:     one or more Intel Pentium / Pentium Pro compatible processor(s)
  NIC:     one or more standard Ethernet network interface cards
  HBA:     one or more FC HBAs for FC SAN, or iSCSI adapters for IP SAN
  Switch:  FC switch for FC SAN, or iSCSI router for IP SAN
  Memory:  128 MB minimum, 512 MB or larger recommended

Software
  OS:       Linux 2.4.x kernel
  Driver:   FC HBA or iSCSI adapter driver for the Linux 2.4.x kernel
  Protocol: TCP/IP

Additionally, the following requirements should be met depending on the working environment of user’s choice:

• In order to use the graphical user interface with SANique CFS, JDK 1.4 needs to be installed prior to SANique CFS installation.

• In order to use a SANique CFS cluster as a back-end file server, the corresponding service modules such as NFS or Samba need to be installed.

Make sure that all host computers running SANique CFS can communicate with each other and have direct access to all SAN storage.

The design and implementation of SANique CFS are hardware-independent, and SANique CFS should run with any standard FC, iSCSI, and Ethernet devices.

SANique CFS does not explicitly limit the maximum number of nodes in a cluster. In particular, there is no practical limitation on the number of nodes for read-oriented applications. For applications with frequent updates, however, the associated communication overhead might be too high to sustain a high-speed I/O rate; for such applications, a cluster configuration of up to 32 nodes is highly recommended.


Things to Check before Installation

Before installing SANique CFS, please make sure that your SAN and LAN operate normally. That is, each SAN device should be recognized, mountable, and accessible by all cluster nodes.

Special attention should be paid to checking the interoperability among your SAN devices such as HBAs, iSCSI adapters, switches/hubs, iSCSI routers, and FC RAIDs/JBODs. Typical concerns to be checked prior to SANique CFS installation include the following:

• If you are using a full-fabric Fibre Channel switch, make sure that your HBAs also support full fabric operation. HBAs from the same vendor may or may not support full fabric operation depending on the product model.

• If you are using FC-AL RAIDs/JBODs in conjunction with a full-fabric Fibre Channel switch, please make sure that the corresponding switch ports are configured in arbitrated loop mode rather than fabric mode. In general, you can configure each FC switch port in either full fabric or arbitrated loop mode. Please refer to the User’s Guide of your FC switch for how to configure switch ports.

• If you are using cascaded FC hubs, low-throughput FC RAIDs/JBODs, or a low-bandwidth FC switch/hub, you may want to limit the queue size of your HBA in order to reduce the possible transfer error rate. In general, most HBAs provide configurable parameters such as queue size or throttle value.

• If you are using zoning or VSANs, make sure that all nodes and all shared storage under SANique CFS’s control belong to the same zone or VSAN.

• If you are building an IP SAN island, make sure of the interoperability between the iSCSI router and the storage devices. Some storage devices may not be recognized by host computers via a given iSCSI router depending on the vendor and product model.

• Two possible approaches are available to build an IP SAN island: via a standard NIC and an iSCSI driver, or via an iSCSI adapter. An iSCSI adapter with TOE is highly recommended because it accelerates not only iSCSI but also the TCP part of the communication service.

Once your SAN and LAN are properly set up, please go through the following check list one by one:


① In a Fibre Channel based SAN, the FC HBA driver should be functioning properly.

The FC HBA driver can be configured either as a dynamically loadable kernel module or compiled into the kernel. In general, it is recommended to configure the HBA driver as a dynamically loadable module and to have it loaded at system boot time. You can check the status of the HBA driver with the lsmod command. For instance, lsmod with the Qlogic qla2x00 driver should print output like the following:

$lsmod
Module                  Size  Used by
qla2x00               173826  1
nfs                    28768  1

Note that a dynamically loadable HBA driver should first be unloaded (by rmmod) and then reloaded (by insmod) whenever there is any change in the SAN configuration after it has been loaded.
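For example, with the Qlogic qla2x00 driver shown above, the reload might look like the following (an illustrative sketch only; the module name, module path, and any driver options depend on your HBA model and driver version):

$rmmod qla2x00
$insmod qla2x00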

② In both FC SAN and IP SAN, each SAN disk should be attached to a Linux device file. You can check this with “cat /proc/scsi/scsi.” A sample screen output would look like:

$cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: SEAGATE   Model: ST318304FC   Rev: 0003
  Type:   Direct-Access                 ANSI SCSI…
Host: scsi0 Channel: 00 Id: 01 Lun: 00
  Vendor: SEAGATE   Model: ST318304FC   Rev: 0003
  Type:   Direct-Access                 ANSI SCSI…

In general, LUN numbers should start from 0, and it is desirable to use consecutive numbers without holes. Once a device is recognized, you can use the fdisk command to partition it. Once you partition a device, or if it is already partitioned, you might want to check its status with “cat /proc/partitions.” A sample screen output would look like:


$cat /proc/partitions
major minor  #blocks   name
   8     0  17982150   sda
   8     1   2098160   sda1
   8     2   2098176   sda2
   3     0  40031712   hda
   3     1   1052226   hda1
   3     2   1028160   hda2
   3     3         1   hda3
   3     5  10490413   hda5
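If a newly recognized SAN disk still needs to be partitioned, the fdisk step mentioned above could be started as follows (the device name /dev/sda is only an example taken from the sample output; use the device file actually assigned on your system):

$fdisk /dev/sda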

③ After all SAN disks are properly attached, normal I/O capability should be checked for each device. You can do this by executing the dd command as follows:

$dd if=/dev/zero of=/dev/sda1 bs=1024 count=1000

④ Finally, please check the LAN connections between all host computers on which SANique CFS will be installed by simply pinging each host from the others.
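For instance, from each node you might ping every other node by hostname or IP address (the node names below are hypothetical placeholders):

$ping -c 3 node1
$ping -c 3 node2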

⑤ After completing all the tests above, proceed to installing SANique CFS on each host computer as instructed in the next chapter.


Chapter 3: Installing and Configuring SANique CFS

This chapter describes how to install and configure SANique CFS on each host computer. The installation and configuration procedure of SANique CFS consists of the following four major steps:
① Building the SANique kernel
② Installing SANique CFS
③ Configuring SANique CFS
④ Running SANique CFS

The following sections provide a detailed description of each step.


Building SANique 2.4.x Kernel

Throughout this Guide, the SANique kernel, SANique 2.4 kernel, or SANique 2.4.x kernel refers to a SANique-enabled kernel image built by recompiling the Linux 2.4.x source with the SANique CFS patch file applied. The SANique 2.4.x kernel is compatible with the standard GNU-based Linux 2.4.x kernel. This Guide assumes that Linux 2.4.x is already installed and running on each host computer on which SANique CFS is to be installed. If Linux is not installed at all, or a Linux kernel release other than 2.4.x is in use, please install a Linux 2.4.x kernel before proceeding any further. Refer to the accompanying User’s Guide for installing Linux.

The SANique kernel should be built on each and every host computer on which SANique CFS is to be installed. Building the SANique kernel can be done either by applying an RPM or by compiling the kernel source. In the following sections, detailed instructions are given for building the GNU-based SANique 2.4.x kernel. For the general procedure of compiling a Linux kernel, please refer to articles from the Linux homepage or your Linux distribution vendor.

[Notice] For the SANique kernel, the following naming convention is used throughout this guide.

Variable   Meaning
x          The last number of the GNU Linux kernel version
D          The release number of the Linux distribution to which SANique CFS is ported
X          The release number of SANique CFS

For instance, kernel-smp-2.4.18-3rh5sanique2.i686.rpm means SANique kernel release 5 based on Red Hat 7.3 release 3, which is based on the GNU Linux 2.4.18 kernel, and it is represented as kernel-smp-2.4.x-DXsanique2.i686.rpm in this guide.


Building Standard SANique 2.4.x Kernel by RPM

Step 1: Mount SANique installation CD.

Step 2: Prepare SANique RPM package. Three separate RPM packages are provided: one for SMP machines, one for single CPU machines, and another for machines with large memory (more than 4GB).

For SMP machines:
/CDROM/kernel/DIST_DIR/rpms/i686/kernel-smp-2.4.x-DXsanique2.i686.rpm
For single CPU machines:
/CDROM/kernel/DIST_DIR/rpms/i686/kernel-2.4.x-DXsanique2.i386.rpm
For machines with large memory:
/CDROM/kernel/DIST_DIR/rpms/i686/kernel-bigmem-2.4.x-DXsanique2.i386.rpm

[Notice] The variables x, D, and X in SANique kernel name represent GNU Linux kernel version, the corresponding Linux distribution release number, and the latest release number of SANique CFS, respectively.

Step 3: Apply SANique RPM package.
For SMP machines:
$rpm -ivh kernel-smp-2.4.x-DXsanique2.i686.rpm
For single CPU machines:
$rpm -ivh kernel-2.4.x-DXsanique2.i686.rpm
For machines with large memory:
$rpm -ivh kernel-bigmem-2.4.x-DXsanique2.i686.rpm


Step 4: Boot up the system with SANique 2.4.x kernel.

Make sure that the boot image has been successfully created under the name /boot/vmlinux-2.4.x-DXsanique2smp, /boot/vmlinux-2.4.x-DXsanique2, or /boot/vmlinux-2.4.x-DXsanique2bigmem. The boot image should also have been reflected in /etc/lilo.conf or /etc/grub.conf (if not, add it manually).

When a kernel image is newly added, lilo should be executed to register the new kernel image so that it takes effect when the system is rebooted:
$lilo
Then, reboot the system:
$reboot

Building Standard SANique 2.4.x Kernel by Compiling Kernel Source

Step 1: Mount SANique installation CD.

Step 2: Prepare to build SANique 2.4.x kernel.
The SANique 2.4.x kernel can be built either by applying the proper SANique patch to the GNU-based 2.4.x kernel and recompiling, or by simply recompiling the pre-patched SANique 2.4.x kernel source contained on the SANique installation CD.

• When building the SANique kernel source using the SANique patch:

① Uncompress “/mnt/cdrom/kernel/DIST_DIR/org/linux-2_4_x-DX.tgz” under the /usr/src directory. A new directory, /usr/src/linux, will be created.

Change your working directory to /usr/src by:
$cd /usr/src


Uncompress the source file.
$tar zxvf /mnt/cdrom/kernel/DIST_DIR/org/linux-2_4_x-DX.tgz

[NOTICE] If you already have a /usr/src/linux directory, make sure to rename it before entering the command; otherwise, it will be overwritten.

② Apply the SANique patch and make a symbolic link.
$patch -p0 < /mnt/cdrom/kernel/DIST_DIR/patch/linux-2_4_x-DXsanique2.patch
$mv linux linux-2.4.x-DXsanique2
$ln -s /usr/src/linux-2.4.x-DXsanique2 linux

③ Provide the FC HBA device driver in the case of an FC SAN.
[NOTICE] If you are using FC HBAs other than those from Qlogic, you have to obtain a correct device driver for the Linux 2.4.x kernel and copy it into the kernel directory so that it can be included as part of the kernel source. Please contact your HBA vendor for details on what to copy and where. In the case of Qlogic’s qla2x00 series HBAs, the corresponding device drivers are already included in the SANique patch file linux-2.4.x-DXsanique2.patch and you can skip this step.

• When using the pre-patched SANique 2.4.x kernel source:

① Uncompress the pre-patched source file and make a symbolic link.

Change your working directory to /usr/src by:
$cd /usr/src
Uncompress the pre-patched SANique kernel source.
$tar zxvf /mnt/cdrom/kernel/DIST_DIR/patched/linux-2.4.x-DXsanique2.tgz
Make a symbolic link.


$ln -s /usr/src/linux-2.4.x-DXsanique2 linux

② Provide the FC HBA device driver in the case of an FC SAN.

[NOTICE] If you are using FC HBAs other than those from Qlogic, you have to obtain a correct device driver for the Linux 2.4.x kernel and copy it into the kernel directory so that it can be included as part of the kernel source. Please contact your HBA vendor for details on what to copy and where. In the case of Qlogic’s qla2x00 series HBAs, the corresponding device drivers are already included in the SANique patch file linux-2.4.x-DXsanique2.patch and you can skip this step.

Step 3: Configure and build SANique 2.4.x kernel.
For general kernel compilation procedures, please refer to articles or documents available on the Internet or from your vendors. The following exemplifies a typical procedure for kernel compilation.

[Assumption] kernel source directory = /usr/src/linux

Change your working directory to the kernel source directory by:
$cd /usr/src/linux
Eliminate any existing dependency and other information that might affect kernel compilation by:
$make mrproper
Configure the kernel environment with care (network driver, local SCSI, and so on) by:
$make menuconfig
Build a new source dependency before compiling by:
$make dep
Remove all intermediate files created during previous kernel compilation by:
$make clean
Create a new kernel image (it will take some time) by:
$make bzImage
Compile the kernel modules selected during configuration by:
$make modules
Copy the compiled modules into /lib/modules/2.4.x by:
$make modules_install

[NOTICE] In the case of an FC SAN, make sure that the kernel includes an HBA device driver for the Linux 2.4.x kernel. Without a proper HBA device driver enabled, SAN disks cannot be accessed and no storage sharing is possible. In general, it is recommended that the HBA device driver be configured as a dynamically loadable module rather than statically embedded into the kernel image.

Step 4: Boot up the system with SANique 2.4.x kernel.
Copy the new kernel image into the /boot directory and modify the /etc/lilo.conf or /etc/grub.conf file in order to boot from the SANique 2.4.x kernel.


Copy the new kernel image to /boot/linux-2.4.x-DXsanique2:
$cp /usr/src/linux/arch/i386/boot/bzImage /boot/linux-2.4.x-DXsanique2
Modify the /etc/lilo.conf (or /etc/grub.conf) file to inform the boot loader of the new kernel image. It is recommended that you leave the current booting information as it is and add a new entry in the same format, supplying the name of the new kernel image (linux-2.4.x-DXsanique2) and an appropriate label of your choice:
$vi /etc/lilo.conf (or use your favorite editor)
or
$vi /etc/grub.conf
Register the new kernel image by executing lilo. Otherwise, the system will not know that there is a new kernel image, and rebooting will not bring up the SANique 2.4 kernel. When lilo runs, you can confirm that the new kernel image is properly registered by checking for the label that you just supplied:
$lilo
Reboot the system. The change takes effect after the reboot:
$reboot
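For illustration only, a new /etc/lilo.conf entry for the SANique kernel might look like the following. The label, root device, and other lines here are assumptions that depend on your existing lilo.conf; only the image name follows the convention used above.

image=/boot/linux-2.4.x-DXsanique2
    label=sanique24
    read-only
    root=/dev/hda2

After saving the file, run lilo and check that the new label (sanique24 in this sketch) appears in its output before rebooting.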


Installing SANique CFS

Step 1: Mount SANique installation CD.

Mount the SANique CFS installation CD if not already done:
$mount /dev/cdrom
Change your working directory:
$cd /mnt/cdrom

Step 2: Install SANique CFS.
• Installing SANique CFS by RPM
SANique CFS provides two RPM packages: one for SMP machines and the other for single-CPU machines.

For SMP machines:
/CDROM/sanique/DIST_DIR/rpms/i686/sanique-smp-2.4-BkernelDX.i686.rpm
For single-CPU machines:
/CDROM/sanique/DIST_DIR/rpms/i686/sanique-2.4-BkernelDX.i686.rpm
Install SANique CFS by RPM:
For SMP machines:
$rpm -ivh sanique-smp-2.4-BkernelDX.i686.rpm
For single-CPU machines:
$rpm -ivh sanique-2.4-BkernelDX.i686.rpm

[NOTICE] "B" in the RPM file names above denotes the SANique build number.

• Installing SANique CFS by installation shell script


This step involves copying SANique kernel modules, command utilities, and other shell scripts from the installation CD to the appropriate locations after each host computer has been booted from the SANique 2.4.x kernel.

Run the installation script:
$install <install_script>

Example – Red Hat 7.3-based kernel with multiple CPUs:
$install redhat_7_3_smp
Example – Red Hat 7.3-based kernel with a single CPU:
$install redhat_7_3

You may want to verify that the SANique CFS installation completed successfully by comparing the directory tree of /usr/local/sanique with the one listed in the Appendix.

Step 3: Activate SANique CFS license.

SANique CFS will not start without a proper license key. Copy the sanique.license file into the /usr/local/sanique/config directory and reboot the system.

Copy the license file:
$cp sanique.license /usr/local/sanique/config
Reboot the system so that the license is activated:
$reboot


Configuring SANique CFS

After the installation script has been successfully run on each host computer, SANique CFS must be configured so that it works cluster-wide in a synchronized manner; configuring SANique CFS is done once on the master node. You can configure SANique CFS by directly editing the /usr/local/sanique/config/sanique.conf file with your favorite editor.

[NOTICE] The meaning of the master node in SANique CFS
The master node in a SANique CFS cluster is the node on which some cluster-wide SANique CFS commands (such as sanique_start or sanique_lock_reconf) can be executed. The master node can be reassigned dynamically, and it is reassigned automatically during the failover procedure when the master node fails. Since a SANique CFS cluster is fully symmetric, each and every node has the same architecture and equally shares all functionality and service load; the master node merely acts as a decision maker whenever necessary. Hence, it is quite different from the centralized metadata server concept found in other cluster file systems.

Copy the /usr/local/sanique/config/sanique.conf.sample file to /usr/local/sanique/config/sanique.conf, and then customize sanique.conf to your working environment referring to the descriptions given below:

Change your working directory:
$cd /usr/local/sanique/config
Copy the sample config file onto sanique.conf:
$cp sanique.conf.sample sanique.conf
Configure SANique CFS by editing sanique.conf with your favorite editor:
$vi sanique.conf
[NOTICE] Lines starting with "#" are comments.

Table 3-1 represents SANique CFS configuration parameters and their default values.


Below Table 3-1, brief descriptions for each configuration parameter are given.

Parameters            Default Value
NODE_ID               0
MASTER_ID             0
NUM_LOCKS             40960
TIMEOUT               10
AUTO_FAILBACK         N
AUTO_JOIN             N
AUTO_CONFIG_LOCK      N
GATEWAY               10.1.1.1
HEARTBEAT_ENABLE      Y
HEARTBEAT_INTERVAL    30
NODE_INFO             0 10.1.1.10 Y Y test_server

Table 3-1: SANique CFS Configuration Parameters

NODE_ID
This integer represents the identification number of the node within a SANique cluster. Node IDs should be unique and range from 0 to (number of nodes - 1). Node IDs should be assigned consecutively without omission.

MASTER_ID
This parameter should be one of the valid node IDs and represents the ID of the acting master node in the SANique CFS cluster. The ID of the acting master node should be identical cluster-wide. The default value is 0.
[NOTICE] Some SANique CFS operations can be initiated only from the acting master node. The acting master is not static: it can be changed on-line either explicitly by command or implicitly during failover.

NUM_LOCKS
This field sets the number of locks allocated for the cluster-wide global lock space on each SANique CFS node (each lock consumes a fixed amount of system memory). The default value is 40,960 locks, and it should be larger than 20,000 locks in order for SANique CFS to work properly. For more detailed information, please refer to "Reconfiguring Lock Space" in Chapter 4.

TIMEOUT

This field sets the operation timeout value in seconds. No SANique CFS operation, including heartbeat checking, should wait forever, since a single component failure might block the whole cluster. All operations should eventually return and handle the corresponding exceptions properly in case of any system failure. This timeout value specifies how long each operation waits before it proceeds to exception handling. The default value is 10 seconds, and the minimum value is 1 second.

AUTO_FAILBACK
SANique CFS provides an option for each node to automatically rejoin the SANique CFS cluster at boot time after a failure. If you want each node to rejoin the cluster at boot time after a failure, set this value to 'Y.' If you want to examine each node before it rejoins the cluster after a failure, set this value to 'N.'
[NOTICE] Even if you set this value to 'Y,' a failed node will not rejoin the cluster automatically if it does not come up normally.

AUTO_JOIN
SANique CFS provides an option for each node to automatically join the SANique CFS cluster at normal boot time. If you want each node to join the cluster automatically at boot time, set this value to 'Y.' Otherwise, set this value to 'N.' The default value is 'N.'

AUTO_CONFIG_LOCK
SANique CFS provides an option for the global lock service to be reconfigured automatically when nodes are added to or removed from the SANique cluster. If you want the global lock service to be reconfigured automatically, set this value to 'Y.' The default value is 'N.'
[NOTICE] If you turn this option on, you are not able to reconfigure the global lock service manually.

GATEWAY
Please provide the gateway IP address of the subnet in which the SANique CFS cluster resides. This information is not required for normal SANique CFS operation, but it is important for providing fault resilience when one or more cluster nodes are experiencing functional difficulties. This IP address is used when each node needs to check its connectivity to the outside world by pinging the gateway. Depending on the result of the ping, each node may or may not be the winner when the whole cluster is experiencing a split-brain situation.

HEARTBEAT_ENABLE
This option turns cluster-wide heartbeat checking on or off. With heartbeat enabled, SANique CFS detects any possible node failure and fails over at the earliest possible time. The value should be either 'Y' to enable heartbeat or 'N' to disable it; the default value is 'Y.' When heartbeat is disabled, SANique CFS detects a node failure at the moment the first disk operation takes place after the failure.

HEARTBEAT_INTERVAL
This parameter specifies the interval at which a heartbeat signal is sent out. This option is in effect only when heartbeat is enabled and has no significant impact when it is disabled. If the value is too small, each node may be kept too busy sending out heartbeat signals. If it is too large, the detection of a node failure may be deferred too long. The default value is '10' seconds.
[NOTICE] HEARTBEAT_INTERVAL should be equal to or greater than TIMEOUT, explained above, in order for it to be effective.

NODE_INFO
This option has 5 sub-fields. The following example illustrates configuring NODE_INFO for a 3-node SANique CFS cluster; brief descriptions of each sub-field are given below.

Example of a 3-node SANique CFS cluster:
#node_id  ip_address       glm  active  node_name
NODE_INFO=0 xxx.xxx.xxx.xxx Y Y host_name_0
NODE_INFO=1 xxx.xxx.xxx.xxx Y Y host_name_1
NODE_INFO=2 xxx.xxx.xxx.xxx Y Y host_name_2

① node_id: This field represents the ID of the given node.
② ip_address: This field specifies the IP address of the given node. SANique CFS employs a TCP-based cluster-wide communication package, so it is critical to provide the correct IP address for each node. When there is more than one IP network, make sure that all IP addresses specified are on the same IP network. Using a separate IP network other than the public network is highly recommended.

③ glm: This field specifies whether the given node is a global lock manager node. The value should be either 'Y' or 'N,' and the default value is 'Y.' When AUTO_CONFIG_LOCK is set to 'Y,' this value is ignored. For more information, please refer to "Reconfiguring Global Lock Manager (GLM)" in Chapter 4.
④ active: This field specifies SANique CFS cluster membership. When it is set to 'Y,' SANique CFS considers the given node a valid member and allows it to share storage. When set to 'N,' SANique CFS simply ignores the given node. This field can be useful when a valid member node needs to be temporarily suspended by setting the field to 'N.' Note that valid node IDs should still be consecutive.

⑤ node_name: This field represents the host name of the given node.
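Putting the parameters together, a sanique.conf for a small cluster might look roughly like the following sketch. It is assembled from the defaults in Table 3-1 and the NODE_INFO example above; the node names, IP addresses, and the assumption that every parameter uses the KEY=VALUE form are illustrative, so always start from the sanique.conf.sample file shipped on the CD.

# SANique CFS configuration (illustrative sketch only)
NODE_ID=0
MASTER_ID=0
NUM_LOCKS=40960
TIMEOUT=10
AUTO_FAILBACK=N
AUTO_JOIN=N
AUTO_CONFIG_LOCK=N
GATEWAY=10.1.1.1
HEARTBEAT_ENABLE=Y
HEARTBEAT_INTERVAL=30
#node_id ip_address glm active node_name
NODE_INFO=0 10.1.1.10 Y Y node0
NODE_INFO=1 10.1.1.11 Y Y node1
NODE_INFO=2 10.1.1.12 Y Y node2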

Running SANique CFS

This section covers the basic procedures for running SANique CFS, including how to start or shut down SANique CFS, how to make or mount SANique file systems, and so on.

Loading device drivers for Fibre Channel HBA

If you are running a Fibre Channel SAN and the device driver for the Fibre Channel HBA is a dynamically loadable module, you should first load the device driver.

In the case of Qlogic's qla2x00, enter:
$modprobe qla2x00
[NOTICE] Refer to the accompanying User's Guide for other types of FC HBA.

Starting SANique CFS
SANique CFS should be started on the acting master node.

Change your working directory if /usr/local/sanique/bin is not already in your execution path:
$cd /usr/local/sanique/bin


Start SANique CFS:
$sanique_start
SUCCESS : SANique successfully loaded!
[NOTICE] When SANique startup fails, check whether the license is valid and whether the network is functioning normally.

When SANique startup fails, please check the following:
- Check that the license is valid.
- Check that the network functions normally.
- Check that the sanique_lmd and sanique_sdd daemons are up and active using the ps utility. If not, please restart those daemons from /etc/init.d on each and every node and wait for about 10 seconds.
- Check that all items in the /usr/local/sanique/config/sanique.conf file are valid on the acting master node.
- Check that all SANique modules are loaded normally using the lsmod utility. If not, try sanique_start again.

Once SANique CFS starts, you may want to perform a sanity check by executing sanique_node_stat.

Make sure the SANique CFS binary directory is in your path and enter:
$sanique_node_stat
------------------------------------------------------------
SANique Ver2.4 Configuration Information
------------------------------------------------------------
MyID                      : 0
MasterID                  : 0
Timeout                   : 10
Number of Locks           : 102400
Size of Lock Space        : 28672000 byte
PortNumber                : 50070
Gateway                   : 10.1.1.1
Auto Config Lock Service  : OFF
HeartBeat Mode            : ON
HeartBeat Interval        : 30 sec
------------------------------------------------------------
[ Node 0 ]
Node Name                 : node0
IP Address                : 10.1.1.10
Global Lock Service       : ON
Active                    : YES
Member of SANique Cluster : YES
------------------------------------------------------------
[ Node 1 ]
Node Name                 : node1
IP Address                : 10.1.1.11
Global Lock Service       : ON
Active                    : YES
Member of SANique Cluster : YES
------------------------------------------------------------

Creating a quorum partition and CVM log area
All nodes in a SANique CFS cluster communicate with each other via LAN and access storage devices via SAN. Since all SAN storage devices are shared by all nodes in the SANique CFS cluster, data might be corrupted if more than one node tries to access the same data at the same time while the nodes are unable to communicate with each other due to a LAN failure. In such a split-brain situation, all SANique CFS nodes communicate with each other via a quorum partition, which is a physically shared storage space, in order to prevent possible data corruption. Creating a quorum partition is the last step in the SANique CFS installation procedure. The quorum partition can be created with the mkpdisk command after SANique CFS starts, and it needs to be created once on the acting master node. The quorum partition is also used as the SANique CVM log area. Please refer to "Creating CVM Disks" in Chapter 5 for more details.

[NOTICE] mkpdisk is a SANique CVM command; SANique CVM is a separate product providing logical volume management for cluster systems. SANique CFS provides a set of basic CVM functionalities for interoperability with SANique CVM.

Change the current working directory:
$cd /usr/local/sanique/cvm_util
Create a quorum partition:
$mkpdisk -t syslog <device name>
Example:
$mkpdisk -t syslog /dev/sda1

Creating shared volume devices
SANique file systems can be created either on top of raw device volumes or on top of logical volumes. When created on top of raw device volumes, data stored on such file systems must be converted in order to install SANique CVM later on. For that reason, creating file systems on top of logical volumes is highly recommended, so that no data conversion is required when SANique CVM is added. To do that, logical volumes should be created before the file systems are created. Partition the raw devices as you wish using the fdisk utility. Then, create logical volumes using SANique CVM utilities such as mkpdisk, mkdg, and mklv; a sketch of this sequence is given below. Logical volumes are created with the given names (e.g., lv1, lv2, loglv1, and so on) under the /dev directory. For more details about creating logical volumes, please refer to "Operating SANique CVM" in Chapter 5.
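As a rough illustration only (the device names sdb and sdc, the disk group names, and the overall flow are assumptions; see "Operating SANique CVM" in Chapter 5 for the exact syntax of each CVM utility, including mklv), the sequence from raw disks to logical volumes might look like this:

$fdisk /dev/sdb
$fdisk /dev/sdc
$cd /usr/local/sanique/cvm_util
$mkpdisk -t data /dev/sdb1
$mkpdisk -t log /dev/sdc1
$mkdg dg1 /dev/sdb1
$mkdg logdg1 /dev/sdc1

Then create the logical volumes themselves (for example lv1 and loglv1) from those disk groups with mklv, and proceed to create the SANique file system on them as shown in the next step.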

Creating SANique shared file systems
You can use the 'mkfs' command utility to create SANique file systems just as you create other file systems; just specify 'sanique2' as the file system type option with mkfs. Creating a SANique file system on any one node is immediately reflected to the other nodes in the SANique CFS cluster for sharing, so you do not have to create it again. For more details on creating SANique file systems, please refer to the mkfs manual page. Note that SANique CFS 2.4 currently supports only a 4 KB block size.

To create a SANique file system on /dev/lv1 with /dev/loglv1 as a logging device, enter:
$mkfs -t sanique2 -l ld=/dev/loglv1 /dev/lv1
To create a SANique file system with journaling turned off, enter:
$mkfs -t sanique2 /dev/lv1
[NOTICE] Once you create a SANique file system with journaling turned off, you can never turn it on unless you recreate the file system. Hence, we highly recommend that you create SANique file systems with journaling turned on and use the mount-time option (-o lf=OFF) if you need to turn journaling off for some reason. If you are having trouble creating SANique file systems, check that /sbin is in your execution path and that /sbin/mkfs.sanique2 exists.
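For instance, to mount an already-journaled SANique file system with logging turned off for that mount only, you might combine the -o lf=OFF option mentioned above with the mount command described in the next section (the device and mount point here are illustrative):
$mount -t sanique2 -o lf=OFF /dev/lv1 /mnt/sanique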

Mounting SANique shared file systems
In order for more than one SANique cluster node to share the same file system, the file system should be created with the 'sanique2' file system type option. When you mount a SANique file system, all mount options used for standard POSIX file systems such as Ext2 can be used with the same semantics, except that the file system type should be 'sanique2.' You can mount the same SANique file system on as many nodes as you want and use it as if it were a local file system on each node. Since the actual mount points may differ from node to node, you should mount SANique file systems on each node one by one. The following example illustrates mounting /dev/lv1, created as a SANique file system, on the /mnt/sanique mount point of a node.

Mount /dev/lv1 on the /mnt/sanique directory:


$mount -t sanique2 /dev/lv1 /mnt/sanique
[NOTICE] When the mount operation fails, check the following:
- Check that the given logical device exists using the lslv utility.
- Check the FC HBA status to see whether the shared disks are recognized.
- Check that all modules are up and running.
- Check that all communication connections are established.
- Check that the given file system was created with the 'sanique2' type.

Performing a final sanity check
In order to make sure that everything is all right, please check the following:
- Check that all modules are loaded normally.
- Check that all files are located as depicted in the Appendix.

[NOTICE] If you add /usr/local/sanique/bin and /usr/local/sanique/cvm_util to your current path, you do not have to change your working directory whenever you use the SANique command utilities.


Chapter 4
Operating SANique CFS

This chapter describes how to operate a SANique CFS cluster. Topics covered in this chapter include:
① Mounting and unmounting SANique shared file systems
② Shutting down SANique CFS
③ Adding or removing nodes to/from the SANique CFS cluster
④ Reconfiguring global lock service
⑤ Reconfiguring other SANique CFS parameters
⑥ Using SANique file system utilities
The following sections provide more details on each of the topics above.


Mounting SANique File Systems
File systems to be shared should first be created as SANique cluster file systems with the 'sanique2' type option. The following example illustrates how to create a SANique shared file system. In the example, a SANique shared file system is created on /dev/lv1 with /dev/loglv1 as a logging device.

Create a SANique shared file system:
$mkfs -t sanique2 -l ld=<log_device> <device>
Example:
$mkfs -t sanique2 -l ld=/dev/loglv1 /dev/lv1
[NOTICE] If you have already created SANique file systems, you can skip this step.

A SANique file system is now created, and you can mount it on as many nodes as you want for sharing. Mounting a SANique shared file system is no different from mounting any other file system from the point of view of a single node; just use the 'sanique2' type option. Unlike other file systems, however, the same SANique file system can be physically mounted on and accessed by multiple nodes at the same time without corrupting file system consistency. The following example illustrates how to mount a SANique shared file system on a node.

Mount a SANique shared file system:
$mount -t sanique2 <device> <mount_point>
Example:
$mount -t sanique2 /dev/lv1 /mnt/test

Unmounting SANique File Systems
Unmounting a SANique file system is syntactically exactly the same as unmounting other file systems. Its semantics, however, might differ slightly from others because of device sharing. Sometimes, unmounting a SANique file system from a node implies that the corresponding device is no longer shared. Then again, sharing itself is transparent to users, and therefore sole ownership of the device has no special meaning to users either. The following example demonstrates unmounting a SANique file system.

Unmount a SANique CFS file system:
$umount <mount_point>
Example:
$umount /mnt/test

Listing Device Information
• sanique_ls_dev
This command utility lists information about the devices physically shared via SAN. Device types include:
- SANique CVM device : logical device created by SANique CVM
- SANique FS device : file system device created by SANique CFS
- SANique LOG device : log device used for journaling by SANique CFS
- Non-SANique device : other raw devices

List current device information:
$sanique_ls_dev [-a|-s]
$sanique_ls_dev [-v][-f][-l][-n]
Example:
$sanique_ls_dev -s

Options given with sanique_ls_dev are:

-a : list all device information in extended format

-s : list all device information in simplified format

-v : list only SANique CVM device information

-f : list only SANique FS device information

-l : list only SANique LOG device information


-n : list only non-SANique device information

Listing Logical Device Information
• sanique_ls_lv
This command utility lists information about the logical devices created by SANique CVM. Logical volume types include:
- SANique FS volume : logical volumes used as SANique file systems
- SANique LOG volume : logical volumes used as SANique FS log areas
- Non-SANique volume : logical volumes not used as SANique volumes

List current logical device information:
$sanique_ls_lv [-a|-s]
$sanique_ls_lv [-f][-l][-n]
Example:
$sanique_ls_lv -s

Options given with sanique_ls_lv are:

-a : list all volume information in extended format

-s : list all volume information in simplified format

-f : list only SANique FS volume information

-l : list only SANique LOG volume information

-n : list only non-SANique volume information

Listing Mount Information
• sanique_ls_mtab
This command utility lists information about the shared file systems currently mounted. A volume and a file system are different things, but they are mapped one to one. Therefore, the terms volume and file system are used interchangeably unless they need to be distinguished explicitly.


List current mount information:
$sanique_ls_mtab
Example:
$sanique_ls_mtab

Listing Node Information
• sanique_node_stat
This command utility shows the status of the SANique cluster nodes with valid membership.

Show the status of SANique cluster nodes:
$sanique_node_stat
Example:
$sanique_node_stat

Removing a Member Node
• sanique_rm_node
Removing a member node means that you invalidate the membership of the node and shut down SANique CFS on that node. You need to specify the ID of the node being removed.

Remove a member node from the SANique CFS cluster:
$sanique_rm_node <node_ID>
Example – remove node 1 from the SANique CFS cluster:
$sanique_rm_node 1

Adding a New Member Node
• sanique_add_node
Adding a new member node means that you register a new node as a SANique CFS node or reactivate a deactivated member node. You can add a new member node to the SANique CFS cluster on-line. You need to specify the IP address of the node being added or reactivated.

Add a new member node to the SANique CFS cluster:
$sanique_add_node <node_IP_address>
Example – add a node whose IP address is 211.201.111.101:
$sanique_add_node 211.201.111.101

Shutting Down SANique CFS
• sanique_shutdown

SANique CFS shutdown process includes invalidating the SANique CFS cluster membership of all nodes, disconnecting all communication channels, and unloading all SANique CFS modules on all nodes.

[NOTICE] In order to shut down SANique CFS, none of the nodes being shut down may have any SANique shared file system mounted.

[NOTICE] Shutting down SANique CFS may take a few seconds, depending on the heartbeat interval.

In order to shut down SANique CFS, follow these steps:

① Make sure that no SANique file system is mounted on any of the nodes being shut down. If any are mounted, unmount them first.

② Then, issue the sanique_shutdown command.

Shut down SANique CFS:
$sanique_shutdown
[NOTICE] You can check whether all SANique CFS modules have been unloaded using the lsmod command.


Joining a Failed Member Node Back
• sanique_add_node
When a member node fails while SANique CFS is in service, SANique CFS fails over by isolating the failed node, and another member node takes over the failed service in cooperation with the other member nodes. After failover, the SANique CFS membership of the failed node is invalidated. After the cause of the trouble has been taken care of, the failed node can rejoin the SANique CFS cluster. This procedure is called "failback," and there are two ways to fail back a node: automatically and manually. A failed node rejoins the SANique CFS cluster automatically when it is rebooted, provided the AUTO_FAILBACK parameter in the sanique.conf file is set to 'Y.' The procedure for adding a failed node back manually is identical to adding a new member node, as explained above.

[NOTICE] After changing the value of the AUTO_FAILBACK parameter in the sanique.conf file, please restart the license daemon:
$/etc/init.d/sanique_lmd restart

Reconfiguring Lock Space
SANique CFS employs a distributed lock management (DLM) scheme for concurrency control of shared file objects. The DLM is implemented in a SANique CFS module called the Cluster Lock Manager (CLM). SANique CLM consists of the Global Lock Manager (GLM) and the Local Lock Manager (LLM). The SANique LLM module resides on each and every SANique CFS cluster node, while the GLM runs only on those nodes configured as global lock server nodes. How the GLM is distributed and how much lock space is allocated on each node affect the performance of the SANique CFS cluster. Hence, the distribution of the GLM and the allocation of lock space are important tuning factors for SANique CFS performance. In the following, a brief guideline for determining the amount of lock space is given. Please contact the Technical Support Team at [email protected] for a more detailed guideline.

Meaning of Lock Space
Lock space can be specified in the SANique configuration file, and it is tightly related


to the maximum number of files that can be simultaneously maintained by the SANique LLM. From the lock management point of view, therefore, the more lock space you allocate, the better performance you can expect. However, only a limited amount of physical memory is available on each node, so allocating too much memory for lock space will degrade the performance of other components and, in turn, overall node performance. Therefore, the amount of lock space should be chosen carefully somewhere between the two extremes. Each lock requires 280 bytes of memory, and the overall lock space on each node can be computed from the following relation:

Size of lock space = Number of locks × Memory size per lock (280 bytes)
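For example, the default NUM_LOCKS value of 40,960 corresponds to 40,960 × 280 bytes = 11,468,800 bytes (roughly 11 MB) of lock space per node, and the 102,400 locks shown in the sanique_node_stat output earlier correspond to 102,400 × 280 = 28,672,000 bytes.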

Reconfiguring Lock Space
Lock space is allocated at SANique CFS startup time and remains fixed for the whole SANique CFS session. That is, SANique CFS lock space cannot be reconfigured on-line. In order to reconfigure lock space, therefore, you should first shut down SANique CFS, change the NUM_LOCKS configuration parameter in the sanique.conf file, and then restart SANique CFS. This is a weighty decision because you have to stop service in order to reconfigure lock space; please give it enough thought before deciding the amount of lock space for the first time.

A Brief Guideline for Choosing Lock Space
The amount of lock space affects the behavior of SANique CFS and other kernel modules as well as application programs. If your applications access a large number of files, lock space of more than 100,000 locks is recommended. In general, lock space between 20,000 and 100,000 locks is appropriate. Since 1,000 locks take about 280 KB of memory (per the relation above), lock space between 20,000 and 40,000 locks is recommended for machines with 128 MB of main memory, and lock space between 40,000 and 100,000 locks is suitable for machines with 256 MB of memory or more. Again, please contact the MacroImpact Technical Support Team at [email protected] for more information.

Reconfiguring Global Lock Manager (GLM)
SANique CFS avoids possible hot spots, provides rapid lock service, eliminates a single point of failure, and supports modular failure recovery by employing partitioned lock management, in which all lock objects are evenly partitioned and distributed for service among the multiple SANique cluster nodes configured as global lock managers. You can choose which SANique cluster nodes provide this global lock service. For centralized lock management, pick one node and configure it as a global lock manager node. For fully distributed lock management, configure all SANique cluster nodes as global lock managers. You can also configure a subset of the SANique cluster nodes as global lock managers. Global lock service can either be configured before SANique CFS starts to serve or be reconfigured on-line while SANique CFS is in service. For the initial configuration of global lock service, please refer to "Configuring SANique CFS" in Chapter 3. Steps for reconfiguring global lock service on-line are given below.

[NOTICE] Global lock reconfiguration can be done only from the master node. However, when the AUTO_CONFIG_LOCK option is turned on, you are not able to reconfigure global lock service manually.

① Make sure that the SANique CFS cluster nodes to be configured as global lock managers are all currently active, using the sanique_node_stat command utility.
② Execute the sanique_lock_reconf command utility with the list of global lock manager nodes from the acting master node.
③ Check the result of the lock reconfiguration with the sanique_node_stat command utility.

From the acting master node, execute the following command with the list of global lock managers:
$sanique_lock_reconf <Node_List>
Example – reconfigure global lock service with nodes 0, 1, and 3:
$sanique_lock_reconf 0 1 3
[NOTICE] Note that the list of global lock managers includes old GLMs and new GLMs together. For instance, if nodes 0 and 1 are currently providing global lock service and node 3 is to be configured as an additional global lock manager, the list of lock managers 0, 1, and 3 should be given to the sanique_lock_reconf command, as shown in the example above.

You can check the result of the global lock service reconfiguration using the sanique_node_stat command.

Mounting SANique File Systems Automatically
SANique CFS provides remote and automatic mount features through the /usr/local/sanique/config/sanique_fstab file.

########################################################################
#
# SANique CFS ver2.4 - File System Mount table
#
# MacroImpact Inc.
# System Software Labs
#
########################################################################
#device      device       mount      fsck   mount     mount
#to mount    to fsck      point      pass   at boot   options
/dev/lv1     /dev/lv1     /mnt/test  1      yes       usrquota,grpquota

[Example sanique_fstab File]
Each field in the sanique_fstab file is separated by a space or tab and must appear in order. The meaning of each field is as follows:

Field             Meaning
device to mount   Device name to be mounted
device to fsck    Device path on which to perform fsck after an abnormal termination
mount point       Mount point
fsck pass         The number of fsck passes (0 means no fsck)
mount at boot     Boot-time mounting option: "yes" or "no"
mount options     Options for the mount operation

Table 4-1: Field Description of sanique_fstab


You can edit the sanique_fstab file using your favorite editor:
$vi /usr/local/sanique/config/sanique_fstab

Installing the SANique CFS User Quota Package
The quota utilities provided by Linux will not work with file systems mounted with the sanique2 type. Hence, in order to use user quotas with SANique CFS file systems, you need to replace the existing quota utilities with those included on the SANique installation CD. Follow the steps below to install the SANique CFS user quota package:

① Uncompress the quota-3_08-sanique.tgz file.
② Apply the current system environment by executing configure.
③ Compile the package by executing make.
④ Install the compiled package by executing make install.

Mount the SANique CFS installation CD and uncompress quota-3_08-sanique.tgz:
$tar -zxvf /mnt/cdrom/etc/quota/quota-3_08-sanique.tgz
Apply the current system environment (run from the extracted source directory):
$./configure
Compile the quota package:
$make
Install the compiled package:
$make install
[NOTICE] Once the quota utilities are replaced, you can use them as you did before. Note, however, that user accounts should be created on the same node so that each UID, GID, and user name is unique across the whole cluster system.


Reconfiguring Other SANique CFS Parameters
In order to reconfigure SANique CFS configuration parameters other than those mentioned above, the SANique CFS service must be brought down. SANique CFS 2.4 does not support on-line reconfiguration of IP addresses or lock space. Reconfiguring such SANique CFS parameters is done by changing them in the sanique.conf file and restarting SANique CFS. Follow the steps below to reconfigure such parameters:

① Unmount all SANique shared file systems on all SANique CFS cluster nodes.
② Shut down SANique CFS. Refer to "Shutting Down SANique CFS" in Chapter 4.
③ Edit the /usr/local/sanique/config/sanique.conf file as needed from the acting master node. Refer to "Configuring SANique CFS" in Chapter 3.
④ Restart SANique CFS, referring to "Running SANique CFS" in Chapter 3.
⑤ Mount all SANique shared file systems on all SANique CFS cluster nodes.
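As a minimal sketch of this sequence, using only the commands introduced earlier (the mount point /mnt/sanique and device /dev/lv1 are examples), the reconfiguration might look like this:

$umount /mnt/sanique                          (on every node where the file system is mounted)
$sanique_shutdown                             (see "Shutting Down SANique CFS" above)
$vi /usr/local/sanique/config/sanique.conf    (on the acting master node)
$sanique_start                                (on the acting master node)
$mount -t sanique2 /dev/lv1 /mnt/sanique      (on every node, as needed)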


Chapter 5
Operating SANique CVM

This chapter describes how to operate SANique CVM to create CVM logical (virtual) volumes. SANique CVM is a separate product from SANique CFS and can be installed independently. In order to provide interoperability with SANique CVM, SANique CFS provides the basic functionalities of SANique CVM. Topics covered in this chapter are these basic functionalities and include:
① Creating devices with fdisk
② Creating CVM disks
③ Creating and configuring disk groups
④ Creating CVM volumes
Once a logical volume is created, you can reconfigure the volume or create a snapshot of it. The following sections provide more details on each of the topics above.


Disk Management

Managing Disk Devices on Linux
In the Linux operating system, as in other Unix-like operating systems, all devices including disks are treated as special files and managed via the file system mechanism. In the Linux device file convention, all IDE disk devices are represented by names starting with 'hd' under the /dev directory, and all SCSI disk devices by names starting with 'sd' under the /dev directory. With 'hd' or 'sd' as a prefix, each device is named in alphabetical order starting from 'a,' such as 'hda,' 'hdb,' 'sda,' 'sdb,' and so on.

Each device file has a unique identification number within a system, consisting of a major number and a minor number. Devices of the same type have the same major number. Minor numbers are used to distinguish each device of the same type. For SCSI disks, the minor number of each device increases in units of 16. For example, the minor number of sda is 0, the minor number of sdb is 16, and so on. Numbers other than multiples of 16 are reserved for identifying the partitions of each device. Therefore, each physical disk device can be partitioned into at most 15 pieces. Throughout this guide, the terms described in Table 5-1 will be used with the meanings given.

Term                    Description
Physical Disk or Disk   a physical unit that can be detached as a unit and recognized as an individual device by Linux, such as sda or sdb
Partition               a logical unit that can be created by logically dividing a physical disk and recognized as sda1 or sdb2 by Linux
Unit Disk               a base unit from which a CVM disk can be made: either a physical disk or a partition, depending on the user's choice
CVM Disk                a building block of a CVM logical volume, created by initializing a unit disk

Table 5-1: Terms
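For instance, following this numbering scheme, /dev/sda1 (the first partition of sda) has minor number 1, while /dev/sdb3 has minor number 16 + 3 = 19.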

In general, most Fibre Channel HBA drivers recognize the disks connected to the SAN as SCSI disks. Therefore, the number of SAN disks that can be recognized by Linux is limited to the maximum number of SCSI disks manageable by Linux. At present, Linux can manage up to 128 SCSI disks; however, this limitation can be eliminated using the enhanced features of the device file system.

At this point, the lowest configuration unit of CVM is a partition. The basic building block of logical volumes is a partition, and the disks managed by CVM (referred to as 'CVM disks' hereafter) are initialized from such partitions. Once a disk is partitioned, the partitioning cannot be changed. SANique CVM allocates space assuming that each CVM disk is physically separate. For instance, SANique CVM allocates equal amounts of space from two CVM disks in order to create a 2-way mirrored RAID-1 volume, assuming that those two CVM disks are physically different. For that reason, it is highly recommended not to partition a physical disk but to use the whole physical disk as a single partition instead.

Creating CVM Disks
In order to create a CVM logical volume using physical disks on the SAN, CVM disks must be created first. Creating CVM disks from unit disks is the process of writing special data onto predefined blocks of each physical disk, making them CVM-allocatable space units (generally known as 'formatting').

[NOTICE] A whole physical disk can be initialized as a CVM disk. For instance, instead of initializing /dev/sda1 or /dev/sda2, the whole disk /dev/sda can be initialized as a CVM disk without partitioning. This is simply part of the CVM management policy. Once a whole physical disk is configured as a CVM disk without partitioning, however, the corresponding physical disk must NEVER be partitioned later. The information written to disk blocks during partitioning may overwrite the information written by SANique CVM during its initialization, and such a conflict may create a critical problem in accessing data stored on the SAN disks.

Each CVM disk can be initialized as one of three types: data disk, log disk, and system log disk. Each disk type has its own purpose, and therefore a CVM disk should be properly initialized to conform to its purpose.

• Data Disk : Data disks are used to create CVM logical volumes for storing user data (data logical volumes or data volumes, hereafter).

• Log Disk : Log disks are used to create CVM logical volumes for metadata logging by journaling file systems (log logical volumes or log volumes, hereafter).

• System Log Disk : SANique CVM writes its own log for fast recovery from unexpected system failures. System log disks are used to create a CVM logical volume for CVM metadata logging (CVM log logical volume or CVM log volume, hereafter). SANique CVM requires a disk partition of at least 500 MB to be created initially as a system log disk in order to function properly. Otherwise, SANique CVM is not able to perform I/O operations on logical volumes. In addition, it is highly recommended to build system log disks on the most reliable raw disks, because the log records written in this area are critical for volume recovery in case of system failures.

[NOTICE] A log logical volume is provided to benefit users employing journaling file systems. It is also possible to use a data logical volume as a log logical volume.

CVM disks can be created as follows:

① Determine the unit disks to initialize as CVM disks.
② Select the CVM disk type and execute the mkpdisk command. The CVM disk type is specified with the -t option. Use data, log, or syslog following the -t option for a data disk, a log disk, or a system log disk, respectively.
③ Creating a CVM disk is itself a SANique CVM operation. Hence, in order to log the creation operation itself, CVM system log disks should be created before other types of disks.


Create a system log CVM disk:
$cd /usr/local/sanique/cvm_util
$mkpdisk -t <disk_type> <dev_path&name>
Example – initialize /dev/sda1 as a CVM system log disk:
$mkpdisk -t syslog /dev/sda1

Removing CVM Disks
This process is the reverse of CVM disk creation. When a CVM disk is removed, the corresponding unit disk is no longer under SANique CVM's management. In order to prevent accidental removal, a CVM disk cannot be removed while it is part of a CVM disk group or a CVM logical volume. Such a CVM disk must first be excluded from the CVM disk group or the CVM logical volume before being removed. CVM disks can be removed as follows:
① Determine the CVM disks to remove.
② Execute the rmpdisk command.

Remove a CVM disk:
$cd /usr/local/sanique/cvm_util
$rmpdisk <dev_path&name>
Example – remove the CVM disk sda1:
$rmpdisk /dev/sda1

Listing Device Information
Device information can be listed via the lspdisk command utility. The lspdisk command reads the predefined portion of the target CVM disk and displays device information. Options specify which device information is listed: unit disks, CVM disks, non-CVM disks, or all of them. When logical volumes have been allocated from the target disks, the space-to-volume mapping information can also be listed.


Device information can be displayed as follows:
$cd /usr/local/sanique/cvm_util
$lspdisk <-a> <-t disk_type> <-l> <CVM_disk_list>
Example – display volume allocation information for all devices:
$lspdisk -al

[NOTICE] Since lspdisk reads and displays specific disk areas directly, it works even when SANique CVM is not loaded.

Adding New Disks to the SANique Cluster
Adding a new disk to the SANique cluster consists of two steps. First, all SANique cluster nodes should be able to recognize the disk device just attached. Recognizing a disk device is not SANique CVM's job; it is up to the Linux operating system. In the Linux operating system, a disk device can be newly added or removed by issuing a corresponding command to the SCSI subsystem. Please refer to Linux documentation for more detailed information.

Detect a new disk device without rebooting the system:
$echo "scsi add-single-device <host> <bus> <target> <lun>" > /proc/scsi/scsi
where,
<host>   : host adapter id
<bus>    : SCSI channel on the host adapter
<target> : SCSI ID
<lun>    : LUN
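For example, assuming the new disk appears as SCSI ID 2, LUN 0 on channel 0 of host adapter 1 (these numbers are illustrative and depend on your hardware), you would enter:
$echo "scsi add-single-device 1 0 2 0" > /proc/scsi/scsi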

The above operation should be performed on each and every SANique cluster node in order for them to share the disk. As the second step, notify SANique CVM of the attachment of the new disk device by executing the appropriate SANique CVM command utility on each and every cluster node. SANique CVM maintains device information based on the disk configuration at the time of module initialization and is not able to reflect changes on the system on its own. Therefore, any change on the system, such as a disk addition or removal, must be reported so that SANique CVM can update its device information and include the new device under its management. Adding a new disk to the SANique cluster can be done as follows:
① Determine the physical disk to add.
② Execute the atdisk command.

Add a new disk to the SANique cluster:
$cd /usr/local/sanique/cvm_util
$atdisk <dev_path&name>
Example – add a new disk /dev/sde1 under SANique CVM's management:
$atdisk /dev/sde1

[NOTICE] When the newly added disk is composed of more than one partition and the unit of a CVM disk is a partition, the procedure above needs to be repeated for each partition of the new disk.

Removing Disks from the SANique Cluster
Removing a disk from the SANique cluster consists of two steps, as in adding a new disk to the SANique cluster. This time, however, the execution order is reversed: the SANique CVM operation comes first, followed by the Linux operation. The following Linux command removes a disk from the system without rebooting.

Remove a disk from the system without rebooting:
$echo "scsi remove-single-device <host> <bus> <target> <lun>" > /proc/scsi/scsi
where,
<host>   : host adapter id
<bus>    : SCSI channel on the host adapter
<target> : SCSI ID
<lun>    : LUN


The above operation should be performed on each and every SANique cluster node in order for them to maintain a single storage-wise view. Follow the steps below to remove disks from the SANique cluster.
① Determine the disk to remove.
② Execute the dtdisk command.

Remove a disk from the SANique cluster:
$cd /usr/local/sanique/cvm_util
$dtdisk <dev_path&name>
Example – remove the disk sde1 from SANique CVM's management:
$dtdisk /dev/sde1

[NOTICE] When the disk being removed is composed of more than one unit disk or CVM disk, the procedure above needs to be done repeatedly for each and every unit disk or CVM disk of the disk.

Managing Disk Groups

Disk Group
Creating CVM disks is the first step in creating logical volumes. CVM disks form a disk group, and logical volumes are created on top of disk groups. A disk group is composed of a number of CVM disks with the same property. The property can be the physical distance between CVM disks, the performance of the CVM disks, or the type of the CVM disks. The main purpose of a disk group is an equal or uniform distribution of a logical volume. Therefore, it is highly recommended to create a disk group from CVM disks with similar I/O performance in order to minimize the variation in the overall I/O performance of the logical volume to be built.

The type of a disk group is inherited from the type of the CVM disks that make up the disk group. When a disk group is made of data CVM disks, the disk group becomes a data disk group, and only a data volume can be built out of it. By the same token, a log disk group can be used to create a log volume only. However, system log disks cannot be combined into any disk group. Also, different types of CVM disks are not allowed to form a disk group together. For example, data disks and log disks cannot form a disk group.

Creating a Disk Group
A disk group can be created following the steps below:
① Select the CVM disks to form a disk group and determine the name of the disk group. Please make sure that the applicable unit disks are properly initialized as CVM disks.
② Execute the mkdg command. Specify the name of the disk group followed by the sequence of CVM disk paths.

Create a disk group:
$cd /usr/local/sanique/cvm_util
$mkdg <dg_name> <CVM_disk_list>
Example – create a disk group dg1 with CVM disks sda1 and sda2:
$mkdg dg1 /dev/sda1 /dev/sda2

Changing the Name of a Disk Group
The name of a disk group identifies it and can be changed as follows.
① Determine the disk group whose name is to be changed and its new name.
② Execute the chdg command. Specify the new name of the disk group followed by its old path and name.

Change the name of a disk group:
$cd /usr/local/sanique/cvm_util
$chdg -n <new_dg_name> <old_dg_path&name>
Example – change the name of a disk group from dg1 to dg2:
$chdg -n dg2 /dev/dg1


Adding New CVM Disks to a Disk Group
An existing disk group can be expanded by adding more CVM disks to it. This operation can be used when new physical disks are added to the system or when a disk group is running out of space.

Please perform a sanity check on the following before adding a new CVM disk to a disk group.

• If new CVM disks are being added to a disk group because it lacks space to allocate for a logical volume, make sure that the properties of the new CVM disks to be added and those of the CVM disks already in the disk group are similar.

• If new CVM disks are being added to a disk group in order to expand the degree of data striping or mirroring of the logical volume built from that disk group, make sure that the new CVM disks are physically separate from the CVM disks already in the disk group. For instance, when a disk group is made of two physically separate CVM disks, only a 2-way mirrored RAID-1 volume can be created out of that disk group. If a new CVM disk is added to the disk group in order to create a 3-way mirrored RAID-1 volume, the new CVM disk should be physically separate from the two CVM disks already in the disk group.

• CVM disks to be newly added to a disk group should not have any portion already allocated to a logical volume. When any portion of such a CVM disk is already allocated to a logical volume, the operation will be denied. If such CVM disks must be added to a disk group in spite of the space already in use, the space already allocated to a logical volume should first be deallocated from the given logical volume.

CVM disks can be added to a disk group as follows:
① Select the disk group to expand and the CVM disks to add.
② Make sure that the CVM disks to be added are of the same type as the CVM disks already in the disk group by executing lsdg and lspdisk.
③ Execute extenddg. Specify the path and name of the disk group followed by the sequence of CVM disk paths.

Add CVM disks to a disk group:


$cd /usr/local/sanique/cvm_util
$extenddg <dg_path&name> <CVM_disk_list>
Example – add CVM disks /dev/sda3 and /dev/sda4 to the disk group dg1:
$extenddg /dev/dg1 /dev/sda3 /dev/sda4

Listing Disk Group Information
Disk group information can be listed via the lsdg and scandg command utilities. Disk group information includes the list of disk groups in the system and the list of disks composing each disk group. The lsdg command can be used when SANique CVM is loaded, and scandg can be used even when SANique CVM is not loaded.

When SANique CVM is loaded, disk group information can be listed as follows:
$cd /usr/local/sanique/cvm_util
$lsdg <-a> <-t dg_type> <-l> <dg_list>
Example – display all disk group information including disk components:
$lsdg -al

When SANique CVM is not loaded, disk group information can be listed as follows:
$cd /usr/local/sanique/cvm_util
$scandg <-a> <target_CVM_disk_list>
Example – scan all CVM disks and display their information:
$scandg -al

Merging Disk Groups Two or more disk groups can be merged into a single disk grop. This operation can be used when a disk group is running out of space or when multiple disk groups need to be merged into a single disk group for easy management. Before marging disk grops, the followings need to be checked:

• Make sure that all disk groups to be merged are the same type. Otherwise, the operation will be denied.

• Make sure that the properties of all CVM disks belonging to the disk groups are similar.

Disk groups can be merged as follows:

① Determine the disk groups to merge.
② Make sure that all CVM disks in those disk groups are the same type using lsdg and lspdisk.
③ Execute the mergedg command. Specify the path and name of the disk group to merge into, followed by the path and name of the disk group to be merged.

$cd /usr/local/sanique/cvm_util
$mergedg <merging_dg_path&name> <merged_dg_path&name>
Example – merge /dev/dg2 into /dev/dg1:
$mergedg /dev/dg1 /dev/dg2

Splitting a Disk Group
A disk group can be split into two separate disk groups. Before splitting a disk group, the following needs to be checked:

• If logical volumes created from the disk group are spread out across all of its physical disks, a part of those physical disks cannot form a new disk group. If one or more logical volumes would be spread out across two disk groups by the split operation, the operation will be denied.

A disk group can be split into two disk groups as follows:

① Determine a disk group to split.
② Execute the splitdg command. Specify the path and name of the disk group to be split, followed by the name of the new disk group and the list of CVM disks to form the new disk group.

$cd /usr/local/sanique/cvm_util
$splitdg <dg_path&name> <new_dg_name> <CVM_disk_list>


Example – separate /dev/sda3 and /dev/sda4 from /dev/dg1 and create a new disk group dg2 using /dev/sda3 and /dev/sda4:
$splitdg /dev/dg1 dg2 /dev/sda3 /dev/sda4

Removing CVM Disks from a Disk Group
CVM disks should be removed from the disk group to which they currently belong before being migrated to another disk group or before being uninitialized back to ordinary disks. Make sure that those CVM disks have no portion of their space allocated to any logical volume; the SANique CVM command utility lspdisk can be used to check this. When any portion of such CVM disks is already allocated to a logical volume, the operation will be denied. In order to remove those CVM disks from the disk group properly, the space already allocated to a logical volume should first be deallocated from that logical volume. CVM disks can be removed from a disk group as follows:
① Select a disk group and the CVM disks to remove from it.
② Execute reducedg. Specify the path and name of the disk group followed by the sequence of CVM disk paths to be removed.

Remove CVM disks from a disk group:
$cd /usr/local/sanique/cvm_util
$reducedg <dg_path&name> <CVM_disk_list>
Example – remove CVM disks /dev/sda3 and /dev/sda4 from the disk group /dev/dg1:
$reducedg /dev/dg1 /dev/sda3 /dev/sda4

Exporting a Disk Group to Another SANique Cluster System
A disk group is defined within a single SANique cluster environment. When multiple SANique cluster systems co-exist, there might be a need to migrate a whole disk group from one SANique cluster system to another. SANique CVM supports such storage migration, and it must be done per disk group, since SANique CVM manages logical volumes based on disk groups.

[NOTICE] SANique CVM employs a universally unique identifier (UUID) which uniquely identifies CVM disks, disk groups, and logical volumes across multiple cluster systems in order to support exporting a disk group to another cluster system.

The SANique cluster system to which disk groups are migrated should be able to dynamically recognize new disk devices in order for such storage migration to be meaningful. This issue is closely related to the Linux operating system and the supported device drivers. Please refer to “Removing Disks from SANique Cluster” described earlier in this guide for more details. A disk group can be exported to another SANique cluster as follows:

① Select a disk group to export. ② Execute exportdg. Specify the path and name of the target disk group.

Export a disk group:
$cd /usr/local/sanique/cvm_util
$exportdg <dg_path&name>
Example – export disk group /dev/dg1 from the current SANique cluster to another:
$exportdg /dev/dg1

Importing a Disk Group from Another SANique Cluster System
This is the counter operation to exporting a disk group to another SANique cluster system. A disk group can be imported from another SANique cluster as follows:
① Select a disk group to import.
② Execute importdg. Specify the paths and names of the CVM disks that compose the disk group.

Import a disk group from another SANique cluster:
$cd /usr/local/sanique/cvm_util
$importdg <CVM_disk_list>
Example – import a disk group that consists of /dev/sda1 and /dev/sdb1 to the current SANique cluster system:
$importdg /dev/sda1 /dev/sdb1

[NOTICE] If the name of the disk group to be imported is already in use, the import operation will fail. Try again after changing the name of the existing disk group.

Creating Logical Volumes

Logical Volume Types
SANique CVM supports three different types of logical volumes. Each logical volume type has its own purpose and is distinct from the logical volume configuration explained later. The three types of logical volumes are as follows:

• Data logical volume (or data volume): A data logical volume is a general-purpose logical volume and is created from data disk groups. A regular file system can be created on this type of logical volume.

• Log logical volume (or log volume): A log logical volume is created from log disk groups. A log logical volume is used for metadata logging by journaling file systems. This type of logical volume can be created only by concatenating log CVM disks, and a snapshot cannot be applied to it. There are some functional limitations with a log logical volume.

• Snapshot logical volume (or snapshot volume): This type of logical volume is used to store a snapshot of a data logical volume. It is a read-only volume and is mainly used for on-line backup.

Logical Volume Configuration
SANique CVM currently supports three different logical volume configurations, depending on how a logical volume is laid out. The three configurations are as follows:

• Concatenation: A logical volume is just an aggregation of disk space and is created by allocating space from any CVM disk in the given disk group at random. Unlike RAID-0 or RAID-1, this configuration is used only for expanding the size of the given logical volume.

• RAID-0: Data stored on this logical volume is striped across multiple disks. Therefore, I/O bandwidth on this volume can be increased in proportion to the degree of data striping, and a RAID-0 volume generally produces better performance than a concatenation volume in terms of per-application I/O bandwidth. There are three different types of RAID-0 volumes based on the algorithm used to distribute data.

① Raid0-rr: This configuration uses a simple round-robin method to distribute data across multiple disks. The first unit of striped data is always stored on the same CVM disk. For example, if data is striped across Disk0, Disk1, and Disk2, the first unit of each striped row is always stored on Disk0, the second on Disk1, the third on Disk2, and so on. (Figure 5-1, A)

② Raid0-lstag: With this configuration, data starts from the place shifted to the left by one column when the data dispersing row changes. For the example above, the fourth unit starts from Disk2. (Figure 5-1, C)

③ Raid0-rstag: With this configuration, data starts from the place shifted to the right by one column when the data dispersing row changes. For the example above, the fourth unit starts from Disk1. (Figure 5-1, B)

[NOTICE] The RAID-0 type of logical volume configuration is supported only with the Enterprise Edition.

Figure 5-1: Striping Policies. (A) RAID0-rr, (B) RAID0-rstag, (C) RAID0-lstag

• RAID-1: Data written to this logical volume is mirrored to a number of physical disks. This increases the availability of the system: unless all physical disks in a RAID-1 volume fail, data remains available for service. This configuration has advantages and disadvantages. Write performance is decreased in proportion to the number of mirrors to be made, since data is written to multiple disks at the same time and the slowest mirror write dominates the total I/O time. On the contrary, read performance is increased in much the same way as in RAID-0 logical volumes, because data can be read from multiple disks simultaneously, increasing the degree of concurrency. Therefore, RAID-1 logical volumes are ideal for read-oriented applications with infrequent updates or for applications which tolerate no data loss at all.

[NOTICE] The RAID-1 type of logical volume configuration is supported only with the Enterprise Edition.

The following is the set of common parameters to be configured when creating a volume:

• Volume Name: identifies the volume.
• Volume Type: can be a data logical volume, a log logical volume, or a snapshot logical volume.
• Disk Group: must be specified to create a logical volume. The type of the disk group must be the same as the type of the logical volume. Otherwise, the logical volume cannot be created.
• Extent Size: is the smallest physically contiguous unit size guaranteed by a logical volume. The larger the extent size, the smaller the amount of metadata managed by the logical volume, and the less flexible its management. In the case of RAID-0 volumes, the extent size is the same as the striping unit.
• Volume Configuration: can be one of concatenation, RAID-0, or RAID-1.
• Size: indicates the size of the given logical volume. For a RAID-1 volume, it means the usable size of the logical volume. For instance, when creating a 1GB 3-way RAID-1 volume, the actual amount of storage used is 3GB with 1GB of usable space.
• Striping Degree: specifies across how many CVM disks a logical volume is striped. When omitted, a default value based on the given disk group is used. The maximum striping degree supported is 256, and the striping degree is closely related to extending or reducing the given logical volume. The striping unit of a logical volume is fixed for that volume, and extending or reducing the logical volume must be done in multiples of that striping unit. Hence, when extending or reducing two logical volumes of the same size, the minimum operation size may differ between them if the striping degrees of the two logical volumes are different (see the worked example after this list).
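As a rough illustration of the last point (the exact rounding rule is not spelled out here, so the figures below rest on the assumption that an extension must add one whole stripe row, i.e. striping unit × striping degree):

  extent (striping unit) 4 KB, striping degree 4  ->  minimum extend/reduce step 4 KB × 4 = 16 KB
  extent (striping unit) 4 KB, striping degree 8  ->  minimum extend/reduce step 4 KB × 8 = 32 KB

Under that assumption, two volumes of the same size have different minimum operation sizes when their striping degrees differ.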

Volume Types and Configuration
Data logical volumes can be configured in all three configurations described above. However, log logical volumes can be created as concatenation volumes only. The configuration of a snapshot logical volume matches that of the data logical volume from which the snapshot is taken.

Creating Concatenation Volumes
The set of common parameters described above is enough to create a concatenation volume; no additional parameter is required. A concatenation volume can be created as follows:
① Determine the set of parameters for the logical volume to be created.
② Execute mklv with the given set of parameters.

Create a concatenation logical volume:
$cd /usr/local/sanique/cvm_util
$mklv –t <lv_type> -d <dg_name> -e <extent_size> -S <lv_config> -s <lv_size> <lv_name>
Example – create a concatenation type data logical volume of size 1 GB out of disk group dg1 under the name lv1, using a 4K extent size:
$mklv –t data –d dg1 –e 4k –S concat –s 1g lv1

Creating RAID-0 Volumes (Enterprise Edition Only)
When creating a RAID-0 volume, the number of physical disks across which data will be striped should be specified in addition to the set of common parameters. The number of physical disks in the target disk group is the upper bound that can be specified. A RAID-0 volume can be created as follows:
① Determine the set of parameters for the logical volume to be created.
② Execute mklv with the given set of parameters.

Create a RAID-0 logical volume:
$cd /usr/local/sanique/cvm_util
$mklv –t <lv_type> -d <dg_name> -e <extent_size> -S <lv_config> -s <lv_size> -w <stripe> <lv_name>
Example – create a RAID-0 type data logical volume of size 1 GB out of disk group dg1 under the name lv1, using a 4K extent size, striped across 2 disks:
$mklv –t data –d dg1 –e 4k –S raid0-rr –s 1g –w 2 lv1

Creating RAID-1 Volumes (Enterprise Edition Only)
When creating a RAID-1 volume, the number of mirrored volume copies should be specified in addition to the set of common parameters. The number of physical disks in the target disk group is the upper bound that can be specified. A RAID-1 volume provides high availability against disk failure: unless all disks in the volume fail simultaneously, volume service can continue without interruption. However, there are other types of failure than disk failure. A power failure or a bad connection may result in data inaccessibility, and such inaccessibility may last long enough to be considered a failure or only for a moment. SANique CVM provides optional journaling of all write operations onto RAID-1 volumes in order to handle such temporary inaccessibility. When the logging option is enabled, 32MB of log space is allocated per cluster member node for the given RAID-1 volume. For more information on recovery using log data, please refer to ‘Recovering Logical Volumes’ in this manual. A RAID-1 volume can be created as follows:

① Determine the set of parameters for the logical volume to be created.
② Execute mklv with the given set of parameters.

Create a RAID-1 logical volume:
$cd /usr/local/sanique/cvm_util
$mklv –t <lv_type> -d <dg_name> -e <extent_size> -S <lv_config> -s <lv_size> -w <mirror> -E <log> <lv_name>
Example – create a 2-way mirrored RAID-1 type data logical volume of size 1 GB out of disk group dg1 under the name lv1, using a 4K extent size, with logging enabled:
$cd /usr/local/sanique/cvm_util
$mklv –t data –d dg1 –e 4k –S raid1 –s 1g –w 2 –E y lv1

[NOTICE] In order to create a RAID-1 logical volume, disk space of (logical volume size × number of mirrors) is required. For example, 2GB of disk space is required in the example above.

[NOTICE] The option –w specifies the number of physical disks across which a logical volume is to be distributed and must be specified when creating RAID-0 and RAID-1 volumes. It means the data striping degree for RAID-0 volumes and the mirroring degree for RAID-1 volumes.

Removing Logical Volumes
When a logical volume is no longer in use, it can be removed and its disk space returned to the storage pool for reuse. Please make sure that the volume is not in use and that it is safe to delete all data in it before removing it. Once the volume is removed, there is no way to get the data back unless it has been properly backed up. A logical volume can be removed as follows, regardless of its configuration:

① Determine a logical volume to remove. ② Execute rmlv with the volume path and name.

Remove a logical volume:
$cd /usr/local/sanique/cvm_util
$rmlv <lv_path&name>
Example – remove logical volume lv1:
$rmlv /dev/lv1

[NOTICE] Once removed, there is no way to recover data in the given logical volume.

Listing Volume Information
Logical volume information can be listed via the lslv and scanlv command utilities. Logical volume information includes the list of logical volumes in the system and the list of CVM disks composing each logical volume. The lslv command can be used when SANique CVM is loaded, and scanlv can be used even when SANique CVM is not loaded. When SANique CVM is loaded, logical volume information can be listed as follows:


$cd /usr/local/sanique/cvm_util
$lslv <-a> <-t lv_type> <-l> <lv_list>
Example – display all logical volume information including component CVM disks:
$lslv -al

When SANique CVM is not loaded, logical volume information can be listed as follows:
$cd /usr/local/sanique/cvm_util
$scanlv <-a> <CVM_disk_list>
Example – display logical volume information by scanning all CVM disks:
$scanlv –al

Changing the Properties of Logical Volumes

Reconfigurable volume properties
Some volume properties can be reconfigured on-line and others cannot. The name of each volume can be changed at any time as long as the new name does not conflict with an existing one. The volume size, the striping degree of RAID-0 volumes, and the mirroring degree of RAID-1 volumes can be reconfigured on-line. However, the volume configuration cannot be changed once fixed at creation time.

Changing volume name
A logical volume can be renamed as follows:
① Determine a logical volume to rename.
② Execute chlv with the new name followed by the path and old name.

Rename a logical volume:
$cd /usr/local/sanique/cvm_util
$chlv –n <new_lv_name> <old_lv_path&name>
Example – rename a logical volume from lv1 to lv2:
$chlv –n lv2 /dev/lv1

Extending logical volumes (Enterprise Edition Only)
A logical volume can be extended on-line as follows:
① Determine the logical volume to extend and the size to extend it by.
② Execute extendlv with the proper parameters.

Extend a logical volume:
$cd /usr/local/sanique/cvm_util
$extendlv –s <size> <lv_path&name>
Example – extend volume lv1 by 100MB:
$extendlv –s 100m /dev/lv1

Reducing logical volumes (Enterprise Edition Only)
A logical volume can be reduced as follows:
① Determine the logical volume to reduce and the size to reduce it by.
② Execute reducelv with the proper parameters.

Reduce a logical volume:
$cd /usr/local/sanique/cvm_util
$reducelv –s <size> <lv_path&name>
Example – reduce volume lv1 by 100MB:
$reducelv –s 100m /dev/lv1

Recovering Logical Volumes

Cleaning up the storage system
When there are frequent changes in the hardware setup, such as disk addition or removal, after SANique CVM is installed in the cluster system, there might be garbage left on the disks or SANique CVM meta-data might be corrupted. In such a case, it is highly recommended to initialize the whole storage system by deleting all SANique CVM meta-data and rebuilding from backup volumes. SANique CVM provides the clrpdisk command for this purpose.

Clean up the storage system:
$cd /usr/local/sanique/cvm_util
$clrpdisk <options>
Example – clean up the whole storage system:
$clrpdisk –af

[NOTICE] The clrpdisk command can be selectively applied to a portion of the storage system per partition, but it is highly probable that the consistency of SANique CVM meta-data will then be corrupted. The clrpdisk command should be executed before SANique CVM is loaded into the kernel; executing clrpdisk while SANique CVM is already loaded will result in a fatal error.

Recovering RAID-1 volumes from node failures (Enterprise Ed. Only)

In case of a node failure, one of the other active member nodes recovers those logical volumes that may have been corrupted. For a RAID-1 logical volume, a single write operation is performed once per mirror, simultaneously across multiple disks. Those multiple writes, however, can hardly be synchronized due to the different circumstances of each disk. Therefore, if a node fails in the middle of a RAID-1 write, data consistency within a mirrored RAID-1 volume cannot be guaranteed most of the time. When the logging option is on, data on such a RAID-1 volume can be resynchronized during the recovery process by reading the logged data block information and reapplying the necessary writes. When the logging option is not turned on, data consistency on RAID-1 volumes cannot be guaranteed and the volumes can easily be corrupted by a sudden node failure.

Recovering RAID-1 volumes from disk failures (Enterprise Ed. Only)

When a disk which is part of a RAID-1 volume fails, the status of the corresponding mirrored volume is changed to “Failed”. Recovery from such a disk failure involves two steps. The failed mirror first has to be removed from the logical volume, decreasing the degree of mirroring, and then the volume needs to be reconfigured to increase the degree of mirroring back to what it was. A RAID-1 volume can be recovered from a disk failure as follows:
① Determine the RAID-1 volume to recover and the mirror to be removed.
② Execute reducelv.
③ Execute extendlv.

Recover a RAID-1 volume from a disk failure:
$cd /usr/local/sanique/cvm_util
$reducelv –m <mirror_no.> <lv_path&name>
$extendlv –w <degree> <lv_path&name>
Example – recover RAID-1 volume lv1 by removing the second mirror and extending the mirroring degree:
$reducelv –m 1 /dev/lv1
$extendlv –w 2 /dev/lv1

[NOTICE] The status of the mirror should be “Failed” when checked with the lslv –l command. Each mirror is numbered starting from 0; hence, specifying ‘–m 1’ indicates the second mirror copy.

Recovering RAID-1 volumes from temporary disk failures (Enterprise Ed. Only)

Most disk failures tend to be permanent, but there is still a chance that disk access might be temporarily blocked due to a bad connection or a SAN switch malfunction. In such cases, the mirrors on the corresponding disk most likely become corrupted and consistency with the other mirrors is broken. With the logging option turned on, SANique CVM writes an access log for all write operations and uses it to resynchronize the corrupted mirrors whenever necessary.
① Determine the RAID-1 volume to recover and the mirror to resynchronize.
② Execute extendlv.

Resynchronize a mirror volume:


$cd /usr/local/sanique/cvm_util
$extendlv –m <mirror_no.> <lv_path&name>
Example – resynchronize the second mirror of RAID-1 volume lv1:
$extendlv –m 1 /dev/lv1

Recovering logical volumes from system failures
When the whole SANique cluster system goes down in a disastrous situation such as a main power failure, no on-line recovery can be performed since there is no active node to recover the system. The recovery of all logical volumes in the system should be executed manually after the SANique cluster system comes back up and before SANique CFS or CVM is loaded.
① Reboot the whole SANique cluster.
② Execute cklv.

Check and fix logical volumes:
$cd /usr/local/sanique/cvm_util
$cklv <options>
Example – check and fix all logical volumes:
$cklv –a

[NOTICE] Before executing cklv, the system should be rebooted and the volume being recovered should be off-line. That is, cklv should be executed before SANique CFS or CVM starts. cklv then checks and fixes logical volumes as fsck does on a corrupted file system. cklv can also be applied to an individual logical volume. No GUI interface is available because this must be done off-line before SANique CFS or CVM is loaded.


Snapshot (Enterprise Edition Only)

Creating a snapshot
Just like taking a snapshot picture, SANique CVM provides the ability to take a snapshot image of a data type logical volume. When mklv is executed with the snapshot option, a snapshot volume is created with exactly the same set of properties as the target logical volume and preserves the image of the target volume at the very moment of the snapshot. The only difference is that such a snapshot volume is a read-only volume. When a snapshot is taken of a volume on which a file system other than SANique CFS is built, the consistency of that file system cannot be guaranteed. In order for a snapshot volume to preserve the consistency of the file system, the snapshot should be taken when the target volume is consistent. That is, all in-core meta-data should be synchronized onto the target volume before taking a snapshot. SANique CFS provides an interface to force the synchronization of all in-core meta-data, and SANique CVM uses this interface before taking a volume snapshot in order to guarantee the file-level consistency of a snapshot volume.
① Determine the logical volume to take a snapshot of.
② Execute mklv.

Create a snapshot of a logical volume:
$cd /usr/local/sanique/cvm_util
$mklv –t <snapshot_type> –s <size> <snapshot_lv_name> <lv_path&name>
Example – create a full snapshot ss1 of logical volume lv1 with 200 MB of space:
$mklv –t f-snapshot –s 200m ss1 /dev/lv1

[NOTICE] The snapshot type f-snapshot means a full snapshot. SANique CVM also provides i-snapshot, meaning an incremental snapshot. Please refer to the SANique CVM Release Note to see if your SANique CVM version supports incremental snapshots. When the size of a snapshot volume exceeds the given size (200MB in this example), an error is reported and the snapshot volume is no longer accessible. Therefore, it is recommended to provide enough space for the snapshot volume.

[NOTICE] A snapshot is not available for log and snapshot types of logical volumes.

Removing a snapshot
A snapshot volume can be removed in the same way as other logical volumes. Please refer to “Removing Logical Volumes” in this manual for more details.


Chapter 6
SANique Command List

This chapter summarizes and describes a set of basic command utilities required for operating SANique CFS. Topics covered in this chapter include:
① Using file system utilities
② Using node management utilities

The following sections provide more details on each topic above.


Using SANique File System Utilities

mkfs: Creating shared file systems
This is a command utility to create a SANique shared file system of the sanique2 type.

Usage: $mkfs [options] <fs_device>

Options for mkfs are as follows:

l –t sanique2

ü Mandatory

ü The type of file system should be sanique2.

l [<-l/--log> ld=<log_device>, ls=<size>]

ü Optional

ü ld=<log_device>

û Mandatory if –l option is not omitted

û The path name for log device should be specified.

ü ls=<size>

û Optional, default size = 32 MB

û This option specifies the amount of log space per member node; 32 MB of space is allocated by default when omitted. The unit is megabytes and the size should be given as in “ls=32.” The size of the given log device therefore should be larger than or equal to size × number of nodes (a combined example follows this option list). It is recommended to use sizes between 10 MB and 50 MB.

ü Omitting the whole option means creating a SANique shared file system without journaling, and on-line recovery of that file system is not supported when it fails. In order to recover such file systems, they should first be unmounted from all member nodes and the fsck utility should be applied manually. The time taken to recover a corrupted file system by fsck is proportional to the size of the given file system.

l [<–b/--block_size> <size>]

ü Optional, default = 4096 bytes

ü This option specifies the block size in bytes and should be one of 1024, 2048, or 4096. When omitted, 4096 bytes is used by default.

l [–c/--clear]

ü Optional

ü This option specifies whether to clear the whole device with NULL before creating a new SANique shared file system. With this option specified, the time taken is proportional to the size of the given file system.

l [–v/--version]

ü Optional

ü This option shows the version information of SANique mkfs.

l [–h/--help]

ü Optional

ü This option shows the usage of SANique mkfs.
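Putting the options above together, a hypothetical invocation might look like the following. This is only a sketch: the device names /dev/lv3 (file system device) and /dev/lv1 (log device) are examples, and the exact way the ld= and ls= sub-options are combined should be checked against the option summary above.

$mkfs -t sanique2 -l ld=/dev/lv1,ls=32 -b 4096 /dev/lv3

With ls=32 on a 4-node cluster, the log device in this sketch would need to be at least 4 × 32 MB = 128 MB, following the size × number-of-nodes rule stated above.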

fsck: Checking and fixing shared file systems
This is a command utility to check the sanity of SANique shared file systems and to fix the corrupted parts of a SANique shared file system, if any. In general, this utility has no effect when the given file system was normally unmounted.

Usage: $fsck [options] <fs_device>


Options for fsck are as follows:

l –t sanique2

ü Mandatory

ü The type of file system should be sanique2.

l [<–s/--superblock> <offset>]

ü Optional, default = 1024

ü This option specifies the location of the superblock on the file system; 1024 is used by default when omitted. When the base superblock is corrupted, a replicated copy of the superblock is used. The offset of the replicated superblock should be known in advance. In general, the offset of the replicated superblock is (extent size + base superblock offset); see the example after this option list.

l [–f/--force]

ü Optional

ü This option forces a check of the given file system regardless of whether it was normally unmounted or not.

l [–c/--check_only]

ü Optional

ü This option lets SANique fsck skip the file system fix and can be used to check the status of a file system without fixing it.

ü This option cannot be used in conjunction with the –y and –i options. When the –y, –i, and –c options are all omitted, fsck works in a partially user-interactive mode by default. In a partially user-interactive mode, fsck fixes predefined errors without asking and prompts the user to provide an action for other types of errors. Typing ‘a’ at any “Fix?(y/n/a)” prompt converts the partially user-interactive mode into a non-interactive mode.

l [–y/--yes_all]


ü Optional

ü This option lets SANique fsck work in a non-interactive mode and can be used to fix the given file system without any user decision involved.

ü This option cannot be used in conjunction with the –c and –i options. When the –c, –i, and –y options are all omitted, fsck works in a partially user-interactive mode by default.

l [–i/--interactive]

ü Optional

ü This option lets SANique fsck work in a user interactive mode. In a user interactive mode, fsck prompts users to provide an action for

every single error detected. Typing in ‘a’ on any “Fix?(y/n/a)” prompt would convert a user interactive mode into a non-interactive mode.

ü This option cannot be used in conjunction with the –y and –c options. When the –y, –c, and –i options are all omitted, fsck works in a partially user-interactive mode by default.

l [–j/--journal]

ü Optional

ü This option lets SANique fsck recover corrupted file systems using the logged journal. It enables fast file system recovery, but file systems might not be fixed when the logged journal itself is corrupted or when the journaling option was not turned on for the given file systems.

ü This option cannot be used in conjunction with other options such as the –c, –i and –f options.

l [–v/--version]

ü Optional

ü This option shows the version information of SANique fsck.

l [–h/--help]


ü Optional

ü This option shows the usage of SANique fsck.
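For example (hedged: the device name /dev/lv3 and the 4096-byte extent size below are assumptions for illustration only), a forced, non-interactive check and fix could be run as

$fsck -t sanique2 -f -y /dev/lv3

and, if the base superblock were damaged on a file system whose extent size is 4096 bytes, the replicated superblock at offset 4096 + 1024 = 5120 could be tried:

$fsck -t sanique2 -s 5120 /dev/lv3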

sanique_extend_fs: Extending SANique file systems
This is a command utility to extend file systems on-line.

Usage: $sanique_extend_fs <fs_device> | <mount_point>

[NOTICE]

l sanique_extend_fs works when the target file system is on-line.

Therefore, the command should be given while the target file system is mounted. Otherwise, the command will be denied.

l Before the target file system is extended, the corresponding logical volume should be extended first. Hence, extend the logical volume using extendlv before extending the file system (see the sketch after this list).

l File systems can be extended repeatedly. Currently, a file system can be extended up to 12 times the size of the original file system. That is, if a file system is originally created with a size of 100MB, it can be extended up to 1.2GB.
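A minimal end-to-end sketch (the volume name /dev/lv1 and the 500 MB increment are examples only): first grow the logical volume, then grow the mounted file system on it.

$cd /usr/local/sanique/cvm_util
$extendlv -s 500m /dev/lv1
$sanique_extend_fs /dev/lv1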

Using Node Management Utilities

sanique_add_node: Activating SANique member nodes
Usage: sanique_add_node <IP_address_of_node>
Example – activate a node whose IP is 211.228.101.103:
$sanique_add_node 211.228.101.103
SUCCESS : Host(211.228.101.103) successfully added to SANique cluster!


sanique_lock_reconf: Reconfiguring global lock service
Usage: sanique_lock_reconf <GLM_list>
Example – reconfigure the global lock service with nodes 0, 1, and 2:
$sanique_lock_reconf 0 1 2
SUCCESS : SANique lock server reconfiguration is done successfully!

sanique_ls_dev: Listing device information
Usage: sanique_ls_dev
Example – list all device information:
$sanique_ls_dev
SANique Ver2.4 device viewer - Device Summary List
----------------------------------------------------------
DEVICE ID   SANique ID   DEVICE NAME   CAPACITY          DEVIC
0x801       0x0          /dev/sda1     513008 KBytes     SANique

sanique_ls_lv: Listing logical volume information
Usage: sanique_ls_lv
Example – list all logical volume information:
$sanique_ls_lv
==========================================================
SANique Ver2.4 volume viewer - Device Summary List
----------------------------------------------------------
DEVICE ID   SANique ID   DEVICE NAME   CAPACITY          DEV
0xbe02      0xbe02       /dev/lv1      311296 KBytes     SANi
0xbe03      0xbe03       /dev/lv2      311296 KBytes     SANi
0xbe04      0xbe04       /dev/lv3      4194304 KBytes    SANi
0xbe05      0xbe05       /dev/lv4      4227072 KBytes    SANi
0xbe06      0xbe06       /dev/lv5      3899392 KBytes    SANi
==========================================================

sanique_ls_mtab: Listing all file system mount information
Usage: sanique_ls_mtab
Example – list all shared file systems mounted:
$sanique_ls_mtab
===========================================================
SANique Ver2.4 Mount Table
-----------------------------------------------------------
node   mode   filesystem_device   log_device
===========================================================
0      rw     /dev/lv3            /dev/lv1
1      rw     /dev/lv3            /dev/lv1

sanique_node_stat: Listing the status of the SANique cluster
Usage: sanique_node_stat
Example – show the current status of the SANique cluster:
$sanique_node_stat
----------------------------------------------------------
SANique Ver2.4 Configuration Information
----------------------------------------------------------
MyID : 0
MasterID : 0
Timeout : 10
Number of Locks : 102400
Size of Lock Space : 28672000 byte
PortNumber : 50070
Gateway : 10.1.1.1
Auto Config Lock Service : OFF
HeartBeat Mode : ON
HeartBeat Interval : 30 sec
----------------------------------------------------------
[ Node 0 ]
Node Name : node0
IP Address : 10.1.1.10
Global Lock Service : ON
Active : YES
Member of SANique Cluster : YES
----------------------------------------------------------

sanique_rm_node: Deactivating SANique member nodes
Usage: sanique_rm_node <node ID>
Example – remove Node 2 from the SANique cluster:
$sanique_rm_node 2
SUCCESS : Host(2) successfully removed from SANique Cluster!

sanique_shutdown: Shutting down SANique CFS
Usage: sanique_shutdown
Example – shut down SANique CFS:
$sanique_shutdown
SUCCESS : SANique are successfully brought down!

sanique_start: Starting SANique CFS
Usage: sanique_start
Example – start up SANique CFS:
$sanique_start
SUCCESS : SANique is successfully loaded as master node..

sanique_sync_conf: Synchronizing SANique CFS configuration
Usage: sanique_sync_conf
Example – synchronize the SANique CFS configuration cluster-wide:
$sanique_sync_conf
SUCCESS : SANique configuration is successfully saved!

sanique_mount: Mounting SANique file systems automatically
Usage: sanique_mount <-a | -n node_id>
Example – mount all file systems defined in sanique_fstab on node 2:
$sanique_mount –n 2

sanique_umount: Unmounting SANique file systems automatically
Usage: sanique_umount <-a | -n node_id>
Example – unmount all file systems defined in sanique_fstab on node 2:
$sanique_umount –n 2

sanique_version: Showing version information of SANique


Appendix 1
Installing HBA Driver Modules

This appendix describes how to install HBA driver modules other than those provided by the standard Linux kernel.


Preparing Module Compilation
Each module should be compiled for the target kernel in order for it to be loadable. Modules can be compiled based on the following directory:

/lib/modules/<kernel_version>/build

The above build directory is, in general, a symbolic link to the following directory:

/usr/src/<kernel_source>

That is, when compiling a driver module, /usr/src/<kernel_source>/include/linux/version.h should be included, and the kernel version in that include file should correspond with the version of the target kernel image. Otherwise, loading the module will fail.

Let’s assume that the system has been brought up with the SANique kernel image after SANique CFS has been properly installed with the SANique-patched kernel source via RPM. (Hereafter, the SANique-patched kernel is assumed to be 2.4.18-3rh7sanique2smp, i.e. the Red Hat-patched 2.4.18-3 Linux SMP kernel with the SANique patch applied.)

Check your current kernel image:
$uname -r
2.4.18-3rh7sanique2smp

Then, check your current kernel source tree:
$cd /usr/src
$ls -l
lrwxrwxrwx 1 root root 25 Jul 17 18:09 linux-2.4 -> linux-2.4.18-3rh7sanique2

With SANique CFS properly installed, linux-2.4 should be symbolically linked to linux-2.4.18-3rh7sanique2 directory.


Now, change your working directory to the correct kernel source directory and open the Makefile with your favorite text editor:
$cd /usr/src/linux-2.4
$vi Makefile

At the beginning of the Makefile, supply the proper parameter values as follows and close the Makefile. (Remember that this is an example; the actual values that you should supply may differ depending on your working environment.)

VERSION = 2
PATCHLEVEL = 4
SUBLEVEL = 18
EXTRAVERSION = -3rh7sanique2smp

You can select kernel options as follows, but you may not have to change anything if you have installed the SANique kernel image via RPM:
$make menuconfig

The next step is to build a dependency tree:
$make dep

Now, there should be a version.h file under the /usr/src/linux-2.4/include/linux directory, and the correct kernel version should be written in the version.h file:
$cat /usr/src/linux-2.4/include/linux/version.h
...
#define UTS_RELEASE "2.4.18-3rh7sanique2smp"
...

Make sure that UTS_RELEASE corresponds with your current environment. If everything is all right, you are ready to compile your HBA driver module.


Compiling the HBA Driver Module
Compile your HBA driver module following the instructions provided by the vendor of your HBA.

Loading the Module Automatically at Boot Time

1. Edit the /etc/modules.conf file and add an alias for your HBA driver module.
Ex) alias scsi_hostadapter lpfcdd

2. Execute depmod with the –a option.

$depmod –a
Then, the module information for the given HBA driver is added to the modules.dep file in the modules directory corresponding to the given kernel. Make sure that the information is correct in the /lib/modules/<kernel_version>/modules.dep file.

3. Make an initrd image.

$mkinitrd <initrd_image> <kernel_version>
ex) mkinitrd initrd-2.4.18-3rh7sanique2.img 2.4.18-3rh7sanique2

Add the initrd information into lilo.conf or grub.conf as follows.

In the lilo.conf file:
initrd=/boot/<initrd_image>
ex) initrd=/boot/initrd-2.4.18-3rh7sanique2.img

Make sure that you execute lilo after editing lilo.conf.

In the grub.conf file:
initrd /boot/<initrd_image>
ex) initrd /boot/initrd-2.4.18-3rh7sanique2.img

4. Reboot the system.

Make sure that the corresponding module is properly loaded and functional.


Appendix 2
SANique CFS Directory List

This appendix depicts the directory tree of SANique CFS when properly installed.


1. Directory Tree under /usr/local/sanique

/usr/local/sanique
  bin       sanique_add_node, sanique_ls_dev, sanique_ls_lv, sanique_node_stat, sanique_shutdown, sanique_ls_mtab, sanique_rm_node, sanique_start, sanique_lock_reconf, sanique_sync_conf, sanique_extend_fs, sanique_mount, sanique_umount, sanique_version
  config    sanique.conf, sanique.port, sanique.license, sanique_fstab
  cvm_util  atdisk, dtdisk, mkpdisk, rmpdisk, lspdisk, clrdisk, mkdg, rmdg, chdg, lsdg, scandg, extenddg, reducedg, mergedg, splitdg, importdg, exportdg, mklv, rmlv, chlv, lslv, scanlv, extendlv, reducelv, mklvnode, cklv
  doc       SANique_CFS_V2_Admin_Guide.pdf
  gui       sanique2.jar, CvmGUI, libcvm_native.so
  modules   sanique_csm2.o, sanique_clm2.o, sanique_cfm.o, sanique_cvm.o, sdd/sanique_lmd, sanique_sdd, sanique_reboot, sanique_failback, sanique_stop_sdd


2. Directories under /dev and /sbin

/sbin
mkfs.sanique2  fsck.sanique2

/dev
cvm_comm  snq_csm

sh  sanique_failback  sanique_reboot  sanique_sdd  sanique_lmd
sanique_cvm_gui  sanique_mknod

/etc/init.d
sanique_failback  sanique_reboot  sanique_sdd  sanique_lmd

3. Symbolic Links
ln -s /etc/init.d/sanique_lmd /etc/rc3.d/S41sanique_lmd
ln -s /etc/init.d/sanique_lmd /etc/rc4.d/S41sanique_lmd
ln -s /etc/init.d/sanique_lmd /etc/rc5.d/S41sanique_lmd
ln -s /etc/init.d/sanique_failback /etc/rc3.d/S99sanique_failback
ln -s /etc/init.d/sanique_failback /etc/rc4.d/S99sanique_failback
ln -s /etc/init.d/sanique_failback /etc/rc5.d/S99sanique_failback
ln -s /etc/init.d/sanique_reboot /etc/rc6.d/K01sanique_reboot