TurboLinux Cluster Server 6 User Guide
Version 6.0, September 2000
© 1999-2000 TurboLinux Inc. All Rights Reserved.
The information in this manual is furnished for informational
use only, is subject to change without notice, and should not be
construed as a commitment by TurboLinux Inc. TurboLinux assumes no
responsibility or liability for any errors or inaccuracies that may
appear in this book.
This publication may be reproduced, stored in a retrieval
system, or transmitted, in any form or by any means -- electronic,
mechanical, recording, or otherwise -- without the prior written
permission of TurboLinux Inc., as long as this copyright notice
remains intact and unchanged on all copies.
TurboLinux, Inc., TurboLinux, and the TurboLinux logo are trademarks
of TurboLinux Incorporated. All other names and trademarks are the
property of their respective owners.
Written and designed at TurboLinux Inc.
8000 Marina Boulevard, Suite 300
Brisbane, CA 94005 USA
T. 650.228.5000  F. 650.228.5001
http://www.turbolinux.com/
TABLE OF CONTENTS

PREFACE  vii
    About TurboLinux  vii
    TurboLinux Cluster Server Contents  viii
    Registration  ix
    Support  ix
    Contacting Us  x
    Prerequisites  x
    Typographic Conventions  x

CHAPTER 1  INTRODUCTION  1-1
    What Is Cluster Server?  1-2
        Target Audience  1-2
    Why Use Cluster Server?  1-4
        What Services Can Be Clustered?  1-5
    What's New In This Release  1-6
        Separate Product  1-6
            New Installer  1-7
            Runs on Red Hat or TurboLinux  1-7
        New Names  1-7
        Technical Improvements  1-8
            NAT Support  1-8
            Stateless Fail-over Support  1-9
            Delay Settings Separated  1-9
            More Application Stability Agents  1-9
        Added Security  1-10
            Security Settings  1-10
            Synchronization Tools  1-10
        Cluster Management Console  1-11
        Enhanced Usability  1-11
            Configuration Tools  1-11
            Configuration File Format  1-12
            Error Logs  1-12
        Licensing  1-12
            Registration  1-13
    Requirements  1-14
        Software  1-14
        Hardware  1-15
        Infrastructure  1-16

CHAPTER 2  CLUSTERING CONCEPTS  2-1
    What Is a Cluster?  2-2
        What Makes a Cluster a Cluster?  2-2
        Related Technologies  2-3
            SMP  2-3
            NUMA  2-4
            MPP  2-5
            Distributed Processing  2-6
    Components of a Cluster  2-7
        Cluster Nodes  2-7
        Cluster Manager  2-7
    Types of Clusters  2-9
        Shared Processing  2-9
        Load Balancing  2-10
        Fail-over  2-10
        High Availability  2-10
    How a Cluster Works  2-12
        Traffic Management  2-12
            Direct Forwarding  2-13
            Tunneling  2-13
            NAT  2-14
        Cluster Management  2-16
    Shared Data Storage  2-17
        Software  2-17
            Synchronization  2-17
            Distributed File Systems  2-18
        Hardware  2-19
            Storage Area Networks  2-20
            Network Attached Storage  2-20
            High Speed Drive Interfaces  2-21

CHAPTER 3  INSTALLATION  3-1
    Installation Overview  3-2
    Installing Cluster Server  3-3
    Post-Installation  3-14
    Troubleshooting Installation Issues  3-15
        Unable to Find Installation Files  3-15
        Undetectable Distribution  3-15
        Installing on an Unsupported Distribution  3-16

CHAPTER 4  CONFIGURATION  4-1
    Planning the Design  4-2
        Typical Scenarios  4-2
            Small Cluster  4-3
            Larger Cluster  4-4
            Complex Cluster  4-4
    Configuration Tool Overview  4-6
        turboclusteradmin  4-6
        tlcsconfig  4-7
    Services  4-10
        Agents  4-10
        Service Settings  4-12
    Servers  4-16
        Servers Configuration  4-16
            Forwarding Mechanisms  4-18
            Direct Forwarding  4-19
            Tunneling  4-19
            NAT  4-20
        Server Groups Configuration  4-20
    Advanced Traffic Managers  4-23
        Advanced Traffic Manager Systems  4-24
        Advanced Traffic Manager Settings  4-24
    Clusters  4-27
    Global Settings  4-30
        Security Settings  4-30
        Network Settings  4-32
        NAT Settings  4-33

CHAPTER 5  CONFIGURING CLUSTER NODES  5-1
    Configuring a Linux or UNIX Cluster Node  5-3
        Tunneling Cluster Nodes  5-6
    Configuring a Windows NT Cluster Node  5-7
    Configuring a Windows 2000 Cluster Node  5-11
    Configuring Cluster Nodes on Other Systems  5-16

CHAPTER 6  CONFIGURATION FILE  6-1
    The clusterserver.conf File  6-2
    Global Settings  6-3
        Security Settings  6-3
        Network Mask Setting  6-4
        NAT Settings  6-4
    Services  6-5
        UserCheck Settings  6-5
        Defining Services  6-6
    Servers and ServerPool  6-8
        Servers  6-8
        ServerPool Section  6-8
    Clusters  6-10
        AtmPool Section  6-10
        VirtualHost Section  6-12

CHAPTER 7  ADMINISTRATION  7-1
    Administrative Tools  7-2
        Tuning the Cluster  7-2
            Kernel Table Sizes  7-3
            Time Settings  7-4
        Synchronization Tools  7-6
            tlcs_content_sync  7-6
            tlcs_config_sync  7-9
    Cluster Management Console (CMC)  7-12
    Troubleshooting  7-18
        Log Files  7-18
        Daemon Startup  7-19
        Using /proc/net/cluster  7-22
            /proc/net/cluster/config  7-23
            /proc/net/cluster/connections  7-23
            /proc/net/cluster/debug  7-24
            /proc/net/cluster/nat  7-25
            /proc/net/cluster/servers  7-25
            /proc/net/cluster/services  7-26
            /proc/net/cluster/stat  7-27
            /proc/net/cluster/timeout  7-27
        Common Problems  7-28
            Synchronization Tools Fail  7-28
            Verifying That the Cluster is Working  7-29
            Determining Which ATM is the Primary  7-30
            Cluster Generates a Lot of Extra Traffic  7-30

CHAPTER 8  CLUSTER SERVER ARCHITECTURE  8-1
    SpeedLink Kernel Module  8-2
        Kernel Patch  8-2
        ip_cs Module  8-2
        Compiling the Kernel  8-4
    Cluster Server Daemon (clusterserverd)  8-7
    Application Stability Agents (ASAs)  8-9
    Synchronization Tools  8-12
    Cluster Management Console (CMC)  8-14
    Putting All the Pieces Together  8-16
    Conclusion  8-17

GLOSSARY  G-1
INDEX  I-1
PREFACE
Thank you for purchasing TurboLinux Cluster Server 6. We realize
that you have many choices in selecting your clustering solutions.
We have worked hard to make our software powerful, flexible, and
easy to use. We are dedicated to offering the highest performance
at the lowest cost with TurboLinux Cluster Server and all our
products.
This manual provides instructions for installing, configuring,
and using TurboLinux Cluster Server 6. It can also be used as a
reference guide for the more advanced features of the product. The
manual will also explain what clustering is and why you might want
to create a cluster.
About TurboLinux
TurboLinux, long the Linux leader in the Pacific Rim, is taking
the world by storm. We have been working with Linux since 1993. We
decided to offer our own distribution in 1997 with both English and
Japanese language versions. We now offer TurboLinux Workstation and
Server distributions in English, French, German, Italian, Spanish,
Portuguese, Chinese, Japanese, and
Russian. For the latest information on our fast-growing company,
please visit our web site at http://www.turbolinux.com.
TurboLinux is also the leader in enterprise-class Linux
solutions. TurboLinux Cluster Server is just one of the many
products that can be used in large enterprise environments, as well
as in smaller companies that need the flexibility to grow.
Our success and your satisfaction with TurboLinux are all made
possible through the magic of the Open Source movement and the
original creator of Linux, Linus Torvalds. We want to thank Linus
and the thousands of developers around the world who contribute to
making the magic possible.
TurboLinux Cluster Server Contents
Unlike previous versions, TurboLinux Cluster Server 6 runs on
top of an existing operating system. Therefore, we have included a
copy of TurboLinux Server 6.0 (6.0.5 release) in the box.
To install the product, you should install TurboLinux Server,
unless you already have an existing TurboLinux Server or Red Hat
Linux system. If you have an earlier version of the TurboLinux
Server distribution, you should upgrade to the TurboLinux Server
release included in the box.
The TurboLinux Cluster Server 6 product includes the following
materials:
    TurboLinux Cluster Server 6 Install CD
    TurboLinux Server 6.0 Install CD (6.0.5 release)
    A set of floppy diskettes, labeled Boot and Extra Hardware, that can be used to install TurboLinux Server
    This manual, the TurboLinux Cluster Server 6 User Guide
    The TurboLinux Server User Guide
    A registration card, including the serial number
    The license agreement (in the TurboLinux Cluster Server 6 User Guide)
    Helpful Hints for Cluster Server, containing important information that was made available after the printing of this manual
Registration
You will be unable to fully utilize the Cluster Server product
until you register it. The registration card included in the box
contains a unique serial number. You must use this serial number to
register the product and receive a license file. To register,
browse to http://www.turbolinux.com/register/tlcs6. There you will
be asked to enter your serial number, as well as some information
about yourself and your company. The registration process will
return a license file, which must be placed in the
/etc/clusterserver/.licenses directory.
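The final step can be sketched as a pair of shell commands. This is a hypothetical example, not a transcript from the product: the file name tlcs6.license and its placeholder contents are assumptions, and the ROOT prefix is a sandbox standing in for the root filesystem (on a real system you would copy the file you received directly into /etc/clusterserver/.licenses as root).

```shell
#!/bin/bash
# Hypothetical sketch: installing the license file returned by registration.
# ROOT is a sandbox prefix for illustration; on a real system it would be
# empty and the target directory would be /etc/clusterserver/.licenses.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc/clusterserver/.licenses"
echo "SERIAL=XXXX-XXXX" > "$ROOT/tlcs6.license"      # stand-in license file
cp "$ROOT/tlcs6.license" "$ROOT/etc/clusterserver/.licenses/"
```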
Support
TurboLinux provides 60 days of email installation support at no
charge once you have registered your purchase at the web site. With
our clustering products, we also offer 60 days of phone support at
no additional charge. This support will help you get the product
installed and operational.
Additional support options are available, at hourly and daily
rates. You may also find valuable information in the support
section of our web site, at http://www.turbolinux.com/support.
Contacting Us
We value your feedback. While every measure is taken to ensure
the accuracy of our documentation, you may find some mistakes or
oversights. Please let us know when you find something that you
feel should be corrected, or if there is an important part of our
product that you feel could be better explained.
Please send us your input on any aspect of our products and
supporting documentation. We listen to our customers. Email your
suggestions to [email protected].
Prerequisites
This manual assumes that you understand the basics of the Linux
operating system and TCP/IP networking. You should be comfortable
using the Linux or UNIX command line to perform routine system
administration tasks. You will need root access to the systems
within the cluster, and should be familiar with the
responsibilities that come with having root access. You should also
be familiar with IP addresses, network interfaces, subnets, subnet
masks, port numbers, and daemons.
Typographic Conventions
This manual uses the following conventions:
Monospace indicates utilities, commands, programs, and text
examples that need to be entered exactly as shown.
File names and directory paths are shown in Arial font.
Italics indicate CD and book titles, and emphasize words.

Menu items and buttons are enclosed in single quotes.

Command lines start with a dollar sign ($) prompt, or a hash symbol
(#) prompt if root access is required. They will appear in the
following format:

$ ls -lAtr pictures
# less /var/log/messages
Chapter 1 INTRODUCTION
This chapter will introduce you to the TurboLinux Cluster Server
6 product. We will examine what the product is and how you can use
it effectively to enhance the performance and reliability of your
network and the services it provides.
We will introduce you to the product, describing what it does
and who the target audience is. Next we'll explain the benefits of
using TurboLinux Cluster Server as compared to stand-alone systems
and other clustering products. We'll take a look at the improvements
that have been made to this version of the product compared to
version 4.0, the previous release. Finally, we'll review the
software and hardware requirements for running Cluster Server 6.
What Is Cluster Server?
TurboLinux Cluster Server is an enterprise-class solution that
allows you to leverage your existing network resources to create
scalable and reliable services. With it, you can significantly
improve quality-of-service levels for virtually every TCP/IP
network service, including web, email, news, and FTP. Cluster
Server provides the architectural framework that will allow your
network to effortlessly grow to meet new demands.
Cluster Server implements load balancing and fail-over support
of network services. Load balancing allows the services to run on
multiple systems. The cluster will distribute client connections
among the servers that make up the cluster. Fail-over allows the
service to run on a single server. If that server should fail,
another server within the cluster will take over for it.
You can think of Cluster Server as similar to RAID. Whereas RAID
uses an array of disks, Cluster Server uses an array of servers.
Both provide the same features: enhanced speed, reliability,
redundancy, and scalability. Cluster Server distributes the
workload among several servers instead of concentrating all the
work on one large server. However, the cluster will appear as a
single machine to clients accessing it.
Target Audience

TurboLinux Cluster Server is targeted at medium to large
companies that want to implement high-availability or scalability
features at a modest price. Internet Service Providers
will find the product useful to provide a higher level of uptime as
well as scalability that allows them to add servers to the cluster
to improve performance. Large enterprises can use the product to
deliver standards-compliant services to large numbers of clients,
either internally or on the Internet. Medium-sized companies can
use the software to leverage existing computer systems as the
company's needs grow.
An administrator implementing Cluster Server should be familiar
with Linux or UNIX and have a good understanding of TCP/IP
networking. While clustering is a fairly simple concept, the
implementation details can be rather complex. Troubleshooting any
problems that arise will require not only understanding of the
concepts behind TCP/IP, but also experience with the real-world
problems that can arise.
TurboLinux Cluster Server is not a Beowulf cluster, and is not
intended to compete with Beowulf. It is not used to cluster
CPU-bound processes, but instead focuses on network-based services.
If you need a cluster to perform intensive processing tasks, you
should consider TurboLinux EnFuzion. (See the EnFuzion web site at
http://www.turbolinux.com/products/enf/.)
Why Use Cluster Server?
Cluster Server provides a cost-effective way to leverage your
existing systems to create scalable network services. If it is
important that your network remain available as often as possible,
Cluster Server may be for you. If you need to provide services that
are accessed more frequently than one server can handle, Cluster
Server can help by creating a virtual server to handle the
additional load.
There are several hardware solutions available that perform the
same function as TurboLinux Cluster Server. These closed boxes tend
to be very expensive and less flexible. By using a Linux-based
system, you have finer control of the cluster. You also have the
option of running other services on the cluster manager, and can
have the cluster manager double as a cluster node. Cluster Server
also allows redundancy of the traffic manager itself, so you do not
have a single point of failure like many of the hardware-based
solutions.
Cluster Server is a high-performance solution. The traffic
management takes place at a very low level within the kernel. While
all incoming traffic must come through the traffic manager,
outbound traffic can go from the cluster node directly out to the
client. Because most TCP/IP services have larger replies than
requests, this is an important optimization.
In addition to forwarding traffic, Cluster Server monitors the
health and availability of the network resources. It continuously
samples all server nodes, verifying that the applications are
running properly. This is accomplished through the use of intuitive
application polling agents. In addition, each backup traffic
manager repeatedly queries the master traffic manager in order to
verify that the cluster itself is functional.
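The node checks described above can be illustrated with a small shell sketch. This is not one of the agents shipped with the product: it assumes only that a healthy node accepts TCP connections on the service port, and relies on bash's /dev/tcp device and the coreutils timeout command.

```shell
#!/bin/bash
# Hypothetical sketch of the kind of check a polling agent performs:
# succeed if the service on $1 (host) accepts a TCP connection on $2 (port)
# within five seconds. The real agents go further and speak each service's
# protocol to verify the application is answering correctly.
node_is_healthy() {
    local host=$1 port=$2
    timeout 5 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
}
```

For example, node_is_healthy 10.0.0.2 80 would succeed only while the web server on that (assumed) node address is accepting connections.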
What Services Can Be Clustered?

Many typical network services can be clustered with the Cluster
Server product. The main requirement is that the service must be
able to run on more than one machine at a time. Just about any
TCP/IP service will work. The following services are commonly used
with Cluster Server:
    Web sites (HTTP, HTTPS)
    FTP
    Email (SMTP, POP3, and IMAP)
    News (NNTP)
    DNS
    LDAP
TurboLinux Cluster Server should generally not be used to
cluster database servers that are write-intensive. There is no
built-in locking mechanism between cluster nodes, so if more than
one cluster node is writing to the same database, data could become
corrupted. If you need to cluster a database, you do have a few
options. If you use the cluster to read the database, and another
single system to write to the database, everything should work
fine. Another method is to use a two-tier model, with web servers
within the cluster accessing a database server behind the
cluster.
What's New In This Release
This release of TurboLinux Cluster Server differs substantially
from the previous version. Many features have been added, and the
architecture of the system has changed. Even the name has changed
from TurboCluster Server to TurboLinux Cluster Server. This section
will outline all the user-visible changes that were made between
the previous version (4.0) and this version.
The primary changes are:
    Decoupling from the operating system
    New names for some parts
    Technical improvements
    Added security
    Cluster Management Console
    Usability enhancements
    Licensing changes
Separate Product

The previous version of this product was integrated into its own
Linux distribution. This version has been decoupled from the
operating system and is packaged as a separate product. Thus, it
now requires a Linux distribution to have already been installed.
It is recommended that you use TurboLinux Server 6.0 (6.0.5
release) or later. You can also use Red Hat Linux 6.2.
There are several advantages to having the clustering product
distributed separately from the Linux distribution. First, it is
easier to upgrade the operating system or the Cluster Server
product separately. It is also easier to troubleshoot problems,
because they can be isolated as either problems with the clustering
software or the underlying operating system. Finally, you have the
option to install the product on different Linux distributions,
providing
you with more flexibility. If you have another software package
that will only run on certain versions of Linux, you may now be
able to use Cluster Server on that system as well.
New Installer
Since the previous version of the product was only available
bound to its own Linux distribution, it was installed along with
the operating system. With the new stand-alone version, a new
installation tool has been created to install the various pieces.
The installation program will guide you through the process. It is
a menu-based program with an easy-to-use interface. The
installation program will be covered in detail in chapter 3.
Runs on Red Hat or TurboLinux
Because the product is no longer bound to the operating system,
it has been made to work under Red Hat Linux as well as TurboLinux
Server. You will need to run TurboLinux Server 6.0 (6.0.5 release)
or later, or Red Hat Linux 6.2. No other Linux distributions are
currently supported.
New Names

The name of the product has been changed from TurboCluster Server
to TurboLinux Cluster Server. This partly reflects the fact that it
is now a product separate from the operating system. Due to this
name change, many
of the components have also been renamed since version 4.0. Table
1.1 lists some of these name changes.

Table 1.1 Changed Component Names

    TurboCluster Server 4.0 name    TurboLinux Cluster Server 6 name
    turboclusterd                   clusterserverd
    turbocluster_sync               tlcs_config_sync
    tl_sync                         tlcs_content_sync
    /etc/turbocluster.conf          /etc/clusterserver/clusterserver.conf
    /var/log/turboclusterd.log      /var/log/clusterserverd.log
    TCSWAT                          CMC (replacement for TCSWAT)

Technical Improvements

Version 6 has several technical improvements over the previous
version. These include:

    NAT forwarding method
    Fail-over support
    Ability to specify different intervals for server and application checks
    More Application Stability Agents (ASAs)

NAT Support

In addition to the previously supported forwarding methods,
Cluster Server 6 allows you to use Network Address Translation
(NAT). So you now have three choices: direct forwarding, tunneling,
or NAT.

NAT is a technology normally used to hide a private network
behind a firewall connected to the Internet. It allows traffic
coming from and going to the private network to appear as if it is
coming from one system. NAT simplifies
configuration, because you do not need to make any special
changes to the cluster nodes themselves, except for setting the
default gateway. It also provides some added security, because the
cluster nodes cannot be accessed directly from the outside. The
downside is that NAT has slightly reduced performance because all
outbound traffic must go through the NAT box.
The NAT system used in Cluster Server is implemented in
accordance with RFC 1631, the Internet standard describing NAT.
Stateless Fail-over Support
In addition to load balancing, TurboLinux Cluster Server now
also allows you to implement fail-over services. Whereas load
balancing has two or more systems providing the same service at
once, fail-over uses only one server at a time. Only if that server
goes down will network traffic be forwarded to one of the other
servers listed for that service.
Delay Settings Separated
Cluster Server has two different checks that it performs on
cluster nodes. First it checks to see if the server responds to a
network ping. Then it runs an Application Stability Agent (ASA) to
determine if the specific services required are responding. In the
previous version, the intervals for these two types of checks were
tied together. Version 6 allows you to specify a different interval
for each.
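As a purely illustrative sketch of what separate intervals could look like in /etc/clusterserver/clusterserver.conf (the directive names here are hypothetical; see chapter 6 for the actual configuration syntax):

```
# Hypothetical directive names, for illustration only:
PingInterval 5      # check that each node answers a ping every 5 seconds
AgentInterval 30    # run the Application Stability Agents every 30 seconds
```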
More Application Stability Agents
We have included more Application Stability Agents (ASAs) in
this version. These include agents to connect to various
enterprise-level databases, such as Oracle and DB2. The full list
of ASAs is:
- DB2Agent
- dnsAgent
- ftpAgent
- genericAgent
- httpAgent
- httpsAgent
- http10Agent
- imapAgent
- nntpAgent
- oracleAgent
- popAgent
- smtpAgent
Added Security
Several security features have been added to
ensure the integrity of the system and to restrict access to the
cluster. These include restricting access to the system and the use
of Secure Shell (SSH) to transfer data between cluster nodes. In
addition, the CMC program uses SSL-encrypted HTTPS, whereas the
TCSWAT program that it replaces used regular unencrypted HTTP.
Security Settings
You can now specify systems to deny or allow access to the
remote configuration capabilities of the cluster. These are similar
to the TCP wrappers settings configured in the /etc/hosts.allow and
/etc/hosts.deny files. You can specify individual hosts or ranges
of IP addresses. These settings will be covered in more detail in
the configuration chapters.
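For comparison, standard TCP wrappers rules look like the following; the cluster's own allow/deny settings use a similar host and address-range notation, though the exact syntax is covered in the configuration chapters. The addresses are illustrative:

```
# /etc/hosts.allow: permit the local management subnet
ALL: 192.168.1.0/255.255.255.0

# /etc/hosts.deny: refuse everyone else
ALL: ALL
```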
Synchronization Tools
The synchronization tools now use SSH to securely transfer data.
This includes the transfer of both configuration information and
content.
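A transfer of this kind is equivalent to copying content over SSH by hand; for example (hostname and path illustrative):

```
# Copy web content to a cluster node over SSH
scp -r /var/www/html node2:/var/www/
```

The synchronization tools automate transfers like this across all the nodes.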
F-Secure SSH version 1.3.7 is installed with the Cluster Server
package. If you have any other version of SSH on your systems, you
should remove it to ensure full compatibility.
Cluster Management Console
A new web-based management system has
been created, called Cluster Management Console, or CMC. This tool
replaces the TCSWAT program from the previous version. The new tool
has more functionality and provides more information about the
cluster.
CMC is used to monitor the current performance of your cluster,
and can be used to dynamically modify the cluster's settings. One of
the most powerful features of CMC is the Traffic Monitor. It
generates a real-time graph of the cluster's performance.
Log files can be displayed in CMC, and you can also look at the
online documentation, including man pages. You can also stop and
restart the Cluster Server daemon from the CMC web page.
CMC will be covered in more detail in chapter 7.
Enhanced Usability
Several features have been added to increase usability. These
include:

- Changes to the configuration tools
- Simplified configuration file syntax
- Improved formatting in log files
Configuration Tools
The configuration tools have been updated to be easier to use.
Some of the terms used have been simplified, as have some of the
menus. The tools have
been made more user-friendly. The addition of the web-based
Cluster Management Console also improves usability of the
software.
Configuration File Format
The syntax of several options has been made clearer. Example
configuration files are provided. While the format of the
configuration file is fairly straightforward, you should use the
configuration tools when possible. The format of the configuration
file has been changed from the format used in the 4.0 version, but it
is simple to convert an existing file to work with version 6. Simply
edit your /etc/clusterserver/clusterserver.conf file and remove the
port numbers in the AddServer lines. For more information on the
configuration file format, see chapter 6.
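As a hypothetical illustration of such an edit (the real AddServer syntax is documented in chapter 6; these lines are not taken from an actual configuration file):

```
# 4.0-style entry, with a port number:
#   AddServer www 10.0.0.2:80
# 6.0-style entry, port number removed:
#   AddServer www 10.0.0.2
```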
Error Logs
The format of the error log files has been made more readable.
Many of the messages have been clarified, and where possible they
have been shortened to fit within 80 columns. This should help you
when troubleshooting a problem with the cluster.
Licensing
The program now features license activation codes to enable it. This
allows more flexibility in pricing structures and allows us to
provide customers with evaluation copies that time out after a
certain period. With the activation code system, if you are using a
demo and decide to purchase a full license, you can simply copy new
license files to the server and will not have to re-install the
product.
License files are cumulative. If you purchase a license for 2
ATMs and 2 nodes, and another license for 2 ATMs and 10 nodes, you
will be able to use
up to 4 ATMs and 12 nodes. However, note that a system acting as
both an ATM and a cluster node requires both an ATM license and a
node license.
Registration
To use the product, you will need to register it. To register,
browse to the registration web site at
http://www.turbolinux.com/register/tlcs6. There you will be asked
to enter the serial number that was provided in the box, as well as
some information about yourself and your company. The registration
process will return a license file, which must be placed in the
/etc/clusterserver/.licenses directory.
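Installing the license is then a simple copy; the filename here is illustrative:

```
cp tlcs6.license /etc/clusterserver/.licenses/
```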
Requirements
TurboLinux Cluster Server is used to combine the resources of
several computers. The requirements for each of these computers
vary according to its function within the cluster. The two main
functions are advanced traffic manager (ATM) and cluster node.
Cluster nodes are simply systems that provide network services. The
traffic manager is the machine that receives all incoming packets
and forwards them to the cluster nodes. You will also have backup
traffic managers, which will become active only if the primary ATM
fails. A system may be configured to function as both a traffic
manager and a cluster node at the same time.
Software
All traffic managers must have TurboLinux Cluster Server
installed and running. Cluster nodes that are not traffic managers
are not required to run the Cluster Server product. They can run
any operating system, including Linux, UNIX, Windows NT, and
Windows 2000. However, it will simplify cluster management if all
the systems are running the same operating system and the Cluster
Server software.
To run Cluster Server you will need to have a Linux server
running either TurboLinux Server or Red Hat Linux. (Note that the
previous version of Cluster Server was integrated with TurboLinux
Server; this version requires you to install TurboLinux Server
prior to installing Cluster Server.) If you run TurboLinux Server,
you must have version 6.0 at the 6.0.5 release or later. For Red Hat
systems, you must be running version 6.2. The product may be able
to run on other Linux systems, but due to quality assurance issues,
we can only provide support for the distributions mentioned here.
TurboLinux Server 6.0 (6.0.5 release) is included in the
TurboLinux Cluster Server package. If you are running an older
version of TurboLinux Server, or
TurboCluster Server 4.0, please upgrade your operating system
using the provided software.
In addition to the Cluster Server management software, you will
need to have software providing the services that are to be
clustered. For example, if you are creating a web server cluster,
each node in the cluster must be running its own web server. This
software is not included with the Cluster Server product, but many
network services are included with most operating systems. For
example, TurboLinux Server and virtually every other Linux
distribution comes with Apache web server.
Hardware
While Cluster Server can be run on modest hardware, such
as a Pentium 100 with 32 MB of RAM, the product is designed to
provide high performance. We suggest that you use hardware that
fits these high performance needs. The hardware specifications for
a traffic manager are similar to that of a network router. Choose
hardware that is reliable and efficient. The important factors that
you will want to focus on are network interface speed, memory, and
CPU speed. Today that would mean at least a 100-Mbps Ethernet card,
256 MB of RAM, and a 700-MHz processor. (TurboLinux Cluster Server
is only available for Intel-compatible architectures.)
Disk space is less critical, unless you are running other
services on the machine as well. Be sure to factor in any other
software that will be running on the machine. The Cluster Server
software itself will take up approximately 40 MB of disk space.
Additional space will be required for log files and other
administrative tasks.
If an Advanced Traffic Manager is supporting NAT cluster nodes,
then the ATM should have two network cards. One network card will
be used to accept incoming client requests. The other will be used
to connect to the NAT private network.
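An illustrative two-interface configuration for such an ATM might look like this (the addresses are hypothetical; eth0 faces the public network, eth1 the private NAT network):

```
ifconfig eth0 203.0.113.10 netmask 255.255.255.0 up   # accepts client requests
ifconfig eth1 10.0.0.1 netmask 255.255.255.0 up       # NAT private network
```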
The hardware requirements for cluster nodes are the same as if
the systems were running stand-alone. The primary concern will be
what services are running on the node. There are no additional
requirements beyond the hardware recommendations of the operating
system and the applications that will be running on the node.
In order to provide the highest amount of uptime, you will want
to employ as much hardware redundancy as possible. You should
obviously use UPSes to ensure that the cluster will remain running
in the event of a power failure. You may also want to consider
redundant power supplies in each system. To ensure constant data
access, you can use a RAID hard drive array. Drive mirroring and
RAID 5 can provide redundancy, and hot-swappable hard drives will
allow you to replace faulty components. Don't forget to perform
routine system backups; redundant hardware can't prevent software
catastrophes.
A CD-ROM drive is required to install the product. The drive does
not necessarily need to be in the server itself; you may mount the
CD-ROM on a different server and access it via NFS or
some other method. You will also need a connection to the Internet
to download updates and to register the product.
Infrastructure
To run a cluster of network services, you will
obviously need to have a stable network. If possible, it is
recommended that you have all the cluster nodes on a single subnet,
and that this subnet be separate from the rest of the network. This
allows the cluster to run at maximum performance, while isolating
any problems from the rest of the network. For very high-traffic
clusters, you may saturate the bandwidth of a single subnet; in
that instance you might have to consider multiple subnets.
While putting all the nodes on a single subnet or LAN is
recommended for maximum performance, it is by no means required.
You have the flexibility to locate your nodes anywhere, especially
when using the tunneling method. However, all the ATMs must be on
the same subnet. This is because the ATMs will all need to be able
to take on the virtual IP address of the cluster itself. This can
only be done on the subnet that would normally contain that IP
address.
If you are looking to create a high availability web site, you
should consider redundant Internet routers on the network. If one
of the routers goes down, you can still access the cluster from the
outside. For maximum redundancy, the routers should go through
separate Internet Service Providers. The high availability of your
cluster won't matter much if you become disconnected from the
Internet.
It is highly recommended that you have a DNS server running to map
domain names into IP addresses. Reverse DNS lookups must be working
properly as well, resolving IP addresses back into domain names.
Like all servers, the systems within the cluster should have static
IP addresses, not DHCP-assigned addresses.
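You can verify both directions with nslookup before deploying the cluster (the name and address shown are illustrative):

```
nslookup www.example.com   # forward lookup: should return the node's static IP
nslookup 203.0.113.10      # reverse lookup: should return the domain name
```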
Chapter 2 CLUSTERING CONCEPTS
This chapter will cover some of the basic concepts that will be
required in order to understand how TurboLinux Cluster Server
works. You will need to understand these concepts in order to make
the most of the product. It will also help you to understand your
options when configuring a cluster.
We will look at the following topics:

- What is a cluster?
- Components that make up a cluster
- The various types of clusters
- How a cluster works
- How to manage a cluster
- Methods of sharing data between systems
What Is a Cluster?
A cluster is a group of individual computer systems that can be
made to appear as one computer system. While that definition may
sound simple, there are several other similar technologies. The
differences between the technologies can be quite subtle.
Computer clustering has been around in various forms since the
1980s, originating on the Digital VAX platform. The VMS operating
system and VAX hardware combined to provide clustered services.
These VAX clusters were able to share hardware resources, such as
disk space, and were able to provide computing resources to
multiple users.
This section looks at what it means to be a cluster. Then it
provides an overview of some of the related parallel processing
technologies in order to draw some distinctions.
What Makes a Cluster a Cluster?
Clustering is just one form of parallel computing. One of the key
points that distinguishes
clustering from other related technologies is the ability to view
the cluster as either a single entity or a collection of
stand-alone systems. For example, a cluster of web servers can
appear as one large web server, but at the same time, individual
systems within the cluster can be accessed as individual systems,
if desired.
Because each system in the cluster is a separate computer, each
has its own hardware, operating system, and software. Clusters can
be either homogeneous, with all the systems running the same
software on similar hardware, or heterogeneous, with systems within
the cluster running different operating systems on various hardware.
Related Technologies
Clustering falls within a continuum of parallel processing
techniques. The primary distinctions are based
on the level at which resources are shared or duplicated. At the
lowest level, a system will have multiple processors on a single
motherboard, and share everything else. At the other end of the
spectrum, distributed processing employs multiple computers, but
the system is generally not viewed as a single entity.
Some parallel processing methods are (from tightest binding to
loosest):

- SMP
- NUMA
- MPP
- Clustering
- Distributed processing

Each of these is explained in this section, except for clustering,
which we have already covered.
SMP
Multi-processor systems today are generally of the symmetric
type. This means that no one processor is any more important than
the others, and all resources are equally available to all the
processors. Systems of this type are called symmetric
multi-processing, or SMP. A single computer has multiple CPUs but a
single shared memory space and shared I/O facilities.
The idea behind SMP is to transparently break down a computing
problem into concurrent processes and allow these to execute on
separate processors within the same machine. The emphasis here is
on transparency. The same program can run time-sliced on a single
processor machine, and the development tools need not even be aware
of the underlying parallelism.
On an SMP machine, the operating system itself is responsible
for dividing up the individual processes making up an application
among the available CPUs. SMP machines are best used with operating
systems and programs that use threading or light-weight processes.
Windows NT is heavily thread-based, and Linux processes are fairly
light-weight, so both scale fairly well on SMP hardware.
SMP systems with two or four processors are fairly simple to
build. Anything beyond that becomes rather difficult, because the
processors all need to be able to access all the I/O and memory
resources. Beyond four processors, these shared resources start to
become a bottleneck, and adding more CPUs provides diminishing
returns.
NUMA
SMP computers use a memory sharing scheme in which each
processor has the same level of access to all the physical memory
in the computer. Such a scheme is known as uniform memory access,
or UMA. NUMA (non-uniform memory access) is a more complex
technique which allows several processors in a multi-processor
computer to share local memory in a more efficient manner than in
simple SMP. Each CPU has fast, direct access to its own local memory
area, but accesses the other memory areas on the system more slowly.
The basic idea of NUMA is to give certain processors an
advantage in accessing a given range of physical memory. You can
think of a NUMA machine as a sort of intermediate step between
simple SMP machines and massively parallel systems. Access to any
part of the memory is possible on a NUMA system; it just may take
more time to access some memory addresses than others. However, the
time to access the non-local memory will still be faster than
accessing disk or network I/O.
The system bus on a NUMA machine is quite complicated. It is
often implemented as a mesh, with many connections to the bus.
Coherency is also
a major issue. You may see the term ccNUMA, which indicates that
the system maintains cache coherency. When a CPU is accessing
memory, the cache internal to all the other processors must be
checked to make sure that they have not modified the data that is
being retrieved.
NUMA systems try to optimize the main issue with parallel
computing: inter-processor communication. In clusters and massively
parallel systems, the overhead of communicating between processors
is quite high, because the communication must travel across a
network of some sort. NUMA uses a high-speed memory bus to
communicate via the shared memory. While the speed of accessing
non-local memory is not as high as that of a local memory access,
it is much higher than communicating over the network.
NUMA machines scale very well to a large number of processors --
thus they can sometimes rival the performance of massively parallel
systems for calculation throughput. The downside is that, as you
might imagine, the design of these machines involves extremely
complex algorithms based on nanosecond timings and arbitration
schemes. Thus they tend to be rather expensive
machines. However, they have a great advantage -- from the
perspective of the application software -- all the complex memory
arbitration among processors is invisible. Massively parallel
systems are blindingly fast but almost require a per-problem
configuration of the machine to take advantage of the speed. NUMA
trades off some efficiency for simplicity of development tools and
transparency of resources.
MPP
Massively parallel processing (MPP) is the heavyweight of the
parallel computing world. In the MPP model, each node consists of a
separate processor with its own dedicated resources. The idea of an
MPP system is to break a computing problem down into parts that can
be separately computed more or less independently of each other.
Likewise, the architecture of the system has units that are fairly
independent. Massively parallel systems are
usually used for high-end compute-intensive operations. For
example, the current record holder as the world's fastest computer
is an MPP system used to create a mathematical model to simulate a
nuclear blast.
MPP is very closely related to clustering, but each node in an
MPP system does not usually have full I/O capabilities. Thus each
node in an MPP system may not be a viable stand-alone computer. An
MPP system is usually larger than a typical cluster, but projects
such as Beowulf are definitely blurring the distinctions.
One of the problems with MPP is that programs must be written
specifically for parallel systems. (This is also a problem with
some types of clusters, including Beowulf.) There are two common
APIs that are used: PVM and MPI. These APIs concentrate on breaking
down a problem into chunks that can be computed in parallel. Thus,
if the problem to be solved cannot be broken down in this way, an
MPP system will not be of much help.
Distributed Processing
Distributed processing is probably the least well-defined of all
the terms we have covered here. Distributed processing basically
means that parts of the work to be done are done in different
places. The most common example of distributed processing is the
client/server architecture. The server has a specific job to
perform, while the client performs another portion of the task,
generally the task of displaying the information to the user.
A distributed system is more loosely coupled than a cluster. In
fact, it is usually difficult to see any coupling at all. There
generally isn't any single entity that would be managed as a whole.
With distributed processing, nodes retain their individual identity,
while cluster nodes are usually anonymous. In a distributed
processing system, you would say, "give me data X from server Y."
In a cluster, you would say, "give me data X from the cluster."
Components of a Cluster
There are two primary types of systems that make up a cluster:
nodes and managers. The cluster nodes are the systems that provide
the processing resources. The cluster manager or managers provide
the logic that binds the nodes together to provide the appearance
of a single system.
Cluster Nodes
Cluster nodes do the actual work of the cluster.
Generally, they must be configured to take part in the cluster.
They must also run the application software that is to be
clustered. Depending upon the type of cluster, this application
software may either be specially created to run on a cluster, or it
may be standard software designed for a stand-alone system.
TurboLinux Cluster Server and TurboLinux EnFuzion both allow the
use of software written for stand-alone systems. Configuring the
software to be used within the cluster is usually pretty
straightforward.
We will sometimes refer to cluster nodes simply as nodes,
servers, or server nodes.
Cluster Manager
The cluster manager divides the work amongst all
the nodes. In most clusters, there is only one cluster manager.
Some clusters are completely symmetric and do not have any cluster
manager, but these are more rare today. They require complex
arbitration algorithms and are more difficult to set up.
In TurboLinux Cluster Server, the cluster manager is referred to
as the Advanced Traffic Manager, or ATM. Cluster Server provides
fail-over for the ATM so that there is no single point of failure.
If the primary ATM goes down, a backup ATM will be able to fill in
and take its place.
Note that a cluster manager may also work as a cluster node.
Just because a system is dividing the work does not mean that it
cannot do any of the work itself. However, larger clusters tend to
dedicate one or more machines to the role of cluster manager,
because the task of dividing the work may take more computational
power. It also makes it a bit easier to manage the cluster if the
two roles are isolated.
Types of Clusters
As you saw in the previous section, the definition of a cluster
is pretty loose. So loose, in fact, that there is some confusion
about how differing technologies can all be referred to as
clusters. The fact is that clusters can be implemented for several
different reasons.
The most common reasons to create clusters are to pool CPU
resources, balance a workload among several machines (load
balancing), create high system availability, or provide a backup
system in case the primary system fails (fail-over). These
represent different types of clusters, although there is quite a
bit of overlap.
TurboLinux Cluster Server can be used to implement high
availability, load balancing, and fail-over. It does not provide
shared processing in the usual sense of the term. Instead, it
provides load balancing of network services. Each server receives
incoming network service requests, processes the requests, and
sends the reply back to the client.
Shared Processing
When you hear the term Linux clustering, the first thing you
probably think of is the Beowulf project. Beowulf
is a clustering system that combines the processing power of
several systems to provide a system that has a large amount of
processing power. It was designed for scientific and CPU-intensive
purposes. Programs must be specially written to conform to an API
that allows them to have their work distributed across systems. You
can get more information on Beowulf at http://www.beowulf.org/.
Cluster Server does not provide this type of clustering. Another
package that can be used to provide shared processing is EnFuzion.
This TurboLinux product has the advantage that programs do not have
to be re-written in
order to be used on the system. Instead, it is more of a
task-based processing system. You can find more information about
EnFuzion at its web site:
http://www.turbolinux.com/products/enf/.
Load Balancing
Load balancing is similar to shared processing, but there is no need
for communication between the nodes. With load
balancing, each node processes the requests it has been given by
the cluster manager. The cluster manager will distribute the
requests in some manner that attempts to distribute the workload
evenly among all the systems.
Fail-over
Fail-over is similar to load balancing. However, instead of requests
being distributed among all the cluster nodes,
one system processes all the requests. Only when that system goes
down will one of the other systems in the cluster take over.
High Availability
While it would be desirable to have all
computers working all the time, the reality is that computers do
sometimes go down. In some situations this is merely a nuisance,
but in others it can be devastating. Therefore computer companies
have devised methods of increasing the availability of systems.
High availability is a method by which system resources are kept
available as often as possible. Clustering provides a convenient
way to do this. Instead of paying exorbitant costs for hardware
redundancy, multiple systems can be clustered together to provide
the needed resources. If one of the systems fails, the others can
take over the workload.
High availability can be implemented with either hardware or
software. Hardware systems are usually more expensive, but software
solutions are generally not cheap either. The more reliability you
require, the more you will end up paying.
Availability is often measured in percentage of uptime. A
typical server may be up 99% of the time, whereas a system designed
for high availability may be up 99.99% of the time (less than an
hour of downtime per year). This is often referred to as "four
nines" availability.
High availability can be achieved using either load balancing or
fail-over.
How a Cluster Works
The cluster manager is the core of the cluster. It makes the
determination of how work is to be divided among the cluster nodes.
The cluster manager divides up the workload and sends a piece of
the workload to each cluster node. The cluster node then processes
that piece of work. It either sends the result back to the cluster
manager, or it sends the result directly to the client that
requested the result.
Traffic Management
For the service-oriented clustering that TurboLinux Cluster Server
implements, the workload management is
called traffic management. This is because the work to do is to
respond to incoming network service requests. The cluster manager
must direct network traffic amongst all the cluster nodes. In this
way, it acts much like a traffic cop.
The traffic scheduling algorithm used by TurboLinux Cluster
Server is called modified weighted round-robin. This mechanism
tries to ensure that traffic is distributed evenly among all the
nodes in the cluster, proportional to the amount of workload that
each node can handle. Each server is assigned a weight to specify
its performance relative to the other systems.
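The proportional effect of weighting can be sketched in a few lines of Python. This is only an illustration of the general idea, not the product's actual scheduling code, and real weighted round-robin schedulers typically interleave nodes rather than dispatching each node's share in a burst:

```python
from itertools import cycle

def build_schedule(weights):
    """Expand a {node: weight} map into one repeating dispatch cycle.

    A node with weight 3 appears three times per cycle, so it receives
    three times the traffic of a weight-1 node.
    """
    order = []
    for node, weight in sorted(weights.items()):
        order.extend([node] * weight)
    return order

# node1 is rated to handle three times the load of node2
dispatcher = cycle(build_schedule({"node1": 3, "node2": 1}))
first_cycle = [next(dispatcher) for _ in range(4)]
print(first_cycle)  # ['node1', 'node1', 'node1', 'node2']
```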
The scheduling algorithm is further enhanced to support client
persistency. When this feature (also called the sticky bit) is
enabled, a specific client will be bound to a particular server
within the cluster. Some services such as SSL-enabled services
require authentication each time a new client connects to the
server. Without persistency, each time the client connects to a
different server within the cluster, the user is prompted to
re-enter their password.
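The persistency mechanism amounts to remembering each client's first assignment. A minimal Python sketch of the idea (illustrative only, not the product's implementation):

```python
# Once a client has been assigned a server, later connections reuse that
# assignment instead of being rescheduled.
sticky_table = {}

def pick_node(client_ip, schedule_next):
    """Return the node bound to client_ip, binding it on first contact."""
    if client_ip not in sticky_table:
        sticky_table[client_ip] = schedule_next()
    return sticky_table[client_ip]

first = pick_node("192.0.2.7", lambda: "node1")
# Even if the scheduler would now prefer node2, the client stays on node1:
repeat = pick_node("192.0.2.7", lambda: "node2")
print(first, repeat)  # node1 node1
```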
Cluster Server provides three different ways to forward traffic
from the cluster manager to the nodes. These are:
- Direct forwarding
- Tunneling
- NAT
Direct Forwarding
Direct forwarding can be used when the ATM and the cluster node
are attached to the same network segment or subnet. Packets
forwarded using this method are sent directly to the MAC address of
the cluster node. The IP packet is not modified at all; the cluster
node will see it exactly as it arrived at the ATM.
This is the preferred method, because it is the fastest and has
the least overhead. The direct forwarding method also has the
advantage that outbound traffic (responses being returned to the
client) does not need to be sent through the ATM; reply packets are
sent directly out to their destination.
Tunneling
If a cluster node is not located on the same segment as the
ATMs, you can use the tunneling forwarding mechanism. Tunneling is
a way to encapsulate IP packets within other network traffic. It is
used to make a virtual direct connection between two systems. With
this point-to-point connection, you can be sure that the packet
will arrive on the cluster node via the virtual connection.
The tunneling method only works with Linux and UNIX systems. It
uses the IP-IP kernel module to create the point-to-point
connection between the traffic manager and the cluster node. The
kernel in use on the cluster node must be configured to have IP
tunneling support. The kernel supplied with TurboLinux Cluster
Server has this support built in, and the Cluster Server
daemon can automatically configure both ends of the link for
you. You can also set up the tunnel interfaces yourself,
establishing the point-to-point connection by hand.
The encapsulation process introduces some overhead that will
reduce performance somewhat as opposed to the direct forwarding
method. Like the direct forwarding method, outbound packets do not
need to be sent through the ATM; they will be sent directly from
the cluster node to the client.
NOTE The IP tunneling used in Cluster Server is not encrypted,
so it is possible for others to intercept any packets traveling
from the traffic manager to the nodes. If you need to add nodes
that are outside your LAN, you should implement a Virtual Private
Network (VPN) in order to secure data transmission.
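The encapsulation step amounts to prepending a second IPv4 header whose protocol field is 4 (IP-in-IP) and whose addresses are the two tunnel endpoints. The sketch below shows only that framing; the checksum is left at zero and fragmentation is ignored, since the kernel's IP-IP module handles those details:

```python
import struct

IPPROTO_IPIP = 4   # protocol number for IP-in-IP encapsulation

def encapsulate(inner_packet: bytes, tunnel_src: bytes, tunnel_dst: bytes) -> bytes:
    """Wrap an IP packet in an outer IPv4 header addressed to the tunnel peer."""
    total_len = 20 + len(inner_packet)
    outer = struct.pack(
        "!BBHHHBBH4s4s",
        0x45, 0, total_len,        # version/IHL (IPv4, 20-byte hdr), TOS, length
        0, 0,                      # identification, flags/fragment offset
        64, IPPROTO_IPIP, 0,       # TTL, protocol = 4, checksum (left 0 here)
        tunnel_src, tunnel_dst,
    )
    return outer + inner_packet

# Hypothetical endpoints: traffic manager 192.168.1.1, node 10.0.0.5.
inner = b"original packet bytes"
wrapped = encapsulate(inner, bytes([192, 168, 1, 1]), bytes([10, 0, 0, 5]))
```

The cluster node strips the outer header and processes the inner packet as if it had arrived directly, which is what makes the virtual point-to-point connection transparent.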
NAT
NAT is an abbreviation for Network Address Translation. It is
often used to hide a private network behind a firewall connected to
the Internet. Defined in RFC 1631, NAT was designed to help
mitigate the rapid depletion of the IP address space.
The NAT box sits between the private network and the public
network. It modifies outbound packets from the private network to
make them appear to have come from the NAT box itself. When packets
are sent to the NAT box, it determines which system on the internal
network the packet should go to. It normally does this by keeping a
table of connections that have been initiated. For each connection
made by a client on the private side, the table directs replies to
be sent to that client. The version of NAT used by the ipchains
package on Linux is sometimes called IP masquerading.
If the operation of NAT sounds familiar, that's because it works
much like a cluster traffic manager. Although NAT is normally used
to hide client systems,
it is used to hide servers when used in a cluster. This
difference is important, because it changes the way the connection
table is used. In TurboLinux Cluster Server, the NAT method uses
the same connection table that is used by the other two traffic
forwarding methods.
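In the cluster direction, the translation can be sketched as follows: an inbound packet's destination is rewritten to a chosen node and the connection is recorded, and the node's reply is rewritten back so the client only ever sees the cluster address. This dictionary-based sketch is purely illustrative (the real table lives in the kernel, and the addresses and round-robin choice are made up):

```python
class NatCluster:
    """Sketch of cluster NAT: hide servers instead of clients."""

    def __init__(self, virtual_ip, nodes):
        self.virtual_ip = virtual_ip
        self.nodes = nodes
        self.table = {}             # (client_ip, client_port) -> node

    def inbound(self, client_ip, client_port):
        key = (client_ip, client_port)
        if key not in self.table:   # new connection: pick a node (round robin)
            self.table[key] = self.nodes[len(self.table) % len(self.nodes)]
        return self.table[key]      # packet's destination is rewritten to this node

    def outbound(self, client_ip, client_port):
        # The reply passes back through the NAT gateway; its source is
        # rewritten to the virtual address so the client sees only the cluster.
        assert (client_ip, client_port) in self.table
        return self.virtual_ip

nat = NatCluster("192.168.1.100", ["10.0.0.1", "10.0.0.2"])
node = nat.inbound("172.16.0.9", 40000)
seen_by_client = nat.outbound("172.16.0.9", 40000)
```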
NAT simplifies configuration, because you do not need to make
any special configuration changes to the cluster nodes themselves.
All you have to do is make sure that the cluster nodes are on the
internal subnet, and have their default gateway set to the NAT
gateway address defined in the cluster configuration file. NAT also
provides some added security, because the cluster nodes cannot be
accessed directly from the outside. The downside is that NAT has
slightly reduced performance, because all outbound traffic must go
through the NAT box and the address translation process.
NAT cannot be used with some network services. For example, FTP
is problematic under NAT because it uses two separate TCP
connections on different ports. Other services cannot be used if
they include IP addresses or port numbers within the high-level
portion of the protocol. See RFC 1631 for more details.
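FTP illustrates the problem concretely: the client announces its data-connection address inside the control channel as a PORT command (syntax from RFC 959), so a NAT box that rewrites only packet headers passes the client's private address through unchanged. A small parser shows what is embedded in the payload (the sample address is hypothetical):

```python
def parse_port_command(line: str):
    """Extract the IP and port a client advertises in an FTP PORT command.

    Format: PORT h1,h2,h3,h4,p1,p2 where the port is p1*256 + p2.
    """
    numbers = [int(n) for n in line.split(" ", 1)[1].split(",")]
    ip = ".".join(str(n) for n in numbers[:4])
    port = numbers[4] * 256 + numbers[5]
    return ip, port

# A client behind NAT advertises its *private* address here; unless the
# NAT box also rewrites this payload, the server cannot connect back.
ip, port = parse_port_command("PORT 192,168,1,5,78,40")
```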
Cluster Management
Managing a cluster is a bit more complicated than just managing
all the systems in the cluster. You must maintain each server as
well as the system as a whole. Cluster management concentrates
mainly on the cluster manager. That's where all the interesting
functionality is implemented.
Cluster management primarily involves monitoring the performance
of the cluster. You need to monitor each system as well as the
whole cluster. If an individual system is overloaded, you can
adjust the cluster configuration so that it doles out less work to
that system; or there may be some configuration issue with that
particular server. You should also monitor the performance of the
cluster as a whole. If all the cluster nodes are heavily loaded,
you may want to add an additional node or two to scale up the
performance.
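The adjustment described above amounts to lowering an overloaded node's scheduling weight so the traffic manager doles out less work to it. The toy sketch below is only a model of that idea; the threshold and the halving rule are invented for illustration and are not Cluster Server's actual scheduler parameters:

```python
def rebalance(weights, loads, threshold=0.8):
    """Halve the weight of any node whose load exceeds the threshold."""
    adjusted = {}
    for node, weight in weights.items():
        if loads[node] > threshold:
            adjusted[node] = max(1, weight // 2)   # dole out less work
        else:
            adjusted[node] = weight                # leave healthy nodes alone
    return adjusted

weights = {"node-a": 10, "node-b": 10}
loads = {"node-a": 0.95, "node-b": 0.40}
new_weights = rebalance(weights, loads)
```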
Another important aspect of managing a cluster is making sure
all the systems are running the same software and using the same
content. TurboLinux Cluster Server comes with some synchronization
tools to help you replicate content, so that all the servers are
consistent.
Shared Data Storage
In order for two or more systems to provide the same access to
the same data, they must have some way to share that data. This is
actually a much more difficult thing to do than would appear at
first glance. If the data changes frequently, there must be some
way to keep all the systems synchronized. This section looks at
some software and hardware solutions that can be used to share
data.
Software
The easiest shared storage mechanisms are implemented in
software. The hardware solutions are more powerful and robust, but
in many instances a simple software method will be enough to share
data.
Synchronization
The most basic way of sharing data is by copying the data in
question to each server. Of course, this will only work if the data
is changed infrequently, and always by someone with administrative
access to all the servers in the cluster.
TurboLinux Cluster Server comes with two synchronization tools.
One is used to synchronize the configuration of the servers. The
other is used to synchronize content. These tools can be run
directly or accessed through the turboclusteradmin program. They
will be covered in detail in chapter 7. If you can use the
synchronization tools to maintain data consistency, you will
probably find them to be the easiest solution. They provide you
with data redundancy without the need for any complex
administration.
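Whatever tool performs it, content synchronization boils down to comparing what each node holds against a master copy and pushing only what differs. A minimal digest-comparison sketch (file names and the in-memory layout are hypothetical; real tools compare files on disk):

```python
import hashlib

def digest(data: bytes) -> str:
    """Fingerprint file content so comparison does not need the full bytes."""
    return hashlib.sha256(data).hexdigest()

def files_to_push(master: dict, replica: dict) -> list:
    """Names of files whose content on the replica is missing or differs."""
    return sorted(
        name for name, data in master.items()
        if name not in replica or digest(replica[name]) != digest(data)
    )

master  = {"index.html": b"v2", "logo.png": b"same"}
replica = {"index.html": b"v1", "logo.png": b"same"}
stale = files_to_push(master, replica)
```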
There are other replication methods available for data. One of
the more common replication systems coming into use is the
Lightweight Directory Access Protocol (LDAP). With LDAP, you can
keep a database that is
replicated across several systems. This provides a database
system with redundancy and reliability, and is relatively easy to
set up. LDAP is not a general-purpose database, and does not
implement SQL. It is intended as a directory of network information
and is object-based. However, you may find that it can be adapted
to fit your needs.
Distributed File Systems
If your data changes too frequently to do manual
synchronization, you should consider using a distributed file
system. Your options here include NFS, AFS, DFS, Coda, Intermezzo,
and GFS.
UNIX and Linux systems typically use NFS to share data over the
network. NFS is a well-known system and is easy to configure as a
server or as a client. However, NFS has many problems. It does not
have very good security and has no provisions for replicating the
data to multiple systems. Thus, if you use NFS, you will most
likely still have a single point of failure, which may be one of
the reasons you wanted to create a cluster in the first place.
Several newer distributed file systems have been developed to
overcome the shortcomings with NFS, but none of them have become
significant enough yet to replace NFS.
One alternative that has much in common with NFS while replacing
its broken authentication mechanism is the Andrew File System
(AFS). AFS is an outgrowth of the Andrew Project at Carnegie Mellon
University in Pittsburgh. AFS is licensed commercial software. The
most important aspect of AFS is its secure authentication
mechanism, based on the Kerberos protocol. AFS has a number of
other performance, usage, and administration enhancements that make
it preferable to NFS, even in secured areas.
Closely related to AFS is Transarc's Distributed File System
(DFS). Both are available commercially from Transarc. DFS is an
enterprise-level shared storage solution with sophisticated
replication and load balancing
capabilities. A key design goal in DFS is transparency across
domains and networks within an enterprise, allowing for easy
centralized administration.
The Coda file system is an Open Source distributed file system
that now comes with the Linux kernel. Coda is an attempt to create
a system much like AFS, with some more modern features as well. It
attempts to fix some of the availability problems by providing
disconnected operation, server side replication, continued
operation during partial network failures, and scalability and
bandwidth adaptation features.
Intermezzo is another Open Source distributed file system. One
of the advantages of Intermezzo is that it sits in a layer above
the native file system, allowing you to use any native file system
to store the data. It is more aware of modern computing
environments and equipment capabilities than Coda. Like Coda, it
stresses high availability, large scale replication, and
disconnected networks. Intermezzo is still in the beta stages of
development at the time of this writing. You can check it out at
http://www.inter-mezzo.org/.
One of the best distributed file system solutions is the Global
File System (GFS). This solution requires hardware support in
addition to the file system software. The hard drives must be
directly attached to all the systems participating in the file
system (i.e. all the nodes in the cluster). This can be done using
either double-ended SCSI or fibre-channel.
Hardware
Most high-end shared storage systems are hardware based. The two
primary technologies used are Storage Area Networks (SAN) and
Network Attached Storage (NAS). Solutions can also be implemented
using fibre-channel and double-ended SCSI chains.
Storage Area Networks
A Storage Area Network (SAN) is a highly fault tolerant,
distributed network in itself dedicated to the purpose of providing
absolutely reliable data serving operations. Conceptually, a SAN is
a layer which sits between application servers and the physical
storage devices, which themselves may be NAS devices, database
servers, traditional file servers, or near-line and archival
storage devices. The software associated with the SAN makes all
this back-end storage transparently available and provides
centralized administration for it.
The main distinguishing feature of a SAN is that it runs as an
entirely separate network, usually employing a proprietary or
storage-based networking technology. Most SANs these days are
moving towards the use of fibre-channel. It should be clear that
implementing a SAN is a non-trivial undertaking. Administering a
SAN will likely require dedicated support personnel. Therefore SANs
will most likely only be found in large enterprise
environments.
Network Attached Storage
A NAS device is basically an old-fashioned file server turned
into a closed system. Every last clock cycle in a NAS device is
dedicated to pumping data back and forth from disk to network. This
can be very useful in freeing up application servers (such as mail
servers, web servers, or database servers) from the overhead
associated with file operations.
Another way to think of a NAS device is as a hard drive with an
Ethernet card and some file serving software thrown on. The
advantage of a NAS box over a file server is that the NAS device is
self-contained and needs less administration. Another key aspect is
that a NAS box should be platform independent. As an all-purpose
storage device, a NAS box should be able to transparently serve
Windows and UNIX clients alike.
High Speed Drive Interfaces
Fail-over clustering would not be practical without some way for
the redundant servers to access remote storage devices without
taking a large performance hit, as would occur if these devices
were simply living on the local network. Two common solutions to
this problem are double-ended SCSI and fibre-channel.
Double-ended SCSI, also known as differential SCSI, exploits a
redundancy in the design of SCSI to allow longer SCSI cables and
thus make practical high speed outboard storage devices. On a
single-ended SCSI cable, every other signal line is actually
grounded. Double-ended SCSI uses these redundant ground lines to
carry the same signal as the adjacent signal line, with the voltage
inverted. The net effect is a signal with twice the strength and
thus a much longer potential cable length, up to 25 meters,
without signal loss. Double-ended SCSI suffices when the computers
using the external device are more or less adjacent.
Fibre-channel interfaces actually use fiber optic cables to
carry the encoded SCSI signals via laser light, in much the same
way that high speed network interfaces do. These have essentially
unlimited local range (up to 6 miles) at high bandwidth and are a
key technology in implementing SANs. Of course they are quite
expensive in comparison to strictly local interfaces.
Chapter 3 INSTALLATION
This chapter will show how to install TurboLinux Cluster Server.
The installation program is pretty simple and will guide you
through the process. Once you have installed the product, you must
configure it before it can be used in a cluster. Configuration will
be covered in the next chapter.
In this chapter we will discuss:
Installation overview
Installing Cluster Server
Post-installation
Troubleshooting installation issues
NOTE Be sure to perform a complete system backup before
attempting to install TurboLinux Cluster Server. Like any software
installation, there is a small possibility that something could go
wrong and corrupt data on the system.
Installation Overview
TurboLinux Cluster Server must be installed on every primary and
backup ATM within the cluster. Although it does not need to be
installed on every cluster node, we recommend that you install the
software on every system in the cluster. Running Cluster Server on
all the nodes will greatly simplify the amount of configuration and
maintenance work you will have to do. You will not have to
configure the systems individually if they are running Cluster
Server, because the daemon will automatically perform the
configuration for you. In addition, the content on systems running
Cluster Server can be easily synchronized. Without Cluster Server
on the nodes, you will likely have to manually synchronize any
content to ensure that the cluster remains consistent.
Cluster Server is provided on a CD-ROM. If you do not have a
CD-ROM drive on each system in the cluster, you can mount the CD on
one system and export it using NFS or some other shared file
system. Then mount the network share on the other systems to
perform the installation.
Once you have the CD-ROM mounted, either locally or from a
network share, you can change to the directory containing the
software and start the installation program. The program will guide
you through the process step by step. In most instances you will be
able to choose the defaults and press ENTER to continue on to the
next step.
When the installation is complete, the program will prompt you
to reboot. Make sure that you do not have any other applications
with unsaved data running on any other consoles. Press ENTER to
reboot. The system will shut down cleanly and reboot.
Installing Cluster Server
Installing TurboLinux Cluster Server is simple if you follow
these steps and allow the installation program to guide you through
the process. As with any software installation, you will need to be
logged in as root to perform these steps.
1. Mount the CD-ROM:
# mount /mnt/cdrom
2. Change to the directory that the CD-ROM is mounted on:
# cd /mnt/cdrom
3. Read any related documentation and release notes, especially
the README and RELEASE.NOTES files. (You can also read these files
from within the TLCS-install program -- they are accessible via the
main menu.)
4. Start the installation program.
# ./TLCS-install
The installation program first determines what Linux
distribution it is running under. The currently supported
distributions are TurboLinux Server and Red Hat Linux. If the
installation program is unable to detect a supported system, it
will exit. You can tell the installer which distribution you have
by specifying redhat or turbolinux at the command prompt:
# ./TLCS-install turbolinux
There is a test mode available via the --test or -t option,
which will not actually install anything, but will instead validate
that all the prerequisites exist in order to install successfully.
There is also help available with --help or -h, which gives you the
syntax and options available.
5. The welcome screen will appear. Press ENTER or click OK to
continue.
Figure 3.1 Installation Welcome Screen
6. Read the entire license when it appears before you continue
with the installation. You can use the cursor keys to scroll
through the text. Once you've read the license, you can click I
agree to continue. If you choose
not to agree with the license, clicking Exit will exit the
installation program and return you to the prompt.
Figure 3.2 License Agreement
7. After you agree to the licensing terms, the program will
attempt to determine what distribution of Linux you are running. If
it is successful, it
will display the name of the distribution along with the kernel
version, as shown in the figure below.
Figure 3.3 Detected Kernel Version and Distribution
Click OK or press ENTER to continue.
8. This brings you to the installation menu. Your choices here
are the guided install, installation of the modified kernel,
installation of the libraries and utilities, and LILO
configuration. You can also access the documentation files from
this menu. The menu is pictured below.
Figure 3.4 Installation Menu
You should choose Guided Install, which will walk you through
the process and install all the necessary pieces. The other options
are primarily used to install portions of the product at a later
time, or if something goes wrong. The guided install just takes you
through each section in turn.
9. Starting the guided installation will begin by warning you
that the kernel will need to be replaced. Installing a new kernel
could potentially render your system inoperable. (The original
kernel will still be available -- just choose linux at the LILO
prompt.) Make sure that you have backed up any important data
before proceeding. Click Yes to continue, or No if you need to exit
and back up the system.
10. At the next screen, choose the kernel you would like to
install. The program will do its best to choose kernels that are
newer than the one you are running but have a similar
configuration.
Figure 3.5 Choosing the Kernel to Install
Unless you have a really good reason, you should choose the
newest version listed. If there is no suitable kernel, check the
TurboLinux web site to see if there is one available for download
that will fit your needs. Otherwise you will have to compile and
install a custom kernel. This procedure will be covered in chapter
8.
NOTE If you are running a 2.0 kernel, you should upgrade to a
2.2 series kernel before installing TurboLinux Cluster Server.
Upgrading from 2.0 to 2.2 is a major undertaking, and you should be
comfortable with those changes before you install Cluster
Server.
If you are running a 2.4 kernel, you will need to check the
TurboLinux Cluster Server web page to see if there is an acceptable
kernel available for download.
Once you have chosen the appropriate kernel, click Proceed to
continue the installation.
11. Next you can choose which pieces of the kernel to install.
Unless you are running low on disk space, accept the default, which
will include all the extra pieces.
Figure 3.6 Kernel Packages
You will definitely want to include the base kernel package and
the extra kernel utilities. The kernel sources are required if you
want to rebuild the kernel at a later time. The header files are
required if you want to build any software on the system. You can
probably uncheck the support for PCMCIA and iBCS. PCMCIA is a
hardware interface mostly used with notebook computers. It is
unlikely that you will need PCMCIA support on a server. The iBCS
module allows you to run programs that conform to the Intel Binary
Compatibility Standard. It allows you to run portable binaries that
were written for SCO and other Intel-based UNIX systems. If you
don't have any such programs, it is not required.
Click Proceed once you've selected the kernel packages to
install. The kernel and additional modules will be installed. This
may take a minute or two.
12. After the kernel has been installed, the installer will
present you with the administrative tools available. Accept the
default, installing all of the listed packages.
Figure 3.7 Package Installation Menu
These packages provide the functionality of the Cluster Server
as well as several administration tools. Here is a brief overview
of what they do:
The Cluster Management Console is a web-based tool that allows
you to monitor and modify the cluster. It will be covered in
chapter 7 of this manual.
The Cluster Server daemon is a key component of the Cluster
Server software. Do not uncheck it unless you are certain that it
has already been installed.
The Cluster Agents (also called ASAs) allow you to monitor
different services on the cluster nodes. They will be discussed in
chapter 8. You
should install the cluster agents so that the cluster daemon can
determine when a service on a cluster node becomes unavailable.
The TLCS Administration tools include the menu