Last updated: 2018-01-12
In-Guest High Availability (HA) Configuration in Red Hat OpenStack Cloud using Veritas InfoScale Availability (VCS)
Who should read this paper
Administrators who want to implement in-guest HA architectures for unmanaged
application services in the cloud
Contents

Preparing OpenStack to configure a VIP for high availability
CP server configuration
Introduction

OpenStack is based entirely on open source software and is backed by a vibrant global ecosystem of users
and vendors. Initiated in 2010, this flexible cloud platform has matured rapidly and is now ready for production
cloud deployments in many environments.
In this document, we describe how to prepare Red Hat OpenStack to configure a virtual IP (VIP) for high
availability (HA). Veritas InfoScale Availability (formerly Veritas Cluster Server or VCS) is used to provide
in-guest HA to the VIP.
Configuration

The following graphic depicts an OpenStack network configuration with two VCS nodes running on Red Hat
OpenStack virtual machines (VMs) in the same availability zone:
Note: This sample configuration uses Red Hat OpenStack version 10 deployed with the KVM hypervisor, with
Veritas InfoScale Availability 7.2 configured on the OpenStack VMs.
This graphic includes the following elements:

Public network
A public network contains a floating IP address, which is a service provided by Neutron. The floating IP
does not use any DHCP service, nor is it set statically within the guest; the guest operating system does
not know that a floating IP address was assigned to it. The Neutron L3 agent is responsible for delivering
packets to the interface with the assigned floating address. Instances that have a floating IP address
assigned can be accessed from the public network by using the floating IP.

Private network
A private network contains a private IP address, which the DHCP server assigns to the network interface of
an instance. The address is visible from within the instance by using a command like "ip a". The address is
typically part of a private subnet and is used for communication between instances.

Router 1
The router that connects the private network to the public network for access to any external networks.

vcs-1 and vcs-2
These VMs are created in the OpenStack environment within the same availability zone, and they form the VCS
cluster nodes.

LLT1 and LLT2
These networks are used for Low Latency Transport (LLT) communication between the VCS cluster nodes.
Note: These networks must have static IP addresses. Otherwise, a network partition may occur and the
cluster may enter a jeopardy state.
The following graphic depicts an application configured for in-guest HA in Red Hat OpenStack using VCS:
The OpenStack instances host an Apache web server and an application whose binaries are placed on an
NFS location; all of these are managed by VCS. Both instances exist in a private subnet and are connected
to the public network via a router for internet access.
This sample configuration uses LLT over UDP. However, if you are using a single availability zone
and subnet, you can also use LLT over Ethernet.
Each OpenStack VM has two network interfaces, eth1 and eth2. Both instances, vcs-1 and vcs-2, have
subnet 192.168.90.0/24 on eth1 and subnet 192.168.100.0/24 on eth2.
The LLT links are configured to use both these subnets as a part of the intra-cluster communication.
The following graphic depicts a sample /etc/llttab file for the OpenStack instance, vcs-1:
The following graphic depicts a sample /etc/llttab file for the OpenStack instance, vcs-2:
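Because the original graphics are not reproduced here, the two files can be sketched as follows for LLT over UDP. The node names match the instances above, but the cluster ID, UDP ports, and host addresses are illustrative assumptions:

```
# /etc/llttab on vcs-1 (LLT over UDP)
set-node vcs-1
set-cluster 100
# link <tag> <device> <node-range> <link-type> <udp-port> <MTU> <local-IP> <bcast>
link eth1 udp - udp 50000 - 192.168.90.11 -
link eth2 udp - udp 50001 - 192.168.100.11 -
```

```
# /etc/llttab on vcs-2 (LLT over UDP)
set-node vcs-2
set-cluster 100
link eth1 udp - udp 50000 - 192.168.90.12 -
link eth2 udp - udp 50001 - 192.168.100.12 -
```

Each link line binds an LLT heartbeat link to a unique UDP port on the local IP of the corresponding subnet; the cluster ID must be identical on both nodes.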
The application runs inside the OpenStack instances, while the application binaries are placed on the
NFS location 10.209.124.11, which is outside the OpenStack network. The scope of this configuration
includes the mount agent only; it has been tested with NFS v3 and NFS v4 in this sample.
A coordination point (CP) server is configured outside the OpenStack network, which takes care of
fencing in case there is a network partition between the two cluster nodes. If required, you can also
configure a CP server inside the OpenStack network.
The OpenStack instances have the Apache server installed and running so that the Apache service can be made
highly available across the cluster nodes. To access the service from outside the OpenStack environment, a
virtual IP is configured. All of these resources are managed by VCS.
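The VCS service group for such a setup can be sketched in main.cf as follows. The resource names, mount point, export path, and Apache attributes below are illustrative assumptions, not taken from the original configuration:

```
group appsg (
    SystemList = { vcs-1 = 0, vcs-2 = 1 }
    )

    // Application binaries mounted from the external NFS location
    Mount appmnt (
        MountPoint = "/app"
        BlockDevice = "10.209.124.11:/export/app"
        FSType = nfs
        )

    // Virtual IP that floats between the cluster nodes
    IP vip1 (
        Device = eth1
        Address = "192.168.0.19"
        NetMask = "255.255.255.0"
        )

    // Apache web server managed by the VCS Apache agent
    Apache webserver (
        httpdDir = "/usr/sbin"
        ConfigFile = "/etc/httpd/conf/httpd.conf"
        )

    webserver requires vip1
    webserver requires appmnt
```

The dependency lines ensure that the VIP and the NFS mount are online on a node before Apache is started there, so the whole stack fails over as one unit.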
Preparing OpenStack to configure a VIP for high availability

To allow a VIP to communicate across cluster nodes, you need to map the same VIP to multiple ports.
Neutron does not allow such a configuration by default. To work around this limitation and provide HA for
the VIP, we use the allowed-address-pairs feature that Neutron provides.
Allowed-address-pairs allow you to specify IP address (CIDR) pairs that pass through a port. This enables the
use of protocols such as VRRP, which floats an IP address between two instances to enable fast data plane
failover.
To map the VIP with the instance port
1. Create a Neutron port for the VIP on the appropriate network. This example creates a port named port-vip1 on the private network.
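The original graphic showing this step is not reproduced; with the neutron CLI used in Red Hat OpenStack 10, the command might look like the following (the network name "private" is an assumption for illustration):

```shell
# Create a port named port-vip1 on the private network;
# Neutron assigns it a fixed IP from the subnet's DHCP range
neutron port-create private --name port-vip1
```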
2. After the port is created, the IP address 192.168.0.19, which the DHCP service assigned, can be used as the VIP.
Alternatively, you can assign a specific IP address to the newly created port:
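The original command graphic is missing here; a possible form of the command, with the subnet name assumed for illustration, is:

```shell
# Create the port with an explicitly chosen fixed IP instead of a DHCP-assigned one
neutron port-create private --name port-vip1 \
    --fixed-ip subnet_id=private-subnet,ip_address=192.168.0.19
```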
3. Now, map your VIP (192.168.0.19) to the appropriate ports of all the cluster nodes, so that the VIP can fail over and communicate across the cluster, using allowed-address-pairs:
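As the original graphic is not reproduced, the mapping can be sketched with neutron port-update against each node's eth1 port; the port IDs below are placeholders you must look up (for example, with neutron port-list):

```shell
# Allow the VIP to pass traffic through the port of each cluster node
neutron port-update <vcs-1-eth1-port-id> \
    --allowed-address-pairs type=dict list=true ip_address=192.168.0.19
neutron port-update <vcs-2-eth1-port-id> \
    --allowed-address-pairs type=dict list=true ip_address=192.168.0.19
```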
The allowed-address-pairs values are successfully updated for both the instances:
After performing the previous steps, the VIP 192.168.0.19 can be used to communicate across the cluster
within the network.
4. To enable communication outside the OpenStack environment, associate the floating IP to port-vip1 using the dashboard:
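As an alternative to the dashboard, the same association can be made from the neutron CLI; the IDs below are placeholders:

```shell
# Associate an already-allocated floating IP with the VIP port
neutron floatingip-associate <floating-ip-id> <port-vip1-id>
```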
CP server configuration

The coordination point server (CP server) is a software solution that runs on a remote system or cluster.
The CP server provides arbitration functionality by allowing the VCS cluster nodes to perform the following
tasks:

- Self-register to become a member of an active VCS cluster (registered with the CP server) with access to
  the data drives.
- Check which other nodes are registered as members of this active VCS cluster.
- Self-unregister from this active VCS cluster.
- Forcefully unregister other nodes (preempt them) as members of this active VCS cluster.

If required, set the loser_exit_delay parameter in the /etc/vxfenmode file according to your cluster setup.
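For server-based fencing, the relevant part of /etc/vxfenmode might look like the following sketch; the CP server hostname, port, and delay value are illustrative assumptions for this setup:

```
# I/O fencing in customized mode, using a CP server for arbitration
vxfen_mode=customized
vxfen_mechanism=cps
# CP server reachable from both cluster nodes; hostname and port are examples
cps1=[cps.example.com]:443
# Delay (in seconds) before the losing node exits after a fencing race
loser_exit_delay=55
```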
In short, the CP server functions as another arbitration mechanism that integrates within the existing I/O
fencing module.
You can configure a CP server by invoking the /opt/VRTS/install/installer -configcps command.