Configuring Load Balancing for EMC ViPR SRM 3.6

Author: Diego Protta Casati

Abstract: Use load balancing in a scaled-out EMC ViPR SRM 3.6 deployment to avoid performance bottlenecks that could impact data collection and report generation. The Load Balancer component enables metrics from various collectors to be spread across multiple backends based on decisions from an arbiter.

December 2014
Overview

As new devices are added to ViPR SRM, the overall load on the system increases as new metrics are pushed into the databases. This is when a Load Balancer comes into play. Similar in purpose to a network-level load balancer, ViPR SRM's Load Balancer is a component that allows metrics from various collectors to be spread across multiple Backends based on decisions from an Arbiter. This allows you to scale out a current ViPR SRM deployment, avoids performance bottlenecks that could impact report generation, and provides a means for the system to grow as requirements evolve.
Naturally, funneling all of the metrics through a single component is likely to become a performance bottleneck. To avoid this scenario, another component was designed: the Load Balancer Connector (LBC). The LBC offloads the Arbiter because a global destination table, recording which Backend each metric should go to, is shared with all of the LBCs.
Audience

This article is for ViPR SRM installers, administrators, or anyone who manages the ViPR SRM application.
Purpose

After reading this article, you will understand the load balancing concepts used with ViPR SRM, along with specific procedures and recommendations for installation.
Concepts
Arbiter
Only one Arbiter is needed, as it is the component that decides which Backend each metric should go to. If the Arbiter is out of service (for example, during virtual machine downtime), the only consequence is that metrics from devices that are new to the Arbiter are retained locally on their respective collectors until the Arbiter is back in service. Because all of the LBCs have a copy of the device routing table, all previously learned devices continue to work as if nothing had occurred.
Note: The Arbiter is not a single point of failure.
Load Balancer Connector
The LBC is responsible for receiving the metrics from all of the collectors that point to it and for forwarding these metrics to the correct Backend. Every time a new metric is received, the LBC checks a local routing table, kept in a file, to determine which Backend the metric should be routed to. When a device is seen for the first time, the LBC will not have that information in its routing table file, so it proxies the metric to the Arbiter. The Arbiter then, based on a routing algorithm, assigns the metric to a Backend, updates its own balancing list file, and sends this update to all of its known LBCs.
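The lookup-then-proxy behavior described above can be sketched as follows. This is a hypothetical illustration only; the on-disk routing table format, file layout, and device names are assumptions, not the LBC's actual implementation.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the LBC routing decision. The table format shown
# here is an assumption, not the real ViPR SRM on-disk format.
table=$(mktemp)
cat > "$table" <<'EOF'
array-001 Backend0
array-002 Backend1
EOF

route_metric() {
  # Print the Backend for a device, or defer to the Arbiter when unknown.
  local device=$1 backend
  backend=$(awk -v d="$device" '$1 == d { print $2 }' "$table")
  if [ -n "$backend" ]; then
    echo "$device -> $backend"
  else
    echo "$device -> unknown, proxying to Arbiter"
  fi
}

route_metric array-001   # known device: routed from the local table
route_metric array-999   # unknown device: deferred to the Arbiter
```

In the real component, the "proxying to Arbiter" branch is also what causes the Arbiter to assign the new device to a Backend and push the updated balancing list back to every registered LBC.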
Note: Only one LBC should be installed per VM or physical host on which data collection occurs. For example, multiple collector-managers will share one LBC instance on any given host.
Update checks
By default, the LBC checks for an update from the Arbiter every 60 seconds (the update-check-time property), as shown in balancer-connector.xml under /opt/APG/Collecting/Load-Balancer/Load-Balancer.
New metrics arrive at the Load Balancer Connector from one or more Collectors. As this is the first time the LBC sees these new metrics, it forwards them to the server where the Arbiter is installed. The Arbiter, in turn, places these metrics on the Backends that are currently known to it.
The Arbiter shares its recently created balancing list with all of the LBCs registered with it.
With knowledge of the balancing list, the LBCs will be able to send the metrics to the proper Backends without having to consult the Arbiter. The LBCs will regularly ask for an updated version of the balancing list from the Arbiter, by default every 60 seconds.
Prerequisites

EMC M&R 6.3 or higher (formerly Watch4net)
Verify that ports 2020 (Load Balancer components) and 48443 (Webservice-Gateway) are not blocked between the collectors and the Arbiter.

Procedures
vApp consideration
As of ViPR SRM 3.5, the LBC is already installed on the Collector Virtual Machine when deploying the vApp that bundles 4 VMs. If you are running this configuration, the following procedure is not necessary.
Installation
1. Navigate to Centralized-Management > Physical Overview > <host> > Modules, where <host> is the Collector host on which you want to install the Load Balancer Connector module.
2. Click Install. The Packages Installation page displays.
3. In the Categories list, click Block.
4. In the Packages list select load-balancer-connector.
5. Click Launch and install the four modules (Collector-Manager, FailOver-Filter, LoadBalancer and the load-balancer-connector).
6. On the Load Balancer Connector, the following table describes the expected values:

Field                                        Value                 Notes
Web-Service gateway hostname or IP address   [localhost]           Primary Backend IP (vApp deployment)
Web-Service port number                      [48443]               Default
Web-Service username                         [admin]               Default
Web-Service password                         [•••••]               Default
Web-Service Category                         [Collecting]          Default
Web-Service Module                           [Collector-Manager]   Default
Web-Service Instance                         [Load-Balancer]       Must match the Arbiter Web-Service Instance; the default is "Load-Balancer"
Socket Collector port                        [2020]                Default
Arbiter hostname or IP address                                     Primary Backend IP (vApp deployment)
Additional configuration tasks
Removing the access restriction for localhost

Remove the access restriction for localhost by commenting out the last line in APG-WS.xml:

<!-- Restrict access to localhost only
Adding a new Backend

Each time a new Backend is added, the Arbiter must be informed about it. This is done by executing, on the Arbiter machine, the reconstruction script (load-balancer-reconstruction.sh) found under the bin directory of the Load Balancer collecting instance:

PRIMARY_BE:/opt/APG/Collecting/Load-Balancer/Load-Balancer # bin/load-balancer-reconstruction.sh
Reconstruction should also be performed whenever a database operation occurs (for example, a database split). To use the reconstruction script, you must first stop the Arbiter and then run the script.
Reconstruction can also be used when the Arbiter is broken or when a new database is included.
Reconfiguring SolutionPacks
With the new LBC in place, you can now reconfigure the SolutionPacks to make use of it. After being reconfigured, they will send their data through the LBC instead of directly to the Backend.
1. Select a SolutionPack and then select its Data collection component.
2. Reconfigure the SolutionPack by changing its Data collection parameters so that it uses the new LBC, listening on port 2020 on localhost.
3. Click Reconfigure. Repeat this process for all of the SolutionPacks on this host.
If the LBC configuration is established before SolutionPacks are installed, it can be used for all SolutionPacks you subsequently install.
Changing the balancing key
By default, the Arbiter uses the property "device" as its balancing key. It is possible to change this to other values, but in most cases doing so is a bad decision that causes performance issues.
NOTE: Changes to this file are not preserved during an upgrade or reconfiguration of the SolutionPack.
For more information please refer to the APG-Load-Balancer.pdf document.
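The Arbiter's actual routing algorithm is not documented here, so the following is only a hypothetical sketch of how a balancing key (the "device" property by default) could deterministically map a device to a Backend; the hashing scheme and names are assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical balancing-key sketch: sum the character codes of the key
# (here, a device name) and take the result modulo the number of Backends.
# The real Arbiter algorithm may differ.
pick_backend() {
  local device=$1 n_backends=$2
  local sum=0 i c
  for ((i = 0; i < ${#device}; i++)); do
    printf -v c '%d' "'${device:i:1}"   # ASCII code of each character
    sum=$((sum + c))
  done
  echo "Backend$((sum % n_backends))"
}

# The same key always maps to the same Backend, which is why the balancing
# key must be a stable property of the collected metrics.
pick_backend array-001 2
pick_backend array-001 2
```

The sketch also illustrates why changing the balancing key is risky: a key with poor distribution (or one that changes between collections) would concentrate metrics on a few Backends or scatter a device's history across several of them.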
In the Arbiter's main file (arbiter.xml), it is also possible to identify which Backends the Arbiter is currently aware of. They are listed under the container named "end-points".

<?xml version="1.0" encoding="UTF-8"?>
<!--
* Copyright (c) 2014, EMC Corporation.
* All Rights Reserved.
* This software contains the intellectual property of EMC Corporation
* or is licensed to EMC Corporation from third parties.
* Use of this software and the intellectual property contained therein
* is expressly limited to the terms and conditions of the License
* Agreement under which it is provided by or on behalf of EMC.
Troubleshooting

On the collectors where the LBC is configured, check for communication issues using the log files located at:
/opt/APG/Collecting/Collecting/Load-Balancer/logs
If there is indeed a communication problem between the LBC and either the Arbiter or one of the Backends, the FailOver Filter will start, and the logs will contain entries pointing to that event.
Balancing lists
The balancing lists are clear-text files that can be found under /opt/APG/Collecting/Load-Balancer/Load-Balancer/data/ and are named Backend0_1, Backend2_1, and so on.
Check communication ports
Check the status of the communication ports (2020 and 48443).
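As a minimal sketch, a TCP connect test run from a collector can confirm that both ports are reachable. The ARBITER_HOST placeholder below is an assumption; substitute your Arbiter's actual hostname or IP.

```shell
#!/usr/bin/env bash
# Check TCP reachability of the Load Balancer (2020) and Webservice-Gateway
# (48443) ports using bash's /dev/tcp pseudo-device. ARBITER_HOST is a
# placeholder; set it before running.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} reachable"
  else
    echo "${host}:${port} blocked or closed"
  fi
}

check_port "${ARBITER_HOST:-127.0.0.1}" 2020    # Load Balancer components
check_port "${ARBITER_HOST:-127.0.0.1}" 48443   # Webservice-Gateway
```

A "blocked or closed" result for either port points to a firewall rule between the collector and the Arbiter, or to the corresponding service not listening.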
Summary
Using the information in this article, you should be able to configure load balancing and enable SolutionPacks to take advantage of the configuration. You also have information to help troubleshoot load balancer issues you may encounter.