An overview of the adapter container in IBM Sterling B2B Integrator and a step-by-step process for setting up containers to achieve high availability.
- Balasubramanian Dhanavel ([email protected]), Senior Staff
Software Engineer, IBM Sterling B2B Integrator Support
Table of Contents
Overview about adapter containers
Adapter container background
Achieving high availability
Adapter Container JVM Configuration
Setting up Containers
Monitor Adapter Container
Adapter Container JVM Properties
Adapter Container JVM Logs
Commands
Summary
Overview
This article discusses one way to separate communications from back-end processing to achieve higher availability. The IBM Sterling B2B Integrator V5.1 and V5.2 adapter container breaks out mailbox-based protocols (FTP, SFTP, FTPS, Connect:Direct) and uses standard multi-node clustering.
Adapter container
You can achieve higher availability by using the adapter container. The 5.1 and 5.2 releases added a new component called an adapter container. When you use the adapter container to host communication adapters, they have a different life cycle than the application server-independent virtual machine (ASI JVM), which does data processing. See figure 1.
Figure 1. Connectivity and deployment diagram
From the Sterling B2B Integrator 5.2 documentation
Adapter availability is the key to measuring Sterling B2B Integrator (SBI) stability. Activities that prevent an adapter from being available might affect the ability to do business. Activities that currently require Sterling B2B Integrator to be unavailable include, but are not limited to:
Installing a patch
Restarting the system to pick up property file updates
Out-of-memory and other system errors
Adapter Container Background
A number of adapters can be run in the adapter container, including custom adapters as well as those shipped with the product. These adapters can run in a separate JVM by creating an adapter container as part of the product installation. An adapter container can also be easily created at any time afterwards. The initial adapter container creation requires a system outage, so it is best to plan it for a maintenance period or for when the product is initially installed. Just like adding additional cluster nodes, adding additional adapter container nodes does not require a system outage. The following server adapters are of interest for this discussion:
FTP Server Adapter
FTPS Server Adapter
SFTP Server Adapter
HTTP Server Adapter
HTTPS Server Adapter
Connect:Direct Server Adapter
Before an adapter can be configured to run in an adapter
container, the system must be set up with one or more adapter
containers. The adapter container is very much like a node in a
multi-node cluster. In the case of an adapter container, the node
is limited and cannot run schedules or business processes. The
intent of the adapter container is only to host adapters and
isolate those adapters from the other nodes. The number of adapter
nodes to configure and start depends on system load and adapter
type. It is recommended that custom adapters be deployed in their
own adapter container and not mixed with shipped adapters.
Once an adapter container has been configured and started, these
adapters will show the adapter container as a deployment option in
the UI. The adapter container will be listed in the target node
drop-down list during service deployment.
As the diagram shows, the adapter container is loosely coupled
to Sterling B2B Integrator. All payloads from the adapter are sent
to the Sterling B2B Integrator processing engine by first being
written to either the database or file system and then having a
message sent via a Java Message Service (JMS) queue to Sterling B2B
Integrator. The Java Message Service provider is an external
process much like the database. It has its own life cycle and must
be made highly available just like the database.
The adapter container isolates adapters from engine failures and the engine from adapter failures. This can be a huge advantage when running an adapter that uses third-party or native (JNI) libraries that utilize memory differently.
Achieving high availability
This article focuses on server protocol adapters only (such as
FTP server or SFTP server). Client protocols (such as FTP client or
SFTP client) are orchestrated through the data processing engine.
When the orchestration node is down for maintenance, outgoing
partner data might experience a delay. To keep the adapter
endpoints available for incoming partner data, the endpoints must
be server adapters.
The adapters that run in the adapter container communicate
directly with the database to store payload data and retrieve
mailbox data. If an adapter must run a business process, the adapter executes the process by putting a message on a JMS queue. When you deploy this configuration, target an adapter at an individual adapter container to guarantee that the adapter runs in that container.
The adapter container shares a database with the ASI VM, and any
global database outage impacts the adapter container. Because the
adapter container will access only a few tables in the Sterling B2B
Integrator database, database maintenance that normally requires
all of Sterling B2B Integrator to be shut down, such as index
rebuilds, can take place while the adapter container runs.
Because the adapter container shares the database with ASI, no
additional ongoing configuration is needed. End-to-end visibility
is the same as when you do not use adapter containers.
No business processes execute in the adapter container. Protocols such as AS2, which are implemented primarily by business processes, have outages if the ASI node (business process engine node) is unavailable. Set up multiple instances of ASI and the adapter container so that when you apply a patch that impacts the adapter container, one node runs while the other is patched.
Advantages:
Easy maintenance.
End-to-end visibility.
Separation of lifecycle. The ASI JVM can be recycled to pick up configuration changes or clear memory issues independent of the adapter container.
Server adapters (endpoints) can be cleanly isolated from client adapters and data processing.
Disadvantages:
Potential communications outage during database maintenance or if a table that the adapter container relies on is modified.
Protocols such as AS2 require asynchronous message disposition notifications (MDNs); otherwise the protocols do not work in the container.
Does not cover Global High Availability scenarios.
Assumptions:
Highly available database solution, such as Oracle RAC.
Highly available JMS provider, such as MQSeries®.
This feature and deployment pattern is available in Sterling B2B Integrator 5.1 and 5.2. Proper use of this feature and pattern can increase system availability, reduce downtime, and isolate potential system failures. Following this model, you can split communications and back-end processing (for example, EDI) into separate processes with separate lifecycles to achieve these results.
Adapter Container JVM Configuration:
The setupContainer command is used to configure one adapter container JVM. ContainerJVMConfigure.xml.in in install/properties is the Ant file for the setupContainer command.
The command does the following:
1. Adds CLUSTER=true in sandbox.cfg
2. Creates a subdirectory with the name pattern node#AC# under install/properties and install/logs for the container and copies all necessary property files into the subdirectory
3. Modifies property files in install/properties to set the cluster flag to true, changes ASI startup to use the containerStartup.properties file, generates the runAll command to start the ASI and container JVMs, and so on
4. Updates multinodesCentralOps.properties.in for the new container JVM
5. Updates the central.properties.in file to make the central ops server work for the new container JVM. There is only one central ops server in an install; ASI and all configured container JVM(s) need to talk to the central ops server.
6. Generates a customer_overrides.properties file in the subdirectory for the container JVM
7. Updates log property files in the subdirectory created in step two
8. Updates service property files in the subdirectory created in step two
9. Updates the SERVICE_DEF table so that the relevant adapters can be deployed in a container node (SERVICE_TYPE is 12)
Here is the usage of the command:
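Based on the setupContainer.sh # port syntax listed in the Commands section below, a representative invocation looks like the following sketch; the install path and port value are illustrative assumptions only:
cd <install_dir>/bin
./setupContainer.sh 1 20400   # container number 1; 20400 is an illustrative base port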
Setting up Containers:
The container JVM node name is the current node name with AC and the container number appended. You can set up more than one adapter container within one install. For example, run the following command on node1:
setupContainer.sh 1
The command sets up a container JVM called node1AC1.
And:
setupContainer.sh 2
This sets up a container JVM called node1AC2.
If you run these commands on the node2 ASI install, the container names would be node2AC1 and node2AC2.
Note:
To configure adapter container JVM(s) for a cluster environment, you must set up the normal ASI cluster environment first, before setting up any adapter container JVM. For existing cluster environments, you can go ahead and configure adapter container JVM(s) on each node as required.
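For example, on a newly added cluster node the order looks like this sketch (the node and container numbers are illustrative; see also the sample procedures at the end of this article):
./startCluster.sh 2      # configure the ASI cluster node first
./setupContainer.sh 1    # then configure the first adapter container JVM on that node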
Setting up a container on an existing Cluster install (node1AC1 on node1, node2AC1 on node2) and starting each container:
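A command sketch of this procedure, assuming the commands are run from each node's install bin directory:
# On node1:
./setupContainer.sh 1    # creates node1AC1
./startContainer.sh 1    # starts node1AC1 (ASI on node1 must already be running)
# On node2:
./setupContainer.sh 1    # creates node2AC1
./startContainer.sh 1    # starts node2AC1 (ASI on node2 must already be running)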
Monitor Adapter Container:
Each adapter container acts as a cluster node. You can monitor their status from the UI on the Operations->System->Cluster->Node Status page. Below is a sample of that page for two configured adapter containers, node1AC1 and node2AC1, both of which are active. As with a normal ASI cluster, when a node goes down, an "abnormal event of node went down" is fired and an email is sent out by default.
node1AC1 and node2AC1 details:
You can also go to the troubleshooter page to check more information for an adapter container node. For example, you can check which adapters have been deployed in the container, which adapter is active, and so on.
Here is an example of the troubleshooter page for node1AC1:
Adapters deployed on node1AC1:
Adapter Container JVM Properties:
There is a /properties directory for the ASI install. For each adapter container JVM, a subdirectory is created under /properties to contain all of the specific property files for that container JVM. The subdirectory name is the container node name. The subdirectory is created when the setupContainer adapter container configuration command runs.
Most adapter-specific properties are in system_overrides.properties in the subdirectory; it contains the container-specific properties after setupContainer executes. The container-specific JNDI properties are in jndi.noapp.properties.in. The subdirectory also contains two log property files, so all log files used by the container JVM are created in a subdirectory of install/logs. The node name is the log subdirectory name. The servers.properties.in file in the subdirectory is the one the container JVM uses when it starts.
If there are other properties that you would like to set up for this container, they can be added to this system_overrides.properties.in file. For example, JDBC and database pool properties can be set specifically for an adapter container, separately from ASI.
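As a sketch, an override added for node1AC1 might look like the following entries in properties/node1AC1/system_overrides.properties.in; the JDBC pool property names are illustrative assumptions, not values taken from this article:
# illustrative entries only; actual property names depend on your database pool configuration
jdbcService.oraclePool.maxsize=20
jdbcService.oraclePool.initsize=5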
Adapter Container JVM Logs:
General ASI log files are still in the install/logs directory. The container logs are stored in the install_dir/install/logs/node*AC* directory (install_dir\install\logs\node*AC* on Windows). In this convention, node*AC*, the first * refers to the ASI node number and the second * refers to the container number. For example, in node2AC1, 2 refers to the ASI node number and 1 refers to the container number.
Each container JVM's log files are in the subdirectory of install/logs named after that container node. For example, log files for node1AC1 are in the node1 install/logs/node1AC1 directory.
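For example, to inspect the node1AC1 container logs on a UNIX install (the specific log file name is an illustrative assumption):
cd <install_dir>/logs/node1AC1
ls                    # list the container's log files
tail -f noapp.log     # follow one log; the file name is illustrative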
Commands:
UNIX Commands:
setupContainer.sh # port - Configures an adapter container JVM
startContainer.sh - Starts one or more adapter container JVM(s)
stopContainer.sh - Stops one or more adapter container JVM(s)
run.sh - Starts ASI as it always has; it starts ActiveMQ, the command line 2 adapter client, and the Ops and NoApp servers
stopASI.sh - Stops the NoApp and Ops servers
runAll.sh - Does everything run.sh does, plus starts all configured adapter container JVM(s)
hardstop.sh - Stops everything, including adapter container JVM(s)
Windows Commands:
setupContainer.cmd - Configures an adapter container on Windows
InstallContainerWindowsService.cmd - Installs an adapter container as a Windows service
startContainerWindowsService.cmd - Starts an adapter container Windows service
stopContainerWindowsService.cmd - Stops an adapter container Windows service
UninstallContainerWindowsService.cmd - Uninstalls an adapter container Windows service
startWindowsService.cmd - Starts all Windows services, including all ASI Windows services plus all adapter container(s)
stopWindowsService.cmd - Stops all Windows services, including all ASI Windows services plus all adapter container(s)
startASIWindowsService.cmd - Starts the ASI Windows services, not including adapter containers
stopASIWindowsService.cmd - Stops the ASI Windows services, not including adapter containers
Startup Adapter Container:
To start the adapter container on a UNIX platform, run the startContainer.sh command. You can start one or all configured adapter container(s). To start one adapter container, specify the container number that was configured using the setupContainer.sh command.
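For example, to start the adapter container that was configured with setupContainer.sh 1:
./startContainer.sh 1    # ASI must already be running on this node (see below)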
Start Adapter Container
startContainer.sh/startContainerWindowsService.cmd starts one or
all adapter container(s). To run those commands, ASI is required to
be up and running. Otherwise, the command fails.
Stop Adapter Container
stopContainer.sh/stopContainerWindowsService.cmd stops one or
all adapter container(s). It won't touch ASI; it leaves it as it
is.
Start everything
runAll.sh/startWindowsService.cmd starts everything, including
ASI and adapter container(s).
Stop everything
hardstop.sh/stopWindowsService.cmd stops everything, including
ASI and adapter container(s).
Stop only ASI
stopASI.sh/stopASIWindowsService.cmd stops the Ops server and NoApp server and leaves the adapter container nodes associated with the same install running. These commands are used when installing a patch, so the adapter containers stay up. Once the patch install is complete, restart everything, which means restarting ASI and restarting the adapter container(s). This reduces adapter downtime while installing a patch.
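A sketch of that patch sequence on UNIX; the fix pack file name is an illustrative placeholder, and the final stop/start pair is one way to restart everything:
./stopASI.sh                        # stop NoApp and Ops; adapter containers keep serving traffic
./InstallService.sh <fixpack_file>  # install the patch (file name is a placeholder)
./hardstop.sh                       # stop everything, then...
./runAll.sh                         # ...restart ASI and all configured adapter container JVM(s)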
Start only ASI
run.sh/startASIWindowsService.cmd commands start ASI as they did
before.
Patch Install Process
Since the adapter container mostly uses property files in the install/properties area, with customer_overrides.properties defined in install/properties/[containerNodeName], only a few property files need to be copied over to the container subdirectory. An Ant project takes care of this automatically during a patch install. So, after a patch install, the adapter container should be configured and work properly, just as it did before the patch install.
Sample Procedure To Setup Adapter Container:
1. Normal single SBI environment after a new SBI install or fix pack install
After installing SBI and installing the patch:
startCluster.sh 1 false -- run this before adding a container
setupContainer.sh 2 -- sets up the second adapter container node
runAll.sh -- starts ASI and the adapter container nodes
2. Existing SBI cluster environment
Stop your node1
Install the fix pack/patch using InstallService.sh
startCluster.sh 1 -- you need to run this after your patch install
setupContainer.sh 1 -- sets up the first adapter container node on node1
Stop your node2
Install the fix pack/patch using InstallService.sh on node2
startCluster.sh 2 -- you need to run this after your patch install
setupContainer.sh 1 -- sets up the first adapter container node on node2
On node1: runAll.sh -- starts node1 and the adapter container node on node1
On node2: runAll.sh -- starts node2 and the adapter container node on node2
3. New SBI cluster install environment
Install node1 as normal
startCluster.sh 1 -- sets up node1 as usual
Install node2 as normal
startCluster.sh 2 -- sets up node2 as usual
On node1: setupContainer.sh 1 -- sets up adapter container 1 on node1
On node2: setupContainer.sh 1 -- sets up adapter container 1 on node2
Run runAll.sh on node1 and node2
Summary:
Achieve higher availability with IBM® Sterling B2B Integrator by using the adapter container for communications adapters. This article describes how to split communications and back-end processing (for example, EDI) into separate processes with separate lifecycles. By following this model, you can reduce downtime and isolate potential system failures.
Related Links:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Wf96854c0c8fc_4762_9b83_c6247feca5fc/page/Increase%20availability%20with%20IBM%20Sterling%20B2B%20Integrator%20adapter%20containers
http://www-01.ibm.com/support/knowledgecenter/SS3JSW_5.2.0/com.ibm.help.manage_svcs_adpts.doc/Adpt_Overview.html