Architectural Overview: Network Deployment
IBM Confidential
Unit Objectives

This unit discusses:
• Network deployment runtime flow
• Network deployment concepts and terminology:
  - Cell
  - Node
  - Node agent
  - Deployment manager
• Network deployment administration flow
• Managing Web servers with WebSphere
• Platform messaging overview
• High availability overview
• Data replication service overview
• Name service overview
Network Deployment Runtime Flow

[Diagram: A browser sends HTTP(S) requests to a Load Balancer, which distributes them across HTTP servers. Each HTTP server's plug-in, driven by a plug-in configuration file, forwards HTTP(S) traffic to application servers AppSrv01 and AppSrv02 on Node A and AppSrv03 and AppSrv04 on Node B. A Java client connects to the application servers directly over RMI/IIOP. The application servers access application data in application databases over JDBC.]
Network Deployment Concepts

[Diagram: A cell containing two V6 nodes, each running multiple V6 application servers.]

• A node is a logical grouping of application servers
  - Each node is managed by a single node agent process
  - Multiple nodes can exist on a single machine through the use of profiles
• A deployment manager (DMgr) process manages the node agents
  - Holds the configuration repository for the entire management domain, called a cell
  - Within a cell, the administrative console runs inside the DMgr
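The cell/node/server hierarchy can be explored from the wsadmin scripting client. The sketch below is illustrative only: it runs inside the wsadmin shell (Jython mode) connected to a deployment manager, not as a standalone script, and the node names shown in comments are assumptions.

```
# Inside: wsadmin -lang jython, connected to the DMgr
# Walk the cell -> node -> server hierarchy from the configuration repository.
print AdminConfig.list('Cell')                       # the one cell in this management domain
for node in AdminConfig.list('Node').splitlines():   # e.g. NodeA, NodeB (names are examples)
    print AdminConfig.showAttribute(node, 'name')
    print AdminConfig.list('Server', node)           # node agent plus application servers
```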
Managed versus Unmanaged Nodes

• A managed node is a node that contains a node agent
• An unmanaged node is a node in the cell without a node agent
  - Enables the rest of the environment to be aware of the node
  - Useful for defining HTTP servers as part of the topology
  - Enables creation of different plug-in configurations for different HTTP servers
Network Deployment Administration Flow

[Diagram: The Web-based administrative console and the wsadmin command-line client send commands over HTTP(S) or RMI/IIOP to the deployment manager, whose Web container hosts the admin application and AdminServices. The deployment manager holds the MASTER configuration: cell configuration, Node A and Node B configuration, AppSrv01 through AppSrv04 configuration, and application EAR files. The node agents on Node A and Node B each run AdminServices and keep local copies of the cell, node, server, and EAR files needed by their application servers.]
• Each managed process (node agent, deployment manager, application server) starts with its own set of configuration files
• The deployment manager contains the MASTER configuration and application files
• Any changes made at the node agent or server level are local and will be overridden by the MASTER configuration at the next synchronization (update)
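For illustration, a wsadmin session against the deployment manager follows this pattern (the host name is an example, and 8879 is the usual default DMgr SOAP connector port; verify both against your installation):

```
C:\> wsadmin -lang jython -conntype SOAP -host dmgrhost -port 8879
wsadmin> print AdminControl.getNode()   # confirms the connection point (the DMgr node)
wsadmin> # ... make configuration changes with AdminConfig ...
wsadmin> AdminConfig.save()             # commits the changes to the MASTER repository
```

Because wsadmin connects to the deployment manager, saved changes land in the master repository and reach the nodes through synchronization, matching the flow in the diagram above.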
File Synchronization

• The deployment manager contains the master configuration
• Node agents synchronize their files with the master copy
  - Automatically: at startup, and periodically
  - Manually: through the administrative console or the command line
• During synchronization:
  1. The node agent checks for changes to the master configuration
  2. New or updated files are copied to the node

[Diagram: The file synchronization service in the deployment manager pushes the MASTER configuration (cell, node, and server configuration files plus application EARs) to the file synchronization service in the Node A node agent, which updates the local configuration copies used by AppSrv01 and AppSrv02.]
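A manual synchronization can be forced in two ways; this sketch assumes default paths, node name NodeA, and the default DMgr SOAP port 8879 (all of which may differ in your environment):

```
REM On the node's machine, with the node agent stopped:
<profile_root>\bin\syncNode.bat dmgrhost 8879

REM Or, with the node agent running, from wsadmin connected to the DMgr:
wsadmin> na = AdminControl.queryNames('type=NodeSync,node=NodeA,*')
wsadmin> AdminControl.invoke(na, 'sync')
```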
WebSphere Network Deployment Profiles

• Benefits of profiles in network deployment:
  - Think of a profile as representing a node
  - Multiple profiles can be installed on a single machine
  - Each profile uses the same product files
• Profile types:
  - Stand-alone node: equivalent to a Base or Express application server
  - Managed node: a node that has been federated
  - DMgr: deployment manager
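As an illustrative sketch, a new application server profile can be created and then federated into the cell. In WebSphere V6.0 the profile command is wasprofile; the profile name, paths, and port below are assumptions for the example:

```
<was_root>\bin\wasprofile.bat -create
    -profileName AppSrv01
    -profilePath <was_root>\profiles\AppSrv01
    -templatePath <was_root>\profileTemplates\default

REM Federate the new node into the cell (the DMgr must be running):
<profile_root>\bin\addNode.bat dmgrhost 8879
```

After addNode completes, the node appears in the cell as a managed node with its own node agent.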
Managing Web Servers with WebSphere

• The WebSphere V6 DMgr can help manage external Web servers
• IBM HTTP Server 6.0 (special case: no node agent needed)
  - Can have plugin-cfg.xml files automatically distributed to it
  - Can be started and stopped
  - Its httpd.conf can be managed
• Other Web servers (node agent needed)
  - Can have plugin-cfg.xml files automatically distributed to them
  - Can be started and stopped
• Web servers can be defined within WebSphere cell topologies
  - Managed node (local) or unmanaged node (remote)
  - Managed nodes contain a node agent to control the Web server
  - Unmanaged nodes use the IHS Admin Service instead of a node agent to control the Web server
Web Server: Unmanaged Node

• The Web server is not managed by WebSphere
• Allows the WebSphere system administrator to create custom plug-in files for a specific Web server
• Manually FTP or copy the plug-in configuration file from the DMgr machine to the Web server machine

[Diagram: A V6 deployment manager manages a V6 node (node agent plus application servers). An unmanaged Web server definition represents a Web server on a separate OS whose plug-in module reads a plug-in configuration XML file delivered by manual copy or a shared file system from the DMgr machine.]
IHS as Unmanaged Node (Remote)

• WebSphere V6 and IHS have special enhancements
  - The IHS administrative process provides administrative functions for IHS within WebSphere
  - Provides the ability to start and stop IHS, make configuration changes to httpd.conf, and automatically push the plug-in configuration file to the IHS machine
  - Does not need a node agent on the Web server machine

[Diagram: A V6 deployment manager manages two V6 nodes (node agents plus application servers) and sends HTTP commands to an IHS Admin process on an unmanaged node. The IHS Admin process starts and stops IBM HTTP Server, manages its httpd.conf, and installs the plug-in configuration XML file read by the plug-in module (remote plug-in install).]
Web Server: Managed Node (Local)

• Install the Web server on a managed node
• Create a Web server definition within the DMgr
• The node agent receives commands from the DMgr to administer the Web server
• The plugin-cfg.xml file is propagated through the file synchronization service and lives under the config directory

[Diagram: A V6 deployment manager manages a V6 node whose node agent starts and stops the local Web server (managed Web server definition). The Web server's plug-in module reads a plug-in configuration XML file installed locally on the node (local plug-in install).]
IHS Administration Overview

• Direct administration of IHS 6.0 is done by manually editing httpd.conf
• There is no Web-based console for IHS as there was in previous versions

[Diagram: IBM HTTP Server with its httpd.conf configuration file and a plug-in module reading the plug-in configuration XML file.]
IHS Administrative Server

• The IHS Administration server runs as a separate instance of IHS
• The admin component for IHS 6.0 includes:
  - IHS Admin configuration file (admin.conf)
    - The default port for the IHS Admin server is 8008
  - IHS Admin authentication password file (htpasswd.admin)
    - Initially BLANK, which prohibits access to IHS Admin
    - The administrator updates the IHS Admin password file using:
      > htpasswd -cm ..\conf\admin.passwd <user_name>
• To start or stop the administrative server:
  - <ihs_root>\bin\adminctl start
  - <ihs_root>\bin\adminctl stop
  - Or use the Windows service
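Putting the two steps above together, a first-time setup sequence might look like this (the install path and user name are examples; htpasswd prompts for the password interactively):

```
C:\IBMHttpServer> bin\htpasswd -cm conf\admin.passwd webadmin
New password: ******
Re-type new password: ******

C:\IBMHttpServer> bin\adminctl start
```

Once the admin server is running on its port, the deployment manager can send it the start, stop, and configuration commands described above.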
Web Server Custom plugin-cfg.xml

• Enterprise applications need to be mapped to one or more Web servers (as well as to application servers)
  - Can be done through the administrative console
  - Alternatively, use the script generated during the installation of the plug-in, which can automate the mapping of all the applications to the Web server:
    configure<Web_server_name>.bat in <plugin_root>\bin
• Mapping the applications to specific Web servers causes the custom plugin-cfg.xml files for those Web servers to include the information for those applications
  - Web servers target specific applications running in a cell
  - Automatically generated by the deployment manager

[Diagram: Installed applications need to be mapped to the HTTP server; running C:\...\configurewebserver00.bat performs the mapping, and the HTTP server plug-in then reads the resulting plug-in configuration file.]
Managing plugin-cfg.xml Files

• plugin-cfg.xml files are now automatically generated and propagated
  - This is the default behavior
  - This behavior is configurable through the console
• plugin-cfg.xml files can be generic to a cell or custom to a Web server
  - Generating a cell-generic plugin-cfg.xml file:
    - Use the command-line script <was_root>\bin\GenPluginCfg.bat
    - No longer available through the console
  - Generating a Web server custom plugin-cfg.xml file:
    - Use the administrative console
    - Need to map applications to Web servers
    - Can customize each Web server's plug-in settings
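For orientation, a generated plugin-cfg.xml contains routing rules shaped like the following. This is a simplified, illustrative excerpt (cluster, server, host, and URI names are invented for the example; the real file is generated by the tools above, not hand-written):

```
<Config>
  <VirtualHostGroup Name="default_host">
    <VirtualHost Name="*:9080"/>
    <VirtualHost Name="*:80"/>
  </VirtualHostGroup>
  <ServerCluster Name="AppSrv01_Cluster">
    <Server Name="NodeA_AppSrv01">
      <Transport Hostname="nodea.example.com" Port="9080" Protocol="http"/>
    </Server>
  </ServerCluster>
  <UriGroup Name="default_URIs">
    <Uri Name="/snoop/*"/>
  </UriGroup>
  <Route ServerCluster="AppSrv01_Cluster"
         UriGroup="default_URIs" VirtualHostGroup="default_host"/>
</Config>
```

The Route element is the key: a request whose host:port matches the VirtualHostGroup and whose URI matches the UriGroup is forwarded to a server in the named ServerCluster.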
Managing Web Server Plug-in Properties

• Each Web server can have customized plugin-cfg settings
  - Not just application mappings
Web Server Definition - At a Glance

Topology                                | Web server administration capability                                             | Requirement                                            | Topology applicability
----------------------------------------|----------------------------------------------------------------------------------|--------------------------------------------------------|-----------------------
Unmanaged Web server node               | None                                                                             | None                                                   | All packages
IHS as a special case of unmanaged node | Start, stop Web server; manage (push) plug-in config file to Web server machine  | None                                                   | ND cell
Managed Web server node                 | Start, stop Web server; manage (push) plug-in config file to Web server machine  | Requires node agent running on the Web server machine  | ND cell
Platform Messaging Overview

• Integrated asynchronous messaging capabilities for the WebSphere platform
  - Integral JMS messaging service for WebSphere Application Server
  - Fully compliant JMS 1.1 provider
• Service Integration Bus
  - Intelligent infrastructure for service-oriented integration
  - Unifies SOA, messaging, message brokering, and publish/subscribe
• Complements and extends WebSphere MQ and Application Server
  - Shares and extends messaging family capabilities
WebSphere V6: High Availability Overview

• The High Availability (HA) manager is used to eliminate single points of failure
• The HA manager is responsible for running key services on available servers rather than on a dedicated one (such as the DMgr)
• Can take advantage of fault-tolerant storage technologies such as Network Attached Storage (NAS)
• Hot standby and peer failover for critical singleton services
  - WLM routing, PMI aggregation, JMS messaging, Transaction Manager, and so forth
  - A failed singleton starts up on an already-running JVM
  - Planned failover takes < 1 second
Data Replication Service

• The Data Replication Service (DRS) is responsible for replicating in-memory data among WebSphere processes
  - Helps allow for high availability and failover recovery
  - Improves performance and scalability
• What uses this service?
  - Stateful session EJB persistence and failover
  - HTTP session persistence and failover
  - Dynamic cache replication
• Uses either peer-to-peer or client-server replication techniques
Failover of Stateful Session EJBs

• Uses DRS, similar to HTTP session failover
• Always enabled
• WLM fails beans over to a server that already has a copy of the session data in memory, if possible
• Stateful session bean replicas can be collocated with HTTP session replicas, with hot failover
  - The J2EE 1.4 specification requires that HTTP session state objects be able to contain local references to EJBs
Node Group Overview

• Enables mixing nodes with different capabilities within the same cell for administration purposes
  - z/OS and distributed nodes
  - WBI nodes and base nodes
• Provides a mechanism for validating node capability before performing certain functions
  - Example: when creating a cluster, servers from z/OS nodes and distributed nodes cannot be mixed within the same cluster
• The default configuration with a single node group is sufficient unless you want to mix platforms within a cell

[Diagram: A WebSphere V6 cell with several node groups: DefaultNodeGroup containing the DMgr node, Dist_NG3 containing distributed Nodes 1 and 2, and zOS_NG1 (z/OS Nodes 3 and 4) and zOS_NG2 (z/OS Nodes 5 and 6), each z/OS pair residing in its own z/OS Sysplex.]
Name Service

• Provides a JNDI name space
• Registers all EJB and J2EE resources (for example: JDBC providers, JMS, J2C, URL, and JavaMail) that are hosted by the application server
• There is one name server per application server
• Configured bindings can map resources to remote locations

[Diagram: A JNDI client performs lookups against a federated name space spanning Nodes 1, 2, and 3: the deployment manager (port 9809) hosts a name space, each node agent hosts a name space (port 2809), and each application server hosts a name space (ports 9810, 9811); lookups are resolved across all of them.]
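The name space is accessed through standard JNDI. The sketch below is illustrative only: it needs the WebSphere client libraries and a running server to execute, and the host name, node/server names, and EJB binding are assumptions for the example. The ports follow the diagram's defaults (node agent bootstrap port 2809).

```
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class LookupSample {
    public static void main(String[] args) throws NamingException {
        Hashtable env = new Hashtable();
        // WebSphere's initial context factory, supplied by the client runtime
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.ibm.websphere.naming.WsnInitialContextFactory");
        // Bootstrap against the node agent's name server (default port 2809)
        env.put(Context.PROVIDER_URL, "corbaloc::nodea.example.com:2809");
        Context ctx = new InitialContext(env);
        // Topology-based name, resolved across the federated name space
        Object home = ctx.lookup("cell/nodes/NodeA/servers/AppSrv01/ejb/MyEJBHome");
        System.out.println(home);
    }
}
```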
Virtual Hosts

• A configuration that enables a host machine to resemble multiple host machines
  - Allows one machine to support multiple applications
  - Associated with the cell, not a single node
  - Enables the plug-in to route requests to the correct servers
• Each virtual host has a logical name and one or more host aliases
  - Each alias is a host name and port combination (wildcards allowed)
  - For example: *:80, *:443, *:9080, *:9060
• There are two default virtual hosts
  - default_host: used for accessing the default applications
    - Example: http://localhost:9080/snoop
  - admin_host: used for accessing the administrative console
    - Example: http://localhost:9060/ibm/console

[Diagram: A browser request flows through the HTTP server and its plug-in to AppSrv03; the virtual host configuration is what lets the plug-in route the request to the correct server.]
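A new host alias can be added to default_host from wsadmin. This sketch runs inside the wsadmin shell connected to the DMgr (not standalone), and the alias values are examples:

```
wsadmin> vh = AdminConfig.getid('/VirtualHost:default_host/')
wsadmin> AdminConfig.create('HostAlias', vh, [['hostname', '*'], ['port', '9081']])
wsadmin> AdminConfig.save()
```

After saving, the plug-in configuration must be regenerated and propagated so the Web servers pick up the new alias.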
Edge Components

• The WebSphere Application Server Network Deployment package contains the following Edge Components functionality:
  - Load Balancer
  - Caching Proxy
• Edge Components install separately from WebSphere Application Server
• The Load Balancer is responsible for balancing load across multiple servers, within either local area networks or wide area networks
• The Caching Proxy's purpose is to reduce network congestion within an enterprise by offloading security and content delivery from Web servers and application servers

[Diagram: A client sends requests to a Load Balancer, which distributes them across a cluster of load-balanced servers fronted by a Caching Proxy.]
Unit Summary

Having completed this unit, you should be able to explain:
• Network deployment runtime flow
• Network deployment concepts and terminology:
  - Cell
  - Node
  - Node agent
  - Deployment manager
• Network deployment administration flow
• Managing Web servers with WebSphere
• Platform messaging overview
• High availability overview
• Data replication service overview
• Name service overview