IBM WebSphere Developer Technical Journal

September 2006 -- Issue 9.7

This month's articles:

Using Spring and Hibernate with WebSphere Application Server: Get the most out of your open source environment

Using portal analytics with open source reporting tools: Monitor, manage, and adapt your portal using WebSphere Portal log data

Reliable and repeatable unit testing for Service Component Architecture modules, Part 2: Create repeatable tests for SCA modules that implement business processes

A guided tour of WebSphere Integration Developer, Part 6: Becoming more on-demand using dynamic business rules

Featured columns:

Comment lines from Scott Simmons: SOA governance and the prevention of service-oriented anarchy

Comment lines from Jason McGee: Dynamic middleware and the six attributes of virtualized application serving environments

The EJB Advocate: SOA represents the next step in the evolution of component-based applications

From the Editor

This issue of the IBM® WebSphere® Developer Technical Journal contains essential information on elements of Service Oriented Architecture (SOA), Service Component Architecture (SCA), and open source tools.

In our featured articles: Some advice for those considering using the Spring or Hibernate framework with WebSphere Application Server, and information to help you monitor, manage, and adapt your portal with WebSphere Portal log data and open source reporting tools.

In our continuing series: Learn how to test SCA components that use Business Process Execution Language (BPEL) to implement business processes, and give on-demand flexibility to integrated applications by adding dynamic business rules.

This month's guest columnists: Scott Simmons spells out the key(s) to SOA success, Jason McGee explains the critical characteristics of an autonomic computing environment, and The EJB Advocate tries to tell us what SOA and EJB have to do with each other.

Your required reading begins below...


Using Spring and Hibernate with WebSphere Application Server: Get the most out of your open source environment
by Tom Alcott, Roland Barcia, Jim Knutson, Sara Mitchell, and Ian Robinson

If you are at all considering using Spring or Hibernate with WebSphere Application Server, reading this article needs to go to the top of your to-do list. Originally published in September 2006, this article has been revised to reflect Interface21's recent certification of Spring Framework 2.1 with WebSphere Application Server. Spring is a popular open source project that aims to make the J2EE environment more accessible, and to deliver significant benefits to projects by increasing productivity, performance, test coverage, and application quality. Developers like the easy interfaces and XML-based configuration that accelerate J2EE development time and ease unit testing. Certainly, there are features in Spring that replicate function already embedded within WebSphere Application Server, and so it is not desirable for applications that will be deployed into the server to use this additional layer of framework code. But with judicious use, you can leverage many of Spring's ease-of-use development features together with WebSphere Application Server to quickly develop and deploy enterprise applications. Hibernate is one of several persistence frameworks that can be successfully used with WebSphere Application Server to provide the object-relational mapping of entity data stored in relational databases, provided that sufficient care is taken to avoid problematic scenarios. This article is not an exhaustive review of either framework, but a critical reference to help you successfully implement such scenarios, and one of the most popular articles ever published on developerWorks.

Read this article

Using portal analytics with open source reporting tools: Monitor, manage, and adapt your portal using WebSphere Portal log data
by Stefan Liesche and Steffen Uhlig

You've got a terrific portal application, but do you know what people are doing with it? If not, are you curious to find out? Or maybe you do know how your portal is being used right now, but want to know more about trends, user experience, and whether long-term expectations are being met. "Portal analytics" is a process that can help you understand how your portal is used. More specifically, it involves the collection, processing, analysis, and reporting of portal usage data. WebSphere Portal writes usage records in an industry-standard format to a dedicated log file that you can integrate with your preferred reporting and analytics tools. This article explains the elements of portal analytics, why you should even care about how your portal is used, and how to use the WebSphere Portal loggers. You can also step through a reporting exercise using the AWStats open source reporting tool. There's a lot of data out there just waiting to help you turn your terrific portal application into a spectacular success story.

Read this article

Reliable and repeatable unit testing for Service Component Architecture modules, Part 2: Create repeatable tests for SCA modules that implement business processes
by David J.N. Artus

Part 1 of this series explained how repeatable unit testing provides an efficient and reliable means of verifying the quality of application components, and applied automated testing methods to Service Component Architecture (SCA) modules. Testing SCA modules that implement business processes using Business Process Execution Language (BPEL) is a bit more complex -- which is why we have Part 2. This article describes issues you might come across when testing SCA components that use BPEL, and how using mock objects can make repeatable testing of these components a reality. Sample code is included, along with a simple framework you can use for easy construction of mock objects.

Read this article


A guided tour of WebSphere Integration Developer, Part 6: Becoming more on-demand using dynamic business rules
by Richard Gregory, Jane Fung, Greg Adams, and Randy Giffen

In the sixth article in this series exploring a service-oriented approach to application integration using WebSphere Integration Developer, we learn about business rules: a mechanism that can make your running application dynamic and flexible enough to handle changing business conditions without having to be redeployed. Business rules externalize and manage business logic separately from the main business process, and let you make changes at run time to keep up with the evolving on-demand business environment. Need to change an offer for different customers under different conditions? Want to test a promotion on the fly, and then stop it or extend it at your discretion? Don't recode the application; create business rules to make it happen. What does this mean to your application? Greater agility and less complexity: a combination we hope for with every solution, but one that is actually -- and easily -- achievable here. This article discusses rule groups, rule sets, and decision tables, and includes sample code you can use to complete a business rule development exercise.

Read this article

Comment lines from Scott Simmons: SOA governance and the prevention of service-oriented anarchy

"In my job as an SOA Architect, I work with successful -- as well as unsuccessful -- implementations of SOA. Customers ask me to specify the ingredients needed for SOA success and, equally important, the factors that contribute to failure. The common answer to both of these questions comes down to governance. Success with SOA does not 'just happen.' Customers finding success with SOA have a common characteristic: they have implemented a governance approach to support the design, development, deployment, and operations of an SOA solution framework..."

Read the entire column

Comment lines from Jason McGee: Dynamic middleware and the six attributes of virtualized application serving environments

"Let's face it: managing production application server environments is hard. Modern applications spread across a large number of machines have to handle an unpredictable load, they have to be available all the time, and they are constantly changing. So how do you manage this complexity? One promising approach is through the use of virtualization and automation. If the middleware could be smarter and could understand your goals, then the systems could be both cheaper and easier to manage. There are a number of products and technologies that claim to provide these benefits. How do you know which are the good ones...?"

Read the entire column

The EJB Advocate: SOA represents the next step in the evolution of component-based applications

"The EJB Advocate advocating SOA-related specifications? How did that happen? In this new discussion with a customer who just doesn't get what the big deal is about SOA, the EJB Advocate explains how service-oriented architecture is really an evolution of the EJB component model, and why EJB DNA really won't be going away..."

Read the entire dialog

Copyright © 2006 IBM. This material may not be reproduced, in whole or in part, without permission. IBM, the IBM logo, and WebSphere are trademarks of International Business Machines Corporation in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.


Using Spring and Hibernate with WebSphere Application Server
Get the most out of your open source environment

Level: Advanced

Tom Alcott ([email protected]), Consulting IT Specialist, IBM
Roland Barcia ([email protected]), Certified IT Specialist, IBM
Jim Knutson ([email protected]), WebSphere J2EE Architect, IBM
Sara Mitchell ([email protected]), Software Engineer, IBM
Ian Robinson ([email protected]), STSM, WebSphere Transactions Architect, IBM

20 Sep 2006 (updated 20 Jun 2007)

If you're considering using Spring or Hibernate with IBM® WebSphere® Application Server, this article explains how to configure these frameworks for various scenarios with WebSphere Application Server. This article is not an exhaustive review of either framework, but a critical reference to help you successfully implement such scenarios. (Updated for Spring Framework 2.1.) From the IBM WebSphere Developer Technical Journal.

Introduction

The Spring Framework, commonly referred to as Spring, is an open source project that aims to make the J2EE™ environment more accessible. Spring provides a framework for simple Java™ objects that enables them to make use of the J2EE container via wrapper classes and XML configuration. Spring's objective is to deliver significant benefits to projects by increasing development productivity and runtime performance, while also improving test coverage and application quality.

Hibernate is an open source persistence and query framework that provides object-relational mapping of POJOs (Plain Old Java Objects) to relational database tables, as well as data query and retrieval capabilities.

While many organizations are interested in discovering what benefits they can obtain from using these frameworks, IBM wants customers who do use them to know that they can do so with WebSphere Application Server in a robust and reliable way. This article describes how these frameworks can be used with WebSphere Application Server, and explains best practices for a variety of use cases so that you can get started with Spring or Hibernate as quickly as possible.

Using Spring

Spring is generally described as a lightweight container environment, though it is more accurate to describe it as a framework for simplifying development. The Spring Framework was developed by Interface21, based on publications by Rod Johnson on the dependency injection design pattern. Spring can be used either in standalone applications or with application servers. Its main concept is the use of dependency injection and aspect-oriented programming to simplify and smooth the transitions from development to testing to production.

One of the most often used scenarios involving Spring is to configure and drive business logic using simple Java bean classes. The Spring documentation should provide enough information to build an application using Spring beans; there is nothing WebSphere-specific about this. The following sections describe some of the usage scenarios for using Spring on WebSphere Application Server. Spring applications that are developed following the advice in this article should execute within a WebSphere Application Server or WebSphere Application Server Network Deployment environment with no difficulties.

Except where explicitly stated, the information presented here pertains to Versions 6.0.2.x and 6.1.x of WebSphere Application Server on all platforms.

Presentation tier considerations

This section describes considerations relating to the use of Spring in the Web-based presentation tier.

● Web MVC frameworks

Spring's Web MVC framework is an alternative to other frameworks that have been around for some time. Web MVC frameworks delivered, used, and supported directly by WebSphere Application Server include JavaServer Faces (JSF) and Struts. Spring documentation describes how to integrate Spring with these Web frameworks. Use of any of these MVC frameworks is supported by WebSphere Application Server, although IBM will only provide product support for the frameworks shipped with WebSphere Application Server.

● Portlet MVC framework

Spring also provides a Portlet MVC framework (which mirrors the Spring Web MVC framework) and runs in both the WebSphere Portal V6.0 and the WebSphere Application Server V6.1 portlet containers. (See Spring Portlet MVC for an example set of Spring portlets.) Running portlets in the WebSphere Application Server V6.1 portlet container requires that an additional Web application be created to define the layout and aggregation of the portlets. Information on how to use the portlet aggregator tag library can be found in the WebSphere Application Server Information Center and in the article Introducing the portlet container. Using JSF in combination with portlets is a common practice for rendering. For information on how Spring, Hibernate, JSF, and WebSphere Portal can be combined together, see Configuring Hibernate, Spring, Portlets, and OpenInSessionViewFilter with IBM WebSphere Portal.

Data access considerations

This section describes considerations relating to the configuration of Spring beans that access data within a transaction.

The Spring framework essentially wraps Spring beans with a container-management layer that, in a J2EE environment, delegates to the underlying J2EE runtime. Following are descriptions of how Spring beans should be configured so that the Spring Framework properly delegates to (and integrates with) the WebSphere Application Server runtime.

● Accessing data sources configured in WebSphere Application Server

WebSphere Application Server manages the resources used within the application server execution environment. Spring applications that want to access resources, such as JDBC data sources, should utilize WebSphere-managed resources. To do this:

1. During development, the WAR module should be configured with a resource reference. For example:

<resource-ref>
    <res-ref-name>jdbc/springdb</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Shareable</res-sharing-scope>
</resource-ref>

2. For EJB JAR files, the same resource-ref should be declared in each EJB that needs to access the data source.

3. A data source proxy bean would then be declared within the Spring application configuration, which references a WebSphere-managed resource provider:

<bean id="WASDataSource" class="org.springframework.jndi.JndiObjectFactoryBean"> <property name="jndiName" value="java:comp/env/jdbc/springdb"/> <property name="lookupOnStartup" value="false"/> <property name="cache" value="true"/> <property name="proxyInterface" value="javax.sql.DataSource"/></bean>

Accessing the data source through this proxy bean will cause the data source to be looked up using the module's configured references, and hence be properly managed by WebSphere Application Server. Note that the jndiName property value matches the pattern java:comp/env/ concatenated with the res-ref-name declared in the resource-ref.

4. The data source proxy bean may then be used by the Spring application as appropriate (see the sketch following this list).

5. When the application is deployed to a WebSphere Application Server, a resource provider and resource data source must be configured in the normal fashion for use by the Spring application resource reference. The resource reference declared within the module's deployment descriptor will be bound to the application server's configured data source during deployment.
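As an illustration of step 4, the proxy bean can be injected like any other Spring-managed DataSource. In the following minimal sketch, the JdbcTemplate wiring and the com.example.dao.JdbcAccountDao class are illustrative assumptions, not part of the original configuration:

<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
    <!-- Delegates all JDBC work to the WebSphere-managed data source -->
    <property name="dataSource" ref="WASDataSource"/>
</bean>

<bean id="accountDao" class="com.example.dao.JdbcAccountDao">
    <property name="jdbcTemplate" ref="jdbcTemplate"/>
</bean>

Because the template resolves connections through the proxy, every query participates in WebSphere Application Server's connection pooling and transaction management.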

● Using JDBC native connections

Spring provides a mechanism for accessing native connections when various JDBC operations require interacting with the native JDBC resource. The Spring JdbcTemplate classes utilize this capability when a NativeJdbcExtractor class has been set on the JdbcTemplate class. Once a NativeJdbcExtractor class has been set, Spring always drills down to the native JDBC connection when used with WebSphere Application Server. This bypasses the following WebSphere quality of service functionality and benefits:

❍ Connection handle tracking and reassociation
❍ Connection sharing
❍ Involvement in transactions
❍ Connection pool management.

Another problem is that the WebSphereNativeJdbcExtractor class depends on internal WebSphere adapter classes. These internal classes can differ between WebSphere Application Server versions and may change in the future, thereby breaking applications that depend on this functionality.

Use of NativeJdbcExtractor class implementations (for example, WebSphereNativeJdbcExtractor) is not supported on WebSphere Application Server, and you should avoid scenarios that require it. The alternative is to use the WebSphere Application Server WSCallHelper class to access non-standard vendor extensions for data sources.

● Using transactions with Spring

WebSphere Application Server provides a robust and scalable environment for transaction processing and for managing connections to resource providers. Connections to JDBC, JMS, and Java Connector resource adapters are managed by WebSphere Application Server regardless of whether or not a global transaction is being used; even in the absence of a global transaction there is always a runtime context within which all resource-provider connections are accessed. WebSphere Application Server refers to this runtime context as a local transaction containment (LTC) scope; there is always an LTC in the absence of a global transaction, and resource access is always managed by the runtime in the presence of either a global transaction or an LTC. To ensure the integrity of transaction context management (and hence the proper management of transactional resources), WebSphere Application Server does not expose the javax.transaction.TransactionManager interface to applications or application frameworks deployed into WebSphere Application Server.

There are a number of ways to drive resource updates under transactional control in Spring, including both programmatic and declarative forms. The declarative forms have both Java annotation and XML descriptor forms. If you use Spring 2.1 RC1 with WebSphere Application Server V6.0.2.19 or V6.1.0.9 or later, you can take advantage of full support for Spring's declarative transaction model. Spring 2.1 RC1 has a new PlatformTransactionManager class for WebSphere Application Server, called WebSphereUowTransactionManager, which takes advantage of WebSphere Application Server's supported UOWManager interface for transaction context management. Managing transaction demarcation through WebSphere Application Server's UOWManager class ensures that an appropriate global transaction or LTC context is always available when accessing a resource provider. However, earlier versions of Spring used internal WebSphere interfaces that compromised the ability of the Web and EJB containers to manage resources and are unsupported for application use. This could leave the container in an unknown state, possibly causing data corruption.

Declarative transaction demarcation in Spring 2.1 RC1 or later is supported in WebSphere Application Server using the following declaration for the WebSphere transaction support:

<bean id="transactionManager" class="org.springframework.transaction.jta.WebSphereUowTransactionManager"/>

A Spring bean referencing this declaration would then use standard Spring dependency injection to use the transaction support, for example:

<bean id="someBean" class="some.class"> <property name="transactionManager" > <ref bean="transactionManager"/> </property>...</bean><property name="transactionAttributes"> <props> <prop key="*">PROPAGATION_REQUIRED</prop> </props></property>

The WebSphereUowTransactionManager supports each of the Spring transaction attributes:

❍ PROPAGATION_REQUIRED
❍ PROPAGATION_SUPPORTS
❍ PROPAGATION_MANDATORY
❍ PROPAGATION_REQUIRES_NEW
❍ PROPAGATION_NOT_SUPPORTED
❍ PROPAGATION_NEVER

For earlier versions of Spring that do not provide org.springframework.transaction.jta.WebSphereUowTransactionManager, and for versions of WebSphere Application Server prior to V6.0.2.19 or V6.1.0.9 that do not provide com.ibm.wsspi.uow.UOWManager, transaction support in WebSphere Application Server is available via this Spring configuration:


<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager"> <property name="autodetectTransactionManager"value="false" /></bean>

This configuration supports a restricted set of transaction attributes that does not include PROPAGATION_NOT_SUPPORTED and PROPAGATION_REQUIRES_NEW. The Spring class org.springframework.transaction.jta.WebSphereTransactionManagerFactoryBean, which also claims to provide PROPAGATION_NOT_SUPPORTED and PROPAGATION_REQUIRES_NEW capabilities, uses unsupported internal WebSphere Application Server interfaces and should not be used with WebSphere Application Server.

● Using Spring JMS

Just as with accessing JDBC data sources, Spring applications intended to access JMS destinations must ensure they use WebSphere-managed JMS resource providers. The same pattern of using a Spring JndiObjectFactoryBean as a proxy for a QueueConnectionFactory or a TopicConnectionFactory will ensure that JMS resources are properly managed.
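As a sketch of that pattern (the resource reference name jms/myQCF and the bean id are illustrative assumptions), the JDBC proxy configuration shown earlier translates directly to JMS:

<!-- Proxy for a QueueConnectionFactory configured in WebSphere Application Server -->
<bean id="WASQueueConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:comp/env/jms/myQCF"/>
    <property name="lookupOnStartup" value="false"/>
    <property name="cache" value="true"/>
    <property name="proxyInterface" value="javax.jms.QueueConnectionFactory"/>
</bean>

As with the data source, the jndiName must match a resource reference declared in the module's deployment descriptor.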

An additional consideration for JMS is the Spring support for inbound JMS messaging: Spring provides a DefaultMessageListenerContainer class that can be used to host message consumer endpoints in either a J2SE or J2EE environment. In a WebSphere Application Server environment, this properly uses server-managed threads and can be integrated with the server's transaction management, as described above. This is a supported Spring application configuration, although it is recommended that J2EE message-driven beans (MDBs) be used directly in WebSphere Application Server configurations that require workload management and/or high availability. Be aware that no other Spring JMS MessageListenerContainer types are supported, as they can start unmanaged threads and might also use JMS APIs that should not be called by applications in a Java EE environment.
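A minimal sketch of such a configuration follows. The listener bean, queue reference name, and executor bean are illustrative assumptions; the transactionManager bean is the WebSphereUowTransactionManager declared earlier, and the task executor is the kind of WorkManagerTaskExecutor discussed under Scheduling and thread pooling below:

<!-- Proxy for the JMS destination, resolved through a resource reference -->
<bean id="WASQueue" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiName" value="java:comp/env/jms/myQueue"/>
</bean>

<bean id="listenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="WASQueueConnectionFactory"/>
    <property name="destination" ref="WASQueue"/>
    <property name="messageListener" ref="myMessageListener"/>
    <!-- Integrate with WebSphere-managed transactions and threads -->
    <property name="transactionManager" ref="transactionManager"/>
    <property name="taskExecutor" ref="workManagerTaskExecutor"/>
</bean>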

● Using OpenJPA with Spring

Part of the EJB 3.0 specification is the new Java Persistence API (JPA). The Apache OpenJPA project is an open-source implementation of JPA that you can use directly in conjunction with Spring and WebSphere Application Server V6.1 (see Resources); using the OpenJPA classes directly is preferred over using Spring's JPA helper classes (in the org.springframework.orm.jpa package), as the former have been more completely tested with WebSphere Application Server. OpenJPA is dependent on JDK 1.5, and so will work with WebSphere Application Server V6.1 and later.

WebSphere Application Server V6.1 supports JPA application-managed entity managers, which might have a transaction type of either JTA or resource-local. The OpenJPA JTA entity manager uses the application server's underlying JTA transaction support, for which transaction demarcation can be defined using either standard J2EE techniques or Spring's declarative transaction model as described above.

A data access object (DAO) that uses OpenJPA is packaged with a persistence.xml that defines persistence context for the JPA EntityManager used by the application. For example, a persistence.xml for a JTA entity manager that uses the data source with a JNDI name "java:comp/env/jdbc/springdb" can be set up like this:


<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
                 http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
             version="1.0">
    <persistence-unit name="default" transaction-type="JTA">
        <provider>
            org.apache.openjpa.persistence.PersistenceProviderImpl
        </provider>
        <jta-data-source>
            java:comp/env/jdbc/springdb
        </jta-data-source>
        <properties>
            <property name="openjpa.TransactionMode" value="managed"/>
            <property name="openjpa.ConnectionFactoryMode" value="managed"/>
            <property name="openjpa.jdbc.DBDictionary" value="db2"/>
        </properties>
    </persistence-unit>
</persistence>

By setting the openjpa.TransactionMode and openjpa.ConnectionFactoryMode properties to "managed," OpenJPA delegates management of transactions and connections to WebSphere Application Server. The DAO may use Spring's declarative transaction demarcation as described above.

Integration and management considerations

● JMX and MBeans

Spring JMX MBeans are supported on WebSphere Application Server V6.1 and later only when registered with WebSphere Application Server's container-managed MBeanServer. If no server property is specified, the MBeanExporter tries to automatically detect a running MBeanServer. When running an application in WebSphere Application Server, the Spring framework will therefore locate the container's MBeanServer.

You should not use MBeanServerFactory to instantiate an MBeanServer and then inject it into the MBeanExporter. Furthermore, the use of Spring's ConnectorServerFactoryMBean or JMXConnectorServer to expose the local MBeanServer to clients by opening inbound JMX ports is not supported with WebSphere Application Server.

Spring JMX MBeans are not supported on WebSphere Application Server prior to Version 6.1.

● Registering Spring MBeans in WebSphere Application Server

WebSphere Application Server MBeans are identified by a javax.management.ObjectName when they are registered that looks like this:

WebSphere:cell=99T73GDNode01Cell,name=JmxTestBean,node=99T73GDNode01,process=server1,type=JmxTestBeanImpl

This means that when they are de-registered, they need to be looked up with the same "fully qualified" name, rather than the simple name property of the MBean. The best approach is to implement org.springframework.jmx.export.naming.ObjectNamingStrategy, which is an interface that encapsulates the creation of ObjectName instances and is used by the MBeanExporter to obtain ObjectNames when registering beans. An example is available on the Spring Framework forum. You can add the ObjectNamingStrategy instance to the bean that you register. This will ensure that the MBean is properly de-registered when the application is uninstalled.


<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter" lazy-init="false"><property name="beans"> <map> <entry key="JmxTestBean" value-ref="testBean" /> </map></property><property name="namingStrategy" ref="websphereNamingStrategy" />...</bean>

● MBeans ObjectNames and notifications

Due to the use of a fully qualified ObjectName for MBeans in WebSphere Application Server, you need to define that ObjectName in full to use notifications. At the time of this writing, there is an open JIRA to enable the Spring bean name to be used instead, which would simplify this situation.

<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter" lazy-init="false"> <property name="beans"> <map> <entry key="JmxTestBean" value-ref="testBean" /> </map> </property> <property name="namingStrategy" ref="websphereNamingStrategy" /> <property name="notificationListenerMappings"> <map> <entry key="WebSphere:cell=99T73GDNode01Cell, name=JmxTestBean, node=99T73GDNode01, process=server1, type=JmxTestBeanImpl"> <bean class="client.MBeanListener" /> </entry> </map> </property> </bean>

● System z multicall/unicall limitation

Because Spring doesn't allow the specification of platform-specific fields in the MBean descriptor, Spring JMX will work on multi-SR servers on WebSphere Application Server V6.1, but you are restricted in your deployment options. WebSphere Application Server will default to the unicall strategy, so that only one instance of the MBean (in one, indeterminate SR) will be asked to execute a request. This may be sufficient in some scenarios, but it is more likely that an application will require the ability to declare a combination of multicall and unicall methods, possibly with aggregation logic to combine the results.

● Scheduling and thread pooling

Spring provides a number of TaskExecutor classes that can be used for scheduling work. The only Spring TaskExecutor that is supported by WebSphere Application Server for executing work asynchronously is the Spring WorkManagerTaskExecutor class, which properly utilizes thread pools managed by WebSphere Application Server and delegates to a configured WorkManager. Other TaskExecutor implementations might start unmanaged threads.

You can set up a WorkManager within the WebSphere Application Server administrative console by navigating to Resources => Asynchronous beans => Work managers. The JNDI name for the resource can then be used in the Spring config file to define a WorkManagerTaskExecutor.
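For example, assuming a WorkManager has been configured with the JNDI name wm/default (substitute whatever name you configured), a minimal sketch of the Spring configuration is:

<bean id="workManagerTaskExecutor"
      class="org.springframework.scheduling.commonj.WorkManagerTaskExecutor">
    <!-- JNDI name of the WorkManager configured in the administrative console -->
    <property name="workManagerName" value="wm/default"/>
</bean>

Work submitted through this executor runs on threads from the WebSphere-managed pool rather than on unmanaged threads.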


● Classloaders

Spring and WebSphere Application Server both use several open source projects and, unfortunately, the versions of the projects they have in common won't always match. Spring dependencies should be packaged as part of the application, and the server should be set up as described below to avoid conflicts. Otherwise, the classloaders may not load the appropriate version either for the runtime or for the application. Usually, this will cause exceptions to appear in the log regarding version mismatches of classes, ClassCastExceptions, or java.lang.VerifyErrors.

One example is the use of Jakarta Commons Logging. Configuring Jakarta Commons Logging (JCL) for application use, or utilizing a different version of JCL than is provided by the application server (for example, embedded with the application code) requires specialized configuration on WebSphere Application Server. See Integrating Jakarta Commons Logging for strategies on how to configure a deployed application to use an embedded version of commonly used technologies. Keep an eye on the support Web site for updates on how to configure embedded JCL on WebSphere Application Server V6.x products. This is just one example of conflicts. Others might include application use of JDOM or specific versions of JavaMail. Replacement of WebSphere Application Server's JAR files with these or other packages with later or different versions is not supported.

Another classloader problem that may plague Spring users on WebSphere Application Server is the way Spring loads resources. Resources can include things such as message bundles, and with the classloader hierarchy and various policies for locating resources within the hierarchy, it is possible for resources using a common name to be found in an unintended location. The WebSphere Application Server classloader viewer can be used to help resolve this problem. The combination of this and other versions of common libraries may require that the application rename resources to a unique name.

The example explained by James Estes on the Spring forum contains an EJB project and a Web project packaged into an EAR file. The solution described is to add the spring.jar file into both the WEB-INF/lib and the top level of the EAR, then set the classloader policy for the Web project to PARENT LAST so that it finds the version in WEB-INF/lib first. The EJB project uses the version in the EAR.
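The resulting packaging might look like the following layout (module names are illustrative):

myApp.ear
    spring.jar                      (EAR-level copy, used by the EJB module)
    myEjb.jar
    myWeb.war
        WEB-INF/lib/spring.jar      (found first by the Web module under PARENT LAST)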

Design considerations

Some of the infrastructure services provided by the Spring Framework replicate services provided by a standards-based application server runtime. Furthermore, the abstraction of the Spring framework infrastructure from the underlying J2EE application server necessarily weakens the integration with application server runtime qualities of service, such as security, workload management, and high availability. As a result, using the Spring Framework in applications deployed into the WebSphere Application Server must be carefully considered during application design to avoid negating any of the qualities of service provided by WebSphere Application Server. Where no other recommendation is made, directly using the services provided by WebSphere Application Server is preferred in order to develop applications based on open standards and ensure future flexibility in deployment.

● Unmanaged threads

There are some Spring scenarios that can lead to unmanaged thread creation. Unmanaged threads are unknown to WebSphere Application Server and do not have access to Java EE contextual information. In addition, they can use resources without WebSphere Application Server knowing about it, exist without an administrator's ability to control their number and resource usage, and impede the application server's ability to gracefully shut down or recover resources from failure. Applications should avoid any scenario that causes unmanaged threads to be started, such as:

❍ registerShutdownHook

Avoid using the Spring AbstractApplicationContext or one of its subclasses. There is a public method, registerShutdownHook, that creates a thread and registers it with the Java VM to run on shutdown to close the ApplicationContext. Applications can avoid this by utilizing the normal lifecycle notices they receive from the WebSphere container to explicitly call close on the ApplicationContext.

❍ WeakReferenceMonitor

Spring provides convenience classes for simplified development of EJB components, but be aware that these convenience classes spawn off an unmanaged thread, used by the WeakReferenceMonitor, for cleanup purposes.

● Scheduling

Spring provides (or integrates with) a number of scheduling packages, but the only Spring scheduling package that works with threads managed by WebSphere Application Server is the CommonJ WorkManager. Other packages, such as Quartz and the JDK Timer, start unmanaged threads and should be avoided.

Using Hibernate

Hibernate is an open source persistence framework for POJOs, providing object-relational mapping of POJOs to relational database tables using XML configuration files. The Hibernate framework is a data access abstraction layer that is called by your application for data persistence. Additionally, Hibernate provides for the mapping from Java classes to database tables (and from Java data types to SQL data types), as well as data query and retrieval capabilities. Hibernate generates the requisite SQL calls and also takes care of result set handling and object conversion.

Hibernate, like OpenJPA, implements the Java Persistence APIs (JPA) specification, which is a mandatory part of Java EE 5. (See Resources for developerWorks articles on using Hibernate.)

Usage scenarios

The following sections describe some of the possible scenarios for using Hibernate with WebSphere Application Server and WebSphere stack products. These are only example scenarios and should not be considered recommended configurations.

● Use a WebSphere Application Server data source

In order for Hibernate to get database connections from WebSphere Application Server, it must use a resource reference, as mandated by the Java EE (formerly known as J2EE) specification. This ensures WebSphere Application Server can provide the correct behavior for connection pooling, transaction semantics, and isolation levels. Hibernate is configured to retrieve a data source from WebSphere Application Server by setting the hibernate.connection.datasource property (defined in the Hibernate configuration file) to refer to a resource reference (for example, java:comp/env/jdbc/myDSRef) defined in the module's deployment descriptor. For example:

<property name="hibernate.connection.datasource"> java:/comp/env/jdbc/myDSRef</property>

Java EE resource references for Web applications are defined at the WAR file level, which means all servlets and Java classes within the container share the resource reference. Inside of an EJB module, resource references are defined on the individual EJB components. This means that, if many EJB components use the same Hibernate configuration, each EJB must define the same reference name on each EJB component. This can lead to complications that will be discussed a bit later.

Once a data source is configured, one of the next steps to ensure that Hibernate works correctly is to properly configure transaction support.

● Transaction strategy configuration

Hibernate requires the configuration of two essential pieces in order to properly run with transactions. The first, hibernate.transaction.factory_class, defines transactional control, and the second, hibernate.transaction.manager_lookup_class, defines the mechanism for registration of transaction synchronization so the persistence manager is notified at transaction end when it needs to synchronize changes with the database. For transactional control, both container-managed and bean-managed configurations are supported. The following properties must be set in hibernate.cfg.xml when using Hibernate with WebSphere Application Server:

❍ for container-managed transactions:

<property name="hibernate.transaction.factory_class"> org.hibernate.transaction.CMTTransactionFactory</property><property name="hibernate.transaction.manager_lookup_class"> org.hibernate.transaction.WebSphereExtendedJTATransactionLookup</property>

❍ for bean-managed transactions:

<property name="hibernate.transaction.factory_class"> org.hibernate.transaction.JTATransactionFactory</property><property name="hibernate.transaction.manager_lookup_class"> org.hibernate.transaction.WebSphereExtendedJTATransactionLookup</property><property name="jta.UserTransaction"> java:comp/UserTransaction</property >

The jta.UserTransaction property configures the factory class to obtain an instance of a UserTransaction object instance from the WebSphere container.

The hibernate.transaction.manager_lookup_class property is supported on the WebSphere platform by WebSphere Application Server V6.x and later, and on WebSphere Business Integration Server Foundation V5.1 and later. This property configures Hibernate to use the ExtendedJTATransaction interface, which was introduced in WebSphere Business Integration Server Foundation V5.1 and WebSphere Application Server V6.0. The WebSphere ExtendedJTATransaction interface establishes a pattern that is formalized in Java EE 5 via the JTA 1.1 specification.
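Pulling these pieces together, a complete hibernate.cfg.xml for the container-managed case might look like the following sketch; the DB2 dialect and the mapping resource are illustrative assumptions:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
    "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
    <session-factory>
        <!-- WebSphere-managed data source, via the module's resource reference -->
        <property name="hibernate.connection.datasource">
            java:comp/env/jdbc/myDSRef
        </property>
        <!-- Container-managed transactions, as described above -->
        <property name="hibernate.transaction.factory_class">
            org.hibernate.transaction.CMTTransactionFactory
        </property>
        <property name="hibernate.transaction.manager_lookup_class">
            org.hibernate.transaction.WebSphereExtendedJTATransactionLookup
        </property>
        <property name="hibernate.dialect">
            org.hibernate.dialect.DB2Dialect
        </property>
        <mapping resource="com/example/Account.hbm.xml"/>
    </session-factory>
</hibernate-configuration>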

● Unsupported transaction configurations

The Hibernate documentation describes transaction strategy configurations for running on WebSphere Application Server Versions 4 and 5 products; however, these configurations use internal WebSphere interfaces and are not supported on those earlier versions. The only supported transaction configuration of Hibernate is described above, which means, as stated earlier, Hibernate usage is only supported on WebSphere Business Integration Server Foundation V5.1 and on WebSphere Application Server Version 6.x and later.

● Hibernate's usage patterns within a WebSphere Application Server environment

Hibernate's session-per-request and long conversation patterns are both available when using Hibernate with WebSphere Application Server. Customers must choose which is appropriate for their application, though it is our opinion that session-per-request offers better scalability.

❍ Multiple isolation levels

Sharable connections provide a performance improvement in WebSphere Application Server by enabling multiple resource users to share existing connections. However, if sharable connections and multiple isolation levels are both necessary, then define a separate resource-ref and Hibernate session-factory for each connection configuration. It is not possible to change the isolation level of a shared connection; therefore, it is also not possible to use the hibernate.connection.isolation property to set the isolation level on a sharable connection. See Sharing connections in WebSphere Application Server V5 for more information on policies and constraints on connection sharing. (Although this article generally pertains to shared connection use on WebSphere Application Server V5, the connection sharing advice still applies to Hibernate running on V6.x.)

❍ Web applications

Hibernate long conversation sessions can be used and stored in HttpSession objects; however, a Hibernate session holds active instances and, therefore, storing it in an HttpSession may not be a scalable pattern since sessions may need to be serialized or replicated to additional cluster members. It is better to use HttpSession to store disconnected objects (as long as they are small, meaning 10KB to 50KB) and re-associate them with a new Hibernate session when an update is needed. This is because HttpSession is best used for bookmarking and not caching. A discussion on how to minimize memory use in HttpSession is contained in Improving HttpSession Performance with Smart Serialization. Instead of using HttpSession as a cache, consider using a WebSphere data caching technology like ObjectGrid or DistributedObjectCache, as described in the next section.

For best practices on high performing and scalable applications, the book Performance Analysis for Java Websites is strongly recommended.

At the time of publication, the behavior of Hibernate's cluster-aware caches in conjunction with WebSphere Application Server has not been determined; therefore, it is not yet known whether their use is supported, and we will not discuss them further. As a result, customers requiring a distributed cache should consider creating a class that implements org.hibernate.cache.CacheProvider (registered via the hibernate.cache.provider_class property) that employs one of the two distributed cache implementations in WebSphere.

● Integrating a second-level cache

A Hibernate session represents a scoping for a unit of work. The Session interface manages persistence during the lifecycle of a Hibernate session. Generally, it does this by maintaining awareness or state of the mapped entity class instances it is responsible for by keeping a first-level cache of instances, valid for a single thread. The cache goes away when the unit of work (session) is completed. A second-level cache also can be configured to be shared among all sessions of the SessionFactory, including across a cluster. Be aware that caching in Hibernate raises issues that will need to be addressed. First, no effort is made to ensure the cache is consistent, either with external changes to the database or across a cluster (unless using a cluster aware cache). Second, other layers (such as the database) may already cache, minimizing the value of a Hibernate cache. These issues must be carefully considered in the application design, but they are beyond the scope of this article.

Hibernate comes with several pre-configured caches. You can find information on them in the Hibernate Cache documentation pages. For read-only data, one of the in-memory caches might be enough. However, when the application is clustered and a cluster aware cache is needed, the local read-only caches are not enough. If a distributed cache is desired, we recommend using one of the WebSphere-provided distributed cache implementations. These can be used as a second level cache with Hibernate:

❍ The DistributedMap/DistributedObjectCache interfaces provide distributed cache support in the WebSphere V6.x product family. See Using the DistributedMap and DistributedObjectCache interfaces for the dynamic cache for more information.

❍ ObjectGrid, available as part of the WebSphere Extended Deployment product, provides extensible object caching support. See ObjectGrid for more information.
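A custom provider is then registered through the property mentioned above. In this sketch, com.example.cache.DistributedMapCacheProvider stands for a class you would write against DistributedMap or ObjectGrid; the class name is an illustrative assumption:

<!-- Enable the second-level cache and plug in the custom provider -->
<property name="hibernate.cache.use_second_level_cache">true</property>
<property name="hibernate.cache.provider_class">
    com.example.cache.DistributedMapCacheProvider
</property>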

● Using Hibernate in WebSphere Enterprise Service Bus and WebSphere Process Server

WebSphere Process Server and WebSphere Enterprise Service Bus (ESB) rely on the Service Component Architecture (SCA) and Service Data Objects (SDO) as an assembly and programming model for SOA. (See Resources to learn more about SCA and SDO.) SCA components are not Java EE components, so they do not have resource references, but rely instead on services and adapters to connect to systems. Resource references cannot be used when building Java SCA components; therefore, Hibernate cannot be used directly by an SCA component.

In this case, Hibernate persistence should be hidden behind some kind of facade. There are two alternatives:

❍ A local EJB session facade is created to wrap Hibernate persistence. The session facade provides adapter logic to map Hibernate entity POJOs to Service Data Objects and back. An integration developer can then use an EJB import to invoke the session facade, and invoke it in a tightly coupled fashion with corresponding Qualities of Service (QoS).

❍ An EJB Web service session facade is created to wrap Hibernate persistence. An integration developer can then use a Web service import to invoke the Web service for persistence. This gets around having to build POJO to SDO converters, since at the current time SCA only uses SDO for data types. Figure 1 illustrates a business process using both patterns, though the details of the process are beyond the scope of this article.

Figure 1. Sample business process

● Hibernate JPA API on WebSphere Application Server V6.1

Hibernate's JPA support provides for JPA standard persistence and is a good alternative to the proprietary Hibernate APIs. Hibernate's JPA implementation requires a Java SE 5 based runtime, and therefore only runs on WebSphere Application Server V6.1 or later. At the time of publication, Hibernate's JPA support does not run on WebSphere System z or iSeries platforms. The Hibernate documentation describes how to package and deploy applications using Hibernate's JPA implementation.

● Non-interoperable / Non-portable function

Section 3.2.4.2 in the JPA specification describes a scenario that is likely to cause interoperability and potential portability problems. This has to do with the combination of the use of lazy loading (that is, @Basic(fetch=LAZY)) and detached objects. When merging a detached object back into a session, JPA will examine the object and update the data store with any changed values. However, data objects are simple POJOs. If part of the POJO state wasn't loaded when it was detached, it can appear to be changed when it is merged back in. To get this to work correctly, vendors must implement serialization techniques specific to their runtime. This is not interoperable and the semantics may not be portable either.

Product and customer technical support


An area of reasonable concern for users is support of projects using open source and the impact of that usage upon a vendor's support for its licensed products. IBM recognizes that some customers may desire to use non-IBM frameworks in conjunction with IBM WebSphere Application Server and is providing information to customers that may promote the creation of the most reliable operating environment for IBM WebSphere Application Server. IBM considers open source code and application frameworks installed by customers, either bundled as part of the application or as shared libraries, to be part of application code. By carefully utilizing this information when using open source projects, customers may use IBM products with a higher degree of confidence that they may have continued access to IBM product and technical support. If a problem is encountered when using these frameworks with WebSphere products, IBM will make reasonable efforts to ensure the problem does not lie with the WebSphere product.

It is expected that customers may safely use frameworks such as Spring and Hibernate on IBM products by observing the suggestions of this article and understanding a few key points:

● Customers must ensure that they only use those frameworks in ways that are allowed by WebSphere Application Server. In particular, this means that frameworks should not be used when they use internal product interfaces -- unfortunately many open source frameworks do this when not configured carefully. Customers should avoid scenarios clearly documented as things to avoid on WebSphere.

● For open source frameworks, customers should ensure they understand and have access to matching source code and binaries for the framework they are using with WebSphere Application Server.

● Customers are encouraged to obtain corrective service for the frameworks from the open source community or from partners working with the open source community.

For more details on IBM support and policy please refer to the IBM Support Handbook and WebSphere Application Server Support Statement.

Although following the suggested practices of this article will help you enhance your experience when using WebSphere Application Server in an open source environment, it is not an all-inclusive list of ways in which an open source component may impact WebSphere Application Server operation or the operation of other components. Users of open source code are urged to review the specifications of all components to avoid licensing, support, and technical issues.

Throughout this article, the terms "support" or "supported" indicate that the usage being described uses only IBM documented functionality. The authors have done their best to provide advice on how to configure and use these frameworks to ensure that their usage is consistent with documented product behavior, but this article is neither an endorsement nor a statement of support for Spring or Hibernate.

Conclusion

The Spring Framework has enjoyed a rapid growth in popularity. Developers like the easy interfaces and XML-based configuration that accelerate J2EE development time and ease unit testing. The framework itself is also growing very rapidly, with many subprojects now listed on the website. As with all software, it is important to identify what benefit is being offered by its inclusion in your application, and whether there are alternative, preferable ways to achieve the same end result. Certainly, there are features in Spring that replicate function already embedded within WebSphere Application Server, and so it is not desirable for applications that will be deployed into the server to use this additional layer of framework code. But with judicious use, you can leverage many of Spring's ease-of-use development features together with WebSphere Application Server's robust, integrated enterprise support features to quickly develop and deploy enterprise applications into IBM's industry-leading J2EE application server.

Hibernate is one of several persistence frameworks that can be successfully used with WebSphere Application Server to provide the object-relational mapping of entity data stored in relational databases, provided that sufficient care is taken to avoid problematic scenarios. In particular, you must ensure that your use of Hibernate does not involve the use of internal WebSphere Application Server interfaces. By following the recommendations presented here, you can avoid some common problems and use Hibernate as the persistence framework for applications deployed into WebSphere Application Server.

Acknowledgements

The authors would like to thank Keys Botzum, Paul Glezen, Thomas Sandwick, Bob Conyers, Neil Laraway, and Lucas Partridge for their comments and contributions to the article.

Resources

● developerWorks articles on using Hibernate
● Hibernate Project Support page
● Sharing connections in WebSphere Application Server V5
● Improving HttpSession Performance with Smart Serialization
● Performance Analysis for Java Websites
● Hibernate Cache
● Using the DistributedMap and DistributedObjectCache interfaces for the dynamic cache
● ObjectGrid
● Spring Product Support page
● Spring Portlet MVC
● Introducing the portlet container
● Configuring Hibernate, Spring, Portlets, and OpenInSessionViewFilter with IBM WebSphere Portal Server
● Local transaction containment
● Integrating Jakarta Commons Logging
● WSCallHelper
● Leveraging OpenJPA with WebSphere Application Server V6.1

About the authors

Tom Alcott is consulting IT specialist for IBM in the United States. He has been a member of the Worldwide WebSphere Technical Sales Support team since its inception in 1998. In this role, he spends most of his time trying to stay one page ahead of customers in the manual. Before he started working with WebSphere, he was a systems engineer for IBM's Transarc Lab supporting TXSeries. His background includes over 20 years of application design and development on both mainframe-based and distributed systems. He has written and presented extensively on a number of WebSphere run time issues.

Roland Barcia is a Consulting IT Specialist for IBM Software Services for WebSphere. He is a co-author of IBM WebSphere: Deployment and Advanced Configuration. For more information about Roland, see his blog.

Jim Knutson is WebSphere's J2EE architect. Jim is responsible for IBM's participation in J2EE related specifications and his involvement in both goes back to before there was a J2EE. Jim is also involved in programming model evolution to support SOA and Web services.

Sara Mitchell works at IBM's UK development laboratory at Hursley as a team lead for WebSphere Application Server.


Dr. Ian Robinson is an IBM senior technical staff member and the transaction architect for IBM WebSphere Application Server. He has over 12 years of experience designing and implementing distributed transaction systems, having worked on the IBM CICS server and ComponentBroker CORBA server. Ian is co-chair of the OASIS Web Services Transaction technical committee, was spec lead for the J2EE Activity Service (JSR 95), and co-authored the WS-Transaction set of specifications. Ian received a BSc and a PhD in Physics from the University of Exeter, England, in 1986 and 1989 respectively.


IBM WebSphere Developer Technical Journal: Using portal analytics with open-source reporting tools
Monitor, manage, and adapt your portal using WebSphere Portal log data

Level: Introductory

Stefan Liesche ([email protected]), WebSphere Portal and Workplace Foundation Lead Architect, IBM
Steffen Uhlig ([email protected]), Senior Consultant, IBM

20 Sep 2006

The term "portal analytics" describes a process that can help you understand how your portal is used. IBM® WebSphere® Portal writes usage records to a dedicated log file. Because the format of the log follows industry standards ("NCSA Combined"), you can integrate portal usage data with your preferred reporting and analytics tools. This article describes how you can derive reports and analytics information based on the data provided by the instrumentation in WebSphere Portal V5.1x and WebSphere Portal V6. Also included is an example of how to use the logs for portal analytics using open source reporting tools. The example illustrates complete, end-to-end reporting of typical statistics reports.

From the IBM WebSphere Developer Technical Journal.

Introduction

What is portal analytics?

Successful organizations spend a large amount of time planning and developing their initial portal release. Although this work is critical, it covers only a portion of the total planning effort required over the lifetime of a portal. You also need to maintain, monitor, and adapt your portal to new usage patterns that surface only after going live.

When planning a portal project, groups usually do sizings based on assumptions, experience, and expectations. Over time, other questions arise, such as "Will our portal be able to deal with evolving user needs?" and "What do our users really do with the portal?" These questions and others can be answered using portal analytics, which is the process of collecting, processing, and reporting usage data.

WebSphere Portal writes usage records to a dedicated log file. Because the format of the log follows industry standards ("NCSA Combined"), you can integrate portal usage data with your preferred reporting and analytics tools. Portal analytics include trend analysis techniques that can help you to predict the demand on your portal in the future. Therefore, a portal operator can proactively plan for adapting to the community's needs, instead of being hit without warning after a threshold is reached.

WebSphere Portal provides comprehensive instrumentation capabilities. In this article, you will see how reports and analytics information can be derived based on the data provided by the instrumentation. The example involves end-to-end reporting for typical statistics reports and shows how to use the logs for portal analytics using open source reporting tools.

Why care about portal usage?


A portal usually serves one or more specific purposes. For example, it can facilitate access to information, it might enable its users to work more effectively and more efficiently, and it might also be used to integrate technically separate information systems "on the glass". Organizations typically have a very clear picture of what purpose a portal should serve. Measuring the success of those efforts is critical. Knowing how well the portal supports its users also helps your group justify the investment in the portal.

What the logs report

WebSphere Portal logs the following user activities and makes them available for analytics:

● Page management (creating, reading, updating, deleting a page)
● Requests of a certain page by users (including contained portlets)
● Session activities (login, logout, timed out, login failed)
● User management actions (creating, reading, updating, deleting users and groups)

For more information on the data that WebSphere Portal can log, see the site analysis topic in the WebSphere Portal Information Center (see Resources).

Elements of portal analytics

A recent report from the Patricia Seybold Group defined the following criteria for portal analytics:

● Instrumentation: Portal technology platforms should collect the information that represents their usage and performance.

● Real-time analysis: You need to know how well many aspects of your customer portal are performing in real time. You would prefer not to (or can't wait to) move performance data to a data warehouse and run queries and analytics against it.

● Reports: Portal technology platforms should package reports that present how customers are interacting with their content and how well their facilities are performing. These reports should present instrumented information in easily understood formats.

● Analytics: Sometimes reports aren't enough, and analytic processing is required in order to understand customer behaviour and customer portal performance.

This article focuses on instrumentation, and briefly discusses how reports can be derived from the data that is created by WebSphere Portal.

The long-term vision for portal analytics is a self-optimizing portal that automatically responds to the demand on the site, covering both functional and non-functional aspects. If the portal site were referenced by a popular news service (the "Slashdot effect") and the access rate rose far above normal, spare servers could be automatically deployed with the current configuration and added to the existing cluster. After the demand slows down again, the spare servers are freed up for other purposes.

A functional example would be that a portal's navigation only shows the most-wanted pages upon the user's arrival. As the user digs deeper into the site, more navigation elements display.

What portal analytics is not

Portal analytics instrumentation is not a replacement for other logs such as audit, performance, or system event logs. The audiences for those kinds of logs are different:


● Audit logging is usually used in security-sensitive environments where changes made to the portal's run time configuration are recorded. Auditing is primarily part of the administrative function in WebSphere Portal. Portal analytics, on the other hand, focuses on that part of the portal that end users see as well as how they use the portal.

● Performance logs are used to find out how well a particular part of the whole portal performs in the real production environment. They enable the operator to determine to what extent the portal consumes valuable resources (such as CPU or memory) while it adheres to the defined Service Level Agreements (SLAs).

● System event logs help the system administrator to understand what issues occur while running the portal. Typically, records in the system event log contain Java™ exception traces that enable an administrator to take appropriate action.

Alternative solutions

Not covered in this article are techniques that use external log services like IBM SurfAid (now owned by Coremetrics) or Google Analytics. These services usually trace user activities by placing a certain piece of content (typically some inline JavaScript or a remote image) that points to the service provider into the page delivered to the user. After rendering the page, the browser retrieves those content items from the service provider. The service provider analyzes its access logs and provides its customer with reports about what was requested and when. This technique is also referred to as a "Web beacon", "Web bug", or other similar names.

Using portal analytics

WebSphere Portal provides analytics logging by writing events to a dedicated log, similar to the logging for analyzing a server delivering static pages. The portal analytics file is called sa.log (sa for site analytics) and you can typically find it in the $WP_HOME/log/ directory.

Each line in sa.log represents a specific event that was fired through a request against the portal. A single request (for example, for a certain page) can result in multiple lines written to sa.log. You can customize the type and amount of logging by configuring the appropriate logger.

WebSphere Portal defines the following site analytics loggers:

Logger | Purpose
SiteAnalyzerSessionLogger | Logs session events, such as login or logout
SiteAnalyzerUserManagementLogger | Logs user and group management events, such as creating or deleting users and groups
SiteAnalyzerPageLogger | Logs page render events
SiteAnalyzerPortletLogger | Logs portlet render events
SiteAnalyzerPortletActionLogger | Logs actions that occurred in a portlet
SiteAnalyzerApplicationActionLogger | Logs actions that occurred in a portlet application
SiteAnalyzerErrorLogger | Logs any errors

For each interaction a user makes with the portal, the appropriate logger (if it is configured) creates a new entry in the site analytics log file. For each activity, the general format of the log record follows the NCSA Combined definition given in the Examining the log file format section below. The difference is in the request URI, in which specific data is recorded for each activity.

For example, the page logger records the name of the page the user just requested, the name of the parent page (if it is a derived page), and some other data that is specific to the pages. Likewise, the session logger creates a record whenever a user logs in or out of the portal. In the case of a log-out, the session logger logs the reason for the logout (for example, timed out) as the URI parameter.


The specific data for each logger goes into the request URI, either as path information or as URI parameters.

Using the loggers to record events

Let's look at the loggers in more detail, and examine the format of the associated request URI.

SiteAnalyzerSessionLogger is responsible for recording login and logout user activity. When a user logs into the portal, the request URI is /Command/Login and the user ID is logged in the USER_ID part of the log record. If the login attempt fails, a query string of ErrorCode=x (where x is the error code value) is appended to the request URI. The status code is set to the appropriate HTTP status code.

After the user logs out, whether explicitly or through a timeout, a log record with a request URI of /Command/Logout is created. If the reason for the logout is a session timeout, a Reason=SessionTimedOut query string is appended to the request URI.
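For illustration only, a successful login followed by a timed-out session might produce records like the following two lines (the user ID, timestamps, user agent, and cookie value here are invented; the complete record format is specified in the Examining the log file format section below):

localhost - jdoe [15/Jun/2006:09:00:00 +0200] "GET /Command/Login HTTP/1.1" 200 -1 "" "Mozilla/5.0" "JSESSIONID=0000example:-1"
localhost - jdoe [15/Jun/2006:09:30:00 +0200] "GET /Command/Logout?Reason=SessionTimedOut HTTP/1.1" 200 -1 "" "Mozilla/5.0" "JSESSIONID=0000example:-1"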

Logging user management events

You can use the SiteAnalyzerUserManagementLogger to log any user management events that are made through the portal's administrative interface. When a new user is created, the logger records the event in the request URI as /Command/UserManagement/CreateUser. Deleting a user results in a request URI of /Command/UserManagement/DeleteUser.

You use the same mechanism for recording events associated with creating, modifying, and deleting groups. The respective URIs are /Command/UserManagement/CreateGroup, /Command/UserManagement/ModifyGroup, and /Command/UserManagement/DeleteGroup.

No further detail about the user or group that was subject to an operation is currently logged. Important: Logging for Modify events is not available in releases before WebSphere Portal V6.

Logging page events

Whenever a page is displayed to a requester, SiteAnalyzerPageLogger creates a corresponding log entry. The request URI starts with /Page/, followed by the unique object ID and the name of the page.

SiteAnalyzerPageLogger also records any page management. The request URI is either /Command/Customizer/CreatePage, /Command/Customizer/EditPage, or /Command/Customizer/DeletePage. The unique object ID and the name of the managed page are appended to the request URI as parameters of the query string. For example:

/Command/Customizer/EditPage?Page=6_0_5RH_[CONTENT_NODE:6001]&PageName=Products

In this example, a page whose object ID is 6_0_5RH and whose name is Products was edited.

Logging portlet events

If the SiteAnalyzerPortletLogger is configured, it creates a log record for each portlet that is rendered, no matter which page contains the portlet.

The request URI of the log record starts with /Portlet/. The unique object ID of the portlet being rendered is appended, as well as the portlet name. The query string contains the unique object ID of the portlet, along with the current portlet mode {View, Edit, Configure, Help} and state {Normal, Minimized, Maximized}.


If both SiteAnalyzerPageLogger and SiteAnalyzerPortletLogger are turned on, rendering a single page can lead to multiple log records in the sa.log (one line for the page and one line for each portlet on that page).

Logging portlet actions

SiteAnalyzerPortletActionLogger is not invoked by the general event infrastructure in WebSphere Portal. It writes records only when you invoke it manually; those records have a request URI starting with /PortletAction/.

Logging errors

If an error occurs while rendering a page or portlet, the SiteAnalyzerErrorLogger creates a corresponding record in the analytics log. This is not a replacement for the system event logs (wps_*.log); instead, it lets you record errors from a business perspective, such as the number of failing portlets within a certain timeframe.

If there are errors to report, the corresponding log records start with /Error/Portlet or /Error/Page.

Application action logger

This logger is reserved for future use. The intention is to enable portlets to contribute to the site analytics log. However, at the time of writing this paper, using this logger is not yet supported.

If the logger writes anything, the request URI will start with /ApplicationAction/.

Examining the log file format

The format of the log records follows the NCSA Combined log format. Using the Extended Backus-Naur Form (EBNF), it can be formally specified as follows:

STRING = ? any printable character except the quote sign ?;
HOST, CLIENT_ID, USER_ID = STRING;
HYPHEN = "-";
QUOTE = '"';
SIGN = "+" | "-";
SPACE = " ";
TWODIGITHOURS = "00".."23";
MINUTES = "00".."59";
DAY = "00".."31";
MONTH = "01".."12";
SLASH = "/";
COLON = ":";
RFC822TIMEZONE = SIGN SPACE TWODIGITHOURS SPACE MINUTES;
TIMESTAMP = DAY SLASH MONTH SLASH YEAR COLON HOURS COLON MINUTES COLON SECONDS SPACE RFC822TIMEZONE;
PORT = 0..65535;
URI = "http" | "https" + COLON + SLASH + SLASH + HOST + PORT + STRING;
STATUS_CODE = NUMBER ? must be a legal HTTP status code ?;
BYTES = NUMBER;
REFERER = URI;
REQUEST = "GET" | "POST" | "PUT" | "DELETE" URI SPACE "HTTP/1.0" | "HTTP/1.1";
SA_LINE = HOST SPACE CLIENT_ID | HYPHEN SPACE USER_ID | HYPHEN SPACE "[" TIMESTAMP "]" SPACE QUOTE REQUEST QUOTE SPACE STATUS_CODE SPACE BYTES SPACE REFERER SPACE QUOTE USER_AGENT QUOTE SPACE QUOTE COOKIES QUOTE;

Tip: The BYTES value is usually -1, meaning that the size of the returned markup is unknown, because of the dynamic nature of a portal page.

The request URI is artificial and cannot be called from a browser. Its only purpose is to carry the relevant logging information in an NCSA Combined-compatible way. The URI is also independent of changes you make to your site's structure and content. You can use page names or page IDs to structure your reports in a way that suits your needs.
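Because each record is a single NCSA Combined line, sa.log is also easy to process programmatically when a packaged reporting tool does not fit your needs. The following minimal Java sketch is our own illustration, not part of WebSphere Portal; the regular expression is modelled on the sample records shown in the Examples section below:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SaLogParser {

    // Field order: host, client id, user id, timestamp, request,
    // status code, bytes, referer, user agent, cookies.
    private static final Pattern NCSA_COMBINED = Pattern.compile(
        "^(\\S+) (\\S+) (\\S+) \\[([^\\]]+)\\] \"([^\"]*)\" " +
        "(\\d{3}) (-?\\d+) \"([^\"]*)\" \"([^\"]*)\" \"([^\"]*)\"$");

    // Returns the ten fields of a record, or null if the line does not match.
    public static String[] parseLine(String line) {
        Matcher m = NCSA_COMBINED.matcher(line);
        if (!m.matches()) {
            return null;
        }
        String[] fields = new String[m.groupCount()];
        for (int i = 0; i < fields.length; i++) {
            fields[i] = m.group(i + 1);
        }
        return fields;
    }
}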

Correlating information

While you probably want to know which pages are requested by users, you might also want to understand the relationships among requests. Because HTTP is a stateless protocol, there is no inherent information about which page a user viewed before requesting the current one. This problem is typically solved by employing a user session, which is what WebSphere Portal and the underlying WebSphere Application Server use.

WebSphere Portal tracks sessions using cookies, and the default cookie is typically called JSESSIONID. Whenever a user logs in, WebSphere Portal creates a new session and stores the key to the session in the browser as the cookie's value. From then on, the browser sends the cookie with each request to the server the cookie came from. By reading the cookie value, the server can correlate a specific request with a specific session and with previous requests.

The user's session can also be used to find related requests in the site analytics log. The requests can be grouped by session to gather additional information. For example, to find the most common click trails through a portal, you could group all requests by their session and then derive the click trail for each session. By counting the number of similar trails, you could determine the "most used" click trails through that portal.
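As a sketch of this idea, the following Java code (our own illustration, reusing the hypothetical SaLogParser class shown earlier) groups page requests by their JSESSIONID value to reconstruct one click trail per session:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ClickTrails {

    // Maps each session id to the ordered list of page requests it made.
    public static Map groupBySession(String saLogPath) throws IOException {
        Map trails = new LinkedHashMap();
        BufferedReader in = new BufferedReader(new FileReader(saLogPath));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                String[] f = SaLogParser.parseLine(line);
                // Keep page render events only; f[4] is the request field.
                if (f == null || f[4].indexOf("/Page/") < 0) {
                    continue;
                }
                String cookies = f[9];
                int at = cookies.indexOf("JSESSIONID=");
                if (at < 0) {
                    continue;
                }
                String session = cookies.substring(at + "JSESSIONID=".length());
                List trail = (List) trails.get(session);
                if (trail == null) {
                    trail = new ArrayList();
                    trails.put(session, trail);
                }
                trail.add(f[4]);
            }
        } finally {
            in.close();
        }
        return trails;
    }
}

Counting identical lists in the result then yields the most used trails.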

As an alternative correlation mechanism, you could send a dedicated "tracing" cookie to the user's browser. That cookie is written to sa.log in the same manner as any other cookie. The portlet programmer can choose the name and value of the cookie, and can therefore log arbitrary data to sa.log without imposing much overhead.

Examples

Logging page requests

When a user requests a page (and PageLogger is turned on), WebSphere Portal creates a log entry for the page:

localhost - wpsadmin [15/Jun/2006:23:42:10 +0200] "GET /Page/6_0_4D_[CONTENT_NODE:141]/Welcome HTTP/1.1" 200 -1 "" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.4) Gecko/20060508 (CK-IBM) Firefox/1.5.0.4" "JSESSIONID=0000c9fiQ6Q14XUbsp3QFQHFkq9:-1"

You can derive the following data from the log entry:

● The request was made from localhost.
● The authenticated user for this request was wpsadmin (short name of the user).
● Handling the request was finished at 15/Jun/2006:23:42:10, GMT +0200.
● The page with title "Welcome" was requested.
● The request was successful (HTTP response code 200).
● The size of the returned markup is unknown to the logger (-1).
● The request was made by a browser which identifies itself as "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.4) Gecko/20060508 (CK-IBM) Firefox/1.5.0.4", which is actually Firefox 1.5.0.4 on Windows® XP.
● The request was made within a session, identified by the session id "0000c9fiQ6Q14XUbsp3QFQHFkq9".

Logging page and portlet requests

When a user requests a page and PageLogger and PortletLogger are both turned on, log entries are created for the page and for each portlet on that page. The first log entry and the information it contains will be similar to the one mentioned above:

localhost - wpsadmin [15/Jun/2006:23:42:10 +0200] "GET /Page/6_0_4D_[CONTENT_NODE:141]/Welcome HTTP/1.1" 200 -1 "" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.4) Gecko/20060508 (CK-IBM) Firefox/1.5.0.4" "JSESSIONID=0000c9fiQ6Q14XUbsp3QFQHFkq9:-1"

Because PortletLogger is now turned on, an additional entry is created for each portlet that resides on the page; for example:

localhost - wpsadmin [15/Jun/2006:23:42:16 +0200] "GET /Portlet/5_0_49_[PORTLET_ENTITY:137]/Welcome_to_WebSphere_Portal?PortletPID=5_0_49_[PORTLET_ENTITY:137]&PortletMode=View&PortletState=Normal HTTP/1.1" 200 -1 "http://localhost/Page/6_0_4D_[CONTENT_NODE:141]/Welcome" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.4) Gecko/20060508 (CK-IBM) Firefox/1.5.0.4" "JSESSIONID=0000c9fiQ6Q14XUbsp3QFQHFkq9:-1"

The additional information in this line is:

● A portlet with the title "Welcome to WebSphere Portal" was rendered (/Portlet/5_0_49_[PORTLET_ENTITY:137]/Welcome_to_WebSphere_Portal).

● It was rendered in View mode and its state was Normal (neither minimized, maximized nor solo - PortletMode=View&PortletState=Normal).

● It is located on the page with name "Welcome" (referrer, http://localhost/Page/6_0_4D_[CONTENT_NODE:141]/Welcome).

Logging which portlet was requested seems redundant at first; the portal's static configuration should tell exactly which portlets are deployed to a certain page. However, every user who is in an Editor role on a page can remove and replace portlets at will. Logging both page and portlets lets you determine which portlets a user requested.

Similarly, PortletLogger records the page that includes the portlet in the referrer field. If PageLogger is turned off, this field lets you determine which page rendered the portlet.

Information that can be derived from the log

Based on this data, the log analytics package can create reports such as:

● Hosts and domains of the users visiting the portal (by counting and aggregating the HOST elements).
● Different logins corresponding to authenticated users (by counting and aggregating the USER_ID elements).
● Robots and browser details (by counting and aggregating the USER_AGENT elements).
● Pages that were requested but not found (STATUS_CODE).
● Search engines, key phrases and key words (REFERER).
● Different operating systems as reported by the browser (USER_AGENT).
● The referring page a user came from (REFERER).
● Entry and exit URLs (the most common first and last pages requested within unique sessions).

There are many more potential reports and aggregations than listed above. Information such as common click trails of users, or the response in relation to a marketing campaign, start to touch the area of data warehousing. This information is typically a subject for larger, distinct projects. Simple log reporting and analytics cannot answer those advanced questions.

Reporting with open source tools

There is a wide range of commercial and open source site analysis and reporting tools available. The example report below uses the popular Open Source AWStats package.

Introducing the example

The example creates one simple report called This Month's Top Pages. This report should provide a graph of the pages that have been requested within a particular time period. It will show the name of the page and the access count (number of times that page has been requested).

Turning on the instrumentation

Start by enabling the relevant instrumentation in WebSphere Portal:

● Enable the loggers by setting the following values in the WebSphere Application Server admin console:

SiteAnalyzerPageLogger.isLogging=true
SiteAnalyzerPortletLogger.isLogging=true

Usually the settings are already there; they just need to be set to the new values. (For the details of this procedure, see the Setting configuration properties topic in the WebSphere Portal V6 Information Center.)

● Restart WebSphere Portal.
● Make a few requests against the portal and then check that sa.log contains new records.

Installing and configuring AWStats

AWStats comes with extensive information about installing it. For the sake of our example, we will only use the offline analysis. Therefore, it is enough to:

1. Download AWStats.
2. Unzip the package into a directory.
3. Configure a site according to the documentation. For example, create awstats.localhost.conf (with localhost being the site name) and change the following configuration statements:


LogFile="C:\Program Files\IBM\Portal51UTE\PortalServer\log\sa.log"
LogFormat=1
LogSeparator=" "
SiteDomain="localhost"
DNSLookup=1
AllowToUpdateStatsFromBrowser=0

4. Create the initial statistics database:

perl awstats.pl -config=localhost -update

5. Create the overview report as shown in Figure 1:

perl awstats.pl -config=localhost -output -staticlinks > awstats.localhost.html

Figure 1. Overview report generated by AWStats

Testing the report

To see how different reports relate to different request schemes, you create a suite of test pages, test users, and test requests. Then you can see how AWStats analyzes and reports different request schemes.

Creating a test hierarchy of pages


First, set up a small, automated test that makes a number of requests against the portal. The report created from those calls should show exactly the requests we made.

To create the requests for this example, you use Apache's JMeter tool. To keep things simple, create a hierarchy of predefined portal pages to be called with JMeter. The structure of the example pages resembles parts of the ibm.com home page:

Figure 2. Sample page hierarchy

Again, to keep the test simple, you can derive each page from a common template page. The download of the supporting files for this article contains an XMLAccess file (createPortalAnalysisTree.xml) that you can use to create the complete sample hierarchy.

Assign a URI mapping context to each page. This will ease the setup of JMeter quite a bit.

Creating a group of test users

In addition to showing requests to different pages, the example report should also show requests by different, authenticated users. XMLAccess supports the creation of users and groups as long as the user registry connection is configured for read/write access. The code archive in the download file contains an XMLAccess script (createPortalAnalyticsTestUsers.xml) that creates a number of test users and a group containing them.

The script that creates the sample page hierarchy assumes that group "Portal Analytics Test-Users" exists. The createPortalAnalyticsTestUsers.xml script creates that group, too.

Installing and configuring JMeter

JMeter is a simple load test tool in Apache's Jakarta project. It's a pure Java tool that tests server performance using various protocols. In this example, we use its HTTP connection to create a series of requests against the portal. Then, we can manually correlate the requests made by JMeter with the results logged by portal analytics to sa.log.

The examples in this article were created using JMeter 2.2, which you can download. After you have met all the prerequisites, you can start it from the bin subdirectory in the JMeter directory. The tool's main window will look similar to Figure 3.

Figure 3. The JMeter main window


The initial configuration primarily follows JMeter's user manual. You build the test plan to log in with one of our test users (analyticsTest001 .. analyticsTest009), and then request the homepage. Without think time, the following requests ask for the product, services, support, and account pages. The iteration stops without logging out.

Without going into too much detail regarding how we configured JMeter, here are the main config items that are used to conduct the test:

● Thread Group "Analytics Test Users" runs with 10 users (threads).
● HTTP Request Defaults are set to point to http://localhost:9081.
● An HTTP Cookie Manager is used to keep track of cookies, but clears cookies for each iteration.
● An HTTP Header Manager sends a custom User-Agent header. You can use this technique in a production environment to distinguish artificial JMeter requests from real users.
● Six HTTP requests follow:

1. Log in. We use the following well-known login URL and pass a randomly selected user ID.

/wps/portal/cxml/04_SD9ePMtCP1I800I_KydQvyHFUBADPmuQy?userid= <your user id>&password=<your password>


2. /analytics/home
3. /analytics/products
4. /analytics/services
5. /analytics/support
6. /analytics/account

● Finally, a View Results Tree shows the results of the test plan.

The archive in the download file contains the sample JMeter test plan (PortalAnalyticsExample.jmx), as shown in Figure 4.

Figure 4. Example test plan in JMeter

Running the test

Running the test is simple. After all pieces are in place (test users, page hierarchy, and test plan), you invoke the test by selecting Run => Start in the JMeter tool. The tool creates ten threads, each modelling an individual test user. Each user logs on to the portal and selects the defined sequence of pages.

Results

Once the plan execution stops, you can run AWStats again to update the reports (see Resources):

perl awstats.pl -config=localhost -output -staticlinks > awstats.localhost.html

The results, shown in Figure 5, indicate that the most popular page for our sample test plan is the home page. However, it shows 20 hits. Didn't we simulate just ten users with a single iteration each? This is really no surprise when you think about the way WebSphere Portal and J2EE™ work. The first request we made was the login request. If this is successful, WebSphere Portal displays the very first page on which a user has view rights. In this case, if there is nothing underneath "My Portal" besides our test hierarchy, the first page is the home page. But we request the home page again in our next call after login. Therefore, we get twice the number of hits for the home page.

The most popular portlet is the "Information Portlet". Again, no surprise here because we had only one portlet on our single page template.

In a real world scenario this report would, of course, look much more complex. However, the small and simple setup in this example gives you a clear understanding of how portal analytics work.

Figure 5. AWStats Pages report for the example sample test plan

Setting configuration parameters in WebSphere Portal V5.1

Configuring WebSphere Portal V5.1 for portal analytics is slightly different from the procedure for WebSphere Portal V6. The main difference is that in WebSphere Portal V5.1, all configuration settings are made through property files, whereas WebSphere Portal V6 manages its configuration with the help of the Resource Environment Provider facility of WebSphere Application Server V6.

To enable analytics instrumentation in WebSphere Portal V5.1, turn on the logger in <wp_home>/shared/app/config/services/SiteAnalyzerLogService.properties by setting:

SiteAnalyzerPageLogger.isLogging=true
SiteAnalyzerPortletLogger.isLogging=true

Usually the lines are already there; they are just commented out. In this case it is enough to un-comment those lines. All other elements are similar to those for WebSphere Portal V6.

Conclusion


WebSphere Portal's analytics log provides all the necessary data for portal analytics. Although the recorded URLs are not real, clickable URLs, they still provide the relevant information to find out which pages the users of the portal looked at.

This paper explained the data collected by WebSphere Portal in the analytics log file, and showed how to use that data with the AWStats Open Source Web site analysis package.

Download

Description: Sample analytic reporting components
Name: PortalAnalyticsExampleCode.zip
Size: 7 KB
Download method: HTTP

Resources

● NCSA Combined Log Format
● "Site analysis" topic in the WebSphere Portal V5 Information Center
● "Site analysis" topic in the WebSphere Portal V6 Information Center
● IBM WebSphere Portal 5.1.0.1. How IBM's Portal Stacks Up against Our Customer Portals Evaluation Framework, Kramer, Mitchell I., Patricia Seybold Group
● Slashdot effect
● Auditing Service, WebSphere Portal 5.1 Information Center
● Measuring Web traffic, Part 1
● Measuring Web traffic, Part 2
● EBNF, ISO Standard ISO/IEC 14977:1996(E), final draft
● "Interface HttpSession", J2EE 1.4 API documentation
● DMOZ, Top: Computers: Software: Internet: Site Management: Log Analysis: Freeware and Open Source
● AWStats
● Apache JMeter
● Building a Web Test Plan
● Portal Usage Analytics: A Key Tool in the Battle Against the Empty Portal, Haddad, Robert
● IBM Tivoli Web Site Analyzer V4.5 Information Center
● Analyzing Web Logs with AWStats, Part 1, Carlos, Sean
● Analyzing Web Logs with AWStats, Part 2, Carlos, Sean

About the authors

Stefan Liesche is a Senior Technical Staff Member in the IBM Development Laboratory in Böblingen, Germany. He has 12 years of experience in the software development field. He holds a master of science degree in computer science from University of Hildesheim, Germany. He joined IBM in 1998 as part of the services group where his speciality was designing large-scale end-to-end e-business solutions for complex environments. Stefan has been working with IBM WebSphere Portal for years. He first worked on the construction of large-scale portal solutions before joining the WebSphere Portal development architecture team. He is the Lead Architect of Workplace and Portal Foundation.


Steffen Uhlig is a Senior Consultant with Lab Services for WebSphere Portal in the IBM Development Laboratory in Böblingen, Germany. He has 10 years of experience in the IT industry. He holds a Diploma degree in Electrical Engineering from the Mittweida University of Applied Sciences. Before Steffen joined IBM in 2001, he designed and implemented software for Digital Audio Broadcasting and the related Multimedia Object Transfer. Since he joined IBM, he has been helping clients with architecture and implementation of WebSphere Portal.


IBM WebSphere Developer Technical Journal: Reliable and repeatable unit testing for Service Component Architecture modules -- Part 2
Create repeatable tests for SCA modules that implement business processes

Level: Intermediate

David J.N. Artus ([email protected]), Consulting IT Specialist, IBM

20 Sep 2006

Repeatable unit testing provides an efficient and reliable means of verifying the quality of solution components. This article describes issues you may encounter testing Service Component Architecture (SCA) modules that implement business processes using Business Process Execution Language (BPEL), and how using mock objects can make repeatable testing of these components a reality.

From the IBM WebSphere Developer Technical Journal.

Introduction

In Part 1 of this series on component testing, we described an approach to performing repeatable testing of Service Component Architecture (SCA) components using the JUnit open source framework and Cactus, its server-side extension, and we showed how a unit test suite for a component can be defined in a set of test specification files with optional payload files. Figure 1 shows the major test specification parts for a service that takes a simple PostCode string and returns a data object, the expected result being specified in a simple payload file.

Figure 1. Example test specification


Since Eclipse JUnit execution facilities are delivered with WebSphere Integration Developer, running a set of tests is simple. The result is a visual display, like the one shown in Figure 2.

Figure 2. JUnit test execution

In this article, we will describe some issues you might come across when testing SCA components that implement business processes using Business Process Execution Language (BPEL), then discuss how to address these issues by using mock objects. We will also present some techniques for creating mock objects quickly. This article is accompanied by a download file with code examples that implement these techniques.

If you plan to work through the examples presented in this second article, you will find it helpful to work through the previous article first.

Testing BPEL

Since the WebSphere Process Server programming model exposes business processes as SCA components that can be invoked in the same manner as any other SCA component, we might then expect to be able to test BPEL SCA components in the same manner as any other component. However, we find that there are some issues with such an approach. Consider the very simple example process shown in Figure 3.

Figure 3. Simple BPEL process


This example demonstrates the three key problematic issues:

1. The business process uses a human task, and the result of the human task controls some key execution paths.
2. External services may not be readily executable in unit test environments; for example, they may depend upon infrastructure not available to the developers.
3. The business process is initiated but there is no corresponding reply.

Let's look at each of these issues in more detail.

1. Human task

The use of human tasks is extremely common in real-world business process implementations. Clearly, if we are to implement a repeatable set of tests that can be executed automatically, then we must find a way to remove the need for human intervention. Further, our example shows a very common pattern: the outcome of the human task determines which code path is taken. In Figure 3, you can see that the choice between InvokeAgreedPayment and InvokeCaseClosed depends upon the human response.

If our tests are to achieve good code coverage, then we must be able to control the result of human tasks. Hence, the need for the following requirement:

It must be possible to simulate the actions of humans in response to different input data.

2. External services

External services present a number of challenges in addition to the previously noted one-way response issues:


● Some services (for example, those implemented using mainframe or ERP systems) may not be available to developers in unit test environments.

● Even when such systems are available, the test data may not be suitable for exercising all code paths; that is, it may be difficult to simulate some error conditions.

● In many cases, the development of services occurs in parallel with the development of business processes. Hence, there is no guarantee that a service implementation will be delivered when it is needed for unit testing.

These factors lead to another testing requirement:

It must be possible to simulate the actions of services in response to different input data.

3. One-way invocation

Until now, all our SCA tests have had the pattern invoke service, validate response. Our example business process, in common with many real world business processes, has no response. We can readily construct a unit test to invoke the BPEL process, but there is no response to validate, so how can we determine its success or failure? Our determination of success or failure must lie in the ability to determine whether actions such as invoking other services have been performed correctly.

If our example process is correctly implemented, then particular values of input data will lead to the service invocation InvokeAutoSettlement while other input values will lead to the creation of a human task and, subsequently, other service invocations.

This leads to the testing requirement:

It must be possible to determine that a service invocation has occurred.

We also note that there are assignment statements prior to the two service invocations, which may be arbitrarily complex. This leads to another requirement:

It must be possible to determine that the correct data was used to invoke a service.

There is one further requirement, this one being particularly important when the BPEL process has more complex logic. In our simple example, it's clear that if InvokeAutoSettlement is used, then (for example) InvokeAgreedPayment will not be used. A unit test should be able to determine that such mutual exclusivity has been correctly implemented. Therefore:

It must be possible to determine that a service has not been invoked.

Before we consider how such testing requirements can be satisfied, let us consider some additional issues.

Mock objects and mock object controllers

We have established the need to emulate SCA components such as human tasks and external services so that they can drive particular code paths in the BPEL process. This is accomplished with the common testing pattern known as mock objects.

Implementing such mock object services in WebSphere Integration Developer is comparatively simple. Our starting point would be a real component interface, such as that shown in Figure 4.


Figure 4. Customer interface

WebSphere Integration Developer enables us, with a few mouse clicks, to create a new SCA component that implements that interface, as shown in Figure 5.

Figure 5. Mock customer

With a further mouse-click, we can then generate a skeletal implementation of the interface:

public class MockCustomerImpl {
    // ... constructor etc. removed ...

    public DataObject getCustomer(Integer customerId)
            throws ServiceBusinessException {
        // TODO Needs to be implemented.
        return null;
    }
}

We can then complete the creation of a functioning mock object by adding code to construct, populate, and return a suitable data object. In principle, creating such code is not particularly difficult, but it can be somewhat tedious and error prone to write, especially if the data object is large and contains nested objects and arrays. This is very similar to the issue we encountered in the previous article when performing test invocations. There, we needed to construct potentially complex payload objects, and our solution was to externalise the payload data to XML files. Here, we will show how to apply the same ideas to mock object return values, externalising the return values to XML files.


With mock objects, we can replace human tasks and service components to create a reproducible test harness for our BPEL processes; we will be able to execute the processes without human intervention and with no dependency on external systems.

So, we have addressed some of the test requirements: we can actually execute the tests. The remaining issues are concerned with validating that a test was successful. To do this, we need to verify that:

● Specific services were invoked with the correct data.● Specific services were not invoked.

We accomplish this by extending the capabilities of the mock object, adding a second interface (Figure 6).

Figure 6. Mock controller interface

In this interface, you see four operations. The loadData operation is used in our scheme for externalising return data, which we will discuss later. The remaining three operations, resetLastRequest, checkNoRequest, and checkExpectedRequest, enable the validation of service invocation.

Our mock object will then implement both the service interface of the component it is replacing, plus the controller interface. An outline of a mock object implementation would look like:


public class MockServiceImpl {
    // ... constructor etc. removed ...

    private DataObject lastRequest = null;

    // The service method used by the BPEL process under test.
    public void exampleServiceMethod(DataObject requestData)
            throws ServiceBusinessException {
        lastRequest = requestData;
    }

    public void resetLastRequest(String label)
            throws ServiceBusinessException {
        lastRequest = null;
    }

    public String checkNoRequest() throws ServiceBusinessException {
        assertIsNull(lastRequest);
        return null;
    }

    public String checkExpectedRequest(DataObject expectedValue)
            throws ServiceBusinessException {
        // code to compare expectedValue with lastRequest
        return null;
    }
}
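The comparison elided in checkExpectedRequest can be implemented in several ways. One possible shape, shown here purely as an illustration and not as the actual framework code, walks the expected data object's properties using the commonj.sdo API and reports the path of the first mismatch:

import java.util.Iterator;
import java.util.List;

import commonj.sdo.DataObject;
import commonj.sdo.Property;

public class DataObjectComparer {

    // Returns the property path of the first difference, or null if the
    // objects match. Many-valued properties fall through to equals(),
    // which may be stricter than you need.
    public static String firstDifference(DataObject expected, DataObject actual) {
        List props = expected.getType().getProperties();
        for (Iterator it = props.iterator(); it.hasNext();) {
            Property p = (Property) it.next();
            Object e = expected.get(p);
            Object a = actual.get(p);
            if (e instanceof DataObject && a instanceof DataObject) {
                String diff = firstDifference((DataObject) e, (DataObject) a);
                if (diff != null) {
                    return p.getName() + "/" + diff;
                }
            } else if (e == null ? a != null : !e.equals(a)) {
                return p.getName();
            }
        }
        return null;
    }
}

checkExpectedRequest could then raise a ServiceBusinessException whenever firstDifference returns a non-null path.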

A test of a BPEL process that uses the exampleServiceMethod could consist of three steps, each specified in its own .tspec file:

1. Invoke the mock controller resetLastRequest.
2. Initiate the BPEL process with values intended to drive the BPEL to a path invoking exampleServiceMethod with certain values.
3. Invoke the mock controller checkExpectedRequest, passing the expected values; if the expected values are not found, an exception is raised by the mock controller and the test will fail.

Similarly, a second test sequence can be defined to verify that exampleServiceMethod was not invoked:

1. Invoke the mock controller resetLastRequest.
2. Initiate the BPEL process with values intended to drive the BPEL to a path not invoking exampleServiceMethod.
3. Invoke the mock controller checkNoRequest; if lastRequest is not null (implying that exampleServiceMethod was unexpectedly called), then an exception is raised by the mock controller and the test will fail.

The scheme we describe here is simplistic and intended to demonstrate the principles of using a mock controller interface in conjunction with a mock object. This scheme will not be sufficient for more demanding scenarios such as:

● Multi-user scenarios where many mock object instances are servicing many requests.
● A BPEL process that makes several different calls to the same mock object in rapid succession.

Such scenarios expose the need for a more sophisticated scheme than simply keeping and checking the most recent request data. In the remainder of the article, we will show how to implement our simplistic mock object scheme, exploiting a set of libraries and components that greatly reduce the work required. If your test scenarios require a more complex mock controller implementation, the structure described here provides a good starting point.
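For example, a more robust mock might record every request rather than only the most recent one. The following minimal sketch is our own illustration of that direction, not part of the test framework delivered with this article:

import java.util.ArrayList;
import java.util.List;

public class RecordingMockBase {

    // All requests received since the last reset, in arrival order.
    private final List requests = new ArrayList();

    protected synchronized void recordRequest(Object requestData) {
        requests.add(requestData);
    }

    public synchronized void resetRequests() {
        requests.clear();
    }

    public synchronized int getRequestCount() {
        return requests.size();
    }

    public synchronized Object getRequest(int index) {
        return requests.get(index);
    }
}

A controller interface built on such a base could assert on the count and content of all calls made during a test, not just the last one.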


Test scenario

Our test scenario is part of a fictional archive application managing a long-term store of items. We will be working on the business process by which a customer requests that an item be retrieved and is later notified that the retrieval has completed.

You can download a Project Interchange file containing all the code for this example. If you want to work through this article, then import these projects:

Table 1. Projects to import

Project | Type | Contents
L_Archive | SCA library | Interface for the Archive application components. Delivered as part of the application, used by JUnit tests.
MP_ArchiveRetrieval | SCA module | The Archive application process module. Contains the BPEL process we are testing.
J_ScaUtilities | Java library | Utilities usable by both applications and tests.
LT_ScaTest | SCA library | The interface used by unit tests and mock objects, not for application use.
LT_ScaJunitTest | SCA library | The JUnit SCA test execution code. For use only by JUnit tests.
MT_TestArchiveRetrieval | SCA module | Module delivering JUnit tests.
MT_TestArchiveRetrievalJUnitWeb | J2EE Web project | The JUnit test code, test definitions, and payload data; part of MT_TestArchiveRetrieval.

If you open the Assembly diagram for the MP_ArchiveRetrieval module, you will see the interfaces offered by the retrieval process and services it uses.

Figure 7. Archive Module Assembly

Process interfaces

We see that the process has an initiation interface (Figure 8).

Figure 8. Initiation interface


This interface might reasonably be accessed by a customer via some suitable user interface. The process also has a second interface, by which the process is notified of the completion of a retrieval request (Figure 9).

Figure 9. Retrieval result notification interface

Services used by retrieval process

The business process uses two services. The Customer service will retrieve customer information given the customer's unique ID (Figure 10).

Figure 10. Customer service interface

When the retrieval is completed then the process will notify the Customer via the CustomerCommunication service (Figure 11).

Figure 11. CustomerCommunication service


Required mock objects

The projects you imported did not include implementations of the Customer and CustomerCommunication services. We will create mock objects for these two services later in this article. (If you wish to skip the work of creating the mock objects, you can import the projects MT_MockCustomer and MT_MockCustomerCommunication from the Project Interchange file.)

The retrieval process

Before we start work on the mock objects, you might like to examine the business process as it is currently implemented. Open the process to see the overall structure, shown in Figure 12.

Figure 12. Retrieval process

At this stage, we are primarily interested in the three items highlighted in red:

● The Receive statement labelled Initiation: this is the target of our unit test initiation.


● The Invoke statement labelled GetCustomerDetails: we will look at this in more detail shortly, but note that this is one point where a mock object will be invoked.

● The Invoke statement labelled NotifyCustomer: this is our second mock object invocation point.

You might also notice that this is far from a complete implementation of the process; for example, there is no actual access to the archive system. We show this abbreviated example for ease of understanding; however, it is worth reflecting on two points:

● Our preference is to build and test large BPEL processes incrementally, focusing on particular parts of the overall process. Our repeatable tests give us confidence should any re-factoring become necessary.

● An important principle, especially relating to performance testing, is "Test Early, Test Often." We would wish to take even a skeletal process like this example into integration test and performance test environments. Early experiments give immediate insights into stability, error handling, and performance characteristics.

You can also examine the fault handlers for the GetCustomerDetails invoke, noting that notFound faults and transientError faults are potentially treated differently (Figure 13).

Figure 13. Faults on GetCustomerDetails

In this simplified code, we have no special treatment of the different faults; however, in a real implementation, we would need to exercise these different paths. When we implement our mock object, we will need to consider how to produce the different errors needed to drive the different paths.

The unit tests

To examine the unit tests that we intend to run, open a Web perspective in WebSphere Integration Developer. In the Project Explorer view, expand Dynamic Web Projects => MT_TestArchiveRetrievalJunitWeb => Java Resources => Java Source => com.ibm.issw.archive.utsuite => ArchiveTest-Data => testArchiveInitiation (Figure 14).

Figure 14. Unit test specifications


The ArchiveInitiation test consists of five steps. Open each tspec file in turn to understand how the test proceeds:

● 010-InitialiseCustomers resets the Customer mock object.
● 020-InitialiseNotification resets the CustomerCommunication mock object.
● 030-LoadCustomers initialises the Customer mock object with some customer data. (We will explain how this works shortly.)
● 040-InitiateRetrieval starts the business process itself. The expectation is that the process will use GetCustomerDetails to obtain data from the Customer mock object and then use the NotifyCustomer features of the CustomerCommunication mock object.
● 050-CheckNotification will then verify that the CustomerCommunication mock object was called with the correct data.

You will also see another set of tests owned by CustomerTest.java, in the CustomerTest-Data directory (Figure 15). These tests enable us to exercise the Customer mock object directly. These are supplied to enable a quick demonstration of the mock object we are about to build next.

Figure 15. Customer tests


We will describe the purpose of these tests when we are ready to run them, after we have completed building the Customer mock object.

Customer mock object

To create the Customer mock object, we will perform these general steps:

A. Create the mock Customer module
B. Create mock Customer component
C. Generate an implementation class
D. Implement loadData operation
E. Implement getCustomerDetails operation
F. Test the mock object

A. Create the mock Customer module

In these examples, we create separate modules for each mock object. For a realistic development scenario, this does result in the creation of quite a number of modules and WebSphere Integration Developer projects, but we find the benefits of separation of concerns outweigh the increase in complexity.

To create the module:

1. In the Business Integration perspective, Business Integration view, right-click and select New => Module to display the New Module dialogue.

2. Enter the module name MT_MockCustomer and click Finish.
3. The mock object must implement the Customer interface from the L_Archive library. We will also exploit some libraries from our test framework. Therefore, we need to set up the necessary dependencies. Double-click on the newly created project to bring up the Dependency editor.


4. In the Libraries section, click Add to bring up the Library Selection dialogue.
5. Select L_Archive and click OK.
6. Similarly, add dependencies on J_ScaUtilities and LT_ScaTest (Figure 16).

Figure 16. Add libraries

Note that mock objects do not themselves use JUnit, and therefore we do not need to add a dependency on the LT_ScaJunitTest library.

7. Use File => Save and File => Close to save your changes and close the dependency editor, respectively.

B. Create mock Customer component

8. Select the MT_MockCustomer Assembly Diagram and double-click to open the editor.
9. In the palette, select the uppermost element, and from the sub-palette, select Component (with no implementation type), then click on the editing surface. This creates the mock object component (Figure 17).

Figure 17. Mock Customer assembly


10. Right-click the newly created component. Rename the component MockCustomer. The primary role of the Mock Customer component is to provide the CustomerService interface.

11. Select Add Interface from the context menu to bring up the Add Interface dialogue.
12. Select I_CustomerService and click OK. You can see the result in the Details tab of the Properties view (Figure 18).

Figure 18. Mock Customer with service interface

C. Generate an implementation class

To create a skeletal implementation of the mock Customer component:

13. Right-click on the component and select Generate Implementation to bring up the Generate Implementation dialogue.

14. This dialogue enables you to specify the Java package for the implementation class, offering names derived from the interface namespaces. It is a common convention to use package names that conform to your own naming convention. To do this, click New Package, and then enter a suitable name. We use com.ibm.issw.archive.mock.customer (Figure 19). Click OK.

Figure 19. Implementation package


15. Select the newly created package and click OK again to generate the implementation code (Figure 20).

Figure 20. Generated implementation

16. Now, select Add Interface again and add the I_MockServiceController interface. Notice that we have generated the service implementation before adding the second interface; we will be using a library implementation of the mock controller interface, so we don't need to write any code for the second interface.


Figure 21. Mock object interface

17. You can see the result in the Details tab of the Properties view (Figure 22).

Figure 22. Mock Customer interfaces


18. Two other projects, MT_TestArchiveRetrieval and M_Archive, will need to use this mock component, so export the interfaces to give those projects access. Right-click the MockCustomer component, and select Generate Export => SCA Binding to bring up the Select Interface dialogue (Figure 23).

19. Check both interfaces, then click OK.

Figure 23. Export mock Customer interfaces

Your assembly diagram will now look like Figure 24.


Figure 24. Mock Customer export

20. Save the Assembly Diagram. We are now ready to provide implementations for the interfaces.

D. Implement loadData operation

Now that we have added the mock Service Controller interface to the MockCustomer component, you will see an error because we have no implementation of the required operations (Figure 25).

Figure 25. Implementation errors

A suitable set of implementations is provided in the testing framework, in a base class named ScaMockBase.


21. Modify the MockCustomerImpl.java implementation to extend this base class by adding the import of ScaMockBase and the extends clause as shown below:

import com.ibm.swservices.sca.test.ScaMockBase;
import com.ibm.websphere.sca.ServiceBusinessException;
import commonj.sdo.DataObject;
import com.ibm.websphere.sca.ServiceManager;

public class MockCustomerImpl extends ScaMockBase {

22. Save these changes. This will remove the errors.

23. Examine the code for the ScaMockBase class. Click on the class name and press F3 to open the code. You will see a number of protected (and therefore accessible to your MockCustomer class) member variables:

protected Logger m_logger = Logger.getLogger(this.getClass().getName());

protected Object m_lastRequest = null;

protected String m_lastFaultName = null;

protected List m_dataList = new ArrayList();

You will see that ScaMockBase populates m_dataList with valid customers from XML files:

DataObject data = XmlSdoUtilities.readDataFromXmlFile(dataFiles[count]);
m_dataList.add(data);
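The article shows only these two lines of the loading logic; for orientation, the surrounding loop in ScaMockBase presumably resembles the sketch below. The method signature and the findDataFiles helper are our assumptions, not code from the framework (the real implementation ships in the download).

// Illustrative sketch only; findDataFiles is a hypothetical helper, since the
// article does not show how ScaMockBase discovers its XML files.
public void loadData(String dataFilePrefix) {
    m_dataList.clear();
    String[] dataFiles = findDataFiles(dataFilePrefix);
    for (int count = 0; count < dataFiles.length; count++) {
        DataObject data = XmlSdoUtilities.readDataFromXmlFile(dataFiles[count]);
        m_dataList.add(data);
    }
}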

We will be using this list in our getCustomerDetails implementation. We need to create some files containing a set of valid customer data. The format of the files is the serialised form of the Customer data object:

<?xml version="1.0" encoding="UTF-8"?>
<_:TestDefinition xsi:type="l:Customer"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:l="http://L_Archive"
    xmlns:_="http://scatest/issw/ibm/com">
  <customerId>2</customerId>
  <name>Bingley Bradford</name>
  <email>[email protected]</email>
  <phone>22222</phone>
</_:TestDefinition>

Remember that in our previous article we described a utility for creating an example of such files from the data object definition. The data files must be located in the mock Customer project so that they can be picked up from the class loader. We have provided some sample files, delivered in the MT_TestArchiveRetrievalJUnitWeb project.

24. In the Web perspective, Project Explorer view, expand Dynamic Web Projects => MT_TestArchiveRetrievalJUnitWeb => Java Resources => JavaSource.

25. Right-click on the com.ibm.issw.archive.testData.Customer folder, and select Copy.

26. Expand Other Projects => MT_MockCustomer, then right-click com.ibm.issw.archive.mock.customer, and select Paste (Figure 26).


Figure 26. Install test data

We have now placed the test data in a directory tree in the same location as the MockCustomer class; the test data is loaded by the classloader for this class. If you chose a different package name, you will need to adjust the location of the test data files accordingly.
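To make the class loader relationship concrete: a resource loaded relative to a class is resolved against that class's package, which is why the data files must sit in the same directory tree as MockCustomerImpl. The following fragment is our own illustration, not code from the framework:

import java.io.InputStream;

// Resolved against com/ibm/issw/archive/mock/customer/ on the classpath,
// because that is the package of MockCustomerImpl.
InputStream in = MockCustomerImpl.class.getResourceAsStream("Customer001.xml");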

27. You now have an implementation of the loadData operation, along with test data for it to load. Open the two data files Customer001.xml and Customer002.xml and note the Customer IDs that will be loaded.

Other attributes of the ScaMockBase class

You should notice that in ScaMockBase, an instance of java.util.logging.Logger is created, enabling us to add trace and error statements to our implementation:

protected Logger m_logger = Logger.getLogger(this.getClass().getName());

Also, m_lastRequest and m_lastFaultName are used to implement the Last Request scheme we described earlier. We will be using these later in our second mock object implementation.
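The article does not show the bodies of these methods, but based on the description, the scheme might look like the following minimal sketch. The names saveLastRequest, checkExpectedRequest, and checkNoRequest come from the text; the bodies below are our assumptions (how the real framework compares data objects is not shown here).

protected void saveLastRequest(Object request) {
    m_lastRequest = request;  // remember the most recent invocation for later checks
}

public void checkNoRequest() {
    if (m_lastRequest != null) {
        throw new RuntimeException("Unexpected request: " + m_lastRequest);
    }
}

public void checkExpectedRequest(Object expected) {
    if (m_lastRequest == null || !expected.equals(m_lastRequest)) {
        throw new RuntimeException("Expected " + expected + " but saw " + m_lastRequest);
    }
}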

E. Implement getCustomerDetails operation

We can now return to MockCustomerImpl.java and add the implementation of getCustomerDetails.

28. Use the following code as the body of that method:

m_logger.fine("getCustomer " + customerId);

for (int i = 0; i < m_dataList.size(); i++) {
    DataObject candidate = (DataObject) m_dataList.get(i);
    int candidateId = candidate.getInt("customerId");
    m_logger.finest("candidate " + candidateId);
    if (candidateId == customerId.intValue()) {
        m_logger.finest("returning candidate");
        return candidate;
    }
}
throw new ServiceBusinessException("NotFound");

Observe that the implementation emits trace events using the m_logger member variable from the base class.

29. Save your implementation.


F. Test the mock object

You now have a functioning mock object that provides the getCustomerDetails method and a set of tests for that object. You can now deploy the MockCustomer module and the tests to a server, and then verify that the MockCustomer operates properly.

30. Wire the test module to the mock service module: expand the MT_TestArchiveRetrievalApp module and open its assembly diagram. You will see the standalone references used by the tests.

31. Select the customer standalone reference and, in the Properties view, select the Binding tab.

32. Click Browse to display the SCA Export Selection dialogue. Select the export provided by the MockCustomer module (Figure 27).

Figure 27. Wire MockCustomer to test

33. In the same way, wire I_MockServiceController to the same module. We now have enabled the tests to use both interfaces to our mock object.

Figure 28. Wire MockCustomer Controller to test


Now we can deploy the modules to the server.

34. In the Business Integration perspective, Servers view, right-click the WPS server, and select Add and Remove Projects to bring up the Add and Remove Projects dialogue.

35. Add MT_TestArchiveRetrievalApp and MT_MockCustomerApp to the server, then click Finish.

Figure 29. Deploy MockCustomer for testing


36. Wait for the application deployment and initiation to complete, and then execute the test. In the Web perspective, expand MT_TestArchiveRetrievalApp and select CustomerTest.java.

37. From the Run menu, select Run... to display the Run dialogue.

38. Select JUnit and click New.

Figure 30. Run MockCustomer test

This will create a new run configuration, CustomerTest, as shown in Figure 31.

Figure 31. Run configuration


39. In the Arguments tab, add the Cactus URL specification (adjust the host and port to correspond to your test environment):

-Dcactus.contextURL=http://localhost:9080/MT_TestArchiveRetrievalJunitWeb

40. Click Run to launch the test.

Figure 32. Cactus argument and execution

You should see the JUnit test succeed, plus some information displayed in the console showing the data comparisons performed by the test (Figure 33).


Figure 33. Test success

Figure 34. Test progress in console

Take the time to study the test files. You will see:

● 01-InitialiseCustomers, which resets the list of customers held by MockCustomer.

● 02-LoadCustomers, which reads the test data we installed. Notice how the location of the test data is specified in the payload:

<service>I_MockServiceControllerPartnerRef</service>
<method>loadData</method>
<payloadString>testData/Customer</payloadString>

● 03-GetCustomer, which retrieves a customer with a specified ID and compares the details with the contents of the expected results file ExpectedCustomer.xml.

● 04-GetBadCustomer, which attempts to retrieve a customer with an invalid ID and expects a specific fault to be thrown.

This test demonstrates that our MockCustomer is able to provide different responses that could drive our business process to different paths.

Mock CustomerCommunication object

We will illustrate the second aspect of mock objects by constructing a MockCustomerCommunication object. In our business process, the CustomerCommunication component should notify the Customer about the results of its retrieval request. In testing the business process, we want to determine that the Communication component is invoked with the correct data. We will enable such testing by creating a suitable mock object.

Creating the MockCustomerCommunication component

Creating MockCustomerCommunication uses the same techniques you just followed for the MockCustomer object. We will outline the steps here, detailing only new information; refer to the detailed instructions and screenshots above as applicable.

1. Create an MT_MockCustomerCommunication module, adding dependencies on the L_Archive, J_ScaUtilities and L_ScaTest libraries.

2. Create MockCustomerCommunication component and implementation, following these additional steps:

❍ Add the I_CustomerCommunicationService interface to the component.
❍ Generate a skeletal implementation in a package com.ibm.issw.archive.mock.customercommunication.
❍ Add the I_MockServiceController interface.
❍ Generate an export for the two interfaces with an SCA binding.
❍ Modify the implementation to extend the ScaMockBase class, thus implementing the I_MockServiceController interface.

3. Save your work so far. There should be no errors.

Implementing the service

We now need to consider the service operation that will be used by our business process.

4. Open your implementation class and examine the sendRetrievalResult method. Observe that there is no useful implementation.

public void sendRetrievalResult(DataObject retrievalNotifcation) {
    // TODO Needs to be implemented.
}

5. We require that this method interact with the ScaMockBase class so that the testing methods checkExpectedRequest and checkNoRequest can verify whether this method has been called with the correct data. Achieve this by modifying the code to read:

public void sendRetrievalResult(DataObject retrievalNotifcation) {
    m_logger.info("sendRetrievalResult" + XmlSdoUtilities.getDataObjectAsXml(retrievalNotifcation));
    saveLastRequest(retrievalNotifcation);
}

6. The trace message exploits a utility method in the class XmlSdoUtilities, so you will need to add this import statement:

import com.ibm.swservices.sca.utilities.XmlSdoUtilities;

7. The call to a method in ScaMockBase, namely saveLastRequest, is the only required implementation. Ensure all your changes are saved. The mock object is now complete.

Testing the BPEL process


We can now wire the BPEL process to the two mock objects we created and run the tests.

8. In the Business Integration perspective, open the MP_ArchiveRetrieval assembly diagram and select the CustomerService import.

9. In the Properties view, select the Binding tab and click Browse to bring up the SCA Export Selection dialogue.

10. Select the export from your mock Customer module and click OK.

11. Examine the values entered in the Binding tab to see how they refer to your mock object module.

Figure 35. Wire process to MockCustomer

12. In the same way, select the CustomerCommunicationService import and wire it to the MockCustomerCommunication module. The result should be as shown in Figure 36.

Figure 36. Wire process to MockCustomerCommunication


13. Save the modified assembly diagram.

14. We also need to enable the tests to invoke the mock object controllers, so open the MT_TestArchiveRetrieval assembly diagram and use the technique described above to ensure that the imports are correctly wired, as summarised here:

Table 2. Wire tests to modules

Import                                                                 Module name                    Export name
I_MockServiceControllerPartner                                         MT_MockCustomer                MockCustomerExport
I_MockCustomerCommunicationControllerPartner                           MT_MockCustomerCommunication   MockCustomerCommunicationExport
archiveRetrieval                                                       MP_ArchiveRetrieval            Initiation
customer (we only use this import if running the Mock Customer test)   MT_MockCustomer                MockCustomerExport

15. Now we are ready to initiate the test. Ensure that your test server is started and add the four projects to it:

❍ MP_ArchiveRetrieval
❍ MT_TestArchiveRetrieval
❍ MT_MockCustomer
❍ MT_MockCustomerCommunication

16. In the Web perspective, expand MT_TestArchiveRetrieval and select ArchiveTest.java. Select Run => Run... to create a JUnit launch configuration, remembering to specify the VM argument:

-Dcactus.contextURL=http://localhost:9080/MT_TestArchiveRetrievalJunitWeb

Figure 37. Launch archive test


17. The JUnit test should complete with no errors. Examine the test specifications to remind yourself of how the tests and mock objects cooperate to verify the process behaviour.

Conclusion

We have seen how simple mock objects can enable repeatable unit testing of a BPEL process, and described a simple framework that greatly eases the construction of mock objects. Applying the principles of repeatable unit testing to BPEL development increases both productivity and quality.

More in this series

● Part 1: Create automated unit tests for SCA modules

Download

Description         Name                      Size    Download method
Sample components   BpelUnitTesting-v1.1.zip  698 KB  HTTP


Resources

● Service Component Architecture
● SCA application development
● Building SOA solutions with the Service Component Architecture

About the author

David Artus is a member of the IBM Software Services for WebSphere team, working out of the IBM Hursley Lab in the UK. He has provided WebSphere consultancy and mentoring since he joined IBM in 1999. Prior to joining IBM, David worked in a variety of industries, including investment banking, travel, and IT consultancy. His interests include the design of distributed systems, object technologies, and service-oriented architectures.


IBM WebSphere Developer Technical Journal: A guided tour of WebSphere Integration Developer -- Part 6: Becoming more on-demand using dynamic business rules

Level: Introductory

Jane Fung ([email protected]), Advisory Software Developer, IBM
Greg Adams ([email protected]), Distinguished Engineer, IBM
Richard Gregory ([email protected]), Staff Software Developer, IBM
Randy Giffen ([email protected]), Advisory Software Developer, IBM

20 Sep 2006

This is the sixth article in a series exploring a service-oriented approach to application integration using IBM® WebSphere® Integration Developer. This article examines how you can make your running application dynamic and flexible so it can handle changing business conditions without requiring you to redeploy the application. We will look extensively at business rules, a key mechanism to achieve this flexibility.

From the IBM WebSphere Developer Technical Journal.

Introduction

In the last article, we dove deeper into business process creation using IBM® WebSphere® Integration Developer. As you might recall, a business process is any system or procedure that an organization uses to achieve a larger business goal. You can automate a business process, or you can create a number of steps that one or more users need to manually complete. Business processes can be short running, or take hours, days, weeks or more to complete.

Business processes often include key business decisions. For example, your enterprise might choose to provide gold customers with a 10% discount and silver customers with a 5% discount. The size of the discount is a key business decision based on the type of customer. Furthermore, a truly agile business does not want these key decision points hard-wired into their processes. Instead, they might want to adjust the discounts rapidly so they can lure gold customers during a holiday season. Perhaps, if they are feeling particularly giddy, they might give gold customers a 50% discount for December and January.

You could always hard-wire the decision and discount values into the processes and redeploy your application whenever you want to change the discount, but that is hardly agile and would require a user who simply wants to give good customers a 50% discount to actually modify a process and redeploy the application, rather than just typing two numbers. You could also consider building your application so that it uses a database, and then modify the database as business conditions change (for example, modify the database to change the gold discount to 50%). However, this option makes your application more complicated and requires that you have a database-savvy user on staff, instead of a user who just wants to type two numbers. Surely there must be something simpler?

Indeed there is! WebSphere Integration Developer provides a key component that greatly improves application flexibility: business rules. Business rules externalize and manage business logic separately from the main business process. They allow you to make changes at runtime to keep up with the evolving on-demand business environment.

Business rules are the business logic and constraints that shape an organization. For example, a business rule might state, "shipments over 300K in value require 2 supervisors to approve." Business rules in WebSphere Integration Developer live within a rule group, which is another way to implement a component. There are two kinds of business rules: rule sets and decision tables. A rule set typically consists of a number of if-then rules, and is normally used when complex logic is involved. A decision table captures simple rule logic in a table format. The selection of which business rule to invoke is based on date criteria that exist in the rule group.

In this article you will learn about

● Rule groups
● Rule sets
● Decision tables
● Putting it all together

Rule groups

A rule group is a type of component implementation that logically groups rule sets and decision tables. Like other component types, a rule group implements one or more interfaces. For example, Figure 1 shows a rule group that implements the getBilling operation of the BillRecordInterface. When you call the getBilling operation, this rule group determines what action to take next.

Figure 1: A sample rule group


When you invoke a rule group, it selects a destination using the selection criteria and date range entries. A destination can be either a rule set or a decision table that contains the actual business logic. The selection criteria can be the current date, an XPath expression that gets a date from a business object, or a Java snippet that returns a date.

A date range entry specifies a range of dates, during which a destination would be applicable. If the date returned from the selection criteria is within the boundary of the date range entry, then the corresponding destination is used. The date range entries are optional. If, in fact, no date range entry is supplied or if no date range matches, the default destination is invoked. Keep in mind that if you do not specify a default destination and no date range entry matches, then an exception will be thrown. Therefore, it is best to provide a default destination.

Looking at Figure 1 again, the rule group states that if the current date falls between June 20 and September 20, 2007, then the destination getBillingRSsummer will be in effect. Any other date will put the default destination getBillingRS into effect.
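To make the selection behaviour concrete, here is a conceptual sketch of the logic just described. This is purely illustrative: Destination, DateRangeEntry, and their getters are hypothetical stand-in types of ours, not the WebSphere Process Server API.

import java.util.Date;
import java.util.List;

// Conceptual sketch of rule group destination selection.
public Object selectDestination(Date selectionDate, List dateRangeEntries, Object defaultDestination) {
    for (int i = 0; i < dateRangeEntries.size(); i++) {
        DateRangeEntry entry = (DateRangeEntry) dateRangeEntries.get(i);
        // a destination applies when the selection date falls inside its range
        if (!selectionDate.before(entry.getStart()) && !selectionDate.after(entry.getEnd())) {
            return entry.getDestination();
        }
    }
    if (defaultDestination == null) {
        // mirrors the behaviour described above: no matching range and no default throws
        throw new IllegalStateException("No date range matched and no default destination");
    }
    return defaultDestination;
}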

Partner References

You need to define a partner reference in a rule group when a rule set or a decision table invokes another component. For example, you might want your rule group to perform a business decision based on the customer type (gold, silver) and then call another component to ship the order. For gold customers you might use air express, for bronze customers you might use the nearest snail. After creating the rule group, you need to add it to the assembly diagram with the partner references wired correctly before it is considered complete.

Figure 2 shows a rule group that contains two partner references. In the details pane, you can see that each partner reference has an associated interface. Partner references in a rule group work the same as they do in a business process.

Figure 2: Partner references in a rule group


Rule sets

Rule sets are one type of destination in a rule group. Each rule set has its own namespace and name. There are two kinds of rules in a rule set: if-then rules and action rules. An if-then rule can have one or more conditions. When the if clause has more than one condition, you can choose to have it evaluate to true when any of the conditions are true or when all of the conditions are true. Figure 3 shows both kinds.

Figure 3: Rule set conditions

When the if clause evaluates to true, the corresponding actions in the then clause are performed. The actions can range from assigning a value to a variable to creating business objects or invoking other services. Most of these actions can co-exist with other actions in the same section, except for the invoke action. The invoke action cannot run with the other actions; it occupies the whole action section by itself.

The other type of rule is an action rule. An action rule doesn't have any conditions; it always performs the actions specified in the rule. The actions that are allowed in an action rule are the same as those that are allowed in the action section of the if-then rule. The actions in both the if-then rule and the action rule execute sequentially.

Variables


You can define local variables in a rule set. Local variables can be primitive or business object types. You define variables in the variables section of the rule set editor. When local variables are defined, they have no value and are not initialized. If you try to use them without initializing them, you will get a run-time exception. Therefore, you must first assign a new business object instance to a business object variable before using it. Figure 4 shows the variable section from a rule set editor.

Figure 4: A rule set variable section

Rule Templates

A rule template is a form that you can use to create similar looking rules. For example, if you had a rule stating that gold customers get a 50% discount and another where silver customers get a 5% discount, you could first imagine a template that in essence says "for a customer of type X give a discount of type Y". Now that you have this template, you can fill in gold and 50 to get a rule for your gold customers. Then you can fill in silver and 5 to create a second rule for silver customers. Because both rules were created from a template, you know that gold and 50 are parameters. That means that, at runtime, you can change those values to, say, 45 or 65 quite easily. To modify the parameters in a rule instance at runtime, you use the Business Rules Manager, by right-clicking a server instance in the Servers view and selecting Launch - Business Rules Manager.

You can create a rule template for either an if-then rule or an action rule. There are two ways to create a template: you can create a regular rule and then convert it to a rule template, or you can create a rule template directly. A rule template is shown in Figure 5. The presentation field is the text that describes the rule template to an administrator using the Business Rules Manager, and it specifies the parameters that the rule template accepts. This presentation string is what users will see at runtime when they want to change the discount – they will not see the details of how you are implementing the rule. The parameters are inputs to the template and are listed under the parameters section. This example is an if-then rule; therefore, you see the if condition and its corresponding action. The if-then statement uses the parameter values that are passed in from a rule instance.

Figure 5: A rule template


A rule that uses a rule template is known as a rule instance. Figure 6 shows a rule instance that uses Template_Rule1. The Template field specifies which rule template this instance uses, and the Presentation field displays the presentation from the rule template; it is also where you provide the parameter values. In this example, param0 (seen in the template in Figure 5) has a value of Pay As You Go and param1 has a value of 30.

Figure 6: A rule instance
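If it helps to think of templates in code terms, the following illustration (our own; not WebSphere Integration Developer's API or its generated code) shows the essence of the idea: one parameterized definition, many instances whose parameter values can be changed independently.

public class DiscountRule {
    final String customerType;   // template parameter X
    final int discountPercent;   // template parameter Y, editable at runtime

    public DiscountRule(String customerType, int discountPercent) {
        this.customerType = customerType;
        this.discountPercent = discountPercent;
    }

    public static void main(String[] args) {
        // two rule instances created from the same "template"
        DiscountRule gold = new DiscountRule("gold", 50);
        DiscountRule silver = new DiscountRule("silver", 5);
        System.out.println(gold.customerType + " -> " + gold.discountPercent + "%");
        System.out.println(silver.customerType + " -> " + silver.discountPercent + "%");
    }
}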

Decision tables

A decision table captures simple rule logic in a table format. There are two sections in a decision table: conditions and actions. You can specify conditions in the conditions section. For example, Figure 7 shows a decision table that has two conditions and two actions. The conditions state that if the usage_plan is "Pay As You Go" and destination1 is "Canada", then two actions will be performed. The first action will set 0.05 into the chargeOutput variable and the second action will set 10 into the total_charge variable.

Figure 7: A decision table
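Rendered as plain Java, the Figure 7 table behaves roughly like the sketch below. This is a conceptual illustration only; the real table is authored in the editor, and BillingInput/BillingOutput are hypothetical stand-ins of ours for the interface's business objects.

// Conceptual rendering of the Figure 7 decision table.
public void decide(BillingInput in, BillingOutput out) {
    if ("Pay As You Go".equals(in.getUsagePlan())
            && "Canada".equals(in.getDestination1())) {
        out.setChargeOutput(0.05);  // first action
        out.setTotalCharge(10);     // second action
    }
}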


You can toggle the orientation of a decision table condition so that you can view it either horizontally or vertically by clicking the Change Orientation of Condition button. Figure 8 shows the destination1 condition in the horizontal orientation.

Figure 8: Changing the orientation of a decision table

Value templates

The concept of a template also applies to decision tables. You can convert any cell into a value template. Similar to a rule template in a rule set, a value template in a decision table has a presentation and parameters. Due to the nature of the decision table, though, a template in a decision table typically has only one parameter. A value template in a decision table is not reusable, but by using a value template, you can alter the parameters using the Business Rules Manager. Figure 9 shows a sample rule template for a decision table.

Figure 9: A rule template for a decision table


Putting it all together

Now that you have seen the features of rule groups, rule sets, and decision tables, let's look at a scenario that you can implement using these concepts. Our scenario is a cell-phone billing system. Have you heard the term billing increment? A cell phone plan that uses a full-minute billing increment means that a one-second phone call will be billed as a full minute. That doesn't sound like a very good deal unless you are a phone company. To stay competitive, cell phone companies might need to change billing rates or introduce a new promotional plan in real time. In this example, we examine how a company uses business rules to define the billing rules and to externalize and modify them on the fly.

We have created some projects, business objects, and interfaces for you since you already know how to do that from earlier articles. As in the last article, you just need to import the pre-built parts of the application so you can focus on the business rule aspects of it.

In this scenario, you create two rule groups. The first contains a decision table that captures the international rate plan. The other rule group contains a rule set that invokes a component to calculate the phone charges. Finally, we will show you how to change rules at runtime.

Cell Phone Billing System


Many cell phone companies use a two-leg billing system that bills each leg of the call separately. Our application uses the same system. The first leg of the call starts as soon as the caller makes the call and is billed as follows: Leg 1 rate x Leg 1 duration. The second leg of the call begins as soon as the destination party or answering device answers the telephone. It is billed as follows: Leg 2 rate x Leg 2 duration. The customers probably thought that they didn't have to pay unless they answer the phone. Sadly, they do, and while we're ruining your day, the Easter Bunny is also not real (we think).

If a customer makes a call from the US to Europe, the first leg begins as soon as the caller starts calling. When the destination party picks up the call, the second leg begins. The rate table is shown in Table 1.

Table 1: International rate table

Destination   Canada   US     Asia Pacific   Europe
Rate / min    0.05     0.05   0.15           0.20

Imagine that the first leg, from the U.S., is 61 seconds and the second leg, to Europe, is 56 seconds. After adjusting the durations with a billing increment of 30 seconds, the first leg becomes 90 seconds (instead of 61) and the second leg becomes 60 seconds (instead of 56). Therefore, a 30-second billing increment means a one-second call will be charged as a 30-second call. The phone call charge is calculated as follows:

1. Charge of Leg 1: Leg 1 duration (90 seconds) x Leg 1 Rate (US = 0.05/min) = $0.075

2. Charge of Leg 2: Leg 2 duration (60 seconds) x Leg 2 Rate (Europe = 0.20/min) = $0.20

3. Total charge of the call is the sum of each leg which is $0.075 + $ 0.20 = $0.275
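The arithmetic above is easy to check with a short, self-contained Java sketch. This is our own illustration of the billing rules described in this section, not code from the sample application.

public class BillingExample {
    // round a duration in seconds up to the next multiple of the billing increment
    static int roundUp(int seconds, int incrementSeconds) {
        return ((seconds + incrementSeconds - 1) / incrementSeconds) * incrementSeconds;
    }

    // charge for one leg: adjusted duration (in minutes) times the per-minute rate
    static double legCharge(int seconds, int incrementSeconds, double ratePerMinute) {
        return roundUp(seconds, incrementSeconds) / 60.0 * ratePerMinute;
    }

    public static void main(String[] args) {
        double leg1 = legCharge(61, 30, 0.05);  // US leg: 61 s -> 90 s -> $0.075
        double leg2 = legCharge(56, 30, 0.20);  // Europe leg: 56 s -> 60 s -> $0.20
        System.out.println(leg1 + leg2);        // 0.075 + 0.20 = 0.275 (modulo floating-point noise)
    }
}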

In your business rules you will use a decision table to capture the rate table. An existing component calculates each leg and the total charge. Finally, you will use a rule set to put everything together.

Step 1: Importing the existing projects

Since you probably just want to focus on the business rules, we'll start by importing some prebuilt business objects, interfaces and other artifacts:

1. Download the ratingsystembegin.zip file from the Downloads section.

2. Click File - Import - Project Interchange, and then click Next.

3. Click Browse next to the From zip file field, browse to the ratingsystembegin.zip file that you just downloaded, and click Open. This zip file is the Project Interchange file.

4. Check Select All, and then click Finish. The imported modules open in the Business Integration view and, as the project builds, you see a Building workspace message in the lower right corner of the workbench. Wait for the workspace to finish building.

Let's briefly look at what you have just imported. The RatingSystem module contains the BillingRecord and Rate business objects that you will use within the business rules. There are also three interfaces: BillRecordInterface, InternationalRateInterface, and TotalChargeInterface. You will need these interfaces when you create the rules and call services from the rules.

The module also has an assembly diagram (to open it, expand the RatingSystem module in the Business Integration view and double-click RatingSystem). The assembly contains only a single business process component. If you double-click that component, you will see that it simply contains a snippet called calcCharge that calls a custom visual snippet called TotalCharge. You can view the details of the TotalCharge snippet by opening the details for calcCharge in the properties view, and then double-clicking the TotalCharge node in the visual snippet. In the TotalCharge snippet, you will see two uses of the TotalChargePerLeg custom visual snippet that calculates the rate for each of the two legs of a call according to the rules we discussed earlier.

You'll use this component and its snippets in the business rules that you will create in the following sections.

Step 2: Create the InternationalRateRG rule group

It's time to create a rule group that uses a rate plan decision table.

1. In the Business Integration view, expand RatingSystem - Business Logic - Rule Groups.

2. Right-click the Rule Groups category and select New - Rule Group.

3. Enter InternationalRateRG for the name. Click Next.

4. From the interface list, select InternationalRateInterface. Click Finish. The rule group editor opens.

Now you will implement the operation, from the interface you just selected, that this component will support. After all, there is not much point to a rule group if it doesn't do anything. The next steps add a decision table as a destination for the group:

1. In the rule group editor, select getRate. The rule sets or decision tables that you define in the right pane will implement the getRate operation.

2. In the right side of the editor, click the Default Destination field. A list of choices opens, as shown in Figure 10.

3. Select New Decision Table from the list of choices. A new decision table dialog box opens.

Figure 10: Creating a default destination


4. Enter RateTable for the name of the decision table. Click Finish. A decision table editor opens.

5. Switch back to the InternationalRateRG rule group editor and save it. This will get rid of the red X that indicates there are no destinations.

Step 3: Add Conditions to the RateTable Decision Table

The Interface section of the decision table editor displays the input and output variables of the interface, as shown in Figure 11. Although the input1 variable has the same type as the output1 variable, its value is not automatically copied to output1. Instead, a new Rate business object instance is automatically created and assigned to output1. Time to fill in your decision table with some conditions!

Figure 11: RateTable editor interface section

1. In the RateTable decision table editor, click the condition cell where it says Enter Term.

2. In the list that opens, expand input1 and select destination as the condition term.

Figure 12: Creating a new condition


3. Click the first condition value cell where it says Enter Value.

4. From the list that opens, select String.

5. In the floating text field, enter Canada, and press Enter.

Figure 13: Entering a new condition value

6. In the same manner, enter US in the adjacent cell.

7. Right-click the table and select Add Condition Value. This adds a condition value after the last one.

8. To add another condition value, select the Add Condition Value menu again. The table should look like the one in Figure 14.


Figure 14: The RateTable after adding two extra condition values

9. For the third and fourth condition values, enter Asia Pacific and Europe respectively.

Now that you have the conditions defined, let's fill in the actions for each condition:

1. Click the actions cell where it says Enter Term. Expand output1 and select rate as the action term.

Figure 15: Entering a new term for the action

2. Select the first action value cell where it says Enter Value, and then select Number.

3. In the floating text field, enter 0.05.

4. In the second, third, and fourth action value fields, enter 0.05, 0.15 and 0.20 respectively.

5. Save the RateTable editor.

Figure 16 shows the final decision table. The value of output1.rate will be set according to the value of input1.destination.

Figure 16: The completed RateTable


Step 4: Create and test the InternationalRateRG rule group component

To allow other components and other modules to use your rule group and its decision table, you need to ensure that you add the rule group to the assembly diagram. When you have added it to your assembly diagram, you can quickly test your new component.

1. In the Business Integration view, open the RatingSystem assembly diagram.

2. Drag the InternationalRateRG rule group that you just created onto the canvas.

Adding the rule group to the assembly diagram creates a new component. Before continuing, catch your breath and try out the new component using the WebSphere Integration Developer test facility:

1. Right-click the canvas in the assembly diagram and select Test Module.

2. In the test client that opens, type US for destination. Click Continue.

The InternationalRateRG component, which uses the RateTable decision table, is invoked by default because there were no other components or operations to select. The resulting rate is 0.05, as shown in Figure 17. You might notice that, in the result, the destination attribute of the output variable is null. Why is that? As mentioned earlier, a new Rate instance is automatically assigned to the output1 variable. To keep things simple, you only assigned values to the output1.rate variable and not to the output1.destination variable. Therefore, it is null.


Figure 17: Testing the InternationalRateRG rule group

In earlier versions of WebSphere Integration Developer, not all changes were picked up when a module was deployed to the server. It's a good idea to remove the project from the server before continuing.

1. Right-click the server and select Add and remove projects.

2. In the configured projects list, select RatingSystemApp, click Remove, and then Finish.

Step 5: Create a BillRecordRG

Follow these steps to create a rule group named BillRecordRG that contains a rule set with if-then conditions:

1. Create another rule group in the RatingSystem project the same way as you created the InternationalRateRG rule group. This time, name the rule group BillRecordRG.

2. For the interface, select BillRecordInterface. The interface was included in the module that you imported in Step 1.

3. In the rule group editor, select the operation named getBilling on the left pane.

4. On the right side of the editor, click Enter Destination in the Default Destination cell. A list of available destinations opens.

Last time you were at this point, you created a decision table. This time, you will create a rule set:


1. Select New Ruleset from the destinations list.

2. Enter getBillingRS for the name, and then click Finish.

To invoke these components, your rule group needs to have references that you can connect to the destination components.

1. Switch back to the BillRecordRG editor after the rule set editor opens. In the left-hand pane, click Add Partner Reference in the References section.

2. In the Reference Details section, enter InternationalRateRGRef for the name.

3. Click the Interface field and, from the list of interfaces that opens, select InternationalRateInterface.

4. In the same manner, add another partner reference. This time, name it TotalChargeReference, and select the TotalChargeInterface interface.

5. Save the editor.

Figure 18 shows the final BillRecordRG rule group.

Figure 18: The BillRecordRG rule group

Step 6: Add rules to the getBillingRS rule set

The getBillingRS rule set contains six rules:

● Two if-then rules that determine the billing increment based on the usage plan. Both use the same if-then rule template.

● An action rule that creates new business objects for the internal variables and assigns some values to them.

● Two rules that invoke the InternationalRateRG rule group component that you created earlier. The InternationalRateRG contains the RateTable decision table that determines the rate for each leg of the call.


● The last rule invokes the TotalCharge Java component that calculates the total charge of the phone call by adding the charges per leg together.

Rules in a rule set are always applied in the order that they appear. For this reason, they are sometimes called simple or sequential rules.
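Conceptually, then, a rule set behaves like an ordered list of rules that each get a chance to fire. The sketch below illustrates that execution model only; Rule and SequentialRuleSet are our own stand-ins, not the WebSphere rule engine.

import java.util.List;
import commonj.sdo.DataObject;

// An action rule always fires; an if-then rule fires only when its condition holds.
interface Rule {
    void apply(DataObject context);
}

class SequentialRuleSet {
    private final List rules;  // Rule objects, in declaration order

    SequentialRuleSet(List rules) {
        this.rules = rules;
    }

    void fire(DataObject context) {
        for (int i = 0; i < rules.size(); i++) {
            ((Rule) rules.get(i)).apply(context);  // strictly in order
        }
    }
}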

Create local variables

In this step, you will create three local variables: rate1, rate2 and increment. The rate1 and rate2 variables are Rate business objects and the increment variable is a primitive double type. Since business object variables are not automatically initialized, an action rule, described later in the article, will create a new business object instance and assign the new instance to these variables. The following steps create these variables:

1. In the getBillingRS rule set editor, click Add Variable.

2. Change the name of the variable to rate1. In the Type cell, click Select Type, and then select Rate from the list of types.

3. In the same manner, create two more variables. Name one rate2, with the type Rate, and name the other increment, with the type double.

Figure 19 shows the variables.

Figure 19: Variables in the getBillingRS rule set editor

Create the first two rules and the rule template

Rules 1 and 2 are if-then rules that determine the billing increment based on the usage plan. They are created based on the same rule template. Now let's create these rules and their template.

1. In the getBillingRS editor under the Rules section, right-click and select Add If-Then Rule. This adds a new rule named Rule1 to the canvas.

2. In the If cell, click Condition.

3. Expand billingRecord and select usage_plan, as shown in Figure 20.

Figure 20: Selecting a condition in the rule set editor


4. In the same cell, select == from the list of choices, as shown in Figure 21. (The double equal sign is used for comparisons, rather than setting values.)

Figure 21: Selecting == for the condition

As you add to an expression for a rule, the list of available choices changes. At any time, you can simply type what should appear or you can select from the list.

5. In the same cell, select String and enter Pay As You Go in the floating text field. Press Enter.

Figure 22: Entering "Pay As You Go" in the condition


The "if" clause is complete. The next steps create the "then" clause:

1. In the Then cell, click Action, and then select increment from the list of choices.

2. Select = (single equal sign). Use the single equal sign to assign a value to a variable, unlike the double equal sign (==) in the previous condition, which compares values.

3. In the location after the single equal sign, enter 30. Save the editor.

Rule1 should appear as in Figure 23.

Figure 23: The completed Rule1

The next steps convert Rule1 into a template and then create Rule2, which uses the same template as Rule1.

1. Right-click Rule1 and select Convert Rule to Template. This step creates Template_Rule1 in the Templates section and makes Rule1 a rule instance that uses Template_Rule1.

2. Right-click in the Rules section again and select Add Template Rule - Template_Rule1. All available templates appear in the list (there is only one at this point). This step creates Rule2.

3. In the Presentation cell of Rule2, click Enter Value next to the If cell, and then enter Regular.

4. At the second Enter Value, enter 6, and then save the editor.


Your editor now should look like Figure 24.

Figure 24: The completed Rule1, Rule2, and Template_Rule1

Create Rule3

You defined some local variables in the previous section. As we mentioned before, variables that are business objects are not automatically initialized. If you try to use them without initializing them, they will throw exceptions at runtime. In the rule set, you have already created two business object variables. Rule3 initializes these two variables by creating new business objects and assigning some values to them. Each rule set contains built-in methods for creating and copying business objects. You will see these methods in the action list.

1. In the Rules section, right-click and select Add Action Rule, which creates Rule3.

2. Click the Action cell in Rule3, select rate1, select = (single), and then select Create BO.

3. The Business Object Selection dialog box appears. Select Rate and click OK.

4. To create an additional action in the Action cell, press Enter at the end of the first line.

At this point, an Action item appears after the first line in the Action cell, as Figure 25 shows. Next, you need to specify what the actions are. In the following steps you define the actions to create the rate1 and rate2 business objects, and then assign values to their destination attribute:

Figure 25: Creating actions in Rule3


1. Click Action at the bottom of the Action cell.

2. Expand rate1, select destination, and then select =.

3. Expand billingRecord and then select destination1.

4. At the end of the second line, press Enter to add another action.

5. In the same manner, create another Rate business object for the rate2 variable, assigning billingRecord.destination2 to rate2.destination.

6. Save the editor.

Rule3 should now look like Figure 26.

Figure 26: The completed Rule3

Creating Rule4 and Rule5

Rule4 and Rule5 are action rules that invoke the InternationalRateRG rule group. You might recall that two partner references are specified in the BillRecordRG rule group. One of them, InternationalRateRGRef, refers to the interface of the InternationalRateRG rule group, which contains the RateTable decision table. In Rule4, you invoke this partner reference using the rate1 variable. Rule5 uses the rate2 variable to invoke this partner reference.

Follow these steps to create these rules.

1. To create Rule4, select Add Action Rule in the Rules section.

2. This time, when you click Action in the Action cell, select Invoke.

3. Click the Partner cell and select InternationalRateRGRef from the list of available partners.

4. Select getRate in the Operation cell, rate1 in the Input cell, and rate1 in the Output cell.

Rule4 should now look like Figure 27. This rule invokes the service to which the InternationalRateRGRef reference is wired, with rate1 as the input. The output from that partner gets written back to the rate1 variable.


1. To create Rule5, repeat steps 1 to 4 that you followed to create Rule4. This time, select rate2 for both the input and output variables.

2. Save the editor.

Figure 27: The completed Rule4 and Rule5

Create Rule6

Rule6 is an action rule that invokes the TotalChargeReference component. We provided this component in the module you imported. It's implemented using a business process, and its operation takes four input variables.

1. In the Rules section, select Add Action Rule.

2. In the same manner as in the previous steps, create another Invoke action with TotalChargeReference as the Partner, and operation1 as the Operation.

3. For the Input, select billingRecord for record, rate1 and rate2 for the rate1 and rate2 variables respectively, and increment for increment_in_sec.

4. For the Output variable named total_charge, select chargeOutput.

5. Save the editor.

The completed Rule6 should look like Figure 28.

Figure 28: The completed Rule6


Just to put things in perspective, Figure 29 shows the entire completed rule set.

Figure 29: The completed getBillingRS rule set


Step 7: Create and test the BillingSystem component

You have just finished creating your rule group that uses a rule set and if-then rules. Now we'll add the rule group to your assembly diagram and test it.

1. In the Business Integration view, under the Rule Groups category, select BillRecordRG and drag it to the assembly diagram.

The BillRecordRG rule group component has two partner references that need to be connected to the corresponding components, as you might remember from the second and third articles. Follow these steps to connect the partner references to the corresponding components:

1. Connect the top reference of BillRecordRG to the InternationalRateRG component.

2. Connect the bottom reference to the TotalCharge component.

Figure 30: The BillingSystem assembly diagram

To clean up the assembly diagram, drag the components around to match Figure 30, or just right-click the editor and select Layout Contents. Either way, you're done! Now you can test the entire module.

1. Right-click the canvas in the assembly diagram and select Test Module.

2. In the integration test client input editor, select BillRecordRG as the component.


3. Enter the parameters shown in Figure 31, and then click Continue.

Figure 31: Testing the BillRecordRG

Figure 32 shows the result.

Figure 32: The BillRecordRG test result

Step 8: Using the Business Rules Manager

The Business Rules Manager is a Web client that helps you administer and configure your rules at runtime, based on changing business conditions. By default, the Business Rules Manager is not installed, so you need to run a command-line script to install it.

1. While the server is running, open a command prompt.

2. Change directories to <WID_INSTALL>\runtimes\bi_v6\bin, and then run the following command:

wsadmin.bat -f installBRManager.jacl

This script installs the Business Rules Manager, and you should see the message shown in Figure 33.

Figure 33: Installing the Business Rules Manager

1. In the Servers view, right-click the server and select Launch - Business Rules Manager.

2. In the Business Rules Manager, you see the two rule groups that you created in this article. If you expand both of them, you see getBillingRS and RateTable.

Figure 34: The Business Rules Manager


3. Click the RateTable link to see the details on the decision table. Recall that, because no rule template is used inside the decision table, you cannot edit any of the parameters, even if you click the Edit button.

Figure 35: The Rate Table in the Business Rules Manager


4. Click the Rule Books link in the navigation frame on the left of the manager.

5. Expand BillRecordRG and click the getBillingRS link.

6. Click Edit.

You are now in the edit page. Unlike the RateTable, the parameters here can be modified because the rule set uses a rule template. In the next steps, we'll make a change to the rule parameters and add a new business rule based on the existing template:

1. Change the increment of the Pay As You Go plan from 30 to 60, as shown in Figure 36.

2. Type RulePromo for the name, Promo for the usage_plan, and 1 for the increment.

Figure 36: Changing parameters in the Business Rules Manager


3. Click Add.

4. Click the up arrow on RulePromo to move it all the way to the top.

Figure 37: Add a new business rule in the Business Rules Manager


5. Click Save at the top of the page.

The change is only temporary until you publish it to the server.

1. In the navigation frame in the manager, click the Publish and Revert link. You will see that getBillingRS has one change, as Figure 38 shows.

2. Click Publish.

Figure 38: Publishing the getBillingRS rule set in the Business Rules Manager


Now that you have made two changes to the getBillingRS rule set, let's rerun the BillRecordRG test and see what result we get (select the Invoke event in the test client and then select Rerun). If you recall, the total charge before the change was 0.475. After you have changed the increment from 30 to 60, the total charge becomes 0.5, as Figure 39 shows:

Figure 39: Testing the modified getBillingRS


Remember that you also added a new rule for the Promo usage plan, which has a billing increment of one second. Now, re-run the test (click the Invoke button in the top right corner of the test client) and use Promo as the usage_plan parameter. The total charge is 0.356667, as Figure 40 shows, which is a much better deal than the billing increments of 30 and 60 seconds.

Figure 40: Testing the modified getBillingRS with "Promo" as the usage_plan


Conclusion

Business rules are a crucial part of the WebSphere Business Integration portfolio. Business rules can externalize and manage business logic separately from the main business processes. Off-loading the most volatile logic to business rules offers great flexibility to your company.

Downloads

Description                        Name                      Size   Download method
Starter project interchange file   ratingsystembegin.zip     12 KB  HTTP
Final project interchange file     ratingsystemcomplete.zip  17 KB  HTTP

Resources

Learn

● A guided tour of WebSphere Integration Developer -- Part 1: Get a driver's view of the WebSphere Integration Developer landscape

● A guided tour of WebSphere Integration Developer -- Part 2: SOA development with WebSphere Integration Developer

● A guided tour of WebSphere Integration Developer -- Part 3: Building a simple service-oriented application

● A guided tour of WebSphere Integration Developer -- Part 4: Unleashing visual snippets and business state machines in your service-oriented application

● A guided tour of WebSphere Integration Developer -- Part 5: Business processes in a service-oriented world

● Creating WebSphere Process Server Custom Selectors with WebSphere Integration Developer
● Business Process Execution Language for Web Services version 1.1
● Business Process with BPEL4WS: Understanding BPEL4WS
● WebSphere Integration Developer product information
● WebSphere Process Server product information
● WebSphere Process Server: IBM's new foundation for SOA


● Build a Hello World SOA application
● Service Component Architecture
● Common Event Infrastructure
● developerWorks: WebSphere Process Server and WebSphere Integration Developer resources
● developerWorks: WebSphere Business Integration zone
● developerWorks: WebSphere development tools zone
● Meet the experts: WebSphere Integration Developer

Get products and technologies

● Download the WebSphere Business Modeler trial
● Download the Rational Application Developer trial

Discuss

● WebSphere Integration Developer forum
● developerWorks blogs

About the authors

Jane Fung is an Advisory Software Developer at IBM Canada Ltd, where she is responsible for developing the Business Process Execution Language (BPEL) and Business Rules debuggers in WebSphere Integration Developer. Prior to that, she was the team lead of the WebSphere Studio Technical Support team.

Greg Adams was the lead architect for the user interface in the award-winning Eclipse platform and, more recently, has been the lead architect and development lead for the core WebSphere Business Integration tools, including WebSphere Studio Application Developer Integration Edition and WebSphere Integration Developer. Greg led the delivery of IBM's first complete service-oriented architecture (SOA) tools stack and the first Business Process Editor supporting the BPEL4WS standard; both were critical deliverables in support of IBM's On Demand strategy.

Richard Gregory is a software developer at the IBM Toronto Lab on the WebSphere Integration Developer team. His responsibilities include working on the evolution and delivery of test tools for WebSphere Integration Developer.

Randy Giffen is an advisory software developer at the IBM Ottawa Lab on the WebSphere Integration Developer team. He was responsible for WebSphere Integration Developer's business state machine tools and the visual snippet editor. Prior to this, he was a member of the user interface teams for WebSphere Studio Application Developer Integration Edition, Eclipse, and VisualAge for Java.


Comment lines: Scott Simmons: SOA governance and the prevention of service-oriented anarchy

Level: Introductory

Scott Simmons ([email protected]), Executive IT Architect, IBM

20 Sep 2006

Success with SOA, at an enterprise level, mandates adoption of a robust and disciplined governance framework. Although organizations may differ on the specific functions enabled within their governance model, a common set of capabilities needs to be addressed for SOA. This column discusses the need to build effective governance frameworks while examining customer examples.

From the IBM WebSphere Developer Technical Journal.

The governance imperative

The early results on customer SOA adoption illustrate the promise of SOA to transform organizations through increased flexibility and adaptability. Although some of the evidence is anecdotal, real metrics are emerging that demonstrate increased responsiveness to business requirements and decreased development costs. At the same time, these findings must be reconciled with realism -- success with SOA does not "just happen." Customers finding success with SOA share a common characteristic: they have implemented a governance approach to support the design, development, deployment, and operations of an SOA solution framework.

SOA imparts extended requirements across the development, deployment, and management of information technology solutions. Although the overall challenges and governance requirements of IT remain similar, the service concept increases the relevancy of the IT function to the business. No longer is IT viewed as the function that manages a set of siloed applications; the IT portfolio of an SOA-enabled organization is founded on a set of evolving common base functions and services that are LOB- and system-agnostic. As a result, SOA governance extends IT governance in many ways, for example by:

● Changing the traditional development lifecycle to accommodate new design and modeling paradigms, service reuse considerations, and composite application development.

● Extending the concept of operational management to deal with the issues of distributed deployment and management, measuring effectiveness, performance, and security at the level of the service interfaces and their underlying implementations.

● Changing the organizational model in terms of roles and responsibilities (for example, requirements assessment, communication, service ownership, and project funding), breaking down the traditional walls between business and IT.

It can be questioned whether SOA governance mandates an existing IT governance approach, but it is clear that effective IT governance provides an initial framework for an SOA governance approach.

Governance approaches: The good, the bad, and the ugly

In my job as an SOA Architect at IBM, I work with successful -- as well as unsuccessful -- implementations of SOA. Customers ask me to specify the ingredients needed for SOA success and, equally important, the factors that contribute to failure. The common answer to both of these questions comes down to governance. Although I could develop a list of SOA governance functions, I will limit my discussion to four core areas:

● The definition and enforcement of policies and procedures to support SOA design, development, deployment, and management of services with a corresponding set of methods, techniques, and tools to support SOA and the tactical and strategic requirements of the business.

● Establishment of a Center of Competence (or Center of Excellence), representing both IT and business to maintain and advance SOA initiatives as a shared competency across the organization.

● Ongoing executive and business sponsorship of the SOA intent and objectives, with the mandate to support continuous communication to organizational stakeholders on the development, implementation, and management of SOA.

● An organizational understanding of how to tactically execute and strategically plan SOA adoption; SOA is an incremental and evolutionary change to IT and the organization, and benefits from an iterative implementation.

It should be stated directly: SOA changes the IT mission and IT's relationship to the business.

SOA directs a different approach to IT development and management, enabling solutions to span organizational business units, versus a silo-based application approach that is linked to specific lines of business. SOA will not succeed at an enterprise level if organizations do not both understand and reconcile these considerations.

SOA story 1: Acme Assurance


A European customer of IBM® in the financial industry (hereafter referenced as "Acme Assurance") began to build a structured approach to integration in 1999, which has since evolved into an enterprise SOA framework. The project started as a message integration architecture spanning the mainframe (IMS™, CICS®, and now WebSphere®), AIX®/UNIX™, Windows®, and iSeries®. Acme Assurance uses DB2® and Oracle as its primary databases, with applications (custom and off-the-shelf) implemented in a variety of technologies including COBOL, PL/SQL, and Java™. As part of the initial effort, they built a framework using XML to define common message formats, an in-house custom registry to categorize message flow components, and a syntax (pre-WSDL) to define flow operations. Their current solution consists of over 300 common services (now migrating to WSDL) with an overall reuse rate of over 50%. More importantly, 40% of their daily transactions flow through one or more of these services, generating estimated savings of over 3 million pounds in IT development, based on reuse and quality metrics.

When I met with Acme Assurance and asked what they felt led to their success, they pointed to the management support they have had since the onset of the work, and to the incremental definition and delivery of the architecture, which enabled them to deliver results to the business quickly. Through a tight linkage with the business, they define services and their operational characteristics at both technical and business levels via their Center of Excellence (CoE), which interacts with teams specializing in messaging, Java, XML, and security. The CoE is responsible for:

● Defining the design process, infrastructure components, and overall framework.
● Defining and implementing the service management processes, including the definition, documentation, and publishing of services.
● Defining the process and operational implementation for measuring reuse.

The benefits of the approach at Acme Assurance can be summarized as follows:

● Over 70 applications running in production that consume one or more services.
● Over 40% of back-end transactions are initiated through SOA.
● Management of over 300 reusable services in their business service directory.
● Over 50% of business services are reused.

The success of this solution is largely the result of investment in a strong governance approach, via the definition of policies and processes through their CoE. The organization has continued to be successful through the technology and organizational changes of the last seven years, largely due to the discipline and rigor in ensuring governance of the architecture. In addition, IT secured executive commitment early in the project, which gave the solutions credibility across the organization. Finally, their ongoing work in tracking and managing the deployment and operational aspects of their solution has been outstanding. Weill and Ross (see Resources) state that the ability to track and manage these aspects is the factor most strongly correlated with successful governance.

SOA story 2: Worldwide Resources


Another customer followed a similar path at project initiation, but did not achieve the same results. This Global 100 company (which we will call "Worldwide Resources") manages operations in nearly every country in the world. I was called in to do an architectural assessment review as part of Worldwide Resources' re-evaluation of its investment in five key IT projects intended to support a more flexible IT architecture.

On one hand, the company was successful in developing an internal organization to establish policies and procedures around the concept of integration. Unfortunately, the Competency Center became a technology think tank, quickly becoming isolated from the lines of business, and overall executive sponsorship evaporated as a result. Development quickly devolved into a bottom-up effort based on the tactical execution of a set of projects without defined business value. This issue was compounded by a number of sub-optimal design decisions that further stalled the ability to deliver tangible short-term successes. Two years after commencing the project, the business users were in revolt, with issues centering on the non-delivery of strategic capabilities, budgeting dissension surrounding integration, and overall frustration with the technical team's failure to realize value from resource investments.

What went wrong at Worldwide Resources? The list is pretty evident from the above discussion:

● The business case definition and involvement of business stakeholders was minimal.
● The stakeholders did not define goals and objectives, or cross-organizational roles and responsibilities.
● There was a lack of reuse of services and components, as each team created solutions on its own with no effective method to track reuse or define utilization.
● There was limited adherence to standards and procedures in the areas of solution definition, deployment, and QA/test procedures, resulting from a lack of governance.
● As a critical issue, there was never a definition of funding across implementations -- organizations requesting a solution became funding points without cost recovery.

In the end, Worldwide Resources tried to replace a fragmented, department-centric development process with a costly approach based on the hope that technology would correct organizational dysfunction. These issues, in tandem with the absence of a sound IT governance approach, led to a crisis of credibility between business and IT. I should add that they are "healthier" now, but many of the underlying organizational issues continue to plague the IT/business relationship; achieving governance and transforming IT is not an easy task.

Is Worldwide Resources a unique case? Unfortunately not. Many companies run into similar issues. The lack of a formal, well-honed governance approach continues to plague and stall organizations in their SOA adoption.

SOA story 3: Tech Equipment Inc.

Another company ("Tech Equipment Inc.") is a large, worldwide company based in the US and the

Page 104: IBM WebSphere Developer Technical Journal - CiteSeerX

market leader in their specific high technology sector. Their business problem is common to large organizations: they had over 15 different channels for business-to-business interactions with over 1000 strategic partners, ranging from FTP and EDI to Rosetta Net. The internal private processes implementing these interactions were contained in separate systems arising from acquisition and tactical solutions to support business requirements. As a result, there was no visibility at Tech Equipment Inc. into operations without reconciling the output from these varied systems. Following an extended assessment process, IBM Global Services was contracted to manage the B2B consolidation process and the development of a service oriented approach for system development.

In my view, the key to the ongoing success at Tech Equipment Inc. was the initial involvement of business sponsors to define the problem and to work with the technical team to develop a tactical approach, as well as a strategic plan for rolling out multiple iterations of the solution architecture. As a result of the development of a phased approach to development and design, and the early definition of policies and processes as part of their IT governance approach, the project continues to deliver on its promise. In addition to the identification of business objectives upfront, the project has done a remarkable job of keeping stakeholders across the company apprised of project progress, including the sales, marketing, and partner management organizations. This glowing appraisal does not ignore that both business and technical issues are encountered periodically, but overall the governance of the project ensures that problems are dealt with as they occur -- not after they have multiplied. Currently, the B2B consolidation project is on schedule and has reduced the number of B2B channels by half, largely through governance and management of both the tactical and strategic plans.

SOA story 4: Exchange Unlimited

As my last customer example, I was involved in the development of an exchange hub in Asia Pacific, which I will reference as "Exchange Unlimited." The company had an existing exchange hub for conducting business-to-business transactions for "subscribing organizations" over a variety of interfaces, including FTP, RosettaNet, and portal-based interactions. The project's intention was to convert the hub to a service-oriented solution using J2EE, portal, and partner management technologies (similar to Tech Equipment Inc.).

Although the business requirements and overall context for the solution were defined at the beginning of the project, there was a lack of effective communication between the business stakeholders and the technical team. Working with the project team, we conducted multiple architectural reviews over a nine-month period and began to establish the underpinnings of a Center of Excellence approach. After three architecture reviews, it was clear that there were issues in the system design concerning performance, security, and management. The core problem was that these issues were not being communicated to the business stakeholders. In addition to suffering an exodus of experienced talent to other companies, there was a gap between business expectations and technical feasibility. This gap was exacerbated by a lack of honest communication and trust between the different functions at Exchange Unlimited, and by the absence of formal processes for the development and design of components and services. Even with a governance framework, the problems at Exchange Unlimited might not have been eliminated, but they would have surfaced more quickly and been dealt with sooner.


So let's review the scorecard:

Table 1. SOA success criteria

| Company | Definition of policies and procedures | Center of Competency and communication | Executive and business sponsorship | Incremental SOA adoption |
| --- | --- | --- | --- | --- |
| Acme Assurance | YES | YES | YES | YES |
| Worldwide Resources | PARTIAL (technical only) | YES | NO (initially YES) | NO |
| Tech Equipment Inc. | YES | YES | YES | YES |
| Exchange Unlimited | NO | NO | NO | NO |

Success with SOA governance

I want to provide some guidance on the ingredients of a successful approach to SOA governance. Although there are "skunk-works" SOA investigation projects that may start with an immature or non-existent governance approach, I find that success with SOA initiatives requires an organizational commitment to governance.

In my work, I find that establishing governance to support SOA often occurs through the creation of (or extension to) a Center of Excellence. In many cases, organizations utilize the experience of outside consultants in CoE creation to benefit from the cumulative best practices learned through other SOA engagements. It should be noted that establishing a CoE is not sufficient in itself for SOA success; the definition of policies, processes, and methods to support SOA is integral to governance as well. Additionally, the ongoing sponsorship of the initiative by executives and business stakeholders, with active communication, is a critical requirement for success. Finally, the ability to incrementally adopt SOA is a key facet of success; forcing SOA adoption as a single insurmountable project (or as multiple ones, as seen at Worldwide Resources) is a mistake and will result in failure in nearly all cases.

Although many companies have IT governance in place and are involved with extending these frameworks to support SOA, experience shows that successful SOA governance often requires outside assistance to ensure coherence and alignment with corporate governance and IT governance frameworks.


Conclusion

Within many organizations, SOA adoption may occur organically at the department level, but as it expands into an enterprise directive, SOA requires enterprise commitment and the ongoing involvement of business and IT stakeholders. To be successful with implementing SOA, organizations must define and implement SOA governance. As Eric Marks states, "Service oriented culture binds the firm's vision, strategy and objectives with its SOA strategy, vision and governance model."

SOA governance requires significant planning and preparation to ensure success. In many cases, SOA governance may germinate from the SOA project team; however, it is important that business stakeholders are an active and ongoing part of the governance framework to ensure communication, and to support the overall goals and objectives of the business.

In closing, SOA governance is not optional -- it is critical to support SOA adoption and evolution. SOA governance is not a given -- organizations with existing corporate and IT governance do not get SOA governance automatically. Finally, SOA governance is not a point-in-time solution -- it requires ongoing assessment and iterative refinement to ensure SOA success. "Forewarned is forearmed" -- make the right governance decisions early to be successful with SOA.

Resources

● Introduction to SOA governance
● Effective SOA Governance white paper
● What is IT governance, and why should you care?
● Service-Oriented Architecture (SOA) Compass: Business Value, Planning, and Enterprise Roadmap
● CBDI Service Oriented Architecture Practice Portal
● IT Governance Institute
● Center for Information Systems Research (MIT)
● IBM SOA Governance page
● Recommended books:

❍ Weill, P. and Ross, J., IT Governance: How Top Performers Manage IT for Superior Results, Harvard Business School Press, 2004, ISBN 1591392535

❍ Marks, Eric A. and Michael Bell, Service-Oriented Architecture: A Planning and Implementation Guide for Business and Technology, Wiley and Sons, 2006


About the author

Scott Simmons is an Executive IT Architect for the Worldwide WebSphere Integration Technical Sales Support team. Scott specializes in the design and development of integration architectures for customers and partners with a specialized focus on B2B integration solutions.


Comment lines: Jason McGee: Dynamic middleware and the six attributes of virtualized application serving environments
Level: Introductory

Jason McGee ([email protected]), Distinguished Engineer, WebSphere XD Chief Architect, IBM WebSphere Extended Deployment

20 Sep 2006

Applying virtualization and automation is one way to significantly ease the burden of managing a modern, complex application server environment made up of many applications spread across a large number of machines. When evaluating virtualization for such environments, it is important to look for technologies and products that address the realities of your IT environment head on. Here are six key attributes you should look for in a solution to make sure it really addresses the complexities of your environment.

From the IBM WebSphere Developer Technical Journal.

Introduction

Let's face it: Managing production application server environments is hard. Modern environments are often comprised of many applications spread across a large number of machines. These applications have to handle an unpredictable load, they have to be available all the time, and they are constantly changing. Plus, there is constant pressure to do it all for less money and with fewer people. So how do you manage this complexity? There are many solutions, but one promising approach is through the use of virtualization and automation. If the middleware could be smarter and could understand your goals, then the systems could be both cheaper and easier to manage.

There are a number of products and technologies that claim to provide these benefits. How do you know which are the good ones? Below I have outlined six attributes that I feel must be present for a solution to really address the complexities of modern application server environments. All of these attributes are addressed in IBM's WebSphere® Extended Deployment product, which I will use as the example throughout.


1. Awareness of the environment

The first thing that a virtualized environment must have is situational awareness. In order to manage the application servers, the virtualization system must understand three critical elements:

● Topology includes information about both the servers and the applications. For servers, the system needs to know all of the application servers, their current status (started or stopped), and their communication ports (host and port number). For applications, the system needs to know which applications are installed, on which server or servers they reside, whether each application is currently running, and which endpoints they expose. An endpoint could be a URL, a Remote Object over IIOP, or a message-driven bean. It is also critical that the topology information is live -- that it is updated as the system changes. Changes could be administrative actions, like installing a new application, or failures, like a machine dying. A proper virtualization system will learn as much of this topology as possible on its own, without requiring the administrator to define and maintain this information separately. In WebSphere Extended Deployment, a subsystem called On-Demand Configuration (ODC) is responsible for discovering, maintaining, and distributing this topology information around the system. ODC is a live state model, showing the topology of the currently running system.

● Load and capacity involves the virtualization system understanding how much computing power is available in a cluster of machines that it is managing, and how much of that power is in use at any moment. This information is critical to decisions that the virtualization system will make later. A proper virtualization system must be able to compare the computing power of a heterogeneous collection of machines (Intel mixed with Power5 mixed with UltraSparc, and so on). The system must also be able to determine the percentage of that computing power that is in use at any moment, and must be able to continuously distribute that information. In WebSphere Extended Deployment, the NodeDetect subsystem is responsible for determining and distributing load and capacity information. NodeDetect computes a node speed rating for each machine by looking at the number of CPUs, CPU speed in MHz, and CPU architecture, and applying a scaling factor to normalize the different CPU architectures. NodeDetect also monitors in-use CPU load, memory utilization, and other metrics to help determine how much of the machine is in use at any point in time.

● Demand is the final element of which the virtualization system must have awareness. Demand is a measure of the requested load on the system -- how much traffic is being sent into it. At a minimum, a virtualization system must know how many requests are being sent into the system. Assuming all requests cause equal load on the servers, this incoming rate enables the system to compute the computing power required to handle the demand. Of course, in real life, all requests are not equal: some requests are computationally expensive and others are cheap. So a good virtualization system has the ability to measure the cost of each different type of request in terms of the computing power required to process it. In WebSphere Extended Deployment, the On Demand Router (ODR), which I will discuss shortly, tracks the incoming demand in terms of arrival rate. WebSphere Extended Deployment also has a subsystem called the Work Profiler that is responsible for automatically profiling applications to determine the average per-request computing cost of the different types of requests in the system. This gives WebSphere Extended Deployment a very accurate view of the demand on the system.

The combination of topology, load/capacity, and demand provides a virtualization system, such as WebSphere Extended Deployment, with a clear and complete picture of what is happening within a collection of machines. However, before a virtualization system can use this information effectively, it needs to understand how the administrator wants the systems to be managed.
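
To make these ideas concrete, here is a minimal sketch, in plain Java, of how a virtualization system might normalize heterogeneous machine capacity and compare it against measured demand. The method names, scaling factors, and numbers are all invented for illustration; this is not the NodeDetect or Work Profiler implementation.

public class CapacityEstimator {

    // Normalize one machine into comparable "capacity units". The scaling
    // factor stands in for the CPU architecture normalization described
    // above (for example, 1.0 for one architecture, 0.8 for another).
    static double nodeCapacity(int cpus, double cpuMHz, double archScalingFactor) {
        return cpus * cpuMHz * archScalingFactor;
    }

    // Estimate the capacity required to serve the observed demand, assuming
    // a profiler has measured the average computing cost of one request.
    static double requiredCapacity(double requestsPerSecond, double avgCostPerRequest) {
        return requestsPerSecond * avgCostPerRequest;
    }

    public static void main(String[] args) {
        double pool = nodeCapacity(4, 3000, 1.0) + nodeCapacity(8, 1500, 0.8);
        double needed = requiredCapacity(500, 12.0);
        System.out.printf("pool=%.0f units, needed=%.0f units, headroom=%.0f%n",
                pool, needed, pool - needed);
    }
}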

2. Goal orientation

The second attribute that must be present in a virtualized application server environment is some notion of goal orientation. One of the main benefits of virtualization is the ability of the virtualization system to make changes automatically for the administrator in response to current conditions. But to do that, the system must have some criteria by which to make decisions. In a proper virtualization system, those criteria are expressed as policy. Policy capabilities vary widely, depending on the functionality of the virtualization system. In WebSphere Extended Deployment, there are two primary policy types:

● The first is a performance management policy called a service policy, which is a type of service level agreement (SLA). Service policies are comprised of a set of classification rules, a performance goal, and an importance level. The classification rules are predicate statements based on elements of the incoming request, such as the URL, headers, cookies, client IP, and so on. The performance goal is usually something like an average response time goal for all requests that match the classification rules. The importance level is a relative ranking amongst different workloads. If all workloads are able to meet their defined performance goals, then the importance level has no effect; however, when demand exceeds capacity, the importance level is used to make trade-offs.

● The second policy type is a health policy. Health policies define common problems that the administrator wants to deal with. I will discuss this topic in more detail later.

A final critical comment on goal orientation: a proper virtualization system will enable goals to be defined in terms of end user characteristics. For example, it is better to define a Web application's policy in terms of response time to the end user than as a maximum CPU utilization threshold. Systems like WebSphere Extended Deployment provide end user goals.
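
To make the shape of a service policy concrete, here is a minimal sketch that models one as a plain Java value object with a single URL-pattern classification rule. The names are invented; real WebSphere Extended Deployment service policies are richer and are defined administratively, not in application code.

// Illustrative only: one classification rule, a response time goal,
// and an importance level for resolving contention.
public class ServicePolicy {
    private final String urlPattern; // simplified classification rule
    private final long goalMillis;   // average response time goal
    private final int importance;    // relative ranking; matters only under contention

    public ServicePolicy(String urlPattern, long goalMillis, int importance) {
        this.urlPattern = urlPattern;
        this.goalMillis = goalMillis;
        this.importance = importance;
    }

    // Does an incoming request match this policy's classification rule?
    public boolean matches(String requestUrl) {
        return requestUrl.matches(urlPattern);
    }

    public long getGoalMillis() { return goalMillis; }
    public int getImportance() { return importance; }
}

A hypothetical "gold" workload might then be declared as new ServicePolicy("/gold/.*", 500, 10): gold requests should average under 500 ms and, under contention, outrank lower-importance work.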


3. Managing demand vs. capacity

Once we have situational awareness and a set of goals to manage against, the real work (and value) of virtualization can begin. The purpose of a virtualization system is to enable the goals to be met in the face of current, real world conditions. To do this, the system must first be able to exert control. There are many different points of control in a virtualization system, from controlling traffic flow, to controlling cluster sizes and machine allocation, to controlling CPUs and memory allocated to processes. All have their place and often differ in how quickly they can be applied. In WebSphere Extended Deployment, the two primary points of control are traffic control and cluster size control:

● Traffic control is provided by the On Demand Router (ODR). The ODR is a Java™-based proxy server. It is placed in front of the application servers and is responsible for routing all traffic into a set of application servers. In this important spot in the network, the ODR is capable of controlling how resources are used on the back-end servers. To meet the defined goals, the ODR provides two key capabilities:

❍ Dynamic workload management (DWLM) is able to route and balance traffic across a cluster of servers using load awareness information. This awareness allows DWLM to balance the load in the most efficient way possible, leading to better throughput and response times.

❍ Autonomic request flow manager (ARFM) provides flow control for incoming workload. This enables the ODR to differentiate workloads in order to meet the defined goals. So if a low-importance workload is consuming too many resources, ARFM can slow down (queue) the low-importance work and speed up higher-importance work to change the mix of workload on the back-end application servers.

These traffic control capabilities provide a way to meet goals with fast responsiveness, adapting quickly to changes in the situational environment.

● Traffic control is, however, not sufficient. If there is not enough CPU assigned to a given application, managing the incoming traffic will not let you meet the assigned goal. To remedy this, WebSphere Extended Deployment has a capability called the Application Placement Controller (APC). The APC's job is to control the size of the cluster hosting a given application based on the demand on the system. So as demand increases, the amount of CPU allocated to a given application can increase, and vice versa. However, APC does not deal with applications in isolation. It must decide, for all applications running on the same collection of machines, how best to divide the resources of the pool so that they all meet their goals. This is critical. Because APC looks at the holistic problem, it can enable a pool of hardware to host a larger than typical collection of applications, since each application can claim a large portion of the pool as needed based on real demand.


The ability to manage demand enables a virtualization system to eliminate a number of complex tasks from the administrator. Capacity planning becomes less important. The virtualization system can decide where applications should run, how big the clusters should be, and how to route to those applications. As things change, the virtualization system can adapt automatically. These are all issues the administrator no longer has to deal with manually.
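
The sizing half of this decision can be illustrated with a toy calculation: given each application's measured demand (in the same invented capacity units as the earlier sketch) and the capacity of one server, compute a cluster size per application. This is only a sketch of the idea; the real Application Placement Controller also weighs importance levels, placement constraints, and the cost of moving servers.

import java.util.LinkedHashMap;
import java.util.Map;

public class PlacementSketch {
    // Keep at least one member per application, then add enough
    // members to cover the measured demand.
    public static Map<String, Integer> clusterSizes(
            Map<String, Double> demandByApp, double unitsPerServer) {
        Map<String, Integer> sizes = new LinkedHashMap<String, Integer>();
        for (Map.Entry<String, Double> e : demandByApp.entrySet()) {
            int members = (int) Math.max(1, Math.ceil(e.getValue() / unitsPerServer));
            sizes.put(e.getKey(), members);
        }
        return sizes;
    }
}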

4. Handling failure

Of course, no system and no application is perfect. Problems happen. Applications have memory leaks. Servers hang. A proper virtualization system will expect these problems and deal with them appropriately. A proper virtualization system will also mitigate the impact of these problems in the production system. Mitigation means that the end user is not aware that problems are occurring, giving the administrators more time to resolve the problems. In WebSphere Extended Deployment, there is a health management system called HMM. HMM enables the administrator to define health policies to monitor for common problems and take some mitigating action when they occur. The HMM system can monitor for things such as memory leaks in the Java Virtual Machine (JVM), hung or non-responsive servers, requests that are timing out, storm drains, excessive service policy violations, and other conditions. When one of these conditions is detected, HMM can notify the administrator, capture some diagnostic information for later debugging (such as a JVM heap dump or thread dump), and restart the server to remove the bad server from the production environment. That server restart is intelligent, ensuring that the application is never taken off-line because of the restart and predicting failure in advance, so the restart can be triggered before requests start producing errors.

When combined with performance management policies, this health management system can yield a very robust environment that continues to meet the administrator's goals in the face of both real demand and failures, all without manual intervention. This is a win-win situation: better resiliency with less work.
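
Conceptually, a health policy pairs a detectable condition with a mitigating action. The sketch below gives that idea a hypothetical Java shape; the types and fields are invented, and real health policies are configured administratively rather than coded.

// Simplified per-server metrics a condition might inspect (illustrative).
class ServerStats {
    long heapUsedAfterGc;  // bytes still in use after the last garbage collection
    int timedOutRequests;  // requests that recently exceeded their timeout
    boolean responsive;    // did the server answer its last health ping?
}

// The shape of a health policy: detect a condition, then mitigate it,
// for example by notifying the administrator, capturing a heap or
// thread dump, and scheduling an orderly restart of the server.
interface HealthPolicy {
    boolean conditionDetected(ServerStats stats);
    void mitigate(String serverName);
}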

5. Planning for change

Once a system is in production and running smoothly, handling the changing loads and recovering from failures, it would be nice to just let things be. But that is not reality. Change is constant. Applications are upgraded, and software maintenance must be applied. A proper virtualization system helps manage these situations as well.


Software maintenance includes upgrades to both the operating system and the application server software on a given machine. As an administrator, you need the ability to take a given machine cleanly out of production so you can apply those changes. Then you need to be able to put the machine back in service. The virtualization system should orchestrate this for you. WebSphere Extended Deployment accomplishes this through a capability called Node Maintenance Mode. When you place a node (or machine) in maintenance mode, WebSphere Extended Deployment will cleanly drain all work from the machine, allowing in-flight requests to complete while blocking all new requests from using that machine. WebSphere Extended Deployment will also ensure that all servers assigned to that machine are moved to other machines, ensuring that the goals for a given application continue to be met on the now smaller set of hardware resources. This may involve a complex rebalancing of workload, but it will be handled automatically. Once the machine is removed from production, maintenance can be applied by the administrator. When completed, the machine can be removed from maintenance mode and be made available again in the pool of resources. WebSphere Extended Deployment can then move applications and workload back to the machine as needed. The beauty of this system is that the administrator does not have to move applications, change routing tables, stop workload, or do anything manual. The administrator simply declares that a given machine can no longer be used and the WebSphere Extended Deployment virtualization system adapts automatically.
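
The maintenance-mode flow just described can be summarized in a few lines of illustrative Java. In WebSphere Extended Deployment this is a single administrative action rather than code you write; the names below are invented to show the sequence of steps.

public class MaintenanceSketch {
    public static void enterMaintenanceMode(Node node) throws InterruptedException {
        node.markUnroutable();                // stop routing new requests to the node
        while (node.inFlightRequests() > 0) { // let in-flight work complete
            Thread.sleep(1000);
        }
        node.moveServersElsewhere();          // rebalance so goals are still met
        // the machine is now free for the administrator to apply maintenance
    }
}

// Hypothetical view of a machine in the pool, for illustration only.
interface Node {
    void markUnroutable();
    int inFlightRequests();
    void moveServersElsewhere();
}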

Application code upgrades are a little more complex. The challenge is to make these upgrades transparent to the end user. Some of that challenge must be met through careful programming and making compatible upgrades to the design of an application. A proper virtualization system will help you meet the other part of the challenge, deploying the new application code into production. In WebSphere Extended Deployment, two major types of application version changes are supported:

● Quick replace. The goal here is to replace one version of an application with another in as short a time period as possible while keeping the application continuously available. In WebSphere Extended Deployment, this is accomplished through an automatic coordination of the roll out of the application code with the flow of traffic from the ODR tier. By coordinating these two activities, WebSphere Extended Deployment can ensure that code is replaced one server at a time while traffic is routed around the server being changed.

● Coexistence. Here, the goal is to have two versions of an application running side by side for an extended period of time. This is useful for things like running pilots. For this to work, the incoming workload must be split somehow to the two destinations. In WebSphere Extended Deployment, this is accomplished through a routing policy on the ODR. This routing policy enables the administrator to define how to route the traffic and which version of the application to send the traffic to. For example, the administrator might route the traffic based on the client's IP address, letting users from one location go to version 1.0 of an application and all other users go to version 2.0 of the application. This routing control enables great flexibility for the administrator to make changes in the infrastructure without that change showing through to the end user.
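
An IP-based coexistence rule ultimately boils down to a decision like the one sketched below. The subnet, the application names, and the very idea of expressing the rule in Java (rather than in the ODR's administrative routing policy) are all illustrative.

public class VersionRouter {
    // Send clients from the pilot subnet to version 2.0;
    // everyone else stays on version 1.0.
    public static String chooseVersion(String clientIp) {
        return clientIp.startsWith("10.20.") ? "OrderApp-2.0" : "OrderApp-1.0";
    }
}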


6. People: trust, proof, and money

The final attribute of a proper virtualization system is the recognition that people play a pivotal role in the success of these systems. The virtualization system must address this role directly. There are three major issues at stake related to the people using these systems: trust, proof, and money:

1. Let's face it: few people trust that software is going to do what it says. That is why people test everything. So it stands to reason that a new, fancy virtualization system that automatically makes decisions and makes changes for an administrator -- while powerful -- would initially not be considered trustworthy. It is the responsibility of that virtualization system to earn trust. In WebSphere Extended Deployment, this is addressed through a mode of operation called supervised mode. In supervised mode, before any changes are made to the system in reaction to changing input conditions, WebSphere Extended Deployment notifies an administrator of the pending change, describes the reasons behind the change, describes in detail what changes it desires to make, and then asks for approval. If approved, the changes will be made. Otherwise, they are canceled. This system is in place to build trust. The administrator can see all changes being made, can control whether they are made or not, and can see the reasons behind the changes. Over time, the administrator will come to realize that WebSphere Extended Deployment is consistently making sound recommendations and the administrator can then move the system into the more desirable automatic mode, after the trust has been earned.

2. Closely related to trust is proof. A proper virtualization system must provide proof that it is making good decisions, even if those decisions are made and executed automatically. It must provide an audit trail of the changes it has made, and it must be able to prove that it is meeting the goals that are defined. In WebSphere Extended Deployment, this is accomplished through an audit and performance data logging infrastructure that lets the system record all of the changes it is making and enables the state of the system over time to be recorded. This information is invaluable for the administrator to understand how the system is behaving over time and why it is making those decisions.

3. In the end, it all comes down to money. Is the system less expensive to run? Is it easier to manage, requiring fewer people? Is it more resilient, resulting in less lost opportunity cost? And how do I pass that cost on to the application owners? These are important questions. Some are answered in the way the system operates on a daily basis. But one big change concerns the last question: how do I pass on the cost? Many IT departments do internal charge-back to the application owners for the cost of hosting their applications. Today, that is often done in fixed units. In a proper virtualization system, that billing can be done in a more fine-grained manner, based on the actual usage of resources. This is especially critical since most virtualization systems, including WebSphere Extended Deployment, are built on the concept of shared resources, making fixed-unit charge-back inaccurate. In WebSphere Extended Deployment, we address this requirement through the ability to capture fine-grained, usage-based metrics on system utilization by application or service policy. We also enable the consumption of those metrics by a formal billing and metering system, such as IBM Tivoli® Usage and Accounting Manager. This base capability enables a complete transformation of how IT recovers cost from application owners.


Conclusion

As you can see, providing a virtualized infrastructure is complex. There are many facets to be considered. It is possible to address only part of the requirements in a given product, but by doing so, an administrator is left with the task of filling in the gaps. When evaluating virtualization for application server environments, it is important to look for technologies and products that address the realities of your IT environment head on. IBM WebSphere Extended Deployment is such a system.

Resources

● WebSphere Extended Deployment product information
● developerWorks Autonomic computing zone
● Comment lines from Kyle Brown: Why you need WebSphere Extended Deployment

About the author

Jason McGee is a Distinguished Engineer and Chief Architect for WebSphere Extended Deployment (XD). Previously, Jason was the Chief Architect of the Base and Network Deployment versions of WebSphere Application Server. He is a senior architect on the WebSphere Foundation Architecture Board and an associate member of the IBM Software Group Architecture Board. Jason serves as the Director of WebSphere Advanced Technologies, responsible for the productization of new technologies into the WebSphere platform. Jason joined IBM in 1997 and has been a member of the WebSphere Application Server product team since its inception. He helped to define the concepts of Servlets and JavaServer Pages (JSP) for processing Web presentation logic on WebSphere. Jason was responsible for the design and implementation of the Web Container in WebSphere Application Server. Mr. McGee has been heavily involved in leading the architecture for key parts of WebSphere Application Server, including the server runtime and the XML-based systems management architecture. Jason graduated with a B.S. degree in computer engineering from Virginia Tech in 1995.


The EJB Advocate: SOA represents the next step in the evolution of component-based applications
Level: Intermediate

Geoff Hambrick ([email protected]), Distinguished Engineer, IBM

20 Sep 2006

Somehow the tables got turned! This month, the EJB Advocate finds himself in the position of advocating SOA-related specifications, such as Service Component Architecture (SCA), as much as those associated with Enterprise JavaBeans™.

From the IBM WebSphere Developer Technical Journal.

In each column, The EJB Advocate presents the gist of a typical back-and-forth dialogue with actual customers and developers in the course of recommending a solution to an interesting design issue. Any identifying details have been obscured, and no "innovative" or proprietary architectures are presented. For more information, see Introducing the EJB Advocate.

What is really new about SOA?

Dear EJB Advocate,

I am a bit confused about all the hype around Service Oriented Architecture (SOA) -- and you seem to be caught up in it.

For example, in Is it ever best to use EJB components without facades in service oriented architectures, you describe best practices that one should follow in designing EJB components to make them "service oriented," such as making them coarse-grained and stateless.

The principles you describe are nothing new to those of us who built successful applications using distributed object technologies like CORBA and Enterprise JavaBeans. I suppose we have been "service oriented" all along.

I'll admit that you get a better acronym out of "service oriented" architecture than "distributed object" architecture. But besides that, I have a serious question: is there anything new about SOA? Specifically, why should I care about the new Service Component Architecture and Service Data Objects specifications when I can do everything with Enterprise JavaBean components?

Signed, SOA What

SCA represents a natural evolution on the server side

Dear SOA What,

The following statement may shock you, given that I am the EJB Advocate -- but just because you can code everything on the server side in Java using EJB components doesn't mean that you should. My take is that we are seeing a natural evolution of technology on the server side similar to what we saw with Java™ servlets on the client side.

In case you don't remember, Java servlets were introduced as a standard Java-based component model to unify the proprietary APIs associated with specific Web servers, like Microsoft®'s Internet Server API (ISAPI). Java servlets enabled Java programmers to develop components that generate dynamic Web pages and run on a wide variety of Web servers from different vendors.

The most commonly used component is the HttpServlet, which handles all the details of mapping the input from the HTTP request stream and the output to the HTTP response stream, leaving the programmer free to concentrate on the application flow logic.

As nice as this was, users soon found it tedious to generate HTML in Java code. For example, here is a snippet from an HttpServlet doGet() method to generate a simple dynamic "Hello world":

String name = (String) request.getAttribute("name");
PrintWriter pw = response.getWriter();
pw.println("<HTML>");
pw.println("<BODY>");
pw.println("<P>Hello " + name + "!</P>");
pw.println("</BODY>");
pw.println("</HTML>");

Various "template" languages quickly began to spring up that enabled you to embed Java code in the HTML, making the programming model more WYSIWYG (that is, declarative). Standardizing these

Page 118: IBM WebSphere Developer Technical Journal - CiteSeerX

approaches lead to the Java Server Page (JSP) specification. With JSPs, you can mix Java "scriptlets" (<%...%>) and "expressions" (<%=…%>) with HTML. For example, here is a snippet from a JSP to display the same "Hello world":

<% String name = (String) request.getAttribute("name"); %>
<HTML>
<BODY>
<P>Hello <%=name%>!</P>
</BODY>
</HTML>

Just eliminating the parentheses, quotes, and semicolons prevented countless thousands of bugs for Web application programmers. Further, eliminating the need to compile, package, and deploy an HttpServlet component greatly reduced the turnaround time required to make a change -- whether to fix a bug or make an enhancement.

But more importantly, JSPs led to an architectural change that separates the concern of rendering the view from that of getting the data. Web page designers and application programmers could suddenly work together without stepping on each other's toes, each developing their own components in a language and style more suited to their role.

As an aside, the Enterprise JavaBeans specification emerged at about the same time to separate the concerns even further. Adding EJB components enabled a true Model-View-Controller architecture for Web applications, where the Model was encapsulated by EJB components, the View by JSPs, and the Controller by HttpServlets.

Unfortunately, for rendering anything really useful, the interaction of Java scriptlets, expressions, and static HTML could get pretty complicated. For example, here is a snippet that produces a list of order IDs and statuses:

<%
    OrderStatus orderStatus = (OrderStatus) request.getAttribute("OrderStatus");
    OrderData d[] = orderStatus.orders;
    int orderID = 0;
    String status = null;
    for (int i = 0; i < d.length; i++) {
        orderID = d[i].orderID;
        status = d[i].status;
%>
<P>Order Id =<%=orderID%> Status =<%=status%></P>
<% } %>


The scriptlet that starts the loop is rather complex. The scriptlet that ends it is really simple, but often gets forgotten (or put in the wrong place). No tools exist that catch these and other common bugs while the JSP is being edited (the way a Java IDE does for "pure" HttpServlets).

In short, the problem remains that proper use of Java scriptlets and expressions is too much to ask of the average Web page designer -- especially one who is not trained in programming.

The idea of custom JSP tags emerged and was standardized to minimize or even eliminate the need for Java programming. For example, IBM Software Services for WebSphere developed some custom tags that enabled iteration through a nested structure. These tags worked with the USEBEAN and GETPROPERTY tags provided as part of the base JSP tag library to greatly simplify the logic, as shown below:

<JSP:USEBEAN ID="OrderStatus" CLASS="com.onlinemall.data.OrderStatusData" SCOPE="request"/>
<SW296:INDEXEDBEANPROPERTY BEAN="OrderStatus" PROPERTY="orders" VAR="order"
    TYPE="com.onlinemall.data.OrderData">
<P>Order ID=<JSP:GETPROPERTY NAME="order" PROPERTY="orderID"/>
Status=<JSP:GETPROPERTY NAME="order" PROPERTY="status"/></P>
</SW296:INDEXEDBEANPROPERTY>

It was not long before frameworks like Struts arose to eliminate the need to write HttpServlets. Struts includes a rich set of custom tags for forms handling, but also a concept called "tiles," which enables declarative page composition. The important feature of tiles is the ability to recompose a page very easily without changing the underlying code components. Struts was one of the inspirations behind JavaServer Faces (JSF), which standardizes the concept of declarative UI assembly from components. See Roland Barcia's excellent article on JSF versus Struts for more details.

The evolution on the client side from Java servlets to JSPs to custom tag frameworks to JSF was a step-by-step move away from procedurally programmed components that must be compiled, toward declaratively assembled ones that can be interpreted (or at least compiled at run time). The Service Component Architecture (SCA) can be considered a similar evolution, enabling declarative assembly of server-side components into an application.


I assume you have seen some of the other developerWorks articles on Service Component Architecture and Service Data Objects. If so, you have probably seen a diagram such as this one that shows an SCA component:

Figure 1. Sample SCA component

There are three important points of comparison between SCA and EJB components to remember:

1. Composition. With EJB components, Java is the only language available for implementation, so all composition of services into a larger assembly is done procedurally through Java. SCA not only supports Java as an implementation language, but also the Business Process Execution Language (BPEL), which is being extended to handle such concepts as state machines, business rules, and human tasks, among others.

2. Invocation. The way you use a Stateless Session Bean is different from the way you use a Stateful Session Bean, Entity Bean, Custom Entity Home Method, or Message Driven Bean; each component type has a different way to locate it and invoke its methods. SCA components enable a common invocation model for the client regardless of implementation type.

3. Data. Entity EJB components were never intended to be serializable for efficient use outside the scope of the container or transaction boundary, so Data Transfer Objects evolved to fill the gap. SDO components provide a common abstraction for data flowing between components regardless of scope.

The simplification of choices for composition, invocation, and data is what makes the evolution toward declarative assembly of applications from services possible in SCA -- just as JSPs and custom tags enabled the trend toward declarative assembly of UIs.
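
To see what the common invocation model looks like from a client's point of view, here is a small sketch in the style of the SCA client pattern: locate a service by reference name, then call it through its interface, with no knowledge of whether the implementation behind it is Java, BPEL, a state machine, or a business rule. Treat the class and reference names here as assumptions for illustration, not a definitive API reference.

import com.ibm.websphere.sca.ServiceManager;

public class OrderClient {
    public String checkStatus(String orderId) {
        // "OrderStatusService" is a hypothetical reference wired in the
        // client component's assembly; the cast target is its Java interface.
        OrderStatus service = (OrderStatus)
                ServiceManager.INSTANCE.locateService("OrderStatusService");
        return service.getStatus(orderId);
    }
}

// The service's Java interface, as the client sees it (illustrative).
interface OrderStatus {
    String getStatus(String orderId);
}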

So to summarize, a major reason to care about SOA is that it will eventually enable business analysts to compose applications with minimal programmer support.

OK then, Your EJB Advocate

Will EJB components go the way of the dinosaurs?

Dear EJB Advocate,

I hadn't really thought about how the trend towards declarative languages on the client side might apply to the server side. Now you have me wondering whether I should use EJB components at all!

Now sign me: SOA Confused

EJB components are more like DNA: the building blocks of evolution

Dear SOA Confused,

The operative words in my summary statement were "eventually" and "minimal."

Eventually means that it will be quite a while before the run time interpreters for BPEL match the function and performance needed for all aspects of an enterprise application. For example, it is likely that "microflows" (logic intended to run in a single unit of work) will perform better as EJB components.

Minimal means that even when the BPEL standard is extended to enable business analysts to specify logic through declarative concepts like state machines, human tasks, and business rules, there will always be some patterns of logic that are more easily specified procedurally. While it would be possible to create a "procedural" tag or visual programming language, it would be no simpler than Java -- and probably even more complex, given all the extra tag syntax to type or lines between components to draw.

In summary, it may come to pass that there will be less need to manually code EJB components; but it is likely that a few EJB components will still be needed in a complex, enterprise-quality application. For this reason, EJB components should be considered the building blocks upon which extensions to the server-side components are based.

Hope that I can sign you as "SOA Happy" after this!

OK then, Your EJB Advocate

Resources

● Java Platform, Enterprise Edition (JEE) specification (formerly J2EE); the '2' was dropped in order to separate the main concept of a Java platform for the enterprise from a specific version.

● Enterprise Java Programming with IBM WebSphere, Second Edition by Kyle Brown, Gary Craig, Greg Hester, Russell Stinehour, W. David Pitt, Mark Weitzel, Jim Amsden, Peter M. Jakab, and Daniel Berg. Foreword by Martin Fowler.

● The EJB Advocate: Is it ever best to use EJB components without facades in service oriented architectures? by Geoffrey Hambrick

● JavaServer Faces (JSF) vs. Struts: A brief comparison
● Service Component Architecture
● Introduction to Service Data Objects

About the author

Geoff Hambrick is a lead consultant with the IBM Software Services for WebSphere Enablement Team and lives in Round Rock, Texas (near Austin). The Enablement Team generally helps support the pre-sales process through deep technical briefings and short-term Proof of Concept engagements. Geoff was appointed an IBM Distinguished Engineer in March of 2004 for his work in creating and disseminating best practices for developing J2EE applications hosted on IBM WebSphere Application Server.