Agility Tomorrow... and Beyond

Pushing logic from mainframes to distributed systems just pushes cost and complexity sideways. It provides no new business functionality or agility and fails to extract the maximum value from investments already made.
Instead of moving the logic "sideways" from one complex code base to another, the trend is to truly express value to the business by moving the logic "up" within the business, exposing business services and processes so that anyone can understand their value.
Imagine if you could deliver a business application that:
• Dynamically adapts to data structure changes throughout all application tiers (e.g. from a database to a user interface layer)
• Renders information automatically, depending on the connected end-user device (e.g. PC, netbook, PDA, cell phone) and the context the end-user works in
Technology Agility Today

Before we think about new paradigms, it is important to understand how business applications are implemented today. Most business logic (processes, workflows, rules and functions) is still contained within custom-developed or packaged applications. Multi-tier application architectures (e.g. client-server) and integration solutions (e.g. middleware, ETL, batch processes) spread this business logic to other IT systems, which then interact with this business logic or information in order to process it further for end-user interaction, reporting, analysis, data exchange or follow-up computation. The flexibility of these applications in the face of change varies. However, the change that occurs is usually in one of the following areas:
• Changing or developing new source code
• Changing customization parameters (e.g. in packaged applications)
• Applying changes in each application tier, especially when changing data structures
• Adapting the system infrastructure (e.g. to improve scalability)
In addition, the information and data that form a logical entity (e.g. customer, order) are mostly spread across different information systems (e.g. databases, files, document management systems, data warehouses), making it a challenge to achieve a single customer or order view.
Business Applications in 2015
By Guido Falkenberg, Vice President, Deputy CTO, Software AG

Those of us who support business applications are facing accelerating pressure to support new technologies and new business demands. Regardless of your vertical industry, mergers and acquisitions, new regulations and competition are driving a breakneck pace of change. How can we develop applications that respect the past (avoiding expensive rip-and-replace), cope with the present, but don't compromise our future?
• Scales according to defined service-level agreements (e.g. response time, concurrent users)
• Provides business activity monitoring by design
• Gives multiple options for how to deploy business rules (e.g. code, rule engine)
• Ensures transactional reliability and consistency even when hitting unpredictable data volumes
• Provides unified processing capabilities no matter where the data is stored or what data structure it has (e.g. database records, document management systems, Excel sheets, archives, data warehouses)
• Develops reusable services and events no matter the programming language paradigm or underlying communication protocol
• Runs business processes continuously even when the application portfolio is changing or new sourcing strategies are chosen
• Synchronizes and automates business processes across IT applications, telecommunication systems and batch environments
• Provides automatic discovery and registration of all application assets in a transparent and manageable way
• Actively involves business end-users in the development process, e.g. designing the user interfaces, business rules and processes in an interactive and collaborative fashion
Basic Concepts of a Read-Only Buffer Pool

The read-only buffer pool is designed to meet exactly the requirements mentioned above. Provided the buffer pool contents do not change, the serialization operations in Natural's buffer pool management can be skipped. To enforce the inalterability of such a buffer pool, any request which would end up in a structural change of the memory layout is rejected by the buffer pool manager.
Dropping the serialization operations reduces the system time as well as the stall time. Stall time is the time one Natural process, trying to perform a buffer pool operation, has to wait for another Natural process performing a buffer pool operation. The reduction in system time consumption benefits all processes running on a machine, while the reduction in stall time improves the throughput of all Natural processes.
From the application's point of view, the read-only buffer pool is very similar to an update-capable buffer pool.
Status Quo

Serialization is not always necessary when running applications that perform concurrent operations. In many cases, concurrent processing does not need to be serialized in order to avoid data corruption. In particular, if the processes sharing data only need read access to it, no precautions against data corruption due to concurrent updates need to be taken.
Against this background, the idea of a read-only buffer pool was born. The main task of Natural's buffer pool is to reduce the number of system file access operations. The buffer pool manager concurrently loads and stores Natural objects in a storage segment shared by several Natural processes and returns the addresses of executable code and constants held in a Natural object to Natural's runtime. Any request from Natural's runtime to get the address of an object is serialized. While an object is located and, if necessary, loaded into the buffer pool, any other process requesting a service from the buffer pool has to wait until the operation to locate and load the object has finished, with or without success.
In a steady state, the change in the number of program loads per unit of time is quite small; in the optimum case it is zero. If this figure is zero, all objects needed by a Natural application are situated in memory used by the buffer pool, and changes in the memory structure maintained by Natural's buffer pool manager do not occur.
Leveraging Natural's Read-Only Buffer Pool Feature to Deliver a High-Performance SOA Production Environment
By Martin Hugentobler, R&D Specialist, Natural Open Systems, Software AG

This article explains how to improve performance and reduce resource requirements for Natural applications running in open systems production environments. It also describes how to optimize and tune your production environment to deliver a high-performance SOA production environment. The basic concepts of a read-only buffer pool, and how to configure the buffer pool to achieve optimal performance, are introduced.
Any request from Natural's runtime to load an object is answered either with the address of that object, because it was found in memory, or with error NAT0082. An update-capable buffer pool returns error NAT0082 only if the requested object was not found in the system file.
The differences from an update-capable buffer pool are seen mainly on an administrative level. The start-up procedure of an update-capable buffer pool allocates storage and semaphores in the size and amount defined by the system administrator. Data is loaded on request by Natural's buffer pool manager without any additional interaction. If necessary, unneeded data is deleted and replaced by requested data.
A read-only buffer pool requires preparatory work. All objects that the read-only buffer pool shall hold must be registered in a preload list. During start-up of a read-only buffer pool, storage is allocated in the size defined by the system administrator. The prepared preload list is read and an attempt is made to load the objects it describes into the buffer pool. As soon as the preload list has been processed successfully, a Natural session can attach to the read-only buffer pool and execute an application.
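The preload-then-freeze behavior contrasts sharply with the update-capable pool. It can be sketched like this (again a plain-Python toy model, not the actual product code):

```python
class ReadOnlyBufferPool:
    """Toy model of a read-only buffer pool: contents are fixed at
    start-up from a preload list, so lookups need no serialization."""

    def __init__(self, system_file, preload_list):
        # Start-up: load every object named in the preload list, then freeze.
        self.pool = {name: system_file[name] for name in preload_list}

    def locate(self, name):
        # Lock-free read: the contents can never change underneath us.
        if name not in self.pool:
            raise KeyError("NAT0082: object not found")
        return self.pool[name]

    def load(self, name, code):
        # Any request that would change the memory layout is rejected.
        raise PermissionError("read-only buffer pool: updates are rejected")

rop = ReadOnlyBufferPool({"PROG1": "<c1>", "PROG2": "<c2>"}, ["PROG1"])
rop.locate("PROG1")        # found: preloaded at start-up
# rop.locate("PROG2")      # would raise NAT0082: not on the preload list
```

Because `locate` never mutates shared state, no lock is needed at all, which is exactly why the real read-only buffer pool can skip its serialization operations.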
The login procedure performed by each Natural session using an update-capable buffer pool is not required when accessing a read-only buffer pool, and most of the semaphores an update-capable buffer pool needs are not used at all.
Summary

To summarize, a read-only buffer pool can improve the overall performance and reduce the resource requirements of a computing system that includes Natural applications running in open systems. The essential advantages of using a read-only buffer pool are:
• An unlimited number of users may use a read-only buffer pool
• Under UNIX, a read-only buffer pool does not use any semaphores
• In a Windows environment, few semaphores are used to serialize start-up and shutdown processing of a read-only buffer pool
• Using a read-only buffer pool, a performance gain and a throughput increment are achieved due to the reduction of system time and stall time
Two restrictions apply: storage requirements can increase, as all objects needed by an application have to be present in the buffer pool; and CATALOG, SAVE or STOW operations, as well as system file updates through SYSMAIN or SYSOBJH, have no effect on the contents of the read-only buffer pool, whose objects will never be replaced.
Extended Concepts Using a Read-Only Buffer Pool

To compensate for the inability of a read-only buffer pool to update its contents at any time, and to allow 7x24 operation of Natural (or a Natural session) using a read-only buffer pool, two additional buffer pool types have been introduced:
• An alternate buffer pool
• A backup buffer pool
To each read-only buffer pool an alternate
buffer pool can be assigned. The alternate
buffer pool must be a read-only buffer pool
as well. Natural sessions can be forced to
detach from a read-only buffer pool and to
attach in flight to the alternate buffer pool
using the NATBPMON command “SWAP”.
This allows replacing an incomplete read-
only buffer pool with a new improved read-
only buffer pool without stopping the
execution of Natural. The obsolete read-
only buffer pool can then be shut down,
restarted with an improved preload list, and
put into production again.
A backup buffer pool is an update-capable
buffer pool. A Natural session running with
a read-only buffer pool can be told to use
a backup buffer pool as secondary buffer
pool. If a call to locate an object in the
primary read-only buffer pool fails (be-
cause the object is not found in that buffer
pool) Natural’s buffer pool manager will
pass that call to the secondary (or backup)
buffer pool which will try to load that
object and allow Natural runtime to
execute that object in the secondary buffer
pool. The Natural application will not notice
the absence of an object in the primary
read-only buffer pool. However, the
number of sessions that can use such an
execution environment is limited to the
maximum number of users defined for the
backup buffer pool.
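The primary/secondary lookup can be sketched as follows (a plain-Python toy model in which pool contents are just dicts). As a side effect, the backup pool accumulates exactly the objects missing from the preload list, which is what the set-up procedure described in this article exploits:

```python
def locate_with_backup(name, primary_pool, backup_pool, system_file):
    """Toy model: the primary pool is a frozen read-only pool; the backup
    pool is update-capable and loads missing objects from the system file."""
    if name in primary_pool:
        return primary_pool[name]               # fast path: no serialization
    if name not in backup_pool:
        backup_pool[name] = system_file[name]   # load on demand
    return backup_pool[name]                    # application never sees NAT0082

system_file = {"PROG1": "<c1>", "PROG2": "<c2>"}
primary = {"PROG1": "<c1>"}                     # preloaded, but incomplete
backup = {}
locate_with_backup("PROG2", primary, backup, system_file)

# The backup pool now names exactly the objects to add to the next preload list:
enhanced_preload_list = sorted(primary) + sorted(backup)
```

Once the backup pool stays empty across a review period, the preload list is complete and the application can run from the read-only pool alone.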
How to Set Up an Environment Using a Read-Only Buffer Pool

Using a read-only buffer pool as the primary buffer pool and a backup buffer pool as the secondary buffer pool is a method to change a production environment from using a read/write capable buffer pool to using only a read-only buffer pool, while allowing an unlimited number of users to access the read-only buffer pool.
As the complete extent of an application is seldom known, a read-only buffer pool can be set up with all objects known to be used by the application. Natural can then run with the read-only buffer pool as the primary buffer pool and the backup (read/write capable) buffer pool as the secondary. The contents of the secondary buffer pool then show exactly those objects that were not known to belong to the application, i.e., those objects not found by Natural in the primary read-only buffer pool. The preload list(s) can be enhanced by adding the contents of the secondary (or backup) read/write capable buffer pool.
As soon as the secondary buffer pool is no longer accessed (for example, because the number of attempted locates has not changed since the last review), the application can be changed to use a read-only buffer pool only. If the contents of the read-only buffer pool are shown to be incomplete by sporadic NAT0082 ("Object not found") errors, the read-only buffer pool can be exchanged in flight for an enhanced read-only buffer pool.
TECHniques, SOA Enablement Edition | Issue 2, 2009
Natural Spotlight
Programmers are often unaware of how MultiFetch works. However, they may have heard how much they can save by employing MultiFetch. They seek out applications with a lot of READs and/or FINDs, and promptly modify the READs and FINDs to include MultiFetch. This can lead to trouble. The following is an adaptation of a real-life experience.
When NOT to MultiFetch

Recently, I was teaching a class where the topic under discussion was the new feature in Version 4 of Natural to ESCAPE TOP REPOSITION. I was comparing the code using this feature versus the "old" approach of placing a READ loop within a "dummy" REPEAT loop. Someone in the class wondered whether the new code would solve a problem they had that necessitated the dummy REPEAT loop code. I asked for a description of the problem. I will simplify their description a bit to maintain just the portions relevant to our discussion.
The Problem
We have an order file. There are many line items per order. Every line item of every order is a separate record (it is not in our purview to suggest a redesign with a PE). Line item numbers are assigned serially. There is a super descriptor which concatenates the order number and line number. Thus, we might have order number B8752 with nineteen line items; there will therefore be nineteen records, each with a separate line item.

There is an application which accepts an order number. It then reads by the super descriptor (in our example, the values of the super would be B875201…B875219). It is quite possible that in the process of reading the line items for an order, a new line item (record) is created for the order. The line item number will be the next one available (existing maximum plus one). In our example, we might have read the record with super B875204, then created a record with super B875220.
Consider a simple READ loop:

READ PRODUCTS BY PROD-CODE STARTING FROM 'B' TO 'BZ9999'
  :
  :
  :
END-READ
Assume there are 2000 records that are included in our specified
range. The READ loop will issue 2000 calls to Adabas. Each call to
Adabas will result in a record being read from the database, then
“stripped” (to extract the requested fields from the record) and
“decompressed” (padded out with trailing blanks or leading zeroes
for passage back to the calling program).
Suppose we now have:

READ MULTI-FETCH OF 100 PRODUCTS BY PROD-CODE STARTING FROM 'B' TO 'BZ9999'
Instead of 2000 calls to Adabas, there will be 20 calls to Adabas.
Each call will read 100 records, each of which will be stripped and
decompressed and returned to your Natural program in the Record
Buffer, from which they are placed in a MultiFetch buffer. Note
that MultiFetch does not save record reads (the same 2000
records will be read, stripped, and decompressed). It does, how-
ever, save Adabas calls. Actually, in this example, we will save
99% of the original 2000 Adabas calls. Adabas calls are expensive
(assuming you are not a DBA, just ask your DBA about the cost of
Adabas calls).
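The arithmetic is simple enough to check: a plain READ loop issues one Adabas call per record, while MULTI-FETCH OF n issues one call per n records (rounded up). A short sketch in Python:

```python
import math

def adabas_calls(record_count, multifetch_factor=1):
    """Number of Adabas calls a READ loop issues. Every record is still
    read, stripped and decompressed; MultiFetch only batches the calls."""
    return math.ceil(record_count / multifetch_factor)

assert adabas_calls(2000) == 2000          # plain READ: one call per record
assert adabas_calls(2000, 100) == 20       # MULTI-FETCH OF 100
saved = 1 - adabas_calls(2000, 100) / adabas_calls(2000)
print(f"{saved:.0%} of the Adabas calls saved")   # prints "99% of the Adabas calls saved"
```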
Using Natural's MultiFetch Feature to Improve Performance in SOA Environments
By Steve Robinson, Publisher of Inside Natural, Developer of "Simply Natural", Presenter of Natural Seminars, and President, S.L. Robinson and Associates, Inc., U.S.

There have been many enhancements to Natural Version 4. One of the most important new features, when performance is critical, is MultiFetch. Many programmers today are using MultiFetch without really understanding what it is and what it can do for them. This lack of understanding leads to problems, one of which we will discuss here. First, however, I will provide a brief discussion of why MultiFetch is such an important performance tool for Natural applications in SOA environments.
And here are the JONES records.
Okay, as we can see above, there are nine JONES records.
Here is a slight variation from the previous program:
Note we have a MULTI-FETCH of six for our READ. Also note that
when we get to ROBERT JONES, the third JONES (see output
above), we will STORE a new record with the FIRST-NAME of IS
THIS THERE.
Let’s discuss what the effect of the MULTI-FETCH is.
• The program calls Adabas, which returns six records. If you look above at the output from MULT01, these are the records for VIRGINIA through MARTHA JONES.
• Natural will process the first six records using the data from the MultiFetch buffer. When we get to ROBERT JONES, we create our new JONES record.
• After processing MARTHA JONES, our program calls Adabas again. Adabas places six records in its record buffer for return to Natural. The six records will be the remaining original JONES records (LAUREL, KEVIN, and GREGORY), the new JONES record (IS THIS THERE) and two additional records (with last names JOPER and JOUSSELIN). If I had used TO, rather than ENDING AT, the two additional records would not have been read. (This can be VERY important when using MultiFetch.)
Originally they had simple code such as:

READ ORDER-VIEW BY ORDER-LINE-NUMBER STARTING FROM #ORDER-LINE-ONE
  IF ORDER-NUMBER NE #ORDER-NUMBER
    ESCAPE BOTTOM IMMEDIATE
  END-IF
  :
  IF some condition
    :
    STORE new line item record
    END TRANSACTION
  END-IF
  :
END-READ
Sometimes, when this organization reached the end of an order,
they would see the new line item record, which is what they
wanted to see. However, the problem was that at other times,
they did not see the new record. Up until six months ago, they
always saw the new line item record.
So, put your thinking caps on...What might have caused the
change six months ago?
A MultiFetch “Gotcha”!
As we discussed above, MultiFetch “reads ahead” in order to
reduce Adabas calls. We will demonstrate the potential problem
using the familiar Employees file. Here is a simple program that
displays the JONES records from our file (yes, I could have used
the new option TO, rather than ENDING AT; this would not have
affected the scenario that will unfold).
> > + Program MULT01 Lib XSTRO
0010 DEFINE DATA LOCAL
0020 1 MYVIEW VIEW OF EMPLOYEES
0030   2 NAME
0040   2 FIRST-NAME
0050 END-DEFINE
0060 *
0070 INCLUDE AATITLER
0080 *
0090 READ MYVIEW BY NAME
0100   STARTING FROM 'JONES'
0110   ENDING AT 'JONES'
0120   DISPLAY NAME FIRST-NAME
0130 END-READ
0140 *
0150 END
MORE   PAGE # 1   DATE: Feb 02, 2009   PROGRAM: MULT01   LIBRARY: XSTRO
NAME                 FIRST-NAME
-------------------- --------------------
JONES                VIRGINIA
JONES                MARSHA
JONES                ROBERT
JONES                LILLY
JONES                EDWARD
JONES                MARTHA
JONES                LAUREL
JONES                KEVIN
JONES                GREGORY
> > + Program MULT02 Lib XSTRO
0010 DEFINE DATA LOCAL
0020 1 MYVIEW VIEW OF EMPLOYEES
0030   2 NAME
0040   2 FIRST-NAME
0050 END-DEFINE
0060 *
0070 INCLUDE AATITLER
0080 *
0090 READ MULTI-FETCH OF 6 MYVIEW BY NAME
0100   STARTING FROM 'JONES'
0110   ENDING AT 'JONES'
0120   DISPLAY NAME FIRST-NAME
0130   IF FIRST-NAME = 'ROBERT'
0140     MOVE 'JONES' TO NAME
0150     MOVE 'IS THIS THERE' TO FIRST-NAME
0160     STORE MYVIEW
0170   END-IF
0180 END-READ
0190 BACKOUT TRANSACTION
0200 END
Summary

There are many variations of the problem we have just discussed. It is important, when you consider using MultiFetch, that you also consider the possible implications of record updates (STORE, DELETE, UPDATE) in your program AND other programs. MultiFetch is an important tool in enhancing the performance of Natural applications in SOA environments. Understanding how to properly use MultiFetch and avoid the problem we discussed in this article will help you continue to fine-tune your Natural applications to meet the needs of your SOA.
Our output is shown below.
I made one minor change to our program, as shown below:
Let’s discuss what happens. The processing of the first six records
is the same, EXCEPT, while processing ROBERT we did not create
our extra record.
After processing MARTHA, our Natural program calls Adabas and
reads six records into the MultiFetch buffer; namely, the last three
JONES records (LAUREL, KEVIN, and GREGORY) and three more
records with last names JOPER, JOUSSELIN, and JUBE.
While processing the record for KEVIN, we STORE our new
JONES record.
NOTE: creating the new JONES record does NOT modify the
MultiFetch buffer. I think you now realize what will happen.
After processing GREGORY JONES, the next record in the MultiFetch
buffer has a last name of JOPER. We will therefore exit our READ
loop, without ever having seen the new JONES record, as
shown below.
MORE   PAGE # 1   DATE: Feb 02, 2009   PROGRAM: MULT02   LIBRARY: XSTRO
NAME                 FIRST-NAME
-------------------- --------------------
JONES                VIRGINIA
JONES                MARSHA
JONES                ROBERT
JONES                LILLY
JONES                EDWARD
JONES                MARTHA
JONES                LAUREL
JONES                KEVIN
JONES                GREGORY
JONES                IS THIS THERE
0130 IF FIRST-NAME = ‘KEVIN’
MORE   PAGE # 1   DATE: Feb 02, 2009   PROGRAM: MULT03   LIBRARY: XSTRO
NAME                 FIRST-NAME
-------------------- --------------------
JONES                VIRGINIA
JONES                MARSHA
JONES                ROBERT
JONES                LILLY
JONES                EDWARD
JONES                MARTHA
JONES                LAUREL
JONES                KEVIN
JONES                GREGORY
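The behavior seen in MULT02 and MULT03 can be reproduced outside Natural with a small Python simulation of read-ahead (my own sketch, not Natural's actual buffering code): a record stored while a batch is being processed never enters that already-fetched batch, and the loop ends as soon as a key leaves the requested range.

```python
# Toy simulation of MultiFetch read-ahead over a sorted key file.
def read_range(db, factor, prefix, on_record):
    """Process records whose keys start with `prefix`; one simulated
    'Adabas call' snapshots `factor` records into a read-ahead buffer."""
    pos = 0
    while pos < len(db):
        batch = sorted(db)[pos:pos + factor]   # snapshot = one Adabas call
        for key in batch:
            if not key.startswith(prefix):     # key left the range: loop ends
                return
            on_record(key)
        pos += factor

def run(factor):
    db = ["JONES/MARSHA", "JONES/ROBERT", "JONES/VIRGINIA", "JOPER/A"]
    seen = []
    def handler(key):
        seen.append(key)
        if key == "JONES/ROBERT":              # STORE inside the READ loop
            db.append("JONES/ZZ-NEW")          # sorts after current batch
    read_range(db, factor, "JONES", handler)
    return seen

print(run(factor=1))   # the stored JONES/ZZ-NEW record IS seen
print(run(factor=6))   # the stored record is NOT seen: it missed the batch
```

With a factor of 1 the loop re-reads the file on every call and picks up the new record; with a factor of 6 the new record falls behind an already-buffered out-of-range key, exactly the "gotcha" above.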
Reprinted with permission of Steve Robinson, President, S.L. Robinson and Associates, Inc., 28 Teal Drive, Langhorne, PA 19047, U.S.; Phone: 215-741-0820; Email: [email protected]. This article is an excerpt from an Inside Natural article that appeared in May 2008 (page 25). If you would like to read the entire Inside Natural article on MultiFetch and your company does not subscribe to Inside Natural, please go to the Software AG Technical Forums: http://tech.forums.softwareag.com/ If you are not registered at the technical forums, you will see a message that says Hello Guest, and you will also see a box that says Register. Click on that box and sign up as a registered reader of the Forum. There is no fee for this. You must be a registered reader to download attachments, although you can read postings without registering. Click on Natural General. Then click on Inside Natural. Finally, click on MultiFetch Article. The PDF for the article can be downloaded to your PC. If you have just registered for the Forum for the first time, after reading the article, peruse the site a bit. You will discover that there is a wealth of information for many Software AG products available at this site.
Adabas Spotlight
An Outdated Approach to Integration

The reason for major problems and headaches when it comes to integrating new software with existing IT systems is very much down to the traditional approach to software integration. Take a typical scenario where some existing, monolithic business logic on a core platform needs to be reused as part of a Microsoft Visual Basic application. The traditional method of dealing with this would involve the following steps:
1. Designing and agreeing on an interface that will be used to enable the VB client and the business logic to communicate.
2. Installing a messaging system, such as MQ Series, to enable the VB client to communicate with the business logic.
3. Ensuring custom server code enables the acceptance of messages from the VB client, to invoke the business logic and return the response to the VB client. (While in theory it is a simple enough exercise to handle messages from one client, the reality is that this code must be in a position to handle requests from a multitude of clients at the same time, making this logic infinitely more complex than a normal 'batch type' application.)
4. Writing and testing the VB client, but only when the above has been completed.
These steps characterize projects that are generally high in both risk and cost. This has driven the search for an alternative. Today that search is over, with the availability of Adabas SOA Gateway.
Why Is Integration Problematic?
By John Power, CEO, Risaris
The Integration Challenge

When it comes to technology projects, there is one stark statistic that should make you take notice. Organizations are finding that up to 70% of the cost of any software project can be soaked up by integration with existing IT assets, according to a leading analyst firm. Even more worrying is the fact that most of these 'integration' projects go over budget, are not delivered on time, and many even fail.

Medium and large-size organizations generally adopted software platforms (such as IBM mainframe systems) in the past for good reasons that still hold true today. Over the years they have developed multiple applications to represent the data and business logic that have become the core assets of their business.
Why is integration so problematic, and how can organizations significantly reduce the cost, complexity and risk associated with integration projects? This article looks at the traditional approach to integration and contrasts it with a new approach that dramatically reduces the time, complexity and cost of integration projects.
[Figure: the traditional integration chain: VB client, custom interface, MQ messaging, custom service logic, and custom interface in front of the existing business and screen logic.]
Simplifying Integration

Adabas SOA Gateway removes a massive degree of the effort and risk associated with traditional integration. So forget the traditional approach and imagine utilizing the following streamlined, effective and efficient steps for your integration project:
1. Install Adabas SOA Gateway.
2. Use a configuration wizard to wrap and make business logic available in minutes.
3. Write and test the VB client application against a real server.
This straightforward, no-fuss approach offers the following major advantages:
• Significant reduction in cost due to less custom code.
• Risk is limited or abolished, as the logic is made available immediately.
• Software does not need to be installed on the client system.
• Both unit and integration testing can take place immediately.
• Communication between client and server may be secured with the standard SSL protocol.
Adabas SOA Gateway Explained

So how is it possible to simplify the integration and application modernization challenge? Given that the Adabas SOA Gateway installation is a once-off event on a given platform, the steps required to wrap a single piece of business logic are easy:
1. The structure(s) identifying the inputs and outputs of the business logic are identified and imported into an Eclipse-based tool.
2. The fields in the structure are marked as 'input only', 'output only', or 'input and output'.
3. The definitions are exported to the Adabas SOA Gateway Server.
4. The service is published and is now available to the client.
[Figure: with Adabas SOA Gateway, the Accounts application connects through the gateway directly to the existing business and screen logic.]
Accessing Data the Traditional Way

There are also occasions when an integration effort requires direct access to data in an existing database. As with the exposure of business logic, getting access to the data in the traditional way is expensive, time consuming and fraught with difficulties. Assuming a Java client running on Windows wishes to access some core data, the following would generally be the traditional approach:
1. Designing and agreeing on an interface that will be used to enable the Java client to talk to a custom data server.
2. Installing a messaging system, such as MQ Series, to enable the Java client to communicate with the custom server.
3. Enabling custom server code to accept messages from the Java client, access the database and return the response to the Java client. (As with the previous example, this custom code must be in a position to handle requests from a multitude of clients at the same time, making this logic infinitely more complex than a normal 'batch type' application.)
4. Writing and testing the Java logic, but only after the above steps have been completed.
How Is That Simplified by the Use of Adabas SOA Gateway?
1. The structure of the database table or file can be determined from the database and
imported into an Eclipse-based tool.
2. The definitions are exported to the Adabas SOA Gateway Server.
3. The service is published and is now available to the client.
As shown in the diagram, Adabas SOA Gateway eliminates the need for writing any code
on the database side.
[Figure: the traditional data access chain (custom interfaces and MQ messaging to a custom data access application on z/OS) versus direct access to the existing database through Adabas SOA Gateway.]
Easy Access to Core Data Assets

Using Adabas SOA Gateway gives integration characteristics equivalent to the business logic integration in the earlier example, while offering the following benefits:
• Integration with existing database and business logic assets can be achieved in hours instead of weeks or months.
• Services may be reused again and again.
• Unit testing can be done simply with off-the-shelf tools.
• Integration testing is less time consuming, as it is clear where information is being sent and what information is being returned.
• Projects can be delivered on time and within agreed budgets.
• Integration costs are reduced by more than 35%.
• Programmers can focus on creating valuable business applications instead of spending up to 70% of their time working out how to get to the legacy information.
Organizations adopting Adabas SOA Gateway for their IT integration projects can rest
assured they will no longer be one of those organizations wasting their software budgets
on time-consuming, hard-to-manage integration problems.
Reprinted with permission of Rísarís Limited, 6 The Mill Buildings, The Maltings, Bray, Co Wicklow, Ireland; Phone: +353(1)2768040; Fax: + 353 404 66464; Email: [email protected].
Figure 1: ESB Service Abstracts Implementation from Invoke Method
webMethods Spotlight
So if the promise of SOA is business agility,
but the present reality (for many) is a small
pilot project, just a handful of services in
production or a mythical ROI – one wonders –
what makes this paradigm shift any differ-
ent than the ones that came before it?
The reality is that there is no “one-size-fits-
all” enterprise SOA solution; implementing
SOA is not so easy. While SOA raises expec-
tations and possibilities, it also increases
interdependencies and organizational
complexities (Figure 1). For example, now
we want to promote not just reuse, but
reuse enterprise-wide; yet obtaining enter-
prise-wide agreement can be a challenge.
1 “Best Practices for SOA Governance” User Survey, Software AG, May 2008
Our purpose here centers on bringing
clarity to SOA, SOA Governance and some
related subjects, as well as to show how a
modular and integrated approach can help
manage SOA complexities and fast-track
business agility.
To cut through some of the technical
jargon surrounding SOA and SOA Gover-
nance, we begin our discussion using a
business-level abstraction of both: in the
larger sense an SOA is about realizing
business agility and SOA Governance
facilitates this by enabling the acceleration
of business change in a controlled manner.
Interestingly enough, SOA, in contrast to previous IT technology paradigms, has the attention of both "ends" of the enterprise: business and IT. However, despite this interest and time in existence, only about one-third of SOA adopters to date have actually realized ROI from their SOA efforts, according to a recent survey¹.
SOA and SOA Governance: Going Beyond the Hype
By Justin Vaughan-Brown, Senior Director Communities, Software AG

Despite millions of web references for Service-Oriented Architecture (SOA), over a decade in existence and the fact that more than two-thirds of enterprises today are "using or planning to use it," there isn't one commonly accepted definition for SOA. SOA Governance is a related term that shares a similar lack of a singular definition. In spite of the confusion, one thing is clear: SOA and SOA Governance are rather complex topics.
FIGURE 1: Interdependence Creates Complexity
Service-Oriented Architecture (SOA) aims to deliver business agility by using services (small ad-hoc modules) that can be quickly built, assembled and employed to meet dynamic business needs. An SOA is supported by an IT infrastructure, development methods, organizational processes and integration capabilities all geared towards loosely coupled services. An SOA is like a giant LEGO set: the blocks are different sizes, shapes and colors; they are combined in a predictable and uniform way; yet they are completely flexible, so you can quickly create many different things again and again. Just as LEGO can create buildings, cars, people and even art, an SOA can reuse and adapt existing technologies to meet organizational demands.

SOA Governance is the art of ensuring that the enterprise is creating the right LEGO blocks, combining them in the right ways and doing it consistently across the enterprise to effectively realize the business objectives. Early application of SOA Governance lays the foundation for the success of the SOA initiative.
(Figure 1 depicts automated, manual and redundant services interconnecting Customers, Mainframe, HR, CRM, Finance, Third-party Logistics, ERP and Partners.)
New expectations of visibility and inter-
operability are running up against familiar
territory: complex infrastructures, a business
playing field that is under constant change
and enterprise divisions that may not want
to collaborate. Plus, the challenges of delivering information are still far different from the challenges of using it to drive business competitiveness.
Yet despite the complexities and constant
change, some organizations have been able
to transform their information silos into the
holy grail of “alignment of business and IT.”
While no two SOA implementations or
strategies are exactly alike, companies
successful with SOA do seem to share some
key commonalities; a foundation for SOA
success, if you will. These range from using
an SOA road map and starting with SOA
Governance early on, to ensuring that the
SOA initiative actively involves the busi-
ness side and follows an approach that
suits an add-as-you-go mentality.
Of these, SOA Governance is emerging as one of the most important for getting an SOA initiative off to the right start, delivering business value more quickly and improving agility.
Implementing SOA without governance
can quickly lead to issues, and ultimately
project failure (See 10 Dangers of an
Ungoverned SOA). SOA Governance helps
navigate the complexities introduced with
an “SOA jungle,” provides a holistic enter-
prise view, manages business changes and
provides measurements for compliance
and success. SOA Governance helps ensure
that SOA meets the organizational business
drivers, such as measurable ROI, greater IT
and business alignment, real-time business
visibility, reduced risks, improved quality
and business and regulatory compliance.
Beyond getting an SOA initiative off to a
good start though, SOA Governance is
essential to achieving SOA’s potential for
long-term success. This is because SOA
Governance encompasses all SOA activities
throughout the lifecycle, from the initial
definition through creation and execution.
Using a structured approach helps imple-
ment SOA Governance effectively across
the enterprise. The Governance Reference
Framework² (Figure 2) classifies the recom-
mended elements for effective implemen-
tation of SOA Governance and management
of SOA Assets into three groups:
• Organizational elements relate to people: what roles and structures are needed to define, enforce and monitor SOA Governance policies.
• Norms relate to policies, procedures and processes: what standards are needed to govern the activities surrounding SOA.
• Technology relates to the tools: technologies that support SOA Governance to define, enforce and monitor the norms.
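To make the three groups concrete, here is a purely illustrative sketch (in Python; the roles, tool name and naming convention are invented, not taken from any Software AG product) of how one norm might be recorded together with the people and technology that support it:

```python
from dataclasses import dataclass, field

@dataclass
class GovernancePolicy:
    """One norm from the framework, plus the people and tools around it."""
    name: str                                   # the norm: a policy, procedure or process
    owners: list = field(default_factory=list)  # organizational elements: roles that define/enforce/monitor it
    tools: list = field(default_factory=list)   # technology that automates its enforcement

# Example: a hypothetical design-time naming standard for services
naming_policy = GovernancePolicy(
    name="Service names must follow the <Domain>_<Capability> convention",
    owners=["SOA Competency Center", "Domain Architect"],
    tools=["Registry validation rule"],
)

print(naming_policy.owners)  # ['SOA Competency Center', 'Domain Architect']
```

The point of the structure is simply that every norm carries its organizational owners and supporting technology with it, so none of the three groups is governed in isolation.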
The methods that the Governance Reference
Framework provides to measure and guide
the SOA Governance plan can be adapted
to suit the organization’s needs. In addition,
it helps an enterprise transition and fine-
tune its organizational structure for more
effective SOA Governance – for example, by establishing an SOA Competency Center (Figure 3) to gain the needed skills for SOA within the organization.
2 "Approach to Service-Oriented Architecture (SOA) Deployment Accelerator", Software AG, October 2007
10 DANGERS OF AN UNGOVERNED SOA
1. Modeling process has no visibility of existing services and the processes they impact
2. Services may be accessed by those not entitled to do so
3. No awareness of the impact that changes made to one service have upon another related one
4. Absence of quality assurance processes before a service is deployed
5. Lack of a holistic view of how IT and business are interlinked
6. Poor understanding of service deployment, consumption or downtime
7. Policy enforcement is manual, unstructured and sporadic
8. No overall view of existing services means they are recreated, not reused
9. Absence of lifecycle management creates version control issues
10. Lack of responsibility and ownership regarding service creation and consumption
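Dangers 3 and 8 are, at bottom, dependency questions. As a hypothetical illustration of the kind of impact analysis a governed registry automates – the service names and dependency map below are invented – a breadth-first search over "who consumes whom" finds every service affected by a change:

```python
from collections import deque

# Hypothetical dependency map: service -> the services that consume it.
consumers = {
    "CustomerLookup": ["BillingValidation", "OrderShipping"],
    "BillingValidation": ["InvoicePortal"],
    "OrderShipping": [],
    "InvoicePortal": [],
}

def impact_of_change(service, consumers):
    """Return every service directly or transitively affected by changing `service`."""
    affected, queue = set(), deque([service])
    while queue:
        for dependent in consumers.get(queue.popleft(), []):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected

print(sorted(impact_of_change("CustomerLookup", consumers)))
# ['BillingValidation', 'InvoicePortal', 'OrderShipping']
```

Without this view recorded somewhere queryable, a change to CustomerLookup silently breaks the invoice portal two hops away.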
FIGURE 2: Governance Reference Framework
(The framework surrounds SOA Assets with three groups: Organizational Elements; Norms – policies, procedures and processes; and Technology.)
The Governance Reference Framework also
provides a set or catalog of norms that a
company can use to jump-start its SOA
Governance initiative. These norms help
guide how the SOA actors perform their
activities to best serve the needs of the
company. Technology is the third funda-
mental element; these are the tools that
facilitate effective SOA Governance. The
right tools allow you to plan, design, manage
and govern SOA infrastructures that support
the enterprise’s objectives across all aspects
of the SOA lifecycle.
TOOLS CAN HELP CROSS THE ROI LINE
In fact, the choice of SOA Governance tools
and when an organization implements them
can often mean the difference between
success and failure of the SOA initiative. An
SOA is by nature complex, often crossing
multiple departments, external groups,
customers and partners. Just one service
that delivers customer information, for
example, could be consumed by the
customer (to update their information),
finance (to validate and track the customer’s
bill) and logistics (to ship the customer’s
order). In each case, there may be different policies surrounding the use of that service.
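As a rough sketch of such per-consumer policies – the consumer names, operations and rate limits below are made up for illustration, not drawn from CentraSite – the policy data and an enforcement check for one customer-information service might look like:

```python
# Hypothetical per-consumer policies for a single "CustomerInfo" service.
policies = {
    "customer-portal": {"operations": ["read", "update"], "rate_limit_per_min": 60},
    "finance":         {"operations": ["read"],           "rate_limit_per_min": 600},
    "logistics":       {"operations": ["read"],           "rate_limit_per_min": 300},
}

def is_allowed(consumer, operation, policies):
    """Enforce the consumer-specific policy for one call to the service."""
    policy = policies.get(consumer)
    return policy is not None and operation in policy["operations"]

print(is_allowed("finance", "update", policies))  # False: finance may only read
```

One service, three consumers, three different policies – which is exactly the bookkeeping that outgrows any manual approach as the service count climbs.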
Tools are an essential part of these processes; without them an organization cannot manage and govern its SOA.
Many companies start off their SOA initia-
tives with the “management by Excel”
approach. They list their small but growing
catalog of services in an Excel spreadsheet,
a virtual registry or “yellow pages”. How-
ever, this approach is quite inadequate for
the complex, dynamic nature of an SOA.
EXCEL IS YET ANOTHER SILO
Besides the obvious downside of yet
another manually maintained spreadsheet,
Excel is not interconnected with the systems
that are used to develop, deploy and run
elements related to SOA. As services move
through the lifecycle, at each step of the
process, the Excel sheet would need to be
updated; and as services are being con-
sumed, what then? How can an organiza-
tion ever hope to measure if a service met
the defined SLA or enforce a policy if the
information is trapped in a spreadsheet?
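As an illustration of why this needs connected tooling rather than a spreadsheet: once invocation measurements flow automatically from the runtime into a governance system, an SLA check becomes a trivial computation. The response times and threshold below are fabricated for the example:

```python
# Illustrative SLA check: did a service meet its response-time target?
# In a governed SOA the runtime feeds these measurements in automatically;
# the numbers here are invented.
response_times_ms = [120, 95, 210, 180, 5000, 130, 110, 90, 140, 125]

def sla_compliance(samples, threshold_ms):
    """Fraction of invocations answered within the agreed threshold."""
    within = sum(1 for t in samples if t <= threshold_ms)
    return within / len(samples)

rate = sla_compliance(response_times_ms, threshold_ms=250)
print(f"{rate:.0%} of calls met the 250 ms SLA")  # 90% of calls met the 250 ms SLA
```

The computation is simple; the hard part – and the part a manually updated spreadsheet can never deliver – is the continuous, automatic feed of data behind it.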
Using Excel or other unintegrated technologies limits the enterprise's ability to grow and adapt its SOA; these tools also fail to provide a holistic enterprise SOA view. Rather, it is better to start off with a small
set of flexible tools specifically designed
for the purpose of SOA, and add on as the
organization’s needs change. That way the
organization has tools that facilitate the
natural evolution of SOA; a “think big, start
small” approach.
LONG-TERM SOA FLEXIBILITY
Flexible, modular SOA and SOA Governance
tools have the biggest organizational impact.
With them you can grow and adapt SOA as
the organizational needs change over time.
Modular and automated toolsets allow you
to rapidly implement a customized, best-
of-breed SOA Governance solution; this in
turn promotes business agility, collaboration
and reuse. Interoperability, best practices and open standards-based plug-in architectures combine for a long-term approach that helps maintain SOA flexibility.
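One way to picture such a plug-in architecture – this is a generic sketch with invented check names, not an actual product API – is a small core that runs whatever governance checks have been registered, so new checks can be added, or swapped out, without touching the core:

```python
# Minimal plug-in sketch: governance checks register themselves with the core.
CHECKS = {}

def governance_check(name):
    """Decorator that registers a check function under a name."""
    def register(fn):
        CHECKS[name] = fn
        return fn
    return register

@governance_check("has-owner")
def has_owner(service):
    return bool(service.get("owner"))

@governance_check("is-versioned")
def is_versioned(service):
    return "version" in service

def validate(service):
    """Run every registered check; report the names of the ones that fail."""
    return [name for name, fn in CHECKS.items() if not fn(service)]

print(validate({"name": "CustomerLookup", "version": "1.0"}))  # ['has-owner']
```

The design choice mirrors the article's point: the core never changes as governance needs grow; only the catalog of registered plug-ins does.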
FIGURE 3: SOA Competency Center™ – helping you design and implement your SOA
(The Competency Center combines an SOA methodology – Discovery, Assessment, Planning & Design, Execution, Measurement and Optimization phases – with SOA Lifecycle Services, SOA Maturity Assessment, SOA Discovery and SOA Readiness Assessment.)
Leading analysts confirm that no single solution or technology will be able to meet the diverse SOA Governance requirements. Enterprises will need the support of a good SOA ecosystem, built from multiple vendors with a registry that unifies them.
HOW TO ACHIEVE MAXIMUM BUSINESS AGILITY WITH SOA – KEY POINTS
As you have seen in this article, there are many facets of SOA Governance and a wide range of integrations possible across the SOA lifecycle. Summarized below are some of the key aspects for achieving maximum business agility with an SOA:
• Consider the need for SOA Governance before you embark on your SOA initiative.
• Think of the Governance Reference Framework – and consider your organization, its norms (policies, procedures and processes) and the technology tools that can help support SOA Governance.
• A registry/repository should be at the heart of your technology tools, recording your services and their lifecycle stages, helping to enforce policies and acting as a command center for governing SOA.
• Use a designed-for-purpose tool – not Excel as a quick fix or interim solution.
• CentraSite, the market-leading SOA registry/repository, is ideally suited for the command center role.
• The registry and repository should be open and capable of integrating with other best-in-class solutions to provide comprehensive end-to-end SOA Governance. In addition, ask how registry/repository vendors approach integration with the existing tools in your specific IT environment.
• A great way to build up your understanding of an SOA registry/repository is to download the free Community Edition of CentraSite at www.centrasite.com.
INTRODUCING THE CENTRASITE COMMUNITY
By Gerd Schneider, Senior Vice President, Enterprise Transaction Systems and Communities, and Justin Vaughan-Brown, Senior Director Communities, Software AG

The CentraSite Community is an SOA ecosystem comprising software vendors and consultancies whose technologies and methodologies complement and integrate with the CentraSite registry/repository to deliver a comprehensive end-to-end SOA Governance solution, from conceptual modeling through to resulting service deployment and monitoring.

The Community brings a range of benefits to SOA adoption. These include:
• Pre-packaged integrations that fast-track implementations
• Diverse areas of SOA that can be brought together seamlessly
• No rip-and-replace demands – CentraSite will integrate with any vendor offering (competitive or not) using commonly accepted industry standards
• A broad range of expertise across the SOA landscape

Vendors who are not yet part of the Community can easily integrate with CentraSite, based on the proven standards-based approach.

1 SYS-CON Media 2007 SOA World Readers' Choice Awards, Best Web Services or XML Site: CentraSite Community Portal