  • Lecture Notes in Computer Science 1626
    Edited by G. Goos, J. Hartmanis and J. van Leeuwen

  • Berlin · Heidelberg · New York · Barcelona · Hong Kong · London · Milan · Paris · Singapore · Tokyo

  • Matthias Jarke, Andreas Oberweis (Eds.)

    Advanced Information Systems Engineering

    11th International Conference, CAiSE*99
    Heidelberg, Germany, June 14-18, 1999
    Proceedings

  • Series Editors

    Gerhard Goos, Karlsruhe University, Germany
    Juris Hartmanis, Cornell University, NY, USA
    Jan van Leeuwen, Utrecht University, The Netherlands

    Volume Editors

    Matthias Jarke
    RWTH Aachen, Lehrstuhl für Informatik V
    Ahornstr. 55, D-52056 Aachen, Germany
    E-mail: [email protected]

    Andreas Oberweis
    Universität Frankfurt, Lehrstuhl für Wirtschaftsinformatik II
    Mertonstr. 17, D-60325 Frankfurt, Germany
    E-mail: [email protected]

    Cataloging-in-Publication data applied for

    Die Deutsche Bibliothek - CIP-Einheitsaufnahme

    Advanced information systems engineering : 11th international conference; proceedings / CAiSE 99, Heidelberg, Germany, June 14-18, 1999. Matthias Jarke; Andreas Oberweis (ed.). - Berlin; Heidelberg; New York; Barcelona; Hong Kong; London; Milan; Paris; Singapore; Tokyo : Springer, 1999

    (Lecture notes in computer science; Vol. 1626)
    ISBN 3-540-66157-3

    CR Subject Classification (1998): H.2, H.4-5, J.1, K.4.3, K.6

    ISSN 0302-9743
    ISBN 3-540-66157-3 Springer-Verlag Berlin Heidelberg New York

    This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.

    © Springer-Verlag Berlin Heidelberg 1999
    Printed in Germany

    Typesetting: Camera-ready by author
    SPIN 10703286 06/3142 5 4 3 2 1 0
    Printed on acid-free paper

  • Preface

    CAiSE*99 is the 11th in the series of International Conferences on Advanced Information Systems Engineering. The aim of the CAiSE series is to give researchers and professionals from universities, research, industry, and public administration the opportunity to meet annually to discuss evolving research issues and applications in the field of information systems engineering, and also to assist young researchers and doctoral students in establishing relationships with senior scientists in their areas of interest.

    Starting from a Scandinavian origin in the late 1980s, CAiSE has evolved into a truly international conference with a worldwide author and attendance list. The CAiSE*99 program listed contributions from 19 countries, from four continents! These contributions (27 full papers, 12 short research papers, six workshops, and four tutorials) were carefully selected from a total of 168 submissions by the international program committee.

    A special theme of CAiSE*99 was component-based information systems engineering. Component-based approaches mark the maturity of any engineering discipline. However, transferring this idea to the complex and diverse world of information systems has proven more difficult than expected. Despite numerous proposals from object-oriented programming, design patterns and frameworks, customizable reference models and standard software, requirements engineering and business re-engineering, web-based systems, data reduction strategies, knowledge management, and modularized education, the question of how to make component-oriented approaches actually work in information systems remains wide open.

    CAiSE*99 addressed these issues through invited talks and panels by business and research leaders. The invited talks, held by two of the most influential researcher-entrepreneurs, represent two key trends in information systems engineering: towards object model standardization and towards business domain model standardization. Ivar Jacobson (Rational Software Inc.) reviewed the Unified Process of Component-Based Development, filling a sorely felt gap in the Unified Modelling Language (UML) standardization effort. August-Wilhelm Scheer (Universität des Saarlandes and IDS Prof. Scheer GmbH) investigated the way-of-working leading from business process models to application systems. These talks were complemented by a high-level interdisciplinary panel on Component-Based Information Systems Architectures.

    More details on the conference theme were provided by two full-day tutorials on component-based development and on the impact of the Unified Modeling Language. Additional mini-tutorials during the conference itself addressed the emerging questions of advanced workflow systems and infrastructure for electronic commerce.

    CAiSE*99 was held in Heidelberg, Germany, not only a beautiful site with the oldest university in Germany, but also close to numerous leading information systems engineering companies in the Rhein-Main-Neckar area, just an hour away from Frankfurt airport. The touristic attraction of the area dates back 500,000 years: the oldest human bones ever found in Europe belong to Homo heidelbergensis! More recent Heidelberg features include the historic student town with the biggest wooden wine barrel in the world, as well as the historic student dungeon for those who had too much of the contents.

    Besides summaries of the invited talks, this proceedings volume includes the research papers accepted as long or short papers for the conference. Topics addressed include:

    - components, workflows, method engineering, and process modeling
    - data warehousing, distributed and heterogeneous information systems
    - temporal information systems and information systems dynamics

    Associated pre-conference workshops covered Data Warehousing, Method Evaluation, Agent-Oriented Information Systems, Business Process Architectures, and Requirements Engineering. Papers from these workshops could not be included in these proceedings but are available through the workshop organizers (for more information, consult the web site http://www-i5.informatik.rwth-aachen.de/conf/caise99/).

    We would like to thank all the institutions and individuals who have made this conference possible, especially our General Chair Wolffried Stucky, the workshop and tutorial co-chairs Gottfried Vossen and Klaus Pohl, as well as the CAiSE advisory committee led by Janis Bubenko and Arne Sølvberg. Several sponsors and supporters from industry made substantial contributions to the success of the conference. Thanks are also due to all the submitters of papers, tutorials, and workshops, and last but not least to our co-workers Kirsten Lenz, Stefan Sklorz, and Irene Wicke, without whom the organization would hardly have been possible. Stefan Sklorz also served as Technical Editor of this proceedings volume.

    March 1999
    Aachen and Frankfurt

    Matthias Jarke and Andreas Oberweis

  • CAiSE*99 Conference Organization

    Advisory Committee
    Janis Bubenko, Royal Institute of Technology, Sweden
    Arne Sølvberg, The Norwegian University of Science and Technology, Norway

    General Chair
    Wolffried Stucky, University of Karlsruhe, Germany

    Program Chair
    Matthias Jarke, RWTH Aachen, Germany

    Organizing Chair
    Andreas Oberweis, University of Frankfurt, Germany

    Program Committee

    Hans-Jürgen Appelrath, Germany
    Sjaak Brinkkemper, The Netherlands
    Meral Binbasioglu, U.S.A.
    Janis Bubenko, Sweden
    Silvana Castano, Italy
    Panos Constantopoulos, Greece
    Vytautas Cyras, Lithuania
    Klaus R. Dittrich, Switzerland
    Eric Dubois, Belgium
    Marcel Francksson, France
    Wolfgang Hesse, Germany
    Stefan Jablonski, Germany
    Matthias Jarke (chair), Germany
    Christian S. Jensen, Denmark
    Manfred Jeusfeld, The Netherlands
    Leonid Kalinichenko, Russia
    Hannu Kangassalo, Finland
    Gerti Kappel, Austria
    Kamal Karlapalem, China
    Gerhard Knolmayer, Switzerland
    Frederick H. Lochovsky, China
    Pericles Loucopoulos, United Kingdom
    Kalle Lyytinen, Finland
    Neil A.M. Maiden, United Kingdom
    Robert Meersman, Belgium
    Carlo Meghini, Italy
    Günter Müller, Germany
    John Mylopoulos, Canada
    Erich J. Neuhold, Germany
    Antoni Olivé, Spain
    Andreas Opdahl, Norway
    Maria E. Orlowska, Australia
    Michael Papazoglou, The Netherlands
    Barbara Pernici, Italy
    Klaus Pohl, Germany
    Naveen Prakash, India
    Bala Ramesh, U.S.A.
    Andreas Reuter, Germany
    Colette Rolland, France
    Thomas Rose, Germany
    Matti Rossi, Finland
    Gunter Saake, Germany
    Motoshi Saeki, Japan
    Amit Sheth, U.S.A.
    August-Wilhelm Scheer, Germany
    Michel Scholl, France
    Arie Segev, U.S.A.
    Amilcar Sernadas, Portugal
    Keng Siau, U.S.A.
    Elmar J. Sinz, Germany
    Arne Sølvberg, Norway
    Stefano Spaccapietra, Switzerland
    Wolffried Stucky, Germany
    Rudi Studer, Germany
    Babis Theodoulidis, United Kingdom
    Yannis Vassiliou, Greece
    Yair Wand, Canada
    Roel Wieringa, The Netherlands
    Eric Yu, Canada


    Additional Referees

    Mounji Abdelaziz, Belgium
    Giuseppe Amato, Italy
    Michael Amberg, Germany
    Ismailcem Budak Arpinar, U.S.A.
    Sören Balko, Germany
    Patrick Baudisch, Germany
    Linda Bird, Australia
    Michael Boehnlein, Germany
    Dietrich Boles, Germany
    Markus Breitling, Germany
    Patrik Budenz, Germany
    Ralph Busse, Germany
    Jorge Cardoso, U.S.A.
    Fabio Casati, Italy
    Donatella Castelli, Italy
    Judith Cornelisse-Vermaat, The Netherlands
    Stefan Decker, Germany
    Martin Doerr, Greece
    Ruxandra Domenig, Switzerland
    St. Duewel, Germany
    Christian Ege, Germany
    Rolf Engmann, The Netherlands
    Fabian Erbach, Germany
    Torsten Eymann, Germany
    Dieter Fensel, Germany
    Erwin Folmer, The Netherlands
    Chiara Francalanci, Italy
    Lars Frank, Denmark
    Peter Frankhauser, Germany
    Matthias Friedrich, Germany
    Hans Fritschi, Switzerland
    Michael Gebhardt, Germany
    Christian Ghezzi, United Kingdom
    G. Giannopoulos, United Kingdom
    Soenke Gold, Germany
    Paula Gouveia, Portugal
    Tom Gross, Austria
    Peter Haumer, Germany
    Olaf Herden, Germany
    Eyk Hildebrandt, Germany
    Martin Hitz, Austria
    Stefan Horn, Germany
    Giovanni Iachello, Germany
    Uwe Jendricke, Germany
    Dirk Jonscher, Switzerland
    Yannis Kakoudakis, United Kingdom
    Vera Kamp, Germany
    Panos Kardasis, United Kingdom
    E. Kavakli, United Kingdom
    Minna Koskinen, Finland
    Markus Kradolfer, Switzerland
    Michael Lawley, Australia
    Karel Lemmen, The Netherlands
    Mauri Leppänen, Finland
    Weifa Liang, Australia
    Jianguo Lu, Canada
    ZongWei Luo, U.S.A.
    Olivera Marjanovic, Australia
    Silvia Mazzini, Italy
    Jens Neeb, Germany
    Marian Orlowski, Australia
    Boris Padovan, Germany
    Rainer Perkuhn, Germany
    Anne Persson, Sweden
    Ilias Petrounias, United Kingdom
    Nikos Prekas, United Kingdom
    Jaime Ramos, Portugal
    Stefan Rausch-Schott, Austria
    Martin Reichenbach, Germany
    Jörg Ritter, Germany
    Roland Rolles, Germany
    Gregory Rose, U.S.A.
    Wasim Sadiq, Australia
    Carina Sandmann, Germany
    Mohammad Saraee, United Kingdom
    Ralf Schamburger, Germany
    Michael Schlundt, Germany
    Ingo Schmitt, Germany
    Hans-Peter Schnurr, Germany
    Detlef Schoder, Germany
    Dirk Schulze, Germany
    Kerstin Schwarz, Germany
    Roland Schätzle, Germany
    Mikael Skov, Denmark
    Steffen Staab, Germany
    Umberto Straccia, Italy
    Juha-Pekka Tolvanen, Finland
    Dimitrios Tombros, Switzerland


    Can Türker, Germany
    Achim Ulbrich-vom Ende, Germany
    Anca Vaduva, Switzerland
    Alex Vakaloudis, United Kingdom
    Kris De Volder, Belgium
    Rob L.W. v.d. Weg, The Netherlands
    Klaus Weidenhaupt, Germany
    Patrik Werle, Sweden
    Benedikt Wismans, Germany
    Shengli Wu, U.S.A.
    J. Leon Zhao, U.S.A.
    Frank-O. Zimmermann, Germany

    Organizing Committee

    Marcus Raddatz, RWTH Aachen
    Stefan Sklorz, RWTH Aachen
    Irene Wicke, RWTH Aachen
    Kirsten Keferstein, Univ. of Frankfurt
    Kirsten Lenz, Univ. of Frankfurt
    Jürgen Powik, Univ. of Frankfurt

    Supporting and Sponsoring Organizations

    Association for Information Systems
    European Media Lab (Heidelberg)
    Gesellschaft für Informatik e.V.
    IBM Deutschland GmbH
    Promatis GmbH

  • CAiSE*99 Tutorials

    Tutorial Chair
    Klaus Pohl, RWTH Aachen, Germany

    UML at Work - From Analysis to Implementation
    Gregor Engels, Germany
    Annika Wagner, Germany
    Martin Hitz, Austria
    Gerti Kappel, Austria
    Werner Retschitzegger, Austria

    Building Component-Based Business Applications
    Claus Rautenstrauch, Germany
    Klaus Turowski, Germany

    Challenges in Workflow Management
    Amit Sheth, U.S.A.
    Christoph Bussler, U.S.A.

    Technological Infrastructure for Electronic Commerce
    Avigdor Gal, U.S.A.
    John Mylopoulos, Canada

    CAiSE*99 Pre-conference Workshops

    Workshop Chair
    Gottfried Vossen, University of Münster, Germany

    6th CAiSE Doctoral Consortium
    Frank Moisiadis, Australia
    Gabrio Rivera, Switzerland
    Antonia Erni, Switzerland

    Design and Management of Data Warehouses (DMDW'99)
    Stella Gatziu, Switzerland
    Manfred Jeusfeld, The Netherlands
    Martin Staudt, Switzerland
    Yannis Vassiliou, Greece

    4th CAiSE/IFIP8.1 Int. WS on Evaluation of Modeling Methods in Systems Analysis and Design (EMMSAD'99)
    Keng Siau, U.S.A.
    Yair Wand, Canada

    Agent-Oriented Information Systems (AOIS'99)
    Gerd Wagner, Germany
    Eric Yu, Canada

    Software Architectures for Business Process Management (SABPM'99)
    Wil van der Aalst, The Netherlands
    Jörg Desel, Germany
    Roland Kaschek, Austria

    5th International Workshop on Requirements Engineering: Foundation for Software Quality (REFSQ'99)
    Klaus Pohl, Germany
    Andreas Opdahl, Norway

  • Table of Contents

    Invited Talks

    The Unified Process for Component-Based Development . . . . . . . . . . . . . . . . 1
    I. Jacobson

    From Business Process Model to Application System - Developing an Information System with the House of Business Engineering (HOBE) . . . . . 2
    A.-W. Scheer, M. Hoffmann

    Regular Papers

    Components

    CPAM, A Protocol for Software Composition . . . . . . . . . . . . . . . . . . . . . . . . . . 11
    L. Melloul, D. Beringer, N. Sample, G. Wiederhold

    A Process-Oriented Approach to Software Component Definition . . . . . . . . . 26
    F. Matthes, H. Wegner, P. Hupe

    Configuring Business Objects from Legacy Systems . . . . . . . . . . . . . . . . . . . . . 41
    W.-J. v.d. Heuvel, M. Papazoglou, M.A. Jeusfeld

    IS Management

    Risk Management for IT in the Large . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
    D. Verhoef, M. Franckson

    Linking Business Modelling to Socio-technical System Design . . . . . . . . . . . . 73
    A.G. Sutcliffe, S. Minocha

    Towards Flexible and High-Level Modeling and Enacting of Processes . . . . 88
    G. Joeris, O. Herzog

    Method Engineering

    Method Enhancement by Scenario Based Techniques . . . . . . . . . . . . . . . . . . . 103
    J. Ralyté, C. Rolland, V. Plihon

    Support for the Process Engineer: The Spearmint Approach to Software Process Definition and Process Guidance . . . . . . . . . . . . . . . . . . . . . 119
    U. Becker-Kornstaedt, D. Hamann, R. Kempkens, P. Rösch, M. Verlage, R. Webby, J. Zettel

    Managing Componentware Development - Software Reuse and the V-Modell Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
    D. Ansorge, K. Bergner, B. Deifel, N. Hawlitzky, C. Maier, B. Paech, A. Rausch, M. Sihling, V. Thurner, S. Vogel

    Data Warehouses

    Modelling Multidimensional Data in a Dataflow-Based Visual Data Analysis Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
    F. Wietek

    Towards Quality-Oriented Data Warehouse Usage and Evolution . . . . . . . . . 164
    P. Vassiliadis, M. Bouzeghoub, C. Quix

    Designing the Global Data Warehouse with SPJ Views . . . . . . . . . . . . . . . . . 180
    D. Theodoratos, S. Ligoudistianos, T. Sellis

    Process Modeling

    Applying Graph Reduction Techniques for Identifying Structural Conflicts in Process Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
    W. Sadiq, M.E. Orlowska

    A Multi-variant Approach to Software Process Modelling . . . . . . . . . . . . . . . 210
    W. Hesse, J. Noack

    An Ontological Analysis of Integrated Process Modelling . . . . . . . . . . . . . . . . 225
    P. Green, M. Rosemann

    CORBA, Distributed IS

    Design of Object Caching in a CORBA OTM System . . . . . . . . . . . . . . . . . . . 241
    T. Sandholm, S. Tai, D. Slama, E. Walshe

    Constructing IDL Views on Relational Databases . . . . . . . . . . . . . . . . . . . . . . 255
    K. Jungfer, U. Leser, P. Rodriguez-Tome

    The Design of Cooperative Transaction Model by Using Client-Server Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
    A. Oh

    Workflow

    A Multilevel Secure Workflow Management System . . . . . . . . . . . . . . . . . . . . . 271
    M.H. Kang, J.N. Froscher, A.P. Sheth, K.J. Kochut, J.A. Miller

    Time Constraints in Workflow Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
    J. Eder, E. Panagos, M. Rabinovich

    TOGA - A Customizable Service for Data-Centric Collaboration . . . . . . . . . 301
    J. Sellentin, A. Frank, B. Mitschang

    Heterogeneous Databases

    A Practical Approach to Access Heterogeneous and Distributed Databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
    F. de Ferreira Rezende, U. Hermsen, G. de Sá Oliveira, R.C.G. Pereira, J. Rütschlin

    A Uniform Approach to Inter-model Transformations . . . . . . . . . . . . . . . . . . . 333
    P. McBrien, A. Poulovassilis

    OTHY: Object To HYpermedia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
    F. Barbeau, J. Martinez

    IS Dynamics

    Modeling Dynamic Domains with ConGolog . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
    Y. Lespérance, T.G. Kelley, J. Mylopoulos, E.S.K. Yu

    Towards an Object Petri Nets Model for Specifying and Validating Distributed Information Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
    N. Aoumeur, G. Saake

    Relationship Reification: A Temporal View . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
    A. Olivé

    Short Papers

    Towards a Classification Framework for Application Granularity in Workflow Management Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
    J. Becker, M. zur Mühlen

    Adaptive Outsourcing in Cross-Organizational Workflows . . . . . . . . . . . . . . . 417
    J. Klingemann, J. Wäsch, K. Aberer

    Policy-Based Resource Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
    Y.-N. Huang, M.-C. Shan

    Modelling Method Heuristics for Better Quality Products . . . . . . . . . . . . . . . 429
    N. Prakash, R. Sibal

    Queries and Constraints on Semi-structured Data . . . . . . . . . . . . . . . . . . . . . . 434
    D. Calvanese, G. De Giacomo, M. Lenzerini

    A Prototype for Metadata-Based Integration of Internet Sources . . . . . . . . . 439
    C. Bornhövd, A.P. Buchmann

    Workflow Management Through Distributed and Persistent CORBA Workflow Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
    M. Weske

    Component Criteria for Information System Families . . . . . . . . . . . . . . . . . . . 451
    S. Jarzabek

    TUML: A Method for Modelling Temporal Information Systems . . . . . . . . . 456
    M. Svinterikou, B. Theodoulidis

    Beyond Goal Representation: Checking Goal-Satisfaction by Temporal Reasoning with Business Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
    C. Yi, P. Johannesson

    Design the Flexibility, Maintain the Stability of Conceptual Schemas . . . . . 467
    L. Wedemeijer

    Metrics for Active Database Maintainability . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
    O. Díaz, M. Piattini

    Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477

  • The Unified Process for Component-Based Development

    Ivar Jacobson

    Rational Software Inc.
    [email protected]

    Abstract. A better development process (in fact, a process unifying the best practices now available) is the key to the software future. The proven Unified Process, originally developed by Ivar Jacobson and now incorporating the work of Grady Booch, Jim Rumbaugh, Philippe Kruchten, Walker Royce, and other people inside Rational, answers this long-felt need. Component and object based, the Unified Process enables reuse. Use-case driven, it closes the gap between what the user needs and what the developer does, and it drives the development process. Architecture centric, it guides the development process. Iterative and incremental, it manages risk. Represented in the design blueprints of the newly standardized Unified Modeling Language (UML), it communicates your results to a wide audience.

    M. Jarke, A. Oberweis (Eds.): CAiSE*99, LNCS 1626, pp. 1-1, 1999. © Springer-Verlag Berlin Heidelberg 1999

  • M. Jarke, A. Oberweis (Eds.): CAiSE*99, LNCS 1626, pp. 2-9, 1999. © Springer-Verlag Berlin Heidelberg 1999

    From Business Process Model to Application System - Developing an Information System with the House of Business Engineering (HOBE)

    August-Wilhelm Scheer, Michael Hoffmann

    Institut für Wirtschaftsinformatik (IWi) an der Universität des Saarlandes
    Im Stadtwald, Geb. 14.1, 66123 Saarbrücken

    {scheer, hoffmann}@iwi.uni-sb.de

    Abstract. Organizational concepts like virtual enterprises or profit centers in companies are demanding new functionality from information systems. Enterprises have to customize their business processes in very short periods because of stronger competition and globalization. Concepts like electronic commerce and supply chain management need a permanent optimization of the business processes along the supply chain. Generating individual software and model-based customizing of ERP packages offer the potential for more flexibility in supporting business processes with information systems. This paper shows a concept for describing and implementing business processes, and a software development project using this concept.

    1 Flexible Information Systems for Mobile Enterprises

    Organizational concepts like virtual enterprises or profit centers in companies are demanding new functionality from information systems. The networks between many organizational units in a company and between companies need more flexibility in implementing and customizing information systems.

    In the past, information systems were customized once, on the basis of organizational structures and business processes, and then not altered again for a period of many years. Nowadays, enterprises have to customize their business processes in very short periods because of stronger competition and globalization. Concepts like electronic commerce and supply chain management demand a permanent optimization of the business processes along the supply chain.

    Companies selling enterprise resource planning packages have realized the necessity of delivering tools that reduce the time and costs spent in implementing and customizing their software products.

    Many enterprises expect more flexibility from generating their individual software products on the basis of semantic models and software components working together in a framework. A change of the semantic models automatically takes effect on the configuration of the information system.

    With ARIS House of Business Engineering, chapter 2 introduces a concept for describing business processes from semantic models to implementation. Chapter 3 shows the development of software applications using the HOBE concept. In chapter 4, future trends are presented.



    2 ARIS - House of Business Engineering (HOBE)

    ARIS - House of Business Engineering is an integrated concept for describing and executing business processes. Furthermore, it is a framework for developing real application software. The next sections explain the meaning of the different levels shown in Fig. 1.

    [Figure: the ARIS House of Business Engineering with its five levels - I. Process Design (reference models, knowledge management), II. Process Planning and Control (database, simulation, quality assurance, controlling/benchmarking, time and capacity management, Executive Information System, Continuous Process Improvement), III. Process Workflow (folders with open functions, documents, and data; monitoring), IV. Process Application (components, business objects, object libraries, standard software, Java applets), and V. Framework (build-time configuration, process and product models, process warehouse).]

    Fig. 1 ARIS - House of Business Engineering

    2.1 Level I: Process Design

    Level I describes business processes according to their routing. For this, the ARIS concept provides a method to cover every aspect of the business process. Methods for optimizing and guaranteeing the quality of processes are available, too.

    2.2 Level II: Process Management

    Level II plans and monitors every current business process from the business process owner's point of view. Various methods of scheduling, capacity control, and cost analysis are available. By monitoring the process, the process manager is aware of the status of each process instance.


    2.3 Level III: Process Workflow

    Level III transports the objects to be processed, such as customer orders with the corresponding documents, or insurance claims in insurance companies, from one workplace to the next. The documents are stored in folders. Workflow systems carry out this material handling of electronically stored documents.

    2.4 Level IV: Process Application

    Level IV processes the documents transported to the individual workplaces; that is where the functions of the business process are executed. At this level, computer-aided application systems, ranging from simple word processing programs to complex standard software modules and Internet applets, are used.

    2.5 Combination of the Four Levels and Framework

    Those four levels of the ARIS House of Business Engineering are connected interdependently. Level II (Process Management) delivers information about the profitability of current processes. That information forms the basis for a continuous adjustment and improvement of the business processes at Level I. The process workflow reports data regarding times, amounts, organizational allocation, etc. to Process Management. The workflow systems at Level III start applications at Level IV.

    The framework is the fifth component, which encompasses all four levels. It combines architecture and application knowledge, from which concrete applications are configured. At all levels of the HOBE concept, the ARIS life cycle model can be used. That means that the software can be described by Requirements Definition, Design Specification, and Implementation.

    The following chapter describes the prototypical implementation of the HOBE concept.

    3 Prototypical Implementation of ARIS House of Business Engineering

    In order to implement the HOBE concept, the institute for information systems (Institut für Wirtschaftsinformatik, IWi) of the University of Saarland and IDS Scheer AG have launched a project to prototypically implement a sales scenario. This project was supported by CSE Systems Cooperation and NEXUS GmbH. Fig. 2 shows the components used in the implementation.


    [Figure: the components used in the implementation, mapped to the HOBE levels - the ARIS Toolset (organizational models, process models, data models) at Levels I and II, workflow control at Level III, and the NEXUS Conspector and Office applications working on the data at Level IV, with Continuous Process Improvement connecting the levels.]

    Fig. 2 Components used in the implementation

    At Level I, the ARIS Toolset has been used to model the processes, the data structure, and the hierarchical organization of the enterprise considered. An organigram has been modeled for the hierarchical organization, as well as a UML class diagram for the data structure and an extended Event-Driven Process Chain (eEPC) for the process organization.

    3.1 Data Modeling

    On the basis of the UML class diagram, a database repository has been generated by using a report and an interface between the ARIS Toolset and the NEXUS Builder.

    The following rules were observed:

    - Each class is represented as an object in the database.
    - Each association with the cardinality m:n is represented as an object in the database.
    - Associations with the cardinality 1:n are not represented as objects. The attributes of these associations are administered in the subordinate classes.
    - Classes are depicted centrally in forms; associated classes are depicted in the detail section.

    Filling the generated database with content, one gets the company-wide integrated database/data set for the company considered.
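The mapping rules above can be sketched in a few lines of code. This is only an illustrative reconstruction, not the actual ARIS-Toolset/NEXUS Builder interface; the function, its input format, and the `ref_` naming for folded-in 1:n associations are assumptions.

```python
# Illustrative sketch of the class-diagram-to-database rules described
# above (hypothetical data structures, not the NEXUS Builder API).

def generate_repository(classes, associations):
    """classes: list of class names.
    associations: (name, source, target, cardinality) tuples, where
    cardinality is "m:n" or "1:n" (source being the "1" side)."""
    # Rule 1: each class is represented as an object in the database.
    objects = {name: {"attributes": []} for name in classes}
    for name, source, target, card in associations:
        if card == "m:n":
            # Rule 2: an m:n association becomes an object of its own.
            objects[name] = {"attributes": [], "links": (source, target)}
        elif card == "1:n":
            # Rule 3: a 1:n association is not an object; its attributes
            # are administered in the subordinate (n-side) class.
            objects[target]["attributes"].append(f"ref_{source}")
    return objects

# Classes taken from the paper's example (Artikel, Auftragsposition);
# the association names and cardinalities are hypothetical.
repo = generate_repository(
    ["Artikel", "Auftrag", "Auftragsposition"],
    [("enthaelt", "Auftrag", "Auftragsposition", "1:n"),
     ("Position-Artikel", "Auftragsposition", "Artikel", "m:n")],
)
```

Under these rules, `Auftragsposition` carries the attributes of the 1:n association, while the m:n association appears as a database object in its own right.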


    Fig. 3 Generating the Database

    Fig. 3 shows part of the data model and the result after the transition to the NEXUS runtime environment. In the center of the generated mask, the class Artikel can be seen. In the detail section, the associations Unterteil and Oberteil can be faded in, which depict the two directions of the bill-of-material view, respectively. If one selects the association Auftragsposition, the order positions to which an item has been assigned are displayed.

    3.2 Organizational Structure

    Figure 4 shows the organigram of the exemplary company. As shown on the right-hand side of the figure, the modeled organizational units are depicted as a tree control in CSE-Workflow. The following transition rules have been used:

    - An organizational unit in the organigram becomes an organizational unit in CSE-Workflow.
    - An internal person is depicted as a user in CSE-Workflow.
    - A person type in the organigram determines a role in CSE-Workflow.
    - A position describes a synonym.


    Fig. 4 Generating the Organizational Structure

    3.3 Organization of Workflow

    Fig. 5 shows the modeling of the workflow organization according to the modeling conventions of the CSE-Workflow filter, and the transition of the eEPC to CSE-Workflow. In the process designer of the CSE-Workflow system, events are not depicted. The red-colored ovals stand for activities and correspond to the functions within the eEPC. The first activity is particularly stressed and may not be automated in the workflow.

    It is unusual that application systems supporting the execution of a function are not modeled at the respective function, but at the event triggering the function. One reason for this is that events are not modeled in workflow systems and that application systems are opened via internal workflow triggers at the transitions. In Fig. 5 they are shown as exclamation marks on the lines between the activities. The organizational units are assigned to the functions for whose execution they are responsible.
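The eEPC-to-workflow transition described here can be sketched as follows: events are dropped, functions become activities, and an application system modeled at a triggering event ends up attached to the transition leading into the corresponding activity. All names and the list-based eEPC encoding are hypothetical, not the CSE-Workflow filter's actual data model.

```python
# Illustrative sketch of the eEPC-to-workflow transition: events are not
# depicted, functions become activities, and applications modeled at the
# event triggering a function move onto the incoming transition.

def eepc_to_workflow(chain, apps_at_events):
    """chain: alternating event/function names [e1, f1, e2, f2, ...].
    apps_at_events: dict mapping an event to the application system
    modeled at that event (if any)."""
    events = chain[0::2]
    activities = chain[1::2]            # functions become activities
    transitions = []
    for i, activity in enumerate(activities):
        trigger = events[i]             # the event that triggers this function
        transitions.append({
            "to": activity,
            # the application is re-attached to the transition
            "app": apps_at_events.get(trigger),
        })
    return activities, transitions

# Hypothetical order-handling chain, loosely following the paper's scenario.
acts, trans = eepc_to_workflow(
    ["order received", "check order", "order checked", "confirm order"],
    {"order received": "NEXUS database"},
)
```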


    Fig. 5 Generating the Organization of Workflow

    The next chapter describes the scenario supported by the developed information system.

    3.4 Scenario

    We consider an enterprise that sells computers and computer parts on the Internet. A customer can enter his master data in a form on the homepage of the enterprise and generate an order by entering quantities of the desired items. The generated electronic order automatically initiates the instantiation of a workflow. The employee responsible for order acceptance finds the form completed by the customer in his inbox and copies the data into the NEXUS database generated from the ARIS Toolset. When he double-clicks on that business event, the database is automatically opened with the respective input mask appearing. Having finished his task, the employee sends the work folder containing the order to the next position according to the workflow model. The information systems required for processing the task are likewise opened by double-clicking on the business event. The business event is forwarded from position to position until it has been completely processed. Figure 6 shows the process model modeled with the ARIS Toolset, the Internet form which, when completed, triggers a workflow, the inbox of the employee responsible for order acceptance, and the input mask of the generated enterprise-wide database.
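The folder-routing mechanics of this scenario can be reduced to plain data structures. This is a minimal sketch under stated assumptions: the route, the inbox model, and all function names are invented for illustration and do not describe the CSE-Workflow or NEXUS implementations.

```python
# Hypothetical walk-through of the order-processing scenario above:
# an Internet order instantiates a workflow, and a work folder travels
# from inbox to inbox along the positions of the workflow model.

from collections import deque

# An assumed route of positions; the paper only names "order acceptance".
ROUTE = ["order acceptance", "warehouse", "shipping"]

def submit_order(customer, items):
    """A completed Internet form triggers a workflow: the work folder
    appears in the inbox of the first position on the route."""
    folder = {"customer": customer, "items": items, "done": []}
    inboxes = {pos: deque() for pos in ROUTE}
    inboxes[ROUTE[0]].append(folder)
    return inboxes

def forward(inboxes, position):
    """The employee finishes the task and sends the folder to the next
    position according to the workflow model."""
    folder = inboxes[position].popleft()
    folder["done"].append(position)
    nxt = ROUTE.index(position) + 1
    if nxt < len(ROUTE):
        inboxes[ROUTE[nxt]].append(folder)
    return folder

inboxes = submit_order("A. Customer", {"PC": 1})
folder = forward(inboxes, "order acceptance")  # folder moves to "warehouse"
```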

  • From Business Process Model to Application System 9

    Fig. 6 Order processing with the system developed

    4 Outlook

The development of the project shows that, in the future, models will not only document software but also generate and modify it. Thus the importance of a consistent and complete modeling of enterprises at the level of requirements definition is growing. Since a change to the models automatically generates a change in the underlying software, enterprises are able to support new business processes with application software, or adjust existing business processes, nearly without any delay. Thus, e.g., the instantiation of ad-hoc workflows becomes feasible. New aspects are expected with reverse engineering and reverse customizing. The features of the tool used for modeling are very important: besides usability, the functionality of the database that records the objects of the models, and the interfaces offered to application software programming environments, are decisive for exploiting the maximum potential.

  • M. Jarke, A. Oberweis (Eds.): CAiSE99, LNCS 1626, pp. 11-25, 1999. Springer-Verlag Berlin Heidelberg 1999

    CPAM, A Protocol for Software Composition

    Laurence Melloul, Dorothea Beringer, Neal Sample, Gio Wiederhold

Computer Science Department, Stanford University, Gates Computer Science Building 4A, Stanford, CA 94305, USA

{melloul, beringer, nsample, gio}@db.stanford.edu
http://www-db.stanford.edu/CHAIMS

Abstract. Software composition is critical for building large-scale applications. In this paper, we consider the composition of components that are methods offered by heterogeneous, autonomous and distributed computational software modules made available by external sources. The objective is to compose these methods and build new applications while preserving the autonomy of the software modules. This would decrease the time and cost needed for producing and maintaining the added functionality. In the following, we describe a high-level protocol that enables software composition. CPAM, the CHAIMS Protocol for Autonomous Megamodules, may be used on top of various distribution systems. It offers additional features for supporting module heterogeneity and preserving module autonomy, and also implements several optimization concepts such as cost estimation of methods and partial extraction of results.

    1 Introduction

CPAM, the CHAIMS Protocol for Autonomous Megamodules, is a high-level protocol for realizing software composition. CPAM has been defined in the context of the CHAIMS (Compiling High-level Access Interfaces for Multi-site Software) [1] research project at Stanford University in order to build extensive applications by composing large, heterogeneous, autonomous, and distributed software modules.

Software modules are large if they are computation intensive (computation time may range from seconds in the case of information requests to days in the case of simulations) and/or data intensive (the amount of data cannot be neglected during transmissions). They are heterogeneous if they are written in different languages (e.g., C++, Java), use different distribution protocols (e.g., CORBA [2], RMI [3], DCE [4], DCOM [5]), or run on diverse platforms (e.g., Windows NT, Sun Solaris, HP-UX). Modules are autonomous if they are developed and maintained independently of one another, and independently of the composer who composes them. Finally, software modules are distributed when they are not located on the same server and may be used by more than one client. We will call modules with these characteristics megamodules, and the methods they offer services.

In this paper, we focus on the composition of megamodules for two reasons:

- since service providers are independent and geographically distant, software modules are autonomous and most likely heterogeneous and distributed;

- because of cost-effectiveness, composition is critical when services are large.



Megamodule composition consists of remotely invoking the services of the composed megamodules in order to produce new services. Composition differs from integration in the sense that it preserves megamodule autonomy. Naturally, the assumption is that megamodule providers are eager to offer their services. This is a reasonable assumption if we consider the business interest that would derive from the use of services, such as payment of fees or the reduction of customer service costs.

Composition of megamodules is becoming crucial for the software industry. As business competition and software complexity increase, companies have to shorten their software cycle (development, testing, and maintenance) while offering ever more functionality. Because of high software development and integration costs, they are being forced to build large-scale applications by reusing external services and composing them. Global information systems such as the Web and global business environments such as electronic commerce foreshadow a software development environment where developers access and reuse services offered on the Web, combine them, and produce new services which, in turn, are accessed through the Web.

Existing distribution protocols such as CORBA, RMI, DCE, or DCOM allow users to compose software with different legacy codes, but only with CORBA, RMI, DCE, or DCOM as the distribution protocol underneath. The Horus protocol [6] composes heterogeneous protocols in order to add functionality at the protocol level only. Enterprise Resource Planning (ERP) systems, such as SAP R/3, BAAN IV, and PeopleSoft, integrate heterogeneous and initially independent systems but do not preserve software autonomy. None of these systems simultaneously supports heterogeneity and preserves software autonomy during the process of composition in a distributed environment.

CPAM has been defined for accomplishing megamodule composition. In the following, we describe how CPAM supports megamodule heterogeneity (section 2), how it preserves megamodule autonomy (section 3), and how it enables optimized composition of large-scale services (section 4). We finally explain how to use CPAM, and provide an illustration of a client doing composition in compliance with the CPAM protocol (section 5).

    2 CPAM Supports Megamodule Heterogeneity

Composition of heterogeneous and distributed software modules has several constraints: it has to support heterogeneous data transmission between megamodules as well as the diverse distribution protocols used by megamodules.

    2.1 Data Heterogeneity

In order for megamodules to exchange information, data needs to be in a common format (a separate research project is exploring ways to map different ontologies [7]). Also, data has to be machine and architecture independent (16-bit architecture versus 32-bit architecture, for instance), and transferred between megamodules regardless of the distribution protocol at either end (source or destination). For these reasons, the


current version of CPAM requires data to be ASN.1 structures encoded using BER rules [8]. With the ASN.1/BER encoding rules:

1. Simple data types as well as complex data types can be represented as ASN.1 structures,

2. Data can be encoded in a binary format that is interpreted on any machine where ASN.1 libraries are installed,

3. Data can be transported through any distribution system.
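As a concrete illustration of the tag-length-value layout behind these three points, the sketch below encodes the ASN.1 INTEGER type using BER. This is a minimal Python sketch for illustration only (CHAIMS itself relies on full ASN.1 libraries), and it assumes the definite short length form, i.e. content shorter than 128 octets:

```python
def ber_encode_integer(value: int) -> bytes:
    """Encode an int as an ASN.1 INTEGER in BER: tag 0x02, length, content.

    Minimal sketch; assumes the definite short length form (content < 128
    octets), which covers typical scalar parameters.
    """
    # Minimal two's-complement content octets (the +8 reserves a sign bit).
    n_octets = max(1, (value.bit_length() + 8) // 8)
    content = value.to_bytes(n_octets, byteorder="big", signed=True)
    return bytes([0x02, len(content)]) + content


def ber_decode_integer(blob: bytes) -> int:
    """Inverse of ber_encode_integer for the short definite form."""
    assert blob[0] == 0x02, "not an ASN.1 INTEGER"
    return int.from_bytes(blob[2:2 + blob[1]], byteorder="big", signed=True)


print(ber_encode_integer(300).hex())   # prints: 0202012c
```

Because the resulting byte string is self-describing, it can be handed to any distribution system as an opaque payload, which is exactly the property CPAM exploits.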

It has not been possible to use other definition languages such as the CORBA Interface Definition Language or Java classes to define data types, because these definitions respectively require that the same CORBA ORB or the RMI distribution protocol be supported at both ends of the transmission.

    Fig. 1. Data transfer, Opaque data

    2.2 Opaque Data

Because ASN.1 data blocks are encoded in binary format, we refer to them as BLOBs (Binary Large OBjects). BLOBs being opaque, they are not readable by CPAM. A client doing composition only (the client gets and transmits the data, with no computation in between) does not need to interpret the data it receives from a megamodule, or sends to another megamodule. Therefore, as shown in Fig. 1, before being transported, data is encoded in the source megamodule; it is then sent to the client, where it remains a BLOB, and gets decoded only when it reaches the destination megamodule.

A client that has knowledge of the megamodule definition language could convert the BLOBs into their corresponding data types and read them. It would then become its responsibility to encode the data before sending it to another megamodule.

    2.3 Distribution Protocol Heterogeneity

Both data transportation and client-server bindings are dependent on the distribution system used. CPAM is a high-level protocol that is implemented on top of existing

[Fig. 1 labels: the source megamodule encodes the data; at the client the data remains encoded; the destination megamodule decodes it.]


distribution protocols. Since its specifications may be implemented on top of more than one distribution protocol within the composed application, CPAM has to support data transmissions and client-server connections across different distribution systems.

We mentioned that encoded ASN.1 data can be transferred between the client and the megamodules independently of the distribution protocols used at both ends. Regarding client-server connections, CPAM assumes that the client is able to simultaneously support the various distribution systems of the servers it wishes to talk to. The CHAIMS architecture, along with the CHAIMS compiler [9], enables the generation of such a client. This process is described in the next section. Currently, in the context of CHAIMS, a client can simultaneously support the following protocols: CORBA, RMI, local C++ and local Java ("local" qualifying a server which is not remote).

    Fig. 2. The CHAIMS architecture

    2.4 The CHAIMS Architecture

Figure 2 describes the CHAIMS architecture. In CHAIMS, the client program is the megaprogram, and the compiled client program is the CSRT (Client Side Run Time). Also, server information repositories are currently merged into one unique CHAIMS repository.

The distribution protocol used during a specific communication between the client and a remote server is that of the server itself, and must be supported by the client. In the context of CHAIMS, the composer writes a simple megaprogram in CLAM (CHAIMS Language for Autonomous Megamodules) [10], a composition-only


language. This program contains the sequence of invocations to the megamodules the composer wishes to compose (an example of a megaprogram is given in section 5.3). The CHAIMS compiler parses the megaprogram and generates the whole client code necessary to simultaneously bind to the various servers (the CSRT). Server specifications such as the required distribution protocol are contained in the CHAIMS repository and are accessible to the CHAIMS compiler.

Both the client and the servers have to follow the CPAM specifications. As noted in Fig. 2, megamodules that are not CPAM compliant need to be wrapped. The process of wrapping is described in section 5.2.

    3 CPAM Preserves Megamodule Autonomy

Besides being heterogeneous and distributed, megamodules are autonomous. They are developed and maintained independently from the composer, who therefore has no control over them. How can the composer be aware of all services offered by megamodules, and of the latest versions of these services, without compromising megamodule autonomy? Also, how do the connections between the client and the server take place? Which of the client or the server controls the connection? After addressing these two points, we briefly describe several consistency rules that ensure offered services are not updated by the server without the client being aware of it.

    3.1 Information Repository

Composition cannot be achieved without knowing what services are offered and how to use them. The composer could refer to an application user's guide to learn the purposes of the available services. He/she could also refer to the application programmer's guide to get the implementation details about the services. Nonetheless, the composer would only get static information such as service descriptions and method input/output parameter names and types. Since megamodules are autonomous and distributed, it is compulsory to also retain dynamic information about the services, such as the names of the machines where the services to be composed are located.

CPAM requires that the necessary megamodule information, both static and dynamic, be gathered into one information repository. Each megamodule provider is responsible for making such a repository available to external users, and for keeping the information up to date. It is also the megamodule provider's responsibility to actually offer the services at the quality it advertises.

Information Repository Content. The information repository has to include the following information:

1. The logical name of the service (i.e., megamodule), along with the machine location and the distribution protocol used, in order for the client to bind to the server,

2. The names of the services offered (top-level methods), along with the names and nature (input or output) of their parameters, in order for the client to make invocations or preset parameters before invocation.

Scope of Parameter Names. The scope of parameter names is not restricted to the method where the parameters are used, but extends to the whole megamodule. For megamodules offering more than one method, this implies that if two distinct methods have the same parameter name in their list of parameters, any value preset for this parameter will apply to any use of this parameter in the megamodule. CPAM enlarges the scope of parameter names in order to offer the possibility of presetting all parameters of a megamodule using one call only in the client, hence minimizing data flow (see section 4.2).

    3.2 Establishing and Terminating a Connection with a Megamodule

Another issue when composing autonomous megamodules is the ownership of the connection between a client and a server. Autonomous servers do not know when a client wishes to initiate or terminate a connection. In CPAM, clients are responsible for making a connection to a megamodule and terminating it. Nonetheless, servers must be able to handle simultaneous requests from various clients, and must be started before such requests arrive. Certain distribution protocols like CORBA include an internal timer that stops a server execution process if no invocations occur after a set time period, and instantly starts it when a new invocation arrives.

CPAM defines two primitives in order for a client to establish or terminate a connection to a megamodule. These are SETUP and TERMINATEALL. SETUP tells the megamodule that a client wants to connect to it; TERMINATEALL notifies the megamodule that the client will no longer use its services (the megamodule kills any ongoing invocations initiated by this client). If for any reason a client does not terminate a connection to a megamodule, we can assume the megamodule itself will do it after a time-out, and a new SETUP will be required from the client before any future invocation.
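A server-side sketch of this connection life cycle might look as follows. The class name, the connection table, and the reaping policy are assumptions, since CPAM specifies only the two primitives themselves:

```python
import time


class MegamoduleServer:
    """Hypothetical server-side handling of SETUP / TERMINATEALL."""

    def __init__(self, timeout_seconds=3600.0):
        self.timeout = timeout_seconds
        self.connections = {}               # client id -> last-seen time

    def SETUP(self, client_id):
        # The client announces that it wants to use this megamodule.
        self.connections[client_id] = time.monotonic()
        return client_id                    # handle for later primitives

    def TERMINATEALL(self, client_id):
        # The client will no longer use any service; a full server would
        # also kill this client's ongoing invocations here.
        self.connections.pop(client_id, None)

    def reap_stale_connections(self):
        # A client that never called TERMINATEALL is dropped after the
        # time-out; it must issue a new SETUP before future invocations.
        now = time.monotonic()
        for cid, last_seen in list(self.connections.items()):
            if now - last_seen > self.timeout:
                self.connections.pop(cid)
```

The time-out value and the moment at which stale connections are reaped are deliberately left open here, just as CPAM leaves them to the megamodule provider.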

    3.3 Consistency

Megamodules being autonomous, they can update services without clients or composers being aware of the modifications brought to the services. The best way for a client to become aware of updates in the server is still under investigation (one option could be to have such changes mentioned in the repository). Nevertheless, it should not be the responsibility of the megamodule provider to directly notify clients and composers of service changes, since we do want to preserve megamodule autonomy.

Once the composer knows what modifications were brought to the services, he/she can upgrade the client program accordingly. In CHAIMS, the composer upgrades the megaprogram, which the CHAIMS compiler recompiles in order to generate the updated client program.

It is the responsibility of the service provider not to update the server while there are still clients connected to it. The server must first ask clients to disconnect, or wait for their disconnection, before upgrading megamodules.


The information repository and the connection and consistency rules ensure that server autonomy is preserved and that clients are able to use the offered services.

    4 CPAM Enables Efficient Composition of Large-Scale Services

CPAM makes it possible to compose services offered by heterogeneous, distributed and autonomous megamodules. Services being large, an even more interesting objective for a client is to efficiently compose these services. CPAM enables efficient composition in the following two ways:

- Invocation sequence optimization,
- Data flow minimization between megamodules [11].

    4.1 Invocation Sequence Optimization

Because the invocation cost of a large service is a priori high and services are distributed, a random composition of services can be very expensive. The invocation sequence has to be optimized. CPAM defines its own invocation structure in order to allow parallelism and easy invocation monitoring. These capabilities, added to the possibility of estimating a method's cost prior to its invocation, enable optimization of the invocation sequence in the client.

Invocation Structure in CPAM. A traditional procedure call consists of invoking a method and getting its results back in a synchronous way: the calling client waits during the procedure call, and the overall structure of the client program remains simple. In contrast, an asynchronous call avoids client waits but makes the client program more complex, as it has to be multithreaded. CPAM splits the traditional call statement into four synchronous remote procedure calls that make the overall call behave asynchronously while keeping the client program sequential and simple. These procedure calls have also enabled the implementation of interesting optimization concepts in CPAM, such as partial extraction and progress monitoring.

The four procedure calls are INVOKE, EXAMINE, EXTRACT, and TERMINATE:

1. INVOKE starts the execution of a method applied to a set of input parameters. Not every input parameter of the method has to be specified, as the megamodule takes client-specific values or general hard-coded default values for missing parameters (see hierarchical setting of parameters, section 4.2). An INVOKE call returns an invocation identifier, which is used in all subsequent operations on this invocation (EXAMINE, EXTRACT, and TERMINATE).

2. The client checks if the results of an INVOKE call are ready using the EXAMINE primitive. EXAMINE returns two pieces of information: an invocation status and an invocation progress. The invocation status can be any of DONE, NOT_DONE, PARTIAL, or ERROR. If it is either PARTIAL or DONE, then respectively part or all of the results of the invocation are ready and can be extracted by the client. Invocation progress semantics are megamodule specific. For instance, progress information could be quantitative and describe the degree of completion of an INVOKE call, or qualitative (e.g., specify the degree of resolution a first round of image processing would give).

3. The results of an INVOKE call are retrieved using the EXTRACT primitive. Only the parameters specified as input to EXTRACT are extracted, and the client extracts results only when it wishes to do so. CPAM does not prevent a client from repeatedly extracting an identical or a different subset of results.

4. TERMINATE is used to tell a megamodule that the client is no longer interested in a specific invocation. TERMINATE is necessary because the server has no other way to know whether an invocation will be referred to by the client in the future. In case the client is no longer interested in an invocation's results, TERMINATE makes it possible for the server to abort an ongoing execution. In case the invocation has generated persistent changes in the server, it is the responsibility of the megamodule to preserve consistency.
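The effect of this call split can be simulated in-process. In the Python sketch below, the megamodule stub and its method arguments are hypothetical stand-ins for remote services; only the four primitive names come from CPAM. Note how two synchronous computations run in parallel because INVOKE returns immediately and EXTRACT is deferred:

```python
import threading
import time


class MegamoduleStub:
    """Toy in-process megamodule exposing the four CPAM call primitives."""

    def __init__(self):
        self._results = {}
        self._next_id = 0

    def INVOKE(self, method, **params):
        # Start the method in its own thread; return an invocation id at once.
        inv_id = self._next_id
        self._next_id += 1

        def run():
            time.sleep(0.01)                    # stands in for real work
            self._results[inv_id] = method(**params)

        threading.Thread(target=run).start()
        return inv_id

    def EXAMINE(self, inv_id):
        # (status, progress); PARTIAL and ERROR are omitted in this sketch.
        return ("DONE", 100) if inv_id in self._results else ("NOT_DONE", 0)

    def EXTRACT(self, inv_id):
        return self._results[inv_id]

    def TERMINATE(self, inv_id):
        # The client is no longer interested in this invocation.
        self._results.pop(inv_id, None)


mm = MegamoduleStub()
a = mm.INVOKE(lambda x: x * x, x=7)             # both calls return before
b = mm.INVOKE(lambda x: x + 1, x=7)             # any result exists
while mm.EXAMINE(a)[0] != "DONE" or mm.EXAMINE(b)[0] != "DONE":
    time.sleep(0.005)                           # the client polls, never blocks
print(mm.EXTRACT(a), mm.EXTRACT(b))             # prints: 49 8
mm.TERMINATE(a)
mm.TERMINATE(b)
```

The client program stays a straight-line sequence of calls, which is the simplicity argument made below.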

Parallelism and Invocation Monitoring. The benefits of having the call statement split into the four primitives mentioned above are parallelism, simplicity, and easy invocation monitoring:

- Parallelism: thanks to the separation between INVOKE and EXTRACT in the procedure call, the methods of different megamodules can be executed in parallel, even if they are synchronous, the only restrictions being data flow dependencies. The client program initiates as many invocations as desired and begins collecting results when it needs them. Figure 3 illustrates the parallelism that can be induced on synchronous calls using CPAM. Similar parallelism could also be obtained with asynchronous methods.

    Fig. 3. Split of the procedure call in CPAM and parallelism on synchronous calls

- Simplicity: the client program using CPAM consists of sequential invocations of CPAM primitives, and is simple. It does not have to manage any callbacks of


asynchronous calls from the servers (the client is the one which initiates all the calls to the servers, including the ones for getting invocation results).

- Easy invocation monitoring:

- Progress monitoring: a client can check a method's execution progress (EXAMINE), and abort a method execution (TERMINATE). Consider the case where a client has the choice between megamodules offering the same service and arbitrarily chooses one of them for invocation. EXAMINE allows the client to confirm or revoke its choice, perhaps even ending an invocation if another one seems more promising.

- Partial extraction: a client can extract a subset of the results of a method. CPAM also allows progressive extraction: the client can incrementally extract results. This is feasible if the megamodule makes a result available as soon as its computation is completed (and before the computation of the next result is), or as soon as it becomes significantly more accurate. Incremental extraction could also be used for terminating an invocation as soon as its results are satisfying, or conversely for verifying the adequacy of large method invocations and possibly terminating them as soon as results are not satisfying.

- Ongoing processes: separating method invocation from result extraction and method termination enables clients to monitor ongoing processes (processes that continuously compute or complete results, such as weather services).

With very few primitives to learn, the composer can write simple client programs while still benefiting from parallelism and easy invocation monitoring. CPAM offers one more piece of functionality in order to optimize the invocation sequence in the client program: invocation cost estimation.

Cost Estimation. Estimating the cost of a method prior to its invocation increases the probability of making the right invocation at the right time. This is enabled in CPAM through the ESTIMATE primitive. Due to the autonomy of megamodules, the client has no knowledge of or influence over the availability of resources. The ESTIMATE primitive, which is provided by the server itself, is the only way a client can get the most accurate method performance and cost information.

A client asks a megamodule for a method cost estimation and then decides whether or not to make the invocation based upon the estimate received. ESTIMATE is very valuable in the case of identical or similar large services offered by more than one megamodule. Indeed, for expensive methods offered by several megamodules, it can be very fruitful to first get an estimate of the invocation cost before choosing one of the methods. Of course, there is no guarantee on the estimate received (we can assume that a service whose invocations are not in concordance with the estimates previously provided to a client will not be reused by the client).

Cost estimates are expressed in CPAM as fees (the amount of money to pay to use a service, in electronic commerce for instance), time (the duration of a method execution) and/or data volume (the amount of data resulting from a method invocation). Since the last two factors are highly run-time dependent, their estimation should happen at run-time, as close as possible to the time the method could be invoked. Other application-specific parameters like server location, quality of service, and accuracy of the estimate could be added to the output estimate list in the server (and in the information repository) without changing the CPAM specifications.
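A client-side selection step based on ESTIMATE might look like the sketch below. The provider names, the ESTIMATE return shape (fee, seconds, data volume), and the weighting are illustrative assumptions; CPAM leaves the selection policy entirely to the client:

```python
def choose_megamodule(estimates):
    """Pick the provider with the lowest combined cost estimate.

    `estimates` maps a (hypothetical) megamodule name to the tuple a
    client obtained from that megamodule's ESTIMATE primitive:
    (fee, execution_seconds, result_volume_bytes).
    """
    def combined_cost(est):
        fee, seconds, volume = est
        # Example weighting only; a real client would tune these factors.
        return fee + 0.10 * seconds + 1e-6 * volume

    return min(estimates, key=lambda name: combined_cost(estimates[name]))
```

Since estimates carry no guarantee, a client could additionally keep its own record of how well each provider's past estimates matched reality and fold that into the weighting.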


Parallelism, invocation estimates and invocation examinations are very helpful functions of CPAM which, when combined, give enough information and flexibility to obtain an optimized sequence of invocations at run-time. Megamodule code should return pertinent information with the ESTIMATE and EXAMINE primitives in order for a client to fully benefit from CPAM through consistent estimation and control.

Another factor for optimized composition concerns data flow between megamodules.

    4.2 Data Flow Minimization between Megamodules

Partial extraction enables clients to reduce the amount of data returned by an invocation. CPAM also makes it possible to avoid parameter redundancy when calling INVOKE, thanks to parameter presetting and hierarchical setting of parameters.

Presetting Parameters. CPAM's SETPARAM primitive sets method parameters and global variables before a method is invoked. For a client which invokes a method with the same parameter value several times consecutively, or invokes several methods which share a subset of parameter names with the same values, it becomes cost-effective not to transmit the same parameter values repeatedly. Let us recall that megamodules are very likely to be data intensive. Also, in the case of methods which have a very large number of parameters, only a few of which are modified at each call (very common in statistical simulations), the SETPARAM primitive becomes very advantageous. Finally, presetting parameters is useful for setting a specific context before estimating the cost of a method.

GETPARAM, the dual primitive, returns the client-specific settings or default values of the parameters and global variable names specified in its input parameter list.

Hierarchical Setting of Parameters. CPAM establishes a hierarchical setting of parameters within megamodules (see Fig. 4). A parameter's default value (most likely hard-coded) defines the first level of parameter settings within the megamodule. The second level is the client-specific setting (set by SETPARAM). The third level corresponds to the invocation-specific setting (a parameter value provided through one specific invocation, by INVOKE). Invocation-specific settings override client-specific settings for the duration of the invocation, and client-specific settings override general default values for the duration of the connection. When a method is invoked, the megamodule takes the invocation-specific settings for all parameters for which the invocation supplies values; for all other parameters, the megamodule takes the client-specific settings if they exist, and the megamodule's general default values otherwise. For this reason, CPAM requires that megamodules provide default values for all parameters or global variables they contain.
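The three-level lookup can be written down directly. In this Python sketch the function name and the dict-based representation are assumptions, but the precedence order is the one CPAM prescribes:

```python
def resolve_parameters(defaults, client_settings, invocation_settings):
    """Resolve parameter values by CPAM's hierarchy:
    invocation-specific > client-specific (SETPARAM) > megamodule default.

    CPAM requires a default for every parameter, so `defaults` is complete.
    """
    resolved = dict(defaults)             # level 1: hard-coded defaults
    resolved.update(client_settings)      # level 2: SETPARAM, per connection
    resolved.update(invocation_settings)  # level 3: INVOKE, per invocation
    return resolved
```

For instance, with defaults {"a": 1, "b": 2, "c": 3}, a SETPARAM of {"b": 20} and an INVOKE supplying {"c": 300}, the method runs with {"a": 1, "b": 20, "c": 300}.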


    Fig. 4. Hierarchical setting of parameters

In conclusion, a client does not need to specify all input data or global variables used in a method in order to invoke that method, nor does it need to repeatedly transmit the same data for all method invocations which use the same parameter values. Also, a client need not retrieve all available results. This reduces the amount of data transferred between megamodules.

Megamodules being large and distributed, invocation sequence optimization and data flow minimization are necessary for efficient composition.

    5 How to Use CPAM

We have so far discussed the various CPAM primitives necessary to do composition. Still, we have not yet described the syntax of the primitives, nor the constraints a composer has to follow in order to write a correct client program. Another point concerns the service provider: what does he/she need to do in order to convert a module which is not CPAM compliant into a CPAM-compliant megamodule that supports the CPAM specifications?

The CPAM primitives syntax is fully described on the CHAIMS Web site at http://www-db.stanford.edu/CHAIMS. In this section, we describe the primitive ordering constraints and the CHAIMS wrapper templates, and provide a client program example which complies with CPAM and is written in CLAM.


    5.1 Primitives Ordering Constraints

CPAM primitives cannot be called in an arbitrary order; they have to follow two constraints:

- All primitives apart from SETUP must be preceded by a connection to the megamodule through a call to SETUP (which has not yet been terminated by TERMINATEALL),

- The invocation referred to by EXAMINE, EXTRACT, or TERMINATE must be preceded by an INVOKE call (which has not yet been terminated by TERMINATE).
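These two constraints can be checked mechanically over a call trace. The sketch below is an assumption-laden illustration (the function name and the trace encoding are invented, and it tracks a single megamodule connection); it returns whether a sequence of primitive calls is legal:

```python
def is_legal_trace(calls):
    """Check a trace of (primitive, invocation_id) pairs against the two
    CPAM ordering constraints for one megamodule connection."""
    connected = False
    live_invocations = set()
    for primitive, ref in calls:
        if primitive == "SETUP":
            connected = True
        elif not connected:
            return False                      # constraint 1 violated
        elif primitive == "INVOKE":
            live_invocations.add(ref)
        elif primitive in ("EXAMINE", "EXTRACT"):
            if ref not in live_invocations:
                return False                  # constraint 2 violated
        elif primitive == "TERMINATE":
            if ref not in live_invocations:
                return False
            live_invocations.remove(ref)
        elif primitive == "TERMINATEALL":
            connected = False
            live_invocations.clear()
        # other primitives (ESTIMATE, SETPARAM, ...) need only constraint 1
    return True
```

For example, a trace SETUP, INVOKE, EXAMINE, EXTRACT, TERMINATE, TERMINATEALL is legal, whereas an EXTRACT on an invocation that was never started (or already terminated) is not.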

    Figure 5 summarizes CPAM primitives and the ordering relations.

    Fig. 5. Primitives in CPAM and invocation ordering

    5.2 The CHAIMS Wrapper

In case a server does not comply with the CPAM specifications, it has to be wrapped in order to use the CPAM protocol. The CHAIMS wrapper templates allow a megamodule to become CPAM compliant with a minimum of additional work.

The CHAIMS wrapper templates are currently implemented as a C++ or Java object which serves as a middleman between the network and non-CPAM-compliant servers. They implement the CPAM specifications in the following way:

1. Mapping of methods [12] [13] and parameters: the wrapper maps methods specified in the information repository to one or more methods of the legacy module. It also maps parameters to ASN.1 data structures, preserving default values assigned in the legacy modules (or adding them if they were not assigned).

2. Threading of invocations: to ensure parallelism and respect asynchrony in the legacy code, the CHAIMS wrapper spawns a new thread for each invocation.


3. Generation of internal data structures to handle client invocations and connections, and to support the hierarchical setting of parameters: each call to SETUP generates the necessary data structures to store client- and invocation-related information in the wrapper. Such information includes client-specific preset values, and the status, progress and results of each invocation. The generated data structures are deleted when a call to TERMINATEALL occurs.

4. Implementation of the ESTIMATE primitive for cost estimation: for each method whose cost estimation is not provided by the server, the ESTIMATE primitive by default returns an average of the costs of the previous calls of that method. It could also include dynamic information about the server and the network, such as the server machine load, network traffic, etc.

5. Implementation of the EXAMINE primitive for invocation monitoring: by default, only the status field is returned. For the progress information to be set in the wrapper, the server has to supply meaningful information.

6. Implementation of all other CPAM primitives: SETUP, GET/SETPARAM, INVOKE, EXTRACT, TERMINATE, and TERMINATEALL.

The current CHAIMS wrapper templates automatically generate the code to ensure points 2 to 6. Only requirement 1 needs manual coding (except for BER encoding/decoding, which is automatically done by the ASN.1 libraries).

    5.3 Example of a Client Using CPAM

    A successful utilization of CPAM for realizing composition is the Transportation example implemented within the CHAIMS system. The example consists in finding the best way of transporting goods between two cities. The composer uses services from five heterogeneous and autonomous megamodules. The client program is written in CLAM, and the CSRT generated by the CHAIMS compiler is in compliance with CPAM. A second example is under implementation. It computes the best design model for an aircraft system, and includes optimization functionality such as cost estimation, incremental result extraction and invocation progress examination.

    Below is given a simplified version of the Transportation megaprogram (Fig. 6). The heterogeneity and distribution characteristics of the composed megamodules are specified as follows: locality (Remote or Local), language, and protocol.

    6 Conclusion

    CPAM is a high-level protocol for composing megamodules. It supports heterogeneity especially by transferring data as encoded ASN.1 structures, and preserves megamodule autonomy by collecting service information from an information repository, and by subsequently using the generic invocation primitive of CPAM in order to INVOKE services.

    Most importantly, CPAM enables efficient composition of large-scale services by optimizing the invocation sequence (parallelism, invocation monitoring, cost estimation), and minimizing data flow between megamodules (presetting of parameters, hierarchical setting of parameters, partial extraction). As CPAM is currently focused on composition, it does not provide support for recovery or security.

    24 Laurence Melloul et al.

    These services could be obtained by orthogonal systems or by integrating CPAM into a larger protocol.

    Transportationdemo
    BEGINCHAIMS
    io = SETUP ("io")              // Remote, Java, CORBA ORBACUS
    math = SETUP ("MathMM")        // Local, Java
    am = SETUP ("AirMM")           // Remote, C++, CORBA ORBIX
    gm = SETUP ("GroundMM")        // Remote, C++, CORBA ORBIX
    ri = SETUP ("RouteInfoMM")     // Remote, C++, CORBA ORBIX

    // Get type and default value of the city pair parameter
    (cp_var = CityPair) = ri.GETPARAM()

    // Ask the user to confirm/modify the source and destination cities
    ioask = io.INVOKE ("ask", label = "which cities", data = cp_var)
    WHILE ( ioask.EXAMINE() != DONE ) {}
    (cities_var = Cities) = ioask.EXTRACT()

    // Compute costs of the route by air, and by ground, in parallel
    acost = am.INVOKE ("GetAirTravelCost", CityPair = cities_var)
    gcost = gm.INVOKE ("GetGdTravelCost", CityPair = cities_var)

    // Make other invocations (e.g., check weather).

    // Extract the two cost results
    WHILE ( acost.EXAMINE() != DONE ) {}
    (ac_var = Cost) = acost.EXTRACT()
    WHILE ( gcost.EXAMINE() != DONE ) {}
    (gc_var = Cost) = gcost.EXTRACT()

    // Compare the two costs
    lt = math.INVOKE ("LessThan", value1 = ac_var, value2 = gc_var)
    WHILE ( lt.EXAMINE() != DONE ) {}
    (lt_bool = Result) = lt.EXTRACT()

    // Display the smallest cost
    IF ( lt_bool == TRUE ) THEN
    { iowrite = io.INVOKE ("write", data = ac_var) }
    ELSE
    { iowrite = io.INVOKE ("write", data = gc_var) }

    am.TERMINATE()
    ri.TERMINATE()
    gm.TERMINATE()
    io.TERMINATE()
    math.TERMINATE()
    ENDCHAIMS

    Fig. 6. CLAM megaprogram to calculate the best route between two cities

    In the future, we plan to enable even more optimization through automated scheduling of composed services that use the CPAM protocol within the CHAIMS system. Automation, while not disabling optimizations based on domain expertise, will relieve the composer of parallelism and lower-level scheduling tasks. In a large-scale and distributed environment, resources are likely to be relocated, and their available capacity depends on aggregate usage. Invocation scheduling and data flow optimization need to take such constraints into account. The CPAM protocol can give the compiler or the client program sufficient information to enable automated scheduling of composed software at compile time and, more significantly, at run time.

    References

    1. G. Wiederhold, P. Wegner and S. Ceri: "Towards Megaprogramming: A Paradigm for Component-Based Programming"; Communications of the ACM, 35(11), 1992, pp. 89-99

    2. J. Siegel: "CORBA Fundamentals and Programming"; Wiley, New York, 1996

    3. C. Szyperski: "Component Software: Beyond Object-Oriented Programming"; Addison-Wesley and ACM Press, New York, 1997

    4. W. Rosenberry, D. Kenney and G. Fisher: "Understanding DCE"; O'Reilly, 1994

    5. D. Platt: "The Essence of COM and ActiveX"; Prentice-Hall, 1997

    6. R. van Renesse and K. Birman: "Protocol Composition in Horus"; TR95-1505, 1995

    7. J. Jannink, S. Pichai, D. Verheijen and G. Wiederhold: "Encapsulation and Composition of Ontologies"; submitted

    8. "Information Processing -- Open Systems Interconnection -- Specification of Abstract Syntax Notation One" and "Specification of Basic Encoding Rules for Abstract Syntax Notation One"; International Organization for Standardization and International Electrotechnical Committee, International Standards 8824 and 8825, 1987

    9. L. Perrochon, G. Wiederhold and R. Burback: "A Compiler for Composition: CHAIMS"; Fifth International Symposium on Assessment of Software Tools and Technologies (SAST '97), Pittsburgh, June 3-5, 1997

    10. N. Sample, D. Beringer, L. Melloul and G. Wiederhold: "CLAM: Composition Language for Autonomous Megamodules"; Third International Conference on Coordination Models and Languages (COORDINATION '99), Amsterdam, April 26-28, 1999

    11. D. Beringer, C. Tornabene, P. Jain and G. Wiederhold: "A Language and System for Composing Autonomous, Heterogeneous and Distributed Megamodules"; DEXA International Workshop on Large-Scale Software Composition, Vienna, Austria, August 28, 1998

    12. A. D. Birrell and B. J. Nelson: "Implementing Remote Procedure Calls"; ACM Transactions on Computer Systems, 2(1), 1984, pp. 39-59

    13. ISO: "ISO Remote Procedure Call Specification"; ISO/IEC CD 11578 N6561, 1991

    A Process-Oriented Approach to Software Component Definition

    Florian Matthes, Holm Wegner, and Patrick Hupe

    Software Systems Institute (STS), Technical University Hamburg-Harburg, Germany
    {f.matthes,ho.wegner,pa.hupe}@tu-harburg.de

    Abstract. Commercial software component models are frequently based on object-oriented concepts and terminology with appropriate binding, persistence and distribution support. In this paper, we argue that a process-oriented view on cooperating software components, based on the concepts and terminology of a language/action perspective on cooperative work, provides a more suitable foundation for the analysis, design and implementation of software components in business applications. We first explain the relationship between data-, object- and process-oriented component modeling and then illustrate our process-oriented approach to component definition using three case studies from projects with German software companies. We also report on our experience gained in developing a class framework and a set of tools to assist in the systematic process-oriented development of business application components. This part of the paper also clarifies that a process-oriented perspective fits well with today's object-oriented language and system models.

    1 Introduction and Rationale

    Organizations utilize information systems as tools to support cooperative activities of employees within the enterprise. Classical examples are back-end information systems set up to support administrative processes in banks and insurance companies, or processes in enterprise resource planning.

    Driven by various factors (availability of technology, group collaboration issues and organizational needs), there is a strong demand for more flexible and decentralized information system architectures which are able to support

    - cooperation of humans over time (persistence, concurrency and recovery),
    - cooperation of humans in space (distribution, mobility, on/offline users), and
    - cooperation of humans in multiple modalities (batch processing, transaction processing, asynchronous email communication, workflow-style task assignment, ad-hoc information sharing).

    Another crucial observation is the fact that cooperation support can no longer be restricted to employees (intra-organizational workflows) but also has to encompass inter-organizational workflows involving customers, suppliers, tax authorities, banks, etc.

    M. Jarke, A. Oberweis (Eds.): CAiSE'99, LNCS 1626, pp. 26-40, 1999.
    © Springer-Verlag Berlin Heidelberg 1999


    The objective of our research is to identify abstractions and architectural patterns which help to design, construct and maintain software components in such a cooperative information system environment [2, 3].

    In Sect. 2, we argue that the successful step in information system engineering from data-oriented to object-oriented information system modeling should be followed by a second step from object-oriented to process-oriented system engineering. Only a process-oriented perspective allows software architects and organizations to identify and to reason about actors, goals, cooperations, commitments and customer-performer relationships, which are crucial in a world of constant change to keep the organizational objectives and the objectives of the supporting information systems aligned.

    In Sect. 3, we illustrate our process-oriented approach to component definition using three case studies from projects with German software companies. Details of the underlying process and system model are given in Sect. 4 and 5.

    2 Approaches to Component Definition in Information Systems

    In this section, we briefly review the evolution of information system architectures to highlight the benefits of a process-oriented approach to component definition. Figure 1 summarizes the main result of this section:

    [Figure 1 shows three component layers (data, objects, processes), each exposed through its own interface style: a data interface (SQL DDL / EDI / ...), an object interface (CORBA IDL / COM / JavaBean / ...), and a process interface (conversation specification / workflow schema).]

    Fig. 1. Three Approaches to Component Definition in Business Information Systems

    Interaction between system components in an information system can be understood at three levels, namely the levels of data access, object interaction, and process coupling.

    At each level, interfaces between system components should be declared in an abstract, system-independent syntax which also provides a basis for the systematic implementation of vendor-independent middleware. Abstract concepts of the interface language are mapped to concrete implementation concepts of the participating system components. A higher-level interface language includes concepts of the lower-level interface language but often imposes additional restrictions on the use of these concepts. For example, CORBA IDL provides data attributes and attribute types similar to record attributes and SQL domains, but also provides mechanisms for data encapsulation at the object level.

    2.1 Data-Oriented Component Definition

    Database research and development focused on the entities and relationships of an information system and led to the development of conceptual, logical and physical data models, generic systems and tools, as well as software development processes to support the analysis, design and efficient implementation of (distributed) data components.

    Today, virtually all organizations have conceptual models of the information available in their back-end information systems and systematic mechanisms to develop and change applications to satisfy new information needs.

    For information stored in relational databases, SQL as "the intergalactic dataspeak" [13] provides both a language for the exchange of vendor-independent data descriptions (schemata, tables, views, ...) and a language for (remote) data access (via APIs like ODBC and JDBC; see also Fig. 1).
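    The combination of a vendor-independent schema language and a call-level access API can be illustrated with a minimal, self-contained sketch. Here Python's built-in sqlite3 module stands in for an ODBC/JDBC connection, and the shipment table is a hypothetical example in the spirit of the Shipment class discussed below:

    ```python
    import sqlite3

    # Data interface, part 1: a vendor-independent schema definition (SQL DDL)
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE shipment (
            id       INTEGER PRIMARY KEY,
            customer TEXT NOT NULL,
            paid     INTEGER NOT NULL DEFAULT 0
        )
    """)

    # Data interface, part 2: (remote) data access through a generic
    # call-level API, analogous to ODBC/JDBC
    conn.execute("INSERT INTO shipment (customer, paid) VALUES (?, ?)",
                 ("ACME", 1))
    rows = conn.execute("SELECT customer FROM shipment WHERE paid = 1").fetchall()
    print(rows)  # [('ACME',)]
    ```

    Note that the client manipulates the component purely through raw SQL statements on tables; the higher levels discussed next wrap such access behind object and process interfaces.
    
    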

    2.2 Object-Oriented Component Definition

    In modern client-server architectures and in cooperating information systems, the information assets of an enterprise are modeled (and implemented) as collections of distributed and persistent objects interacting via messages and events.

    The interfaces for these components are defined as enriched signatures in object-oriented languages using concepts like classes, objects, (remote) references, attributes, methods, subclasses, interfaces, events, exceptions, etc.

    These object and component models (CORBA, DCOM, JavaBeans, etc.) describe the interaction between components by (a)synchronous method invocation, extending and adapting the semantics of dynamic method dispatching in object-oriented programming languages (see also Fig. 1).

    The advantage of object-oriented component models over pure data models on the one hand and over purely procedural (remote) function libraries on the other is their ability to describe semantic entities which are meaningful already at the analysis and design level. The often-quoted classes Customer and Shipment are examples of such high-level entities.

    As exemplified by ODMG using CORBA IDL, an object component model can also be used to describe data-only components, simply by omitting elaborate method and message specifications and only providing (set-oriented) get, set, insert and delete methods.
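    Such a data-only component can be sketched as a class that exposes only the generic accessors just named and no business methods. The ShipmentStore name and its dictionary-based storage are our own illustration, not the ODMG binding:

    ```python
    class ShipmentStore:
        """Data-only component: no business methods, only generic accessors."""

        def __init__(self):
            self._rows = {}  # key -> attribute dictionary

        def insert(self, key, attributes):
            """Add a new entry with the given attribute values."""
            self._rows[key] = dict(attributes)

        def get(self, key):
            """Return the attribute values of one entry."""
            return self._rows[key]

        def set(self, key, attributes):
            """Update some attribute values of an existing entry."""
            self._rows[key].update(attributes)

        def delete(self, key):
            """Remove an entry."""
            del self._rows[key]

    store = ShipmentStore()
    store.insert(1, {"customer": "ACME", "paid": False})
    store.set(1, {"paid": True})
    print(store.get(1)["paid"])  # True
    ```

    The interface carries no semantics beyond data access, which is exactly the deficiency the process-oriented view addresses next.
    
    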

    2.3 Process-Oriented Component Definition

    An object-oriented component definition is richer than a data-oriented component definition since the methods provide a more suitable protocol for the interaction between a client and a server component than a sequence of raw SQL statements working on a shipment table.

  • A Process-Oriented Approach to Software Component Definition 29

    However, we still see the following deficiencies of object-oriented component definitions, which call for an even richer process-oriented component definition:

    - The interface of a software component rarely consists of a single object interface but of a large number of highly interrelated object interfaces.

    - Frequently it is necessary for a server to manage multiple execution contexts, for example, one for each concurrent client session. Clients then often have to pass a session handle as an extra argument to each of these methods.

    - In particular in business applications, it is desirable to enforce restrictions on the admissible execution order of certain method calls (a shipment can only be sent after a payment has been received).

    - The lifetime of such execution contexts tends to be longer than the lifetime of the server process. Therefore, it becomes necessary to make such execution contexts first-class persistent and possibly mobile objects.

    - If a large system is broken down into autonomous concurrent subsystems (a collection of agents), synchronization issues can arise.

    As a solution to these problems, we propose not to use object-oriented component interface definitions but process-oriented interface definitions between parts of a business information system, following the language/action perspective on cooperative work [15, 4, 12, 1]. For details on our model, see [8, 10, 6, 14].

    In a first step, we identify actors in a business information system. An actor can either be a human or an active process (thread, task, job, ...) in an information system. For example, a customer, an SAP R/3 application server and a Lotus Notes Domino Server can all be viewed as actors.

    In a second step, we identify conversation specifications to describe long-term, goal-directed interactions between actors. For example, if a customer can browse through an Internet product catalogue on a Lotus Notes Server to place an order online, we identify a conversation specification called online shopping. We also assign roles to actors within a conversation. The actor that initiates the conversation (the online shopper) is called the customer of the conversation. The actor that accepts conversation requests is called the performer of the conversation.

    An actor can participate in multiple (possibly concurrent) conversations in different roles. For example, Lotus Notes could use the services of SAP R/3 to check the availability of products. Thus, Lotus Notes would be the customer of SAP R/3 for this particular conversation specification.

    Next, we identify dialog specifications, which are process steps within each of the conversations (catalog/item view, shopping cart dialog, etc. for online shopping, and a dialog for the availability check). A dialog consists of a hierarchically structured content specification plus a set of request specifications valid in this particular process step. For example, the shopping cart dialog could aggregate a set of shopping cart items (part identification, part name, number of items, price per item) plus a total, the name of the customer, VAT, etc. In this dialog, only a restricted set of requests can be issued by the customer (leave shop, select payment mode, remove item from cart, etc.).

    For each request specification in a dialog there is a specification of the set of admissible follow-up dialogs. If this set contains more than one dialog, the performer can continue the conversation at run-time with any of these dialogs. For example, the addItemToShoppingCart request in the item view dialog could either lead to the shopping cart view or to an "out of stock" error dialog.
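    One way to picture a dialog specification is as a table mapping each valid request to its set of admissible follow-up dialogs. The sketch below uses the dialog and request names of the running example; the data structure itself, and the helper function around it, are our own illustration:

    ```python
    # Each dialog maps its valid requests to the set of admissible follow-up
    # dialogs; at run-time the performer may pick any element of that set.
    online_shopping = {
        "item_view": {
            "addItemToShoppingCart": {"shopping_cart", "out_of_stock_error"},
            "leaveShop": {"goodbye"},
        },
        "shopping_cart": {
            "removeItemFromCart": {"shopping_cart"},
            "selectPaymentMode": {"payment_dialog"},
            "leaveShop": {"goodbye"},
        },
    }

    def follow_up_allowed(spec, dialog, request, next_dialog):
        """Check that a performer's chosen follow-up dialog is admissible."""
        return next_dialog in spec.get(dialog, {}).get(request, set())

    print(follow_up_allowed(online_shopping, "item_view",
                            "addItemToShoppingCart", "out_of_stock_error"))
    ```

    Because requests are only defined per dialog, the same table also enforces the execution-order restrictions that plain object interfaces lack: a request issued in the wrong process step is simply not in the table.
    
    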

    It should be emphasized that a dialog specification fully abstracts from the details and modalities of dialog processing at run-time. For example, the dialog could be carried out synchronously via a GUI interface or via HTTP, or asynchronously via email or a workflow-style task manager.

    Contrary to object interactions via message passing, this form-based or document-based style of interaction at the level of dialogs also fits well the (semi-)formal interaction between humans. For example, we are all used to a form-based interaction with public authorities.

    We consider the ability to abstract from the modalities of an interaction (local/remote, synchronous/asynchronous, short-term/persistent, involving systems/humans) as a major contribution of our process model, since it makes it possible to uniformly describe a wide range of interactions.

    [Figure 2 depicts a conversation between a customer and a performer with four phases: request, negotiation (ending with acceptance), performance (ending with completion), and feedback.]

    Fig. 2. Phases of typical customer-oriented conversations

    Figure 2 illustrates the basic structure of a typical customer-oriented conversation. In the first step, the request phase, a customer asks for a specific service of a performer ("I want to order a hotel room"). In the second, the negotiation phase, customer and performer negotiate their common goal (e.g., conditions, quality of service) and possibly reach an agreement. To do this, several dialog iterations may be necessary. In the third, the performance phase, the performer fulfills the requested activity and reports completion back to the customer ("we accepted your order"). The optional fourth phase, the feedback phase, gives the customer a chance to declare his/her satisfaction and may also contain the payment for the service.
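    Read as a state machine, the four phases can be sketched as a transition table. This is our paraphrase of Fig. 2, with a self-loop on negotiation modeling the possible dialog iterations and the feedback phase left optional:

    ```python
    # Phase transitions of a customer-oriented conversation (after Fig. 2).
    TRANSITIONS = {
        "request":     {"negotiation"},
        "negotiation": {"negotiation",   # several dialog iterations may be needed
                        "performance"},  # an agreement (acceptance) was reached
        "performance": {"feedback",      # optional feedback phase
                        "completed"},    # completion reported to the customer
        "feedback":    {"completed"},
    }

    def run_conversation(phases):
        """Validate that a sequence of phases follows the transition table."""
        state = "request"
        for nxt in phases:
            if nxt not in TRANSITIONS.get(state, set()):
                raise ValueError(f"illegal transition {state} -> {nxt}")
            state = nxt
        return state

    print(run_conversation(["negotiation", "negotiation", "performance",
                            "feedback", "completed"]))  # completed
    ```

    A conversation that tried to jump straight from request to performance would be rejected, mirroring the restriction that performance presupposes a negotiated agreement.
    
    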

    It should be noted that we deliberately restrict our model to binary customer/performer relationships and that we do not follow established workflow models (from CSCW) that focus on process chains involving multiple performers from different parts of the enterprise. For example, in our model a workflow with three specialized performers could be broken down into a coordinator that takes a customer request and initiates three separate (possibly concurrent) conversations with the three performers involved.


    This example also illustrates that conversation specifications are an excellent starting point for component definitions since they help to identify data and control dependencies. Moreover, it becomes much simpler to assign (data, behavior and also process) responsibilities to actors than is the case for pure data- or object-oriented models.

    To summarize, conversation specifications include data specifications (similar to complex object models) and behavior specifications (similar to methods in object models), and they provide additional mechanisms for process modeling:

    - Actors and their roles in the network of conversations of an enterprise are modeled explicitly and at a high level of abstraction.

    - The context of a request is modeled explicitly. For example, it is possible to access the history or the client of a conversation.

    - It is possible to restrict requests to certain steps (dialogs) within a process.

    - It is possible to specify (aspects of) the dynamic behavior of the process through the set of follow-up dialogs defined for a request.

    Finally, it should be noted that conversation, dialog, request and content specifications can be used as static interfaces between components. Only at run-time are conversations, dialogs, requests and contents created dynamically as instances of these classes. This corresponds to the distinction between schema and database at the data level and the distinction between interface and object at the object level.
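    The specification/instance distinction can itself be sketched in object-oriented terms: a specification object plays the role of a schema, and run-time conversations are created from it. Both class names below, and their attributes, are our own illustration:

    ```python
    class ConversationSpec:
        """Static interface: plays the role of a schema at the process level."""

        def __init__(self, name, dialogs):
            self.name = name
            self.dialogs = dialogs  # the dialog specifications of this conversation

        def start(self, customer, performer):
            # Run-time instantiation, analogous to populating a database
            # under a schema, or creating an object for an interface.
            return Conversation(self, customer, performer)

    class Conversation:
        """Dynamic instance created from a specification at run-time."""

        def __init__(self, spec, customer, performer):
            self.spec = spec            # link back to the static specification
            self.customer = customer
            self.performer = performer
            self.history = []           # context: past dialogs remain accessible

    spec = ConversationSpec("online shopping", ["item_view", "shopping_cart"])
    conv = spec.start(customer="shopper", performer="notes_server")
    print(conv.spec.name)  # online shopping
    ```

    Many concurrent Conversation instances can share one ConversationSpec, just as many rows share one schema.
    
    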

    3 Three Case Studies

    In this section, we illustrate our process-oriented approach to component definition using three case studies from projects with German software companies. The goal of these projects was to investigate whether the abstract process component model described in [8, 10] and successfully implemented in persistent programming languages [6, 14] is also a suitable basis for the implementation of process components using commercially relevant technology.

    The conceptual basis for these projects is summarized in Table 1, which shows the (rough) correspondence between the abstract process component model concepts on the one hand and the implementation concepts of the respective languages or systems used to systematically realize these concepts on the other. We also added a column describing the relationship between Java HTTP servlets and our model.

    Several cells in the table are marked as not applicable (n.a.), since some of the systems lack the required modeling support. However, these concepts can be emulated by a systematic use of other language constructs.

    Figure 3 summarizes the agents and conversation specifications of the three case studies. In this diagram, an agent is indicated by a circle with an arrow. A conversation specification between two agents is indicated by a line with two arrows connecting the agent icons. If there are multiple agents that are based on the same conversation specification, the icons of these agents are stacked.


    Table 1 (column headers):

    Model Concept | Implementation Concept:
    SAP R/3 Dynpro Technology | SAP R/3 BAPI Technology | Lotus Notes Technology | Java Server Technology | Microsoft ASP Technology