STATE OF NORTH DAKOTA
Project Plan
Mainframe Migration
Prepared by Software AG, Inc.
March 16, 2006
Version 1.2
State of ND - Project Plan – V1.2
N O T I C E   O F   N O N - D I S C L O S U R E

This Software AG, Inc. (“Software AG”) Technical Assessment and Detailed Implementation Plan contains information and data, which are privileged and/or confidential to Software AG. The information and data are not to be made available for public review, and are submitted voluntarily to the State of North Dakota (“ND”) only in response to a specific request. No license of any kind whatsoever is granted to State of North Dakota to use the information contained herein or in subsequent discussions unless a written agreement exists between Software AG and State of North Dakota. The information contained herein is submitted to State of North Dakota for purposes of review and evaluation. No other use of the information and data contained herein is permitted without the express written permission of Software AG. Under no condition should the information contained herein or in subsequent discussions be provided in any manner whatsoever to any third party without first receiving the express written permission of Software AG.

D I S C L A I M E R

None of the terms set forth in this document should be considered final or binding unless and until they are set forth in an agreement signed by State of North Dakota and Software AG.

T R A D E M A R K   D I S C L O S U R E

Software AG, the Software AG logo, Adabas, Natural and EntireX are either registered trademarks or trademarks of Software AG (Darmstadt, Germany). All other company, product or service names referenced herein are used for identification purposes only and may be trademarks of their respective owners.
Copyright Software AG, Inc., 2006. All rights reserved.
3/16/2006 Page i
Version Summary

Version  Date        By              Brief Summary
1.0      12/23/2005  Dennis DeBruin  Initial Project Plan
1.1      01/19/2006  Dennis DeBruin  Minor modifications based on North Dakota input
1.2      03/14/2006  Dennis DeBruin  Minor modifications based on continued “learnings” from the project
Table of Contents

EXECUTIVE SUMMARY...........................................................1
1 MIGRATION PLAN – CURRENT STATUS...........................................5
1.1 A Conceptual View.......................................................5
1.2 Project Roles and Responsibilities......................................8
1.3 Project Phases.........................................................11
2 TECHNICAL ARCHITECTURE...................................................12
2.1 Software...............................................................13
2.2 Creating the Production Environment....................................14
2.3 Creating the Development and Test Environments.........................14
2.4 Hardware Server Configurations.........................................14
2.5 System Performance Requirements........................................17
3 TESTING..................................................................18
3.1 Testing in the Linux Environments at ND................................18
3.2 Testing Methodology....................................................19
3.2.1 High-level Approach..................................................19
3.2.2 Validation Types.....................................................20
3.2.3 Test Methods.........................................................21
3.2.4 Test Execution.......................................................24
3.2.5 Test Reporting and Feedback..........................................25
3.2.6 Discrepancy Management Process.......................................27
3.2.7 Recording Problems in WMS............................................27
3.2.8 Recording Issues in WMS..............................................30
3.2.9 Key Definitions for Issues...........................................32
3.3 User Acceptance Testing................................................33
4 SOFTWARE DEVELOPMENT STANDARDS...........................................34
4.1 Best Practices in Software Development Standards.......................34
4.2 Quality Assurance Plan.................................................35
4.3 Considerations for Mainframe to Linux Migration........................36
4.4 Pre-Migration Activities...............................................40
4.5 Preparing the ND Application Infrastructure for Migrating to Linux.....40
4.5.1 Identifying/Preparing ND Modules for Migration.......................40
4.5.2 Natural..............................................................41
4.5.3 COBOL................................................................41
4.5.4 REXX.................................................................41
4.5.5 DYL280...............................................................42
4.5.6 Assembler............................................................42
4.5.7 FORTRAN..............................................................43
4.5.8 Batch Jobs/JCL.......................................................43
4.5.9 External Interfaces..................................................46
4.5.10 Printing Considerations.............................................47
4.6 Migration Activities at ND.............................................47
4.7 Configuration Management...............................................48
5 SECURITY PLAN............................................................49
5.1 UNIX User Authentication/Authorization and Active Directory Integration 49
5.2 Access Control Lists...................................................50
5.3 DB2, RACF, and Active Directory........................................50
5.4 Natural Security.......................................................51
5.5 CICS and MicroFocus....................................................51
5.6 Access to Datasets or Files............................................51
5.7 Auditing...............................................................52
6 HUMAN RESOURCES/TRAINING PLAN............................................53
7 MANAGEMENT AND CONTROL METHODOLOGY.......................................58
7.1 Change Management......................................................58
7.2 Communications Plan....................................................61
7.3 Method for Updating the Communications Plan............................62
8 STAKEHOLDER PROJECT COMMUNICATION MATRIX.................................67
8.1 Status Reporting.......................................................68
8.2 Managing the New Linux Environment.....................................71
8.3 ND Project Management Standards and Practices..........................75
9 IMPLEMENTATION TIMELINE AND METHODOLOGY..................................76
9.1 Go Live Contingency Plan...............................................76
10 RISK MANAGEMENT.........................................................78
10.1 Mainframe Migration Risks.............................................79
Executive Summary
Following a successful assessment and analysis project, the State of North Dakota contracted with Software AG, Inc. (Software AG) to migrate the significant majority of their mainframe code base to a Linux platform. Three key success factors have become our goals:
o The look and feel of the application must remain the same for ITD customers
o The migration must lower ongoing operating costs
o We must set a foundation for future growth of the State’s hardware and operating systems platforms while still maintaining or improving efficiency in operations and development
The Project Plan is a “living document”. As more information is gained, this document will be updated to reflect the new information. You should expect an updated document at least quarterly. The document may be published more often based on critical needs or findings in the project.
The Software AG senior project team is now in place and has permanent offices in North Dakota. It is expected that this team will be on-site on a considerable, though not full-time, basis for the next 18 months as the project proceeds.
The assessment project and continued analysis on this project have indicated that the optimal way to pursue this project is in four phases, with agencies and their respective code assigned to each of the four phases. Those four phases are described in detail in this document.
The initial hardware and operating system have been installed. Additional platforms will be installed as the project schedule requires.
A high-level project kick-off has been conducted, and agency-level kick-off meetings for the agencies in each phase (group) will be scheduled just before work begins on that group.
To reduce project costs and to accelerate knowledge transfer to North Dakota staff, Software AG has included five North Dakota IT staff members on the project team. These five people add very valuable practical and business experience to the Software AG team.
Drawing from our experience in similar migration endeavors, and taking into account what we continue to learn through our partnership with North Dakota, we have defined a set of project assumptions and best practices that guide our recommended approach to the project for the State. As you will note, we see the State’s personnel, and in particular ITD, as key to a successful migration effort. Our assumptions address the anticipated responsibilities of Software AG and the State, especially in the areas in which we anticipate extensive involvement with the State, such as application and acceptance testing.
The project schedule has also been created and is included in this plan. It is a detailed schedule that is complete for phase (group) 1. Because the plan has many components, we will create high-level schedules for groups 2–4 and then refine them using our experience from group 1.
In summary, while the project is still processing group 1 objects, there are no significant issues, and most scheduled tasks are being completed as planned.
Detailed Summary

Migration is a comprehensive process, ranging from a significant degree of up-front analysis in the beginning phases to extensive post-migration testing at the end. In between these initiating and ending activities, several major activities will take place to promote a smooth transition to the Linux platform.
As the migration is completed, through all four groups, Software AG will, at a minimum:
o Define the User Interface: Identify and implement the State requirements for the user interface features on the Linux platform.
o Define the Processing Environment: Describe and assist North Dakota in implementing the operating system, version, and utilities for the new platform.
o Assist with Definition of Server Environment: Describe and assist North Dakota in implementing the server environment, including CPUs, disk, memory requirements, and any other storage media for the new platform.
o Assist with Definition of Communications Protocols, Connectivity, and Hardware Placement: Describe and assist North Dakota in implementing the communications protocols, connectivity, and hardware placement of the new platform.
o Identify Database Version: Identify the database versions needed.
o Identify Programming Languages: Identify any languages and versions needed.
o Assist with Definition of Security: Assist North Dakota with the security aspects of the new platform.
o Replace Existing Job Control Language: Describe the means that will be used to replace existing JCL batch operations.
o Identify Anticipated Technical Obstacles: Software AG will describe technical difficulties that might be encountered and potential solutions or safeguards that could be used to solve or mitigate such difficulties.
o Provide a Comprehensive List of 3rd Party Products and Pricing: Software AG will identify all 3rd party products required and recommended to complete the migration phase. Software AG will provide cost estimates and a “source of supply” proposal to supply those products.
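As a hedged illustration of the JCL-replacement item above: a mainframe batch step typically maps to a shell script step on Linux, with DD statements becoming file paths and the step's condition code becoming the script's exit status. The job name, datasets, and sort parameters below are hypothetical, and sort(1) stands in for whichever sort product (for example, SyncSort DMExpress from the software list) is ultimately selected:

```shell
#!/bin/sh
# Hypothetical sketch: a JCL sort step such as
#
#   //SORTSTEP EXEC PGM=SORT
#   //SORTIN   DD DSN=ND.PAYROLL.INPUT,DISP=SHR
#   //SORTOUT  DD DSN=ND.PAYROLL.SORTED,DISP=(NEW,CATLG)
#   //SYSIN    DD *
#     SORT FIELDS=(1,9,CH,A)
#
# could be replaced by a script step that maps datasets to files and
# the SORT utility to a Linux equivalent.

sortstep() {
    sortin="$1"      # replaces the SORTIN DD
    sortout="$2"     # replaces the SORTOUT DD
    # Sort ascending on the first 9 characters of each record.
    sort -k 1.1,1.9 "$sortin" > "$sortout" || return 1
    echo "SORTSTEP ended, RC=0"      # mimic the job-step condition code
}
```

In practice, each converted job would be defined to the job scheduler rather than run by hand, and the actual conversion conventions will be established during the project.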
The complete project schedule is included in Appendix A of this document. Appendix B contains a detailed application profile of the in-scope applications as developed by ITD and Software AG.
As an overview, Software AG will institute a phased approach to the migration project with thorough and thoughtful project management as a key to the success of the project. Software AG will bring to this project all our past migration experience and knowledge as well as our product support capability to ensure we have the right staff to complete the migration in the desired timeframe and to resolve any issue that may arise during the project.
We also understand the critical nature of ensuring that ITD customers and ITD staff are able to maintain (if not increase) the level of efficiency they have today in the current environment. To accomplish this, we will migrate the applications’ look-and-feel and simultaneously implement new GUI tools that will help developers and key ITD customer staff improve productivity. We have also integrated a significant amount of training within the project schedule.
1 Migration Plan – Current Status
North Dakota applications will be migrated to the Linux (SUSE) environment. The Adabas and DB2 data files, COBOL and Natural source code will be transferred to the target platform. Outside of the application source code, elements of the mainframe version of the applications will be converted to analogous facilities within Linux. These include FORTRAN, DYL280, REXX, SAS, Assembler, JCL, and TP monitor processes.
1.1 A Conceptual View
This document provides a detailed plan for migrating the ND applications from the mainframe and implementing them on the Linux (SUSE) platform. The migration will not change business logic, application functionality, or architecture. Rather, migration offers low development risk and requires minimal retraining of users since, from the user’s perspective, the new environment looks and functions much the same as the original mainframe environment.
During the migration, the Adabas/ DB2 data files and Natural/COBOL source code will be transferred to the target platform. Source code changes will be performed in the new environment, where necessary, so that the components compile and run on the target system. In addition to Natural/COBOL source, elements of the mainframe version will require conversion to analogous facilities within Linux. These include FORTRAN, REXX, DYL280, Assembler, JCL, SAS, and terminal emulation processes.
Figure 1 illustrates the comprehensive nature of the migration project, where the term Application System Migration refers to Software AG moving/migrating all layers of the “Infrastructure Stack,” not just the applications that support business functionality. The discussion immediately following Figure 1 highlights how the solution (equivalent application environment), the functionality, and the assets are preserved (e.g., network environment can still be used).
Figure 1: ND Migration Overview
The upper portion of the stack defines the execution architecture and supports a number of systemic properties that are key to ND:
Business Strategy – business functions that occur, people and processes that support the business functions and the business strategy are defined. This is called the “Business Architecture,” and it will remain the same in the target environment.
Business Application – supports the business functionality. Application modules are modified, recompiled, and deployed on a new hardware platform that supports the Linux SUSE operating system. Compatibility libraries are developed or acquired in the new platform to map older utilities/APIs to the new environment and provide identical functionality. The applications are integrated with the new Linux environment. While source code, scripts, and data are moved, compilers, source code repositories, and software tools are replaced by versions compatible with the new platform. Functionality should remain the same. Changes in the business require changes in the application layer. Change management should be very similar on the new platform.
Application Infrastructure – Database technology (data requirements of the applications), backup requirements, and the utilities (APIs) provided by the mainframe operating system. The same application infrastructure should be provided in the new platform. Additionally, Hummingbird terminal emulation software has to be accommodated in the new platform (at the client end and the Linux server end).
Computing and Storage Platforms – These hardware components enable the application infrastructure.
Network Infrastructure – The same (or similar) interconnect technology (the technology used to connect the application to a network) that ND has connected to the mainframe today would be used to connect Linux systems to the existing networks.
Facilities Infrastructure – provides critical support to the elements above.
Availability, Scalability, Performance Measurability, and Security – will be implemented and supported within the new Linux environment, providing similar Service Level Agreements (SLAs).
The lower portion of the stack represents the management infrastructure. The tools, people, and processes implement the management infrastructure to control, measure, and manage the execution architecture:
IT Tools – used to monitor capacity, utilization, and throughput and to help ensure that the service levels are met.
IT Processes – in place to support change, service, deployment, and maintenance.
IT People – select, develop, and administer the tools and processes. Training must take place to ensure that people understand the management processes, as well as execution architecture.
Business Continuity and Contingency Needs – would remain the same, or similar, in the new platform described in Section 2.
1.2 Project Roles and Responsibilities
A key element in the migration and implementation project plan is the organization of the human resources, their relationships, and reporting structures. Figure 2 shows the project’s organizational structure.
Mainframe Migration Organization Chart:
o Chief Information Officer: Mike Ressler
o Project Sponsor: Dean Glatt, Director of Computer Systems Division
o ITD Project Manager: Linda Weigel
o Executive Committee: Mike Ressler, Deputy CIO & Director of ITD; Jerry Fossum, Director of Telecommunications Division; Dan Sipes, Director of Administrative Services Division; Nancy Walz, Director of Policy & Planning; Vern Welder, Director of Software Development; Martin Steinhobel, Vice President Professional Services; Hans B. Otharsson, Senior Director Professional Services; Jennifer Witham, Dept. of Human Services
o Project Management Office: Dave Eckenrode
o ITD / State Agencies Project Manager: Shawn Meier
o Software AG Project Manager: Carl Bondel
o Project Director: Steve Lowe, Director Professional Services; Izak Both, Migration Practice Manager, Professional Services
o Technical Team: Jeff Carr; Kyle Forster; James Gilpin (SAG)
o Data Conversion Team: James Gilpin (SAG)
o Operations Team: James Gilpin (SAG); ITD
o Development & Porting Implementation Team: Larry Madern (SAG); Brenda Muscha, ITD
o Test Team: Teresa Noto (SAG); ITD; State Agencies
o Security & Infrastructure Team: James Gilpin (SAG); ITD
o Communications: Sub-SITAC / Steering Committee
Figure 2: Project Organization
The expected involvement and roles of each organization and team are defined below.
Executive Committee – ITD Directors, the DHS State Agency Representative, and Software AG Representatives comprise this committee. ITD’s Project Manager will meet with the committee every four to six weeks, or more often as needed. The North Dakota Project Manager will conduct these meetings. Any “out of scope” changes will need to be presented to this committee for approval, as well as any issues requiring approval at this level.
SAG Project Director – Provides senior-level direction to ensure that the project is carried out expeditiously and that Software AG’s resources are being used effectively.
Sub-SITAC Committee – State Information Technology Advisory Committee
Project Sponsor – Responsible for the financial resources for the project.
Project Management Office – Oversees the ITD Project Managers.
ITD Project Manager – Responsible for the overall management of the project.
Technical Team – Assists both project managers with the technical aspects of the project.
Communications - See Communication Plan Section
Software AG Project Manager – Works with the ITD Project Manager and is responsible for the overall Software AG project management.
ITD/State Agencies Project Manager – Responsible for working directly with the Development & Porting Implementation, and Test Teams making sure that deliverables are met in a timely manner. This Project Manager will also be the liaison between the Mainframe Migration team and the State Agencies Personnel.
Test Team – The ND ITD Test Support Team and the Software AG Test Support Team will have primary responsibilities for major segments in preparing and/or executing the testing tasks. This group will also support the testing process by organizing test results, tracking testing progress, accumulating statistics of successful and unsuccessful test executions, tracking problems, and compiling a final test report.
There are three primary groups of testers:
1. Software AG Test Support Team
2. North Dakota ITD Test Support Team
3. North Dakota Business/Agency Testers
North Dakota Business/Agency Testers, employees familiar with the functional and/or technical aspects of the applications, will be responsible for actually executing the acceptance testing.
Development & Porting Implementation Team – This team is a rotating group of subject matter experts from Software AG, North Dakota ITD, and the North Dakota Business/Agency organizations, as applicable.
The Software AG and ND ITD groups will provide software development standards, change management processes, production quality control, and program and performance standards.
The Software AG and ND ITD groups will also create a target runtime environment inventory, identifying all the components involved in the migration/porting to ensure the completeness of the target environment’s functionality in meeting the needs of the migrated ND applications. They will migrate all the application modules and processes to the new platform, will look at the
configurations, FTP capabilities, printers attached within the mainframe environment, and will convert to appropriate functionalities within the target Linux environment.
Finally, the Software AG and ND ITD groups will ensure proper installation in the production environment, preparing both software and data environments for production. This team will also check general performance and stability of the system.
The North Dakota Business/Agency organizations will assist the Software AG and ND ITD groups in reviewing the migration strategy, assisting in the definitions of all processes and providing answers to questions as needed.
Data Conversion Team – This team will be comprised of Software AG specialists responsible for the transformation or conversion of legacy data for the new environment. The team will perform preparation activities around data conversion and the actual data conversion.
Security & Infrastructure Team – This team, primarily comprised of the ND ITD Security staff, supported by Software AG staff, will ensure that the new servers adhere to ND’s security program policies and procedures. The security requirements involve protection of the data, source code, audit logs, etc. This team is also responsible for defining the technical architecture to support the migrated solution.
This team will understand and conform to all the existing security policies and procedures in the target environment. Enforcement of the business processing rules on a user’s authority requires an unassailable authentication solution, strong and effective privilege definitions, and the assignment of a privilege list. Security and integrity can be provided through infrastructure products, as can authority limits and privileges. It is necessary to take all of these features into account while designing the Linux environment.
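As a small, generic illustration of that privilege model (the path and mode below are hypothetical; the real scheme will follow ND's security policies and the selected infrastructure products), file-level protection on Linux starts with owner/group permission bits, optionally extended with POSIX ACLs:

```shell
#!/bin/sh
# Sketch: restrict an application dataset so that only its owner has
# read/write access, then report the resulting permission bits. In the
# real environment, modes and group ownership would be driven by ND
# security policy and by group membership from the authentication system.

protect_dataset() {
    f="$1"
    chmod 600 "$f" || return 1           # rw for owner only
    # Report the resulting octal mode (e.g., "600"); the second stat
    # form is the BSD equivalent of the GNU -c form.
    stat -c '%a' "$f" 2>/dev/null || stat -f '%Lp' "$f"
}
```

Finer-grained grants (for example, read access for an audit group) could then be layered on with setfacl without loosening the base mode.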
Operations Team – This team will be comprised of Software AG specialists and the ND Operations Team and Business/Agency representatives. It has responsibility for day-to-day management of the migrated solution in the new platform. The applications and platforms affected by the migration/porting, as well as existing and new processes, must be assessed to ensure that the migrated application will integrate into the procedures executed by the enterprise.
The operations management architecture – operational qualities include performance (throughput), manageability, security, serviceability, and maintainability – will need to be migrated to the new environment.
Business and Agency personnel will be involved to confirm the decisions made and verify that the operational qualities of the new system meet their expectations.
The refinement process will be iterative in nature. As more information, requirements, and constraints are discovered, changes to the overall design of the platform might also be required. Current capacity planning and service level management knowledge can be input into the engineering process of the platform design.
Training Team – Comprised of Software AG, ND application specialists and ND Business Agency experts. An education plan that will address the skill sets needed to support the application systems in the new environment has been developed and will be implemented throughout the project.
A key input into the migration/porting strategy will be the skills of the existing IT staff. Introducing new technology might result in decreased availability or productivity as people come up-to-speed on the new technology.
1.3 Project Phases
Software AG’s approach to the ND migration incorporates the principles of a lifecycle view and incremental development (cycle of architect, implement, and manage). It provides a common language for defining the non-functional requirements of an application platform, enabling ND to use its structured infrastructure methodology. The approach is based on the following phases:
o Building the Migration environment (includes startup tasks, data and source code conversion and movement, building, batch set-up, and functional testing), as well as assisting North Dakota in rewriting processes and procedures for ongoing operations support
o Migration activities (includes hardware set-up and all environmental configuration, data and source code movement)
o Testing the applications in the new Linux environments
o User Acceptance Testing and Training
o Manage Phase (identify, resolve, and incorporate modifications and retest)
o Implementation/Deployment
o Transitioning to Support Phase
3/16/2006 Page 12
State of North DakotaMigration Project Plan – V1.2
2 Technical Architecture
The following “real world” scenario for ITD customer access corresponds to the technical architecture depicted in Figure 3.
1. An ND end user will initialize interaction with the ND server environment by clicking on a “Production ND” icon on his/her desktop. This will invoke terminal emulation software to allow the end user’s Windows PC to appear as a VTxxx terminal when connecting to the Linux server.
2. The end user’s identification will be checked through LDAP-based authentication against the State's Active Directory Domain, NDGOV. If a user is allowed access, s/he will be logged into the ND server.
3. The end user’s Linux account will be tailored to run a script designed to initialize the appropriate application environment without any end user interaction. This script will display a menu that provides access to the appropriate ND subsystems. At this point, the user is able to perform all normal work routines.
4. When the end user selects certain options from the various subsystem menus, specific data will be retrieved from, deleted from, or stored into the database environment.
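The login flow in steps 1–4 can be sketched as follows. This is a minimal sketch only: the real menu entries and subsystem start commands will be defined during environment build, and the subsystem names shown are placeholders, not actual ND subsystems.

```shell
#!/bin/sh
# Sketch of the environment-initialization script referenced in step 3.
# It would typically be invoked from the user's login profile so that
# the menu appears with no further end-user interaction.

show_menu() {
    # Menu entries below are placeholders, not the actual ND menu.
    cat <<'EOF'
        Production ND
  1) Payroll subsystem
  2) Licensing subsystem
  x) Exit
EOF
}

dispatch() {
    # Map a menu choice to the command that starts the subsystem.
    case "$1" in
        1) echo "starting payroll" ;;    # placeholder for the real start command
        2) echo "starting licensing" ;;
        x) echo "logout" ;;
        *) echo "invalid choice"; return 1 ;;
    esac
}
```

Because the script runs from the login profile, the end user never sees a shell prompt: authentication (step 2) completes, the menu appears, and exiting the menu logs the user out.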
Figure 3: ND Technical Architecture
[Figure 3 depicts: end-user workstations connecting over an SSH connection to the Natural/COBOL server, with LDAP authentication against the Windows Active Directory server; dedicated Adabas and DB2 servers; all housed in the Bismarck Data Center.]
2.1 Software
Figure 4 below reflects the supporting software that will be implemented under the new Linux environments. Appendix I provides additional detailed information on the software listed below.
Figure 4: List of Software
2.2 Creating the Production Environment
The new environment will consist of two sets of HP blade servers, with one set providing the production environment while the other provides the test and development environments. In addition, this second set will be used for Business Continuity functions and will be located in the State’s Mandan Datacenter, with production being housed in Bismarck.
In overview, the architecture will be client/server in nature, with one server hosting Natural, COBOL, and the Job Scheduler. DB2 and Adabas will be housed on their own servers. IBM’s InfoPrint will be used to manage printing and will be housed on its own server.
As we build the environments and perform benchmark testing, the summary above may change. Expect those changes to be documented in the next release of this document.
Common tasks to be performed with all environments include:
Obtain Hummingbird, Attachmate, IBM Communications terminal emulation
3/16/2006 Page 14
Company Product Name
Software AG
Adabas SQL Gateway
Adabas C
Natural Construct
EntireX Communicator
Natural Construct
Entire Access
Natural Security
Predict
Entire Network
Tamino
Natural Productivity Pack
IBM DB2 UDB Enterprise Server Addition
OnDemand
Rational ClearCase
InfoPrint
Appworx Appworx
MicroFocus Revolve Developer
Revolve Enterprise Edition
Enterprise Server MTO
Net Express MTO
MigrationWare Adabas SQL Addin for MicroFocus
Regression manager
SyncSort DMExpress
Syncsort
FilePort
Cronus ESPControl
Segue Silktest
Address ValidationPitney Bowes Finalist Software to be used
State of North DakotaMigration Project Plan – V1.2
Prepare the environments, including backup and restore facilities and backup/restore scripts, and integrate them into the ND network
Identify the software to use in the new environment and acquire licenses, where appropriate – Check current versions of software
Order the right version of SUSE Define printers for use on the new Linux servers in ND network and order additional
printers, if required Define all users in the authentication system – userids, passwords, profiles, etc. Load Terminal Emulation software on each client
Additionally, Software AG will continue to address the Software AG products and third-party product environment (see Section 2.1 above). All appropriate software will be installed in each of the appropriate environments.
2.3 Creating the Development and Test Environments
The development and test environments will be modeled after the production environment. The development environment will consist of three servers with the same architecture as production: one server for Natural/Cobol applications and dedicated servers for Adabas and DB2. Note that this development environment will also host the user acceptance test regions. The test environment will consist of two servers, one hosting Cobol/Natural and the other hosting both DB2 and Adabas. This test environment will be used to test new releases/patches of vendor-supplied software such as Adabas, Natural, DB2, the operating system, etc.
2.4 Hardware Server Configurations
The servers of choice are HP blade servers with AMD's Dual Core Opteron CPU, which provides 64-bit addressing and therefore much better access to memory above 4 GB than the 32-bit Xeons. The production Cobol/Natural server will be a BL45 blade with four Dual Core Opteron CPUs and 16 GB of memory, while the production database servers will be BL25 blades with two Dual Core Opteron CPUs and 8 GB of memory. The comparatively large amounts of memory will permit the operating system and databases to aggressively cache disk reads, improving performance.
Use of blades will also aid communication with the database servers. With the Natural/COBOL applications running on blades in the same enclosure as the database servers, database requests would never leave the enclosure, providing Gigabit connectivity within the enclosure. The production systems will be located in the Bismarck data center, with the Mandan data center hosting the development/test environments. The two centers will be connected via Ethernet, with a maximum bandwidth of 2 Gigabit/sec. Finally, the environments in both the Bismarck and Mandan data centers will utilize Fibre Channel SANs for data storage. The storage platform of choice for the migrated systems is the IBM DS6000 line, which uses fiber-attached disks within each array. The technology used in the DS6000 is the same as in IBM's Enterprise Storage Server (the Shark), though the DS6000 fits into a standard 19-inch rack.
Figure 5: Server Configuration (diagram: the Bismarck data center hosts the production environment with a 4-CPU BL45 Natural/Cobol server, a 2-CPU BL25 Adabas server, a 2-CPU BL25 DB2 server, and Infoprint server(s); the Mandan data center hosts the development environment (2-CPU BL25 Natural/Cobol server, 1-CPU BL25 DB2 server, 1-CPU BL25 Adabas server) and the test environment (1-CPU BL25 Natural/Cobol server, 1-CPU BL25 combined DB2/Adabas server, and Infoprint server(s)); the two centers are linked by 2-Gigabit fiber)
2.5 System Performance Requirements
In developing this plan, Software AG worked from the basic assumption that performance must be at least as good as, and preferably better than, it was before the migration.
Some details are available on the current systems’ performance, but additional benchmarking will have to be performed to quantitatively evaluate the performance of the migrated systems for both online and batch work. The key measure is ITD's ability to meet its customers’ expectations.
We have used five processes in this plan to reduce risk regarding system performance.
1. We are using a phased approach so that system performance can be validated at each stage, allowing the configuration to be modified as needed.
2. We will be performing initial benchmark tests, both single queries and “flood queries” to identify any issues or concerns.
3. We have incorporated significant testing plans into the project to ensure we understand how individual components of the applications interact with other applications.
4. We will integrate performance testing into the long-term network performance project currently in process.
5. ND can also choose to implement Compuware QACenter or Segue performance monitoring as part of a short-term load test, which can simulate 450 or more concurrent users. This would allow additional tuning of system configurations to provide the best possible performance.
3 Testing
3.1 Testing in the Linux Environments at ND
A well-designed test plan facilitates the testing of ND within the scope of the current functional and technical requirements. Specifically, the ND Test Plan must meet the following objectives:
1. Provide test scenarios that adequately enable ND to verify and validate that the migration to Linux has been successful.
2. Demonstrate that the migrated system meets the functional and technical requirements of ND.
3. Demonstrate that the migrated system meets the business needs of ND.
To meet the testing objectives, test scripts and/or other verification methods are required for all documented requirements of modules affected by the migration. The Software AG Test Support Team, working with its North Dakota support team, will be responsible for development of the test plan.
The Test Support Team will also support the testing process by organizing test results, tracking testing progress, accumulating statistics of successful and unsuccessful test executions, bug tracking, and compilation of a final test report. ND Agency Testers with the assistance of the Test Support Team are responsible for actual execution of the final acceptance test. This group will consist of ND employees that are familiar with the functional and/or technical aspects of the ND system.
This plan establishes the framework for development and execution of the tests associated with the original mainframe application and the migrated Linux-based system. The test plan includes several methods for verifying the ported system, including testing scenarios, report verifications, data entry, and input/output feedback. Tests that need to be planned and executed include:
Unit Testing - Unit testing is an ongoing process. Throughout the migration and batch set-up phases, Software AG will perform unit testing of the migrated application components.
System & Integration Testing – As a final test prior to commencement of User Acceptance Testing, Software AG will perform an end-to-end test of all applications. This test will be performed using the prepared test scripts (subsequently accepted by ND) as the template, to ensure all required functions are tested. This type of comprehensive testing will relieve end users of having to deal with the more “obvious” bugs. While a wide range of test options is possible, at a minimum Software AG will verify that:

- System query results “look” correct;
- The group of ND end users selected to test the system can readily navigate from screen to screen, are comfortable with the “look and feel” of the migrated system, and are able to perform the same functions they employ today in their daily jobs; and
- System response time is comparable to, or superior to, the current system.
User Acceptance Testing
3.2 Testing Methodology
In validating the system prior to production acceptance, testing methodology shall include:
- High-level Approach
- Validation Types
- Test Methods
- Test Execution
- Test Reporting and Feedback
3.2.1 High-level Approach
The Test Support Team will be composed of Software AG staff, North Dakota staff assigned to Software AG, North Dakota ITD staff, and North Dakota agency staff.

During the test planning phase, the Test Support Team will first establish the test methods (i.e., scripts, inspection, etc.) that will be required for the various components of the migrated system. Based upon the different methods required for each segment of the system, the test team will develop specific test scripts, scenarios, and other verification methods that will be used for the actual testing. The tests will be designed to demonstrate that each ND module meets the applicable functional and technical requirements through the verification and validation of test results. All four groups comprising the Test Support Team will execute the scripts (potentially at different times) on the existing mainframe system to verify expectations and accuracy.
After the ND code is migrated to the new platform, user acceptance testing will begin. Prior to the initiation of actual testing activities, it is necessary to set up dual environments on the mainframe and Linux with identical initial data. Initially, the dual systems will be used to validate the data conversion, reports, and interfaces. During the scripted testing, the dual environments may be required to investigate any discrepancies that may be found. The ND Agency and ITD Testers will perform the actual tests. The Test Support Team will provide support assistance to the ND Testers during user acceptance testing. After testing is completed, a test report will be submitted to ND for review and formal acceptance.
Testing related activities will include:
- Establishment of the test methods (i.e., scripts, inspection, etc.) that will be required for the various components of the migrated system.
- Development of the specific and detailed test scripts, scenarios, and other verification methods. The scripts will mirror actual production activities and workflows as closely as possible.
- Validation of the testing scenarios against mainframe ND.
- Review and approval of the test documents by ND.
- Exploratory testing to understand current application behavior.
Further testing related activities will include:
- Setup of dedicated test areas in both the mainframe and Linux environments.
- Testing by the ND Testers.
- Organizing test results, tracking testing progress, accumulating statistics of successful and unsuccessful test executions, and issue tracking by the Software AG and ND ITD Test Support Team.
- Scheduling and execution of an end-to-end system test.
- Compilation of a final test report.
- Support for the go/no-go decision.
3.2.2 Validation Types
Depending on the item tested, various validation types will be used to verify proper operation of the migrated ND system. While the majority of the documented functional requirements and verification of ND functionality will utilize the scripts and scenario based testing, other areas will use other methods. For example, for many of the technical requirements it is more appropriate to determine compliance through demonstration or inspection, or compliance may be evident. This methodology provides the flexibility to select the most appropriate methods for system validation. It is envisioned that the technical requirements will mostly be validated through non-scripted methods.
The five (5) validation types are:
1. Scripts – A scripted test refers to the use of a predefined, detailed set of steps and actions that are intended to emulate a typical business process performed on the system being tested.
As mentioned above, validation of ND operational functionality and compliance with the functional requirements will be done using script and scenario based testing. Test scenarios will be created for typical ND activities in order to verify ND functionality. Each scenario may contain multiple closely related test cases. A single scenario may contain multiple requirements, as the scenarios are designed to mimic daily activities and functions within ND, and these functions can cover more than one requirement. A cross-reference mapping will be created to verify and demonstrate that sufficient test scenarios exist to represent all of the functional requirements.
2. Demonstration - Validation through demonstration refers to the confirmation of functionality by observing the behavior of the system. The tester must witness one or more examples of behavior that demonstrates that the requirement is met by the system. A checklist is used to document compliance and a detailed script is not prepared in advance.
For high-level requirements without explicit data values or detail specifications, demonstration methods are more applicable. Requirements Validation Checklists will be used to document compliance. For example, compliance with Technical Requirement – “Display warning messages/verification request when the user is about to submit a sensitive transaction” can be documented with the demonstration validation type.
3. Inspection - Validation through inspection is performed by review of system specifications, item attributes or other reliable documentation.
Inspection is used for high-level requirements for which the demonstration method is not practical or viable. An example of a requirement that will be verified through inspection is Technical Requirement – “Utilize scalable software and database components that comply with the (open) standards specified by ND at the beginning of the system development phase.” Many of the hardware and software requirements will be verified through inspection.
4. Comparison - Comparison validation refers to the comparison of features, functions, or data from one system to another. In the case of ND, comparison validation refers to the
comparison of migrated ND to what is produced by mainframe ND under an identical set of data, processes, programs, and procedures.
Comparison validation is the envisioned method for verification of the data conversion and reports. A comparison of reports from the mainframe control system and the migrated system will be used to verify that (1) the data port was successful and that (2) reports are operating properly. The comparison method will also be used extensively in the testing of interfaces. This is described in greater detail in Section 3.2.3.1 below.
5. Evident - The evident validation method refers to situations where compliance with the requirement is overwhelmingly obvious.
Evident validation can be used for requirements when compliance is obvious and formal additional testing is not required. For example, Technical Requirement – “Provide the ability to communicate with clients using IP protocol.” The fact that users are able to access the system from their workstations over the ND network makes it obvious that this requirement has been met.
3.2.3 Test Methods
In order to test all aspects of the migrated system, multiple testing methods utilizing the validation types defined above must be used. Comparison Testing, Scripted Testing, and Non-scripted Validation will each be utilized where appropriate:
3.2.3.1 Comparison Testing
Data Conversion – The first testing activity is validation of the data that has been migrated from the mainframe. During the migration, ND data will be transferred from the existing mainframe system to the new Linux infrastructure. Validation will be performed by comparing the migrated data to the data on the mainframe, and it must be completed before testing activities can alter the data.
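As an illustration of the kind of comparison involved, the sketch below checks a mainframe extract against the migrated data by record count and per-record checksum. The record format is hypothetical, and this is not the project's actual comparison utility.

```python
import hashlib

def summarize(records):
    """Return (record count, set of per-record SHA-256 digests)."""
    digests = set()
    for rec in records:
        digests.add(hashlib.sha256(rec.encode("utf-8")).hexdigest())
    return len(records), digests

def compare_datasets(mainframe_records, migrated_records):
    """Report count agreement and which records differ between the systems."""
    mf_count, mf_digests = summarize(mainframe_records)
    mg_count, mg_digests = summarize(migrated_records)
    return {
        "counts_match": mf_count == mg_count,
        "missing_after_migration": mf_digests - mg_digests,
        "unexpected_after_migration": mg_digests - mf_digests,
    }
```

Because the comparison is set-based, it tolerates records arriving in a different physical order on the new platform, which is common after a database migration.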
Reports - To validate that reports are operating properly, a set of reports will be printed out of both ND environments (mainframe and migrated) and compared. Reports used to validate the data conversion have already been compared and can be excluded from this exercise. This test will validate that the reports print as expected and perform as they did on the mainframe. It should be noted that the reports will also be run during execution of the scripted tests. The primary purpose of running reports as part of the scripted test is to validate the data entry, manipulation, and storage capabilities of the migrated system.
Interfaces (to both internal and external systems) - ND has interfaces to both internal and external systems. For purposes of this document, internal interfaces are those that are within and under the control of ND. External interfaces are those that pass data between ND and a system that is not under the direct ND staff control, such as Social Security and other Federal Agencies. For a complete listing of the application interfaces see Appendix G. There are two types of external interfaces, inbound and outbound. Inbound interfaces refer to entry points for external data into ND. External interfaces provide ND data to other systems. The validation method varies for the different types of interfaces. The following is a description of the interface validation method.
Outbound External – The initial interface validation for outbound external interfaces will be performed by comparison of an interface data file between the migrated ND and mainframe ND. For each interface a historical time period will be selected. The interface will then be run for the selected time
period on both systems and the output will be compared using a file comparison utility. During this initial testing the interface data will not actually be sent to and processed by the destination system. During the final end-to-end test, the outbound interface will actually be submitted for processing by the receiving systems of entities that have agreed to participate in this test.
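The file comparison step can be sketched with Python's standard difflib; the "mainframe" and "migrated" labels are illustrative, and a production run would read the actual interface output files.

```python
import difflib

def diff_interface_files(mainframe_lines, migrated_lines, context=2):
    """Return a unified diff of the two interface outputs.
    An empty string means the outputs match exactly."""
    diff = difflib.unified_diff(
        mainframe_lines, migrated_lines,
        fromfile="mainframe", tofile="migrated",
        n=context, lineterm="",
    )
    return "\n".join(diff)
```

Any non-empty result pinpoints the first diverging records, which is usually enough to route the discrepancy to the right migration team member.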
Inbound External – In the case of inbound external interfaces, a copy of an actual interface file will be imported into the system. When selecting a file for processing, it will be important to choose a file whose data immediately follows the most recent data already in the system.
Outbound Internal – In the case of outbound internal interfaces the data can be imported into the receiving system and processed where it is practical to do so. The receiving system will then be validated to see if the received data has been imported correctly and if reports and other outputs are correct.
Inbound Internal – The procedure for validation of internal inbound interfaces will be identical to the one used for external inbound interfaces.
The initial task will be to confirm the inventory of interfaces and categorize the type (internal vs. external; inbound vs. outbound) of each interface. For each interface, it will be necessary to identify the following attributes:
- Interface name
- Interface type
- Interface description
- Interface points of contact
- Testing method
- Time span (beginning and end dates) to be used
- Verification method
Figure 6: Sample Interfaces Inventory Matrix (columns: Interface Name; Interface Type; Dataset Names and Description; Responsibility/Points of Contact; Verification Method; Testing Method; Testing Date; Target)
Interfaces Testing Detail

The ND interfaces are tested by initially utilizing the tools provided by each interface type. For example, if an interface uses FTP as its delivery mechanism, FTP commands will be used to verify and qualify its internal workings and functionality. These commands will be executed from a command prompt or application interface, depending upon the FTP application used. The objective is to test each interface for connectivity and appropriate configuration of its target destination.
Once the manual testing of the interfaces has successfully concluded, the next step will be to transfer various sample files to their target destinations. Following this procedure, and to complete testing of the interfaces, full transfers of actual publications and reports are transmitted to their respective target destinations.
Validation and verification of the transmissions will be conducted through manual or automated verification tools by comparing the “Transferred” and/or “Received” files against the original files.
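The connectivity check and transfer verification described above might look like the following sketch. The host, credentials, and paths are placeholders, and this is an illustration of the approach rather than the project's actual tooling.

```python
from ftplib import FTP

def check_interface_target(host, user, password, target_dir):
    """Connect to an interface's FTP target and confirm the configured
    destination directory is reachable. All arguments are placeholders
    for a real interface configuration."""
    ftp = FTP(host)
    try:
        ftp.login(user, password)
        ftp.cwd(target_dir)  # fails if the destination is misconfigured
        return True
    finally:
        ftp.quit()

def verify_remote_size(ftp, remote_path, local_size):
    """After a sample transfer, confirm the received file's size matches
    the original; a fuller check would also compare checksums."""
    return ftp.size(remote_path) == local_size
```

A size check catches truncated transfers cheaply; comparing the "Transferred" file's checksum against the original, as the plan describes, confirms content as well.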
Test scripts are provided to exercise the aforementioned testing method and may be described in the following manner:

- Script: manual testing steps
- Interfaces data matrix (short): interface table and dataset details
- Interfaces data matrix (full): publication tables with full composing and dataset details, frequency, and other notes
3.2.3.2 Scripted Testing
Scripted testing will be used to test the ND functionality and adherence to the functional requirements. Test scenarios will be created for typical ND activities in order to verify ND functionality. Each scenario may contain multiple, closely related test cases. A single scenario may contain multiple requirements, as the scenarios are designed to mimic daily activities and functions within ND, and these functions can cover more than one of the requirements. A cross-reference mapping will be created to verify and demonstrate that sufficient test scenarios exist to represent all of the functional requirements. Appendix E contains a sample test script, and Figure 7 represents the hierarchical relationship between scenarios, requirements, and test cases.
Figure 7: Scenarios, Requirements, and Test Cases (diagram: a scenario such as 4.2.BR.1 maps to requirements such as 4.02.01 and 4.04.08, each verified by one or more test cases)
3.2.3.3 Non-scripted Validation
As mentioned above, the majority of the technical requirements will be validated using non-scripted testing. It may be practical to utilize the scripted tests for some of the technical requirements, such as the publications section; however, for the majority of the requirements, other verification methods will be used. The various methods listed above will be used as appropriate. Requirements validation checklists will be created to organize and formalize the testing process.
3.2.4 Test Execution
3.2.4.1 Naming Conventions for Scenarios and Test Cases

A standard naming convention has been developed and is detailed in a separate test strategy document. Please see Linda Weigel or Carl Bondel for that document.
3.2.4.2 Pre-test Preparation
The following tasks will need to be completed before actual testing begins:
Team identification – The ND Testers will need to be identified. Testers should include functional users who are familiar with the system and will be able to identify any anomalies. Minimal Linux navigation training will be provided to the testers. Based upon the testers and their typical jobs, tests will be assigned to individual testers or groups of testers.
System preparation – As described above, a similar environment will need to be established on the mainframe ND with identical data. The initial testing activities will involve comparisons between the two systems that will validate proper operation of the migrated ND. Once the systems have been established, it will be important to create a backup of both. It may be desirable to return the systems to this baseline state at future points in the testing. Periodic backups of the test systems should also be performed. The selection of an “as of date” for the data is important. The “as of date” is the date through which both systems have data. The date should be selected so that there is real data available for dates after the “as of date.” That real data will be valuable for the scripted testing and testing of the inbound interfaces. If the testing takes place during an adjournment, the “as of date” should not be the day of adjournment. It should be at least a day earlier than the adjournment.
Establish test recording and reporting procedures – Test recording and reporting procedures need to be established. The progress of the testing tasks will advance as successful tests are completed. Unsuccessful tests, as well as questions, will need to be tracked for resolution and completion. A formal and documented procedure will need to be initiated to enable testing to proceed efficiently.
Testing kickoff meeting – Prior to initiating the actual testing a kickoff meeting will be held. The meeting will serve to communicate roles and responsibilities, schedule, scope, tasks, and goals of the testing activities.
3.2.4.3 Testing Sequence
As described above, testing will take place in a specific order. Initial validation will be performed through comparison of the mainframe and migrated systems. It is, therefore, important that this comparison is performed prior to any data entry activities on the migrated system. The following is the recommended order of testing activities:
1. Validate data – This validates that historical data has migrated properly. This will be primarily performed through reports with a limited degree of inspection where required.
2. Validate reports – Report comparisons will confirm that the new reports are working properly.
3. Validate interfaces – Some of the interfaces can be validated through comparison with a similar interface constructed from the mainframe. This would also need to take place prior to any data entry activities.
4. Scripted testing – Scripted testing will be used to validate much of the functionality of the ND system. Agencies may add their own scenarios for testing in addition to the scripts that will be developed by the test support team.
5. Non-scripted validation – Non-scripted validations will be used where scripted testing is not applicable. This can actually be done at any point.
6. Benchmark testing – Benchmark testing is conducted to verify and analyze system performance, scalability, and overall stability. It is executed by a group of testers using a set of specific test scripts that measure system and application performance during normal system activity. The system load will be created by executing batch jobs on the system in a timed fashion. During these batch executions, the same testers will repeat the same test-script activities and record the response times on the loaded system. System performance can then be determined by comparing the measurements from the baseline tests and the load tests. Depending on the overall result, tuning of the system may be warranted and applied.
7. End-to-end test – Following the conclusion of the other testing activities, a full end-to-end test will be conducted. During the end-to-end test, external interfaces will be submitted to all parties that have agreed to participate in the end-to-end test. A more limited set of test cases may be selected for this test at the option of ND. The primary purpose of this test is to validate the operation of the external interfaces in as near a production environment as possible.
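The baseline-versus-load comparison in step 6 above amounts to a calculation like the following sketch; the action names and the 25% threshold are illustrative assumptions, not agreed acceptance criteria.

```python
def degradation(baseline_seconds, loaded_seconds):
    """Percentage slowdown of a scripted action under load vs. its baseline."""
    return (loaded_seconds - baseline_seconds) / baseline_seconds * 100.0

def flag_regressions(measurements, threshold_pct=25.0):
    """Return the scripted actions whose response time degrades beyond
    the threshold. `measurements` maps action -> (baseline s, loaded s)."""
    flagged = {}
    for action, (base, loaded) in measurements.items():
        pct = degradation(base, loaded)
        if pct > threshold_pct:
            flagged[action] = round(pct, 1)
    return flagged
```

Actions flagged this way would be candidates for the tuning pass the plan anticipates after the load tests.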
3.2.4.4 User Impact
The use case and test script creation will be coordinated with both ND ITD and the Business/Agency groups to ensure the least impact on their normal work schedules, especially during peak processing periods.
3.2.4.5 Testing Schedule
Please refer to the Project Schedule (Appendix A) for the complete testing schedule.
3.2.5 Test Reporting and Feedback
The ND ITD and Business/Agency groups will perform the actual tests. Software AG Test Support Team will support the ND Testers and perform the following activities:
- Monitoring progress of the testing tasks
- Maintaining the status of each test case and/or script
- Accumulating statistics for passed and failed tests
- Maintaining a problems/unknowns database
- Tracking the resolution of defects and/or questions
ND Testers will submit test results to the Software AG Test Support Team, which will be responsible for updating the status of each test case to indicate if the item has been tested and if so, what the results were. In cases where the test was not successful or there is a question regarding the outcome of the test, Software AG Testers will work with the developers to seek a resolution to the problem. In some cases, system behavior may need to be confirmed on the mainframe. The problem will be reported via the WMS Problem Log.
Testers will be asked to assign a severity to problems encountered. Problems will be prioritized in severity order.
Errors identified through testing will be discussed with development team members and/or the Project Manager to verify that the observed behavior constitutes an error. The tester will log identified errors onto a problem-tracking form and electronically send it to the Software AG Test Support Team. The Test Support Team will coordinate a resolution with the development team. After the development team corrects an error, the Test Support Team will record the resolution onto the problem-tracking form and notify the test team. The function will then be retested using the same Test Script that detected the error and the tester will enter validated fixes onto the problem-tracking form.
The Software AG Test team will track the status of retests required and performed. The Software AG Test team will provide management reports to assist ND in understanding the progress of the testing. The status of each test case will be tracked on the report. At the conclusion of testing activities, a final report will be produced and submitted.
3.2.6 Discrepancy Management Process
Discrepancy management involves capturing, reporting, escalating, tracking and resolving problems that will occur on this project.
For this project we will use ITD’s Work Management System (WMS) to track those discrepancies.
During all phases of testing, we will be recording discrepancies as “problems” in WMS.
Problems are defined as discrepancies that someone other than the actual tester must take action to correct. If a tester finds a problem that can be traced to his/her own script or other internal document, it is resolved by the tester without creating a problem record.
If a problem cannot be easily resolved by the testing team or the technical migration team, then it may be elevated to an issue. Issues require a higher level of management support to resolve.
The following process diagram will control issue and problem creation for the project:
3.2.7 Recording Problems in WMS
A problem is defined as a discrepancy in which the migrated objects are not performing as expected, based on the validation routines described in the appropriate test cases.
Any team member can define a problem. Most testing team members will have direct access to WMS. If someone does not have access, they should work with other members of the migration team to define the problem.
The following steps are required to enter a problem into WMS:
Click on Problem Log and the following form will appear:
You will need to complete the following fields with the appropriate information:
Short Description: Short description of the problem.
Sub-Project: If you have sub-projects, select the sub-project that the problem relates to.
Test Phase: Linux Verification testing will use “System Test”; User Acceptance testing will use “Acceptance Test”.
Log Type: Will usually be “problem”. Other values should only be used with the Test Manager(s)’ agreement.
Priority: As defined in a later section.
Object Name: The name of the object that has the discrepancy.
Test Scenario: A brief indication of the test scenario, with reference to the test case number.
Tester Sign on: The sign-on of the person executing the test.
Description: Detailed information on the problem.
Initiator: Indicates who initiated the action item. Can only select one person.
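For illustration, the fields above can be modeled as a simple record showing which values a tester supplies. This is a hypothetical sketch; the class name and sample values are ours, not part of WMS:

```python
from dataclasses import dataclass

@dataclass
class ProblemLog:
    """Hypothetical representation of a WMS problem-log entry."""
    short_description: str
    sub_project: str
    test_phase: str      # "System Test" or "Acceptance Test"
    log_type: str        # usually "problem"
    priority: int        # 1-4, as defined in a later section
    object_name: str
    test_scenario: str
    tester_sign_on: str
    description: str
    initiator: str       # a single person

# Example entry a tester might log during Linux Verification testing
p = ProblemLog("Report totals differ", "Group 1", "System Test", "problem",
               2, "ORDR010I", "Test case 14", "jsmith",
               "Totals differ between mainframe and Linux output", "jsmith")
print(p.priority)  # 2
```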
3.2.8 Recording Issues in WMS
An issue is defined as a problem or question that, in order to be resolved, requires a decision be made by the Project Manager, Project Sponsor or Executive Committee, depending on the priority or impact that the issue will have on the overall project.
Only the test manager(s) or the technical migration manager can define issues or elevate a problem into an issue. Testing team members will bring potential issues to a Testing Project Manager to evaluate and determine if it should be entered as an issue.
The following steps are required to enter an issue into WMS:
Click on Issues and the following form will appear:
You will need to complete the following fields with the appropriate information:
Short Description: Short description of the issue.
Sub-Project: If you have sub-projects, select the sub-project that the issue relates to.
Date Required: Indicates the date by which the issue needs to be resolved. Enter the date or select it from the calendar.
Description: Detailed information on the issue.
Alternatives and Impacts: Describes all alternatives that could be considered and the impact of selecting each alternative. The alternatives/impacts could be modifications to a system, procedural changes, customer impact, or financial ramifications. If there is an impact to the cost or schedule, an impact of project change must also be created.
Recommendation: Describes which alternative is recommended.
Department Notes: Area for recording any type of notes.
Initiator: Indicates who initiated the action item. Can only select one person.
Assignee: Select the individual who is assigned to the action item. Can only select one person.
Notifications: Select the individuals who are to be notified when the issue is submitted for review. Can select multiple individuals.
Attachments: This area is used for storing the documents relating to the action item.
Resolution: Description explaining the assignee’s resolution of the issue. The resolution may be an agreement with the recommendation or a completely different resolution. If the resolution is different from the recommendation, make sure to clearly explain it.
Comments: This area can be used to record any information relating to this issue.
Deny/Return/Approve: Used to indicate whether the impact is denied, approved, or needs to be returned for correction. Only select one.
3.2.9 Key Definitions for Issues
Priority
The priority defines how quickly the reported issue needs to be resolved. The following matrix defines the priorities:
Priority 1 (Severe): Issue resolution within 8 hours, or an acceptable course of action discussed and agreed upon by ND, with the decision communicated to end users and all stakeholders.
Priority 2 (High-Impact): Problem resolution within 2 days or as agreed to by the Testing Manager.
Priority 3 (Intermittent): Resolution within 5 days in the absence of higher priority problems, or raise to priority 2 after 5 days, if warranted.
Priority 4 (Format/Appearance): Resolve as schedule allows.
Stage

The stage defines where the issue is in its life-cycle of resolution:

Open: The issue has been reported but not assigned.
Submitted: The issue has been assigned. The assigned person becomes the owner and is responsible for the resolution of the issue.
Retest: The issue has been resolved and is awaiting re-testing.
Returned: The issue has failed re-testing and is reassigned to the technical team.
Closed: The issue is closed.
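The stage life-cycle above can be encoded as a simple transition table, useful for validating stage changes. This is a hypothetical encoding of the stages as described, not a WMS feature:

```python
# Allowed stage transitions, derived from the stage definitions above.
TRANSITIONS = {
    "Open": {"Submitted"},          # reported issue gets assigned
    "Submitted": {"Retest"},        # owner resolves, issue awaits re-test
    "Retest": {"Closed", "Returned"},  # re-test passes or fails
    "Returned": {"Retest"},         # technical team re-resolves
    "Closed": set(),                # terminal stage
}

def can_move(current: str, new: str) -> bool:
    """True if moving from `current` to `new` follows the life-cycle."""
    return new in TRANSITIONS[current]

print(can_move("Retest", "Returned"))  # True
print(can_move("Closed", "Open"))      # False
```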
3.3 User Acceptance Testing
User acceptance testing is an important aspect of the project, contributing to ultimate project success. The test cycle will be established and testing executed in the ND Test environment by the ND team, assisted by the Software AG team.
User Acceptance Test Planning entails:
Defining the test strategy
Setting test objectives
Reviewing the testing methodology
Reviewing pre-defined acceptance criteria
Reviewing test scenarios/test scripts
Estimating time and resources (Software AG activity)
Finalizing the test plan (Software AG activity)
The User Acceptance Test Execution Process is equally comprehensive, encompassing:
Training team members
Executing the test plan
Tracking progress
Performing complete system/integration testing
Documenting test results
Defect reporting
Status reporting
Fixing defects (Software AG activity)
Going through life-cycle testing
Final reporting (Software AG activity)
The successful execution of all acceptance testing criteria, the resolution of all problems, and customer sign-off constitute the end of the acceptance testing phase, signaling that the environment is ready for production implementation.
4 Software Development Standards
5.1 Best Practices in Software Development Standards
While there will be very little new development for this project, it is still important to have consistent and clear processes for the development of new software and the purchase of new products.
Following are the system development and technical documentation standards for all new sub-components that need to be created or existing software components that need to be modified.
1. Requirements - Gathering and agreeing on requirements is fundamental to a successful project. Those requirements need to be validated, their size determined, and a plan for implementation established. (Each set of requirements describes functionality from the user’s point of view and can be converted to function points; assigning function points to a set of requirements helps us understand how large it is and the effort needed to produce it.) Then, ND needs to incorporate the sets of requirements into the system design to serve as the foundation of design reviews and turn them into code and documentation.
These requirements are the backbone of testing and documentation. When the requirements are clearly stated and testable, they form the basic system test plan. They are also well suited for user acceptance testing. Knowing where each set of requirements is within the software lifecycle is valuable for managing the project and determining status.
2. Architecture - Ensure that the new or modified components or new vendor packages fit within the existing architecture.
3. Design - There are two basic principles: “Keep it simple, and keep it modular.”
4. Construction of the Code - Construction of new code is a very small fraction of the total project effort, but it is very visible, as it will change the user interface.
5. Peer Reviews - It is important to review other people's work. Experience has shown that problems are eliminated earlier this way and reviews are as effective as, or even more effective than, testing. Where possible, two-person teams are employed to assess a particular task. The teams subsequently define and implement an agreed upon course of action to complete the task.
6. Testing - Testing is not an afterthought, or something to cut back when the schedule gets tight. It is an integral part of software development that needs to be planned. It is also important that testing is proactive, meaning that test cases are planned before coding starts and are developed while the components are being designed and coded.
7. Integration - Testing is usually the last resort to catch application defects. It is labor intensive and usually only catches coding defects.
8. Configuration Management - Configuration management involves knowing the state of all objects that make up the ND system or project, managing the state of those objects, and releasing distinct versions of the migrated code.
9. User Acceptance Testing - This is a critical step before deployment. This testing is executed in the ND Test environment. The successful execution of all acceptance-testing criteria, the resolution of all problems, and sign-off constitute the end of the acceptance testing phase.
10. Deployment - Deployment is the final stage of releasing ND or any application module with integrated sub-components for users.
11. System Operations and Support - Without the operations department, you cannot deploy and support a new application. The support area is vital for responding to and resolving user problems. It is important to assess the change management process for changes or enhancement requests, to establish a change management process in the new Linux environment, and to communicate it to all users.
12. Documentation - The Systems documentation, Operations manual, and User Guide are updated and provided by the North Dakota ITD team.
13. Project Management - Project management is key to a successful project. Many of the other best practice areas described here are related to quality project management.
4.1 Quality Assurance Plan
Quality Assurance is a key part of the overall Professional Services project methodology. We have a standardized Project Management approach that follows industry best practices; however, we allow for client customization as required. This Project Plan, including a Project Schedule and Risk Log, will be reviewed and approved with the ITD Project Manager. These documents are ‘living’ documents in that they are modified throughout the project to track progress. The SAG and ND ITD Project Managers will perform a Risk Analysis of the project. Risk mitigation plans are identified, with customer input in some cases, to avoid incurring risks. The project schedule and Risk Log are provided to the customer upon request and will be reviewed every two weeks.
The prime communication vehicle on any project is a weekly status report, usually accompanied by a weekly status review meeting. The status report template not only documents what has been accomplished, but also identifies issues, next steps, and schedule metrics. This provides a continual channel of information on project progress to the entire project and management teams. Issues are escalated as necessary, and the resulting actions are all captured in subsequent status reports and issue logs in WMS. Throughout the entire project, the status reports provide the insight that the team needs to be successful.
Our methodology allows for milestones or checkpoints to ensure quality delivery in each step of the project. At the end of each phase, a Go/No-Go or Acceptance milestone is identified for customer concurrence. The status reports will identify upcoming milestones and any delays that may occur due to dependencies or issues identified by the team. Subsequent phases of a project are dependent on completion or acceptance of the prior phase milestone.
Prior to any deliverable creation, the Project Manager will inquire as to any customer preferred templates; otherwise, Software AG templates will be used. Deliverable tracking is handled via signature documents including Delivery Receipt and Acceptance. When the document is delivered, the customer will be asked to sign a Delivery Receipt form. After being given an
adequate review cycle or completing a scheduled review session, the client may be asked to sign an Acceptance Form. If any changes are required, the items are noted in the status report and meeting minutes. Changes are then incorporated and the deliverable is resubmitted to the client for review and signoff. If any modification implies a change in scope, the client will use a Change Control form to document the project impact for approval.
Software AG, Inc. periodically holds internal project reviews on key accounts to verify compliance with Software AG Project Management Procedures and Policies and to ensure the project is receiving the support required to be successful. Project Managers are encouraged to schedule mid-project or periodic project reviews with clients as well.
4.2 Considerations for Mainframe to Linux Migration
Migration from the mainframe to Linux must take into consideration the many differences between the operating system environments. The more significant differences are highlighted below.
Feature | Mainframe | Linux
Character coding | EBCDIC | ASCII
Data types | Binary, Floating, Integer, Zoned Decimal, Packed Decimal, Character | Character
Record formats | Blocked, unblocked variable, fixed or spanned | Unblocked variable records
File Access | SAM, DAM and VSAM | Sequential access
Batch Job Language | JCL | Shell scripting
Sort sequence | blank, a-z, A-Z, 0-9 (values between blank and 99999) | 0-9, A-Z, a-z
Label processing | Standard, Non-Standard and unlabeled | Not available
Job Scheduling | Mature scheduling packages | Cron scheduling packages
Data Set/File management | Catalog data sets, retention periods, expiration dates and space allocation | Logical Volume Manager, sequential files
Security | RACF / ACF2 | Trusted Computing
User Exit Language | Assembler | C / Natural
Terminal Emulation | IBM 32xx | DEC vtxxx
Text Editors | ISPF, ICCF and XEDIT | VI, EMACS
Queue Management | JES/POWER | Third party packages available
System log | All job initiations and messages are routed to the system console | Standard Output / Standard Error
Figure 8: Table of Differences between Source and Target Environments
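The sort-sequence difference in Figure 8 can be demonstrated directly. The sketch below is illustrative only; it uses Python's cp037 codec as a stand-in for mainframe EBCDIC:

```python
# The same characters sort differently when compared as EBCDIC bytes
# (mainframe) versus ASCII bytes (Linux).
data = ["a", "Z", "9"]

# ASCII byte order: digits < uppercase < lowercase
ascii_sorted = sorted(data, key=lambda c: c.encode("ascii"))

# EBCDIC (code page 037) byte order: lowercase < uppercase < digits
ebcdic_sorted = sorted(data, key=lambda c: c.encode("cp037"))

print(ascii_sorted)   # ['9', 'Z', 'a']
print(ebcdic_sorted)  # ['a', 'Z', '9']
```

This is why migrated applications that depend on collating order (sorted reports, range checks on character keys) must be reviewed, not just recompiled.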
Data Transfer to Linux - Data files must be unloaded from the IBM mainframe product database and decompressed. The files are then transferred via IBM FTP. There are two transfer types available: binary and ASCII. Binary is used to transfer Adabas decompressed files. ASCII is used to migrate the sequential work files created by SYSTRANS, SYSDICBE, or by the application.
Non-alpha Data Embedded in Adabas Alphanumeric Fields - Data files are transferred to Linux via binary transfer, which means that EBCDIC data resides on the Linux platform. This data is then processed by the Adabas utility cvt_fmt, which converts the record format to an exclusive length formatted file. The converted file is then read and the data compressed by the Adabas utility ADACMP, prior to being loaded into Adabas. Alphanumeric fields are translated during this process via an internal table from EBCDIC to ASCII. If an ND application redefines an Adabas alphanumeric field type to include the Natural data types Date (D), Time (T), Packed Numeric (P), Binary (B), Integer (I), Floating (F), Attribute Control (C) or Logical (L), the data will be translated incorrectly during the initial load. If this data is accessed from a mainframe application at a later date, the data will also be returned incorrectly.
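The corruption mechanism can be sketched in a few lines. This is an illustration of the principle only (Python's cp037 codec stands in for the Adabas translation table, and the sample record bytes are ours):

```python
# A record whose "alphanumeric" field actually contains EBCDIC text
# followed by a two-byte binary value (0x002A = 42).
record = b"\xC1\xC2" + b"\x00\x2A"   # EBCDIC "AB" + binary 42

# Character translation treats every byte as EBCDIC text, as the
# EBCDIC-to-ASCII table does for fields declared alphanumeric.
translated = record.decode("cp037").encode("ascii", errors="replace")

print(translated[:2])                 # b'AB' - the text part converts correctly
print(record[2:4], translated[2:4])   # the binary bytes no longer match
```

The true alphanumeric bytes survive, but the embedded binary value is destroyed, which is exactly why fields redefined to hold non-alpha data must be found before the initial load.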
Solution: Check the Adabas files prior to migrating to discover whether the application places data other than alphanumeric data (A-Z, a-z, 0-9, special characters) in Adabas fields that are defined as alphanumeric. The preferred method is to change the layout of the record to reflect the true nature of the data. This may require creation of super-descriptors, creation of new fields, and application modifications, but it will avoid potential problems in the future. To change the data without changing the application:

1. Write a Natural program to read the file on the mainframe, creating a sequential file containing the fields with binary data translated to standard unpacked numeric data.
2. Transfer that file to Linux via ASCII transfer.
3. Unload the Adabas file from the mainframe, transfer it normally, and load it to Linux. The data in the fields will initially be wrong.
4. Write a Natural program to read the sequential file created on the mainframe and update the fields in question back to the original values. Adabas will handle the redefine once the data is in the database.
Developing/Translating Scripts
Please see section 4.4.8 for detailed information on the treatment of JCL for the migration.
Adabas Database
The new environment is almost a one-to-one mapping of component technology. When implementing an Adabas environment on the Linux platform, the appropriate products are acquired and installed with their respective licenses.
DB2 Database
All DB2 tables will be converted, like-for-like, to the Linux environment.
ND currently runs DB2 V7, for which IBM is dropping support in 2006. Prior to migrating the DB2 data to the Linux environment, ND will have to perform a DB2 version update on the mainframe.
Lotus Notes Interface
An ODBC interface to Adabas will be supplied via the EntireX SQL gateway, and normal ODBC connectivity will be available against DB2 or Oracle.
OnDemand
Per IBM’s website, “DB2 Content Manager OnDemand for Multiplatforms V8.3,” a member of the DB2 Content Manager portfolio, provides enterprise report management and electronic statement presentment. It also helps organizations effectively manage large volumes of computer-generated output.
What's new:
Extends Linux support to DB2 Content Manager OnDemand for Multiplatforms V8.3
o Supports Red Hat Linux and SUSE Linux
o Delivers cost-effective and highly scalable platform flexibility
Provides streamlined delivery of reports and/or check images on CD-ROM
Includes DB2 Content Manager OnDemand Web Enablement Kit (ODWEK) in the base DB2 Content Manager OnDemand for Multiplatform product
Additional information pertaining to migration is available from IBM’s Redbook manual SG24-6409-00.
OfficeVision
The OfficeVision application will be replaced with a customized solution by Software AG. A solution provided by a software vendor, CINCOM, was determined not to meet North Dakota’s budget and technical requirements.
The Software AG solution is being determined at this time. It is expected that a separate design document will be issued and a prototype developed prior to group 3.
VSAM Files
All VSAM files accessed by Natural will be converted to ADABAS file formats. Data will be migrated from the mainframe to Linux and any coding issues will be remedied.
All VSAM files that are exclusively accessed by COBOL will be converted to ISAM file structures. Data will be migrated from the mainframe to Linux and any coding issues will be remedied.
For any non-Natural VSAM files that are not already defined in Predict, the file layouts must first be defined in Predict so as to make them appear as Natural VSAM files on the mainframe. For standard COBOL VSAM there is a known method by which the Predict definitions can be generated from the original COBOL copylibs.
The VSAM conversion decisions are outlined in the diagram on the next page:
For each VSAM file being converted to ADABAS, an equivalent ADABAS file of identical format will be created on the mainframe before being migrated to Linux. This is a 3-step process:
1. Assign ADABAS file numbers - For each of the required VSAM files, an available ADABAS file number will be assigned.
2. Create ADABAS Predict definitions - For each file, using the VSAM file definitions in Predict, an ADABAS definition will be created. This will contain the new ADABAS file/view name and the field data/format definitions, as they will appear in ADABAS.
3. Generate empty ADABAS files - For each file, using the new ADABAS conceptual file definitions in Predict, a new ADABAS file with the assigned name and file number will be generated.
Once all VSAM files are defined as Natural VSAM files in Predict and the equivalent ADABAS files have been created, the data from the VSAM files can be loaded to ADABAS. Once the ADABAS files are created and loaded with current data on the mainframe, they can be migrated to Linux. The ADABAS files will need to be created on Linux and the DDMs generated. Once the files exist on both the mainframe and Linux, they can be migrated in the same manner as the already established ADABAS files on the system.
Tape Datasets
JCL streams referencing tape files are probably stored in EBCDIC and will have to be converted to ASCII. North Dakota will be converting these tape files and saving them to disk.
Data Conversion and Movement Activities – Data Extraction, Transformation, and Loading
This set of tasks readies the data for the new environment. After we install the supporting database environment, we can create the database objects to accept the data. Adabas and DB2 data will then be unloaded from the mainframe databases and loaded into the corresponding databases on the target platform.
The Data Dictionary (Predict) will be migrated to the corresponding product on the target platform. Prior to the migration of Predict information, it is imperative that the consistency check be run on the mainframe dictionary. This removes many problems with internal IDs and also removes null characters from the dictionary. The data dictionary can be transferred between platforms using the ALF format export facility.
The user security profiles and other Natural Security data pertaining to ND (Natural Security) will be migrated to the corresponding product on the target platform after all the testing is completed.
All the data files comprising the ND system, their associated File Definition Tables (FDTs), Data Definition Modules, and Predict entries will be migrated from the mainframe to the target environment.
4.3 Pre-Migration Activities
Pre-migration activities are also centered on acquiring suitably configured target platform servers at ND. Once the Linux server is in place, creating a new development environment – loading and configuring the required software products on both the server and client machines and defining new users on the server – will take place.
We also suggest reviewing implementation of specific activities that will not only prepare the applications for migration but also position the ITD customers and staff well to take advantage of new options. This includes but is not limited to:
Implementing the new development environment for Natural and COBOL using the current mainframe environment.
Increasing communication to the staff and customers on status and expectations.
4.4 Preparing the ND Application Infrastructure for Migrating to Linux
When the migration environment is in place, we can begin transforming the application source code and any third-party scripts that are used to support the ND application. Once the hardware platform is in place, we can also begin creating the application infrastructure.
4.4.1 Identifying/Preparing ND Modules for Migration
Natural and COBOL modules are migrated to the target platform as is. The batch components will be dynamically converted to Linux shell scripts by the Cronus ESPBatch product. FORTRAN, DYL280, REXX and Assembler modules will be rewritten for use on the target platform.
Appendix B lists the ND objects that will be migrated.
4.4.2 Natural
Natural programs run on various hardware platforms ranging from mainframe to PC, including IBM mainframe, UNIX, Linux, OpenVMS and Windows systems.
Natural programs are mostly unaffected by technical restrictions, because the platform-dependent functionality is contained in Natural service calls (for example, drivers, database interfaces, CM-Calls). This makes the Natural programming system well suited for the implementation of portable applications.
Nevertheless, to achieve full portability some points have to be kept in mind when migrating Natural applications:
Different character sets
Redefined binary fields
Special characters
Platform dependencies
Statement syntax extension
User interface modules
Transfer of applications
Portable source code
4.4.3 COBOL
For the most part these sources will require minimal change to the source code. COBOL programs accessing Adabas or VSAM will be recompiled and any issues emerging from the recompilation will be addressed. Some of the typical issues that may require attention relate to specific mainframe-oriented COBOL syntax and/or operating system calls.

COBOL programs where the source is integrated with ISPF will need to be converted into pure COBOL equivalents, and all references to ISPF will need to be programmed out of the programs.

COBOL programs that make use of exotic pre-processors to interpret embedded syntax (e.g., with EXEC RPI) will need to be migrated.
4.4.4 REXX
REXX is an IBM mainframe oriented scripting language. Although support is provided by various products, including Micro Focus ES/MTO, our view is that the level of this support is limited and will potentially not address the requirements of the REXX code we have seen. REXX code will be converted to Perl, shell scripts, or other technologies depending on the business functions provided by the current REXX routines.
4.4.5 DYL280
Software AG will provide a semi-automated conversion of all DYL280 code to either Natural or other technologies as may be required. The converted code will be well structured and formatted, to ensure readability and ease of maintenance. The decisions and types of conversion made are described in the diagram below:
4.4.6 ASSEMBLER
Unlike Natural and COBOL, which are largely portable across all platforms, Assembler is very platform and operating system specific. It will therefore need to undergo conversion. The Assembler programs will be converted to COBOL through a mixture of automation and manual work.
4.4.7 FORTRAN
As with Assembler, FORTRAN is another of the technologies where conversion is the only reasonable alternative.
Software AG will convert all FORTRAN programs to COBOL. These programs will be converted through a mixture of automation and manual work.
4.4.8 Batch Jobs/JCL
The ND system comprises various online and batch processes. Batch jobs have been created to run certain repetitive functions of a fairly lengthy duration, not requiring direct intervention. Online functions are of a short duration and require frequent human intervention.
Regarding batch, the current system runs pre-existing JCL job streams at the end of the day. In addition, JCL is created and submitted ad hoc as needed; specifically, JCL is submitted via NATRJE processing.
During migration, those Natural modules containing NATRJE/JCL processing logic will be amended to incorporate the EspBatch RJE processes methods, whereby Linux shell scripts are automatically generated and submitted in batch mode.
The JCL and RJE batch replacement phase actually entails several crucial steps:
RJE Programs:
Modify Natural programs to incorporate the EspBatch method.
Recode SORT sequences, using Linux SORT functionality, on a case-by-case basis; modify Natural programs to utilize ESPSORT syntax (passing of parameters).
Unit test Natural programs using the EspBatch method.

JCL modules (Cataloged):
JCL and PROC cleanup process (remove unused JCL and PROCs).
Convert JCL to Unix SCL using the EspBatch conversion toolkit.
Expand PROCs where used within SCL.
Replace MF utility steps with ESP equivalents (e.g. IDCAMS, IEBGENER, FTP, etc.).
Software AG will implement a dataset-naming standard for work files created during batch processing. Consistent and logical naming standards will serve the dual purpose of simplifying maintenance activities and addressing security requirements.
Various SUBSYSTEMS can be defined and used as the actual disc location for each step referencing datasets. These SUBSYSTEMS are predefined and the corresponding Linux directories created.
E.g.: SUBSYSTEM OPSWORK will actually point to directory /data/wf/ND/OPSWORK.
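The SUBSYSTEM-to-directory mapping described above can be sketched as a lookup. This is a hypothetical illustration of the concept; the function name is ours, and EspBatch's actual mechanism is internal to the product:

```python
# Predefined SUBSYSTEM-to-directory mapping (OPSWORK example from the text).
SUBSYSTEMS = {"OPSWORK": "/data/wf/ND/OPSWORK"}

def resolve_dataset(subsystem: str, dataset: str) -> str:
    """Return the full Linux path for a dataset referenced via a SUBSYSTEM."""
    return f"{SUBSYSTEMS[subsystem]}/{dataset}"

print(resolve_dataset("OPSWORK", "LEG.HEAD.DATA"))
# /data/wf/ND/OPSWORK/LEG.HEAD.DATA
```

Keeping the mapping in one table means a step's SUBSYSTEM parameter can be re-pointed to different storage without touching the converted scripts.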
This subsection will explain in more detail how the batch JCL processes are converted from the mainframe environment to the Linux environment. The following table illustrates sample mainframe steps with the corresponding Linux steps while using the ESP routines.
Function: IDCAMS
Mainframe JCL:
//STEP1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DELETE LEG.HEAD.DATA
Linux Script Equivalent: To be replaced with program ESPDSDEL. Workfile parameter: LEG.HEAD.DATA. SUBSYSTEM parameter: OPSWORK.

Function: NATURAL (summarized here; a more detailed example is shown below)
Mainframe JCL:
//STEP3 EXEC ADACG108
//CMPRINT DD SYSOUT=5,HOLD=YES
//INPUT……..
Linux Script Equivalent: natural batch parm=prod (the Natural parameter prod will have all the necessary CMPRTXX statements to allow the routing of reports and device assignment mapping).

Function: SYNCSORT
Mainframe JCL:
//STEP4 EXEC PGM=SYNCSORT
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SORTWK01 DD UNIT=SYSDA,SPACE=(CYL,20)
//SORTIN DD DSN=LEG.HEAD.DATA,DISP=OLD
//SORTOUT DD DSN=LEG.HEAD.DATA,UNIT=DISK,DISP=OLD
//SYSIN DD *,DCB=BLKSIZE=80
SORT FIELDS=(1,6,CH,A)
Linux Script Equivalent: Will be replaced using the Linux sort command via program ESPSORT, using the same SORT FIELDS parameters and dataset names as on the mainframe. The SUBSYSTEM can also be supplied to route the SORTOUT to a different disc section.

Function: FTP
Mainframe JCL:
//FTPSTEP EXEC PGM=FTP2,PARM=" / FIOS "
//SYSPRINT DD SYSOUT=*
//SYSPUT DD SYSOUT=*,DCB=BLKSIZE=133
//SYSGET DD DSN=LEG.LIB.CNTL(CLKFTP),DISP=SHR
// DD *,DCB=BLKSIZE=800
B:STRUCT FB:TYPE IPUT' #TSONAME #FSERVER-NAME STAT
Linux Script Equivalent: Replace with program ESPFTP, using the same parameters as on the mainframe JCL.
Figure 9: Comparison - Mainframe JCL versus Linux ESPBatch SCL
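The SYNCSORT step in Figure 9 sorts records ascending on a 6-byte character key starting in column 1 (SORT FIELDS=(1,6,CH,A)). The equivalent key extraction can be sketched in a few lines; the sample records are illustrative:

```python
# Fixed-position records: the sort key is columns 1-6 (character, ascending),
# matching SORT FIELDS=(1,6,CH,A) in the mainframe step.
records = [
    "000245REST-OF-RECORD",
    "000100REST-OF-RECORD",
    "000173REST-OF-RECORD",
]

sorted_records = sorted(records, key=lambda r: r[0:6])

print([r[:6] for r in sorted_records])
# ['000100', '000173', '000245']
```

Whether the Linux sort command or ESPSORT performs the sort, the same positional key definition applies; only the collating sequence (see Figure 8) needs review.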
During the conversion extra consideration is given to replacement of the following:
1) Condition codes on step and job level (to be included in the Appworx Scheduler)
2) Replacement of System routines like IECGENER, IEFBR14, IEBGENER. All of these will be replaced by equivalent ESP routines. (See attached documentation on example routines)
3) Dispositions on workfiles where DISP=MOD is required (Linux overwrites files)
4) GDG replacements
5) Restart capabilities
6) Procedure flow
7) Overall submission and management
8) The actual submission of these Natural SCL modules will be done via EspBatch. The initial trigger will, however, come from the Appworx scheduler, with information passed to EspBatch using the EspBatch API (i.e. SCLNAME, SCL step number, etc.)
IBM generation files are recognized by the suffixes (0), (+1), (-1). The Linux converted files will have an underscore '_' suffix. The mapping of these GDG datasets is done internally by EspBatch, and a maximum number of versions is set per SUBSYSTEM. The same syntax used on the mainframe is used to retrieve previously allocated datasets, e.g. ORDR.ORDR010.RR(-2).
E.g.: <SUBSYSTEM>/MYFILE.TXT.VERSION_NR
Sample conversion - GDG files:
BEFORE
//RETURNS  DD DSN=ORDR.ORDR010I.RETURNS.RR(0)
//REPTFILE DD DSN=ORDR.ORDR270F.REPORT(+1)

AFTER
/<SUBSYSTEM>/ORDR.ORDR010I.RETURNS.RR_0223
/<SUBSYSTEM>/ORDR.ORDR270F.REPORT_0234
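A minimal sketch of how relative GDG references could resolve against the underscore-suffixed files. EspBatch performs this mapping internally; the function and file names below are purely illustrative.

```shell
# Three hypothetical generations of a converted GDG dataset.
mkdir -p subsys
touch subsys/ORDR.ORDR010I.RETURNS.RR_0221 \
      subsys/ORDR.ORDR010I.RETURNS.RR_0222 \
      subsys/ORDR.ORDR010I.RETURNS.RR_0223

# Resolve a mainframe-style relative generation number:
# (0) = newest, (-1) = previous, (-2) = the one before that.
resolve_gdg() {
    base=$1; rel=$2
    # List generations newest-first, then pick the (1 - rel)-th entry.
    ls "$base"_* | sort -r | sed -n "$((1 - rel))p"
}

resolve_gdg subsys/ORDR.ORDR010I.RETURNS.RR 0
resolve_gdg subsys/ORDR.ORDR010I.RETURNS.RR -1
```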
The migration of production ITD JCL includes full testing. Non-production JCL will also be converted using the EspBatch toolkit but will not be tested.
Information regarding JCL, data files, and other topics can be obtained from HTML analysis reports created by system prefix, as defined by North Dakota ITD staff, and stored on a DVD-ROM. The primary process is to locate the specific system-prefix directory and then double-click the index.html file, which displays the following Internet Explorer (IE) presentation:
The user should then select either “type” or “data access,” and the associated presentation will be displayed.
The user can then “drill down” to a greater level of granularity to examine the relationships among programs, JCL, and files.
4.4.9 External Interfaces
The impact of any EBCDIC/ASCII differences will have to be identified and addressed.
Several ND systems maintain interfaces with one another (via FTP). To facilitate similar functionality with the new Linux platform, testing criteria must be developed that includes tests to confirm that the data exchanged remains the same as in the current system.
The data will need to be checked because the mainframe creates EBCDIC output while Linux produces ASCII. We need to ensure that these external partners can accept ASCII and are also able to send ASCII-formatted data.
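One way such a check could be scripted, as a sketch: IBM code page 037 is assumed here as the mainframe encoding and must be confirmed per interface, and the record contents are hypothetical.

```shell
# An ASCII record as produced on the Linux side (contents hypothetical).
printf 'CUST001 JOHNSON  FARGO ND\n' > record.ascii

# Convert to EBCDIC (code page IBM037 assumed) and back again.
iconv -f ASCII -t IBM037 record.ascii > record.ebcdic
iconv -f IBM037 -t ASCII record.ebcdic > record.roundtrip

# The EBCDIC copy must differ byte-for-byte from the ASCII original,
# but the round trip must be lossless before partners are cut over.
cmp -s record.ascii record.roundtrip && echo 'round trip OK'
```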
Please see Appendix G for greater detail on external interfaces.
4.4.10 Printing Considerations
For the current DCF (AFP) printing solution, we recommend replacing the special reporting method used (IBM DCF) with IBM’s InfoPrint. InfoPrint is currently used by North Dakota, and absent any identified problems, we will move forward with it. Should problems occur, Software AG will evaluate Optio as a solution and provide a proof of concept (POC) to ND.
Normal ASCII printers should be connected directly to the network or slaved off PCs using special software, and managed by a print spooler. Additional software will be required if the current infrastructure does not support LPD-type printing from the PCs.
Variable Data Intelligent PostScript PrintWare (VIPP) is a programming language that provides nearly unlimited capability and flexibility to its users. VIPP is a set of high-level functions written with the PostScript language. VIPP adds to PostScript functionality and enhances your ability to print complicated documents at the rated speed of your print device. This means that the job containing variable data, images, logos, and other memory-intensive job segments can be printed without the delays that are encountered in similar non-VIPP applications. Limited tests of VIPP using data report output from the North Dakota Red Hat Linux effort have not shown any negative issues arising from reading in the data report files. Based upon those results it is not expected that VIPP users would encounter errors during the report enhancement activities.
4.5 Migration Activities at ND
In the migration phase at ND, Software AG will work with North Dakota ITD staff to ensure that the Production Server and the Development Server are configured, the respective software components are loaded, and the Adabas and Predict data are loaded into the appropriate Adabas databases and files.
Natural code will be modified, where necessary, to function correctly in a Linux environment. In addition, any required FORTRAN or Assembler code functionality has been re-written in Natural or COBOL.
Migration involves the movement of application modules with business logic, business data, configuration data, and metadata from the Software AG server to the ND Linux environment. The
process also involves Software AG verifying the current versions of Software AG software and comparing them with the software held by ND to ensure proper versions are installed in the new target environment.
Next, the build activities for both the production and the development/test platforms at ND will take place. The infrastructure must be constructed so that applications can be hosted.
The various objects to be migrated will be categorized and copied to various containers in the Linux environments utilizing installation scripts for copy strategy. The data migration process will be a logical copy from the Software AG server into ND Linux Adabas databases via FTP.
Modifying the Facilities
The facilities must be modified, if necessary, prior to hardware installation. These activities are coordinated to reduce their impact on the existing environment.
Creating the Networking Infrastructure
This task is the responsibility of ND. It involves preparing the network for the new platform. This is when the decisions are made about IP addresses, routing, and network masks, if any. If warranted, new load balancers, switches, hubs, cable drops, and the like will have to be deployed and tested. Care must be taken to minimize the impact of these activities on the existing environment.
Deploying the Compute and Storage Platforms
ND shall perform this task, involving all hardware and operating system testing (e.g., RAID effectiveness) for the production and development machines. Prior to installing the Linux platform, the supporting infrastructure (for example, facilities and networking) should be in place.
4.6 Configuration Management
A Configuration Management system is essential for those clients where change control between Production, Test (QA) and Development environments is of great importance. Rational ClearCase will be used as the configuration management tool for all code. North Dakota is responsible for administering the usage of Rational ClearCase.
5 Security Plan
North Dakota is responsible for all aspects of the security plan.
A major concern for the migration effort lies in transferring the existing security environment to the migrated systems. Achieving a migration that is transparent to end users requires that the underlying security system provide the same functionality in both the existing and migrated environments. The following sections describe the highlights of the existing security environment, followed by proposals that would provide this same functionality.
Current State
The state currently uses RACF as the core of its mainframe security environment. At present:
- Users authenticate and are authorized via RACF.
- Access to programs is controlled via RACF. Note that this depends on the RACF Access Control List (ACL) mechanism.
- Access within DB2 is controlled via RACF userids and groups.
- Access to CICS transactions is controlled via RACF userids and groups.
- Access to individual datasets is controlled by access lists that utilize the prefix of the dataset name (the first 2, 3, or 4 characters in the dataset name). Individual users, or groups of users, are granted access to all datasets whose name begins with a given prefix.
- There is no limit to the number of users/groups that may appear in a given object’s ACL.
- Users are given access, or have access removed, through a request system. Users may request that other users be added to, or removed from, the access control list for a given dataset. Each agency specifies who may request such access changes.
- RACF provides extensive auditing capability: who changed what, and when.

It should be noted that the majority of these access control rules are used to restrict data/program/transaction access to the agency that owns the data/program/transaction. There are, however, exceptions to this default.
5.1 UNIX User Authentication/Authorization and Active Directory Integration
UNIX natively provides a user authentication/authorization system that is analogous to that provided by RACF. These natively provided security services depend on sets of information that are traditionally stored within UNIX flat files. The source of this information can be externalized to an LDAP server: the LDAP server contains the needed userid, password, group, user home, shell, and other information that is traditionally stored within UNIX flat files. Use of LDAP as a security information provider is transparent: the UNIX operating system is simply configured to query the LDAP server for this information.
While Active Directory is an LDAP server, it does not natively contain all of the information required to serve as a security information provider for UNIX. In particular, the native Active Directory schema provides no place to store this information. However, Microsoft provides a product, Windows Services for UNIX, that extends the schema of Active Directory so that Active Directory can provide this information. In versions of Active Directory after Active Directory 2003, these schema extensions are provided as part of the base product.
It is proposed that Windows Services for UNIX be utilized so that Active Directory can serve as the security information provider for UNIX. It should be stressed that this will be transparent to applications: applications will utilize standard UNIX calls, and the operating system will manage retrieving the information from Active Directory.
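An illustrative fragment of what this configuration amounts to. The module names depend on the chosen LDAP client packages, and the lookup shown simply exercises the standard NSS path that applications use.

```shell
# Illustrative /etc/nsswitch.conf entries: consult local files first,
# then the LDAP (Active Directory) source for the same data.
cat <<'EOF' > nsswitch.conf.example
passwd: files ldap
group:  files ldap
shadow: files ldap
EOF

# Applications keep making standard calls (getpwnam, getgrnam, ...);
# 'getent' drives the same NSS lookup chain, so it verifies the
# configured sources end to end once LDAP is in place.
getent passwd root
```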
There are two major issues that will need to be tested:

- UNIX traditionally does not support groups within groups, and tests will have to be conducted to determine if this feature of Active Directory can be used.
- UNIX traditionally does not support group names that contain spaces, while Active Directory does.

It should be noted that both of these issues can be obviated by creating new groups for all migrated functionality. However, this would not be ideal from an administrative perspective.
5.2 Access Control Lists
The traditional UNIX security model contains no notion of Access Control Lists (ACLs). RACF, on the other hand, makes extensive use of ACLs. The base UNIX security model limits access control to the user/group/world model:
- Individual users may belong to one or more groups, with one group being their default group.
- Individual objects (i.e., files or programs) have the following attributes: an owner and a group to which the object belongs.
- Read, Write, or Execute (rwx) privileges to an object can be granted to the owner, the group to which the object belongs, and everyone else (i.e., the world).

Thus, in the traditional UNIX model, whether a given user can execute a program, or read a file, or write to that file, is determined by the rwx attributes that are tied to the given program or file.
This base UNIX access control model clearly presents challenges, in particular the limitations imposed by an object’s group membership. Fortunately, there are extensions to this base model that provide help.
The Linux 2.6.x kernel contains POSIX ACL functionality. This extension provides a RACF-like (or Windows-like) access control list for individual objects.
- An individual object retains its ownership and group attributes; this provides backwards compatibility.
- Additional users, or groups, may be added to the object’s access control list.
- If the object is a directory, an inheritance ACL can be defined: any object created within that directory will be created with the inheritance ACL in place.
It is proposed that the POSIX ACL functionality present in the Linux 2.6.x kernels be used as a replacement for the RACF ACL functionality. Because the current RACF ACLs are maintained by ITD based upon Agency requests, this should be very straightforward.
As described above, two major issues for testing are the groups-within-groups functionality provided by Active Directory and group names containing spaces.

5.3 DB2, RACF, and Active Directory
The current DB2 environment externalizes its security: aside from critical maintenance accounts, no user accounts are defined within DB2. Rather, DB2 utilizes the information stored within RACF to provide user authentication. DB2 resource authorization specifications are defined in the DB2 system security tables. IBM uses this same approach on all the platforms on which DB2 runs: user authentication and access control can be externalized to the natively provided security
system. Simply put, DB2 can be configured to rely upon the underlying operating system to provide the user/group information needed for DB2 to provide access control.
As has already been proposed, the Linux OS will be configured to utilize Active Directory as its security information provider. In turn, DB2 will be configured to externalize its security to Linux, just as it does to RACF today on the mainframe. Thus, Active Directory can be used to provide the needed security information to DB2.
As described above, two major issues for testing are the groups within groups functionality provided by Active Directory and group names containing spaces.
5.4 Natural Security
Currently on the mainframe, userids are defined in RACF and in Natural. Interactive Natural uses the user's mainframe id passed to it from CICS; that is, Natural trusts that CICS has done the authentication and does not ask the user to supply a password. The specific item in question is the maximum length of the Natural userid when Natural runs on a platform other than the mainframe. The maximum length of a mainframe Natural id is 8 characters. If we utilize Active Directory as the repository of userids, Active Directory ids can be, and often are, longer than 8 characters. What maximum length of ids can the non-mainframe Natural handle?
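A quick way the at-risk ids could be surfaced (a sketch; the sample entries are hypothetical, and against a live system `getent passwd` would replace the sample file):

```shell
# Hypothetical passwd-style entries as they might come from Active Directory.
printf '%s\n' \
  'jsmith:x:5001:100::/home/jsmith:/bin/bash' \
  'longusername01:x:5002:100::/home/longusername01:/bin/bash' \
  > users.sample

# Flag any userid longer than the 8-character mainframe Natural limit.
awk -F: 'length($1) > 8 { print $1 }' users.sample
```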
5.5 CICS and Micro Focus
The current CICS environment depends upon RACF for its security context: which transactions can be run by which users is based upon RACF. Continued work and analysis will need to be performed to determine whether or not Micro Focus externalizes security.
5.6 Access to Datasets or Files
At present, access to individual datasets is controlled by access lists that utilize the prefix of the dataset name (the first 2, 3, or 4 characters in the dataset name). Individual users, or groups of users, are granted access to all datasets whose name begins with a given prefix. The majority of the access control rules are used to limit access to a dataset to the agency that owns that data. In some sense these rules can be thought of as creating a hierarchical structure on the mainframe’s flat file system. In addition, these filename-based rules provide an inheritance function: any file created with a given prefix automatically acquires the required access controls.
While UNIX has no prefix based access control mechanism, the tools described above, when combined with a well designed directory structure, can provide this same functionality. The following is an example of one approach that could be taken:
- A directory structure containing directories for each agency is created. For example, /agency/dhs for DHS, /agency/itd for ITD, /agency/dot for DOT, etc. would be suitable.
- Access to this agency-specific directory would be granted to all agency users. In addition, the POSIX ACL inheritance rules would be set to grant access to all agency users.
- Any files for which all agency users have the same access rights would be placed in this top-level agency directory.
- For files that are only accessible/writable by a subset of agency users, this approach is iterated. Imagine that a group of DHS users are the only users that should access a set of files. Then:
  o A group containing these users is created within Active Directory.
  o A subdirectory /agency/dhs/groupname is created with the ACL and inheritance set as needed.
  o Files subject to this ACL are created in this directory.

This approach can clearly be replicated as often as required.
While the example given above illustrates an approach that will work, the currently existing ACLs within RACF need to be analyzed. This analysis will permit creation of a directory structure that best fits.
5.7 Auditing
RACF and z/OS contain extensive auditing capabilities. In particular, it is possible to log each and every dataset access/modification. Traditional UNIX lacks this capability in its default configuration. However, SUSE and IBM have created an extension to Linux that has met the Common Criteria, which includes the auditing capabilities provided by RACF and z/OS. Discussions will need to be held with SUSE concerning how best to implement this auditing functionality.
One issue of concern lies in any kernel-version restrictions the auditing extension may impose. This will have to be a major point of discussion with SUSE.
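As a rough illustration of the direction, the rule syntax below follows the Linux audit framework; the paths and keys are hypothetical, and the certified SUSE configuration may differ.

```shell
# Illustrative audit rules approximating RACF-style dataset logging.
cat <<'EOF' > audit.rules.example
# Log every write or attribute change under the DHS agency tree
-w /agency/dhs -p wa -k dhs-dataset-access
# Log reads of a particularly sensitive file
-w /agency/dhs/eligibility.dat -p r -k dhs-sensitive-read
EOF

# On a live system these lines would be loaded with 'auditctl -R'
# (or placed under /etc/audit/) by the system administrator.
grep -c '^-w' audit.rules.example
```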
6 Human Resources/Training Plan
Support IT Staff Training Requirements - Training, whether in the form of informal on-the-job training and knowledge transfer or formal classroom lectures, is a vital ingredient in the successful completion of this migration project and in the future service levels of the new environment. We want to address the following strategic areas of learning to ensure a successful migration project.
Awareness – ND should prepare an awareness campaign to inform end users, executives, interfacing entities, and technical staff about the coming migration. It should provide details about the goals, schedule, and training for the new environment. This awareness campaign is usually a key part of the project risk mitigation strategy.
End-user training – Just as end users need to be informed about the migration process and results, they will need to be trained on any new features or changes in their business process.
Platform training – Training needs to be given on the Hummingbird platform at the client end. Linux familiarity training will also help.
Process training – As a result of the migration, some of the internal IT processes will change. This might be in the area of monitoring, change management, or something else. Training for these changes will need to be completed before the deployment phase.
Software AG will provide a knowledge transfer program for ND staff that includes on-site classroom training on the products and on-the-job training. The concept for knowledge transfer is guided by the principle of just-in-time training – people tend to quickly forget what they have learned in a classroom and need immediate practice in what they learned to lock in a real transfer of knowledge.
Classes will be taught by qualified instructors at ND facilities, with follow-up on-the-job mentoring by project team members conducted after new skills are covered. By the conclusion of testing and of Task 4, ND staff should be in a position to perform all routine support tasks for the application on their own, with minimal oversight from project team members.
Various ND staff members need training targeted specifically to their project roles:
Linux System Administrator - Administers the Linux environment
SUSE LINUX Fundamentals (3 days)
Introduces open source standards and the common knowledge and skills needed in all Linux distributions. Attendees gain the essential skills required to log in to a multi-user Linux environment, navigate the SUSE LINUX file system and manipulate files, work within shells and execute shell script commands, control processes running on the SUSE LINUX Server, and more.

SUSE LINUX Administration (5 days)
This course utilizes SUSE Linux Enterprise Server 9 to teach administrative skills common to an entry-level Linux administrator or Help Desk technician in an enterprise environment. Attendees of Linux Administration will learn to install and configure SLES9, establish and manage users and groups, grant and manage permissions to users and
groups, manage software applications with YaST, manage and troubleshoot the SUSE LINUX file system, manage printing, configure the network with YaST, and manage network services.
These two courses are the first two in a training program from Novell that prepares students to take the Novell Certified Linux Professional (Novell CLP) certification Practicum Exam.
Adabas DBA/System Administrator - Administers the database and the Natural environment, including backups, configuration, and upgrades.
Adabas Skills/Natural Systems Administration for LINUX (5 days)
This five-day course is essential for new Database Administrators (DBAs) using Adabas in a Linux environment and for DBAs using Natural. Among the many topics you will learn are establishing and maintaining the Adabas environment, defining a database, loading a file, reorganizing files to improve performance, monitoring databases, and using Natural tools and utilities to successfully administer Natural. Intensive workshops will give you practice with the day-to-day tasks and problems encountered by a DBA.
Natural 6 for LINUX Internals (2 days)
Focusing on the internal architecture and processes of Natural Version 6 in the LINUX environment, this two-day seminar provides information to better develop, maintain, tune, and debug Natural code. You will also learn in-depth information about Natural system parameters and work areas and how to analyze issues as they arise. The course is designed for Natural Administrators and senior Natural Developers.
Managing Information with Predict (2 days) – LINUX only
Natural Programmer - Develops and maintains the ND Natural application code
Natural 6 for Windows (1 day)
The Natural programmers need to be trained in how to use the Windows interface to the LINUX environment. To help the programmers quickly and effectively begin to use the new, Windows-based interface, Software AG will create a customized online learning tutorial specifically for ND. This tutorial will be approximately 6-8 hours of self-paced training on the new interface and functionality. The training will be broken down into multiple modules. The modules will consist of text, graphics, and audio. The course modules will all follow a standard training format: overview, content discussion, followed by a summary. Programmers will be able to use the modules both for initial training and subsequent review as necessary. Quizzes will be conducted at the end of each module. This training will be based on industry-standard software and standards.
Natural 6 for LINUX Internals (2 days)
Focusing on the internal architecture and processes of Natural Version 6 in the LINUX environment, this two-day seminar will provide information to better develop, maintain, tune, and debug Natural code. You will also learn in-depth information about Natural system parameters and work areas and how to analyze issues as they arise. The course is designed for Natural Administrators and senior Natural Developers.
COBOL Programmer - Develops and maintains the ND COBOL application code
Using Micro Focus at North Dakota (4 days)
The class will focus on understanding and using the four products purchased by North Dakota:
o Net Express
o Server Express
o Enterprise Server with MTO
o Revolve EE
Outline
1. Overall introduction
2. Using Revolve
3. Using Net Express
4. Using the Enterprise Server
5. Using Server Express
COBOL and Natural Analysts – Analyze Natural and COBOL applications for enhancement and modification projects.
Using Micro Focus at North Dakota (4 days)
The class will focus on understanding and using the four products purchased by North Dakota:
o Net Express
o Server Express
o Enterprise Server with MTO
o Revolve EE
Outline
1. Overall introduction
2. Using Revolve
3. Using Net Express
4. Using the Enterprise Server
5. Using Server Express
Natural Engineer Introduction (2 days)
This course is designed for application developers and project leaders. After completing the course, you will have a comprehensive understanding of how Natural Engineer can enhance the documentation of your applications and ease their maintenance.
JCL Users - Develops and maintains the ND JCL
Linux Scripting for Mainframe Programmers (4 days)
This hands-on course provides a thorough introduction to developing and maintaining JCL replacements using the Cronus software tools.
Class Summary
1. Overall introduction
2. General Set-up
3. Natural Parameter Modules
4. EspMenu
5. EspBatch
6. EspArchive
7. EspMail and FTP
8. Xi-Text
9. Xi-Batch
10. Power Desk Delegator
AFP Printing Replacement Users – Administer and maintain specialty printing software
The selection of the software will depend on ease of use and migration during a benchmark at the start of the migration project. A placeholder number of training days has been included as part of the training plan.
End User – Utilize Linux application interfaces
End User Online Tutorial (1 hour)
Just as end users need to be informed about the migration process and results, they will need to be trained on any new features or changes in their business process. To help the users quickly and effectively begin to use the new, Linux-based systems, Software AG will create a customized online learning tutorial specifically for ND. This tutorial will be approximately 1 hour of self-paced training on the end-user interface requirements and changes. Since the recommended length of online training modules is 15-20 minutes per session, the training may be broken down into multiple modules. The modules will consist of text, graphics, and audio. The course modules will all follow a standard training format: overview, content discussion, followed by a summary. End users will be able to use the modules both for initial training and subsequent review as necessary. Quizzes can be added if required. This training will be based on industry-standard software and standards. At the conclusion of the project, the client will own the training and will be free to modify it and distribute it as they see fit.
Training Assumptions:

- The following products are currently owned by the State of ND. Training for these products has not been included:
  o ClearCase
  o DB2
  o Oracle
  o Segue
  o Tivoli
- Also, per the Statement of Work, classes for SUSE Linux are not included.
- All classroom onsite courses have a maximum of 12 students per session.
- The Adabas DBA/System Administrator courses are for experienced Software AG product DBAs and administrators.
- The Natural and COBOL programming courses are for experienced Natural and COBOL programmers.
Course Name                                 Days  Format     Sessions  Total Students  Provided by SAG
SUSE LINUX Fundamentals                     3     Classroom  1         12
SUSE LINUX Administration                   5     Classroom  1         12
Adabas Skills/Natural Systems
  Administration for LINUX                  5     Classroom  1         12              X
Natural 6 for LINUX Internals               2     Classroom  2         24              X
Managing Information with Predict           2     Classroom  1         12              X
Natural 6 for Windows                       1     Online     N/A       N/A             X
Using Micro Focus at North Dakota           4     Classroom  2         24              X
Natural Engineer Introduction               2     Classroom  2         24              X
Linux Scripting for Mainframe Programmers   4     Classroom  2         24              X
AFP Printing Replacement                    5     Classroom  1         12
End User Online Tutorial                    N/A   Online     N/A       N/A             X

Figure 10: List of Recommended Training Courses
7 Management and Control Methodology
7.1 Change Management
During the project, it may be necessary to make changes or additions to the current project to meet changing business needs, previously unidentified requirements, or unforeseen problems. These changes are likely to affect scope, pricing, and/or deliverable dates. Therefore, Change Control Management is essential to ensure the interests of both ND and Software AG are maintained. In order to initiate a change in scope under the contract, ND and Software AG should execute a mutually agreed-upon Project Deliverable Change/Addition Form (example included in Appendix H). This form must specifically address in detail the required change to a particular deliverable or the new deliverable to be added (only one request per form). Software AG will provide time and cost estimates for the change or addition requested. Once both Software AG and ND have approved the form, the existing deliverable list, project plan, and invoicing schedule will be revised accordingly.
Scope Change Control
The following procedure pertains to changes within the ITD’s project plan for cost, time and scope.
If a change in scope is identified, an issue will be created (see Issue Management and Escalation Process, Section 3.2.6 of the project plan). All approved issues that require a change to cost, time, or scope will require an Impact of Change request to be logged in WMS.
The escalation process for a scope change will go to the ITD Project Manager first, with final approval from the Project Sponsor, prior to any “out of scope” work commencing.
The steps to complete an Impact of Change request are as follows:
Impact of Change
This form is used to record an Impact of Change. The purpose of an Impact of Change is to identify changes in cost and/or time relating to a project. The Impact must specify whether the change is an increase or decrease, what the change is to the cost, and what the change is to the time (schedule). There are times when an Impact may affect one without the other. All Impacts must be approved or rejected.
Click on Impact of Change and the following form will appear:
You will need to complete the following fields with the appropriate information:
- Short Description: Short description for the issue.
- Sub-Projects: If you have sub-projects, you can select the sub-project that the issue relates to.
- Associated Issue: If the impact relates to an issue, select the issue from the drop-down list.
- Impact Reason: Used to categorize impacts. Select the reason from the drop-down list.
- Description of Change: Describe the impact.
- Current Estimate-Cost: Record the current estimated cost of the project.
- Current Estimate-Completion Date: Record the current estimated completion date of the project.
- Estimated Impact-Increase/Decrease: Indicate whether the overall impact is an increase or decrease to the cost or timeframe.
- Estimated Impact-Impact Cost: Indicate whether the change in cost is an increase or decrease, and enter the cost of the change.
- Estimated Impact-Ongoing Cost: Indicate whether the change to the ongoing cost is an increase or decrease, and enter the cost of the change.
- Estimated Impact-Timeframe: Indicate the change in the timeframe.
- New Estimate-Cost: Indicates the new estimated cost of the project. Click on ‘Calculate New Estimate Cost’ to calculate the new estimated cost.
- New Estimate-Completion Date: Record the new estimated completion date of the project.
- Initiator: Indicates who initiated the impact. Only one person can be selected.
- Assignee: Select the individual who is assigned to the impact. Only one person can be selected.
- Notifications: Select the individuals who are to be notified when the impact is submitted for review. Multiple individuals can be selected.
- Attachments: This area is used for storing the documents relating to the issue.
- Department Notes: Area for the Department that created the impact to record any type of notes.
- Department Attachments: This area is used by the Department that created the impact for storing the documents relating to the issue.
- Comments: This area can be used to record any information relating to this issue.
- Deny/Return/Approve: Used to indicate whether the impact is denied, approved, or needs to be returned for correction.
Project Acceptance
This form is used to record project acceptance. Project Acceptance forms are used as verification of the approval of key deliverables within the project. Project Acceptance forms are completed for such key deliverables as the Project Plan, Statement of Work, Analysis document, Design Phase, and upon project completion. The Project Acceptance form must be approved by the Project Manager. Click on Project Acceptance and the following form will appear:
You will need to complete the following fields with the appropriate information:
Short Description: Short description for the project acceptance.
Sub-Projects: If you have sub-projects defined for your project, you can select the sub-projects that the project acceptance relates to.
Acceptance Type: Indicates what the project acceptance is for. Select the type from the drop-down list.
Other Description: If you selected Other, you must enter a description.
Comments: Enter any comments that relate to this project acceptance.
WMS is not a substitute for necessary Software AG contractual documents or processes. At times, information in these documents may duplicate WMS documents.
7.2 Communications Plan
A Communication Plan is a continuous process used before, during and after the migration project. The objective of the plan is to:
• Identify and describe all project stakeholders
• Describe the individuals responsible for the communication
• Define how project stakeholders will be kept informed about the project
• Identify the communication paths within the project
• Ensure all information is consistent, accurate, and timely
A variety of methods may be used to communicate with project stakeholders. Common methods include status reports, correspondence, meetings, formal presentations and even a project web site. Overall, each method will be accomplished in one of two basic ways:
• Push, where the information is pushed to each stakeholder in a memo, email or presentation.
• Pull, where the information is available, but the stakeholder has to go find it. A web site is a good example of “pull”.
In most cases the ‘Push’ method will be used; however, ‘Pull’ can be used for detailed information that not everyone wants to see. This Communication Plan describes the specific communication methods that will be used to communicate with project stakeholders and how those methods will vary during different phases of the project.
To effectively communicate with project stakeholders, the migration team needs to develop a good understanding of the unique needs of each stakeholder group and repeat the messages using different mechanisms to reinforce them. This is accomplished with several ‘tools’ that are included in the Communication Plan, including the Responsible Roles chart and the Stakeholders Communication Matrix. These tools describe all roles, provide a clear understanding of each role’s vested interest in the project, and set expectations for specific stakeholders. Lastly, the communication methods are correlated to the specific needs of each stakeholder group.
Clear and consistent communication is essential to the success of any project. The Communication Plan ensures that the methods, means, and frequencies of communication are clearly defined for all project stakeholders. We must also anticipate that conflict will occur, and this plan is developed so that we work to overcome conflict before it arises. We also anticipate that a feedback process will be implemented, for both anonymous input and face-to-face meetings, to continue to improve clear communication throughout the project.
7.3 Method for Updating the Communications Plan
The Software AG Project Manager and ND Project Manager will be responsible for ensuring that the various methods of communication are updated and disseminated to the various stakeholders by the appropriate team member. The project managers will also review the communication plan whenever a milestone is reached or if there is a significant change in the project. The next chart provides details of the individuals who are responsible for ensuring the project communications plan is executed on a consistent basis as required.
Role: ITD Project Sponsor (Dean Glatt)
Responsibilities:
• Communicate weekly with the IT Executive Committee
• Communicate questions with the Steering Committee
• Ensure adherence to IT policies & procedures
• Effective utilization of resources
• Maintain focus on customer service
Goals:
• Customer expectations are met
• Project goals & objectives support the ITD strategic plan
• Project meets the agency’s goals & objectives
Role: ITD Overall Project Manager (Linda Weigel)
Responsibilities:
• Manage the overall project
• Conduct Project Kick-off meeting
• Develop Project Plan
• Establish time and cost baseline
• Ensure timely completion of deliverables
• Coordinate and direct project activities
• Effective management of project resources
• Ensure project stakeholders are kept well informed
• Communicate with the SAG Project Manager regarding project management methods and practices
• Communicate with ITD Sponsor, Steering Committee, Executive Committee and Customers
Goals:
• The project is well managed
• All customer requirements are communicated and captured efficiently
• Information flows easily among project stakeholders
• Customer expectations are well met
• ITD Project Team members participate when needed
• ITD Project Team members contribute to the flow of project information
• Adequate project resources are available
• Project deliverables are of high quality
• Project Office projects are successful
• Timely notification of issues
• Project is completed on time and within budget
• Clear and consistent communication between the Stakeholders
Role: ITD Project Manager (Shawn Meier)
Responsibilities:
• Coordinate and direct project activities within the Migration Teams and ensure timely completion of deliverables
• Communicate any issues to the ITD Overall Project Manager
• Act as liaison between ITD Development, ITD Computer Services, and State agencies with SAG’s migration teams
• Identify tasks required to perform the required scope of work
• Identify skill requirements, sources and availability of ITD personnel
• Coordinate so all areas remain on time and within scope
Goals:
• Testing effort remains on schedule and transition to live is on time
• Issues within the teams are communicated and resolved in a timely manner
• Clear and consistent communication between the teams
• Tasks are identified for each team
Role: Software AG Project Manager (Carl Bondel)
Responsibilities:
• Assist ITD Project Manager in managing the overall project
• Ensure timely completion of deliverables
• Coordinate and direct project activities with ITD Project Manager
• Effective management of SAG project resources
• Maintain focus on customer service
• Identify tasks required to perform the required scope of work
• Identify skill requirements, sources and availability of SAG personnel
Goals:
• The project is well managed
• All customer requirements are communicated and captured efficiently
• Information flows easily among project stakeholders
• ITD’s customers’ expectations are well met
• SAG Project Team members participate when needed
• SAG Project Team members contribute to the flow of project information
• Adequate project resources are available
Role: Migration Teams
• Data Conversion – SAG
• Operations Team – SAG & ITD
• Development & Porting Implementation Team – SAG & ITD
• Test Team – SAG, ITD & State Agencies
• Security & Infrastructure Team – SAG & ITD
Responsibilities:
• Provide technical input to the Statement of Requirements
• Provide technical support for infrastructure
• Support the network, server, database, and PC infrastructure
• Assist with various phases of the migration
• Communicate with Project Manager as necessary
Goals:
• The migration is properly designed, meets agency standards, and is compatible with systems and infrastructure
• The product and infrastructure are correctly installed
• Project Team members contribute to the flow of project information
• Applications are migrated per the plan
• Issues are uncovered and quickly resolved
8 Stakeholder Project Communication Matrix
Once the project has begun, significant and valuable communication must occur with all project stakeholders for the duration of the project. The following chart identifies the communication types and the methods by which the stakeholders will be reached.
Communication Method / Stakeholders
The matrix columns (stakeholder groups) are, in order: Steering Committee; ITD Executive Sponsors; ITD Customers; Current Dept Management; Current Dept Project Manager; ITD/SAG Project Management; Project Office Manager; Project Coordinator; Data Conversion Team; Operations Team; Porting Team; Test Team; Network & Security Team. Each row below lists a communication method followed by its frequency codes (see the KEY following the matrix).
Acquisition Plan: A A A A A A A A
Change Requests, Control Log: A A A A A A A A A
Communication Plan: a a A a A A A
Daily Bulletins: a a a A a a a a a a a a a
E-Mail: a a a a a a a a a a a a a
Implementation Plan: A A A A A A A A
Topic-specific letters and memos: a a a a a a a a a a a a a
Issue Statement: a a A A A A A A a a
IT Portfolio Project Summary Report: W w W W W W
Project Resources Report: M m M M M M
Web Site: a a a a a a a a a a a a a
Weekly Status Meeting: a a A A a A a a a a a
Executive Committee Meeting: M M
Steering Committee Meeting: TBD (all stakeholder groups)
Presentations: a A a a a a
Project Schedule: M M BW BW BW BW BW BW bw bw
Project Status Report: M M M W W W W W W W
Resource Allocation Plan: A A A a A A A a a
Risk Management and Contingency Plan: A a A a A A A a a
Schedule of Deliverables/Milestones Report: M M W w W W W W
Test Plan: a A A A A A A A a
Training Plan: a A A A A A A A a
KEY
Frequency     Required   Optional
Weekly        W          w
Bi-weekly     BW         bw
Monthly       M          m
Bi-monthly    BM         bm
Quarterly     Q          q
As Needed     A          a
8.1 Status Reporting
To enable ND to accurately determine the status of the ND project, Software AG will update its status reporting throughout the migration period, through the end of the project.
The status report and an updated Project Schedule will be submitted prior to the weekly status meetings, which are held on Tuesdays. A sample of the suggested status report appears on the following page.
The status report format will help guide the agenda for the status meeting, as the status report and Project Schedule will be reviewed during the meeting. Meeting minutes and action items will be recorded for each meeting, and the previous week’s action items will be reviewed at the beginning of each meeting.
This addresses the requirements to effectively monitor the implementation and performance of the migrated ND application.
STATUS REPORT
DATE:
Week Ending: MM/DD/YYYY
A. KEY STATUS INDICATORS
Status                              Current Period*   Previous Period*   Comments
Is project schedule on track?       Yellow            Green
Has scope of tasks changed?         Green             Green
Will target dates slip?             Green             Green
Are there resource problems?        Green             Green
Any technical problems?             Green             Yellow
Any review or approval problems?    Green             Green
Any communication problems?         Green             Green
Red Project is critical and dates may not be met
Yellow Project slipping but will be able to recover
Green Project on Schedule
B. KEY ACTIVITY DATES
Milestone (for group 1 activities)                 Begin Date   End Date   Current Status
Product Evaluations and POCs                       12/1/05      3/27/06    On Schedule
Complete setup of Linux development environment    12/19/05     1/27/06    On Schedule
Define test criteria and use cases                 1/20/06      2/20/06    On Schedule
Migrate objects to Linux                           1/27/06      3/16/06    On Schedule
Execute Test Scripts on Mainframe and Linux        3/17/06      4/14/06    On Schedule
Agency Acceptance Test                             4/14/06      5/26/06    On Schedule
Production Implementation                          6/1/06       6/14/06    On Schedule
Red Task is critical and dates may not be met
Yellow Task slipping but will be able to recover
Green Task on Schedule
C. DELIVERABLES AND ACTIVITIES COMPLETED THIS WEEK (Key Accomplishments)
SAG
ID   Description
1
2
3
4

ITD
ID   Description
1
2
3
4
5
D. DELIVERABLES AND ACTIVITES PLANNED FOR THIS WEEK BUT NOT COMPLETED
SAG & ITD
ID   Description
1
E. DELIVERABLES AND ACTIVITIES PLANNED FOR NEXT WEEK
SAG
ID   Description
1
2
3
4
5

ITD
ID   Description
1
2
3
4
F. ISSUES REQUIRING RESOLUTION
ID   Description   Date   Priority*   Owner   Resolution/Status
1
* Low, Medium, High
Figure 11: Project Status Report
8.2 Managing the New Linux Environment
To successfully manage the new Linux environment, proper attention must be paid to the operation of processes within the management space. The processes and the infrastructure to support management will need to be architected, designed, and implemented. We also suggest that additional on-site mentoring of staff and assistance in developing detailed management methods be provided during and shortly after the migration to ensure a smooth transition.
Performance Standards (Quality Control Program)
ND administrators will need to monitor the new server to ensure performance standards are met. A performance monitoring software package may need to be introduced in order to maintain management information, such as CPU/disk bottlenecks possibly found during peak periods.
System Performance Management
ND administrators should also understand whether the application’s response time or throughput is constrained by memory, CPU, or I/O. System Performance Management is usually reactive, but involves identifying the system constraints on meeting the performance requirements and designing them out of the configuration. Examples are:
• CPU Constraint – SMP, horizontal scaling, CPU clock upgrade
• Memory Constraint – 64-bit very large memory addressing, horizontal scaling
• I/O Constraint (disk) – Bandwidth scaling, RAID, disk cache
• I/O Constraint (network) – Bandwidth scaling (trunking or base technology), with caching implemented
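As a first-pass illustration of spotting the CPU-constraint case above, load average can be compared to the CPU count. This is a hedged sketch: the `cpu_constrained` helper and the fixed inputs are illustrative only; on a live server the values would come from `/proc/loadavg` and `nproc`, and deeper memory/disk analysis would use tools such as vmstat and iostat.

```shell
#!/bin/sh
# Heuristic check for a CPU constraint: a 1-minute load average that
# stays above the CPU count suggests the system is CPU-bound and a
# candidate for SMP / horizontal scaling or a clock upgrade.
cpu_constrained() {
    # $1 = 1-minute load average, $2 = number of CPUs
    awk -v load="$1" -v cpus="$2" 'BEGIN { exit !(load > cpus) }'
}

if cpu_constrained 6.2 4; then
    echo "load 6.2 on 4 CPUs: possible CPU constraint"
fi
if ! cpu_constrained 1.1 4; then
    echo "load 1.1 on 4 CPUs: CPU headroom available"
fi
```

A sustained result in the first branch would point toward the CPU remedies listed above; it says nothing about memory or I/O, which need their own measurements.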
Performance and Scalability
Scalability is usually planned by adopting a strategy to apply when performance thresholds fail to be met and the failure is caused by a change in the scale of the system resulting from an increase in business volumes or the user community. There are several scalability strategies available to platform designers.
Scalability strategies have a significant impact on the deployment design. Designers need to design for growth, and therefore, business volume predictions are required. These can be hard to discover, but migrators have the advantage that the history should be available because the application, and hence the business process, already exists. Scalability design also needs to take into account any predictions that the performance constraint will move as business volumes grow.
Support of Batch and Online
In Online mode, the input of commands and data comes from a terminal keyboard and the output is displayed on a terminal screen (see Section 2 for additional detail regarding Online). In batch mode, input is read from a file and output is written to a file. Just as in the current mainframe system, batch mode is of particular interest for mass data processing and re-usable execution.
When COBOL/Natural are run as a background batch job, no interaction between the application and the person who submitted the batch job is necessary. The batch job consists of a set of programs such that each is completed before the next program is started. The programs are executed serially and receive sequential input data.
Job Control
3/16/2006 Page 71
State of North DakotaMigration Project Plan – V1.2
Batch applications under Linux require the use of scripts to replace what would be done in JCL on the mainframe. Environment variables are set to define the required input and output files, and Natural is executed with any required dynamic parameters. Standard Linux utilities can then be used to monitor the running process. Applications will terminate with an exit code that can be used to determine success or failure of the task. Please review section 4.4.8, which further describes Batch Jobs/JCL.
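As a minimal sketch of such a script (the file names are illustrative, and `sort` stands in here for the actual Natural batch invocation, which is environment-specific): what JCL declares, the script does explicitly — point the input and output at files, run the program, then check its exit code.

```shell
#!/bin/sh
# Generic batch-step wrapper, standing in for one JCL job step.
run_batch_step() {
    # $1 = input file, $2 = output file, remaining args = command to run
    infile=$1; outfile=$2; shift 2
    "$@" < "$infile" > "$outfile"
    rc=$?
    if [ "$rc" -ne 0 ]; then
        echo "batch step failed with exit code $rc" >&2
    fi
    return "$rc"
}

# Demonstration with a standard utility in place of the Natural launcher:
printf 'beta\nalpha\n' > /tmp/step.in
run_batch_step /tmp/step.in /tmp/step.out sort && echo "step OK"
```

A real job stream would chain several such steps and stop or branch on a non-zero exit code, much as JCL condition codes do.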
Several commercial products exist for scheduling and monitoring Linux job execution. Software AG, in conjunction with ITD, has concluded that the Tivoli Workload Manager solutions should be used to manage “batch” processing in the new Linux environment.
Natural
The following are items of particular interest regarding Natural in batch:
Detecting Batch Mode
The system variable *DEVICE shows whether Natural is running in batch or interactive mode.
Batch-Mode Simulation
If the input channel is redirected to a file, Natural does not read the input commands and data from the keyboard but from this file. You have to specify the data in exactly the same way as you would on the terminal. For example, for two input fields you have to fill up the first field with trailing blanks to position to the second field. No keyword delimiter mode is supported. To use keyword delimiter mode, use real batch mode instead.
If the output channel is redirected to a file, Natural writes any output that would appear on the screen to this file. Control sequences are also written to the file, which makes the file unreadable. To get a formatted output, use real batch mode instead.
Use the dynamic parameter BATCH when starting Natural, to set the system variable *DEVICE to the value "BATCH". This value can be checked within a Natural program.
Real Batch Mode
To run Natural in real batch mode you have to specify the dynamic parameter BATCHMODE. In addition, some input and output channels have to be defined (as described below).
Advantages over Batch-Mode Simulation
It is recommended to use real batch mode instead of batch-mode simulation, because real batch mode has the following advantages:
• Easy data input with support of keyword delimiter mode
• Configurable and formatted output processing
• Extended error handling
• Faster startup and shutdown
• Faster program execution
Restrictions
When running Natural in batch mode, some features are not available or are disabled:
• The terminal database SAGtermcap is not supported. Therefore, the terminal capability TCS, which is used for a different character set, is not supported; to use a different character set, use the environment variable NATTCHARSET instead.
• No colors and video attributes (such as blinking, underlined and reverse video) are written to the batch output file CMPRINT.
• Filler characters are not displayed within an INPUT statement.
• No interactive input or output is possible.
• Certain Natural system commands (e.g. CATALL) are not executable in real batch mode, and are ignored. A corresponding note is given in the documentation of the system commands to which this restriction applies.
The following graphic illustrates how Natural reads input and writes output in batch mode.
Figure 12: Natural in Batch Mode
Natural Exit Codes
There are two types of Natural exit codes:
• Startup errors, where exit code 1 is assumed to indicate success and all other exit codes are assumed to indicate errors.
• Errors generated by the TERMINATE statement, where exit codes 0 to 255 are possible.
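Because the two conventions differ, a wrapper script must know which one applies to the job it runs. The sketch below is illustrative only: it treats 0 as success (the usual shell convention, matching the TERMINATE range), and `sh -c 'exit N'` stands in for the real Natural invocation.

```shell
#!/bin/sh
# Dispatch on a batch job's exit code so a scheduler or log scraper
# can tell success from failure.
classify_exit() {
    if [ "$1" -eq 0 ]; then
        echo "step succeeded"
    else
        echo "step failed, exit code $1"
    fi
}

sh -c 'exit 0'  ; classify_exit $?    # step succeeded
sh -c 'exit 42' ; classify_exit $?    # step failed, exit code 42
```

For jobs that follow the startup-error convention instead, the success test in `classify_exit` would be adjusted accordingly.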
Availability and Reliability
A system is considered available and fault-tolerant if it is capable of continuous operation in the event of a single and some multiple failures. Robust enterprise platforms require a highly versatile and scalable system architecture that allows for redundant nodes (or hardware pieces) to be added to the system for the express purpose of shadowing other nodes, or for existing nodes to shadow each other as part of their normal function. A platform/system should allow for adding and removing nodes or components with little or no effect on overall system performance and availability. Also, in the event that an unrecoverable failure (e.g., a hardware disk failure) should occur, there should be a fast (within a specified time period, say 20 minutes), efficient, manageable and reasonably priced way of restoring the status quo.
Help Desk and Support Systems
ND’s current Help Desk staff and Operations staff have a wealth of knowledge and the processes required to support the current mainframe systems. This has been built on years of experience, and to retain the same level of quality Software AG recommends that:
• Staff be involved in as many training opportunities as possible to ensure a full understanding of the new environment is gained. Besides the training being provided at the State as part of the migration, ongoing dollars have also been built into the ROI model to provide additional training for this staff.
• Staff be an integral part of the migration project and the day-to-day migration work to ensure knowledge transfer is completed.
• Staff be part of the testing processes so they understand the migrated environment before ITD customers start to use the system on the Linux environment.
• An experienced Linux Operations Support contractor may be brought in close to the end of the migration project for a 6-month period to provide mentoring and best practices for production support. The cost for this contract has been provided in the Statement of Work 10/26/2005 Model, and can be invoked should North Dakota want this resource.
8.3 ND Project Management Standards and Practices
ND Project Management Standards and Practices will be incorporated in the project. Software AG will make use of ND’s project reporting system for this project.
9 Implementation Timeline and Methodology
9.1 Go Live Contingency Plan
After going live with the migrated ND application on the Linux platform, it is possible that certain problems (that had gone undetected during user acceptance testing) may surface. It is prudent therefore, to document contingency provisions that might need to be considered.
Trigger Factors for Contingency Solution
1. Unexpected results:
• From a data perspective – end users presented with invalid data
• From an ND functionality perspective – navigation between functions not working as expected
2. Unacceptable performance
3. Networking issues
4. Interface incompatibilities
5. Batch run failure
6. Mainframe access to ND data problematic
Contingency Solutions (in order of preference)
1. Assess impact of the problem
2. Software AG advised of the problem with a view to rapid resolution
3. Revert to processing on IBM mainframe
Timeframe Considerations
In the event that a decision is made to revert back to processing on the IBM mainframe, end users and external customers need to be aware that they may be required to redo all work done from the time of conversion to the Linux server until the cutover back to the mainframe.
A decision to revert back to the mainframe after an hour of processing on the Linux will obviously be far more easily manageable for end users than a comparable switch back after a week. With this thought in mind, it becomes very important to make informed go, no-go decisions early after cutover where at all possible.
Scenario #1 - Assess Impact of Problem
• Enlist end users, ND programmers / DBA, Agency staff, and Software AG representatives to obtain an informed description of the problem
• Assess the type of problem
• Assess the extent of the problem
• Assess the impact of the problem in relation to overall ND functionality
• Discuss remediation strategy avenues
• Make a go, no-go decision
Whether to move to Scenario #2 depends upon the priority of the situation:
Priority Code / Classification / Action
0 – Informational: No immediate action required.
4 – Cosmetic (no impact to ND, but could be improved): Resolve as schedule allows.
8 – Low-level function not performing as expected; no impact on other areas: 1) resolution within 5 days in the absence of priority 12/16 problems, or 2) raise to priority 12 after 5 days, if warranted.
12 – Impact to function(s) affecting multiple areas within ND but not requiring immediate resolution, or a problem isolated to a single, highly important area: 1) problem resolution within 2 days, or 2) as agreed by ND, or 3) raise to priority 16. Communicate with the external entity point of contact, apprising them of the situation.
16 – Serious problem affecting many ND users (the system may be unavailable to end users until the problem is resolved), or a serious problem involving a single critical entity: 1) problem resolution within 6 hours, or 2) an acceptable course of action discussed and agreed upon by ND, or 3) a decision to reinstate processing on the IBM mainframe, communicated to end users and the external entity point of contact, advising what they would be required to do to reinstate the mainframe as the production platform.
Figure 13: Priority Codes for Implementation
Scenario #2 - Software AG Advised of Problem with a View to Rapid Resolution
• A Software AG representative, if onsite, obtains details of the events leading up to the problem, the error message (if applicable), plus any supporting documentation.
• Log the problem to WMS.
• If the priority warrants, immediately inform the Software AG Project Manager, who will in turn contact and assign the log to the person primarily responsible for resolution.
• In the event of a priority 16 problem, every available resource will be involved in addressing direct resolution or an acceptable workaround.
Trigger to move on to Scenario #3: in the event of problem resolution not being achieved within the stipulated timeframe, ND and Software AG management will discuss alternatives or make the decision to revert production to the IBM mainframe.
Scenario #3 - Revert Back to Processing on IBM Mainframe
A decision to revert to the IBM mainframe requires:
o Advise end users of the decision.
1. Communicate how / when to log on to the mainframe.
2. Inform end users of any requirement to recapture information previously captured to Linux.
o Advise external users of the decision.
10 Risk Management
Risk Management is the systematic process of identifying, analyzing, and responding to project risks. It includes maximizing the probability and consequences of positive events and minimizing the probability and consequences of adverse events to project objectives.
Identifying Risks
The Project Managers will solicit input from the Project Team, Project Sponsor, and Customer Representatives, who try to anticipate any possible events, obstacles, or issues that may produce unplanned outcomes during the course of the project. Risks to both internal and external aspects of the project will be assessed. Internal risks are events the Project Team can directly control, while external risks happen outside the direct influence of the Project Team (e.g., legislative action). The list of risks identified will be entered into the Risk Management Log (see Appendix J). Any Team Member may identify ongoing risks. The Team Member will then bring the identified risk to a Project Manager, who will determine whether the risk is appropriate and then log it in the Risk Management Log.
Risk Analysis
The Project Manager and Project Team members will evaluate each identified risk in terms of the likelihood of its occurrence and the magnitude of its impact. These measurements will be used as input into the Risk Management Log for further analysis when determining how the risk threatens the project.
Risk Response
The Project Manager and Project Team members will determine the appropriate response. The Project Manager will then communicate the steps necessary to manage the risk and follow up with team members to ensure those steps are taken.
Risk Monitoring
A Risk Management Log is located in Appendix J and covers the following points. This will be the official tracking log for all identified risks for the project.
• Date Identified – The date the risk was identified.
• Status – Identifies whether the risk is potential, active, or closed.
• Risk Description – Description of the risk.
• Risk Probability – Likelihood that the risk will occur. See the “Evaluating Risk Probability” section below for possible values. In this category the descriptive words Low, Moderate, or High will be used.
• Risk Impact – The effect on the project objectives if the risk event occurs. See the “Evaluating Risk Impact” section of the table below for possible values. In this category the descriptive words Low, Moderate, or High will be used.
• Risk Assignment – Person(s) responsible for the risk if it should occur.
• Agreed Response – The strategy that is most likely to be effective.
o Avoidance – Risk avoidance entails changing the project plan to eliminate the risk or condition or to protect the project objectives from its impact.
o Transference – Risk transference is seeking to shift the consequence of a risk to a third party together with ownership of the response. Transferring the risk simply gives another party responsibility for its management; it does not eliminate it.
o Mitigation – Risk mitigation seeks to reduce the probability and/or consequences of an adverse risk event to an acceptable threshold. Taking early action to reduce the probability of a risk’s occurring or its impact on the project is more effective than trying to repair the consequences after it occurs.
o Acceptance – This technique indicates that the project team has decided not to change the project plan to deal with a risk or is unable to identify any other suitable response strategy.
• Risk Response Plan – Specific actions to enhance opportunities and reduce threats to the project’s objectives.
10.1 Mainframe Migration Risks
Due to the complexities of a large mainframe migration, we have outlined below several “categories” of risk, which need to be continually monitored during the life of the project. Details on items of higher risk for the State of North Dakota are included in the Risk Management Log (Appendix J).
Differences in Operating Systems. Outlined in Section 4.3 above are several areas of risk that are inherent in the fundamental changes between the operating systems. Although this plan takes these risks into consideration, during the migration the project team must continually ensure these items are reviewed.
Network and System Performance. Due to significant changes in the physical architecture of the mainframe vs. the Linux environment, processes and actions must be taken to ensure the required hardware and network performance can be maintained. This risk is well known, and actions such as the phased approach are utilized to minimize and manage it. However, the migration team must continue to monitor these risks.
Acceptance of Change. Although the overriding method in this plan is that the end customer will experience no changes, change will occur within ITD and in other areas for the customer. This plan does address this area by providing a communication plan, testing and other processes; however, the migration team must continue to monitor these risks.
Scope of the project. Because of the complexities of this project’s scope, change is inevitable. The extensive planning for this project ensures the proper applications for migration have been selected and reviewed. However, it is probable that additional applications will be removed or added over the duration of the project. The project team will ensure that any changes do not impact the end date of the project.
Replacement Products. Every mainframe migration deals with the replacement of aging technologies in use today, in cases where the same product used on the mainframe is not supported on Linux. In the case of ND, this risk is manifested in two areas: Printing and OVM. In the case of OVM, Software AG has recommended that they develop a “Natural-based” solution, rather than buying a Commercial Off-the-Shelf solution. For Printing, we have tentatively chosen InfoPrint and will be benchmarking its functionality during the first phase of the project and continuing to confirm that functionality through each phase. With all decisions, we will work directly with the customers of the systems to review business needs and continue to use a variety of means to validate solutions.
Remaining Applications. Every mainframe migration must deal with applications that will not be migrated to the new platform. In this project, however, a risk exists due to the complexity and size of the remaining applications. Specifically, if LAWs and MMIS cannot be completely migrated by 6/30/09, the state will incur additional costs that will impact the ROI of this project.
Operations. The new platform offers significant advantages for ITD. As in every migration, the operations area will see significant changes in the day-to-day processes and methods used to manage the systems, in areas ranging from job scheduling and security to billing for resources. This plan addresses the area with additional mentoring and training, and also ensures that security issues, job scheduling, and related items are addressed.
State of ND - Technical Assessment and Detailed Implementation Plan
Appendix A – Project Schedule
The high-level plan is pictured on the next page. A detailed project schedule is available from either Linda Weigel or Carl Bondel. It is also posted on the project’s website, www.nd.gov/itd.
Appendix B – Application Profile
During the ND assessment phase, Software AG created a detailed application profile.
Due to its volume, it is available from either James Gilpin or Linda Weigel. It requires access to a DVD reader.
Appendix C – ND Database and File Statistics
Adabas Space Allocations
(all figures in MB)

Database       Asso      Data      Work    Total Space
215            6,559     9,672     73      16,304
216            2,186     4,836     73      7,095
220            13,119    26,598    184     39,901
Grand Total                                63,300
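As a sketch only (recomputing the figures shown in the table above), the per-database totals and the grand total can be cross-checked with a few lines of shell:

```shell
# Recompute the Adabas space totals (all figures in MB) for databases 215, 216, 220.
asso=(6559 2186 13119)    # Associator space per database
data=(9672 4836 26598)    # Data space per database
work=(73 73 184)          # Work space per database
grand=0
for i in 0 1 2; do
  total=$(( asso[i] + data[i] + work[i] ))
  echo "Total space: $total MB"
  grand=$(( grand + total ))
done
echo "Grand total: $grand MB"
```

This reproduces the per-database totals of 16,304, 7,095, and 39,901 MB, and the grand total of 63,300 MB.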
Adabas Files
Database 215
NUMBER OF FILES LOADED = 184
Associator size = 6.6 GB
Data size = 9.7 GB
Work size = 73 MB
The list below is to be read as the Adabas file number followed by the assigned file name.
1 MB243110 4 CHECKPOINT 6 CD000006 7 CD100007 8 SECURITY 15 CD650010 16 CD999010 20 MB800000 25 ST201010
26 CD853010 27 PS130010 28 ST530010 29 TI631020 32 WK821010 34 ST410010 50 CD100050 54 RM301010 55 MB291010
57 ER606610 58 ER641010 59 ER601010 60 ER631010 61 ER609010 62 GF201010 63 CD150010 64 CD161020 65 ER661010
66 ER642010 67 CD901010 69 ER642020 70 CD150020 71 GF611010 82 CD180010 83 GF400010 84 IC600010 86 CD190110 94 BK760510 99 BK711010 100 BK766510 108 MB604010 109 MB603010 110 RM101010 112 MB610010 113 TI640000 114 ER801010 116 CD164020 117 TI631010 119 TI601010 124 CD996020 130 MB722010 131 SC102010 138 GF801010 140 SC302010 143 BK505010 145 CD310010 146 PI261020 147 GF341010 148 GF351010 151 MB351010 152 PI221010 153 PI370010
154 PI261010 155 CD118010 156 PI340010 157 PI380010 159 MB260010 160 MB317510 163 MB240010 165 CD500010 166 CD504010 167 CD709910 168 CS101010 169 CD510000 171 PI281010 172 BK751010 174 PI232010 176 PS801010 177 PS804010 178 PS805010 179 PS950010 181 BK774010 183 BK851010 184 BK521010 185 MB712010 194 CD704010 195 CD700010 200 CD100010 203 ST501010-1990 204 ST501010 205 MB590010 211 SC401010 212 SC401020 213 SC401030 214 MB434010 215 ACT9-FILE
216 BTW9-FILE 217 DEW9-FILE 218 HCW9-FILE 219 OLT9-FILE 220 SC401040 221 PBT9-FILE 222 TI661010 223 MB171050 229 CDBO-FILE 230 CDBP1-FILE 231 CDBP2-FILE 232 EDBB-FILE 233 EDBJ-FILE 234 EDBP-FILE 235 OSW9-FILE 236 PCF9-FILE 237 QSW9-FILE 238 SEC9-FILE 239 THF9-FILE 240 PTE9-FILE 241 NAT-CONSTRUCT 242 NAT-SYSTEM 243 NAT223-USER 244 PRD-DICTIONARY 245 NAF-SPOOL 246 NAT223-SECURITY 247 ODBS-FILE 248 ODBP-FILE 249 PRD-COORD 250 CD140010 252 MB601060 253 REV41-DBFILE 254 COR742-CONFIG 300 NAT223-USER
Database 216
NUMBER OF FILES LOADED = 77
Associator size = 2.2 GB
Data size = 4.8 GB
Work size = 73 MB
The list below is to be read as the Adabas file number followed by the assigned file name.
4 CHECKPOINT 8 SECURITY 10 ESQ-CATALOG-1 11 ESQ-CATALOG-2
12 ESQ-CATALOG-3 13 ESQ-MESSAGES 14 POWERBU.PBCATCOL
15 CD650010 16 POWERBU.PBCATEDT
17 POWERBU.PBCATFMT 18 POWERBU.PBCATTBL 19 POWERBU.PBCATVLD 20 LR801220 26 HR101010 27 HR201010 29 HW861005 31 HD100615 32 HD100610 33 HD111010 35 HD100611 36 TR251010 37 TR232010 38 TR221010 44 TR229010
52 HP701010 53 HP701020 100 HD291020 101 HD291010 106 DL301010 107 DL301020 109 DL320520 128 HP601010 141 MP101010 149 HP400010 157 MP201010 161 TR451110 162 TR441010 163 TR401005
187 HW881010 192 HP511000 196 HP501010 207 DL100000 241 NAT-CONSTRUCT 242 NAT-SYSTEM 243 NAT-USER 244 PRD-DICTIONARY 245 NAF-SPOOL 246 NAT223-SECURITY 249 PRD-COORD 253 REV41-DBFILE 254 COR742-CONFIG 300 NAT223-USER
Database 220
NUMBER OF FILES LOADED = 239
Associator size = 13 GB
Data size = 26.6 GB
Work size = 184 MB
The list below is to be read as the Adabas file number followed by the assigned file name.
3 EMERGENCY-FS-DBF 4 CHECKPOINT 8 SECURITY 9 ES863010 11 ES835020 12 ES861010 14 ES963010 15 ES810012 16 CD999010 17 SB681010 19 ES850620 20 SS121020 21 ES800001 22 ES800002 23 ES800003 24 SS441010 25 SS537010 26 SS589510 27 ES100000 28 ES822010 29 ES810010 30 ES820010 31 ES828010 32 ES835010 33 ES850010 34 ES823010 35 ES830010 36 ES823610 37 ES900010 38 ES839010 39 ES800310 40 ES854010 41 ES951510 42 ES856010 43 ES857810 44 ES850610 45 ES994510 46 ES953010
47 ES972110 48 ES944010 49 ES998010 50 ES870010 51 ES972120 52 ES966010 53 ES810011 54 SB644010 62 ES995511 64 ES996010 65 SB616015 69 SS280010 70 SB685010 72 ES830020 73 ES827910 74 ES888010 75 ES854710 76 ES940010 79 SB651010 80 SB661010 81 SB616010 82 SB617010 83 SB681020 88 ES983310 89 ES972130 90 SS950000 91 SS953010 92 ES990910 93 ES994520 97 SS450050 98 PW421010 101 HC201010 102 HC250010 103 HC101010 104 HC501010 105 HC607010 106 HC631010 107 HC650010
108 HC401010 109 HC700010 110 ES994530 111 PW300510 112 PW341510 113 PW349510 114 ES994540 115 PW330510 121 PW200510 122 PW201010 123 PW203010 124 PW201020 125 ES994550 126 PW210010 127 PW210510 128 PW213010 129 PW220010 130 PW222510 131 PW223510 132 PW226510 133 PW229510 134 SS811010 135 PW237010 136 PW245510 137 ES994560 138 PW252010 139 PW252510 140 PW258010 141 PW259510 142 PW260010 144 GR710020 145 JM710020 146 PW261510 147 SS121010 148 ES999110 149 ES994570 150 ADDRESS-DBF 151 BENEFITS-DBF
152 CASE-BASIC-DBF 153 CASE-COMP-DBF 154 CLIENT-BASIC-DBF 155 TRAN-ASS-CAT-DBF 156 ET-ALERTS-DBF 157 UNPD-MED-BIL-DBF 158 QS-BUDGET-DBF 159 INCOME-DBF 160 INVOLVEMENT-DBF 161 PARTICIP-DBF 162 PGM-BASIC-DBF 163 ES994580 164 RECOUPMENTS-DBF 165 CASE-LIAB-PG-DBF 166 ABSNT-PARENT-DBF 167 ES994590 168 TABLES-DBF 169 CLIENT-MONTH-DBF 170 RECOUPED-H-DBF 171 REQUEST-DBF 172 MMR-HISTORY-DBF 173 TRANS-LOG-DBF 174 ACCTG-TRANS-DBF 175 ES994535 176 INTERFACES_DBF 177 MAILBOX-DBF 178 NOTICE-HIST-DBF 179 BUDGET-DBF 180 CASE-ELIG-DBF 181 SS450010 182 SS450000 183 SS450040
184 SS655020 185 ES200020 186 ES839510 187 EXPENSE-DBF 188 TPQY-DBF 189 SS880010 190 SS110000 191 SS655021 192 SS461010 193 SS851010 194 SS664041 195 SS140000 196 SS664040 197 ESQM1010 198 SB640010 199 SSRF1010 200 SSRF2020 202 SS141000 203 SS890010 204 EBT-LDS-CARD-DBF 206 EBT-ADMN-TRN-DBF 207 EBT-AGE-ACT-DBF 208 EBT-MIS-DBF 209 ES978210 210 ES978220 211 ES978230 212 ES978240 213 ES978250 214 ES994595 215 ES994599 216 ES865010 217 ES870020
218 ES700001 219 ES700002 220 ES700010 222 SS391011 224 ES901710 225 QI-DBF 226 SECURE-DBF 227 QC-FILE-DBF 228 CLIENT-BILL-DBF 229 CASE-LIAB-DBF 230 DEEMING-INS-DBF 231 QC-UNIVERSE-DBF 232 RESOURCE-DBF 233 MA-BUDGET-DBF 234 TPL-INDIV-DBF 235 TPL-POLICY-DBF 236 PC-PROVIDER-DBF 237 TPL-IVD-TEMP-DBF 238 REFERRAL-DBF 239 ES901110 240 ES901910 241 NAT-CONSTRUCT 242 NAT-SYSTEM 243 NAT223-USER 244 PRD-DICTIONARY 245 NAF-SPOOL 246 NAT223-SECURITY 249 PRD-COORD 253 REV41-DBFILE 254 COR742-CONFIG 255 CD915010 300 NAT223-USER
DB2
The DB2 files are part of the detailed application profile and can be found on the attached DVD.

VSAM Files
The VSAM files are part of the detailed application profile and can be found on the attached DVD.

Tape Files
The tape files are part of the detailed application profile and can be found on the attached DVD.

GDGs
The GDGs are part of the detailed application profile and can be found on the attached DVD.

Sequential Datasets
The sequential datasets are part of the detailed application profile and can be found on the attached DVD.
Appendix D – Batch Testing
The batch jobs in ND will be tested in a fashion similar to the ND interfaces. The difference in this case is that no interfaces need to be tested again. Instead, Appworx, which will replace the functionality of Tivoli, will be tested as fully as the available time frame allows.
The following information explains how the batch jobs were submitted on the ND mainframe platform. This information is vital to executing similar jobs on the Linux platform.
The difference in execution lies merely in the approach and configuration of the various Linux scripts, which replace the mainframe JCL. TSO will be replaced by the Linux command prompt, from which the necessary scripts and/or programs are executed, which in turn execute the various batch jobs.
The Linux scheduling software will be configured manually to preset the tests to run in the time frame available. The Linux scripts will be executed manually and, in some cases, preset to execute automatically within the time frame allotted for testing.
Batch jobs will be verified, upon completion of each dependent job or script, through validation of the batch logs. In some cases, such as FTP jobs, validating that a file or publication was produced may be sufficient verification of a correctly executed job.
A schedule for testing the various batch jobs will be provided to the ND staff in order to complete this test event successfully. The schedule contains the job names and attributes, frequency, operation instructions, run time, special notes, and possible restart procedures. ND staff members will coordinate and communicate with the various agencies when certain jobs need their attention and confirmation.
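As a minimal sketch of the verification approach described above, a Linux wrapper script might run a batch job, then validate its batch log and expected output file before dependent work proceeds. The job name, log path, output file, and "RC=0" log message below are illustrative assumptions, not entries taken from the ND schedule:

```shell
#!/bin/sh
# Hypothetical wrapper for one migrated batch job (all names are illustrative).
JOB=mb910010            # stands in for the mainframe JCL member being replaced
LOG=/tmp/${JOB}.log     # batch log to be validated after the run
OUT=/tmp/${JOB}.out     # expected output file (e.g., a file staged for FTP)

run_job() {
    # Stand-in for the real batch program; the migrated job would be invoked here.
    echo "JOB ${JOB} COMPLETED RC=0" > "$LOG"
    echo "sample output" > "$OUT"
}

run_job

# Verification: a dependent job should start only once the log confirms
# successful completion and the expected file exists (as for FTP jobs).
if grep -q "RC=0" "$LOG" && [ -s "$OUT" ]; then
    echo "${JOB} verified"
else
    echo "${JOB} FAILED" >&2
    exit 1
fi
```

The same pattern extends naturally to the scheduler: Appworx can invoke the wrapper and treat a nonzero exit status as a failed job, mirroring mainframe condition-code checking.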
Appendix E – Sample Test Script
BILL BRIEFS
BR.1 Add a Bill to Briefs
1.1 Add a Bill

Test          Test Steps    Test Description
4.2.BR.1.1    1 - 15        Bill Clerk enters or requests a Bill Brief through the ND BRIEFS system. The objective is to ensure a bill can be entered or requested via the indicated steps in the test case.
Requirements:
1. Add/Request a Bill Brief with the same functionality as the legacy system
2. Verify the Bill Briefs screens and functionality are identical to the pre-ported system
Assumptions:
The Bill Clerk has the credentials to log in to the ND application.
The Clerk has bill briefs to enter into the BRIEFS application.
Date __________    Passed __________    Failed __________
Action / Expectation (columns for Y, N, Tester Comments, and Date to be completed by the tester)

1. Action: Log on to the ND Application menu by clicking the ND application icon on your desktop. Log in with the credentials of a Bill Clerk.
   Expectation: On validation of the clerk’s login credentials, the ND Application menu is displayed with the various ND application options.

2. Action: Enter “BRIEFS” in the Next Screen> field as a selection from the ND application menu and press <Enter>.
   Expectation: The “Bill Briefs Main Menu” is displayed.

3. Action: From the “Bill Briefs Main Menu,” enter # 1 in the Option> field and press <Enter>.
   Expectation: The “ADD A BILL OR RESOLUTION” screen is displayed.

4. Action: Verify the “ADD A BILL OR RESOLUTION” screen is displayed with the appropriate options.
   Expectation: The following options are displayed:
   Trans. No.: 20678   Sponsor:   Add Additional Sponsor(s): _   OLC Code:   OLC Version:   By Request:   Bill Type:
Appendix F – Test Execution Plan
The Test Execution Plan will be filled out, with ITD and agency input, for each application to be migrated.
ND Module    Test Group    Testing Dates    ND Testers    Comments
Appendix G – Internal and External System Interfaces
External Interfaces
Interface Name / Description / Contact:

State Radio: CICS interface application; runs against the Motor Vehicle & Drivers License applications. (Contact: Radio Communications)
AAMVA: A secure network for HIPAA that runs through AT&T Global; AAMVA submits data to approximately 15 companies for verification. (Contact: multiple agencies)
JP Morgan (Citibank) (Contact: DHS)
Social Security Admin (Contact: DHS)
NDC/ENVOY (Contact: DHS)
WebMD (Contact: DHS)
eRX: VPN using SNA (Contact: DHS)
HCFA: Medicare & Medicaid EDB Match (Contact: DHS)
NCC: Treasury Offset Program; Food and Nutrition. (Contact: DHS)
TANF: DHS sends data offsite to the federal government. (Contact: DHS)
ARCARS: DHS sends data offsite to the federal government. (Contact: DHS)
Datasets Referenced by Multiple Jobs
Dataset Name    Job/System Prefix    Prefix Count
DF.BK269005 ES,MB,PW,ST 4
DF.BK870003 TI 1
DF.BK870004 PI 1
DF.BK870014 TR 1
DF.BK870067 ER 1
DF.CD143010 MB 1
DF.CD143010.PSFT MB 1
DF.CD143520 MB 1
DF.CD143520.PSFT MB 1
DF.CD301005 HP,LR,PE,PR,SH,SS,WA 7
DF.CD301010 BK,BT,HP,LR,PE,PI,PR,SH,SS,TX,WA 11
DF.DB104030 CD 1
DF.ES140010 SB 1
DF.ES972710 SS 1
DF.ESCSACPT.JCL CD,TW 2
DF.ESCSTEST.JCL CD,TW 2
DF.GR312581 HC,MB,SS 3
DF.HL904431 SB 1
DF.HP591830 MB 1
DF.HP591911 MB 1
DF.HWPRTDO PR 1
DF.HWPRTDS PR 1
DF.JS102010 ES 1
DF.JS200010 MB 1
DF.LA312581 LR,MB 2
DF.LR138020 HE,HL,JC,JS,PI,SH 6
DF.LR140020 HL,JC,PI 3
DF.LR144020 HL,JC,PI 3
DF.LR186020 HE,HL,JC,JS,PI 5
DF.LR410010 HL,JC,PI 3
DF.MB189010 AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,TX 11
DF.MB248510 CD,CS,LO 3
DF.MB302010 AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,SS,TX 12
DF.MB311021 AG,CD,HD,HE,HL,PI,SS,ST 8
DF.MB357101 SA,SS 2
DF.MB357102 SA,SS 2
DF.MB357103 SA,SS 2
DF.MB357104 SA,SS 2
DF.MB357105 SA,SS 2
DF.MB357106 SA,SS 2
DF.MB357107 SA,SS 2
DF.MB357108 SA,SS 2
DF.MB357109 SA,SS 2
DF.MB400510 GR,HD,JM,SS 4
DF.MB410010 AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,TX 11
DF.MB421010 AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,TX 11
DF.MB430010 AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,TX 11
DF.MB440010 AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,TX 11
DF.MB450010 AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,TX 11
DF.MB460010 AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,TX 11
DF.MB470010 AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,TX 11
DF.MB480010 AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,TX 11
DF.MB490010 AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,TX 11
DF.MB510010 AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,TX 11
DF.MB550010 AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,TX 11
DF.MB551210 AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,TX 11
DF.MB560010 AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,TX 11
DF.MB601510 ER,GR,HW,JS 4
DF.MB910010 ES,GF,HC,HD,JC,JS,LR,PI,SS,ST,TX 11
DF.MB910020 GF,HD,MP 3
DF.MB910030 CD 1
DF.RE441021 SB 1
DF.SB101003 HC 1
DF.SB105520 HC 1
DF.SB141070 HC,SS 2
DF.SB171660.L834 RE 1
DF.SB686031 ES 1
DF.SHR.CDOVLLB HR,HW,SC,ST,TS 5
DF.SS121010 ES,SB 2
DF.SS312581 HC,MB 2
DF.SS312582 HC,MB 2
DF.SS312583 HC,MB 2
DF.SS312584 HC,MB 2
DF.SS312585 HC,MB 2
DF.SS312586 HC,MB 2
DF.SS312587 HC,MB 2
DF.SS312588 HC,MB 2
DF.SS696530 ES 1
DF.ST418020 MB 1
DF.ST418030 ES 1
DF.TX464615 SC 1
DF.WC312581 MB 1
DF.WK202020 ES,WC 2
DF.WK870054 BK,BT 2
DFHSM.HISTORY.LOG DF 1
TP.&TP CD 1
TP.&TPV CD 1
TP.AP310010 PI 1
TP.CDBENCH.MARK.DATA BE 1
TP.ES150035 SB 1
TP.MB170075 CD,PI 2
TP.MB171525 PI,WA 2
TP.MB409020 BL,GF,HD,PI,TX 5
TP.MB409021 BL,GF,HD,PI,TX 5
TP.MB500010 AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,TX 11
TP.PLOTTER CD,IC,SU 3
TP.PLOTTER.DRUM TE 1
TP.PLOTTER.FLATBED TE 1
TP.SB121010 SS 1
TP.SB179010 HC,SS 2
TP.SB670510 SS 1
TP.SS503010 JM 1
TP.TTTTTTTT CD 1
TP.USERTP SB 1
USER.CD.SQLCSRC DB 1
USER.SS1.TSOLIB TW 1
VS.&OPID..TEAMS.FONT TE 1
VS.&OPID..TEAMS.NEWTAPS TE 1
VS.&OPID..TEAMS.PATTERN TE 1
VS.&OPID..TEAMS.SYMBOL TE 1
VS.&OPID..TEAMS.VERSION TE 1
VS.CD997610 PW 1
VS.CDPS.ABENDAID.REPORT BT 1
VS.CDPS.DB2.QMF.DSQPNLE $D 1
VS.CDPS.FINAL.CITY.NEW77 DC,FI 2
VS.CDPS.FINAL.CITY.STATE BK,BT,BY,BZ,DC,DL,ER,ES,GF,HC,MB,MV,SS,TI,TR,TX 16
VS.CDPS.FINAL.ZIPS BK,BT,BY,BZ,DC,DL,ER,ES,GF,HC,MB,MV,SS,TI,TR,TX 16
VS.CDPS.FINAL.ZIPS.NEW77 BT,DC,FI 3
VS.CDPS.GLOBAL.CSI DW,SM 2
VS.CDPS.MAILSTRM.G1LICEN DL,MV 2
VS.CDPS.MAILSTRM.USPSRFDC DL,MV 2
VS.CDPS.MAILSTRM.USPSRFDI DL,MV 2
VS.CDPS.MAILSTRM.USPSRFMP DL,MV 2
VS.CDPS.MAILSTRM.USPSRFPS DL,MV 2
VS.CDPS.MAILSTRM.USPSRFSQ DL,MV 2
VS.CDPS.MAILSTRM.USPSRFZD DL,MV 2
VS.CDPS.MAILSTRM.USPSRFZM DL,MV 2
VS.CDPS.NATBAT.EDITWORK DB 1
VS.CDPS.NDM.MSG $H,$P,$T,ES,HL,MB,PW,RE,SB,SS,TE 11
VS.CDPS.NDM.NETMAP ES,HL,MB,PW,RE,SB,SS 7
VS.CDPS.NETVUFTP.V220.CHECKPNT.FILE FT 1
VS.CDPS.OVMASF.DBL AW,KL,P2,SS 4
VS.CDPS.OVMASF.DHL AW,KL,P2,SS 4
VS.CDPS.OVMASF.DXL.PATH AW,KL,P2,SS 4
VS.CDPS.QMF.DSQPNLE $B,$D,$I 3
VS.CDPS.SRCHMGR.EYXUREG D0,LR 2
VS.CDPS.SUPERSES.R147.RLSNAM.TEST D0 1
VS.CDPS.TCPIP.ACTIVE EZ,FS 2
VS.CDPS.TCPIP.OPTIONS EZ,FS 2
VS.CDPS.TCPIP.QUEUE EZ,FS 2
VS.CDPS.TCPIP.ROUTING EZ,FS 2
VS.CDPS.TRACMSTR.XTPROFL DC 1
VS.CDPS.TRACMSTR.XTSESLOG DC 1
VS.CDPSCAI.SMPEREL5.CSI SM 1
VS.ES150030 RE,SB 2
VS.HC300210 SB 1
VS.HD101510 HW 1
VS.HD190010 HR,HW,MP 3
VS.HD418010 DT 1
VS.HT420020 TX 1
VS.HW862010 DT 1
VS.LR130010 AG,PI,SU,TX 4
VS.MB100000 AG,BD,CS,DA,DF,JC,PI,PR,SS,TX,WA 11
VS.MB160040 SS 1
VS.MB170070 CD,GF,HL,PI,PR,SS 6
VS.MB173010 AG,BD,CS,DF,JC,JS,PI,PS,SS,TX,WA 11
VS.MB174010 PI,SS 2
VS.MB175010 PI,SS 2
VS.MB190010 CS,HW,PI,SA,SS 5
VS.MB191010 SS 1
VS.MB192010 AP,SS 2
VS.MB193010 CS,HC,HD,HL,JS,SB,TX 7
VS.MB195010 SS 1
VS.MB219010 SS 1
VS.MB311020 AG,CD,DE,HD,HE,HL,PI,SS,ST 9
VS.MB357201 SA,SS 2
VS.MB357202 SA,SS 2
VS.MB357203 SA,SS 2
VS.MB357204 SA,SS 2
VS.MB357205 SA,SS 2
VS.MB357206 SA,SS 2
VS.MB357207 SA,SS 2
VS.MB357208 SA,SS 2
VS.MB357209 SA,SS 2
VS.SB510010 ES,HC,SS 3
VS.SB620010 ES,HC,SS 3
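The Prefix Count column is simply the number of comma-separated entries in the Job/System Prefix column. As a sketch, any row can be cross-checked by recounting its prefix list (the DF.MB189010 row from the table above is used here):

```shell
# Recount the job/system prefixes for one row of the cross-reference table.
prefixes="AG,ER,GF,HD,HL,JC,LR,PI,PS,SH,TX"   # DF.MB189010 row
count=$(echo "$prefixes" | tr ',' '\n' | wc -l)
echo "$count"
```

The recomputed count is 11, matching the Prefix Count recorded for DF.MB189010.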
DDMs Referenced by Multiple Libraries
Natural DDM    DDM Type    Library Prefix
ADDRESS DB2 ES,HES,RE,SS
ADDRESS-DBF ADABAS ES,HES,RE,SS
ALIAS DB2 HES,RE,SS
BENEFITS-DBF ADABAS ES,HC,HES,RE,SS
CASE_BENEFIT DB2 HES,PW,RE
CASE-BASIC-DBF ADABAS ES,HES,RE,SS,TME
CASE-COMPOSITION-DBF ADABAS ES,HC,HES,PW,RE,SS
CASE-ELIGIBILITY-DBF ADABAS HES,RE
CASE-LIABILITY-DBF ADABAS HES,RE
CD000006 ADABAS DB,DL,ES,GF,HES,HW,PI,SS,ST
CD100007 ADABAS DL,ER,PW,SS
CD118010 ADABAS CD,HES,SS
CD150010 ADABAS CD,DB
CD150020 ADABAS CD,DB,EQ,RM
CD161020 ADABAS CD,DB
CD164020 ADABAS CD,DB
CD310010 ADABAS DB,EQ,HW,SS
CD504010 ADABAS CD.DB
CD509910 ADABAS CD,DB
CD650010 ADABAS CD,HES
CD700010 ADABAS CD,DB,SS
CD704010 ADABAS CD,DB
CD996010 VSAM CD,HES
CD996020 ADABAS ES,HE,HRMS
CD996810 VSAM CD,HES
CDBO-FILE ADABAS AF,BD,CP,DB,GF,HR,HRMS,MB,SS
CDBP1-FILE ADABAS BD,DB,ER,GF,HR,HRMS,MB
CDBP2-FILE ADABAS BD,DB,GF
CLIENT DB2 HES,RE,SS
CLIENT_CASE DB2 HES,RE,SS
CLIENT_CASE_HIST DB2 HES,PW,RE,SS
CLIENT_REFERRAL DB2 RE,SS
CLIENT_XREF_ID DB2 HES,RE,SS
CLIENT-BASIC-DBF ADABAS ES,HC,HES,RE,SS
CLIENT-BILL-DBF ADABAS HES,SS
CLIENT-MONTH-DBF ADABAS HES,RE,SS
CS_CONTACT DB2 DB,ES
CS_EMPLOYER DB2 DB,ES
CS_PAY_WITHHLD_RSN DB2 DB,ES
CS_PAYMENT DB2 DB,ES
CS_PAYMNT_EMPLOYEE DB2 DB,ES
DD_ELIGIBILITY DB2 RE,SS
DL301010 ADABAS DB,DL,ES,GF,HP,SS
DL301020 ADABAS DB,DL,ES
EDBB-FILE ADABAS AF,BD,CP,DB,ER,GF,HD,HP,HR,HRMS,HW,JM,JS,MB,SS
EDBJ-FILE ADABAS AF,BD,CP,DB,ER,GF,HR,HRMS,HW,JM,JS,MB,SS
EDBP-FILE ADABAS BD,DB,ER,GF,HR,HRMS,HW,MB,SS
EMERGENCY-FS-DBF ADABAS HES,SS
ER601010 ADABAS ER,TI
ES100000 ADABAS DB,ES,HC,HES,PW,RE,SS
ES150030 VSAM ES,HES,RE,SS
ES800001 ADABAS ES,HES,SS
ES800002 ADABAS ES,SS
ES800003 ADABAS ES,SS
ES800310 ADABAS ES,SS
ES810010 ADABAS DB,ES,HES,SS
ES820010 ADABAS ES,SS
ES822010 ADABAS ES,SS
ES823010 ADABAS ES,SS
ES823610 ADABAS ES,SS
ES827910 ADABAS ES,SS
ES828010 ADABAS DB,ES,SS
ES830010 ADABAS DB,ES,SS
ES830020 ADABAS DB,ES,SS
ES835010 ADABAS ES,SS
ES835020 ADABAS ES,SS
ES850010 ADABAS ES,SS
ES850610 ADABAS DB,ES,SA,ST
ES861010 ADABAS DB,ES
ES870010 ADABAS DB,ES,SS
ES944010 ADABAS DB,ES
ES951510 ADABAS DB,ES,ST
ES995511 ADABAS ES,SS
ES996010 ADABAS ES,SS
ET-ALERTS-DBF ADABAS HES,SS
F_U_MBR_PARTICIPTN DB2 ES,HES,PW,RE,SS
FILING_UNIT_BUDGET DB2 ES,HC,HES,PW,RE,SS
FILING_UNIT_MEMBER DB2 HC,HES,RE
GF201010 ADABAS ES,GF,SS
GF611010 ADABAS ES,GF,SS
GF801010 ADABAS ES,GF,SS
HC201010 ADABAS HC,SS
HC300110 VSAM HC,SS
HC300210 VSAM HC,SS
HC607010 ADABAS HC HCMOD SS
HC631010 ADABAS HC HCMOD SS
HC650010 ADABAS HC HCMOD SS
HD100610 ADABAS GF,HD,MP
HD100611 ADABAS HD,MP
HD190010 VSAM HD,MP
HP701010 ADABAS DL,HP
HR101010 ADABAS HR,HRMS
HW861005 ADABAS DT,HW
HW862010 VSAM DT,HW
INCOME-DBF ADABAS ES,HES,PW,RE,SS
INVOLVEMENT-DBF ADABAS ES,HES,RE,SS
MB100000 VSAM MB,PR
MB170070 VSAM MB,PR
MB171050 ADABAS CD,HW,MB,PI,PR,SA
MB173010 VSAM JS,MB
MB190010 VSAM CS,HRMS,HW,JS,MB,SS
MB192010 VSAM HRMS,JS,MB,SS
MB193010 VSAM CS,HC,HD,HL,MB,SS
MB194010 VSAM HRMS,JS
MB195010 VSAM MB,SS
MB240010 ADABAS GF,HL,MB,SSMOD
MB243110 ADABAS GF,MB,SS
MB291010 ADABAS DE,ST,GF,MB
MB351010 ADABAS HRMS,JS
MB604010 ADABAS CP,HR,HRMS,HW,SS
MB610010 ADABAS CP,HRMS
ME_ACTION DB2 HES,PW,RE
ME_MEMBER_DETAIL DB2 HES,PW,RE,TME
MP101010 ADABAS MP,SA
OFFICE DB2 HES,RE,SS
OSW9-FILE ADABAS CP,HR,HRMS
OUTCOME_CLIENT_SER DB2 RE,SS
PARTICIPATION-DBF ADABAS ES,HC,HES,RE,SS
PI281010 ADABAS PI,TI
PRI_CARE_PHYSICIAN DB2 HES,SS
PRIMARY-CARE-PROVIDER-DBF ADABAS ES,HES,SS
PROGRAM-BASIC-DBF ADABAS ES,HES,RE,SS
PROGRESS_ASSESSMEN DB2 RE,SS
PS804010 ADABAS PS,SH
PW200510 ADABAS ES,PW,SS
REFERRAL-DBF ADABAS ES,HES,RE,SS
SB172010 ADABAS HC,SS
SB261010 ADABAS RE,SS
SB261020 ADABAS RE,SS
SB261030 ADABAS RE,SS
SB510010 VSAM HC,HES,SS
SB616010 ADABAS DB,HL,SS
SB616015 ADABAS HL,SS
SB617010 ADABAS HL,SS
SB620010 VSAM HES,RE,SS
SB640010 ADABAS HES,SS
SB644010 ADABAS HC,HES,RE,SS
SEQUENCE_NUMBER DB2 HES,RE,SS
SERVICE_PLAN DB2 RE,SS
SS121010 ADABAS HES,RE,SS
SS450010 ADABAS HES,SS
SS655020 ADABAS ES,HES,PW,SS
SS970110 VSAM JM,SS
SYSTEM_INDICATOR DB2 RE,SS
TABLES-DBF ADABAS ES,HES,RE,SS,TME
TPL_POLICY DB2 HES,RE,SS
U112DBA-DT308610 DB2 DT,HD,HR
U112DBA-DT308620 DB2 DT,HR
U112DBA-DT327510 DB2 DT,HR
U112DBA-DT352010 DB2 DT,TS
U112DBA-SS132010 DB2 DB,SS
U112DBA-SS132510 DB2 DB,SS
U112DBA-SS133010 DB2 DB,SS
U112DBA-SS133510 DB2 DB,SS
U112DBA-SS134010 DB2 DB,SS
U112DBA-SS134510 DB2 DB,SS
U112DBA-SS135010 DB2 DB,SS
U112DBA-SS135510 DB2 DB,SS
U112DBA-SS136010 DB2 DB,SS
U112DBA-SS136510 DB2 DB,SS
U112DBA-SS139110 DB2 DB,SS
U112DBA-SS139210 DB2 DB,SS
U112DBA-SS139310 DB2 DB,SS
U112DBA-SS139410 DB2 DB,SS
U112DBA-SS139510 DB2 DB,SS
U112DBA-SS139710 DB2 DB,SS
U112DBA-SS139810 DB2 DB,SS
U112DBA-SS139910 DB2 DB,SS
U112DBA-TS1_CRASH_MSTR DB2 HP,TS
U112DBA-TS1_UNIT_DATA DB2 HP,TS
U112DBA-TS1_UNIT_SEQ_EVNT DB2 HP,TS
Appendix H – Change Request
State of North Dakota
Mainframe Migration
Subject: Submitted By: Date:
Description of proposed change:
Reason for change:
Deliverable impacted:
Schedule impact:
Price to implement:
( ) ACCEPT FOR IMPLEMENTATION (client)
(Client) Project Mgr.:
Date: Signature:
( ) ACCEPT FOR IMPLEMENTATION Project Mgr.:
Software AG, Inc.
Date: Signature:

( ) *ACCEPTED
Software AG, Inc.
Date: Signature:
*The Software AG Division Manager will be notified for signature approval if this Change Request results in any additional costs to the project.
Appendix I – Third Party Software Licenses
A large variety of third-party software is being purchased by North Dakota for this project.
For a summary of that software, including budgeted and actual prices, please see Linda Weigel or Carl Bondel.
Appendix J – Risk Management Log
For a current summary of the Risk Management Log, please see Linda Weigel or Carl Bondel.