Xpediter Tutorial

 

SYSTEM TOOLS: XPEDITER

Xpediter

Primary Commands

AA SNAP displays the Abend-AID Snapshot report.
ACCEPT assigns a value to a data item.
AFTER sets a breakpoint after the execution of an instruction.
BEFORE sets a breakpoint before the execution of an instruction.
BOTTOM scrolls to the bottom of the currently displayed data.
DELETE turns off or negates the effect of other XPEDITER/TSO commands.
END terminates the current function and returns to the previous screen.
EXCLUDE excludes data lines from displaying in the source.
EXIT terminates the current test session.
FIND searches for character strings, data names, and COBOL structures. String delimiters can be '', "", == ==, or nothing. Any number of optional parameters can be specified with a required parameter. FIND without a keyword repeats the last find.
GO begins execution or resumes execution following a pause.
GOBACK changes the program logic and returns to the next higher level module.
GOTO repositions the current execution pointer.
INTERCEPT in an interactive test, loads a module, sets before and after breakpoints, and displays the source.

 

Xpediter

Primary Commands (Contd...)

KEEP continuously displays the values of program variables in a Keep window.
KeepE keeps the contents of the elementary items of a group level variable.
KeepH keeps the contents in hexadecimal format.
Displayed values are updated as each breakpoint is encountered.

MONITOR records the program's execution in a buffer. REVERSE can be used to execute in review mode at a later time.

PEEK displays the values of program variables.
PeekE displays the contents of the elementary items of a group level variable.
PeekH displays the contents in hexadecimal format.

RESET restores excluded lines on the source screen and removes any pending line commands.
SET overrides XPEDITER/TSO defaults. Some values are set only for the duration of the test session, while others are maintained across sessions. More information is available in the XPEDITER/TSO and XPEDITER/IMS COBOL Reference Manual.
SHOW displays breakpoints, diagnostic information, or the SET command options. Note: AFTERS, AT, BEFORES, BREAKS, LISTING, PFKEYS, and SKIPS are not valid in unattended batch.
SKIP temporarily bypasses the execution of a statement.
WHEN indicates when a specified condition is true or when a program variable changes value. In an interactive test, execution is paused. In an unattended batch test, execution does not pause; a message is written to the log indicating that the specified condition has been met.

 

Xpediter

Line Commands

Double-character line commands are used to specify a block of lines. An n indicates a number.

), )n, )), ))n scrolls a line or block of displayed data one or n columns to the left.
(, (n, ((, ((n scrolls a line or block of displayed data one or n columns to the right.
A, AA sets an after breakpoint on a line or block.
B, BB sets a before breakpoint on a line or block.
C, CC sets a count on a line or block.
D deletes all breakpoints on a Procedure Division line, deletes the displayed value on a line, deletes an inserted line, deletes the kept value on a kept line, or redisplays all the excluded lines on an excluded range of lines.
GT repositions (GOTO) the current execution pointer to a line.
K, Kn, KK keeps the first or nth variable on a line or block.
KE, KEn keeps the elementary items for the first or nth variable on a line.

 

Xpediter

Line Commands (Contd...)

K* keeps all variables on a line.
L, Ln reshows the last or n line(s) from a block of excluded lines.
P, Pn, PP temporarily displays (PEEK) the first or nth variable on a line or block.
PE, PEn displays the elementary items for the first or nth variable on a line.
PH, PHn displays the first or nth variable on a line in hexadecimal format.
P* displays all variables on a line.
S, SS sets a skip on a line or block.
X, XX excludes a line or block.


XP captures and displays EXPLAIN information about an EXEC SQL or inserted SQL statement. Valid only with the XPEDITER for DB2 Extension.

 


What is NOMAD?

 Business Intelligence Software for Enterprise Reporting and Rapid Application Development

NOMAD is the most functionally rich business intelligence reporting tool available today for delivering a full range of applications with easy access to a variety of data sources for reporting and analysis. NOMAD is a fully relational product, which provides the basis for its tight integration with and highly efficient use of other relational database engines such as DB2 for z/OS and DB2 for z/VM.

A variety of application generators and user interface options, including a reporting front end and a graphical user interface design tool, speed both the development and use of NOMAD applications.
1. Provides easy access to a variety of data sources on multiple platforms
2. Relational foundation ensures efficient operation with leading RDBMSs
3. Comprehensive reporting and analysis, including fully integrated DSS functionality and financial capabilities
4. Extensive features for application development
5. Offers a full range of user productivity tools and performance enhancers
6. Includes advanced facilities for cooperative processing

NOMAD is available for z/OS, OS/390, and z/VM on mainframe machines. When used with Front & Center as a reporting front end with RP/Server, NOMAD is a powerful mainframe server. NOMAD can also participate in IBM's Distributed Relational Database Architecture (DRDA) through DB2 interfaces on the mainframe and Unix platforms.

A Brief History of NOMAD

 

In 1969, a product called RAMIS from a group of people at Mathematica Products Group in Princeton, New Jersey was among the first of the languages dubbed a 4th Generation Language, or 4GL. It was available commercially, exclusively, on the time-sharing service provided by National CSS, Inc. of Stamford, CT, on a version of IBM's CP-67/CMS known as VP/CSS. The authors included Dick Cobb and Gerry Cohen, with help from some NCSS folks, including Harold Feinlieb and Nick Rawlings. It had its own database structure, which was essentially a single-path hierarchy, a powerful REVISE command for importing data, and a powerful reporting verb, PRINT. One could say PRINT ACROSS MONTH SUM SALES BY DIVISION and receive a report that would have taken many hundreds of lines of Cobol to produce.

The product grew in capability and in revenue, both to NCSS and to Mathematica, who enjoyed increasing royalty payments from the sizable customer base. In 1973, NCSS decided to fund the development of an alternative product, which in October of 1975 was released under the name NOMAD. That same month, Gerry Cohen left Mathematica and released a product called FOCUS, which he made available on Tymshare Inc's competing time-sharing system, with the promise to RAMIS users that their applications could run unmodified, and at a significant discount over NCSS' charges for RAMIS applications.

NOMAD from NCSS, later D&B Computing Services, became quite successful under the VP/CSS operating system, generating some $100M per year by the mid-'80s. As NOMAD2, it became available under IBM's VM in '83 and MVS in '84. When the NOMAD software business was sold to Thomson-CSF in 1987, the customer base included over 800 of the Fortune 5000. In addition to providing its own relational database, NOMAD by '84 had interfaces to IBM's SQL/DS and DB2, as well as VSAM and IMS, and Teradata's database computer. Software sales approached $30M annually. The importance of the 4GL language replaced the importance of its native database.

FOCUS from Information Builders, Inc (IBI), did even better, with revenue approaching a reported $150M per year. RAMIS moved among several owners, ending at Computer Associates in 1990, and has had little limelight since. NOMAD's owners, Thomson, continue to market the language from Aonix, Inc. While the three continue to deliver 10-to-1 coding improvements over the 3GL alternatives of Fortran, Cobol, or PL/1, the movements to object orientation and outsourcing have stagnated acceptance. The owners of the three now count on maintenance and support revenues to remain profitable. Few new sales are made, as prospects look to PeopleSoft, SAP, Oracle, and others to provide the turn-key solutions to their data warehousing and reporting requirements.

NOMAD Collection

The NOMAD Collection is a complete set of productivity tools and facilities that speed the use of NOMAD by all users:
1. A variety of tools and code generators make application development easier and faster
2. Templates provide quick-start facilities for windowed application design

The NOMAD Collection provides improved productivity and faster results to your application staff and users. The NOMAD Collection offers everything you need for building powerful and time-saving solutions in one easy-to-use package. The NOMAD Collection includes NOMAD tools, sample code, demonstrations and customization facilities.

NOMAD Tools

The NOMAD Collection provides tools for application developers, as well as tools that can be made available to end users. These tools vary in power and functionality and can be employed by NOMAD users of all experience levels. Many tools are written in NOMAD, which demonstrates the power and versatility of the language. A variety of facilities are included in the Collection such as:

1. Procedure generators that prompt for arguments and automatically create NOMAD procedures

2. Commands that provide common application support functions, such as help and menu processing


3. Special environments such as forms painting and mapping NOMAD schemas to external files

4. Simple utilities to perform multiple-step processes, such as dumping selected data for later reloading

Specific NOMAD code generators include: 

  

 

WMAKEFRM  Builds and updates form descriptions, using screen painting techniques to describe forms.

WMAKEM  Builds procedures for updating a database, including support for shared databases.

MAKEDUMP and MAKELOAD  Build procedures for moving data from and to various databases. A log of any rejected records is also generated.

FILEMAP  Builds a NOMAD schema or a load procedure to map to an external data file.

WTMMENU  Allows you to build menus and submenus for an application.

 

 

 

Other productivity-enhancing tools provided are:

  

 

NSPF  Provides functionality similar to ISPF to aid application development within the NOMAD Session Manager environment.

NED  Is an editor that can be used in any NOMAD environment. NED is invoked to satisfy the editing needs of NSPF, as well as other NOMAD environments.

BROWSE  Enables intelligent browsing of a NOMAD report. Scrolling, find and freeze functions are among the facilities provided.

WEZHELP  Provides quick implementation of online help facilities within a NOMAD application.

 

 

 

External Function Library

A set of external programs, written in Assembler Language, is provided for gathering information on resource usage, manipulating bit and packed data more easily, and translating character strings. These external functions operate much the same way as the internal functions.

Enhanced Facilities for Customization Language Support


Most messages produced by NOMAD Tools are located in a separate database that enables easy translation to other languages and simplifies customization of messages at any location. The database is currently available in several languages including English, French, German and Japanese.

Function Keys and Screen Attributes

All function keys and screen attributes are defined in one procedure, simplifying the specification of standards. Customization of function keys and screen attributes is supported on a global basis and on a tool-by-tool basis. Some tools provide specific options isolated in one procedure per tool. These tools also support individual user profiles.

NOMAD CICS Companion

NSM/CICS Companion enables NOMAD Session Manager (NSM) applications to interact directly with CICS Temporary Storage and Transient Data Queues, and to invoke CICS transactions to manipulate these and other CICS resources. It also enables CICS applications to offload resource-intensive tasks to NOMAD. This provides a true cooperative application environment between NOMAD and CICS. The application developer can use NOMAD commands to easily build user interface forms while using CICS as the database engine for CICS-owned databases.

NOMAD provides:
1. Fast application development and enhancement
2. Applications that function even if the associated data structure changes
3. A flexible Window environment for the application developer
4. Ability to create sophisticated Window applications, which can be CUA compliant
5. Intelligent data maintenance facilities that use automatic schema-driven data integrity and error messages
6. Ability to respond quickly to ad hoc requests
7. Ability to give the ad hoc requestor direct access to production data via end-user products

CICS provides:
1. Very high-speed transaction processing rates
2. Very fast response time
3. Logging and journalling of user activities

NSM/CICS Companion provides:
1. An application in either environment can trigger processing in the other environment
2. An application in either environment can place data in Temporary Storage or Transient Data Queues for access by an application in the other environment


   It is also possible to have NOMAD handle the user interface completely. In this case, the application driver is written in NOMAD, and calls different NOMAD procedures associated with the different user choices. The NOMAD portion of the application invokes the CICS portion when necessary using the SYSTEM command to invoke CICS transactions, passing small amounts of data directly on the command line and/or larger amounts of data in queues via the NSM/CICS Companion for retrieval by the transaction. 

CICS offers features that NOMAD does not, such as logging and journalling for protected data, and rollback recovery for data not supported by NOMAD RESTORE. CICS also allows updating of BDAM, VSAM and IMS files.

Even when both NOMAD and CICS provide equal support for a particular activity, it may be preferable to have all activity of that type performed from a single environment using the logs and controls of that environment to simplify recovery and accounting. By using CICS transactions to apply sets of updates to VSAM databases, you can take advantage of CICS sync point/rollback to ensure that all updates are applied as a unit, that the updates are logged and journalled with all other updates being done through CICS, and that the updates are synchronized properly with other updates being done through CICS.

NOMAD QLIST

NOMAD QLIST is a productivity tool for quickly generating mainframe NOMAD reports, database CREATEs, and SQL SELECT requests. It is primarily intended to speed report and application development for users who are familiar with basic NOMAD syntax.

QLIST produces reports from any data source accessible to NOMAD, including SQL/DS, DB2 and Teradata tables, and IDMS, IMS, ISAM, QSAM and VSAM files. You can also build SQL SELECT requests to quickly access data from DB2 or SQL/DS tables. QLIST's CREATE facility lets you create temporary or permanent databases or flat files by selecting tables and manipulating items within these tables. 


QLIST requires a minimum of keystrokes to build, maintain, catalog and execute requests from within an SAA/CUA-style windowed environment. You control the dialog through selection of action bars or pull-down menu picks. As you build your request, the process is highlighted through the action-bar picks at the top of the screen. You always know the function that is being accessed, and QLIST is flexible in building a request by enabling you to perform or change any step in any order. 

How QLIST Looks to a User

QLIST features a clear and consistent user interface. QLIST uses two types of menus - action bar and point and select. 

Action Bar

The action bar appears at the top of the screen and displays a list of options, or actions. Each action-bar pick can have a pull-down menu associated with it. After making a selection from the main action bar, a pull-down or functions associated with that action is displayed. 

Point and Select

This menu style provides a list of options. Selecting an option causes either a point-and-select subset menu or an action bar to appear. Menus are always displayed in the same locations, with colors denoting which windows are in use and their type (action bar, pull-down menu, help, etc.). Options are chosen by the entry of non-blank characters or cursor placement in combination with the Enter key, replacing the need for command syntax. You constantly receive feedback to reinforce and confirm the actions that are taken, such as prompts for the next action and confirmation of work completed. In addition, the user can toggle between the view of the request being built and the sample output, making it easy to prototype reports, databases, etc.

Technical Specifications

NOMAD QLIST runs in z/VM, z/OS or MVS NOMAD Session Manager environments. QLIST can also be used with versions of NOMAD that support SQL-based DBMSs, including DB2 and Teradata.

NOMAD One Pass

NOMAD is a mainframe product that provides the most powerful reporting capabilities available today. NOMAD's reporting command, LIST, is continually fine-tuned and optimized, so it provides not only powerful reporting capabilities but highly efficient operation as well. With the use of NOMAD One Pass, a performance optimizer, NOMAD delivers reporting power with even greater efficiency. One Pass processes several LIST requests at once with a single pass through a database. As a result, it addresses the constant data processing need for minimizing mainframe resource usage. One Pass is ideal for sites where the costs associated with data access are an important consideration due to either the complexity or volume of data, and where many reports are run frequently from the same set of data.

NOMAD One Pass is a proven tool for reducing execution time, CPU and I/O. In documented testing, NOMAD One Pass reports achieved as much as an 80% reduction in execution time and 86% in CPU and I/O usage. 

In addition, product documentation offers guidelines for identifying the LIST requests that will benefit most by their processing with One Pass. 


NOMAD One Pass can be used with any data accessible by NOMAD, including QSAM, ISAM, VSAM, IMS, IDMS and the SQL engines DB2, SQL/DS and Teradata.  One Pass accesses data once while delivering multiple reports, thus saving much of the processing time and resources associated with reporting. 

To use One Pass to its best advantage, it helps to understand the process of a NOMAD LIST request and how the data is stored. There are three steps in processing a LIST request: SET UP, ACCESS and REPORT. To use NOMAD One Pass, multiple LIST requests are grouped in a One Pass "package," which begins with a ONEPASS SAVE command and ends with a ONEPASS REPORT command. 

When NOMAD encounters a ONEPASS SAVE command it goes through a Set Up phase similar to that for a single LIST request, analyzing each request in the package and storing pertinent information for each one.  After NOMAD processes the entire database and creates the sort files for all the reports, the Report phase begins, using the information saved during the Set Up phase to format the reports. 
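The Set Up / Access / Report flow described above is easy to illustrate outside NOMAD. The sketch below is plain Python with invented field names, not NOMAD code: it registers several "report" accumulators up front, feeds all of them from a single traversal of the records, and only then formats results, which is the essence of grouping LIST requests in a ONEPASS package.

```python
# Illustrative analogy only: several report requests served by one pass
# over the data, like LIST requests grouped in a ONEPASS package.
# The records and field names are invented for the example.
records = [
    {"division": "EAST", "month": 1, "sales": 100},
    {"division": "EAST", "month": 2, "sales": 150},
    {"division": "WEST", "month": 1, "sales": 200},
]

# "Set Up" phase: register every request before touching the data.
totals_by_division = {}
totals_by_month = {}
grand_total = 0

# "Access" phase: one traversal feeds all three requests at once.
for rec in records:
    totals_by_division[rec["division"]] = (
        totals_by_division.get(rec["division"], 0) + rec["sales"])
    totals_by_month[rec["month"]] = (
        totals_by_month.get(rec["month"], 0) + rec["sales"])
    grand_total += rec["sales"]

# "Report" phase: format each result from the saved state.
print(totals_by_division)  # {'EAST': 250, 'WEST': 200}
print(totals_by_month)     # {1: 300, 2: 150}
print(grand_total)         # 450
```

Run singly, each report would re-read the data; the single loop is where One Pass saves its I/O.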

LIST also provides a BANNER option to add a banner page between reports. This makes it easy to identify the various reports in the One Pass package output.  

NOMAD One Pass produces resource savings in three primary areas: DASD I/O, Shared DEFINEs and message traffic.

NOMAD provides dynamic measurement tools for analyzing report performance. These measurement tools, OPTION LISTSTATS and NAPA (NOMAD Application Performance Analyzer), can be used to demonstrate how a set of LIST requests executed singly compares to One Pass in resource usage.

OPTION LISTSTATS is a powerful analysis tool that is part of mainframe NOMAD. OPTION LISTSTATS will show all the sort files created for a single LIST or for a One Pass package, and the number of records in each sort file. Other information provided includes the CPU time for the different phases of LIST and the CPU time for data retrieval, an important statistic to note for system performance.

The NOMAD Application Performance Analyzer (NAPA) is an optional NOMAD product that is a powerful tool for measuring all relevant resources, especially CPU usage, I/O counts, number of instances read, and elapsed time.

One Pass can also be used to improve efficiency when reporting against data stored in mainframe SQL database systems to which NOMAD interfaces: DB2, SQL/DS, and Teradata. In these cases there are some additional considerations when doing the preliminary analysis. The results show that when data is retrieved from SQL systems, One Pass provides performance gains. Here, as in the first set of tests, Package 3 has the greatest savings. This package contained the SUM function used on DEFINEd items that appear in multiple LIST requests in the package. The results show that when this is the case, the savings are significant, whether the data is stored in SQL tables or in NOMAD. These test results confirm that NOMAD One Pass significantly improves efficiency across NOMAD's entire range of file structures.


 

NOMAD Configuration

Assuming you know the name or TCP/IP address of your VM host, it should be the same name or address you use to connect via telnet or TN3270. You should link and access TCPMAINT 592 to have access to the TCPIP DATA file. If you can't get access to TCPMAINT 592, you can still continue, but you are probably well into installation method #3 (remember? The "Gee, I'm not sure I'm supposed to do this" method). To start the web server, execute the HTTPD EXEC. If you have access to TCPIP DATA you will see

HTTPD Version 1 Release 2.4
TCPSHELL: PORT 80
TCPSHELL: Ready;

If you do not have access to TCPIP DATA you will see

HTTPD Version 1 Release 2.4
File TCPIP DATA * not found
TCPSHELL: PORT 80
TCPSHELL: Ready;

Without access to TCPIP DATA your web server will work, but it will be very slow; fortunately, we'll fix that. If, instead of "TCPSHELL: Ready;", you get the error message "TCPSHELL: EADDRINUSE Address already in use", then somebody else may already be running a web server on this machine. You need to pick a new PORT to run your web server on.

To change the PORT that the web server runs on, edit the file HTTPD CONFIG and locate the line that says

  PORT=80

Change the line to read

  PORT=8080

Port 80 is the standard port used by web servers and web browsers; 8080 is a common alternative for unofficial web servers. If you again receive the "Address already in use" message, you can pick another port: simply select any port above 1024, but you may want to start asking yourself, "Gee, am I sure I'm supposed to do this?"

Once you've gotten the TCPSHELL: Ready; response, you are ready to access your web server from your web browser! Start your workstation's browser and enter the address or name of your VM host, for example

http://vm.testing.com
or
http://192.168.37.42

If you had to change the PORT to 8080 or some other number, add that number to the address, for example

http://vm.testing.com:8080
or
http://192.168.37.42:8080

You should see a web page describing Webshare in your browser. Congratulations, you just got your first "hit" on your web server!

Remember, if you don't have access to the TCPIP DATA file then your web server is very slow, perhaps taking as much as 2 minutes to serve the Webshare intro page. The reason for the slow-down is that Webshare attempts to identify the incoming address of each web visitor, and TCPIP DATA contains information that tells Webshare how to do that. It's called an "Address Resolve" or "Reverse DNS Lookup." A simple modification to Webshare will keep it from trying to perform the resolve and as a result will dramatically increase the speed of your web server. Even if you have access to TCPIP DATA, this simple modification will provide some increase in speed.

Stop the web server by entering STOP at the TCPSHELL: Ready; prompt. Edit the file TCPSHELL EXEC and on or around line 271 locate the statement

  Parse Value Socket("RESOLVE",host) With rc rs

Change the line to read

  RC=1

Rather than having TCPSHELL stall for a minute until it realizes it can't RESOLVE the incoming address, we just force RC=1 and let TCPSHELL handle the error 1 minute sooner. Restart the web server by executing HTTPD.

Assuming you are running Netscape or Microsoft Internet Explorer, hold the SHIFT key while you mouse-click on the "refresh" icon on your browser. The Webshare intro page should be re-fetched and you should notice an increase in speed. (Note: if you don't hold the SHIFT key you'll see an AMAZING increase in speed, but that will be because you just re-fetched it from your local hard drive; the SHIFT key tells your browser to go get the page even if it thinks it already has a local copy.)
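As an aside, the "Address already in use" condition from the port-selection step is easy to probe from any language with sockets. This is a hedged sketch in Python (unrelated to Webshare's own REXX code) that checks whether a candidate port can still be bound before you commit it to HTTPD CONFIG:

```python
import socket

def port_available(port, host="127.0.0.1"):
    """Return True if we can bind the port, i.e. no other server
    (such as another web server) already owns it."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except OSError:  # EADDRINUSE, EACCES, ...
        return False
    finally:
        s.close()

# Port 0 asks the OS for any free ephemeral port, so it always binds.
print(port_available(0))
```

A bind that raises EADDRINUSE is exactly what TCPSHELL reports when another web server holds the port.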

Nomad Server Pages (NSP)

 Nomad Server Pages allow Nomad programmers to use Microsoft FrontPage (or any other web development tool that supports Active Server Page editing) to create Web-based applications using Nomad code and data. No other "CGI" programming skills are needed!

What's an NSP?

Simply stated, Nomad Server Pages (NSPs) allow you to embed Nomad code in your HTML code. Tools like FrontPage will allow you to use all the wonderful drag-and-drop and WYSIWYG editing tools on your HTML code while protecting your Nomad code from any markup. An NSP program could be as simple as:

<HTML>
<BODY>
<H1 ALIGN="CENTER">Nomad says, "it's <%=&&DATETIME;%>!"</H1>
</BODY>
</HTML>

The Nomad code is set apart from HTML code by a pair of <% and %> tags, and the special = directive is used to output data to the web browser. NSP applications can do more than just output bits of data; the entire Nomad procedural language is at your disposal. Let's take a look at the classic "Time of Day Greeting" code as an NSP application.

<html>
<body>
<H1 align="CENTER">Good
<%
DECLARE &HOUR AS 99 AUTOMATIC;
&HOUR = DISPLAY(&&DATETIME AS DATETIME'HH');
IF &HOUR >= 0 AND &HOUR < 12 THEN DO;
%>
Morning
<%
END;
ELSE IF &HOUR > 11 AND &HOUR < 17 THEN DO;
%>
Afternoon
<%
END;
ELSE DO;
%>
Evening
<%
END;
%>
</H1>
</body>
</html>

Notice that Good, Morning, Afternoon and Evening are not part of a Nomad PRINT or WRITE statement, nor even an NSP = output directive; they are part of the HTML code. If you were to view the NSP source in a browser you would see:

Good Morning Afternoon Evening

The surrounding Nomad IF/THEN/ELSE code would decide which of the 3 words would appear at run-time while during design-time you have the full power of your Web design tool to select the words, change their font type or font color all without disturbing the Nomad code! You have complete separation of logic and appearance.
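The <% ... %> delimiters work like those of other server-page systems: the processor walks the file, passing literal HTML through untouched and handing the delimited segments to the language interpreter. As a rough illustration only (plain Python, not the actual NSP processor), splitting a page into its HTML and code segments can look like this:

```python
import re

def split_nsp(page):
    """Split a server page into ('html', text) and ('code', text)
    segments. Toy sketch of <% ... %> parsing, not the real NSP code."""
    segments = []
    pos = 0
    for m in re.finditer(r"<%(.*?)%>", page, re.S):
        if m.start() > pos:                      # literal HTML before the tag
            segments.append(("html", page[pos:m.start()]))
        segments.append(("code", m.group(1)))    # embedded Nomad code
        pos = m.end()
    if pos < len(page):                          # trailing literal HTML
        segments.append(("html", page[pos:]))
    return segments

page = '<H1>Nomad says, "it\'s <%=&&DATETIME;%>!"</H1>'
print(split_nsp(page))
```

A real processor would then execute the "code" segments and stream the "html" segments through unchanged, which is why the design-time view shows all three greeting words at once.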

But Nomad is all about data!

Let's look at an NSP program to display an HTML table of all the Stars and their Characters in the NADMAST sample database.

<%DA NADMAST;%>
<html>
<body>
<table border="1">
<tr><th nowrap>Actor</th><th nowrap>Character</th></tr>
<%FOR EACH CHARACTERS DO;%>
<tr><td><%=CHAR_STAR_NAME;%></td><td><%=CHAR_NAME;%></td></tr>
<%END;%>
</table>
</body>
</html>
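The FOR EACH loop emits one <tr> per instance in the database. The same shape in a generic language, here Python with a couple of hypothetical rows standing in for the NADMAST data, makes the interleaving of static markup and per-row output easy to see:

```python
# Stand-in data; in the NSP it comes from FOR EACH CHARACTERS DO;
rows = [("KATHLEEN TURNER", "JOANNA"), ("JOHN LAUGHLIN", "GRADY")]

parts = ['<table border="1">',
         "<tr><th>Actor</th><th>Character</th></tr>"]
for star, character in rows:          # one table row per data instance
    parts.append(f"<tr><td>{star}</td><td>{character}</td></tr>")
parts.append("</table>")
html = "\n".join(parts)
print(html)
```

The static header and footer markup appear once, while the loop body is repeated per row, exactly the structure of the NSP above.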

Again, all the WYSIWYG power of your web design tool is at your disposal for setting table and cell attributes or font colors and sizes. When your NSP program is executed it will deliver...

Code like this:

<html>
<body>
<table border="1">
<tr><th nowrap>Actor</th><th nowrap>Character</th></tr>
<tr><td>KATHLEEN TURNER</td><td>JOANNA</td></tr>
<tr><td>KATHLEEN TURNER</td><td>CHINA BLUE</td></tr>
<tr><td>JOHN LAUGHLIN</td><td>GRADY</td></tr>
<tr><td>ANTHONY PERKINS</td><td>SHAYNE</td></tr>
<tr><td>WALLACE SHAWN</td><td>WALLY</td></tr>
<tr><td>ANDRE GREGORY</td><td>ANDRE</td></tr>
<tr><td>BLAIR BROWN</td><td>NELL</td></tr>
<tr><td>JOHN BELUSHI</td><td>SOUCHAK</td></tr>
</table>
</body>
</html>

That will display like this:

Actor            Character
KATHLEEN TURNER  JOANNA
KATHLEEN TURNER  CHINA BLUE
JOHN LAUGHLIN    GRADY
ANTHONY PERKINS  SHAYNE
WALLACE SHAWN    WALLY
ANDRE GREGORY    ANDRE
BLAIR BROWN      NELL
JOHN BELUSHI     SOUCHAK


But what about HTML forms and POSTed data and QUERY_STRINGs and all that other messy "CGI stuff"?

The NSP processor converts posted HTML form fields and query parameters to Nomad &vars for you; your Nomad code need only know that if your HTML form had a field called USERNAME, then that field's value will be in &USERNAME when your NSP application is executed. The NSP processor even handles multiple fields with the same name. If your form has 12 LastName, 12 FirstName and 12 Phone fields, perhaps as part of a batch entry system, then your NSP application will have 3 Nomad arrays: &LastName(12), &FirstName(12) and &Phone(12).

Even the HTTP header values are automatically provided as &vars for you! Use &USER_AGENT to determine what browser the user is using and modify your output for specific browsers. Use &REQUEST_METHOD to determine if your NSP application was launched as the result of a GET or POST request. Use &REMOTE_ADDR to interrogate the user's IP address and determine if they are authorized to run your NSP. Consult your web server software's documentation for a complete list of provided HTTP header values.

The NSP processor also provides some values of its own, which allow you to write NSPs that reference themselves or other NSPs without explicitly knowing what server they are running on. These values are:

&NSP_SOURCE  Contains the name of the NSP source file that is currently running. Example value: SEEFILMS

&NSP_PATH  Contains the resource portion of the URL that invoked the current NSP. Example value: \NSP?SEEFILMS

&NSP_CALL  Contains the URL needed to call another NSP on the server that served the current NSP. Using this value can help you avoid hard-coding server names and TCP/IP ports in your hyperlinks and forms. When using this value you must provide the name of the NSP to call. Example value: HTTP:\\yourhost.com:8080\NSP?

&NSP_SELF  Contains the URL needed to self-reference the current NSP. You could use this value to write a single NSP that presents a form when called with HTTP GET and then sets the FORM's ACTION value to itself, freeing your code from ever having to contain hard-coded references. Then, when your NSP is called with an HTTP POST, it could process the form data rather than present the form. The included SEEFILMS NSP sample code uses this feature to allow you to change its name and store it on any server and the application will still function. Example value: HTTP:\\yourhost.com:8080\NSP?SEEFILMS

You can also use the &NSP_ values in conjunction with the <BASE> header tag to allow one server to serve static HTML and images while using special HTTP servers for your NSP processing.
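As an illustration of the field-to-&var mapping described above, here is a small Python sketch. This is purely illustrative: the NSP processor is not written in Python, the function name is invented, and the exact Nomad assignment syntax shown in the output strings is an assumption for demonstration only. It shows the one documented behavior that repeated field names become a Nomad array.

```python
from collections import OrderedDict

def nomad_assignments(pairs):
    """Illustrative sketch (not NSP processor code): turn decoded
    (name, value) pairs from a form POST or query string into
    Nomad-style assignments, grouping repeated names into arrays
    as the NSP processor is described to do."""
    grouped = OrderedDict()
    for name, value in pairs:
        grouped.setdefault(name.upper(), []).append(value)
    lines = []
    for name, values in grouped.items():
        if len(values) == 1:
            lines.append(f"&{name}='{values[0]}';")
        else:
            # Repeated fields become an array: &NAME(1), &NAME(2), ...
            for i, v in enumerate(values, start=1):
                lines.append(f"&{name}({i})='{v}';")
    return lines

print(nomad_assignments([("USERNAME", "RICK"),
                         ("LASTNAME", "TROTH"),
                         ("LASTNAME", "SMITH")]))
```

So a form with one USERNAME field and two LastName fields would yield &USERNAME, &LASTNAME(1) and &LASTNAME(2). (Quote doubling and length truncation are covered under the known limitations below and are omitted here.)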

What do I need to start writing Nomad Server Pages today?

You need VM, Nomad2 and TCP/IP to start with. The NSP processor is available in the Nomad download library. It has been tested with Beyond Software's Enterprise Web, Neon's Shadow VM Webserver and Rick Troth's Webshare. It will most likely work with RP/Web, but I haven't tried it.


Most important, the NSP processor does not require RP/Web to bring the power of Nomad to your web applications. The NSP processor is freely available, and Webshare is freely available, so you can get started writing NSP applications today!

Known limitations:

1) The = directive must be the first command in a statement. This means you cannot code:

<%IF &LOCATION='Y2' THEN =&PLUGH;%>

But you can code:

<%IF &LOCATION='Y2' THEN DO;=&PLUGH;END;%>

2) Nomad &variables are limited to 15 characters; HTML form fields can be much longer. The NSP processor will truncate long field names, header types and query parameters to 15 characters.

3) If the NSP processor must truncate two or more variables so that their names become identical, they will be treated as an array. This means REALLY_LONG_NAME_A and REALLY_LONG_NAME_B will both become &REALLY_LONG_NAM, so they become &REALLY_LONG_NAM(1) = (value for A) and &REALLY_LONG_NAM(2) = (value for B).

4) Query parameters and form field names can start with numerals; in fact, you could have a field named "2". The NSP processor will prepend a _ to any parameter or field starting with a numeral, so that 2 becomes &_2.

5) Query parameters and form field names can have many strange characters in them. The NSP Processor will convert anything outside of A-Z, 0-9 and _ to a _ such that Request-Method becomes &REQUEST_METHOD.

6) Nomad strings can't exceed 255 characters. Any query parameters, form fields or HTTP headers longer than 255 characters are truncated.

7) Any single quotes (') within a value will be converted to double single quotes, which Nomad will then interpret as a single quote. In values near the 255-character limit, the doubled quotes may artificially inflate the length past 255, so after truncation the final Nomad string may end up shorter than 255 characters.

8) You should be able to use Microsoft's Visual InterDev to design and edit NSP files, but I have found it a little overzealous in converting & to the HTML equivalent "&amp;", meaning &NAME becomes &amp;NAME, which Nomad chokes on. FrontPage has warned that it might mistakenly convert scripted "&"s, but it has never actually done it. "&" is a valid JavaScript operator, so I would expect web design tools to be smarter about it.

9) The NSP processor needs write access to the A-disk (actually it can be any file mode, via a simple edit of NSP CGI), but each web server virtual machine will need some writeable DASD for a work file.

10) Field and query parameter values do become executed Nomad code. This is, of course, a security no-no. I'm pretty sure that a user cannot trick an NSP into executing code passed as a query parameter, but I'd love some other folks to take a look and try to break it. (See _$NSP$_ NOMAD on the web server's writeable DASD to see what actually gets sent to Nomad for execution.)
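The name and value rules in limitations 2, 4, 5, 6 and 7 can be sketched as follows. This is an illustrative Python emulation of the documented behavior, not the NSP processor's actual code; the function names are invented for the sketch.

```python
import re

MAX_NAME = 15    # Nomad &variable name limit (limitation 2)
MAX_VALUE = 255  # Nomad string length limit (limitation 6)

def nsp_var_name(field_name: str) -> str:
    """Emulate how the NSP processor is described to derive a Nomad
    &var name from a form field or query parameter name."""
    # Limitation 5: anything outside A-Z, 0-9 and _ becomes _
    name = re.sub(r"[^A-Z0-9_]", "_", field_name.upper())
    # Limitation 4: names starting with a numeral get a _ prefix
    if name[:1].isdigit():
        name = "_" + name
    # Limitation 2: names are truncated to 15 characters
    return name[:MAX_NAME]

def nsp_value(value: str) -> str:
    """Emulate the described value handling."""
    # Limitation 7: single quotes are doubled for Nomad
    quoted = value.replace("'", "''")
    # Limitation 6: strings longer than 255 characters are truncated
    return quoted[:MAX_VALUE]

print(nsp_var_name("Request-Method"))      # REQUEST_METHOD
print(nsp_var_name("2"))                   # _2
print(nsp_var_name("REALLY_LONG_NAME_A"))  # REALLY_LONG_NAM
```

Note how REALLY_LONG_NAME_A and REALLY_LONG_NAME_B both truncate to the same name, which is exactly the collision that limitation 3 resolves by building an array.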


TELON: An Overview

  


 

Structure of an Online TELON Program

An online TELON program is divided into two main parts: an output side and an input side. The most typical flow of a TELON online program is as follows:

When a user flows from one screen to another, the output side of the TELON program is executed before the second screen is displayed. The screen is then displayed, the user performs whatever action is required and presses either the Enter key or a PF key, which causes the input side of the TELON program to be executed.

Please refer to attachment A as appropriate, which contains a detailed Online TELON Structure Chart, as you read through the rest of this document.

Output Processing

The purpose of the output side of a TELON program, as the name suggests, is to retrieve data and format it so that it can be displayed in a meaningful way on the screen. The output side of a TELON program is divided into two main sections:

A-100-OUTPUT-INIT

The primary purpose of this section is to:

- allow any initial output processing to be done, such as getting the system date, performing some initial reads, and any output formatting that may be required.

B-100-OUTPUT-EDITS

The primary purposes of this section are to:

- map to the output fields on the screen any corresponding DBNAME (see paragraph 'DBNAME Variables' on page 8) specified on the Panel Definition. This mapping of data can also involve TELON performing various automatic functions, such as ensuring the field is of a certain type (e.g. numeric, char, dollar, etc.), reformatting the field, or suppressing output.

- perform output list processing, where TELON will, as much as possible, handle the data retrieval for a list and the paging forward and backward logic.

- allow the developer to add any other final output logic they require, such as an output message or the protection of field(s).

  

Input Processing

The purpose of the input side of a TELON program, as the name suggests, is to process input received from the user, whether data keyed in or a PF key pressed. The input side of a TELON program is divided into four main sections:


P-100-PFKEYS

The purpose of this section is to deal with all the PF-key processing in a TELON program, such as PF1 for Help, PF2 for Hold, and so on.

D-100-INPUT-INIT

The purpose of this section is to deal with any initial data retrieval or editing that is required before the program proceeds to perform the main edits. Typical logic in this section is the validation of 'key' fields.

E-100-INPUT-EDITS

The primary purposes of this section are to:

- map to the DBNAME fields (see paragraph 'DBNAME Variables' on page 8) any corresponding screen field specified on the Panel Definition. This mapping of data can also involve TELON performing various automatic functions, such as ensuring the field is of a certain type (e.g. numeric, char, dollar, etc.) or reformatting the field.

- perform input list processing, where TELON will loop through each line on a list and, as much as possible, perform the validation logic itself, while allowing you, the developer, to add additional validation logic.

- allow the developer to add their own field edit logic.

X-100-CONSIS-EDITS

The primary purpose of this section is to perform consistency edits, that is, any edit that involves more than one field or that involves database checking.

H-100-INPUT-TERM

The primary purpose of this section is to perform any database update logic and, potentially, to include logic to transfer to another program.

TELON Time Savers

This section highlights some of the main features of TELON that enable a developer to minimise the amount of code they need to write. Some main features are:

Panel Image

This allows a developer to paint a screen, defining all the input, output, input/output, and literal fields on the screen.

Panel Definition

This allows a developer to specify:

- automatic mapping between the screen fields and the DBNAME fields
- automatic editing of the screen fields, such as numeric, date, dollar, fullcar, fullnum, etc.


- converting a field from one value to another
- reformatting the field
- automatic output formatting of the screen field via a picture clause
- suppression of an output field via a mapout field

Data Access

TELON has a number of panels in what is called the Data Group. The Data Group enables the developer to bring in table definitions and to define various data accesses. When defining a data access, the developer can, via fields on various panels, specify:

- what type of access it is, such as a read, readnext, delete, or update
- whether they want the read to be performed automatically in set place(s) by TELON (very handy in particular for list processing), or whether they want to perform the read themselves in Custom Code
- the columns they want returned
- the predicate they want
- the ORDER BY clause
- the GROUP BY clause
- the HAVING clause
- the SQL return codes to allow through; any return code not specified will cause TELON to automatically go into its abend processing, since it validly assumes that you have encountered corrupt data and want to abend

List Processing

TELON can generate a lot of the logic needed to display a list screen, and in some cases all but a couple of lines. It can handle:

- the paging forward and backward logic
- setting of a 'more pages' indicator
- the database read to populate the list, as long as the list has one main 'driving' cursor read
- saving any of the fields in a line on the list to a table so that they can be referred to in subsequent processing
- maintaining various variables such as line-count (the number of lines displayed on the list), input-line-count (the number of rows the user has selected), page number, line-errors (set to true when an input field on a line of the list is found to be in error), and current-segment-key (contains the first key to be used on the next page of the list)

Miscellaneous Useful Variables

There are a number of variables that TELON uses that are important to be aware of, since you will need to use several of them, and others can be extremely useful if used appropriately. Some of the more important TELON variables are:

Control-Indicator

This variable is used by TELON to control the execution of the online program. The control indicator has a number of 88-levels, these being process-output, do-write, process-input, do-transfer, continue-process, and transaction-complete. It is vital that any TELON developer understands how the control indicator variable works.

       Various Segloop Variables

 


There are several segloop variables that are extremely handy to be aware of. See paragraph 'List Processing' earlier in this chapter.

       <database name>-<database column name>-NU

For every nullable DB2 column, TELON will generate an 88-level variable you can use to test whether the column is not null. For example, column STKD-PROD-ENDT in table VXJJ0BBP, which is nullable, would have an 88-level variable generated by TELON named VXJJ0BBP-STKD-PROD-ENDT-NU.

       Various TELON Attribute Variables

There are a number of predefined attribute variables that TELON sets up, which you as a developer can use to set the attribute of a field. In particular, error-attr, prot-attr, and cursor-attr are three frequently used attributes. Other attribute variables are ok-attr, output-attr, output-blank-attr, blank-attr, cursor-blank-attr, input-blank-attr, output-high-attr, and input-high-attr.

       TPO Variables

TPO variables are variables that will be output to the screen via the message area. The TPO variables are mapped in the B-100-OUTPUT-EDITS section by TELON if a DBNAME was specified for the screen field on the panel definition. If a screen field does not have a DBNAME specified, it means that you as the developer intend to set up the screen field manually. A particularly important TPO variable to be aware of is TPO-ERRMSG1, which must be defined on every online TELON screen and is the field into which all messages that you want displayed on the screen should be moved.

       TPI Variables

TPI variables are variables that will be populated from the message area. The TPI variables are moved to a DBNAME field (if one was specified) in the E-100-INPUT-EDITS section by TELON. All TPI fields will generally have a DBNAME specified, as developers should reference the DBNAME field in their code rather than the TPI field.

       'DBNAME' Variables

'DBNAME' variables are entered on the Panel Definition screen of TELON. The DBNAME entered on the Panel Definition screen can be a DCLGEN variable, a general working storage variable, or an XFER variable. When a DBNAME is specified on the panel definition, the TPI variable will be mapped to the corresponding DBNAME variable in E-100-INPUT-EDITS, and the DBNAME variable will be mapped to the corresponding TPO variable in B-100-OUTPUT-EDITS. DBNAME variables should be used whenever possible, as they can cut down significantly on the amount of custom code and working storage you need to write. For example, if you specify a DB2 column name for an output field, then after you have read the row TELON will automatically move that DB2 column to the corresponding TPO field. In this case there is no need to manually move the DB2 column to the TPO field yourself.

DBNAME variables are typically not used when you need to perform some manual formatting on the field before it can be mapped to a TPO field.

  


 

Custom Code Points

TELON has a very fixed structure and only allows native COBOL code to be entered at fixed points in the program; these fixed points are called Custom Code points. The rest of this chapter outlines all the standard custom code points in an online TELON program and what type of code should be put in each. The custom code points are as follows:

OINIT1

This custom code point occurs at the beginning of output processing, before TELON does any automatic non-segloop reads. It is generally used to:

- determine if it is the first time into the transaction
- set up today's date and time if required
- set up host variables needed for any automatic non-segloop reads
- manually perform non-segloop reads
- manually format non-segloop fields

OINIT2

This custom code point occurs in output processing, just after TELON has performed any automatic non-segloop reads. It is generally used to:

- process the data retrieved via a non-segloop automatic read

OCUST1

This custom code point occurs in output processing and is the first custom code point for segloop processing. It occurs just before the actual segloop. It is generally used to:

- manually perform the first segloop read (the segloop read should only be performed manually if an AUTOEXEC BROWSE could not be used)
- manually format any output fields in the list to the TPO fields

OCUST2

This custom code point occurs in output processing and is the second custom code point for segloop processing. It occurs within the actual segloop and is performed 'n' times, where 'n' is the number of lines on the screen if there are enough rows in the table being read; otherwise 'n' is the number of rows still to be read from the table. This means that if there are more pages to be displayed, the last row read will contain the key to be used if the user pages forward. It is generally used to:

- manually perform the segloop read (the segloop read should only be performed manually if an AUTOEXEC BROWSE could not be used)

       OCUST3


This custom code point occurs in output processing and is the third custom code point for segloop processing. It occurs within the segloop and is performed once for each row on the screen minus one, assuming there are more pages to follow; otherwise it is performed for the remaining rows fetched within the loop. Note: if the screen is filled, this custom code point will be performed one time fewer than OCUST2. It is generally used to:

- manually format any output fields in the list to the TPO fields

OUTTERM

This is the last custom code point that occurs in output processing. It is generally used to:

- set up the 'more pages' literal for a list screen
- protect segloop lines with no data
- display an output message if appropriate
- reposition the cursor

##PFKnnn

This is the first custom code point that occurs in input processing, where ## is the application code (e.g. JJ, JL, or JK) and nnn is any three-byte string you want to give for meaningfulness (e.g. ALL, 001, 002, 1&3, etc.). It is generally used to:

- code PF-key logic, such as PF1 for Help and PF2 for Hold

ININIT1

This custom code point occurs in input processing, before TELON does any automatic reads. It is generally used to:

- perform any 'key field' edits. This only needs to be done if you have a screen where the user enters a key and the same screen then goes and gets the data. Most (if not all) screens within TESCO do not operate in this way; the key of the screen is always passed in via the previous screen.
- set up host variables needed for any automatic reads

ININIT2

This custom code point occurs in input processing, just after TELON has performed any automatic reads. It is generally used to:

- process the data retrieved via an automatic read

ICUST1

This custom code point occurs in input processing, just before TELON maps its segloop TPI fields to the DBNAME fields. It is generally used to:


- manually perform any edit checks on input segloop fields prior to TELON performing any automatic mapping

This custom code point is hardly ever used, and it is recommended that before anyone uses it they speak to at least one other TELON developer to run through why they should use it.

ICUST2

This custom code point occurs in input processing, just after TELON has mapped its segloop TPI fields to the DBNAME fields. It is generally used to:

- manually perform any edit checks on input segloop fields
- handle the 'select' field processing on a list. This includes manually validating the field as appropriate and, if a row is validly selected, moving the appropriate data from that row to the XFER area and setting next-program-id to transfer to the appropriate program.

Hopefully a technical hint on handling input list processing will be produced shortly.

FLDEDIT

This custom code point occurs in input processing, just after TELON has mapped its TPI fields to the DBNAME fields. It is generally used to:

- manually perform individual field edits. It should not be used to perform any consistency checks across multiple fields; the CONSIS custom code point should be used for that.

CONSIS

This custom code point occurs in input processing, just after all the individual field edits have been performed. It is generally used to:

- manually perform cross-field validation

INTERM

This custom code point occurs in input processing, just after all field validation has been performed. It is generally used to:

- apply any database updates, creates, and/or deletes
- put out a confirmation message stating whether the updates were successful or not
- hold any manual screen transfer code

SECTION

This custom code point occurs in neither the input nor the output side of TELON. It is a custom code point that can be used to create SECTIONs that can be performed from other custom code points. It is generally used to:

- contain sections of code that will be performed from several places within custom code
- contain sections that make the written custom code more structured and maintainable

       WKAREA

 


This custom code point occurs in the working storage section of a TELON program. It is generally used to:

- enter your own working storage variables

XFERWKA

This custom code point occurs in the working storage section of a TELON program. It is generally used to:

- define the SPA area of your TELON program

Structure of a Batch TELON Program

Please refer to attachment B at the bottom of this document as appropriate, which contains a detailed Batch TELON Structure Chart, as you read through the rest of this document.

MAIN SECTION.
    PERFORM Q-1000-PROGRAM-INIT.
    PERFORM C-1000-GET-TRAN.
    IF END-TRAN
        NEXT SENTENCE
    ELSE
        PERFORM MAIN-PROCESS-LOOP UNTIL END-TRAN
        PERFORM B-2000-END-REPORT.
    PERFORM T-1000-PROGRAM-TERM.
    GOBACK.

Q-1000-PROGRAM-INIT

THIS ROUTINE OPENS ALL FILES DEFINED AS OPEN=INPUT, I-O, AND OUTPUT. IT IS CALLED ONCE AT THE BEGINNING OF PROCESSING FROM THE PROGRAM MAINLINE. IF ANY JCL PARMS ARE DEFINED, R-1000 WILL BE PERFORMED TO PARSE THE PARMS ONE AT A TIME.

GENERATED - FILE OPENS AND PARAMETER PARSING
COPY CODE - BATCH/INIT

C-1000-GET-TRAN

THIS ROUTINE CONTROLS THE RETRIEVAL OF A BATCH INPUT TRANSACTION. IT IS PERFORMED FIRST AFTER PROGRAM INITIALIZATION AND THEN FROM THE MAIN PROCESS LOOP AFTER THE PRINTING OF EACH DETAIL. ONCE A VALID TRANSACTION HAS BEEN RETRIEVED, A-1000-PROCESS-TRAN IS PERFORMED TO PROCESS THE RETRIEVED TRANSACTION.


GENERATED - TRANSACT SEGMENT AUTOREADS
COPY CODE - BATCH/GETTRAN

MAIN-PROCESS-LOOP.
    PERFORM A-1000-PROCESS-TRAN.
    IF PROCESS-DETAIL
        PERFORM B-1000-PROCESS-DETAIL.
    IF GET-TRAN
        PERFORM C-1000-GET-TRAN.

    IF GET-TRAN OR END-TRAN OR PROCESS-DETAIL
        NEXT SENTENCE
    ELSE
        PERFORM Z-990-PROGRAM-ERROR.

B-2000-END-REPORT

THIS ROUTINE FORMATS AND PRINTS ANY FINAL CONTROL BREAKS AND PRINTS THE SUMMARY GROUP (IF DEFINED) INTO THE REPORT.

GENERATED - ENTIRE SECTION

T-1000-PROGRAM-TERM

THIS ROUTINE CLOSES ALL FILES DEFINED AS OPEN=INPUT, I-O, AND OUTPUT. IT IS CALLED ONCE AT THE END OF PROCESSING FROM THE PROGRAM MAINLINE.

GENERATED - FILE CLOSES
COPY CODE - BATCH/TERM

Attachment A - Online TELON Structure Chart


  


Attachment B – Batch TELON Structure Chart

MAINLINE STRUCTURE CHART (NO REPORT)

 

 


[Structure chart: MAIN-LINE performs Q-1000-PROGRAM-INIT (R-1000-PARSE-PARMS, Open Files, Copy Code BATCH/INIT), performs C-1000 to get the first transaction, then the iterative MAIN-PROCESS loop (A-1000-PROCESS-TRAN with SET "PROCESS first dtl grpname" and Copy Code BATCH/PRCTRAN; on "GET-TRAN", C-1000-GET-TRAN with AUTOEXEC, Copy Code BATCH/GETTRAN and SET "END-TRAN" if no more transactions; PROGRAM-ERROR/ABEND if CONTROL-INDICATOR is undefined), then B-2000-END-REPORT and T-1000-PROGRAM-TERM (Copy Code BATCH/TERM, Close Files).]

? - Conditionally generated logic
* - Iterative processing
> - Bypass logic
- Performed SECTION or called PROCEDURE

 


 

MAINLINE STRUCTURE CHART (WITH REPORT)

 

Main-line Structure 

 


[Structure chart: as in the no-report chart, MAIN-LINE performs Q-1000-PROGRAM-INIT (R-1000-PARSE-PARMS, Open Files, Copy Code BATCH/INIT), performs C-1000 to get the first transaction, then the iterative MAIN-PROCESS loop (A-1000-PROCESS-TRAN with SET "PROCESS first dtl grpname" and Copy Code BATCH/PRCTRAN; on "GET-TRAN", C-1000-GET-TRAN with AUTOEXEC, Copy Code BATCH/GETTRAN and SET "END-TRAN" if no more transactions; PROGRAM-ERROR/ABEND if CONTROL-INDICATOR is undefined), then B-2000-END-REPORT and T-1000-PROGRAM-TERM (Copy Code BATCH/TERM, Close Files). Additionally, on "PROCESS DETAIL", B-1000-PROCESS-DETAIL formats and prints the CONTROL GROUP and DETAIL GROUP and sets "GET-TRAN".]

? - Conditionally generated logic
* - Iterative processing
> - Bypass logic
- Performed SECTION or called PROCEDURE
FP - FORMAT and PRINT

  

Format and Print Structure  

 


[Structure chart: FP (Format and Print) comprises B-5000-FORMAT-grpname, B-6000-PRINT-grpname and B-9000-PAGE-BREAK, with subordinate boxes: Format and Print TOPDTL for GROUP; Move GROUP Fields to Print Lines; Copy Code GROUP FMTCUST; Format and Print TOPPAGE/BOTPAGE; Move Print Lines to Output File Record; C-2000-WRITE-REPORT.]


 

VIEWING JOB SPOOL IN EOS31J

EOS31J is used to view the Production Jobs spool on the AE23 Mainframe.

1. Log on to EOS31J.

2. After entering the user ID and password, the following screen is encountered.

3. Give option J9 for annuities/pension jobs.

 

4. Now you can locate the job by giving 'L JOBNAME'.

5. Insert 'S' in front of the required job to find the spool details for the job. (The rest of the navigation commands are the same as in AE20.)

6. Function key 4 (or EXIT/END) is used to exit from the spool.

7. Function key 3 will take you to the screen below.


8. This screen allows the user to select the required function or to request the display of a REPORT selected by NAME, IDENTIFICATION, or JOBID. Unless Y is specified for OLDEST, the DIRECTORY is searched from the NEWEST REPORT.

AREP IN EOS31J

The AREP command allows the user to select report(s) by entering the selection criteria.


It is used to recall jobs (which ran in the past) from the job spool. You can also give the date range in which these jobs ran.


Give the 'R' (recall) option in front of the job you want to recall.


   Give the command Y;Y;Y  


The jobs, after being recalled, can be located in the RESTORED SPOOL by entering 'RST' on the following screen.


 

Clean-Up Tool

Synopsis

After the transfer of data management to IBM, every byte of data stored on disk and tape is charged for. This includes datasets which were created in test/production and are no longer required. This data continues to exist in storage and causes a lot of wastage. To bring down the operating cost, this wastage must be reduced. To help manage data efficiently, NIIT has developed this tool for ING. The tool analyses the data based on inputs from the user and generates a report showing how much space can be saved by removing unwanted data.

Features

- Developed in REXX
- User-friendly REXX input panel
- Easy to use
- Saves more than one and a half person-months of effort each year
- Supports all kinds of datasets, viz. Physical Sequential, VSAM, and GDG, by categorizing them into migrated (disk) datasets and tape datasets
- Flexibility to exclude datasets from the report/deletion
- Flexibility to opt for 'only generate a report' or 'delete the (test) datasets and generate the reports'
- Flexibility to be used in any of the ING systems

Using the Tool

The tool can be invoked from the command line by issuing the TSO CLEANUP command. This command will show the entry panel. Each input field is described below.

                   


DATASETS TO BE CLEANED UP – Allows the user to choose which datasets s/he wants to analyze/delete. Option 1 is for Tape datasets and Option 2 is for Migrated datasets.

PATTERN – Allows the user to enter a high-level qualifier. (If you want to analyze the datasets created by user N118625, with the second qualifier TEST, enter N118625.TEST. Do not enter N118625.TEST.*)

MIGRATED/EXPIRED BEFORE – Enter a valid date in YYYYMMDD format. For tapes, the report will show all the datasets that expired before this date; for Migrated datasets, it will show all the datasets migrated before this date.

DELETE/VIEW REPORT – Default is 1. Option 1 creates the report and does not delete the datasets. Option 2 deletes the datasets and generates the report. Do not use option 2 for production datasets or datasets to which the user doesn't have the required access.

Here is an example where I want to generate a report of all the datasets whose names start with F912P.ACCTG.GENERAL. Note that these are production datasets to which I do not have the required authority; if I select the delete option, I will get unexpected results.

After hitting Enter, the tool asks whether the user wants to exclude some of the datasets. Select 'Y' to exclude some; otherwise hit any key.


On hitting Enter, a list of all the tape datasets with the required qualifier will be displayed. Delete the entries for any datasets that are to be excluded.

If the user wants to keep the first 5 datasets shown above, he can issue the Delete line command to remove those datasets from the list. Hit PF3.


The final report is generated in dataset USERID.FIRST LEVEL QUALIFIER.TAPE.REPORT. For Migrated datasets, the flow is as follows.

Hit Enter. This will generate a JCL. Please submit this JCL (also specified at the bottom of the screen in red font).


Press F3 to come out of this JCL after submitting it. The tool will again ask whether you want to exclude some of the datasets. If 'Y' is entered, it will open the dataset containing the list of the datasets to be analyzed, and the user can choose to remove some of the datasets from the list.

  The final report appears in USERID.FIRST LEVEL QUALIFIER.MIGRAT.REPORT  


Error Messages

E100 – No datasets are present for the given qualifier before the specified expiration/migration date.

E101 – The delete option was chosen and the user doesn't have the required access authority to delete the datasets.


                                               

EOS32R Overview

EOS32R runs on the AE23 LPAR of the ING Boulder mainframe. It is used to view the ERD reports spooled from mainframe production batch jobs.

How to View a Production Report

1.      Log on to the AE23 Boulder mainframe.

2.      Select EOS32R from the Sess ID and press Enter.

3.      Select the environment for which the report is to be viewed and press Enter.


4.      Press F3 on the following screen.


5.      Select the desired function and press Enter.

6.      Enter the form name.


7.      Select the report to be viewed and press Enter.


8.      View the report.


TWS – Forecasting and Creating Daily Trial Plan

1) From the TWS main menu, select option 3, OPERATIONS PLANNING AND CONTROL.

   Option ===> 3

   Welcome to OPC. You are communicating with OC23.
   Select one of the following options and press ENTER.

   0   OPTIONS        - Define OPC dialog user parameters and options
   1   DATABASE       - Display or update OPC data base information
   2   LTP            - Long Term Plan query and update
   3   DAILY PLANNING - Produce daily plans, real and trial
   4   WORK STATIONS  - Work station communication
   5   MCP            - Modify the Current Plan
   6   QCP            - Query the status of work in progress
   7   OLD OPERATIONS - Restart old operations from the DB2 repository
   9   SERVICE FUNC   - Perform OPC service functions
   10  OPTIONAL FUNC  - Optional functions
   X   EXIT           - Exit from the OPC dialog

2) On the PRODUCING OPC DAILY PLANS panel, select option 3, TRIAL.

   Option ===> 3

   1  REPLAN         - Replan current planning period
   2  EXTEND         - Extend the current planning period
   3  TRIAL          - Produce a trial plan
   4  PRINT CURRENT  - Print statistics for current planning period
   5  SYMPHONY RENEW - Create Symphony file starting from Current Plan

3) Enter the date for which the daily trial plan is to be created, as shown below.

   PRODUCING TRIAL DAILY PLAN
   Command ===>

   Enter/change data below and press ENTER to submit the job.

   Current plan created : 04/12/22  05.52
   Current plan ends    : 04/12/23  06.00

   For trial replan, only report selection indicators required.
   For future period:
     START DATE ===> 04/12/24   Format YY/MM/DD
          TIME  ===> 06.00      Format HH.MM
     END DATE   ===> 04/12/25   Format YY/MM/DD
          TIME  ===> 06.00      Format HH.MM

   For extend/future (in effect if no end date above):
     EXTENSION LENGTH ===> _____   HHHMM
     TYPE             ===> A       A - includes all days
                                   W - includes only work days

4) Keep the defaults on the screen below as shown and press ENTER.

   PRODUCING TRIAL DAILY PLAN
   Command ===>

   Enter/change data below and press ENTER to submit the job.

   Report selection:  Y - if report wanted, otherwise N
     WS SUMMARY       ===> Y   Summary for all work stations
     OPERATING PLAN   ===> Y   Daily operation plan
     WS PLANS         ===> Y   Plans for all work stations
     NON REPORTING    ===> Y   Plans for non reporting work stations
     PLANNED RESOURCE ===> Y   Planned resource utilization

   Copy VSAM used:  Y - if copy VSAM wanted, otherwise N
     LTP COPY ===> N   LTP copied input data set
     AD COPY  ===> N   AD copied input data set
     WS COPY  ===> N   WS copied input data set
     RD COPY  ===> N   RD copied input data set

5) Modify the JCL job card accordingly.
   Specify the dataset name in the DATASET NAME field; otherwise, by default, the utility creates USERID.OC23.DPTRA.LIST.

   GENERATING JCL FOR A BATCH JOB
   Command ===>

   Enter/change data below and press ENTER to submit/edit the JCL.
   JCL to be generated for: Trial daily planning

   SYSOUT CLASS       ===> _          (Used only if output to system printer)
   LOCAL PRINTER NAME ===> ________   (Used only if output on local printer
                                       and SYSOUT CLASS is blank)
   DATASET NAME       ===> ____________________________________________
                          (Used only if SYSOUT CLASS and LOCAL PRINTER are
                           both blank. If blank, the default name used is
                           N120093.OC23.DPTRA.LIST)
   SUBMIT/EDIT JOB    ===> S          S to submit JOB, E to edit

   Job statement:
   ===> //N120093T JOB (95115920),HTNB2.TF,NOTIFY=&SYSUID,
   ===> //     CLASS=A,MSGCLASS=T,COND=(5,LT),REGION=0M

   ===> //*

Note: In this way a trial plan can be created for any future date. If the trial plan for the current period is required, do not enter the date and time in step 3 above.

For verifying a future date schedule, e.g. the holiday schedule for 12/24/2004:


Step 1. Run the TWS trial plan for the future date (refer to the attached DOC).

Step 2. Run the SAS utility job:
   T959.VKS.TESTJCL(TWSACEHO) – AE20/23
   NW40PAUT.TWS.JOBS(TWSPREM) – NW01

Customize the job as follows (example for AE20):
a) Modify the JOBFILE DSN to point to the TWS trial plan dataset created in Step 1:
   //JOBFILE  DD  DSN=N120093.OC23.DPTRA.LIST
b) Modify the SAS card for the SYSTEM names:
   WHERE SYSTEM IN ('J913','J944','J959','J970','J988','J9DT','J9DV','J9DW');

The job writes its output to the SASLIST DD name in SDSF. The output looks like the following (sample for ACES):
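The SAS WHERE SYSTEM IN (...) clause in step 2b is just a membership test on the SYSTEM field. A Python sketch of the equivalent filter (the record layout is assumed for illustration):

```python
# Systems selected in the SAS card of the TWSACEHO job.
SYSTEMS = {'J913', 'J944', 'J959', 'J970', 'J988', 'J9DT', 'J9DV', 'J9DW'}

def filter_records(records):
    """Keep only records whose SYSTEM field is in the selected set,
    mirroring the SAS WHERE clause."""
    return [r for r in records if r["SYSTEM"] in SYSTEMS]

rows = [{"SYSTEM": "J9DT", "JOB_ID": "J9DTAATH"},
        {"SYSTEM": "X999", "JOB_ID": "OTHER"}]
# Only the J9DT row survives the filter.
```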

SYSTEM  JOB_ID    STATION   APP_ID        PRED          PRED_JOB

J9DT    J9DTAATH  AE23_051  A9DT#0013#00  A9DT#0013#00  NONR_001 START
                            A9DT#0013#00  A959#0023#00  AE23_036 J959TNHS
J9DT    J9DTGMR   NONR_044  PRIORDAYSRUN  PRIORDAYSRUN  NONR_001 START
J9DT    J9DTG2HS  AE23_003  A9DT#0013#00  A9DT#0013#00  NONR_001 START
                            A9DT#0013#00  A959#0023#00  AE23_030 J959TKHS
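Detail lines like the ones above can be post-processed by splitting on whitespace. A hypothetical sketch (the column meanings follow the report header; continuation lines carry fewer fields):

```python
def parse_row(line):
    """Split one SASLIST detail line into whitespace-separated fields.
    A full row yields SYSTEM, JOB_ID, STATION, APP_ID, PRED, and the
    predecessor's station and job; continuation rows omit the leading
    SYSTEM/JOB_ID/STATION columns."""
    return line.split()

fields = parse_row(
    "J9DT  J9DTAATH  AE23_051  A9DT#0013#00  A9DT#0013#00  NONR_001  START")
# fields[0] -> 'J9DT' (SYSTEM), fields[1] -> 'J9DTAATH' (JOB_ID)
```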