Connect CDC SQData Db2 Capture Reference, Version 4.1

May 08, 2023


© 2001, 2022 SQData. All rights reserved.

Version 4.1

Last Update: 7/11/2022


Contents

Introduction
  Db2/z Data Capture Summary
  Organization
  Terminology
  Documentation Conventions
  Related Documentation
Db2/z Log Reader Capture
  Implementation Checklist
  Prepare Environment
    Identify Source and Target System and Datastore
    Confirm/Install Replication Related APARS
    Modify z/OS PROCLIB Members
    Verify product is Linked
    Bind the Db2/z Package
    Verify APF Authorization of LOADLIB
    Create zFS Variable Directories
    Reserve TCP/IP Ports
    Identify/Authorize zFS User and Started Task IDs
    Prepare Db2/z for Capture
    Generate z/OS Public / Private Keys and Authorized Key File
  Setup CDCStore Storage Agent
    Size Transient Storage Pool
    Apply Frequency
    Create zFS Transient Data Filesystem
    Create z/OS CDCStore CAB file
  Setup Log Reader Capture Agent
    Configure Db2 Tables for Capture
    Create Db2/z Capture CAB File
    Encryption of Published Data
    Data Sharing Environments
    Prepare Db2/z Capture JCL
  Setup Capture Controller Daemon
    Create Access Control List
    Create Agent Configuration File
    Prepare z/OS Controller Daemon JCL
  Configure Engine
  Component Verification
    Start z/OS Controller Daemon
    Start Db2/z Log Reader Capture Agent
    Start Engine
    Db2 Test Transactions
Operation
  Start / Reload Controller Daemon
  Setting the Capture Start Point
  Restart / Remine Db2/z
    Normal Restart
    Restart from Current
    Point-in-time Recovery
  Apply Capture CAB File Changes
  Displaying Capture Agent Status
  Displaying Storage Agent Statistics
  Interpreting Capture/Storage Status
  Modifying z/OS Transient Storage Pool
  Stopping the Db2/z Capture Agent
  Health Checker
Operating Scenarios
  Initial Target Load and Refresh
    Dynamic Capture Based Refresh
    DB2 Unload Engines
  Capture New Db2/z Data
  Send Existing Db2/z Data to New Target
    Add Subscription to Log Reader Capture
    Add New Engine
    Add Engine Controller Daemon
    Update Capture Controller Daemon
    Apply Configuration File changes
  Filter Captured Data
    Capture Side Filters
    Engine Filters
  Adding Uncataloged Tables
  Db2/z Straight Replication
    Target Implementation Checklist
    Create Target Tables
    Generate Engine Public / Private Keys
    Create Straight Replication Script
    Prepare z/OS Engine JCL
    Verify Straight Replication
  Db2/z Active/Active Replication
Db2/z Capture Troubleshooting
  Db2/z Source Database Reorgs and Load Replace
  Db2/z Compression Dictionary Delays
  CPU Utilization on MOUNT
  Changes Not Being Captured
  Long Table Names
  Db2/z Log Buffer Delays
  Upgrading Db2/z to 10 Byte LSN
  Compensation Analysis and Elimination
  Signal Errors
  z/OS Diagnostic Dumps


Introduction

Precisely's Connect CDC SQData enterprise data integration platform includes Change Data Capture agents for the leading source data repositories, including:

· Db2 on z/OS

This document is a reference manual for the configuration and operation of this capture agent, including the transient storage and publishing of captured data to Engines running on z/OS and other platforms. Included in this reference is an example of Simple Replication of the source datastore. Apply Engines can also perform complex replication to nearly any form of structured target data repository, utilizing business rule driven filters, data transformation logic and code page translation.

The remainder of this section:

· Summarizes features and functions of the Db2 z/OS change data capture agent

· Describes how this document is organized

· Defines commonly used terms

· Defines documentation syntax conventions

· Identifies complementary documents


Db2/z Data Capture Summary

The Connect CDC SQData Db2/z Log Reader Capture utilizes the LogMiner API to gain access to log data.

Attribute                    Db2/z Log Reader
Data Capture Latency         Near-Real-Time or Asynchronous
Capture Method               Log Miner
Unit-of-Work Integrity       Committed Only
Output Datastore Options     TCP/IP
Runtime Parameter Method     SQDCONF
Auto-Disable Feature         Yes
Auto-Commit Feature          Yes
Multi-Target Assignment      Yes
Include/Exclude Filters      Correlation ID, Db2 Plan, Authorization ID
Transaction Include/Exclude  Yes
DDL Change Tracking          Optional support for Replicator Schema Evolution


Organization

The following sections provide a detail level reference to the installation, configuration and operation of the Connect CDC SQData Capture Agent for Db2:

· Db2/z Log Reader Capture

· Operation

· Operating Scenarios, including:

a. Initial Target Load and Refresh

b. Capture New Db2/z Data

c. Db2/z Straight Replication

d. Db2/z Active/Active replication

· Db2/z Capture Troubleshooting

See the Change Data Capture Guide for an overview of the role capture plays in Precisely's Connect CDC SQData enterprise data integration product, the common features of the capture agents, and the transient storage and publishing of captured data to Engines running on all platforms.


Terminology

Terms commonly used when discussing Change Data Capture:

Term       Meaning

Agent      Individual components of the Connect CDC SQData product architecture.

CDC        Abbreviation for Changed Data Capture.

Datastore  An object that contains data, such as a hierarchical or relational database, VSAM file, flat file, etc.

Exit       A classification for changed data capture components where the implementation utilizes a subsystem exit in IMS, CICS, etc.

File       Refers to a sequential (flat) file.

JCL        An abbreviation for Job Control Language, which is used to execute z/OS processes.

Platform   Refers to an operating system instance.

Record     A basic data structure usually consisting of fields in a file, topic or message, or a row consisting of columns in a relational database table. Record may be used interchangeably with row or message.

Segment    A basic data structure consisting of fields in an IMS hierarchical database. Segments are records having parent and child relationships with other records defined by a Database Description (DBD).

Source     A datastore monitored for content changes by a Capture Agent.

SQDconf    A utility that manages configuration parameters used by some data capture components.

SQDXPARM   A utility that manages a set of parameters used by some IMS and VSAM changed data capture components.

Table      Used interchangeably with relational datastore. A table represents a physical structure that contains data within a relational database management system.

Target     A datastore where information is being updated/written.


Documentation Conventions

The following conventions are used in command and configuration syntax and examples in this document.

Convention: Regular type
Explanation: Items in regular type must be entered literally using either lowercase or uppercase letters. Items in Bold type are usually "commands" or "Actions". Note, uppercase is often used in "z/OS" objects for consistency, just as lowercase is often used on other platforms.
Examples: create, CCSID, /directory, //SYSOUT DD *

Convention: <variable>
Explanation: Items between < and > symbols represent variables. You must substitute an appropriate numeric or text value for the variable.
Example: <file_name>

Convention: | Bar
Explanation: A vertical bar indicates that a choice must be made among items in a list separated by bars.
Examples: 'yes' | 'no', JSON | AVRO

Convention: [ ] Brackets
Explanation: Brackets indicate that an item is optional. Items separated by a | indicate a choice may be made among multiple items.
Examples: [alias] OR [yes | no]

Convention: -- Double dash
Explanation: Double dashes "--" are used in two contexts. They may precede an option keyword; many keywords can also be abbreviated and preceded by a single dash "-". They are also used to indicate the start of a single line comment.
Examples: --service=<port> OR -s <port> OR --apply OR -- this is a comment

Convention: … Ellipsis
Explanation: An ellipsis indicates that the preceding argument or group of arguments may be repeated.
Example: [expression…]

Convention: Sequence number
Explanation: A sequence number indicates that a series of arguments or values may be specified. The sequence number itself must never be specified.
Example: field2

Convention: ' ' Single quotes
Explanation: Single quotation marks that appear in the syntax must be specified literally.
Example: IF code_value = 'a'


Related Documentation

Installation Guide - This publication describes the installation and maintenance procedures for the Connect CDC SQData for z/OS, Linux/AIX and Windows products.

Product Architecture - Describes the overall architecture of the Connect CDC SQData product and how its components deliver true Enterprise Data Integration.

Data Capture Guide - This publication provides an overview of the role capture plays in Precisely's Connect CDC SQData product, the common features of Capture, and the methods supported for store and forward transport of captured data to target Engines running on all platforms.

Apply and Replicator Engine References - These documents provide a detail level reference describing the operation and command language of the Connect CDC SQData Apply and Replicator Engine components, which support target datastores on z/OS, AIX, Linux and Windows.

Secure Communications Guide - This publication describes the Secure Communications architecture and the process used to authenticate client-server connections.

Utility Guides - These publications describe each of the Connect CDC SQData utilities, including SQDconf, SQDmon, SQDutil and the z/OS Master Controller.

Messages and Codes - This publication describes the messages and associated codes issued by the Capture, Publisher, Storage agents, Parser, Apply and Replicator Engines, and Utilities in all operating environments including z/OS, Linux, AIX, and Windows.

Quickstart Guides - Tutorial style walk-throughs for some common configuration scenarios including Capture and Replication. z/OS Quickstarts make use of the ISPF interface. While each Quickstart can be viewed in WebHelp, you may find it useful to print the PDF version of a Quickstart Guide to use as a checklist.

Db2/z Quickstart - Procedures and screen shots that illustrate a fast and simple method of creating, configuring and running the components required for Db2/z changed data capture.


Db2/z Log Reader Capture

The Db2/z Log Reader Capture is multi-threaded and comprises three components within the SQDDb2C module: the Log Reader based Capture agent, and the CDCStore multi-platform transient Storage Manager and Publisher. The Storage Manager and Publisher together maintain both transient storage and UOW integrity. Only committed Units-of-Work are sent by the Publisher to Engines via TCP/IP.


Implementation Checklist

This checklist covers the tasks required to prepare the operating environment and configure the Db2/z Log Reader Capture Data Capture Agent. Before beginning these tasks, however, the base Connect CDC SQData product must be installed. Refer to the Installation Guide for an overview of the entire product and the z/OS installation instructions and prerequisites.

 #  Task                                                             Sample JCL  z/OS Control Center

Prepare Environment
 1  Identify Source and Target System and Datastores                 N/A
 2  Confirm/Install Db2 Replication Related APARS                    N/A
 3  Modify z/OS Procedure Lib (PROCLIB) Members                      N/A
 4  Verify Product is Linked                                         SQDLINK
 5  Bind the Db2 Package                                             BINDSQD
 6  Verify APF Authorization of LOADLIB                              N/A
 7  Create ZFS Variable directories                                  ALLOCZDR
 8  Reserve TCP/IP Ports                                             N/A
 9  Authorize zFS User and Started Task IDs and specify MMAPAREAMAX  RACFZFS
10  Prepare Db2 for Capture                                          DB2GRANT
11  Generate z/OS Public/Private keys and Authorized Key Files       NACLKEYS
Environment Preparation Complete

Setup CDCStore Storage Agent
 1  Size the Transient Storage Pool                                  N/A         *
 2  Create Db2 Capture zFS Transient Data File(s)                    ALLOCZFS
 3  Create the CDCStore CAB file                                     SQDCONDS    *
CDCStore Storage Agent Setup Complete

Setup Db2 Log Reader Capture Agent
 1  Configure Db2 Tables for Capture (DB Server)                     N/A
 2  Create Db2 Log Reader Capture CAB File                           SQDCONDC    *
 3  Prepare Log Reader Capture Runtime JCL                           SQDDB2C     *
Capture Agent Setup Complete

Setup Controller Daemon
 1  Create Access Control List (acl.cfg)                             CRDAEMON    *
 2  Create Agent Configuration File (sqdagents.cfg)                  CRDAEMON    *
 3  Prepare z/OS Controller Daemon JCL                               SQDAEMON    *
Controller Daemon Setup Complete

Configure Apply Engine
 1  Determine Requirements                                           N/A
 2  Configure Apply Engine Environment                               N/A
 3  Create Apply Engine Script                                       N/A
Apply Engine Configuration Complete

Component Verification
 1  Start the Controller Daemon                                      SQDAEMON    *
 2  Start the Capture Agent                                          SQDDB2C     *
 3  Start the Engine                                                 SQDATA      *
 4  Execute Test Transactions                                        N/A
Verification Complete


Prepare Environment

Implementation of the Db2/z Log Reader Capture agent requires a number of environment specific activities that often involve people and resources from different parts of an organization. This section describes those activities so that internal procedures can be initiated to complete them prior to the actual setup and configuration of the Connect CDC SQData capture components.

· Identify Source and Target System and Datastores

· Confirm/Install Replication Related APARS

· Modify z/OS PROCLIB Members

· Verify Product is Linked

· Bind the Db2 Package

· Verify APF Authorization of LOADLIB

· Create ZFS Variable Directories

· Reserve TCP/IP Ports

· Identify/Authorize Operating User(s) and Started Task(s)

· Prepare Db2/z for Capture

· Generate z/OS Public / Private Keys and Authorized Key File

Identify Source and Target System and Datastore

Configuration of the Capture Agents, Engines and their Controller Daemons requires identification of the system and type of datastore that will be the source of and target for the captured data. Once this information is available, requests for ports, accounts and the necessary file and database permissions for the Engines that will run on each system should be submitted to the responsible organizational units.

Confirm/Install Replication Related APARS

The Db2/z Capture Agent utilizes Db2 logging and Log Reader IFI calls. That functionality has evolved over time as both customers and IBM have identified problems. IBM initially creates a problem management report (PMR) when a problem is identified. Next, an authorized program analysis report (APAR) is issued containing symptoms and work-arounds to document and track its resolution. Eventually IBM may produce a program temporary fix (PTF) to replace the module in error, and the APAR is closed. IBM also bundles previous dependent PTFs together with enhancements described by a new APAR that, when implemented, activates what is referred to as a Function Level. Each Function Level, when implemented, activates all enhancements, fixes and preventative maintenance items present in lower Function Levels.

Precisely recommends requesting from IBM the list of replication related APARs associated with your installed version of Db2 to ensure that your system is up to date before beginning to use the Connect CDC SQData Db2/z Log Reader Capture. IBM will understand that this includes any PTFs related to Db2 Logging and IFCID 306 IFI calls.

Notes:

1. Db2 Version 12 also requires implementation of the 10 Byte Log Sequence Number (LSN). If that was not completed prior to implementing the Db2/z Change Data Capture, see the section Upgrading Db2/z to 10 Byte LSN below.


2. The Db2/z Dynamic Refresh functionality in Connect CDC SQData 4.1 requires Db2 V12 R1 M500 in order to support processing "slices" of the source table based on row count or From and To Key values. See Dynamic Capture Based Refresh below.

Modify z/OS PROCLIB Members

Modify the members of the SQDATA.V4nnnn.PROCLIB as required for your environment to set the supporting system dataset names (i.e. Language Environment, IMS, VSAM, Db2, etc.). Each member contains instructions on the necessary modifications. Refer to the SQDLINK procedure for the names of system level datasets, as it was updated during the base Connect CDC SQData Installation.

Verify product is Linked

The Connect CDC SQData product should have been linked as part of the base product installation using JCL similar to sample member SQDLINKA. Verify that the return code from this job was 0.

Bind the Db2/z Package

The Db2/z Capture Agent and Apply Engines that access Db2 tables require a Db2 Package/Plan in order to access the Db2 system catalogs and obtain information regarding the tables being processed. A common database request module (DBRM), SQDDDB2D, is shipped as part of the Connect CDC SQData product distribution in SQDATA.V400.DBRMLIB. The Bind of the Package/Plan SQDV4000 should be performed using JCL similar to sample member BINDSQD.

Notes:

1. Remove the APPLCOMPAT parameter if Db2 is not at level V12R1M500.

2. Once the Bind is complete, authorization for its use must be granted for started task and job user-ids, seePrepare Db2/z for Capture.

3. If the bind is being performed as part of an upgrade from Connect CDC SQData V3 to V4, then the current Capture CAB file must be updated to reflect the new Package and Plan. That can be accomplished using the sqdconf utility modify command with --plan=SQDV4000 specified.
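Example (a sketch only; the CAB file path and name are illustrative assumptions based on the recommended /home/sqdata/db2cdc working directory, entered from an OMVS shell):

    sqdconf modify /home/sqdata/db2cdc/db2cdc.cab --plan=SQDV4000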

Verify APF Authorization of LOADLIB

Execution of Connect CDC SQData on z/OS requires APF authorization of the product's Load Library, which is normally made a permanent part of the IPL APF authorization procedure during the base product installation. Verify that Connect CDC SQData is on the list of currently APF authorized files using the z/OS ISPF/SDSF facility. First, enter "/D PROG,APF" at the SDSF command prompt to generate the list. Next, enter "LOG" at the SDSF command prompt. Scroll to the bottom of the log to display the results of the previous command, and then back up and to the right to view the complete listing of the command.
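The verification steps above amount to the following commands, entered one after the other at the SDSF command prompt:

    /D PROG,APF
    LOG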

Create zFS Variable Directories

On z/OS, Connect CDC SQData product components and most parameter and configuration data will be installed in partitioned datasets. Controller Daemon and Capture/Publisher agent configurations will however be stored in the z/OS UNIX System Services file system, commonly referred to as zFS. The Controller Daemon, Capture, Storage and Publisher agents require a predefined zFS directory structure used to store a small number of files. While only the configuration directory is required and the location of the agent and daemon directories is optional, Precisely recommends the structure described below, where <home> and a "user" named <sqdata> could be modified to conform to the operating environment, and a third level created for the Controller Daemon:

/<home>/<sqdata> - The home directory used by Connect CDC SQData.

/<home>/<sqdata>/daemon - The working directory used by the Daemon that also contains two subdirectories.


/<home>/<sqdata>/daemon/cfg - A configuration directory that contains two configuration files.

/<home>/<sqdata>/daemon/logs - A logs directory, though not required, is suggested to store log files used by the controller daemon. Its suggested location below must match the file locations specified in the Global section of the sqdagents.cfg file created in the section "Setup Controller Daemon" later in this document.

Additional directories should be created for each Capture/Publisher. Precisely recommends the structures described below:

/<home>/<sqdata>/db2cdc - The working directory for the Db2 Capture and CDCStore Storage agents. The Capture and CDCStore configuration (.cab) Files will be maintained in this directory along with small temporary files used to maintain connections to the active agents.

/<home>/<sqdata>/db2cdc/data - A data directory is required by the Db2 Capture. Files will be allocated in this directory as needed by the CDCStore Storage Agent when transient data exceeds allocated in-memory storage. The suggested location below must match the "data_path" specified in the Storage agent configuration (.cab file) described later in this chapter. A dedicated File System is required in production with this directory as the "mount point".

/<home>/<sqdata>/imscdc - The working directory for the IMS Capture and CDCzLOG Publisher agents. The Capture and Publisher (.cab) Files will be maintained in this directory along with small temporary files used to maintain connections to the active agents.

/<home>/<sqdata>/[vsampub | kfilepub] - The working directory for the VSAM and Keyed File Compare Capture's CDCzLOG Publisher agent. The Publisher configuration (.cab) File will be maintained in this directory along with small temporary files used to maintain connections to the active agents.

Example:

JCL similar to the sample member ALLOCZDR included in the distribution should be used to allocate the necessary directories. The JCL should be edited to conform to the operating environment.

//ALLOCZDR JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Allocate zFS Directories for Daemon and CAB Files
//*--------------------------------------------------------------------
//* Note: 1) These directories are used by the Controller Daemon,
//*          CDCStore and CDCzLog based capture agents
//*
//*       2) The 1st, 2nd and 3rd level directories can be changed but
//*          we recommend the 2nd Level be a User named sqdata.
//*
//*       3) Leave /daemon and /daemon/cfg as specified
//*
//*       4) Your UserID may need to be defined as SUPERUSER to
//*          successfully run this Job
//*
//*********************************************************************
//*
//*------------------------------------------------------------
//* Delete Existing Directories
//*------------------------------------------------------------
//*DELETDIR EXEC PGM=IKJEFT01,REGION=64M,DYNAMNBR=99,COND=(0,LT)
//*SYSEXEC  DD DISP=SHR,DSN=SYS1.SBPXEXEC
//*SYSTSPRT DD SYSOUT=*
//*OSHOUT1  DD SYSOUT=*
//*SYSTSIN  DD *
//*  OSHELL rm -r /home/sqdata/*
//*--------------------------------------------------------------------
//* Create New ZFS Directories for Controller Daemon & Captures
//*--------------------------------------------------------------------
//CREATDIR EXEC PGM=IKJEFT01,REGION=64M,DYNAMNBR=99,COND=(0,LT)
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  PROFILE MSGID WTPMSG
  MKDIR '/home/sqdata/' +
        MODE(7,7,5)
  MKDIR '/home/sqdata/daemon/' +
        MODE(7,7,5)
  MKDIR '/home/sqdata/daemon/cfg' +
        MODE(7,7,5)
  MKDIR '/home/sqdata/daemon/logs' +
        MODE(7,7,5)
  MKDIR '/home/sqdata/db2cdc/' +
        MODE(7,7,5)
  MKDIR '/home/sqdata/db2cdc/data/' +
        MODE(7,7,5)
  MKDIR '/home/sqdata/imscdc/' +
        MODE(7,7,5)
  MKDIR '/home/sqdata/vsampub/' +
        MODE(7,7,5)
  MKDIR '/home/sqdata/kfilepub' +
        MODE(7,7,5)
/*
//

Notes:

1. Consider changing the default umask setting in the /etc/profile file, or in your .cshrc or .login file.

2. While many zFS File systems are configured with /u as the "home" directory, others use /home, the standard on Linux. References in the Connect CDC SQData JCL and documentation will use /home for consistency. Check with your Systems programmer regarding zFS on your systems.

3. The User-ID(s) and/or Started Tasks under which the Controller Daemon and Captures will run must be authorized for Read/Write access to the zFS directories.

4. A more traditional "nix" style structure may also be used where "sqdata", the product, would be a sub-directory in the structure "/var/opt/sqdata/" with the daemon and data sub-directory structures inside sqdata.

5. The BPXPRMxx member used for IPLs should be updated to include the mount point(s) for this zFS directory structure.
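As a sketch of Note 5, a BPXPRMxx MOUNT statement for the dedicated transient data file system might look like the following; the FILESYSTEM dataset name shown is an assumption and must be replaced with your site's naming standard:

    MOUNT FILESYSTEM('OMVS.SQDATA.DB2CDC.ZFS')
          MOUNTPOINT('/home/sqdata/db2cdc/data')
          TYPE(ZFS)
          MODE(RDWR)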


Reserve TCP/IP Ports

TCP/IP ports are required by the Controller Daemons on source systems and are referenced by the Engines on the target system(s) where captured Change Data will be processed. Once the source systems are known, request port number assignments for use by Connect CDC SQData on those systems. Connect CDC SQData defaults to port 2626 if not otherwise specified.
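On z/OS, a port reservation is typically recorded in the TCP/IP profile dataset (PROFILE.TCPIP). A sketch reserving the default port for the Controller Daemon follows; the job/started task name SQDAEMON is taken from the sample JCL members and should be adjusted to your environment:

    PORT
        2626 TCP SQDAEMON    ; Connect CDC SQData Controller Daemon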

Identify/Authorize zFS User and Started Task IDs

z/OS Capture and Publisher processes can operate as standalone batch Jobs or under a Started Task. Once the decision has been made as to which configuration will be employed, a User-ID and/or Name of the Started Task must be assigned. RACF must then be used to grant access to the OMVS zFS file system.

JCL similar to the sample member RACFZFS included in the distribution can be edited to conform to the operating environment and used to provide the appropriate authorizations:

//RACFZFS JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Sample RACF Commands to Setup zFS Authorization
//*--------------------------------------------------------------------
//* Note: 1) The Task/User Names are provided as an example and
//*          must be changed to fit your environment
//*
//* Started Tasks included:
//*   SQDAMAST - z/OS Master Controller
//*   SQDDB2C  - DB2 z/OS Capture Agent
//*   SQDZLOGC - IMS/VSAM LogStream Publisher
//*   SQDAEMON - z/OS Listener Daemon
//*   <admin_user> - Administrative User
//*
//* 2) MMAPAREAMAX Parm required only for DB2 CDCStore Capture
//*
//* 3) The FSACCESS step may be needed if the RACF FSACCESS
//*    class is active. See comments in the step.
//*
//*--------------------------------------------------------------------
//*
//RACFZFS EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUADS  DD DSN=SYS1.UADS,DISP=SHR
//SYSLBC   DD DSN=SYS1.BRODCAST,DISP=SHR
//SYSTSIN  DD *
ADDUSER SQDAMAST DFLTGRP(<stc_group>) OWNER(<owner_name>)
ALTUSER SQDAMAST NOPASSWORD NOOIDCARD
ALTUSER SQDAMAST NAME('STASK, SQDATA')
ALTUSER SQDAMAST DATA('FOR SQDATA CONTACT:<sqdata_contact_name>')
ALTUSER SQDAMAST WORKATTR(WAACCNT('**NOUID**'))
CONNECT SQDAMAST GROUP(<stc_group>) OWNER(<owner_name>)
PERMIT 'SQDATA.*' ID(SQDAMAST) ACCESS(READ) GEN

ADDUSER SQDDB2C DFLTGRP(<stc_group>) OWNER(<owner_name>)
ALTUSER SQDDB2C NOPASSWORD NOOIDCARD
ALTUSER SQDDB2C NAME('STASK, SQDATA')
ALTUSER SQDDB2C DATA('FOR SQDATA CONTACT:<sqdata_contact_name>')
ALTUSER SQDDB2C WORKATTR(WAACCNT('**NOUID**'))
CONNECT SQDDB2C GROUP(<stc_group>) OWNER(<owner_name>)
ALTUSER SQDDB2C OMVS(PROGRAM('/bin/sh'))
ALTUSER SQDDB2C OMVS(MMAPAREAMAX(262144))


PERMIT 'SQDATA.*' ID(SQDDB2C) ACCESS(READ) GEN

ADDUSER SQDZLOGC DFLTGRP(<stc_group>) OWNER(<owner_name>)
ALTUSER SQDZLOGC NOPASSWORD NOOIDCARD
ALTUSER SQDZLOGC NAME('STASK, SQDATA')
ALTUSER SQDZLOGC DATA('FOR SQDATA CONTACT:<sqdata_contact_name>')
ALTUSER SQDZLOGC WORKATTR(WAACCNT('**NOUID**'))
CONNECT SQDZLOGC GROUP(<stc_group>) OWNER(<owner_name>)
ALTUSER SQDZLOGC OMVS(PROGRAM('/bin/sh'))
PERMIT 'SQDATA.*' ID(SQDZLOGC) ACCESS(READ) GEN

ADDUSER SQDAEMON DFLTGRP(<stc_group>) OWNER(<owner_name>)
ALTUSER SQDAEMON NOPASSWORD NOOIDCARD
ALTUSER SQDAEMON NAME('STASK, SQDATA')
ALTUSER SQDAEMON DATA('FOR SQDATA CONTACT:<sqdata_contact_name>')
ALTUSER SQDAEMON WORKATTR(WAACCNT('**NOUID**'))
CONNECT SQDAEMON GROUP(<stc_group>) OWNER(<owner_name>)
ALTUSER SQDAEMON OMVS(PROGRAM('/bin/sh'))
PERMIT 'SQDATA.*' ID(SQDAEMON) ACCESS(READ) GEN

ADDUSER <admin_user> DFLTGRP(<stc_group>) OWNER(<owner_name>)
ALTUSER <admin_user> NOPASSWORD NOOIDCARD
ALTUSER <admin_user> NAME('STASK, SQDATA')
ALTUSER <admin_user> DATA('FOR SQDATA CONTACT:<contact_name>')
ALTUSER <admin_user> WORKATTR(WAACCNT('**NOUID**'))
CONNECT <admin_user> GROUP(<stc_group>) OWNER(<owner_name>)
ALTUSER <admin_user> OMVS(PROGRAM('/bin/sh'))
ALTUSER <admin_user> OMVS(MMAPAREAMAX(262144))
PERMIT 'SQDATA.*' ID(<admin_user>) ACCESS(READ) GEN

SETROPTS GENERIC(DATASET) REFRESH
/*
//
//*--------------------------------------------------------------------
//* SETUP R/W ACCESS TO THE SQDATA ZFS FILE SYSTEM
//*
//* If the FSACCESS RACF class is not active, do not run this step.
//*
//* The FSACCESS class provides coarse-grained control to z/OS USS
//* file systems at the file system name level. It is inactive by
//* default and is not always used.
//*
//* If your RACF administrator has activated this class, and if any
//* protected file system will be accessed by a capture, publisher,
//* daemon, admin user, or other user or task, then you will need to
//* grant access to the relevant profile(s). Check with your RACF
//* administrator to determine if this is required.
//*
//* The example below shows the RACF commands to define a new profile
//* in the FSACCESS class for the DB2 CDCStore file system and grant
//* UPDATE permission to the users that will access it.
//*--------------------------------------------------------------------
//FSACCESS EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSUADS  DD DISP=SHR,DSN=SYS1.UADS
//SYSLBC   DD DISP=SHR,DSN=SYS1.BRODCAST
//SYSTSIN  DD *
SETROPTS GENERIC(FSACCESS)
RDEFINE FSACCESS SQDATA.** UACC(NONE)
PERMIT SQDATA.** CLASS(FSACCESS) ID(SQDAMAST) ACCESS(UPDATE)


PERMIT SQDATA.** CLASS(FSACCESS) ID(SQDDB2C) ACCESS(UPDATE)
PERMIT SQDATA.** CLASS(FSACCESS) ID(SQDZLOGC) ACCESS(UPDATE)
PERMIT SQDATA.** CLASS(FSACCESS) ID(SQDAEMON) ACCESS(UPDATE)
PERMIT SQDATA.** CLASS(FSACCESS) ID(<admin_user>) ACCESS(UPDATE)
SETROPTS RACLIST(FSACCESS) REFRESH
/*
//

Notes:

· The RACFZFS sample JCL includes users SQDDB2C and SQDZLOGC. These sections are only required when using the Db2 CDCStore Capture or the IMS/VSAM CDCzLog Publisher agents, respectively.

· The Db2/z Log Reader Capture avoids "landing" captured data by using memory mapped storage. While storage is not allocated until memory mapping is active, it is important to specify a value for MMAPAREAMAX using RACF that will accommodate the data space pages allocated for memory mapping of the z/OS UNIX (OMVS) files. Precisely recommends using a value of 262144 (256MB) because the default of 4096 (16MB) will likely cause the capture to fail as workload increases. The RACF ADDUSER or ALTUSER command, included in the sample RACFZFS JCL above, specifies the MMAPAREAMAX limit. You can read more about MMAPAREAMAX process limits and their relationship to MAXPMMAPAREA system limits at https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.bpxb200/maxmm.htm.

Prepare Db2/z for Capture

The Db2 Log Reader Capture requires special user privileges and preparation to access and read the Db2 Recovery Logs using Db2 Instrumentation Facility Interface (IFI) calls. Version 4 of Connect CDC SQData also requires some system tables to be captured to support Schema Evolution.

The following GRANTS are required:

1. GRANT MONITOR2 TO <sqdata_user>;

2. GRANT EXECUTE ON PLAN SQDV4000 TO <sqdata_user>;

3. GRANT SELECT ON SYSIBM.SYSTABLES TO <sqdata_user>;

4. GRANT SELECT ON SYSIBM.SYSCOLUMNS TO <sqdata_user>;

5. GRANT SELECT ON SYSIBM.SYSINDEXES TO <sqdata_user>;

6. GRANT SELECT ON SYSIBM.SYSKEYS TO <sqdata_user>;

7. GRANT SELECT ON SYSIBM.SYSTABLESPACE TO <sqdata_user>;

Db2 Reorg and Load procedures may need to be updated:

· The KEEPDICTIONARY=YES parameter must be used by all Db2 REORG and LOAD Utilities. If the CDC process runs asynchronously, gets behind for some reason, or is configured to recapture older logs, the proper Compression Dictionary must be available.

Schema Evolution Requires DATA CAPTURE CHANGES on Two (2) Catalog Tables:

1. SYSIBM.SYSTABLES

2. SYSIBM.SYSCOLUMNS

Notes:

· A common database request module (DBRM) SQDDDB2D ships as part of the product distribution, and a Bind must be performed on the SQDV4000 Package and Plan. Use the BINDSQD member in the CNTL Library to bind the Package and Plan to Db2.

· Each Db2 table to be captured also requires:

ALTER TABLE <schema.tablename> DATA CAPTURE CHANGES;

JCL similar to sample member DB2GRANT included in the distribution can be edited to conform to the operating environment, and be used to provide the appropriate Db2 user Authorizations.

//DB2GRANT JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Grant Db2 Authorizations for SQDATA Userid(s)
//*--------------------------------------------------------------------
//* Note: MONITOR2 for IFI Calls
//*       Execute on the SQDATA PLAN SQDV4000
//*       SELECT on Catalog Table SYSIBM.SYSTABLES
//*       SELECT on Catalog Table SYSIBM.SYSCOLUMNS
//*       SELECT on Catalog Table SYSIBM.SYSINDEXES
//*       SELECT on Catalog Table SYSIBM.SYSKEYS
//*       SELECT on Catalog Table SYSIBM.SYSTABLESPACE
//*--------------------------------------------------------------------
//*
//DB2GRANT EXEC PGM=IKJEFT01,DYNAMNBR=20
//STEPLIB  DD DISP=SHR,DSN=DSNC10.SDSNLOAD
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 DSN SYSTEM(DBCG)
 RUN PROGRAM(DSNTIAD) PLAN(DSNTIA11) -
     LIB('DSNC10.RUNLIB.LOAD')
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN    DD *
 GRANT MONITOR2 TO <db2_user>;
 GRANT EXECUTE ON PLAN SQDV4000 TO <db2_user>;
 GRANT SELECT ON SYSIBM.SYSTABLES TO <db2_user>;
 GRANT SELECT ON SYSIBM.SYSCOLUMNS TO <db2_user>;
 GRANT SELECT ON SYSIBM.SYSINDEXES TO <db2_user>;
 GRANT SELECT ON SYSIBM.SYSKEYS TO <db2_user>;
 GRANT SELECT ON SYSIBM.SYSTABLESPACE TO <db2_user>;

Generate z/OS Public / Private Keys and Authorized Key File

The Controller Daemon uses a Public / Private key mechanism to ensure component communications are valid and secure. A key pair must be created for the SQDaemon Job System User-ID and the User-IDs of all the Agent Jobs that interact with the Controller Daemon. On z/OS, by default, the private key is stored in SQDATA.NACL.PRIVATE and the public key in SQDATA.NACL.PUBLIC. These two files will be used by the Daemon in association with a sequential file containing a concatenated list of the Public Keys of all the Agents allowed to interact with the Controller Daemon. The Authorized Keys file must contain, at a minimum, the public key of the SQDaemon job System User-ID and is usually created with a first node matching the user name running the SQDaemon job, in our example SQDATA.NACL.AUTH.KEYS.

The file must also include the Public keys of Engines running on zOS or other platforms. The Authorized Keys file is usually maintained by an administrator using ISPF.

JCL similar to sample member NACLKEYS included in the distribution executes the SQDutil utility program using the keygen command and should be used to generate the necessary keys and create the Authorized Key List file. The JCL should be edited to conform to the operating environment, and the job must be run under the user-id that will be used when the Controller Daemon job is run.

//NACLKEYS JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Generate NACL Public/Private Keys and optionally AKL file
//*--------------------------------------------------------------------
//* Required DDNAME:
//*   SQDPUBL DD - File that will contain the generated Public Key
//*   SQDPKEY DD - File that will contain the generated private Key
//*     ** This file and its contents are not to be shared
//*
//* Required parameters:
//*   PARM - keygen  *** In lower case ***
//*   USER - The system USERID or high level qualifier of the
//*          SQDATA libraries IF all Jobs will share Private Key.
//*
//* Notes:
//* 1) This Job generates a new Public/Private Key pair, saves
//*    them to their respective files and adds the Public Key
//*    to an existing Authorized Key List, allocating a new
//*    file for that purpose if necessary.
//*
//* 2) An optional first step deletes the current set of files
//*
//* 3) Change the SET parms below for:
//*    HLQ  - high level qualifier of the CDC Libraries
//*    VER  - the 2nd level qualifier of the CDC OBJLIB & LOADLIB
//*    USER - the High Level Qualifier of the NACL Datasets
//*--------------------------------------------------------------------
//*
// SET HLQ=SQDATA
// SET VER=V400
// SET USER=&SYSUID
//*
//JOBLIB DD DISP=SHR,DSN=&HLQ..&VER..LOADLIB
//*
//*-------------------------------------------------------------------
//* Optional: Delete Old Instance of the NACL Files
//*-------------------------------------------------------------------
//*DELOLD   EXEC PGM=IEFBR14
//*SYSPRINT DD SYSOUT=*
//*OLDPUB   DD DISP=(OLD,DELETE,DELETE),DSN=&USER..NACL.PUBLIC
//*OLDPVT   DD DISP=(OLD,DELETE,DELETE),DSN=&USER..NACL.PRIVATE
//*OLDAUTH  DD DISP=(OLD,DELETE,DELETE),DSN=SQDATA.NACL.AUTH.KEYS
//*-------------------------------------------------------------------
//* Allocate Public/Private Key Files and Generate Public/Private Keys
//*-------------------------------------------------------------------
//SQDUTIL EXEC PGM=SQDUTIL
//SQDPUBL DD DSN=&USER..NACL.PUBLIC,
//           DCB=(RECFM=FB,LRECL=80,BLKSIZE=21200),
//           DISP=(,CATLG,DELETE),UNIT=SYSDA,
//           SPACE=(TRK,(1,1))
//SQDPKEY DD DSN=&USER..NACL.PRIVATE,
//           DCB=(RECFM=FB,LRECL=80,BLKSIZE=21200),
//           DISP=(,CATLG,DELETE),UNIT=SYSDA,
//           SPACE=(TRK,(1,1))
//SQDPARMS DD *
keygen
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*


//SQDLOG   DD SYSOUT=*
//*SQDLOG8 DD DUMMY
//*-------------------------------------------------------------------
//* Allocate the Authorized Key List File --> Used only by the Daemon
//*-------------------------------------------------------------------
//COPYPUB EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DISP=SHR,DSN=&USER..NACL.PUBLIC
//SYSUT2   DD DSN=SQDATA.NACL.AUTH.KEYS,
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=21200),
//            DISP=(MOD,CATLG),UNIT=SYSDA,SPACE=(TRK,(5,5))

Notes:

1. Since the Daemon, Capture Agents and zOS Apply Engines may be running in the same LPAR/system, they frequently run under the same System User-ID; in that case they would share the same public/private key pair.

2. Changes are not known to the Daemon until the configuration files are reloaded, using the SQDmon Utility, orthe sqdaemon process is stopped and started.


Setup CDCStore Storage Agent

The Db2/z Log Reader Capture utilizes the CDCStore Storage Agent to manage the transient storage of both committed and in-flight or uncommitted units-of-work using auxiliary storage. The Storage Agent must be set up before configuring the Capture Agent.

Size Transient Storage Pool

The CDCStore Storage Agent utilizes a memory mapped storage pool to speed captured change data on its way to Engines. It is designed to do so without "landing" the data after it has been mined from a database log. Configuration of the Storage Agent requires the specification of both the memory used to cache changed data and the disk storage used if not enough memory can be allocated to hold large units-of-work and other concurrent workload.

Memory is allocated in 8MB blocks with a minimum of 4 blocks allocated, or 32MB of system memory. The disk storage pool is similarly allocated in files made up of 8MB blocks. While ideally the memory allocated would be large enough to maintain the log generated by the longest running transaction AND all other transactions running concurrently, that will most certainly be impractical if not impossible.

Ultimately, there are two situations to be avoided which govern the size of the disk storage pool:

Large Units of Work - While never advisable, some batch processes may update very large amounts of data before committing the updates. Often such large units of work may be unintentional or even accidental, but they must still be accommodated. The storage pool must be able to accommodate the entire unit of work or a DEADLOCK condition will be created.

Archived Logs - Depending on workload, database logs will eventually be archived, at which point the data remains accessible to the Capture Agent but at a higher cost in terms of CPU and I/O. Under normal circumstances, captured data should be consumed by Engines in a timely fashion, making the CDCStore FULL condition one to be aware of but not necessarily concerned about. If however the cause is a stopped Engine, the duration of the outage could result in un-captured data being archived.

The environment and workload may make it impossible to allocate enough memory to cache a worst case or even the average workload; therefore we recommend two methods for sizing the storage pool based on the availability of logging information.

If detailed statistics are available:

1. Gather information to estimate the worst case log space utilization (longest running Db2 transaction AND all other Db2 transactions running concurrently) - We will refer to this number as MAX.

2. Gather information to estimate the log space consumed by an "Average size" Db2 transaction and multiply by the number of average concurrent transactions - We will refer to this number as AVG.

3. Plan to allocate disk files in your storage pool as large as the Average (AVG) concurrent transaction Log space consumed. Divide the value of AVG by 8 (number of MB in each block) - This will give you the Number-of-Blocks in a single file.

4. Divide the value of MAX by 8 (number of MB in each block) and again by the Number-of-Blocks to calculate the number of files to allocate, which we will refer to as Number-of-Files. Note, dividing the value of MAX by AVG and rounding to the nearest whole number should result in the same value for Number-of-Files.

Example:

Number-of-Blocks = AVG / 8 (MB per block)

Number-of-Files = MAX / 8 / Number-of-Blocks (which is the same as Number-of-Files = MAX / AVG)
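The arithmetic above can be sketched as a small script. This is only an illustration of the formulas in this section; the 512MB AVG and 4096MB MAX figures are hypothetical, not recommendations:

```python
# Sketch of the CDCStore disk pool sizing arithmetic described above.
BLOCK_MB = 8  # CDCStore blocks are always 8MB


def size_storage_pool(avg_mb: int, max_mb: int) -> tuple[int, int]:
    """Return (Number-of-Blocks, Number-of-Files) for the disk pool.

    avg_mb - log space of an average transaction times the number of
             average concurrent transactions (AVG).
    max_mb - worst case log space: longest running transaction plus
             all concurrently running transactions (MAX).
    """
    number_of_blocks = avg_mb // BLOCK_MB                    # blocks per file
    number_of_files = max_mb // BLOCK_MB // number_of_blocks # same as MAX / AVG
    return number_of_blocks, number_of_files


# Hypothetical figures: AVG = 512MB, MAX = 4096MB
blocks, files = size_storage_pool(512, 4096)
print(blocks, files)  # 64 blocks per file, 8 files
```

Note that the same function reproduces the no-statistics defaults in the next section: size_storage_pool(256, 2048) yields 32 blocks per file and 8 files.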


If detailed statistics are NOT available:

1. Precisely recommends using a larger number of small disk files in the storage pool and suggests beginning with 256MB files. Dividing 256MB by the 8MB block size gives the Number-of-Blocks in a single file, 32.

2. Precisely recommends allocating a total storage pool of 2GB (2048MB) as the starting point. Divide that number by 256MB to calculate the Number-of-Files required to hold 2GB of active LOG, which would be 8.

Example:

Number-of-Blocks = 256MB / 8MB = 32

Number-of-Files = 2048MB / 256MB = 8

Use these values to configure the CDCStore Storage Agent in the next section.
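As a quick arithmetic check of those defaults (not product code), 8 files of 32 blocks of 8MB each do add up to the recommended 2GB starting pool:

```python
# Verify the no-statistics defaults: 8 files x 32 blocks x 8MB = 2048MB.
BLOCK_MB = 8           # CDCStore blocks are always 8MB
number_of_blocks = 32  # blocks per file (256MB / 8MB)
number_of_files = 8    # 2048MB / 256MB

pool_mb = number_of_files * number_of_blocks * BLOCK_MB
print(pool_mb)  # 2048
```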

Notes:

1. Remember that it is possible to adjust these values once experience has been gained and performance observed. See the section "Display Storage Agent Statistics" in the Operations section below.

2. Think of the value for Number-of-Files as a file Extent, in that another file will be allocated only if the MEMORY cache is full and all of the Blocks (Number-of-Blocks) in the first file have been used, and none are released before additional 8MB Blocks are required to accommodate an existing incomplete unit of work or other concurrent units of work.

3. While Number-of-Blocks and Number-of-Files can be dynamically adjusted, they will apply only to newly allocated files. It will be necessary to stop and restart the Storage Agent for changes to MEMORY.

4. Multiple Directories can also be allocated, but this is only practical if the File system itself fills and a second directory becomes necessary.

Apply Frequency

The size of the transient storage area is also affected by the frequency with which changed data is applied to a target. For example, changes from a Db2 source to an Oracle target may only need to be applied once a day.

In this example the transient storage could be sized large enough to store all of the changed data accumulated during the one-day period. Often however, the estimated size will prove to be inadequate. When that happens the capture will eventually stop mining the Db2 Log and wait for an Engine to connect and Publishing to resume. When the Capture does finally request the next log record from Db2, the required Db2 Archive Logs may have become inaccessible. This would occur if the wait period was long enough or the volume of data changing large enough that the Archive Log retention period was too short.

Best practices for Db2 Archive Log retention will normally ensure that the Archive Logs are accessible. In some environments however this can become an issue. Precisely recommends analysis of the total Db2 workload in all cases because, even though only a fraction of all the existing tables may be configured for capture, the Db2/z Log Reader capture potentially requires access to every log Archived since the last Apply cycle.

Precisely recommends switching to a streaming Apply model for your target, or raising the Apply frequency as high as practical, when capturing rapidly changing tables in high volume Db2 environments, especially if space is an issue.


Create zFS Transient Data Filesystem

Once you have estimated the potential size of the transient data storage pool you must create a dedicated filesystem. That filesystem will then be assigned a mount point that will be referenced in the CDCStore configuration (.cab) file. As discussed previously, this filesystem need not be large, but it is critical that it be dedicated to this purpose because the Storage agent will create and remove files as needed based on the definition of the storage pool. It will expect the space it has been told it can use to be available when needed and will terminate the capture if it is not. Allocate double the amount of space you have estimated will be required. That will allow you to adjust the number of blocks and files in your configuration without having to add an additional mount point and filesystem.

JCL similar to the following sample member ALLOCZFS included in the distribution should be used to create the Transient Data Filesystem for the Db2 z/OS Capture Agent and mount it at the directory created previously, /home/sqdata/db2cdc/data. The JCL should be edited to conform to the operating environment.

//ALLOCZFS JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Allocate Transient Storage Filesystem for DB2 z/OS Capture Agent
//*--------------------------------------------------------------------
//* Required parameters (set below):
//*   ZFS - the name of the VSAM cluster aggregate
//*   MB  - the number of megabytes to allocate to the cluster
//*   DIR - the directory name of the ZFS mountpoint
//*         which was previously created using member ALLOCZDR
//*   SMS - the SMS class to be assigned to the cluster
//*   VOL - the DASD volume(s) used for cluster allocation
//*
//* Notes:
//* 1) You must be a UID(0) User to Run this Job
//*
//* 2) This job contains six (6) steps as follows:
//*    - Unmount the existing file system - optional
//*    - Define the ZFS filesystem VSAM cluster aggregate
//*    - Format the ZFS filesystem aggregate
//*    - Format the ZFS filesystem aggregate
//*    - Create the mountpoint directory
//*    - Mount the ZFS filesystem
//*--------------------------------------------------------------------
//*
// EXPORT SYMLIST=(ZFS,MB,DIR,SMS,VOL)
// SET ZFS=SQDATA.TESTZFS
// SET MB=2049
// SET DIR='/home/sqdata/db2cdc/data'
// SET SMS=DBCLASS
// SET VOL=WRK101
//*
//*------------------------------------------------------------------
//* Optional - Unmount the Existing File System
//*------------------------------------------------------------------
//*UNMOUNT  EXEC PGM=IKJEFT01,DYNAMNBR=75,REGION=8M
//*SYSPRINT DD SYSOUT=*
//*SYSTSPRT DD SYSOUT=*
//*SYSTERM  DD DUMMY
//*SYSUADS  DD DSN=SYS1.UADS,DISP=SHR
//*SYSLBC   DD DSN=SYS1.BRODCAST,DISP=SHR
//*SYSTSIN  DD *,SYMBOLS=JCLONLY
//* UNMOUNT FILESYSTEM('&ZFS')
/*


//*------------------------------------------------------------------
//* Define the ZFS filesystem VSAM cluster aggregate
//*------------------------------------------------------------------
//DEFINE EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SYSIN    DD *,SYMBOLS=JCLONLY
 DELETE &ZFS CLUSTER
 SET MAXCC=0
 DEFINE CLUSTER (NAME(&ZFS) -
   VOLUME(&VOL) -
   STORCLAS(&SMS) -
   LINEAR -
   MB(&MB 0) -
   SHAREOPTIONS(3))
/*
//*------------------------------------------------------------------
//* Format the ZFS Filesystem Aggregate
//*------------------------------------------------------------------
//FORMAT EXEC PGM=IOEAGFMT,REGION=0M,PARM=('-aggregate &ZFS -compat')
//SYSPRINT DD SYSOUT=*
//STDOUT   DD SYSOUT=*
//STDERR   DD SYSOUT=*
/*
//*--------------------------------------------------------------------
//* Create the Mountpoint Directory
//*--------------------------------------------------------------------
//CREATDIR EXEC PGM=IKJEFT01,REGION=64M,DYNAMNBR=99,COND=(0,LT)
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *,SYMBOLS=JCLONLY
 PROFILE MSGID WTPMSG
 MKDIR '&DIR' MODE(7,7,5)
/*
//*------------------------------------------------------------------
//* Mount the ZFS Filesystem
//*------------------------------------------------------------------
//MOUNT EXEC PGM=IKJEFT01,DYNAMNBR=75,REGION=8M,COND=(0,LT)
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTERM  DD DUMMY
//SYSUADS  DD DSN=SYS1.UADS,DISP=SHR
//SYSLBC   DD DSN=SYS1.BRODCAST,DISP=SHR
//SYSTSIN  DD *,SYMBOLS=JCLONLY
 MOUNT FILESYSTEM('&ZFS') -
   MOUNTPOINT('&DIR') -
   TYPE(ZFS) -
   MODE(RDWR)
/*

Create z/OS CDCStore CAB file

The CDCStore Storage Agent configuration (.cab) file is a binary file created and maintained by the SQDconf utility. While this section focuses primarily on the initial configuration of the Storage agent, sequences of SQDconf commands to create and configure the storage agent should be saved in a shell script or a zOS PARMLIB member, both for migration to other operating environments and for recovery.

The SQDconf create command will be used to prepare the initial CDCStore configuration (.cab) file for the CDCStore Storage agent used by the Db2 zOS, UDB (DB2/LUW) and Oracle Capture agents:

Syntax

sqdconf create <cab_file_name>


--type=store
--alias=cdcstore
[--number-of-blocks=<blocks_per_file>]
[--number-of-logfiles=<number_of_files>]
[--memory=<nnn[K|M|G]>]
--data-path=<directory_name>

Keyword and Parameter Descriptions

<cab_file_name> - Path and name of the Storage Agent Configuration (.cab) file. The directory must exist and the user-id associated with the agent must have the right to create and delete files in that directory. There is a one to one relationship between the CDCStore Storage Agent and Capture Agent. Precisely recommends including the Capture Agent alias as the third node in the directory structure and first node of the file name, for example, /home/sqdata/db2cdc/db2cdc_store.cab. In a Windows environment .cfg may be substituted for .cab, since .cab files have special meaning in Windows.

--type=store - Agent type must be "store" for the Storage Agent.

--alias=cdcstore - The alias assigned to the Storage Agent, cdcstore in this example.

[--number-of-blocks=<blocks_per_file> | -b <blocks_per_file>] - The number of 8MB blocks allocated for each file defined for transient CDC storage. The default value is 32.

[--number-of-logfiles=<number_of_files> | -n <number_of_files>] - The number of files that can be allocated in a data-path. Files are allocated as needed, one full file at a time, during storage agent operation. CDCStore recycles blocks when possible before new storage is allocated. The default value is 8.

[--memory=nnn[KMG]] - CDCStore transient data cache memory. The default is the minimum 8M, with the total cache in megabytes calculated as ((((number of targets + 1) * 2) + 1) * 8M), i.e. one subscribing Engine will consume ((((1 + 1) * 2) + 1) * 8M) = 40 megabytes. Precisely recommends using this default based on the experience of many customers with a wide range of workload and transaction size. The maximum number of blocks (each block is always 8MB) can be displayed using Log-Level 5 and, based on the 40 megabyte example, will be 40M/8M = 5. Please contact Precisely support at https://www.precisely.com/support before specifying this parameter.

--data-path=<directory_name> - The path and directory name where the storage agent will create transient storage files. The directory must exist and the user-id associated with the agent must have the right to create and delete files in that directory. In our example, /home/sqdata/db2cdc/data.
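The default cache formula documented for --memory can be expressed directly. This is a sketch of the documented arithmetic only, not product code:

```python
# Default CDCStore memory cache, per the documented formula
# ((((number of targets + 1) * 2) + 1) * 8M).
BLOCK_MB = 8  # blocks are always 8MB


def default_cache_mb(number_of_targets: int) -> int:
    """Total default cache in megabytes for a given number of targets."""
    return (((number_of_targets + 1) * 2) + 1) * BLOCK_MB


print(default_cache_mb(1))             # one subscribing Engine -> 40 (MB)
print(default_cache_mb(1) // BLOCK_MB) # maximum number of blocks -> 5
```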

Example

Create the Connect CDC SQData CDCStore Storage Agent configuration for a Db2 zOS capture using JCL similar to sample member SQDCONDS included in the distribution, with the appropriate storage pool parameters. While those parameters are included in-line below, we recommend that they be maintained separately in a file referenced by the SQDPARMS DD:

//SQDCONDS JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Create CAB File for DB2 CDCStore Storage Agent
//*--------------------------------------------------------------------
//* Note: 1) Parameter Keywords must be entered in lower case
//*
//*       2) Parameters Values are Case Sensitive.


//*
//*       3) The transient storage directory(s) should be sized
//*          to hold the largest anticipated unit-of-work in
//*          addition to any concurrent inflight transactions
//*
//* Steps: 1) (optional) delete the existing Storage CAB File
//*        2) Create a new Storage CAB File
//*        3) Display the contents of the new CAB File
//*
//*********************************************************************
//*
//JOBLIB DD DISP=SHR,DSN=SQDATA.V400.LOADLIB
//*
//*-------------------------------------------
//* Optional - Delete existing CAB File
//*-------------------------------------------
//*DELETE   EXEC PGM=IEFBR14
//*SYSPRINT DD SYSOUT=*
//*CDCSTORE DD PATHDISP=(DELETE,DELETE),
//*   PATH='/home/sqdata/db2cdc/db2cdc_store.cab'
//*
//*-------------------------------------------
//* Create a New Capture CDCStore CAB File
//*-------------------------------------------
//CRSTORE EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
 create /home/sqdata/db2cdc/db2cdc_store.cab --type=store --alias=cdcstore --number-of-blocks=32 --number-of-logfiles=8 --data-path=/home/sqdata/db2cdc/data
//*
//*----------------------------------------------
//* Display the Contents of the CDCStore CAB File
//*----------------------------------------------
//DISPLAY EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
 display /home/sqdata/db2cdc/db2cdc_store.cab --details
/*

Notes:

1. See the SQDconf Utility Reference for more details.

2. The SQDconf create command defines the .cab file name and the location and size of the transient data store. Once created, this command should never be run again unless the storage agent is being recreated.

3. Unlike the Capture/Publisher configuration files, changes to the CDCStore configuration file take effect immediately and do not require the usual stop/apply/start sequence.

4. The Directory path references, in the example /home/sqdata/<type>cdc/, can be modified to conform to the operating environment but must match the Connect CDC SQData Variable Directory created in the Prepare Environment Section for the Capture.


5. The configuration files must be assigned the proper OMVS permissions. The Job/Task that reads or updates the configuration files must have permissions that permit the function. The userid associated with the Job/Task SHOULD match the ownerid of the unix files and directories that the program reads/writes to. If it doesn't, it would have to gain access via group or world permissions, or by virtue of being uid 0. The recommended OMVS permission is 664 (RW owner, RW group, R everyone else).

The permissions may be set directly in OMVS or by a final job step when configuration files are created, which might look like this:

//*--------------------------------------------------------------------
//* Optional: Change permissions of newly created files
//*--------------------------------------------------------------------
//CHMOD EXEC PGM=IKJEFT01,REGION=64M,DYNAMNBR=99,COND=(0,LT)
//SYSEXEC  DD DISP=SHR,DSN=SYS1.SBPXEXE
//SYSTSPRT DD SYSOUT=*
//OSHOUT1  DD SYSOUT=*
//SYSTSIN  DD *
OSHELL chmod 775 +
/home/sqdata/daemon/cfg/sqdagents.cfg
OSHELL chmod 775 +
/home/sqdata/daemon/cfg/acl.cfg


Setup Log Reader Capture Agent

The Db2/z Log Reader Capture performs three functions: capturing changed data by mining the Db2 Log, managing the captured data, and publishing committed data directly to Engines using TCP/IP. The Publishing function manages the captured data until it has been transmitted and consumed by Engines, ensuring that captured data is not lost until the Engines, which may operate on other platforms, signal that data has been applied to their target datastores.

Setup and configuration of the Capture Agent include:

· Configure Db2 tables for capture

· Create Db2/z Capture Agent CAB file

· Encryption of Published Data

· Prepare Db2/z Capture JCL

Configure Db2 Tables for Capture

In order for the Db2/z Capture Agent to be able to extract the changed data for Db2 tables from the recovery log, the source Db2 table must be altered to allow for change data capture.

Syntax

ALTER TABLE <schema.tablename> DATA CAPTURE CHANGES;

Keyword and Parameter Descriptions

<schema.tablename> is the fully qualified name of the source table for which changes are to be captured.

Note, Enabling change data capture will increase the amount of data written to the Db2 recovery log for each update to the source data table. Depending on the size of the tables and the volume of updates made to the table, the size of the active Db2 logs may have to be adjusted to accommodate the increased data.
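Whether capture is already enabled for a table can be confirmed in the Db2 catalog: the DATACAPTURE column of SYSIBM.SYSTABLES is set to 'Y' once DATA CAPTURE CHANGES is in effect. A sketch, assuming the SQDATA.EMP and SQDATA.DEPT IVP tables used in the examples later in this chapter:

```sql
-- Enable capture for the IVP source tables
ALTER TABLE SQDATA.EMP  DATA CAPTURE CHANGES;
ALTER TABLE SQDATA.DEPT DATA CAPTURE CHANGES;

-- Verify: DATACAPTURE = 'Y' indicates DATA CAPTURE CHANGES is active
SELECT CREATOR, NAME, DATACAPTURE
  FROM SYSIBM.SYSTABLES
 WHERE CREATOR = 'SQDATA'
   AND NAME IN ('EMP', 'DEPT');
```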

Create Db2/z Capture CAB File

The Db2/z Log Reader Capture Agent configuration (.cab) file is created and maintained by the sqdconf utility using JCL similar to sample member SQDCONDC included in the distribution. While this section focuses primarily on the initial configuration of the Capture Agent, sequences of SQDCONF commands to create and configure the capture agent can and should be stored in parameter files in case they need to be recreated. See the SQDCONF Utility Reference for a full explanation of each command, their respective parameters and the utility's operational considerations.

Syntax

sqdconf create <cab_file_name>
        --type=DB2
        [--ssid=<value>]
        [--single-member]
        [--ccsid=<coded_character_set_identifier>]
        [--plan=<sqdata_plan>]
        [--exclude-plan=<value>]
        [--auto-exclude-plan=<y or n>]
        [--exclude-user=<user_id>]
        [--exclude-correlation-id=<value>]
        [--encryption | --no-encryption]
        [--auth-keys-list="<name>"]
        --store=<store_cab_file_name>

Keyword and Parameter Descriptions


[--ssid=<value>] - Db2 z/OS only. Subsystem ID or the Db2/z Data Sharing Group or Member Name. See Note below regarding Data Sharing Environments.

[--single-member] - Db2 z/OS only. Consultation with Precisely support is highly recommended; see Note below regarding Data Sharing Environments.

--ccsid=<coded_character_set_identifier> - The coded character set identifier or code page number of the Db2 Subsystem; the default is 1047.

[--plan=<sqdata_plan>] - Plan name used to connect to the Db2 subsystem. The default Plan is named SQDV4000 and need not be explicitly specified. This is an optional parameter that can be used to specify another Plan as needed.

[--exclude-plan=<name>] - Exclude transactions associated with the given plan name from capture. This parameter can be repeated multiple times.

[--auto-exclude-plan=<y | n>] - Optionally exclude from capture data that has been updated by an Apply Engine running under Connect CDC SQData's default Db2 Plan, SQDV4000. Default is No (n).

[--exclude-user=<user_id>] - Rarely used; excludes database updates made by the specified User.

[--exclude-correlation-id=<value>] - Exclude transactions with the given correlation id value from capture. This parameter can be repeated multiple times.

[--encryption | --no-encryption] - Enables or disables NaCl encryption of the published CDC record payload. See Encryption of Published Data for more details. Precisely recommends zIIP processors be used to enhance CPU cycle efficiency and reduce the CPU cost associated with NaCl software encryption.

[--auth-keys-list="<name>"] - Required for NaCl software encrypted CDC record payload. The file name must be enclosed in quotes and must contain the public key(s) of only the subscribing Engines requiring encryption of the CDC record payload. See the --encryption option.

--store=<store_cab_file_name> - Path and name of the Storage Agent Configuration (.cab) file. In our example, /home/sqdata/db2cdc/db2cdc_store.cab

Note, For capture in a data sharing environment, Precisely recommends selection of one data sharing group, preferably on the same LPAR running the Db2/z Log Reader Capture.

· Capture against multiple members in a data sharing group will result in capturing the same data multiple times.

· Controlling which member is used facilitates failover, rather than allowing Db2 to select the member name to capture from.

Note, The --single-member
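Putting the options above together, a complete create invocation might look like the following sketch. The subsystem ID (DBBG) and paths match the example later in this section, while the excluded plan name HSKPLAN is a hypothetical illustration:

```
sqdconf create /home/sqdata/db2cdc/db2cdc.cab --type=db2 --ssid=DBBG --exclude-plan=HSKPLAN --auto-exclude-plan=y --store=/home/sqdata/db2cdc/db2cdc_store.cab
```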

Next, the configuration file must be updated by adding an entry for each table to be captured, using the add command. Only one table and one associated datastore (target subscription) can be added at a time. Precisely highly recommends keeping a Recover/Recreate configuration file Job or shell script available should circumstances require recovery.

Add each source table to the list of source tables to be captured in the Capture Configuration (.cab) file. Each table is identified by its name and schema. A datastore representing a single target subscription must be specified for each table added. Additional target subscriptions can be "added", but that is performed with the modify command. It is important to develop a standard for how Datastores will be identified, particularly if a large number will be defined. The source is marked inactive by default but remember, even sources marked active will not be captured until changes to the configuration file are applied.


Syntax

sqdconf add <cab_file_name>
        --schema=<name>
        --table=<name> | --key=<name>
        --datastore=<url>
        [--active | --inactive]
        [--pending]

Keyword and Parameter Descriptions

<cab_file_name> - Must be specified and must match the name specified in the previous create command.

--schema=<name> - Schema name, owner, or qualifier of a table. Different databases use different semantics, but a table is usually uniquely identified as S.T, where S is referenced here as schema. This parameter cannot be specified with --key.

--table=<name> - A qualified table name in the form of schema.name that identifies the source. This may be used in place of the two parameters --schema and --table. Both cannot be specified.

--key=<name> - Same as --table.

--datastore=<url> | -d <url> - While most references to the term datastore describe physical entities, a datastore URL represents a target subscription and takes the form cdc://[<host_name>]/[<agent_alias>]/<subscriber_name> where:

· <host_name> - Optional, typically omitted with only a / placeholder. If specified, it must match the [<localhost_name> | <localhost_IP>] of the server side of the socket connection.

· <agent_alias> - Optional, typically omitted with only a / placeholder. If specified, it must match the <capture_agent_alias> or <publisher_agent_alias> assigned to the Capture/Publisher agent in the Controller Daemon sqdagents.cfg configuration file.

· <subscriber_name> - The name presented by a requesting target agent. Also referred to as the Engine name. Connection requests by Engines or the sqdutil utility must specify a valid <subscriber_name> in their cdc://<host_name>/<agent_alias>/<subscriber_name> connection url.

[--active | --inactive] - Mark a table as active or inactive for capture. The table will remain in the current state until the capture is stopped, applied and re-started. The default is --inactive.

[--pending] - This parameter allows a table to be added to the configuration before it exists in the database catalog.
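A quick shape check of a datastore URL before it goes into an add command can be sketched in POSIX shell. The pattern is an assumption derived from the cdc://[<host_name>]/[<agent_alias>]/<subscriber_name> form described above; the authoritative validation is performed by sqdconf itself:

```shell
# Minimal sketch: accept URLs of the form cdc://[host]/[alias]/subscriber,
# where the host and alias segments may be empty (as in cdc:////DB2TODB2).
valid_datastore_url() {
  case "$1" in
    cdc://*/*/*) return 0 ;;
    *)           return 1 ;;
  esac
}

valid_datastore_url "cdc:////DB2TODB2"              && echo "ok"
valid_datastore_url "cdc://zoshost/DB2CDC/DB2TODB2" && echo "ok"
valid_datastore_url "DB2TODB2"                      || echo "rejected"
```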

Example

Create a Capture configuration for the Db2 IVP tables and display the current content of the configuration file:

//SQDCONDC JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Create CAB File for the Db2 Log Reader Capture Agent
//*--------------------------------------------------------------------
//* Note: 1) Parameter Keywords must be entered in lower case
//*
//*       2) Parameters Values are Case Sensitive.
//*
//*       3) Engine Name should be in Upper Case for z/OS JCL
//*
//* Steps: 1) (optional) delete the existing Capture CAB File
//*        2) Create a new Capture CAB File
//*        3) Add Tables to the new capture CAB File


//*        4) Display the contents of the new CAB File
//*
//*********************************************************************
//*
//JOBLIB   DD DISP=SHR,DSN=SQDATA.V400.LOADLIB
//*
//*-------------------------------------------
//* Optional - Delete existing CAB File
//*-------------------------------------------
//*DELETE   EXEC PGM=IEFBR14
//*SYSPRINT DD SYSOUT=*
//*CONFFILE DD PATHDISP=(DELETE,DELETE),
//*          PATH='/home/sqdata/db2cdc/db2cdc.cab'
//*
//*-----------------------------------------------
//* Create Db2 Capture Configuration CAB File
//*-----------------------------------------------
//CRCONF   EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
create /home/sqdata/db2cdc/db2cdc.cab --type=db2 --ssid=DBBG --store=/home/sqdata/db2cdc/db2cdc_store.cab
//*
//*--------------------------------------------------------------------
//* Add Tables to the Capture CAB File
//*--------------------------------------------------------------------
//* Modify to specify the Table(s) to be Captured initially.
//* Tables can be added later using a modified version of this Job
//* or using the SQDATA ISPF panel interface
//*--------------------------------------------------------------------
//*
//*-----------------------------------------------------

//* Publish Table SQDATA.EMP to Subscription DB2TODB2
//*-----------------------------------------------------
//ADDEMP   EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
add /home/sqdata/db2cdc/db2cdc.cab --table=SQDATA.EMP --datastore=cdc:////DB2TODB2 --active
//*
//*-----------------------------------------------------
//* Publish Table SQDATA.DEPT to Subscription DB2TODB2
//*-----------------------------------------------------
//ADDDEPT  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
add /home/sqdata/db2cdc/db2cdc.cab --table=SQDATA.DEPT --datastore=cdc:////DB2TODB2 --active
//*
//*-------------------------------------------
//* Display configuration file


//*-------------------------------------------
//DISPLAY  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
display /home/sqdata/db2cdc/db2cdc.cab
//*
//

Notes:

1. The sqdconf create command defines the location of the Capture agent's configuration file. Once created, this command should never be run again unless you want to destroy and recreate the Capture agent.

2. Destroying the Capture agent cab file means that the current position in the log and the relative location of each engine's position in the Log will be lost. When the Capture agent is brought back up, it will start from the beginning of the oldest active log and will resend everything. After initial configuration, changes in the form of add and modify commands should be used instead of the create command. Note: You cannot delete a cab file if the Capture is mounted, and a create on an existing configuration file will fail.

3. There must be a separate ADD step in the Job for every source table to be captured.

4. The Job will fail if the same table is added more than one time for the same Target Datastore/Engine. See section below "Adding/Removing Output Datastores".

5. The <subscriber_name> is case sensitive in that all references should be either upper or lower case. Because references to the "Engine" in z/OS JCL must be upper case, references to the Engine in these examples are all in upper case for consistency.

6. The display step, when run against an active configuration (.cab) file, will include additional information, including:

· The current status of the table (i.e. active, inactive)

· The starting and current point in the log where data has been captured

· The number of inserts, updates and deletes for the session (i.e. the duration of the capture agent run)

· The number of inserts, updates and deletes since the creation of the configuration file

Encryption of Published Data

Precisely highly recommends the use of VPN or SSH Tunnel connections between systems, both to simplify their administration and because the CPU intensive encryption task can be performed by dedicated network hardware.

In the event that encryption is required and a VPN or SSH Tunnel cannot be used, Connect CDC SQData provides other options:

· NaCl Encryption

· TLS based Encryption between z/OS systems and Linux (ONLY).


Implement NaCl Encryption

Connect CDC SQData provides for encryption by the Publisher using the same NaCl Public / Private Key used for authentication and authorization. While Captures and Publishers are typically initiated by the same USER_ID as the Capture Controller Daemon, those jobs explicitly identify the public / private key pair files in JCL DD statements. Precisely recommends that a second NaCl Key pair is generated for the Capture / Publisher. A second authorized Key List will also be required by the Capture / Publisher, containing the public keys for only those Engines subscribing to that Capture / Publisher and whose payload will be encrypted. Once the Controller Daemon passes the connection request to the Capture / Publisher, a second handshake will be performed with the Engine and the CDC payload will be encrypted before being published and decrypted by the receiving Engine.

Syntax

sqdconf create <cab_file_name>
        [--encryption | --no-encryption]
        [--auth-keys-list="<name>"]

Keyword and Parameter Descriptions

<cab_file_name> - This is where the Capture Agent configuration file, including its path, is first created. There is only one CAB file per Capture Agent. In our example, /home/sqdata/db2cdc/db2cdc.cab

[--encryption | --no-encryption] - Enables or disables NaCl encryption of the published CDC record payload. See Encryption of Published Data for more details. Precisely recommends zIIP processors be used to enhance CPU cycle efficiency and reduce the CPU cost associated with NaCl software encryption.

[--auth-keys-list="<name>"] - Required for NaCl software encrypted CDC record payload. The file name must be enclosed in quotes and must contain the public key(s) of only the subscribing Engines requiring encryption of the CDC record payload. See the --encryption option.

Example 1

Turn on encryption

//*-----------------------------------------------
//* Turn on Encryption for DB2 Capture
//*-----------------------------------------------
//MODCONF  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
modify /home/sqdata/db2cdc/db2cdc.cab --encryption --auth-keys-list="NACL.AUTH.KEYS"
//*

Next, stop and restart the DB2 Capture Agent.

Example 2

Turn off encryption

//*-----------------------------------------------
//* Turn off Encryption for DB2 Capture
//*-----------------------------------------------
//MODCONF  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *


modify /home/sqdata/db2cdc/db2cdc.cab --no-encryption
//*

Finally, stop and restart the DB2 Capture Agent.

Note, Precisely recommends zIIP processors be used to enhance CPU cycle efficiency and reduce the CPU cost associated with NaCl software encryption. Enabling zIIP processing requires one additional option when starting the Capture / Publisher:

1. Stop the DB2 capture agent.

2. Restart the agent and include the --ziip option, as follows: --apply --start --ziip /home/sqdata/db2cdc/db2cdc.cab
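In the capture JCL (sample member SQDDB2C, shown under Prepare Db2/z Capture JCL), this amounts to a one-line change to the SQDPARMS input stream. A sketch, using the sample path from this chapter:

```
//SQDPARMS DD *
--apply --start --ziip /home/sqdata/db2cdc/db2cdc.cab
/*
```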

Implement TLS Support

Transport Layer Security (TLS) is supported between all components on z/OS, and between Connect CDC SQData clients on Linux and Change Data Capture components on z/OS only.

z/OS TLS Capture and Apply Engines

Connect CDC SQData already operates transparently on z/OS under IBM's Application Transparent Transport Layer Security (AT-TLS). Under AT-TLS, no changes were required to the base code and only the port numbers in the configuration need to be changed, as described below. For more information regarding AT-TLS, see your z/OS Systems Programmer.

Once IBM's AT-TLS has been implemented on z/OS, the following steps are all that are required for the Daemon, Capture and Publisher components, and on z/OS only, the SQDATA Apply Engine and SQDutil, to be in compliance with TLS:

1. Request the new secure port to be used by the Daemon

2. Request Certificates for MASTER, Daemon and APPLY Engine Tasks

3. Stop all SQDATA tasks

4. Update APPLY Engine source scripts with the new Daemon port. Note, ports are typically implemented using a Parser parameter, so script changes may not be required.

5. Update SQDUTIL JCL and/or SQDPARM lib members, if any, with the new Daemon port.

6. Run Parse Jobs to update the Parsed Apply Engines in the applicable library referenced by Apply Engine Jobs.

7. Update the Daemon tasks with new port

8. If using the z/OS Master Controller, update the SQDPARM Lib members for the MASTER tasks with the new Daemon port

9. Start all the SQDATA tasks

Note, there are no changes to connection URLs for clients on z/OS.

Linux TLS Apply and Replicate Engines

Linux clients connecting to z/OS Daemons running under IBM's AT-TLS and servicing z/OS Change Data Captures now support the TLS handshake. TLS connections to Change Data Capture components running on AIX and Linux are not supported at this time.


The only external prerequisite to enabling TLS on Linux is the GnuTLS secure communications library, which implements TLS, DTLS and SSL protocols and technologies, including the C language API used by Connect CDC SQData on Linux. On RPM-based Linux distributions, YUM (Yellowdog Updater Modified) can be used to install GnuTLS. For more information regarding YUM or other Package Managers, see your Linux Systems Administrator.

Linux clients making TLS connections to z/OS will by default perform the "typical TLS handshake", where the client uses the server's certificate for authentication and then proceeds with the rest of the handshake process. Specific changes to connection parameters are described below.

The following steps are all that are required on the client side to implement TLS on Linux for the "typical" client side handshake performed by an Engine:

1. Request the new Port number that was assigned to the z/OS Daemon.

2. Stop all running Connect CDC SQData Linux Engines; the local Daemon need not be stopped.

3. Update Engine source DATASTORE URL to use the "cdcs://" URL syntax type to specify that a secure TLS connection is requested (changed from "cdc://" to "cdcs://").

4. Update Engine source DATASTORE URL to use the TLS z/OS Daemon port. Note, the port number is typically implemented using a Parser parameter, so script changes may not be required.

5. Parse the Apply Jobs to create a new <engine>.prc file in the applicable directory.

6. Start the Connect CDC SQData Linux clients.
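For steps 3 and 4 above, the DATASTORE URL change is a one-token edit. The host name and port numbers below are hypothetical illustrations, not values from this document:

```
before:  cdc://zoshost:2626/DB2CDC/DB2TODB2     (clear-text connection)
after:   cdcs://zoshost:3626/DB2CDC/DB2TODB2    (TLS connection via the AT-TLS secured port)
```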

Notes:

1. If the Linux default package manager was used to install the GnuTLS library, we would expect it to be placed in either /lib64 or /usr/lib64. Two softlinks should have been created and libgnutls.so included in the "default library path", for example:

lrwxrwxrwx 1 root root 20 Jul  8  2020 libgnutls.so -> libgnutls.so.28
lrwxrwxrwx 1 root root 20 Sep 11  2019 libgnutls.so.28 -> libgnutls.so.28.43.3

If it is not in the default path, we will be unable to locate the library. A special environmental variable can be used, but Precisely only recommends doing so in a test environment:

SQDATA_GNUTLS_LIBRARY=<path_to_softlink/>libgnutls.so

2. If the SQDmon utility is used to connect to a remote z/OS Daemon running under IBM's AT-TLS, for example to request an "inventory" or "display" the status of a publisher, a new --tls parameter must be specified:

Syntax: sqdmon inventory //<host_name> [-s port_num | --service=port_num] [--identity=<path_to/nacl_id>] [--tls]

3. If the SQDutil is used to connect to a remote Publisher running under IBM's AT-TLS, to copy/move CDC records to a file, the "cdcs://" URL syntax type must be specified:

Syntax: sqdutil copy | move cdcs://<host_name>:<port_num>/<agent_name> <target_url> | DD:<dd_name>

4. Although uncommon, if yours is a Mutual Auth (aka Mutual Authentication) implementation, which also includes authentication of the client by the server, then two environmental variables must be used to identify the client certificate and key. The server will then use the client side certificate to authenticate the client before proceeding with the rest of the handshake.

SQDATA_TLS_CERT=</directory/file_name>
SQDATA_TLS_KEY=</directory/file_name>

The Linux client will by default use the system installed Certificate Authority (CA). If a local CA file is used, it must be specified using a third Environmental variable:

SQDATA_TLS_CA=</directory/file_name>
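Taken together, a Mutual Auth Linux client might export all three variables before the Engine is started. The paths below are illustrative assumptions only:

```shell
# Hypothetical certificate/key locations; substitute site-specific paths.
export SQDATA_TLS_CERT=/home/sqdata/tls/client.crt
export SQDATA_TLS_KEY=/home/sqdata/tls/client.key
# Only needed when a local CA file is used instead of the system CA.
export SQDATA_TLS_CA=/home/sqdata/tls/local_ca.pem
```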

Data Sharing Environments

For capture in a data sharing environment, Precisely recommends selection of one data sharing group, preferably on the same LPAR running the Db2/z Log Reader Capture.

· Capture against multiple members in a data sharing group will result in capturing the same data multiple times.

· Controlling which member is used facilitates failover, rather than allowing Db2 to select the member name to capture from.

Note, The --single-member

Prepare Db2/z Capture JCL

Once the DB2 Capture configuration (.cab) file has been created, JCL similar to sample member SQDDB2C included in the distribution is used to Mount and optionally Start the DB2 Capture Agent process.

Example

//SQDDB2C JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Execute (Mount) DB2 CDCStore Capture Agent - SQDDB2C
//*--------------------------------------------------------------------
//* Required parameters (lower case):
//*   config_file - Specifies the fully qualified name of the
//*                 predefined DB2 capture agent configuration
//*                 file (see sample JCL SQDCONDC)
//*
//* Optional parameters (lower case):
//*   --apply - Specifies that ALL pending changes to the config (.cab)
//*             file should be Applied before the Capture Agent is
//*             Started
//*             ** NOTE - This is normally NOT used in this job
//*             ** REMOVE if the status of pending changes is NOT
//*             ** known. Instead use SQDCONF to apply changes
//*
//*   --start - Instructs the Capture Agent to Start processing
//*             ** NOTE - This is often used in this job but can
//*             ** be performed by a separate SQDCONF command
//*
//*   --ziip  - Instructs the Capture to utilize zIIP engines for
//*             encryption (if specified in the Capture Agent
//*             CAB file)
//*
//* Note: 1) The Relational CDCStore Capture Agents include
//*          a second Publisher thread that manages Engine
//*          subscriptions
//*--------------------------------------------------------------------


//*
//JOBLIB   DD DISP=SHR,DSN=SQDATA.V400.LOADLIB
//         DD DISP=SHR,DSN=CSQ901.SCSQAUTH
//         DD DISP=SHR,DSN=CSQ901.SCSQANLE
//         DD DISP=SHR,DSN=DSNC10.SDSNLOAD
//*
//SQDDB2C  EXEC PGM=SQDDB2C,REGION=0M
//*SQDDB2C EXEC PGM=XQDDB2C,REGION=0M
//SQDPUBL  DD DSN=SQDATA.NACL.PUBLIC,DISP=SHR
//SQDPKEY  DD DSN=SQDATA.NACL.PRIVATE,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//CEEDUMP  DD SYSOUT=*
//*SQDLOG8 DD DUMMY
//SQDLOG   DD SYSOUT=*
//*SQDPARMS DD DISP=SHR,DSN=SQDATA.V400.PARMLIB(DB2CDC)
//SQDPARMS DD *
--apply --start /home/sqdata/db2cdc/db2cdc.cab
/*
//* --apply --start --ziip --no-ddl-tracking
//SQDLOGL  DD *
*=2
*=8
OPTIONS=NET_ACTIVITY,MESSAGE,CDCSTORE,IFI,
OPTIONS=PARSE,UOW,NET_ACTIVITY,MESSAGE,CDCSTORE,IFI,
OPTIONS=TIME,
CDCSTORE=8
DB2C=8
VFILE=8

Notes:

1. Precisely recommends zIIP processors be used to enhance CPU cycle efficiency and reduce the CPU cost associated with NaCl software encryption.

2. While the SQDCONF utility is used to create the Capture Agent configuration and perform most of the other management and control tasks associated with the capture agent, on z/OS it cannot perform the function of the MOUNT command. On platforms other than z/OS, the MOUNT command brings an Agent on-line. On z/OS, that function must be performed with an agent specific JOB or Started Task. Once the Capture Agent has been "mounted" on z/OS, the sqdconf utility can and should be used to perform all other functions as documented.

3. The first time this Job is run you may choose to include a special form of the sqdconf apply and start commands. After the initial creation, --apply should not be used in this JCL unless all changes made since the agent was last Stopped are intended to take effect immediately upon the Start. The purpose of apply is to make it possible to add/modify the configuration while preparing for an implementation of changes without affecting the current configuration. Note, apply and start can and frequently will be separated into different SQDCONF jobs.

4. The Controller Daemon uses a Public / Private key mechanism to ensure component communications are valid and secure. While it is critical to use unique key pairs when communicating between platforms, it is common to use the same key pair for components running together on the same platform. Consequently, the key pair used by a Log Reader Capture agent may be the same pair used by its Controller Daemon.


Setup Capture Controller Daemon

The Controller Daemon plays a special role on platforms running Data Capture Agents, authenticating incoming connection requests from Engines and Utilities usually running on other platforms, and transferring the opened socket to the requested Capture/Publisher. The Authorized Key File of the Capture Controller Daemon must contain the Public keys of all Engines and Utilities that will request connections to Captures/Publishers managed by the Daemon.

Setup and configuration of the Capture Controller Daemon, SQDAEMON, include some of the Prepare Environment activities and the following Capture specific steps:

· Create the Access Control List

· Create the Agent Configuration File

· Prepare Controller Daemon JCL


Create Access Control List

The Controller Daemon requires an Access Control List (ACL) that assigns privileges (admin, query) by user or group of users associated with Capture or Engine agents running on the platform. This sequential file, usually named acl.cfg, is placed in the <SQDATA_VAR_DIR>/daemon/cfg directory. The file name must match the name specified in the sqdagents.cfg file by the acl=<location/file> parameter. The ACL configuration file contains 3 sections. Each section consists of key-argument pairs. Empty lines and lines starting with # or -- are interpreted as comments. Section names must be bracketed, while keywords and arguments are case-sensitive:

Syntax

Global section - not identified by a section header and must be specified first.

allow_guest=no | yes
guest_acl=<acl_list_name>
default_acl=<comma separated list>

Keyword and Parameter Descriptions

allow_guest=no | yes - Specifies whether a guest is allowed to connect. Guests are clients that can process a NaCl handshake, but whose public key is not in the server's authorized_keys_list file. If guests are allowed, they are by default granted the right to query. The default value is No.

guest_acl=<acl_list_name> - Optionally assigns one of the acl_list_names in the [acls] section to guest users. This must be specified after the allow_guest parameter. The default, if no acl_list_name is specified, is none.

default_acl=<comma separated list> - Optional comma separated list of specific access type authorizations (see below) assigned to authenticated clients that do not have an [acls] entry explicitly associated to them, either directly or via a Group, making them by default a "Guest".

Syntax

Groups section - [groups] allows the optional definition of user groups to simplify management of the Access Rights for individual users with similar requirements. The Rights associated with the group_name in the Access Control List section [acls] will propagate to all users in the group.

[groups]
<group_name>=<user_name> [,<user_name>…]

Keyword and Parameter Descriptions

<group_name>=<user_name> [,<user_name>…] - Defines a "named group" and the members of that group. The case sensitive user_name/user-id of a connecting client must match a name specified in one or more group_names or the user will be considered a guest. The user_name may include a domain, e.g. user_name@server, but more commonly does not, which facilitates the use of a single NaCl key pair for an individual user with accounts on multiple systems.

Syntax

Access Control List section - [acls] assigns one or more access "types" to individual users or groups in a comma separated list.

[acls]
<user_name> | <group_name> = <access type list>

Keyword and Parameter Descriptions


<user_name> | <group_name> - Individual user_name/user-id or group_name

<access type list> - A comma separated list of one or more of the following access or authorization types, listed in ascending order of authority:

none - Explicitly assigns no authorization. When present in a list, all other elements of the list are ignored.

query - Allows querying the daemon about the state of the daemon and its agents. That includes the SQDmon utility Inventory and Display commands.

read - Allows reading data from an agent. An engine must have this authorization to be able to fetch cdc data from a publisher.

write - Not presently used.

exec - Allows starting or stopping an agent. This type is both agent type and platform specific. Engine and Program (which includes scripts supported on the platform) Types may be started and stopped only on platforms other than z/OS.

admin - Allows all rights. This level of access is required to reload a modified daemon configuration.

sysadm - A special right that allows the ability to shutdown the daemon itself. By default only the user used to run the daemon has that ability, unless that user has been given sysadm access/authorization explicitly or via a group in the acl.cfg file.

Notes:

1. When a type of access or authorization is assigned to a group_name, the list will propagate to all users in the group.

2. Access types are cumulative; therefore it is only necessary to list the maximum access or authorization allowed for an individual User or Group:

[acls]
admin=admin
cntl=exec
status=read

3. The user_name/user_id that starts the daemon is implicitly granted sysadm access, whether or not it is explicitly assigned to a group or individually assigned another specific access right or authorization.

4. Changes are not known to the daemon until the configuration file is reloaded, using the SQDmon Utility, or the sqdaemon process is stopped and started.
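The resolution rules in the notes above can be sketched as follows. This is a hypothetical illustration only, not the daemon's actual implementation; the group/acl dictionaries model the [groups] and [acls] sections and the guest default described earlier in this section.

```python
# Ascending order of authority, as listed in this section.
LEVELS = ["none", "query", "read", "write", "exec", "admin", "sysadm"]

def resolve_access(user, groups, acls, daemon_user, default_acl="query"):
    """Sketch: return the set of access types effective for `user`."""
    if user == daemon_user:
        # The user that starts the daemon is implicitly granted sysadm.
        return set(LEVELS[1:])
    granted = set()
    # Direct assignment plus propagation from every group the user belongs to.
    names = [user] + [g for g, members in groups.items() if user in members]
    for name in names:
        types = acls.get(name, [])
        if "none" in types:
            return set()  # "none": all other elements of the list are ignored
        for acc in types:
            # Cumulative: a type implies every lower type in LEVELS.
            granted.update(LEVELS[1:LEVELS.index(acc) + 1])
    # Users matching nothing fall back to the guest/default acl.
    return granted or {default_acl}

groups = {"cntl": ["user1"], "status": ["user2"]}
acls = {"cntl": ["exec"], "status": ["read"]}
print(sorted(resolve_access("user1", groups, acls, "sqdata_user")))
```

Note how listing only the maximum type (exec) yields query, read and write as well, matching note 2.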

The acl.cfg file can be directly edited or the JCL can be edited and the files recreated using JCL similar to sample member CRDAEMON included in the distribution. That JCL includes steps to create both the Access Control List and the Agent Configuration file. The JCL should be edited to conform to the operating environment and in particular the zFS directory structure created by the ALLOCZDR Job run as part of the Prepare Environment Checklist.

Example

//*-----------------------------------------------------------------
//* CREATE AND POPULATE THE ACL.CFG FILE
//*-----------------------------------------------------------------
//CRACL    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT2   DD PATH='/home/sqdata/daemon/cfg/acl.cfg',
//            PATHOPTS=(OWRONLY,OCREAT,OTRUNC),
//            PATHMODE=SIRWXU,
//            PATHDISP=(KEEP,DELETE),
//            FILEDATA=TEXT
//*
//SYSUT1   DD *
allow_guest=yes
guest_acl=none
default_acl=query

[groups]
admin=<sqdata_user>
cntl=<user_name1>,<user_name2>
status=<user_name3>,<user_name4>

[acls]
admin=admin,sudo
cntl=query,read,write
status=query,read
/*

Create Agent Configuration File

The Agent Configuration File lists alias names for agents and provides the name and location of agent configuration files. It can also define startup arguments and output files for agents that are managed by the Controller Daemon. The sqdagents.cfg file begins with global parameters followed by sections for each agent controlled by the daemon.

Global section - not identified by a section header and must be specified first.

Syntax

acl=<path_to/acl.cfg>
authorized_keys=<path_to/nacl_auth_keys>
identity=<path_to/id_nacl>
message_level=<0-8>
message_file=../logs/daemon.log
service=<port_num>

Keyword and Parameter Descriptions

acl=<path_to/acl.cfg> - Location (fully qualified path to the working directory) and name of the acl configuration file to be used by the Controller Daemon. While the actual name of this file is user defined, we strongly recommend using the file name acl.cfg.

authorized_keys=<path_to/nacl_auth_keys> - (Non-z/OS only) Location of the authorized_keys file to be used by the Controller Daemon. On z/OS platforms, it is specified at runtime by a DD statement.

identity=<path_to/id_nacl> - (Non-z/OS only) Local file system path and file name, or AKV url, for the NaCl private key to be used by the Controller Daemon. On z/OS platforms both Public and Private Key files are specified at runtime by DD statements.

message_level=<0-8> - Level of verbosity for the Controller Daemon messages. This is a numeric value from 0 to 8. Default is 4.

message_file=../logs/daemon.log - Location of the file that will accumulate the Controller Daemon messages. If no file is specified, either in the config file or from the command line, then messages are sent to the syslog.

service=<port_num> - Number of the port or service to be used by the Controller Daemon to listen for incoming service requests. The service can be defined using the SQDPARM DD on z/OS, on the command line starting sqdaemon, in the config file described in this section or, on some platforms, as the environment variable SQDAEMON_SERVICE, in that order of priority. Absent any specification, the default is 2626. If for any reason a second Controller Daemon is run on the same platform, each must have a unique port specified.
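Putting the global parameters together, a global section on a non-z/OS platform might look like the following. The paths, message level and port here are illustrative assumptions, not required values:

```
acl=/home/sqdata/daemon/cfg/acl.cfg
authorized_keys=/home/sqdata/daemon/cfg/nacl_auth_keys
identity=/home/sqdata/daemon/cfg/id_nacl
message_level=4
message_file=../logs/daemon.log
service=2626
```

On z/OS, the authorized_keys and identity entries would be omitted, since the key files are supplied by DD statements at runtime.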

Agent sections - rather than a fixed section header, each agent is identified by its "alias name" between square brackets, heading a block of properties for that agent.

Syntax

[<capture_agent_alias>] | [<publisher_agent_alias>] | [<engine_agent_alias>] | [<program_alias>] | [<process_alias>]
type=Capture | Publisher | Engine | Program
program=<name>
args=<parameter list>
working_directory=<full path to working directory>
cab=<full or relative path>/<capture/publisher_alias>.cab
stdout_file=<full or relative path>/<agent_alias>.<ext>
stderr_file=<full or relative path>/<agent_alias>.<ext>
report=<synonym of stderr_file>
comment=<user comment>
auto_start=[ Y | N ]

Keyword and Parameter Descriptions

[<capture_agent_alias>] | [<publisher_agent_alias>] - Must be unique in the configuration file of the daemon on the same machine as the Capture / Publisher process. This alias name will be referenced in the Engine connect string. Must be associated with the cab=<*.cab> file name specified in the sqdconf create command for the capture or publisher Agent setup in the previous section.

[<engine_agent_alias>] - Only present in the configuration file of the daemon on the same machine where the Engine process executes. Also known as the Engine name, the <engine_agent_alias> provided here does not need to match the one specified as the subscription name in the Capture/Publisher .cab file; however, there is no reason to make them different. It will also be used by sqdmon agent management and display commands.

[<program_alias>] | [<process_alias>] - Only present in the configuration file of the daemon on the same machine where the program or process associated with the alias will execute. Any string of characters may be used.

type=Capture | Publisher | Engine | Program - Type of the agent: Capture, Publisher, Engine or Program. It is not necessary to specify the type for Engines, programs, scripts or batch files.

program=<name> - The name of the program (or *nix shell script or Windows batch file) to invoke in order to start an agent. This can be a full path or a simple program name. In the latter case, the program must be accessible via the PATH of the sqdaemon context. The value must be "SQDATA" for Engines but may also be any other executable program, shell script (*nix) or batch file (Windows).

args=<parameter list> - Parameters passed on the command line to the program=<name> associated with the agent on startup. In the case of a Connect CDC SQData Engine, the first must be the "parsed" Engine script name, i.e. <engine.prc>. This is valid for program= entries only.

working_directory=<full path to working directory> - Specifies the working directory used to execute the agent.

cab=<full or relative path>/<capture/publisher_alias>.cab - Location and name of the configuration (.cab) file for capture and publisher agent entries.

stdout_file=<full or relative path>/<agent_alias>.<ext> - Location and name of the output file used by the agent_alias for stdout. The file extension is optional but .rpt is recommended. If not specified, the default value is agent_name.stdout. Using the same file name for stdout_file and stderr_file is recommended and will result in a concatenation of the two results, for example <engine_name.rpt>.

stderr_file=<full or relative path>/<agent_alias>.<ext> - Location and name of the output file used by the agent_alias for stderr. The file extension is optional but .rpt is recommended. If not specified, the default value is agent_name.stderr. Using the same file name for stdout and stderr is recommended and will result in a concatenation of the two results, for example <engine_name.rpt>.

report=<synonym of stderr_file> - If both are specified, report takes precedence.

comment=<user comment> - User specified comment associated with the agent, used for display purposes.

auto_start=[ Y | N ] - A boolean value (yes/no/1/0), indicating whether the associated agent should be automatically started when sqdaemon is started. This also has an impact on the return code reported by sqdmon when an agent stops with an error. If an agent is marked as auto_start and it stops unexpectedly, this will be reported as an Error in the sqdaemon log; otherwise it is reported as a Warning. This is valid for engine entries only.
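Combining these keywords, an Engine agent section might look like the following. The alias, paths and script name here are illustrative assumptions; note program=SQDATA (required for Engines), the parsed script as the first args value, and the shared stdout/stderr file name recommended above:

```
[DB2TODB2]
type=Engine
program=SQDATA
args=DB2TODB2.prc
working_directory=/home/sqdata/engine
stdout_file=../logs/DB2TODB2.rpt
stderr_file=../logs/DB2TODB2.rpt
comment=Db2 to Db2 replication Engine
auto_start=Y
```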

Notes:

1. Directories and paths specified must exist before being referenced. Relative names may be included and are relative to the working directory specified by the sqdaemon "-d" parameter or in the file itself.

2. While message_file is not a required parameter, we generally recommend its use; otherwise all messages, including authentication and connection errors, will go to the system log. On z/OS, however, the system log may be preferable since other management tools used to monitor the system use the log as their source of information.

3. All references to .cab file names must be fully qualified.

4. Azure Key Vault (AKV) based secrets in Connect CDC SQData are supported only on Linux platforms.

5. AKV requires an Azure Active Directory (AAD) token to be presented to retrieve secrets. SQData retrieves AAD tokens from Azure differently when running on on-prem Linux machines than when running on an Azure Linux VM.

a) When running on-prem, to retrieve an AAD token, tenant_id, client_id and client_secret have to be specified in the sqdata_cloud.conf file located in the working directory.

b) When running on an Azure VM, the AAD token will be retrieved from a managed identity. Two types of managed identities are supported:

1) If client_id is specified in the sqdata_cloud.conf file, then the AAD token is retrieved from the user managed identity.

2) If tenant_id, client_id and client_secret are not specified, then the AAD token is retrieved from the system managed identity.
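The decision logic above can be sketched as follows. The key names come from this section's description of sqdata_cloud.conf; the function itself is a hypothetical illustration of the rules, not product code:

```python
# Sketch of how the AAD token source is chosen from sqdata_cloud.conf,
# per the rules above. The config file is modeled as a simple dict.

def aad_token_source(conf: dict) -> str:
    """Return which credential path would be used to obtain an AAD token."""
    if {"tenant_id", "client_id", "client_secret"} <= conf.keys():
        # On-prem: all three values present in sqdata_cloud.conf.
        return "on-prem credentials"
    if "client_id" in conf:
        # Azure VM with only client_id: user managed identity.
        return "user managed identity"
    # Azure VM with none of the values specified: system managed identity.
    return "system managed identity"

print(aad_token_source({"client_id": "abc-123"}))
```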

A sample sqdagents.cfg file for the Capture Controller Daemon running on z/OS can be created using JCL similar to sample member CRDAEMON included in the distribution. The JCL should be edited to conform to the operating environment and the zFS directories previously allocated. Once created, the sqdagents.cfg file can be directly edited or the JCL can be edited and the files recreated. Changes are not known to the daemon until the configuration file is reloaded (see SQDMON Utility) or the daemon process is stopped and started.

Example

//*--------------------------------------------------------------------
//* Create and populate the sqdagents.cfg file
//*--------------------------------------------------------------------
//CRAGENTS EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT2   DD PATH='/home/sqdata/daemon/cfg/sqdagents.cfg',
//            PATHOPTS=(OWRONLY,OCREAT,OTRUNC),
//            PATHMODE=SIRWXU,
//            PATHDISP=(KEEP,DELETE),
//            FILEDATA=TEXT
//*
//SYSUT1   DD *
acl=acl.cfg
message_file=../logs/acl.log

[DB2CDC]
type=capture
cab=/home/sqdata/db2cdc/db2cdc.cab
/*

Prepare z/OS Controller Daemon JCL

JCL similar to the sample member SQDAEMON included in the distribution can be used to start the Controller Daemon. The JCL must be edited to conform to the operating environment.

//SQDAEMON JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*-----------------------------------------------------------------
//* Execute the z/OS SQDAEMON Controller in Batch
//*-----------------------------------------------------------------
//* Parms Must be Entered in lower case
//*
//* --service=port_number
//*     Where port_number is the number of a TCP/IP port that will be
//*     used to communicate to the Controller Daemon
//*     ** Note: If this parm is omitted, here and in the
//*     sqdagents.cfg file, the default port will be 2626 **
//*
//* -d zfs_dir
//*     Where zfs_dir is the predefined working directory used by
//*     the controller
//*     EXAMPLE:
//*     /home/sqdata/daemon - the controller's working directory
//*     and its required cfg and optional logs sub-directories:
//*
//*     /home/sqdata/daemon/cfg - must contain 1 file:
//*         sqdagents.cfg - contains a list of
//*             capture/publisher/engine agents to
//*             be controlled by the daemon
//*
//*     - and optionally:
//*         acl.cfg - used for acl security
//*     /home/sqdata/daemon/logs - used to store log files used by the
//*         controller daemon
//*
//*********************************************************************
//*
//JOBLIB   DD DISP=SHR,DSN=SQDATA.V400.LOADLIB
//*
//SQDAEMON EXEC PGM=SQDAEMON
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDLOG   DD SYSOUT=*
//*SQDLOG8 DD DUMMY
//*
//SQDPUBL  DD DISP=SHR,DSN=SQDATA.NACL.PUBLIC
//SQDPKEY  DD DISP=SHR,DSN=SQDATA.NACL.PRIVATE
//SQDAUTH  DD DISP=SHR,DSN=SQDATA.NACL.AUTH.KEYS
//*SQDPARMS DD DISP=SHR,DSN=SQDATA.V400.PARMLIB(SQDAEMON)
//SQDPARMS DD *
 --service=2626
 --tcp-buffer-size=262144
 -d /home/sqdata/daemon
/*
//

Note: This JCL can be simplified by placing all optional parameters in the sqdagents.cfg file described above rather than specifying them in the JCL. The exception to this recommendation is when multiple Controller Daemons are running on the same machine. In that case --service=port_number must be specified for at least one of the Daemons.

Configure Engine

This Capture Agent supports two types of Engines: the Apply Engine, which provides maximum control over the replication process and a variety of target datastores, and the Replicator Engine, designed for maximum streaming replication performance but with limited target datastore options.

The function of an Apply Engine may be one of simple replication, data transformation, event processing, source datastore unload or a more sophisticated active/active data replication scenario. The actions performed by an Apply Engine are described by an Engine Script, the complexity of which depends entirely on the intended function and the business rules required to describe that function.

The most common function performed by an Apply Engine is to process data from one of the Change Data Capture (CDC) agents, applying business rules to transform that data so that it can be applied or efficiently replicated to a Target datastore of any type on any operating platform.

The following steps should be followed to configure an Apply Engine:

1. Determine requirements

Identify the type of the target datastore; the platform the Apply Engine will run on; and finally the data transformations required, if any, to map the source data to the target data structures.

2. Prepare Apply Engine Environment

Once the platform and type of target datastore are known, the environment on that platform must be prepared, including the installation of Connect CDC SQData and any other components required by the target datastore. Connect CDC SQData will also utilize your existing native TCP/IP network for publishing data captured on one platform to Engines running on any other platform. Factors including performance requirements and network latency should be considered when selecting the location of the system on which the Engine will execute.

3. Configure Engine Controller Daemon

The Engine Controller Daemon is the same program, SQDaemon, as the Capture Controller Daemon but provides local and remote management and control of Engines, Utilities and other User agents on the platform where they execute. Precisely recommends using an Engine Controller Daemon to simplify operation, including the optional automatic startup of Engine agents following platform restart.

4. Create Apply Engine Script

The Apply Engine utilizes a SQL-like scripting language capable of a wide range of operations, from replication of identical source and target structures using a single command to complex business-rule-based transformations. Connect CDC SQData commands and functions provide full procedural control of data filtering, mapping and transformation, including manipulation of data at its most elemental level if required.

5. End-to-end Component Verification

Confirm successful Change Data Capture through target datastore content validation.

The Replicator Engine is controlled by a simple configuration file that merely identifies source and target datastores. Its primary purpose is to operate like a utility, offering high performance source to target replication with the focus on streaming targets like Kafka. It runs only on Linux.

The Replicator Engine operates in two modes:

· As a single purpose version of the Apply Engine designed for pure replication of captured Relational source data to selected target Datastores. The objective is to provide a Utility-like Replication solution requiring minimal configuration and eliminating, to the extent possible, all maintenance by supporting what we refer to as Schema Evolution. In the case of Db2 z/OS ONLY, changes to Db2 schemas will automatically generate altered JSON schemas for Kafka and, when using AVRO, automatically update the Confluent Schema Repository and the AVRO formatted Kafka topic payload. The Replicator Engine performs no unnecessary data transformations and provides no ability to inject Business Rules to affect the data written to the target. There are limited options providing global control of Kafka topic header information.

· As a Distributor for data captured by either the IMS TM Exit or IMS Log Capture agent. In this mode the IMS CDC data is written as special purpose Kafka Topics containing CDCRAW data partitioned by the full or partial Key of the Root segments. The partitioning of the Kafka targets provides for subsequent parallel processing of the CDCRAW data by Apply Engines configured as Kafka Consumers.
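The key-based partitioning described in the Distributor mode above can be sketched as follows. The hash scheme here (CRC32 modulo partition count) is an assumption for illustration, not the product's actual algorithm; the point is that all changes for the same full or partial root key land in the same partition, enabling parallel consumers:

```python
import zlib

def partition_for(root_key: str, num_partitions: int, key_prefix_len=None) -> int:
    """Sketch: map an IMS root-segment key (or a partial key prefix)
    to a Kafka partition so related changes stay ordered together."""
    key = root_key if key_prefix_len is None else root_key[:key_prefix_len]
    return zlib.crc32(key.encode()) % num_partitions

# All changes for the same root key map to the same partition.
assert partition_for("CUST0001", 8) == partition_for("CUST0001", 8)
```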

The following steps should be followed to configure the RDBMS Replicator Engine for JSON or AVRO formatted Kafka topics:

1. Determine requirements

Identify the type of the target datastore, JSON or AVRO formatted Kafka topics.

2. Prepare Replicator Engine Environment

The Replicator Engine runs on Linux and preparation requires only installation of Connect CDC SQData and other components required by the target datastore. A Kafka target, for example, requires the Open Source librdkafka C language API, and the libcurl API if a Confluent Schema Repository is to be used, in addition to Kafka cluster access and the optional Confluent Repository. Like all Engine side processing, the Replicator will also utilize your existing native TCP/IP network to receive data captured and published on another platform. Factors including performance requirements and network latency should be considered when selecting the location of the system on which the Replicator Engine will execute.

Note, see the Architecture guide for information about other streaming target platforms like Microsoft's proprietary EventHub.

3. Configure Engine Controller Daemon

The Engine Controller Daemon is the same program, SQDaemon, as the Capture Controller Daemon but provides local and remote management and control of Engine, Utility and other User agents on the platform where they execute. Precisely recommends using an Engine Controller Daemon to simplify operation, including the optional automatic start of Engine agents following platform restart.

4. Create Replicator Engine Configuration Script

The Replicator Engine uses a very simple configuration script with very few options and no data transformation logic. See the Replicator Engine Reference for details.

5. End-to-end Component Verification

Confirm successful Change Data Capture through target datastore content validation.

Notes:

1. See Db2/z Straight Replication for a sample Apply Engine script, and the Apply and Replicator Engine References for a full explanation of the capabilities provided by both types of Engine.

2. See Add Engine Controller Daemon for an example of the configuration.

Component Verification

This section describes the steps required to verify that the Connect CDC SQData Db2/z Log Reader Data Capture Agent is working properly. If this is your first implementation of the Db2/z Log Reader Capture, we recommend a review with Precisely https://www.precisely.com/support before commencing operation.

Start z/OS Controller Daemon

The JCL configured previously in sample member SQDAEMON can be used to start the Controller Daemon.

Note, once the Controller Daemon has been started, implementing changes made to any of the Controller Daemon's configuration files (acl.cfg, sqdagents.cfg, nacl.auth.keys) can be accomplished using the SQDMON Utility reload command without killing and re-starting the Controller Daemon.

Start Db2/z Log Reader Capture Agent

The JCL configured previously in sample member SQDDB2C can be used to Mount (execute) the Db2 Log Reader Capture Agent on z/OS.

It is important to realize that the return code and message from SQDDB2C indicating that the start command was executed successfully do not necessarily mean that the agent is still in a started state. They only mean that the start command was accepted by the capture agent and that the initial setup necessary to launch a capture thread was successful. These preparation steps involve connecting to Db2 and setting up the environment necessary to start a log mining session.

The capture agent posts warnings and errors in the system log. The program name for the Db2/z Log Reader Capture is SQDDB2C. If there is a mechanism in place to monitor the system log, it is a good idea to include the monitoring of sqdatalogm messages. This will allow you to detect when a capture agent is mounted, started, or stopped - normally or because of an error. The log will also contain, for most usual production error conditions, additional information to help diagnose the problem.

Start Engine

Starting an Engine on the target platform may require only the submission of JCL similar to sample member SQDATAD included in the distribution, specifying the parsed Engine script, in our example DB2TODB2.

//SQDATA JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Execute the Connect CDC SQData Engine under Db2
//*--------------------------------------------------------------------
//* Note: 1) This Job may require specification of the Public/Private
//*          Key pair in order to connect to a Capture/Publisher
//*          running on another platform
//*
//*       2) To run the Connect CDC SQData Engine as a started task,
//*          refer to member SQDAMAST
//*
//* Required DDNAME:
//*   SQDFILE DD - File that contains the Parsed Engine Script
//*
//*********************************************************************
//*
//JOBLIB   DD DISP=SHR,DSN=SQDATA.V400.LOADLIB
//         DD DISP=SHR,DSN=DSNB10.SDSNLOAD
//*
//SQDATA   EXEC PGM=SQDATA,REGION=0M
//SQDPUBL  DD DISP=SHR,DSN=SQDATA.NACL.PUBLIC
//SQDPKEY  DD DISP=SHR,DSN=SQDATA.NACL.PRIVATE
//SYSPRINT DD SYSOUT=*
//SQDLOG   DD SYSOUT=*
//*SQDLOG8 DD DUMMY
//CEEDUMP  DD SYSOUT=*
//*
//*---- PARSED ENGINE SCRIPT FILE ----
//SQDFILE  DD DISP=SHR,DSN=SQDATA.V400.SQDOBJ(DB2TODB2)

See the Apply and Replicator Engine references for other use cases involving the Connect CDC SQData Db2/z Log Reader Capture.

Db2 Test Transactions

1. Execute an online transaction or Db2 SQL statement, updating the candidate tables that are to be captured.

2. Execute a batch program, updating the candidate tables that are to be captured.

3. Examine the results in the target datastore using the appropriate tools.

Operation

The sections above described the initial installation and configuration process. Once completed, Connect CDC SQData is operationally ready to become active. Db2 data that is changed will be logged by Db2, then Captured. Subscribing Engines will be authenticated by the Controller Daemon and CDC data will be Published via TCP/IP to each subscribing Engine, which will apply the data to the target datastores.

This section covers a few of the common operational activities likely to be encountered while in operation.

There are three methods supported for interacting with Connect CDC SQData components on z/OS:

1. ISPF panel Interface - The DB2 Quickstart Guide provides detailed step by step instructions.

2. z/OS JCL - Traditional z/OS Jobs that execute the SQDCONF and SQDMON utilities and their full range of options for managing and controlling the SQDAEMON Controller Daemon and the z/OS IMS Log Reader Capture and zLog Publisher.

3. z/OS Console Commands - Duplicate many of the commands associated with the SQDCONF and SQDMON utility programs. Note, in order to issue any z/OS commands from TSO (including SDSF) the user must have TSO CONSOLE authority and possibly SDSF command authority.

P <task_name> - Stops and unmounts the agent immediately

F <task_name>,PAUSE - Pauses the agent

F <task_name>,RESUME - Resumes the agent after a pause

F <task_name>,DISPLAY - Display of the agent cab file with the output being written to SYSPRINT in the running STC

F <task_name>,STOP - Stops the agent but leaves it mounted

F <task_name>,STOP,UNMOUNT - Stops and unmounts the agent (same as P command)

F <task_name>,STOP,FLUSH - Stops the agent after flushing out any UOWs that began before the command was issued, then unmounts

F <task_name>,STOP,FLUSH,FAILOVER - Same as STOP,FLUSH except that it instructs downstream engines to try to reconnect for up to 10 minutes

F <task_name>,START - Starts an agent that was previously stopped, but still mounted

F <task_name>,APPLY - Applies pending cab file changes to the agent's cab file. The agent must be mounted and stopped in order to apply

Start / Reload Controller Daemon

The JCL configured previously in sample member SQDAEMON can be used to start the Controller Daemon.

Note, once the Controller Daemon has been started, implementing changes made to any of the Controller Daemon's configuration files (acl.cfg, sqdagents.cfg, nacl.auth.keys) can be accomplished using the SQDMON Utility reload command without killing and re-starting the Controller Daemon.

z/OS Console commands that may also be issued, with the proper authority include:

P <task_name> - Stops the daemon

F <task_name>,RELOAD - Refreshes the SQDagents.cfg file - required when you add new captures/publishers or delete/recreate a capture/publisher cab file

F <task_name>,INVENTORY - Lists the tasks registered in SQDagents.cfg and their current status (i.e. started, stopped, not mounted, etc.)

F <task_name>,SHUTDOWN - Stops the daemon (same as P command)

Setting the Capture Start Point

The first time the Db2/z Capture agent is started, it uses the "current" Db2 LSN/RBA as the starting point by default. Capture can also be started for the first time at a specific point-in-time by explicitly specifying the start LSN. The current LSN can be determined using the Db2 -DISPLAY LOG command and then selecting a starting LSN based on the Begin Time of a logged transaction. Transactions that started before that LSN but have not yet been committed are considered in-flight units-of-work and will be ignored, along with all other transactions that committed prior to that LSN.

The starting Db2 LSN/RBA is specified in the capture .cab configuration file. The log point LSN can be set at the global capture agent level (i.e. for all tables in the configuration file) or for individual tables. An LSN of 0 indicates that capture should start from the current point in the Db2 log. This is used when starting the capture agent for the first time or when adding a new table to the configuration file. Once the LSN is established, it would typically never be altered for normal operation. The capture agent continuously updates the global LSN and the individual table LSNs in the configuration file as changed data is being processed. Each time the capture agent starts, data capture is resumed from the last LSN processed.
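The LSN bookkeeping described above can be sketched as follows. This is a simplified model of the rules (0 means "current", otherwise resume from the last LSN processed), not the .cab file's actual format or the capture's code:

```python
# Simplified model of the capture start-point rules described above.

class CaptureConfig:
    def __init__(self):
        self.global_lsn = 0   # 0 = start from the current point in the log
        self.table_lsn = {}   # last LSN processed, per source table

    def starting_point(self, current_log_lsn):
        """LSN the capture agent would resume from on start."""
        return current_log_lsn if self.global_lsn == 0 else self.global_lsn

    def record_progress(self, table, lsn):
        """Advance the per-table and global LSNs as changes are processed."""
        self.table_lsn[table] = lsn
        self.global_lsn = max(self.global_lsn, lsn)

cfg = CaptureConfig()
cfg.record_progress("SQDATA.EMP", 0x1A2B)
# A restart now resumes from the last LSN processed, not the current one.
print(hex(cfg.starting_point(current_log_lsn=0x9999)))
```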

Example

Set the LSN to 0 (zero means current time) at the capture agent level (for all tables) with the SQDCONF modify command.

//*----------------------------------------
//*- SET LSN AT GLOBAL LEVEL
//*----------------------------------------
//SETLSN   EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
 modify /home/sqdata/db2cdc/db2cdc.cab --lsn=0
//*

Restart / Remine Db2/z

Precisely's Connect CDC SQData Db2/z Data Capture was designed to be both robust and forgiving in that it takes a simple and conservative approach when deciding where in the log to start capturing data when first started and when being restarted following either a scheduled interruption of the capture or an unplanned disruption in a production environment. The capture agent continuously updates the Capture configuration (.cab) file with the global LSN (last Db2 log LSN read), the last LSN captured for each source table and the last LSN published to each subscribing Engine.

Each of those LSNs is used when the Capture is started for the first time, when it resumes following a scheduled or unscheduled STOP, and when, for whatever reason, a decision has been made to either re-capture from a point-in-time in the past or to skip to the "Current" point-in-time.

Normal Restart

By default, each time the Db2/z Capture agent starts, data capture is resumed from the last LSN processed. It knows exactly how far back in time to re-mine to guarantee the re-capture of all in-flight transactions for all transactions where the "begin" transaction record was previously seen by the Capture. While re-mining, the Capture may encounter log records belonging to a transaction for which the begin transaction record was not seen previously. These records are counted as orphan records unless a rollback for the transaction is subsequently seen in the log. If the transaction is rolled back, the orphaned records of that transaction are voided (that is, not counted as orphan records). It is possible that the statistics, immediately after a re-start, show some orphan records but do not show them later.
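The orphan-record accounting described above can be sketched as follows. The log-record shapes (action, transaction id) are assumptions made for illustration; the capture's actual log-record processing is not shown here:

```python
# Simplified model of re-mine orphan accounting: data records whose BEGIN
# was never seen count as orphans, unless their transaction is later
# rolled back, in which case those records are voided (not counted).

def count_orphans(log):
    """log is a list of (action, txn_id) tuples in log order.
    Actions: 'begin', 'data', 'commit', 'rollback' (shapes assumed)."""
    begun, pending = set(), {}    # pending orphan counts per transaction
    orphans = 0
    for action, txn in log:
        if action == "begin":
            begun.add(txn)
        elif action == "data" and txn not in begun:
            pending[txn] = pending.get(txn, 0) + 1
        elif action == "rollback":
            pending.pop(txn, None)   # voided: rolled-back orphans don't count
        elif action == "commit":
            orphans += pending.pop(txn, 0)
    # Transactions still open at the end keep their orphan counts.
    return orphans + sum(pending.values())

log = [("data", "T1"), ("data", "T2"), ("rollback", "T2"),
       ("begin", "T3"), ("data", "T3"), ("commit", "T3")]
print(count_orphans(log))
```

This mirrors the behavior noted above: statistics taken mid-re-mine may show an orphan for T2, but once its rollback is seen the count drops again.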

Example

Resume capture using JCL similar to the following to Mount and Start the Capture Agent.

//SQDDB2C  EXEC PGM=SQDDB2C,REGION=0M
//*SQDDB2C EXEC PGM=XQDDB2C,REGION=0M
//SQDPUBL  DD DSN=SQDATA.NACL.PUBLIC,DISP=SHR
//SQDPKEY  DD DSN=SQDATA.NACL.PRIVATE,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//CEEDUMP  DD SYSOUT=*
//SQDLOG   DD SYSOUT=*
//*SQDLOG8 DD DUMMY
//*SQDPARMS DD DISP=SHR,DSN=SQDATA.V400.PARMLIB(DB2CDC)
//SQDPARMS DD *
 --start /home/sqdata/db2cdc/db2cdc.cab
//*

Restart from Current

It may be necessary or desirable to restart Capture from the "Current" point-in-time. While this is the default behavior if the Capture Configuration (.cab) file is deleted and recreated, it may be desirable to explicitly specify that an existing Capture, perhaps one that had been stopped for a period of time (common in test environments), is simply restarted from "now" rather than from where it was last processing. In this situation, both the Global LSN for the Capture and the starting LSN for individual Subscribing Engines must be reset. Rather than having to pick a specific LSN, however, the value need only be set to 0 (zero):

1. Set the Global LSN to Current

//*----------------------------------------
//*- SET LSN AT GLOBAL LEVEL
//*----------------------------------------
//SETLSN   EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
 modify /home/sqdata/db2cdc/db2cdc.cab --lsn=0

2. Next, you must specify the same value for each of the subscribing Engines. For every Engine that requires re-capture, a separate SQDCONF Job step must be run that specifies both the target datastore (subscribing Engine) and the LSN.

//*----------------------------------------
//*- SET LSN FOR ONE SPECIFIC Engine
//*- REPEAT THIS STEP FOR EACH SUBSCRIBING Engine
//*----------------------------------------
//SETLSN1  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *

modify /home/sqdata/db2cdc/db2cdc.cab --target=cdc:///DB2TODB2 --lsn=0

Note: If the capture was running while the SQDCONF modify Job steps were executed, the Capture must be Stopped before the changes can be Applied to the Capture configuration file.

Point-in-time Recovery

There may be times when a point-in-time recovery is required, where changes made from a few hours or even days earlier must be recaptured, perhaps because an Engine script was modified. The appropriate LSN <value> can be determined by using the Db2 -DISPLAY LOG command and/or running the Db2 print log map utility DSNJU004. The first thing to determine is if there is more than one Engine subscribed to the capture and, if there is, whether recaptured changes should be published to one or all of the subscribed Engines.

1. Regardless of how many Engines are subscribed or will require re-capture and re-publishing, the Global LSN must be set to the appropriate value as follows:

//*----------------------------------------
//*- SET LSN AT GLOBAL LEVEL
//*----------------------------------------
//SETLSN   EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
 modify /home/sqdata/db2cdc/db2cdc.cab --lsn=<lsn_value>

2. Next, you must determine if changes will be re-captured and re-published to one or more subscribing Engines. For every Engine that requires re-capture, a separate SQDCONF Job step must be run that specifies both the target datastore (subscribing Engine) and the remine LSN.

//*----------------------------------------
//*- SET LSN FOR ONE SPECIFIC Engine
//*----------------------------------------
//SETLSN1  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *

modify /home/sqdata/db2cdc/db2cdc.cab --target=cdc:///DB2TODB2 --lsn=<lsn_value>

3. Finally, after the modifications are complete, the Capture must be restarted with an additional --safe-restart=<lsn_value> parameter specifying the starting LSN for the re-capture.

//SQDDB2C  EXEC PGM=SQDDB2C,REGION=0M
//*SQDDB2C EXEC PGM=XQDDB2C,REGION=0M
//SQDPUBL  DD DSN=SQDATA.NACL.PUBLIC,DISP=SHR
//SQDPKEY  DD DSN=SQDATA.NACL.PRIVATE,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//CEEDUMP  DD SYSOUT=*
//SQDLOG   DD SYSOUT=*
//*SQDLOG8 DD DUMMY
//*SQDPARMS DD DISP=SHR,DSN=SQDATA.V400.PARMLIB(DB2CDC)
//SQDPARMS DD *
 --apply --start --safe-restart=<lsn_value> /home/sqdata/db2cdc/db2cdc.cab
//*

Note: If the capture was running while the SQDCONF modify Job steps were executed, the Capture must be Stopped before the changes can be Applied to the Capture configuration file.


Apply Capture CAB File Changes

Changes made to a capture agent configuration are not effective until they are applied. The apply operation instructs the capture agent to process the SQDconf commands that have been previously issued but are not yet actually in use by the capture agent itself. The reason for formally separating the apply step from the add/modify/remove steps is to allow a production change to be prepared in advance, without taking the risk of accidentally impacting production. For example, imagine that a new application will be rolled out next weekend:

· The application requires the capture of a couple of new tables and drops the capture of another table.

· If changes were effective immediately or automatically at the next start, the risk exists that the capture agent may go down for unrelated production issues (like an unexpected disconnection from Db2), and the new changes would be activated prematurely.

· If the changes could not be staged, then they could not be prepared until the production capture is stopped for the weekend migration.

· Staging the changes allows capture agent maintenance to be done outside of the critical upgrade path.

· Requiring the explicit Apply step ensures that such changes can be planned and prepared in advance, without putting current production replication in jeopardy.

The current operating "State" of the capture agent determines what actions can be taken, including the application of changes. The following table illustrates the state combinations that restrict or permit changes:

Capture State Combinations   Description

Unmounted                    Not on-line (on z/OS, no active job); changes cannot be applied.

Mounted, Paused              Running; Capture is paused, BUT Publishing continues until all previously captured data is consumed; changes cannot be applied.

Mounted, Stopped             Running; both Capture and Publishing are suspended; changes can be applied.

Mounted, Started             Running; both Capture and Publishing are active; changes cannot be applied.

Mounted, Started, Stalled    Running; both Capture and Publishing are active, no Engine is connected; changes cannot be applied.

In summary, while changes to the capture agent configuration can and should be staged, the following rules must be followed:

1. The capture agent must be Mounted and Stopped to permit additions and/or modifications to the configuration to be Applied.

2. A create on an existing configuration file will fail.

3. After initial configuration, changes in the form of add and modify commands must be used instead of the create command.

4. The .cab file cannot be deleted if the Capture agent is mounted.
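The rules and state table above reduce to a simple predicate. The following sketch is purely illustrative; the function and state names are our own, modeled on the table, and are not part of any Connect CDC SQData interface:

```python
# Illustrative model of the apply-permission rules from the state table.
# The state names mirror the documentation table; this is not a product API.
def can_apply_changes(mounted: bool, state: str) -> bool:
    """Return True when staged configuration changes can be applied."""
    if not mounted:
        # Unmounted: no active job or task, changes cannot be applied
        return False
    # Only a Mounted, Stopped capture permits apply; STARTED, PAUSED
    # and STALLED all leave capture or publishing activity in flight.
    return state == "STOPPED"

assert can_apply_changes(True, "STOPPED") is True
assert can_apply_changes(True, "STARTED") is False
assert can_apply_changes(False, "STOPPED") is False
```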

Notes:

· Once created, the Capture agent configuration .cab file should not be deleted, because the current position in the log and the relative location of each engine's position in the Log will be lost. When the Capture agent is brought back up it would start from the current log point, skipping all database activity that occurred after the capture was Stopped and Unmounted or Canceled.

· There are a few exceptions:
  --retry is effective immediately.
  --mqs-max-uncommitted is automatically effective at the next start.

· Parameters controlling the Transient Storage Pool can be modified dynamically and are effective immediately. See the section Modify z/OS Transient Storage Pool.

· JCL similar to the following can be used to Stop, Apply and Start the Capture Agent to fully implement the staged configuration changes:

//*----------------------------------------
//*- STOP THE CAPTURE AGENT
//*----------------------------------------
//STOP     EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
 stop /home/sqdata/db2cdc/db2cdc.cab
//*
//*----------------------------------------
//*- APPLY UPDATED CONFIGURATION FILE
//*----------------------------------------
//APPLY    EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
 apply /home/sqdata/db2cdc/db2cdc.cab
//*
//*----------------------------------------
//*- START UPDATED CONFIGURATION FILE
//*----------------------------------------
//START    EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
 start /home/sqdata/db2cdc/db2cdc.cab
//*

· If the Db2/z Capture Agent task ended or was Canceled and therefore Unmounted, the Apply and Start steps can be combined, as is usually the case when starting the Capture Agent for the first time. See the section Starting the Db2/z Capture Agent above.


Displaying Capture Agent Status

Capture agents keep track of statistical information for the last session and for the lifetime of the configuration file. These statistics can be accessed with the display action of SQDCONF.

//*----------------------------------------
//*- DISPLAY CONFIG FILE
//*----------------------------------------
//DISPLAY  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
 display /home/sqdata/db2cdc/db2cdc.cab
//*

SQDF901I Configuration file : /home/sqdata/db2cdc/db2cdc.cab
SQDF902I Status : MOUNTED,STARTED
SQDF903I Configuration key : cab_9A4A9EB10ECCB679
SQDF904I Allocated entries : 31
SQDF905I Used entries : 3
SQDF906I Active Database : DB9G
SQDF907I Start Log Point : 0x0
SQDF908I Last Log Point : 0x1aff2621f
SQDF940I Last Log Timestamp :
SQDF987I Last Commit Time : 2013-01-07 21:03:12.617536 (1eb45bf000000000)
SQDF981I Safe Restart Point : 0x1aff2569b
SQDF986I Safe Remine Point : 0x1aff1d000
SQDF910I Active User :
SQDF913I Fix Flags : RETRY
SQDF914I Retry Interval : 30
SQDF919I Active Flags : CDCSTORE,RAW LOG
SQDF915I Active store name : /home/sqdata/db2cdc/db2cdc_store.cab
SQDF916I Active store id : cab_964960E67A564AA3
SQDF920I Entry : # 0
SQDF930I Key : SQDATA.DEPT
SQDF923I Active Flags : ACTIVE
SQDF928I Last Log Point : 0x1af474000
SQDF950I session # insert : 0
SQDF951I session # delete : 0
SQDF952I session # update : 0
SQDF960I cumul # of insert : 0
SQDF961I cumul # of delete : 0
SQDF962I cumul # of update : 27
SQDF925I Active Datastore : cdc:///db2cdc/DB2TODB2
SQDF920I Entry : # 1
SQDF930I Key : SQDATA.EMP
SQDF923I Active Flags : ACTIVE
SQDF928I Last Log Point : 0x1aff1f000
SQDF950I session # insert : 544
SQDF951I session # delete : 544
SQDF952I session # update : 35811
SQDF960I cumul # of insert : 544
SQDF961I cumul # of delete : 544
SQDF962I cumul # of update : 35919
SQDF925I Active Datastore : cdc:///db2cdc/DB2TODB2
SQDF920I Entry : # 2
SQDF930I Key : cdc:///db2cdc/DB2TODB2
SQDF842I Is connected : Yes
SQDF932I Ack Log Point : 0x1aff2309e
SQDF843I Last Connection : 2013-01-07 21:03:33
SQDF844I Last Disconnection : 2013-01-04 19:34:35
SQDF987I Last Commit Time : 2013-01-07 21:03:12.482800 (cabc970f621f0000)
SQDF953I session # records : 24936
SQDF954I session # txns : 1086
SQDF963I cumul # records : 25026
SQDF964I cumul # txns : 1098
SQDF794I Storage Usage
SQDF795I Used : 5 MB
SQDF796I Free : 59 MB
SQDF797I Unallocated : 0 MB
SQDF798I Memory Cache : 8 MB
SQDC017I sqdconf(pid=0x30) terminated successfully

Notes:

1. An Entry exists for each source and target specified in the configuration.

2. The Last Log Point for each individual entry of the configuration indicates the LSN of the commit point of the most recent transaction that impacted this table. The rest of the statistics are fairly self-explanatory.

3. In the Storage Usage section: Used is the amount of the Transient Storage Pool currently in use; Free is the amount of currently unused Storage Pool; Unallocated is the portion of the currently defined Storage Pool that has not yet been allocated; Memory Cache is the amount of allocated MEMORY currently in use.
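Because every line of the display carries a message number and a label, the output is easy to post-process in a script. The following is a minimal sketch; the `SQDFnnnI Label : value` layout is inferred from the sample output above and is not a documented parsing interface:

```python
# Parse "SQDFnnnI Label : value" lines from an sqdconf display listing
# into a dict. Illustrative only; the layout is inferred from the sample
# output above and is not a documented parsing interface.
import re

LINE = re.compile(r"^(SQD[A-Z]?\d+[A-Z])\s+(.+?)\s*:\s*(.*)$")

def parse_display(listing: str) -> dict:
    """Map each display label (e.g. 'Status') to its value string."""
    stats = {}
    for line in listing.splitlines():
        m = LINE.match(line)
        if m:
            _msgno, label, value = m.groups()
            stats[label] = value
    return stats

sample = """SQDF901I Configuration file : /home/sqdata/db2cdc/db2cdc.cab
SQDF902I Status : MOUNTED,STARTED
SQDF908I Last Log Point : 0x1aff2621f"""

stats = parse_display(sample)
assert stats["Status"] == "MOUNTED,STARTED"
assert int(stats["Last Log Point"], 16) == 0x1aff2621f
```

A script like this can, for example, compare the Last Log Point between runs to confirm that the capture is moving forward, which is one of the checks the Health Checker described later performs.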


Displaying Storage Agent Statistics

The Storage Agent maintains statistics about the logs that have been mined as well as Transient Storage Pool utilization. These statistics can help to determine if the Storage Agent is sized correctly. These statistics are also accessed with the display action of SQDCONF and the Storage Agent configuration .cab file:

//*----------------------------------------
//*- DISPLAY CONFIG FILE
//*----------------------------------------
//DISPLAY  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//STOREOUT DD SYSOUT=*
//SQDPARMS DD *
 display /home/sqdata/db2cdc_store.cab --stats --sysout=STOREOUT
//*

SQDF801I Configuration name : cdcstore
SQDF802I Configuration key : cab_964960E67A564AA3
SQDF850I Session Statistics -
SQDF851I Txn Max Record : 44
SQDF852I Txn Max Size : 42386124
SQDF853I Txn Max Log Range : 8207
SQDF855I Max In-flight Txns : 0
SQDF856I # Txns : 608
SQDF857I # Effective Txns : 104
SQDF861I # Commit Records : 82
SQDF858I # Rollbacked Txns : 21
SQDF859I # Data Records : 2775
SQDF860I # Orphan Data Records : 0
SQDF862I # Rollbacked Records : 924
SQDF863I # Compensated Records : 0
SQDF866I # Orphan Txns : 0
SQDF867I # Mapped Blocks : 0
SQDF868I # Accessed Blocks : 2
SQDF869I # Logical Blocks : 0
SQDF870I Life Statistics -
SQDF871I Max Txn Record : 44
SQDF872I Max Txn Size : 42386124
SQDF873I Max Txn Log Range : 24582
SQDF875I Max In-flight Txns : 0
SQDF876I # Txns : 76576
SQDF877I # Effective Txns : 27701
SQDF881I # Commit Records : 22154
SQDF878I # Rollbacked Txns : 5546
SQDF879I # Data Records : 752586
SQDF880I # Orphan Data Records : 0
SQDF882I # Rollbacked Records : 243513
SQDF883I # Compensated Records : 0
SQDF886I # Orphan Txns : 0
SQDF887I # Mapped Blocks : 48
SQDF888I # Accessed Blocks : 2006
SQDF889I # Logical Blocks : 48
SQDC017I sqdconf(pid=0x99) terminated successfully

Note:

The following fields give an indication of the storage need for the current workload.


· Txn Max Record: This indicates the maximum number of records contained in any given transaction. Here the biggest transaction had 44 records.

· Txn Max Size: This is the maximum size of the payload associated with any given transaction. Here the total amount of data carried by the biggest transaction we've seen was a little more than 40MB.

· Txn Max Log Range: This indicates the largest difference in LSN from start to the end of a transaction.
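These maxima translate directly into Transient Storage Pool demand. As a back-of-the-envelope check (our own illustrative arithmetic, not a product sizing formula), the largest transaction in the sample statistics would span:

```python
# How many 8 MB storage-pool blocks would the largest observed
# transaction occupy? Illustrative arithmetic, not a product formula.
BLOCK_BYTES = 8 * 1024 * 1024   # pool files are carved into 8MB blocks

max_txn_size = 42_386_124       # Txn Max Size from the statistics above

blocks_needed = -(-max_txn_size // BLOCK_BYTES)  # ceiling division
print(blocks_needed)            # → 6
```

So a pool sized below six blocks could never hold that single transaction, which is exactly the situation the DEADLOCK status described in the next section reports.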


Interpreting Capture/Storage Status

The status of the Capture/Storage Agents can be found in the second line of the output from both the SQDconf and sqdmon display commands, and provides information about the current operational status as well as an indication of the state of the Transient Storage Pool.

SQDF901I Configuration file : /home/sqdata+/<type>cdc_store.cab
SQDF902I Status : MOUNTED,STARTED

Possible values include combinations of the following:

Status       Console   Description

MOUNTED                On-line and Ready

NOTMOUNTED             Off-line, No active process (on z/OS, no active Job or Task)

STARTED                Capture and Publishing are active

PAUSED                 Capture Paused and waiting on a command; Publishing continues until all transient data is consumed

STOPPED                Capture and Publishing are suspended

STALLED                One or more target Engines not connected or not responding

FULL                   Transient Storage Pool Full. Normal, but high latency likely due to target side performance. Capture will stop reading logs long enough for space to be freed.

DEADLOCK     Y         CRITICAL CONDITION: Storage Pool TOO SMALL, cannot hold the existing Unit of Work (UOW); the Storage Pool MUST be expanded before Capture and Publishing can continue, which can be done while the Capture is running.

Notes:

1. Each line of the display is prefixed with a Message Number corresponding to the information on that line. In the case of Status, message number SQDF902I is displayed.

2. Certain messages, particularly those that require intervention, will also be directed to the Operator Console on z/OS and the system log on Linux and AIX, including the DEADLOCK condition:

SQDF207E DEADLOCK CONDITION DETECTED FOR <type>cdc_store.cab

3. The Deadlock state requires intervention. See the section Size the Transient Storage Pool, and adjust the Number of Blocks or Files allocated to the Store Pool, or add another mount point/directory where additional files can be allocated, by modifying the CDCSTORE configuration file as described in Modify z/OS Transient Storage Pool.
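A monitoring script can key off the SQDF902I line using the status values in the table above. The following minimal sketch is an assumption based on the sample output, not a documented interface; the severity labels are our own:

```python
# Classify the SQDF902I status line from an sqdconf/sqdmon display.
# Status values follow the table above; the parsing and severity
# labels are illustrative, not part of any product interface.
def status_severity(status_line: str) -> str:
    flags = {f.strip() for f in status_line.split(":", 1)[1].split(",")}
    if "DEADLOCK" in flags:
        return "critical"   # storage pool too small, must be expanded
    if "FULL" in flags or "STALLED" in flags:
        return "warning"    # latency building, check target engines
    if "MOUNTED" in flags and "STARTED" in flags:
        return "ok"
    return "attention"      # stopped, paused, or not mounted

assert status_severity("SQDF902I Status : MOUNTED,STARTED") == "ok"
assert status_severity("SQDF902I Status : MOUNTED,STARTED,FULL") == "warning"
```

The supplied Health Checker (described below) performs comparable checks in REXX; a sketch like this is only useful if you are rolling your own monitoring outside of z/OS.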


Modifying z/OS Transient Storage Pool

The parameters controlling the Storage Pool can be modified dynamically, without stopping the Storage Agent, using the SQDconf utility. Like the initial configuration of the Storage Agent, sequences of SQDconf commands to modify the storage agent can/should be stored in parameter files and referenced by the SQDPARMS DD. See SQDconf Utility for a full explanation of each command, their respective parameters and the utility's operational considerations.

Syntax

add | modify <cab_file_name>
  --data-path=<directory_name>
  --number-of-blocks=<blocks_per_file>
  --number-of-logfiles=<number_of_files>

Keyword and Parameter Descriptions

<cab_file_name> - This is where the Storage Agent configuration file is stored. There is only one CAB file per Storage Agent. In our example: /home/sqdata/db2cdc_store.cab

<directory_name> - The zFS directory(ies) previously created for the transient storage files. In our example the original directory was: /home/sqdata/data

<blocks_per_file> - The number of 8MB blocks that will be allocated for each File defined for transient CDC storage. In our example we started with the default of 32.

<number_of_files> - The number of files that may be allocated in the <directory_name> for transient CDC storage. In our example we started with the default of 8.
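Taken together, these parameters bound the pool's total capacity: files x blocks-per-file x 8MB. A quick illustrative calculation using the defaults described above (note that this is an upper bound; as the Notes under Example 1 explain, raising blocks-per-file affects only newly allocated files):

```python
# Transient Storage Pool capacity = number of files x blocks per file x 8 MB.
# Illustrative arithmetic only; raising --number-of-blocks affects only
# newly allocated files, so this is an upper bound after a modification.
BLOCK_MB = 8

def pool_capacity_mb(number_of_logfiles: int, blocks_per_file: int) -> int:
    return number_of_logfiles * blocks_per_file * BLOCK_MB

print(pool_capacity_mb(8, 32))    # defaults: 8 files x 32 blocks = 2048 MB
print(pool_capacity_mb(16, 40))   # upper bound after Example 1's values
```

Comparing this capacity against the Txn Max Size statistics shown earlier is a quick sanity check before a large batch workload.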

Example 1

Expand the CDCStore by raising the number of files that may be allocated in the existing <directory_name>

Execute the SQDconf modify command with syntax similar to the JCL SQDCONDS included in the distribution:

//*-------------------------------------------
//* STEP 1: MODIFY A CDCSTORE CAB FILE
//*-------------------------------------------
//MODIFY   EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
 modify /home/sqdata/db2cdc_store.cab
 --data-path=/home/sqdata/data
 --number-of-blocks=40
 --number-of-logfiles=16
//*
//*-------------------------------------------
//* STEP 2: DISPLAY THE CDCSTORE CAB FILE
//*-------------------------------------------
//DISPLAY  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
 display /home/sqdata/db2cdc_store.cab --details
/*

Notes:

1. Modifying the value of --number-of-blocks=<blocks_per_file> will only affect new files allocated.


2. Changes to the value of --number-of-logfiles=<number_of_files> take effect immediately.

3. Storage Pool Directories added using --data-path=<directory_name> will be used only after all --number-of-logfiles=<number_of_files> have been created and filled.

4. No files or Directories, once allocated and used, will be freed or released by the Storage Agent while it is running.


Stopping the Db2/z Capture Agent

Capture Agents may be Stopped simply to apply changes to the configuration file as described earlier, or Unmounted, which will completely terminate the capture.

Example

Stop the agent and then, using the SQDCONF Unmount command, terminate the Capture Agent, which performs a complete shutdown of the address space. JCL similar to the following is included in the distribution and can be edited to conform to the operating environment and then used to execute the SQDCONF Configuration Manager.

//*----------------------------------------
//*- STOP CAPTURE AGENT
//*----------------------------------------
//STOP     EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
 stop /home/sqdata/db2cdc/db2cdc.cab
//*----------------------------------------
//*- UNMOUNT CAPTURE AGENT
//*----------------------------------------
//UNMOUNT  EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
 unmount /home/sqdata/db2cdc/db2cdc.cab
//*


Health Checker

While there are many ways to monitor the operation of the z/OS Capture/Publishers, an optional Health Checker task is included that can periodically confirm operational status and provide notification of various conditions.

The Health Checker consists of two parts, a REXX EXEC and a Started Task or Job that executes the REXX EXEC. A sample REXX EXEC and JCL are provided in CNTL library members HCHKREXX and HCHKDB2 included in the distribution. While it references Db2 and a db2cdct.cab file, it can point to the configuration of any Db2 or zLogC Publisher configuration. Only the "comments" from HCHKREXX are included below, followed by the HCHKDB2 JCL. Precisely recommends that you discuss the configuration of the Health Checker with Precisely (https://www.precisely.com/support) before implementation.

The REXX EXEC checks the following items:

/*REXX*/
/*----------------------------------------------------------------*/
/* Capture/Publisher Health Checker REXX Exec                     */
/*----------------------------------------------------------------*/
/* This REXX Exec is provided as a sample that you can customize  */
/* to meet your specific requirements. If you happen to make any  */
/* nice adjustments, please share them with us.                   */
/*                                                                */
/* Invocation:                                                    */
/*   EX '<rexx_library>' <last_commit_lag> <safe_restart_lag>     */
/*                                                                */
/* Where:                                                         */
/*   <rexx_library>     the name of library that contains this exec */
/*   <last_commit_lag>  last commit lag threshold in minutes      */
/*                      default is 30                             */
/*   <safe_restart_lag> safe restart lag threshold in minutes     */
/*                      default is 30                             */
/*                                                                */
/* Function:                                                      */
/*   Check Publisher Agent (DB2 Capture / zLOGC Publisher)        */
/*   1) Verify that the Last Logpoint is moving forward           */
/*   2) Verify that the capture is not in FULL / DEADLOCK status  */
/*   3) Calculate the capture lag against the current time        */
/*   4) Verify that multiple targets are at same commit point     */
/*   5) Check engine safe restart lag - zLOGC publishers only     */
/*                                                                */
/* Return Codes:                                                  */
/*   0  - Everything checks out ok                                */
/*   8  - Logpoint not moving, capture stopped/full/deadlocked,   */
/*        Lag minutes exceed lag limit, targets not in sync       */
/*   10 - One or more engines exceed the specified lag time       */
/*   12 - Capture exceeds the specified lag time                  */
/*   20 - sqdconf invocation error - see sqdlog dd for more info  */
/*----------------------------------------------------------------*/

//HCHKDB2 JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*-------------------------------------------------------------------
//* Execute Health Checker for Db2 Capture Agent
//*-------------------------------------------------------------------
//* DDNAMEs of Interest:
//*   SQDPARMS - Input to sqdconf display - enter cab file here
//*   SAVELRSN - Dataset used to Store Db2 LRSN - See #1 Below
//*   SYSOUT   - Dataset used for SQDCONF - See #2 Below
//*
//* Return Codes:
//*   0 - Agent Running as Expected
//*   8 - Health Checker Found Issues
//*
//* Notes:
//* 1) Preallocate a file for DD SAVELRSN to store the last LRSN
//*    Attributes: RECFM=FB, LRECL=80
//*
//* 2) Preallocate a file for DD SYSOUT for the SQDCONF Display
//*    Attributes: RECFM=FBA, LRECL=133
//*
//* 3) Change the SET parms below for:
//*    LOADLIB  - the name of the SQDATA load library
//*    HCHECKER - fully qualified name of the Health Checker Exec
//*    LASTCMIT - last commit time latency threshold in minutes
//*-------------------------------------------------------------------
//*
// EXPORT SYMLIST=(HCHECKER,LASTCMIT)
//*
// SET LOADLIB=SQDATA.V400.LOADLIB
// SET HCHECKER=SQDATA.CNTL(HCHKREXX)
// SET LASTCMIT=30
//*
//*-------------------------------------------------------------------
//JOBLIB   DD DISP=SHR,DSN=&LOADLIB
//DB2HCHK  EXEC PGM=IKJEFT01,REGION=64M,DYNAMNBR=99
//SYSPRINT DD *
//SAVELRSN DD DISP=SHR,DSN=USER.SAVELRSN
//SYSOUT   DD DISP=SHR,DSN=USER.CONFOUT
//*SQDLOG8 DD DUMMY
//SQDLOG   DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//*----------------------------------------------------------------
//* SQDCONF Display Output - Feeds into HCHKREXX Exec
//*----------------------------------------------------------------
//SQDPARMS DD *,SYMBOLS=JCLONLY
 display /home/sqdata/db2cdct/db2cdct.cab
//*
//*----------------------------------------------------------------
//* Invoke Health Checker Exec
//*----------------------------------------------------------------
//SYSTSIN DD *,SYMBOLS=JCLONLY
 EX '&HCHECKER' '&LASTCMIT'


Operating Scenarios

This section covers several operational scenarios likely to be encountered both before and after the initial Capture has been placed into operation, including Initial Loads and Target Refresh, changes in the scope of data capture, and additional use or processing of the captured data by downstream Engines.

One factor to consider when contemplating or implementing changes in an operational configuration is the implementation sequence. In particular, processes that will consume captured data must be tested, installed and operational before initiating capture in a production environment. This is critical because the volume of captured data can overwhelm transient storage if the processes that will consume the captured data are not enabled in a timely fashion.

While the examples in this section will generally proceed from capture of changed data to the population of the target by an Engine, it is essential to fully understand the expected results before configuring the data capture, for example:

· A new column added to a target table is populated from an existing table by an existing engine; while the Engine would be changed to accommodate the new column, NO CHANGES would be required in the existing capture configuration.

· A new table, maintained by new transactions, that will be the source of data for a new data warehouse target table will require configuration changes from one end to the other.

The examples below include some common scenarios encountered during and after an initial implementation has proven successful. Review the examples in sequence when thinking about how to implement your own new scenario:

· Initial Target Load and Refresh

· Capture New Db2/z Data

· Send Existing Data to New Target

· Filter Captured Data

· Adding Uncataloged Tables

· Db2/z Straight Replication

· Db2/z Active/Active Replication


Initial Target Load and Refresh

One additional activity often overlooked when discussing Change Data Capture is the initial load of the Target datastores and the methods employed to achieve full synchronization of the source and target. Modifications to Source datastores, overlooked requirements, business rule changes affecting filters or transformations, or even operational issues may also surface the need to Refresh all or a subset of the CDC/Apply targets.

Various methods that support Initial Load and Refresh should be considered based on all applicable factors including performance, ease of configuration and operational impact:

1. Connect CDC SQData Dynamic Capture Based Refresh provides the ability to refresh an entire table in parallel with CDC replication. This allows for tables of any size to be introduced to the CDC data replication flow.

2. A special Connect CDC SQData Unload engine that reads the source datastore locally and writes records to be loaded by a database utility, or one that reads the source datastore remotely and writes directly to another target like Kafka. When the source and target datastores are not identical, Precisely recommends that a special version of the already tested Engine script be used for the initial load of the target datastore. This approach has the additional benefit of providing a mechanism for "refreshing" target datastores if for some reason an "out of synchronization" situation occurs because of an operational problem or a business rule change affecting filters or transformations. Contact Precisely (https://www.precisely.com/support) to discuss both the benefits and techniques for implementing, and perhaps more importantly maintaining, a load/refresh Engine solution.

3. Native database unload/reload utilities may be available to unload the source datastore and load the target datastore; they are, however, generally restricted to source and target datastores of the same type (RDBMS, IMS, etc).

4. Third party remote disk mirroring, often the only practical solution when large scale disaster type replication systems are being implemented.

Notes:

1. The method selected for the initial load of the target datastore must also consider concurrent source database activity. The source capture and target apply process must ensure that source and target synchronization is achieved, often with a "catch-up" phase during which Connect CDC SQData will perform compensation.

2. Precisely recommends contacting support at https://www.precisely.com/support for assistance planning for initial loads.

Dynamic Capture Based Refresh

Introduced in V4.1, Dynamic Refresh works automatically when using a Replicator Engine or the Apply Engines, and it does so concurrently with CDC processing, enabling both initial load and target refresh. The Dynamic Refresh generates a series of Inserts that will replace the original target content. Generally, when a relational target is to be refreshed in this manner, a separate procedure would be followed to drop the target table(s) content. The procedure should, where practical, include making a backup of that content prior to its replacement. Kafka targets do not require the elimination of existing topic history, but consideration should be given to the impact on downstream consumers. Contact Precisely (https://www.precisely.com/support) for assistance.


The following sections provide the detailed instructions for configuring and executing the Dynamic Refresh.

Note: see Confirm/Install Replication Related APARS for the required Db2 maintenance level.

Refresh Preparation

Preparing for Dynamic Refresh requires both some one-time and some source-table-specific considerations. Dynamic refresh will utilize different linear dataset(s) to minimize the impact on the flow of existing Change Data captured while the refresh is in process. While the directory where the capture .cab file is located will be used by default, it must have enough free space to contain the largest refresh slice. For this reason a secondary mount point may be needed that can be adjusted to accommodate subsequent initial load / refresh requests.

One time activities:

1. Creation of the Connect CDC SQData refresh control table (SQDATA.REFRESH_REQUEST_LOG) on the source system. Job CRDREFR in the Connect CDC SQData CNTL library can be used to create this control table.

2. Examine the Db2 statistics for the table(s) that will be the source for the Dynamic Refresh. Determine the number of rows to be extracted at one time (the refresh slice size) for the largest table. Multiply the slice size by the average length or size of the rows in that table to estimate how much space is required for a single slice. Decide if the default location (the directory where the capture .cab file is located) will be adequate. If not, prepare a second transient area following the instructions used to Create zFS Transient Data Filesystem for the existing Db2/z Log Reader Capture Agent. This section assumes the directory created is: /home/sqdata/db2cdc/refreshdata

3. If required, register the new transient area in the Capture Configuration .cab files using the sqdconf utility. Using JCL similar to sample member SQDCONDC included in the distribution, modify the configuration to include the new transient area (Note: this is not presently a supported function of the ISPF panels):

Syntax

//*-------------------------------------------
//* Add a Db2 Refresh Transient area
//*-------------------------------------------
//CREFAREA EXEC PGM=SQDCONF
//SYSOUT DD SYSOUT=*
//SQDPARMS DD *
modify /home/sqdata/db2cdc/db2cdc.cab --work-storage=/home/sqdata/db2cdc/refreshdata
/*

Keyword and Parameter Descriptions

--work-storage - The path and directory created for the Dynamic Refresh transient data.
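The space estimate described in step 2 of the one-time activities is simple arithmetic. The sketch below illustrates it; the row count and average row length used are hypothetical values, not taken from any real catalog statistics:

```python
# Rough sizing arithmetic for a Dynamic Refresh slice (illustrative only;
# obtain real values from the Db2 catalog statistics for your table).
def slice_space_bytes(slice_rows: int, avg_row_length: int) -> int:
    """Estimate the transient space one refresh slice may occupy."""
    return slice_rows * avg_row_length

# Example: 10,000-row slices of a table averaging 200 bytes per row
needed = slice_space_bytes(10_000, 200)
print(f"{needed / 1024 / 1024:.1f} MB per slice")
```

Compare the result for the largest table against the free space available in the directory containing the capture .cab file (or the secondary --work-storage area).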

Individual Table Refresh activities:

1. The space required for a single slice of each table should be compared to the size of the existing transient storage area to confirm it will fit, or to determine whether a larger Refresh transient area is required.

2. Consideration should be given to the existing source application workload to determine an optimal time to schedule the initial load or refresh.

3. Determine if the table is published to multiple downstream Engines and target datastores. The load/refresh can be disabled as needed for individual Engine subscriptions.

Once these tasks are complete, all that remains is the specification of the refresh conditions and execution of the Dynamic Refresh.

Refresh Execution

Instruct the Capture to refresh the target with data from the table(s) using the new sqdconf refresh command, specifying the size of the refresh slice with optional key ranges. Using JCL similar to sample member SQDCONDC included in the distribution, request the Dynamic Refresh (this is not presently a supported function of the ISPF panels):

Syntax

//*-------------------------------------------
//* Request a Refresh from a Db2 source table
//*-------------------------------------------
//REFRESH EXEC PGM=SQDCONF
//SYSOUT DD SYSOUT=*
//SQDPARMS DD *
refresh /home/sqdata/db2cdc/db2cdc.cab
 --schema=<name> --table=<name> | --key=<name>
 --block-size=<number_of_rows>
 [--from=<(key_value1, key_valuex)> | --from-included=<(key_value1, key_valuex)>]
 [--to=<(key_value1, key_valuex)> | --to-included=<(key_value1, key_valuex)>]
 [with-cs]
/*

Keyword and Parameter Descriptions

--schema=<name> - Schema name, owner, or qualifier of a table. Different databases use different semantics, but a table is usually uniquely identified as S.T where S is referenced here as schema. This parameter cannot be specified with --key.

--table=<name> - A qualified table name in the form of schema.name that identifies the source. This may be used in place of two parameters, --schema and --table. Both cannot be specified.

--key=<name> - Same as --table.

--block-size=<number_of_rows> - The number of rows to extract at a time (refresh slice size).


[--from=<(key_value1, key_valuex)> | --from-included=<(key_value1, key_valuex)>] - Optional starting point of refresh; --from starts on the row after keys are matched | --from-included starts from the row with matching keys.

[--to=<(key_value1, key_valuex)> | --to-included=<(key_value1, key_valuex)>] - Optional ending point of refresh; --to ends on the row before keys are matched | --to-included ends with the row that matches keys.

[with-cs | with-ur | with-rr | with-rs] - CS is the default when this optional parm is not specified. There are four Db2/z locking isolation levels that provide control over the number and duration of read (share) locks in a unit-of-work. The objective of the Dynamic Refresh is to eliminate any need to consider locking implications by sizing slices based on the other parameters above. The default, CS, minimizes locking while ensuring that uncommitted data is not read. Precisely recommends using caution and consulting with Precisely Support, your DBAs, and Source Application SMEs before inquiring about other options. UR, for example, does less locking but allows reading uncommitted data, which can clearly cause issues. Advanced users may determine these optional parameter values assist in fine tuning their refresh while maintaining data integrity.
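Conceptually, --block-size together with the optional key-range bounds partitions the selected rows into fixed-size slices. The sketch below illustrates that idea only (it is not the product's internal algorithm); it uses inclusive bounds, as --from-included/--to-included would, and the key values are hypothetical:

```python
# Conceptual sketch of key-range slicing: select the keys within the
# (inclusive) bounds, then emit them in blocks of at most block_size rows.
def plan_slices(keys, block_size, start=None, end_inclusive=None):
    """Yield lists of keys, each at most block_size long, within bounds."""
    selected = [k for k in keys
                if (start is None or k >= start)
                and (end_inclusive is None or k <= end_inclusive)]
    for i in range(0, len(selected), block_size):
        yield selected[i:i + block_size]

rows = list(range(1, 11))                     # keys 1..10
slices = list(plan_slices(rows, 4, start=3))  # refresh from key 3 onward
print(slices)  # [[3, 4, 5, 6], [7, 8, 9, 10]]
```

Each yielded slice corresponds to one unit of refresh work published downstream, which is why slice size drives both transient storage use and lock duration.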

Example 1

Refresh table SQDATA.EMP using 10,000 row slices to all replication engines defined in the Capture CAB file db2cdc.cab, after first preparing the Capture to accommodate Dynamic Refresh requests using a second Transient Storage area, using JCL similar to this:

//*----------------------------------------------
//* Request Refresh of a target Db2 Table
//*----------------------------------------------
//REFRESH EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SQDPARMS DD *
refresh --key=SQDATA.EMP --block-size=10000
/*

Cancel Table Refresh

A refresh that has been started can be canceled before it has completed; however, one must consider that all published slices will have already been processed downstream.

Syntax

//*-------------------------------------------
//* Cancel a Refresh from a Db2 source table
//*-------------------------------------------
//CANCEL EXEC PGM=SQDCONF
//SYSOUT DD SYSOUT=*
//SQDPARMS DD *
refresh /home/sqdata/db2cdc/db2cdc.cab
 --schema=<name> --table=<name> | --key=<name>
 --cancel
/*

Keyword and Parameter Descriptions

--schema=<name> - Schema name, owner, or qualifier of a table. Different databases use different semantics, but a table is usually uniquely identified as S.T where S is referenced here as schema. This parameter cannot be specified with --key.


--table=<name> - A qualified table name in the form of schema.name that identifies the source. This may be used in place of two parameters, --schema and --table. Both cannot be specified.

--key=<name> - Same as --table.

Disable Table Refresh

As noted above, all engines configured to receive CDC data for a table will receive the published unit-of-work. To prevent a specific apply engine from receiving a table refresh unit-of-work, sqdconf can be used to block or un-block the refresh from that engine.

Syntax

//*-------------------------------------------
//* Block Table Refresh from an Apply Engine
//*-------------------------------------------
//BLOCK EXEC PGM=SQDCONF
//SYSOUT DD SYSOUT=*
//SQDPARMS DD *
modify <cab_file_name> [--block-refresh | --allow-refresh --target=<engine_name>]

Keyword and Parameter Descriptions

<cab_file_name> - Must be specified and must match the name specified in the previous create command.

[--block-refresh | --allow-refresh --target=<engine_name>] - Db2 only. Block or un-block the data from a capture based refresh from being published to a specific target subscription (Engine).

Example 1

Block the SQDATA.EMP table refresh from being published to DB2TOORA using JCL similar to this:

//*-----------------------------------------------
//*- Block Table Refresh from an Apply Engine
//*-----------------------------------------------
//BLOCK EXEC PGM=SQDCONF
//SYSOUT DD SYSOUT=*
//SQDPARMS DD *
modify /home/sqdata/db2cdc/db2cdc.cab
 --block-refresh --target=DB2TOORA
/*

Example 2

Un-Block refreshes published to DB2TOORA using JCL similar to this:

//*-----------------------------------------------
//*- Un-Block Table Refresh from an Apply Engine
//*-----------------------------------------------
//UNBLOCK EXEC PGM=SQDCONF
//SYSOUT DD SYSOUT=*
//SQDPARMS DD *
modify /home/sqdata/db2cdc/db2cdc.cab
 --allow-refresh --target=DB2TOORA
/*


Monitoring Refresh

Once a Refresh request has completed, the Apply Engine will display messages indicating the start and completion time of the refresh as well as a count of the rows refreshed for the table.

While the Refresh is being processed, progress can be displayed by a query of the SQDATA.REFRESH_REQUEST_LOG table or through a request to the capture/publisher:

Syntax

//*-----------------------------------------------
//*- Display Table Refresh progress
//*-----------------------------------------------
//DISPLAY EXEC PGM=SQDCONF
//SYSOUT DD SYSOUT=*
//SQDPARMS DD *
display <cab_file_name> --refresh

Keyword and Parameter Descriptions

<cab_file_name> - Must be specified and must match the name specified in the previous create command.

--refresh - Displays the number of rows and slices published.

Db2 Unload Engines

There are two basic types of unload engines:

The first type is a special version of the Apply Engine that reads the source datastore locally and writes records that can be loaded to the same or a different type of target database, using a database load utility. Often this engine can use unmodified versions of the "mapping" PROCS used by the Apply Engine; only the target datastore type is modified, to output comma separated records to one or more individual files that are then used as input to the chosen database load utility. The engine will often have to be parameterized to allow specification of the particular source descriptions (tables, segments, records, etc). The parameter driven engine is then run as many times as needed to generate files for each of the specified descriptions.

The second type of unload engine typically runs on the same platform as the Apply Engine. Instead of connecting to a remote Publisher, it connects remotely to the source database, reads the source from top to bottom and then, using either the same mapping PROCS as the Apply Engine or a simple REPLICATE script, writes directly to the Target datastore, be it a traditional RDBMS, Kafka or HDFS.

Contact Precisely support to discuss both the benefits and techniques for implementing, and perhaps more importantly maintaining, a load/refresh Engine solution.


Capture New Db2/z Data

Whether initiating Change Data Capture for the first time, expanding the original selection of data, or including a new source of data from the implementation of a new application, the steps are very similar. The impact on new or existing Capture and Apply processes can be determined once the source of the data is known: precisely what data is required from the Source, whether business rules require filters or data transformations, and finally where the Target of the captured data will reside.

While this example assumes that an existing Capture and Apply configuration is being modified, it also applies to an entirely new implementation.

Example

The SQDATA.dept table has been added to an existing Db2 Database and will be a new Source for an existing Engine that will apply the captured changed data to a new Target table.

In order to capture changes made to a Db2 table, the source table must be ALTERED to allow for change data capture. This action is performed using the SQL ALTER TABLE statement as follows:

ALTER TABLE SQDATA.dept DATA CAPTURE CHANGES;
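When several tables are being enabled at once, the same ALTER statement can be generated for each. A minimal sketch, assuming a hypothetical list of table names:

```python
# Generate the DATA CAPTURE CHANGES alteration for each table to be
# captured. The table list is illustrative; substitute your own names.
tables = ["SQDATA.dept", "SQDATA.EMP"]

statements = [f"ALTER TABLE {t} DATA CAPTURE CHANGES;" for t in tables]
for stmt in statements:
    print(stmt)
```

The generated statements can then be executed through SPUFI or any other SQL interface with the necessary authority.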

Next, the Capture configuration must be updated to include the new table. The sample member SQDCONDC, used earlier, includes the following step for adding a table to the capture that will be published to the existing target Engine.

//*----------------------------------------
//*- ADD TABLES TO CONFIG FILE
//*----------------------------------------
//ADDTBL EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SQDPARMS DD *
add /home/sqdata/db2cdc/db2cdc.cab --key=dept --datastore=cdc:////DB2TODB2 --active
//*

Finally, determine how the new data will affect the existing Engine and modify the Engine script accordingly. See the Engine Reference for all the options available through Engine Script commands and functions.

Note:

Whenever a new source is to be captured for the first time, consideration must be given to the existing state of the source datastore when capture is first initiated. The most common situation is that the source already contains data that would have qualified to be captured and applied to the target, if the CDC and Apply process had already been in place.

Depending on the type of source and target datastore, the following solutions can ensure source and target are in sync when Change Data Capture is implemented:

1. While utilities may be available to unload the source datastore and load the target datastore, they will generally be restricted to the same type (RDBMS, IMS, etc) of source and target datastore.

2. Those utilities generally also require the source and target datastores to have identical structure (columns, fields, etc). Precisely recommends the use of utility programs if those two constraints are acceptable.

3. If, however, the source and target are not identical, Precisely recommends that a special version of the already tested Engine script be used for the initial load of the target datastore. This approach has the additional benefit of providing a mechanism for "refreshing" target datastores if for some reason an "out of synchronization" situation occurs because of an operational problem or a business rule change affecting filters or transformations. Contact Precisely support to discuss both the benefits and techniques for implementing, and perhaps more importantly maintaining, a load/refresh Engine solution.


Send Existing Db2/z Data to New Target

Our example began with the addition of a new Db2 table to the data capture process and publishing the captured data to an existing Engine. Often, however, a change results from recognition that a new downstream process or application can benefit from the ability to capture changes to existing data. Whether the scenario is event processing or some form of straight replication, the implementation process is essentially the same.

Our example continues with the addition of a new Engine (DB2TOORA), that will populate a new Oracle table with columns corresponding to a subset of the columns from the SQDATA.dept Db2 source table.

While no changes are required to the Storage agent to support the new Engine, the Capture agent will require configuration changes and the new subscribing Engine must be added.

Add Subscription to Log Reader Capture

One or more output Datastores, also referred to as Subscriptions, may be specified for each Source table in the configuration file. Once the initial configuration file has been created, Datastores are added or removed using the SQDCONF modify command.

The following example adds a subscription for a second Target Engine, DB2TOORA, for changes to the SQDATA.dept table:

//*-----------------------------------------------
//*- ADD SECOND TARGET FOR A TABLE TO CONFIG FILE
//*-----------------------------------------------
//ADDTBL2 EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SQDPARMS DD *
modify /home/sqdata/db2cdc/db2cdc.cab
 --key=<sqd.dept> --datastore=cdc:////DB2TOORA --active
//*

Note, the configuration file changes must be followed by an apply in order to have the capture agent recognize the updated configuration file.

Add New Engine

Adding a new Engine on a new target platform begins with the installation of the Connect CDC SQData product. See the Installation Section for the applicable platform Installation instructions. Once the product is installed, the steps to configure the new Engine parallel those required to configure the existing Engine. In our example the new Engine is named DB2TOORA. The Engine script will specify only simple mapping of columns in a DDL description to columns in the target relational table. See the Engine Reference for all the options available through Engine Script commands and functions.

While not as simple as straight replication due to potentially different names for corresponding columns, the most important aspect of the script will be the DATASTORE specification for the source CDC records:

Syntax

DATASTORE cdc://<host><:port>/<capture_agent_alias>/<engine_agent_alias>

Keyword and Parameter Descriptions

<host> Location of the Capture Controller Daemon.


<:port> Optional, required only if a non-standard port is specified by the service parameter in the Controller Daemon configuration.

<capture_agent_alias> Must match the alias specified in the Controller Daemon agents configuration file. The engine will connect to the Controller Daemon on the specified host and request to be connected to that agent.

<engine_agent_alias> Must match the alias provided in the "sqdconf add" command "--datastore" parameter when the Publisher is configured to support the new target Engine.

Example:

DATASTORE cdc://<host><:port>/db2cdc/DB2TOORA
          OF UTSCDC
          AS CDCIN
          DESCRIBED BY <schema_name>.<table_name>
;
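The URL portion of the DATASTORE statement is assembled from the parts described above. A small helper sketch (the function name and the host zoshost are hypothetical; an empty host corresponds to the cdc:///... form):

```python
# Hypothetical helper assembling the documented URL form:
#   cdc://<host><:port>/<capture_agent_alias>/<engine_agent_alias>
def cdc_url(capture_alias, engine_alias, host="", port=None):
    hostpart = host + (f":{port}" if port else "")
    return f"cdc://{hostpart}/{capture_alias}/{engine_alias}"

print(cdc_url("db2cdc", "DB2TOORA"))
# cdc:///db2cdc/DB2TOORA  (empty host: local Controller Daemon)
print(cdc_url("db2cdc", "DB2TOORA", "zoshost", 2626))
# cdc://zoshost:2626/db2cdc/DB2TOORA
```

The port is only needed when the Controller Daemon's service parameter specifies a non-standard value, as noted in the parameter descriptions above.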

Add Engine Controller Daemon

The Controller Daemon, SQDaemon, plays a key role in the authentication process by being the first point of contact for any agent requesting communication with any other agent in both single and multi-platform environments. See the Secure Communications Guide for more details regarding the Controller Daemon's role in security. Controller Daemons are accessed via a TCP/IP interface to an assigned Port on the platform where they are running. Their symbolic name is often synonymous with a specific Host (platform or Environment) on which they are running.

The primary difference between an Engine Controller Daemon and a Daemon on Capture platforms is that the Authorized Key File of the Engine Controller Daemon need only contain the Public keys of SQDmon utility users on both the local and remote platforms.


Setup and configuration of the Engine Controller Daemon, SQDaemon, includes:

#  Task                                                                 Utility

Configure Engine Daemon
1  Reserve TCP/IP port for Engine Daemon                                N/A
2  Generate Engine public / private keys                                SQDutil
3  Add the public key generated in step #2 to the Authorized Key List
   files on the Source system and target system                         N/A
4  Create the Access Control List Configuration                         N/A
5  Create the Agent Configuration File                                  N/A
6  Prepare the Controller Daemon JCL, shell or batch script             N/A

Engine Environment Preparation Complete

See the Setup Capture Controller Daemon section for a detailed description of these activities and the example below.


Example

A sample sqdagent.cfg file for a Controller Daemon containing the Engine DB2TOORA follows. Changes are not known to the daemon until the configuration file is reloaded, using the SQDMON Utility, or the sqdaemon process is stopped and started.

acl=<SQDATA_VAR_DIR>/daemon/cfg/acl.cfg
authorized_keys=<SQDATA_VAR_DIR>/daemon/nacl_auth_keys
identity=<SQDATA_VAR_DIR>/id_nacl
message_file=../logs/daemon.log
service=2626

[DB2TOORA]
type=engine
program=SQDATA
args=DB2TOORA.prc
working_directory=<SQDATA_VAR_DIR>
message=<SQDATA_VAR_DIR>
stderr_file=<SQDATA_VAR_DIR>/DB2TOORA.rpt
stdout_file=<SQDATA_VAR_DIR>/DB2TOORA.rpt
auto_start=yes

Update Capture Controller Daemon

In our example, the Data Capture Agent and its controlling structures existed prior to the addition of this new Engine. Consequently, the only modification required to the Capture Controller Daemon is the addition of the new Engine's Public Key to the Authorized Key File.

Changes are not known to the daemon until the configuration file is reloaded or the daemon process is stopped and started.

On the z/OS platform this is usually done by an administrator using ISPF.

Apply Configuration File changes

Changes made to the Capture/Publisher Agent Configuration (.cab) file are not effective until they are applied. If changes were effective immediately, or automatically upon start, then those changes could not be made until the production Agent is stopped for the migration. Staging the changes eliminates the risk of premature activation should the agent be stopped for unrelated production issues.

Forcing a distinct and explicit apply step ensures that such changes can be planned and prepared in advance, without putting the current production replication in jeopardy. This allows Agent maintenance to be staged outside of the production implementation window.

In order to apply changes, the agent must first be recycled using the ISPF panel or JCL containing the steps below, also found in sample member RECYCLE included in the distribution. This operation in effect pauses the agent task and permits the additions and/or modifications to the configuration to be applied. Once the agent is restarted, the updated configuration will become active.

Example: Implement a new set of Db2 tables previously staged for this weekend:

//*----------------------------------------
//*- STOP THE AGENT
//*----------------------------------------
//STOP EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SQDPARMS DD *
stop /home/sqdata/db2cdc/db2cdc.cab
//*
//*----------------------------------------
//*- APPLY UPDATED CONFIGURATION FILE
//*----------------------------------------
//APPLY EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SQDPARMS DD *
apply /home/sqdata/db2cdc/db2cdc.cab
//*
//*----------------------------------------
//*- START THE AGENT
//*----------------------------------------
//START EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SQDPARMS DD *
start /home/sqdata/db2cdc/db2cdc.cab
//*


Filter Captured Data

The introduction of Data Capture necessarily adds some overhead to the processing of the originating transaction by the source database / file manager. For that reason it is customary to perform as little additional processing of the data during the actual capture operation as possible. Filtering data from the capture process is therefore broken into two types:

· Capture Side Filters

· Engine Filters

Capture Side Filters

In addition to controlling which tables are captured, it is also possible to add and remove items to be excluded from capture based on other parameters including: User, Program, Transaction, Correlation ID, Plan, etc. The basic syntax varies slightly based on the current state of the configuration and can be specified one or more times per command line.

Syntax

Create state: --exclude-<item>=<variable>

Modify state: --auto-exclude-plan=NO | --add-excluded-<item>=<variable> | --remove-excluded-<item>=<variable>

Where <item> and its <variable> can be any one of the following optional keywords:

Item            Variable        Description

user            User ID         The user-id associated with the transaction or program making Db2 data changes.

correlation-id  Correlation ID  The Correlation ID of the program / transaction making Db2 data changes.

plan            Plan Name       If non-blank, this is the Plan making Db2 data changes.

Note, the wild-card character "*" can be used as part of the variable associated with any of the exclusion keywords, for example --correlation-id=db2tran*
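The effect of a trailing wild-card such as db2tran* can be illustrated with Python's fnmatchcase as a stand-in matcher (an assumption for illustration only; the product's exact matching rules, such as case sensitivity, are not described here):

```python
# Illustrative stand-in for the exclusion wild-card: any Correlation ID
# beginning with "db2tran" matches the pattern db2tran*.
from fnmatch import fnmatchcase

pattern = "db2tran*"   # as in --correlation-id=db2tran*
ids = ["db2tran01", "db2tranAB", "db2batch", "DB2TRAN01"]
excluded = [i for i in ids if fnmatchcase(i, pattern)]
print(excluded)  # ['db2tran01', 'db2tranAB']
```

Here db2batch does not match because the prefix differs; whether a differently-cased value like DB2TRAN01 would match in the product is not stated in this document.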

Example

Disable the automatic exclusion of the Connect CDC SQData Default plan (SQDV4000) to enable capture of updates that are normally excluded to prevent cyclic updates in an Active/Active Replication configuration.

//*-----------------------------------------------
//*- ADD USER EXCLUSION TO CONFIG FILE
//*-----------------------------------------------
//ADDEXCL EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SQDPARMS DD *
modify /home/sqdata/db2cdc/db2cdc.cab --auto-exclude-plan=NO


Notes: As with all modifications made to a Capture configuration file, the following steps must be followed to implement the change:

1. Capture should be paused to allow subscribed Engines to consume all previously captured data

2. The Capture must then be Stopped

3. The changes must be Applied to the capture .CAB file to have the capture agent recognize the updated configuration

4. The Capture must be Started
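Steps 2 through 4 above correspond to the sqdconf commands shown in the RECYCLE sample JCL. A conceptual sketch of that sequence (the drain/pause in step 1 is operational and site-specific, so it is not shown):

```python
# The staged-change sequence expressed as the sqdconf commands issued by
# the sample JCL steps; a conceptual sketch of ordering, not an operations
# script.
CAB = "/home/sqdata/db2cdc/db2cdc.cab"

steps = [
    ["sqdconf", "stop", CAB],    # 2. stop the Capture
    ["sqdconf", "apply", CAB],   # 3. apply the staged .cab changes
    ["sqdconf", "start", CAB],   # 4. restart with the new configuration
]
for cmd in steps:
    print(" ".join(cmd))
```

The ordering matters: applying before stopping is rejected by design, which is what allows changes to be staged safely ahead of the implementation window.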

Engine Filters

Engine filters provide both record and field level evaluation of data content, including cross-reference to external files not part of the original data capture. See the Engine Reference for all the options available through Engine script commands and functions.


Adding Uncataloged Tables

Occasionally it may be convenient to add a table to the Capture Agent Configuration (.cab) file before it is present in the Db2 Catalog. This scenario might occur when a new Db2 Business application is being implemented. The tables will have been created in the test environment for the new application and, because a downstream application will require that data to be captured, the capture configuration in the test environment has also been updated so the downstream application can also be tested.

While the normal capture configuration maintenance process supports adding tables, marking them Inactive and then subsequently changing them to Active, they must be in the Db2 Catalog even when they are marked Inactive. Because the scale of implementation is large, it would be desirable to create, or more likely update, the production capture configuration in advance. That can be accomplished by adding the table with a Pending status (--pending) rather than Inactive, since Inactive would cause the capture to immediately fail because the production catalog does not yet contain those tables.

The SQDCONF Utility will be used to add the Pending tables and later to modify the configuration when it is time for them to be activated.

Syntax

sqdconf add <cab_file_name>
    --schema=<name> --table=<name> | --key=<name>
    --datastore=<url>
    [--pending]

Keyword and Parameter Descriptions

<cab_file_name> - Must be specified and must match the name specified in a previous create command.

--schema=<name> Schema name, owner, or qualifier of a table. Different databases use different semantics, but a table is usually uniquely identified as S.T where S is referenced here as schema. This parameter cannot be specified with --key.

--table=<name> A qualified table name in the form of schema.name that identifies the source. This may be used in place of two parameters, --schema and --table. Both cannot be specified. In our example the first table is SQDATA.EMP.

--key=<name> Same as --table

--datastore=<url> | -d <url> - While most references to the term datastore describe physical entities, a datastore URL represents a target subscription and takes the form: cdc://[localhost]/<agent_alias>/<target_name> where:

o <host_name> - Optional, typically specified as either cdc:///... or cdc://[localhost | localhost IP]... since we are describing the server side of the socket connection.

o <agent_alias> The alias name assigned to the Capture/Publisher agent; must match the <agent_name> defined in the Controller Daemon sqdagents.cfg configuration file. Engine scripts will use the <agent_alias> when specifying the source "URL" and also on sqdmon <agent_name> display commands.

o <target_name> The subscriber name presented by a requesting target agent. Also referred to as the Engine name, the name provided here does not need to match the one specified in a local Controller Daemon sqdagents.cfg configuration file. In our example we have used DB2TODB2.


[--pending] This parameter allows a table to be added to the configuration before it exists in the database catalog.

Note, like any table being added, if there are multiple target datastores (Engines), an add command must be processed for each individual table/datastore pair.
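The add-per-pair rule means the number of add commands is the product of the tables and the subscribed datastores. A small generator sketch (the table and Engine names are illustrative only):

```python
# One "sqdconf add" per table/datastore pair, as the note above states.
# Table and Engine names here are hypothetical examples.
from itertools import product

tables = ["SQDATA.EMP", "SQDATA.DEPT"]
engines = ["DB2TODB2", "DB2TOORA"]

commands = [
    f"sqdconf add <cab_file_name> --key={t} --datastore=cdc:////{e} --pending"
    for t, e in product(tables, engines)
]
print(len(commands))  # 4: every table paired with every subscribed engine
```

Generating the cross-product this way makes it harder to miss a pair when many tables are staged at once.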

Finally, when it is time to begin to capture the new tables, use the modify command to change the status of the tables from Pending to Active (or Inactive).

Syntax

sqdconf modify <cab_file_name>
    --schema=<name> --table=<name> | --key=<name>
    [--active | --inactive]

Keyword and Parameter Descriptions

<cab_file_name> - Must be specified and must match the name specified in a previous create command.

--schema=<name> Schema name, owner, or qualifier of a table. Different databases use different semantics, but a table is usually uniquely identified as S.T where S is referenced here as schema. This parameter cannot be specified with --key.

--table=<name> A qualified table name in the form of schema.name that identifies the source. This may be used in place of two parameters, --schema and --table. Both cannot be specified. In our example the first table is SQDATA.EMP.

--key=<name> Same as --table

[--active | --inactive] This parameter marks the added source active for capture when the change is applied and the agent is (re)started. If this parameter is not specified the default is --inactive.

Notes:

1. The sqdconf modify command only needs to be run once for each Pending table, regardless of the number of datastores (Engines) subscribed.

2. Like all modifications to the Capture Agent Configuration (.cab) file, the changes must be activated; see Apply Configuration File changes.


Db2/z Straight Replication

Simple replication is often used when a read-only version of an existing datastore is needed or a remote hot backup is desired. The Apply Engine provides an easy to implement simple replication solution requiring very few instructions. It will also automatically detect out-of-sync conditions that have occurred due to issues outside of SQData's control and perform compensation by converting updates to inserts (if the record does not exist in the target), converting inserts to updates (if the record already exists in the target), and dropping deletes if the record does not exist in the target.

Note, this section assumes two things: First, that the environment on the target platform fully supports the type of datastore being replicated. Second, that a Connect CDC SQData Change Data Capture solution for the source datastore type has been selected, configured and tested.

Target Implementation Checklist

This checklist covers all the tasks required to prepare the target operating environment and configure Straight Db2/z Replication. It assumes two things: First, that a Db2 subsystem exists on the target platform. Second, that the Db2/z Log Reader Capture has been configured and tested on the source platform; see that Implementation Checklist.

#  Task                                                                 Sample JCL  z/OS Control Center

Prepare Environment
1  Perform the base product installation on the Target System           Various
2  Modify Procedure Lib (PROCLIB) Members                               N/A
3  Verify that the Connect CDC SQData product has been Linked           SQDLINK
4  Bind the Db2 Package                                                 BINDSQD/
5  Verify APF Authorization of LOADLIB                                  N/A
6  Create ZFS directories if running a Controller Daemon on Target
   System                                                               ALLOCZDR
7  Identify/Authorize Operating User(s) and Started Task(s)             N/A
Environment Preparation Complete

Engine Configuration Tasks
1  Collect DDL for tables to be replicated and Create Target tables     N/A
2  Generate Public/Private Keys for Engine, Update Auth Key File on
   Source System                                                        NACLKEYS
3  Create Straight Replication Script                                   SQDPARSE    *
4  Prepare Engine JCL                                                   SQDATA      *
Engine Configuration Complete

Verification Tasks
1  Start the Db2/z Capture agent and SQDAEMON on Source System          Various     *
2  Start the Engine on Target System                                    SQDATA
3  Apply changes to the source tables using SPUFI or other means.       N/A
4  Verify that changes were captured and processed by Engine            N/A
Verification Complete

The following sections focus on the Engine Configuration and communication with the Source platform Controller Daemon. Detailed descriptions of the other steps required to prepare the environment for Connect CDC SQData operation are described in previous sections.


Operating Scenarios

Create Target Tables

Using SPUFI or other means and DDL or DCLGENs from the source system, create duplicates of the Source tables on the Target system.

Generate Engine Public / Private Keys

As previously mentioned, Engines usually run on a different platform than the Data Capture Agent. The Controller Daemon on the Capture platform manages secure communication between Engines and their Capture/Publisher Agents. Therefore a Public / Private Key pair must be generated for the Engine on the platform where the Engine is run. The SQDutil program must be used to generate the necessary keys and must be run under the user-id that will be used by the Engine.

Syntax

$ sqdutil keygen

On z/OS, JCL similar to the sample member NACLKEYS included in the distribution executes the SQDutil program using the keygen command and generates the key pair.

The Public key must then be provided to the administrator of the Capture platform so that it can be added to the nacl.auth.keys file used by the Controller Daemon.

Note: there should also be a Controller Daemon on the platform running Engines, to enable command and control features and the browser-based Control Center. While it is critical to use unique key pairs when communicating between platforms, it is common to use the same key pair for components running together on the same platform. Consequently, the key pair used by an Engine may be the same pair used by its Controller Daemon.

Create Straight Replication Script

A Straight Replication script requires DESCRIPTIONS for each Source and Target DATASTORE as well as either a straight mapping procedure for each table or use of the REPLICATE Command, as shown in the sample script below. In the example, Descriptions are in-line rather than supplied through external files. In the sample script, a CDCzLog type Publisher uses TCP/IP to publish and transport data to the target Apply Engine. The Main Select section contains only references to the Source and Target Datastore aliases and the REPLICATE Command. Individual mapping procedures are not required in this case.

The sample script, DB2TODB2, is listed below. Note how the same table descriptions are used for both Source and Target environments, how the Schema, which may have been present in the descriptions, is overridden, and how a single function, REPLICATE, performs all the work. See the Engine Reference for more details regarding the use of the REPLICATE Command.

If you choose to exercise this script, which is based on IBM's Db2 IVP tables, it will be necessary to create two copies of the DEPT and EMP tables as referenced in the script on the target system. Once that is complete, the script can be parsed and exercised.

---------------------------------------------------------------
-- DB2 REPLICATION SCRIPT FOR ENGINE: DB2TODB2
---------------------------------------------------------------
-- SUBSTITUTION VARS USED IN THIS SCRIPT:
-- %(ENGINE) - ENGINE / REPORT NAME
-- %(HOST)   - HOST OF Capture
-- %(PORT)   - TCP/IP PORT of SQDAEMON
-- %(PUBN)   - Capture/Publisher alias in sqdagents.cfg
-- %(SSID)   - DB2 Subsystem ID
---------------------------------------------------------------
-- CHANGE LOG:
-- 2018/01/01: INITIAL RELEASE


-------------------------------------------------------------
JOBNAME %(ENGINE);
RDBMS NATIVEDB2 %(SSID);
OPTIONS CDCOP('I','U','D');
---------------------------------------------------------------
-- DATA DEFINITION SECTION
---------------------------------------------------------------
---------------------------
-- Source Data Descriptions
---------------------------
BEGIN GROUP SOURCE_DDL;
DESCRIPTION DB2SQL DD:DB2DDL(EMP)  AS S_EMP;
DESCRIPTION DB2SQL DD:DB2DDL(DEPT) AS S_DEPT;
END GROUP;
---------------------------
-- Target Data Descriptions
---------------------------
-- None required for Straight Replication
---------------------------
-- Source Datastore(s)
---------------------------
DATASTORE cdc://%(HOST):%(PORT)/%(PUBN)/%(ENGINE)
          OF UTSCDC
          AS CDCIN
          RECONNECT
          DESCRIBED BY GROUP SOURCE_DDL
;
---------------------------
-- Target Datastore(s)
---------------------------
DATASTORE RDBMS
          OF RELATIONAL
          AS TARGET
          FORCE QUALIFIER TGT
          DESCRIBED BY GROUP SOURCE_DDL FOR CHANGE
;
---------------------------
-- Variables
---------------------------
-- None required for Straight Replication
---------------------------
-- Procedure Section
---------------------------
-- None required for Straight Replication
-------------------------------------------------
-- Main Section - Script Execution Entry Point
-------------------------------------------------
PROCESS INTO TARGET
SELECT
{
--  OUTMSG(0,STRING(' TABLE=',CDC_TBNAME(CDCIN)
--         ,' CHGOP=',CDCOP(CDCIN)
--         ,' TIME=' ,CDCTSTMP(CDCIN)))
-- Source and Target Datastores must have the same table names


    REPLICATE(TARGET)
}
FROM CDCIN;

Prepare z/OS Engine JCL

The parsed replication Engine script in this example, DB2TODB2, will run on a z/OS platform. In this case JCL similar to sample member SQDATA included in the distribution can be edited to conform to the operating environment, including the necessary Public / Private key files.

//sqdata   JOB 1,MSGLEVEL=(1,1),MSGCLASS=H,NOTIFY=&SYSUID
//*
//*--------------------------------------------------------------------
//* Execute the SQDATA Engine under DB2
//*--------------------------------------------------------------------
//* Note: 1) This Job may require specification of the Public/Private
//*          Key pair in order to connect to a Capture/Publisher
//*          running on another platform
//*
//*       2) To run the SQDATA Engine as a started task, refer to
//*          member SQDAMAST
//*
//* Required DDNAME:
//*   SQDFILE DD - File that contains the Parsed Engine Script
//*
//*********************************************************************
//*
//JOBLIB   DD DISP=SHR,DSN=SQDATA.V400.LOADLIB
//         DD DISP=SHR,DSN=DSNB10.SDSNLOAD
//*
//sqdataD  EXEC PGM=SQDATA,REGION=0M
//SQDPUBL  DD DISP=SHR,DSN=SQDATA.NACL.PUBLIC
//SQDPKEY  DD DISP=SHR,DSN=SQDATA.NACL.PRIVATE
//SYSPRINT DD SYSOUT=*
//SQDLOG   DD SYSOUT=*
//*SQDLOG8 DD DUMMY
//CEEDUMP  DD SYSOUT=*
//*
//*---- PARSED ENGINE SCRIPT FILE ----
//SQDFILE  DD DISP=SHR,DSN=SQDATA.V400.SQDOBJ(DB2TODB2)

Note: The Controller Daemon (both Capture and Engine) uses a Public / Private key mechanism to ensure component communications are valid and secure. While it is critical to use unique key pairs when communicating between platforms, it is common to use the same key pair for components running together on the same platform. Consequently, the key pair used by an Engine may be the same pair used by its Controller Daemon.

Verify Straight Replication

Verification begins with the Capture Agent, and the specific steps depend on the type of Capture being used. Follow the verification steps described previously for the Capture that has been implemented. Then start the Engine on the target system.

Using SPUFI or other means, perform a variety of insert, update and delete activities against source tables. Then, on the Target system, again using SPUFI or other means, verify that the content of the target tables matches the source.


Db2/z Active/Active Replication

An overview of Active/Active Replication is provided in the Change Data Capture Guide. Implementing such a configuration for Db2 is a two-step process:

1. Create a single Straight Replication Engine script that can be reused on each system.

2. Create Capture and Apply configurations on each of the systems.

add --active | --inactive --key=<TABLE_NAME> --datastore=cdc:////<engine_agent_alias> <cab_file_name>

Note: The Db2/z Log Reader Capture, by default, excludes from capture all updates made by an Apply Engine running under the same plan name used by the capture to connect to the Db2 subsystem, including the default Db2 Plan, SQDDB2D. This avoids circular replication.
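The plan-name exclusion rule above can be illustrated with a small sketch. The record format and the application plan names here are hypothetical; only the default plan SQDDB2D comes from the text.

```python
# Sketch of circular-replication avoidance: changes written under the plan the
# Apply Engine uses (default SQDDB2D, per the note above) are skipped so that
# replicated updates are not re-captured and replicated back.
APPLY_PLAN = "SQDDB2D"

def filter_circular(records, apply_plan=APPLY_PLAN):
    """Yield only records NOT produced by the local Apply Engine's plan."""
    for rec in records:
        if rec["plan"] == apply_plan:
            continue  # change originated from replication itself; drop it
        yield rec

records = [
    {"table": "HR.EMP",  "op": "I", "plan": "APPPLAN1"},  # application change
    {"table": "HR.EMP",  "op": "U", "plan": "SQDDB2D"},   # applied by replication
    {"table": "HR.DEPT", "op": "D", "plan": "APPPLAN2"},  # application change
]
captured = list(filter_circular(records))
# Only the two application-originated changes survive.
```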


Db2/z Capture Troubleshooting

This section describes some common operational issues that may be encountered while using the Db2/z Log Reader Capture agent:

· Db2/z Source Database Reorgs and Load Replace

· Db2/z Compression Dictionary Delays

· CPU Utilization on MOUNT

· Changes Not Being Captured

· Long Table Names

· Db2/z Log Buffer Delays

· Upgrading Db2/z to 10 Byte LSN

· Compensation Analysis and Elimination

· Signal Errors

· z/OS Diagnostic Dumps


Db2/z Source Database Reorgs and Load Replace

Change Data Capture places constraints on both database Reorgs and Load Replace operations. Two items in particular are important: the Db2 Compression Dictionary and Db2 Logging.

Db2 Compression Dictionary

Before the Connect CDC SQData Db2/z capture agent can read the Db2 recovery log records, Db2 must decompress the log records of any table that is stored in a compressed table space. Db2 uses the current compression dictionary for decompression. If the CDC process runs asynchronously, falls behind for some reason, or is configured to recapture older logs, the proper Compression Dictionary may be unavailable if a database Reorg or Load Replace has occurred.

In Precisely's experience, customers have already specified the KEEPDICTIONARY=YES parameter when using the Db2 REORG or LOAD utility. This parameter should be confirmed prior to implementation of the Db2/z Change Data Capture.

Alternatively, ensure that the Connect CDC SQData Db2 Capture has processed all log records for a table before performing any activity that affects the compression dictionary for that table. The following activities can affect the compression dictionaries:

1. Altering a table space to change its compression setting.

2. Using DSN1COPY to copy compressed table spaces from one subsystem to another, including from data-sharing to non-data-sharing environments.

3. Running the REORG or LOAD utility on the table space.

Db2 Logging

The Db2 Load/Replace operation clearly impacts the content of Db2 tables that are a source for Change Data Capture. In addition to the other prerequisites for Db2/z Capture, the Load/Replace operation requires RESUME with SHRLEVEL CHANGE to log change data for Connect CDC SQData to capture.


Db2/z Compression Dictionary Delays

The Db2/z Log Capture often requires access to Db2's compression dictionary. Delays can be introduced when the table space is stopped or another utility holds a claim that prevents access. Db2 will issue a C90063 reason code, echoed by the Capture, when this occurs:

[ DB2C] :undecompressed record at rba:000000019B1607BF33AD rc=8 reason=0xc90063 diag=0xc90082

The capture will, by default, retry 5 times with a 5-second wait interval before terminating. The Capture can be restarted immediately, or, if the cause is known, after the condition causing contention has been resolved.

If this occurs frequently, both the number of retries and the retry interval can be specified for the capture using the following two parameters, in addition to the others already present in your configuration:

//SQDPARMS DD *
... existing parameters

[--log-retry-count=<number of retries>]
[--log-retry-interval=<seconds>]

Keyword and Parameter Descriptions

[--log-retry-count=<number of retries>] - Db2/z only. Retry the IFI Read for undecompressed log records when the compression dictionary was unavailable to the IFI. Used to modify the default retry count of 5.

[--log-retry-interval=<seconds>] - Db2/z only. Retry the IFI Read for undecompressed log records when the compression dictionary was unavailable to the IFI. Used to modify the default retry interval of 5 seconds.
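The retry behavior these parameters control (by default, 5 retries at 5-second intervals) can be sketched as a simple loop. The read function and its return values here are illustrative stand-ins for the IFI interface, not the product's actual API.

```python
import time

def read_log_with_retry(read_fn, retry_count=5, retry_interval=5, sleep=time.sleep):
    # Retry an IFI-style log read while the compression dictionary is
    # unavailable (reason code 0xC90063), mirroring the default
    # --log-retry-count=5 / --log-retry-interval=5 behavior described above.
    # read_fn returns (reason_code, data); 0 means success.
    attempts = 0
    while True:
        reason, data = read_fn()
        if reason != 0xC90063:      # success, or a non-retryable condition
            return reason, data
        attempts += 1
        if attempts > retry_count:  # retries exhausted: give up, as the capture does
            return reason, None
        sleep(retry_interval)

# Simulated reader: the dictionary becomes available on the third call.
calls = {"n": 0}
def fake_read():
    calls["n"] += 1
    return (0xC90063, None) if calls["n"] < 3 else (0, "log-record")

reason, data = read_log_with_retry(fake_read, sleep=lambda s: None)
```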


CPU Utilization on MOUNT

High CPU utilization may be observed when the Db2/z Log Capture is initially started on z/OS, either when the JOB that executes the Db2/z Capture is run or when the Db2/z Capture Started Task is started. This occurs when many logs have to be read in order to resume Capture: either the Job/Task was not executing for a long time, or it has been instructed to perform a Point-in-time recovery, requiring logs to be re-mined from some point in the past.

This may also occur due to support for Db2 Schema Evolution introduced in V4.0. If downstream Replicator Engines are not in use, the --no-ddl-tracking option may be used to bypass the collection of schema information upon initiation of Capture.


Changes Not Being Captured

The common culprits for changed data not being captured include:

· The table has not been altered to activate the capture. See Configure Db2 Tables for Capture above for information on activating a table for capture.

· The table has not been added to the capture configuration CAB file. See Create Db2/z Capture CAB file and Capture New Db2/z Data.

· The table was added to the capture configuration CAB file and made Active, but the table name is incorrect. This is a more serious configuration mistake because, during the initialization phase of the capture, the catalog will be queried for each table marked Active. Any such table that does not exist in the catalog will prevent capture from starting. The job log will contain the following error:
SQDF048E The description of table <schema.table_name> could not be retrieved from the database

· The capture agent is not active. Even though the capture agent was MOUNTED and STARTED successfully, it may have terminated due to the unavailability of an archived log.

· The following message is an example of a Capture failure that requires looking at the Db2 system log for more details:

08.03.40 STC48240 SQDF023E Capture thread was interrupted at Log Point: 0x00DB534666F224AA
08.03.40 STC48240 SQDF023X 0000 for config file /home/sqdata/DB2CDC.cab, rc(0x45c001:IF
08.03.40 STC48240 SQDF023X I_READS_ERROR)

Db2 only returned a general IFI_READS_ERROR. The Db2 system log from the same point in time pointed out that the LSN was on an inaccessible Archive Log.

· The Capture requires both the Bind and authorization of a Db2 Package/Plan to utilize the Db2/z Log Reader. If your capture started task Job Log contains the following error, the Bind was likely performed but the Db2 Grant to authorize its use was not:

BROWSE - SYSOUT                              START - Page 1 Line 1 Cols 1-80
Command ===>                                              Scroll ===> CURSOR
****** ***************************** Top of Data *******************************
SQD0133E (26) OPEN THREAD completed with X'00000008':X'00F30034'
SQDC016i sqdconf(pid=0x18f) terminated with Return Code 8, Reason Code=0x4bc002
*********************************** Bottom of Data *****************************

· Switch to the diagnostic (dev) version of the executable code by switching to the XQDDB2C version of the Capture program and un-commenting //*SQDLOG8 DD DUMMY; see Prepare Db2/z Capture Runtime JCL, including any additional logging parameters instructed by Precisely support.
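The initialization-phase catalog check behind SQDF048E can be sketched as a simple set difference. The table names and the validation helper here are hypothetical; only the message text comes from the manual.

```python
# Sketch of the startup check described above: every table marked Active in
# the capture configuration must exist in the Db2 catalog, otherwise capture
# refuses to start and logs SQDF048E for the missing table.
def validate_active_tables(active_tables, catalog_tables):
    """Return the Active tables missing from the catalog (empty list = OK)."""
    return sorted(set(active_tables) - set(catalog_tables))

catalog = {"HR.EMP", "HR.DEPT"}
missing = validate_active_tables({"HR.EMP", "HR.DEPTT"}, catalog)  # typo in name
for tbl in missing:
    print(f"SQDF048E The description of table {tbl} "
          "could not be retrieved from the database")
# A non-empty list means the capture would fail to start.
```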


Long Table Names

Fully qualified table names in Db2 frequently contain very long schemas. In order to shorten the name when it is used to qualify columns in Apply Engine scripts, an ALIAS is often used.

In the example database, one table is named HumanResources.EmployeeDepartmentHistory, which contains the column StartDate. The ALIAS parameter can be used to shorten the table name to something more manageable using the syntax below:

DESCRIPTION DB2 ALIAS(
  HumanResources.EmployeeDepartmentHistory_StartDate  AS HR.Dept_StartDate
  HumanResources.EmployeeDepartmentHistory_EmployeeID AS HR.Dept_EmployeeID
  HumanResources.EmployeeDepartmentHistory_DeptID     AS HR.Dept_DeptID
  HumanResources.EmployeeDepartmentHistory_ShiftID    AS HR.Dept_ShiftID)

Then in a subsequent procedure statement in the script a reference to StartDate could look like this:

If HR.Dept_StartDate < V_Current_Year

Rather than:

If HumanResources.EmployeeDepartmentHistory_StartDate < V_Current_Year


Db2/z Log Buffer Delays

The Connect CDC SQData Db2/z Log Reader Capture constantly monitors the Db2 transaction log for new data. However, in some environments with low transaction activity, the Db2 log buffer may not be flushed frequently, which can delay the capture of recent data changes. Flushing the Db2 log buffer can reduce the delay of captured data. One way to accomplish that is to create a special SQDCDC.CHURNING table that the Db2/z Log Capture will then itself update, triggering log records that flush the log buffer in order to pick up all other recent log changes.

To create the SQDCDC.CHURNING table, use the following schema; the Db2 Log Reader Capture will then automatically pick up all Db2 log changes.

CREATE TABLE SQDCDC.CHURNING (COMMENT VARCHAR(128) NOT NULL);
ALTER TABLE SQDCDC.CHURNING DATA CAPTURE CHANGES;

Next, stop and restart the DB2 capture agent.
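Conceptually, the churning technique is a heartbeat: when no log record has arrived for a while, an update to SQDCDC.CHURNING forces Db2 to write (and flush) a fresh log record. The sketch below is illustrative only, not the product's internal logic; update_churning stands in for an SQL UPDATE against the table created above.

```python
# Heartbeat-style sketch of the churning technique. When the last observed log
# record is older than the idle threshold, fire the churning update so the
# log buffer is flushed and buffered changes become visible to the capture.
def churn_if_idle(last_log_record_ts, now, idle_threshold, update_churning):
    if now - last_log_record_ts >= idle_threshold:
        update_churning()   # stand-in for: UPDATE SQDCDC.CHURNING SET COMMENT = ...
        return True
    return False

fired = []
churn_if_idle(last_log_record_ts=100.0, now=130.0, idle_threshold=10.0,
              update_churning=lambda: fired.append("UPDATE SQDCDC.CHURNING"))
# 30 seconds idle >= 10-second threshold, so the churning update fires once.
```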


Upgrading Db2/z to 10 Byte LSN

Db2 Version 12 requires that your databases be upgraded to use a 10-byte Log Sequence Number (LSN). While that task falls into the domain of the Db2 Database Administrator, the Connect CDC SQData Db2/z Change Data Capture requires the following steps to be performed at the time the LSN length is changed:

1. Find the RBA that you want to start from in the Db2 MSTR address space, once the migration to NFM (new-function mode) is complete. Look for the following messages from the Db2 recovery manager in the system log that indicate the progress of Db2 through a restart process. You are looking for the RBA from the prior checkpoint. You may need the assistance of the Db2 DBA assigned to the migration or of a system operator.

DSNR001I -DBBG RESTART INITIATED
DSNR003I -DBBG RESTART...PRIOR CHECKPOINT RBA=000000000000CFB28090

2. With the capture down completely, run the following JCL to set the global LSN and target engine LSN, using the RBA value identified in step 1.

//*----------------------------------------------
//*- Modify DB2 Capture CAB File Global LSN/RBA
//*----------------------------------------------
//STOP     EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
modify --lsn=000000000000CFB28090 /home/sqdata/db2cdc/db2cdc.cab
/*
//*
//*----------------------------------------------
//*- Modify DB2 Capture CAB File Target LSN/RBA
//*----------------------------------------------
//STOP     EXEC PGM=SQDCONF
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SQDPARMS DD *
modify --lsn=000000000000CFB28090 --target=cdc:////DB2REPL1 --datastore=cdc:////DB2TODB2 /home/sqdata/db2cdc/db2cdc.cab
//*

3. In the capture startup parm, modify the safe restart point. Note that you will need to remove the --safe-restart parameter after the capture has been started and is running, because you do not want it used again accidentally following another restart of the Capture.

--apply --start --safe-restart=000000000000CFB28090 /home/sqdata/db2cdc/db2cdc.cab
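The prior-checkpoint RBA needed in step 1 can be pulled out of the DSNR003I message programmatically. A sketch, using the message text shown above:

```python
import re

# Extract the prior-checkpoint RBA from a DSNR003I message captured from the
# system log (message text as shown in step 1).
msg = "DSNR003I -DBBG RESTART...PRIOR CHECKPOINT RBA=000000000000CFB28090"
match = re.search(r"PRIOR CHECKPOINT RBA=([0-9A-F]+)", msg)
rba = match.group(1)
# A 10-byte LSN/RBA is 20 hex digits: rba -> "000000000000CFB28090"
```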


Compensation Analysis and Elimination

If your use case involves replication of identical source/target tables, then you will have generated an Apply Engine Replication script as described in the Db2/z Quickstart. If you have confirmed that primary indexes have been created on the target tables, then the apply engine can use the target database catalog to identify the primary keys. Alternatively, you may have decided to specify the target keys in the replication script, which would be required for those source tables that had no primary keys. This is often true with "audit" type tables.

In either case you are likely to see "compensation" occur on one or more of the target tables. Compensation is an optional feature of the Apply Engine which automatically detects out-of-sync conditions between source and target. This condition frequently occurs during the initial implementation of replication, either because there was no initial load of the target from the source or because the initial load was performed while the source was still subject to concurrent application database activity. In both cases there will be a period of "catch up" replication where a CDC Update or Delete for a missing row, or a CDC Insert for an existing row, was captured while the initial load or refresh was in progress.

Why do compensation you may ask?

Compensation in these cases will drop the deletes if the record does not exist in the target, convert an update to an insert, and convert an insert to an update. Without compensation, all of these situations would normally cause a SQL error when a CDC Update or Delete for a missing row is processed or a CDC Insert for an existing row is attempted. Compensation processing is optional but nearly always utilized in Apply Engines performing Replication. Compensation not only eliminates the impact of these timing-related errors during the initial implementation but also ensures that, once the initial "catch up" phase of replication is complete, the source and target are fully synchronized. During this time, out-of-sequence inserts, updates and deletes should reduce in number and eventually reach zero. Once the initial replication has "caught up" with processing occurring on the source side, compensation should no longer occur.

To clarify, when a COMPENSATION is reported in a WTO, an engine message or the engine report, the term means the following:

· Compensated Insert - A CDC Insert record was transformed by the Engine into an Update.

· Compensated Update - A CDC Update record was transformed by the Engine into an Insert.

· Compensated Delete - A CDC Delete record was dropped because there was no target row with a matching "key".
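The three rules above can be sketched against an in-memory table keyed by primary key. This is purely illustrative of the semantics, not the engine's implementation.

```python
# Sketch of the compensation rules: insert of an existing row becomes an
# update, update of a missing row becomes an insert, and delete of a missing
# row is dropped. Returns the operation actually performed (None = dropped).
def compensate(target, op, key, row):
    exists = key in target
    if op == "I":
        target[key] = row
        return "U" if exists else "I"   # Compensated Insert -> Update
    if op == "U":
        target[key] = row
        return "U" if exists else "I"   # Compensated Update -> Insert
    if op == "D":
        if not exists:
            return None                 # Compensated Delete -> dropped
        del target[key]
        return "D"

target = {1: {"name": "old"}}
assert compensate(target, "I", 1, {"name": "new"}) == "U"   # insert of existing row
assert compensate(target, "U", 2, {"name": "n2"}) == "I"    # update of missing row
assert compensate(target, "D", 99, None) is None            # delete of missing row
```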

There are three consequences to the use of compensation:

1. Write to Operator (WTO) warning messages may optionally be generated by the Apply Engine to identify their occurrence. We recommend using existing console monitoring tools to alert staff, as these messages typically indicate an unexpected but non-catastrophic event after the initial "catch up" implementation phase of replication.

2. Performing compensation requires inserts, updates and deletes to be preceded by a select using the Key of the CDC record. This additional operation adds some overhead, but not enough to be concerned about in most cases. Options provide for the elimination of the pre-insert select on tables known to have large volumes of insert activity, like an audit table.

3. Compensations that continue past the initial synchronization period indicate a problem: either errors were introduced by the initial load process, or the keys of the target do not match those defined for the source, or they do not provide for identification of a unique row.

The remainder of this section addresses the diagnosis and remediation of errors related to incorrect target keys. It is important to acknowledge that full resolution of the problem will often require repeating the initial load and "catch up" processing, but only for those targets affected.


1. Identify the engine(s) that continue to produce compensation messages and the tables being compensated, and collect statistics to identify the relative number of compensations occurring for each target table in order to prioritize the order in which to correct them.

2. Gather the DDL for the source tables from the DBA including their primary key and index specifications.

3. Using the prioritized list of target tables, examine each table one by one, comparing the primary keys defined in the source database with the keys defined in the target database and the columns listed in the script, if the KEY IS clause has been specified, and make the necessary corrections.

4. Reparse the engine script and resume processing.

5. Monitor the output of the engine to confirm compensation has ceased for the target table; if it has not, reconfirm the source and target key specifications for the table and make further corrections.

6. Determine if the nature of the table requires re-load and resynchronization. There may be tables that are periodically emptied or purged in the course of source application processing. Depending on the purpose of the target table and the method used to empty it of rows, it may be possible to simply wait for the source and target to be emptied through replication and for natural resynchronization to occur. Alternatively, it may be possible to simply drop and re-create the target table and allow replication to slowly bring the table current.

7. If the table contains data that will have to be reloaded to achieve synchronization, determine the best time and method to perform the individual table reload as previously outlined in the introductory Quick Start Approach.

If possible use the Dynamic Capture Based Refresh to reload the target table since it requires the least intervention.

If another method of refresh will be used, it will be necessary to disconnect the Apply engine that normally processes change data for the target table. This should be done immediately before the unload/reload process is initiated to ensure that concurrent changes made by applications to the table are captured but not processed until after the load refresh has been completed. Once the load has been completed, the Apply engine can be restarted.

With either method of Refresh, there will again be some expected compensation while the resynchronization occurs.

Repeat these steps with the next table in the same or another engine and continue the process for each table identified with compensation until all are eliminated.
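Step 1 above (collecting per-table compensation statistics) can be sketched as a simple scan of engine output. The message format used here is illustrative, not the engine's exact text.

```python
from collections import Counter

# Tally compensation messages per target table and rank the tables by count,
# to decide which to correct first. Assumes the table name follows "TABLE="
# at the end of each (hypothetical) message line.
def tally_compensations(log_lines):
    counts = Counter()
    for line in log_lines:
        if "COMPENSATED" in line:
            counts[line.rsplit("TABLE=", 1)[1]] += 1
    return counts.most_common()

log = [
    "COMPENSATED UPDATE TABLE=TGT.EMP",
    "COMPENSATED INSERT TABLE=TGT.EMP",
    "COMPENSATED DELETE TABLE=TGT.AUDIT",
    "APPLIED INSERT TABLE=TGT.DEPT",       # normal apply, not counted
]
ranked = tally_compensations(log)
# TGT.EMP (2 compensations) ranks ahead of TGT.AUDIT (1).
```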


Signal Errors

A Signal error indicates that an internal error has occurred within Connect CDC SQData. Signal errors are accompanied by a return code of 16. Each signal error is accompanied by a list of values that are used for diagnostic purposes by Precisely support personnel.

When you encounter a signal error, please save the runtime report from the component and contact Precisely at https://www.precisely.com/support.


z/OS Diagnostic Dumps

The diagnosis of an issue may occasionally require a system dump. While rare, if Precisely Support asks for a dump, the request will be entered at the console and take the following form:

Syntax

DUMP COMM=(<some sort of comment>)

Then reply to the outstanding WTOR (from the dump command):

R nnn,JOBNAME=(<address_space>[,<address_space>]),SDATA=(ALLNUC,CSA,GRSQ,LPA,PSA,SQA,RGN,LSQA,SWA,TRT,SUM)

Keyword and Parameter Descriptions

JOBNAME=(<address_space>[,<address_space>]) - One or more address spaces, depending on what is requested by Precisely Support, where address_space corresponds to a Job Name or Task Name.

SDATA=(ALLNUC,CSA,GRSQ,LPA,PSA,SQA,RGN,LSQA,SWA,TRT,SUM) - List of dump options; those specified here are the typical options requested by Precisely Support.

Example 1:

Request a dump of the Db2 Log Capture at the operator console for the DB2CDCP Task and the Db2 subsystem task DB2MSTR:

DUMP COMM=(DB2CDC DUMP FOR SQDATA)

Then reply to the outstanding WTOR (from dump command)

R nnn,JOBNAME=(DB2CDCP,DB2MSTR),SDATA=(ALLNUC,CSA,GRSQ,LPA,PSA,SQA,RGN,LSQA,SWA,TRT,SUM)

Upload the resulting dump at https://www.precisely.com/support or, if necessary, request temporary FTP credentials.

Note: if a dump has been required, it is likely that the Db2 Capture will have to be stopped until the issue's cause has been determined.

Example 2:

Request a dump of the zLog Publisher for IMS at the operator console for the IMSPUBP Task

DUMP COMM=(IMSPUB DUMP FOR SQDATA)

Then reply to the outstanding WTOR (from dump command)

R nnn,JOBNAME=(IMSPUBP),SDATA=(ALLNUC,CSA,GRSQ,LPA,PSA,SQA,RGN,LSQA,SWA,TRT,SUM)

Upload the resulting dump at https://www.precisely.com/support or, if necessary, request temporary FTP credentials.


Index

A
AKV 45
ALIAS 100
ALTER TABLE 32
APPLCOMPAT 16
Apply 60, 84
Apply Engine 50
Archived Logs 25
Azure Key Vault 45

B
BINDSQD 16

C
Capture Based Refresh 103
CDCSTORE 25, 64
compensated 103
compensation 103
Console Commands 54
Controller Daemon 42

D
daemon 52
data sharing environment 40
data sharing group 40
Datastores 81
Db2 Catalog 88
DB2 LOAD 96
DB2 Package 16
DB2 Plan 16
DB2 REORG 96
Db2 Version 12 102
DBRM 16
display 64
display action 62
Dump 106
Dynamic Refresh 73
DynaRefresh 73

E
Event Hub 50
EventHub 50
--exclude 86

F
Filter Tables 73

I
Initial Load 73
Initial Target Load 73

K
KEY IS 103

L
Log Reader Capture 12
LSN 56
--lsn 56
LSN/RBA 56

M
MASTER 52
modify 81

N
NACL encryption 36
NACLKEYS 22, 91
New Tables 73
New Targets 73
no-ddl-tracking 40

O
Operator (WTO) messages 103

P
primary keys 103
Private 22, 91
Public 22, 91
Public / Private key 22, 91

R
Refresh 73
refresh slice 73
reload 55
re-load 103
reorg 96
Replicator Engine 50
resynchronization 103

S
Schema Evolution 40
Signal error 105
SQDAEMON 42
sqdata_cloud.conf 45
sqdconf 69
SQDDB2C 12
SQDDDB2D 16
SQDF023E 99
SQDF048E 99
SQDZLOGC 40
start 52
statistics 64
stop 69
storage 64
Storage Agent 64
Straight replication 90

T
Target Refresh 73
TLS 38

U
Uncataloged 88
Unit-of-Work 9
unmount 69

V
V12R1M500 16

Z
zFS SQDATA Variable Directory 16
zIIP 36
zIIP processors 36


2 Blue Hill Plaza
Pearl River, NY 10965
USA

precisely.com

© 2001, 2022 SQData. All rights reserved.