
Course #: L1-626.3
IBM Part #: Z251-1686-00
December 9, 2003

Education Services

IBM DB2 Universal Database V8.1 System Administration

Student Manual


Copyright, Trademarks, Disclaimer of Warranties, and Limitation of Liability

© Copyright IBM Corporation 2002, 2003.

IBM Software Group
One Rogers Street
Cambridge, MA 02142

All rights reserved. Printed in the United States.

IBM and the IBM logo are registered trademarks of International Business Machines Corporation.

The following are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both:

Answers OnLine, AIX, APPN, AS/400, BookMaster, C-ISAM, Client SDK, Cloudscape, Connection Services, DataBlade, DataJoiner, DataPropagator, DB2, DB2 Connect, DB2 Extenders, DB2 Universal Database, Distributed Database, Distributed Relational Database Architecture, DPI, DRDA, Dynamic Scalable Architecture, Dynamic Server, Dynamic Server.2000, Dynamic Server with Advanced Decision Support Option, Dynamic Server with Extended Parallel Option, Dynamic Server with Universal Data Option, Dynamic Server with Web Integration Option, Dynamic Server, Workgroup Edition, Enterprise Storage Server, FFST/2, Foundation.2000, Illustra, Informix, Informix 4GL, Informix Extended Parallel Server, Informix Internet Foundation.2000, Informix Red Brick Decision Server, J/Foundation, MaxConnect, MVS, MVS/ESA, Net.Data, NUMA-Q, ON-Bar, OnLine Dynamic Server, OS/2, OS/2 WARP, OS/390, OS/400, PTX, QBIC, QMF, RAMAC, Red Brick Design, Red Brick Data Mine, Red Brick Decision Server, Red Brick Mine Builder, Red Brick Decisionscape, Red Brick Ready, Red Brick Systems, Relyon Red Brick, S/390, Sequent, SP, System View, Tivoli, TME, UniData, UniData and Design, Universal Data Warehouse Blueprint, Universal Database Components, Universal Web Connect, UniVerse, Virtual Table Interface, Visionary, VisualAge, Web Integration Suite, WebSphere

Microsoft, Windows, Windows NT, SQL Server, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Java, JDBC, and all Java-based trademarks are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

All other product or brand names may be trademarks of their respective companies.

The information contained in this document has not been submitted to any formal IBM test and is distributed on an “as is” basis without any warranty either express or implied. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk. The original repository material for this course has been certified as being Year 2000 compliant.

This document may not be reproduced in whole or in part without the prior written permission of IBM.

Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication, or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.



Course Description
This course provides students with the knowledge and skills they need to perform the routine tasks of a DB2 Universal Database system administrator. Through instructor presentations, students will learn about the tools and commands needed to configure and maintain instances and database objects. Through lab exercises, they will have the opportunity to practice the skills they have learned in a simulated database server environment.

Objectives
At the end of this course, you will be able to:
- Configure and maintain DB2 instances
- Manipulate databases and database objects
- Optimize placement of data
- Control user access to instances and databases
- Implement security on instances and databases
- Use DB2 activity monitoring utilities
- Use DB2 data movement and reorganization utilities
- Develop and implement a database recovery strategy
- Interpret basic information in the db2diag.log file

Prerequisites
To maximize the benefits of this course, we require that you have met the following prerequisites:
- Some experience with writing Structured Query Language scripts
- Knowledge of relational database design concepts
- Knowledge of UNIX operating system fundamentals and the vi editor
- Windows GUI navigation skills


Acknowledgments
Course Developers: Manish K. Sharma, Jagadisha Bhat, Sunminder S. Saini, Kumar Anurag
Additional Development: Gene Rebman
Technical Review Team: Harold Luse, Glen Mules, Bob Bernard
Course Production Editor: Susan Dykman

This course was developed at the DB2 Center for Competency, e-Business Solution Center, IBM India.

Further Information
To find out more about IBM education solutions and resources, please visit the IBM Education website at http://www-3.ibm.com/software/info/education.

Additional information about IBM Data Management education and certification can be found at http://www-3.ibm.com/software/data/education.html.

To obtain further information regarding IBM Informix training, please visit the IBM Informix Education Services website at http://www-3.ibm.com/software/data/informix/education.

Comments or Suggestions
Thank you for attending this training class. We strive to build the best possible courses, and we value your feedback. Help us to develop even better material by sending comments, suggestions and compliments to [email protected].

Table of Contents

Module 1: Overview of DB2 Major Components
  Objectives  1-2
  DB2 and E-Business  1-3
  Effective E-Business Model  1-4
  DB2 E-Business Components  1-6
  DB2 Product Family  1-8
  DB2 Object Architecture  1-10
  DB2 Architecture — Multiple Instances  1-11
  DB2 Architecture — Processes  1-12
  DB2 Architecture — Shared Memory  1-13
  DB2 Architecture — Configuration Files  1-14
  Summary  1-15
  Lab Exercises  1-16

Module 2: Introduction to GUI Tools
  Objectives  2-2
  What are the GUI Tools?  2-3
  List of GUI Tools  2-4
  First Steps - Overview  2-5
  First Steps: Create Sample Databases  2-6
  First Steps: Working With Tutorials  2-7
  First Steps: Quick Tour  2-8
  Information Center  2-9
  Control Center: Overview  2-10
  Control Center: Tools Menu  2-12
  Control Center: Object Menus  2-13
  Configuration Assistant: Overview  2-14
  Command Center: Overview  2-15
  Command Center: Query Results  2-16
  Task Center: Overview  2-17
  Task Center: Task Menu  2-18
  Task Center: New Task  2-19
  Task Center: Options  2-20
  Health Center  2-21
  Journal  2-22
  License Center  2-23
  Development Center: Create a Project  2-24
  Development Center: Project View  2-25
  Development Center: Create a New Routine  2-26
  Visual Explain: Scenario  2-27
  Visual Explain: Overview  2-28
  Visual Explain: Access Plan  2-29
  Summary  2-30
  Lab Exercises  2-31

Module 3: Data Placement
  Objectives  3-2
  Table Spaces  3-3
  Containers  3-4
  Extents  3-5
  Bufferpool  3-6
  SMS Table Spaces  3-7
  DMS Table Spaces  3-8
  SMS Table Spaces and Tables  3-9
  DMS Table Spaces and Tables  3-11
  SMS vs DMS  3-12
  Bufferpools and Table Spaces  3-13
  Table Spaces Defined at Database Creation  3-14
  CREATE DATABASE Example  3-15
  CREATE TABLESPACE Syntax  3-16
  CREATE TABLE SPACE Examples  3-17
  EXTENTSIZE and PREFETCHSIZE  3-18
  Authority to Create Table Space  3-19
  Listing Table Spaces  3-20
  Listing Table Spaces with Detail  3-21
  Listing Containers  3-22
  LIST TABLE SPACES Authority  3-23
  LIST TABLESPACE CONTAINERS Authority  3-24
  Altering Table Spaces  3-25
  ALTER TABLESPACE Syntax  3-26
  ALTER TABLESPACE: Example 1  3-28
  ALTER TABLESPACE: Example 2  3-29
  ALTER TABLESPACE Authorization  3-30
  Dropping Table Spaces  3-31
  DROP TABLESPACE Authority  3-32
  DMS Table Space Minimum Size  3-33
  Performance: Container Size  3-34
  RAID Devices  3-35
  Performance: SMS Table Spaces  3-36
  Performance: DMS Table Spaces  3-37
  Performance: Catalog Table Space  3-38
  Performance: System-Temporary Space  3-39
  Performance: User Table Spaces  3-40
  Summary  3-41
  Lab Exercises  3-42

Module 4: Creating an Instance
  Objectives  4-2
  Instance Components  4-3
  Requirements to Create a UNIX Instance  4-4
  The SYSADM User and Group  4-5
  Fenced User  4-6
  DAS User  4-7
  Database Administration Server (DAS)  4-8
  Creating an Instance  4-9
  The db2icrt Command in Detail  4-10
  Creating an Instance in Windows  4-11
  Drop Instance  4-12
  Why Drop the DAS?  4-13
  Starting Instances  4-14
  Stopping Instances  4-15
  Instance Configuration Using the CLP  4-16
  Instance Configuration Using the Control Center  4-17
  Update Instance Configuration  4-18
  Registry Variables  4-19
  Levels of the Registry  4-20
  View Registry Variables  4-21
  Set Registry Variables  4-22
  Instance Authorization  4-23
  Client/Server Connectivity  4-24
  Manual Configuration  4-25
  Manual Configuration Scenario  4-26
  Cataloging the Server  4-27
  Cataloging the Database  4-28
  Cataloging Authorization  4-29
  Configuration Assistant (CA)  4-30
  Discovery Parameters  4-31
  Configuration Assistant Overview  4-33
  Configuration Assistant: Add a Database  4-34
  Add Database Wizard: Set Up Connection  4-35
  Add Database Wizard: Search the Network  4-36
  Add Database Wizard: Testing the Connection  4-37
  Search: Configuration is Complete  4-38
  Summary  4-39
  Lab Exercises  4-40

Module 5: Database Tables and Views
  Objectives  5-2
  The Database Configuration File  5-3
  Managing the DB CFG File Using CLP  5-4
  Managing DB CFG Using CC  5-6
  Database Configuration Window  5-7
  Starting and Stopping Databases  5-8
  FORCE APPLICATION  5-10
  Authorization  5-11
  Schemas  5-12
  Current Schema  5-13
  CREATE SCHEMA SQL Statement  5-14
  CREATE SCHEMA Examples  5-15
  Create a Schema Using SQL Statements  5-16
  System Catalog Tables  5-17
  Querying Catalog Tables for Table Names  5-18
  Querying Catalog Tables for Table Spaces  5-19
  Querying Catalog Tables for Bufferpools  5-20
  Querying Catalog Tables for Constraints  5-21
  Large Objects  5-22
  Large Objects and Tables  5-23
  Temporary Tables  5-24
  Benefits of Global Temporary Tables  5-25
  Creating Temporary Tables  5-26
  Temporary Table Authorizations  5-28
  Views  5-29
  Classifying Views  5-31
  CREATE VIEW Examples  5-33
  Creating Views Using the Control Center  5-34
  Create View Window  5-35
  SQL Assist Window  5-36
  SQL Assist: Tables Page  5-37
  Federated System  5-38
  Federated System Objects  5-39
  Summary  5-40
  Lab Exercises  5-41

Module 6: Creating Indexes
  Objectives  6-2
  Overview  6-3
  Type-2 Indexes  6-4
  Types of Indexes  6-6
  Unique Indexes  6-7
  Index: Tables and Columns  6-8
  Clustered Indexes  6-9
  Index: PCTFREE and MINPCTUSED  6-11
  Bidirectional Indexes  6-12
  Index: Illustration  6-13
  Unique Index: Example  6-14
  Clustered Index: Example  6-15
  Bidirectional Index: Example  6-16
  Creating Indexes in the Control Center  6-17
  Design Advisor  6-19
  Design Advisor: Workload Panel  6-21
  Design Advisor: Collect Statistics  6-22
  Design Advisor: Disk Usage  6-23
  Design Advisor: Calculate  6-24
  Design Advisor: Recommendations  6-25
  Design Advisor: Unused Objects  6-26
  Design Advisor: Schedule  6-27
  Design Advisor: Summary  6-28
  Design Advisor in the CLP: db2advis  6-29
  db2advis: Implementation  6-31
  Summary  6-32
  Lab Exercises  6-33

Module 7: Using Constraints
  Objectives  7-2
  Keys: Overview  7-3
  Primary Key  7-4
  Primary Key: Table Creation Time  7-5
  Creating Tables in the Control Center  7-7
  Adding a Primary Key to an Existing Table: SQL  7-13
  Adding a Primary Key: Alter Table Window  7-14
  Foreign Key  7-16
  Foreign Key: Table Creation Time  7-18
  Foreign Key: Control Center  7-20
  Unique Key  7-22
  Specifying a Unique Key at Table Creation Time  7-23
  Changing a Unique Key: ALTER TABLE  7-25
  Check Constraint  7-26
  Check Constraint: Table Creation Time  7-27
  Create Table Window: Adding Check Constraints  7-28
  Summary  7-30
  Lab Exercises  7-31

Module 8: Data Movement Utilities
  Objectives  8-2
  Exporting Data  8-3
  Data Movement Utilities  8-4
  Data Movement Utilities: EXPORT  8-5
  EXPORT Command  8-6
  EXPORT Option: Filename and File Type  8-7
  EXPORT Option: LOBS TO  8-8
  EXPORT Option: LOBFILE  8-9
  EXPORT Option: File Type Modifier (1)  8-10
  EXPORT Option: METHOD  8-12
  EXPORT Option: MESSAGES  8-13
  EXPORT Option: Select Statement  8-14
  EXPORT Option: HIERARCHY  8-15
  EXPORT Option: HIERARCHY  8-16
  EXPORT: Authorization  8-17
  IMPORT Command  8-18
  IMPORT Command: File Options and LOB  8-19
  IMPORT Command: File Type Modifiers  8-20
  IMPORT Command: METHOD  8-21
  IMPORT Command: Count and Message Options  8-22
  IMPORT: Authorization  8-23
  Data Movement Utilities: LOAD  8-24
  LOAD Phases  8-25
  LOAD: Load Phase  8-26
  LOAD: Build Phase  8-27
  LOAD: Delete Phase  8-28
  LOAD: Target and Exception Tables  8-29
  LOAD Command: Syntax  8-30
  LOAD Command: Filename and Location  8-31
  LOAD Command: Filetype and Modifier  8-32
  LOAD Command: METHOD  8-34
  LOAD Command: Counter Options  8-36
  LOAD Command: Mode  8-37
  LOAD Command: Exception Table  8-39
  LOAD Command: Statistics  8-40
  Load Command: Parallelism  8-41
  LOAD Command: Copy Options  8-42
  LOAD Command: Indexing  8-44
  LOAD: Performance Modifiers  8-45
  Unsuccessful Load Operation  8-46
  Post Load: Table Space State  8-47
  Post Load: Removing Pending States  8-48
  LOAD: Additional Features in DB2 8.1  8-49
  LOAD: Authorization  8-51
  IMPORT Versus LOAD  8-52
  Data Movement Utilities: db2move  8-53
  Data Movement Utilities: db2move  8-54
  db2move Command: Syntax  8-55
  Data Movement Utilities: db2look  8-57
  db2look Command: Syntax  8-58
  Summary  8-60
  Lab Exercise  8-61

Module 9: Data Maintenance Utilities
  Objectives  9-2
  Data Maintenance: The Need  9-3
  REORGCHK: Analyzing Physical Data Organization  9-4
  REORGCHK Command: Syntax and Examples  9-5
  REORGCHK: Table Statistics  9-6
  REORGCHK: Index Statistics  9-8
  REORGCHK: Interpreting of Index Information  9-10
  Reorganization  9-11
  REORG Command: Syntax and Example  9-12
  REORG: Using Index  9-14
  Generating Statistics  9-15
  RUNSTATS Command: Syntax and Examples  9-16
  RUNSTATS: Distribution Statistics  9-17
  REBIND  9-18
  REBIND and db2rbind: Syntax  9-19
  Summary  9-20
  Lab Exercise  9-21

Module 10: Locking and Concurrency
  Objectives  10-2
  Why Are Locks Needed?  10-3
  Types of Locks  10-4
  Locking Type Compatibility  10-6
  Lock Conversion  10-7
  Lock Escalation  10-8
  Explicit and Implicit Locking  10-10
  Possible Problems When Data Is Shared  10-11
  Isolation Levels  10-12
  Deadlocks  10-14
  Summary  10-15
  Lab Exercise  10-16

Module 11: Backup and Recovery
  Objectives  11-2
  Types of Recovery  11-3
  Logging Concepts  11-4
  Transaction Log File Usage  11-5
  Circular Logging  11-6
  Dual Logging  11-8
  Archival Logging/Log Retain  11-9
  Backup Utility  11-10
  Backup Files  11-11
  Restoring a Backup Image  11-12
  The Database Roll Forward  11-13
  Redirected Restore  11-14
  Restore Considerations  11-15
  Table Space Recovery  11-16
  Table Space State: Offline  11-17
  Table Space Offline State (cont.)  11-18
  Backup and Restore Summary  11-19
  Recovery History File  11-20
  Dropped Table Recovery  11-21
  Summary  11-22
  Lab Exercises  11-23

Module 12: Performance Monitoring
  Objectives  12-2
  Performance Tuning Overview  12-3
  Common Database Server Parameters  12-4
  Common Database Parameters  12-6
  AUTOCONFIGURE  12-10
  Query Parallelism  12-12
  Monitoring Performance  12-13
  Snapshot Monitor  12-14
  Snapshot Monitor: Switches  12-15
  Retrieving Snapshot Information  12-16
  Snapshot Output: Locks  12-17
  Snapshot Monitoring: Authority  12-20
  Event Monitoring  12-21
  Event Monitor  12-22
  Event Monitor: Data Collection  12-23
  Event Monitor Interface  12-24
  Event Monitoring: Steps and Authorization  12-26
  Creating an Event Monitor: Type of Event  12-27
  Creating an Event Monitor: Event Condition  12-28
  Creating an Event Monitor: File Path  12-29
  Creating an Event Monitor: Maxfiles  12-30
  Creating an Event Monitor: Maxfilesize  12-31
  Creating an Event Monitor: Buffersize  12-32
  Creating an Event Monitor: Append/Replace  12-33
  Creating an Event Monitor: Manual Start/Autostart  12-34
  Creating an Event Monitor: Example  12-35
  Event Monitor: Start/Flush  12-36
  Event Monitor: Reading Output  12-37
  Event Monitor: db2eva  12-38
  Event Monitor: db2eva (cont.)  12-39
  Event Monitor: db2eva (cont.)  12-40
  Health Monitor and Health Center  12-41
  Health Indicator Settings  12-43
  Summary  12-44
  Lab Exercises  12-45

Module 13: Query Optimization
  Objectives  13-2
  Query Optimization  13-3
  SQL Compiler Overview  13-4
  What is Explain?  13-6
  Query Explain Tables  13-7
  Capturing Explain Data  13-8
  Capturing Explain Data: Explain Statement  13-9
  Capture Explain Data: Special Register  13-11
  Prep-Bind Overview  13-13
  DB2 SQL Explain Tools  13-15
  View Explain Data: db2expln  13-16
  View Explain Data: Visual Explain  13-18
  Visual Explain: Graphical Output  13-19
  Visual Explain: Component Details  13-20
  Visual Explain: Uses  13-21
  Explain: Setting the Optimization Level  13-22
  Minimize Client-Server Communication  13-23
  Summary  13-24
  Lab Exercises  13-25

Module 14: Problem Determination
  Objectives  14-2
  To Solve a Problem  14-3
  Describe the Problem  14-4
  Problem Types  14-5
  Required Diagnostic Data  14-6
  Required Data Checklist  14-7
  Additional Data Available  14-8
  The db2diag.log File  14-9
  Suggestion  14-10
  DIAGLEVEL 4 Considerations  14-11
  Location of the db2diag.log File  14-12
  db2diag.log Information Example  14-13
  db2diag.log Example: Starting the Database  14-14
  db2diag.log: Finding Error Information  14-15
  Looking Up Internal Codes  14-16
  Byte Reversal  14-17
  Looking Up Internal Return Codes  14-18
  db2diag.log Example: Container Error  14-19
  db2diag.log Example: Sharing Violation  14-20
  db2diag.log Example: Manual Cleanup  14-21
  db2diag.log Example: Database Connection  14-22
  Which Container?  14-23
  Error Explanation  14-24
  Error Reasons  14-25
  Error Resolution  14-26
  Summary  14-27
  Lab Exercises  14-28

Module 15: Security
  Objectives  15-2
  Security  15-3
  Authentication  15-4
  Authentication Type: Server  15-5
  Authentication Type: DCS  15-6
  Encrypted Password  15-7
  Authentication Type: KERBEROS  15-8
  Authentication Type: KRB_SERVER_ENCRYPT  15-9
  Authentication Type: CLIENT  15-10
  TRUST_ALLCLNTS  15-11
  TRUST_CLNTAUTH  15-12
  Authorities  15-13
  Authorities in the DBM Configuration  15-15
  Database Authority Summary  15-16
  Privileges  15-17
  Levels of Privileges  15-18
  Database Level Privileges  15-19
  Schema Level Privileges  15-20
  Table and View Privileges  15-21
  Package and Routine Privileges  15-22
  Index and Table Space Privileges  15-23
  Implicit Privileges  15-24
  Privileges Required for Application Development  15-25
  System Catalog Views  15-26
  Hierarchy of Authorizations and Privileges  15-27
  Audit Facility  15-28
  The db2audit Command: How It Works  15-29
  Summary  15-30
  Lab Exercises  15-31

Module 16: Summary
  Objectives  16-2
  Course Objectives  16-3
  Basic Technical References  16-4
  Advanced Technical References  16-5
  Next Courses  16-6
  Evaluation Sheet  16-7

Appendix LE: Lab Exercises Environment
  Overview  LE-2
  Client Setup (Windows)  LE-3
  DB2 Server Setup (Windows)  LE-4
  DB2 Server Setup (UNIX/Linux)  LE-5
  DB2 Platforms  LE-6
  DB2 Command Line Syntax  LE-7
  DB2 Online Reference  LE-8
  Starting a Command Line Session  LE-9
  QUIT vs. TERMINATE vs. CONNECT RESET  LE-10
  List CLP Command Options  LE-11
  Modify CLP Options  LE-13
  Input File: No Operating System Commands  LE-14
  Input File: Operating System Commands  LE-15

Module 1
Overview of DB2 Major Components

Objectives (1-2)

At the end of this module, you will be able to:
- Describe how DB2 facilitates e-business
- Describe the DB2 product family
- Describe the DB2 object architecture

DB2 and E-Business (1-3)

IBM has the highest-impact, business-based solutions available on the market, and an e-business software portfolio that is robust, scalable, and multiplatform.

Effective E-Business Model (1-4)

Transformation and Integration
Using web-enabled applications, a business can realize significant gains in productivity as its core business transactions become tied together. In addition, enterprise-wide applications can be built that integrate with suppliers, partners, and customers.

Leverage Information
The quantity of business documents, images, and data continues to grow as businesses become more complex. With IBM DB2 Universal Database, a business can manage all of these types of data, no matter how complex. Using business intelligence tools such as data warehousing and data mining, a business can develop a competitive advantage.

Organizational Effectiveness
Using web-based teamwork and collaboration tools, a business can reduce the time needed to complete a project and produce products of better quality. In addition, as the virtual classroom replaces the traditional classroom, a business can manage the training and development of its employees more effectively.

Managing Technology
By choosing one database that integrates with all of its business processes, a business can upgrade to improved technology on a scheduled time frame and be assured of seamless integration with the other parts of the e-business structure.

DB2 E-Business Components (1-6)

DB2 Universal Database
At the core of many business-critical systems, you will find DB2 Universal Database. For mobile and embedded devices, DB2 Everyplace is IBM’s newest relational database management system.

Business Intelligence
The IBM Business Intelligence product family includes DB2 DataJoiner, DB2 OLAP Server, Intelligent Miner, and Warehouse Manager.

Content Manager
The IBM Content Manager product family includes Content Manager, Content Manager OnDemand, Content Manager CommonStore for SAP, Content Manager CommonStore for Lotus Domino, Content Manager VideoCharger, and the IBM EIP Client Kit for Content Manager.

Supported platforms: OS/390, OS/400, AIX, Solaris, HP-UX, Linux, Win2000, WinXP.

IBM DB2 Information Integrator for Content
IBM DB2 Information Integrator for Content provides broad information integration and access to:

- unstructured digital content such as text, XML and HTML files, document images, computer output, audio, and video
- structured enterprise information via connectors to relational databases
- Lotus Notes Domino databases and popular Web search engines, using IBM Lotus Extended Search
- objects within business process workflows

Users can personalize data queries and search extensively for very specific needs across traditional and multimedia data sources.

Developers can more rapidly develop and deploy portal applications with the information integration application development toolkit.

DB2 Information Integrator for Content V8.2 leverages the power of IBM WebSphere® MQ Workflow and IBM Lotus Extended Search 4.0.

Thus the IBM Enterprise Information Portal (EIP) is powered by DB2 Universal Database and incorporates technologies such as EDMSuite ContentConnect, DB2 Digital Library, and Lotus Domino Extended Search into one integrated portal platform.

IBM DB2 Information Integrator for Content was formerly known as Enterprise Information Portal (EIP) in its versions 8.1 and earlier.

DB2 Product Family (1-8)

IBM DB2 Universal Database

Personal Edition
This is a fully functional database for personal computers using the OS/2, Windows, and Linux operating environments. It enables local users to create databases on the workstation where the product is installed, and it has the capability to access remote DB2 servers as a DB2 client.

Workgroup Server Edition
This product provides full DB2 database server capacity for departmental LANs running Windows NT/2000/XP, Linux, AIX, and Solaris.

Enterprise Server Edition
This product provides multi-machine, distributed database processing capability for HP-UX, Solaris, AIX, Linux, and Windows NT, 2000, and XP operating environments. It can handle multiple nodes and installations on uniprocessor and multiprocessor configurations for the most demanding data management needs.

Developer’s Editions
Two products are available for those who develop applications in a DB2 UDB environment. The Universal Developer’s Edition includes all client and server editions of DB2 and a package of tools needed for developing applications for those environments. The Personal Developer’s Edition includes DB2 Personal Edition for Windows and Linux, plus additional tools for applications development.

For this course we will be using DB2 UDB Enterprise Server Edition v8.1. The course is designed to use:

- a Windows version of the server;
- a Linux/UNIX version of the server and a Windows client; or
- a combination of both these approaches.

If you do not already have an installed copy of DB2 UDB ESE v8.1, this may be the best time to start the installation (Lab Exercise 2 for Module 1). Since the installation takes some time, you should then return here to complete the remainder of the module.

The lab exercises for this course are in a separate Lab Exercises book.

DB2 Object Architecture (1-10)

One machine can have many instances of DB2, and each instance can have many databases. A database is made up of table spaces, and all of the database objects are stored in these table spaces. Initially there are at least three table spaces (one for the system catalog, one for system temporary use, and one for user data); more can be added at any time as data storage needs grow.
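As a quick sketch of this hierarchy (assuming a UNIX server and using the SAMPLE database purely as an example), the following CLP commands list the instances on a machine, the databases catalogued in the current instance, and the table spaces inside one database:

    db2ilist                       # list the DB2 instances defined on this machine
    db2 list database directory    # list the databases catalogued in the current instance
    db2 connect to sample          # connect to one of those databases
    db2 list tablespaces           # show its table spaces (catalog, temporary, and user)
    db2 connect reset              # end the connection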

To understand DB2 on Linux, UNIX, and Windows better, we will look at the DB2 architecture from the following perspectives:

- Multiple instances on the same host (each with more than one database)
- Processes active in memory for each instance
- Shared memory for each instance
- Configuration files


Note: There is a small, specialized instance of DB2 called the Database Administration Server (DAS) that provides remote connectivity to the regular instances located on the same machine. There can be only one DAS per machine.

DB2 Architecture — Multiple Instances (1-11)

This slide shows two instances, inst01 and inst02. Each instance is independent of the other, and can be independently administered and configured. There is no overlap between the two instances. The DBM CFG file contains instance-wide configuration parameters.

Each instance contains two databases. Instance inst01 contains two databases, db1 and db2, and instance inst02 contains two databases, db3 and db4. Each database has its own set of system catalog tables and log files, as well as its own DB CFG file. There is no overlap between databases. Although all databases within an instance share certain instance-wide parameters located in the DBM CFG file, the database-wide configuration parameters are contained in the DB CFG file for each database.
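A minimal sketch of how this separation looks from the command line (assuming a UNIX system and the instance names inst01 and inst02 from the slide): switching the DB2INSTANCE environment variable points the CLP at a different instance, and each instance shows only its own databases and DBM CFG.

    export DB2INSTANCE=inst01      # work with the first instance
    db2 get instance               # confirm which instance the CLP is using
    db2 get dbm cfg                # instance-wide parameters for inst01
    export DB2INSTANCE=inst02      # switch to the second instance
    db2 list database directory    # shows only the databases of inst02 (for example, db3 and db4)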


DB2 Architecture — Processes (1-12)

On the client side, either local or remote applications, or both, are linked with the DB2 UDB client library. Local clients communicate using shared memory and semaphores; remote clients use a protocol such as Named Pipes (NPIPE), TCP/IP, NetBIOS, or SNA.

On the server side, activity is controlled by engine dispatchable units (EDUs). In the above and on the next page, EDUs are shown as circles or groups of circles.

Processes
EDUs are implemented as threads within a single process on Windows-based platforms and as single-threaded processes on UNIX. DB2 agents are the most common type of EDU. These agents perform most of the SQL processing on behalf of applications. Prefetchers and page cleaners are other common EDUs.

A set of subagents might be assigned to process the client application requests. Multiple subagents can be assigned if the machine where the server resides has multiple processors or is part of a partitioned database.

All agents and subagents are managed using a pooling algorithm that minimizes the creation and destruction of EDUs.
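On a UNIX server you can see these EDUs as operating system processes (a sketch; the process names shown are the usual DB2 V8 names, and on Windows the EDUs are threads inside a single process, so they are not visible this way):

    ps -ef | grep db2          # look for db2sysc, db2agent, db2pfchr (prefetcher), db2pclnr (page cleaner)
    db2 list applications      # show the applications currently being served by coordinator agents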

The slide diagram for this page shows local and remote clients linked through the DB2 UDB client library to the server instance ($DB2INSTANCE), where local and remote listeners, coordinator agents (each with its own memory and subagents), idle agents, fenced UDF processes, the system controller, and the watchdog handle the work. The instance-level files shown are the database directory file, the registry, the Database Manager (DBM) configuration file, and the diagnostic file.

DB2 Architecture — Shared Memory (1-13)

Shared Memory
Buffer pools are areas of database server memory where database pages of user table data, index data, and catalog data are temporarily moved and can be modified.

The configuration of the buffer pools, as well as prefetcher and page cleaner EDUs, controls how quickly data can be accessed and how readily available it is to applications.

- Prefetchers retrieve data from disk and move it into the buffer pool before applications need it. Application agents send asynchronous read-ahead requests to a common prefetch queue. As prefetchers become available, they implement those requests by using big-block or scatter-read input operations to bring the requested pages from disk into the buffer pool.
- Page cleaners move data from the buffer pool back out to disk. Page cleaners are background EDUs that are independent of the application agents. They look for pages in the buffer pool that are no longer needed and write those pages to disk, ensuring that there is room in the buffer pool for the pages being retrieved by the prefetchers.

Without the independent prefetchers and the page cleaner EDUs, the application agents would have to do all of the reading and writing of data between the buffer pool and disk storage.
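As a brief sketch of how these components are configured (the buffer pool name bp32k and the values shown are examples only), buffer pools are created with SQL, while the number of prefetchers and page cleaners is a database configuration setting:

    db2 connect to sample
    db2 "CREATE BUFFERPOOL bp32k SIZE 1000 PAGESIZE 32 K"       # an additional buffer pool for 32 KB pages
    db2 get db cfg for sample | grep -i num_io                  # NUM_IOSERVERS = prefetchers, NUM_IOCLEANERS = page cleaners
    db2 update db cfg for sample using NUM_IOSERVERS 4 NUM_IOCLEANERS 4
    db2 connect reset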

The slide diagram for this page shows, for each database in the instance, its Database (DB) configuration file, primary and secondary log files, log buffers, packages, lock list, and buffer pools (for example, one using 4 KB pages and one using 32 KB pages), with table spaces built on directory, file, and device containers, and the logger, deadlock detector, prefetcher, and page cleaner EDUs working against them.

DB2 Architecture — Configuration Files (1-14)

DBM configuration file
Each DB2 UDB instance has a configuration file that contains parameter values for that instance. These are instance-level parameters, which control the use of all the databases in that instance.

DB configuration file
Every DB2 UDB database also has a configuration file that contains parameters for just that one database. The various databases in an instance are configured separately within the bounds of the instance.


With DB2 UDB, there is one configuration file for each instance:
- Called the Database Manager (DBM) configuration file
- Contains parameter values for that instance

Each database also has its own configuration file:
- Called the Database (DB) configuration file
- Contains parameters for one database
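A brief sketch of how the two files are viewed and changed through the CLP (SAMPLE and the parameter values are examples only; some changes take effect only after the instance is restarted or the database is deactivated):

    db2 get dbm cfg                                      # display the DBM configuration for the current instance
    db2 update dbm cfg using DIAGLEVEL 4                 # change an instance-level parameter
    db2 get db cfg for sample                            # display the DB configuration for one database
    db2 update db cfg for sample using LOGPRIMARY 5      # change a database-level parameter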

Summary (1-15)

You should now be able to:
- Describe how DB2 facilitates e-business
- Describe the DB2 product family
- Describe the DB2 object architecture

Lab Exercises (1-16)

You should now complete the lab exercises for Module 1.

Module 2
Introduction to GUI Tools

Objectives (2-2)

At the end of this module, you will be able to:
- List the major tools of the GUI interface
- Describe the major purpose of each GUI tool

What are the GUI Tools? (2-3)

The GUI (graphical user interface) tools are an easy-to-use, integrated set of tools that help you administer and manage DB2. Their main features are:
- An intuitive, point-and-click navigation scheme
- A scalable architecture that can grow with your needs
- Support for the object-oriented and multimedia extensions of DB2
- Smart guides and wizards that provide step-by-step expert advice

List of GUI Tools (2-4)

- First Steps
- Information Center
- Control Center
- Client Configuration Assistant
- Command Center
- Task Center
- Health Center
- Journal
- License Center
- Application Center

These tools are described on the following pages.

First Steps - Overview (2-5)

First Steps is a package of tutorial guides and programs to facilitate setting up and learning DB2. First Steps allows you to:

- Create the sample, warehouse, and OLAP tutorial databases
- Work with these tutorial databases
- View the DB2 Product Information Library
- Launch the DB2 UDB Quick Tour
- View other DB2 resources on the World Wide Web

Each of these functions will be illustrated in more detail in the following slides.

You can invoke First Steps by selecting the following from the Windows Start menu: Start > Programs > IBM DB2 > Set-up Tools > First Steps.
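On Linux and UNIX servers with a graphical display, First Steps can also be started from the command line (a sketch, assuming the DB2 instance environment has been set up for the user):

    db2fs     # launch the First Steps interface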


First Steps: Create Sample Databases (2-6)

By using the First Steps Create Sample Database wizard, you can create the following databases:

- DB2 UDB Sample—This tutorial database is used for learning the concepts of a relational database and is required in many of the courses taught by IBM.
- OLAP Sample—This database is used to perform multidimensional analysis of relational data using Online Analytical Processing (OLAP) techniques.
- Data Warehousing Sample—This database is used with the Data Warehouse Center to move, transform, and store data in a target warehouse database.

The time necessary to create a database is from 3 to 30 minutes, and each of these databases can also be created at a later time by accessing this First Steps wizard.
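If you prefer the command line, the SAMPLE database can also be created with the db2sampl utility (a sketch; the OLAP and warehouse tutorial databases are normally created through First Steps itself):

    db2sampl                              # create the SAMPLE database
    db2 connect to sample                 # verify that it exists
    db2 "select count(*) from staff"      # STAFF is one of the tables in the SAMPLE database
    db2 connect reset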


Page 41: System Administration

Introduction to GUI Tools 2-7

Choose the Work with Tutorials option from First Steps to access various DB2 UDB tutorials.

First Steps: Working With Tutorials

2-7

Page 42: System Administration

2-8 Introduction to GUI Tools

Quick Tour launches a tutorial covering the e-business, Business Intelligence and Data Management subject areas. It also demonstrates the multimedia capabilities of the DB2 Universal Database as well as the built-in Java and XML functionality.

First Steps: Quick Tour

2-8

Page 43: System Administration

Introduction to GUI Tools 2-9

The Information Center is a repository of electronic documentation. The sections of the Information Center consist of:

Tasks — a section containing instructions for performing specific tasks.
Books — a section containing product documentation.
Reference — a keyword search section.
Troubleshooting — a publications section containing expanded descriptions of diagnostic messages.
Sample Programs — a section containing code samples from many popular application languages.
DB2 Knowledge Base — a web-enabled section that searches the IBM DB2 UDB online knowledge base.

You can invoke the Information Center by selecting Start > Programs > IBM DB2 > Information > Information Center.

Information Center

2-9

[Screen capture: the Information Center window. Select tabs to view different types of information; a Search option is also available.]

Page 44: System Administration

2-10 Introduction to GUI Tools

The Control Center is the main work area for DB2 administration. You can access most of the other GUI tools from here. The principal components of this tool are:

Menu bar — This menu accesses the Control Center functions and online help menus.
Toolbar — From the toolbar located at the top of the window, you can launch any other DB2 centers that are integrated into the Control Center. The Control Center toolbar is shown here in more detail:

A similar toolbar appears in each Administration Client tool. You can also access these tools by selecting them from the Tools menu.

Control Center: Overview

2-10

[Screen capture: the Control Center window, with callouts for the menu bar, toolbar, Objects pane, Contents pane, and Contents pane toolbar. The toolbar icons launch the Journal, Development Center, Tools Settings, Satellite Admin Center, Task Center, Information Center, Health Center, License Center, Contacts, Replication Center, Command Center, Show/Hide Legend, and Help.]

Page 45: System Administration

Introduction to GUI Tools 2-11

Objects pane — This pane is located on the left side of the Control Center window and contains all the objects that can be managed from the Control Center. It also displays the hierarchical relationships of the objects.
Contents pane — This pane is located on the right side of the Control Center window. The icons shown are context sensitive and are based on the object that is highlighted in the Objects pane.
Contents pane toolbar — These icons are used to tailor the view of the objects and information in the Contents pane. These functions can also be selected in the View menu of the menu bar.

Use the Control Center to:

Manage systems
Manage DB2 instances
Manage DB2 databases
Manage DB2 database objects such as tables, views, and user groups
Access DB2 for OS/390 subsystems
Launch other GUI tools

You can invoke the Control Center by selecting Start > Programs > IBM DB2 > General Administration Tools > Control Center.

Page 46: System Administration

2-12 Introduction to GUI Tools

All of the tools that are available on the toolbar are also available in the Tools menu of the menu bar.

Control Center: Tools Menu

2-12

Page 47: System Administration

Introduction to GUI Tools 2-13

When you right-click on a database object, menu options appear that allow you to manipulate the object. In the slide above, the menu options for the sample database allow you to perform such actions as connecting, restarting, and dropping the database. The menu options for the employee table allow you to perform such actions as renaming, dropping, and loading the table.

Control Center: Object Menus

2-13

Page 48: System Administration

2-14 Introduction to GUI Tools

The Configuration Assistant (CA) allows you to easily configure an application for connections to local and remote databases. The client can be configured using one of three methods:

Imported profiles
DB2 discovery
Manual configuration

In addition, the CA can be used to:

Display existing connections.
Update and delete existing connections.
Perform CLI/ODBC administration tasks.
Test connections to cataloged databases.
Bind applications to databases.

You can invoke the CA by selecting Start > Programs > IBM DB2 > Set-up Tools > Configuration Assistant.

Configuration Assistant: Overview

2-14

Page 49: System Administration

Introduction to GUI Tools 2-15

The Command Center consists of windows in which you can enter SQL statements, scripts, and DB2 commands and view the results. Use the Command Center to:

Execute SQL statements interactively.
Execute SQL using the SQL Assist wizard.
Create and save command scripts.
View query access plans.
Delete, update, export, and view the query result set.

You can invoke the Command Center by selecting the tool from the toolbar or Tools menu in another GUI tool, or by selecting Start > Programs > IBM DB2 > Command Line Tools > Command Center.

Command Center: Overview

2-15

[Screen capture: the Command Center window, with callouts for the tabs, the Execute icon, the command entry area, and the DB2 messages area.]

Page 50: System Administration

2-16 Introduction to GUI Tools

The query results from an SQL statement can be viewed by selecting the Query Results pane, and they can be manipulated, saved, or exported by selecting options from the Query Results menu.

Command Center: Query Results

2-16

Page 51: System Administration

Introduction to GUI Tools 2-17

The Task Center contains a list of all the scripts that have been created. Use the Task Center to:

Create or modify an SQL script or command file.
Import a previously created script.
Execute scripts immediately.
Schedule scripts to run at a later time.

To invoke the Task Center, first invoke either the Control Center or the Command Center. Then click on the Task Center icon, or choose Task Center from the Tools menu on the menu bar. You can also invoke the Task Center directly from the desktop by selecting Start > Programs > IBM DB2 > General Administration Tools > Task Center.

Task Center: Overview

2-17

Page 52: System Administration

2-18 Introduction to GUI Tools

To create a new task, click on the Task menu, and select New from the list of options.

Task Center: Task Menu

2-18

Page 53: System Administration

Introduction to GUI Tools 2-19

The New Task window is shown above. Enter the appropriate values into the fields to define your script. Go to the Command Script tab to enter the text of your script. Additional information can be provided in the other tabs.

Task Center: New Task

2-19

Page 54: System Administration

2-20 Introduction to GUI Tools

To work with a particular task, select the task, click on the Selected menu in the menu bar, and choose from the options available.

Task Center: Options

2-20

Page 55: System Administration

Introduction to GUI Tools 2-21

The Health Center monitors the system and warns of potential problems. It can be configured to automatically open and display any monitored objects that have exceeded their threshold setting, which means they are in a state of alarm or warning. Use the Health Center to:

Specify the action to be taken when a threshold is exceeded.
Specify the message to be displayed.
Specify if an audible warning is to be used.

To invoke the Health Center, first invoke either the Command Center or the Control Center. Then click on the Health Center icon, or choose the Health Center option from the Tools menu on the menu bar. You can also access the Health Center from the desktop by selecting Start > Programs > IBM DB2 > Monitoring Tools > Health Center.

Health Center

2-21

Page 56: System Administration

2-22 Introduction to GUI Tools

The Journal displays the status of the jobs that have been created from scripts and logs the results of their execution. Use the Journal to:

View job histories.
Monitor running and pending jobs.
Review job results.
Display recovery history.
View DB2 message logs.

To invoke the Journal, first invoke either the Command Center or the Control Center. Then click on the Journal icon, or choose the Journal option from the Tools menu on the menu bar. You can also access the Journal from the desktop by selecting Start > Programs > IBM DB2 > General Administration Tools > Journal.

Journal

2-22

Page 57: System Administration

Introduction to GUI Tools 2-23

The License Center provides a central point to manage the licensing requirements of the DB2 products. Use the License Center to:

Add a new license.
Upgrade from a trial license to a permanent license.
View the details of your licenses, including version information, expiration date, and number of entitled users.

To invoke the License Center, first invoke either the Command Center or the Control Center. Then click on the License Center icon, or choose the License Center option from the Tools menu on the menu bar.

License Center

2-23

Page 58: System Administration

2-24 Introduction to GUI Tools

The Development Center provides an easy means of creating and managing stored procedures and user-defined functions (UDFs). Use the Development Center to:

Create development projects.
Create stored procedures, functions, and structured types on local and remote servers.
Modify existing routines and types.
Run procedures and functions for testing and debugging purposes.

You can invoke the Development Center by selecting Start > Programs > IBM DB2 > Development Tools > Development Center.

The first time you start the Development Center, you must create a new project. To create a new project, click on Create Project in the Development Center Launchpad window. You are then asked to provide a project name.

Development Center: Create a Project

2-24

Page 59: System Administration

Introduction to GUI Tools 2-25

The main DB2 Development Center window provides two options for viewing projects and applications. The Project View, shown above, displays the object pane hierarchy based on defined projects. The Server View displays the objects using an object hierarchy similar to the view shown by the Control Center.

The following types of applications and objects can be created in the Development Center:

Stored procedures
User-defined functions
Structured data types

Status information is displayed at the bottom of the Development Center window.

Development Center: Project View

2-25

Page 60: System Administration

2-26 Introduction to GUI Tools

To create a new stored procedure, right-click on Stored Procedure and select New > SQL Stored Procedure. To create a new function, right-click on User-Defined Functions and select New > SQL User-Defined Function. An example of the editing window for a stored procedure is shown above. The editing window for a function is similar.

Procedures and functions that use other programming languages can be created by using a create wizard. Right-click on Stored Procedure or User-Defined Function, select New, and then select the appropriate wizard from the menu. Structured types can only be created through the Development Center by using a wizard.

To modify an existing routine, right-click on the routine name and select Edit.

Development Center: Create a New Routine

2-26

Page 61: System Administration

Introduction to GUI Tools 2-27

Visual Explain is a tool provided through the Command Center that allows you to analyze SQL queries.

To demonstrate Visual Explain we will use two database tables, one index, and a query. These objects are shown above.

Visual Explain: Scenario

2-27

To understand Visual Explain you will use the following:

Database objects:
Table: staff
Table: staff_region
Index: inx_staff_id ON staff (id DESC)

Query:
SELECT staff_region.id, staff_region.region, staff_region.city
FROM staff, staff_region
WHERE staff.id = staff_region.id

Page 62: System Administration

2-28 Introduction to GUI Tools

Visual Explain provides a graphical representation of the access plan developed by the optimizer. Use Visual Explain to:

View the statistics used at optimization time. You can compare this set of statistics to the statistics in the current system catalog to determine if rebinding the package will improve performance.
Determine if an index was used to access a table. If an index was not used, the visual explain function helps to determine which columns might benefit from being indexed.
View the effects of database tuning changes. By comparing the before and after versions of the access plan for a query, you can determine the effect that a tuning change had on the database.
Obtain information about each operation in the access plan. If a query is in need of improvement, you can examine each operation performed by the query and isolate possible trouble spots.

To invoke Visual Explain, first invoke the Command Center. Then using the Interactive tab and the Command window, enter the CONNECT TO database statement. Once you have connected to the database, the Create Access Plan icon becomes active. Enter your query in the Command window and click on the Create Access Plan icon. A Visual Explain access plan appears.
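For example, using the scenario objects and query defined earlier in this module (the database name sample is an illustrative assumption; substitute the database that actually holds the staff and staff_region tables), you might enter the following in the Command Center and then click the Create Access Plan icon for the query:

CONNECT TO sample

SELECT staff_region.id, staff_region.region, staff_region.city
FROM staff, staff_region
WHERE staff.id = staff_region.id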

Visual Explain: Overview

2-28

Create Access Plan icon

Page 63: System Administration

Introduction to GUI Tools 2-29

In our scenario, the optimizer chooses to use the inx_staff_id index to scan the staff table. Since there is no suitable index for the staff_region table, the optimizer performs a full table scan. After both tables have been scanned, the optimizer performs a nested loop join and returns the result set. The estimated time is cumulative, reading from the bottom to the top, and is returned in units of timerons. A large jump in cumulative timerons between operational blocks represents a large amount of processing effort, and these operations may be candidates for performance improvement.

Visual Explain: Access Plan

2-29

[Access plan graph callouts: the two tables, the index, the index scan, the table scan, and the join operator.]

Timerons represent an estimate of the CPU cycles and disk I/O operations required to process a query. A simplified way of defining them is to think of 100 CPU cycles as equal to one timeron and one disk read as equal to one timeron, since disk reads cost more in processing time.

Note

Page 64: System Administration

2-30 Introduction to GUI Tools

Summary

2-30

You should now be able to:
List the major tools of the GUI interface
Describe the major purpose of each GUI tool

Page 65: System Administration

Introduction to GUI Tools 2-31

Lab Exercises

2-31

There are no lab exercises for Module 2.

Page 66: System Administration

2-32 Introduction to GUI Tools

Page 67: System Administration

Data Placement 02-2003 3-1© 2002, 2003 International Business Machines Corporation

Data Placement

Module 3

Page 68: System Administration

3-2 Data Placement

Objectives

3-2

At the end of this module, you will be able to:
Describe the attributes of an SMS table space
Describe the attributes of a DMS table space
Describe the different types of containers
Describe the concept of a bufferpool
Explain how bufferpools are assigned to table spaces
List, alter, and drop a table space
Identify the optimum number of containers per table space

Page 69: System Administration

Data Placement 3-3

A table space is a logical storage structure where the data for a database is stored. It consists of one or more physical storage containers, and is associated with one memory structure called a bufferpool.

Table spaces are categorized by the method used to access the data:

System-managed space (SMS) — This type of table space is managed by the operating system and utilizes the O/S disk processes and data buffers. Therefore, the data access time can be slower, but this type of table space is relatively easy to manage.
Database-managed space (DMS) — This type of space is managed directly by the DB2 database manager and bypasses the O/S system data buffers. Therefore, the access time can be faster, but this type of table space is potentially more difficult to manage.

Table spaces are divided up in terms of pages and extents. A page is the smallest quantity of data that can be retrieved from disk in one I/O operation. An extent is a set of pages grouped contiguously to minimize I/O operations and improve performance. Both page size and extent size are defined when a table space is created and cannot be changed.

Table Spaces

3-3

[Diagram: a table space on disk made up of two containers; extents, each consisting of pages, are written across the containers, and the table space is associated with a buffer pool in memory.]

Page 70: System Administration

3-4 Data Placement

A container is a physical storage device that is assigned to one table space. Depending on the type of table space, a container can be a directory, a file, or a device. SMS table spaces only use directories as containers. DMS table spaces use either files or raw devices as containers and both can be used in the same table space. The container definition is stored in the system catalog tables along with the other attributes of the table space.

Containers must reside on disks that are local. Therefore, resources such as LAN-redirected drives or NFS-mounted file systems cannot be used as containers for table spaces.

Containers

3-4

SMS — Directory
DMS — File or Device

Page 71: System Administration

Data Placement 3-5

One extent consists of a number of pages grouped together and defined during table space creation. If no extent size is specified, the default is the value for the DB CFG parameter DFT_EXTENT_SZ.

DB2 writes extents to the containers in round-robin fashion.

Extents

3-5

Page 72: System Administration

3-6 Data Placement

Bufferpools are memory structures that cache data and index pages in memory. This reduces the need for the DBM to access disk storage and thereby increases performance. Bufferpools use a least-recently-used algorithm that ensures that the most recently accessed data is retained in memory.

On UNIX, the creation of a DB2 database creates a default bufferpool, ibmdefaultbp, consisting of 1000 4-KB pages. On other platforms the buffer pool size is 250 4-KB pages.

A buffer pool can be associated with specific table spaces. For example, you might create a separate buffer pool to store index pages or a separate bufferpool to handle data for high activity tables. The page size for the table space must match the page size for the buffer pool.
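As a sketch of how such a buffer pool might be created (the name and size here are illustrative assumptions, not values from this course), an 8-KB buffer pool for use with 8-KB table spaces could be defined with:

CREATE BUFFERPOOL bp_index8k SIZE 5000 PAGESIZE 8K

The buffer pool is then assigned to a table space with the BUFFERPOOL clause of the CREATE TABLESPACE or ALTER TABLESPACE statement, which are covered later in this module.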

Bufferpool

3-6

A bufferpool is a memory structure that:
Allows faster access to data
Caches data and index pages in memory
Is associated with one or more table spaces
Must have the same page size as the associated table spaces

Page 73: System Administration

Data Placement 3-7

In an SMS table space, the file manager controls the location of the data in the storage space and the DBM only controls the table space name and storage path, which are defined at creation time and cannot be altered. New containers cannot be added dynamically unless you are doing a redirected restore, which is a process you will learn about in the module on backup and recovery.

Space is allocated only when needed, and only one page at a time, until the number of pages allocated equals the amount for one extent. At this point, the database manager switches to the next container in the table space and begins allocating one page at a time in that container. This process of filling up extents and switching to the next container continues in a round-robin fashion (also called striping) and balances the data requirement across all the containers.

Since the data is automatically balanced across all the containers and additional containers cannot be added dynamically, there is little administration required for SMS table spaces.

SMS Table Spaces

3-7

For an SMS table space:
Storage medium is a directory
O/S controls storage space
Database manager only controls the name and path
Number of containers defined at creation and cannot be altered
Space allocated one page at a time and only when needed
Extents allocated in round-robin fashion
Little administration is required

Since DB2 considers an SMS table space to be full when it cannot add any more space to one of the containers, it is important to make all of the containers of an SMS table space the same size. If the containers are different sizes, DB2 marks the table space as full when the smallest container is full.

Note

Page 74: System Administration

3-8 Data Placement

For a DMS table space, the database manager has control of the placement of the data within the containers and can ensure that the pages are physically contiguous. The size of each container is defined at creation time, and additional containers can be added later.

All of the space is allocated at creation time and data is initially stored in the first extent for the first container. When this extent becomes full, the database manager switches to the next container and begins filling up the first extent in that container. This process continues in a round robin fashion and balances the data requirement across all the containers.

Since containers can be added dynamically at any time, the administration requirements are higher with DMS table spaces. However, the performance can be better, particularly when raw devices are used.

DMS Table Spaces

3-8

For a DMS table space:
Storage media can be files or raw devices
Database manager controls the storage space
Additional containers can be dynamically added
Entire space is allocated when a table space is created
Extents are allocated in round-robin fashion
Table, index, and large data can be separated into different table spaces
Administration requirements are higher, but performance is better

DB2 does not consider a DMS table space to be full until all the containers are full; therefore, the containers can be different sizes. When the smallest container is full, DB2 eliminates it from the rotation and continues to fill up the remaining containers. However, containers should be the same size for best performance.

Note

Page 75: System Administration

Data Placement 3-9

SMS table spaces can be created as regular, system-temporary or user-temporary spaces. There are three different classes of data associated with tables:

Table data: This is the data contained in the data rows of the table.
Index data: This includes the unique values and row identifiers for any columns on the table that are indexed.
Large data: This includes the long varchar, long vargraphic, and LOB data types.

Catalog Table Space
A catalog space contains the system catalog tables and indexes. A catalog table space can only be created at the time the database is created.

Regular Table Space
Regular table spaces contain table, index, and large data for permanent tables. All of the data shares the same table space and is interleaved together.

SMS Table Spaces and Tables

3-9

[Diagram: SMS table space types and table data types. An SMS table space uses a directory container; the table space types are catalog, regular, system temporary, and user temporary; table data, index data, and large data all share the same table space.]

Page 76: System Administration

3-10 Data Placement

System Temporary Space
This is the space used by the system when the database manager needs to create temporary tables during query operations, such as sorts or joins. The database manager needs to have at least one system temporary space created for the database.

User Temporary Space
This is the space used by the database manager to store temporary tables that are explicitly created by the users.

Page 77: System Administration

Data Placement 3-11

DMS table spaces can be created as regular or large, as well as system-temporary and user-temporary. This allows the database administrator to spread a table over multiple table spaces for better performance. To do this, the regular table data is located in a regular table space, the index data is located in a separate, regular table space, and the large data is located in a large table space. The large table space type is optimized to hold large data strings.

DMS Table Spaces and Tables

3-11

[Diagram: DMS table space types and table data types. A DMS table space uses file or device containers; the table space types are regular, large, system temporary, and user temporary; table data and index data go in regular table spaces, and large data goes in a large table space.]

Page 78: System Administration

3-12 Data Placement

The chart above compares the features and limitations of SMS and DMS table spaces.

SMS vs DMS

3-12

Page 79: System Administration

Data Placement 3-13

When a database is created, one bufferpool and three table spaces are created. The bufferpool is named ibmdefaultbp, and is associated with the three table spaces. During creation, the system administrator can specify names for these three table spaces or use the defaults:

syscatspace — This table space contains all the data for the system catalog tables.
userspace1 — This table space contains all the data for any permanent tables created by users.
tempspace1 — This table space contains any temporary tables needed by the system to execute queries.

In the illustration above, three additional bufferpools and table spaces have been created:

myregspace — This table space contains all the index and table data for the permanent tables. The large data has been separated out and is not stored with the table and index data, but is stored in its own table space. The myregspace table space is associated with the mybuff1 bufferpool (see the sketch after this list).
mytempspace — This table space contains all the temporary tables that are explicitly created by the users. It is associated with the mybuff2 bufferpool.
mylongspace — This table space contains all the long data for the tables in the myregspace table space. The mylongspace table space is associated with the mybuff3 bufferpool.
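For example, the association between myregspace and mybuff1 described above could be set up with statements like the following (a sketch; the page size, buffer pool size, and container path and size are illustrative assumptions):

CREATE BUFFERPOOL mybuff1 SIZE 1000 PAGESIZE 4K

CREATE TABLESPACE myregspace
  MANAGED BY DATABASE
    USING (FILE '/database/myreg.dat' 10000)
  BUFFERPOOL mybuff1

The full CREATE TABLESPACE syntax is covered later in this module.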

Bufferpools and Table Spaces

3-13

[Diagram: the default and user-defined buffer pools and their associated table spaces.]

Page 80: System Administration

3-14 Data Placement

Table spaces can be created when the database is created. When they are created in this way, values other than the default values for names and management types can be used. In the example above, the following variables are available:

db_name — allows you to specify the name of the database.
CATALOG, USER, or TEMPORARY specifies what type of table space to create.
SYSTEM or DATABASE defines the management type for the table space. The default is SYSTEM.

SYSTEM specifies an SMS table space. The containers must be directories, and you cannot specify a size. The container definition string cannot exceed 250 characters.
DATABASE specifies a DMS table space. The containers must be files or raw devices, and you can use a mixture of both. You must specify location and size. The container definition string cannot exceed 254 bytes in size.

Table Spaces Defined at Database Creation

3-14

Syntax:

CREATE DATABASE db_name
  [CATALOG TABLESPACE
    MANAGED BY {SYSTEM | DATABASE}
    USING (container_definition_string)]
  [USER TABLESPACE
    MANAGED BY {SYSTEM | DATABASE}
    USING (container_definition_string)]
  [TEMPORARY TABLESPACE
    MANAGED BY {SYSTEM | DATABASE}
    USING (container_definition_string)]

Page 81: System Administration

Data Placement 3-15

When the db2cert database is created, the following table spaces are also created:

A catalog space for the system catalog tables, which is managed by the database manager.
A temporary space for the system-temporary tables, which is managed by the operating system.
A user space for the user-created permanent tables, which is managed by the database manager.

CREATE DATABASE Example

3-15

CREATE DATABASE db2cert
  CATALOG TABLESPACE
    MANAGED BY DATABASE
    USING (FILE 'C:\catalog.dat' 2000,
           FILE 'D:\catalog.dat' 2000)
  USER TABLESPACE
    MANAGED BY DATABASE
    USING (FILE 'C:\TS\USERTS.DAT' 121)
  TEMPORARY TABLESPACE
    MANAGED BY SYSTEM
    USING ('C:\TEMPTS', 'D:\TEMPTS')

[Callouts: the example identifies the table space category, the storage media with location and size, and the table space type.]

Page 82: System Administration

3-16 Data Placement

Additional table spaces can also be created at any time by executing the SQL statement CREATE TABLESPACE. The following options are available:

REGULAR | LARGE | SYSTEM TEMPORARY | USER TEMPORARY specifies the type of data that will be stored in the table space. The default is REGULAR, which can store any type of data except temporary table data.
PAGESIZE defines the size of the pages used for the table space. The default is 4K.

MANAGED BY SYSTEM | DATABASE specifies either an SMS or DMS table space.
USING specifies the container definitions.
BUFFERPOOL specifies the associated bufferpool. The default bufferpool is ibmdefaultbp.

CREATE TABLESPACE Syntax

3-16

CREATE [{REGULAR | LARGE | SYSTEM TEMPORARY | USER TEMPORARY}] TABLESPACE table_space_name
  [PAGESIZE integer [K]]
  MANAGED BY {SYSTEM | DATABASE}
    USING (container_definition_string)
  [BUFFERPOOL buffpool_name]

There are two types of valid PAGESIZE integer values. The values without the K suffix are 4096, 8192, 16384, or 32768. If the K suffix is used, the valid integer values are 4, 8, 16, or 32. If the page-size integer is not consistent with these values, an error is returned.

Note

Page 83: System Administration

Data Placement 3-17

Some examples of CREATE TABLESPACE commands are shown above.

CREATE TABLESPACE Examples

3-17

CREATE TABLESPACE dms_ts1
  PAGESIZE 4096
  MANAGED BY DATABASE
    USING (DEVICE '/dev/rcont' 20000)
  BUFFERPOOL bp_dms_ts1

CREATE TABLESPACE sms_ts1
  PAGESIZE 8K
  MANAGED BY SYSTEM
    USING ('/database/inst01/')
  BUFFERPOOL bp_small_tables

Page 84: System Administration

3-18 Data Placement

There are numerous additional options that can be used when creating a table space. Some of the more common ones are:

EXTENTSIZE specifies the number of pages that are written to a container before skipping to the next container. The default is the value of DFT_EXTENT_SZ in the database configuration file.
PREFETCHSIZE specifies the number of pages read from the table space when data prefetching is performed. The default is the value of DFT_PREFETCH_SZ.

The values for EXTENTSIZE and PREFETCHSIZE and the sizes for file or device containers can be entered in one of four different ways:

integer — indicating number of PAGESIZE pages
integer K — indicating kilobytes
integer M — indicating megabytes
integer G — indicating gigabytes

EXTENTSIZE and PREFETCHSIZE

3-18

CREATE TABLESPACE payroll
  MANAGED BY DATABASE
    USING (DEVICE '/dev/rhdisk6' 10000,
           DEVICE '/dev/rhdisk7' 10000,
           DEVICE '/dev/rhdisk8' 10000)
  EXTENTSIZE 64
  PREFETCHSIZE 32

Page 85: System Administration

Data Placement 3-19

You must have either SYSADM or SYSCTRL authority to create a table space.

Authority to Create Table Space

3-19

To create a table space you must have either:
SYSADM authority
SYSCTRL authority

Page 86: System Administration

3-20 Data Placement

To list the table spaces in a database, use the LIST TABLESPACES command. The output provides you with:

Table space ID number
Table space name
Type (system-managed space or database-managed space)
Data type or contents (any data, large data only, or temporary data)
State, which is a hexadecimal value indicating the current table space state (for example: 0x0 for Normal or 0x20 for Backup Pending)

Listing Table Spaces

3-20

You must be connected to a database to use the LIST TABLESPACES command.

Tip

Page 87: System Administration

Data Placement 3-21

If you execute the LIST TABLESPACES SHOW DETAIL command, you get all of the information for the LIST TABLESPACES command plus:

Total number of pages
Number of usable pages
Number of used pages
Number of free pages
High water mark (in pages)
Page size (in bytes)
Extent size (in pages)
Prefetch size (in pages)
Number of containers

You may see some additional information if special conditions exist:

Minimum recovery time (displayed only if not zero)
Number of quiescers (displayed only if the table space state is quiesced: SHARE, quiesced: UPDATE, or quiesced: EXCLUSIVE)
Table space ID and object ID for each quiescer (displayed only if the number of quiescers is greater than zero)

Listing Table Spaces with Detail

3-21

Page 88: System Administration

3-22 Data Placement

To list the containers associated with a table space, use the LIST TABLESPACE CONTAINERS FOR table_space_id [SHOW DETAIL] command. The output without the optional SHOW DETAIL clause returns:

Container ID
Container name
Container type (file, disk, or path)

The output with the SHOW DETAIL clause returns the following additional information:

Total number of pages
Number of usable pages
Accessible (yes or no)

The table_space_id is an integer with a unique value for each table space in the database. To get a list of all the table space IDs contained in the database, execute the LIST TABLESPACES command.
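For example (a sketch that assumes you are already connected to the database and that LIST TABLESPACES reported the table space of interest with ID 2; the ID is illustrative):

db2 LIST TABLESPACES
db2 LIST TABLESPACE CONTAINERS FOR 2 SHOW DETAIL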

Listing Containers

3-22

You must be connected to a database to use the LIST TABLESPACE CONTAINERS command.

Tip

Page 89: System Administration

Data Placement 3-23

LIST TABLESPACES Authority

3-23

To execute the LIST TABLESPACES command, you must have one of the following authorities or privileges:

SYSADM
SYSCTRL
SYSMAINT
DBADM
LOAD

Page 90: System Administration

3-24 Data Placement

LIST TABLESPACE CONTAINERS Authority

3-24

To execute the LIST TABLESPACE CONTAINERS command, you must have one of the following authorities or privileges:

SYSADM
SYSCTRL
SYSMAINT
DBADM

Page 91: System Administration

Data Placement 3-25

Use the ALTER TABLESPACE command to modify an existing table space.

Altering Table Spaces

3-25

Use the SQL statement ALTER TABLESPACE to:
Add a container (DMS only)
Increase the size of a container (DMS only)
Modify the PREFETCHSIZE setting
Modify the BUFFERPOOL assignment

You must be connected to a database to use the ALTER TABLESPACE statement.

Tip

Page 92: System Administration

3-26 Data Placement

To alter a table space, use the SQL statement, ALTER TABLESPACE tblspace_name with the following options:

ADD, EXTEND, or RESIZE — Use only one of these options per statement. ADD specifies that a new container is to be added to the table space. EXTEND indicates the amount of additional space to allocate to the container. RESIZE indicates a new container size. Resizing an existing container to a smaller size is only supported with v8 and later.

container_clause specifies the container definition.
Syntax:
ADD (FILE|DEVICE 'path_and_name' size)
Example:
ADD (FILE '/database/sample/newfile.dat' 100K)

all_container_clause specifies that all containers in the table space will be extended or resized. Syntax:

EXTEND | RESIZE (ALL CONTAINERS size)

Example: RESIZE (ALL CONTAINERS 200M)

ALTER TABLESPACE Syntax

3-26

ALTER TABLESPACE table_space_name
  [ADD (container_clause)]
  [EXTEND ({container_clause | all_container_clause})]
  [RESIZE ({container_clause | all_container_clause})]
  [PREFETCHSIZE size]
  [BUFFERPOOL bufferpool_name]

Page 93: System Administration

Data Placement 3-27

PREFETCHSIZE size — This specifies the number of pages read from the table space when data prefetching is performed.
BUFFERPOOL bufferpool_name — This is the name of the buffer pool used for tables in this table space. The bufferpool must currently exist in the database, and the page size of the bufferpool must be the same as that of the table space.

Page 94: System Administration

3-28 Data Placement

Above is an example of a command that adds a container to a table space.

ALTER TABLESPACE: Example 1

3-28

ALTER TABLESPACE dms_ts1
  ADD (FILE '/database/sample/new/' 200K)
  PREFETCHSIZE 64
  BUFFERPOOL bp_newbuff

Page 95: System Administration

Data Placement 3-29

In this example, the sizes of two containers in the table space are increased.

ALTER TABLESPACE: Example 2

3-29

ALTER TABLESPACE dms_ts1
  RESIZE (FILE '/database/sample/new/' 250K,
          DEVICE '/dev/cont0' 2M)
  PREFETCHSIZE 75
  BUFFERPOOL bp_newbuff

Page 96: System Administration

3-30 Data Placement

You must have SYSADM or SYSCTRL authority to execute the ALTER TABLESPACE command.

ALTER TABLESPACE Authorization

3-30

To execute the ALTER TABLESPACE command, you must have either:

SYSADM authority
SYSCTRL authority

Page 97: System Administration

Data Placement 3-31

To drop a table space, use the DROP TABLESPACE table_space_name SQL statement. Replace table_space_name with the name of the table space to be dropped. You can also use a comma-separated list of table spaces.

Dropping a table space drops all objects defined in the table space. All existing database objects with dependencies on the table space, such as packages and referential constraints, are dropped or invalidated (as appropriate), and dependent views and triggers are made inoperative.

Table spaces are not dropped in the following cases:

A table in the table space spans more than one table space, and the other table spaces associated with the table are not being dropped. In this case drop the table first.
The table space is a system table space such as syscatspace.
The table space is a system-temporary table space and it is the only system-temporary table space that exists in the database.
The table space is a user-temporary table space and there is a declared temporary table in it.

Dropping Table Spaces

3-31

Syntax:

DROP TABLESPACE|TABLESPACES table_space_name

Example:

DROP TABLESPACE dms_ts1, dms_ts6, sms_ts2

Page 98: System Administration

3-32 Data Placement

DROP TABLESPACE Authority

3-32

To drop a table space you need either:

SYSADM authority

SYSCTRL authority

Page 99: System Administration

Data Placement 3-33

The minimum number of extents required in a DMS table space is five:

Three extents for table space overhead and control information
Two extents for each table object — one for the table extent map and at least one for the data

All the indexes for one table share the same extent, and the indexes require a minimum of only one extent for the index extent map and none for data, so the minimum size of the index extent is one.

DMS Table Space Minimum Size

3-33

DMS Table Space: 3 extents for overhead + 2 extents for a table = 5 extents minimum

Page 100: System Administration

3-34 Data Placement

In a DMS table space, one page in every container is reserved for overhead, and the remaining pages are used one extent at a time. Only full extents are used in the container, so add one extra page to the container size to allow for the overhead page.

With an SMS table space, you do not specify the size of container. Since the container is a directory, the size of the container is defined when the directory structure is created at the O/S level.

For optimum performance with either SMS or DMS table spaces, the containers should be of equal size and on different physical drives. The greater the number of containers, the greater the potential for parallel I/O operations.

Performance: Container Size

3-34

Guidelines for container sizes:
DMS table space containers:
One overhead page + (extentsize in pages * number of extents) (see the worked example after this list)
SMS table space containers:
Sizes are not specified because the O/S handles page allocation
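As a worked example of the DMS guideline (the numbers are illustrative): with an extent size of 32 pages and room for 100 extents, the container would be sized at 1 + (32 * 100) = 3,201 pages, or roughly 12.5 MB with a 4-KB page size.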

Page 101: System Administration

Data Placement 3-35

When using RAID devices for containers, observe the following guidelines for increased performance:

Define one DMS container per RAID array.
Make the extent size a multiple of the RAID stripe size so that only one I/O operation is required per extent.
Make the container size a multiple of the extent size so that disk space is not wasted.
Make the prefetch size a multiple of the extent size so that disk I/O is minimized.
Use the DB2 registry variable DB2_STRIPED_CONTAINERS to align extents to the RAID stripe boundaries. The single overhead page for the container is placed in its own extent, which means that the rest of the first extent is empty space. However, it allows the rest of the extents to line up with the RAID stripes, thus improving I/O performance. When this variable is used, the size for the container must be one extent less than the size of the RAID device.
Use the DB2 registry variable DB2_PARALLEL_IO to enable parallel disk I/O (see the db2set example after this list).
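A minimal sketch of setting these registry variables with the db2set command (the values shown are typical choices, and registry changes generally take effect only after the instance is stopped and restarted):

db2set DB2_STRIPED_CONTAINERS=ON
db2set DB2_PARALLEL_IO=*

Here the asterisk enables parallel I/O for all table spaces; a list of specific table space IDs can be used instead.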

RAID Devices

3-35

When using RAID arrays:
Define one DMS container per RAID array
Make EXTENTSIZE a multiple of RAID stripe size
Make container size a multiple of the EXTENTSIZE
Make PREFETCHSIZE a multiple of EXTENTSIZE
Use the DB2_STRIPED_CONTAINERS registry variable
Use the DB2_PARALLEL_IO registry variable

Page 102: System Administration

3-36 Data Placement

Since pages are only allocated as needed in SMS table spaces, small tables will have less wasted space if SMS table spaces are used.

If you need to allocate multiple pages at a time, enable multipage file allocation. This feature is implemented by running the db2empfa utility and is indicated by the MULTIPAGE_ALLOC database configuration parameter. When the value is set to yes, all SMS table spaces are affected—there is no selection possible for individual SMS table spaces—and the value cannot be reset to no.
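A minimal usage sketch (the database alias db2cert is the one used in the CREATE DATABASE example earlier in this module; substitute your own database alias):

db2empfa db2cert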

Performance: SMS Table Spaces

3-36

Use SMS for small tables

Use db2empfa to allocate multiple pages

Page 103: System Administration

Data Placement 3-37

One of the major benefits of DMS table spaces is that data can be separated across multiple table spaces. Use separate table spaces for table, index, and large data. To realize maximum performance, place the table spaces on separate physical disks, and use multiple containers for each table space.

When it comes to choosing between using files or devices as the containers, be aware that devices provide a 10 to 15 percent performance enhancement over files. However, since the pages from file containers are already cached in the file system cache, you can reduce the size of the buffer pool for the table space and still get good performance. In addition, files are more useful when you want to avoid the extra administrative effort associated with setting up and maintaining devices. Finally, a file may be preferable when a container size is small, since a device can only support one container, and placing a small container in a large device would be a waste of space.
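For example, a table's row data, index data, and LOB data can be directed to separate DMS table spaces when the table is created (a sketch; the table, column, and table space names are illustrative assumptions, and the table spaces must already exist):

CREATE TABLE emp_photo (
  empno  CHAR(6)   NOT NULL,
  photo  BLOB(1M)
)
IN myregspace
INDEX IN myindexspace
LONG IN mylongspace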

Performance: DMS Table Spaces

3-37

DMS table space benefits:
You can separate data for one table across multiple table spaces:
• Row data — Regular table space
• Index data — Regular table space
• LOB data — Large table space
Device containers provide better performance
File containers provide:
• Less DB2 buffer pool resources
• Less administration effort
• Better utilization for small containers

Page 104: System Administration

3-38 Data Placement

There are several factors to consider when designing system catalog space. If you want to maximize storage capacity, use an SMS table space since pages are allocated only as they are needed, and most system catalog tables are small. If you use a DMS table space, create one with a small extent size (2–4 pages).

If you want to take advantage of the file system cache for LOB data types, use either an SMS table space, or a DMS table space with file-type containers.

If the database is expected to grow and the size cannot be predicted, use a DMS table space since this type of table space has the option of adding containers.

Performance: Catalog Table Space

3-38

When planning for catalog space, consider the following:
SMS space provides maximum storage
SMS, or DMS space with file containers, provides file system caching of LOB data
DMS space provides unlimited growth

Page 105: System Administration

Data Placement 3-39

To properly utilize a System Temporary table space:

Create one SMS system temporary table space for every different page size used by the regular table spaces (see the example after this list). The database manager is more efficient if it can use a temporary space that has a page size that matches the table spaces in the query.
Define the containers for the SMS table spaces so they share the same file system(s) and are placed on different physical disks. Three containers on three separate disks is a recommended starting point. This has the benefit of maximizing parallel I/O operations. When a temporary table is created and deleted by DB2, the disk space that was used is reclaimed. This minimizes the total disk requirement.
If the highest level of performance is required and dedicated disk space is available, use DMS space for system temporary table spaces.
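For example, if some regular table spaces use an 8-KB page size, a matching system temporary table space might be created as follows (a sketch; the buffer pool, table space name, sizes, and container paths are illustrative assumptions, and an 8-KB buffer pool must exist before the table space can use it):

CREATE BUFFERPOOL bp_temp8k SIZE 1000 PAGESIZE 8K

CREATE SYSTEM TEMPORARY TABLESPACE tempspace8k
  PAGESIZE 8K
  MANAGED BY SYSTEM
    USING ('/db2temp1/inst01', '/db2temp2/inst01', '/db2temp3/inst01')
  BUFFERPOOL bp_temp8k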

Performance: System-Temporary Space

3-39

When considering system-temporary space:
Create one temporary space for each page size
Use multiple containers on different disks
If maximum performance is required, use DMS
If minimum space is required, use SMS

Page 106: System Administration

3-40 Data Placement

When planning for the storage of user table data, consider the following factors:

Amount of data — If the design involves tables with a small amount of data, consider using SMS table spaces. It is more prudent to use DMS table spaces for larger, more frequently accessed tables.
Type of data — Infrequently used data without critical response time requirements may be placed on slower, less expensive devices.
Minimizing disk reads — It may be beneficial for user table spaces to have their own bufferpool.
Recoverability — Group related tables into a single table space. They may be related via referential integrity, triggers, or structured data types. Since backup and restore utilities work at the table space level, all the tables in one table space stay consistent and recoverable.

Performance: User Table Spaces

3-40

For user table spaces consider the following factors:
Amount of data
Type of data
Minimizing disk reads
Recoverability

Page 107: System Administration

Data Placement 3-41

Summary

3-41

You should now be able to:
Describe the attributes of an SMS table space
Describe the attributes of a DMS table space
Describe the different types of containers
Describe the concept of a bufferpool
Explain how bufferpools are assigned to table spaces
List, alter, and drop a table space
Identify the optimum number of containers per table space

Page 108: System Administration

3-42 Data Placement

Lab Exercises

3-42

You should now complete the lab exercises for Module 3.

Page 109: System Administration

Creating an Instance 02-2003 4-1© 2002, 2003 International Business Machines Corporation

Creating an Instance

Module 4

Page 110: System Administration

4-2 Creating an Instance

Objectives

4-2

At the end of this module, you will be able to:
Create, configure, manage, and drop an instance
Obtain and modify database manager configuration information
Obtain and modify DB2 registry variable values
Configure client/server connectivity
Use DB2 discovery
Catalog databases using DB2 Configuration Assistant

Page 111: System Administration

Creating an Instance 4-3

An instance consists of the database manager, the databases that are assigned to it, and a configuration file called the DBM CFG. All of the configuration parameters for the instance are contained within this file. The DBM CFG file is actually a file in the instance home directory named db2systm. However, you can only edit this file using DB2 commands or the GUI tools and not normal text editors, so it is best to refer to it by the logical name DBM CFG.

There can be many databases for one instance and many instances on one machine. A single database, however, can only belong to one instance.

Instance Components

4-3

[Diagram: an instance, and its databases, within a machine.]

In addition to the parameters found in the DBM CFG file, there are registry variables that modify the behavior of the instance. They are similar to environment variables. The registry variables are discussed later in this module.

Tip

Page 112: System Administration

4-4 Creating an Instance

Each of these users will be discussed in following slides. To create users and groups on a system, you should consult your operating system documentation. However, the command samples below should give you an idea as to the steps required.

To create a user and group as the owner of the instance, where the group is instgrp and the user is instusr, type:

mkgroup instgrp
mkuser pgrp=instgrp instusr
passwd instusr

To create a fenced user and fenced group where the group is fencgrp and the user is fencusr:

mkgroup fencgrp
mkuser pgrp=fencgrp fencusr
passwd fencusr

To create users and groups, you need root access on UNIX-based systems or local Administrator access on Windows and OS/2 operating systems.

Requirements to Create a UNIX Instance

4-4

Prior to creating an instance on UNIX, the following users must exist:
SYSADM user
Fenced user
DAS user

Page 113: System Administration

Creating an Instance 4-5

Before a database manager instance can be created on UNIX platforms, a user must exist to function as the systems administrator (SYSADM) for the instance. Some thought should be given to the name chosen for this user, because the name of the database manager instance is the same as the name for this user. This user also becomes the owner of the instance. When the instance is created, this user's primary group name is used to set the value of the database manager configuration parameter SYSADM_GROUP. Any additional users that wish to have SYSADM authority on the instance must also belong to this group. SYSADM authority has total authority over all functions for the instance in a similar way that root has total authority on a UNIX system, or Administrator has total authority on a Windows system.

The SYSADM User and Group

4-5

Before the instance can be created, you need to create:
A DB2 UDB systems administrator user (SYSADM)
A DB2 UDB systems administrator group (SYSADM_GROUP)

Page 114: System Administration

4-6 Creating an Instance

Before a database manager instance can be created on a UNIX platform, a user must exist that can run any user-defined functions (UDFs) and stored procedures in a fenced mode. This user is necessary since UDFs can be created using the C programming language, which can use pointers to reference memory addresses outside of its defined memory space. To prevent a poorly written UDF from corrupting the DB2 UDB memory, UDFs are commonly run in a fenced section of memory to prohibit references to memory addresses outside of the fence.

Fenced User

4-6

You must create a fenced user, which:
Allows user-defined functions to run in fenced mode
Prevents a poorly written UDF from corrupting the DB2 UDB memory structures

Page 115: System Administration

Creating an Instance 4-7

If this installation of DB2 UDB is a new installation, then a database administration server (DAS) is automatically created along with the first database manager instance. During the installation process, you are asked to provide a name for the DAS. The user name that you provide becomes the name of the DAS and the installing user has SYSADM authority on the DAS. In addition, the registry variable DB2ADMINSERVER is set to the name of the DAS. If you do not plan on using the GUI administration tools, the DAS is not needed and can be dropped after the database manager instance has been created.

You can drop the DAS by using the dasdrop command (UNIX) or the db2admin drop command (Windows).

If, at a later date, you decide to use the GUI administration tools and you need to have a DAS, you can create one using the dascrt command (UNIX) or the db2admin create command (Windows).

DAS User

4-7

During DB2 installation, a database administration server (DAS) is created, which requires a separate SYSADM user.

Page 116: System Administration

4-8 Creating an Instance

The DAS is a special DB2 process for managing local and remote DB2 servers. There can be only one DAS per machine and it listens to port 523. The DAS:

Is a special-purpose DB2 server
Has no user-accessible databases

The DAS is used to satisfy requests from the DB2 administration tools such as the Control Center and the Configuration Assistant. Some examples of these requests are:

Obtain user, group, and operating system configuration information
Start/stop DB2 instances
Set up communications for DB2 server instances
Return information about the DB2 servers to remote clients
Collect information results from DB2 Discovery

The DBM CFG file for the DAS is similar to the DBM CFG files used by other instances, except that it only contains a subset of the parameters found in a normal DBM CFG file. Unlike other DBM CFG files, it is a file named das2systm that resides in the home directory of the DAS instance; like them, it cannot be modified with normal text editors.

Database Administration Server (DAS)

4-8

[Diagram: the GUI tools communicate with the DAS on the machine through port 523.]

Page 117: System Administration

Creating an Instance 4-9

The DB2 utility used to create the database manager instance is db2icrt. In the example above, a database manager instance is created with the name instusr and assigned a fenced user named fencusr. The user instusr is the owner of the instance and is assigned SYSADM authority over the instance.

In addition, all files associated with the instance, plus any default SMS table spaces, are created in the $HOME directory for user instusr.

Creating an Instance

4-9

To create a database manager instance, use the db2icrt command.
Syntax:
db2icrt -u fenced_user instance_owner
Example:
db2icrt -u fencusr instusr

To create the DAS instance, use the dascrt command.
Syntax:
dascrt instance_owner
Example:
dascrt dasusr

Normally only root can run these commands.

Page 118: System Administration

4-10 Creating an Instance

The db2icrt command installs and configures the database manager instance on the UNIX server. Normally only the user root has authority to run this command, but in our classroom environment, the student logins have been given authority to run this command.

The environment variable DB2INSTANCE is set to the name of the database manager instance and PATH is set to include the path to the DB2 UDB binary files. A new directory, sqllib, is created in the $HOME directory of the user specified as the SYSADM.

If it is a new installation on a Windows system, a DAS is created. The DAS is not created on Linux or Unix systems.

The communications protocols that are supported on the server are examined and entries are made in the operating system services file to allow communications with the database manager instance.

Finally, the files necessary to set environment variables are created. The first of these two files is db2profile (or db2bashrc or db2cshrc, depending on your shell), which sets the default environment variables. This file is often overwritten by new versions of DB2 UDB or by fixpacks, and you should not make any changes to it. The second file is called userprofile and is provided for your use to set environment variables unique to your installation. It will not be overwritten by new versions of DB2 UDB or by fixpacks.
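For example, a user who wants the instance environment set up at login can source db2profile from the instance owner's sqllib directory, typically by adding a line like the following to the .profile file (the path assumes the instance owner instusr used earlier in this module):

. /home/instusr/sqllib/db2profile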

The db2icrt Command in Detail

4-10

The db2icrt command:
Creates the database manager instance
Sets the environment variables DB2INSTANCE and PATH
Creates the /sqllib subdirectory in the $HOME directory of the SYSADM
Creates the DAS, if it is a new Windows installation
Configures communications based on the server's available protocols
Creates the db2profile and userprofile files

Page 119: System Administration

Creating an Instance 4-11

Installing DB2 on a Windows platform is much simpler than on a UNIX platform. During installation, a default DAS is created named db2das00, and a default instance is created named db2. During the installation process, the installation program prompts you for the name of the user that you want to be the system administrator for both of these instances. If the user does not exist, the program asks if you want one created.

The installation program builds the C:\Program Files\SQLLIB\DB2DAS00 and the C:\Program Files\SQLLIB\DB2 directories and puts the files associated with each instance in the appropriate directory (such as the db2systm file). The installation program also builds the directory C:\DB2\NODE0000 and places any table spaces for any databases associated with the DB2 instance in this directory.

Creating an Instance in Windows

4-11

The installation program:
Creates a Systems Administrator user
Creates a DAS
Creates a default instance

Page 120: System Administration

4-12 Creating an Instance

Use the following command if you would like to drop a database manager instance on either UNIX or Windows:

Syntax:

db2idrop instance_name

Example:

db2idrop instusr

Use the following command if you would like to drop the DAS on a UNIX platform:

dasdrop

Use the following command if you would like to drop the DAS on a Windows platform:

db2admin drop

In order to execute these commands you need root access on UNIX based systems or local Administrator access on the Windows operating system.

Drop Instance

4-12

To drop an instance, use the following commands:
Database manager instance: db2idrop instance_name
UNIX DAS: dasdrop
Windows DAS: db2admin drop

Page 121: System Administration

Creating an Instance 4-13

The DAS provides:Remote users the opportunity to DISCOVER instances and databasesThe ability to schedule jobsRemote access of the server by the GUI tools on the client

Why drop the DAS?The DAS may not be needed (there is no GUI interface, no scheduling of jobs, etc.).You need to reduce memory footprint.You don’t want anyone using DISCOVER (perhaps for security reasons) to find other servers and databases on the network.

What if you do drop and recreate the DAS?
If you DROP the DAS and recreate it, your other instances and all your databases are untouched.

You are exactly back to where you were (except for some GET ADMIN CFG settings), including all other instances and databases being intact.

Why Drop the DAS?

4-13

The DAS may not be needed (no GUI interface, no scheduling of jobs, etc.)
You need to reduce memory footprint
You don't want anyone using DISCOVER

What if you do drop and recreate the DAS?
If you DROP the DAS and recreate it, your other instances and all your databases are untouched

Page 122: System Administration

4-14 Creating an Instance

The DAS normally starts automatically when the operating system boots. However, the DAS can be set to start manually. In this case, you must use the db2admin start command to start the DAS. Since there is only one DAS per machine, it is not necessary to specify the name of the DAS.

You can start a normal instance in two different ways depending on which tool you use:

Command Line Processor — Enter db2start at the command prompt. The CLP starts the database manager instance specified in the environment variable DB2INSTANCE.
GUI Control Center — Invoke the Control Center and expand the objects in the left pane until the instances are visible. Then right-click on the instance you want, and a menu appears. Click on the Start menu option.

Starting Instances

4-14

The DAS starts when:
The operating system boots up
You use the db2admin start command

You can start a normal instance:
By using the db2start command
By using the Start option in the GUI Control Center menus

Page 123: System Administration

Creating an Instance 4-15

To stop an instance, use the db2stop command:

db2stop

The command stops the current instance (as specified by the DB2INSTANCE environment variable), so no instance name is required. The instance is not stopped if a connection to any of the databases in the instance exists. In this case, the instance must be stopped forcefully by using the force keyword, which forces all applications to disconnect and then stops the instance.

db2stop force

Stopping Instances

4-15

The DAS stops:
Automatically when the operating system shuts down
When you use the db2admin stop command

You can stop a normal instance:
Using the db2stop command
Using the GUI Control Center Stop menu option
A normal instance is stopped only when all applications have disconnected

Page 124: System Administration

4-16 Creating an Instance

If you wish to view the current DBM CFG parameter values, type the command:

db2 GET DBM CFG

This returns a list of all of the configuration parameters and their current values. For illustration purposes, here are a couple of configuration parameters:

MAXAGENTS indicates the maximum number of database manager agents (db2agent) available at any given time to accept application requests.
NUMDB limits the maximum number of concurrently active databases.

If you wish to change the current values of these parameters, enter the following command:

Syntax:
UPDATE DBM CFG USING parameter value [parameter value...]

Example:
db2 UPDATE DBM CFG USING MAXAGENTS 10 NUMDB 3

Note that you can change several parameters at one time by listing the parameter and value pairs one after another. Once values have been updated, they do not take effect until the database manager is restarted.

To see a list of current and pending configuration values, run the command:

db2 GET DBM CFG SHOW DETAIL

Instance Configuration Using the CLP

4-16

You can use the Command Line Processor to access the instance configuration file:

Use the GET DBM CFG command to view current values
Use the UPDATE DBM CFG command to change values

Page 125: System Administration

Creating an Instance 4-17

To configure DBM CFG parameters in the Control Center, expand the objects in the left pane of the Control Center until the instances are visible. Right-click on the instance you want and select Configure Parameters to display the DBM Configuration window.

Instance Configuration Using the Control Center

4-17

Page 126: System Administration

4-18 Creating an Instance

The parameters are grouped in the DBM Configuration window into related categories. Scroll through the list of parameters to find the desired category, then locate the parameter you want to change in that section. Click on the parameter, and then click on the value in the next column to change it. The Hint section at the bottom contains a detailed description of the parameter plus the value ranges that are valid for the parameter.

To modify a parameter value, highlight the parameter and enter a value in the Value field at the bottom left of the screen. Some parameters require you to select from a drop-down list that appears in the Value field.

Update Instance Configuration

4-18

Page 127: System Administration

Creating an Instance 4-19

The DB2 Profile Registry holds variable values that function similarly to environment variables and control the DB2 environment. However, there are very few true environment variables recognized by DB2. Registry variables have two distinct advantages: they can be changed with the db2set command without rebooting the machine (the new values take effect the next time the instance is started), and they are kept in a centrally located registry where they are easily managed. The registry variables are available in both UNIX and Windows environments.

The variables in the registry vary by platform, but here are a few examples that are common to all platforms:

DB2CODEPAGE specifies the code page of the data presented to DB2 by the application. If not set, DB2 uses the code page set for the operating system.
DB2DBFT specifies the default database for implicit connections.
DB2COMM specifies which DB2 communication listeners are started when the database manager is started. If this is not set, no DB2 communication listeners are started at the server.

DB2 Profile Registry variables are stored in profile files on UNIX platforms and in the Registry on Windows platforms.

Registry Variables

4-19

Registry variables control the instance environment and have two main advantages:

Changes take effect without rebooting the machine
All controlling factors are centrally located and easily managed

Page 128: System Administration

4-20 Creating an Instance

The DB2 Profile Registry is divided into two levels:

DB2 instance-level Profile Registry — The variable settings for a particular instance are kept in this registry. The majority of DB2 registry variables are placed here.
DB2 global-level Profile Registry — This registry contains the machine-wide variable settings. If a variable is not set for a particular instance, the value in this registry is used.

Levels of the Registry

4-20

The DB2 Profile Registry is divided into two levels:
Instance level
Global level

Page 129: System Administration

Creating an Instance 4-21

Use the db2set command to view the parameters.

db2set -i displays all of the instance-level parameters that have been set.
db2set -g displays all of the global-level parameters that have been set.
db2set -l displays all of the defined profiles (DB2 instances) on the machine.
db2set -all displays all of the parameters that have been set to a value.
db2set -lr displays all of the registry variables that are available on the platform, regardless of whether or not they have been set.
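You can also display the value of a single registry variable by passing just its name to db2set; for example, to check which communication listeners are configured for the current instance:

db2set DB2COMM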

View Registry Variables

4-21

Use the db2set command to view registry variable values:
db2set -i for instance-level parameters
db2set -g for global-level parameters
db2set -l for all the defined profiles
db2set -all for all the registry variables with values
db2set -lr for all available parameters

Page 130: System Administration

4-22 Creating an Instance

Examples of commands to set registry variables are shown above.

Set Registry Variables

4-22

To set a parameter for the current instance:

To set a parameter’s value for a specific instance:

To set a parameter at the global level:

Syntax: db2set parameter=value

Example: db2set DB2COMM=tcpip,npipe

Syntax: db2set parameter=value -i instance_name

Example: db2set DB2COMM=tcpip,npipe -i altinst

Syntax: db2set parameter=value -g

Example: db2set DB2COMM=tcpip,npipe -g

Page 131: System Administration

Creating an Instance 4-23

Most system administration operations require that you have a certain level of authority or privilege in order to perform them. Some of these operations are shown above.

Instance Authorization

4-23

Operation: Authority or Privilege

Create or drop an instance: root access on UNIX systems and local Administrator on Windows systems

Start or stop an instance: SYSADM, SYSCTRL, or SYSMAINT

Update DB CFG and DBM CFG files: SYSADM

db2set: SYSADM

Page 132: System Administration

4-24 Creating an Instance

There are two ways to configure connectivity between a client machine and a server machine: the manual method and the Configuration Assistant (CA), which is a GUI-based tool.

Both of these methods are discussed on the following slides.

Client/Server Connectivity

4-24

There are two ways to configure client/server connectivity:
Manual configuration
DB2 Configuration Assistant (CA)

Page 133: System Administration

Creating an Instance 4-25

The client must be able to identify the server on the network. To do this, the client must have information about the server, such as the communications protocol and the server name, cataloged in its node directory. In order to recognize a database, the client must have information about the database, such as the database name and alias, cataloged in its database directory. Finally, there are some additional steps required that are specific to the communications protocol being used.

Manual Configuration

4-25

To enable connectivity, you must complete the following configurations on the client:

Catalog the server system
Catalog the database
Set up a communications protocol

Page 134: System Administration

4-26 Creating an Instance

In the scenario, you will be setting up client/server connectivity on a Windows client that will be connecting to a UNIX server, which has a DB2 instance that contains a database called SAMPLE.

The server has already been set up with the following:

The value of the registry variable DB2COMM has been set to tcpip.
There is a valid entry in the UNIX services file that identifies a TCP/IP protocol with a port number; in our case, it is port number 3700.
The DBM CFG parameter SVCENAME has been set to the same name that was used for the TCP/IP port number 3700 in the services file.

The Windows client has already been set up with a name and IP address in the hosts file that will resolve the server on the network. In our scenario, the host name is db2server and the IP address is 9.186.128.141.
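The commands below sketch how such a setup might have been done. The service name db2svc is a placeholder, since the scenario does not name the actual services file entry.

On the UNIX server:
db2set DB2COMM=tcpip
(services file entry)  db2svc  3700/tcp
db2 UPDATE DBM CFG USING SVCENAME db2svc
db2stop
db2start

On the Windows client (hosts file entry):
9.186.128.141  db2server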

Manual Configuration Scenario

4-26

To illustrate connectivity, we will use a scenario:
Settings on the UNIX server:
• Value of DB2COMM is tcpip
• services file contains valid TCP/IP port number
• SVCENAME matches the services file entry

Settings on the Windows client machine:
• Hosts file entry to resolve the server
• Catalog the server
• Catalog the database

Page 135: System Administration

Creating an Instance 4-27

The syntax required to catalog the server on the client machine is shown above. Note that the IP address could have been replaced with the host name, and the port number could have been replaced with the service name. In addition, the port number on the client and the server must be the same.

Cataloging the Server

4-27

Syntax for cataloging a server:

CATALOG TCPIP NODE node_name REMOTE {hostname | ip_address}
    SERVER {svcename | port_number}

Example:

CATALOG TCPIP NODE db2serv REMOTE 9.186.128.141 SERVER 3700
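As noted above, the host name and a service name can be used in place of the IP address and port number. A variant of the same example, assuming the client's services file maps a hypothetical service name db2svc to port 3700, would be:

CATALOG TCPIP NODE db2serv REMOTE db2server SERVER db2svc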

Page 136: System Administration

4-28 Creating an Instance

The required syntax to catalog a database on the client is shown above.

Cataloging the Database

4-28

Syntax for cataloging a database:
CATALOG DATABASE db_name AS db_alias AT NODE node_name

Example:
CATALOG DATABASE sample AS srv_samp AT NODE db2serv
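After cataloging, you can verify the setup from the client by connecting with a user ID and password that are valid on the server (the user ID and password below are placeholders):

db2 CONNECT TO srv_samp USER db2admin USING password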

Page 137: System Administration

Creating an Instance 4-29

You must have either SYSADM or SYSCTRL authority to catalog a node or database.

Cataloging Authorization

4-29

To catalog a node or a database you need either:SYSADM authoritySYSCTRL authority

Page 138: System Administration

4-30 Creating an Instance

The Configuration Assistant is a GUI tool that takes advantage of the discovery function of DB2 to automate the configuration of remote databases. The discovery function operates by searching the network for all the DAS instances, normal instances, and databases that have their discovery configuration parameters set to allow them to be discovered. These instances and databases reply to the CA’s discovery request with their connectivity information. The instances and databases that do not have their discovery configuration parameters set to allow for discovery do not reply and remain hidden. Therefore, it is possible to have some machines, instances, and databases that are discoverable and some that are hidden. The discovery function has two operating modes:

Search discovery — The network machines are searched for all instances and databases with configuration parameters set to allow them to be discovered by the discovery function.
Known discovery — A specific hostname is provided to the discovery function and the network is searched for that machine. Any instances and databases on that machine that are discoverable reply to the CA request.

Configuration Assistant (CA)

4-30

The Configuration Assistant uses two forms of automatic client configuration:

Search discovery
Known discovery

Page 139: System Administration

Creating an Instance 4-31

You must set the database manager and database configuration parameters to enable the proper functioning of the discovery feature. There are configuration parameters at the DAS level, the database manager instance level and the database level. Therefore, it is possible to have whole machines, just instances, or just databases that do not respond to a discovery request.

There are two discovery parameters at the DAS level:

DISCOVER_COMM — This discovery parameter defines protocols that clients use to issue search discovery requests. The valid values are TCPIP and NETBIOS, or a combination of both. There is no default.
DISCOVER — This discovery parameter determines the type of discovery mode that is started when the DAS starts. The valid values are SEARCH, KNOWN, and DISABLE, and the default is SEARCH.

SEARCH — When the DAS starts, connection protocols for all of the connections specified in the DAS configuration parameter DISCOVER_COMM and the registry variable DB2COMM are started. SEARCH provides a superset of the functionality provided by KNOWN discovery; when DISCOVER is set to SEARCH, the DAS handles both search and known discovery requests from clients.

Discovery Parameters

4-31

Use the following parameters to establish DB2 discovery:
DAS CFG
• DISCOVER_COMM
• DISCOVER
DBM CFG
• DISCOVER_INST
DB CFG
• DISCOVER_DB

Page 140: System Administration

4-32 Creating an Instance

KNOWN — When the DAS starts, only the connections specified in the DB2COMM registry variable are started. Therefore, only KNOWN discovery requests are processed.
DISABLE — The DAS for the machine does not handle any discovery requests.

There is one discovery parameter at the instance level:

DISCOVER_INST — This discovery parameter specifies whether the instance replies to a discovery request. The acceptable values are ENABLE and DISABLE. The default is ENABLE.

There is one discovery parameter at the database level:

DISCOVER_DB — This discovery parameter specifies whether the database replies to a discovery request. The acceptable values are ENABLE and DISABLE. The default is ENABLE.
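As a sketch of how these parameters might be set (the values chosen here are illustrative, not recommendations), you could hide a whole machine, a single instance, or a single database from discovery with commands such as:

db2 UPDATE ADMIN CFG USING DISCOVER DISABLE
db2 UPDATE DBM CFG USING DISCOVER_INST DISABLE
db2 UPDATE DB CFG FOR sample USING DISCOVER_DB DISABLE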

Page 141: System Administration

Creating an Instance 4-33

To invoke the CA, click Start > Program Files > IBM DB2 > Set-up Tools > Configuration Assistant. You can also start the CA by executing the db2ca command from a command line.

Configuration Assistant Overview

4-33

Page 142: System Administration

4-34 Creating an Instance

Add a remote database by selecting Selected > Add Database Using Wizard from the menu bar.

Configuration Assistant: Add a Database

4-34

Page 143: System Administration

Creating an Instance 4-35

The Add Database wizard shows a list of pages on the left side of the window. This list of pages changes according to selections you make on the first and on subsequent pages.

The first page of the Add Database wizard is the Source page where you can choose a method of connection to the database. You can choose to:

Create and use a file that contains connection profile information.
Search the network to find local or remote databases.
Manually configure a connection.

To add a remote database, select the Search the network option and press Next.

Add Database Wizard: Set Up Connection

4-35

Wizard pages

Page 144: System Administration

4-36 Creating an Instance

On the Network page, you choose whether you want to add a new database that is already known by your local instance or whether you want to search the network for other databases. Expand one of these options to display the database you wish to add and select the database. Click Next to continue to the Alias page.

Alias Page
An alias can be specified on this page to provide a local name for a remote database. Provide a database alias and click Next.

Data Source Page
On this page, you can register the database as an ODBC data source.

Add Database Wizard: Search the Network

4-36

Page 145: System Administration

Creating an Instance 4-37

When you click Finish, you will receive a confirmation window. You can then verify the connection by entering your user ID and password and clicking on the Test Connection button in the confirmation window.

Add Database Wizard: Testing the Connection

4-37

Page 146: System Administration

4-38 Creating an Instance

The database has been configured on the client.

Search: Configuration is Complete

4-38

Page 147: System Administration

Creating an Instance 4-39

Summary

4-39

You should now be able to:
Create, configure, manage and drop an instance
Obtain and modify database manager configuration information
Obtain and modify DB2 registry variable values
Configure client/server connectivity
Use DB2 discovery
Catalog databases using DB2 Configuration Assistant

Page 148: System Administration

4-40 Creating an Instance

Lab Exercises

4-40

You should now complete the lab exercises for Module 4.

Page 149: System Administration

Database Tables and Views 02-2003 5-1© 2002, 2003 International Business Machines Corporation

Database Tables and Views

Module 5

Page 150: System Administration

5-2 Database Tables and Views

Objectives

5-2

At the end of this module, you will be able to:
Obtain and modify database configuration information
Start and stop a database
Force users and applications off an instance
Create and use schemas
Query the contents of the system catalog tables
Understand the impact of large objects on tables
Create temporary tables
Create views
Create and use federated objects

Page 151: System Administration

Database Tables and Views 5-3

The parameters for each individual database are stored in the database configuration file, or DB CFG. This file is named sqldbcon and is located in the /NODE0000/SQLnnnnn directory, where nnnnn is the number assigned to the database when it was created. Even though the file physically exists as the file sqldbcon, it can only be viewed and modified by using DB2 commands through the CLP, or by using the DB2 UDB Control Center. It cannot be edited using a normal text editor. Therefore, it is best to refer to this file by its logical name of DB CFG.

The starting point for the directory /NODE0000/SQLnnnnn is one of the options specified in the CREATE DATABASE statement, or the value assigned to the DFTDBPATH database manager (DBM) configuration parameter for the instance.

The Database Configuration File

5-3

[Slide graphic: each database has its own configuration file — DB CFG (DB 1), DB CFG (DB 2)]

Page 152: System Administration

5-4 Database Tables and Views

When you use the GET DB CFG command to view the DB CFG parameter values, you get a list of all the configuration parameters for a database and their assigned values. The CLP accepts several variations in the keywords for the GET DB CFG command. The following commands are equivalent:

db2 GET DB CFG FOR database_name
db2 GET DATABASE CFG FOR database_name
db2 GET DATABASE CONFIG FOR database_name
db2 GET DATABASE CONFIGURATION FOR database_name

To help you understand the purpose of the DB CFG parameters, here are a couple of examples:

BUFFPAGE — This parameter is the default bufferpool size that is used when the CREATE BUFFERPOOL statement does not specify the size.
DFT_EXTENT_SZ — This parameter is the default table space extent size when the size has not been specified at table space creation time.

Managing the DB CFG File Using CLP

5-4

You can view and update the DB CFG file using the CLP:

Use the GET DB CFG command to view DB CFG values.
Syntax:
db2 GET DB CFG FOR database_name
Example:
db2 GET DB CFG FOR db1

Use the UPDATE DB CFG command to update DB CFG values.
Syntax:
db2 UPDATE DB CFG FOR database_name USING parameter value
Example:
db2 UPDATE DB CFG FOR db1 USING buffpage 1000

Page 153: System Administration

Database Tables and Views 5-5

You can use the UPDATE DB CFG command to modify the value of a parameter. Note that you can update several parameters at once by listing the parameter and value pairs one after another.

Syntax:
db2 UPDATE DB CFG FOR database_name USING parameter value [parameter value...]

Example:
db2 UPDATE DB CFG FOR db1 USING buffpage 1000 dft_extent_sz 4

Updated values only take effect after the database is restarted.

Page 154: System Administration

5-6 Database Tables and Views

You can also manage the DB CFG file by using the DB2 Control Center. Invoke the Control Center by selecting Start > Program Files > IBM DB2 > General Administration Tools > Control Center.

Expand to the Databases folder in the Control Center, right-click on the desired database, and select Configure Parameters. This displays the Database Configuration window.

Managing DB CFG Using CC

5-6

Page 155: System Administration

Database Tables and Views 5-7

The database configuration parameters are grouped into categories that are accessible by scrolling down to the desired section heading, then selecting the desired parameter in that section.

You can modify the parameter values in the Value column located to the right of the parameter name. You can view a detailed description of the parameter in the Hint box located at the bottom of the window.

Click OK after you have finished changing the parameter values, but be aware that the changes do not take effect until the database is restarted.

Database Configuration Window

5-7

Page 156: System Administration

5-8 Database Tables and Views

A database is implicitly started when the first application connects and is stopped when the last application disconnects. When a database is implicitly started by the first connection, all necessary services are started, the required memory is allocated, and only then is the database ready to process the request by the application. As soon as all applications have disconnected from the database, all services are stopped, the required memory is released, and the database is stopped.

To explicitly start a database, use the ACTIVATE DATABASE command:

Syntax:
db2 ACTIVATE DATABASE database_name

Example:
db2 ACTIVATE DATABASE sample

When the database is started by using the ACTIVATE DATABASE command, all necessary services are started, the required memory is allocated, and the database is idle, but ready for the first connection.

If ACTIVATE DATABASE was used to start the database, then the database must be explicitly stopped by using the DEACTIVATE DATABASE command. Until this command is issued, processes are not implicitly stopped, nor is the required memory released when the last

Starting and Stopping Databases

5-8

A database starts and stops under two related pairs of conditions:

Implicit conditions:
The database starts when the first application connects
The database stops when the last application disconnects

Explicit conditions:
The ACTIVATE DATABASE command is issued
The DEACTIVATE DATABASE command is issued and all applications have disconnected

Page 157: System Administration

Database Tables and Views 5-9

application disconnects. The database remains operational but waits idly for the next connection by an application. To explicitly stop a database use the DEACTIVATE DATABASE command:

Syntax:
db2 DEACTIVATE DATABASE database_name

Example:
db2 DEACTIVATE DATABASE sample

If this command is issued and applications are still connected to the database, it is not executed until the last application has disconnected.

Page 158: System Administration

5-10 Database Tables and Views

If you need to forcefully disconnect all applications on the instance and stop a database, you can use the FORCE APPLICATION command:

db2 FORCE APPLICATION ALL

It is also possible to force individual applications off of the instance by using a combination of the LIST APPLICATIONS command and the FORCE APPLICATION command:

LIST APPLICATIONS — This command provides you with descriptive information, including an application_handle, for all the applications that are connected to the instance.
FORCE APPLICATION (application_handle) — Use this command to force the application specified by the application handle off of the instance. For example: db2 FORCE APPLICATION (1)

The application connection is terminated and any uncommitted transactions are rolled back.
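A typical sequence, assuming the handle returned by LIST APPLICATIONS for the offending connection is 1 (the handle value is illustrative), might be:

db2 LIST APPLICATIONS
db2 FORCE APPLICATION (1)
db2 DEACTIVATE DATABASE sample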

FORCE APPLICATION

5-10

If a database needs to be stopped immediately and applications are still connected:

Use the FORCE APPLICATION ALL command for all connections to the instance
Alternatively, use the LIST APPLICATIONS command to view applications and their handles. Then run the following command to force an individual application:

FORCE APPLICATION (application_handle)

Page 159: System Administration

Database Tables and Views 5-11

Above is a list of the command options that have been discussed and the authority and privilege required.

Authorization

5-11

Option: Authority or Privilege

Update the DB CFG: SYSADM, SYSCTRL, or SYSMAINT

Activate or deactivate database: SYSADM, SYSCTRL, or SYSMAINT

Force application: SYSADM or SYSCTRL

Page 160: System Administration

5-12 Database Tables and Views

Schemas are database objects used in DB2 to logically group a set of database objects. Most DB2 objects are named using a two-part naming convention, where the first part of the name is the schema (otherwise known as a qualifier for the database object) and the second part is the object name. This format is schema.object_name; for example, the fully qualified name of the customer table in the db2admin schema is db2admin.customer.

When you create an object and you do not specify a schema, the object is associated with an implicit schema based on the login that you are using to access the database. For example, if you logged in as bobjones and created a customer table, then the full two-part table name would be bobjones.customer. When the login is used in an implicit schema, it is referred to as an authorization ID.

When an object is referenced in an SQL statement, it is also implicitly qualified with the authorization ID of the issuer if no schema name is specified in the SQL statement. For example, if you logged on as bobjones and issued the SQL statement:

SELECT * FROM customer

the bobjones.customer table is accessed.
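To reach a table in another schema, qualify the name explicitly; for instance, still logged on as bobjones, the following statement reads the db2admin copy of the table rather than bobjones.customer:

SELECT * FROM db2admin.customer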

Schemas

5-12

Schemas are used to group database objects.

Page 161: System Administration

Database Tables and Views 5-13

CURRENT SCHEMA is a special register that contains the default qualifier used for unqualified objects referenced in dynamic SQL statements. The value of CURRENT SCHEMA is initially set to the value of the authorization ID and can be reset using the SQL statement, SET CURRENT SCHEMA.

Special registers are a set of storage values that are defined for an application process by the database manager when the application connects to a database. Each connection is assigned its own private set. They are used to store values that are accessible by using keywords in an SQL statement. For example, the special register USER is set to the value of the login name for the application’s user and can be used in an SQL statement, such as:

SELECT password FROM password_table WHERE user_id = USER

Current Schema

5-13

Use the SET CURRENT SCHEMA command to modify the value of the CURRENT SCHEMA special register:

Syntax:
db2 SET CURRENT SCHEMA = schema_name

Example:
db2 SET CURRENT SCHEMA = db2admin
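To check which qualifier is currently in effect, you can query the special register directly from the CLP; a minimal example is:

db2 VALUES CURRENT SCHEMA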

Page 162: System Administration

5-14 Database Tables and Views

The CREATE SCHEMA statement creates a new schema in the database. The principle components are:

schema_name — This name identifies the new schema and it cannot identify a schema that already exists. The name cannot begin with sys, which is reserved for schemas that are created by the database manager when the database is created. If schema_name is used without an authorization_name, then schema_name is also used to identify the user who owns the schema.
auth_name — This name identifies the user who owns the schema. If auth_name is used without schema_name, then auth_name is also used as the name of the schema.
schema_name AUTHORIZATION auth_name — These two names identify both the name of the schema and the schema owner when you want them to be different values.
sql_statement — This is an optional clause that allows SQL statements to be included as part of the CREATE SCHEMA statement. Acceptable SQL statements include: CREATE TABLE, CREATE VIEW, CREATE INDEX, COMMENT ON, and GRANT.

CREATE SCHEMA SQL Statement

5-14

Use the following syntax to create schemas:

CREATE SCHEMA {schema_name | AUTHORIZATION auth_name | schema_name AUTHORIZATION auth_name}
    [sql_statement]

Page 163: System Administration

Database Tables and Views 5-15

The first statement in the slide above creates a schema named admin that is owned by the user admin. The second statement creates a schema that is owned by the user db2admin and is also named db2admin. The third statement creates a schema that is named admin but is owned by the user db2admin.

CREATE SCHEMA Examples

5-15

Here are some examples of the CREATE SCHEMA statement:

CREATE SCHEMA admin

CREATE SCHEMA AUTHORIZATION db2admin

CREATE SCHEMA admin AUTHORIZATION db2admin

Page 164: System Administration

5-16 Database Tables and Views

The example in the slide above creates a schema named inventory. It then creates a table and index that become part of the inventory schema and grants all permissions on the table to the db2admin user. Note that a CREATE SCHEMA statement can have multiple SQL statements embedded into it.

Create a Schema Using SQL Statements

5-16

Here is an example of the CREATE SCHEMA statement with some SQL statements:

CREATE SCHEMA inventory
    CREATE TABLE part
        (partno SMALLINT NOT NULL,
         descr VARCHAR(24),
         quantity INTEGER)
    CREATE INDEX partind ON part (partno)
    GRANT ALL ON part TO db2admin

Page 165: System Administration

Database Tables and Views 5-17

System catalog tables contain information about the definitions of the database objects (tables, views, indexes, and packages) and security information about the type of access users have to these objects. Catalog tables are stored in the syscatspace table space and assigned to the sysibm schema. These tables are updated during the operation of a database; for example, when a table, view, or index is created.

The catalog tables belong to the sysibm schema. They cannot be directly created or dropped; however, they can be updated through a set of views that belong to the sysstat schema.

The following database objects are defined in the system catalog:

A set of user-defined functions (UDFs) is created in the sysfun schema
A set of read-only views for the system catalog tables is created in the syscat schema
A set of updateable catalog views is created in the sysstat schema

System Catalog Tables

5-17

Page 166: System Administration

5-18 Database Tables and Views

You can query on the views associated with the syscat schema. For example, you can retrieve data about the tables in the database. The next few slides illustrate a sampling of the other information that is available.

Querying Catalog Tables for Table Names

5-18

To find the names of existing tables:

SELECT tabname, type, create_time
FROM syscat.tables
WHERE definer = 'INST00'

Information is stored in catalog tables using uppercase

Page 167: System Administration

Database Tables and Views 5-19

Above is an example of a query to search for table spaces created by user inst00.

Querying Catalog Tables for Table Spaces

5-19

To query for existing table spaces:

SELECT tbspace, create_time, tbspaceid, tbspacetype
FROM syscat.tablespaces
WHERE definer = 'INST00'

Page 168: System Administration

5-20 Database Tables and Views

The example above queries the catalog tables for information about bufferpools.

Querying Catalog Tables for Bufferpools

5-20

To query for existing buffer pools:

SELECT bpname, npages, pagesize
FROM syscat.bufferpools

Page 169: System Administration

Database Tables and Views 5-21

In the slide above, the SQL statement returns the name and type for all the constraints associated with the employee table. The values returned for the type column are:

F — foreign key
K — check constraint
P — primary key
U — unique

Querying Catalog Tables for Constraints

5-21

To query for constraints on specific tables:

SELECT constname, type
FROM syscat.tabconst
WHERE tabschema = 'INST00' AND tabname = 'EMPLOYEE'

Page 170: System Administration

5-22 Database Tables and Views

The data type large object (LOB) is a special category of data types provided to store large data values:

Binary large objects (BLOBs)
Single-byte character large objects (CLOBs)
Double-byte character large objects (DBCLOBs)

There are several size limitations for large objects:

Any single LOB value cannot exceed 2 gigabytes.
Any single row in the table cannot contain more than 24 gigabytes of LOB data.
Any single table cannot contain more than 4 terabytes of LOB data.

Large Objects

5-22

Large object (LOB) categories:
Binary large object (BLOB)
Single-byte character large object (CLOB)
Double-byte character large object (DBCLOB)

Page 171: System Administration

Database Tables and Views 5-23

There are three options for storing large object data:

In the table — The LOB data is stored in the table along with the rest of the data, which is referred to as inline storage. This makes the data easy to administer, but the performance of table scans is seriously degraded.
Separate table space — The LOB data is stored in a separate table space. This option allows parallel scans and therefore improves performance. However, this option is only available on DMS table spaces, which makes administration of the data more difficult.
In a file system — The LOB data is stored in a file system, which gives better performance, but it requires the use of DATALINK columns (and the DB2 Data Links Manager) and a separate backup mechanism.
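A minimal sketch of the second option, assuming DMS table spaces named datats and lobts already exist, places the regular columns in one table space and the LOB data in another:

CREATE TABLE resume
    (empno  CHAR(6) NOT NULL,
     resume CLOB(1M))
    IN datats LONG IN lobts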

Large Objects and Tables

5-23

Large objects have three storage options:
Stored in table with other data
Stored in separate table space
Stored in file system

Page 172: System Administration

5-24 Database Tables and Views

System Temporary Tables
System temporary tables are used by DB2 for intermediate processing in operations such as sorts and loads. They are under the control of the system and are created and dropped as needed. The database manager utilizes the system temporary table space to store these tables.

Global Temporary Tables
Global temporary tables are explicitly created by users running applications. These tables are created with the DECLARE GLOBAL TEMPORARY TABLE statement and are explicitly dropped with the DROP TABLE statement. They are also dropped automatically when the application terminates.

Each application can only access the temporary tables that it has created, and the tables are not visible to other applications. Therefore, multiple applications can create a temporary table with the same name. The database manager utilizes the user-temporary table space to store these tables.

Temporary Tables

5-24

The purpose of temporary tables is to store intermediate results. There are two types:

System temporary tables
Global temporary tables

Page 173: System Administration

Database Tables and Views 5-25

Some of the benefits of using global temporary tables are listed above.

Benefits of Global Temporary Tables

5-25

Global temporary tables:
Avoid catalog contention
Do not lock rows
Do not log transactions
Do not check authority
Have no name space contention
Perform automatic cleanup

Page 174: System Administration

5-26 Database Tables and Views

The following code examples illustrate how temporary tables are implemented:

Example 1
CONNECT TO sample
DECLARE GLOBAL TEMPORARY TABLE tempdata
    (id INTEGER, name CHAR(10))
    ON COMMIT DELETE ROWS NOT LOGGED IN dectemptab
DECLARE GLOBAL TEMPORARY TABLE tempinv
    (item CHAR(10), count INTEGER)
    ON COMMIT PRESERVE ROWS NOT LOGGED IN dectemptab
INSERT INTO session.tempdata VALUES(1,'John')
INSERT INTO session.tempdata VALUES(2,'Susan')
INSERT INTO session.tempinv VALUES('wheel',2)
SELECT * FROM session.tempdata --returns 2 rows
SELECT * FROM session.tempinv --returns 1 row
COMMIT
SELECT * FROM session.tempdata --returns 0 rows
SELECT * FROM session.tempinv --returns 1 row
COMMIT
CONNECT RESET --tables are dropped

Creating Temporary Tables

5-26

Page 175: System Administration

Database Tables and Views 5-27

Example 2
CONNECT TO sample
DECLARE GLOBAL TEMPORARY TABLE tempdata
    (id INTEGER, name CHAR(10))
    ON COMMIT DELETE ROWS NOT LOGGED IN dectemptab
COMMIT --tempdata exists
DECLARE GLOBAL TEMPORARY TABLE tempinv
    (item CHAR(10), count INTEGER)
    ON COMMIT PRESERVE ROWS NOT LOGGED IN dectemptab
ROLLBACK --tempdata exists, tempinv is dropped
INSERT INTO session.tempdata VALUES(1,'John')
INSERT INTO session.tempdata VALUES(2,'Susan')
ROLLBACK
SELECT * FROM session.tempdata --returns 0 rows
INSERT INTO session.tempdata VALUES(1,'John')
SELECT * FROM session.tempdata --returns 1 row
DROP TABLE session.tempdata
ROLLBACK
SELECT * FROM session.tempdata --returns 0 rows

Page 176: System Administration

5-28 Database Tables and Views

Temporary Table Authorizations

5-28

To execute the DECLARE GLOBAL TEMPORARY TABLE statement, you must have one of the following:

SYSADM authority
DBADM authority
USE privilege on the user temporary table space, with SELECT or CONTROL privilege on the permanent table or view

Page 177: System Administration

Database Tables and Views 5-29

Views are logical tables in that they do not contain any data. They only exist as a definition in the system catalog tables for the database. A view can be thought of as a SELECT statement that returns data from one or more underlying base tables or other views. The view has a name like a regular table, and as far as the user is concerned, the view responds the same as a regular table.

The syntax for the SQL statement CREATE VIEW contains the following components:

view_name — Names the view. The name cannot match an existing table or view.
column_names — Names the columns in the view. If a list of column names is specified, it must consist of as many names as there are columns in the result table of the fullselect. Do not specify the data type for columns, as it is the same as the base table or view.
fullselect — Defines the view. At any time, the view consists of the rows that would result if the SELECT statement had been executed.
CHECK OPTION — Specifies the constraint that every row inserted or updated through the view must conform to the definition of the view. The constraint is propagated to dependent views.
CASCADED — The WITH CASCADED CHECK OPTION constraint on a view means that the view inherits the search conditions as constraints from any updateable view on which the view is dependent. CASCADED is the default for WITH CHECK OPTION.

Views

5-29

Views are logical tables that are derived from one or more base tables or views.

Syntax for creating views:

CREATE VIEW view_name (column_names)
    AS fullselect
    WITH {LOCAL | CASCADED} CHECK OPTION

Page 178: System Administration

5-30 Database Tables and Views

LOCAL — The WITH LOCAL CHECK OPTION constraint on a view means the search condition of the view is applied as a constraint for an insert or update of the view or any dependent view.

Creating a view with a schema name that does not already exist results in the implicit creation of that schema, provided the authorization ID of the statement has IMPLICIT_SCHEMA authority. The schema owner is sysibm. The CREATEIN privilege on the schema is granted to PUBLIC.

A view can be created to limit access to sensitive data while allowing more general access to other data.

Page 179: System Administration

Database Tables and Views 5-31

A view can be classified into four types:

Deleteable
Updateable
Insertable
Read-only

A view is deleteable if it meets all the following conditions:

Each FROM clause of the view definition identifies only one base table (with no OUTER clause), deleteable view (with no OUTER clause), or deleteable-nested-table expression.
The view definition does not include a VALUES clause.
The view definition does not include a GROUP BY clause or HAVING clause.
The view definition does not include column functions in the select list.
The view definition does not include SET operations (UNION, EXCEPT, or INTERSECT) with the exception of UNION ALL. The base tables in the operands of a UNION ALL must not be the same table, and each operand must be deleteable.
The select list of the view definition does not include DISTINCT.

Classifying Views

5-31

Views can be classified as:
Deleteable
Updateable
Insertable
Read-only

Page 180: System Administration

5-32 Database Tables and Views

A view is updateable if ANY column of the view is updateable. A column of a view is updateable if all of the following are true:

The view is deleteable.
All corresponding columns of the operands of a UNION ALL have exactly matching data types (including length or precision and scale) and matching default values, if the fullselect of the view includes a UNION ALL.

A view is insertable if ALL columns of the view are updateable and the definition of the view does not include UNION ALL.

A view is read-only if it is not deleteable.

Page 181: System Administration

Database Tables and Views 5-33

In the slide above, the view emp_view2 is created with the WITH CASCADED CHECK OPTION and performs a select on the view emp_view1. Therefore, an INSERT or UPDATE statement for emp_view2 would have to meet both the WHERE clause condition in emp_view2 (work_dept = 'B00') and the WHERE clause condition in emp_view1 (work_dept = 'A00').

If the emp_view2 view was created instead with the WITH LOCAL CHECK OPTION, then any INSERT or UPDATE statement would only check for the WHERE clause condition in emp_view2 (work_dept= 'B00').

CREATE VIEW Examples

5-33

CREATE VIEW emp_view1 (emp_no, first_name, work_dept, job, hire_date)
    AS SELECT emp_no, first_name, work_dept, job, hire_date
    FROM employee
    WHERE work_dept = 'A00'

CREATE VIEW emp_view2 (emp_num, name, dept, job, hire_date)
    AS SELECT emp_no, first_name, work_dept, job, hire_date
    FROM emp_view1
    WHERE work_dept = 'B00'
    WITH CASCADED CHECK OPTION

Page 182: System Administration

5-34 Database Tables and Views

To create a view using the Control Center:

Expand the objects in the left pane to display the Views folder in the sample database.
Right-click on the Views folder and select Create to display the Create View window.

Creating Views Using the Control Center

5-34

Page 183: System Administration

Database Tables and Views 5-35

In the Create View window, specify the View schema and View name, and select one of the Check options in the middle of the window.

You can either write the SQL statement manually in the SQL statement field, or you can use SQL Assist by clicking on the SQL Assist button.

Create View Window

5-35

Page 184: System Administration

5-36 Database Tables and Views

SQL Assist is a tool for creating SQL statements. In the Outline pane, the clauses of an SQL statement are presented in a hierarchical form. You create the SQL statement by selecting each clause and choosing options for that clause that appear in the Details pane. For example, choose the FROM clause to select the tables you want to use in your query and choose the WHERE clause to create the query filters. The SQL code field displays the SQL statement you have created based on selections made.

SQL Assist Window

5-36

Page 185: System Administration

Database Tables and Views 5-37

The employee table has been selected. Continue selecting clauses and choosing options for each clause and watch as the statement is created in the SQL code section.

SQL Assist: Tables Page

5-37

Page 186: System Administration

5-38 Database Tables and Views

A DB2 federated system consists of a DB2 server (called a federated server), a DB2 database, and a set of diverse data sources to which DB2 sends queries.

In a federated system, each data source consists of an instance of a relational database management system (RDBMS), plus the database(s) that the instance supports.

A DB2 federated system provides location transparency for database objects.

A DB2 federated server provides compensation for data sources that do not support all of the DB2 SQL dialect or certain optimization capabilities.

Federated System

5-38

A federated system is a DB2 instance that accesses other brands of RDBMS systems to obtain data. It consists of:

A DB2 instance (federated server)
A DB2 database (federated database)
A diverse set of data sources

Page 187: System Administration

Database Tables and Views 5-39

The federated database contains catalog entries identifying data sources and access methods. These catalog entries contain information about federated database objects: what they are called, information they contain, and conditions under which they can be used. Applications connect to the federated database just like any other DB2 database.

The following federated system objects are considered essential:

Wrappers — Identify the module (DLL or library) used to access a particular type of data source.
Servers — Define the data source. Server data includes the wrapper name, server name, server type, server version, authorization information, and server options.
Nicknames — Identify specific data source objects (such as tables or views). Applications reference nicknames in queries just like they reference tables and views.
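The statements below are a minimal sketch of how these objects relate; the wrapper, server, schema, and credential names are placeholders, and the exact options depend on the data source being registered (consult the Federated Systems Guide for the options your source requires):

CREATE WRAPPER drda
CREATE SERVER remsrv TYPE db2/udb VERSION '8.1' WRAPPER drda
    AUTHORIZATION "remuser" PASSWORD "rempwd"
    OPTIONS (DBNAME 'remdb')
CREATE NICKNAME inventory.remparts FOR remsrv.remsch.parts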

Federated System Objects

5-39

The federated database catalogs identify data sources and access methods
The following objects are essential in a federated system:

Wrappers
Servers
Nicknames

For More Information
For more information about federated systems and objects, refer to the IBM DB2 Universal Database Federated Systems Guide.

Page 188: System Administration

5-40 Database Tables and Views

Summary

5-40

You should now be able to:
Obtain and modify database configuration information
Start and stop a database
Force users and applications off an instance
Create and use schemas
Query the contents of the system catalog tables
Understand the impact of large objects on tables
Create temporary tables
Create views
Create and use federated objects

Page 189: System Administration

Database Tables and Views 5-41

Lab Exercises

5-41

You should now complete the exercise for Module 5.

Page 190: System Administration

5-42 Database Tables and Views

Page 191: System Administration

Creating Indexes 02-2003 6-1© 2002, 2003 International Business Machines Corporation

Creating Indexes

Module 6

Page 192: System Administration

6-2 Creating Indexes

Objectives

6-2

At the end of this module, you will be able to:
Create and manage indexes
Describe type-2 indexes
Understand the purpose of unique indexes
Understand the purpose of bidirectional indexes
Understand the purpose of clustered indexes
Use the DB2 Design Advisor to identify the need for indexes

Page 193: System Administration

Creating Indexes 6-3

An index is a database object that consists of an ordered list of values with pointers to corresponding values in a column on a table.

Any permanent table (user table or system table) can have indexes defined on it.

Multiple indexes can be defined on a single table.
Computed columns can have indexes created on them (in Version 7 and later).
Indexes cannot be defined on a view.

Indexes are used for two primary reasons:

Ensure uniqueness of data values.
Improve SQL query performance.

Overview

6-3

An index is a database object consisting of an ordered list of values pointing to corresponding data values in a table
Any permanent table can have indexes defined on it
Indexes are used for two primary reasons:

Ensure uniqueness of data values
Improve SQL query performance

Page 194: System Administration

6-4 Creating Indexes

Version 8.1 of DB2 UDB introduced a new format for indexes called the type-2 index. To the database administrator, there is no apparent difference between a type-1 index (the type of index used before DB2 Version 8.1) and a type-2 index. The differences are primarily architectural, but the format of these indexes generally results in better overall index performance.

All indexes created in DB2 UDB Version 8 are automatically created as type-2 indexes, unless created on a table that already has existing type-1 indexes. Since a table can have only indexes of one type, it is necessary to convert the type-1 indexes to type-2 indexes before you can create additional type-2 indexes on a table. You can convert a type-1 index to a type-2 index using the REORG command. The syntax for this command is shown here:

REORG INDEXES ALL FOR TABLE table_name

Once the indexes for a table have been rebuilt as type-2 indexes, the table can begin to take advantage of new performance features.

Improved Concurrency From Reduction of Next-Key Locking
When an insert, update, or delete operation is required on a row in a table, a lock is not only required on the row, but a lock is also required for each index on the table to protect the

Type-2 Indexes

6-4

New structure for indexes introduced in DB2 Version 8
All indexes created in Version 8 are type-2 indexes, unless created on a table that still has type-1 indexes
Convert type-1 indexes to type-2 indexes using REORG INDEXES
Next-key locking reduced, resulting in improved concurrency
Indexes can be created on columns with more than 255 bytes
Required for multidimensional clustering

Page 195: System Administration

Creating Indexes 6-5

corresponding key entries for the columns in the row. Performing certain tasks in an application under some isolation levels (see Module 10) required an additional lock on the key entry that follows the locked key. This is required to hold the former position of a key value in case a rollback is required after deleting or modifying key values in a row. These additional locks are called next-key locks.

Type-2 indexes are designed so that next-key locks are required much less frequently than type-1 indexes. By not having to place and hold as many locks, concurrency is improved because application users can access and modify more data than before.

Other Performance Enhancements
When you created a type-1 index in earlier versions of DB2 UDB, you were limited to creating indexes only on columns that were 255 bytes or less in length. Type-2 indexes can be created on columns that are greater than 255 bytes.

The introduction of type-2 indexes allows you to take advantage of multidimensional clustering, a feature introduced in DB2 UDB Version 8. This feature is discussed further later in this module.

Page 196: System Administration

6-6 Creating Indexes

Indexes can be classified into categories based on the functionality:

A unique index ensures uniqueness of key column(s) data.
A bidirectional index allows scanning of indexes in either direction. This saves space and memory required for creating two separate unidirectional indexes.
A clustered index places the rows of the table in the same physical order as the index keys.

Types of Indexes

6-6

Indexes can be classified into categories based on the functionality:
Unique index — Ensures uniqueness of key column(s) data
Bidirectional index — Allows scanning of indexes in either direction
Clustered index — Places the rows of the table in the same physical order as the index keys

Page 197: System Administration

Creating Indexes 6-7

UNIQUE INDEX
A unique index prevents the table from containing two or more rows with the same index key value. The uniqueness is enforced at the completion of SQL statements that update rows or insert new rows.

The uniqueness is also checked during the execution of the CREATE INDEX statement. If the table already contains rows with duplicate key values, the index is not created.

When UNIQUE is used, null values are treated as any other values. For example, if the key is a single column that may contain null values, that column may contain no more than one null value.

index_name
This specifies the name of the index or index specification. The index name, including an implicit or explicit schema qualifier, must be unique, that is, not already used to identify an index or index specification already described in the catalog.

The schema qualifier must not be SYSIBM, SYSCAT, SYSFUN, or SYSSTAT.

Unique Indexes

6-7

CREATE UNIQUE INDEX index_name ON table_name
    (column_name {ASC | DESC} [, column_name {ASC | DESC}…])
    INCLUDE column_names
    CLUSTER
    PCTFREE integer
    MINPCTUSED integer
    ALLOW REVERSE SCANS

Page 198: System Administration

6-8 Creating Indexes

ON table_name
This specifies the name of a table on which an index is to be created. The table must be a base table (not a view) or a summary table described in the catalog. Indexes can be created on permanent user tables and declared temporary tables, but they cannot be created on catalog tables.

column_name
This identifies a single column, or list of comma-separated column(s), that form the index key. Each column name must be unqualified. Up to 16 columns can be specified for a persistent table and 15 columns for a typed table. The sum of the stored lengths of the specified columns must not be greater than 1024 bytes for a persistent table and 1020 bytes for a typed table. Length of index key(s) cannot be more than 255 bytes.

ASC or DESC
ASC specifies that index entries are to be kept in ascending order of the column values; this is the default setting used when neither ASC nor DESC is specified. DESC specifies that index entries are to be kept in descending order.

Index: Tables and Columns

6-8

CREATE UNIQUE INDEX index_name ON table_name
    (column_name {ASC | DESC} [, column_name {ASC | DESC}…])
    INCLUDE column_names
    CLUSTER
    PCTFREE integer
    MINPCTUSED integer
    ALLOW REVERSE SCANS

Page 199: System Administration

Creating Indexes 6-9

INCLUDE
INCLUDE can only be specified with UNIQUE indexes—the option allows you to specify additional columns to be stored in the index record with the set of index key columns. The columns included with this clause are not used to enforce uniqueness and are not used for sorting the index, but they do require additional storage space in the index.

CLUSTER
The CLUSTER option specifies that the index is used for clustering the table. In earlier versions of DB2, you were allowed to have only one clustering index for a table, since the table data is physically arranged in the order of the index.

The cluster factor of a clustering index is maintained or improved dynamically as data is inserted into the associated table; an attempt is made to insert new rows so that they are physically close to rows that have key values that are logically close in the index.

Clustered Indexes

6-9

CREATE UNIQUE INDEX index_name ON table_name
    (column_name {ASC | DESC} [, column_name {ASC | DESC}…])
    INCLUDE column_names
    CLUSTER
    PCTFREE integer
    MINPCTUSED integer
    ALLOW REVERSE SCANS

Page 200: System Administration

6-10 Creating Indexes

Multidimensional Clustering
Multidimensional clustering (MDC) is a new feature of DB2 Version 8 that enables a table to be physically clustered on more than one key (or dimension) simultaneously. Prior to Version 8, DB2 only supported single-dimensional clustering of data. By allowing multiple keys to cluster a table, greater performance is possible through more efficient use of prefetching, for example.

Page 201: System Administration

Creating Indexes 6-11

PCTFREE integer
This specifies the percentage of each index page that should be left as free space when building the index. You should plan for free space on every index page so that when the index key is updated to a length greater than the previous length, the entries do not spill onto a new page.

The value of integer can range from 0 to 99. However, if a value greater than 10 is specified, leaf pages are created with the specified amount of free space, but non-leaf pages are created with only 10 percent free space. The default setting is 10 percent.

MINPCTUSED integerThis indicates whether indexes are automatically reorganized online and the threshold for the minimum percentage of space used on an index leaf page.

If, after a key is deleted from an index leaf page and the percentage of space used on the page is at or below the integer percentage, an attempt is made to merge the remaining keys on this page with those of a neighboring page.

Index: PCTFREE and MINPCTUSED

6-11


Page 202: System Administration

6-12 Creating Indexes

ALLOW REVERSE SCANS
This specifies that the index can support both forward and reverse scans; that is, scans in the order defined at index creation time and scans in the opposite (reverse) order. This saves storage space (compared with creating separate forward and reverse indexes) and provides better response time.

The default is to create only a forward-scannable index in the direction (ASC or DESC) specified.

Bidirectional Indexes

6-12


Page 203: System Administration

Creating Indexes 6-13

In the example above:

The index name is inx_emp_empno.
The index is created on table employee.
The key column is empno.
Keys are sorted in ascending order.
10% of the space on each index page is kept free.
If the space usage on any index page falls below 40%, online reorganization of the index takes place.
The index does not enforce uniqueness, and it is neither bidirectional nor clustered; none of these options were chosen.

Index: Illustration

6-13

CREATE INDEX inx_emp_empno ON employee(empno ASC) PCTFREE 10 MINPCTUSED 40

Page 204: System Administration

6-14 Creating Indexes

Here is another example of a CREATE UNIQUE INDEX command:

CREATE UNIQUE INDEX inxunq_emp_empno ON employee(empno ASC) INCLUDE(firstname) PCTFREE 10 MINPCTUSED 40

In this example:

The index name is inxunq_emp_empno.
The index is created on table employee.
The key column is empno.
Keys are sorted in ascending order.
The index enforces uniqueness of empno column values. If the column values currently residing in the table are not unique, index creation fails.
Index records contain empno and firstname; however, uniqueness is not enforced on firstname values.
10% of the space on each index page is kept free.
If the space usage on any index page falls below 40%, the index is reorganized.
The index is neither bidirectional nor clustered.

In the slide, RID indicates the row ID (location information for the row).

Unique Index: Example

6-14

Page 205: System Administration

Creating Indexes 6-15

Here is an example of a command to create a clustered index:

CREATE INDEX inxcls_emp_empno ON employee(empno ASC) CLUSTER PCTFREE 10 MINPCTUSED 40

Records inserted in the employee table after creation of the index would be physically placed in the same ascending order as the key, empno.


Clustered Index: Example

6-15

The slide illustrates the difference between a high cluster ratio index and a low cluster ratio index on a table, and contrasts data access using an index with access not using an index.

Page 206: System Administration

6-16 Creating Indexes

Here is an example of a command to create a bidirectional index:

CREATE INDEX inxcls_emp_empno ON employee(empno ASC) PCTFREE 10 MINPCTUSED 40 ALLOW REVERSE SCANS

Bidirectional Index: Example

6-16

Page 207: System Administration

Creating Indexes 6-17

Indexes are created from the Control Center by expanding the Objects pane to list the database objects, right-clicking on Indexes, and choosing Create. In the Create Index window you specify the index schema and name, the source table schema and name, the column(s) to include in the index, the type of index to create, and other options. The Create Index window is shown on the next page.

Creating Indexes in the Control Center

6-17

Page 208: System Administration

6-18 Creating Indexes

Page 209: System Administration

Creating Indexes 6-19

The Design Advisor is a management tool that reduces the need to design and define suitable indexes for data.

Use the Design Advisor to:

Find the best indexes for a problem query.
Find the best indexes for a set of queries (a workload), subject to resource limits which are optionally applied.
Test an index on a workload without having to create the index.

You can invoke the Design Advisor from either the Control Center or from the DB2 CLP:

From the Control Center, right-click on a database and select Design Advisor to invoke the Design Advisor wizard. The Design Advisor recommendations are part of the wizard notebook.
For the DB2 CLP, use the command db2advis with appropriate options from the operating system prompt.

The introduction page for the Design Advisor is shown below.

Design Advisor

6-19

A management tool that reduces the need to design and define suitable indexes for data
Use the Design Advisor to:

Find the best indexes for a problem query
Find the best indexes for a set of queries (a workload), subject to resource limits that are optionally applied
Test an index on a workload without having to create the index

Page 210: System Administration

6-20 Creating Indexes

Page 211: System Administration

Creating Indexes 6-21

The workload page is next. A workload is a set of SQL statements that the database manager must process over a period of time. The SQL statements can include SELECT, INSERT, UPDATE, and DELETE. Some statements that are frequently used to access system catalog tables are provided by default.

Add a workload name and click on one of the buttons on the right to import, add, change, or remove statements from the workload.

Design Advisor: Workload Panel

6-21

Page 212: System Administration

6-22 Creating Indexes

In the Collect Statistics page, you have the chance to make sure that statistics for selected tables are up to date. Select the tables you want to include in the statistics update and press >, or press the >> button to select all available tables.

Design Advisor: Collect Statistics

6-22

Page 213: System Administration

Creating Indexes 6-23

On the Disk Usage page, specify the table space to use for the recommended objects. You can also set a limit to the amount of space allocated for indexes.

Design Advisor: Disk Usage

6-23

Page 214: System Administration

6-24 Creating Indexes

On the Calculate page, indicate when you want the Design Advisor to perform its calculations based on the information provided so far. If you select Now and click on Next, the Design Advisor starts performing calculations immediately. When the Design Advisor finishes its calculations, it presents you with its recommendations.

Design Advisor: Calculate

6-24

Page 215: System Administration

Creating Indexes 6-25

Based on the information you have provided to the Design Advisor, a list of recommended indexes is shown on the Recommendations page. Time estimates are provided to give you an idea of the time savings you can expect if you add the recommended indexes.

Design Advisor: Recommendations

6-25

Page 216: System Administration

6-26 Creating Indexes

Finally, the Design Advisor provides a list of objects that are of no use for the workload information you provided. You can choose to drop these indexes, or keep them if you think they might be needed in other situations.

Design Advisor: Unused Objects

6-26

Page 217: System Administration

Creating Indexes 6-27

On the Schedule page, you can specify when and how to execute the script to create recommended objects you selected.

Design Advisor: Schedule

6-27

Page 218: System Administration

6-28 Creating Indexes

The Summary page provides a review of objects you chose to create and drop based on the recommendations of the Design Advisor. When you click on Finish, the objects are created or dropped according to the options you chose on the Schedule page.

Design Advisor: Summary

6-28

Page 219: System Administration

Creating Indexes 6-29

You can invoke the Design Advisor using the DB2 CLP by executing the db2advis command with appropriate options from the operating system prompt. The syntax for this command is shown above. The command options for the command are shown here:

-d database_name specifies the name of the database to which a connection is established.
-w workload_name specifies the name of the workload for which indexes are advised.
-s "sql_statement" specifies the text of a single SQL statement for which indexes are advised. The statement must be enclosed in double quotation marks.
-i filename specifies the name of an input file containing one or more SQL statements. Statements must be terminated by a semicolon (;). A sample input file sketch follows this list.
-a userid[/passwd] specifies the name and password used to connect to the database. The slash (/) must be included if a password is specified.
-l disklimit specifies the maximum number of megabytes available for all indexes in the existing schema. The default is 64 GB.
-t max_advise_time specifies the maximum allowable time, in minutes, to complete the operation. The default value is 10. Unlimited time is specified by a value of zero.
-h displays help information. When this option is included, all other options are ignored; only help information is displayed.
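
For example, the -i option reads a plain text file of SQL statements, each terminated by a semicolon. The statements and file name below are a hypothetical sketch against the sample database:

SELECT lastname, salary FROM employee WHERE workdept = 'D11';
SELECT deptname FROM department WHERE deptno = 'A00';

If this text is saved as workload.sql, the advisor can be run against it with:

db2advis -d sample -i workload.sql -a inst00/inst00 -t 5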

Design Advisor in the CLP: db2advis

6-29

db2advis -d database_name
    [{-w workload_name | -s "sql_statement" | -i filename}]
    [-a userid[/password]] [-l disklimit] [-t max_advise_time]
    [-h] [-p] [-o out_file]

Page 220: System Administration

6-30 Creating Indexes

-p keeps the plans that were generated while running the tool in the explain tables.
-o out_file saves the script to create the recommended objects in out_file.

Note that you can choose only one of -w, -s, or -i.

Page 221: System Administration

Creating Indexes 6-31

In the example above, the utility connects to the sample database, and recommends indexes for the employee table.

Connection is made to the sample database with the appropriate user ID and password.
The size of all indexes in the existing schema cannot exceed 53 MB.
The maximum allowable time for finding a solution is 20 minutes.

db2advis: Implementation

6-31

db2advis -d sample -s "SELECT * FROM employee e WHERE firstnme LIKE 'A%'" -a inst00/inst00 -l 53 -t 20

Page 222: System Administration

6-32 Creating Indexes

Summary

6-32

You should now be able to:
Create and manage indexes
Describe type-2 indexes
Understand the purpose of unique indexes
Understand the purpose of bidirectional indexes
Understand the purpose of clustered indexes
Use the DB2 Design Advisor to identify the need for indexes

Page 223: System Administration

Creating Indexes 6-33

Lab Exercises

6-33

You should now complete the exercise for Module 6.

Page 224: System Administration

6-34 Creating Indexes

Page 225: System Administration

Using Constraints 02-2003 7-1© 2002, 2003 International Business Machines Corporation

Using Constraints

Module 7

Page 226: System Administration

7-2 Using Constraints

Objectives

7-2

At the end of this module, you will be able to:
Explain the purpose of primary keys
Explain the purpose of foreign keys
Explain the purpose of unique constraints
Explain the purpose of check constraints
Create keys and constraints using GUI tools
Create keys and constraints using CLP

Page 227: System Administration

Using Constraints 7-3

Keys are a special set of columns defined on a table. Their purpose is to do any one of the following:

Identify a row
Reference a uniquely identified row from another table
Ensure uniqueness of column values

Keys can be classified by the columns from which they are composed, or by the database constraint they support.

Composition:
An atomic key is a single-column key.
A composite key is composed of two or more columns.

Constraints:
A unique key is used to implement unique constraints.
A primary key is used to implement entity integrity constraints.
A foreign key is used to implement referential integrity constraints.

Keys: Overview

7-3

Keys are a set of columns defined on a table that are used to:
Identify a row
Reference a uniquely identified row from another table
Ensure uniqueness of column values

Keys can be classified by their source columns, or by the database constraint they support

Composition: ATOMIC KEY or COMPOSITE KEY
Constraints: UNIQUE KEY, PRIMARY KEY, or FOREIGN KEY

Page 228: System Administration

7-4 Using Constraints

A primary key is a special type of unique key. Apart from guaranteeing uniqueness of column values, it also serves as the lookup target for foreign key values in other tables.

Important characteristics of a primary key include:

There can only be one primary key per table.
The primary key column(s) must be defined as NOT NULL.
DB2 creates a system-generated unique index on the primary key column(s) if one does not already exist.

Primary Key

7-4

A primary key is a special type of unique key that serves as the lookup target for foreign key values in other tables:

There can only be one primary key per table; it is either atomic or composite
The primary key column must be defined as NOT NULL

DB2 creates a system-generated unique index on the primary key column(s) if one does not already exist

Page 229: System Administration

Using Constraints 7-5

If you define the primary key as part of a column definition, you cannot name the constraint:

CREATE TABLE student (id INTEGER NOT NULL PRIMARY KEY, name VARCHAR(30), subject VARCHAR(20), position INTEGER NOT NULL)

If you define the primary key after the table definition, you can name the primary key constraint:

CREATE TABLE student (id INTEGER NOT NULL, name VARCHAR(30), subject VARCHAR(20), position INTEGER NOT NULL, CONSTRAINT pk_id PRIMARY KEY(id))

Primary Key: Table Creation Time

7-5

You can define a primary key in two different places in the CREATE TABLE statement:

Defining a primary key as part of a column definition: the user cannot control the name of the primary key constraint
Defining a primary key after the table definition: the user can name the primary key

Page 230: System Administration

7-6 Using Constraints

Or, for a composite key:

CREATE TABLE student (id INTEGER NOT NULL, name VARCHAR(30) NOT NULL, subject VARCHAR(20), position INTEGER NOT NULL, CONSTRAINT pk_idname PRIMARY KEY(id, name))

The following is an invalid SQL statement—the column id has not been defined with the NOT NULL option.

CREATE TABLE student (id INTEGER PRIMARY KEY, name VARCHAR(30), subject VARCHAR(20), position INTEGER)

Page 231: System Administration

Using Constraints 7-7

Alternatively, you can use the Control Center to create a table, and during table creation specify the primary key:

Open the DB2 Control Center: go to Start > Program Files > IBM DB2 > General Administration Tools > Control Center.
In the Control Center, expand to the Tables folder in the sample database.
Right-click on the Tables folder and select Create > Table to start the Create Table wizard.
On the Table page (shown on the next page):

Enter the schema name and table name. Click on Next to continue.

Creating Tables in the Control Center

7-7

Page 232: System Administration

7-8 Using Constraints

On the Columns page (see next page):
Click on the Add button to open the Add Column window (shown below).
Specify the Column name, Datatype, and Datatype characteristics, if applicable.
Optionally, select Nullable, select Default and specify a value, select Generate column contents, and enter a Comment.
Click OK in the Add Column window.

Page 233: System Administration

Using Constraints 7-9

Repeat the above steps to add more columns.
When finished adding columns, click OK.

On the Table Spaces page:
Either select a table space from the list of table spaces in the pull-down menu, or create a new table space for the table (see next page).
Click Next to continue.

Page 234: System Administration

7-10 Using Constraints

On the Keys page (see below):
Create a primary key by clicking on Add Primary.
In the Define Primary Key window (below), select the column(s) to include in the primary key and press > to move them to the Selected columns side. Optionally, add a constraint name. Press OK to return to the Keys page.
Click on Next to continue.

Page 235: System Administration

Using Constraints 7-11

Click Next on the Dimension page.
Click Next on the Constraint page.
On the Summary page (see below):

A summary of the options you chose for table creation is displayed.

To view the SQL statement that will be used to create the table, click on Show SQL (see below).

Page 236: System Administration

7-12 Using Constraints

Click on Close to close the Show SQL window.
Click on Finish to exit the window and build the table.

Page 237: System Administration

Using Constraints 7-13


Use the following command to add a primary key to an existing table:

ALTER TABLE student ADD CONSTRAINT pk_id PRIMARY KEY(id)

Note that an existing primary key cannot be altered. If you need to change the primary key for a table:

Drop the existing primary key constraint and create another primary key constraint with the new definition:

ALTER TABLE student DROP CONSTRAINT pk_id
ALTER TABLE student ADD CONSTRAINT pk_id PRIMARY KEY(position)

The ALTER TABLE statement used to add a primary key constraint would fail in the following cases:

Data values for the position column (the new primary key in this case) are non-unique.
There is already a primary key on the table student.
The position column has not been defined as NOT NULL.

Adding a Primary Key to an Existing Table: SQL

7-13

You can add a primary key to an existing table using SQL:

ALTER TABLE student ADD CONSTRAINT pk_id PRIMARY KEY(id)

You cannot alter an existing primary key
To change the primary key, you must drop the old primary key and create a new one

Page 238: System Administration

7-14 Using Constraints

Alternatively, you can alter an existing table to add a primary key constraint through the Control Center:

Open the Control Center by selecting Start > Program Files > IBM DB2 > General Administration Tools > Control Center. Expand to the Tables folder in the sample database, right click on the student_cc table, and select Alter. This displays the Alter Table window:

Adding a Primary Key: Alter Table Window

7-14

Page 239: System Administration

Using Constraints 7-15

On the Keys page:

Highlight a key from the list shown on the Keys page and click on Change.
Change the columns defined for the primary key by highlighting column(s) shown under Available columns and moving them to Selected columns by clicking on the selection button (>).
Optionally, modify the Constraint name.

Page 240: System Administration

7-16 Using Constraints

A foreign key is used to implement referential integrity constraints. Referential constraints can reference only a primary key or a unique key.

The values of a foreign key are constrained to the values defined in the primary key or unique key that is referenced; alternatively, the foreign key can be set to NULL, if allowed. The table containing the referenced columns is the parent, and the table containing the referencing columns is the child or dependent.

You can specify the following actions for the child table record upon update of parent table column values:

NO ACTION indicates that an error occurs for the update operation on the parent table, and no rows are updated.
RESTRICT indicates that an error occurs for the update operation on the parent table, and no rows are updated.

You can specify the following actions for child table record upon delete of parent table column values:

NO ACTION indicates that an error occurs for the delete operation on the parent table, and no rows are deleted.

Foreign Key

7-16

FOREIGN KEY is used to implement referential integrity constraints
The values of a foreign key are constrained to have only values defined in the primary key or unique key that is referenced; alternatively, the foreign key can be set to NULL, if allowed
Upon update of parent table column values, specify one of the following actions for the child table record:
• NO ACTION or RESTRICT
Upon deletion of parent table column values, specify one of the following actions for the child table record:
• NO ACTION, RESTRICT, CASCADE, or SET NULL

Page 241: System Administration

Using Constraints 7-17

RESTRICT indicates that an error occurs for the delete operation on the parent table, and no rows are deleted.
CASCADE indicates that the delete operation is propagated to the dependents of the deleted row in the parent table.
SET NULL indicates that each nullable column of the foreign key of each dependent of the deleted row is set to NULL.
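
As a sketch of how these delete rules behave (using the parent and child tables defined on the following pages, and a hypothetical parent row with id 10):

DELETE FROM parent WHERE id = 10
-- ON DELETE CASCADE: child rows with dept = 10 are deleted along with the parent row
-- ON DELETE SET NULL: child rows with dept = 10 remain, but their dept column is set to NULL
-- ON DELETE NO ACTION or RESTRICT: the DELETE fails with an error if any child row has dept = 10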

Page 242: System Administration

7-18 Using Constraints

If you define a foreign key as part of a column definition, you cannot name the constraint:

Parent table definition:

CREATE TABLE parent (id INTEGER NOT NULL PRIMARY KEY, depname VARCHAR(20))

Child table definition (the user does not control the name of the foreign key constraint):

CREATE TABLE child (id INTEGER, name VARCHAR(30), dept INTEGER REFERENCES parent(id))

Child table definition, specifying the action to take when a record in the parent table is updated or deleted:

CREATE TABLE child (id INTEGER, name VARCHAR(30), dept INTEGER REFERENCES parent(id) ON DELETE CASCADE ON UPDATE RESTRICT)

Foreign Key: Table Creation Time

7-18

Defining a foreign key as part of a column definition
Parent table definition
Child table definition: the user does not control the name of the foreign key constraint
Child table definition: specifying the action when a record in the parent table is updated or deleted

Page 243: System Administration

Using Constraints 7-19

If you define a foreign key after the table definition, you can name the foreign key constraint. While a table can have only one primary key, you can define multiple foreign keys, atomic or composite, if so desired.

CREATE TABLE child (id INTEGER, name VARCHAR(30), dept INTEGER, CONSTRAINT fk_parent_id FOREIGN KEY(dept) REFERENCES parent(id))

Specifying the action upon update/delete of a parent table record:

CREATE TABLE child (id INTEGER, name VARCHAR(30), dept INTEGER, CONSTRAINT fk_parent_id FOREIGN KEY(dept) REFERENCES parent(id) ON DELETE SET NULL ON UPDATE RESTRICT)

The following is an invalid SQL statement—the parent table column depname was not previously defined as the primary key for that table.

CREATE TABLE child (id INTEGER, name VARCHAR(30), dept VARCHAR(20) REFERENCES parent(depname))

Page 244: System Administration

7-20 Using Constraints

Alternatively, you can use the Control Center to create a table and specify any foreign key. Use the same process to add a foreign key to a new table as you did to create a primary key. When you reach the Keys page in the Create Table wizard, do the following:

Press the Add Foreign button to display the Add Foreign Key window, as shown above.
Highlight a column or a set of columns from the Available columns section. Click on the selection button (>) to select the column(s) for the foreign key.
Optionally, provide the action for delete/update operations on the parent table and a Constraint name.
Click OK.
Repeat this process to add more foreign key constraints and then press OK.

Altering the Foreign Key
You can use the ALTER TABLE statement in SQL to add a foreign key constraint. For example:

ALTER TABLE child ADD CONSTRAINT fk_dept FOREIGN KEY(dept) REFERENCES parent(id)

Foreign Key: Control Center

7-20

Page 245: System Administration

Using Constraints 7-21

Note that an existing foreign key cannot be altered. If you need to change an existing foreign key constraint on a table:

Drop the existing foreign key constraint:

ALTER TABLE child DROP CONSTRAINT fk_parent_id

If need be, create another table which will now act as a parent to the child table:

CREATE TABLE depthead (code INTEGER NOT NULL PRIMARY KEY, name VARCHAR(30))

Create a new foreign key constraint with the new definition:

ALTER TABLE child ADD FOREIGN KEY (dept) REFERENCES depthead (code) ON DELETE NO ACTION ON UPDATE NO ACTION

Alternatively, you can alter an existing table to add a foreign key constraint through the Control Center. A table can have just one primary key, but it can have as many foreign keys as needed.

To alter a table in the Control Center, expand to the database, right-click on the table to modify, and select Alter. Choose the Keys tab to modify existing foreign keys or add additional keys.

Page 246: System Administration

7-22 Using Constraints

UNIQUE KEY is used to implement unique constraints. A unique constraint does not allow two different rows to have the same values on the key columns.

A table can have more than one unique key defined.
A unique index is always created for unique key constraints; if a constraint name is defined, it is used to name the index; otherwise, a system-generated name is used for the index.
Having unique constraints on more than one set of columns of a table is different from defining a composite unique key that includes the whole set of columns. For example, if we define a composite primary key on the columns id and name, there is still a chance that a name is duplicated using a different id, as illustrated below.
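
The same point applies to a composite unique key. As a sketch using the unik table defined later in this module (the values are hypothetical): with the single composite constraint UNIQUE(id, name), both of the inserts below succeed because the (id, name) pairs differ, even though the name repeats; if id and name instead had separate UNIQUE constraints, the second insert would fail.

INSERT INTO unik (id, name, title) VALUES (1, 'Smith', 'Mr')
INSERT INTO unik (id, name, title) VALUES (2, 'Smith', 'Mr')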

Unique Key

7-22

UNIQUE KEY is used to implement unique constraints
A table can have more than one unique key defined
A unique index is always created for unique key constraints
Having unique constraints on more than one set of columns of a table is different from defining a composite unique key that includes the whole set of columns

Page 247: System Administration

Using Constraints 7-23

If you define a unique key as part of a column definition, you cannot name the constraint:

CREATE TABLE unik (id INTEGER NOT NULL UNIQUE, name VARCHAR(30), title CHAR(3))

If you define the unique key after the body of the table, you can name the unique key constraint.

CREATE TABLE unik (id INTEGER NOT NULL, name VARCHAR(30), title CHAR(3), CONSTRAINT unq_id UNIQUE(id))

Specifying a Unique Key at Table Creation Time

7-23

You can define a unique key in two different places in the CREATE TABLE statement:

Define unique key as part of column definition
• User cannot control the name of unique key constraint
Define unique key after the table definition
• User can name the unique key constraint

Page 248: System Administration

7-24 Using Constraints

The previous example was an atomic unique key. You can also have a composite unique key:

CREATE TABLE unik (id INTEGER NOT NULL, name VARCHAR(30) NOT NULL, title CHAR(3), CONSTRAINT unq_id UNIQUE(id, name))

The following statement is invalid—the column id is not defined with the NOT NULL option:

CREATE TABLE unik (id INTEGER UNIQUE, name VARCHAR(30), title CHAR(3))

Page 249: System Administration

Using Constraints 7-25

You can use the ALTER TABLE statement in SQL to add a unique key constraint to an existing table.

ALTER TABLE unik ADD CONSTRAINT unq_id UNIQUE(id)

Note that a unique key constraint cannot be altered, but you can drop an existing unique key constraint and create another unique key constraint with a new definition.

ALTER TABLE unik DROP CONSTRAINT unq_id

ALTER TABLE unik ADD CONSTRAINT unq_idname UNIQUE(id, name)

This statement will fail in the following circumstances:

Data values for the combination of columns id and name are not unique.
Either column id or column name has not been defined as NOT NULL.

Changing a Unique Key: ALTER TABLE

7-25

Use ALTER TABLE statement to add a unique key constraint to an existing table
Unique key cannot be altered

Drop the existing unique key constraint and create another unique key constraint with new definition

Page 250: System Administration

7-26 Using Constraints

A check constraint is a rule that specifies the values that are allowed in one or more columns of every row of a table.

A check constraint enforces data integrity at the table level.
Once a table-check constraint has been defined for a table, every INSERT and UPDATE statement involves a check of the restriction or constraint.
Check constraints are used to implement business-specific rules, which saves the application developer from having to code that data validation.
The definitions of all check constraints are stored in the sysibm.syschecks catalog table.

Check Constraint

7-26

Check constraint is a rule that specifies the values allowed in one or more columns of every row of a table

Enforces data integrity at the table level
Once a table-check constraint has been defined for a table, every INSERT and UPDATE statement involves a check of the restriction or constraint
Used to implement business-specific rules

Page 251: System Administration

Using Constraints 7-27

If you define a check constraint as part of a column definition, you cannot name the constraint:

CREATE TABLE chek (id INTEGER CHECK(id > 5), name VARCHAR(30), age INTEGER)

If you define the check constraint after the body of the table, you can name the constraint:

CREATE TABLE chek (id INTEGER, name VARCHAR(30), age INTEGER, CONSTRAINT chk_idage CHECK(id < age))

The following statement is invalid because a check constraint defined as part of a column definition can reference only that column; here the constraint on id references the age column:

CREATE TABLE chek (id INTEGER CHECK(id < age), name VARCHAR(30), age INTEGER)

Alternatively, you can use the Control Center to add check constraint(s) at the same time that you create a table.

Check Constraint: Table Creation Time

7-27

You can define a check constraint in two different places in the CREATE TABLE statement:

Defining check constraint as part of column definition
• User cannot control the name of the check constraint
Defining check constraint after the table definition
• User can name the check constraint

Page 252: System Administration

7-28 Using Constraints

When you create the table, go to the Constraints page in the Create Table wizard to add a check constraint. Click on Add to bring up the Add Check Constraint window. An example is shown here:

When a Check condition and Constraint name have been added, click OK to return to the Constraints page. Repeat the above steps to add additional check constraints.

Create Table Window: Adding Check Constraints

7-28

Page 253: System Administration

Using Constraints 7-29

Altering Check Constraints
You can use the ALTER TABLE statement in SQL to add a check constraint to an existing table.

ALTER TABLE chek ADD CONSTRAINT chk_age CHECK(age > 0)

Note that a check constraint cannot be altered. If you need to change a check constraint, you must drop the existing check constraint and create another check constraint with new definition.

ALTER TABLE chek DROP CONSTRAINT chk_id
ALTER TABLE chek ADD CONSTRAINT chk_id CHECK(id > 1)

The ADD statement fails if any row violates the check constraint (that is, if any row has a value of one or less for the column id).

Using the ALTER TABLE Page to Add Check Constraints
You can add check constraints when altering a table by clicking on the Check Constraints tab in the Alter Table window. Open the Add Check Constraints page by clicking on the Add button.

Add the check constraint, the constraint name, and optional comments; click OK to return to the Alter Table window.
Repeat the process above to add other check constraints and click OK.

Using the ALTER TABLE Panel to Alter Check Constraints
To alter an existing check constraint, highlight the constraint to modify on the Check Constraints page and click on the Change button to open the Change Check Constraint window.

Modify the check constraint, constraint name, or comment, and click OK to return to the Alter Table window.
Repeat the process above to modify other check constraints. Click OK.
You can accomplish the same task by removing and then adding the check constraint.

Page 254: System Administration

7-30 Using Constraints

Summary

7-30

You should now be able to:
Explain the purpose of primary keys
Explain the purpose of foreign keys
Explain the purpose of unique constraints
Explain the purpose of check constraints
Create keys and constraints using GUI tools
Create keys and constraints using CLP

Page 255: System Administration

Using Constraints 7-31

Lab Exercises

7-31

You should now complete the exercise for Module 7.

Page 256: System Administration

7-32 Using Constraints

Page 257: System Administration

Data Movement Utilities 02-2003 8-1© 2002, 2003 International Business Machines Corporation

Data Movement Utilities

Module 8

Page 258: System Administration

8-2 Data Movement Utilities

Objectives

8-2

At the end of this module, you will be able to:
Use the EXPORT utility to extract data from a table
Use the IMPORT utility to insert data into a table
Use the LOAD utility to insert data into a table
Know when to use IMPORT versus LOAD utilities
Use the db2move utility
Use the db2look utility
Understand table space states after LOAD

Page 259: System Administration

Data Movement Utilities 8-3

Whenever data is extracted or inserted into the database, particular care must be taken to check the format of the data. DB2 supports various data formats for extraction and insertion.

The formats include:

Delimited ASCII format (DEL)
Integrated exchange format (IXF)
Worksheet format (WSF)
Non-delimited ASCII (ASC)

Exporting Data

8-3

DB2 supports the following data formats for extraction and insertion:
Delimited ASCII format (DEL)
Integrated exchange format (IXF)
Worksheet format (WSF)
Non-delimited ASCII (ASC)

Page 260: System Administration

8-4 Data Movement Utilities

A set of utilities is provided with DB2 to populate tables or to extract data from tables. These utilities enable easy movement of large amounts of data into or out of DB2 databases. The speed of these operations is very important. When working with large databases and tables, extracting or inserting new data may take a long time.

These utilities are:

EXPORT
IMPORT
LOAD

Data Movement Utilities

8-4

When working with large databases and tables, extracting or inserting new data may take a long time
Data movement utilities enable easy movement of large amounts of data into or out of DB2 databases. These utilities are:

EXPORT
IMPORT
LOAD

Page 261: System Administration

Data Movement Utilities 8-5

The EXPORT utility is used to extract data from a database table into a file.

Data can be extracted into several different file formats, which can be used either by the IMPORT or LOAD utilities to populate tables. These files can also be used by other software products such as spreadsheets, word processors, and other RDBMS packages to populate tables or generate reports.

You must be connected to the database from which data is to be exported.
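
For example, a minimal export of the employee table to an IXF file might look like the following (the file names are hypothetical, and a connection to the sample database is assumed):

EXPORT TO employee.ixf OF IXF MESSAGES exp_emp.msg SELECT * FROM employee

The resulting employee.ixf file can later be used as input to the IMPORT or LOAD utility.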

Data Movement Utilities: EXPORT

8-5

The EXPORT utility is used to extract data from a database table into a file
Data can be extracted into several different file formats, which can be used either by the IMPORT or LOAD utilities to populate tables
These files can also be used by other software products such as:

Spreadsheets
Word processors
Other RDBMSs, to populate tables or generate reports

Page 262: System Administration

8-6 Data Movement Utilities

Syntax for the EXPORT command is shown above. The options shown are described in the following pages.

EXPORT Command

8-6

EXPORT TO file_name OF {IXF | DEL | WSF}
    [LOBS TO lob_path [ ,lob_path ...]]
    [LOBFILE lob_file [ ,lob_file ...]]
    [MODIFIED BY filetype_mod ...]
    [METHOD N ( column_name [ ,column_name ...] )]
    [MESSAGES message_file]
    select_statement
    [HIERARCHY {STARTING sub_table_name | traversal_order_list}]
    [where_clause]

traversal-order-list:
    ( sub_table_name [ ,sub_table_name ...] )

Page 263: System Administration

Data Movement Utilities 8-7

TO file_name
This option specifies the name of the file to which data is exported. If the complete path to the file is not specified, the export utility uses the current directory and the default drive as the destination.

If the specified file name already exists, the export utility overwrites the contents of the file; it does not append the information.

OF file_type
This specifies the format of the data in the output file (DEL, WSF, or IXF).

EXPORT Option: Filename and File Type

8-7


Page 264: System Administration

8-8 Data Movement Utilities

LOBS TO lob_path
This specifies one or more paths to directories in which the LOB files are to be stored. When file space is exhausted on the first path, the second path is used, and so on.

EXPORT Option: LOBS TO

8-8


Page 265: System Administration

Data Movement Utilities 8-9

LOBFILE lob_file
This option specifies one or more base file names for the LOB files. When name space is exhausted for the first name, the second name is used, and so on.

When creating LOB files during an export operation, file names are constructed by appending the current base name from this list to the current path (from lob-path), and then appending a 3-digit sequence number. For example, if the current LOB path is the directory /u/foo/lob/path, and the current LOB file name is bar, the LOB files created are /u/foo/lob/path/bar.001, /u/foo/lob/path/bar.002, and so on.
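
Putting the LOB options together, an export of a table containing a LOB column might look like this (the table, column, file, and path names are hypothetical):

EXPORT TO resume.del OF DEL LOBS TO /u/inst00/lobs/ LOBFILE res MODIFIED BY lobsinfile MESSAGES exp_res.msg SELECT empno, resume FROM emp_resume

Each LOB value is written to its own file, such as /u/inst00/lobs/res.001, and the main resume.del file records the name of the LOB file for each row.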

EXPORT Option: LOBFILE

8-9


Page 266: System Administration

8-10 Data Movement Utilities

Here is a list of the modifier options:

EXPORT Option: File Type Modifier (1)

8-10

Modifier and description:

lobsinfile: Specifies the path to the files containing LOB values
chardelx: x is a single-character string delimiter. The default is the double quotation mark (")
coldelx: x is a single-character column delimiter. The default value is the comma (,)
decplusblank: Causes positive decimal values to be prefixed with a blank space instead of the default plus sign (+)
decptx: x is a single-character substitute for the period as a decimal point character. The default value is a period (.)
dldelx: x is a single-character DATALINK delimiter. The default is the semicolon (;). x cannot be the same character as specified for the row, column, or character delimiter
nodoubledel: Suppresses recognition of double-character delimiters


Page 267: System Administration

Data Movement Utilities 8-11

WSF file modifiers:
1 Creates a WSF file that is compatible with Lotus 1-2-3 Release 1 (default)
2 Creates a WSF file that is compatible with Lotus Symphony Release 1.0
3 Creates a WSF file that is compatible with Lotus 1-2-3 Version 2, or Lotus Symphony Release 1.1

Page 268: System Administration

8-12 Data Movement Utilities

METHOD N
This option specifies one or more column names to be used in the output file. If this parameter is not specified, the column names in the table are used. This parameter is valid only for WSF and IXF files, but is not valid when exporting hierarchical data.
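
For instance, to label the columns in an IXF output file explicitly (the file, table, and column names used here are hypothetical):

EXPORT TO org.ixf OF IXF METHOD N (DEPTNUMB, DEPTNAME) MESSAGES exp_org.msg SELECT deptnumb, deptname FROM org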

EXPORT Option: METHOD

8-12


Page 269: System Administration

Data Movement Utilities 8-13

MESSAGES message_file
This option specifies the destination for warning and error messages that occur during an export operation. If the file already exists, the export utility appends the information. If message_file is omitted, the messages are written to standard output.

EXPORT Option: MESSAGES

8-13


Page 270: System Administration

8-14 Data Movement Utilities

select_statement
This specifies the select statement that returns the exported data. If the select statement causes an error, a message is written to the message file (or to standard output). If the error code is SQL0012W, SQL0347W, SQL0360W, SQL0437W, or SQL1824W, the export operation continues; otherwise, it stops.

EXPORT Option: Select Statement

8-14


Page 271: System Administration

Data Movement Utilities 8-15

HIERARCHY STARTING sub_table_name
Using the default traverse order (OUTER order for ASC, DEL, or WSF files, or the order stored in PC/IXF data files), export a sub-hierarchy starting from sub_table_name.

EXPORT Option: HIERARCHY

8-15


Page 272: System Administration

8-16 Data Movement Utilities

HIERARCHY traversal_order_list
Export a sub-hierarchy using the specified traverse order. All subtables must be listed in PREORDER fashion. The first sub_table_name is used as the target name for the SELECT statement.

EXPORT Option: HIERARCHY

8-16


Page 273: System Administration

Data Movement Utilities 8-17

To perform an EXPORT, you must have either SYSADM or DBADM permissions on the database manager, or have CONTROL or SELECT permissions on each participating table or view.

EXPORT: Authorization

8-17

To export, you must have one of the following administrative permissions:
SYSADM
DBADM

You can also have one of the following privileges on each participating table or view:

CONTROL
SELECT

Page 274: System Administration

8-18 Data Movement Utilities

The syntax for the IMPORT command is shown above. The options are described in the following pages.

IMPORT Command

8-18

IMPORT FROM file_name OF {IXF | DEL | WSF}
    [LOBS FROM lob_path [ ,lob_path ...]]
    [MODIFIED BY filetype_mod ...]
    [METHOD L ( col_start col_end [ ,col_start col_end ...] )
        [NULL INDICATORS (col_position [ ,col_position ...])]
     | N ( col_name [ ,col_name ...] )
     | P ( col_position [ ,col_position ...] )]
    [COMMITCOUNT n] [RESTARTCOUNT N] [MESSAGES message_file]
    {{INSERT | INSERT_UPDATE | REPLACE | REPLACE_CREATE} INTO table_name [(insert_column_list)]
     | CREATE INTO table_name [(insert_column_list)]
        [IN table_space_name] [INDEX IN table_space_name] [LONG IN table_space_name]}

Page 275: System Administration

Data Movement Utilities 8-19

FROM file_name
This option specifies the file containing the data being imported. If the path is omitted, the current working directory is used.

LOBS FROM lob_path
This specifies one or more paths that store LOB files. The names of the LOB data files are stored in the main file (ASC, DEL, or IXF), in the column that is loaded into the LOB column. This option is ignored if the lobsinfile modifier is not specified.

MODIFIED BY filetype_mod
This specifies additional options, such as LOBSINFILE. If the LOBSINFILE modifier is not specified, the LOBS FROM option is ignored.

IMPORT Command: File Options and LOB

8-19


Page 276: System Administration

8-20 Data Movement Utilities

Above is a list of modifiers that can be used in the MODIFIED BY option

IMPORT Command: File Type Modifiers

8-20

Modifier and description:

compound=x: Nonatomic compound SQL is used to insert the data, and x (a number from 1 to 100) statements are attempted each time
generatedignore / identityignore: These modifiers inform the import utility that data for all generated or identity columns is present in the data file but should be ignored
generatedmissing / identitymissing: If specified, the utility assumes that the input data file contains no data for the generated or identity columns
usedefaults: If a source column for a target table column has been specified, but it contains no data for one or more row instances, default values are loaded

Page 277: System Administration

Data Movement Utilities 8-21

Method Options
METHOD L: This specifies the start and end column numbers from which to import data. A column number is a byte offset from the beginning of a row of data. It is numbered starting from 1. (This method can only be used with non-delimited ASCII (ASC) files and is the only valid option for that file type.)
METHOD N: This specifies the names of the columns to be imported. (This method can only be used with IXF files.)
METHOD P: This specifies the indexes (numbered from 1) of the input data fields to be imported. (This method can only be used with IXF or DEL files and is the only valid option for the DEL file type.)
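
As a sketch, METHOD P can pick individual fields out of a delimited file by position (the file, table, and column names are hypothetical):

IMPORT FROM newstaff.del OF DEL METHOD P (1, 2, 4) MESSAGES imp_staff.msg INSERT INTO staff (id, name, dept)

Here fields 1, 2, and 4 of each input record are inserted into the id, name, and dept columns, respectively.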

IMPORT Command: METHOD

8-21


Page 278: System Administration

8-22 Data Movement Utilities

COMMITCOUNT n
This option performs a COMMIT after every n records are imported.

RESTARTCOUNT N
Specifies that the import operation is to be started at record N + 1. The first N records are skipped.

MESSAGES message_file
Specifies the destination for warning and error messages that occur during an import operation. If the file already exists, the import utility appends the information. If the complete path to the file is not specified, the utility uses the current directory and the default drive as the destination. If message_file is omitted, the messages are written to standard output.
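
A typical invocation combining these options might look like the following (the file names are hypothetical, and INSERT_UPDATE assumes the target table has a primary key):

IMPORT FROM employee.ixf OF IXF COMMITCOUNT 1000 MESSAGES imp_emp.msg INSERT_UPDATE INTO employee

A commit is issued after every 1000 imported rows, so a failed import can later be resumed by specifying RESTARTCOUNT with the number of records already committed.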

IMPORT Command: Count and Message Options

8-22


Page 279: System Administration

Data Movement Utilities 8-23

IMPORT: Authorization

8-23

Page 280: System Administration

8-24 Data Movement Utilities

The LOAD utility can be used:

To populate tables in DB2 databases.
To load or append data to a table where large amounts of data will be inserted. The input file must be in one of three file formats: IXF, DEL, or ASC.

The LOAD utility:

Moves data into tables.
Can create an index.
Can generate statistics.

The LOAD utility is significantly faster than the IMPORT utility.

LOAD writes formatted pages directly into the database while IMPORT performs SQL inserts. LOAD does a minimal amount of logging.
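
For example, a simple load that replaces the contents of a table from a delimited file might look like this (the file, table, and exception table names are hypothetical):

LOAD FROM employee.del OF DEL MESSAGES load_emp.msg REPLACE INTO employee FOR EXCEPTION empexc

REPLACE empties the table before loading; use INSERT instead to append the new rows to the existing data.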

Data Movement Utilities: LOAD

8-24

The LOAD utility can be used to:
Populate tables in DB2 databases
Load or append data to a table where large amounts of data will be inserted

The LOAD utility moves data into tables, can create an index, and can generate statistics
The LOAD utility is significantly faster than the IMPORT utility
The LOAD utility does a minimal amount of logging

Page 281: System Administration

Data Movement Utilities 8-25

All phases of the LOAD process are part of one operation that is run only after all three phases complete successfully. The three phases are:

Load: data is written into the table.
Build: indexes are created.
Delete: rows that caused a unique constraint violation are removed from the table.

LOAD Phases

8-25

All phases of the LOAD process are part of one operation that is run only after all three phases complete successfully. The three phases are:

Load: data is written into the table
Build: indexes are created
Delete: rows that caused a unique constraint violation are removed from the table

Page 282: System Administration

8-26 Data Movement Utilities

During the LOAD phase, data is stored in a table and index keys are collected.

Save points are established at intervals specified by the SAVECOUNT parameter of the LOAD command.

Messages inform as to the number of input rows successfully loaded during the operation.

If a failure occurs in this phase, use the RESTART option for LOAD to restart from the last successful consistency point.

Alternatively, if the failure occurs near the beginning of the load, restart the load from the beginning of the input file.
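
A restart reuses the original command with the RESTART keyword, for example (the names are hypothetical):

LOAD FROM employee.del OF DEL MESSAGES load_emp.msg RESTART INTO employee

To back out of an interrupted load instead of restarting it, the same command can be issued with TERMINATE in place of RESTART.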

LOAD: Load Phase

8-26

During the LOAD phase, data is stored in a table and index keys are collected
Save points are established at intervals
Messages indicate the number of input rows successfully loaded
If a failure occurs in this phase, use the RESTART option for LOAD to restart from the last successful consistency point

Page 283: System Administration

Data Movement Utilities 8-27

During the BUILD phase, indexes are created based on the index keys collected in the load phase.

The index keys are arranged during the load phase.

If a failure occurs during this phase, LOAD restarts from the BUILD phase.

LOAD: Build Phase

8-27

During the BUILD phase, indexes are created based on the index keys collected in the load phase
The index keys are sorted during the load phase
If a failure occurs during this phase, LOAD restarts from the BUILD phase

Page 284: System Administration

8-28 Data Movement Utilities

During the DELETE phase, all rows that have violated a unique constraint are deleted.

If a failure occurs, LOAD restarts from the DELETE phase.

Once the database indexes are rebuilt, information about the rows containing the invalid keys is contained in an exception table, if the exception table was created before the load began.

Messages on these rejected rows are stored in the message file.

Finally, any duplicate keys found are deleted.

The exception table must be identified in the syntax of the LOAD command.

LOAD: Delete Phase

8-28

During the DELETE phase, all rows that have violated a unique constraint are deleted
If a failure occurs, LOAD restarts from the DELETE phase
Once the database indexes are rebuilt, information about the rows containing the invalid keys is placed in an exception table, if one exists
Finally, any duplicate keys found are deleted

Page 285: System Administration

Data Movement Utilities 8-29

The LOAD utility moves data into a target table that must exist within the database prior to the start of the load process.

The target table may be a new or existing table into which data is appended or replaced.

Indexes on the table may or may not already exist. However, the LOAD process only builds indexes that are already defined on the table.

In addition to the target table, it is recommended that an exception table be created to hold any rows that violate unique constraints.

If an exception table is neither created nor specified with the LOAD utility, any rows that violate unique constraints are discarded without any chance of recovering or altering them.
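
One common way to build an exception table is to copy the definition of the target table and add the optional message columns; a sketch, with hypothetical table and column names, is:

CREATE TABLE empexc LIKE employee
ALTER TABLE empexc ADD COLUMN load_ts TIMESTAMP ADD COLUMN load_msg CLOB(32K)

The exception table is then named in the LOAD command with the FOR EXCEPTION clause.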

LOAD: Target and Exception Tables

8-29

The LOAD utility moves data into a target table that must exist within the database prior to the start of the load process
The target table may be a new or existing table into which data is appended or replaced
The LOAD process only builds indexes that are already defined on the table
An exception table should be created to hold any rows that violate unique constraints; otherwise such rows are discarded

Page 286: System Administration

8-30 Data Movement Utilities

The syntax for the LOAD command is shown above. Options are described on the following pages.

LOAD Command: Syntax

8-30

LOAD [CLIENT] FROM {filename | pipename | device} [ ,{filename | pipename | device} ...] OF {ASC | DEL | IXF}
    [LOBS FROM lob_path [ ,lob_path ...]]
    [MODIFIED BY filetype_mod ...]
    [METHOD L ( col_start col_end [ ,col_start col_end ...] )
        [NULL INDICATORS (col_position [ ,col_position ...])]
     | N ( col_name [ ,col_name ...] )
     | P ( col_position [ ,col_position ...] )]
    [SAVECOUNT n] [ROWCOUNT n] [WARNINGCOUNT n]
    [MESSAGES message_file] [TEMPFILES PATH pathname]
    {INSERT | REPLACE | RESTART | TERMINATE} INTO table_name [(insert_column [ ,insert_column ...])]
    [datalink_specification] [FOR EXCEPTION table_name]
    [STATISTICS {YES | NO} [WITH DISTRIBUTION] [{AND | FOR} DETAILED] [INDEXES ALL]]
    [COPY {NO | YES {USE TSM [OPEN num_sess SESSIONS]
        | TO {directory | device} [ ,{directory | device} ...]
        | LOAD lib_name [OPEN num_sess SESSIONS]}
     | NONRECOVERABLE}]
    [HOLD QUIESCE] [WITHOUT PROMPTING] [DATA BUFFER buffer_size]
    [CPU_PARALLELISM n] [DISK_PARALLELISM n]
    [INDEXING MODE {AUTOSELECT | REBUILD | INCREMENTAL | DEFERRED}]

Page 287: System Administration

Data Movement Utilities 8-31

CLIENT
This specifies that the data to be loaded resides on a remotely connected client. This option is ignored if the load operation is not invoked from a remote client.

FROM filename | pipename | device
This specifies the file, pipe, or device containing the data being loaded. The file, pipe, or device must reside on the node where the database resides, unless the CLIENT option is specified. If several names are specified, they are processed in sequence. If the last item specified is a tape device, the user is prompted for another tape. Valid response options are:

c (continue): Continue using the device that generated the warning message (for example, when a new tape has been mounted).
d (device terminate): Stop using the device that generated the warning message (for example, when there are no more tapes).
t (terminate): Terminate all devices.

LOAD Command: Filename and Location

8-31


Page 288: System Administration

8-32 Data Movement Utilities

Filetype — This option specifies the format of the data in the input file:

ASC — non-delimited ASCII format
DEL — delimited ASCII format
IXF — integrated exchange format (PC version), exported from the same or from another DB2 table

LOBS FROM lob_path — This option indicates the path to the data files containing LOB values to be loaded. The path must end with a forward slash (/). If the CLIENT option is specified, the path must be fully qualified. The names of the LOB data files are stored in the main data file (ASC, DEL, or IXF), in the column that will be loaded into the LOB column. This option is ignored if lobsinfile is not specified within the filetype_mod string.
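As an illustration only (the path /db2/lobs/ and the file name emp.del are assumptions, not part of the course database), a load that picks up LOB values from separate files might look like this:

db2 "LOAD FROM emp.del OF DEL LOBS FROM /db2/lobs/ MODIFIED BY lobsinfile INSERT INTO inst00.employee"

Each LOB column value in emp.del then names a file found under /db2/lobs/.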

LOAD Command: Filetype and Modifier

8-32


Page 289: System Administration

Data Movement Utilities 8-33

MODIFIED BY filetype_mod — This specifies additional options, such as LOBSINFILE. Here is a list of the filetype_mod options:

All file types: anyorder, fastparse, generatedignore, generatedmissing, generatedoverride, identityignore, identitymissing, identityoverride, indexfreespace=x, lobsinfile, noheader, norowwarnings, pagefreespace=x, totalfreespace=x, usedefaults
ASCII (ASC/DEL): codepage=x, dateformat='x', dumpfile=x, implieddecimal, timeformat='x', timestampformat='x'
Non-delimited ASCII (ASC): noeofchar, binarynumerics, nochecklengths, nullindchar=x, packeddecimal, reclen=x, striptblanks, striptnulls, zoneddecimal
DEL: chardelx, coldelx, datesiso, decplusblank, decptx, delprioritychar, dldelx, keepblanks, nodoubledel
IXF: forcein, nochecklengths

Page 290: System Administration

8-34 Data Movement Utilities

METHOD Options

METHOD L — This option specifies the start and end column numbers from which to load data. A column number is a byte offset from the beginning of a row of data, numbered starting from 1. This method can only be used with ASC files, and is the only method available for that file type.
METHOD N — This specifies the names of the columns in the data file to be loaded. The case of these column names must match the case of the corresponding names in the system catalogs. Each table column that is not nullable should have a corresponding entry in the METHOD N list. This method can only be used with IXF files.
For example, given data fields F1, F2, F3, F4, F5, and F6, and table columns C1 INT, C2 INT NOT NULL, C3 INT NOT NULL, and C4 INT, method N (F2, F1, F4, F3) is a valid request, while method N (F2, F1) is not valid.
METHOD P — This specifies the field positions (numbered from 1) of the input data fields to be loaded. Each table column that is not nullable should have a corresponding entry in the METHOD P list.

LOAD Command: METHOD

8-34


Page 291: System Administration

Data Movement Utilities 8-35

For example, given data fields F1, F2, F3, F4, F5, and F6, and table columns C1 INT, C2 INT NOT NULL, C3 INT NOT NULL, and C4 INT, method P (2, 1, 4, 3) is a valid request, while method P (2, 1) is not valid. This method can only be used with IXF or DEL files, and is the only method valid for DEL files.
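A minimal sketch of METHOD P against the example above; the input file emp.del and the table inst00.mytab are placeholders, not objects from the course environment:

db2 "LOAD FROM emp.del OF DEL METHOD P (2, 1, 4, 3) INSERT INTO inst00.mytab (C1, C2, C3, C4)"

Field 2 of each input record is loaded into C1, field 1 into C2, field 4 into C3, and field 3 into C4.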

Page 292: System Administration

8-36 Data Movement Utilities

Counter Options

SAVECOUNT n — Specifies that the load utility is to establish consistency points after every n rows. This value is converted to a page count, and rounded up to intervals of the extent size.
ROWCOUNT n — Specifies the number of physical records in the file to be loaded. It allows a user to load only the first n rows in a file.
WARNINGCOUNT n — Stops the load operation after n warnings.
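For example, a hedged sketch combining the counters (the input file and message file names are assumptions):

db2 "LOAD FROM emp.del OF DEL SAVECOUNT 1000 ROWCOUNT 50000 WARNINGCOUNT 10 MESSAGES load.msg INSERT INTO inst00.employee"

This loads at most the first 50,000 records, takes a consistency point roughly every 1,000 rows, and stops after ten warnings.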

LOAD Command: Counter Options

8-36


Page 293: System Administration

Data Movement Utilities 8-37

Mode Options

INSERT — Adds the loaded data to the table without changing the existing table data.
REPLACE — Deletes all existing data from the table, and inserts the loaded data. The table definition and index definitions are not changed. If this option is used when moving data between hierarchies, only the data for an entire hierarchy, not individual subtables, can be replaced. This option is not supported for tables with DATALINK columns.
RESTART — Restarts a previously interrupted load operation. The load operation automatically continues from the last consistency point in the load, build, or delete phase.

LOAD Command: Mode

8-37


Page 294: System Administration

8-38 Data Movement Utilities

TERMINATE — Stops a previously interrupted load operation and rolls it back to the point in time at which it started, even if consistency points were passed. The states of any table spaces involved in the operation return to normal, and all table objects are made consistent (index objects may be marked as invalid, in which case index rebuild will automatically take place at next access). If the load operation being terminated is a load REPLACE, the table will be truncated to an empty table after the load TERMINATE operation. If the load operation being terminated is a load INSERT, the table will retain all of its original records after the load TERMINATE operation. The load TERMINATE option will not remove a backup pending state from table spaces. Note: This option is not supported for tables with DATALINK columns.
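A minimal sketch of terminating an interrupted load, using the same illustrative file and table names as the earlier examples:

db2 "LOAD FROM emp.del OF DEL TERMINATE INTO inst00.employee"

If the interrupted operation was a load REPLACE, the table is left empty; if it was a load INSERT, the original rows are retained.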

Page 295: System Administration

Data Movement Utilities 8-39

Exception table (FOR EXCEPTION table_name) — This is a user-created table that reflects the definition of the table being loaded, and includes two additional columns: a timestamp and a message (CLOB). Rows that create an error condition on a load are copied here by the LOAD utility. Any row that is in violation of a unique index or a primary key index is copied.
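As a hedged sketch, the exception table can be created from the target table and then named on the LOAD command. The names inst00.empexc, the column names ts and msg, and the input file emp.del are illustrative assumptions, not objects defined by the course:

db2 "CREATE TABLE inst00.empexc LIKE inst00.employee"
db2 "ALTER TABLE inst00.empexc ADD COLUMN ts TIMESTAMP ADD COLUMN msg CLOB(32K)"
db2 "LOAD FROM emp.del OF DEL INSERT INTO inst00.employee FOR EXCEPTION inst00.empexc"

Rows rejected for unique key violations then land in inst00.empexc, together with a timestamp and a diagnostic message, instead of being discarded.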

LOAD Command: Exception Table

8-39


Page 296: System Administration

8-40 Data Movement Utilities

STATISTICS {YES | NO} — Specifies whether or not statistics are collected for the table and for any existing indexes. This option is supported only if the load operation is in REPLACE mode.
WITH DISTRIBUTION — Specifies that distribution statistics are collected.
AND INDEXES ALL — Specifies that both table and index statistics are collected.
FOR INDEXES ALL — Specifies that only index statistics are collected.
DETAILED — Specifies that extended index statistics are collected.
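For instance, a hedged sketch of a load REPLACE that also gathers detailed distribution and index statistics (the input file name is assumed):

db2 "LOAD FROM emp.del OF DEL REPLACE INTO inst00.employee STATISTICS YES WITH DISTRIBUTION AND DETAILED INDEXES ALL"

Collecting statistics during the load avoids a separate RUNSTATS pass afterwards.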

LOAD Command: Statistics

8-40


Page 297: System Administration

Data Movement Utilities 8-41

CPU_PARALLELISM n — This specifies the number of processes or threads that the load utility spawns for parsing, converting, and formatting records when building table objects. This parameter is designed to exploit intrapartition parallelism. It is particularly useful when loading presorted data, because record order in the source data is preserved. If the value of this parameter is zero or has not been specified, the load utility uses an intelligent default value (usually based on the number of CPUs available) at runtime.

DISK_PARALLELISM n — This specifies the number of processes or threads that the load utility spawns for writing data to the table space containers. If a value is not specified, the utility selects an intelligent default based on the number of table space containers and the characteristics of the table.
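A hedged sketch that sets both parallelism values explicitly, together with a load buffer of 4000 pages (the values are chosen only for illustration):

db2 "LOAD FROM emp.del OF DEL INSERT INTO inst00.employee DATA BUFFER 4000 CPU_PARALLELISM 4 DISK_PARALLELISM 2"

DATA BUFFER is expressed in 4 KB pages taken from the utility heap, so 4000 pages is roughly 16 MB.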

Load Command: Parallelism

8-41


Page 298: System Administration

8-42 Data Movement Utilities

COPY NO — This specifies that the table spaces in which the table resides are placed in backup pending state if forward recovery is enabled (that is, logretain or userexit is on).

COPY YES — This specifies that a copy of the loaded data is saved (see the example following these options). This option is invalid if forward recovery is disabled (both logretain and userexit are off).

USE TSM — Specifies that the copy is stored using Tivoli Storage Manager (TSM).
OPEN num_sess SESSIONS — The number of I/O sessions used with TSM or the vendor product. The default value is 1.
TO directory | device — Specifies the device or directory where the copy image is created.
LOAD lib_name — The name of the shared library (DLL on OS/2 or the Windows operating system) containing the vendor backup and restore I/O functions used.
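A minimal sketch of a recoverable load that writes its copy image to a directory; the path /db2/loadcopy is an assumption:

db2 "LOAD FROM emp.del OF DEL INSERT INTO inst00.employee COPY YES TO /db2/loadcopy"

Because a copy of the loaded data is kept, the table space is not left in backup pending state.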

LOAD Command: Copy Options

8-42


Page 299: System Administration

Data Movement Utilities 8-43

HOLD QUIESCE — This directs the utility to leave the table in quiesced exclusive state after the load operation. To unquiesce the table spaces, issue the command:

db2 QUIESCE TABLE SPACES FOR TABLE table_name RESET

Page 300: System Administration

8-44 Data Movement Utilities

INDEXING MODE

AUTOSELECT — The load utility automatically decides between REBUILD and INCREMENTAL mode.
REBUILD — All indexes are rebuilt.
INCREMENTAL — Indexes are extended with new data. This requires only enough sort space to append index keys for the inserted records. This method is supported only in cases where the index object is valid and accessible at the start of the load operation (see the example after this list).
DEFERRED — The load utility does not attempt index creation. Indexes are rebuilt upon first non-load-related access.
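For example, a sketch of appending to an existing table while only extending its indexes (the input file name is assumed):

db2 "LOAD FROM emp.del OF DEL INSERT INTO inst00.employee INDEXING MODE INCREMENTAL"

INCREMENTAL avoids the full rebuild that REBUILD would perform, at the cost of requiring valid, accessible index objects at the start of the load.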

LOAD Command: Indexing

8-44


Page 301: System Administration

Data Movement Utilities 8-45

Here is a list of the performance modifiers available:

FASTPARSE — Reduces the amount of data checking on source files. Use with DEL and ASC files that are known to be correct.
ANYORDER — Indicates that data is loaded without respect to the order in which the data appears in the file.
DATA BUFFER — Specifies the number of 4 KB pages allocated from the utility heap for use as internal load buffers.
CPU_PARALLELISM — Indicates the number of processes or threads used to parse, convert, or format data. This is for use on SMP hardware.
DISK_PARALLELISM — Specifies the number of processes or threads used for writing data to disk.
INDEXFREESPACE — An integer in the range of 0-99 representing the percentage of each index page that is left as free space when loading the index.

LOAD: Performance Modifiers

8-45

FASTPARSE
ANYORDER
DATA BUFFER
CPU_PARALLELISM
DISK_PARALLELISM
INDEXFREESPACE

Page 302: System Administration

8-46 Data Movement Utilities

The LOAD QUERY command is used to interrogate a LOAD operation and generate a report on its progress.

Specify the table being loaded.
May choose to display only summary information.
May choose to display only updated information.

The syntax for the LOAD QUERY command is shown above. The command options are described here:

message_file — specifies the destination for warning and error messages that occur during the load operation
NOSUMMARY — no load summary information is reported
SUMMARY ONLY — only load summary information (rows read, rows skipped, rows loaded, rows rejected, rows deleted, rows committed, and number of warnings) is reported
SHOWDELTA — specifies that only new information (pertaining to load events that have occurred since the last invocation of the LOAD QUERY command) is reported
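A short sketch of checking on a running load; the message file path is an assumption:

db2 "LOAD QUERY TABLE inst00.employee TO /tmp/empload.msg SUMMARY ONLY"

Running the command again with the SHOWDELTA option would report only what has happened since this invocation.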

Unsuccessful Load Operation

8-46

The LOAD QUERY command is used to interrogate a LOAD operation and generate a report on its progress.

Syntax:

LOAD QUERY TABLE table_name [ TO local_message_file] [{NOSUMMARY | SUMMARY ONLY}] [SHOWDELTA]

Page 303: System Administration

Data Movement Utilities 8-47

Database Integrity: Load utility uses the mechanism of “pending states” as regular logging is not performed.

Load Pending — Operation failed or was interrupted during the load or build phase
Delete Pending — Operation failed or was interrupted during the delete phase
Backup Pending — The database configuration parameter logretain is set to recovery, or userexit is enabled; the load option COPY YES is not specified, and the load option NONRECOVERABLE is not specified
Check Pending — Violation of referential, check, datalinks, or generated column constraints

Query for the table space state:

db2 "LIST TABLESPACES SHOW DETAIL"

Verify the output value of state:

0x8 = Load Pending
0x10 = Delete Pending
0x20 = Backup Pending

Post Load: Table Space State

8-47

Database Integrity: Load utility uses the mechanism of pending states as regular logging is not performed

Load Pending
Delete Pending
Backup Pending
Check Pending

Query for the table space state:
db2 "LIST TABLESPACES SHOW DETAIL"

Verify that state is:
0x8 = Load Pending, 0x10 = Delete Pending, or 0x20 = Backup Pending

Page 304: System Administration

8-48 Data Movement Utilities

Load, Build, and Delete Pending
Restart the load operation after rectifying the cause of failure.
Terminate the load operation: use the TERMINATE option of LOAD.
Invoke a LOAD REPLACE operation against the same table on which a load operation failed.
Drop and then recreate the table spaces for the table being loaded.

Backup Pending

Take a backup of the table space or database.

Check Pending

Use the SET INTEGRITY command:

SET INTEGRITY FOR table_name IMMEDIATE CHECKED
  [INCREMENTAL | NOT INCREMENTAL] [FORCE GENERATED]
  [FOR EXCEPTION IN table_name USE exception_table]
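A minimal sketch of clearing check pending on the course table after a load, routing any violating rows to the illustrative exception table used earlier in this module:

db2 "SET INTEGRITY FOR inst00.employee IMMEDIATE CHECKED FOR EXCEPTION IN inst00.employee USE inst00.empexc"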

Post Load: Removing Pending States

8-48

For Load, Build, and Delete Pending:
Restart the load operation after rectifying the cause of failure
Terminate the load operation
Invoke a LOAD REPLACE operation
Drop and then recreate the table spaces

Backup Pending
Take a backup of the table space or database

Check Pending
Use the SET INTEGRITY command

Page 305: System Administration

Data Movement Utilities 8-49

Several enhancements have been made to the load utility in Version 8. New functionality has been added to simplify the process of loading data into both single partition and multi-partition database environments. Here are some of the new load features introduced:

Load operations now take place at the table level. This means that the load utility no longer requires exclusive access to the entire table space, and concurrent access to other table objects in the same table space is possible during a load operation. Further, table spaces that are involved in the load operation are not quiesced. When the COPY NO option is specified for a recoverable database, the table space will be placed in the backup pending table space state when the load operation begins.
The load utility now has the ability to query pre-existing data in a table while new data is being loaded. You can do this by specifying the READ ACCESS option of the LOAD command.
The LOCK WITH FORCE option allows you to force applications to release the locks they have on a table, allowing the load operation to proceed and to acquire the locks it needs.
Data in partitioned database environments can be loaded using the same commands (LOAD, db2load) and APIs (db2load, db2loadquery) used in single partition database environments. The AutoLoader utility (db2atld) and the AutoLoader control file are no longer needed.
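Two hedged one-line sketches of these options, reusing the illustrative file and table names from earlier examples:

db2 "LOAD FROM emp.del OF DEL INSERT INTO inst00.employee ALLOW READ ACCESS"
db2 "LOAD FROM emp.del OF DEL INSERT INTO inst00.employee LOCK WITH FORCE"

The first leaves the pre-existing rows readable while the load runs; the second forces conflicting applications off their table locks so the load can acquire the locks it needs.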

LOAD: Additional Features in DB2 8.1

8-49

Exclusive access to the table space no longer required
Read access during load
Applications can be forced to release locks
Can be used in partitioned database environments
Can load the results of an SQL query through a cursor
Columns generated during load
Expanded functionality of LOAD QUERY
New Load wizard in the Control Center

Page 306: System Administration

8-50 Data Movement Utilities

Through the use of the new CURSOR file type, you can now load the results of an SQL query into a database without having to export them to a data file first.
Prior to Version 8, following a load operation the target table remained in check pending state if it contained generated columns. The load utility will now generate column values, and you no longer need to issue the SET INTEGRITY statement after a load operation into a table that contains generated columns and has no other table constraints.
The functionality of the LOAD QUERY command has also been expanded. It now returns the table state of the target table into which data is being loaded, as well as the status information it previously included on a load operation in progress. The LOAD QUERY command can be used to query table states whether or not a load operation is in progress on a particular table.
The Control Center now has a Load wizard to assist you in the setup of a load operation.
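A hedged sketch of the CURSOR file type; the staging table inst00.emp_stage and the column names are assumptions used only to illustrate the pattern:

db2 "DECLARE c1 CURSOR FOR SELECT empno, lastname FROM inst00.emp_stage"
db2 "LOAD FROM c1 OF CURSOR INSERT INTO inst00.employee (empno, lastname)"

Both statements must be issued in the same CLP session (that is, over the same database connection) so that the cursor declared in the first statement is visible to the LOAD in the second.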

Page 307: System Administration

Data Movement Utilities 8-51

LOAD: Authorization

8-51

Page 308: System Administration

8-52 Data Movement Utilities

The above table compares the IMPORT utility with the LOAD utility.

IMPORT Versus LOAD

8-52

The IMPORT utility versus the LOAD utility:

IMPORT: Significantly slower than LOAD on large amounts of data.
LOAD: Significantly faster than IMPORT on large amounts of data, because LOAD writes formatted pages directly into the database.

IMPORT: Creation of the table and indexes is supported with the IXF format.
LOAD: The table and indexes must already exist.

IMPORT: Can import into views, tables, or aliases.
LOAD: Can load into tables or aliases.

IMPORT: Table spaces that are home to the table and its indexes remain online for the duration of the import.
LOAD: Table spaces that are home to the table and its indexes are offline for the duration of the load (pre-8.1 versions only).

IMPORT: All rows are logged.
LOAD: Minimal logging is performed.

IMPORT: Triggers are fired.
LOAD: Triggers are not supported.

IMPORT: If the import is interrupted and a COMMITCOUNT was specified, the table is usable and contains the rows loaded up to the last commit. The user has the choice to restart the import or use the table as is.
LOAD: If the load is interrupted and a SAVECOUNT was specified, the table remains in load pending state and cannot be used until the load is restarted or the table is restored from a backup image created before the load.

IMPORT: All constraints are validated during the import.
LOAD: Uniqueness is verified during the load, but all other constraints must be checked afterwards (for example, with SET INTEGRITY).

IMPORT: Keys of each row are inserted into the index one at a time during the import.
LOAD: During the load, all keys are sorted and the index is built after the data has been loaded.

Page 309: System Administration

Data Movement Utilities 8-53

The db2move utility moves data between different DB2 databases that may reside on different servers. It is useful when a large number of tables need to be copied from one database to another.

The utility can run in one of three modes:

Export — the EXPORT utility is used to export data from the table or tables specified into data files of type IXF.

It also produces a file named db2move.lst that records all the names of the tables exported and the file names produced when exported. It also produces various message files that record any errors or warning messages generated during the execution of the utility.

Import — the IMPORT utility is used to import data files of type IXF into a given database.

It attempts to read the db2move.lst file to find the link between the file names of the data files and the table names into which the data must be imported.

Load — the input files specified in the db2move.lst file are loaded into the tables using the LOAD utility.

Data Movement Utilities: db2move

8-53

Moves data between different DB2 databases that may reside on different servers
Useful when a large number of tables need to be copied from one database to another
The utility can run in one of three modes:

Export — the EXPORT utility is used to export data from the table or tables specified into data files of type IXF
Import — the IMPORT utility is used to import data files of type IXF into a given database
Load — the input files specified in the db2move.lst file are loaded into the tables using the LOAD utility

Page 310: System Administration

8-54 Data Movement Utilities

Performance evaluation can then be undertaken using utilities like Visual Explain, which uses the database statistics to report on how an SQL statement is executed and gives an indication of its likely performance.

Data Movement Utilities: db2move

8-54

Extracting the DDL and statistics into a script:
For testing code, the development database is the same as the production database except for the data.
The script can be used as input for the creation of the development database; it updates the statistics without actually copying the production data.

Page 311: System Administration

Data Movement Utilities 8-55

The syntax for the db2move command is shown above. Here is a description of the options:

database_name — the name of the database
action — EXPORT, IMPORT, or LOAD
-tc — followed by one or more creator IDs separated by commas
-tn — if specified, it should be followed by one or more (up to ten) table names separated by commas
-io — specifies the import action to take. Options are INSERT, INSERT_UPDATE, REPLACE, CREATE, and REPLACE_CREATE; REPLACE_CREATE is the default.
-lo — specifies the load option to use. Valid options are INSERT and REPLACE; INSERT is the default.

-l — specifies the absolute path names for the directories used when importing, exporting, or loading LOB values into or from separate files. If specified with the EXPORT action the directories are cleared before the LOBs are exported to files in the directory or directories.

db2move Command: Syntax

8-55

db2move database_name action [-tc table_creators] [-tn table_names] [-io import_option] [-lo load_option] [-u userid] [-p password]

Page 312: System Administration

8-56 Data Movement Utilities

-u — followed by a user ID to be used to run the utility. If not specified, the user ID under which the user is logged on is used to execute the utility.
-p — followed by the password used to authenticate the user ID when executing the utility.

Here are some db2move command examples:

In the first example, all tables in the sample database are exported. Default values are used for all options:db2move sample export

The following command exports all tables created by userid1 or user IDs LIKE us%rid2, and with the name tbname1 or table names LIKE %tbname2:
db2move sample export -tc userid1,us*rid2 -tn tbname1,*tbname2

This example is applicable to the Windows operating system only. The command imports all tables in the sample database. LOB paths d:\lobpath1 and c:\lobpath2 are searched for LOB files:
db2move sample import -l d:\lobpath1,c:\lobpath2

Page 313: System Administration

Data Movement Utilities 8-57

Data Movement Utilities: db2look

8-57

Generates a report on the statistics in the database
Extracts the DDL statements for the creation of tables, indexes, views, and so on, that are needed to generate a CLP script file used to recreate database objects
Extracts statistics that are stored in a script to update the statistics

Model a development database to simulate a production database by updating the catalog statistics

Page 314: System Administration

8-58 Data Movement Utilities

The parameters are explained here:

-d database_name — Name of the database from which statistics are extracted.
-u creator — Creator ID. Limits output to objects with this creator ID. If option -a is specified, this parameter is ignored.
-a — When this option is specified, the output is not limited to the objects created under a particular creator ID. If neither -u nor -a is specified, the environment variable USER is used.
-h — Displays help information.
-t tname — Limits the output to this particular table.
-p — Use plain text format.
-o fname — Specifies the output file. If this option is not specified, output is written to standard output.
-e — Extract DDL statements for database objects. DDL for the following database objects is extracted when using the -e option: tables, views, automatic summary tables (ASTs), aliases, indexes, triggers, user-defined distinct types, primary key, referential, and check constraints, user-defined structured types, user-defined functions, user-defined methods, and user-defined transforms.

db2look Command: Syntax

8-58

db2look -d database_name [-u creator] [-a] [-h] [-t tname] [-p] [-o fname] [-e] [-l] [-x]

Page 315: System Administration

Data Movement Utilities 8-59

-l — If this option is specified, the db2look utility generates DDL for user-defined table spaces, nodegroups, and buffer pools. DDL for the following database objects is extracted when using the -l option: user-defined table spaces, user-defined nodegroups, and user-defined buffer pools.
-x — If this option is specified, the db2look utility generates authorization DDL (GRANT statements, for example).

Here is an example:

db2look -d department -a -e -o db2look.sql

This command generates the DDL statements for objects created by all users in the department database. The db2look output is sent to file db2look.sql.

Page 316: System Administration

8-60 Data Movement Utilities

Summary

8-60

You should now be able to:
Use the EXPORT utility to extract data from a table
Use the IMPORT utility to insert data into a table
Use the LOAD utility to insert data into a table
Know when to use IMPORT versus LOAD
Use the db2move utility
Use the db2look utility
Understand table space states after LOAD

Page 317: System Administration

Data Movement Utilities 8-61

Lab Exercise

8-61

You should now complete the lab exercises for Module 8.

Page 318: System Administration

8-62 Data Movement Utilities

Page 319: System Administration

Data Maintenance Utilities 02-2003 9-1© 2002, 2003 International Business Machines Corporation

Data Maintenance Utilities

Module 9

Page 320: System Administration

9-2 Data Maintenance Utilities

Objectives

9-2

At the end of this module, you will be able to:
Use the REORGCHK command
Use the REORG command to reorganize data
Use the RUNSTATS command to update data statistics
Describe new DB2 Version 8 features of REORG, REORGCHK, and RUNSTATS
Use the REBIND command to utilize data statistics

Page 321: System Administration

Data Maintenance Utilities 9-3

The physical distribution of the data stored in tables has a significant effect on the performance of applications using those tables. The way the data is physically stored in a table is affected by the update, insert, and delete operations on the table.

Examples:

A delete operation may leave empty pages of data that may not be reused later. An update to a variable-length column may result in the new value no longer fitting in the same data page; this can cause the row to be moved to a different page and produce internal gaps or unused space in the table.

When the cost-based optimizer determines how a query should be executed, incorrect or outdated statistics may result in a cost-ineffective plan, leading to slower response time and degraded performance.

The solutions that we will explore in this module are:

REORGCHK
REORG
RUNSTATS

Data Maintenance: The Need

9-3

The physical distribution of data stored in tables has a significant effect on the performance of applications using those tables
The way the data is physically stored in a table is affected by the update, insert, and delete operations on the table
Cost-based optimizer: incorrect or outdated statistics result in a cost-ineffective plan, leading to slower response time and degraded performance
Solution:

REORGCHK
REORG
RUNSTATS

Page 322: System Administration

9-4 Data Maintenance Utilities

REORGCHK analyzes the system catalog tables and gathers information about the physical organization of tables and indexes. It determines the physical organization of tables and corresponding indexes, including how much space is currently used and how much is free.

REORGCHK uses six formulas to help decide if tables and indexes require physical reorganization (general recommendations that show the relationship between the allocated space and the space used for the data in tables). Three formulas are applied to tables and the other three are applied to indexes.

Authority to run REORGCHK
To use the REORGCHK command, you must have SYSADM or DBADM authority, or CONTROL privilege on the table.

REORGCHK: Analyzing Physical Data Organization

9-4

REORGCHK:
Analyzes the system catalog tables and gathers information about the physical organization of tables and indexes
Determines how much space is currently being used and how much is free
Uses six formulas to help decide if tables and indexes require physical reorganization; three formulas for tables and three for indexes

To use the REORGCHK command, you must have SYSADM or DBADM authority, or CONTROL privilege on the table.

Page 323: System Administration

Data Maintenance Utilities 9-5

The syntax of the REORGCHK utility provides for choices at two positions:

REORGCHK [{UPDATE | CURRENT} STATISTICS] ON TABLE {USER | SYSTEM | ALL | table_name}

Use the CURRENT STATISTICS option of REORGCHK to use the statistics in the system catalog tables at that time. For example, to analyze the current statistics of the employee table:

db2 REORGCHK CURRENT STATISTICS ON TABLE inst00.employee

To review the statistics of all the tables in a database, including system catalog and user tables:

db2 REORGCHK CURRENT STATISTICS ON TABLE ALL

Verify the organization of the system catalog tables using the SYSTEM option. Alternatively, select all the tables under the current user schema name by specifying the USER keyword:

db2 REORGCHK CURRENT STATISTICS ON TABLE SYSTEM

If the CURRENT STATISTICS parameter is not specified, REORGCHK calls RUNSTATS.

You can also update statistics based on a defined schema. Here is the alternate syntax:

REORGCHK [{UPDATE | CURRENT} STATISTICS] ON SCHEMA schema_name

REORGCHK Command: Syntax and Examples

9-5

Command syntax:
REORGCHK [{UPDATE | CURRENT} STATISTICS] ON TABLE
  {USER | SYSTEM | ALL | table_name}

Use the CURRENT STATISTICS option to utilize the statistics in the system catalog tables at that time:
db2 REORGCHK CURRENT STATISTICS ON TABLE inst00.employee

To review the current statistics of all the tables:
db2 REORGCHK CURRENT STATISTICS ON TABLE ALL

Verify the system catalog tables using the SYSTEM option:
db2 REORGCHK CURRENT STATISTICS ON TABLE SYSTEM

Syntax for running REORGCHK on a schema:
REORGCHK [{UPDATE | CURRENT} STATISTICS] ON SCHEMA schema_name

Page 324: System Administration

9-6 Data Maintenance Utilities

An example of a REORGCHK command and the output are shown above.

The formulas used for table statistics are:

F1: 100*OVERFLOW/CARD < 5
F2: 100*TSIZE / ((FPAGES-1) * (TABLEPAGESIZE-76)) > 70
F3: 100*NPAGES/FPAGES > 80

Interpretation:

F1 recommends a table reorganization if 5% or more of the total number of rows are overflow rows.
F2 recommends a table reorganization if the table size (TSIZE) is less than or equal to 70% of the total space allocated to the table.
F3 recommends a table reorganization when more than 20% of the pages in a table are free.

In the example above, the asterisk in the REORG column indicates that reorganization is needed.
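As a quick worked illustration of F1 (the numbers are hypothetical, not taken from the slide): a table with CARD = 1000 and OVERFLOW = 80 gives 100*80/1000 = 8, which is not less than 5, so F1 flags the table for reorganization; with OVERFLOW = 20 the result is 2 and F1 is satisfied.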

The columns of the report have the following titles and meanings:

CREATOR — the schema to which the table belongs

REORGCHK: Table Statistics

9-6

db2 REORGCHK UPDATE STATISTICS ON TABLE inst00.employee

CREATOR NAME     CARD OV NP FP TSIZE F1 F2 F3  REORG
inst00  employee 8    3  1  1  72    0  -  100 - - *

Page 325: System Administration

Data Maintenance Utilities 9-7

NAME — the name of the table for which the REORGCHK utility was run; REORGCHK can check a series of tables at one time
CARD — the number of data rows in the base table
OV (OVERFLOW) — the overflow indicator: the number of overflow rows. An overflow occurs when a new column is added to a table or when a variable-length value increases in size.
NP (NPAGES) — the total number of pages that contain data
FP (FPAGES) — the total number of pages allocated to the table
TSIZE — the table size in bytes. This value is calculated by multiplying the number of rows in the table by the average row length.
REORG — a separate indicator for each of the three formulas:
A hyphen (-) indicates that reorganization is not recommended.
An asterisk (*) indicates that reorganization is recommended.

Page 326: System Administration

9-8 Data Maintenance Utilities

During the same run, the following formulas are used for index statistics:

F4: CLUSTERRATIO or normalized CLUSTERFACTOR > 80
F5: 100*(KEYS*(ISIZE+10)+(CARD-KEYS)*4) / (NLEAF*INDEXPAGESIZE) > 50
F6: (100-PCTFREE) * ((INDEXPAGESIZE-96)/(ISIZE+12))**(NLEVELS-2) * (INDEXPAGESIZE-96) / (KEYS*(ISIZE+8) + (CARD-KEYS)*4) < 100

Interpretation:

F4 indicates the CLUSTERRATIO or normalized CLUSTERFACTOR. This ratio shows the percentage of data rows stored in the same physical sequence as the index.
F5 calculates the space reserved for index entries. Less than 50% of the space allocated for the index should be empty.
F6 measures the usage of the index pages. The number of index pages should be more than 90% of the total entries that NLEVELS can handle.

If, for example, the CLUSTERRATIO of an index is below the recommended level, an asterisk appears in the REORG column, as shown in the example above.

REORGCHK: Index Statistics

9-8

CREATOR NAME                CARD LEAF LVLS ISIZE KEYS F4  F5 F6 REORG
Table: inst00.employee
sysibm  SQL970812225853270  8    1    1    9     8    100 -  -  - - *

Page 327: System Administration

Data Maintenance Utilities 9-9

The columns of the report have the following titles and meanings:

CREATOR — the schema to which the index belongs
NAME — the name of the table and index(es). Although REORGCHK is only specified at the table level, it also shows statistics about all the indexes of a table. The information is also collected for system-defined indexes, such as primary key indexes.
CARD — the number of rows in the associated base table
LEAF — the number of leaf nodes of the index
LVLS (LEVELS) — the total number of levels of the index
ISIZE — the index size calculated from the average column length of the key columns
KEYS — the number of unique index entries
REORG — a separate indicator for each of the index formulas:
A hyphen (-) indicates that reorganization is not recommended.
An asterisk (*) indicates that reorganization is recommended.

Page 328: System Administration

9-10 Data Maintenance Utilities

For interpreting the output gathered from indexes, some information about the structure of indexes is needed. Indexes in DB2 are created using a B+ tree structure. These data structures provide an efficient search method to locate the entry values of an index. The logical structure of a DB2 index is shown above.

REORGCHK: Interpreting Index Information

9-10

Logical structure of a DB2 index

Page 329: System Administration

Data Maintenance Utilities 9-11

The need for reorganization is indicated by an asterisk (*) in the REORG column of the REORGCHK output.

The REORG command deletes all the unused space and writes the table and index data to contiguous pages. An index is used to place the data rows in the same physical sequence as the index. These actions are used to increase the CLUSTERRATIO of the selected index. This helps DB2 find the data in contiguous space and in the desired order, reducing the seek time needed to read the data. If DB2 finds an index with a very high cluster ratio, it might use it to avoid a sort, thus improving the performance of applications that require sort operations.

Authority
To use REORG, you must have SYSADM, SYSCTRL, SYSMAINT, or DBADM authority, or CONTROL privilege on the table.

Reorganization

9-11

The DB2 REORG command:
Deletes all the unused space and writes the table and index data to contiguous pages
Places the data rows in the same physical sequence as the index

This helps DB2 find the data in contiguous space and in the desired order, reducing the seek time needed to read the data

If DB2 finds an index with a very high cluster ratio, it might use it to avoid a sort, thus improving the performance of applications that require sort operations

Page 330: System Administration

9-12 Data Maintenance Utilities

When using REORG, it is mandatory to use the fully qualified name of the table. The following command options are available:

Reorganize a table (optionally using an index)
TABLE table_name — Specifies the name of the table to reorganize.
INDEX index_name — Specifies the index to use when reorganizing the table.
USE table_space_name — Specifies the name of a system temporary table space where the database manager can temporarily store the table being reconstructed. If a table space name is not entered, the database manager stores a working copy of the table in the table space(s) in which the table being reorganized resides.

Reorganize all indexes in a table
INDEXES ALL FOR TABLE table_name — All indexes for the specified table are to be reorganized.

Allow access to the table
ALLOW NO ACCESS — Specifies that no other users can access the table while the indexes are being reorganized. This is the default for the REORG INDEXES command.

REORG Command: Syntax and Example

9-12

Partial syntax:
REORG
  {TABLE table_name [INDEX index_name] [ALLOW {READ | NO} ACCESS] [USE table_space_name] |
   INDEXES ALL FOR TABLE table_name [ALLOW {READ | NO | WRITE} ACCESS]}

Examples:
Reorganize the inst00.employee table and all of its indexes:
db2 REORG TABLE inst00.employee

Place the rows of the table in the order of the workdept index:
db2 REORG TABLE inst00.employee INDEX workdept

Reorganize all the indexes created on the employee table:
db2 REORG INDEXES ALL FOR TABLE inst00.employee ALLOW READ ACCESS

Page 331: System Administration

Data Maintenance Utilities 9-13

ALLOW READ ACCESS — Specifies that other users can have read-only access to the table while the indexes are being reorganized. This is the default for the REORG TABLE command.
ALLOW WRITE ACCESS — Specifies that other users can read from and write to the table while the indexes are being reorganized.

Examples
The first example above reorganizes the inst00.employee table and all of its indexes, but does not put the data in any specific order.

Assume that the table inst00.employee has an index called workdept and that most of the queries using the table are grouped by department number. In the second example above, the REORG command is used to physically place the rows of the table ordered by workdept.

The final example shows the command to use when you want all of the indexes created on the employee table to be reorganized. Read access is allowed for other users during reorganization.

Page 332: System Administration

9-14 Data Maintenance Utilities

The INDEX option tells the REORG utility to use the specified index to reorganize the table. After the REORG command has completed, the physical organization of the table should match the order of the selected index. In this way, the key columns are found sequentially in the table.

After reorganizing a table using the index option, DB2 does not force the subsequent inserts or updates to match the physical organization of the table.

A clustering index defined on the table might assist DB2 in keeping future data in a clustered index order by trying to insert new rows physically close to the rows for which the key values of the index are in the same range.

Only one clustering index is allowed for a table.

REORG: Using Index

9-14

The INDEX option tells the REORG utility to use the specified index to reorganize the table
After reorganizing a table using the index option, DB2 does not force subsequent inserts or updates to match the physical organization of the table
Only one clustering index is allowed for a table

Page 333: System Administration

Data Maintenance Utilities 9-15

syscat.tables contains information about columns, tables, indexes, number of rows in a table, the use of space by a table or index, and the number of different values of a column. This information is not kept current and has to be generated by executing the RUNSTATS command.

The statistics collected by the RUNSTATS command can be used to display the physical organization of the data and provide information that the DB2 optimizer needs to select the best access path for executing SQL statements.

Generating Statistics

9-15

Use the DB2 RUNSTATS command to generate statistics:
syscat.tables contains information about columns, tables, indexes, the number of rows in a table, the use of space by a table or index, and the number of different values of a column
This information is not kept current and has to be generated by executing RUNSTATS

The statistics collected by the RUNSTATS command can be used to:
Display the physical organization of the data
Provide information that the DB2 optimizer needs to select the best access path for executing SQL statements

Page 334: System Administration

9-16 Data Maintenance Utilities

It is recommended that you execute RUNSTATS on a frequent basis on tables that have a large number of updates, inserts, or deletes. Also, use the RUNSTATS utility after a REORG of a table.

RUNSTATS does not produce any output. View its results by querying the system catalog tables only.

The following example uses the sysibm.syscolumns table:

db2 RUNSTATS ON TABLE sysibm.syscolumns

To collect statistics for a table and all of its indexes at the same time:

db2 RUNSTATS ON TABLE inst00.employee AND INDEXES ALL

To collect statistics for table indexes only:

db2 RUNSTATS ON TABLE inst00.employee FOR INDEXES ALL

You can permit other users to have access to the table by adding either the ALLOW READ ACCESS or ALLOW WRITE ACCESS clause to the command.
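For example, a hedged sketch that gathers table and index statistics while still letting applications update the table:

db2 RUNSTATS ON TABLE inst00.employee AND INDEXES ALL ALLOW WRITE ACCESS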

RUNSTATS Command: Syntax and Examples

9-16

Syntax:
RUNSTATS ON TABLE table_name
  [WITH DISTRIBUTION]
  [{AND | FOR} [DETAILED] {INDEXES ALL | INDEX index_name}]
  [SHRLEVEL {CHANGE | REFERENCE}]
  [ALLOW {READ | WRITE} ACCESS]

To collect statistics for a table and all of its indexes at the same time:
db2 RUNSTATS ON TABLE inst00.employee AND INDEXES ALL

To collect statistics for table indexes only:
db2 RUNSTATS ON TABLE inst00.employee FOR INDEXES ALL

Page 335: System Administration

Data Maintenance Utilities 9-17

To collect distribution statistics on table columns, run:

db2 RUNSTATS ON TABLE inst00.employee WITH DISTRIBUTION

The WITH DISTRIBUTION option instructs DB2 to collect data from the distribution of values for the columns in a table. This option is related to three DB CFG parameters:

NUM_FREQVALUES indicates the number of most-frequent values that DB2 collects. For example, if it is set to 10, only information for the 10 most frequent values is obtained.
NUM_QUANTILES indicates the number of quantiles that DB2 looks for. This is the amount of information DB2 retains about the distribution of values for columns in the table.
STAT_HEAP_SZ indicates how much memory DB2 uses for collecting these statistics.

Collection of distribution statistics is demanding, and it is not recommended for all tables. Only tables presenting a high volume of non-uniform values are candidates.
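A hedged sketch of adjusting those parameters before collecting distribution statistics; the database name sample and the values 20 and 30 are assumptions chosen only for illustration:

db2 UPDATE DB CFG FOR sample USING NUM_FREQVALUES 20 NUM_QUANTILES 30
db2 RUNSTATS ON TABLE inst00.employee WITH DISTRIBUTION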

RUNSTATS: Distribution Statistics

9-17

To collect distribution statistics on table columns:
RUNSTATS ON TABLE inst00.employee WITH DISTRIBUTION

WITH DISTRIBUTION instructs DB2 to collect data from the distribution of values for the columns in a table. This option is related to three DB CFG parameters:
• NUM_FREQVALUES
• NUM_QUANTILES
• STAT_HEAP_SZ

Collection of distribution statistics need only be done on high usage tables with a large number of different values

Page 336: System Administration

9-18 Data Maintenance Utilities

The DB2 REBIND command and db2rbind utility provide the following functionality:

They provide a quick way to recreate a package, enabling the user to take advantage of a change in the system without needing the original bind file.
They provide a method to recreate inoperative packages.
They control the rebinding of invalid packages.

You should use a qualified package name, otherwise these programs assume the current authorization ID. They do not automatically commit unless auto-commit is enabled.

The db2rbind utility rebinds all packages in the database.
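Two hedged one-liners; the package name inst00.pkg1 and the database name sample are illustrative:

db2 REBIND PACKAGE inst00.pkg1
db2rbind sample -l rbind.log all

The first rebinds a single package; the second revalidates every package in the database and records any errors in rbind.log.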

REBIND

9-18

The DB2 REBIND command and the db2rbind utility provide the following functionality:

Quick way to recreate a package, enabling the user to take advantage of a change in the system without a need for the original bind file
Method to recreate inoperative packages
Control over the rebinding of invalid packages

Use a qualified package name
REBIND does not automatically commit unless auto-commit is enabled
The db2rbind utility rebinds all packages in the database

Page 337: System Administration

Data Maintenance Utilities 9-19

The syntax for db2rbind is shown above. The options for this command include:

-l — Specifies the (optional) path and the (mandatory) file name used for recording errors that result from the package revalidation procedure.
all — Specifies that rebinding of all valid and invalid packages is to be done. If this option is not specified, all packages in the database are examined, but only those packages that are marked as invalid are rebound, so that they are not rebound implicitly during application execution.
-u userid -p password — User ID and password used to run the utility.

REBIND and db2rbind: Syntax

9-19

Syntax:

db2rbind database_name -l logfile [all] [-u userid -p password] [-r {conservative | any}]

Page 338: System Administration

9-20 Data Maintenance Utilities

Summary

9-20

You should now be able to:
Use the REORGCHK command
Use the REORG command to reorganize data
Use the RUNSTATS command to update data statistics
Describe new DB2 Version 8 features of REORG, REORGCHK, and RUNSTATS
Use the REBIND command to utilize data statistics

Page 339: System Administration

Data Maintenance Utilities 9-21

Lab Exercise

9-21

You should now complete the lab exercises for Module 9.

Page 340: System Administration

9-22 Data Maintenance Utilities

Page 341: System Administration

Locking and Concurrency 02-2003 10-1© 2002, 2003 International Business Machines Corporation

Locking and Concurrency

Module 10

Page 342: System Administration

10-2 Locking and Concurrency

Objectives

10-2

At the end of this module, you will be able to:
Explain why locking is needed
List and describe the types of locks
Explain lock conversion and escalation
Tune MAXLOCKS and LOCKLIST
Explain and apply isolation levels
Describe the situation that causes deadlocks

Page 343: System Administration

Locking and Concurrency 10-3

Locking allows multiple applications to share the same data in a database. It protects data by allowing only one application to update a given piece of data at a time, while still allowing applications to share data. Locking also prevents applications from accessing data that has been modified, but not committed, by other applications, except where the uncommitted read isolation level is used.

Why Are Locks Needed?

10-3

Ensures data integrity while permitting multiple applications access to data
Prohibits applications from accessing uncommitted data written by other applications (unless the Uncommitted Read isolation level is used)

Page 344: System Administration

10-4 Locking and Concurrency

The table below lists the different lock modes available and the objects that use these modes.

Types of Locks

10-4

Mode / Applicable objects / Description

IN (Intent None) — Table spaces and tables. The lock owner can read any data in the table, including uncommitted data, but cannot update it. No row locks are acquired by the lock owner. Other concurrent applications can read or update the table.

IS (Intent Share) — Table spaces and tables. The lock owner can read data in the locked table, but not update this data. When an application holds the IS table lock, the application acquires an S or NS lock on each row read. In either case, other applications can read or update the table.

NS (Next Key Share) — Rows. The lock owner and all concurrent applications can read, but not update, the locked row. This lock is acquired on rows of a table, instead of an S lock, where the isolation level is either RS or CS on data that is read.

S (Share) — Tables and rows. The lock owner and all concurrent applications can read, but not update, the locked data. Individual rows of a table can be S locked. If a table is S locked, no row locks are necessary.

Locks can be categorized based on the following attributes:
MODE
OBJECT
DURATION


Locking and Concurrency 10-5

IX — Intent Exclusive (table spaces and tables): The lock owner and concurrent applications can read and update data in the table. When the lock owner reads data, an S, NS, X, or U lock is acquired on each row read. An X lock is also acquired on each row that the lock owner updates. Other concurrent applications can both read and update the table.

SIX — Share with Intent Exclusive (tables): The lock owner can read and update data in the table. The lock owner acquires X locks on the rows it updates, but acquires no locks on rows that it reads. Other concurrent applications can read the table.

U — Update (tables and rows): The lock owner can update data in the locked row or table. The lock owner acquires X locks on the rows before it updates the rows. Other units of work can read the data in the locked row or table, but cannot attempt to update it.

NX — Next Key Exclusive (rows): The lock owner can read but not update the locked row. This mode is similar to an X lock except that it is compatible with the NS lock.

NW — Next Key Weak Exclusive (rows): This lock is acquired on the next row when a row is inserted into the index of a non-catalog table. The lock owner can read but not update the locked row. This mode is similar to the X and NX locks except that it is compatible with the W and NS locks.

X — Exclusive (tables and rows): The lock owner can both read and update data in the locked row or table. Tables can be Exclusive locked, meaning that no row locks are acquired on rows in those tables. Only uncommitted read applications can access the locked table.

W — Weak Exclusive (rows): This lock is acquired on the row when a row is inserted into a non-catalog table. The lock owner can change the locked row. This lock is similar to an X lock except that it is compatible with the NW lock. Only uncommitted read applications can access the locked row.

Z — Superexclusive (table spaces and tables): This lock is acquired on a table in certain conditions, for example, when the table is altered or dropped, an index on the table is created or dropped, or the table is reorganized. No other concurrent application can read or update the table.


10-6 Locking and Concurrency

The above table summarizes the compatibility of the different lock modes. The horizontal heading shows the lock mode of the application that is holding the locked resources and the vertical heading shows the mode of the lock requested by another application. For example, if the application holding the resource is holding an update lock (U), and another application requests an exclusive lock on that same resource, the request for the lock is denied.

Locking Type Compatibility

10-6

State requested (rows) against state of held resources (columns):

        W    NW   Z    X    NX   U    SIX  IX   S    NS   IS   IN   None
W       No   Yes  No   No   No   No   No   No   No   No   No   Yes  Yes
NW      Yes  No   No   No   No   No   No   No   No   Yes  No   Yes  Yes
Z       No   No   No   No   No   No   No   No   No   No   No   No   Yes
X       No   No   No   No   No   No   No   No   No   No   No   Yes  Yes
NX      No   No   No   No   No   No   No   No   No   Yes  No   Yes  Yes
U       No   No   No   No   No   No   No   No   Yes  Yes  Yes  Yes  Yes
SIX     No   No   No   No   No   No   No   No   No   No   Yes  Yes  Yes
IX      No   No   No   No   No   No   No   Yes  No   No   Yes  Yes  Yes
S       No   No   No   No   No   Yes  No   No   Yes  Yes  Yes  Yes  Yes
NS      No   Yes  No   No   Yes  Yes  No   No   Yes  Yes  Yes  Yes  Yes
IS      No   No   No   No   No   Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes
IN      Yes  Yes  No   Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes
None    Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes  Yes

Legend:
I = Intent
N = None
NS = Next Key Share
S = Share
NX = Next Key Exclusive
X = Exclusive
U = Update
Z = Super Exclusive
NW = Next Key Weak Exclusive
W = Weak Exclusive


Locking and Concurrency 10-7

Lock conversion is required when an application has already locked a data object and requires a more restrictive lock. A process can hold only one lock on a data object at any time.

The operation of changing the mode of the lock already held is called a conversion.

Lock Conversion

10-7

If an X lock is needed and an S or U lock is held:

SELECT * FROM inst##.staff
WHERE salary > 10000
FOR UPDATE OF salary

UPDATE inst##.staff
SET salary = 10000 + 1000
WHERE salary > 10000


10-8 Locking and Concurrency

If an application changes many rows in one table, it is better to have one lock on the entire table. Each lock, regardless of whether it is a lock on a database, table, or row, consumes the same amount of memory, so a single lock on the table requires less memory than locks on multiple rows in the table. However, table locks result in decreased concurrency, since other applications are prevented from accessing the table for the duration of the lock.

Database configuration parameters that affect lock escalation include LOCKLIST, which sets a limit to the amount of space allocated to the lock list, and MAXLOCKS, which is a percent value representing the maximum amount of lock list space that can be used by a single application.

Tuning Lock Parameters
When the percentage of the lock list used by one application reaches MAXLOCKS, the database manager performs lock escalation, from row locks to a table lock.

If lock escalations are causing performance concerns, increase the value of the LOCKLIST parameter or the MAXLOCKS parameter. Use the database system monitor to determine whether lock escalations are occurring.

Lock Escalation

10-8

Row locks

Table lock


Locking and Concurrency 10-9

Determining an Initial Setting for LOCKLIST
Here is one method you could use to determine an initial setting for LOCKLIST:

1. Calculate a lower bound for the size of the lock list: (512 * 36 * maxappls) / 4096, where 512 is an estimate of the average number of locks per application and 36 is the number of bytes required for each lock against an object that has an existing lock.

2. Calculate an upper bound for the size of the lock list: (512 * 72 * maxappls) / 4096, where 72 is the number of bytes required for the first lock against an object.

3. Estimate the amount of concurrency against the data and, based on expectations, choose an initial value for LOCKLIST that falls between the upper and lower bounds calculated.

4. Tune the value of LOCKLIST using the database system monitor.
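As a worked example (the numbers are illustrative, not a recommendation): with MAXAPPLS set to 40, the lower bound is (512 * 36 * 40) / 4096 = 180 4K pages and the upper bound is (512 * 72 * 40) / 4096 = 360 4K pages. A starting value in that range could then be applied to a hypothetical database with:

db2 UPDATE DB CFG FOR sample USING LOCKLIST 250 MAXLOCKS 20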


10-10 Locking and Concurrency

The lock mode that is used by an application is determined by the isolation level. Also, locks placed on table elements can be on individual rows, or on pages of rows in the table. The default used by DB2 is row locking.

Databases, table spaces, and tables can be explicitly locked. Here are some examples of commands that can be used to lock these database objects:

Database lock:
CONNECT TO database IN EXCLUSIVE MODE

Table space lock:
QUIESCE TABLESPACES FOR TABLE table_name INTENT TO UPDATE

Table lock:
LOCK TABLE table_name IN EXCLUSIVE MODE

Database, tables, and rows can be implicitly locked. For example:

Databases are locked during a full database restore
Tables are locked during lock escalation
Rows are locked through normal data modification

Explicit and Implicit Locking

10-10

Locking is controlled by isolation level

By default, DB2 uses row-level locking

Database, table spaces, and tables can be explicitly locked

Database, tables, and rows can be implicitly locked


Locking and Concurrency 10-11

To guarantee the integrity of the data, rules are required to control how shared data is read and modified by concurrent applications. Without these rules, serious problems could occur.

Phantom Read Phenomenon
The phantom read phenomenon occurs when:

1. Your application executes a query that reads a set of rows based on some search criterion.
2. Another application inserts new data or updates existing data that would satisfy your application's query.
3. Your application repeats the query from step 1 (within the same unit of work).

Some additional (phantom) rows are returned as part of the result set that were not returned when the query was initially executed (step 1).

Possible Problems When Data Is Shared

10-11

Problems encountered when many users access the same data source:

Lost update
Uncommitted read
Nonrepeatable read
Phantom read


10-12 Locking and Concurrency

The isolation level is set within an application to control the type of locks and the degree of concurrency allowed by the application. DB2 provides four different levels of isolation: uncommitted read, cursor stability, read stability, and repeatable read.

Uncommitted Read
Uncommitted read, also known as dirty read, is the lowest level of isolation. It is the least restrictive, but provides the greatest level of concurrency. However, it is possible for a query executed under uncommitted read to return data that has never been committed to the database. For example, if an application has performed an insert of a row but has not committed, this row can be selected by an application using uncommitted read. Phantom reads are also possible under uncommitted read isolation.

Cursor Stability
Cursor stability is the default isolation level; it is used when no isolation is set in an application. In this isolation mode, only the row on which the cursor is currently positioned is locked. This lock is held until a new row is fetched or the unit of work is terminated. If a row is updated, the lock is held for the duration of the transaction.

Isolation Levels

10-12

DB2 provides different levels of protection to isolate data:
Uncommitted read
Cursor stability
Read stability
Repeatable read

Cursor stability is the default isolation level
Isolation level can be specified for a session, a client connection, or an application before a database connection
For embedded SQL, the level is set at bind time
For dynamic SQL, the level is set at run time


Locking and Concurrency 10-13

Under cursor stability, there is no possibility of selecting uncommitted data, but it is still possible for a nonrepeatable read or phantom read to occur.

Read Stability
Under read stability isolation, locks are placed only on the rows an application retrieves within a unit of work. Applications cannot read uncommitted data, and no other application can change the rows locked by the read stability application. It is still possible to retrieve phantom rows if the application issues the same query more than once within the same unit of work.

Repeatable Read
Repeatable read is the highest level of isolation and has the lowest level of concurrency. Locks are held on all rows processed (scanned) for the duration of a transaction. Because so many locks are required for repeatable read, the optimizer might choose to lock the entire table instead of locking individual rows.

The same query issued by the application more than once in a unit of work gives the same result each time (no phantom reads). No other application can update, delete, or insert a row that affects the result table until the unit of work completes.
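As a sketch of how the level is selected in practice (the database and bind file names are hypothetical): for a CLP session the isolation level is changed before connecting, and for embedded SQL it is chosen at bind time with the ISOLATION option.

db2 CHANGE ISOLATION TO RR
db2 CONNECT TO sample

db2 BIND program1.bnd ISOLATION RS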


10-14 Locking and Concurrency

A deadlock occurs when two or more applications connected to the same database wait indefinitely for a resource. The waiting is never resolved because each application is holding a resource that the other needs to continue.

Process 1 locks table A in X (exclusive) mode and Process 2 locks table B in X mode; if Process 1 then tries to lock table B in X mode and Process 2 tries to lock table A in X mode, the processes will be in a deadlock.

Deadlocks are almost always the result of application design. With careful design, for example by accessing tables and rows in a consistent order and keeping units of work short, they can largely be avoided.

Deadlock Detector
Deadlocks in the lock system are handled in the database manager by an asynchronous system background process called the deadlock detector.

The deadlock check interval defines the frequency at which the database manager checks for deadlocks among all the applications connected to a database.

Time interval for checking deadlock = DLCHKTIME
Default [Range]: 10,000 (10 seconds) [1,000–600,000]
Unit of measure: milliseconds
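For example, to have the deadlock detector run every 5 seconds on a hypothetical database:

db2 UPDATE DB CFG FOR sample USING DLCHKTIME 5000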

Deadlocks

10-14

(Diagram: Process 1 holds an X (exclusive) lock on table A and wants an X lock on table B, while Process 2 holds an X lock on table B and wants an X lock on table A, resulting in a deadlock.)


Locking and Concurrency 10-15

Summary

10-15

You should now be able to:
Explain why locking is needed
List and describe the types of locks
Explain lock conversion and escalation
Tune MAXLOCKS and LOCKLIST
Explain and apply isolation levels
Describe the situation that causes deadlocks


10-16 Locking and Concurrency

Lab Exercise

10-16

You should now complete the lab exercises for Module 10.


Backup and Recovery 02-2003 11-1 © 2002, 2003 International Business Machines Corporation

Backup and Recovery

Module 11


11-2 Backup and Recovery

Objectives

11-2

At the end of this module, you will be able to:
Describe the different types of recovery
Explain the importance of logging
Use the BACKUP command
Restore a database to end of logs or point in time
Perform a table space backup and recovery


Backup and Recovery 11-3

Recovery occurs in a DB2 instance as a result of the need to restore the instance to a state of consistency when some event has caused portions of the instance to be out of sync. You can initiate any of the following types of recovery:

Crash/restart recovery — Uses the RESTART DATABASE command, or the AUTORESTART configuration parameter, to protect a database from being left in an inconsistent or unusable state.
Version/image recovery — Uses the BACKUP command in conjunction with the RESTORE command to put the database in a state that was previously saved. This is used for nonrecoverable databases or databases for which there are no archived logs.
Rollforward recovery — Uses the BACKUP command in conjunction with the RESTORE and ROLLFORWARD commands to recover a database or table space to a specified point in time.

Types of Recovery

11-3

Crash/restart recovery

Version/image recovery

Rollforward recovery


11-4 Backup and Recovery

Log files are used to keep records of all changes made to database objects. The total amount of active log space that can be configured is 32 gigabytes in Version 7 and 256 gigabytes in Version 8.1.

Changes made to the databases are first written to log buffers in memory, then are flushed from memory to the log files on disk. The transactions written to the logs define a unit of work that can be rolled back if the entire work unit cannot complete successfully.

Log files are necessary to perform recovery operations. There are two phases to the recovery process:

Reapplication of all transactions, regardless of whether or not they have been committed.
Rollback of those changes that were NOT committed.

Logging Concepts

11-4

Log file usage
Logs database changes

Changes written to buffer first

Raw devices can be used for logs

Used to terminate or roll back a unit of work

Used for crash recovery

Roll-forward recovery


Backup and Recovery 11-5

In this example, three user processes are accessing the same database. The life of every transaction is also depicted (A-F). The lower middle section of the diagram shows how the database changes are synchronously recorded in the log files (x, y).

When a COMMIT is issued, the log buffer containing the transaction is written to disk. Transaction E is never written to disk because it ends with a ROLLBACK statement. When log file x runs out of room to store the first database change of Transaction D, the logging process switches to log file y. Log file x remains active until all Transaction C changes are written to the database disk files. The hexagon represents the period of time during which log file x remains active after logging is switched to log file y.

Transaction Log File Usage

11-5

(Diagram: over time, transactions A through F from processes 1, 2, and 3 are written to log file x and then log file y; committed transactions are forced to disk at COMMIT, transaction E is rolled back, and log file x remains active until all of transaction C's changes are written to the database disk files.)


11-6 Backup and Recovery

When a log file has become full, transactions are automatically written to the next log file in sequence. When the last log file is filled, transactions are written to the first log, and so on. This is known as circular logging.

Circular logging is the default DB2 logging method. Primary log files are used to record all changes and are reused when changes are committed. Secondary log files are allocated when the limit of primary log files is reached. This method of circular logging makes crash recovery and version recovery possible.

With circular logging, roll-forward recovery is not possible; only crash and version recovery are supported. If all primary and secondary log files become full, an error is returned when the limit of secondary logs is reached or there is insufficient disk space.

The number of secondary log files is controlled by the LOGSECOND database configuration parameter. The default setting for LOGSECOND is 2, and setting it to 0 means no secondary log files are allocated. Setting LOGSECOND to -1 configures the database for an infinite number of secondary logs (infinite active logging, described below). Circular logging itself is the default logging method and remains in effect as long as archival logging (LOGRETAIN or a userexit) is not enabled.

Circular Logging

11-6

(Diagram: primary log files 1 through "n" are written and reused in a circle, with secondary log files 1 through "n" allocated as needed.)


Backup and Recovery 11-7

Infinite Active Logging
Infinite active logging is introduced in Version 8 of DB2 UDB. It allows an active unit of work to span the primary logs and archive logs, effectively allowing a transaction to use an infinite number of log files. When infinite active logging is not enabled, the log records for a unit of work must fit in the primary log space.

Infinite active logging is enabled by setting LOGSECOND to -1. It can be used to support environments with large jobs that require more log space than you would normally allocate to the primary logs.
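A minimal sketch, assuming a hypothetical database named SAMPLE (allowing an unlimited number of secondary logs generally also assumes that filled log files are being archived, for example through a userexit, so that they can leave the active log path):

db2 UPDATE DB CFG FOR sample USING LOGSECOND -1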


11-8 Backup and Recovery

Dual logging provides a way to maintain mirror copies of both primary and secondary logs. If the primary or secondary logs become corrupt, or if the device where the logs are stored becomes unavailable, the database can still be accessed.

Dual logging is enabled by setting the MIRRORLOGPATH database configuration parameter to a path where the mirror logs are to be located.
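For example, to place the mirror copy of the logs on a separate disk (the path is illustrative):

db2 UPDATE DB CFG FOR sample USING MIRRORLOGPATH /mirror/db2logs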

Dual Logging

11-8

(Diagram: transaction records are written both to the primary and secondary logs in LOGPATH and to the mirror logs in MIRRORLOGPATH; dual logging is new in DB2 Version 8.)


Backup and Recovery 11-9

Archival logging is the process of moving the contents of log files to an external storage medium. Archival logging is enabled by setting the LOGRETAIN parameter in DB CFG to RECOVERY. When LOGRETAIN is enabled, log files are not deleted, but are stored either offline or online. A userexit routine can be used to move archived log files to other storage media. This makes online backup and roll forward possible.

Retained logs are handled in the following way:

With log retention, all logs are kept in the log path unless userexits are enabled or they are moved manually.
Logs are closed and archived when they are no longer required for restart recovery.
Userexits are used to archive the log files to another path, drive, or storage medium (such as a tape device).

Userexits are programs called by the DB2 system controller for every log file as soon as it is full. During roll forward, a userexit may be called to retrieve a log file if it is not in the current log path; this happens only for a full database restore, not a table-space-level restore. Userexits must always be named db2uext2. Sample userexits included with DB2 can be modified for any installation. They include: db2uext2.cadsm, db2uext2.ctape, db2uext2.cdisk, and db2uext2.cxbsa.
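A minimal sketch of enabling archival logging on a hypothetical database; after the update the database is placed in backup pending state, so a full backup must be taken before applications can connect again:

db2 UPDATE DB CFG FOR sample USING LOGRETAIN RECOVERY USEREXIT ON
db2 BACKUP DATABASE sample TO /backups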

Archival Logging/Log Retain

11-9

(Diagram: log files 12 through 16 move from the active log subdirectory to online archive and then to offline archive, either manually or by a userexit.)
ACTIVE: contains information for non-committed or non-externalized transactions.
ONLINE ARCHIVE: contains information for committed and externalized transactions; stored in the active log subdirectory.
OFFLINE ARCHIVE: archived logs moved from the active log subdirectory, possibly to other media.


11-10 Backup and Recovery

The backup utility:

Creates a backup copy of a database or a table space.
Supports offline and online backups.
Is started by commands from the Command Line Processor or the Control Center.
Can be checked using the db2ckbkp utility. This utility:

Displays information about the backup
Tests the integrity of the backup image

Here are some additional considerations regarding online backup:

Log flushing during an online backup:
After an online backup is complete, DB2 forces the currently active log file to be closed; as a result, the log is archived.
This ensures that an online backup has a complete set of archived logs available for recovery.
This can reduce log management requirements during online backups.
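For illustration, an offline and an online backup of the course database followed by a check of the resulting image (the target directory is arbitrary, and the image file name follows the naming convention shown on the following page):

db2 BACKUP DATABASE db2cert TO /backups
db2 BACKUP DATABASE db2cert ONLINE TO /backups
db2ckbkp /backups/DB2CERT.0.DB2INST.0.20031209120000.001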

Backup Utility

11-10

(Diagram: BACKUP DATABASE DB2CERT TO directory, issued by a user with SYSADM, SYSCTRL, or SYSMAINT authority against a local or remote database; the backup buffer size comes from the backbufsz parameter or the command option, and the target can be a directory or ADSM.)


Backup and Recovery 11-11

The file name (or folders for Intel platforms) used for images on disk or diskette contains:

The database alias
The type of backup (0=FULL, 3=TABLESPACE, 4=Copy from LOAD)
The instance name
The database node (always 0 for non-partitioned databases)
The timestamp of the backup
A sequence number

The exact naming convention varies slightly by platform. Tape images are not named, but contain the same information in the backup header for verification purposes. The backup history provides key information in an easy-to-use format.

Backup Files

11-11

Unix:  DBALIAS.0.DB2INST.0.19960314131259.001

Intel: DBALIAS.0\DB2INST.0\19960314\131259.001

In both cases, DBALIAS is the database alias, the first 0 is the backup type, DB2INST is the instance name, the second 0 is the database node, 19960314131259 is the timestamp (year, month, day, hour, minute, second), and 001 is the sequence number.


11-12 Backup and Recovery

Restoring a backup image requires rebuilding the database or table space that has been backed up with the BACKUP command. The restore can be issued from the Command Line Processor or the Control Center.

Restoring a Backup Image

11-12

(Diagram: RESTORE processing of a backup image depends on whether the target database already exists.)
If the database EXISTS, the restore deletes the table, index, and long field files; retains authentication; retains the database directory entries; replaces the table space entries; retains the history; and checks the database seed.
If the database is NEW, the restore creates a new database; restores authentication; restores the database configuration file; sets the default log path; and restores the database comments.
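A sketch of restoring the database from a specific backup image into an existing database (the directory and timestamp are illustrative):

db2 RESTORE DATABASE db2cert FROM /backups TAKEN AT 19960314131259 WITHOUT PROMPTING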


Backup and Recovery 11-13

This is a description of what happens when a database is restored and the log files are re-applied during the roll forward phase.

Look for the required log file in the current log path.
If found, reapply transactions from the log file.
If not found, manually move the required archived log files to the current path.
If not found and USEREXIT is configured, the userexit is called to retrieve the log file from the archive path. The userexit is only called to retrieve the log file if rolling forward for a full database restore.
If rolling forward for a table space restore, specify the OVERFLOWLOGPATH parameter or manually move the files back to the active log path. Once the log is in the current log path, the transactions are reapplied.

The Database Roll Forward

11-13

During roll forward processing, DB2 looks for the required log file:
If found, it reapplies the transactions
If not found, a userexit can be called to retrieve the log file, or it may be moved manually
If rolling forward for a table space restore, specify the OVERFLOWLOGPATH parameter, or manually move the files back to the active log path

Once the log is in the current log path, transactions are reapplied
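For example, after restoring from an online backup image, the database could be rolled forward either to the end of the logs or to a point in time (the timestamp, expressed in coordinated universal time, is illustrative):

db2 ROLLFORWARD DATABASE db2cert TO END OF LOGS AND COMPLETE
db2 ROLLFORWARD DATABASE db2cert TO 2003-12-09-12.00.00 AND COMPLETE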


11-14 Backup and Recovery

A redirected restore allows you to redefine or redirect table space containers during the restore process. The definitions of table space containers are saved during a backup, but if these containers are not available during the restore, you can specify new containers. In a redirected restore, the RESTORE command must be executed twice.

Redirected Restore

11-14

Used to redefine or redirect table space containers during a restore

Definition of table space containers is kept during a backup

If containers are not available during a restore, new containers can be specified

Database restore command must be issued twice
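A minimal sketch of the two RESTORE invocations, assuming an SMS table space with ID 2 needs a new container path (the ID and paths are illustrative):

db2 RESTORE DATABASE db2cert FROM /backups REDIRECT
db2 "SET TABLESPACE CONTAINERS FOR 2 USING (PATH '/newdisk/ts2')"
db2 RESTORE DATABASE db2cert CONTINUE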


Backup and Recovery 11-15

Above is a list of additional restore considerations.

Restore Considerations

11-15

Need SYSADM, SYSCTRL, or SYSMAINT authority to restore to an existing database

Need SYSADM or SYSCTRL to restore a new database

RESTORE only works with images taken with the BACKUP command on the same operating system

RESTORE works with external storage managers such as Tivoli Storage Manager (formerly ADSM)

Restore takes an exclusive connection to the database

Database can be local or remote


11-16 Backup and Recovery

For table space recovery, LOGRETAIN and/or a userexit must be enabled. You can then restore the table space and use the ROLLFORWARD command to bring it to the desired point in time.

Table space recovery requires a recoverable database (retention logging or a userexit must be enabled) as the table space must be rolled forward to a minimum point in time (PIT).

Minimum PIT ensures the table space agrees with what is in the system catalogs. It is initially the time when a backup occurred, but can be increased by changes which cause system catalog updates:

Alter table
Create index
Table space definition change

The remainder of the database and table spaces are accessible during restore of a particular table space.
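For illustration, restoring a single table space online and rolling it forward to the end of the logs (the table space name is an example):

db2 RESTORE DATABASE db2cert TABLESPACE (USERSPACE1) ONLINE FROM /backups
db2 ROLLFORWARD DATABASE db2cert TO END OF LOGS AND COMPLETE TABLESPACE (USERSPACE1) ONLINE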

Table Space Recovery

11-16

(Diagram: units of work 1 through 4 are logged to active and archived log files; after a table space backup image is taken and a crash occurs, the table space(s) are restored from the backup image and the changes in the logs are rolled forward to a point of consistency, with active/uncommitted units of work rolled back.)


Backup and Recovery 11-17

What is an offline table space state?

If a container is not accessible, the table space is placed in the offline state.
Consequence of placing the table space in an offline state:
If the problem can be corrected, the table space can be made available.
Allows connection to the database even if circular logging is used.

Table Space State: Offline

11-17

(Diagram: Example 1 shows a table space in Backup Pending state whose containers are inaccessible at CONNECT/ACTIVATE being placed in Backup Pending + Offline; Example 2 shows a table space in Normal state whose containers are inaccessible being placed in Offline.)


11-18 Backup and Recovery

When table spaces are offline, a connection is allowed to be made to the database even if the table space is damaged and circular logging is used. If only a temporary table space is damaged, you can create a new one after connecting to the database. The bad temporary table space can then be dropped.

Table Space Offline State (cont.)

11-18

(Diagram: with circular logging and a damaged table space, a connection to the database was not possible in Version 5; in Version 6/7 the connection succeeds and the table space is placed in the OFFLINE state.)


Backup and Recovery 11-19

Above is a summary of the backup and restore features of DB2.

Backup and Restore Summary

11-19

Full database backup, offline, archival logging: access during backup N/A; database state after restore: rollforward pending; roll forward: any point in time after the backup.
Full database backup, offline, circular logging: access during backup N/A; database state after restore: consistent; roll forward: N/A.
Full database backup, online, archival logging: full access during backup; database state after restore: rollforward pending; roll forward: any point in time past the backup.
Table space backup, offline, archival logging: no access during backup; state after restore: table space in rollforward pending; roll forward: minimum PIT required.
Table space backup, online, archival logging: full access during backup; state after restore: table space in rollforward pending; roll forward: minimum PIT required.


11-20 Backup and Recovery

A recovery history file is created for each database and is updated whenever any of the above events occur.

You can use this information to recover all or part of the database to a given point in time. The size of the file is controlled by the REC_HIS_RETENTN configuration parameter. This parameter specifies a retention period (in days) for the entries in the file (db2rhist). You can execute OPEN, CLOSE, GET NEXT, UPDATE, and PRUNE commands against this file.

Recovery History File

11-20

Created with each database and updated whenever there is a:
Backup of a database or table space
Restore of a database or table space
Roll forward of a database or table space
Alter of a table space
Quiesce of a table space
Load of a table
Drop of a table
Reorganization of a table
Update of table statistics


Backup and Recovery 11-21

The dropped table recovery feature is provided in DB2 as a way to restore tables that are accidentally dropped. Above is a list of steps required to restore a dropped table.

Dropped Table Recovery

11-21

1. Set the DROPPED TABLE RECOVERY option on the table space before the table is dropped.
2. Get the DDL for the dropped table from the recovery history file.
3. Extract it using the LIST HISTORY DROPPED TABLE command.
4. Restore the database or table space from a backup image.
5. Roll forward the database or table space using the RECOVER DROPPED TABLE option—this generates an export file of the data for the dropped table.
6. Recreate the table with the DDL that was extracted earlier.
7. Import the recovered data.
8. Triggers, summary tables, unique constraints, referential constraints, and check constraints must be manually applied.
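A sketch of the command sequence under the assumptions above (the table space name, directory, and placeholders are illustrative; the dropped-table ID comes from the LIST HISTORY output, and the table is recreated from the extracted DDL before the IMPORT):

db2 ALTER TABLESPACE userspace1 DROPPED TABLE RECOVERY ON
db2 LIST HISTORY DROPPED TABLE ALL FOR db2cert
db2 RESTORE DATABASE db2cert TABLESPACE (userspace1) FROM /backups
db2 ROLLFORWARD DATABASE db2cert TO END OF LOGS AND COMPLETE TABLESPACE (userspace1) RECOVER DROPPED TABLE <dropped-table-id> TO /export/recovered
db2 IMPORT FROM <exported-data-file> OF DEL INSERT INTO <recreated-table>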


11-22 Backup and Recovery

Summary

11-22

You should now be able to:
Describe the different types of recovery
Explain the importance of logging
Use the BACKUP command
Restore a database to end of logs or point in time
Perform a table space backup and recovery


Backup and Recovery 11-23

Lab Exercises

11-23

You should now complete the lab exercises for Module 11.


11-24 Backup and Recovery


Performance Monitoring 02-2003 12-1 © 2002, 2003 International Business Machines Corporation

Performance Monitoring

Module 12


12-2 Performance Monitoring

Objectives

12-2

At the end of this module, you will be able to:
List and describe performance tuning parameters
Capture and analyze snapshots
Describe the different types of Event Monitors
Analyze the output of Event Monitors
View Health Monitor alerts through the Health Center


Performance Monitoring 12-3

Why Performance Tuning?
Performance in a database management system can usually be improved by increasing computer memory, adding faster disks, and adding processors, but most of the time, improved performance is available using the resources currently available. Generally, the goals of tuning a system for optimal performance include:

Processing a larger or more demanding work load without buying new hardware.
Obtaining faster system response times without increasing processing costs.
Reducing processing costs without negatively affecting service to users.

When a database server is first initialized, many parameters are left at default settings. Default configuration parameter values are implemented for systems with small databases and a relatively small amount of memory. These default values are not always sufficient for all installations.

Performance Tuning Overview

12-3

Performance is the capacity of the system to produce desired results with a minimum cost of time or resources and is measured by response time, throughput, and availability.

Performance tuning is the process of adjusting various parameters in the system in an attempt to make the system run more efficiently.


12-4 Performance Monitoring

DB2 UDB includes several database server parameters that can be tuned to improve overall server performance:

MAXAGENTS
This parameter indicates the maximum number of database manager agents (db2agent) available at any given time to accept application requests. These agents are required both for applications running locally and those running remotely. MAXAGENTS should be set to a value that is at least equal to the sum of the values of the MAXAPPLS database configuration parameters; that database parameter sets a limit to the number of users that can access each database.
By increasing MAXAGENTS, more agents are available to handle database server requests, but more memory resources are required.
MAXAGENTS can be set to any value from 1 to 64,000.

Common Database Server Parameters

12-4

MAXAGENTS — maximum number of agents
SHEAPTHRES — sort heap threshold
NUM_POOLAGENTS — agent pool size
NUMDB — number of active databases


Performance Monitoring 12-5

SHEAPTHRES
The sort heap is a location in memory where sorting takes place. Setting the SHEAPTHRES parameter sets the total amount of memory for sorting across the entire database server instance. The initial setting for this parameter should be around 10 times the sum of all SORTHEAP settings for all databases. For systems that require a great deal of sorting, such as applications that generate a lot of reports, a higher threshold might be required.
The SHEAPTHRES parameter can be set to any value between 250 and 2,097,152 pages. The default for SHEAPTHRES is 10,000 pages.

NUM_POOLAGENTS
The value of this parameter is used as a guideline for determining how large the agent pool can grow. If more agents are created than the number indicated by NUM_POOLAGENTS, they are destroyed once they have completed their current request. The minimum value for this parameter is 1 and the maximum is 64,000.

NUMDB
This parameter is used to limit the number of databases that can be active at any given time. By limiting the number of databases, DB2 UDB can better manage the resources allocated for each database. The database server instance should be monitored over time to determine the maximum number of databases open at any given time, and NUMDB should be set accordingly. If NUMDB is set too low, then some applications might not be able to access the databases they need. If it is set too high, memory resources might be allocated but never used.
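An illustrative update of these instance-level parameters (the values are placeholders, not recommendations, and take effect only after the instance is restarted):

db2 UPDATE DBM CFG USING MAXAGENTS 400 SHEAPTHRES 20000 NUM_POOLAGENTS 200 NUMDB 4
db2stop
db2start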


12-6 Performance Monitoring

Above is a list of database configuration parameters that can be tuned for better performance. They are described in more detail below.

BUFFPAGE
Amount of memory allocated to keep required data in cache.
Alter buffer pools with NPAGES -1 so that the value of the BUFFPAGE database configuration parameter is used.
It is recommended that you set the BUFFPAGE parameter in DB CFG.
Start sizing the buffer pool at 75% of the total system memory.

CATALOGCACHE_SZ
This parameter indicates the maximum amount of space the catalog cache can use from the database heap. It stores table descriptor information used during compilation of an SQL statement.
The default value is 32, and the range is from 1 to the size of the database heap.
More cache space is required if a unit of work contains several dynamic SQL statements, or if a package containing a lot of static SQL statements is bound to the database.

Common Database Parameters

12-6

CHNGPGS_THRESH

NUM_IOCLEANERS

NUM_IOSERVERS

LOCKLIST

MAXLOCKS

MINCOMMIT

LOGFILSIZ

LOGPRIMARY & LOGSECOND

BUFFPAGE

CATALOGCACHE_SZ

LOGBUFSZ

PCKCACHESZ

SORTHEAP

STMTHEAP

DBHEAP

MAXAPPLS


Performance Monitoring 12-7

LOGBUFSZ
Specifies the amount of the database heap to use as a buffer for log records before writing these records to disk.
The default value for this parameter is eight pages, and the range is from 4 pages to 4,096 pages. Increase the value of the log buffer size if there is high disk utilization on the disks dedicated to holding log files.

PCKCACHESZ
This is allocated out of the database global memory and is used for caching static and dynamic SQL statements on a database. The range for this parameter is from 32 to 64,000 4K pages.
There is one package cache for each database node.

SORTHEAP
This variable defines the maximum number of memory pages for sorts.
The variable can have a value from 16 to 524,288 4K pages.
Set this parameter high enough to avoid overflowed sorts.

STMTHEAP
The statement heap is used as a workspace for the SQL compiler during compilation of an SQL statement.
This parameter specifies the size of the workspace.
The default value is 2,048 pages, and the range is from 128 to 60,000 pages.
For most applications, the default value is big enough. Increase it if an application has very large SQL statements.

DBHEAP
The database heap contains control block information for tables, indexes, table spaces, and buffer pools, as well as space for log buffers (LOGBUFSZ).
The default value for DBHEAP is 600 pages and the range is 32–524,288 pages.
For databases with a large amount of buffer pool space, it is necessary to increase the database heap appropriately.

MAXAPPLS
This parameter specifies the maximum number of concurrent applications, both local and remote, allowed to connect to the database.
The range for this parameter is from 1 to 5,000 (counter).


12-8 Performance Monitoring

CHNGPGS_THRESH
This parameter specifies the percentage level of changed pages at which the asynchronous page cleaners are started.
The default value is 60 percent, and the range is from 5 percent to 80 percent of dirty pages to be written.
Set this parameter to a lower value if insert, update, or delete activity is heavy.

NUM_IOCLEANERS
This parameter specifies the number of asynchronous page cleaners.
The page cleaners write changed pages from the buffer pool to disk before the space in the buffer pool is required by a database agent.
The default setting for this parameter is 1 and the range is 1–255 (counter).
To avoid I/O waits, set this parameter to a higher value if insert, update, or delete activity is heavy.

NUM_IOSERVERS
Specifies the number of I/O servers for the database.
The I/O servers are used on behalf of the database agents to perform prefetch I/O and asynchronous I/O by utilities such as backup and restore.
The default value is 3 and the range is 1–255 (counter).
Set this value to one or two more than the number of physical devices present on the server to maximize I/O parallelism.

LOCKLIST
Indicates the amount of storage allocated to the lock list. There is one lock list per database and it contains the locks held by all applications concurrently connected to the database.
The range of acceptable values for this parameter is 4–60,000 4K pages.
Increase the value if insert, update, or delete activity is heavy.

MAXLOCKS
This defines the percentage of the lock list held by an application that must be filled before the database manager performs lock escalation.
The range is from 1–100 percent.

MINCOMMIT
Setting this parameter causes a delay in the writing of log records to disk until the specified number of commits is performed.
The default value is 1 and the range is 1–25 (counter).
Increase this count if there is a high rate of update activity from many concurrent users.


Performance Monitoring 12-9

LOGFILSIZ
This parameter defines the size of each primary and secondary log file. The size of these log files limits the number of log records that can be written to them before they become full and a new log file is required.
The default is 250 pages and the range is 4 to 65,536 pages.
Increase this parameter if the database has a large number of update, delete, or insert transactions.

LOGPRIMARY
This parameter specifies the number of primary log files to preallocate. The range of values for this parameter is 2–128 (counter).
The primary log files establish a fixed amount of storage allocated to the recovery log files.

LOGSECOND
This parameter specifies the number of secondary log files that are created and used for recovery log files; however, secondary log files are allocated only as needed.
The range is 0–126 (counter).
When the primary log files become full, the secondary log files are allocated one at a time.
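For illustration, several of the database-level parameters described above could be adjusted together on a hypothetical database (the values are placeholders); the ALTER BUFFERPOOL statement with SIZE -1 makes the buffer pool take its size from BUFFPAGE, as noted earlier:

db2 UPDATE DB CFG FOR sample USING BUFFPAGE 50000 LOGBUFSZ 32 SORTHEAP 256 LOCKLIST 200 MAXLOCKS 20 LOGFILSIZ 1000 LOGPRIMARY 5 LOGSECOND 10
db2 CONNECT TO sample
db2 ALTER BUFFERPOOL ibmdefaultbp SIZE -1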


12-10 Performance Monitoring

AUTOCONFIGURE is a new DB2 command that recommends and optionally applies new values for buffer pool sizes, database configuration, and database manager configuration.

The syntax for this command is shown here:

AUTOCONFIGURE [USING input_keyword param_value ...] [APPLY {DB ONLY | DB AND DBM | NONE}]

Where:

input_keyword is the name of a resource that can be set to provide additional information to the autoconfiguration utility. Refer to the IBM DB2 Command Reference for a list of valid parameters.
param_value is the value to assign to the input_keyword.
DB ONLY indicates that only configuration changes for the currently selected database will be applied. This is the default setting.
DB AND DBM indicates that changes to both DBM CFG and DB CFG will be applied.
NONE displays the recommended changes, but does not apply them.

AUTOCONFIGURE

12-10

db2 => AUTOCONFIGURE APPLY DB ONLY

Current and Recommended Values for Database Manager Configuration

Description                                 (Parameter)         Current Value     Recommended Value
----------------------------------------------------------------------------------------------------
Agent stack size                            (AGENT_STACK_SZ)    = 16              16
Application support layer heap size (4KB)   (ASLHEAPSZ)         = 15              15
No. of int. communication buffers (4KB)     (FCM_NUM_BUFFERS)   = 4096            230
Enable intra-partition parallelism          (INTRA_PARALLEL)    = NO              NO
Maximum query degree of parallelism         (MAX_QUERYDEGREE)   = ANY             1
Max number of existing agents               (MAXAGENTS)         = 400             400
Agent pool size                             (NUM_POOLAGENTS)    = 200(calculated) 10
Initial number of agents in pool            (NUM_INITAGENTS)    = 0               0
Private memory threshold (4KB)              (PRIV_MEM_THRESH)   = 20000           20000
Max requester I/O block size (bytes)        (RQRIOBLK)          = 32767           32767
Sort heap threshold (4KB)                   (SHEAPTHRES)        = 10000           2911

...


Performance Monitoring 12-11

Here is an example of a command to automatically configure the parameters for the sample database:

db2 CONNECT TO sample
db2 AUTOCONFIGURE APPLY DB ONLY

Partial results of these commands are displayed in the slide above.

The database instance must be restarted before any configuration changes actually take place.

CREATE DATABASE and AUTOCONFIGURE
AUTOCONFIGURE can also be used with the CREATE DATABASE command to configure databases as soon as they are created. For example:

CREATE DATABASE sample2 AUTOCONFIGURE APPLY DB AND DBM


12-12 Performance Monitoring

When multiple processors are available on a computer, DB2 UDB takes advantage of them by performing some query operations in parallel. Parallel processing is possible only for queries that do not involve update operations.

Parallelism Configuration ParametersThe DB2 parallelism features are controlled by setting the following configuration parameters:

Instance-level parameters (DBM CFG):
INTRA_PARALLEL enables and disables parallelism for the instance.
MAX_QUERYDEGREE sets the maximum degree of parallelism for the instance.

Database-level parameters (DB CFG):
DFT_DEGREE sets the default value for the CURRENT DEGREE special register and the DEGREE bind option.

Query Parallelism

12-12

DB2 takes advantage of multiple processors in an SMP machine to perform parallel, non-update operations
This feature is enabled by the INTRA_PARALLEL DBM configuration parameter
Data parallelism divides data based on the number of processors
Functional parallelism allows multiple operations to occur at once, one feeding the other through shared memory
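A sketch of enabling intra-partition parallelism for the instance and setting a default degree for a hypothetical database (the degree values are examples only):

db2 UPDATE DBM CFG USING INTRA_PARALLEL YES MAX_QUERYDEGREE 4
db2stop
db2start
db2 UPDATE DB CFG FOR sample USING DFT_DEGREE ANY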


Performance Monitoring 12-13

Monitoring activities that are related to database access and SQL processing is required for optimizing the performance of the queries. It involves:

Understanding how a given query is optimized in a specific environment. For example, a query that is used in an application that does not perform well.
Understanding how applications use the database manager resources at a specific point in time. For example, database concurrency is reduced if a specific application is started.
Understanding what database manager events occur when running applications. For example, observing a degradation in overall performance when certain applications are running.

DB2 provides the following tools for monitoring performance:

Snapshot Monitor
Event Monitor
Health Monitor

Monitoring Performance

12-13

Monitoring involves an understanding of:
How a query is optimized
How resources are used by applications
What events occur when applications are running

Monitoring tools:
Snapshot Monitor
Event Monitor
Health Monitor


12-14 Performance Monitoring

Similar to a snapshot from a camera, the Snapshot Monitor is used to gather information about database activity at any point in time. The collection of information for the Snapshot Monitor is enabled by setting a series of configuration parameters that act as switches for the monitor. These switches control the amount of information, as well as whether information is collected for the entire instance, or just for a single application.

Snapshot Monitor

12-14

The Snapshot Monitor gathers information on database activity at a point in time, and uses six monitor switches to determine how much data to gather:

DFT_MON_SORT
DFT_MON_LOCK
DFT_MON_TABLE
DFT_MON_BUFPOOL
DFT_MON_UOW
DFT_MON_STMT

These switches are set at the instance level, or at the session level


Performance Monitoring 12-15

Above is a table summarizing the Snapshot Monitor switches.

Snapshot Monitor: Switches

12-15

Group           Information provided                                Switch       DBM parameter
Sorts           Number of heaps used, overflows, sort performance   SORT         DFT_MON_SORT
Locks           Number of locks held, number of deadlocks           LOCK         DFT_MON_LOCK
Tables          Activity on tables (rows read, rows written)        TABLE        DFT_MON_TABLE
Bufferpools     Number of reads and writes, time taken              BUFFERPOOL   DFT_MON_BUFPOOL
Unit of work    Start/end times and completion status               UOW          DFT_MON_UOW
SQL statements  Start/stop time, statement identification           STATEMENT    DFT_MON_STMT


12-16 Performance Monitoring

Above are some examples of commands that are used to manage the Snapshot Monitor.

When the Snapshot Monitor is enabled at the instance level, information is captured for applications accessing all databases within the instance, and the change does not take effect until the instance is restarted. When the Snapshot Monitor is enabled at the application level, only information for that application is captured, and the change takes effect immediately.

Snapshot data accumulates as long as the instance is running. Use the RESET command to clear out the monitor data.

Retrieving Snapshot Information

12-16

To switch ON monitoring for statements at the instance level:
UPDATE DBM CONFIGURATION USING DFT_MON_STMT ON

To switch ON monitoring for statements at the application level:
UPDATE MONITOR SWITCHES USING STATEMENT ON

To view snapshot data for locks:
GET SNAPSHOT FOR LOCKS ON database_name

To reset the Snapshot Monitor data:
RESET MONITOR FOR DATABASE database_name


Performance Monitoring 12-17

Here is an example of the snapshot output for lock information:

Snapshot Output: Locks

12-17

db2 GET SNAPSHOT FOR LOCKS ON Sample

Database Lock Snapshot

Database name                     = SAMPLE
Database path                     = C:\INST01\NODE0000\SQL00001\
Input database alias              = SAMPLE
Locks held                        = 1
Applications currently connected  = 1
Agents currently waiting on locks = 0
Snapshot timestamp                = 06-26-2002 17:31:18.730281

Application handle                = 3
Application ID                    = *LOCAL.INST01.020626120056
Sequence number                   = 0001
Application name                  = db2bp.exe
Authorization ID                  = INST00
Application status                = UOW Waiting
Status change time                = 06-26-2002 17:30:56.893183
Application code page             = 1252
Locks held                        = 1
Total wait time (ms)              = 1


12-18 Performance Monitoring

Snapshot Output Interpretation
From the above snapshot example, answer the following questions:

1. How many locks are currently being held?
2. How many applications are currently connected to this database?
3. What is the total amount of time spent waiting for locks (so far) by applications connected to this database?
4. What is the mode of the lock currently held?
5. What is the type of lock (what database object)?

List of Locks

Lock Object Name            = 4
Node number lock is held at = 0
Object Type                 = Row
Tablespace Name             = SYSCATSPACE
Table Schema                = SYSIBM
Table Name                  = SYSTABLES
Mode                        = NS
Status                      = Granted
Lock Escalation             = NO


Performance Monitoring 12-19

Answers to questions:

1. How many locks are currently being held?
One lock is currently held.

2. How many applications are currently connected to this database?
One application is currently connected.

3. What is the total amount of time spent waiting for locks (so far) by applications connected to this database?
The total wait time is 1 millisecond.

4. What is the mode of the lock currently held?
The currently held lock mode is Next Key Share (NS).

5. What is the type of lock (what database object)?
The type of lock (object) is Row.


12-20 Performance Monitoring

Snapshot Monitoring: Authority

12-20

To perform any snapshot monitoring command, you must have one of the following authorities:

SYSADM, SYSCTRL, or SYSMAINT


Performance Monitoring 12-21

An Event Monitor records the database activity whenever a specific event or transition occurs. This is different from the snapshot monitors, which record the state of database activity when the snapshot is taken.

Here are a couple of examples of when event monitoring is more suitable to use than snapshot monitoring.

Deadlock — When a deadlock occurs, DB2 resolves the deadlock by issuing a ROLLBACK for one of the transactions. Information regarding the deadlock event cannot be easily captured using the Snapshot Monitor, since the deadlock has probably been resolved before a snapshot can be taken.
Statement — The Snapshot Monitor for the application records cumulative data for all the SQL statements, so if you want just the data for an individual SQL statement, use the Event Monitor for statements.

Event Monitoring

12-21

Event Monitor — data about database activity is automatically recorded when a specific event occurs

Snapshot Monitor — data about database activity is recorded only once at the time the snapshot is taken


12-22 Performance Monitoring

You can create individual event monitors to monitor specific types of events or transitions. Once created, these monitors must be activated.

When creating event monitors, you must specify a directory in which to store the files for the captured data, and you must specify the number and size of the files. These files are sequentially numbered and have an .evt extension.

Event Monitor

12-22

Types of events that can be monitored:
Database
Tables
Deadlocks
Table spaces
Bufferpools
Connections
Statements
Transactions


Performance Monitoring 12-23

Above is a description of the different types of events that can be monitored.

There are no switches associated with event monitors. When you create the monitor, you choose either the AUTOSTART or MANUAL START option according to whether you want the event monitor to start automatically when the specified database is started.

Event Monitor: Data Collection

12-23

Event type      Description                        When collected
DATABASE        Database summary information       Last application disconnects
CONNECTIONS     Connection summary information     Every application disconnect
TABLE           Table summary information          Last application disconnects
STATEMENTS      SQL statement information          Each SQL statement
TRANSACTIONS    Transaction summary information    COMMIT or ROLLBACK time
DEADLOCKS       Deadlock information               When a deadlock occurs (-911)


12-24 Performance Monitoring

Event Monitor definitions are stored in the following catalog tables:

syscat.eventmonitors
syscat.events

To determine which event monitors are active, do the following:

Select the monitor from the syscat.eventmonitors table.
Use the SQL function event_mon_state(event_monitor_name).
An example of a select statement that includes an event monitor, and the output from this statement, are shown above. A STATE value of 1 denotes active, while 0 denotes inactive.

Event Monitor Output
To examine event monitor output, do the following:

Write an application to read the file and redirect output using a pipe.
Use the db2evmon productivity tool. Here are a couple of examples:
db2evmon -path event-monitor-target
db2evmon -db database_name -evm event-monitor-name

Event Monitor Interface

12-24

Example:

SELECT evmonname, EVENT_MON_STATE(evmonname) state FROM syscat.eventmonitors

Output:

EVMONNAME     STATE
------------  -----
EVMON_STAT    1
LOCK_MON      0


Performance Monitoring 12-25

Use db2eva from the Control Center.
Use db2emcrt, a GUI tool that views reports.

Data element types:
Counter — number of times an activity occurs
Gauge — current value
Highest — high water marks
Information — reference information
Timestamp — time of event


12-26 Performance Monitoring

You can define an unlimited number of event monitors, but only 32 event monitors can be active at a time. You can perform the following tasks to manage event monitors:

Create an event monitor.
Start the monitor.
Flush the event monitor to write the recorded data from memory to files.
Read the output of the event monitor.
Drop the event monitor when it is no longer needed.

Event Monitoring: Steps and Authorization

12-26

To use any of the Event Monitor commands, you must have one of the following authorities:

SYSADM or DBADM


Performance Monitoring 12-27

Above is the complete syntax used to create an event monitor. Once the name has been specified, indicate which type(s) of events you want to monitor. You can use a comma to specify more than one event object.

Creating an Event Monitor: Type of Event

12-27

CREATE EVENT MONITOR monitor_name
FOR [DATABASE] [TABLES] [DEADLOCKS] [TABLE SPACES] [BUFFERPOOLS]
    [CONNECTIONS] [STATEMENTS] [TRANSACTIONS]
[WHERE event_condition]
[{WRITE TO FILE file_path | WRITE TO TABLE}]
[MAXFILES number_of_files] [MAXFILESIZE size_of_file] [BUFFERSIZE size_of_buffer]
[{APPEND | REPLACE}]
[{MANUAL START | AUTOSTART}]


12-28 Performance Monitoring

The event_condition is a filter that determines which connections cause a CONNECTION, STATEMENT, or TRANSACTION event to occur.

The following comparison options are available: APPL_ID (application ID), AUTH_ID (authorization ID), and APPL_NAME (name of application)

Here is an example of an event condition:

WHERE APPL_NAME = 'PAYROLL' AND AUTH_ID = 'JSMITH'

Creating an Event Monitor: Event Condition

12-28

CREATE EVENT MONITOR monitor_name
FOR [DATABASE] [TABLES] [DEADLOCKS] [TABLE SPACES] [BUFFERPOOLS]
    [CONNECTIONS] [STATEMENTS] [TRANSACTIONS]
[WHERE event_condition]
[{WRITE TO FILE file_path | WRITE TO TABLE}]
[MAXFILES number_of_files] [MAXFILESIZE size_of_file] [BUFFERSIZE size_of_buffer]
[{APPEND | REPLACE}]
[{MANUAL START | AUTOSTART}]


Performance Monitoring 12-29

You can choose to write event monitor information to a file, or, in Version 8.1 of DB2, you can choose to have the event monitor send streams of this information to a table.

For the WRITE TO FILE option, specify the path of the directory to where the event monitor should write the event data files. The event monitor writes out the stream of data as a series of 8 character numbered files, with the extension evt. (for example, 00000000.evt, 00000001.evt, and 00000002.evt).

If you specify the WRITE TO TABLE option, tables are created in the current database according to which events you have chosen to monitor. Event monitor tables can be identified by the _event_monitor_name string in the table name.

Creating an Event Monitor: File Path

12-29

CREATE EVENT MONITOR monitor_name
FOR [DATABASE] [TABLES] [DEADLOCKS]
    [TABLE SPACES] [BUFFERPOOLS] [CONNECTIONS] [STATEMENTS] [TRANSACTIONS]
[WHERE event_condition]
[{WRITE TO FILE file_path | WRITE TO TABLE}]
[MAXFILES number_of_files] [MAXFILESIZE size_of_file] [BUFFERSIZE size_of_buffer]
[{APPEND | REPLACE}]
[{MANUAL START | AUTOSTART}]

Page 410: System Administration

12-30 Performance Monitoring

For the MAXFILES option, specify an integer value to set a limit on the number of event monitor files that can exist for a particular event monitor at any time.

By default, there is no limit to the number of files.

Creating an Event Monitor: Maxfiles

12-30

CREATE EVENT MONITOR monitor_name
FOR [DATABASE] [TABLES] [DEADLOCKS]
    [TABLE SPACES] [BUFFERPOOLS] [CONNECTIONS] [STATEMENTS] [TRANSACTIONS]
[WHERE event_condition]
[{WRITE TO FILE file_path | WRITE TO TABLE}]
[MAXFILES number_of_files] [MAXFILESIZE size_of_file] [BUFFERSIZE size_of_buffer]
[{APPEND | REPLACE}]
[{MANUAL START | AUTOSTART}]

Page 411: System Administration

Performance Monitoring 12-31

This option specifies a limit on the size of each event monitor file (in units of 4K pages). Specify the keyword NONE to remove any restriction on the size of the file; in that case, make sure the value for MAXFILES is 1.

The default for MAXFILESIZE for UNIX is 1000 4K pages, and the default for Windows is 200 4K pages.

Creating an Event Monitor: Maxfilesize

12-31

CREATE EVENT MONITOR monitor_name
FOR [DATABASE] [TABLES] [DEADLOCKS]
    [TABLE SPACES] [BUFFERPOOLS] [CONNECTIONS] [STATEMENTS] [TRANSACTIONS]
[WHERE event_condition]
[{WRITE TO FILE file_path | WRITE TO TABLE}]
[MAXFILES number_of_files] [MAXFILESIZE size_of_file] [BUFFERSIZE size_of_buffer]
[{APPEND | REPLACE}]
[{MANUAL START | AUTOSTART}]

Page 412: System Administration

12-32 Performance Monitoring

The BUFFERSIZE option specifies the size of the event monitor buffers (in units of 4K pages).

The default is two buffers of 4 (4K) pages each.

Creating an Event Monitor: Buffersize

12-32

CREATE EVENT MONITOR monitor_name
FOR [DATABASE] [TABLES] [DEADLOCKS]
    [TABLE SPACES] [BUFFERPOOLS] [CONNECTIONS] [STATEMENTS] [TRANSACTIONS]
[WHERE event_condition]
[{WRITE TO FILE file_path | WRITE TO TABLE}]
[MAXFILES number_of_files] [MAXFILESIZE size_of_file] [BUFFERSIZE size_of_buffer]
[{APPEND | REPLACE}]
[{MANUAL START | AUTOSTART}]

Page 413: System Administration

Performance Monitoring 12-33

APPEND indicates that, if event data files already exist when the event monitor is turned on, then the event monitor appends the new event data to the existing stream of data files.

REPLACE indicates that, if event data files already exist when the event monitor is turned on, then the event monitor erases all of the event files and starts writing data to file 00000000.evt.

The default is APPEND.

Creating an Event Monitor: Append/Replace

12-33

CREATE EVENT MONITOR monitor_name
FOR [DATABASE] [TABLES] [DEADLOCKS]
    [TABLE SPACES] [BUFFERPOOLS] [CONNECTIONS] [STATEMENTS] [TRANSACTIONS]
[WHERE event_condition]
[{WRITE TO FILE file_path | WRITE TO TABLE}]
[MAXFILES number_of_files] [MAXFILESIZE size_of_file] [BUFFERSIZE size_of_buffer]
[{APPEND | REPLACE}]
[{MANUAL START | AUTOSTART}]

Page 414: System Administration

12-34 Performance Monitoring

MANUAL START indicates that the event monitor does not start automatically each time the database is started. AUTOSTART is used to start the event monitor automatically each time the database is started.

The default is MANUAL START.

Creating an Event Monitor: Manual Start/Autostart

12-34

CREATE EVENT MONITOR monitor_name
FOR [DATABASE] [TABLES] [DEADLOCKS]
    [TABLE SPACES] [BUFFERPOOLS] [CONNECTIONS] [STATEMENTS] [TRANSACTIONS]
[WHERE event_condition]
[{WRITE TO FILE file_path | WRITE TO TABLE}]
[MAXFILES number_of_files] [MAXFILESIZE size_of_file] [BUFFERSIZE size_of_buffer]
[{APPEND | REPLACE}]
[{MANUAL START | AUTOSTART}]

Page 415: System Administration

Performance Monitoring 12-35

Above are two examples showing the creation of event monitors.

Example 1 creates an event monitor called smithstaff. This event monitor collects event data for the database as well as for the SQL statements performed by the staff application owned by the jsmith authorization ID.

Example 2 creates an event monitor called stmt_evts. This event monitor collects all events for statements fired. One file is written, and there is no maximum file size. Each time the event monitor is activated, it appends the event data to the file 00000000.evt if it exists. The event monitor is started each time the database is started.

Example 3 is identical to Example 2, except that a table event monitor is created instead of an event monitor that writes to a file. The event monitor writes streams of information to tables in the database that end with _stmt_evts. These tables can later be examined using SQL statements.

Creating an Event Monitor: Example

12-35

Example 1:
CREATE EVENT MONITOR smithstaff
  FOR DATABASE, STATEMENTS
  WHERE APPL_NAME = 'staff' AND AUTH_ID = 'jsmith'
  WRITE TO FILE '/database/inst101/sample'
  MAXFILES 25 MAXFILESIZE 1024 APPEND

Example 2:
CREATE EVENT MONITOR stmt_evts FOR STATEMENTS
  WRITE TO FILE '/database/inst101/sample'
  MAXFILES 1 MAXFILESIZE NONE AUTOSTART

Example 3 (DB2 UDB Version 8):
CREATE EVENT MONITOR stmt_evts FOR STATEMENTS
  WRITE TO TABLE AUTOSTART

Page 416: System Administration

12-36 Performance Monitoring

Above are some examples of commands to start event monitoring and flush the event monitoring buffer to disk.

Event Monitor: Start/Flush

12-36

Starting an event monitor:
SET EVENT MONITOR monitor-name STATE 0 / 1

Use state 1 to activate and 0 to deactivate the monitor.

Starting event monitor stmt_evts:
db2 SET EVENT MONITOR stmt_evts STATE 1

Flushing an event monitor:
FLUSH EVENT MONITOR monitor-name BUFFER

Flushing the recorded data for event monitor stmt_evts from memory to file(s) (in directory '/database/inst##/sample'):

db2 FLUSH EVENT MONITOR stmt_evts BUFFER

Page 417: System Administration

Performance Monitoring 12-37

Two utilities are provided that allow you to read the output from an event monitor. The db2evmon utility displays results in a text-based format. The db2eva utility displays the event monitor results using a graphical format. Examples of commands that use these utilities are shown above.

Event Monitor: Reading Output

12-37

Use the db2evmon utility (text-based):
db2evmon -path directory-path
db2evmon -path /database/inst101/sample

Here, directory-path is the path and name of the directory that stores the event monitor files (.evt files)

Use the db2eva utility (graphical):
db2eva -path directory-path
db2eva -path /database/inst101/sample

Here, directory-path is the path and name of the directory that stores the event monitor files (.evt files)

Page 418: System Administration

12-38 Performance Monitoring

Above is an example of events displayed through the db2eva graphical interface.

Event Monitor: db2eva

12-38

Page 419: System Administration

Performance Monitoring 12-39

To display the statements that were executing during the monitored event, right-click on the monitored time period and choose Open as > Statements.

Event Monitor: db2eva (cont.)

12-39

Page 420: System Administration

12-40 Performance Monitoring

Above is a sample view of statements that were executing when the monitored event occurred.

Note that, in the slide panel above, you can readily determine the type of SQL statement being monitored, as well as the database operation occurring as a result of that statement's execution.

Event Monitor: db2eva (cont.)

12-40

Page 421: System Administration

Performance Monitoring 12-41

The Health Monitor is a server-side tool that constantly monitors the health of the instance, even without user interaction. If the Health Monitor finds that a defined threshold has been exceeded (for example, the available log space is not sufficient), or if it detects an abnormal state for an object (for example, an instance is down), the Health Monitor will raise an alert.

When an alert is raised two things can occur:

Alert notifications can be sent by e-mail or to a pager address, allowing you to contact whoever is responsible for a system.
Preconfigured actions can be taken. For example, a script or a task (implemented from the new Task Center) can be run.

Alerts can be monitored and configured through the Health Center. To start the Health Center, select the Health Center icon from the Command Center tool bar, select Tools > Health Center from the Command Center menu, or choose Start > Program Files > IBM DB2 > Monitoring Tools > Health Center from the Windows desktop. You can also start the Health Center by executing the db2hc command at the command line.

In the Health Center window, you can choose the type of alerts (alarm, warning, attention, or normal) to display by selecting one of the four buttons above the object panel on the right side of the window. The icons highlighted in the button indicate which alert type will be presented in the alert panel on the right side of the window. These icons are shown below.

Health Monitor and Health Center

12-41

Page 422: System Administration

12-42 Performance Monitoring

The Health Monitor gathers information about the health of the system using new interfaces that do not impose a performance penalty. It does not turn on any snapshot monitor switches to collect information. The Health Monitor is enabled by default when an instance is created; you can deactivate it by setting the HEALTH_MON database manager configuration parameter to OFF.
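For example, the monitor can be switched off (or back on) with a command such as:

db2 "UPDATE DBM CFG USING HEALTH_MON OFF"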

Alarm Warning

Attention Normal

Page 423: System Administration

Performance Monitoring 12-43

A health indicator is a system characteristic that the Health Monitor checks. The Health Monitor comes with a set of predefined thresholds for these health indicators. The Health Monitor checks the state of your system against these health-indicator thresholds when determining whether to issue an alert.

Using the Health Center, commands, or APIs, you can customize the threshold settings of the health indicators, and define who should be notified and what script or task should be run if an alert is issued. This allows you to configure the Health Monitor to retune itself when performance problems occur, or even “heal” itself when it encounters a critical problem.

To modify Health Monitor indicator settings, expand the object window to display databases, right-click on the database name, and choose Configure > Database Object Health Indicator Settings. This displays the Configure Database Object Health Indicator window, as shown above.

Health Indicator Settings

12-43

Page 424: System Administration

12-44 Performance Monitoring

Summary

12-44

You should now be able to:
List and describe performance tuning parameters
Capture and analyze snapshots
Describe the different types of Event Monitors
Analyze the output of Event Monitors
View Health Monitor alerts through the Health Center

Page 425: System Administration

Performance Monitoring 12-45

Lab Exercises

12-45

You should now complete the lab exercises for Module 12.

Page 426: System Administration

12-46 Performance Monitoring

Page 427: System Administration

Query Optimization 02-2003 13-1© 2002, 2003 International Business Machines Corporation

Query Optimization

Module 13

Page 428: System Administration

13-2 Query Optimization

Objectives

13-2

At the end of this module, you will be able to:
Explain the basic purpose of the query optimizer
Capture EXPLAIN/Visual Explain information
Analyze EXPLAIN/Visual Explain information
Use QUERYOPT to set optimization class
Explain how to minimize client/server communication

Page 429: System Administration

Query Optimization 13-3

Query Optimization

13-3

Query optimization consists of:
Query rewriting — DB2 modifies the SQL query to improve performance while still producing the same result set
Cost-based optimization — DB2 internally calculates the cost of the SQL query taking into consideration resource usage, objects to be accessed, and object properties
• The unit of cost measurement is a timeron
• The optimizer selects the most efficient access plan based on the current statistics in the catalog tables
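Because the chosen plan depends on those catalog statistics, they are usually refreshed with RUNSTATS before plans are analyzed; for example (the schema and table names are illustrative):

db2 "RUNSTATS ON TABLE inst101.employee WITH DISTRIBUTION AND INDEXES ALL"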

Page 430: System Administration

13-4 Query Optimization

Compiler Steps
When an SQL command is submitted, the following actions occur:

Parse query — The SQL compiler analyzes the SQL query to validate the syntax.
Check semantics — The SQL compiler makes sure the database objects referenced in the statement are correct. For example, the compiler checks to make sure that the data types of the columns match the actual table definition. In addition, behavioral semantics are added, such as referential integrity, constraints, triggers, and so forth.
Rewrite query — The compiler transforms the query so that it can be optimized more easily.
Pushdown analysis — This step is only relevant for federated database queries. The compiler determines if an operation can be remotely evaluated (pushed down) at a data source.
Optimize access plan — The SQL optimizer (a portion of the SQL compiler) generates many alternative execution plans and chooses the plan with the least estimated execution cost.

SQL Compiler Overview

13-4

[Slide diagram: SQL compiler flow — Parse Query, Check Semantics, Rewrite Query, Pushdown Analysis, Optimize Access Plan, Remote SQL Generation, Generate Executable Code — operating on the Query Graph Model and producing the Access Plan, the Explain Tables (viewed with Visual Explain or the db2exfmt tool), and the executable plan (described with db2expln).]

Page 431: System Administration

Query Optimization 13-5

Remote SQL generation and global optimization — The final plan selected by the DB2 optimizer consists of a set of steps that might operate on a federated or remote data source. For those operations performed by each remote data source, the remote SQL-generation step creates an efficient SQL statement based on the data source's SQL dialect.
Generate executable code — The SQL compiler creates an executable access plan for the query. Access plans are stored in the system catalog.

Page 432: System Administration

13-6 Query Optimization

Explain is a facility to capture detailed information about the access plan chosen by the SQL compiler to resolve an SQL statement. It supports both static and dynamic SQL, and it supports both text and graphical displays. All elements of SQL processing are captured, including table access, index access, joins, unions, scans, and so forth. The explain output information is stored in a set of persistent explain tables and recommendations are written to a set of advise tables.

What is Explain?

13-6

[Slide diagram: an EXPLAIN of a SELECT statement writes explain output to the explain tables and recommendations to the advise tables.]

Page 433: System Administration

Query Optimization 13-7

There are seven explain tables and two advise tables that are used to provide access plan information. The explain tables include:

EXPLAIN_ARGUMENT — unique characteristics for each individual operator
EXPLAIN_INSTANCE — main control table for explain
EXPLAIN_OBJECT — objects required by the access plan (tables, indexes, and so forth)
EXPLAIN_OPERATOR — operators needed by the access plan (table/index scans)
EXPLAIN_PREDICATE — matches predicates to operators
EXPLAIN_STATEMENT — text of the statement (original and rewritten)
EXPLAIN_STREAM — data flows within the query

The advise tables contain recommendation information and include:

ADVISE_INDEX — represents the recommended indexes
ADVISE_WORKLOAD — represents the statements that make up the workload

Query Explain Tables

13-7

When explain data is requested:
Several tables that contain the query execution plan are populated; the table names start with EXPLAIN_ (such as EXPLAIN_OBJECT)
Utilities and tools are provided with IBM DB2 to read the data from these tables and present it in a meaningful format:
• Visual Explain for a graphical view
• db2expln for static SQL statements and dynexpln for dynamic SQL statements
Access plan information is captured in seven DB2 tables
Tables are either created automatically or by executing the EXPLAIN.DDL script from the Command Line Processor
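For example, the explain tables can be created manually by running the supplied EXPLAIN.DDL script against the database; the script is typically found under the instance owner's sqllib/misc directory (the path shown here is illustrative):

db2 CONNECT TO sample
db2 -tf /home/inst101/sqllib/misc/EXPLAIN.DDL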

Page 434: System Administration

13-8 Query Optimization

There are four general methods of populating the explain tables:

You can explicitly request it through a command line interface using the EXPLAIN statement.
You can set either the EXPLAIN MODE or EXPLAIN SNAPSHOT special register for the session.
You can use the options provided on the PREP or BIND commands when using embedded SQL. For example, when you specify the EXPLAIN or EXPLSNAP parameters, information is gathered during the bind process.
You can use the GUI tools, such as the Command Center, Control Center, and db2vexp, to populate the tables and display the formatted plan in one step.

Capturing Explain Data

13-8

Methods for populating the explain tables:
Explicitly request it using the EXPLAIN statement
Set a special register for the session
Specify the EXPLAIN and EXPLSNAP parameters when doing a PREP or a BIND
Use the GUI tools to both populate the tables and display the formatted plan in one step

Page 435: System Administration

Query Optimization 13-9

The syntax for the EXPLAIN statement is shown above. Here is a description of the syntax elements:

FOR | WITH SNAPSHOT — The FOR clause indicates that only an explain snapshot is taken and placed into the SNAPSHOT column of the EXPLAIN_STATEMENT table. The WITH clause indicates that, in addition to the regular explain information, an explain snapshot is taken. The explain snapshot information is intended for use with Visual Explain.
SET queryno=integer — This option associates an integer, using the QUERYNO column in the EXPLAIN_STATEMENT table, with an explainable SQL statement. The integer supplied must be a positive value. For all dynamic SQL statements the default is 1, and for any static EXPLAIN statement, the default value is the statement number assigned by the precompiler.
SET querytag=string — This option associates a string, using the QUERYTAG column in the EXPLAIN_STATEMENT table, with an explainable SQL statement. The string can be up to 20 bytes.
FOR sql_statement — This clause specifies the SQL statement to be explained. This statement can be any valid DELETE, INSERT, SELECT, SELECT INTO, UPDATE, VALUES, or VALUES INTO statement.

Capturing Explain Data: Explain Statement

13-9

EXPLAIN ALL {FOR | WITH SNAPSHOT}
  [SET queryno = integer]
  [SET querytag = string]
  FOR sql_statement

Page 436: System Administration

13-10 Query Optimization

In order to have the authorization to execute the EXPLAIN statement, you must have INSERT privilege on the explain tables and the authority or privilege required to execute the SQL statement.

Here are some examples of EXPLAIN commands:

db2 EXPLAIN ALL FOR SELECT empno FROM employee

db2 EXPLAIN ALL WITH SNAPSHOT SET queryno = 13 SET querytag= 'TEST13' FOR SELECT * FROM employee

db2 EXPLAIN ALL FOR SNAPSHOT SET queryno = 13 SET querytag= 'TEST13' FOR SELECT * FROM employee

Page 437: System Administration

Query Optimization 13-11

The CURRENT EXPLAIN MODE and CURRENT EXPLAIN SNAPSHOT special registers hold a VARCHAR(254) value which controls the behavior of the explain facility with respect to eligible dynamic SQL statements. The CURRENT EXPLAIN MODE facility generates and inserts explain information into the explain tables. The CURRENT EXPLAIN SNAPSHOT generates explain and snapshot information.

Above is the syntax for the SET CURRENT EXPLAIN command. Here is an explanation of the command options:

NO — This option disables the explain facility, and no explain information is captured. This is the default value.
YES — This option enables the explain facility and causes explain information to be inserted into the explain tables for eligible dynamic SQL statements.
EXPLAIN — This option enables the explain facility and causes explain information to be captured for any eligible dynamic SQL statement that is prepared. However, dynamic statements are not executed.

Capture Explain Data: Special Register

13-11

SET CURRENT {EXPLAIN MODE | EXPLAIN SNAPSHOT}
  {NO | YES | EXPLAIN | RECOMMEND INDEXES | EVALUATE INDEXES}

Page 438: System Administration

13-12 Query Optimization

RECOMMEND INDEXES — This option enables the SQL compiler to recommend indexes. All queries executed in this explain mode populate the ADVISE_INDEX table with recommended indexes. In addition, the explain information is captured in the explain tables to reveal how the recommended indexes are used, but the statements are neither compiled nor executed.
EVALUATE INDEXES — This option enables the SQL compiler to evaluate the indexes that are stored in the ADVISE_INDEX table.

No special authorization is required to set these register values.
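A sketch of using the special register from the CLP for a single dynamic statement (the query itself is illustrative); with the EXPLAIN setting the statement is prepared and its plan is written to the explain tables, but the statement is not executed:

db2 "SET CURRENT EXPLAIN MODE EXPLAIN"
db2 "SELECT * FROM employee WHERE workdept = 'D11'"
db2 "SET CURRENT EXPLAIN MODE NO"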

Page 439: System Administration

Query Optimization 13-13

Capturing Explain Data: PREP
Precompilation (or PREPing) is the conversion of embedded SQL code into third-generation-language code. Precompilation of an SQL command is done using the PREP command. The syntax is:

PREP embedded_sql_filename BINDFILE [options]
  EXPLAIN {NO | YES | ALL}
  EXPLSNAP {NO | YES | ALL}

The options available for the statement are:

EXPLAIN — This option specifies the behavior of the explain information capture. Snapshot information is not captured.

NO — Explain information is not captured.
YES — Explain tables are populated with information about the chosen access plan at prep or bind time for static statements.
ALL — Same as the above option. Additionally, explain information is gathered for eligible dynamic SQL statements at run time, even if the CURRENT EXPLAIN MODE register is set to NO.

Prep-Bind Overview

13-13

Source File — Static SQL

Precompilation (db2 PREP) This creates a bind file

Binder (db2 BIND)

Bind File

Database Manager Package

Page 440: System Administration

13-14 Query Optimization

EXPLSNAP — This option specifies the behavior of the explain information capture, including the snapshot information.

NO — An explain snapshot is not captured.
YES — An explain snapshot for each eligible static SQL statement is placed in the explain tables.
ALL — Same as the above option. Additionally, explain information is gathered for eligible dynamic SQL statements at run time, even if the CURRENT EXPLAIN SNAPSHOT register is set to NO.

Here is an example of a PREP command:

db2 PREP '/usr/lpp/db2_07_01/samples/c/static.sqc' BINDFILE EXPLAIN YES EXPLSNAP ALL

Capturing Explain Data: BIND
Binding is the creation of a package in the database server. The syntax for the BIND command is shown here:

BIND bind_filename [options]
  EXPLAIN {NO | YES | ALL}
  EXPLSNAP {NO | YES | ALL}

EXPLAIN — This option specifies the behavior of the explain information capture. Snapshot information is not captured.

NO — Explain information is not captured.
YES — Explain tables are populated with information about the chosen access plan at prep or bind time for static statements.
ALL — Same as the above option. Additionally, explain information is gathered for eligible dynamic SQL statements at run time, even if the CURRENT EXPLAIN MODE register is set to NO.

EXPLSNAP — This option specifies the behavior of the explain information capture, including the snapshot information.

NO — An explain snapshot is not captured.
YES — An explain snapshot for each eligible static SQL statement is placed in the explain tables.
ALL — Same as the above option. Additionally, explain information is gathered for eligible dynamic SQL statements at run time, even if the CURRENT EXPLAIN SNAPSHOT register is set to NO.

Here is an example of a BIND command:

db2 BIND '/usr/lpp/db2_07_01/samples/c/static.bnd' EXPLAIN YES EXPLSNAP YES

Page 441: System Administration

Query Optimization 13-15

Visual Explain — This tool allows for the analysis of the access plan and optimizer information from the explain tables through a graphical interface. It is invoked from the Control Center.
db2exfmt — This command displays the contents of the explain tables in a predefined format.
db2expln — This command is for static SQL statements. It shows the access plan information from the system catalog, and contains no optimizer information. This command is invoked through the command line.
dynexpln — This command is for dynamic SQL statements. It creates a static package for the statements and then uses the db2expln tool to describe them. It is invoked through the command line.

DB2 SQL Explain Tools

13-15

Visual Explain

db2exfmt

db2expln

dynexpln

Page 442: System Administration

13-16 Query Optimization

The db2expln command describes the access plan selection for static SQL statements in packages that are stored in the DB2 system catalogs. Given a database name, package name, package creator, and section number, the tool interprets and describes the information in these catalogs. To use this command, you must have SELECT privilege on the system catalog views and EXECUTE authority for the db2expln package.

The options available with this command are:

-c creator — This is the user ID of the creator. You can specify the creator name using the pattern matching characters, percent sign (%) and underscore (_), used in a LIKE predicate.
-d database — This is the name of the database that contains the package to be explained.
-g — This option directs db2expln to show the optimizer plan graphs. Each section is examined, and the original optimizer plan graph, as presented by the Visual Explain tool, is constructed.
-h — This option directs db2expln to display the help information about the input parameters.
-i — This option directs db2expln to display the operator IDs in the explained plan.

View Explain Data: db2expln

13-16

db2expln -c creator -d database -g -h -i -l {-o output_file | -t} -p package -s section -u user_id_password

Page 443: System Administration

Query Optimization 13-17

-l — This option allows you to use either lower or mixed case in the package name. If the -l option is not specified, the package name is converted to uppercase.
-o output_file | -t — This option specifies the name of the file where db2expln writes the results. If -o is specified without a file name, a prompt for a file name appears. The default file name is db2expln.out. When -t is used, output is directed to the terminal.
-p package — This option specifies the name of the package that is explained. If this option is not specified, a prompt for a name appears. You can specify the package name using the pattern matching characters, percent sign (%) and underscore (_), that are used in a LIKE predicate.
-s section — This option specifies the section number to explain within the package. The number zero (0) is specified if all sections in the package are explained.
-u user_id_password — This option specifies the user ID and password for db2expln to use when connecting to a database.

Here is an example that uses the db2expln command:

db2expln -d SAMPLE -p p% -c % -s 0 -t

This statement explains all packages starting with p in the sample database and directs the output to the terminal.

Page 444: System Administration

13-18 Query Optimization

Visual Explain is a GUI (graphical user interface) utility that gives the database administrator or application developer the ability to examine the access plan constructed by the optimizer.

Visual Explain:

Can only be used with access plans that are explained using the snapshot option
Can be used to analyze previously generated explain snapshots or to gather explain data and explain dynamic SQL statements
Creates the explain tables if they do not exist
Is invoked from the Command Center or Control Center, as shown above
Is displayed in terms of graphical objects called nodes

An operator indicates an action that is performed on a group of data.
An operand shows the database objects where an operator action takes place. In other words, an operand is an object that the operators act upon.

View Explain Data: Visual Explain

13-18

Page 445: System Administration

Query Optimization 13-19

The slide above contains an example of the graphical output displayed by the Visual Explain tool. To view more detail about any of the operator nodes, right-click on the node and select Show Details to view this detailed information.

Visual Explain: Graphical Output

13-19

Operator Nodes: 1) Filter  2) Sort  3) Join  4) Table Scan  5) Index Scan

Operand Nodes: 1) Tables  2) Index

Right-click for detail

Page 446: System Administration

13-20 Query Optimization

An example of an Operator Details window for a table scan after a sort on the employee table, is shown above.

Visual Explain: Component Details

13-20

Page 447: System Administration

Query Optimization 13-21

The access plan graph is a very useful analysis tool that can be used to:

Design application programs to make the best use of available indexes
Design databases that make the best use of available disk resources
Explain how two tables are joined, including the join method, the order in which the tables are joined, whether sorting is required and, if so, the type of sorting
Determine ways of improving the performance of SQL statements, for example, by creating a new index
View the statistics that were used at the time of optimization, then compare these statistics to the current catalog statistics to determine whether rebinding the package might improve performance. It also helps determine whether collecting statistics might improve performance.
Determine whether or not an index was used to access a table. If an index was not used, the visual explain function helps determine which columns could be included in an index to help improve query performance.
View the effects of tuning by comparing the before and after versions of the access plan graph for a query.
Obtain information about each operation in the access plan, including the total estimated cost and the number of rows retrieved.

Visual Explain: Uses

13-21

The information obtained from the access plan graph can help to:
Design application programs
Design databases
See how tables are joined
Determine how to improve performance
View the statistics used at the time of optimization
Determine if indexes were used
View the effects of tuning
Obtain information about each query plan operation

Page 448: System Administration

13-22 Query Optimization

The optimizer has a throttle to control how much optimization is done during the generation of an access plan. This throttle is managed by setting the DB CFG parameter, DFT_QUERYOPT. This parameter can be set to any integer from 0 to 9 and the default value is 5. The syntax for setting this variable using the UPDATE DATABASE CFG command and an example are shown above.

In general, 5 is a good DFT_QUERYOPT setting for OLTP applications and/or static SQL. A value of 3 or less is appropriate for OLAP, data warehousing, and/or dynamic SQL. The updated parameter value takes effect only after restarting the database.
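The optimization class can also be changed for the current session instead of at the database level; a sketch:

db2 "SET CURRENT QUERY OPTIMIZATION 3"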

Explain: Setting the Optimization Level

13-22

Syntax:

UPDATE DATABASE CFG FOR database_name
  USING parameter integer

Example:

db2 UPDATE DATABASE CFG FOR sample USING dft_queryopt 3

Page 449: System Administration

Query Optimization 13-23

Record blocking is a caching technique used to send a group of records across the network to the client at one time. Records are blocked by DB2 according to the cursor type and the BLOCKING parameter setting in the BIND command. The BLOCKING parameter values are shown above.

For local applications, the ASLHEAPSZ database manager configuration parameter is used to allocate the cache for row blocking.

For remote applications, the RQRIOBLK database configuration parameter on the client workstation is used to allocate the cache for row blocking.
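For example, a package can be bound with blocking enabled for all read-only and ambiguous cursors (the bind file name is illustrative):

db2 "BIND static.bnd BLOCKING ALL"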

Minimize Client-Server Communication

13-23

To group records, use the BLOCKING parameter values for the BIND command:

UNAMBIG — Blocking occurs for read-only cursors and cursors not specified as FOR UPDATE OF (ambiguous cursors are treated as updateable)
ALL — Blocking occurs for read-only cursors and cursors not specified as FOR UPDATE OF (ambiguous cursors are treated as read-only)
NO — Blocking does not occur for any cursors (ambiguous cursors are treated as read-only)

Page 450: System Administration

13-24 Query Optimization

Summary

13-24

You should now be able to:
Explain the basic purpose of the query optimizer
Capture EXPLAIN/Visual Explain information
Analyze EXPLAIN/Visual Explain information
Use QUERYOPT to set optimization class
Explain how to minimize client/server communication

Page 451: System Administration

Query Optimization 13-25

Lab Exercises

13-25

You should now complete the lab exercise for Module 13.

Page 452: System Administration

13-26 Query Optimization

Page 453: System Administration

Problem Determination 02-2003 14-1© 2002, 2003 International Business Machines Corporation

Problem Determination

Module 14

Page 454: System Administration

14-2 Problem Determination

Objectives

14-2

At the end of this module, you will be able to:
Understand the fundamentals of first failure data capture (FFDC)
Interpret the output in a db2diag.log file

Page 455: System Administration

Problem Determination 14-3

In order to solve the problem, you must understand the nature of the problem, and determine the cause of the conditions.

Important points to consider:

Is the problem reproducible? If so, how is it reproducible (by the clock, a certain application)?
Was it a one-time occurrence? What were the operating conditions at the time?

To Solve a Problem

14-3

Understand the nature of the problem
Determine the cause of the conditions

Is the problem reproducible?
Was it a one-time occurrence?

Page 456: System Administration

14-4 Problem Determination

A comprehensive problem description should include:

ALL error codes/error conditions
ALWAYS include the reason code, if applicable
The actions which preceded the error
A reproducible scenario, if possible
System and application log files, if available

Describe the Problem

14-4

Problem description should include:
ALL error codes/error conditions
ALWAYS include the reason code, if applicable
The actions which preceded the error
A reproducible scenario, if possible

Page 457: System Administration

Problem Determination 14-5

There are a number of problem types, including:

Unexpected messages/SQL codes
Abends (abnormal endings) of application or DBM
Loops and hangs
Traps
Database/data corruption
Data loss
Incorrect or inconsistent documents
Incorrect output
Install failure
Performance
Usability

Also consider system problems, like operating system software faults and hardware failures.

Problem Types

14-5

Unexpected messages/SQL codes
Abends of application or DBM
Loops and hangs
Traps
Database/data corruption
Data loss
Incorrect or inconsistent documents
Incorrect output
Install failure
Performance
Usability

Page 458: System Administration

14-6 Problem Determination

To troubleshoot the problem correctly, you need to collect the information required to analyze a DB2 problem and determine the solution.

Use the db2diag.log file to view captured diagnostic information.

If reproducible, setting the DIAGLEVEL to 4 and recapturing the information is recommended.
Any dump files mentioned in db2diag.log (pid/tid.node).
Any traceback/trap files in the DIAGPATH (tpid/tid.node or *.trp). You must check for them manually—know the format!
With WE or EE, send all files in the DB2DUMP directory.

To reduce the number of viewed files, clean up this directory on a regular basis.
Also copy and truncate the db2diag.log file.

Required Diagnostic Data

14-6

What information is required to analyze a DB2 problem and determine the solution?

SQL Code/State
Always include the Reason Code, if applicable
Provide the system error code if not an SQL error
A GOOD problem description
Include the SQL code and any associated reason code
Description of the actions preceding the error
Database configuration
Database Manager configuration

Page 459: System Administration

Problem Determination 14-7

Your checklist for the required data for proper error diagnosis includes:

The SQL code
Reason codes
System error codes
A GOOD problem description
A description of the actions preceding the error
Database code level
DBM/DB configuration data
db2diag.log
Time of the error
Any dump file listed in the db2diag.log
Any trap file in the DIAGPATH
Operating system software level and hardware model

Required Data Checklist

14-7

The SQL code (reason codes, system error codes)
A GOOD problem description
A description of the actions preceding the error
Database code level
DBM/DB configuration
db2diag.log
Time of the error
Any dump file listed in the db2diag.log
Any trap file in the DIAGPATH
Operating system software level, hardware model

Page 460: System Administration

14-8 Problem Determination

You may find additional useful information in the following:

DB2 trace
SYSLOG
DB2 event monitors / database snapshots
Reproducible scenario and data
For UNIX:
db2profile
.profile
.rhosts
services file

Additional Data Available

14-8

DB2 trace
SYSLOG
DB2 event monitors / database snapshots
Reproducible scenario and data
For UNIX:
db2profile
.profile
.rhosts
services file

Page 461: System Administration

Problem Determination 14-9

The db2diag.log file provides first failure data capture (FFDC):

Information is gathered as the error is occurring.
The amount and type of information gathered is controlled by a configuration parameter.
Point-of-failure information is also captured.
This reduces the need to reproduce the error.

The FFDC collects the data required to analyze the problem based on:

What the error is
Where the error is being encountered
Which partition is encountering the error

The information in the db2diag.log is written in an easy-to-read format.

The DIAGLEVEL can be set to:

0 - No logging
1 - Severe errors
2 - Severe + non-severe errors
3 - Severe, non-severe, and warning
4 - Severe, non-severe, warning, and informational
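For example, the level can be raised with:

db2 "UPDATE DBM CFG USING DIAGLEVEL 4"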

The db2diag.log File

14-9

Provides first failure data capture (FFDC)
Collects the data required to analyze the problem
Information is written in an easy-to-read format
DIAGLEVEL can be set—default is level 3

Page 462: System Administration

14-10 Problem Determination

To get the most useful diagnostic information, use DIAGLEVEL 4 whenever possible.

Always use DIAGLEVEL 4 during:

Initial install/configuration time
Times of configuration changes
Times when experiencing errors

Suggestion

14-10

Use DIAGLEVEL 4 whenever possible
Always use DIAGLEVEL 4 during the following times:

During initial install/configuration time
During times of configuration changes
During times when experiencing errors

Page 463: System Administration

Problem Determination 14-11

When using the db2diag.log tool, DIAGLEVEL 4 logs more information than lower levels. This causes DB2 to run a little slower, but only during the following times:

During an error condition.
During db2start processing.
During an initial connect to a database.

Therefore, you must balance out the extra data provided with the decreased response time to determine the best setting for the environment.

DIAGLEVEL 4 Considerations

14-11

DIAGLEVEL 4 logs more information than lower levels
This causes DB2 to run a little slower, but only during the following times:

During an error condition
During db2start processing
During an initial connect to a database

Balance out the extra data provided with the decreased response time to determine the best setting for the environment

Be careful when using DIAGLEVEL 4. Do not set it at that level during normal daily activities; use it only as a troubleshooting aid. Remember, db2diag.log grows in size as it is used, so it is a good idea to copy it to a safe location and truncate the original periodically.

Warning!

Page 464: System Administration

14-12 Problem Determination

Errors captured in the db2diag.log file include the following information:

Time/date information
Instance name
Partition number (this is 0 (zero) for non-partitioned objects)
Process ID and thread ID in Windows NT/2000/XP
Application ID
Component identifier
Function identifier
Unique error identifier (probe ID)
Database name
Error description and/or error code

This file is located in the folders or directories shown above, depending on which operating system you are using.

Location of the db2diag.log File

14-12

The db2diag.log file placement is:
Specified by the DIAGPATH database manager configuration parameter
• Default is \Program Files\IBM\SQLLIB\DB2 in Windows NT/2000/XP
• Default is $HOME/SQLLIB/db2dump in UNIX
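The location can be changed by updating the DIAGPATH parameter; for example (the directory is illustrative):

db2 "UPDATE DBM CFG USING DIAGPATH /database/inst101/dump"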

Page 465: System Administration

Problem Determination 14-13

In the example above, there is no error code because it is simply an informational message. From the application ID, observe that this is a local connection from another session on the same machine where the instance is running.

db2diag.log Information Example

14-13

2000-03-23-14.59.01.303000   Instance:DB2   Node:000
PID:147(db2syscs.exe)   TID:203   Appid:*LOCAL.DB2.980323195820
buffer_pool_services   sqlbStartPools   Probe:0   Database:SAMPLE

Starting the database.

Page 466: System Administration

14-14 Problem Determination

From the application ID, observe this is a remote connection from another machine.

We can also see the TCP/IP address of the client: the hexadecimal value 09.15.10.6D in the application ID converts to 9.21.16.109 (hex 09 = 9, 15 = 21, 10 = 16, 6D = 109).

This allows the easy linking of client side calls to server side actions.

db2diag.log Example: Starting the Database

14-14

This connection is a remote connection from another machine

2000-03-23-15.04.06.397000   Instance:DB2   Node:000
PID:147(db2syscs.exe)   TID:203   Appid:0915106D.0805.980323200328
buffer_pool_services   sqlbStartPools   Probe:0   Database:SAMPLE

The TCP/IP address is captured in hexadecimal format. You will need to convert it to decimal to recognize it.

Note

Page 467: System Administration

Problem Determination 14-15

In most cases, the db2diag.log interprets error codes it receives into text. In the above example, a bad container path error occurred as a result of specifying a path that does not exist when trying to add a container.

You can find additional error information in several places. Start with the IBM DB2 UDB Message Reference. Do a search for your error message. For example, if you look up bad container path, you will see the error message:

SQL0298N  Bad container path.
Explanation:  The container path violates one of the following requirements:
...

Examine the list of violations that follow to help identify the cause of the error.

db2diag.log: Finding Error Information

14-15

2003-02-14-11.38.10.691002   Instance:DB2   Node:000
PID:1072(db2syscs.exe)   TID:2056   Appid:*LOCAL.DB2.010504173542
buffer_pool_services   sqlbSMSAcquireContainer   Probe:816   Database:SAMPLE

User error: bad container path: c:\db2\spaces\ts_lab3:clp

Page 468: System Administration

14-16 Problem Determination

The return codes returned in the db2diag.log file are sometimes internal DB2 return codes. They are in hexadecimal and can be in one of two forms:

FFFF1111 — Must be in this form to be used
1111FFFF — If it is in this form, you need to convert by byte-reversing the value

Looking Up Internal Codes

14-16

Generally, the return codes returned in the db2diag.log file are internal DB2 return codes.
They are in hexadecimal and must be in the form FFFFxxxx to be used.

Page 469: System Administration

Problem Determination 14-17

Integers are stored byte reversed on Intel platforms, so DB2 can have return codes in the form xxxxFFFF. You need to convert them so that their form is FFFFxxxx. Below is a method of reversing the bytes.

To convert, reverse the order of the four bytes:

Original:                1234ABCD  (byte pairs 12 34 AB CD)
Byte reversal produces:  CDAB3412  (byte pairs CD AB 34 12)

Once the return code is in the FFFFxxxx form, the last four hexadecimal digits (xxxx) are the value to look up.

Byte Reversal

14-17

DB2 can have return codes in the form xxxxFFFF
Integers are stored byte-reversed (byte-swapped) on Intel platforms

To convert, do the following:
Original: 1234ABCD
Converts to: CDAB3412

Page 470: System Administration

14-18 Problem Determination

Check to see if the return code is an SQL Code.

If it is an SQL code, look up the error information for that SQL code: db2 ? SQLxxxx or db2 "? SQLxxxx"
If it is not an SQL code, look up its meaning in the Troubleshooting Guide.

Once the return code is in the form FFFFxxxx:

Look up the last 4 digits of the return code in the Troubleshooting Guide.
Get the error description from the file.

In this case we look up 8139 and find:

The container is already being used.

Looking Up Internal Return Codes

14-18

Check to see if the return code is an SQL code
If needed, convert it from hexadecimal to decimal

Once the return code is in the form FFFFxxxx:
Look up the last 4 digits of the return code in the Troubleshooting Guide

In rare cases the code is neither an SQL code nor can it be found in the DB2 documentation. In this case, the code is only used internally. Contact DB2 support to confirm.

Note

Page 471: System Administration

Problem Determination 14-19

This is an operation on an SMS container in the database SAMP.

In this case, the message states the container listed in the message has an error. The error can be found by looking up the error code, F611, in the Troubleshooting Guide.

The F611 error message in the Troubleshooting Guide is:

F611 -902 17 Invalid path

Check the SQL Code:

db2 ? SQL0902

Error output:

SQL0902C  A system error (reason code = "<reason-code>") occurred. Subsequent SQL statements cannot be processed.
Explanation:  A system error occurred.
User Response:  Record the message number (SQLCODE) and reason code in the message.
sqlcode: -902
sqlstate: 58005

The SQL code is -902, with a reason code of 17.

db2diag.log Example: Container Error

14-19

This is an operation on an SMS container, and returned the F611 error

1999-10-27-12.27.59.376000   Instance:DB2   Node:000
PID:370(db2syscs.exe)   TID:361   Appid:*LOCAL.DB2.991027162758
buffer_pool_services   sqlbSMSDoContainerOp   Probe:815   Database:SAMP

Error checking container 0 (D:\DB2\NODE0000\SQL00001\SQLT0002.0) for tbsp 2. Rc = FFFFF611

Page 472: System Administration

14-20 Problem Determination

In this case, the return code is F616. You can look up this error in the Troubleshooting Guide as previously discussed.

The F616 error message in the Troubleshooting Guide is:

F616 -902 22 File sharing error

Check the SQL Code:

db2 ? SQL0902

Error output:

SQL0902C  A system error (reason code = "<reason-code>") occurred. Subsequent SQL statements cannot be processed.
Explanation:  A system error occurred.
User Response:  Record the message number (SQLCODE) and reason code in the message.
sqlcode: -902
sqlstate: 58005

The SQL code is -902, but this one has a reason code of 22.

db2diag.log Example: Sharing Violation

14-20

Below is a file sharing error, F616:

2000-04-27-15.28.17.879000   Instance:DB2   Node:000
PID:280(db2syscs.exe)   TID:342   Appid:none
buffer_pool_services   sqlbDeleteDir   Probe:860

DIA3819C A file sharing violation has occurred, filename was "". ZRC=FFFFF616

Page 473: System Administration

Problem Determination 14-21

The directory indicated above had a sharing violation when DB2 attempted to remove the directory. DB2 cleaned up what it could and dropped the database. However, the DBA must manually remove the above directory.

db2diag.log Example: Manual Cleanup

14-21

DB2 could not remove the directory due to a sharing violation. This requires manual cleanup of the directory

2000-04-27-15.28.17.909000   Instance:DB2   Node:000
PID:280(db2syscs.exe)   TID:342   Appid:none
buffer_pool_services   sqlbDeleteDir   Probe:860

Error deleting directory D:\DB2\NODE0000\SQL00001. Manual cleanup may be required

Page 474: System Administration

14-22 Problem Determination

Connecting to the database sample.

db2diag.log Example: Database Connection

14-22

This is an example of connecting to the database SAMPLE

1998-03-26-09.45.05.111000   Instance:DB2   Node:000
PID:186(db2syscs.exe)   TID:190   Appid:*LOCAL.DB2.980326144448
buffer_pool_services   sqlbStartPools   Probe:0   Database:SAMPLE

Starting the database.

Page 475: System Administration

Problem Determination 14-23

The function name does give a good description of what it is doing. You can get more information from DB2 with the following command, where 5 is the table space number reported in the message:

db2 "LIST TABLESPACE CONTAINERS FOR 5 SHOW DETAIL"

Which Container?

14-23

Find which container is causing the error

1998-03-26-09.47.08.348000   Instance:DB2   Node:000
PID:186(db2syscs.exe)   TID:190   Appid:*LOCAL.DB2.980326144448
buffer_pool_services   sqlbDMSAcquireContainer   Probe:869   Database:SAMPLE

Error acquiring container 0 (d:\tt) for tbsp 5. RC = FFFF8139

In this message, (d:\tt) is the container information, tbsp 5 is the table space number, and FFFF8139 is the return code.

Page 476: System Administration

14-24 Problem Determination

Error Explanation

14-24

The container error shown earlier was caused by trying to execute this statement:

CREATE TABLESPACE ... USING (FILE 'd:\tt' 1000)

The tag below indicates the file is in use and assigned to table space 2

The container is already in use
DB2 does not currently fail if the file is not DB2 related
The container file must be, or have been, a DB2 container

Page 477: System Administration

Problem Determination 14-25

The previous container error can happen for a number of reasons, including:

In UNIX, the file systems were mounted incorrectly
The container cannot be found

An old table space was dropped but the container was not deleted
The old container remains with a DB2 tag in it, preventing its reuse
Especially true with raw device containers

A drive was restored on the system without a database restore

The old file structure (hence containers) is restored, with the old DB2 tag still in it.

Error Reasons

14-25

The previous container error can happen for a number of reasons:
In UNIX, the file systems were mounted incorrectly
An old table space was dropped but the container was not deleted
Especially true with raw device containers
A drive was restored on the system without a database restore

Page 478: System Administration

14-26 Problem Determination

To remedy this container problem:

Ensure the file systems are mounted correctly so that container addressing is proper
If certain the container is NOT in use:

Use the db2untag tool to remove the tag, or
Delete it manually

The best way is to untag the file using db2untag
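A sketch of removing the tag from a file container that is known not to be in use (the path is illustrative; this assumes the -f filename form of the db2untag command):

db2untag -f /database/old_containers/ts_data.dat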

Error Resolution

14-26

To remedy this container error problem:
Ensure the file systems are mounted correctly
If certain the container is NOT in use:
• Use the db2untag tool to remove the tag
• Delete it manually
The best way is to untag the file using db2untag

Page 479: System Administration

Problem Determination 14-27

Summary

14-27

You should now be able to:
Understand the fundamentals of first failure data capture (FFDC)
Interpret the output in a db2diag.log file

Page 480: System Administration

14-28 Problem Determination

Lab Exercises

14-28

You should now complete the lab exercises for Module 14.

Page 481: System Administration

Security 02-2003 15-1© 2002, 2003 International Business Machines Corporation

Security

Module 15

Page 482: System Administration

15-2 Security

Objectives

15-2

At the end of this module, you will be able to:
Understand methods of authentication
Describe authorities and privileges
Understand the DB2 auditing features
Explain the process of encrypting and decrypting data

Page 483: System Administration

Security 15-3

There are three levels of security that control access to a DB2 system:

Instance level
Database level
Database object level

All access to the instance is managed by a security facility external to DB2. The security facility is part of the operating system or a separate product. Database manager security parameters, administrative authorities and user privileges are used to control access to databases and data objects.

Security

15-3

Levels of security that control access to a DB2 system:
Instance level
Database level
Database object level

All access to the instance is managed by a security facility external to DB2; the security facility is part of the operating system or is a separate product
Access to databases and data objects is controlled by the instance

Page 484: System Administration

15-4 Security

Authentication is used to verify the user's identity. DB2 passes all user IDs and passwords to the operating system or external security facility for verification.

You must set the authentication parameter at both the DB2 server and client to control where authentication takes place. At the DB2 server, the authentication type is defined in the database manager configuration file (DBM CFG). At the DB2 client, the authentication type is specified when cataloging a database.

Authentication types available at the DB2 server include:

SERVER, SERVER_ENCRYPT
CLIENT
KERBEROS, KRB_SERVER_ENCRYPT

Authentication types available at the DB2 client include:

SERVER, SERVER_ENCRYPT
CLIENT
DCS
KERBEROS

Gateway authentication is no longer permitted.

Authentication

15-4

Authentication verifies the user's identity
DB2 passes all user IDs and passwords to the operating system or external security facility for verification
At the DB2 server, the authentication type is defined in the DBM CFG
At the DB2 client, the authentication type is specified when cataloging a database

Page 485: System Administration

Security 15-5

When authentication type is set to SERVER:

Authentication occurs at the server
User ID and password are sent to the server for validation
User ID and password flow over the network—can be encrypted
User is required to re-enter the user name and password for connecting to a remote DB2 server

Setting the Authentication Type
At the DB2 server, you specify the authentication type in the DBM CFG. Here is an example of the command sequence used to set the authentication type:

db2 "GET DBM CFG"
db2 "UPDATE DBM CFG USING AUTHENTICATION CLIENT"
db2stop
db2start

At the DB2 client, you specify the authentication type when cataloging a database. For example:

db2 "LIST DATABASE DIRECTORY"db2 "CATALOG DATABASE sample AT NODE mynode AUTHENTICATION SERVER"

Authentication Type: Server

15-5

Page 486: System Administration

15-6 Security

When authentication type is set to DCS:

User is validated at the DRDA server—host system.
User ID and password flow over the network—can be encrypted.
User is required to re-enter the user name and password for connecting to a remote DB2 server.

Authentication Type: DCS

15-6

Page 487: System Administration

Security 15-7

Encrypted passwords are used when authentication type is set to SERVER_ENCRYPT or DCS_ENCRYPT.

If AUTHENTICATION is set to SERVER_ENCRYPT:
Same as SERVER in terms of authentication location
User ID and encrypted password are sent for validation

If AUTHENTICATION is set to DCS_ENCRYPT:
Same as DCS in terms of authentication location
User ID and encrypted password are sent for validation

Encrypted Password

15-7

Passwords are encrypted when:
AUTHENTICATION=SERVER_ENCRYPT
AUTHENTICATION=DCS_ENCRYPT

Page 488: System Administration

15-8 Security

KERBEROS security protocol performs authentication as a third-party authentication service by using conventional cryptography to create a shared secret key. This key becomes a user's credential and is used to verify the identity of users during all occasions when local or network services are requested.

Setting the authentication type to KERBEROS eliminates the need to pass the user name and password across the network as clear text. It also enables the use of a single sign-on to a remote DB2 server.

Authentication Type: KERBEROS

15-8

KERBEROS security protocol performs authentication as a third-party authentication service

Eliminates the need to pass the user ID and password across the network as clear text
Enables the use of a single sign-on to a remote DB2 server

Page 489: System Administration

Security 15-9

When the authentication type is set to KRB_SERVER_ENCRYPT, SERVER_ENCRYPT and KERBEROS authentications are used by clients accessing the same DB2 server instance.

Authentication Type: KRB_SERVER_ENCRYPT

15-9

SERVER_ENCRYPT and KERBEROS authentication are used by clients accessing the same DB2 server instance

Client specification    Server specification      Client/server resolution
KERBEROS                KRB_SERVER_ENCRYPT        KERBEROS
Any other setting       KRB_SERVER_ENCRYPT        SERVER_ENCRYPT

Page 490: System Administration

15-10 Security

When the authentication type is set to CLIENT, authentication occurs at the client and the password is NOT sent to the server for validation unless CLIENT authentication with SERVER validation is obtained. CLIENT authentication also enables single-point logon.

Be careful in insecure environments. Windows 9x, Windows 3.1, and Macintosh, for example, do not have a reliable security facility. They connect to the server as an administrator without any authentication unless TRUST_ALLCLNTS is set to NO on the server.

Authentication Type: CLIENT

15-10

Page 491: System Administration

Security 15-11

Authentication parameters are used to decide if all clients are trusted.

If TRUST_ALLCLNTS is set to YES, then the server trusts all clients, including trusted, non-trusted, and host clients. Authentication takes place at the client (with one exception).
If TRUST_ALLCLNTS is set to NO, then all untrusted clients are authenticated at the server. Users must provide a user ID and password.
If TRUST_ALLCLNTS is set to DRDAONLY, then only host clients are allowed to authenticate at the client.
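
For example, to keep CLIENT authentication but force all untrusted clients to be validated at the server (a minimal sketch; restart the instance for the change to take effect):

db2 "UPDATE DBM CFG USING AUTHENTICATION CLIENT TRUST_ALLCLNTS NO"
db2stop
db2start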

TRUST_ALLCLNTS

15-11

Authentication parameters are used to decide if all clients are trusted

TRUST_ALLCLNTS = YES
  Trust all clients
  Authentication at client

TRUST_ALLCLNTS = NO
  All untrusted clients will be authenticated at the server

TRUST_ALLCLNTS = DRDAONLY
  Only host clients are allowed to authenticate at the client

Page 492: System Administration

15-12 Security

Authentication parameters are used to specify where authentication occurs when a user ID and password are supplied with a CONNECT statement or ATTACH command.

Active only when AUTHENTICATION is set to CLIENT. If AUTHENTICATION is set to SERVER, the user ID and password must be sent to the DB2 server on connect.
Active when the user ID and password are provided for the connection.

If TRUST_CLNTAUTH is set to CLIENT, then authentication is done at the client; the user ID and password are not required for CONNECT and ATTACH statements.
If TRUST_CLNTAUTH is set to SERVER, then authentication is done at the server when a user ID and password are provided with a CONNECT or ATTACH statement.

Specify where the trusted client is authenticated. Untrusted clients are always validated at the DB2 server if TRUST_ALLCLNTS is set to NO (regardless of the setting of TRUST_CLNTAUTH).
Useful if you need to control where authentication takes place, based on whether CONNECT sends the user ID and password. Set TRUST_CLNTAUTH to SERVER to reduce the RPC to the domain controller.
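
For example, to authenticate trusted clients at the server whenever they supply a user ID and password on CONNECT or ATTACH (a sketch; restart the instance afterward):

db2 "UPDATE DBM CFG USING AUTHENTICATION CLIENT TRUST_ALLCLNTS YES TRUST_CLNTAUTH SERVER"
db2stop
db2start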

TRUST_CLNTAUTH

15-12

Specifies where authentication occurs when a user ID and password are supplied with a CONNECT statement or ATTACH command
Active when AUTHENTICATION=CLIENT only
Active when user ID and password are provided for connection
Specify where trusted client is authenticated

TRUST_ALLCLNTS   TRUST_CLNTAUTH     Trusted client        Trusted client        Untrusted client
                                    authentication,       authentication,       authentication
                                    no password           with password

YES (default)    CLIENT (default)   CLIENT                CLIENT                N/A
YES (default)    SERVER             CLIENT                SERVER                N/A
NO               CLIENT (default)   CLIENT                CLIENT                SERVER
NO               SERVER             CLIENT                SERVER                SERVER

Page 493: System Administration

Security 15-13

Authorities are high-level sets of rights that allow users to perform administrative tasks, such as backing up or creating databases. They are normally required for maintaining databases and instances.

There are five authorities in DB2:

SYSADM — system administration authority
  Holds the most authorities and privileges for the DB2 instance
  Specify the user group to be used as the SYSADM_GROUP in the DBM CFG

SYSCTRL — system control authority
  Provides the ability to perform almost any administration task
  Members of SYSCTRL cannot access database objects (unless explicitly granted the privileges) and cannot modify the DBM CFG
  Specify the user group to be used as the SYSCTRL_GROUP in the DBM CFG

SYSMAINT — system maintenance authority
  Allows execution of maintenance activities
  Does not allow access to user data and cannot modify the DBM CFG
  Specify the user group to be used as the SYSMAINT_GROUP in the DBM CFG

Authorities

15-13

High-level set of rights allowing users to do administrative tasks; normally required for maintaining databases and instances
There are five authorities in DB2:

SYSADM—system administration authority
SYSCTRL—system control authority
SYSMAINT—system maintenance authority
LOAD—load table authority
DBADM—database administration authority

Page 494: System Administration

15-14 Security

LOAD — load table authority
  New authority introduced in DB2 UDB v7.1
  Authority defined at the database level
  Enables the user to run the LOAD utility without the need for SYSADM or DBADM authority

DBADM — database administration authority
  Authority defined at the database level
  Users can perform any administrative task and data access on the database
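
Both LOAD and DBADM are granted at the database level with the GRANT statement; for example (the user names here are illustrative):

db2 "CONNECT TO sample"
db2 "GRANT LOAD ON DATABASE TO USER joe"
db2 "GRANT DBADM ON DATABASE TO USER dbadmin1"
db2 "CONNECT RESET"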

Page 495: System Administration

Security 15-15

DB2 authority uses groups defined in the operating system security facility.

Authority is not established by the GRANT statement. Instead, it must be set in the database manager configuration. The configuration parameters that define these authorities are listed below.

For example, to specify the SYSCTRL authority:

db2 "UPDATE DBM CFG USING SYSCTRL_GROUP db2cntrl"
db2stop
db2start

Here is an example of a command to list the current setting of the authority for the instance:

db2 "GET DBM CFG"

Authorities in the DBM Configuration

15-15

DB2 authority uses groups defined in the operating system security facility
Authority is not established by the GRANT statement; it must be set in the database manager configuration
Database Manager configuration parameters:

Authority Group Name Parameter

SYSADM (SYSADM_GROUP) =

SYSCTRL (SYSCTRL_GROUP) =

SYSMAINT (SYSMAINT_GROUP) =

Page 496: System Administration

15-16 Security

The table below is a summary of the functions that are allowed by the various authorities.

Database Authority Summary

15-16

Function                               SYSADM   SYSCTRL   SYSMAINT   DBADM

Update DBM CFG                         yes
Grant/revoke DBADM                     yes
Specify SYSCTRL group                  yes
Specify SYSMAINT group                 yes
Force users                            yes      yes
Create/drop database                   yes      yes
Restore to new database                yes      yes
Update DB CFG                          yes      yes       yes
Backup database/table space            yes      yes       yes
Restore/roll forward a database        yes      yes       yes
Start/stop a database instance         yes      yes       yes
Run trace                              yes      yes       yes
Take snapshots                         yes      yes       yes
Query table space state                yes      yes       yes        yes
Update log history file                yes      yes       yes        yes
QUIESCE table space                    yes      yes       yes        yes
Load tables                            yes                           yes
Create/activate/drop event monitors    yes                           yes

Page 497: System Administration

Security 15-17

A privilege is the right to create or access a database object. In DB2, there are three types of privileges:

Ownership (or CONTROL)
Individual
Implicit

CONTROL privilege gives the holder full access to the object.

Individual privileges allow the user to perform specific functions on a database object (for example, SELECT, DELETE, INSERT, and UPDATE).

Implicit privilege is automatically granted when a user is explicitly granted certain higher level privileges.

Privileges

15-17

A privilege is the right to create or access a database objectThree types of privileges:

Ownership (or CONTROL)
Individual
Implicit

Page 498: System Administration

15-18 Security

The DB2 privileges can be set and used at the levels listed below.

Levels of Privileges

15-18

Database

Schema

Table and View

Package

Index

Table space

Alias

Distinct Type (UDT)

User Defined Function (UDF)

Page 499: System Administration

Security 15-19

Database level privileges include:

CONNECT allows a user to access the database.
BINDADD allows a user to create new packages in the database.
CREATETAB allows a user to create new tables in the database.
CREATE_NOT_FENCED allows a user to create a user-defined function (UDF) or stored procedure that is not fenced.
IMPLICIT_SCHEMA allows any user to create a schema implicitly by creating an object using a CREATE statement with a schema name that does not already exist.
CREATE_EXTERNAL_ROUTINE allows a user to create a procedure or function for use by applications and other users of the database. This privilege was introduced in DB2 Version 8.

CREATETAB, BINDADD, CONNECT, IMPLICIT_SCHEMA, and SELECT privileges on the system catalog views are automatically granted to PUBLIC when the database is created.
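
Database level privileges are granted with the GRANT ... ON DATABASE statement; for example (the user name here is illustrative):

db2 "CONNECT TO sample"
db2 "GRANT CONNECT, CREATETAB, BINDADD ON DATABASE TO USER joe"
db2 "CONNECT RESET"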

Database Level Privileges

15-19

CONNECT

BINDADD

CREATETAB

CREATE_NOT_FENCED

IMPLICIT_SCHEMA

CREATE_EXTERNAL_ROUTINE (Version 8)

Page 500: System Administration

15-20 Security

Schema level privileges include:

CREATEIN allows the user to create objects within the schema.
ALTERIN allows the user to alter objects within the schema.
DROPIN allows the user to drop objects from within the schema.
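
For example, to let a user create, alter, and drop objects in one schema (the schema and user names are illustrative):

db2 "GRANT CREATEIN, ALTERIN, DROPIN ON SCHEMA payroll TO USER joe"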

Schema Level Privileges

15-20

CREATEIN

ALTERIN

DROPIN

Page 501: System Administration

Security 15-21

Table and view level privileges include:

CONTROL provides the user with all privileges for a table or view, including the ability to drop it and to grant and revoke individual table privileges.
ALTER allows the user to add columns to a table, to add or change comments on a table and its columns, to add a primary key or unique constraint, and to create or drop a table check constraint. The user can also create triggers on the table, although additional authority on all the objects referenced in the trigger (including SELECT on the table if the trigger references any of the columns of the table) is required.
DELETE allows the user to delete rows from a table or view.
INDEX allows the user to create an index on a table.
INSERT allows the user to insert an entry into a table or view, and to run the IMPORT utility.
REFERENCES allows the user to create and drop a foreign key, specifying the table as the parent in a relationship.
SELECT allows the user to retrieve rows from a table or view, to create a view on a table, and to run the EXPORT utility.
UPDATE allows the user to change an entry in a table or view, or in one or more specific columns of a table or view. The user may hold this privilege only on specific columns.
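
Table and view privileges are granted with the GRANT statement; for example (the table and user names are illustrative):

db2 "GRANT SELECT, INSERT, UPDATE ON TABLE payroll.employee TO USER joe"
db2 "GRANT CONTROL ON TABLE payroll.employee TO USER anna"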

Table and View Privileges

15-21

CONTROL

ALTER

DELETE

INDEX

INSERT

REFERENCES

SELECT

UPDATE

Page 502: System Administration

15-22 Security

Package-level privileges include:

CONTROL provides the user with the ability to rebind, drop, or execute a package, as well as the ability to extend those privileges to others. A user with this privilege is granted the BIND and EXECUTE privileges, and can grant BIND and EXECUTE privileges to other users as well. To grant CONTROL privilege, the user must have SYSADM or DBADM authority.
BIND allows the user to rebind an existing package.
EXECUTE allows the user to execute a package.

EXECUTE is also a routine-level privilege that provides users with the ability to execute all types of routines, including functions, procedures, and methods, in a database. This privilege was introduced for use with routines in DB2 Version 8.
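
For example, package and routine EXECUTE privileges are granted as follows (the package, procedure, and user names are illustrative):

db2 "GRANT EXECUTE ON PACKAGE payroll.pkg01 TO USER joe"
db2 "GRANT EXECUTE ON PROCEDURE payroll.update_salary TO USER joe"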

Package and Routine Privileges

15-22

Package privileges:
  CONTROL
  BIND
  EXECUTE

Routine privilege (Version 8):
  EXECUTE

Page 503: System Administration

Security 15-23

The only index-level privilege is CONTROL. The creator of an index or an index specification automatically receives CONTROL privilege on the index.

CONTROL privilege on an index is really the ability to drop the index. To grant CONTROL privilege on an index, a user must have SYSADM or DBADM authority.

The only table-space-level privilege is USE OF, which provides users with the ability to create tables only in table spaces to which they have been granted access.
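
For example (the table space, index, and user names are illustrative):

db2 "GRANT USE OF TABLESPACE userspace1 TO USER joe"
db2 "GRANT CONTROL ON INDEX payroll.empidx TO USER joe"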

Index and Table Space Privileges

15-23

Index privilege: CONTROL
Table space privilege: USE OF

Page 504: System Administration

15-24 Security

Implicit privileges are granted automatically when a user does any of the following:

Create a database
  Internal GRANT of DBADM authority, with CONNECT, CREATETAB, BINDADD, and CREATE_NOT_FENCED privileges, to the creator (SYSADM or SYSCTRL)
  Internal GRANT of BINDADD, CREATETAB, CONNECT, and SELECT on the system catalog tables to PUBLIC
  BIND privilege on each successfully bound utility to PUBLIC

Grant DBADM
  Internal GRANT of BINDADD, CREATETAB, CONNECT, and CREATE_NOT_FENCED

Create objects (table, index, package)
  Internal GRANT of CONTROL to the object creator

Create views
  Internal GRANT of the intersection of the creator's privileges on the base table(s) to the view creator
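
As a quick check (a sketch, assuming you are already connected to a database; the table name is illustrative), you can see the CONTROL privilege that is granted implicitly to the creator of a table by querying SYSCAT.TABAUTH:

db2 "CREATE TABLE tab1 (c1 INT)"
db2 "SELECT grantee, controlauth FROM syscat.tabauth WHERE tabname = 'TAB1'"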

Implicit Privileges

15-24

With implicit privileges, a user can:

Create database (internal GRANT of DBADM authority)
Grant DBADM
Create objects (table, index, package)
Create views

Page 505: System Administration

Security 15-25

The table above shows a list of tasks that are required when developing an application and the privileges required to perform these tasks.

Privileges Required for Application Development

15-25

All actions below require CONNECT on the database

Page 506: System Administration

15-26 Security

Most of the information on authorizations is maintained in five system catalog views. These catalog views are listed below.
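
For example, a quick way to see who holds database-level authorities and privileges (the column names shown follow the Version 8 catalog):

db2 "SELECT grantee, granteetype, dbadmauth, loadauth FROM syscat.dbauth"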

System Catalog Views

15-26

syscat.dbauth Database privileges

syscat.indexauth Index privileges

syscat.packageauth Package privileges

syscat.tabauth Table and view privileges

syscat.schemaauth Schema privileges

Page 507: System Administration

Security 15-27

The chart summarized below depicts the various authorizations and privileges and how they relate to one another.

Hierarchy of Authorizations and Privileges

15-27

[Chart: hierarchy of authorizations and privileges. Authorities: SYSADM at the top of the hierarchy, with SYSCTRL, SYSMAINT, and LOAD beneath it. Privileges: database privileges (CONNECT, BINDADD, CREATETAB, CREATE_NOT_FENCED, IMPLICIT_SCHEMA), CONTROL on tables, views, indexes, and packages, individual table privileges (ALTER, DELETE, INDEX, INSERT, REFERENCES, SELECT, UPDATE), view privileges (DELETE, SELECT, INSERT, UPDATE), package privileges (BIND, EXECUTE), and schema privileges (ALTERIN, CREATEIN, DROPIN) held by schema owners.]

Page 508: System Administration

15-28 Security

The audit facility of DB2 UDB allows you to predefine events at the instance level to generate records in an audit log file. The following event categories, based on scope, can be audited:

AUDIT — Changes in the state of auditing
CHECKING — Authority checking
OBJMAINT — Creation and deletion of DB2 objects
SECMAINT — Overall security (GRANT, REVOKE, and so forth)
SYSADMIN — Actions requiring SYSADM authority
VALIDATE — User validation, retrieving user information
CONTEXT — Operation context (an SQL statement, for example)

Audit Facility

15-28

Predefined events generate records in an audit log file
Works at an instance level
Audits the following event categories (scope):
• AUDIT
• CHECKING
• OBJMAINT
• SECMAINT
• SYSADMIN
• VALIDATE
• CONTEXT

Page 509: System Administration

Security 15-29

The audit facility is operated through the following db2audit command-line commands:

Starting the audit facility:
db2audit start

Stopping the audit facility:
db2audit stop

Configuring the audit facility:
db2audit configure

Finding how the audit facility is currently configured:
db2audit describe

Forcing the buffered audit records to disk:
db2audit flush

Extracting information from the log files:
db2audit extract

Pruning the audit log:
db2audit prune
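
For example, a minimal sequence that records failed authority checks and then extracts them into a readable file might look like this (the output file name is illustrative):

db2audit configure scope checking status failure
db2audit start
db2audit flush
db2audit extract file audit.out
db2audit prune all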

The db2audit Command: How It Works

15-29

You can use the db2audit command to:

Start and stop the audit facility
Configure the audit facility
Find how the audit facility is currently configured
Force the buffered audit records to disk
Extract information from the log files
Prune the audit log

Page 510: System Administration

15-30 Security

Summary

15-30

You should now be able to:

Understand methods of authentication
Describe authorities and privileges
Understand the DB2 auditing features
Explain the process of encrypting and decrypting data

Page 511: System Administration

Security 15-31

Lab Exercises

15-31

You should now complete the lab exercises for Module 15.

Page 512: System Administration

15-32 Security

Page 513: System Administration

Summary 02-2003 16-1© 2002, 2003 International Business Machines Corporation

Summary

Module 16

Page 514: System Administration

16-2 Summary

Objectives

16-2

At the end of this module, you will be able to:

Determine if you have met the objectives of this course
List reference materials beneficial to your job performance
Consider taking the next course in the DBA track
Fill out a course evaluation sheet

Page 515: System Administration

Summary 16-3

Course Objectives

16-3

Configure and maintain DB2 instances
Manipulate databases and database objects
Optimize placement of data
Control user access to instances and databases
Implement security on instances and databases
Use DB2 activity monitoring utilities
Use DB2 data movement and reorganization utilities
Develop and implement a database recovery strategy
Interpret basic information in the db2diag.log file

Page 516: System Administration

16-4 Summary

Basic Technical References

16-4

Basic Technical References:

DB2 UDB Administration Guide: Planning
DB2 UDB Administration Guide: Implementation
DB2 UDB Command Reference
DB2 UDB SQL Reference, Volumes 1 and 2
DB2 UDB What's New, Version 8

Page 517: System Administration

Summary 16-5

Advanced Technical References

16-5

Advanced Technical References:

DB2 UDB System Monitor Guide and Reference
DB2 UDB Data Movement Utilities Guide and Reference
DB2 UDB Message Reference, Volumes 1 and 2
DB2 UDB Administration Guide: Performance

Page 518: System Administration

16-6 Summary

Next Courses

16-6

Where to go from here:

DB2 UDB Advanced Administration Workshop • 4 days
DB2 UDB Performance Tuning and Monitoring Workshop • 4 days
DB2 UDB Advanced Recovery and High Availability Workshop • 4 days
DB2 Stored Procedures Programming Workshop • 2 days

Page 519: System Administration

Summary 16-7

Evaluation Sheet

16-7

Kindly provide us with your feedback. Please include written comments; they are more helpful than checked boxes alone.

Thank You!

Page 520: System Administration

16-8 Summary

Page 521: System Administration

Appendixes

Page 522: System Administration
Page 523: System Administration

Lab Exercises Environment 02-2003 LE-1© 2002,2003 International Business Machines Corporation

Lab Exercises Environment

Appendix LE

Page 524: System Administration

LE-2 Lab Exercises Environment

Overview

LE-2

In this appendix, you will learn how to connect to the Lab Exercises environment in the IBM DB2 classrooms:

Client Setup (Windows) — page LE-3
DB2 Server Setup (Windows) — page LE-4
DB2 Server Setup (UNIX/Linux) — page LE-5

Page 525: System Administration

Lab Exercises Environment LE-3

Client Setup (Windows)

Workstation name _______________________________________

Workstation login _______________________________________

Workstation password ____________________________________

Client software installation location:

Directory ______________________________________________

______________________________________________________

Client Setup (Windows)

LE-3

Information you will need for the client workstation:

Workstation name
Workstation login
Workstation password
Client software installation location

Page 526: System Administration

LE-4 Lab Exercises Environment

DB2 Server Setup (Windows)

Each student will use a PC workstation, which has the DB2 product installed, and may have database server instances already created.

Use the following command from a DB2 command line window:

set

This will list your environment setup for your logon account.

COMPUTERNAME _____________________________________

DB2PATH _____________________________________________

DB2INSTANCE ________________________________________

etc\services ports _______________________________________

DB2 Server Setup (Windows)

LE-4

Information you will need for the server workstation is:

COMPUTERNAME
DB2PATH
DB2INSTANCE
etc\services ports

Page 527: System Administration

Lab Exercises Environment LE-5

DB2 Server Setup (UNIX/Linux)

Use the Korn Shell on UNIX or BASH on Linux. These shells use:

export ...

Set up a file (for example, $HOME/myenv) with the environment variables that will be used throughout the course; it can be sourced to set up new telnet windows when needed.
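
A minimal sketch of such a file (the instance name and path are placeholders; use the values for your team):

# $HOME/myenv
. /home/db2inst1/sqllib/db2profile
export DB2INSTANCE=db2inst1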

Team Number __________________________________________

Login _________________________________________________

Password ______________________________________________

Host name _____________________________________________

DB2PATH _____________________________________________

DB2INSTANCE ________________________________________

/etc/services ports _______________________________________

Source the file that you have just created to set your current user (and any sub-user) environment, then display your environment to double-check.

. ./myenv
env

DB2 Server Setup (UNIX/Linux)

LE-5

Information you will need for the UNIX/Linux server is:

Team Number
Login
Password
Host name
DB2PATH
DB2INSTANCE
/etc/services ports

Page 528: System Administration

LE-6 Lab Exercises Environment

Graphical User Interface

Use the mouse and navigate to the required program. For example:

Start > Programs > IBM DB2 > Command Line Tools > Command Center

This selection opens the DB2 Command Center window.

Command Line Processor

Use the mouse and navigate to the DB2 CLP. Example:

Start > Programs > IBM DB2 > Command Line Tools > Command Line Processor

This selection opens a DB2 Command Line Processor window and starts the CLP for you.

Command Window

Use the mouse and navigate to the DB2 Command Window. For example:

Start > Programs > IBM DB2 > Command Line Tools > Command Window

This selection opens a DB2 Command Window. To start the CLP in this window, you must type:

db2

DB2 Platforms

LE-6

On the Windows platform, you have three ways to work with DB2:

Graphical User Interface
Command Line Processor
Command Window (using the CLP)

On the Unix/Linux platforms, you use a command window established by your logon session (for example, telnet)

Page 529: System Administration

Lab Exercises Environment LE-7

The basic command line syntax for the CLP is shown below.

DB2 Command Line Syntax

LE-7

db2 [option-flag ...] { db2-command | sql-statement | ? [phrase | message | sql-state | class-code] }

Page 530: System Administration

LE-8 Lab Exercises Environment

While the DB2 server is running, you can use the CLP to get command line help as shown below.

You can also view PDF/HTML technical document files if they were installed with the server.

The IBM DB2 Command Reference document contains further information on using the CLP.

DB2 Online Reference

LE-8

Command help is available in several forms:

Online command reference:

db2 ?
db2 ? command_string
db2 ? SQLnnnn   (nnnn = 4 or 5 digit SQLCODE)
db2 ? nnnnn     (nnnnn = 5 digit SQLSTATE)

Online reference manuals:

PDF files
HTML pages

Page 531: System Administration

Lab Exercises Environment LE-9

Use the non-interactive mode if you need to issue OS commands while performing your tasks.

Starting a Command Line Session

LE-9

Non-interactive mode:

db2 CONNECT TO eddb
db2 "SELECT * FROM syscat.tables" | more

Interactive mode:

db2
db2=> CONNECT TO eddb
db2=> SELECT * FROM syscat.tables

Page 532: System Administration

LE-10 Lab Exercises Environment

There are several ways to finish your DB2 session.

As shown in the table below, simply issuing a quit command in the CLP neither terminates the CLP back-end process nor disconnects you from the database.

For a clean separation, such as when you want new database parameter values to take effect, you must terminate your database connection.

You may also need to force other applications off the server.

Example commands:

db2=> QUIT
$ db2 TERMINATE
$ db2 FORCE APPLICATION ALL
$

QUIT vs. TERMINATE vs. CONNECT RESET

LE-10

CLP command      Terminate CLP back-end process    Disconnect database connection
quit             No                                No
terminate        Yes                               Yes
connect reset    No                                Yes, if CONNECT=1 (RUOW)

Page 533: System Administration

Lab Exercises Environment LE-11

The DB2 LIST command provides the following information:

List CLP Command Options

LE-11

Use the DB2 LIST command to view the Command Line Processor option settings:

db2 LIST COMMAND OPTIONS

The listing produced by this command is shown below (it continues on the next page):

Command Line Processor Option Settings

Backend process wait time (seconds)       (DB2BQTIME)  = 1
Number of retries to connect to backend   (DB2BQTRY)   = 60
Request queue wait time (seconds)         (DB2RQTIME)  = 5
Input queue wait time (seconds)           (DB2IQTIME)  = 5
Command options                           (DB2OPTIONS) =

Option   Description                                Current Setting
-a       Display SQLCA                              OFF
-c       Auto-commit                                ON
-e       Display SQLCODE/SQLSTATE                   OFF
-f       Read from input file                       OFF
-l       Log commands in history file               OFF
-o       Display output                             ON
-p       Display interactive input prompt           ON

Page 534: System Administration

LE-12 Lab Exercises Environment

-r       Save output to report file                 OFF
-s       Stop execution on command error            OFF
-t       Use ';' for statement termination          OFF
-v       Echo current command                       OFF
-w       Display FETCH/SELECT warning messages      ON
-x       Suppress printing of column headings       OFF
-z       Save all output to output file             OFF

Page 535: System Administration

Lab Exercises Environment LE-13

To temporarily modify CLP options for a command:

db2 -r options.rep LIST COMMAND OPTIONS
db2 -svtf create.tab3
db2 +c "UPDATE tab3 SET salary=salary + 100"

To temporarily modify options for an interactive CLP session:

db2=> UPDATE COMMAND OPTIONS USING c off a on

To temporarily modify options for a non-interactive CLP session:

export DB2OPTIONS="-svt"   (UNIX)
set DB2OPTIONS="-svt"      (Intel)
db2 -f create.tab3

To modify CLP options for every session:

Place environment settings in UNIX db2profile, in OS/2 config.sys, or System Program Group in Windows NT

Modify CLP Options

LE-13

You can modify CLP options:

Temporarily for a command
Temporarily for an interactive CLP session
Temporarily for a non-interactive CLP session
Every session

Page 536: System Administration

LE-14 Lab Exercises Environment

Edit create.tab

Execute the script:

db2 -svtf create.tab

Input File: No Operating System Commands

LE-14

-- comment: db2 -svtf create.tab
CONNECT TO sample;

CREATE TABLE tab3 (name VARCHAR(20) NOT NULL, phone CHAR(40), salary DEC(7,2));

SELECT * FROM tab3;

COMMIT WORK;

CONNECT RESET;

1. Create a file (create.tab shown below)

2. Edit the file, specifying the commands you want to execute

3. Execute the file using DB2

Page 537: System Administration

Lab Exercises Environment LE-15

Here is an example of the contents of the out.sel file:

Input File: Operating System Commands

LE-15

Table Name Is org

DEPTNUMB  DEPTNAME        MANAGER  DIVISION   LOCATION
10        Head Office     160      Corporate  New York
15        New England     50       Eastern    Boston
20        Mid Atlantic    10       Eastern    Washington
38        South Atlantic  30       Eastern    Atlanta
42        Great Lakes     100      Midwest    Chicago
51        Plains          140      Midwest    Dallas
66        Pacific         270      Western    San Francisco
84        Mountain        290      Western    Denver

Edit the file (seltab):

UNIX or Linux — vi seltab
OS/2 — epm seltab.cmd
Windows — edit seltab.cmd

Execute the file:

seltab org

Contents of seltab (UNIX or Linux):

echo "Table Name Is" $1 > out.sel
db2 "SELECT * FROM $1" >> out.sel

Contents of seltab.cmd (OS/2 or Windows):

echo 'Table Name Is' %1 > out.sel
db2 SELECT * FROM %1 >> out.sel

Page 538: System Administration

LE-16 Lab Exercises Environment

Page 539: System Administration

Index

Page 540: System Administration
Page 541: System Administration

Index

Index-1

A

Access Plan 2-29
ACTIVATE DATABASE command 5-8
Administration
  DAS user 4-7
  fenced user 4-6
  SYSADM 4-5
  SYSADM_GROUP 4-5
ALTER TABLESPACE command 3-25
Audit facility 15-28
  db2audit 15-29
AUTOCONFIGURE command 12-10

B

Backup 11-10
  files 11-11
  restoring from 11-12
Bidirectional index 6-6, 6-12, 6-16
BLOB 5-22
Bufferpool 3-6
  and table spaces 3-13

C

Check constraint 7-26
  adding with CREATE TABLE 7-27
  modifying with ALTER TABLE 7-29
CLOB 5-22
CLP 4-16
Clustered index 6-6, 6-9, 6-15
Command Center 2-15
  Access Plan 2-29
  Query Results 2-16
  Visual Explain 2-27
Command Line Processor 4-16
Compiler
  overview 13-4
Configuration
  DB CFG 5-3
  DBM CFG 4-3
  parallelism 12-12
  using CLP 4-16
  using Control Center 4-17
Configuration Assistant 2-14, 4-30
  adding a database 4-34–4-38
  invoking 4-33
Configuration files 1-14
Connectivity
  cataloging the database 4-28
  cataloging the server 4-27
  configuring 4-24
  manual configuration 4-25
Constraint
  check 7-26
Container 3-4
  listing 3-22
  size and performance 3-34
Control Center 2-10
  toolbar 2-10
CREATE EVENT MONITOR 12-27–12-34
CREATE INDEX 6-7
CREATE SCHEMA 5-14
Create Table wizard 7-7–7-12
CREATE TABLESPACE command 3-16
CREATE VIEW command 5-33
Create View window 5-35

D

DAS 4-8
dasidrop 4-12
Data maintenance 9-3
  db2rebind 9-19
  REBIND 9-18
  REORG 9-11–9-14
  REORGCHK 9-4–9-10
  RUNSTATS 9-15–9-17
Data movement utility 8-4
  db2look 8-57–8-59
  db2move 8-53–8-56
  EXPORT 8-5–8-17
  IMPORT 8-18–8-23
  IMPORT versus LOAD 8-52
  LOAD 8-24–8-51
Database
  configuration file 5-3
  starting and stopping 5-8
Database Administration Server 4-8
Database Configuration window 5-7
Database-managed space 3-3, 3-8, 3-11
  minimum size 3-33
  performance 3-37

Page 542: System Administration

Index-2

DB CFG 5-3
  GET DB CFG command 5-4
  managing in CLP 5-4
  managing in Control Center 5-6
db2admin 4-12
db2advis 6-29
db2audit 15-29
db2diag.log 14-6, 14-9
  error codes 14-15
  location 14-12
db2exfmt 13-15
db2expln 13-15, 13-16
db2icrt 4-9, 4-10
db2idrop 4-12
db2look 8-57–8-59
db2move 8-53–8-56
db2rebind 9-19
db2set 4-21
db2start 4-14
db2stop 4-15
DBCLOB 5-22
DBM CFG 4-3
DEACTIVATE DATABASE command 5-9
Design Advisor 6-19–6-28
  db2advis 6-29
Development Center 2-24
  create a new routine 2-26
Diagnostics
  data required 14-6
  db2diag.log 14-6, 14-9
  error codes 14-15
  looking up internal codes 14-16, 14-18
  other data sources 14-8
  required data checklist 14-7
  suggestions 14-10
DROP TABLESPACE command 3-31
Dropped table recovery 11-21
dynexpln 13-15

E

Event Monitor 12-21–12-40
  CREATE EVENT MONITOR 12-27–12-34
Explain 13-6
  binding 13-14
  capturing data 13-8
  db2exfmt 13-15
  db2expln 13-15, 13-16
  dynexpln 13-15
  EXPLAIN command 13-9
  precompilation 13-13
  setting optimization level 13-22
  special register 13-11
  tables 13-7
  tools 13-15
  Visual Explain 13-15, 13-18
EXPORT command 8-5–8-17
Exporting data
  formats 8-3
Extent 3-5

F

Federated system 5-38
  objects 5-39
First Steps 2-5
FORCE APPLICATION command 5-10
Foreign key 7-16
  altering 7-20
  creating through Control Center 7-20
  creating with new table 7-18

G

Global temporary table 5-24
Graphical user interface tools 2-3
GUI Tool
  Control Center 2-10
GUI tools 2-3
  Command Center 2-15
  Configuration Assistant 2-14, 4-30
  contents pane 2-11
  Design Advisor 6-19–6-28
  Development Center 2-24
  First Steps 2-5
  Health Center 2-21
  Information Center 2-9
  Journal 2-21
  License Center 2-23
  menu bar 2-10
  object menu 2-13
  object pane 2-11
  SQL Assist 5-36
  Task Center 2-17
  toolbar 2-10
  Tools menu 2-12

H

Health Center 2-21
Health Monitor 12-41
  Health indicator 12-43

Page 543: System Administration

Index-3

I

IMPORT 8-18–8-23
Index
  bidirectional 6-6, 6-12, 6-16
  clustered 6-6, 6-9, 6-15
  CREATE INDEX command 6-7
  creating in Control Center 6-17
  definition 6-3
  key 7-3
  multidimensional clustering 6-10
  primary key 7-4
  type-2 6-4
  unique 6-6, 6-7, 6-14
Information Center 2-9
Instance 4-3
  authority 4-23
  creating 4-9
  creating in UNIX 4-4
  creating in Windows 4-11
  dropping 4-12
  starting 4-14
  stopping 4-15
Isolation level 10-12
  cursor stability 10-12
  read stability 10-13
  repeatable read 10-13
  uncommitted read 10-12

J

Journal 2-21

K

Key 7-3
  foreign 7-16
  primary 7-4
  unique 7-22

L

Large object 5-22
  storing 5-23
License Center 2-23
LIST TABLESPACE command 3-20
LIST TABLESPACE CONTAINERS 3-22
LOAD command 8-24–8-51
LOB 5-22
Locking
  compatibility 10-6
  deadlock 10-14
  escalation 10-8
  explicit 10-10
  isolation levels 10-12
  lock conversion 10-7
  lock parameters 10-8
  types of locks 10-4
  why needed 10-3
Logging
  archival 11-9
  circular 11-6
  dual 11-8
  infinite active 11-7
  log file usage 11-5
  log files 11-4
  userexits 11-9

M

Menu bar 2-10
Monitoring
  Event Monitor 12-21–12-40
  Health Monitor 12-41
  performance 12-13
  Snapshot Monitor 12-14–12-20
Multidimensional clustering 6-10

O

Object menu 2-13
Optimization
  record blocking 13-23

P

Parallelism configuration 12-12
Performance 12-3
  monitoring 12-13
Performance tuning 12-3
  DB CFG parameters 12-6
  DBM CFG parameters 12-4
Primary key 7-4
  adding to existing table 7-13
  alter through Control Center 7-14
  dropping 7-13
Problem
  description 14-4
  solving 14-3
  types 14-5
Profile Registry 4-19
  levels 4-20

Page 544: System Administration

Index-4

Q

Query optimization 13-3
  explain 13-6
Quick Tour 2-8

R

RAID device 3-35
Recovery
  types of 11-3
Recovery history file 11-20
Redirected restore 11-14
Registry variables 4-19
  setting 4-22
  viewing 4-21
REORG 9-11–9-14
REORGCHK 9-4–9-10
Restore
  database roll forward 11-13
  dropped table recovery 11-21
  point-in-time recovery 11-16
  recovery history file 11-20
  redirected 11-14
  table space recovery 11-16
Restoring from backup 11-12
RUNSTATS 9-15–9-17

S

Sample databases 2-6
Schema 5-12
  creating 5-14
  setting current 5-13

Security
  audit facility 15-28
  authentication 15-4
    Client 15-10
    DCS 15-6
    KERBEROS 15-8
    KRB_SERVER_ENCRYPT 15-9
    server 15-5
  authorities 15-13–15-16
  encrypted password 15-7
  levels 15-3
  privilege 15-17
    database level 15-19
    development requirements 15-25
    implicit 15-24
    index level 15-23
    package level 15-22
    routine level 15-22
    schema level 15-20
    table level 15-21
    table space level 15-23
    view level 15-21
  system catalogs 15-26
Server discovery 4-30
SET CURRENT SCHEMA 5-13
SQL Assist 5-36
SYSADM 4-5
SYSADM_GROUP 4-5
System catalog
  querying for bufferpools 5-20
  querying for constraints 5-21
  querying for table names 5-18
  querying for table spaces 5-19
  tables 5-17
System catalog space
  performance 3-38
System temporary table 5-24
System-managed space 3-3, 3-7, 3-9
  performance 3-36
System-temporary space
  performance 3-39

T

Table
  creating in Control Center 7-7
Table space 3-3
  altering 3-25
  creating 3-16–3-19
  creating with database 3-14
  database-managed space 3-3, 3-8, 3-11
  dropping 3-31
  listing 3-20
  offline state 11-17
  performance 3-40
  SMS versus DMS 3-12
  system temporary space 3-10
  system-managed space 3-3, 3-7, 3-9
  user temporary space 3-10
Task Center 2-17
  new task 2-19
  Task menu 2-18
Temporary table
  authorization 5-28
  creating 5-26
  global 5-24
  system 5-24
Timeron 2-29
Toolbar 2-10
Tools menu 2-12
Tutorials 2-7

Page 545: System Administration

Index-5

U

Unique index 6-6, 6-7, 6-14
Unique key 7-22
  adding with CREATE TABLE 7-23
  modifying with ALTER TABLE 7-25
Utility
  dasidrop 4-12
  data movement 8-4
  db2admin 4-12
  db2advis 6-29
  db2icrt 4-9, 4-10
  db2idrop 4-12
  db2set 4-21
  db2start 4-14
  db2stop 4-15

V

View 5-29
  Create View window 5-35
  creating 5-33
  creating in Control Center 5-34
  types 5-31

Visual Explain 2-27, 13-18

Page 546: System Administration

Index-6